
General discussion about the new QML API


The plan is to build an optional higher-level API to interact with horizon from QML. Using a new .pri subproject, the idea is to add a "QML_STELLAR_API" define to conditionally compile Q_DECLARE_METATYPE/qmlRegisterType/interface classes, etc., as required.

I realized that the classes below could be nice to have in any Stellar application, and it would be great to implement declarative QML versions of them:

  • StellarConfiguration, a singleton to set up common settings (public or test network, horizon server, etc.) and to access current Stellar variables: API version number, minimal fee, base reserve, etc.
  • FederationResolver, a class to resolve federations.
  • A QQuickImageProvider to resolve asset icons. The URI could be something like stellar://icon?issuer=issuer_public_key&code=asset_code.
  • Repeater/Loader-like request classes: for each request class, a declarative version that requests data from horizon.

Feel free to comment on these or to suggest more ideas.

Updated 22/03/2018 23:00

Representing transformation events and timelines


A synchronized set of Mentat instances collaborate to build a linear timeline of states. They do so by rebasing and/or merging their changes into the remote primary timeline.

Each instance is able to perform a set of operations — assertion, retraction, excision, and schema addition/alteration — that introduce coordination points in this timeline.

Before we discuss those, it’s worth mentioning that there are other operations that only affect whether clients can proceed:

  1. Base format changes (#587) — e.g., the introduction of a new type — can lock out clients; older clients won’t be able to interpret the wire format.
  2. Non-backwards-compatible schema changes should prevent a remote timeline from being merged locally until the client upgrades; merging anyway would leave the local client unable to operate on its own data.

Most of a client’s operations are point-in-time:

  1. Retraction (and change, which is retraction + assertion) represents an ordered state change as a datom. There is a conceptual division here between the state before and after the change is transacted; the retraction is only meaningful if it happens-after the assertion of the datom to be retracted. This is not true for assertion; Mentat transactions that consist only of assertions can be applied in any order.
  2. Schema additions and changes similarly happen at a point on the timeline; other assertions can’t use the vocabulary until it has been asserted, and some states on the timeline are invalid when viewed through the lens of a later altered vocabulary — e.g., altering the cardinality of an attribute.

The presence of one of these operations in a branch of history is important when considering merges and rebases. We must make sure that retractions and assertions are sequenced, and ensure that schema changes occur before data changes that depend on them.
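To make the ordering point concrete, here is a tiny sketch (not Mentat code, just a toy set-of-datoms model) showing that assertion-only transactions commute while a retraction only makes sense after the assertion it undoes:

# Toy datom store: a set of (entity, attribute, value) triples.
# This is only an illustration of ordering semantics, not Mentat's model.

def apply_tx(state, tx):
    """Apply a transaction: a list of ('add' | 'retract', datom) pairs."""
    state = set(state)
    for op, datom in tx:
        if op == "add":
            state.add(datom)
        else:
            # A retraction is only meaningful if the datom is already present,
            # i.e. it happens-after the assertion it undoes.
            state.discard(datom)
    return state

tx_a = [("add", (123, "page/visit", 300))]
tx_b = [("add", (123, "page/url", "https://example.org"))]
tx_r = [("retract", (123, "page/visit", 300))]

# Assertion-only transactions commute: either order gives the same state.
assert apply_tx(apply_tx(set(), tx_a), tx_b) == apply_tx(apply_tx(set(), tx_b), tx_a)

# A retraction does not commute with the assertion it undoes.
assert apply_tx(apply_tx(set(), tx_a), tx_r) != apply_tx(apply_tx(set(), tx_r), tx_a)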

One operation — excision (#21) — affects an existing region of timeline. An excision marker is placed in the local timeline and implies the removal of data from the device’s local log.

Given that we want all clients to have the same materialized state after reaching the same point in the transaction log, it’s crucial that excision processing is reproducible.

For a single client system, this is relatively simple: there is only one timeline, and all writes are fast-forward.

For a multi-client system, where a timeline might diverge and merge back together, we must define which of the three timeline segments — shared history, new-left, new-right — are impacted by an excision on the left or right. This definition must produce the same outcome when applied by either client.

Let’s look at an example.

Imagine that two clients, A and B, both know about a URL in a browser’s history; this URL has ID 123. At point T1 it’s linked to three visits: 200, 201, 202. Visits are component entities.

Now A and B diverge. A adds a visit: 300. B excises all visits for 123.

This is a classic syncing problem: what happens after a merge?

Usually one of the following occurs:

  • If A reaches the server first, it adds the visit. B detects a conflict and undoes its local deletion. This surprises the user.
  • … or B deletes all four visits. Sometimes it’ll do this even if A’s visit was later than B’s excision, perhaps even months later, depending on when A and B sync. This surprises the user.
  • If B reaches the server first, it drops the first three visits. Depending on format, A will then reupload all four visits (which surprises the user), or just the new one (300).

Various approaches are used to try to make some of these operations durable, such as recording tombstones. That’s really tricky.

One of the reasons it’s tricky is that it’s not clear what the deletion means, because the excision operation (in Firefox Sync, at least) isn’t pinned to a point on a timeline — it floats at an instant in time or an order of interaction with the server, and that is very woolly indeed.

The most obvious meaning for excision is that it applies along the current parent ‘route’ back to the origin. B’s excision doesn’t apply to A’s new data. If B merges first, its excision is already recorded when A comes to merge or rebase. If A merges first, B knows how to rewrite its excision to apply only to earlier data.

(Indeed, B might well automatically record the excision only for the merged data — {:db/excise 123, :db.excise/beforeT <last parent>} — and just directly drop non-merged excised datoms as if they had never been written. This is reminiscent of Mercurial’s phases.)
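A rough sketch of that rewrite (illustrative only; the datom and excision shapes here are assumptions, not Mentat’s actual representation):

# Bound B's excision at the last transaction shared with the remote timeline,
# so datoms asserted on the other branch after the divergence point survive.

def rewrite_excision(excised_entity, last_shared_tx):
    return {"db/excise": excised_entity, "db.excise/beforeT": last_shared_tx}

def apply_excision(datoms, excision):
    """datoms: (e, a, v, tx) tuples. Drop datoms about the excised entity
    that were transacted at or before the cut-off; later datoms survive."""
    target = excision["db/excise"]
    cutoff = excision["db.excise/beforeT"]
    kept = []
    for (e, a, v, tx) in datoms:
        about_target = (e == target) or (v == target)
        if about_target and tx <= cutoff:
            continue  # excised
        kept.append((e, a, v, tx))
    return kept

# The three shared visits (tx <= 10) are dropped; A's new visit (tx 11) survives.
shared = [(123, "page/visit", 200, 9), (123, "page/visit", 201, 9), (123, "page/visit", 202, 10)]
new_from_a = [(123, "page/visit", 300, 11), (300, "visit/at", 1234567890123, 11)]
print(apply_excision(shared + new_from_a, rewrite_excision(123, last_shared_tx=10)))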

This alone isn’t enough. If A transacted like this:

{:page/url ""
 :page/visit {:visit/at 1234567890123}}

the actual datoms recorded would be:

[:db/add 123 :page/visit 300 tx]
[:db/add 300 :visit/at 1234567890123 tx]
[:db/add tx :db/txInstant 1234567999999]

B excised everything about 123, and here we wouldn’t have reasserted the URL, and so the URL would be excised, leaving 300 as a visit to an unknown page. A process of reintroduction (or storing redundant data) might be necessary, or perhaps that behavior is actually desirable; it depends on how the data is modeled and how specific the retraction is.

Updated 22/03/2018 22:38 1 Comments

Difficulty in having a user enter text (for textmining)


As Jeff has found, there is little value in using just the abstract text for text mining; more often than not you get back nothing. So it makes sense to use the full text: the abstract, the body, figure legends, etc.

That is not easy for a user to do. As a user, I have to spend all sorts of time going over my paper, copying and pasting. A user would rather spend a few minutes drawing what they already know than spend all that time copying and pasting.

Discuss ways to address this.

Updated 22/03/2018 17:40

Send 520 when REACH output cannot be converted to a Factoid model


Continuation of #207 with a reduced scope


Description of new feature

REACH:


  • [ ] Determine the conditions and cases where REACH’s response cannot be converted into a factoid model
  • [ ] Determine the business logic related to processing REACH output
  • [ ] When it is determined that this REACH response can’t be converted to a model, send a 520 error in the response

(4) NLP returns a response that can’t be converted to a factoid model (520): log to the server (the input text plus the fact that the result can’t be serialized), tell the user that “there was a problem with processing your text”, and suggest either changing the text or starting with an empty document. The business logic related to processing REACH output is not well defined; sometimes errors are produced.
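A minimal sketch of that flow (illustrative only: the factoid service is Node-based, and the function and field names below are assumptions, not its actual API):

import logging

logger = logging.getLogger("nlp")

def handle_reach_response(text, reach_output, convert_to_factoid_model):
    """Return an (http_status, body) pair depending on whether the REACH
    output can be converted into a factoid model."""
    try:
        model = convert_to_factoid_model(reach_output)
    except Exception as err:
        # Case 4: log the input text and the fact that the result can't be serialized.
        logger.error("REACH output could not be converted to a model: %s; input text: %r", err, text)
        return 520, {"error": "there was a problem with processing your text"}
    if not model.get("entities"):
        # Case 3 (handled in the companion UI issue): the request succeeded but nothing was recognized.
        return 200, {"message": "no entities were recognized"}
    return 200, model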

Motivation for new feature


The hypothesis is that giving users the reasons why NLP failed will help them cope with the app failing to process their paper. By suggesting alternatives, such as starting with an empty document, users will have a better understanding of what to do next.


Updated 22/03/2018 17:23 1 Comments

How do I instrument a .NET Core console app with the extension SDK TelemetryModule?


Hey folks,

Yesterday I set out to prove that my heartbeat feature would work nicely on any app running on a Linux box in Azure.

.NET Core doesn’t load an ApplicationInsights.config file, so I had to do it like this:

/// This is the method with the setup of the heartbeat.
public static TelemetryClient SetupClient(string ikey)
{
    TelemetryModules.Instance.Modules.Add(new AzureInstanceMetadataTelemetryModule()); // add before init
    var aiCfg = TelemetryConfiguration.Active;
    aiCfg.InstrumentationKey = ikey;

    // no config, so I have to set up my heartbeat settings manually...
    foreach (var module in TelemetryModules.Instance.Modules)
    {
        if (module is IHeartbeatPropertyManager hbeatManager)
        {
            hbeatManager.IsHeartbeatEnabled = false;
            hbeatManager.HeartbeatInterval = TimeSpan.FromSeconds(31);
            hbeatManager.IsHeartbeatEnabled = true;
        }
    }

    TelemetryClient cli = new TelemetryClient(aiCfg);

    return cli;
}

…is that expected/correct? It works, but I would love to know if there is a better way to do this. I did try this:

// Create a new App Insights telemetry client from a config file...
public static TelemetryClient SetupClientFromConfig(string pathToAppInsightsConfig = "ApplicationInsights.config")
{
    var appInsightsConfigContent = System.IO.File.ReadAllText(pathToAppInsightsConfig);
    var aiConfig = TelemetryConfiguration.CreateFromConfiguration(appInsightsConfigContent);
    TelemetryClient cli = new TelemetryClient(aiConfig);

    return cli;
}

…but no dice. The modules specified in my ApplicationInsights.config file never get added to the TelemetryConfiguration.

Updated 22/03/2018 22:19 9 Comments

NLP Error messages in the (Client side UI)



This issue is a continuation of the unfinished work in #207

Description of new feature


Based on various errors from NLP services, display relevant messages to the users in the UI

  • [ ] (1) REACH times out (504) Suggest to the user to try again later
  • [ ] (2) REACH fails (500) Suggest to the user to try again later
  • [ ] (3) NLP returns 0 entities (200) Tell the user that “no entities were recognized” and suggest to either change text or start on an empty document

Motivation for new feature


The hypothesis is that if users receive messages related to why NLP failed, they will tolerate these failures in the Factoid app and understand what their next steps should be


Updated 22/03/2018 17:34 1 Comments

poll: what should be the default behavior for fxFlex?


This issue tracks the possibility of changing the default behavior for the fxFlex directive. Currently, the behavior is as follows:

  • When the input is an integer, this defaults to a percentage value input for fxFlex
  • When the input is any other singular value, it’s interpreted as the intended flex-basis value and results in flex: 1 1 <input> with min-width and max-width applied (assuming row format)
  • When the input is three values, it’s interpreted as a direct translation to the CSS flex property, with min/max-width applied according to the algorithm specified here
  • When the input is longer than three values, it is truncated to the first three values

I’ll list some alternative options for fxFlex behavior below, please vote for your preferred option, or feel free to add your own. If you like the current behavior, please do not add comments, but simply vote for this comment.

Updated 22/03/2018 16:52 5 Comments



What do I want to solve?

I want to implement a common unit for sizes for all components to avoid having components with styles like padding={2.5} or padding={4.5}. We need some pattern for sizes, otherwise developers will use random sizes.


Components should accept sizes. If the component role is very well defined, it can accept a single size. Multiple sizes are allowed too.

<VerticalSpacing size={2} /> // Single size

<View marginSize={2} paddingSize={1} /> // Multiple sizes

The size to pixel ratio is always 5px. Sizes should be a whole number, so 0.5 is not allowed.

0 = 0px
1 = 5px
2 = 10px
3 = 15px
4 = 20px
5 = 25px
6 = 30px
7 = 35px
8 = 40px

For most apps 8 is enough, but you could use, say, 20; it’s not an issue.

Alternative Approach

We could use a naming scheme like xs, sm, md, lg, xl. The problem is that it’s not very flexible, and we’ll end up having components with sizes like xxs or xxxl.

Implementation Draft

import { Block } from "jsxstyle"
import React from "react"

const sizedProps = ["marginSize", "paddingSize"]

const sizeToPixelRatio = 5

const View = props => (
  <Block
    {...Object.keys(props).reduce((result, propName) => {
      const propValue = props[propName]

      if (sizedProps.indexOf(propName) > -1) {
        if (propValue % 1 !== 0)
          throw new Error(`"${propName}": "${propValue}" is not a whole number`)
        return {
          ...result,
          [propName.replace("Size", "")]: propValue * sizeToPixelRatio,
        }
      }

      return {
        ...result,
        [propName]: propValue,
      }
    }, {})}
  />
)

export default View
Updated 22/03/2018 16:58

Remove user avatar from Toolbar


The toolbar with the avatar looks overly cluttered, and is reminiscent of a pre-Lollipop time when having the application icon and title was common. The practice of having an application icon and title is discouraged according to the docs:

The use of application icon plus title as a standard layout is discouraged on API 21 devices and newer.

I know this isn’t the application icon, but I feel the downsides of having the avatar icon are similar to those of the application icon.

Looking at other apps built around users, e.g. Facebook, Instagram, Twitter: none of them include the user icon while you are scrolling the feed; they just include the username.

Note that I’m suggesting removing all avatar icons from the toolbar, not just the ones on the user page (e.g. also the project page and gist page).

Current design vs. proposed design (screenshots: screenshot_20180322-160704, screenshot_20180322-160827)
Updated 22/03/2018 16:29

Allow unbounded -maxiters


Currently, the -maxiters option only allows overriding the default maximum number of iterations with a specific number. In some situations we have had in practice, it would have been nice to be able to have no bound at all. Currently we just specify a rather large number, but that’s a bit ugly.

What would be a good syntax? Some ideas:

  • -maxiters inf
  • -maxiters unbounded

Internally, it might make sense to encode that as INT_MAX and provide some helpers for checking a current iteration count against the bound. Compared to an encoding such as -1, that has the advantage that code (in other branches or extensions) that has not been converted would not fail catastrophically. As an alternative, one could also pass along a Boolean flag for unbounded, but that would touch quite a few JNI invocations, etc.

Updated 22/03/2018 16:03

VIP: Change bytes length notation



VIP: 714
Title: Change bytes length notation
Author: Jacques Wagener
Type: Standard Track
Status: Draft
Created: 2018-03-22

Simple Summary

Change the bytes declaration syntax to resemble the list syntax.


Change the byte length syntax on declaring a bytearray.


The motivation is critical for VIPs that add or change Vyper’s functionality. It should clearly explain why the existing Vyper functionality is inadequate to address the problem that the VIP solves, as well as how the VIP is in line with Vyper’s goals and design philosophy.


Currently, bytearray sizes are set using the smaller-than-or-equal operator <=. This VIP will replace it with square-bracket notation to indicate the byte size.

From:

abytearray: bytes <= 3

To:

a: bytes[3]

This makes it slightly more readable and standardised, because list lengths use the same square-bracket subscript notation, e.g.:

alist: int128[7]

Backwards Compatibility

Not backwards compatible; the old style will be fully replaced.


Copyright and related rights waived via CC0

Updated 22/03/2018 14:17

What’s the metaprogramming story in Fable 2.0?


I’ve been playing with F#’s quotation feature recently, and I find it really powerful: we can create a lambda function from an expression tree and eval it into a real function, much like macros in other languages. Unfortunately, it’s unavailable in Fable.

Here are some cases that I would use some equivalent feature in Fable:

1. Create default empty record.

type People = {
  name: string
  age: int
}

module People =
  let empty =
    { name = ""
      age = 0 }

In F#, I need to write lots of empty records. This could actually be done with reflection or quotations, but Fable’s reflection API is very limited, and I’m not sure there is a way to do this. And since Fable 2.0 will use more lightweight types, a plugins API with full access to generic type info might be a good idea.

2. Create functions dynamically

I usually create lots of records/classes, like BookInput (book model for the input form), BookOutput (book model for output), BookEntity (book entity for the ORM), and need to write lots of functions to transform between them. The transform functions can be generated by quotations, but there are still some in Fable that I’m not sure how to implement, like BookOutput -> BookInput.

Fable’s interop API is very powerful and convenient, but, as with No. 1, without type information it can be really hard to handle F# values/types.

I know reflection and plugins are not included in the alpha release, but maybe it’s time for a discussion? :smile:

Updated 22/03/2018 16:56 2 Comments

Meta: make it easier for people to contribute to Bela as an open source project


Some ideas:

  • [ ] Write a new main README that introduces the project, its constituent parts, how it relates to other BelaPlatform repositories, and how to contribute to it. It should be a README for developers as much as for users (the distinction shouldn’t ultimately be necessary).
  • [ ] Specifically, add a contribution guide to help people write clear issues and submit pull requests (possible template).
  • [ ] Add new labels to community-facing issues, such as Good first issue: “This label marks tickets that are easy to get started with. The ticket should be ideal for beginners to dive into the code base.”
  • [ ] A clearer Wiki article bringing together development guides. This should cover, or link to other resources on, how to develop the different parts of Bela, and in each case describe the appropriate way to fork, branch, develop and merge changes.
  • [ ] Ask the forum about their experiences modifying the Bela codebase and integrate that feedback.
  • [ ] A blog post outlining the above changes and describing the developer workflow in a tutorial-like way.

Updated 22/03/2018 13:47

Choosing the criteria for splitting an audio source


Reference: []

The lowest notes are around 20 Hz, so each segment must be longer than 1/20 s.

We assume that a person can hardly play an instrument more than 10 times per second.
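A small sketch of the bounds these two criteria imply (the 44.1 kHz sample rate is an assumption for illustration):

# Segment-length bounds implied by the two criteria above.
LOWEST_NOTE_HZ = 20        # lowest expected pitch
MAX_NOTES_PER_SECOND = 10  # assumed fastest playing speed

min_segment_s = 1.0 / LOWEST_NOTE_HZ        # 0.05 s: must exceed one period of a 20 Hz note
max_segment_s = 1.0 / MAX_NOTES_PER_SECOND  # 0.1 s: one note lasts at least this long

SAMPLE_RATE = 44100  # assumed sample rate
print(int(min_segment_s * SAMPLE_RATE), "to", int(max_segment_s * SAMPLE_RATE), "samples per segment")
# -> 2205 to 4410 samples per segment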

Updated 22/03/2018 12:36 3 Comments

New lang strings added to en.json need to be added to other locales manually.


Every time a new lang string is introduced, locales/en.json will be updated automatically, but not the others; currently, contributors have to add the new lang strings to the other locales manually. This increases the effort of development and code review.

The first suggestion is to create a new task in Travis that checks whether new strings have been added to locales/en.json; if so, the rest of the locales should be updated accordingly.

The second suggestion, and I think the ideal one, is to automate this process: we can build a git pre-commit hook so that, if the local changes contain a locale file (usually locales/en.json), it compares the locale JSONs and adds the missing keys to the locales that lack them.
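A minimal sketch of what such a Travis task or pre-commit hook could do (it assumes flat key/value locale files and falls back to the English string; the paths and behaviour are assumptions):

# Copy keys present in locales/en.json into the other locale files,
# leaving existing translations untouched. Assumes flat key/value JSON.
import json
from pathlib import Path

LOCALES_DIR = Path("locales")

def sync_locales():
    en = json.loads((LOCALES_DIR / "en.json").read_text(encoding="utf-8"))
    for path in sorted(LOCALES_DIR.glob("*.json")):
        if path.name == "en.json":
            continue
        locale = json.loads(path.read_text(encoding="utf-8"))
        missing = {k: v for k, v in en.items() if k not in locale}
        if missing:
            locale.update(missing)  # fall back to the English string for now
            path.write_text(json.dumps(locale, ensure_ascii=False, indent=2, sort_keys=True) + "\n",
                            encoding="utf-8")
            print(f"{path.name}: added {len(missing)} missing key(s)")

if __name__ == "__main__":
    sync_locales()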

Updated 22/03/2018 13:42 1 Comments

Is it possible to list which virtualenvs are managed by pipenv?


In my ~/.virtualenvs folder, I have a number of environments - some are managed by pew and others by pipenv. I can’t easily tell which is which (although my guess is that the ones ending in -<random letters> are managed by pipenv), nor do I appear to be able to identify which project directory is associated with the pipenv-managed venvs.

Basically, I’m looking for a way to do housekeeping on old or experimental projects - identify and tidy up no longer used virtual environments, determine whether it’s OK to delete a project directory or if it will leave a venv behind, etc. My ~ directory is on a limited-space SSD, so I’d prefer to keep it clean.
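Pipenv doesn’t expose this directly as far as I know, but a rough housekeeping sketch could look like the following. It relies on the .project file that pew/virtualenvwrapper-style environments carry; environments without one (which may include pipenv’s) are simply reported as unknown, so treat the output as a heuristic:

# Heuristic only: map ~/.virtualenvs entries to project directories via the
# ".project" file convention; environments without one are listed as unknown.
from pathlib import Path

VENV_ROOT = Path.home() / ".virtualenvs"

def list_envs():
    for env in sorted(p for p in VENV_ROOT.iterdir() if p.is_dir()):
        project_file = env / ".project"
        if project_file.exists():
            project = project_file.read_text().strip()
            note = project if Path(project).exists() else project + " (directory missing!)"
        else:
            note = "unknown project"
        print(f"{env.name:40s} -> {note}")

if __name__ == "__main__":
    list_envs()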

Updated 22/03/2018 22:43 9 Comments

Drop variant name field


Currently the field is not particularly useful. Moreover, it causes some confusion in the code when it comes to variant representation. In my opinion, we should drop this field and use the variant attributes as the name. We already do this, but our model allows variants without attributes, which doesn’t make much sense; we should eventually change that, but for now we could use the SKU as a fallback.

Updated 22/03/2018 12:35 7 Comments

[new module] script manager (add tracking to website)


This goes well with the GA tracking and Template Directory modules, I think…

Users need to add scripts like the Facebook pixel, Google Tag Manager, Hotjar, etc. The best way is to add them via a plugin. A script manager module that can add, remove, edit and toggle your scripts might help users who create landing pages with Template Directory.

Suggested locations in the page’s code where the scripts should be inserted:

  • before </head>
  • after <body>
  • before </body>

This can also be used for CSS changes…

Not sure if an option to load the script sitewide or just on specific pages would help…

Here is a plugin example:

CC: @ineagu @abaicus

Updated 22/03/2018 09:11

CloseLoop [GPS code and LO NCO loop filters]


Sir, can you please provide me with the theoretical equation of how your Costas loop is implemented and its equivalent in your code?

; err64 = ext64(pe-pl:32)
; ki.e64 = err64 << (ki=ch_CA_gain[0]:16)
; new64 = (ch_CA_FREQ:64) + ki.e64
; (ch_CA_FREQ:64) = new64
; kp.e64 = ki.e64 << ("ki-kp"=ch_CA_gain[1]:16)
; nco64 = new64 + kp.e64
; wrReg nco64:32

Updated 22/03/2018 22:16 3 Comments

ColorPickerButton's modal_closed signal doesn't fire when attached via editor


hello godoters. tested on 3.0.2

1) Create a scene and add a ColorPickerButton node.
2) Connect the modal_closed signal to the ColorPickerButton.
3) When the ColorPickerButton’s modal closes, the signal is never emitted.

Solution 1: a C++ developer might need to edit the source so it sends the right signal; @willnationsdev and poke could take a look. I think the C++ code forgets to grab the reference to get_popup().

Solution 2 (workaround): you have to do $ColorPickerButton.get_popup().connect("modal_closed", self, "modal_closed_func")

Thanks! Sorry if this has already been reported.

Updated 22/03/2018 09:24 1 Comments

appearance of nicks in lobby list


I like what our new friend @alketii did here:

I disliked using ZG_ to prefix usernames but hadn’t thought of a better method.

I think there needs to be a way to distinguish people logged in through regular IRC, and those logged in through the ZG client.

We could have a ‘’ next to people who are logged in through a ZG client. And then in the master server announcement (not yet created), a statement noting that only players with ‘’ next to their name are available for playing

I’m open to other ideas as well.

Updated 22/03/2018 13:53 3 Comments

About viewing posts from subscribed members


(screenshot: 2018-03-22 10 56 09)

Through the feature above,

I (naturally) assumed that other members could also see the posts of the members each of us subscribes to…. But it seems that’s not the case: it looks like only I can see the posts of the members I subscribe to.

But I don’t think that restriction is really necessary. Another member visiting my 썸씽 and browsing the posts of the members I subscribe to…

…could be thought of in the same way as the virtuous cycle and the benefits of people wandering along each other’s follows on Twitter, Facebook, or Instagram.

If you have no plans to implement this, I would appreciate it if you could let me know whether there is at least a way I can modify it myself for my own use.

Updated 22/03/2018 07:33 1 Comments

Dependent Enumeration


We have been discussing with @fritzo getting a version of enumeration working that can handle dependent variables, such as in an HMM. It seems that the current enumeration routines are pretty close to what is needed, so this should be doable.

Also, @ngoodman had an example he has been using for how he solved that in webppl. It would be nice to take inspiration from that functionality for pyro. Here are example-programs where this is tackled that I would love to write down in pyro:

Leaving this here as a reminder for interested people to discuss.

Updated 22/03/2018 20:09 2 Comments

Discussion: iro.js 4.0+


This thread is for discussion about future additions and changes to iro.js, starting with the next major version (4.0.0) and beyond.

So far, the feedback I’ve received has been overwhelmingly positive (thank you!!), but I’ve also noticed a need for the library to evolve into something more featureful and customisable.

When I started working on this project I tried to keep it as minimal as possible. One thing that’s still really important to me is finding the right balance between having a low-friction, easy-to-comprehend API and having enough features for it to be useful in different situations. Hence why I’d like to take some time to consider new additions and how they can be accommodated.

I’ve compiled an overview of the various features and changes I’d like to see:

  • Overall goals:

    • Transparency support (see #22)
    • Lay down groundwork for plugins and custom UI elements (like the swatch requested in #19)
    • New slider types for hue / saturation / transparency
    • Alternate layout directions, e.g left-to-right (see #19)
    • Add support for custom handles (also see #19)
    • New “group” component for composing layouts
    • Continue to improve documentation
  • ColorPicker API changes:

    • New methods: mount, unmount, reset
    • Alias marker param with handle
    • Possibly remove the anticlockwise param and make it true by default
    • New components param, which can be used for more advanced layouts and custom UI elements: components: [ { component: iro.components.slider, type: "hue", height: 32, ... }, { component: myCustomComponent, ...custom component options } ]
  • Code improvements:

    • Tests! I really need to wrap my head around javascript testing frameworks and find a suitable way of testing that both the UI and API work correctly
    • Stop stressing quite so much about the size of the minified output, obviously keeping the library relatively minimal should still be an overall goal, but it shouldn’t come at the cost of code quality
    • Rename marker ->handle, opts -> params internally
    • Consider debouncing color update events
    • Simplify internal SVG lib, add support for use
    • The current way of defining classes is messy, but the minified output is much smaller than es6 classes (when transpiled through babel) would be, maybe a createClass helper function (akin to the one react had before 16.0) could be used instead?
    • Add helpful warnings and checks to the dev build to catch common errors, but strip them out of the production build to keep it small
    • Consider using webpack-blocks for build config
  • Side projects:

    • Flesh out the iro.js landing page ( to show off different features and configurations
    • Official iro.js component for React and Vue
    • Experiment with using Svelte and/or web components
    • Investigate using CSS variables instead of a full dynamic stylesheet writer

I should also note that development is going to slow down for a bit; right now I’m trying to focus on other projects so I have a few extra portfolio pieces – I’d like to get an internship soon :P

Updated 22/03/2018 01:58

[discussion] Change Eigen to have no dependencies?


Why are there any dependencies for Eigen at all? It’s a header only library and the installed CMake scripts do not encode the backend choices (dependencies) that are currently used.

The Installation

set (EIGEN3_INCLUDE_DIR  "${PACKAGE_PREFIX_DIR}/include/eigen3")
set (EIGEN3_INCLUDE_DIRS "${PACKAGE_PREFIX_DIR}/include/eigen3")

EIGEN3_DEFINITIONS (as far as I know) would be for setting something like EIGEN_MAX_ALIGN_BYTES=0 (e.g. for Win32). But in reality, I rarely see any build systems that use Eigen even reference EIGEN3_DEFINITIONS.

When a user of Eigen wants to enable a specific backend, they must find / include / link with it themselves. For example, the FFT backend:

  • compiling code with preprocessor definition EIGEN_FFTW_DEFAULT
  • linking with FFTW libraries e.g. -lfftw3 -lfftw3f -lfftw3l

Even if it were possible, Spack shouldn’t change this behavior either (by somehow enabling EIGEN_FFTW_DEFAULT or adding linking flags in the installed CMake / pkg-config scripts). I’ve always interpreted this setup as “pay for what you use” at its beautiful C++ core.

Since it’s a header only library, Eigen places tuning / optimization / backend choice responsibilities on the user. For example, the user is responsible for compilation flags related to vectorization.

Why Keep the Spack Dependencies

Projects that depend on Eigen and also use / enable a specific backend will always have it available. For example, fftw will currently be installed and available for any dependent of Eigen during the build phase.

Why Get Rid of the Dependencies

It takes a long time to install what is just a header-only library, and currently not all possible backends are even encoded in Spack. For example, superlu and intel-mkl could also be variants for external solvers.

It would take some doing and testing, but I think the backend choices should be removed entirely from Eigen. The responsibility should be on the package that depends on Eigen instead. For example the vpfft package also currently depends on fftw. It doesn’t look like that package actually uses the Eigen FFT code, but if it did then that package would be responsible.

In practice, if a library is using a specific backend, its build system will already be set up to search for / use it.
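For illustration, a dependent package could own the backend choice itself, roughly along these lines (the package name and flag wiring are made up; this is a sketch, not a tested recipe):

# Hypothetical Spack package that opts in to Eigen's FFTW backend itself,
# instead of relying on an eigen variant. Names and flags are illustrative.
from spack import *


class MyFftConsumer(CMakePackage):
    """Example project that uses Eigen's FFT module with the FFTW backend."""

    homepage = "https://example.org/my-fft-consumer"
    # url and version(...) omitted in this sketch

    depends_on("eigen")  # header-only, no backend baggage
    depends_on("fftw")   # the backend this package actually uses

    def cmake_args(self):
        # The consumer enables the backend explicitly: "pay for what you use".
        return ["-DCMAKE_CXX_FLAGS=-DEIGEN_FFTW_DEFAULT"]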

Basically, to remove them, we would need to inspect the current dependents of the eigen package and make sure that they are going to work correctly still. I am more than willing to get work started on this, but ideally people familiar with a given package could also review.

Updated 22/03/2018 15:40 5 Comments

Aliasing via common reference


When features are defined in a common parent, we can relate them by proximity in that parent (reference).

With straight aliases, we request paths between features when an alias is defined between them. With aliasing via common reference, we provide two blocks (as usual) but also the parent they are both defined in, together with a cut-off distance.

The alias via common reference function returns every pair of features that are within the cut-off distance from each other in the parent.

For example, myMarkers and myMarkers2 are example datasets in pretzel-data. If we called alias via common reference with block 1A from each (the more common call would be with blocks from a genetic map, but the idea is the same), via myGenome, with 10 as the cut-off distance, we would return the pair myMarkerA and myOtherMarkerA because they are within 10bp of each other, but not the pair myMarkerB and myOtherMarkerB.

This then allows alignment of genetic maps defined in different marker spaces (what we call namespaces).
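A rough sketch of the pairing step (the positions are invented for illustration; the real implementation would query the backend for feature locations in the common parent):

# Sketch: pair features from two blocks by proximity in a common parent.

def alias_via_common_reference(block_a, block_b, parent_positions, cutoff):
    """block_a / block_b: lists of feature names.
    parent_positions: feature name -> position in the common parent.
    Returns (feature_a, feature_b) pairs within `cutoff` of each other."""
    pairs = []
    for fa in block_a:
        for fb in block_b:
            pa, pb = parent_positions.get(fa), parent_positions.get(fb)
            if pa is None or pb is None:
                continue  # feature not placed in the common parent
            if abs(pa - pb) <= cutoff:
                pairs.append((fa, fb))
    return pairs

# e.g. with a 10 bp cut-off, myMarkerA pairs with myOtherMarkerA but
# myMarkerB does not pair with myOtherMarkerB (positions invented).
positions = {"myMarkerA": 1000, "myOtherMarkerA": 1008,
             "myMarkerB": 2000, "myOtherMarkerB": 2050}
print(alias_via_common_reference(["myMarkerA", "myMarkerB"],
                                 ["myOtherMarkerA", "myOtherMarkerB"],
                                 positions, cutoff=10))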

Updated 22/03/2018 05:58

Redesign Request: Missing Armor Deals Half Damage


This idea was very nice aesthetically and served a good purpose mechanically, but only really works in the midrange of armor stats. Check out these corner cases:

  • The full-armored squish: 100% coverage but zero AS; dies like a DnD wizard because every shot is max damage.
  • The armorless tank: have tons of Fort and buy one medium armor arm rig with an armor mat. 90% of shots do half damage, and the other 10% have to get through your AS. Total cost: dirt cheap.

I am of the opinion that it will make far more sense to set a permanent dividing line between shots that deal full or half damage.

Example section: for the sake of argument, let’s say 60% of the average human is vulnerable to full damage per shot. Further suppose that a target has 75% coverage. Then:

  • Armor rolls of 1-60 would deal full damage, first to AS and then to health.
  • Armor rolls of 61-75 would deal full damage to AS and half damage to health. If a shot removes the last AS, half the remaining damage is removed from health.
  • Armor rolls of 76-100 would deal half damage to health.

If the target instead had 50% coverage, the difference would be that rolls in the 51-60 range deal full damage to health, ignoring AS. The outer ranges would function the same. Essentially, lighter armors may not cover all vulnerabilities, and heavier armors might be extensive enough to cover less vital areas.
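A quick sketch of that resolution using the example numbers (the 60% line and the AS bookkeeping follow the proposal above; the function shape itself is just illustrative):

# Sketch of the proposed fixed dividing line between full- and half-damage shots.
VULNERABLE_THRESHOLD = 60  # % of the average human vulnerable to full damage

def resolve_shot(roll, coverage, damage, armor_soak):
    """Return (health_damage, remaining_armor_soak) for one armor roll (1-100)."""
    vulnerable = roll <= VULNERABLE_THRESHOLD
    armored = roll <= coverage

    if not armored:
        # Shot misses the armor entirely.
        return (damage if vulnerable else damage // 2), armor_soak

    # Armored hit: full damage is applied to AS first.
    soaked = min(damage, armor_soak)
    overflow = damage - soaked
    # Overflow reaches health at full value in vital areas, at half otherwise.
    health = overflow if vulnerable else overflow // 2
    return health, armor_soak - soaked

# 75% coverage: a roll of 80 misses armor and deals half damage to health.
print(resolve_shot(roll=80, coverage=75, damage=10, armor_soak=5))  # (5, 5)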

Some implications:

  1. #189 is a lot muddier, since sometimes it will be better to roll low and other times better to roll high on armor coverage (and other times, like in the above example, the best rolls are in the middle). This is actually already the case, but this proposal would make the muddiness more consistent across characters.
  2. The dividing line between full- and half-damage shots provides another possible handle for balancing races. Maybe Plizards have fewer vulnerable organs than Opaleites. Maybe robots can spend a race trait to move that bar downward.
  3. Technically increases dice rolling, as now 100%-coverage characters may still have to roll.

I would like to collect reactions from some number of devs before yellowlighting my own idea.

Updated 21/03/2018 23:36

Saving a scene containing a tool node will call PREDELETE notification, while not being deleted


Godot 3.0.2

Is this behavior intended? I thought NOTIFICATION_PREDELETE would be sent only before a node is deleted, but it happens anyway when I save the scene, while no node gets deleted (at least I see no reason for that to happen in this situation).

Not only does it call PREDELETE, but _init() gets called too, BEFORE the predelete… Why is all this even done? I’m saving a scene containing a terrain, which is quite heavy, and these steps seem unnecessary.

Test project:

1) Open the main scene and notice the log prints: Init, Enter world, Enter tree, Ready.
2) Save the scene right away and notice this gets printed: Init, Predelete.

Updated 21/03/2018 22:06

Amendments to the current two-column package overview solution


First off, I don’t mean to complain about the redesign. I too got used to the old design, but in general I encourage a different layout and other improvements.

For the current design of the two-column idea, I see some good and some bad aspects. First an example where it shines:


But then also this somewhat-worst-case:


Frankly, my eyes get completely lost looking at this mess, in good part due to the “Default” and “Type” captions in the middle of it. Of course, for most other packages things look better, but I think the least that could be done is to add a very clear visual border around the “properties” div. One might also play with the background color, but I think a border is better.

Also, I wish to propose some further changes. I’ll lead with some ascii mockup (insert invented quote “good design will look good in ascii too”)

hackage                                                     package search [ _______ ]

$package - $synopsis

[Skip to Readme] [Home page] [Bug tracker] [Index of Exported Entities]
[.cabal file] [Changelog] [Source repo]

modules                        |        author john doe
                               |    maintainer john doe
Language.English               |         legal copyright blah, license
Language.English.Grammar       |     downloads 310348 total (289 in the last 30 days)
Language.English.Nouns         |        rating 2 (n votes) [+ ++ +++]
Language.English.Verbs         | distributions .. .. ..
Language.English.Tutorial      |      versions 0.0 , .. , 0.6.1 , .. , 0.7.3 , .. ,
                               |               0.8.2 , 0.8.3 , 0.8.4 , 0.8.5
                               |               all versions
                               |       version mypackage-0.6.4 revision 99
                               |      uploaded john doe at date
                               |       revised thomas trustie at date
                               |  dependencies base, kmettoverse, my-custom-prelude


  very understandable


  fire-rockets - dont turn on please




  markdown, nicely rendered

This retains the two-column idea, but applies several changes:

  1. From the realisation that base looks so nice because modules come first, and I often come to a package just to scroll to the module section: Put modules first.
  2. Condense several of the properties. E.g. license+copyright -> legal, could also do author+maintainer -> people
  3. Remove the “category” property as it does not add much
  4. Take all the different package-relevant links previously placed in random locations, and put them up top. Together with 1, this means that many important links for this package are reachable from the first visible screenful of the page.
  5. Reorder the properties, and put the per-package-version properties at the bottom, clearly labelled with the version the user is looking at currently. “mypackage-0.6.4 revision 99” should be bold (ah, ascii is failing me).
  6. Remove clutter from the header. Imho, most of the links there I won’t ever use when looking at a particular package. And if your argument is consistency, then the module docs are different anyways.

Optional:

  • Remove the version bound information, to be displayed on a separate page or in an expandable block. I think the displayed data is vague anyway, as it can depend on flags.
  • Hide many of the versions; focus on the latest of each major version (or some such scheme). The rest can go on a separate page.

Also, relevant for haddock: Following 4., module pages could have the exact same links in their header.

Updated 21/03/2018 23:00 4 Comments

Discussion on balancing for option swapping


There’s been some discussion on discord already, but good to have it in a more lasting form. Some points to cover:

  • Should we allow users to fully customize left click ops?
  • Should we allow users to fully customize shift click ops?
  • Is swapping ops on inventory items okay?
  • Is swapping ops on equipped items okay?
  • We want to avoid changing XP rates, but how far do we go with that? Do we just not allow specific items to be op swapped, or do we extend that to teleportation and utility items too since they have an indirect effect on XP rates?
  • What could be done to mitigate the effects of changing XP rates?
  • What precedents have been set by other clients that we could follow?
Updated 22/03/2018 06:29 7 Comments

Additional questions to the Client - REQUIREMENTS PHASE


Hello all,

I am about to send an email to our Client with additional questions regarding the requirements.

Please review them and add some comments/ additional questions etc. The below is the version of questions to be used in a final email.

  1. Do we need any additional data about lecturers beyond first and last name? Do we assign to a lecturer, e.g., the courses they teach and/or the dean’s-office groups under their care? Is there any other data, not mentioned so far, that should be assigned to a lecturer?

  2. Is there an option to remove a student from the student list? If so, what conditions must be met for this to be possible?

  3. The system is supposed to allow generating the following reports:

    • A list of grades for a specific student
    • A list of grades for a given course (by dean’s-office group)
    • A list of students on conditional status. Are these the expected reports? Should other reports, not mentioned above, be added? If so, which ones?
  4. How should the semester’s time frame be defined? Can the Administrator manually set the semester dates, i.e. from - to?

  5. What do we mean by “closing the semester”? Is it the “to” date set as the end of the semester, or another date until which, for example, grades may still be entered?

  6. How should the modes be handled? Does a student advance to the next semester in NORMAL mode if, e.g., they reach the minimum number of ECTS points required to pass the semester (question: who enters this value? Can we assume it is a constant, e.g. 60 ECTS points per semester?) PLUS pass all assigned courses? Does a student advance to the next semester in CONDITIONAL mode if they reach a specified percentage of the total ECTS required for the semester PLUS have only one failed course? Is the above reasoning in line with expectations? If not, please correct it. Additionally, what conditions must a student meet to be able to take a leave of absence or repeat a semester?

Updated 22/03/2018 09:09 3 Comments

Javascript runtime required in production


System configuration

Sprockets or Webpacker version: webpacker ^3.3.1
React-Rails version: 2.4.4
React_UJS version: ^2.4.4
Rails version: ~> 5.1
Ruby version: 2.4.1

We just added react-rails to our app and are trying to compile the assets with Docker for production. We encounter this JavaScript runtime error when trying to start the server, but the error goes away if I add gem 'therubyracer' to my Gemfile. My question is: do we absolutely need the therubyracer gem even if we are not doing any server-side rendering? Not sure if I missed it in the README somewhere.

From ...

WORKDIR /opt/app

RUN yum -y install git mysql-devel

COPY Gemfile Gemfile.lock package.json .npmrc yarn.lock /opt/app/
RUN bundle install --without development,test

COPY . /opt/app

# Remove anything added to build
RUN curl --silent --location | bash - && \
    curl -o /etc/yum.repos.d/yarn.repo && \
    yum install -y nodejs yarn && \
    yarn install && \
    bundle exec rake assets:precompile && \
    rm -rf node_modules && \
    yum autoremove -y nodejs && \
    yum -y clean all

Message from application: There was an error while trying to load the gem 'react-rails'.
| Gem Load Error is: Could not find a JavaScript runtime. See for a list of available runtimes.
| Backtrace for gem load error is:
| /usr/local/bundle/gems/execjs-2.7.0/lib/execjs/runtimes.rb:58:in `autodetect'
| /usr/local/bundle/gems/execjs-2.7.0/lib/execjs.rb:5:in `<module:ExecJS>'
Updated 22/03/2018 17:18 2 Comments

Add support for adding peers by IP:port



  • [X] You may try to follow the update procedure described in the README and try again before opening this issue.
  • [X] Before asking for something, please check the F.A.Q..
  • [X] Before opening a bug issue, please check the Troubleshooting wiki section.
  • [X] If you want to contribute to the project please review the contributing guidelines.
  • [X] Keep in mind that Flood is a FLOSS (Free, Libre and Open Source Software), so please try to provide a PR when opening a bug issue. Without contributions the project can’t live and with your help fix and request will come faster.
  • [X] The project is accepting issues (bugs report), feature or enhancement requests, discussions and questions but not personal support.


Would it be possible to add a small “+” to “Torrent details”->“Peers” that allows adding a new peer by IP:port? As far as I understand rtorrent supports it via d.add_peer=host[:port] but it would be nice to have access to the feature via UI.

Updated 22/03/2018 18:59 3 Comments

Provide better user experience for applying types


Applying types manually from a JSON file is both less fun and error prone (see #36).

We should consider a nicer UI (a CLI prompt for the node runner, and maybe a GUI button for the webpack plugin) to show all of the accumulated types in some fashion and allow you to apply them instantly.



Updated 21/03/2018 17:38

Are refunds manual or automatic?


So suppose the announced_launch_time is reached, but the SMT creator has not revealed the cap.

According to the whitepaper, and issue #2241, refunds occur manually. According to the implementation, refunds occur automatically. The purpose of the issue is to discuss which is the correct definition of the product: The whitepaper (manual) or the implementation (automatic).

Whichever way we decide to do it, either the whitepaper or the implementation will need to be updated, since they disagree about which way it should be done.

Manual refunds

According to the whitepaper, refunds are manual – each contributor individually decides whether to issue smt_refund_operation. This allows a SMT creator and its community to mutually agree to delay a launch in case of some unforeseen circumstance:

  • The SMT creator says “I’m okay with delaying the launch” by not revealing the cap
  • Each community member individually says “I’m okay with delaying the launch” by not asking for a refund

Automatic refunds

The code in #2245 implements refunds as automatic: Once announced_launch_time passes, refunds become a “ping” operation which can be executed by anybody. Effectively, this means users get their refunds immediately. (There is no way for a user to refuse to accept a “ping” refund executed by somebody else.)

Updated 21/03/2018 17:43 1 Comments



I thought I’d capture some notes around merging. There’s more discussion in the wiki and in my paper notebooks!

If we allow users to establish multiple initial timelines — that is, to work offline from scratch on more than one device, and then to sign in later to the same account — we will need to support merges of current state.

That is: given a timeline of states A -> B -> C and a timeline of states X -> Y -> Z, produce M(C, Z) which, in the simplest (totally disjoint) case is equivalent to C + Z.

This is not the same thing as attempting to rebase (X -> Y -> Z + 𝛿A + 𝛿B + 𝛿C); it’s possible for conflicts to occur between intermediate states that would be resolved in aggregate, and it’s also not correct in a theoretical sense to imply even an approximate happened-after relationship.

This kind of merge is a three-way merge from the empty shared root. There are other kinds of merges we might consider: e.g., a long-lived divergence from a non-empty shared earlier state.

I sketched out how I think this might work:

  1. Materialize the remote head in a second datoms table. Reusing the fulltext table avoids the need to rewrite those values.
  2. Ensure that the remote schema is current locally. Abort now if the local code is too old. We rely on both sides having the same concept of uniqueness and cardinality.
  3. Renumber locally. This ensures that by default our local data does not accidentally collide with the main timeline.
  4. For all unique av pairs, take the remote e and tx for that av, renaming local identifiers. This essentially discards local duplicates. Idents are a special case of this and need to be done first in order to resolve attributes! Now we have no uniqueness conflicts, and there should be no e conflicts (because we renumbered). This is our smushing step (see the sketch after this list). Each time we rewrite an entity that is also used elsewhere in the v position we need to iterate until we converge, because the new local av might yield another new e. Once this step is complete we have fully smushed: all of our entities that can be linked via a unique-identity path to a value will have been unified.
  5. For all cardinality-one ea pairs, look for a conflict between the two datoms tables and resolve according to rules. Resolving a conflict might mean taking the remote value, which might require further smushing.
  6. Now take all datoms in local and INSERT OR IGNORE into remote. Synthesize a single merge tx if desired: using a single tx gives us more formal definition of a merge, but loses history (and either renders meaningless or discards tx metadata). We could point back to txids from the merge tx. Using multiple txes preserves granularity but obscures ordering, unless we reify multiple transaction logs.
  7. Select from remote datoms on tx to populate the remote tx log: that is, we work backwards from the datoms table to the log.
  8. Annotate the new tx with merge info.
  9. Optional: compact the parts table.
  10. Upload the new data.
  11. Rebuild the cache and schema.
  12. Notify consumers about renumbering.
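A toy sketch of the smushing iteration in step 4 (plain tuples and dictionaries standing in for the datoms tables; not Mentat’s actual representation, and integer values are assumed to be entity references):

# Toy illustration of step 4: unify local entities with remote entities that
# share a unique (attribute, value) pair, iterating until convergence.

def smush(local_datoms, remote_unique_index, unique_attrs):
    """local_datoms: list of (e, a, v) tuples, already renumbered.
    remote_unique_index: (a, v) -> remote entity id for unique-identity attrs.
    Returns the rewritten datoms plus the local -> remote entity mapping."""
    mapping = {}
    datoms = list(local_datoms)
    changed = True
    while changed:  # a rewrite can expose another unique (a, v) match
        changed = False
        for e, a, v in datoms:
            if a in unique_attrs and (a, v) in remote_unique_index:
                remote_e = remote_unique_index[(a, v)]
                if mapping.get(e) != remote_e:
                    mapping[e] = remote_e
                    changed = True
        # Rewrite both the entity and (entity-valued) value positions.
        datoms = [(mapping.get(e, e), a, mapping.get(v, v) if isinstance(v, int) else v)
                  for e, a, v in datoms]
    return datoms, mapping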
Updated 22/03/2018 22:39 6 Comments

improve guidance on response (status code) documentation


The guidance on error documentation is partially contradictory. On one side, API designers are required to document all errors; on the other side, they are allowed to omit technical errors as well as functional errors that do not differ from the default semantics.

To improve the guidance, a default pattern is introduced that resolves this conflict, and the documentation is extended to cover successful status codes.

Updated 21/03/2018 15:12

Team Meeting 03-21-2018



  • Functionality Review: @szymonkaliski in good place for Testing for core functionality
    • Need Intro screen (For Alyssa and HOT team) > Due Monday
    • Content for Learn page (lower priority b/c inspired by documentation)
    • Remove Localization / Language link
    • @szymonkaliski core functionality full screen button (See #29) > Plan for Monday
    • @szymonkaliski also make pages for Learn and Intro pages > Plan for Monday

  • Design Priorities What are current understandings and priorities for coming week
    • Notes are both visible to @szymonkaliski and HOT team
    • Would like to align on high level concept on UI on HOT styling guideline of new website
      • Current HOT styling: and,-Fonts,-and-Logo
      • Current OSMA style:
  • About and Learn move into one high level menu link / Keep Save in High Level link
  • Save @szymonkaliski Thinking of prompting when close website
  • Add Scale to map (See ID for how they do it)
  • Starting view will be in proper zoom level where user is.

Testing Priorities (@smit1678 and Alyssa work on that)
  • @mataharimhairi gathering feedback from users (timeline: 1.5 weeks)
  • Connect @szymonkaliski with 2-3 users in the next week / see what type of questions they have / someone from HOT to attend the meeting
  • @szymonkaliski can also watch one of us use the site (noting that we come with bias)
  • Identify problems we need to solve

Next Priorities (things @szymonkaliski will work on for next week)
  • Full screen, intro page, and learn page
  • Move towards a design that aligns with the HOT UI
  • Note: world tiles are up to date
  • HOT team to send examples of documentation points of reference

Updated 21/03/2018 14:47

Implement test runner interface


Every test runner has an interface showing the current state of test execution. As opposed to v4, where all reporters printed to the stdout stream, the reporters in v5 all push their logs to a file. That allows us to keep the interface simple and minimalistic. My suggestion would be something along these lines:

$ wdio wdio.conf.js

RUNNING 0-0 in Chrome - test/endpoints/info.test.js
RUNNING 0-1 in Chrome - test/handler/console.test.js
RUNNING 0-2 in Chrome - test/middleware/metrics.test.js
RUNNING 1-0 in Firefox - test/endpoints/info.test.js
RUNNING 1-1 in Firefox - test/handler/console.test.js
RUNNING 1-2 in Firefox - test/middleware/metrics.test.js
RUNNING 2-0 in Safari - test/endpoints/info.test.js
... 16 pending tests

Test Suites: 7 passed, 23 total (31% completed)
Tests:       65 passed, 2 failed
Time:        14.099s

This example shows a test suite with 3 test files running in 3 browsers (Chrome, Firefox and Safari) with maxInstances set to 7.

Any other suggestions?

Updated 21/03/2018 13:32

Citations, credit, and papers


The CITATION file is massively out of date. It should at the very least have the MMS paper in, but we probably also want to get a new paper in some form out in the next year or so – we are missing some important contributors, e.g. @dschwoerer, @d7919, @bshanahan to name only a few. We do have a more complete list of contributors on our website, but those contributors that are only listed there are likely not getting the credit they deserve.

Getting proper credit for software is a tricky problem though, and it’s something the Research Software Engineer community is currently trying to get to grips with. I’ll try to briefly summarise our options, which are not necessarily mutually exclusive:

  • A traditional physics/research paper in something like PoP
    • This is probably the hardest, as we’d need to tackle a physics problem that showcased most/all of the recent developments in BOUT++
  • A paper in CPC
    • Unlikely, they are not keen on software updates, they seem to much prefer new software
  • A paper in Journal of Open Research Software and/or Journal of Open Source Software
    • These are new, novel journals aimed specifically at tackling software citations. JOSS says:

      If your software is already well documented then paper preparation should take no more than an hour.

      JORS expects a bit more of a traditional paper (I think?), but both seem to be aimed more at just getting a DOI and a traditional journal-style citation for software than at expecting research like PoP/NF, etc. I don’t know what either of their policies are on software updates and adding more collaborators. JOSS is free, and JORS is nominally £300, but pay-what-you-can.

As well as updating CITATION, we may also want to provide a Citation File Format file. This is a YAML file which aims to standardise CITATION files by making them both machine- and human-readable. We probably also want to mention explicitly how to cite us in both the and on our website.

Updated 21/03/2018 11:42

Raspberry pi fork


I’ve forked this project to make a Raspberry Pi compatible version, happily running on my own little RPi 3. The fork is at ; I’m letting you know as per your contributor guidelines.


Updated 21/03/2018 21:50 3 Comments

Url behavior in mixins


When applying mixins, we should fix all url() values to be relative to the target stylesheet.


/0/1/

.root { background: url(./asset.png); }

/0/2/

:import { -st-from: "../1/"; -st-default: Style; }
.root { -st-mixin: Style; }

Current output

/0/2/

.root { background: url(./asset.png); }

Fixed output

/0/2/

.root { background: url(../1/asset.png); }
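A rough sketch of the path fix-up itself (pure path arithmetic; the real fix would hook into Stylable’s mixin resolution and handle absolute, external and data: URLs):

# Rewrite a url() declared in the mixin's directory so it stays correct
# relative to the stylesheet the mixin is applied in.
import posixpath
import re

def rewrite_url(url, mixin_dir, target_dir):
    if re.match(r"^(https?:)?//|^/|^data:", url):
        return url  # absolute / external / data URLs stay untouched
    absolute = posixpath.normpath(posixpath.join(mixin_dir, url))
    return posixpath.relpath(absolute, target_dir)

print(rewrite_url("./asset.png", "/0/1", "/0/2"))  # -> ../1/asset.png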

Updated 21/03/2018 12:11 1 Comments

Color scheme


Recently Materia has changed the color scheme based on opinions of various users.

If you have any opinions or suggestions about the color scheme, feel free to post them here.

Updated 22/03/2018 15:11 14 Comments

SQLite Array type



Please note this is an issue tracker, not a support forum. For general questions, please use StackOverflow or Slack.

For bugs, please fill out the template below.


What are you doing?

Do you think it’s worth introducing the array type to the SQLite dialect as well? Though SQLite does not support arrays, we could try to imitate them via the JSON type. Please let me know whether you would accept a pull request for this at all. Something like this:

    function ARRAY() {
      if (!(this instanceof ARRAY)) {
        const obj = Object.create(ARRAY.prototype);
        ARRAY.apply(obj, arguments);
        return obj;
      }
      DataTypes.ARRAY.apply(this, arguments);
    }
    inherits(ARRAY, DataTypes.ARRAY);

    ARRAY.prototype.toSql = function toSql() {
      return 'JSON';
    };

    ARRAY.prototype._stringify = function _stringify(value, options) {
      return `${options.escape(JSON.stringify(value))}`;
    };

By the way, currently I add this new data type manually via DataTypes.sqlite.ARRAY = ARRAY;. Is that the recommended way?

What do you expect to happen?

Simple array type in SQLite.

What is actually happening?

SQL dump when using arrays: SequelizeDatabaseError: SQLITE_ERROR: near "[]": syntax error

Output, either JSON or SQL:

CREATE TABLE IF NOT EXISTS `scopes` ( `scope` VARCHAR(255)[] NOT NULL DEFAULT ARRAY[]::VARCHAR(255)[], `userAlias` VARCHAR(255) NOT NULL REFERENCES `users` (`alias`) ON DELETE CASCADE );

Dialect: sqlite
Dialect version: XXX
Database version: XXX
Sequelize version: 4.37.2
Tested with latest release: 4.37.2

Updated 22/03/2018 06:57 2 Comments

Add test job to CircleCI

  • Added test job to circleCI: takes about 1.5 minutes.

I think there are three ways we can go, regarding how to test a PR:

  1. Use Travis CI for tests and CircleCI for screenshots. (+) Higher capacity, therefore fastest. (-) Hard to maintain two configurations. (-) Travis CI doesn’t provide artifact storage.
  2. Use CircleCI for tests and screenshots (this PR). (+) Easier to maintain. (-) Uses 3 containers per build.
  3. Use CircleCI for tests and screenshots, with the screenshots job depending on the test results [1] (CircleCI docs). (+) Easier to maintain. (+) Screenshotter will not run if tests fail. (+) Uses 1-2 containers per build. (-) Takes a long time if tests pass.

[1] image

Updated 22/03/2018 15:28 1 Comments


  1. analytics
  2. transactions
  3. can filter between different types of transactions?
  4. farmers (?)
  5. doesn’t really make sense to show farmers with admins and traders?
  6. accounts

home? - don’t really need buttons to navigate since there are tabs? - unless home = memos

Updated 21/03/2018 08:46

downscaled Run 1.2p that is still useful for validation?


Based on the discussions in the DC2 slack channel, it seems likely that the next PhoSim run will be based on the PhoSim configuration from test 33 or 34, which do not use the quick background mode. Our original estimate of the Run 1.1/1.2 size (see plan here) was based in part on how much computational resource we wanted to use, and we had assumed a run time based on the use of quick background. This leads to the following question: should our next PhoSim run follow the original Run 1.1 design (and thus consume more resources than planned for testing and validation), or should it be downscaled in some way to take up roughly the same computational resources as before? My question is currently aimed at the analysis working groups: we want to use these test runs to ensure that the DC2 sims will enable you to achieve your DC2 needs, so the question is whether a downscaled version will still provide enough test data for you to check this out.

As a reminder, Run 1.1 was designed to cover 5x5 deg2 of WFD, of which the 1.1x1.1 deg2 uDDF is a subset. It includes all 6 bands. To quote the document, “The WFD visits will be the first 750 visits that overlap the main survey region. We follow Twinkles and choose 586 Deep Drilling Field visits that fall on the uDDF survey region. This is done by choosing unique combinations of night and filter bandpass, so that the visit list is not exhausted at the beginning of the survey.”

I can think of a few ways to reduce this, with my preference being for option 3 for reasons described below:

  1. Reduce area: If we want to do things like check the galaxy populations and even maybe run a cluster-finder to check that the sims are useful for the various analysis WGs, then reduced area might make validation challenging.

  2. Reduce the numbers of bands: I think we do want to check multi-band photometry, so we at least need a few bands, but I would prefer to keep all 6, in case there are wavelength-dependent issues that we could identify with this test run.

  3. Reduce the depth: at least for WFD, this seems like the safest way to do a validation test that is less expensive yet still will give us the info that we need. Even with a factor of 3 fewer exposures, we’d be reaching a depth that should enable interesting tests of galaxy photometry, cluster-finding, etc. - it’s still ~3-year depth. For uDDF, I would imagine some useful validation could also be done with fewer visits, though I would like to hear from @rbiswas4 @jbkalmbach about this assumption (and whether there are any tricky issues having to do with selecting the visits in a way that enables sufficient tests of the transient populations).

Again, while comments are welcome from anybody, I would particularly like to hear from the analysis WGs, who need to use these test runs to check whether the sims will meet their DC2 needs. Paging a few people: @rbiswas4 @jbkalmbach @fjaviersanchez (wearing an LSS hat for the moment :) @rmjarvis @erykoff @dannygoldstein @joezuntz – happy to hear from anybody else.

Also, if @fjaviersanchez (wearing his SSim hat now :) can comment on whether the changes we discuss have an impact on the effectiveness of the planned validation tools, that would be helpful. My feeling about the exposure checker is that even 1/4 to 1/3 of the planned number of visits is plenty; it works at the level of half-chips, and so even the kind of downscaling we might do will provide a huge number of half-chips. What about some of the other tools?

Updated 22/03/2018 21:27 23 Comments

Added: C# Design Notes for Jan 2018


Many of these topics have evolved since these notes - the raw notes are available, but have yet to be cleaned up.

C# Language Design Notes for Jan 3, 2018

  1. Scoping of expression variables in constructor initializer
  2. Scoping of expression variables in field initializer
  3. Scoping of expression variables in query clauses
  4. Caller argument expression attribute
  5. Other caller attributes
  6. New constraints

C# Language Design Notes for Jan 10, 2018

  1. Ranges and endpoint types

C# Language Design Notes for Jan 18, 2018

We discussed the range operator in C# and the underlying types for it.

  1. Scope of the feature
  2. Range types
  3. Type name
  4. Open-ended ranges
  5. Empty ranges
  6. Enumerability
  7. Language questions

C# Language Design Notes for Jan 22, 2018

We continued to discuss the range operator in C# and the underlying types for it.

  1. Inclusive or exclusive?
  2. Natural type of range expressions
  3. Start/length notation

C# Language Design Notes for Jan 24, 2018

  1. Ref reassignment
  2. New constraints
  3. Target typed stackalloc initializers
  4. Deconstruct as ref extension method
Updated 22/03/2018 21:19 8 Comments

Can I fork this repository and translate it to Korean?


I use this Hexo theme and it is really good, so I want to translate it for other Korean users (including the code annotations). I may also need to edit some CSS, for example the fixed header position and the mobile-responsive padding values. Of course, all this work will be done in my fork repository. If you do not mind, I might also be able to create a pull request with the edited CSS.

Please make sure these boxes are checked before submitting your issue. Thank you!

  • [x] I have set up and configured the blog according to the Hexo official documentation;
  • [x] I have read the Theme Wiki carefully and created my own configuration file (_config.yml);
  • [x] I have looked through the Issues and found no duplicate issues.


Updated 21/03/2018 04:13 2 Comments

Design Feedback


This is initial design and layout feedback for the Animation Prototype. It is still a work in progress and just ideas; I’d like to review it as a group before we prioritize anything to implement.

Updated 21/03/2018 14:51 3 Comments

Storing meeting data on the team member


Team to [Active] Meeting is a 1-to-1 relationship. For that reason, the initial action meeting stored the meeting details on the Team table, and we just wiped it after the meeting ended. For retrospectives, there is so much more info stored that it makes sense to give it its own table. It’s a little more work to make sure the 1-to-1 relationship is maintained, but doable.

Now we have the same problem with team members. We’ve got meeting-specific data about team members, and the question is: do we continue to store it on the TeamMember? Do we store it on their check-in phase? Or something else?

The data we need to store is:
  • check-in info (present, absent, null)
  • votes remaining
  • ???

It doesn’t feel right to store team-member-specific stuff on their check-in round because, for example, if a meeting doesn’t have a check-in round, where would it go? An alternative would be to create a “meeting member” entity that stores this stuff; it’d be a 1-to-1 relationship to the team member, which gets confusing again.
Alternatively, we could denormalize it and put it on the meeting object itself. We end up doing that anyway for the meeting summary, to keep track of who was present/absent.

The only difference between the two options is whether we create a new table or store it in an object on the existing meeting table. Given those two choices, I’d opt for a new table for easier retrieval; a rough sketch of such a record follows.
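
For illustration only, a sketch of what a separate meeting-member record could look like; every field and id name here is made up:

    // Hypothetical shape of a separate "MeetingMember" record
    const meetingMember = {
      id: 'meetingMember|abc123',
      meetingId: 'meeting|xyz789',       // which meeting this row belongs to
      teamMemberId: 'teamMember|def456', // 1-to-1 with the team member for that meeting
      isCheckedIn: null,                 // present (true), absent (false), or null
      votesRemaining: 5
    };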

Updated 20/03/2018 23:21

Possibility of different database instead of SQL Server


As far as I know, EntityFrameworkCore (and EF in general) supports other databases. Is there a case where I can use a different one, and how? I noticed there’s only a ConnectionString that is passed to “AddCofoundry”.

Is it possible to hook in and set your DbContextOptionsBuilder with my own instance for Npgsql, i.e. UseNpgsql instead of UseSqlServer?

In such a case it would be nice to have Migrations, because they can be generated for other types of databases. I haven’t looked at the SQL scripts for generating the Cofoundry database.

Updated 21/03/2018 10:03 1 Comments

Automatic actions framework


In this ticket, let’s discuss a framework for automated actions.

The problem is that we want some actions to occur “automatically” over time. However, we want to be sure that it is impossible for normal or malicious use to require an unbounded number of such automatic actions during replay.

Many existing blockchain functions could be described as automatic actions. However, in this ticket I will limit myself to consideration of not-yet-implemented, SMT-related automatic actions:

  • Claiming SMT tokens
  • Market maker ping transactions
  • SMT emissions

The design I propose is as follows (a rough selection sketch follows the list):

  • Place automatic actions in a new block extension type, automatic_actions
  • The block extension should have a maximum serialized size equal to a fraction of the block size (perhaps 20%).
  • Witnesses are free to determine the set of block actions from the set of available actions using any algorithm.
  • Each action is completely determined by a corresponding action key. The rest of the action’s fields must be a deterministic function of the action type, action key, and the blockchain state.
  • For now, the witness plugin will implement a simple FIFO ordering of actions. A single SMT’s market maker and inflation actions will be included at most once per hour.
  • In the future, if actions start to queue because the block extension is consistently full, the witnesses (or any subset of them) can implement and switch to a different ordering algorithm at any time. For example, we may use the Steem in the SMT’s market maker to determine how often the SMT’s actions can occur.
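
For illustration, a rough sketch in plain JavaScript pseudocode (not the actual steemd implementation) of the FIFO selection a witness could run when filling the automatic_actions extension; the queue shape, field names, and time units are assumptions:

    // `pending` is a FIFO queue of { actionKey, serializedSize, lastIncludedAt } entries.
    function selectAutomaticActions(pending, maxBlockSize, nowSeconds) {
      const budget = Math.floor(maxBlockSize * 0.20);  // extension capped at ~20% of the block size
      const chosen = [];
      let used = 0;
      for (const action of pending) {                  // simple FIFO ordering
        if (nowSeconds - action.lastIncludedAt < 3600) continue; // at most once per hour per action key
        if (used + action.serializedSize > budget) break;        // stop when the extension is full
        chosen.push(action);
        used += action.serializedSize;
      }
      return chosen;
    }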

Related tickets:

  • 2052 SMT emissions resource consumption

  • 2054 smt_contribute_operation, smt_claim_operation (proposes tranche system)

  • 1724 Implement timed event scheduler

  • 2060 Ping transaction for market maker

Specific implementation notes about different operation types:

Claiming SMT tokens

Within a given SMT, claims MUST be processed in FIFO order. The code should be written to allow processing of multiple claims for a single SMT within a single block.

The action key for SMT claims is the SMT’s asset ID.

Refunding SMT contributions

Refunds should not be automatic actions. Refunds should be operations; a user must explicitly issue a transaction in order to receive a refund for an SMT launch that did not occur. If the total contribution of users who do not choose to be refunded is greater than the minimum, a delayed launch can occur.

Market maker ping transactions

The action key for a market maker ping transaction is the SMT’s asset ID.

SMT emissions

SMT emission should have its own clock which advances one “tick” every time an action occurs. So if an SMT is programmed to emit every hour, and it has been 20 hours since the last action, the next action will only do a single hour’s worth of emission. For an SMT whose emission has become backlogged, multiple emission actions could potentially be included in a single block.

The action key for SMT emissions is the SMT’s asset ID.

Updated 22/03/2018 15:32 4 Comments

React component tests


Boostnote’s functionality grows incredibly fast. The core components have become huge and bloated with logic. We should start refactoring components, but frankly, I’m scared to do that without at least a basic set of tests. Unfortunately, we don’t have any.

I propose using the usual testing stack, Enzyme + Mocha + Chai, to test components. Then we can add tests for components step by step before refactoring. I can start this process in my #1557
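
For illustration, a minimal sketch of what such a test could look like with Enzyme + Mocha + Chai; the component name, prop, and import path are made up, and the Enzyme adapter setup is omitted:

    // test/components/NoteItem.test.js — hypothetical component and path
    import React from 'react';
    import { shallow } from 'enzyme';
    import { expect } from 'chai';
    import NoteItem from '../../browser/components/NoteItem';

    describe('<NoteItem />', () => {
      it('renders the note title', () => {
        const wrapper = shallow(<NoteItem title="Hello" />);
        expect(wrapper.text()).to.contain('Hello');
      });
    });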

@kazup01 @Rokt33r What do you think?

Updated 21/03/2018 12:12 4 Comments

Duality audio upgrade


Here are some ideas (brainstorming) on how we could improve the audio features of Duality.

1. Support Multiple SoundListeners at Once See #482

2. Adjust the Volume via Gizmo: set the min/max distance of the audio emitter via the scene editor, like scaling an object.

3. Cone Angles

The cone angles (inner and outer) specify how “sharp” the direction is. If inner and outer are 180°, the sound is the same from all directions. If inner is 45° and outer is around 60°, you’d have the full audio when the direction points at the listener within 45°, no (or minimum) audio above 60°, and fading between both in between those angles. (by Adam)

4. Audio Effects: add audio effects to a sound emitter (or sound instance?): Low Pass, High Pass, Echo, Distortion, Reverb.

5. Audio Areas: define audio areas (like the shape of a rigidbody?) so the audio emitter gets the same volume everywhere in this area.

So, do you have further proposals and ideas?

Updated 21/03/2018 16:26

Collapsible/hideable package metadata.


I’d like to suggest an alternative to the two-column layout that would also benefit mobile/narrow-window users. If the metadata were collapsible/hideable and the last state were remembered in a cookie, users that don’t care about the metadata would only have to collapse it once and then not see it anymore, while those who care can just keep it open. A third, condensed state would be nice, where some important bits are shown, like the currently viewed version.
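
For illustration, a rough sketch of the cookie-remembered collapse behaviour; the element ids and cookie name are made up:

    // Hypothetical element ids on the package page
    var panel  = document.getElementById('package-metadata');
    var toggle = document.getElementById('package-metadata-toggle');

    function setCollapsed(collapsed) {
      panel.hidden = collapsed;
      // remember the choice for a year
      document.cookie = 'metadata-collapsed=' + collapsed + '; path=/; max-age=31536000';
    }

    toggle.addEventListener('click', function () { setCollapsed(!panel.hidden); });
    // restore the last state on page load
    setCollapsed(document.cookie.indexOf('metadata-collapsed=true') !== -1);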

Updated 21/03/2018 02:50

Left-align metadata identifiers


The lack of a clear distinction between description and metadata makes the pages harder to read for me. This comment conveniently has two images that show the problem; it is specifically about the readme background color, but the effect is the same with a long description, cf. the lens package.

In the first image the border clearly separates the metadata and all is well; in the second image the lack of a border makes it hard for me to read: the metadata identifiers (Author, Repository, etc.) look like they’re part of the text, even though they’re bold.

I’d like to suggest left-aligning the identifiers, which would create the impression of a border and thus a clearer separation.

Updated 22/03/2018 17:52 3 Comments

Release 0.2.0 Beta Discussion


Hey everyone!

I’d like to start a discussion and get some general feedback on the progress of FreeTube. If this goes well then I might open one up for every release.

I’m curious as to what the general opinion is on FreeTube in its current state. How do you like the frequency of updates? What features should take priority? Is there anything I should be doing better? This is my first time managing a project this big, so it’s very possible that things could be better. I don’t want to assume that everyone is 100% happy with everything, because surely that isn’t the case.

Let me know what you think. I’m looking forward to hearing your thoughts. :)

Updated 22/03/2018 19:10 5 Comments

What should we do with multiple expconfigs found in db by `Experiment.__init__`?


Case: During initialization of an Experiment object, we query the database for documents (configurations) with a specific (exp_name, user_name) tuple. Recently in #55 we proposed making this tuple a key for the experiments collection in the database. This means that we should probably also do something about the checks in lines 125-130 in metaopt.core.worker.experiment.

if len(config) > 1:
    log.warning("Many (%s) experiments for (%s, %s) are available but "
                "only the most recent one can be accessed. "
                "Experiment forks will be supported soon.", len(config), name, user)
config = sorted(config, key=lambda x: x['metadata']['datetime'], reverse=True)  # most recent first (end of the call assumed)

Now it is certain that a single document will be returned from this query under normal conditions. Should we remove the check? Or should we convert it to an error and raise an exception? Are there other choices? What do you think?

Updated 21/03/2018 03:01 4 Comments

no arrow functions in clientside linting


Arrow functions are a fairly recent addition to JavaScript; we should try to avoid them in the client-side code, as it will not run on older browsers. Essentially any browser only a few versions back (e.g. a few years old) is affected, which would include a lot of existing mobile browsers. This would prevent the user from using the registration and from logging in.

Uncaught SyntaxError: Unexpected token =>
login.js:1 Uncaught SyntaxError: Unexpected token )
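
One way to enforce this in linting, assuming ESLint is the linter used for the client-side code (a sketch, not the project's actual config):

    // .eslintrc.js for the client-side bundle
    module.exports = {
      env: { browser: true },
      parserOptions: { ecmaVersion: 5 },  // ES2015+ syntax, including `=>`, becomes a parse error
      rules: {
        // alternative if a newer parser must be kept: forbid the construct explicitly
        'no-restricted-syntax': ['error', 'ArrowFunctionExpression']
      }
    };
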
Updated 21/03/2018 20:02 2 Comments

Feature Request: remove/hide the header


Hi motss,

I’m working on a web app using your date picker, and I’d like to use a simpler version, without the header (selected-fulldate) containing the selected year and selected date, so the app-datepicker is just the calendar view.

screenshot from 2018-03-20 11-41-10

I understand this would mean the user can’t access the year switcher, but that’s ok for my use case, because they can change years by incrementing months. For my use case, users rarely need to input a date more than a year away, and the header takes up a lot of space.

Would this be a feature you’d be willing to add to the datepicker? If so, I’d be happy to implement it.

Updated 21/03/2018 14:16 1 Comments

Code sharing?


From @vesper8 on July 8, 2017 18:1

I’m still considering diving deep into trying to use this instead of React Native since my website is fully built-out with vue components already.

But one thing that isn’t clear is how your custom renderer deals with HTML and CSS. I imagine there’s a limit to how much translation you can do between HTML/CSS and what NativeScript uses for visual elements. So for a project whose Vue components are heavily tied to HTML that’s been positioned and styled using Bootstrap grids, for example… most or all of that would be lost, right? And you’d end up being able to reuse most of the Vue logic but would have to replace the layout part with something more native to NativeScript?

Am I understanding that correctly?

Copied from original issue: nativescript-vue/nativescript-vue#29

Updated 20/03/2018 18:39 16 Comments

Ability to start and stop video recording from the tests.


We use Protractor for our e2e tests. Currently we are using a vanilla Selenium Grid in our pipelines to run our tests, but we are planning to switch to Zalenium (because it’s awesome). We do run our tests in parallel, and the way Protractor runs tests in parallel is that it shards the feature files, so each feature file is run in a different browser. The issue we are having is that when we record a video, Zalenium creates one video file per feature file, and this artifact might not be very useful when the developer views the video to debug the issue (let’s say the feature file has 4 scenarios; if the third test fails, the developer will need to watch the videos of the first 3 scenarios, which have passed).

Is there a way I could programmatically control the video recording via an API? If it’s available, then I could possibly use Cucumber BeforeScenario/AfterScenario hooks to start and stop the video around every scenario.

Updated 21/03/2018 23:50 3 Comments

🚀🚀 Text to Speech (TTS) in TensorLayer


A discussion for (real-time) text to speech (TTS) using TensorLayer and TensorFlow

Paper list


  • special layers for the models
  • a set of APIs to download/ preprocess the dataset,
  • a set of APIs to save the result or


  • For Model Acceleration, see
Updated 20/03/2018 17:55

🚀🚀 Real Time Object Detection in TensorLayer


A discussion for real-time object detection using TensorLayer and TensorFlow

  • For Model Acceleration, see

Paper list


Popular datasets for object detection.

Existing code


  • YOLO2 (ignore YOLO1) for VOC/ COCO
  • SSD for VOC/ COCO
  • API to load MSCOCO dataset
  • NMS and Soft-NMS
  • API for evaluation - mAP


Updated 22/03/2018 11:05 2 Comments

Authority.Authentication.after_validate/2 callback should accept the credential.


Given that the most common side effect I’ve implemented on authentication is updating/deleting the token used, I propose that we change Authority.Authentication.after_validate(user, purpose) to Authority.Authentication.after_validate(credential, user, purpose)

  defp do_authenticate(module, identifier, credential, purpose) do
    with {:ok, identifier} <- module.before_identify(identifier),
         {:ok, user} <- module.identify(identifier) do
      credential = combine_credential(identifier, credential)

      with :ok <- module.before_validate(user, purpose),
           :ok <- module.validate(credential, user, purpose),
           :ok <- module.after_validate(credential, user, purpose) do
        {:ok, user}
      else
        error ->
          module.failed(user, error)
      end
    end
  end

This allows one to use after_validate/3 for side effects as intended. I’ve had to do side-effect things (such as updating a token’s last_used_at) in validate/3, because after_validate/2 doesn’t know which credential was used to validate.

Updated 20/03/2018 17:23 1 Comments

Add Form Schema Endpoint


@pld @ukanga

OSM Fields

It seems that there is no way for us to get the expected OSM fields from just the XForm spec. The XForm spec will point to an OSM field, and the OSM sheet on the XForm spec can give us information about some expected OSM fields, but the actual fields that come with the data are only known once the OSM attachments are inspected.

After processing OSM attachments, the OSM data is stored in a model called OsmData, which has a JSON field called tags where the OSM data is stored in key:value pairs. These key-value pairs are what we ideally want to include in our Druid dimensions spec, and we don’t have a good way to get them at the moment.

A new form schema API endpoint?

I propose that we add an API endpoint that takes a form pk and returns a schema of the form’s fields. These fields will be generated using the XLS form, the OSM data, and any other sources that would help us get a complete and as-accurate-as-possible schema of the form’s fields.
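
For illustration, a hypothetical response shape for such an endpoint; the path, field names, and "source" values are all made up:

    // e.g. GET /api/v1/forms/<pk>/schema  (endpoint path is an assumption)
    {
      "id": 123,
      "fields": [
        { "name": "building", "type": "string",  "source": "osm"   },
        { "name": "amenity",  "type": "string",  "source": "osm"   },
        { "name": "age",      "type": "integer", "source": "xform" }
      ]
    }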

Do you think this is a good idea?

Updated 20/03/2018 17:24 2 Comments

empty optionals


Current situation:
  • [0:1] – A is, in Java terms, Optional[A] (accessing it with no value is an error; it has either one value or is invalid)
  • [:1] – A is, in Java terms, Set[A] (it may contain any number of values, including none)

We should decide:
  1. Do we retain the behavior of [0:1] as optional?
  2. Do we need a construct that is ‘list with one or 0 values’ that can always be assigned to lists, but only to single-valued variables if it has a value? (If we have such a construct, does it work the same on += and =?)
  3. Do we need a construct that is ‘list with one or 0 values’ that can always be assigned (i.e. no direct warning on usage of an empty optional)?

see #608

Updated 20/03/2018 15:57

MXNet MKLDNN build dependency/flow discussion


Hi MXNet owners,

This is Jin from the Intel MXNet Shanghai team. We spent some time analyzing the current MXNet make/cmake flow, especially the parts related to MKLDNN and BLAS selection and full MKL/MKLML library link usage/dependency. We came up with a small set of summary slides that mainly talk about the items below:

(1) MKLDNN, full MKL, and MKLML library relationship
(2) BLAS selection
(3) Our preferred MXNet build logic regarding the full MKL/MKLML library
(4) Current build logic, issues, and potential improvements

Please take a look at the slides and we can discuss further comments. Thanks.

Updated 21/03/2018 10:24 4 Comments

Are snippets just notes with syntax highlighting


I am trying to understand how I can use snippets once I have added them. For example, I currently have snippets in Dash and my IDE that I associate with a certain string, and they complete for me. When I hear “snippet”, that’s the functionality I expect.

It’s not clear to me how to make that work in Boostnote, or if that functionality is even available. Right now I am hesitant to move my snippets into Boostnote, as it doesn’t seem to do anything different from a regular note.

I want to suggest this tool to the rest of my team, but I want to understand how this feature works first.

Thanks for a great tool. I am now using it daily for my own tracking but I want to expand that usage into other things.

Updated 21/03/2018 01:52 1 Comments

self-consistent decision criteria for what effects to include in DC2


I suggest that before we finalize the list of effects included in DC2 at the final DC2 checkpoint, we apply a coherent/consistent set of decision criteria. This comes out of a slack conversation (which was itself based on various conversations in this repo) specifically about effects for which there is not currently a correction in DM.

The criteria that @rmjarvis and I discussed were:

(a) the effects are small, so we can learn what they do to our science when uncorrected, but they won’t completely ruin our ability to test our pipelines in other ways for DC2.

(b) the effects are traceable in some sense (one can correlate against specific quantities to identify their impact - e.g., for DCR we can correlate against airmass).

(c) DM does plan to correct for them eventually, so we can later reprocess.

(d) Would having a large suite of images with the effect be useful for DM in developing the appropriate correction? If so, then it would be nice to let DM work with DC2 images rather than wait for the next time we do a huge simulation like this.

(e) I think a corollary of (d) that is perhaps obvious (but it’s better to be explicit) is this: the effect should be included in the sims in a way that is physically meaningful (based on real data and the physics of LSST detectors), to ensure that the sims would actually be useful for developing and testing corrections. If the simulation implementation doesn’t reflect key features of the effect in reality, that might be used to build the correction or that might determine how important the effect is for LSST, then it’s better to have it off than to have it in DC2. (Special-order sims might still be useful, but we shouldn’t use that much of our resources on it.) (Edited to add: this criterion was added later, after the initial responses in this thread.)

Previous discussions focused on considerations related to (a)-(c). For example, the argument is that tree rings and DCR meet conditions (a)-(c), and proper motions are being excluded because of (a) (they were the dominant astrometric residual in DC1).

Updated 22/03/2018 20:27 19 Comments
