
Automate releases


Following up on the documented release process, we should fully automate package releases, in concert with Versioned Documentation.

In line with the above, we should do the following:

  1. Merge bugfixes / documentation improvements / etc. directly to master. This should:
    1. Run CI, obviously.
    2. If CI passes, publish a new PATCH version of changed packages via lerna on the CI server.
    3. Update the x.y documentation in place.
    4. Add an entry to the changelog.
  2. Merge new features to the release/x.y branch. This should:
    1. Run CI, again.
    2. If CI passes, publish a new x.y-canary release of changed packages, via lerna.
    3. Update the x.y preview documentation in place (or create it if this is the first commit to the branch).
    4. Add an entry to the changelog.
  3. Run minor releases by merging release/x.y to master. This should:
    1. I guess run CI, just to be sure ;)
    2. Publish x.y versions for real, via lerna.
    3. Change the x.y documentation to be the latest, update all <x.y documentation to point to it and list it in their menus, and mark the x.y-1 documentation as “out-of-date”.
    4. Update the changelog with a release date, etc.

I think we can do all of that fairly easily; most of the pieces are in place now. It’ll be fun!
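To make the branch-to-publish mapping above concrete, here is a minimal sketch of a helper a CI job could call after a green build. This is a hypothetical script, not part of the repository, and the lerna flag names are illustrative rather than prescriptive:

```python
def publish_command(branch: str) -> str:
    """Map a merged branch to the lerna publish invocation described above."""
    if branch == "master":
        # bugfix/docs merges to master publish a PATCH bump of changed packages
        return "lerna publish --cd-version patch --yes"
    if branch.startswith("release/"):
        # feature merges to release/x.y publish an x.y-canary prerelease
        return "lerna publish --canary --yes"
    raise ValueError("no automated publish for branch %r" % branch)
```

A CI step would run the returned command only after tests pass, keeping the human steps (changelog entry, documentation update) as separate jobs.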

Updated 27/06/2017 01:53

Pattern matching on functions?


Despite the surprising title, we’re not sure this is actually a bug.

The following code works just fine:

let m (f : int -> int) : Tot int = match f with | x -> 0

This is because F* can prove (via Z3!) that the match is total, since there is a single catch-all pattern. This also actually *extracts* correctly, since the match is optimized away.

This is a consequence of matching being very semantic in F*, but struck me and @nikswamy as very odd anyway.

I tried to get a contradiction from this, but couldn’t, as really the only possible patterns one can provide are wildcards and variables (or none). Here are some more examples, the first one being something that might be desirable.

let m2 (a:Type) (_ : a == int) (x : a) =
    match x with
    | 0 -> 0
    | _ -> 1

let f (x:int) = x
let m3 =
    match f with

The last one (I believe…) has a non-sat WP (so no proof of false), although I wonder what the inferred type for it is…

Also, inlining the definition of f in m3 makes the compiler complain with “Unexpected error; please file a bug report, ideally with a minimized version of the source program that triggered the error. Ill-typed term: cannot pattern match an abstraction”, meaning there are some don’t-pattern-match-a-lambda checks in place.

So, putting this out here for comments. Is this just fine? Should we be aggressive in forbidding matches on function types? What to do in cases like m2, then?

Updated 26/06/2017 23:40

Framework / compilation language


Currently we are using native ES6 for all the code. Are we going to consider using something like TypeScript (which we get for free with electron-forge) to write our main/renderer code?

Also for discussion: the renderer is currently using flexboxgrid and some standard HTML / CSS. How do people feel about a UI framework like Angular / React versus sticking with the super basic HTML layouts?

Just thinking that as the main process scaffolding comes in, we may want to think about these kinds of things early on in the process.

Updated 26/06/2017 23:24

Feature Request Process with new NodeBB Forum


For ‘accepted’ feature request tracking we seem to have consolidated on a list that includes links to background information per idea:

We are pretty close to getting to a finalized process, but I don’t think we have it quite clear nor documented and publicized. I’d like to revisit and perhaps finalize this process in view of the new consolidated NodeBB forum.

Some questions:
  • Should all feature requests that are user-facing (game players or map makers) go to the forums first?
  • Should we keep one forum thread for all feature requests, as a running conversation, or split feature requests each into their own thread? If we do one request per thread, a forum section for feature requests would make sense.
  • Are we happy with the wiki list for accepted feature requests? Is it clear who would move content to that list, and is that going to work out in practice? I think it is useful to have the list of ideas that have been vetted and are ready to go, but there may be multiple ways to maintain it, some perhaps more practical.

Updated 27/06/2017 00:41 1 Comments

Remote execution of workflows / auxiliary steps.


It is not even conceptually clear what sos should do for the execution of workflows remotely. The problem is that if we call the workflow with sos_run, it would trigger other steps, which might or might not use remote targets. Also, it is unclear how to trigger an auxiliary step from a remote step…

The bottom line is that our current remote execution model is limited to single tasks, not workflows.

Updated 26/06/2017 19:59

Checkstyle for Class JavaDocs?


Proposal: require javadocs on class definitions and interfaces

I would like to use this issue to gather any feedback, commentary or objections to the proposal.

Some benefits I would highlight to having class-level javadocs:
  • A single class javadoc comment does good things for design; it forces an author to state what their class represents and what it is for.
  • Helps avoid code bloat, in the sense that it becomes more obvious when new functionality is being added to classes that are not a good fit.
  • The reader of the code benefits quite a bit: no guessing or deriving the purpose of a class; you are told directly in a comment.

Looking around, I do see a good number of class javadocs; this looks to mostly be existing practice already.

This is following up on a proposal to remove javadoc requirement for public constructors (

For reference, the checkstyle configuration to add this would be something like:

<module name="AtclauseOrder">
  <property name="tagOrder" value="@param, @return, @throws, @deprecated"/>
  <property name="target" value="CLASS_DEF, INTERFACE_DEF, ENUM_DEF, METHOD_DEF, CTOR_DEF, VARIABLE_DEF"/>
</module>
+ <module name="JavadocType" />
<module name="JavadocMethod">

Updated 27/06/2017 00:35 3 Comments

Where should BashOnWindows-related BSODs really be sent?



I recently encountered a BSOD affecting lxcore.sys, and as per I sent off the minidump to However, a few days later I received an email replying that since this does not constitute a vulnerability, it would not be filed internally.

Is there a better place to send WSL BSOD minidumps, or are they ultimately not too useful in tracking down issues with WSL?

Updated 27/06/2017 00:54 3 Comments

Access to Serial Port


I need to get access to COM5, see image below:


According to the document, , there should be a device ttyS5, so how do I go about getting it? It looks like a mknod call needs to be made. How do I go about making the call, or is it done through the registry only?

Updated 27/06/2017 00:41 3 Comments

Tagging a new version


We have several new features in LG since 0.9.0:
  • all sorts of parallel stuff
  • kcores / decomposition
  • doc mods
  • (backwards-compatible) change to DefaultDistance()
  • more

I’d like to tag a new version. The options are v0.9.1 and v0.10.0. I’m in favor of the former, since we don’t have breaking changes, but would like to hear opinions and votes. I’d like to do this by end of week. Thanks.

Updated 26/06/2017 18:47 2 Comments

Consolidate Encode, Transcode, and Decode in a single module?


That might be less confusing. However, it is not clear how to parametrize the input/output vocabs and sizes, and it is likely to increase the code complexity internally.

For both input and output there needs to be a vocab and a size argument. It is not possible to use a single argument because an integer assigned as a vocab will be interpreted as the default vocab of that dimensionality. It must also be ensured that only one of vocab and size is set. Furthermore, the size_out/size_in arguments might not be set, but on the internal nengo.Node they will be set which might be confusing.

Depending on the vocab and size arguments, the valid function signatures and return types will change which affects the parameter validation.
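The mutual-exclusion constraint described above can be sketched in Python. The function name and the representation of vocabs here are assumptions for illustration, not the actual nengo_spa API; an integer stands in for the default vocab of that dimensionality, as described:

```python
def resolve_vocab(vocab=None, size=None, default_vocabs=None):
    """Enforce that exactly one of `vocab` and `size` is given.

    An integer `size` is interpreted as the default vocab of that
    dimensionality, looked up (or lazily created) in `default_vocabs`.
    """
    if (vocab is None) == (size is None):
        raise ValueError("exactly one of 'vocab' and 'size' must be set")
    if size is not None:
        default_vocabs = default_vocabs if default_vocabs is not None else {}
        # placeholder string stands in for a real Vocabulary object
        return default_vocabs.setdefault(size, "Vocab(%d)" % size)
    return vocab
```

The same check would run once for the input side and once for the output side of a consolidated Encode/Transcode/Decode module.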

Updated 26/06/2017 17:01

Build actions automatically


This would solve #45, and one rarely wants to instantiate spa.Actions without building the actions. The build function should be kept, though, and there should be a constructor argument to disable the automatic build to restore the current behavior.

The automatic build should store the return values of build as attributes on the action object. build should continue to return these objects because the function could be called multiple times and should not overwrite the object’s attributes each time.

Potential arguments against this change:
  • It gives magic side effects during instantiation. But this is similar to standard Nengo objects, and build has those magic side effects too.
  • It makes spa.Actions behave like a standard Nengo object without being one (e.g. it doesn’t get added to nengo.Network.all_objects). But it creates Nengo objects that behave the proper way.
  • There is no separation between the pure description of the rules and the actual implementation (but with the constructor argument that can still be achieved when relevant).

If anyone has any other arguments for or against this change, please comment.
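The proposed behavior can be sketched as follows. This is a hypothetical outline (class internals and attribute names are assumptions, not the real nengo_spa implementation): build runs automatically on instantiation unless disabled, its first return values are stored as attributes, and later calls to build return fresh objects without overwriting those attributes:

```python
class Actions:
    def __init__(self, rules, build=True):
        self.rules = list(rules)
        self.bg = self.thalamus = None
        if build:
            # store the return values of the first build as attributes
            self.bg, self.thalamus = self.build()

    def build(self):
        # stand-ins for the objects a real build would create; each call
        # returns new objects and does NOT touch self.bg / self.thalamus
        bg = ("bg", tuple(self.rules))
        thalamus = ("thalamus", tuple(self.rules))
        return bg, thalamus
```

Passing build=False restores the current two-step behavior.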

Updated 26/06/2017 16:36

Announcement of v2 release and roadmap of 2017


We are happy to announce v2 and encourage all of you to migrate from express-saml2 to samlify; v1 is now deprecated.


  • @sebakerckhof (Another collaborator of samlify)
  • @killalau (Lint setting and ts definition)
  • @vbakke @ugolas @haoleman (Thanks for those feedbacks and reports)

Shortlist of the key enhancements in the v2 release:

  • Support Yarn package manager (development)
  • Support and run tests in Node v6, v7 and v8
  • Rewrite and refactor the module using TypeScript v2.0
  • Revamp unit testing (ava)
  • Test coverage (nyc) and coverall report
  • Test suite including the actual flow of idp/sp initiated sso and slo
  • Update xml-crypto to 0.10.1
  • Remove the example folder from the repository
  • Default signature algorithm to rsa-sha256 instead of rsa-sha1
  • Fix missing base64 encoding in unencrypted responses (9a24665)
  • Fix unstable replacement of encrypted assertions (61c5bb8)
  • Security fix for ejs, one of the dependencies of xml-encryption (f3a968)
  • #13 Code revamping - modify api for entity constructor
  • #35 Feature - order in xml parser
  • #42 Code revamping - stringify file
  • #43 Code revamping - callback style
  • #45 Code revamping - general
  • #50 Expose request ID and response ID for SPA (another API break)
  • #54 Document - time verification
  • #55 Enhancement - recipient should use acl
  • #56 Enhancement - attribute customization
  • #63 Options in signature construction
  • #64 Add trigger for signing entire message
  • #66 Fix typo
  • #67 Fix the NameIDFormat replacement in redirect binding
  • #68 Handling extra query parameters in binding location
  • #69 Generic replacement of tags
  • #70 Migrate the documentation from Gitbook to docsify
  • #84 Increase test coverage to 90%

Roadmap of this library and development plan in 2017

  • Weekly/Bi-weekly patch

By the end of July

  • [ ] Separate example repository for Onelogin
  • [ ] Separate example repository for Dockerized Gitlab
  • [ ] Separate example repository for Okta
  • [ ] #82 Proposal drafted for v3 (deeper association and flexibility)

By the end of December

  • [ ] Passify-CE (an open-source identity provider driven by samlify) [beta]

Updated 26/06/2017 16:34

Action rule syntax


It was noted during the last lab meeting that the current action rule syntax might be misleading with its use of the equals sign. Currently, action rules use this syntax: utility --> sink = source. Please use this issue to suggest alternatives to the --> and = symbols.

Updated 26/06/2017 16:11

Geartrack v2 Architecture


Need opinions about the architecture of the next version of Geartrack, which will have authentication and notifications.


Concerns: - Should we add auth to the tracker API to prevent other apps from depending on us? (Bandwidth can be expensive.)

Updated 26/06/2017 15:52

[discussion] reducer concept



Since there are too many issues (#43, #56, #83, etc) that request manipulation of scrolling delta, I’m going to add a reducer concept in next major release, which is inspired by redux’s reducers:

  1. Whenever a scrollbar’s momentum is about to change, the reducer functions will be invoked with two arguments:
    1. delta: { x: number, y: number }: scrolling amount,
    2. action: { category: string, event: Event }: the action that caused scrolling.
  2. Manipulate delta yourself, then return a new delta object like { x: 100, y: 0 }.
  3. Walk through all reducers to get the final delta, then apply it to the scrollbar instance.
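The walk-through above amounts to folding the delta through the reducer chain. Here is a language-neutral sketch in Python (the library itself is JavaScript; function names here are illustrative), where returning None from a reducer leaves the delta unchanged:

```python
def apply_reducers(reducers, delta, action):
    """Fold `delta` through each reducer in turn; None keeps the current delta."""
    for reduce in reducers:
        result = reduce(dict(delta), action)  # pass a copy to each reducer
        if result is not None:
            delta = result
    return delta

def invert(delta, action):
    # swap axes for wheel events only; implicitly returns None otherwise
    if action["category"] == "wheel":
        return {"x": delta["y"], "y": delta["x"]}

def double(delta, action):
    return {"x": delta["x"] * 2, "y": delta["y"] * 2}
```

For example, a wheel delta of {x: 0, y: 10} run through [invert, double] becomes {x: 20, y: 0}.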


type Delta = {
  x: number, // horizontal scrolling amount
  y: number, // vertical scrolling amount
};

type Action = {
  category: "wheel" | "keyboard" | "touch" | "drag" | "select", // the category of the current event
  event: Event, // original event object
};

interface Reducer {
  (delta: Delta, action: Action): Delta | null;
}

type ReducerMap = {
  [name: string]: Reducer,
};

class Scrollbar {
  // ...
  static addGlobalReducers(reducers: ReducerMap): void;
  static getGlobalReducers(): ReducerMap;

  public addReducers(reducers: ReducerMap): void;
  public getReducers(): ReducerMap;
}

Example Usage

Invert delta for full page horizontal scrollers

Examples: - -

function invertDelta(delta, action) {
  if (action.category === 'wheel') {
    return { x: delta.y, y: delta.x };
  }
  return delta;
}

Scrollbar.addGlobalReducers({ invertDelta });

Scale scrolling speed

Currently, we are using option.speed to scale scrolling speed, but we could also implement this with a reducer:

function scaleSpeed(delta, action) {
  return {
    x: delta.x * 2,
    y: delta.y * 2,
  };
}

Scrollbar.addGlobalReducers({ scaleSpeed });
Updated 26/06/2017 15:29

Discuss a way to distinguish compiled output files in a project from regular elixir files


There should be a way to distinguish between files that are an output of E[lm/x]chemy and regular Elixir files, the main reason being the ability to exclude generated files from source control.

Suggestion 1 - A separate directory:

Pros:
  • Simple configuration
  • Clear distinction
  • One command to remove all generated files

Cons:
  • Duplicated tree of directories in case of multi-nested folders
  • Hard to reason about (had to look into three exact same directory trees when looking for a bug)

Suggestion 2 - my_file.exchemy.ex

Pros:
  • Can still be excluded using a simple wildcard
  • Fewer directories == nicer project structure

Cons:
  • Can cause problems in dumber text editors (?)
  • Removing all generated files requires a find -delete command or a custom command instead of a simple rm -rf dir
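To make the two suggestions concrete, here is a hypothetical helper (names and directory layout are assumptions, not the tool's actual behavior) mapping a source module path to its generated-output path under each scheme:

```python
import os

def output_path(src, style, out_dir="gen"):
    """Map a source file path to its generated Elixir file path."""
    base, _ = os.path.splitext(src)          # drop the source extension
    if style == "separate_dir":              # Suggestion 1: mirrored tree
        return os.path.join(out_dir, base + ".ex")
    if style == "infix":                     # Suggestion 2: marker extension
        return base + ".exchemy.ex"
    raise ValueError("unknown style: %r" % style)
```

Under Suggestion 2, a single ignore pattern like *.exchemy.ex covers everything; under Suggestion 1, ignoring the whole output directory suffices.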

Updated 26/06/2017 15:55 1 Comments

Cluster TODOs

  • Do we need a grain generator for non protobuf scenarios?
  • Can we get the protobuf grain generator to be a build step in VS.NET?
  • Passivation of grains: should grains always stop on idle after some time?
  • Retries on grain calls: if a grain call fails, should we auto-retry or leave this to the user?
  • Create a hashing formula that is consistent with Go, we can change either side or both, but they need to result in the same values
  • Tests, tests and more tests
  • Persistence, is there any sane way to connect Proto.Persistence with grains or should grains have their own persistence?
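
On the cross-language hashing TODO above: one well-known candidate is 32-bit FNV-1a, which Go ships as hash/fnv and which is trivial to reimplement byte-for-byte identically on the .NET side. This is an illustration of the consistency requirement, not a hash the project has settled on:

```python
FNV32_OFFSET = 0x811C9DC5  # standard FNV-1a 32-bit offset basis
FNV32_PRIME = 0x01000193   # standard FNV-1a 32-bit prime

def fnv1a_32(data: bytes) -> int:
    """32-bit FNV-1a; matches Go's fnv.New32a() for the same input bytes."""
    h = FNV32_OFFSET
    for b in data:
        h ^= b
        h = (h * FNV32_PRIME) & 0xFFFFFFFF  # emulate 32-bit overflow
    return h
```

Because the algorithm is fully specified by two constants and integer overflow semantics, both sides can verify agreement with a handful of shared test vectors.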
Updated 26/06/2017 15:44

Rename modules to reflect JVM association?


Should we rename the current modules to reflect their association with Java / the JVM?

Background:
  • We had a discussion about the issue of supporting other programming languages in the future.
  • It came up again in PR #52


  • livingdoc-jvm-engine
  • livingdoc-jvm-fixture-api
  • livingdoc-jvm-fixture-converter

The only module which does not need a rename is livingdoc-junit-engine since JUnit is explicit enough.

Updated 26/06/2017 12:46

Planned Updates for Chocolatey 3.0


These are the planned modifications for the first production release of Chocolatey.

This version will start with the tag 3.0.

Observation: feel free to comment if you have any suggestions.


Content Management System

Planned Visual Modifications
  • Remove the Beta Overlay on the Home Page
  • Update the HabboWEB version (CSS, JavaScript and Images), if any relevant modifications were made

Language System Improvements
  • Improve Language Templates, adding the ability to also create localized images for different countries.
  • Create more language variants for other languages, like Spanish, Dutch and German.

Documentation Improvements
  • Improve the documentation of Models, Classes, Methods, and Language Templates

Code Compression
  • Compress all CSS and JavaScript
  • Enable Lumen’s cache system for assets

Chocolatey Installer
  • Create a unique installer that will configure your .env file and config/chocolatey.php file.
  • The installer will check that all dependencies are present and that the Composer and vendor folders are installed
  • The installer may also check whether your configured SWF URIs are valid (one of the biggest issues people have using Chocolatey)

Management and Metrics APIs
  • Create management APIs for managing Rooms, Users and other common tables related to Chocolatey, authenticated by a token system with a token table.
  • Create metrics and reports APIs for Chocolatey’s own performance and statistics.
  • Requests to check version information and other data about Chocolatey, useful for an update system

Fix Common Bugs and Problems
  • Fix some reported bugs from the Issues, like “country code redirector”, “some date mistakes” and other small problems.


Code Main Features
  • Code feature integrations between Chocolatey and the Management API.
  • Code Edit Users, Show Users, Remove User, Ban User
  • Code Edit Room, Show Rooms, Remove Room
  • Code Edit, Create and Remove Rank permissions and Ranks
  • Code the Articles System

Those features will be available on tag 1.0 of Espreso.

The 1.5 or 2.0 version will have:
  • Edit Catalogue
  • Edit Chocolatey Shop
  • Edit Chocolatey Famous/Recommended Rooms
  • Edit Global Messages and Campaigns
  • Show Metrics and Statistics
  • Many other features, like System Updates and more.

If you liked this software, go to my GitHub profile and follow me by clicking the “Follow” button.

Updated 26/06/2017 18:09 1 Comments

Server code duplication


From the latest series of commits, 76fec69, d047877 and d46b97a seem to duplicate code written in the opus web server. If there are problems with the original code, we would like to fix them so that things are usable on the UI side; if quick changes are required, they can also be made directly by adjusting that code. However, duplicating code makes inefficient use of time on both sides.

Compared to what’s currently in playground, the functionality on our side is a bit more mature, with options for record/replay (including GET arguments). However, the framework used is exactly the same (the code in playground should have been trivial to port, but perhaps there was a blocking issue we weren’t aware of?). It’s not a problem at all if we decide to go and modify the playground code from now on, but it’s a matter that should be discussed.

We were intending to continue adding queries and features (i.e for processing pcap files) to our end, but given the latest series of commits we’re unsure on how to proceed.

Moving forward, I see two options:

  • we port over changes made in playground which fork our code and remove any other blocking issues.
  • work is continuing on the playground directory and the opus team redirects resources towards helping the demo in other ways, including adding all future contributions to /playground.
Updated 26/06/2017 12:50 1 Comments

Never ever abort sync runs


Some server side errors cause the client to abort the current sync run. If the ownCloud admin or user is not able to fix the error, EVERY sync run will abort at the same position, and following files that don’t have an error will NEVER sync because of this.

@SamuAlfageme We discussed this a while ago. Please link existing issues here or open new ones that could cause this behaviour. Please also elaborate on the server improvements.

Updated 26/06/2017 12:32 2 Comments

Proposal: reminders



When it comes to messages that can be scheduled to be sent to an actor somewhere in the future, we could split them into two categories:

  1. High-frequency timers: messages scheduled at sub-second intervals that need to be delivered fast, in a performant way. Their downside is that they are usually not reliable over longer periods of time.
  2. Long-living delays (reminders): messages scheduled somewhere in the future (we speak about minutes, hours or even days). Because of that nature, scheduled events need to be persisted.

Currently we have pt. 1 solved as part of the ActorSystem’s scheduler. Pt. 2 can be solved using the Akka.Quartz plugin, but there’s a problem: Quartz is quite heavy, and ensuring persistence is usually associated with some cost (several extra tables need to be created). Another problem is the lack of cluster awareness.


The API could be simplistic: only schedule delayed messages. If a user needs to schedule periodic tasks, they can always reschedule them after receiving the previous one. No need to complicate things. Example:

var schedule = new Reminder.Schedule(
    key: "key", 
    recipient: recipientActorRef.Path, 
    message: new MyMessage(), 
    triggerDate: DateTime.UtcNow + TimeSpan.FromHours(1));
var reminderRef = Reminder.Get(system).ReminderRef;
var reply = await reminderRef.Ask(schedule); // wait for ACK
  • Here key is a unique identifier for the target reminder call. It could be used e.g. to track its status or cancel it.
  • Recipient is an ActorPath: the same thing as with at-least-once delivery. Since we may invoke it weeks in the future, we don’t want to rely on an actor ref here.

An additional thing to think about is the delivery guarantee: are we going to simply push the message to the provided actor path, or expect some kind of response before completing the reminder request? To achieve confirmed reminders we can always combine them with at-least-once delivery actors.
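The schedule/ack flow described above can be sketched language-neutrally (Python here; the actual proposal targets Akka.NET, and all names below are assumptions): reminders are keyed, persisted in a store, and handed out once their trigger date passes.

```python
from datetime import datetime, timedelta

class ReminderStore:
    """Minimal in-memory model of the proposed reminder semantics."""

    def __init__(self):
        self._pending = {}  # key -> (recipient_path, message, trigger_date)

    def schedule(self, key, recipient, message, trigger_date):
        # store (in the real proposal: persist) the reminder, then ACK
        self._pending[key] = (recipient, message, trigger_date)
        return "ACK"

    def cancel(self, key):
        return self._pending.pop(key, None) is not None

    def due(self, now):
        """Pop and return all reminders whose trigger date has passed."""
        ready = [k for k, (_, _, t) in self._pending.items() if t <= now]
        return [(k, self._pending.pop(k)) for k in ready]
```

A real implementation would replace the dict with an eventsourced journal or a CRDT replica, which is exactly the trade-off the two solutions below explore.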

Solution 1: eventsourced reminders

We could easily attach an extra Akka.NET extension as part of the Akka.Persistence library that would expose a dedicated actor, which could utilize the eventsourcing capabilities of actors to schedule long-living jobs (as with cluster sharding, they could have configurable journal ids and snapshot store ids). The biggest problem I see there is that Akka.Persistence actors don’t work in multi-writer scenarios. This forces reminders to use coordinators in the case of cluster-wide reminders.

Solution 2: CRDT-based reminders

Since we already have distributed, replicated data types in Akka, we could make use of them. This way we could have cluster-wide available reminders (so even if one machine dies, the reminder can still be picked up by another one). This however brings two problems:

  • Actor addressing: the recipient of the message must be addressable from whatever node picks up the reminder (I’ve already proposed a universal actor addressing semantic on the Akka issue tracker). At this point in time it should be an ever-living actor (e.g. a shard region in the case of cluster sharding).
  • Actors living on different nodes would need to coordinate which one is going to pick up a reminder, so the associated message won’t be picked up and sent multiple times. The simplest solution would be to allow only one actor (e.g. the role leader) to work as the sender, but I’m open to suggestions.

Less immediate problem is that our binary format necessary for ddata persistence is still not set in stone.

Updated 26/06/2017 10:19

Boost source download for every component and duplicate build of components


Multiple boost source download

The boost package is downloading and extracting the source for every component, although the source can be shared for the build of all components.

This leads to significant build time increase, especially on Windows which is very slow with handling a lot of small files of the boost source. But even on Linux there is a considerable amount of time wasted.

Figures for Windows:

00:36:54.814 Creating directories for ‘Boost-chrono’
00:36:54.970 Performing download step (download, verify and extract) for ‘Boost-chrono’
00:44:55.100 – Boost-chrono download command succeeded.

In summary, 8 minutes are lost per component. Assuming you build 10 components, you lose 80 minutes.

Figures for Linux:

00:06:54.265 [ 12%] Creating directories for ‘Boost-chrono’
00:06:54.277 [ 25%] Performing download step (download, verify and extract) for ‘Boost-chrono’
00:06:54.280 – verifying file…
00:06:54.280 file=‘boost_1_61_0.tar.bz2’
00:06:54.769 – File already exists and hash match (skip download):
00:06:54.769 file=‘boost_1_61_0.tar.bz2’
00:06:54.769 SHA1=‘f84b1a1ce764108ec3c2b7bd7704cf8dfd3c9d01’
00:06:54.777 – extracting…
00:06:54.777 src=‘boost_1_61_0.tar.bz2’
00:06:54.777 dst=‘/hunter/_Base/63718d6/c4b89cb/db0d7e2/Build/Boost/__chrono/Source’
00:06:54.777 – extracting… [tar xfz]
00:07:18.127 – extracting… [analysis]
00:07:18.127 – extracting… [rename]
00:07:18.127 – extracting… [clean up]
00:07:18.127 – extracting… done

In summary, 24 seconds are lost per component. Assuming you build 10 components, you lose 4 minutes.

Multiple boost component build

There is another side effect of using this technique. For example, when you use the boost::system and boost::chrono components, the following happens:
  • The system component builds the libraries for system in /Build/Boost/system/Source/stage/lib.
  • The chrono component builds the libraries for chrono and system in /Build/Boost/chrono/Source/stage/lib, as chrono has an internal dependency on system!

This means that components are built multiple times and files are overwritten randomly in the install directory.


Is there any reason why every component downloads the source again instead of using a shared source directory?

Fixing that has several advantages:
  • The source is downloaded only once (performance improvement, and good practice anyway!)
  • Every component is built only once (performance improvement; building components multiple times and randomly overwriting files in the install directory is a bug!)

Updated 26/06/2017 16:18 3 Comments

URL "not secure" warning is misleading for self contained apps


Operating System: Ubuntu. Beaker Version: 0.7.2

The insecure protocol warning is confusing and way too aggressive. For users it indicates that the site being shown is not secure, but the info is only about the protocol used, e.g. http://

TiddlyWiki is a self-contained app that doesn’t send anything back to any server. So the warning is completely misleading here.


Chrome and FireFox show the info like this:


Updated 26/06/2017 17:10 2 Comments

[Card] should not use > as selectors


Sometimes we may need to use .card after other elements instead of immediately after .cards.

For instance, in this case I am using a carousel (owl-carousel2), but the .card would not behave the same way, and I had to manually delete all the > selectors of > .card in the CSS and also make some amendments in the JS for the dimmer effect.

Don’t you think it would be better that the Semantic UI looks upon it for the next version?

Here is a snapshot: image

Updated 26/06/2017 15:47

Type parameter overhaul


I’m thinking about making a big change to how we deal with type parameters in C# and Java for Haxe 4.0. We are currently riddled with issues like #3398, #3399 and #3703, among others. They all come from a common issue: Java and C# perform some type changes when a type is used as a type parameter.

Some of these changes are not optional - for example, in Java we need to change Int to be java.lang.Integer, which is the boxed counterpart of the basic int type. This is needed because e.g. Array<int> is not valid on Java. This change however has some inherent problems. In the Haxe’s typer’s eyes, we are accessing an Int, but as far as Java is concerned, we are accessing an Integer. For example, if we try to access Array’s underlying native array, it’s typed as java.NativeArray<Int> - which is int[], while Java’s vm sees it as Integer[].

Similarly, we can run on multiple issues with C# when dealing with Haxe created types vs native types and type parameters, since they are changed differently according to if the type was declared as nativeGen or hxGen.

I don’t think there is a definite solution for Java. Since it mostly happens with Java’s NativeArray, we could perhaps do what Atry has suggested here . I think this is a nice solution, although it hurts Vector<> performance - making it box the basic types while it wasn’t boxing in the past. So it then becomes a question of what we want to prioritize on Vector.

For C#, I think the solution will be to not change the type parameters at all. We currently change the type parameters of any reference type on a Haxe-generated class to be object. We do that both to avoid triggering the cast-by-copy semantics of types like Array, but also because there was no real benefit other than added strictness (which makes C#-generated code more strict and less likely to run), as the performance was the same. We will solve the cast-by-copy issue in #4872, thus making it an error if someone casts an Array twice (e.g. to overcome variance limitations), which I think is a fair error state. We still need to understand the impact of eliminating cast-by-copy completely on classes other than Array, since I don’t think #4872’s solution can be generalized to any generic class. Also, doing this will probably make changing variance in Haxe pretty much impossible, and the C# target will suffer even more for any new feature that involves type parameters.

I’d love to hear your thoughts about this

Updated 26/06/2017 08:35 2 Comments

Improve security for SSS webtasks


What is the current behavior? Although it should not be possible for an end user to get hold of the actual URL of a SSS, I would at least expect that it needs some Authorization token to execute. This applies to inline SSS functions.

What is the expected behavior? SSS cannot execute without a proper Authorization header (PAT?).

Updated 27/06/2017 00:49 7 Comments

Preparing for 2.1.4-stable


Same deal as #8087, but for 2.1.4.

This issue is meant to gather initial feedback on pre-release binaries, so that we can spot regressions (either from new developments or from bad cherry-picks I might have done).

Note: as always I’m only interested in regressions (i.e. bugs that did not affect earlier 2.1.x versions), including to some extent bugs in new features if there are any. Packaging issues with the templates, etc., are of course also interesting :)

Updated 26/06/2017 23:00 9 Comments

Proposal: allow open types in nameof


Imported from Please see that issue for discussion.

Currently to use nameof with a generic type, a type argument needs to be used. Problems:

  • It’s very odd to require something to be specified within an operand when it has no impact on the result
  • It’s inconsistent with typeof
  • It means that changing a constraint on a type parameter can break uses of nameof entirely unnecessarily

I propose that the exact same syntax as typeof is used. So for example:

class GenericClass<T>
{
    public string X { get; }
}

class GenericClass<T1, T2>
{
    public string Y { get; }
}

These expressions are currently valid, and would stay valid:

nameof(GenericClass<int, byte>)
nameof(GenericClass<int, byte>.Y)

These expressions would become valid, and would have the same respective results:

nameof(GenericClass<,>)
nameof(GenericClass<,>.Y)

Using the terminology in, it may be that all we need to do in grammar terms is to expand named_entity_target to include unbound_type_name, but I suspect it’s more subtle than that.

Updated 26/06/2017 20:59 11 Comments



I noticed that the AI could be a traitor and found the concept incredibly stupid. There is already an entire game mode for the traitor AI (malfunction); allowing the AI to be a traitor adds nothing to the game and just makes the rounds in which it happens incredibly confusing and, quite honestly, unfun.

The AI should be either a malfunctioning AI or a loyal, normal AI (given that no one changes its laws, of course). There is literally no good reason whatsoever to allow the AI to be an antagonist during traitor rounds.

I love silicons and love the malfunction game mode, but there is simply no point in having a rogue AI exist outside of it; the traitor antagonist role should be restricted to non-silicons only.

Updated 26/06/2017 19:35 37 Comments

Router Context is Swallowed


I am here to resurrect #19 and #16, for which I apologize.

I believe the current state of things is that mobx (and mobx-react-router) don’t really work out of the box with react-router v4.

Cause: observer components use a customized shouldComponentUpdate that pays attention to the mobx context but not the context that react-router needs.

Here’s a repro:

  • Code:
  • CodePen:

Current workaround: Users of this lib should wrap withRouter around every observer in their app.

Solutions: Though this isn’t mobx-react-router’s fault (ReactTraining/react-router#4781, etc, etc) it does appear that this library owns the space at the intersection of mobx and react-router and is therefore in the best position to bundle a fix or workaround.

Short-term solution: This should be called out prominently in the readme. It’s relevant to ~100% of people who use this lib (I think?)

Longer-term solutions: A fix or workaround should be integrated into this lib if possible. I think it is possible! Anything is possible, right? Here are some potential approaches:

  1. Whatever react-router-redux does. Assuming it has this problem solved, which I haven’t verified.
  2. As part of syncHistoryWithStore, somehow turn the history attributes that react-router uses into observables. If that’s a thing.
  3. Add a function/decorator that takes the context react-router needs out of the routing store and puts it into the routing context of the child component. You’d still need to apply this decorator to anything that uses a Route or NavLink, but at least you don’t have to wrap every single observer in withRouter.
  4. Provide a router-friendly alternative to observer as part of this lib. This at least provides a solution out of the box and gets people like me to stop writing our own dubious one-offs.

Happy to submit a PR for the README or to look into options 3 or 4 if you agree with the above.

Thanks! Daniel

Updated 26/06/2017 12:53 2 Comments

Proposal: fullnameof operator


Migrated from: and

A well-known one, not much to add.

Currently I’m doing this, which is not amazing:

public static class Extensions
{
    public static string fullnameof<T>() => typeof(T).FullName;
}

using static Extensions;

Updated 26/06/2017 20:59 7 Comments

streamline contribution guidance



From my initial stumbling (1k thanks to @stkent for the hints 🙂 ) and this, I gathered that PR templates might be useful for this repo as well.

I suggest creating a .github folder (and would be willing to do it) for 1. the existing PR template (just moving it), and 2. details extracted from the existing guidance files, for example in the form of a task list.

IMHO, this could bring guidance info to the… actionable point and time of contact, so to speak. Particularly for new contributors, who may feel that the two existing files are rather a lot to read (me: guilty as charged 😅). Seasoned contributors could of course skip over or remove the items, but would also be reminded about the current UX for newbies.

What do you think?

PS: Albeit also touching a general topic, this topic seems more “actionable” here than “other” ;-) PLMK if I should move this issue there.

Updated 24/06/2017 20:13 1 Comments

What else should we log on the landing page?


Here are some interactions on the landing page that we do not currently log (this is meant to be exhaustive, not all of these are useful):

  1. Clicking “Sign in” (we log if they sign up, but not if they click on it and then close the box)
  2. Clicking on any of the tabs in the “How you can help” module
  3. Clicking on a neighborhood in the choropleth (we log if they click on “click here” to start auditing a neighborhood through it, but not if they make that first interaction)
  4. Clicking on “Watch Now” to start watching the “What is Project Sidewalk?” video
  5. Visiting twitter through the links in the “What people are saying module”
  6. Clicking on any of the images/links in the “Press” module
  7. Visiting Github, Twitter, or Email Us in the “Connect” section at the bottom
  8. Clicking on anything at the very bottom: our funding sources, Makeability Lab site, or UMD site

None of these would be useful for a paper really. But there are two categories of things here that might be useful to log…

  1. We often want to know what percentage of people saw our landing page, then just left. Right now we know if someone starts auditing right from the landing page, if they sign up, etc. Then the best we can do is to say “just about everyone who didn’t do one of those things probably left”. But maybe they decided to click on the link for our twitter, etc. So maybe we want to log clicks that route them elsewhere (5, 7, 6, 8; in order of importance)
  2. We also want to know how engaging the stuff on our landing page is. Questions like “does anyone even click on the choropleth at all?” or “does anyone even watch the video?” are useful in deciding what should be on the landing page in the future. (3, 4)

@jonfroehlich this is mostly directed at you. What do you want to see logged that is listed above? Both before relaunch, and just at some later date.

@adash12 now has some practice with getting things logged in the webpage_activity table, as he set up logging when users start auditing via the choropleth or user dashboard map, so it probably wouldn’t take him too long to do these things.

That said, relaunch is happening, much of this stuff is likely low ROI, and it would just balloon the size of our webpage_activity table with useless info :upside_down_face:

Updated 26/06/2017 15:25 8 Comments

Consider systematically adding links to


We can now leverage to provide nice links from EDAM concepts to e.g.

  • tools that support FASTA format on input

  • tools that generate a Sequence alignment

  • tools annotated with the topic Proteomics

  • etc.

Should we do it? Maybe not now, but in future? I wonder if it really brings value to EDAM, I think maybe it does.

Updated 24/06/2017 11:55

Custom Tabbar with Image and Title


This is not a bug, but rather a feature request. Is it possible with Tabman to get a view like the attached image? I want a scrollable tab bar with an image and a title on each item, with the selected item sized differently from unselected items, and with all the data in the tab bar coming from an API rather than being static. Please guide me on whether this is possible.


Updated 25/06/2017 20:02 1 Comments

Consider remodelling of "Format (by type of data)" concepts


There are two problems:

  • children of “Format (by type of data)” do not always exactly match concepts in the Data branch
  • children of these children are currently explicitly defined via SubClass relations

leading to:
  • inconsistency / confusion between format and data concepts
  • lack of sustainability (it’s an extra overhead to explicitly define SubClass relations)

A solution would be to replace the immediate ancestors of “Format (by type of data)” with a bunch of defined (equivalent) OWL classes, i.e. we define the OWL logic of what, say a “Sequence format” is and then infer the ancestors automatically.

I’d need to look into exactly how to do this, but I believe it’s possible using OWL syntax.
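
For illustration, a hedged sketch of such a defined class in OWL Manchester syntax (the property name 'is format of' is an assumption here; the actual EDAM relation may be named differently):

```
Class: 'Sequence format'
    EquivalentTo: 'Format' and ('is format of' some 'Sequence')
```

A reasoner would then infer that any format declared to be 'is format of' some Sequence is a subclass of 'Sequence format', removing the need to assert the SubClass relations by hand.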

cc @matuskalas FHI

Updated 24/06/2017 11:02

Iterate over both keys and values of a dictionary.


Hi ! 👋

I regularly face the situation where I want to iterate over the values of a dictionary.

var data = {}

for key in data:

With Python, there is a simple API for this:

for key, value in data.items():

With JavaScript, too:

for (const [key, value] of Object.entries(data)) { /* ... */ }

Do you think it would be useful to have these types of shortcuts?
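
For reference, a runnable sketch of the Python dict-view API the proposal mirrors (the sample data is hypothetical):

```python
# Hypothetical sample data, analogous to the Dictionary above.
data = {"dk": 1981, "mario": 1981, "kirby": 1992}

# Keys and values together, via dict.items():
pairs = [(key, value) for key, value in data.items()]

# Values only, via dict.values():
values = list(data.values())
```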

Have a great weekend 🎉

Updated 24/06/2017 11:30 4 Comments

Rename the project to angularj-ssr


The current project name spring-boot-angular-renderer is quite long, and people might think it’s only about Spring Boot. To address these problems I will rename the project to angularj-ssr.

After the rename I will also request a namespace in the Maven Central Repository to publish the project as a library via Maven.

Updated 24/06/2017 08:17

Feature idea: Add a mod flag for specifying the targeted FSO version


We have quite a lot of modding flags throughout the FSO code that fix some behavior but cannot be enabled by default, since they might break retail compatibility.

A possible solution I had in mind would be to introduce a mod table flag that lets a mod specify which FSO version it’s designed to run on. That value could then be used to decide whether a bug fix should be applied or not.

That would also allow us to implement a clear deprecation model, where obsolete flags are disabled if the mod targets a recent enough version of FSO.
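
A minimal sketch of the proposed decision rule, with all names hypothetical and versions assumed to compare as (major, minor, patch) tuples:

```python
def fix_enabled(mod_target_version, fix_introduced_in):
    """Apply a bug fix only if the mod declares it targets an FSO
    version at or above the one that introduced the fix."""
    return mod_target_version >= fix_introduced_in

# A mod targeting a newer FSO gets the fixed behavior...
applied = fix_enabled((3, 8, 0), (3, 7, 2))
# ...while a mod targeting an older FSO keeps retail-compatible behavior.
suppressed = fix_enabled((3, 6, 0), (3, 7, 2))
```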

What are your thoughts on this?

Updated 24/06/2017 23:47 3 Comments

Upgrade or replace Cassandra


Before completing 0.3 I want to firmly commit to a persistence solution that can handle our FTI and other persistence needs. Requirements:

  • A) 100% pure java
  • B) Embeddable
  • C) Distributed/Clustered running supported
  • D) Able to add/remove nodes to/from the cluster dynamically
  • E) efficient for read/write/update, delete not required to be efficient but should be possible
  • F) Eventual consistency - ok
  • G) Transactions a plus but not required.

Updated 24/06/2017 15:00 4 Comments

[Discussion]: synchronous request and response IO to be disallowed by default


Discussion for

Starting in ASP.NET Core 2.0.0 RTM, synchronous request and response IO will be disallowed by default.

For example, HttpContext.Request.Body.Read and HttpContext.Response.Body.Write will both throw InvalidOperationExceptions with a message communicating that either the equivalent async API should be called or synchronous IO should be explicitly allowed using IHttpBodyControlFeature.AllowSynchronousIO.

Synchronous IO hasn’t been disallowed by default yet. After 2.0.0-preview2, there will be changes to both Kestrel and HttpSysServer to disallow this. Both servers will have a property added to their respective Options classes to globally allow synchronous IO.

Note: This change only impacts the request and response Stream APIs.

Updated 24/06/2017 10:27 5 Comments

Latest version testing and regressions


Overview: Opening this issue to help get the latest version marked as a stable and then coordinate an upgrade. This is a difficult one because forced upgrades right now are bugged and would cause game freezes.

Problems identified:
  - [ ] User account is not ‘found’ after logging in. Mods are not recognized; ‘update account’ to update email or password does not work. (I noticed that I was not a mod when logging in with the latest version.)

Testing needed:
  - [ ] PBEM
  - [ ] PBF
  - [ ] A couple of AI vs AI games; make sure there are no errors over the course of the game.

Testing completed:
  - [ ] Lobby bot game: can connect and play through a combat round.

Updated 25/06/2017 12:36 5 Comments

Build locally with Hunter fork


I’m attempting to build using only local packages. I’ve cloned Hunter and modified the hunter.cmake for all the projects I’m using. I want to use a relative directory in the URL strings. I attempted to use CMAKE_SOURCE_DIR, HUNTER_ROOT, and my own variable.

However, they all break down when adding recursive dependencies. For example, myRepo depends on package1 (works fine) but when package1 depends on package2, these variables are no longer set correctly. Is there any variable I could use that is set consistently?

I see two options, if the above doesn’t work.

  1. Call hunter_add_package() on the lowest dependencies first.
  2. Use an environment variable.

Thoughts on getting around this issue?

Updated 24/06/2017 07:27 1 Comments

Folder reorganize: Add tests/ to scripts/ ?



We should reconsider how this repo is organized a little bit. I could see a case for merging scripts and tests into one folder for all the bash stuff (maybe).

One core confusing part, I think, is that there needs to be a more clearly separated common bash script section within the tests. But those scripts might apply to more than just tests, and could maybe apply to some of the non-test scripts too. If we are going to integrate shelldown testing of the docs, we already need some new common scripts that don’t really belong in the existing file.

Updated 26/06/2017 12:06

[Windows Client] Files created upon version conflict not synching to server



Expected behaviour

Upon the creation of a version conflict (2 users having edited file simultaneously and saving different versions), the file should sync back to the server so that all users can see the conflicted versions and manually resolve.

Actual behaviour

The Windows client recognizes that 2 users have opened a file (ex: test.txt) and both have saved different, conflicting versions of the file. A conflicting version is saved (ex: Test_conflict-20170623-100411.txt); however, the file does not sync back to the server.

Steps to reproduce

  1. Create a text file (ex: test.txt) with some text inside
  2. Have 2 users, using windows and the windows notepad, open the text file and make different edits. Both save their files.
  3. The conflict will be saved on one user's drive.
  4. Look for a sync of the conflicted version back to the server.

Server configuration

Operating system: (not known / hosted by

Web server:


PHP version:

ownCloud version:

Storage backend (external storage):

Client configuration

Client version: 2.3.1

Operating system: Windows 10 Pro

OS language: ?

Qt version used by client package (Linux only, see also Settings dialog): n/a

Client package (From ownCloud or distro) (Linux only): n/a

Installation path of client: h:\program files (x86)\Nextcloud


Please use Gist ( or a similar code paster for longer logs.

Template for output < 10 lines

  1. Client logfile: Output of owncloud --logwindow or owncloud --logfile log.txt (On Windows using cmd.exe, you might need to first cd into the ownCloud directory) (See also )

  2. Web server error log:

  3. Server logfile: ownCloud log (data/owncloud.log):

Updated 26/06/2017 10:23 3 Comments

The rising infrequency of quest completion


Seems like there are a lot of players these days who accept quests but not so many who complete them.

[screenshot from 2017-06-23 at 2:28 PM]

We used to be looking at an average of 2 quests completed per user for quite some time, but that has fallen. I wonder why, and I wonder what you guys think about it. I’m not spooked by it or worried or whatever, but I think it’s worth thinking and talking about.

I think it’s entirely possible that resolving may help quite a bit. If folks accept quests and then simply cannot figure out how to complete them… that would make sense of the numbers. Certainly there’s always going to be folks who accept quests then decide “meh. forget it.” That’s fine. But there seems to be a trend in that direction.

Anyway, here are a couple of explicit questions for discussion:
  - What things might cause players to accept quests and never complete them, or abandon them?
  - What improvements can we make to increase the frequency of quest completion?
  - What other analytics or research could we do that would give us a better understanding of the player base and UX?

Updated 23/06/2017 19:15 1 Comments

Feature/expect ct


Implement Expect-CT support from #19

About: Expect-CT is similar to HSTS, but the criterion for failing the TLS connection is a lack of Certificate Transparency.
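
Per the draft, the response header takes roughly this shape (values illustrative):

```
Expect-CT: max-age=86400, enforce, report-uri="https://example.com/ct-report"
```

Here max-age is in seconds, the optional enforce directive makes non-compliant connections fail rather than merely be reported, and report-uri is where violation reports are sent.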

Draft spec here:

Updated 23/06/2017 19:44 2 Comments

input: align-end demo is broken


The align-end demo for the input looks strange now. Following #5141, it seems like the placeholder is right-aligned but the input content is not. The placeholder animation is also strange; it seems to get clipped.

FWIW, I’m not sure the old behavior was exactly right either; previously the input content was right-aligned while the placeholder was not.

Updated 26/06/2017 17:38 2 Comments

Proposed budget for entertainment


We need to be careful with “entertainment” budgets. They are meant for clients, not ourselves, but if they are refreshments during meetings or opportunities to treat our students, then I think they are not only reasonable but tax deductible.

I would like to suggest an entertainment budget in the region of £50/week or £800/third, which is more than the £409 we clocked up in the last third, but this will allow a bit more budget for parties and lunch meetings and for fruit deliveries (which I miss, and which I only stopped because we had no money for them).

Updated 23/06/2017 15:59

discussion: API changes



  • [x] replace fadj / badj with out_neighbors / in_neighbors
  • [ ] for consistency, rename indegree and outdegree to in_degree and out_degree
  • [ ] remove add_edge!, rem_edge!, add_vertex!, rem_vertex!, and add_vertices! from API (put them in core)

This would simplify the API to the following functions:
  - nv
  - ne
  - vertices
  - edges
  - is_directed
  - has_vertex
  - has_edge
  - in_neighbors
  - out_neighbors
  - zero

All changes would move through deprecation for at least one minor version.

cc @jpfairbanks @juliohm

Updated 23/06/2017 16:57 2 Comments

Update zpreztorc


Make a .zpreztorc.local that is not versioned, in order to make Prezto easier to update.

This PR is a discuss proposal related to This must not be merged.

How can we manage the other files in runcoms?

Updated 23/06/2017 18:15 4 Comments

ENS proposal


I’d like to put forward a more concrete proposal for using the ENS for handles in OpenBazaar 2.0. I suggest that we use subdomains of openbazaar.eth to register and resolve OB2.0 names. For example,

  • ob://cpacia would resolve to cpacia.openbazaar.eth
  • ob://shoes.cpacia would resolve to shoes.cpacia.openbazaar.eth
  • ob://bnb.cpacia would resolve to bnb.cpacia.openbazaar.eth
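
The handle-to-name mapping above can be sketched as (the helper name is hypothetical):

```python
def ob_handle_to_ens(handle: str) -> str:
    """Map an OB2.0 handle to its ENS subdomain, per the examples above,
    e.g. ob://shoes.cpacia -> shoes.cpacia.openbazaar.eth."""
    if handle.startswith("ob://"):
        handle = handle[len("ob://"):]
    return handle + ".openbazaar.eth"
```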

Because of the programmability of ENS registrars and resolvers, we can design a contract that registers an OpenBazaar 2.0 handle in a single transaction given the handle and the OpenBazaarID as inputs.

Advantages I can think of versus the current setup for OB1.0:

  1. single transaction registration (vs two transactions for OB1.0)
  2. low-latency registrations (seconds/minutes for OB2.0 vs hours/day for OB1.0)
  3. OpenBazaar subdomains, just like the web (e.g. ob://bnb.cpacia)
  4. light-client friendliness, thanks to Ethereum’s Patricia-Merkle trie for state/receipts (Chris made a light-client resolver here, with an awesome demo)
  5. less squatting (there is more incentive to squat a generic naming system like BlockchainIDs instead of an application-specific namespace like openbazaar.eth)
  6. network effects with other Ethereum applications (as far as I know, the main application that uses BlockchainIDs is OB1.0, yielding no network effect for OpenBazaar)
  7. potentially cheaper registrations (assuming $3 per Bitcoin transaction, that’s $6 per BlockchainID. It is still unclear how much an OB2.0 registration would cost. My gut feel is that it’s <$1.)
  8. significantly higher uptake and innovation of the ENS compared to BlockchainID. For example, see this metric and integrations with many wallets, explorers, tools (e.g. MEW, ETHTools, Etherscan, Metamask).
  9. Looking into the future, I expect ENS will become the primary naming system for an IPFS browser. This means we can finally have native OpenBazaar web links which a user can type in the address bar, and use to point to other stores natively to the web.
  10. Generally more awesomeness across the board

Disadvantages I can think of versus the current setup for OB1.0:

  1. Unlike with OneName, there is no immediate transaction fee subsidizer for the ENS. My understanding (?) is that Blockstack will stop subsidizing registration fees soon. On that point, handles are really only for vendors, and Duo or OB1 could subsidize the first 10,000 vendor handles. We could have an OB2.0 chat bot automate registration right from the reference client. For example, the bot could check that the vendor has at least one listing, and that the vendor has been online for at least one day.
  2. A new currency (Ether) is added in addition to Bitcoin. My understanding (?) is that OpenBazaar will eventually achieve currency abstraction anyway, with the possibility to support for example Litecoin, Zcash and Ether. On that point, it is not necessary to have light-client verification for OpenBazaar handles. An API (similar to OneName) could be good enough to get started. Notice that the API could include Merkle proofs and signatures, providing some level of trustlessness, something the OneName API does not do.
  3. The ENS registrar will change. The current registrar is temporary, with a plan to move to another registrar in ~2 years. My guess is that the temporary registrar will remain functional for much longer, and be forwards-compatible with the new registrar.
  4. There needs to be governance regarding ownership and control of the openbazaar.eth handle. This governance can be defined programmatically, e.g. using multisig. With BlockstackIDs there is no such governance issue.

Updated 23/06/2017 15:31 4 Comments

Role on CtBlock interface


There are currently no roles on the different methods of CtBlock; shouldn’t we add them, especially on the getters? Or at least specify that the different inserts are derived? @tdurieux

Updated 23/06/2017 14:45




I think the future of plotting for the kernel (and probably for many other languages, Python included) is Vega and Vega-lite.

I found a Scala library called Vegas. I tried it very quickly with the kernel and it works.

screenshot from 2017-06-23 09-25-33

I only tried with Scala, but I expect other scripting languages to work as well.

For now it displays a window with the rendered plot. In the near future, a Vega extension will be shipped with JupyterLab (and the Notebook as well). That will allow any Vega JSON string to be rendered if the mimetype is correct.

We could create a converter that sends the client the JSON string (using plot.toJSON) with the correct mimetype.

Updated 23/06/2017 14:03 3 Comments

making linking local packages great again


Hey, I really love pnpm and how it saves a lot of disk space.

As I make extensive use of linking local packages, I think the whole link command could use a bit of love:
  - [x] pnpm link pkg1 pkg2 should link both packages (like npm and yarn do)
  - [ ] pnpm link pkg1 should check for peerDependencies of pkg1 (#827)
  - [ ] pnpm unlink pkg1 should install pkg1 if it is a dep after unlinking (#819)
  - [ ] pnpm unlink-all to replace all locally linked packages with normally installed packages
  - [ ] pnpm link-all to link up all dependent locally available packages

pnpm unlink-all && pnpm test && pnpm link-all would be very valuable for me.

What do you think? If wanted, I could invest some time in a few PRs.

Updated 26/06/2017 10:09 2 Comments

Add overload to OpenId.UserInfoController.Me action for specific user id


It would be nice, especially in server-to-server scenarios, to be able to hit the UserInfo endpoint with an arbitrary userId, that is in addition to Orchard.OpenId/UserInfo/Me also have:


This will obviously require the caller to have a new permission that allows this, but otherwise a simple task I think. If agreed, I’m happy to submit a PR (and fix #842 too).

Updated 23/06/2017 15:47 3 Comments

Sort does not work when I use $regex, $ne or $not in the selector



I have a requirement to search for some content using a selector, then sort the results by a specified field.

examples:

docs:
  _id      Name            Debut  Series
  dk       Donkey Kong     1981   Mario
  falcon   Captain Falcon  1990   F-Zero
  fox      Fox             1993   Star Fox
  kirby    Kirby           1992   Kirby
  link     Link            1986   Zelda
  luigi    Luigi           1983   Mario
  mario    Mario           1981   Mario
  ness     Ness            1994   Earthbound
  pikachu  Pikachu         1996   Pokemon
  puff     Jigglypuff      1996   Pokemon
  samus    Samus           1986   Metroid
  yoshi    Yoshi           1990   Mario

code1 (works correctly; the results are sorted by name descending):

db.createIndex({ index: { fields: ['name'] } }).then(function () {
  return db.find({
    selector: { name: { $gt: null } },
    sort: [{ name: 'desc' }]
  });
});

code2 (does not work):

db.createIndex({ index: { fields: ['name'] } }).then(function () {
  return db.find({
    selector: { name: { $ne: "xx" } },
    sort: [{ name: 'desc' }]
  });
});

I read the PouchDB and CouchDB documentation and learned that once we have an index on a field, we can sort documents by that field, but $regex, $ne and $not cannot use on-disk indexes and must use in-memory filtering instead.

So this means we can’t use sort when we use $regex, $ne or $not in the selector?
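
One workaround, then, is to run the index-backed sorted query (code1's shape) and apply the $ne condition in application code afterwards. A sketch of that idea, in Python for brevity (data abbreviated from the example docs):

```python
# Abbreviated sample docs from the example above.
docs = [
    {"_id": "dk", "name": "Donkey Kong"},
    {"_id": "mario", "name": "Mario"},
    {"_id": "kirby", "name": "Kirby"},
]

# Index-backed part: name > null, sorted by name descending (as in code1).
sorted_docs = sorted(
    (d for d in docs if d.get("name") is not None),
    key=lambda d: d["name"],
    reverse=True,
)

# Client-side $ne: drop the excluded value after sorting.
result = [d for d in sorted_docs if d["name"] != "Mario"]
```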

Updated 24/06/2017 16:01

Class name order shouldn’t matter



The result produced by class="ui segment attached bottom" and class="ui bottom attached segment" differs.

Maybe I’m lacking some basic HTML/CSS knowledge, but I think order should not matter, although I understand it’s more “english-readable” to put the classes in the “grammatically correct” order.

Whereas class="ui bottom attached segment" gives me the rounded bottom, class="ui segment attached bottom" does not (tested on Chrome 58).

Thank you all!

Updated 26/06/2017 02:43 4 Comments

Release 1.0.2


IMO it’s time to release 1.0.2 to the public. Maybe you know some patches or bugfixes which should be backported from master to 1.0.x before baking 1.0.2?

For whisper I’m going to check @piotr1212’s commits for some fixes besides python3 support (I think we need to release py3 a bit later, in 1.1.0). For carbon, the same. Should we include DYNAMIC_ROUTER support in 1.0.2?

/cc @iksaif @DanCech @cbowman0 @drawks @graphite-project/committers

Updated 23/06/2017 11:10 1 Comments

investigate: implement base ripple and base adapter model


I think the ripple and component implementations would become faster/safer if I align the adapter design on MDC's base component and implement foundations and ripples from base classes.

I went through @robertkern's vue-material work (which is, by the way, the best MDC implementation I have seen so far!). I'm starting to better understand MDC's abstraction and how it can be rendered with Vue.js.

So this issue is about investigating whether it is worth refactoring for reuse, and how it is best done (mixins, extends, …?). I'll keep track of design options and notes here.

Updated 24/06/2017 15:11 1 Comments

Should Mavos with no specified storage backend be editable?


This refers to Mavos without an mv-storage attribute (they could, however, have an mv-source attribute). Such Mavos are not saveable, but are still editable. However, I’m not sure that serves any purpose, and it has been confusing a number of times. After all, what’s the point of editability if the data can’t go anywhere? The only benefit I see is immediate feedback when somebody is first learning Mavo and experiments with property before mv-storage. If such Mavos were not editable, they would see no difference.

@karger, I’d appreciate your thoughts on this.

Updated 25/06/2017 02:29 5 Comments

alternate binding syntax

  • [ ] - Implement and test functionality, release it in new can-stache-bindings.
  • [ ] - Make a code-mod.
  • [ ] - Update CanJS guides and recipes to use it (will require CanJS pre release)
  • [ ] - Update DoneJS examples to use it.
  • [ ] - Add documentation for it, deprecate old functionality.
  • [ ] - Make a release for this and canjs, push out new site.

New users can’t remember our binding syntaxes, so I propose we fix them ASAP. A 2-week sprint to create this and update the docs will make CanJS 10% better.

Here’s an alternate:

<!-- events ($click) -->
<div on:click="method()"/>

<!-- one-way scope to element property {$value}="scopeValue" -->
<input value:from="scopeValue"/>

<!-- one way element property to scope {^$value}="scopeValue" -->
<input value:to="scopeValue"/>

<!-- two-way element property to scope ({$value})="scopeValue" -->
<input value:bind="scopeValue"/>

I think the bindings above will work great for can-element which will have everything on the elements. There will be no need for “view model” bindings.

However, to enable this to work for view model bindings, I’ll make two proposals:

1. Use vm to signal that behavior on a view model

<!-- events (close) -->
<my-element vm:on:close="method()"/>

<!-- one-way scope to element property {$value}="scopeValue" -->
<input vm:value:from="scopeValue"/>

<!-- one way element property to scope {^$value}="scopeValue" -->
<input vm:value:to="scopeValue"/>

<!-- two-way element property to scope ({$value})="scopeValue" -->
<input vm:value:bind="scopeValue"/>

2. Use the vm if you have one.

99% of the time, if you are on a custom element, you want a vm binding. So the bindings would check if there is a vm and use that, resulting in:

<!-- events (close) -->
<my-element on:close="method()"/>

<!-- one-way scope to element property {$value}="scopeValue" -->
<input value:from="scopeValue"/>

<!-- one way element property to scope {^$value}="scopeValue" -->
<input value:to="scopeValue"/>

<!-- two-way element property to scope ({$value})="scopeValue" -->
<input value:bind="scopeValue"/>

Cool stuff

This might be able to solve some other issues nicely like #6.

<input on:enter:value:to="scopeValue"/>

This would update scopeValue on the enter event with input.value instead of on the change event.

Other consideration

We now support events like:

<div (person born)="animate"/>

What would this look like in the new syntax?

<div on:born:by:person="animate"/> <!-- favorite -->

<div on:born:for:person="animate"/>
<div on:person:born="animate"/>

<div for:person:on:born="animate"/>

<div on-person's-born="animate"/> <!-- I don't think this is possible, but would be cool -->

Updated 26/06/2017 21:56 8 Comments

AnimationPlayer Node but simpler, only for Sprite Animations


Basically an AnimatedSprite node plus keyframing of everything, with tracks: especially keyframing a function call at a certain frame of the animation, or a CollisionShape for hitbox purposes.

Spritesheets or multiple-image sprites are way simpler: they only need the duration of the animation and whether it should loop or not. But doing this in the AnimationPlayer node is very complicated, because you have to first set the timeline length and the step between frames, and at that point you're just guessing and trying every number until your animation looks good.

AnimationPlayer is excellent for characters made of separate parts, but not so good for simple sprite animations if you want to use its amazing features.

Updated 25/06/2017 12:51 12 Comments
