Contribute to Open Source. Search issue labels to find the right project for you!

Browser detection/warning


Currently we are only actively supporting Chrome and Firefox. MS Edge is now the default for Win10 machines, and I’ve watched many teachers inadvertently open BlocklyProp with it, only to then have issues.

I think two things should be done:

  • Alert/Modal warning users when a non Chrome/Firefox browser is detected
  • Spend a bit of time (not a ton…) determining what it would take to make BP more cross-compatible.

Updated 20/08/2017 07:12

Rework UIModel and UIDataSource


Data-Model and Data-Source


  1. Add option for URL for CRUD
  2. /api/user which will be used for the following
    • GET /api/user/:id
    • POST /api/user/
    • PUT /api/user/:id
    • DELETE /api/user/:id
  3. All operations will provide isBusy indicator
  4. Add decorator @ignoreSerialize to ignore given property from serialization
  5. Observe all model property keys to update dirty and propDirty indicators (Need to think about structuring the propDirty)
  6. Are three model states required? This will be useful when using the model in a DataSource
    • __original__ maintains the original record
    • __updated__ maintains updated values that have not been synced via the API
    • discardChanges() will revert to the __updated__ dataset
    • reset() will revert to the __original__ dataset

Initial Thoughts

// Since there is no object observation we need to rely on property
// observers to update the dirty indicators; the property's dirty flag
// is updated in the observer callback.
// (each/observe are the observation helpers assumed by this sketch)
init() {
  each(keys, key => observe(this, key, () => this.propChange(key)));
}

propChange(key) {
  this.propDirty[key] = this.__original__[key] !== this[key];
  this.checkDirty();
}

discard() {
  each(keys, key => this[key] = this.__updated__[key]);
}

reset() {
  each(keys, key => this[key] = this.__updated__[key] = this.__original__[key]);
}

Remote DataSource

  1. Add option for URL for CRUD
  2. /api/user which will be used for the following
    • GET /api/users fetch all models
    • POST /api/user/
    • PUT /api/user/:id
    • DELETE /api/user/:id

      Do we follow singular calls for all updates and deletes, or do we create a single call with multiple update models, e.g. DELETE /api/users body=[1,2,3]?

  3. Observe the dirty property of all models to maintain dirty indicator for the store
  4. Need to have server side sorting, pagination and filter capabilities
    • Filter is important when using stores with lists and auto-complete
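The server-side sorting, pagination and filter requirements above could be serialized into the request URL. A minimal sketch, where the parameter names (`page`, `pageSize`, `sort`, `filter`) are placeholders and not a settled API:

```javascript
// Hypothetical sketch: build the query string for server-side sorting,
// pagination and filtering. All parameter names are placeholders.
function buildQuery(baseUrl, { page = 1, pageSize = 25, sort, filter } = {}) {
  const params = [`page=${page}`, `pageSize=${pageSize}`];
  if (sort) params.push(`sort=${encodeURIComponent(sort)}`);
  if (filter) params.push(`filter=${encodeURIComponent(filter)}`);
  return `${baseUrl}?${params.join('&')}`;
}
```

A call such as `buildQuery('/api/users', { sort: '-name', filter: 'jo' })` could then drive both list views and auto-complete from the same endpoint.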
Updated 20/08/2017 06:24 1 Comments

support for using KaTeX in a wysiwyg math editor


There are a couple of approaches to implementing an editor:

  • edit the input string directly
  • edit the parse tree

Both have their challenges. In the case of editing the LaTeX string, determining which characters from the original source should be updated when the user makes an edit can be tricky especially when macros are involved. Editing the parse tree is complicated by the fact that we haven’t stabilized the parse tree yet.

There are various things that an editor needs to be able to do:

  • move a cursor around via the keyboard and mouse
  • render a cursor at the correct location
  • make a selection of multiple characters and render it
  • insert and delete nodes

All of this seems incredibly difficult to do if we try to map back to the original input string. If we go with the parse tree approach we’d want to be able to navigate through the tree. The parse tree doesn’t support any way to navigate to sibling nodes or to a parent node, but it could be made to do so.

I think the biggest challenge will be generating new nodes in the format expected by buildTree.

In order to facilitate using the mouse to position the cursor and make selections we’ll want some way to link nodes in the DOM with nodes in the parse tree. We could add an id prop to each node in the parse tree and matching katex-id properties to the corresponding DOM nodes.
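As a sketch of that id-linking idea (the `id`/`katex-id` names are the ones suggested above, but the helpers themselves are hypothetical, not KaTeX API):

```javascript
// Hypothetical helpers: tag every parse-tree node with a unique id
// (which the renderer would emit as a katex-id attribute on the
// corresponding DOM node), then map a katex-id back to the tree node.
let nextId = 0;

function tagTree(node) {
  node.id = String(nextId++);
  (node.children || []).forEach(tagTree);
  return node;
}

function findById(node, id) {
  if (node.id === id) return node;
  for (const child of node.children || []) {
    const hit = findById(child, id);
    if (hit) return hit;
  }
  return null;
}
```

A click handler could then read something like `event.target.getAttribute('katex-id')` and call `findById` on the tree root to recover the parse-tree node under the mouse.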

@spontaliku-softaria has been experimenting with adding katex-ids in…spontaliku-softaria:math-input.

Updated 19/08/2017 23:55 1 Comments

Adds standard magboot slowdown to Vox boots


Adds the standard magboot slowdown to the special Vox clawboots, because it makes little sense that advanced magnetic technology slows you down while violently digging metallic talons into the ground in a “deathgrip” doesn’t. Also, a huge number of people seem to agree that the Vox having no-penalty magboots is just silly and unnecessary.

Compiled and tested successfully.

  • rscadd: Gives Vox magboots the same slowdown as standard magboots.
Updated 20/08/2017 06:43 44 Comments

Remove @iteles as official reviewer for master-reference


As someone who has not been able to attend recent Founders & Coders meetings, I suggest I am removed as an official maintainer for this repository, so that there is no dependency on me for reviewing and merging PRs that I do not have the context to review.


I knew in advance that from July - end October 2017 I wouldn’t have the availability to attend meetings for Founders & Coders campuses and therefore, my reviewing of PRs that come out of these meetings, particularly those of minutes of meetings, is not helpful to the organisation.

Saliently, I’m concerned that I will start a practice of not being able to attend meetings but then using these PR reviews to voice my insights after-the-fact (i.e. after hours of discussion have been had and agreements have been made), for example with losing an afternoon a week to curriculum planning. This would be entirely detrimental to the Founders & Coders team and would set a bad (chaotic) precedent for others.

I am very happy to continue being added as a reviewer/assignee to PRs and issues when someone feels my input would be valuable, but being the ‘default’ is unlikely to be good for Founders & Coders.

Updated 19/08/2017 17:30

Add reselect and redux actions


Hey there! ~I have a WIP branch with some updates on our codebase, it is still not done but I want to get early feedback on what I have here.~

~Will update this description as I make more progress - thanks for your patience!~


I completed my work on this but I decided to not include Immutable (not in this PR at least), because reselect + redux actions can live without Immutable.

So, the structure of the code has changed slightly, it goes like:

  1. We have an organization.selectors.js file. Here we’re using reselect to “query” our store and get the data we need (reselect also has a bunch of benefits that you can read about in their docs). This will save us some duplication when we want to get data from the store in different containers (we should reuse the selectors we have in the selectors file whenever we want to use a specific part of the state).
  2. We have an organization.constants.js file. I took this approach because we can use the constants throughout our reducers and actions (and in our tests in the future); also, I find the *.type.js notation quite ambiguous.
  3. We no longer have that nasty switch statement in our reducer (I love this part).
  4. We have individual actions that take care of specific parts of the state; for example, we can call an action just to update the loading state of the organization repos, or update the error state only on the organization members. No more “batch” updates to the store (what I mean by that is having actions that take care of 2+ things in our store). This will give us way more control over how our store is being updated.
Updated 20/08/2017 04:33

Code inspection should warn if variable is declared at module and method level


I’m not so confident with coding best (or worst) practices, but I can’t help but feel there should be some sort of warning about declaring a variable at module level and also within subs / functions. There are no warnings or suggestions for the following:

```
Option Explicit
Private foo As String ' <<<

Public Sub bar()
    Dim foo As String ' <<<
    foo = "bar"
    Debug.Print foo
End Sub

Public Sub foobar()
    foo = "bar"
    Debug.Print foo
End Sub

Public Sub foobarfoo()
    foo = "bar"
    Debug.Print foo
End Sub
```

Updated 19/08/2017 13:01 1 Comments

Build as a multi-repository


Lots of us are currently using the stable version, and it would be much easier to migrate each component separately.

So at the start we will be able to load components separately:

import { Card, CardText, CardActions } from 'material-ui';  // 0.19.0
import { Button } from "@material-ui/button"; // ^1.0.0

Idea from material-components, where a user can install the whole lib from the material-components-web repo or each component separately from @material/*.

See lerna.

Updated 19/08/2017 11:05 5 Comments

allow for namespacing or nesting of events


It is really nice that I can nest my actions in a mixin, like

actions: {
  auth: {
    login: () => {}
  }
}
But if I have an event in the same mixin, I can’t do the same. Would it be worth making something like emit('auth.expired') possible? I realize this is a minor convenience and I would definitely not be in favour unless it is not too complicated to do.
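For what it's worth, the lookup itself seems cheap. A hypothetical sketch (not the library's real emit()) of resolving a dot-separated name against a nested handler object:

```javascript
// Hypothetical sketch: walk a nested handler object by a dot-separated
// event name, e.g. 'auth.expired' -> handlers.auth.expired.
function resolve(handlers, name) {
  return name.split('.').reduce(
    (node, key) => (node ? node[key] : undefined),
    handlers
  );
}

function emit(handlers, name, ...args) {
  const fn = resolve(handlers, name);
  if (typeof fn === 'function') return fn(...args);
  // unknown event: silently ignore in this sketch
}
```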

Updated 19/08/2017 23:02 4 Comments

OSSEC: Remove or modify daily "Ossec server/agent started" emails



The SecureDrop servers reboot every day. After reboot, the OSSEC server and agent start and one gets two emails:

Received From: (app)>ossec
Rule: 503 fired (level 3) -> "Ossec agent started."
Portion of the log(s):

ossec: Agent started: 'app->'.
Received From: mon->ossec-monitord
Rule: 502 fired (level 3) -> "Ossec server started."
Portion of the log(s):

ossec: Ossec started.

Some administrators may not want these emails and would consider them spam. On the other hand, some may want these emails because it lets the administrator know that their monitor server is working and that the nightly reboot has occurred successfully. I have heard both these sentiments from administrators in the past on these particular alerts, so I’m opening this for discussion before taking action to see what others think.


  1. Remove both alerts. If people want to be reassured that their monitor server is working, they must SSH in or take some other action that triggers an alert.
  2. Remove one alert. There are emails from both app and mon here. One is redundant: if the OSSEC server does not start, then the OSSEC agent started email will not be sent. I suggest we suppress the “Ossec server started” alert, as that is implicit if one is receiving an email alert from OSSEC (i.e. leave only the “Ossec agent started” alert). One might also edit the description in the alert to say “OSSEC restarted after nightly SecureDrop reboot” or something more explicit.
  3. Leave as is (this option is unsatisfactory if the logic in Option 2 is sound).

User Stories

As a SecureDrop administrator, I want to decrypt only email alerts containing useful or actionable information.

Updated 19/08/2017 06:01

xplat issues with detection and use of SSE instructions for SSE versions greater than SSE2


In my spare time I have been chipping away at . Whilst doing this I had a look at some of the existing code that is used to selectively use SSE greater-than-SSE2 instructions at runtime. For example:

```
uint Encoder::CalculateCRC(uint bufferCRC, size_t data)
{
#if defined(_WIN32) || defined(__SSE4_2__)
#if defined(_M_IX86)
    if (AutoSystemInfo::Data.SSE4_2Available())
        return _mm_crc32_u32(bufferCRC, data);
#elif defined(_M_X64)
    if (AutoSystemInfo::Data.SSE4_2Available())
        //CRC32 always returns a 32-bit result
        return (uint)_mm_crc32_u64(bufferCRC, data);
#endif
#endif
    return CalculateCRC32(bufferCRC, data);
}
```

In order to generate a portable x86_64 binary I need to compile without `-msse4.2`. Doing so will essentially turn this function into this:

```
uint Encoder::CalculateCRC(uint bufferCRC, size_t data)
{
    return CalculateCRC32(bufferCRC, data);
}
```

Note that this unconditionally disables use of the SSE4.2 crc32 instruction.

Unfortunately, if -msse4.2 is enabled then SSE4.2 instructions will be enabled globally throughout the codebase. This means that Clang is free to emit SSE4.2 code wherever it pleases (not just where you have used an intrinsic such as _mm_crc32_u32), so it will generate code that will not work correctly on processors that only support SSE2. I think this is an example of how Clang differs in behaviour from MSVC: I think MSVC will only emit SSE4.2 instructions specifically where you have used SSE4.2 intrinsics.

When the -msse4.2 flag is enabled you will essentially compile the following:

```
uint Encoder::CalculateCRC(uint bufferCRC, size_t data)
{
    if (AutoSystemInfo::Data.SSE4_2Available())
    {
        //CRC32 always returns a 32-bit result
        return (uint)_mm_crc32_u64(bufferCRC, data);
    }
    return CalculateCRC32(bufferCRC, data);
}
```

But, if you substitute in that implementation for CalculateCRC and then compile without -msse4.2 you will get the following error:

```
[ 30%] Building CXX object lib/Backend/CMakeFiles/Chakra.Backend.dir/Encoder.cpp.o
/root/src/ChakraCore-1.7.0/lib/Backend/Encoder.cpp:1033:14: error: always_inline function '_mm_crc32_u64' requires target feature 'ssse3', but would be inlined into function 'CalculateCRC' that is compiled without support for 'ssse3'
      return (uint)_mm_crc32_u64(bufferCRC, data);
             ^
1 error generated.
make[2]: *** [lib/Backend/CMakeFiles/Chakra.Backend.dir/Encoder.cpp.o] Error 1
make[1]: *** [lib/Backend/CMakeFiles/Chakra.Backend.dir/all] Error 2
make: *** [all] Error 2
See error details above. Exit code was 2
```

Which means that you cannot use the intrinsic without the -msse4.2 flag.

There are a couple of workarounds for this (e.g. putting the SSE4.2 code in a separate file and then compiling that code with -msse4.2). Using inline assembly is probably closest to what the intrinsic does. I’ve had a couple of tries at this, but I can’t seem to get the right incantation. E.g. the following illustrates the idea, but is buggy (oddly it seems to work fine when compiling -O0 but breaks when compiling -O3, on Clang 3.8.0):

```
uint Encoder::CalculateCRC(uint bufferCRC, size_t data)
{
#if defined(_WIN32)
#if defined(_M_IX86)
    if (AutoSystemInfo::Data.SSE4_2Available())
        return _mm_crc32_u32(bufferCRC, data);
#elif defined(_M_X64)
    if (AutoSystemInfo::Data.SSE4_2Available())
        //CRC32 always returns a 32-bit result
        return (uint)_mm_crc32_u64(bufferCRC, data);
#endif
#endif
#if defined(_M_X64)
    if (AutoSystemInfo::Data.SSE4_2Available())
    {
        unsigned long long tmp;
        unsigned long long tmp2 = 0;
        tmp = (unsigned long long)bufferCRC;
        __asm__ __volatile__("push %1;crc32q %2, %1; movq %1, %0;pop %1" : "=r" (tmp2) : "r" (tmp), "r" ((unsigned long long)data));
        return (uint)tmp2;
    }
#endif
    return CalculateCRC32(bufferCRC, data);
}
```

Updated 19/08/2017 00:22 2 Comments

Making done-ssr more general purpose


Currently done-ssr works excellently in DoneJS apps, and OK but not wonderfully in other scenarios. We would like to make it more general purpose so that the excellent technology within is useful to developers of other frameworks.

This issue is the starting point for creating API ideas that let devs use individual parts of done-ssr and compose them together. Ideas to come.

Updated 18/08/2017 20:53

[IOperation] Handling user-defined conversions


Currently, for user-defined conversions, we produce an IConversionExpression that looks something like this:

```C#
class C1
{
    void M1()
    {
        C1 c1 = new C1();
        C2 /*<bind>*/c2 = (C2)c1/*</bind>*/;
    }

    public static explicit operator C2(C1 c1) => new C2();
}

class C2 { }
```

```
IVariableDeclarationStatement (1 declarations) (OperationKind.VariableDeclarationStatement) (Syntax: 'C2 /*<bind> … </bind>*/;')
  IVariableDeclaration (1 variables) (OperationKind.VariableDeclaration) (Syntax: 'C2 /*<bind> … </bind>*/;')
    Variables: Local_1: C2 c2
    Initializer: IConversionExpression (Explicit, TryCast: False, Unchecked) (OperatorMethod: C2 C1.op_Explicit(C1 c1)) (OperationKind.ConversionExpression, Type: C2) (Syntax: '(C2)c1')
      Conversion: CommonConversion (Exists: True, IsIdentity: False, IsNumeric: False, IsReference: False, IsUserDefined: True) (MethodSymbol: C2 C1.op_Explicit(C1 c1))
      Operand: ILocalReferenceExpression: c1 (OperationKind.LocalReferenceExpression, Type: C1) (Syntax: 'c1')
```

However, this is effectively a method call of the conversion method. Because of this, VB has some additional complications we’re currently not handling, namely around in and out conversions. For example:

```VB.NET
Module Program
    Sub M1(args As String())
        Dim i As Integer = 1
        Dim c1 As C1 = i'BIND:"Dim c1 As C1 = i"
    End Sub

    Class C1
        Public Shared Widening Operator CType(ByVal i As Long) As C1
            Return New C1
        End Operator
    End Class
End Module
```

```
IVariableDeclarationStatement (1 declarations) (OperationKind.VariableDeclarationStatement) (Syntax: 'Dim c1 As C1 = i')
  IVariableDeclaration (1 variables) (OperationKind.VariableDeclaration) (Syntax: 'c1')
    Variables: Local_1: c1 As Program.C1
    Initializer: IConversionExpression (Implicit, TryCast: False, Unchecked) (OperatorMethod: Function Program.C1.op_Implicit(i As System.Int64) As Program.C1) (OperationKind.ConversionExpression, Type: Program.C1) (Syntax: 'i')
      Conversion: CommonConversion (Exists: True, IsIdentity: False, IsNumeric: False, IsReference: False, IsUserDefined: True) (MethodSymbol: Function Program.C1.op_Implicit(i As System.Int64) As Program.C1)
      Operand: ILocalReferenceExpression: i (OperationKind.LocalReferenceExpression, Type: System.Int32) (Syntax: 'i')
```

There is an implicit conversion from int to long in the implicit user-defined conversion here that we are not currently representing. There are two possibilities for how to resolve this. The first is to represent user-defined conversions in both languages as a function call. The nicety here is twofold: first, the user-defined conversion is a function call. Second, we can represent that the in and out conversions are part of the same conversion step. However, this will make it harder for IOperation users to simply get all conversions, as they’ll have to subscribe to all invocations and filter to method symbols that are conversions. The other possibility (and the one I favor) is to simply insert the in and out conversions as IConversionExpressions if necessary. So the hierarchy would look like this:

IConversionExpression (out conversion, if present)
  Operand: IConversionExpression (User-defined conversion)
    Operand: IConversionExpression (in conversion, if present)
      Operand: IOperation (Converted expression)

I like this because it makes it easy to see that multiple conversions are actually taking place. IsImplicit on the outer and inner conversion should provide the information that the conversion is actually part of the user-defined conversion. Tagging @dotnet/analyzer-ioperation @AlekseyTs for discussion.

Updated 18/08/2017 19:46 4 Comments

[question] deep state, destructuring & action verbosity/DRY


hey guys,

i don’t believe there is something like concise notation for deep destructuring:

const x = ({a.b.c}) =>;

most examples with hyperapp use a flat state, so they look simple. but the app size will grow rather quickly and become less DRY, as you have to repeat the state structure in multiple places. adding just one level to the counter example results in a lot more repetitive action code, or any code that can return partial state (but must be aware of the entire ancestor state structure):
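one common workaround for the repetition (a sketch using a hypothetical helper, not a hyperapp API) is to rebuild only the ancestor objects along a path:

```javascript
// Hypothetical helper: immutably set a value at a dot-separated path,
// recreating only the objects along that path.
function setIn(state, path, value) {
  const [head, ...rest] = path.split('.');
  return {
    ...state,
    [head]: rest.length
      ? setIn(state[head] || {}, rest.join('.'), value)
      : value,
  };
}
```

an action touching a deep counter could then stay a one-liner, e.g. `state => setIn(state, 'counter.value', state.counter.value + 1)`, without spelling out the whole ancestor structure.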

attempts to make it smaller [understandably] don’t quite work:

thoughts? thanks!

Updated 20/08/2017 02:28 34 Comments

How to set ClientCertificateMode in core 2.0?


What is the equivalent to this in 2.0?

```csharp
whb.UseKestrel(options =>
{
    var httpsOptions = new HttpsConnectionFilterOptions();
    httpsOptions.ServerCertificate = certificate;
    httpsOptions.ClientCertificateMode = ClientCertificateMode.AllowCertificate;
    httpsOptions.SslProtocols = System.Security.Authentication.SslProtocols.Tls;
    options.UseHttps(httpsOptions);
})
```

The functional tests use a constructor that is marked internal so I can’t use it.

Updated 19/08/2017 05:40 7 Comments

Cannot get AuthenticateInfo in Extended SignInManager Class


How, in Core 2.0, would we get the Microsoft.AspNetCore.Http.Authentication.AuthenticateInfo class if we are extending the SignInManager<TUser> class? Specifically, we have a ctor:

        public AppUserSignInManager(UserManager<TUser> userManager, 
            IHttpContextAccessor contextAccessor, 
            IUserClaimsPrincipalFactory<TUser> claimsFactory, 
            IOptions<IdentityOptions> optionsAccessor, 
            ILogger<SignInManager<TUser>> logger,
            IAuthenticationSchemeProvider schemeProvider) : base(userManager, contextAccessor, claimsFactory, optionsAccessor, logger, schemeProvider)
        { }

And an override method:

public override async Task SignInAsync(TUser user, bool isPersistent, string authenticationMethod = null)

In the method we need the AuthenticateInfo:

    var scheme = await SchemeProvider.GetSchemeAsync(Microsoft.AspNetCore.Identity.IdentityConstants.ExternalScheme);
    var ai = await Context.Authentication.GetAuthenticateInfoAsync(scheme.Name);

But this does not seem to work now.

Startup.cs looks like:

```
services.AddIdentity<AppUser, AppRole>()
    .AddUserManager<AppUserManager<AppUser>>()
    .AddUserStore<AppUserStore<AppUser>>()
    .AddRoleStore<AppRoleStore<AppRole>>()
    .AddDefaultTokenProviders();

services.ConfigureApplicationCookie(config =>
{
    config.Cookie.Name = lib.util.constants.IdentityConstants.CookieName;
    config.LoginPath = new PathString("/account/login");
    config.LogoutPath = new PathString("/account/logout");
    config.AccessDeniedPath = new PathString("/unauthorized");
    config.ExpireTimeSpan = TimeSpan.FromDays(1);
    config.SlidingExpiration = true;
    config.ReturnUrlParameter = "returnUrl";
});

services.Configure<IdentityOptions>(options =>
{
    options.Lockout.DefaultLockoutTimeSpan = TimeSpan.FromDays(1);
    options.Lockout.MaxFailedAccessAttempts = 5;
    options.Password.RequireDigit = true;
    options.Password.RequireNonAlphanumeric = true;
    options.Password.RequireLowercase = true;
    options.Password.RequireUppercase = true;
    options.Password.RequiredLength = 8;
});
```

Any help appreciated. Thanks!

Updated 20/08/2017 07:05 2 Comments

Investigate how we can avoid cyclic can-* dependencies


We run into issues when we add a new can-* dependency in order to test something and that dependency depends on the current package.

For instance, adding can-stache to test something in can-view-scope when can-stache already depends on can-view-scope.

└── can-stache
    └── can-view-scope

We should avoid this. It would be awesome if we had a lint rule or some kind of postinstall hook that would check for this and let people know not to do it.
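The check itself could be a small script run from a postinstall hook. A hypothetical sketch, assuming the dependency map has already been read from each package's package.json:

```javascript
// Hypothetical sketch: depth-first search for a dependency path that
// leads back to the package being tested. `deps` maps package name ->
// array of can-* dependency names (gathered from package.json files).
function findCycle(deps, start, node = start, seen = new Set()) {
  if (seen.has(node)) return null;
  seen.add(node);
  for (const dep of deps[node] || []) {
    if (dep === start) return [...seen, start];
    const cycle = findCycle(deps, start, dep, seen);
    if (cycle) return cycle;
  }
  return null;
}
```

With `{ 'can-view-scope': ['can-stache'], 'can-stache': ['can-view-scope'] }`, `findCycle(deps, 'can-view-scope')` reports the can-view-scope -> can-stache -> can-view-scope cycle, which is exactly the situation described above.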

Updated 18/08/2017 16:38

Inspections & QuickFix XML-Doc


The Rubberduck website’s inspections page currently discovers implemented inspections in the Rubberduck build it’s referencing, and uses the descriptions and meta-descriptions to provide useful information about every implemented inspection.

Meta-descriptions are better than nothing, but they’re intended to be relatively short: a few sentences succinctly describing the rationale behind an inspection - short enough to fit the inspection toolwindow’s bottom panel, but long enough to be useful.

These dynamically-generated website pages would be the single most ideal place to document in detail everything there is to know about an inspection, and the quick-fixes that support it.

So let’s add proper and complete XML-doc to every inspection and quickfix implementation:

  • A <summary> tag, that very shortly describes what the inspection is looking for, or what the quickfix does. Bear in mind that this content will also appear in Visual Studio IntelliSense when hovering the type name.
  • If applicable, a <remarks> tag, with any further useful/relevant information - e.g. if a quickfix isn’t available when the inspection target is a project or a module, that’s where it’s documented.
  • An <example> tag, containing at least one <code> tag which contains a simple example of VBA code that triggers an inspection result. For quickfixes, the <example> tag should contain two snippets: one before the fix is applied, and one showing the same VBA code after the fix has executed.
  • For inspections, a <seealso> tag for each quickfix that supports that inspection. For quickfixes, a <seealso> tag for each inspection that can run the quickfix.

See Recommended Tags for Documentation Comments (C#) on MSDN.

Inspection’s type/category and default severity properties are already self-documented and don’t need to show up in the XML doc (they’re too easy to change and then accidentally forget to update the documentation anyway).

So, as soon as we agree on the format of it, let’s enforce a policy that every new inspection and quickfix is required to have complete XML-doc; the website will consume that information and render it on the inspection pages.

For example the OptionExplicitInspection class could look like this:

/// <summary>An inspection that identifies modules without <c>Option Explicit</c> specified.</summary>
/// <remarks>Configuring the VBE options to "require variable declaration" will make the VBE itself take care of specifying <c>Option Explicit</c> at the top of every single module.</remarks>
/// <example>This code module doesn't specify <c>Option Explicit</c>, and will therefore trigger an inspection result:
/// <code>
///     Private foo As String
///     Public Sub DoSomething()
///     End Sub
/// </code>
/// </example>
/// <seealso cref="OptionExplicitQuickFix" />
public sealed class OptionExplicitInspection : ParseTreeInspectionBase

And the OptionExplicitQuickFix class could be:

/// <summary>A quickfix that specifies <c>Option Explicit</c> at the top of a module.</summary>
/// <remarks>Configuring the VBE options to "require variable declaration" will make the VBE itself take care of specifying <c>Option Explicit</c> at the top of every single module.</remarks>
/// <example>Before:
/// <code>
///     Private foo As String
///     Public Sub DoSomething()
///     End Sub
/// </code>
/// </example>
/// <example>After:
/// <code>
///     Option Explicit
///     Private foo As String
///     Public Sub DoSomething()
///     End Sub
/// </code>
/// </example>
/// <seealso cref="OptionExplicitInspection" />
public sealed class OptionExplicitQuickFix : IQuickFix
Updated 18/08/2017 16:35

Non-Swift repos?


Currently apodidae only searches for repos primarily written in Swift that include a Package.swift. While the second part of that makes sense, not every possible package has to be implemented in Swift (or at least mainly in Swift).

The obvious downside to not limiting the language in the query is that the number of results would be too large to fetch with a single request, since the package-manifest filtering is only done afterwards on the client side. GitHub limits searches to a maximum of 100 repos for a single query.

Not sure what a good course of action could be here.

Updated 18/08/2017 16:04

Make DBKey public


Currently DBKey is already exposed through the MapProof enum:

```rust
pub enum MapProof<V> {
    /// A boundary case with a single element tree and a matching key.
    LeafRootInclusive(DBKey, V),
    /// A boundary case with a single element tree and a non-matching key.
    LeafRootExclusive(DBKey, Hash),
    /// A boundary case with empty tree.
    Empty,
    /// A root branch of the tree.
    Branch(BranchProofNode<V>),
}
```

Updated 18/08/2017 16:24

Assigning TODOs to issues


I noticed the code is full of TODOs nobody takes care of (and it probably doesn’t matter). Let’s make ORDNUNG in them. If a TODO requires attention, it should be a GitHub issue; if not, it does not have to be there at all.

What I suggest is that every TODO should be associated with an open GitHub issue, e.g. by putting the number of the issue in brackets. Then we can automatically check whether all TODOs are associated with an issue and whether the TODOs have been removed from the code after the issue was closed.
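The automatic check could be a simple scan. A hypothetical sketch, assuming the convention is an issue number in brackets, e.g. `TODO(#123)` (both the convention's exact form and the regex are assumptions):

```javascript
// Hypothetical lint sketch: return every TODO comment that does not
// reference a GitHub issue in the TODO(#123) form.
function untrackedTodos(source) {
  const todos = source.match(/TODO[^\n]*/g) || [];
  return todos.filter(todo => !/TODO\(#\d+\)/.test(todo));
}
```

Running this over the source tree would flag TODOs that still need an issue; the reverse check (TODOs referencing already-closed issues) would additionally query the issue tracker.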

What do you think?

Updated 18/08/2017 15:33 1 Comments

Proposal: Enable Downcasting Objects


Definition of Downcasting: In class-based programming, downcasting or type refinement is the act of casting a reference of a base class to one of its derived classes.

There are ways to work around downcasting, such as composition (passing the base class into the constructor of another class) and mapping the base to the derived type.

Let’s say we have two classes:

class Car {
      public List<Wheel> Wheels {get; set;}
      public Engine Engine {get; set;}
}

class PremiumCar : Car {
      public Navigation Navi {get; set;}
}

//I think I should be able to do this:

var myCar = new Car {
      Wheels = new List<Wheel>(),
      Engine = new Engine()
};

//Then if I want to upgrade that car to a premium car:

var myPremiumCar = (PremiumCar)myCar;

//Now I have all the properties of Car already populated; all I need to do is add navigation, which should be null before being added.

myPremiumCar.Navi = new Navigation();

//That's it.

Let me know what you think.

Updated 19/08/2017 07:37 9 Comments

[Timer + List] What about starting the timer immediately when a list item is clicked?

  1. Current situation
    • When an item is clicked in the List, the screen switches to the Timer, and the user has to press the START button for the task to actually start being tracked.
  2. Problems with the current situation
    • Since the Timer must only be reachable from the List, we have to handle enabling/disabling the button accordingly.
    • Because the Timer does not start immediately, it is confusing when the List should apply the DO state. There are two cases: (1) switch TODO->DO as soon as the item is clicked, or (2) switch to DO only when the START button is pressed in the Timer. In case 1, the user may forget to press the Timer's START button (the list already shows the task as in progress, but when they come back to check the elapsed time, the timer was never started). In case 2, the Timer has to pass the "timer started" data back to the List.
  3. Alternative
    • When a to-do item is clicked in the List, the timer starts immediately.
  4. Effect
    • We no longer have to deal with the problems in 2.
  5. Another issue and its solution
    • The user might start a task by mistake. -> Stopping it restores the original state, so this is fine (or we can show a confirm dialog before starting).

@lynring24 I think this would reduce the amount of work Timer and List have to handle. If you would rather keep the current behavior, I plan to proceed using case 2-2. Let's discuss the details in person tomorrow!

Updated 18/08/2017 13:22

Logging to non-relational database


I was thinking about the way we log our experiments, and I think we should be able to mine much more information from the training process, so we can better understand it. However, it would require logging much more information than we currently do (changes in attention during training, visualizing attention over images, structured output), and many of these cannot be easily printed in the console.

What I suggest is logging into a non-relational database and displaying the logs using either a console client (which will show the log in its current form) or a web client which could do some clever search.

After the logging is done in this way, we can take advantage of a shared file system and run validation and model assessments (embedding evaluation, …) in a different process on a different machine, writing into the same TF event file and into the same database.
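As a sketch of the kind of structured record this would make queryable (the field names are invented for illustration, not a proposed schema):

```javascript
// Hypothetical structured log record for one training step; a document
// store could index and query these fields directly instead of parsing
// console output.
function makeLogRecord(experiment, step, metrics, attention) {
  return {
    experiment,                      // experiment identifier
    step,                            // global training step
    time: new Date().toISOString(),  // wall-clock timestamp
    metrics,                         // e.g. { loss: 1.23, bleu: 0.31 }
    attention,                       // optional attention weights, etc.
  };
}
```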

What do you think?

Updated 18/08/2017 15:43 3 Comments

Resolving duplicate field names


In the following code:

indexer = BaseIndexer::
        theIndex := 0.

        int len := 0.                                    
        $owner readLengthTo vint:len.

        theLength := len.

theLength is a field name both in BaseIndexer and the nested class owner (list template)

Currently it is not possible to say explicitly which of these two fields should be used. The compiler treats it as the nested owner field.

Updated 18/08/2017 07:44

add keyboard key style


Proposed usage:

<pre>when the user types a command and then presses Enter{:.keyboard} …</pre>

The CSS is borrowed from StackExchange, which implements this with the <kbd> tag.

Block inline attribute lists are already used by SWC for e.g. the codeblocks, so I gather that this span inline attribute list implementation is preferred over an HTML tag for SWC material.

SWC prefers verbose identifiers (.objectives, .keypoints, etc.), so I think {:.keyboard} would be preferred over {:.kbd}.

In practice it looks like this:


This would settle some confusion about how to format keyboard key names, such as swcarpentry/shell-novice#561.

Updated 18/08/2017 22:18 5 Comments

Windows Subsystem for Linux is still showing in Beta after Creators Update

  • Your Windows build number: (Type ver at a Windows Command Prompt) 17063 (From About Section in System setting)

10.0.15063 (When I type ‘ver’ in cmd)

  • What you’re doing and what’s happening: (Copy&paste specific commands and their output, or include screen shots)

It’s Windows 10 Home. I recently got the Creators Update. By the sound of it, WSL should come out of Beta with this update, but even after the update it was still showing as Beta. So I contacted MS Support and got the following instructions via email to “cleanly” update my PC for the Creators Update.

Here are the steps to do an in-place installation:

First, disable all antivirus software.

Copy and paste this link :
to a browser to download and RUN the file. Once done, just follow the steps to run the troubleshooter, then proceed with the next steps:

  1. Go to this link :
  2. Choose "Download tool now"

  (or simply download and run this direct link on 1703 : -)

  3. After downloading, run the application.
  4. Choose "I accept"
  5. Choose "Upgrade this PC now"
  6. Click on…next…
  7. Then wait to finish the installation and restart the PC
  8. After the restart, that's it: your computer will be back to normal and corruption free

After following these instructions and “Cleanly” updating my PC, WSL is still showing as Beta.

I had to enable “Developer Mode” to get WSL/Bash working.

  • What’s wrong / what should be happening instead:

WSL should be out of Beta. It’s still in Beta.

Updated 18/08/2017 17:26 4 Comments

Unconscionable Grabber use problem solving discussion


There are a couple of problems which happen because of the way Grabber currently works, namely the unlimited freedom of users to do anything with every source listed in Grabber. This IS a very big problem: because of unfair use of Grabber by some users, there are many problems for other Grabber users, site users, and even the Grabber developer.

Here I show some of these problems:

Without any limits, every Grabber user can easily make a DoS attack on the sources, even without evil intent, and several Grabber users together can even make a DDoS attack. In the current Grabber version there is nothing that can stop them from doing this.

Almost 80% of all submitted Grabber crashes happen when a user is trying to download an extremely big amount of images (all images with a very popular tag like “touhou”, all images from one booru, or even all images from all boorus). This means that fair-use users almost never face such problems, while unfair users cause problems both for themselves and for the dev, who needs to check every single crash.

Even one user trying to download all images from a booru is equivalent to hundreds of fair-use Grabber users and thousands of normal site users. And now, according to the crash reports, there is much more than one such user!

Such a big parasitic server load from these users disturbs source owners and motivates them to ban downloads for all Grabber users, or even for everyone (like the case with sankaku and …). So because of a small number of unfair users, all the other fair-use users suffer.

And that’s not the full list of serious problems caused by unlimited use of Grabber. This is a classic game-theory problem caused by unconscionable use of a limited common resource. So I want to discuss how to solve these problems before it’s too late.

Updated 19/08/2017 21:52 11 Comments

Add tests


So first of all, this adds a package (which I didn’t vendor), so you’ll need to go get it before you can run go test.

The goconvey documentation has a load of information about it. I’m a fan of typing goconvey in the /beacon/ directory, which opens up a web page that shows you great information about your testing, coverage, etc. and runs a watch to auto-refresh on save.

I understand we don’t want new packages since we want to be newbie friendly so if you’d like, I’ll take it out if you decline this PR. Mostly I really wanted to get our test coverage up higher so that when new people attempt to make changes, the tests should help them know if they’ve broken something.

I have a few ideas on how to get our test coverage higher but figured I’d share this as a starting point.

Updated 18/08/2017 20:00 1 Comments

Why does nilearn restrict plotting adjacency matrices to symmetric ones?


Is there any reason not to remove lines 1333-1334 from nilearn/plotting/

    if not np.allclose(adjacency_matrix, adjacency_matrix.T, rtol=1e-3):
        raise ValueError("'adjacency_matrix' should be symmetric")

The reason I ask is that I’m trying to use nilearn to plot adjacency matrices from dMRI-generated structural connectomes, which are fundamentally asymmetric (when using probabilistic tracking). I think nilearn’s plotting would benefit greatly if it were more flexible in this regard.
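A hedged illustration of the quoted guard and a common interim workaround (symmetrizing the matrix before plotting); the check mirrors the lines quoted above, but the helper name is just for this sketch, not nilearn's API:

```python
import numpy as np


def check_symmetric(adjacency_matrix):
    # Mirrors the quoted guard: reject matrices that aren't symmetric
    # to within a relative tolerance.
    if not np.allclose(adjacency_matrix, adjacency_matrix.T, rtol=1e-3):
        raise ValueError("'adjacency_matrix' should be symmetric")


# A probabilistic-tractography structural matrix is typically asymmetric:
a = np.array([[0.0, 0.8],
              [0.3, 0.0]])
try:
    check_symmetric(a)
    rejected = False
except ValueError:
    rejected = True

# Workaround until plotting accepts asymmetric input: symmetrize first,
# at the cost of losing directional information.
a_sym = (a + a.T) / 2.0
check_symmetric(a_sym)  # passes
```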

Curious to hear your thoughts on this.


Updated 18/08/2017 21:19 6 Comments

Logo for gitwatch org


Hey guys, we need a logo for gitwatch org. This is not urgent, but it’s cool to give a face to the org. Any ideas to help define the logo?

CC @nevik, @dmusican, @salanki, @jwerle, @ajthemacboy and @AndrewS-Vbosch. Feel free to ignore this thread if you don’t have time.

Updated 19/08/2017 10:11 5 Comments

<a> tag styling not possible.


Bug, feature request, or proposal:

Feature request.

What is the expected behavior?

Style <a> tags using color=“primary”, “accent”, “warn” or something like class=“md-primary”.

What is the current behavior?

If you put color=“primary” it doesn’t work. The link stays ugly.

What is the use-case or motivation for changing an existing behavior?

Match the Material 1 behavior: Material Design 1 - Issue

Which versions of Angular, Material, OS, TypeScript, browsers are affected?

Material 2 Beta 8.

Updated 17/08/2017 21:23 1 Comments

Heroku deploy


Hey folks, I added an integration of the repository with Heroku (a free account for now) just to keep the project online. The deploy only happens after the CI services give their OK and the PR is merged into master.

The app link:

Updated 17/08/2017 20:16 2 Comments

No output when using Quantum as a bundle plugin


When Quantum is not specified at FuseBox.init but rather in a bundle fuse.bundle("myBundle").plugin(...), no output file is produced (in the dist folder) without warning nor error.

Note: If one bundle is using Quantum and a second one is made afterwards, even without Quantum, the second one has no output either.

Updated 18/08/2017 03:03 1 Comments

DatabaseSeeder task broken


See error in screenshot below:


The backtrace in the screenshot mentions line 35:

This is caused by the following four use statements pointing nowhere:

use Phanbook\Databases\UsersSeeder;
use Phanbook\Databases\PostsSeeder;
use Phanbook\Databases\TagsSeeder;
use Phanbook\Databases\PostsTagsSeeder;

The last instance of them in the repo is from almost exactly 2 years ago:
Commit: 01226ab

They were removed 3 days later:
Commit: c71a76a

Right now the task simply fails due to the error. We could reinstate the seeding files so the task has something to do, and/or revise the task. We could even aim at a different directory structure altogether. I like the idea of going in the Laravel direction and having a “database” directory to toss our migrations in. I think the schema approach makes it hard to track database changes. What do you guys think?

1) Remove /schema and reinstate /databases?
2) Add /databases so we have both /databases and /schema?
3) Or leave the directory structure alone and remove the task?

The schema right now is a complete mess. You can’t even run the two SQL files back-to-back without encountering SQL errors related to the primary key already being used by multiple tables. Once it fails and you try to run it again, the tables already exist because it’s not run in a transaction. Does anyone have any reasons to keep /schema, or any reasons against reinstating /databases? I think consistency is the better choice. Also, if we’re going to head in the Laravel direction, I think we should call it /database rather than the plural /databases.


I want to drop /schema, reinstate /databases, then drop the “s”, renaming it to /database.

Updated 18/08/2017 20:27 1 Comments

Memory usage when sending the exact same message to multiple clients



I was working on a server that has to send the same message to all the connected clients, something like a broadcast.

I’ve noticed that the memory usage was going high very quickly and the garbage collection was happening too often.

Examining the code shows that in order to send a single byte array, a ByteBuffer object, a BinaryFrame, a List<Framedata>, an ArrayList<ByteBuffer>, and at least one new ByteBuffer are created.

So, in this case where one may want to send the exact same content to many clients I guess that it is better not to recreate the objects with the same content for each recipient.

A simple solution that I have in mind is making the write() method in WebSocketImpl public. Then one can create the message (List<ByteBuffer>) once and write it to each client. Here is an example:

public void broadcast(byte[] buffer, Collection<WebSocket> clients) {
    if (!clients.isEmpty()) {
        // Build the outgoing frames once, using the draft of the first client
        Draft draft = clients.iterator().next().getDraft();
        List<Framedata> frames = draft.createFrames(ByteBuffer.wrap(buffer), true);
        List<ByteBuffer> outgoingFrames = new ArrayList<ByteBuffer>();
        for (Framedata f : frames) {
            outgoingFrames.add(draft.createBinaryFrame(f));
        }
        // Write the same pre-built frames to every client
        for (WebSocket client : clients) {
            ((WebSocketImpl) client).write(outgoingFrames);
        }
    }
}

I have tested this thought, and by profiling a test application it seems to work OK.

Do you find this approach OK? I don’t feel comfortable making a private method public. Do you have any suggestions to achieve this in a better way?

Updated 18/08/2017 10:36 2 Comments

How to handle mutable highlights for less obvious cases


See this tweet by @eiriktsarpalis:

The binding to an array is immutable; its contents are mutable. Not highlighting the binding is correct insofar as the binding itself cannot be changed in place, but because you can modify the values inside the array, it can also “feel” inconsistent with the highlighting that mutable bindings get.

Arrays aren’t the only case where this can seem a bit inconsistent. You can have mutable properties on objects in F#:

type C() =
    member val P = 0 with get, set

let c = C()
c <- C() // Will not compile
c.P <- 1 // Will compile just fine

One could make the case that P should be highlighted as mutable in this case. But should every value in an array be highlighted like that as well? What would that look like?

Generally speaking, it’s not clear to me how cases like this should be highlighted. @dsyme makes the point that consistency matters most here, and I agree with that. I also think that we should consider any change with configuration in mind, and whether something is on or off by default. Would love to hear the thoughts of others here.

Updated 17/08/2017 22:40 3 Comments

Sensor relocation following sorghum 4 harvest


This issue is to serve as a warning and discussion area for upcoming changes to sensor locations in the sensor box. I’m not sure of all the methods the downstream extractors are using but be prepared for slightly differently structured data after this season!

Sensors “confirmed” to be moved are:

  • Top 3D laser scanners will be moved around 2cm south in the sensor box
  • NDVI/PRI and CROPCircle will be moved so they share y-axis coordinates with the top StereoVis
  • Side 3D lasers moved… somewhere not where they are now

Sensors “in priority discussion” for possible movement are:

  • FLIR to better clear sensor box shade
  • VNIR/SWIR for shading issues

Other sensors being discussed:

  • possibly moving/repurposing side 3D laser to sensor box for x-axis scanning
  • relocating side StereoVis to support photogrammetry/ work better with x-axis scan paradigm
  • any other suggestions?

Is there a sensor change or addition that would make your life easier or would improve the output? What is working well now/ could possibly break extractors that we shouldn’t change?

Ideally I’d like to hear from everyone downstream (even if you think the gantry is perfect the way it is and you don’t want anything more! 👍 )

  • [ ] move top 3D

  • [ ] move PRI/NDVI cluster

  • [ ] move CROPCircle

  • [ ] decide on FLIR and move if necessary

  • [ ] decide on SWIR/VNIR and move if necessary

  • [ ] discuss/decide on other sensors

Updated 17/08/2017 19:02

/backend/store produces 404


Navigate to /backend/dashboard and you’ll notice a link to “store” in the navigation:


Upon clicking it, you get a 404 page:


Judging by the navigation link, it seems like there’s supposed to be a StoreController:

There’s no such controller in the repo, and I can’t find one in the history either.
Any input?

Updated 18/08/2017 20:28 1 Comments

Rename Marionette.Object to Marionette.Service


For v4 we want to have named exports

However this is complicated for both Object and Error. Error seems like no big deal to rename to MarionetteError or something. It’s a lib that maybe can be utilized by Marionette plugins, but isn’t really that useful to anyone else… I could also see just privatizing it.. easy enough for someone to make their own.

But for Object.. this has always been a naming issue.. originally called Marionette.Controller it was just meant to be a generic thing to acts as a mediator for things.. it wasn’t a view.. it wasn’t a model.. so it was named the “C” in MVC.. however that wasn’t a great name.. because the view is also essentially the “C” and a Marionette.Controller wasn’t strictly speaking the “C” either. So the next best generic name we could come up with was Object.. which worked great except now that we want to export it, it’s a reserved name. bummer.

Since that time, the radio interface was added to Object giving it a little more purpose. I believe the name “Service” is still very generic.. it likely describes what an Object is doing in most cases, and it highlights the radio interface. There is precedence for this naming here in the POC library for the radio interface that landed for v3.

Bottomline We pretty much must rename Object to something. Service seems like the next best thing to me. Thoughts?

Updated 18/08/2017 08:56 6 Comments

Remove `%put` magic


With %get --from, I do not see a reason to use %put, especially since %put is usually used after a cell completes but the magic has to be specified at the beginning of a cell; thus we would have to start a new cell just to use %put.

Updated 17/08/2017 15:24

Spinning out the Plush runtime as a standalone library (std/runtime) ?


This issue is to discuss a potential idea.

One of the things I’m working towards is fully bootstrapping the Plush implementation. There are currently two implementations of the language. The cplush one, which is written in C++, and the self-hosted Plush language package (in plush/plush_pkg.pls).

I’ve been slowly working on giving the VM the ability to serialize code into ZIM files, which would enable us to compile code into ZIM files that don’t need to be parsed in order to run on Zeta. At this point, the Plush package is already able to parse itself in memory. It’s actually surprisingly fast, taking only about 1.4 seconds on my laptop.

I did run into a snag though, which is that currently, cplush bakes the Plush runtime (plush/runtime.pls) into compiled files. The Plush package then makes use of the baked functions directly. This doesn’t really work when the package parses itself. What would make more sense, it seems, is to have the Plush runtime be its own package.

This brings me to a question which I would like your opinion about: where should I put the Plush runtime package? One possibility is that I could directly put it into std/runtime. This runtime library could become not just the Plush runtime, but a collection of useful runtime functions which multiple languages can use.

However, there is a risk that no matter what, the library will always remain very Plush-specific. Hence maybe it shouldn’t be named std/runtime. Possibly, this should be a sub-package of lang/plush, but we currently do not really have support for these. It’s not something I’ve given much thought to. Possibly, we can simply allow packages to use the current versioning scheme, but to live within the path of other packages. As such, there could be a lang/plush/runtime/0. This would be versioned independently of lang/plush/0.

Trying to think about the future development of our ecosystem, having sub-packages may be inevitable. If we think about having a Lua implementation, for instance, we will need to implement the Lua standard library. This will have to live within one or more packages as well. It might make more sense for such packages to live within lang/lua/stdlib/* rather than under lang/lua-stdlib.

Updated 17/08/2017 15:25

Crazy checker.ts refactor experiment (25kloc)


I have made a tool-assisted refactoring experiment on the TypeChecker. The goal is to convert checker.ts from an encapsulated JS scope to a class, for better extensibility and publicity (#17680).

All tests are passing, excluding linting (because of the auto conversion). We can fix the linting issues later.

How I refactored 25k lines of code:

  1. All types moved to ts namespace out of createTypeChecker;
  2. Class TypeCheckerImpl was created.
  3. All variables from createTypeChecker moved to public properties of TypeCheckerImpl;
  4. Property initializers were moved to the constructor (because of constructor parameter dependencies). Declarations now look ugly, because public propName = (false as true) && this.someMethod(); is the only way to auto-infer the type of a function return value; we can fix it later.
  5. All nested functions from createTypeChecker converted to public methods of TypeCheckerImpl;
  6. For all converted functions, the “this” keyword was auto-inserted.
  7. For methods that contain nested functions I had to create an ugly conv_self to close over “this”.
  8. Added ugly code to bind all methods. We can fix it later.
  9. Now TypeCheckerImpl instance is not exposed to public, but I want to do it later.

Here is the commit in my fork: Unfortunately, GitHub can’t show the diff, and that is one more point for the necessity of this refactoring.

Raw checker.ts before


This is only an experiment.

I have spent around 16 hours on these transforms. I think this experiment is successful. For a better TypeScript future I think checker.ts should be separated into multiple files/classes, and this is the first step. This code is not merge-ready. Of course we should think about which methods to provide in the public API, and maybe unsafe access for people who do really complicated transformations.

What do you think about it?

I can dedicate more time, improve the documents, and publish the tools I’ve made if more people support this experiment.

Updated 18/08/2017 12:59 4 Comments

Separate user data directory for Godot 3


We already have config and layouts files named differently for 3.0, but I wonder if it would be better to have a separate folder for Godot 3.

Reason: to prevent possible conflicts (future or present but not already noticed, like maybe export templates) and to add the ability to uninstall or clean Godot 2 and 3 user data separately.

Now the folder is pretty confusing regarding what files/subdirectories belong to each branch.

Updated 19/08/2017 18:19 16 Comments

Proposal: extend tuple projection initializers to preserve element names through unnamed tuples


(This proposal is a rewording of a comment from #415)

Currently, as of v7.1, the compiler can “project” tuple names across expressions. However, this “breaks down” the moment a tuple with unnamed elements is introduced. For example, the following code won’t compile:

```cs
(T1, T2) Bar<T1, T2>() => (a: default, b: default);

var x = Bar<int, int>();
var y = x.a; // Bar returns (int, int), not (int a, int b), so a cannot be accessed.
```

There is a use case for being able to preserve names across an unnamed-tuple “boundary”, and that is when method chaining with methods that take a generic tuple as a parameter.

This can be explained without a complex method chaining example, using the following simple code:

```cs
public static void Foo()
{
    (T1, T2) Bar<T1, T2>((T1, T2) value) => value;

    var tuple = (a: 1, b: 2);
    var newTuple = Bar(tuple);
    var a = newTuple.a; // Since we passed (a, b) in, it would be extremely useful to
                        // get (a, b) back out again
}
```

As ValueTuple<T1, T2> is a struct, it isn’t possible to just use Bar<T> and constrain T using generic constraints. The only current solution is to use an unconstrained T, but then the method can’t be constrained to a tuple and the elements are not accessible from within Bar.


To give a possibly clearer example, take the following extension method for (T1, T2):

```cs
public static TR Foo<T1, T2, TR>(this (T1, T2) t, Func<(T1, T2), TR> f) => f(t);

(a: 1, b: 1).Foo(t => t.a == t.b); // doesn’t compile as the compiler doesn’t “project”
                                   // a & b through the (T1, T2) type declaration
```

Updated 17/08/2017 17:15 9 Comments

Don't always activate the BuildVision window


The following is related to the activation of the BuildVision window when the build process starts. Sometimes, when I have a damned stubborn compilation error, I just compile a single C++ file several times (after modifications), without linking. In this case I would rather see the Output window. In this situation the BuildVision window hides the information I want to see, again and again. Question: would it be possible to avoid activating the BuildVision window when I compile just a single file? Better: I suggest activating the BuildVision window ONLY when the user builds the entire solution, not when he compiles a single file and also not when he builds a single project of the solution.

Updated 18/08/2017 06:22

Access warnings like errors from BuildVision


If you have compilation errors you can display them by clicking on the row and jump into the code from there. If you have only warnings you must instead go into the Output window and search for them there. This is not good. You should handle errors, warnings and messages (#pragma message) in the same style.

Perhaps it would be good to limit the number of warnings shown, because many developers seem to ignore warnings and accumulate a very big number of them.

Perhaps it would also be good to have an option to use this new icon or not.

Updated 18/08/2017 06:22

New icon for build projects having warnings


One thing I always missed is that warnings are not visualized by a colorful icon. In C++ there are a lot of warnings which have the importance of errors. The user should notice (and handle) them very seriously. But BuildVision gives him only a very tiny number in a column. You can easily overlook it. I would like to see a yellow icon instead of the green checkmark on the left side of each project which tells me “Hey, you have warnings! Look at them!”.

Updated 18/08/2017 07:40 3 Comments

BokehRenderer.get_plot should use curdoc in server mode


Currently you have to manually pass a bokeh Document to BokehRenderer.get_plot to ensure that streams can communicate correctly. It’s possible we could just always attach the curdoc to the holoviews plot if the renderer is in server mode, avoiding surprising errors with linked streams. That said, I’ll have to investigate whether there’s ever a reason to pass a manually created Document in, in which case we’d at least have to keep the doc argument.

Updated 17/08/2017 14:27 1 Comments

GUI for rdesktop


It would be really nice and usable to create a GUI frontend for rdesktop, either standalone or built in, using GTK or an equivalent GUI toolkit.

Over time there have been many UI frontends available, but many of them have moved away from rdesktop in favor of freerdp. Still, rdesktop is a graphical application and requires X11, so it is not optimal to only have a terminal interface.

The main advantage is that a UI would simplify choosing between the different authentication methods and other options that are now passed via the command line. It would also be nice to launch rdesktop without a terminal active.

With that said, I personally think the approach should be to port the X11 UI code over to GTK to start with and then tackle the UI dialogs, starting with the login form and options.

Another approach would be to include and ship one of the numerous frontends for rdesktop, but they differ widely in quality and many of them are not rdesktop-only but manage connections using different programs, e.g. vnc, freerdp, etc. Those multi-protocol managers are out of the question because they are too generic.


Want to back this issue? Post a bounty on it! We accept bounties via Bountysource.

Updated 18/08/2017 15:38 3 Comments

The future of this project


First, let me apologize for my inactivity. I know I’ve been quite sluggish in the maintenance of this project, but I’m basically working on this alone in my spare time so I hope you all can understand. I plan to resolve the open issues and rewrite the project using the latest version of Material Theme (hopefully within the month).

However, I’ve been thinking about the possible future of this project. One thing I’m not a fan of is the fact that if you install Materialize, you are basically installing every single theme variation, even the ones you aren’t using which is kind of wasteful. The number of themes has just been increasing continuously as I’ve been accepting more theme requests (it looks like there are currently 27). This is not ideal, considering that each theme has a bunch of assets and images associated with it (tabs, backgrounds, icons, etc).

I’m looking for advice from the community on how to proceed from here. Theme requests are still coming in today, and I’m a bit reluctant to accept them. Even the way it currently is today is way too much. If possible, I would like to allow the user to only install the theme that they want, but I don’t know if there is a way to do “subpackages” in Package Control. Does anyone have any knowledge about this?

Another option is to deprecate this project, and split it into individual theme packages. However, I don’t know how the folks over at @packagecontrol would feel about me suggesting they add 30 almost identical themes to the repository.

A third option could possibly be a sublime plugin that would allow you to select a theme, and then once you have chosen it, it would remove all the unused themes/schemes/assets. I don’t know very much about the plugin API, so this would need to be looked into.

The fourth option is to do nothing and just continue as-is. I’m basically looking for some feedback from the community on what a good way to proceed from here would be. If any of you have knowledge about the plugin system or package control’s capabilities, I would really appreciate it.

Updated 17/08/2017 09:48



I use BuildVision for years and it’s a good help in Visual Studio.

One thing I always missed is that warnings are not visualized by color. In C++ there are a lot of warnings which have the importance of errors. The user should notice (and handle) them very seriously. But BuildVision gives him only a very tiny number in a column. You can easily overlook it. I would like to see a yellow icon instead of the green one on this row which tells me “Hey, you have warnings! Look at them!”.

Next is accessing warnings from the BuildVision window: if you have compilation errors you can display them by clicking on the row. If you have only warnings you must instead go into the Output window and search for them there. This is not good. You should handle errors, warnings and messages (#pragma message) in the same style.

The following is related to the activation of the BuildVision window when the build process starts. Sometimes, when I have a damned stubborn compilation error, I just compile a single C++ file several times (after modifications), without linking. In this case I would rather see the Output window. In this situation the BuildVision window hides the information I want to see, again and again. Question: would it be possible to avoid activating the BuildVision window when I compile just a single file? Better: I suggest activating the BuildVision window ONLY when the user builds the entire solution, not when he compiles a single file and also not when he builds a single project of the solution.

Thanks for reading.

Updated 17/08/2017 13:49 6 Comments

[Proposal] null guards for nullable types


The following code does not compile:

        private int Test(int? value)
        {
            if (value != null)
                return value + 5;

            return 0;
        }

It should be clear to the compiler that inside the truthy branch of the if statement value is of type int, not int?, and that after the if statement it is null. This would be of only slight use for the current nullable types, but with the next version of C#, where everything can be nullable, this would be of greater value.

This would be sugar for the following:

        private int Test(Nullable<int> value)
        {
            if (value != null)
            {
                int _value = value.Value;
                return _value + 5;
            }

            return 0;
        }
Updated 18/08/2017 15:51 8 Comments

How to remove the Quantum API code from the dist file? How to configure this?

In bundle.js I get some code (the Quantum API file) like this; how can I remove it? Because I have a lot of independent components, I don't want each component to have this code. I also need to remove Object.defineProperty(exports, '__esModule', { value: true }); or
exports.__esModule = true from the bundle.

// code wrapper

// api file
(function() {
    if (window.$fsx) {
        return;
    }
    var $fsx = window.$fsx = {};
    $fsx.f = {};
    // cached modules
    $fsx.m = {};
    $fsx.r = function(id) {
        var cached = $fsx.m[id];
        // resolve if in cache
        if (cached) {
            return cached.m.exports;
        }
        var file = $fsx.f[id];
        if (!file) {
            return;
        }
        cached = $fsx.m[id] = {};
        cached.exports = {};
        cached.m = { exports: cached.exports };
        file(cached.m, cached.exports);
        return cached.m.exports;
    };
})();
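For reference, the cached-module pattern the snippet above implements can be sketched in a few lines (a hypothetical mini-loader to illustrate $fsx.r, not FuseBox's actual API):

```python
# Hypothetical mini-loader illustrating the $fsx.r pattern:
# each module factory runs once; later requires hit the cache.
files = {}   # module id -> factory function (like $fsx.f)
cache = {}   # module id -> populated exports  (like $fsx.m)
calls = {"n": 0}


def require(module_id):
    if module_id in cache:
        # resolve if in cache
        return cache[module_id]
    factory = files.get(module_id)
    if factory is None:
        return None
    exports = cache[module_id] = {}
    calls["n"] += 1
    factory(exports)
    return exports


files["greet"] = lambda exports: exports.update(hello=lambda: "hi")
first = require("greet")
second = require("greet")
```

The second require returns the cached exports object without rerunning the factory, which is exactly why the wrapper guards against redefining window.$fsx.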
Updated 17/08/2017 11:04 3 Comments

Default value for Select with remote-method


When a Select has a remote-method, what should the default value be for: an empty value (nothing selected), and a pre-selected value?

In the case we initiate the select with nothing selected, an empty string '' blocks the select and the dropdown doesn’t open (see jsFiddle). But if I use null as the value then it works as expected. If I pass an empty object the placeholder shows [object Object].

Question #1 (not-selected initiation): I think null is OK for default empty value. Can we add that to the docs?

In the case of a pre-selected value, if I pass only the value as a string it will not display the label (expected, since options are async). If I pass an object like {value: 11, label: 'Some label'} I get [object Object] in the placeholder. I could pass the label in the props, but since the select knows the method that fetches the values, the value should be enough.

Question #2 (pre-selected initiation): Can we have an object with value and label as the default value, or should we use a string (the value key) and make the select call the remote-method and do a .find() on the selected option?

Vue.js: v2.4.0 iView: v2.1.0 jsFiddle: here

Updated 18/08/2017 16:03 9 Comments

Connector's message format to kafka


Proposal for the message format:

  • type: string (“block”, message type)
  • blockchain: string (bitcoin, ethereum)
  • branch: string (ex. main || testnet for bitcoin; main || testnet:ropsten || testnet:kovan for ethereum, etc.)
  • block_hash: string (ex. bitcoin main 000000000000000000fe316680c703c4464c8963253ad3feb3be1cfd129e107b)
  • block_number: integer (ex. bitcoin main 480899)
  • body: JSON which contains raw block data and raw txs
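To make the shape concrete, here is a sketch of one such message as it might be serialized before being produced to Kafka (field values taken from the examples above; the body contents and the producer call itself are placeholders):

```python
import json

# Candidate connector message, following the proposed fields.
message = {
    "type": "block",
    "blockchain": "bitcoin",
    "branch": "main",
    "block_hash": "000000000000000000fe316680c703c4464c8963253ad3feb3be1cfd129e107b",
    "block_number": 480899,
    "body": {"raw_block": "...", "raw_txs": []},  # raw block data and raw txs
}

# Kafka values are bytes; JSON keeps the body schemaless.
payload = json.dumps(message).encode("utf-8")
decoded = json.loads(payload)
```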

Updated 17/08/2017 07:32

Mapping the UCD into unic-ucd subcrates


UAX44 § 5.1 Property Index gives a list of UCD properties. For convenience, I have reproduced below those which are intended for exposure in library APIs. This issue will serve as a tracking list for exposing those. Each property is also given a type in UAX44 § 5.3 Property Definitions (one of Catalog, Enumeration, Binary, String, Numeric, or Miscellaneous). For definitions of those, see UAX44 § 5.2. The type is also included in the below table for ease of reference.

<details><summary><strong>Property Index</strong></summary>

    General

  • [ ] Name (Miscellaneous)
  • [ ] Name_Alias (Miscellaneous)
  • [ ] Block (Catalog)
  • [X] Age (Catalog)
  • [X] General_Category (Enumeration)
  • [ ] Script (Catalog)
  • [ ] Script_Extensions (Miscellaneous)
  • [ ] White_Space (Binary)
  • [ ] Alphabetic (Binary)
  • [ ] Hangul_Syllable_Type (Enumeration)
  • [ ] Noncharacter_Code_Point (Binary)
  • [ ] Default_Ignorable_Code_Point (Binary)
  • [ ] Deprecated (Binary)
  • [ ] Logical_Order_Exception (Binary)
  • [ ] Variation_Selector (Binary)

    Case

  • [ ] Uppercase (Binary)
  • [ ] Lowercase (Binary)
  • [ ] Lowercase_Mapping (String)
  • [ ] Titlecase_Mapping (String)
  • [ ] Uppercase_Mapping (String)
  • [ ] Case_Folding (String)
  • [ ] Simple_Lowercase_Mapping (String)
  • [ ] Simple_Titlecase_Mapping (String)
  • [ ] Simple_Uppercase_Mapping (String)
  • [ ] Simple_Case_Folding (String)
  • [ ] Soft_Dotted (Binary)
  • [ ] Cased (Binary)
  • [ ] Case_Ignorable (Binary)
  • [ ] Changes_When_Lowercased (Binary)
  • [ ] Changes_When_Uppercased (Binary)
  • [ ] Changes_When_Titlecased (Binary)
  • [ ] Changes_When_Casefolded (Binary)
  • [ ] Changes_When_Casemapped (Binary)

    Numeric

  • [ ] Numeric_Value (Numeric)
  • [ ] Numeric_Type (Enumeration)
  • [ ] Hex_Digit (Binary)
  • [ ] ASCII_Hex_Digit (Binary)

    Normalization

  • [X] Canonical_Combining_Class (Numeric)
  • [X] Decomposition_Type (Enumerated)
  • [ ] NFC_Quick_Check (Enumerated)
  • [ ] NFKC_Quick_Check (Enumerated)
  • [ ] NFD_Quick_Check (Enumerated)
  • [ ] NFKD_Quick_Check (Enumerated)
  • [ ] NFKC_Casefold (String)
  • [ ] Changes_When_NFKC_Casefolded (Binary)

    Shaping and Rendering

  • [ ] Join_Control (Binary)
  • [ ] Joining_Group (Enumerated)
  • [ ] Joining_Type (Enumerated)
  • [ ] Vertical_Orientation (Enumerated)
  • [ ] Line_Break (Enumerated)
  • [ ] Grapheme_Cluster_Break (Enumerated)
  • [ ] Sentence_Break (Enumerated)
  • [ ] Word_Break (Enumerated)
  • [ ] East_Asian_Width (Enumerated)
  • [ ] Prepended_Concatenation_Mark (Binary)

    Bidirectional

  • [X] Bidi_Class (Enumerated)
  • [ ] Bidi_Control (Binary)
  • [ ] Bidi_Mirrored (Binary)
  • [ ] Bidi_Mirroring_Glyph (Miscellaneous)
  • [ ] Bidi_Paired_Bracket (Miscellaneous)
  • [ ] Bidi_Paired_Bracket_Type (Enumerated)

    Identifiers

  • [ ] ID_Continue (Binary)
  • [ ] ID_Start (Binary)
  • [ ] XID_Continue (Binary)
  • [ ] XID_Start (Binary)
  • [ ] Pattern_Syntax (Binary)
  • [ ] Pattern_White_Space (Binary)

    CJK

  • [ ] Ideographic (Binary)
  • [ ] Unified_Ideograph (Binary)
  • [ ] Radical (Binary)
  • [ ] IDS_Binary_Operator (Binary)
  • [ ] IDS_Trinary_Operator (Binary)
  • [ ] Unicode_Radical_Stroke (Miscellaneous)

    Miscellaneous

  • [ ] Math (Binary)
  • [ ] Quotation_Mark (Binary)
  • [ ] Dash (Binary)
  • [ ] Sentence_Terminal (Binary)
  • [ ] Terminal_Punctuation (Binary)
  • [ ] Diacritic (Binary)
  • [ ] Extender (Binary)
  • [ ] Grapheme_Base (Binary)
  • [ ] Grapheme_Extend (Binary)
  • [ ] Regional_Indicator (Binary)
  • [ ] Indic_Positional_Category (Enumerated)
  • [ ] Indic_Syllabic_Category (Enumerated) </details><br>

These need to be partitioned into subcrates. Some properties clearly fit into one of the crates already implemented or planned. Below is the list of UCD crates planned, along with the properties which they are most likely to contain. How these properties are exposed is a separate question, which this issue does not intend to address. Properties marked “(??)” are included because this is a logical place to put the property, but they need further consideration.

Note that these crates may include more tables than those listed here, namely those which are contributory and thus excluded from this listing.


None. The version of Unicode. </details><details><summary><strong>age</strong></summary>

  • Age </details><details><summary><strong>name</strong></summary>

  • Name

  • (??) Name_Alias (??) </details><details><summary><strong>category</strong></summary>

  • General_Category </details><details><summary><strong>block</strong></summary>

  • Block </details><details><summary><strong>script</strong></summary>

  • Script

  • Script_Extensions </details><details><summary><strong>normal</strong></summary>

  • Canonical_Combining_Class

  • Decomposition_Type </details><details><summary><strong>normal-quickcheck</strong></summary>

  • NFC_Quick_Check

  • NFKC_Quick_Check
  • NFD_Quick_Check
  • NFKD_Quick_Check </details><details><summary><strong>case</strong></summary>

  • Uppercase

  • Lowercase
  • Lowercase_Mapping
  • Titlecase_Mapping
  • Uppercase_Mapping
  • Cased
  • Case_Ignorable </details><details><summary><strong>case-quickcheck</strong></summary>

  • Changes_When_Lowercased

  • Changes_When_Uppercased
  • Changes_When_Titlecased
  • Changes_When_Casefolded
  • Changes_When_Casemapped </details><details><summary><strong>grapheme</strong></summary>

  • Grapheme_Base

  • Grapheme_Link </details><details><summary><strong>numeric</strong></summary>

  • Numeric_Value

  • Numeric_Type
  • (??) Hex_Digit (??)
  • (??) ASCII_Hex_Digit (??) </details><details><summary><strong>bidi</strong></summary>

  • Bidi_Class

  • (??) Bidi_Control (??)
  • (??) Bidi_Mirrored (??)
  • (??) Bidi_Mirroring_Glyph (??)
  • (??) Bidi_Paired_Bracket (??)
  • (??) Bidi_Paired_Bracket_Type (??) </details><details><summary><strong>joining</strong></summary>

  • Join_Control

  • Joining_Group
  • Joining_Type </details><details><summary><strong>ea-width</strong></summary>

  • East_Asian_Width </details><br>

This leaves the following list of properties which should be exposed, but don’t have a definite home yet. (Properties included in the above listings with a (??) indicating inconclusive placement are not re-included here.)

<details><summary><strong>Homeless Properties</strong></summary>


  • White_Space
  • Alphabetic
  • Hangul_Syllable_Type
  • Noncharacter_Code_Point
  • Default_Ignorable_Code_Point
  • Deprecated
  • Logical_Order_Exception


  • Case_Folding
  • Simple_Lowercase_Mapping
  • Simple_Titlecase_Mapping
  • Simple_Uppercase_Mapping
  • Simple_Case_Folding
  • Soft_Dotted



  • NFKC_Casefold
  • Changes_When_NFKC_Casefolded

    Shaping and Rendering

  • Vertical_Orientation
  • Line_Break
  • Sentence_Break
  • Word_Break
  • Prepended_Concatenation_Mark



  • ID_Continue
  • ID_Start
  • XID_Continue
  • XID_Start
  • Pattern_Syntax
  • Pattern_White_Space


  • Ideographic
  • Unified_Ideograph
  • Radical
  • IDS_Binary_Operator
  • IDS_Trinary_Operator
  • Unicode_Radical_Stroke


  • Math
  • Quotation_Mark
  • Dash
  • Sentence_Terminal
  • Terminal_Punctuation
  • Diacritic
  • Extender
  • Regional_Indicator
  • Indic_Positional_Category
  • Indic_Syllabic_Category </details><br>

These properties need to be given a home crate before they can be included.

Updated 17/08/2017 05:47 2 Comments

Benchmarks ?


Hello, thanks for an awesome lib. I have been waiting for an async HTTP framework for a long time. If possible, could you publish some benchmarks, so we have some idea of the performance?

Thanks, team.

Updated 18/08/2017 08:10 1 Comments

The new general container type


During the workshop we added a new template argument type GENERAL_CONTAINER_TYPE, which may be either a std::vector or an ArrayView. Part of the discussion in #4723 involves adding an implicit converting constructor from a std::vector<T> to an ArrayView<T>, so (if that goes through) we will not need to instantiate those templates and things will ‘just work’. Alternatively, if we don’t add the constructor, I believe that ArrayView is the correct argument to pass (and it is not too onerous to ask people to write make_array_view(vec) or make_array_view(vec.begin(), vec.end())).

@jppelteret @luca-heltai I believe that you two added this; what do you think of my proposal?

I really like the new functions we added; my only reservations about them are due to the fact that they now take 50% more time for me to compile.

Updated 17/08/2017 13:18 2 Comments

Sprites jitter/jump when moving in _fixed_process()


Operating system or device: Windows 10
Godot version: Godot 3.0 / latest

Issue description: When I move objects at a fixed speed in fixed_process, there is occasional jitter/jumping which happens at a regular interval. When more time has passed than fixed_process() can process (because fixed_process() can only simulate physics in fixed step intervals), the sprite is drawn where it was at the end of the last physics update rather than where it currently is. The ‘accumulated leftover’, or difference between the last physics update and the current time will keep getting larger as the loops get further out of sync, until finally the ‘leftover’ gets greater than 1/60 and some of it can be processed by the physics engine. When this happens, the sprite will ‘jump’ to a more precise location.

This phenomenon is the same one that is described in the “The final touch” section of this article: The author proposes using interpolation to draw the sprite at a position between its position at the end of the most recent physics update and the physics update before that–I think that’s a reasonable approach, but it seems like then your character would always be ‘behind’.

My preferred solution would be, when rendering sprites, to ‘extrapolate’ their position at the time of drawing by starting with what their position was at the end of the last _fixed_process() and then calculating their projected position using their last known velocity. How exactly to extrapolate the position of some sprite is subjective (what if the sprite has an acceleration, etc.) so it might make sense to have the extrapolation be implemented by the user in GDScript.
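A minimal sketch of that extrapolation (plain JavaScript for illustration; a real implementation would live in the renderer or in user GDScript, and the names here are made up):

```javascript
// Draw position = last physics position + velocity * unconsumed ("leftover")
// time, so the sprite keeps moving smoothly between fixed steps.
function extrapolatePosition(lastPos, velocity, leftover) {
  return {
    x: lastPos.x + velocity.x * leftover,
    y: lastPos.y + velocity.y * leftover
  };
}

// With a 100 px/s horizontal velocity and 0.01 s of leftover time,
// the sprite is drawn 1 px ahead of its last physics position.
const drawn = extrapolatePosition({ x: 50, y: 0 }, { x: 100, y: 0 }, 0.01);
```

Unlike the interpolation approach from the article, this never lags behind the physics state, at the cost of occasionally guessing wrong when the velocity changes mid-step.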

Steps to reproduce: Play the example project with a 60 Hz monitor so the game runs at 60 FPS. The ‘leftover delta’ that has accumulated will be printed: notice that, when it finally accumulates to ~0.0167 (1/60), it will ‘roll over’ and an extra physics update will be performed for that frame, causing the position of the sprite to suddenly be corrected and appear as an ugly jump.

Link to minimal example project:

Updated 18/08/2017 19:03 11 Comments

Please Consider to Simplify the Keyword of "await"!


Async is now everywhere, but “await” is not always convenient. I think two things are making trouble:

  1. await is a bit LONG, especially when a lot of async functions are called consecutively; and don’t forget async using and async streams in C# 8, where “await” becomes cumbersome.
  2. await is on the LEFT! Consider how many Java developers envy extension methods; it is just as troublesome when you go to chain async function calls with so many “()”.

  • First Proposition: underscore (_) to simplify “await”:

Thanks to @HaloFour: as the underscore can be used in variable and member names, it is invalid in regard to properties’ await behavior, so it is Not Perfect.

```
async Task<string> NormalAsync() {
    await aAsync();
    await Task.Delay(1000);
    await Task.Run(async () => {
        await Task.Delay(1000);
        // ......
    });
    if ((await getIdentityAsync()).Substring(0, 2) == "aa")
        return null;
    if ((await "aa".Trans().getIdentityAsync()) == null)
        throw new Exception("");
    var str = (await (await (await 5.AAsync()).BAsync()).CAsync());
    return str;
}
```

    async Task<string> SimplifiedAsync() {}

    _ Task<string> SimplifiedAsync() {
        Task.Run(_() => {
            return null;
            throw new Exception("");
        });
        var str = 5.AAsync()_.BAsync()_.CAsync()_;
        return str;
    }

- Second Proposition: ! to simplify "await":

Although it looks pretty good, thanks to @eyalsk we know the "damnit" operator is going to be used for the non-nullable reference types, so............
    async Task<string> SimplifiedAsync() {}

    ! Task<string> SimplifiedAsync() {
        Task.Run(!() => {
            return null;
            throw new Exception("");
        });
        var str = 5.AAsync()!.BAsync()!.CAsync()!;
        return str;
    }

- Third Proposition: .. (double dots) to simplify "await":
thanks to @birbilis, this seems to be a good solution, especially since the dot is easier to type than most other punctuation marks.
    async Task<string> SimplifiedAsync() {}

    .. Task<string> SimplifiedAsync() {
        Task.Run(..() => {
            return null;
            throw new Exception("");
        });
        var str = 5.AAsync()...BAsync()...CAsync()..;
        return str;
    }

- Fourth Proposition: "<" to simplify "async", ">" to simplify "await":
thanks to @gwhzh21, this is the best match as a pair.
    async Task<string> SimplifiedAsync() {}

    < Task<string> SimplifiedAsync() {
        Task.Run(<() => {
            return null;
            throw new Exception("");
        });
        var str = 5.AAsync()>.BAsync()>.CAsync()>;
        return str;
    }
- Fifth Proposition: pipe "|>" or an extension method:
thanks to @birbilis @HaloFour, the pipe is surely a solution, but we should never forget the keywords "delegate" and "=>"; less always makes things more graceful.
    static async Task<string> NormalAsync() =>
        await Foo("a") |> await Foo() |> await Foo() |> await Foo();

    static .. Task<string> NormalAsync() => Foo("a").. |> Foo().. |> Foo().. |> Foo()..;
thanks to @yaakov-h @eyalsk, extension methods could make the async chain happen. But I have doubts about their performance, considering the extra Task wraps compared with a struct state machine. Besides, there is no interface which Task and ValueTask both implement and which could result in Roslyn creating state-machine code, because GetAwaiter COMES FROM NO INTERFACE! Extension code is not a general solution for all promise-like classes and structs.
    public static Task<T2> FAsync<T1, T2>(this Task<T1> objT, Func<T1, Task<T2>> funcAsync)
        => objT.ContinueWith(t1 => funcAsync(t1.Result)).Unwrap();

    public static T2 F<T1, T2>(this T1 obj, Func<T1, T2> Func) => Func(obj);

    await "a".F(s => Foo(s)).FAsync(s => Foo(s)).FAsync(s => Foo(s));


Updated 19/08/2017 08:44 66 Comments

Include Thesis


I think it would be reasonable to include Thesis in the template. It seems to me the only use cases where Thesis would be inappropriate would be a pure JSON or GraphQL API in which case we wouldn’t be using Firebird to spin it up anyway.

Updated 17/08/2017 02:52

Re-evaluating MessageCollectionViewCell


Re-evaluating MessageCollectionViewCell

Currently, MessageKit supports a cell structure shown in the figure below: messagecollectionviewcell

I originally added the cellTopLabel and cellBottomLabel to this cell because I wanted to be able to top align the avatar with other views and not just the messageLabel.

While this allows the layout I was looking for, it also has some drawbacks. Namely, the cellTopLabel and cellBottomLabel are limited to UILabels and multiple views increase the layout complexity for this cell.

Proposed Solution

We remove the cellTopLabel, cellBottomLabel, AND the AvatarView from the cell. Instead, cellTopLabel and cellBottomLabel are replaced with their own cells. This should remove the UILabel limitation.

We would just need a way to specify if the MessageCollectionViewCell or MessageTopCell contained the AvatarView.

IGListKit does this really well. I could see it being a formal dependency in the future. Why? It makes sense not to reinvent the wheel.

But for now – dependencies are off the table. If we make this change we will start with something small, in-house, and see where the architecture leads us.

Updated 17/08/2017 21:36 2 Comments

MaxListenersExceededWarning: Possible EventEmitter memory leak detected



I’m getting this error whenever I upload a file:

(node:49079) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 connection listeners added. Use emitter.setMaxListeners() to increase limit

The upload completes just fine, but I obviously need to address this issue.

I have this in an app that is also running Hapi, but I disabled that and I still get the same error.

By way of a second question: is there an event that fires when a file has been uploaded?

Updated 16/08/2017 23:34 1 Comments

Regressions caused by #3200


#3200 is now merged and this is an issue tracker for any tickets arising from that.

Known issues and workarounds

  • [ ] Increased first-time stutter - try OpenGL and roll back if all else fails while the shader pipeline is replaced with new code. It will be improved soon.
  • [ ] Missing graphics using Intel drivers on Windows. This isn’t really new, but warrants investigation.
  • [X] Broken immediate indexed render - a fix has already been submitted in #3241.
  • [ ] Sometimes the shader cache may be corrupted if the emulator crashes when writing shaders to disk. Will be resolved soon with validity checks. Delete the shader cache if the emulator crashes when loading the shaders from disk.
  • [ ] Increased emulator startup time due to shader loading. Will be optimized in the near future. For now, you may delete the cache if loading is taking too long, or use OpenGL which has much faster loading times.
  • [x] Toggling fullscreen may result in a crash or hang.

Updated 20/08/2017 00:17 25 Comments

Monster classification and organization.


The Common Monsters list is yuuuuge. Would there be any benefit to creating a more limited list of monsters with general types? So instead of “Skeletons, Undead, Zombie, Ju Ju Zombie, Ogre Zombie, Beholder Zombie, Aquatic Zombie, Centaur Zombie, Mummy, Mummy Lord, Dread Mummy, Wight, Wraith, Lich, Demilich, Dracolich, Lich Hound, Revenant, etc. etc. etc.,” there would be a box for “Undead.” I think first of all, there would be more people looking for adventures featuring “undead” than specifically an “aquatic zombie”.

Maybe if this were to happen, it would be possible that next to the “undead” checkbox, there is a little drop-down arrow opening up a sub-list of “Zombies, Mummies, Liches, Other” or something, and if even those weren’t good enough, those could have sub-lists.

Classifying and grouping related monsters provides an additional benefit of revealing to the DM, “you wanted Zombies? Well right here I have all the different zombies in a convenient place for you, and if you’ve never heard of Aquatic Zombies before, now’s your chance because they aren’t buried in a list of hundreds of random creatures!” If an entry is obscure enough, hiding it in a huge list just makes it useless because of two issues: 1) Nobody knows it exists in the first place. 2) Nobody will realistically stumble upon it by chance. In both cases it is not used and just creates clutter.

I would also argue that many DMs do not even have the name of a particular monster in mind, or even know what a particular creature is called. They just know they want some kind of “warrior zombie”. Having them all grouped together would make narrowing down that choice much faster (“ah, a wight. That fits pretty well with what I wanted. I would never have known if it weren’t here next to the other undead.”)
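The grouping itself is a small amount of code; a sketch in JavaScript (the type mapping below is illustrative, not a proposed taxonomy):

```javascript
// Illustrative monster-to-type mapping; a real one would cover the full list.
const monsterTypes = {
  "Zombie": "Undead", "Ju Ju Zombie": "Undead", "Aquatic Zombie": "Undead",
  "Mummy": "Undead", "Wight": "Undead", "Lich": "Undead",
  "Ogre": "Giant", "Centaur": "Monstrosity"
};

// Roll individual entries up under their general type, so the UI can show
// one "Undead" checkbox with a drop-down sub-list.
function groupByType(monsters) {
  const groups = {};
  for (const name of monsters) {
    const type = monsterTypes[name] || "Other";
    (groups[type] = groups[type] || []).push(name);
  }
  return groups;
}

const groups = groupByType(["Zombie", "Wight", "Ogre", "Aquatic Zombie"]);
// groups.Undead → ["Zombie", "Wight", "Aquatic Zombie"]
```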

The whole point of this site is to make it easy for DMs to find adventures with the content they want. Having to sift through literally hundreds of monsters does not accomplish that goal.

Edit: Kobold Fight Club has already done a pretty good job of classifying a lot of monsters. It can be done.

Updated 18/08/2017 02:33 4 Comments

Change UI incentives to focus on interoperability, not test pass rate


According to, the stated purpose of the WPT Dashboard is:

to promote viewing the web platform as one entity and to make identifying and fixing interoperability issues as easy as possible.

However, the way the UI works today explicitly rewards passing tests over failing tests by displaying green for 100% passing results and shades of red for anything else.[1] If a browser came along and magically made all their tests 100% green, that wouldn’t entirely satisfy the goal of platform predictability.

Ideally, as I understand the goals, the “opinion” of the dashboard UI should be:

  • Tests on all platforms passing: GOOD
  • Tests on all platforms failing: OK
  • Tests on two platforms passing, other two failing: BAD

My concrete suggestions are:

  1. Move away from using the colors green and red for test results. To maintain the ability to quickly glance at passing vs. failing tests, we could map the test passing percentage to a shade of blue on a linear scale.
  2. Calculate the standard deviation of test results per directory and per test file (normalized for the number of total subtests) and use red & green colors to reward directories that have a low deviation.[2] We can also highlight rows more prominently that have a high deviation and therefore need more interop focus.
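Suggestion 2 amounts to scoring each directory by the spread of its per-browser pass rates rather than by the rates themselves; a minimal sketch (illustrative JavaScript, not the dashboard's code):

```javascript
// Population standard deviation of a directory's pass rates across browsers.
// Low deviation (everyone agrees) is good for interop, even if everyone fails.
function interopDeviation(passRates) {
  const mean = passRates.reduce((a, b) => a + b, 0) / passRates.length;
  const variance = passRates.reduce((a, r) => a + (r - mean) ** 2, 0) / passRates.length;
  return Math.sqrt(variance);
}

interopDeviation([1.0, 1.0, 1.0, 1.0]); // 0   — all passing: GOOD
interopDeviation([0.0, 0.0, 0.0, 0.0]); // 0   — all failing: OK for interop
interopDeviation([1.0, 1.0, 0.0, 0.0]); // 0.5 — split results: BAD
```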

I have a demo of this up here:

screenshot from 2017-08-16 13 46 29

[1] The code that determines the color based on pass rate lives at components/wpt-results.html#L320.
[2] The green=good, red=bad connotation applies only in Western cultures; however, I can’t think of a better alternative.

Updated 18/08/2017 17:09 3 Comments

Tie up maker's UTXOs in the mempool as a DOS method

  1. Attacker is a taker and coinjoins with many makers and sets a 0.5 sat/b miner fee on the transaction, which remains unmined for a long time

  2. Makers are programmed, by default, to not offer UTXOs that are currently in the mempool

  3. The attacker could then run their own makers, either for sybil-attack/deanonymizing or to increase their own maker income due to reduced supply

Thanks to a redditor who PM’d me this idea.

Possible solutions are:

  1. Proper handling of miner fees (as proposed elsewhere), where every maker has a “minimum fee rate” so they can use that to stop their UTXOs being stuck with a very low fee.

  2. Enable RBF on everything, and potentially still offer maker’s UTXOs for coinjoining if the taker pays for the RBFing.

For now, those of us running yield-generators should monitor how many UTXOs and transactions we have tied up in the mempool, and raise the alarm if it looks like somebody is actually doing this.
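A rough sketch of that monitoring (illustrative JavaScript; the data shape and threshold are assumptions, not JoinMarket code):

```javascript
// Count our UTXOs that sit in unconfirmed transactions and warn past a
// threshold, as a cheap heuristic for the DOS pattern described above.
function stuckUtxoAlarm(utxos, threshold) {
  const stuck = utxos.filter(u => u.inMempool && !u.confirmed);
  return { stuckCount: stuck.length, alarm: stuck.length >= threshold };
}

const report = stuckUtxoAlarm([
  { txid: "a", inMempool: true,  confirmed: false },
  { txid: "b", inMempool: true,  confirmed: false },
  { txid: "c", inMempool: false, confirmed: true }
], 2);
// report → { stuckCount: 2, alarm: true }
```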

Updated 16/08/2017 19:16

Proposal: New middlewares loading API




Right now, RoutingControllersOptions has the following signature:

```ts
/**
 * List of middlewares to register in the framework or directories from where to import all your middlewares.
 */
middlewares?: Function[] | string[];
```

We allow loading middlewares in two ways:

```ts
// first
createExpressServer({
  middlewares: [__dirname + "/controllers/**/*.js"],
});

// second
createExpressServer({
  middlewares: [JwtMiddleware, AuthMiddleware, LoggingMiddleware, ErrorMiddleware],
});
```

I would like to introduce the new API:

```ts
createExpressServer({
  middlewares: {
    before: [AuthorizationMiddleware],
    after: [LoggingMiddleware],
    error: [CustomErrorMiddleware],
  }
});
```

Combined with #255, users can get rid of the @Middleware decorator in classes (with its problematic priority option) and define the order in an array or in an index file.

However, in this case the proposal from #255 might need some boilerplate depending on the convention used:

- two/three index files:

```ts
import * as beforeMiddlewares from "./middlewares/before";
import * as afterMiddlewares from "./middlewares/after";
```

- a configuration-like index file:

```ts
import { One } from "./one-middleware";
import { Two } from "./two-middleware";
import { Three } from "./three-middleware";

export default {
  before: [One, Two],
  after: [Two, Three],
};
```

```ts
import middlewaresConfig from "./middlewares";

createExpressServer({ middlewares: middlewaresConfig });
```

We can still support the old version for glob strings, or force a naming convention:

```ts
createExpressServer({
  middlewares: {
    before: [path.join(__dirname, "../middlewares/before/*.js")],
    after: [path.join(__dirname, "../middlewares/after/*.js")],
    error: [path.join(__dirname, "../middlewares/error/*.js")],
  }
});

// or
createExpressServer({
  middlewares: {
    before: [path.join(__dirname, "../middlewares/*.before.js")],
    after: [path.join(__dirname, "../middlewares/*.after.js")],
    error: [path.join(__dirname, "../middlewares/*.error.js")],
  }
});
```

But I would stick to an all-in-one glob string, as we still need the `@Middleware({ priority: number })` decorator:

```ts
createExpressServer({
  middlewares: [path.join(__dirname, "../middlewares/*.js")],
});
```

Main proposal

interface MiddlewaresConfig {
    before?: ExpressMiddlewareInterface[];
    after?: ExpressMiddlewareInterface[];
    error?: ExpressErrorMiddlewareInterface[];
}

interface RoutingControllersOptions {
    middlewares?: string[] | MiddlewaresConfig;
}

Adding @NoNameProvided @pleerock for discussion.

Updated 19/08/2017 09:53 9 Comments

Track previous versions of security.txt file


Since the vendors are in complete control over the security.txt, it’d be good to give some leverage to the hackers. There have been instances in the past where the vendor changed the rules of engagement after the hacker submitted a security vulnerability. To avoid discussion around the rules that applied when the hacker submitted the vulnerability, it’d be good to have some form of versioning in the file itself. This might not be trivial to implement in the file itself because the company is in complete control of the file contents.

One of the ideas could be that a third party introduces a service to cache the current version of a security.txt file. The way it could work is that the service downloads the security.txt file and returns a unique URL that proves the file contents were on the site at one point. This should be accompanied by a timestamp and could be accompanied by a hash.

Updated 19/08/2017 16:10 3 Comments

Proposal: New loading controllers/interceptors/middlewares API




Right now, RoutingControllersOptions has the following signature for controllers/interceptors/middlewares - Function[] | string[] - which allows registering a list of classes in the framework, or directories from where to import them.

The array syntax is great because it lets you define the order of middlewares explicitly:

```ts
createExpressServer({
  middlewares: [
    MorganMiddleware,
    JwtMiddleware,
    AuthMiddleware,
    CompressionMiddleware,
    LoggingMiddleware,
    ErrorMiddleware,
  ],
});
```

So there’s no need to jump through middleware files looking for the priority option.

However, in big apps it might not be comfortable to list all 58 controllers explicitly in an array, so we have support for glob strings and loading from directories:

```ts
createExpressServer({
  controllers: [__dirname + "/controllers/**/*.js"],
});
```

The cons are that you can’t disable one middleware for dev/debug purposes, and you have to use the priority option in the middleware decorator to declare the order of calling them, which is very hard to maintain.

In TypeScript and ES6 we can export and import from modules, so it’s a common case to have index files. The directory structure looks like this:

```
middlewares
 - auth-middleware.ts
 - index.ts
 - jwt-middleware.ts
 - logging-middleware.ts
 // etc.
```

And the index.ts file looks like this:

```ts
export * from "./jwt-middleware";
export * from "./auth-middleware";
export * from "./logging-middleware";
```

And in app.ts:

```ts
import * as middlewares from "./middlewares";
```

Because all object properties are traversed “in the order in which they were added to the object”, we can use index.ts to manipulate the explicit order of middlewares. It doesn’t have to be placed as an index file; it might be placed in an app configuration folder with modified paths.
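The insertion-order claim holds for plain objects with non-integer string keys, which is easy to verify (plain JavaScript sketch; the middleware names are illustrative):

```javascript
// Non-integer string keys of a plain object are enumerated in insertion
// order, so the order of assignment controls the order Object.keys returns.
const middlewares = {};
middlewares.JwtMiddleware = class {};
middlewares.AuthMiddleware = class {};
middlewares.LoggingMiddleware = class {};

const ordered = Object.keys(middlewares);
// ordered → ["JwtMiddleware", "AuthMiddleware", "LoggingMiddleware"]
```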

Main proposal

Right now we can’t pass an object to the routing-controllers option. So we need dirty hacks like:

```ts
createExpressServer({
  middlewares: Object.keys(middlewares).map(key => (<any>middlewares)[key]),
});
```

I propose adding support for objects containing middlewares/controllers/interceptors. We could then just do:

```ts
import * as middlewares from "./middlewares";

createExpressServer({
  middlewares: middlewares,
});
```

or with the ES6 shorthand syntax:

```ts
import * as middlewares from "./middlewares";
import * as controllers from "./controllers";
import * as interceptors from "./interceptors";

createExpressServer({
  middlewares,
  controllers,
  interceptors,
});
```

Also, I think that importing everything from directories by glob string is an antipattern, and this proposal reduces a lot of the boilerplate of the explicit array option. If users get used to this feature I would even deprecate the loading-from-directories feature.

Adding @NoNameProvided @pleerock for discussion.

Updated 18/08/2017 16:01 1 Comments

Component-based router API


Default router.

  view: [
    ["/", state => <h1>Hi.</h1>],
    ["*", state => <h1>404</h1>],
  ],
  mixins: [Router]

Component-based router (proposed syntax).

  <Route path="/" route={() => <h1>Hi.</h1>} />
  <Route path="*" route={() => <h1>404</h1>} />
Updated 20/08/2017 07:25 5 Comments

Inline small template/scss files?


@astefanutti @akieling @kahboom

I kinda feel like we could stand to inline templates and scss into the .ts file for a component if it’s less than 5 or 10 lines. What do you guys think? Is there any drawback? Fewer files to look through would be nice :-)

Updated 18/08/2017 20:21 3 Comments

Philosophy of the curriculum


Do we have a clearly defined plan/vision/philosophy for the curriculum?

If not, do we not need one? I know we have the “” file but as far as I’m aware it is outdated.

Why do we need one?

  • If we don’t have a clearer vision we can’t define what it even means to improve the curriculum
  • This greatly affects curriculum planning, which is being discussed heavily right now - what is a valid reason to raise an issue on a workshop/morning challenge/week of the MR?

I think different people have ideas of different philosophies, including:

  • Accessibility: The master reference should be a curriculum which anyone should be able to follow (without mentors), so mentor dependent activities (code-along, presentations) should not exist, and workshops which require lots of mentor input should be changed/improved.

  • Making the 8-week F&C course as good as possible for the students on it: code-alongs and other mentor-dependent activities are encouraged, as they are beneficial to the students. Workshops should be pitched at the right level.

  • Modularity: Each week (or other section) could be a standalone mini-course, so workshops are mainly technology independent

Basically, do we need to decide on a vision/philosophy/plan/aim for the curriculum, and if so, what should it be?

UPDATE 17/8: changed ‘master reference’ to ‘curriculum’ a couple of times, related to #48 Overall course learning outcomes

Updated 18/08/2017 23:57 2 Comments

Responsive gutters in Foundation 7


I’d like to generate some discussion around the responsive gutters feature within Foundation which I believe was added in version 6.1.

For those that don’t know what this is, it enables you to set a different gutter size for each breakpoint, eg:

$grid-margin-gutters: (
  small: 20px,
  medium: 30px
);

This works fine when using just padding for gutters as it doesn’t add much code bloat. However when using margin gutters it adds a ton of code bloat, and a hugely complicated mixin implementation in order to be able to generate the right classes.

I would really like to see responsive gutters removed for version 7. I think the benefit they give is tiny compared to the impact on the code base. I also have a suspicion that not a lot of people actually use them and leave them responsive by default (I’ve been guilty of doing that too!) even if they don’t need them, but I could be wrong. If we removed it the grid mixin API could be hugely improved and simplified.

I’d really love to hear everyone’s opinion on this feature: if you use them or not, your use case if you do use it, would you miss it if it was removed etc.

Updated 19/08/2017 13:15 3 Comments

BlockVector vs MultiVector


Currently we mostly use BlockVector to cover cases like

| M  B^T |   | F |
| B  0   | = | G |

and then be able to work on each block separately as well as on the vector as a whole, which necessitates the presence of BlockVector::collect_sizes() and BlockVector::locally_owned_elements(). The latter creates a union of index spaces for each block with correct shifts.

I am curious whether or not we want to distinguish between the case above and

| L  0 |   | F1 |
| 0  L | = | F2 |

where each block has the same index space and there is no need for / use of creating the collected index space. Also, the number of blocks here can be large; we are speaking about hundreds.

Both shall provide some basic interface, i.e.

unsigned int n_blocks () const
void add (const value_type a, const BlockVectorBase &V)
void update_ghost_values () const

But there are differences as well: the latter does not necessarily need to know how to

void add (const std::vector< size_type > &indices, const std::vector< Number > &values)

as one may have no global index space defined. The main usage is

(G1 G2) = L * (F1 F2)

that is, apply the same operator to all vectors. There are linear algebra packages targeted at efficient sparse matrix multi-vector (SpMMV) products which store those F1, F2 in such a way that you can’t even access BlockVector::block(unsigned int). That’s another principal difference. Finally, one can think of a MultiVector consisting of BlockVectors.

However, Trilinos does not distinguish between the two cases and uses typedef Epetra_MultiVector MV; typedef Epetra_Operator OP; for both, AFAIK.

So I wonder whether we should distinguish between them. I see several options:

  1. Don’t bother
  2. Add some mechanics to disable building of the collected index space and throw errors if it’s being used without being constructed.
  3. Introduce a top-level MultiVector (interface) as well as ComposedMultiVector<VectorType> (the simple case of a collection of vectors) classes, and derive BlockVector from the latter.

(3) might be tricky when it comes to the actual vectors (PETSc, Trilinos, LA::d::Vector), as one would have to split each implementation somehow. I would say (2) is probably the best option.
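To make option (3) concrete, here is a minimal, hypothetical C++ sketch (not deal.II code; the Vector stand-in, the method bodies, and the cast-based dispatch are all assumptions) of a top-level MultiVector interface and a ComposedMultiVector<VectorType> that is just a collection of vectors, from which BlockVector would then derive and add the collected index space machinery:

```cpp
#include <cstddef>
#include <vector>

using value_type = double;

// Stand-in for an actual vector type (PETSc, Trilinos, LA::d::Vector).
struct Vector {
  std::vector<value_type> data;
  void add(value_type a, const Vector &v) {
    for (std::size_t i = 0; i < data.size(); ++i)
      data[i] += a * v.data[i]; // this += a * v
  }
};

// Top-level interface: only the operations that make sense without a
// collected (global) index space.
struct MultiVector {
  virtual ~MultiVector() = default;
  virtual unsigned int n_blocks() const = 0;
  virtual void add(value_type a, const MultiVector &v) = 0;
};

// Simple case: a plain collection of vectors, all sharing one index space.
template <typename VectorType>
struct ComposedMultiVector : MultiVector {
  std::vector<VectorType> blocks;

  unsigned int n_blocks() const override {
    return static_cast<unsigned int>(blocks.size());
  }

  void add(value_type a, const MultiVector &v) override {
    // Sketch only: assumes v has the same concrete type and block layout.
    const auto &w = static_cast<const ComposedMultiVector &>(v);
    for (unsigned int b = 0; b < n_blocks(); ++b)
      blocks[b].add(a, w.blocks[b]); // apply the same operation to every block
  }
};

// BlockVector would derive from ComposedMultiVector<VectorType> and build,
// on top of it, the collected index space (collect_sizes(),
// locally_owned_elements()).
```

The point of the split is that code which only needs n_blocks() and blockwise operations can be written against MultiVector, without ever forcing the collected index space to exist.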

Updated 16/08/2017 15:46 3 Comments

[Proposal] Stackblitz


Bug, feature request, or proposal:


What is the use-case or motivation for changing an existing behavior?

Please provide a Material template for StackBlitz. It is far easier and faster to use than Plunker or other tools.

Updated 16/08/2017 21:16 2 Comments

Let value types participate in contravariance


At the moment, it is not possible to use value types with (contra)variance to object.

Given:

```csharp
class Class {}
struct Struct {}
```

Attempting:

```csharp
IEnumerable<Class> classEnum = …;
IEnumerable<Struct> structEnum = …;

IEnumerable<object> objectEnum;
objectEnum = classEnum;   // okay!
objectEnum = structEnum;  // not okay :(
```

Could the language be changed to permit this? It would potentially require runtime changes; maybe the runtime could perform boxing conversions when required?

Updated 18/08/2017 19:04 7 Comments

2D editor enhancements


Hi everyone,

I plan to work on the 2D editor to make it more modern. I opened this issue to discuss what and how the changes should be implemented.

Here is a list of what could be added:

Control nodes
- [x] Display the margin values when moving a node (#10437)
- [x] Display width and height values when resizing a node (#10437)
- [x] When moving anchors, draw a rectangle to show the selected node without rotation/scaling (#10437)
- [x] Hide anchor helpers if child of a container (#10437)
- [x] Add a menu option to hide the helpers (#10437)

Grid and guides
- [ ] Implement top and left rulers
- [ ] Implement guides
- [ ] Smart snapping: snap anchors/nodes to the grid/guides, align with other nodes, etc… (suggested by reduz)
- [ ] Display the grid behind the selected node, but above the selected node’s parents. (This seems really hard to implement :/)
- [ ] Implement a smart grid that aligns with the rulers, thus depending on the zoom level.

Other
- [x] Move “zoom in”/“zoom out”/“reset zoom” out of the view menu
- [x] Remove “Set zoom”, it’s pretty useless, don’t you think?
- [ ] Merge lock/unlock and selectable children/not selectable children
- [ ] Snap rotation (and scale?)
- [ ] Add a configurable key to zoom by moving the mouse
- [ ] Add a local/global button that could:
  - rotate the view to align with the selected node (or its parent),
  - display the grid relative to the node,
  - if it is implemented, make the smart grid/rulers display relative values instead of absolute?
- [ ] Blender-like selection: the first click selects the topmost node, the next one the node under it, and so on. (Implementation reminder to myself: keep a list of the control nodes under the mouse when clicking somewhere. If the list changes when we click somewhere else, select the node on top; otherwise, take the following node in the list.)
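The Blender-like click-to-cycle selection described above can be sketched as follows. This is a hypothetical, self-contained C++ sketch (not Godot code; ClickSelector and the int stand-in for a node are made-up names): remember the stack of nodes under the cursor at the first click, and while repeated clicks hit the same stack, step one node deeper; otherwise restart from the topmost node.

```cpp
#include <cstddef>
#include <vector>

using Node = int; // stand-in for a Control node ID/pointer

class ClickSelector {
  std::vector<Node> last_stack; // nodes under the mouse, topmost first
  std::size_t index = 0;        // which entry of the stack is selected

public:
  // Returns the node the click selects, or -1 if nothing is under the mouse.
  Node click(const std::vector<Node> &stack_under_mouse) {
    if (stack_under_mouse.empty())
      return -1;
    if (stack_under_mouse == last_stack) {
      // Same spot as the previous click: go one node deeper, wrapping around.
      index = (index + 1) % last_stack.size();
    } else {
      // Clicked somewhere else: restart from the topmost node.
      last_stack = stack_under_mouse;
      index = 0;
    }
    return last_stack[index];
  }
};
```

In the real editor the "stack changed" test would have to tolerate nodes being reordered or freed between clicks, but the cycling logic stays the same.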

Keyboard controls
- [ ] Toggle grid snapping temporarily (Ctrl key when dragging)
- [ ] Toggle grid snapping
- [ ] Toggle grid visibility
- [ ] Use the Shift key to restrict anchor dragging to a single axis?
- [ ] Multiply/divide the grid size by 2

Menus
- [ ] Move the anchor menu to a right click on the anchor dragger?

Updated 19/08/2017 21:28 12 Comments

custom installation step


I’m trying to convert a library to use CMake + Hunter. The library depends on Clang tooling but uses some headers not installed by Clang, copying them manually from the source folder to the installation dir. Is it possible to achieve this within Hunter, maybe through some custom installation step?
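I don’t know whether Hunter has a dedicated hook for this, but one generic approach (a sketch only; the variable, source path, and destination below are hypothetical and would need to match the real Clang layout) is to carry a patched CMakeLists.txt for the package that adds an explicit install rule for the extra headers, so they are copied during the package’s normal install step:

```cmake
# Hypothetical extra install step: copy headers that the upstream build
# does not install from the source tree into the package's include dir.
# CLANG_SOURCE_DIR and the destination are placeholders.
install(
  DIRECTORY "${CLANG_SOURCE_DIR}/lib/Sema/"
  DESTINATION "include/clang/Sema-internal"
  FILES_MATCHING PATTERN "*.h"
)
```

`install(DIRECTORY … FILES_MATCHING PATTERN …)` is standard CMake; the rule would then run as part of whatever install step builds the package into its prefix.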

Updated 16/08/2017 15:58 5 Comments
