
Design of funky


Too-funky is slowly coming together. I’m now at the point where I can’t trust myself to implement more features ad hoc and assume it will all work well. This is an educational project, which might also be turned into an academic project. This issue will be used for discussing and tracking the overall design of too-funky.

Some overall characteristics that too-funky will strive towards are listed below. Please criticise any and all of them, and make new proposals if you feel like it.

  • [ ] Plan 9 styled “everything is a file” (in contrast to Linux' “everything is a file”) with a sprinkle of ownership.
    • [ ] A file can be owned by a process. In that case, it’s the only process that has direct access to it.
    • [ ] A file can be borrowed from the kernel, or from a process if it allows it.
    • [ ] A borrow can be mutable or immutable.
    • [ ] A file may be in one of the following states:
      • Owned by the kernel.
      • Owned by a user process.
      • Owned by any process (including kernel) and borrowed immutably by any number of processes (one-way multicast transmission).
      • Owned by any process and mutably borrowed by one other process (one-way unicast transmission).
    • [ ] Ownership may be transferred.
    • [ ] Generally, the kernel doesn’t actively take part in transmission.
  • [ ] The kernel is a microkernel.
    • [ ] The kernel provides a filesystem, memory management, ownership, IPC, scheduling, …
  • [ ] Direct access to kernel resources is performed via syscalls.
    • [ ] Syscalls use the System-V ABI (in contrast to Linux x86 passing arguments in registers).
  • [ ] Access to hardware resources is performed via character devices.
    • [ ] A character device is owned by the char dev manager(-s) (possibly the kernel).
  • [ ] Access to drivers is performed via block devices.
    • [ ] A driver is a userspace process which owns a character device.
  • [ ] Processes are strongly defined.
    • [ ] IPC to be defined.
    • [ ] edit: IPC can be implemented with the proposed ownership model.
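To make the state list above concrete, here is a minimal sketch of the file states as a Rust state machine. All type and function names are invented for illustration; this is not too-funky code, just one way the transition rules could be encoded:

```rust
// Hypothetical sketch of the proposed file-ownership states; names invented.
#[derive(Debug, PartialEq)]
enum Owner {
    Kernel,
    Process(u32), // pid
}

#[derive(Debug, PartialEq)]
enum FileState {
    // Owned, no borrows.
    Owned(Owner),
    // Owner plus any number of immutable borrowers (multicast).
    BorrowedShared { owner: Owner, readers: Vec<u32> },
    // Owner plus exactly one mutable borrower (unicast).
    BorrowedMut { owner: Owner, writer: u32 },
}

impl FileState {
    // An immutable borrow is allowed unless someone holds a mutable borrow.
    fn borrow_shared(self, pid: u32) -> Result<FileState, FileState> {
        match self {
            FileState::Owned(owner) => {
                Ok(FileState::BorrowedShared { owner, readers: vec![pid] })
            }
            FileState::BorrowedShared { owner, mut readers } => {
                readers.push(pid);
                Ok(FileState::BorrowedShared { owner, readers })
            }
            other @ FileState::BorrowedMut { .. } => Err(other),
        }
    }
}

fn main() {
    let f = FileState::Owned(Owner::Kernel);
    let f = f.borrow_shared(100).unwrap();
    let f = f.borrow_shared(101).unwrap(); // multicast: many readers are fine
    assert!(matches!(f, FileState::BorrowedShared { ref readers, .. } if readers.len() == 2));
    println!("ok");
}
```

Encoding the states this way makes an invalid transition (e.g. an immutable borrow while a mutable borrow is live) an explicit error at the call site, which matches the one-writer-or-many-readers rule above.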

Note: None of those features are guaranteed to be implemented. Too funky isn’t a very serious project and isn’t guaranteed to be finished.

Updated 22/03/2018 18:19

Allow an administrator user to associate and disassociate a product with a list of keywords.


Implement a Product-Keyword association/disassociation system: the administrator user must be able to view the keywords in the database that are not yet associated with a product, so that they can be associated with it.

The administrator user must be able to view both the keywords already associated with a product and those present in the database but not yet associated with it. From the latter, they can then choose the keyword to associate with the product.

Updated 22/03/2018 17:41

Additional models to support with summ


My general philosophy goes like this:

  • The model’s output needs to be relatively predictable (this is why I have not supported lavaan, which can be endlessly complicated and used for very different purposes)
  • The model should be regression or similar — summ will not handle other kinds of input, like data.frames or the like. skimr is a package that does those things well.
  • summ should be able to offer added value above and beyond summary

With that said, models I definitely plan to support are:

  • lme

Still thinking about/auditing:

  • brmsfit — worried about variation in output due to wide variety of options, unsure if summ can add benefit since refitting models isn’t feasible.
  • stanreg — less concern about variation than with brmsfit, but “added value” concern remains
  • polr — Need to look more closely at the interface, make sure I know enough to make a good summary. Need to think about how to plot predictions from these models (same goes for ordinal package models), but that isn’t essential.
Updated 22/03/2018 17:25 1 Comments

Need to set the table to a fixed height


The current form height grows automatically with its content. If there are too many assets and pairs, the page becomes too long and inconvenient to use. We need to set a fixed height for the table so that all tables can be displayed on one screen.
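Assuming this is a plain HTML/CSS table, a minimal sketch of the fix is a fixed-height scroll container around the table (the class name and offset are illustrative, not from this codebase):

```css
/* Illustrative: cap the table's height and scroll inside the container
   instead of growing the page. The 200px offset stands in for headers
   and other page chrome around the table. */
.table-container {
  max-height: calc(100vh - 200px);
  overflow-y: auto;
}
```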

Updated 22/03/2018 17:21

Making commercial data FAIR


For the corporate networks project, together with @lbogaardt, we make use of a commercial database. We would like to make these data more FAIR.

The current situation is as follows:

  1. The data used is bought from a commercial organization. I do not know the details of the contract, but only researchers at the UvA can make use of the raw data.
  2. The data is delivered as SQL dumps, with limited metadata (some metadata is available on their online interface to the database).
  3. These SQL dumps are processed, modified and put in a MySQL database on a server at the UvA.
  4. The research group queries this database, and usually does further analysis on an aggregated dataset (e.g. figures per country or per city, instead of per firm).

A student-assistant will soon start working on step 3 for new dumps, and we want to make a plan to do this in a way that makes the data more FAIR. This raises the following questions (and probably more):

  • Can we apply all FAIR principles to this data?
  • What metadata should we store, and what if this metadata is not available?
  • How should we name the tables and fields to increase interoperability and reusability?
  • How do we separate code that describes provenance from implementation details (such as creating indices)? Is this separation necessary?
  • What agreements should we make with the data provider? For example:
    • Can we share the metadata (e.g. names and descriptions of fields)?
    • Can we share the aggregated data?
    • Can we share the scripts used to transform the data?

Updated 22/03/2018 17:15

Get rid of deprecated functions inside Kokkos


The concurrency function is deprecated in favor of UniqueToken. However, Kokkos_ExecPolicy.hpp still uses the concurrency function, which causes a compiler error when building with KOKKOS_OPTIONS ?= "disable_deprecated_code".

Updated 22/03/2018 18:50 1 Comments

Help Wanted: Email Verification


Problem: The user needs to verify the email address that they are using to create an account.

Task Structure: a) User attempts to log in with an unverified email. b) Catch this and notify the user that the email needs to be verified. c) Exit the app.
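A rough sketch of steps (a)-(c) in plain JavaScript. All names here (checkEmailVerified, notifyUser, exitApp, the emailVerified flag) are placeholders, not the app's actual API:

```javascript
// Hypothetical sketch of the verification gate; every name is a placeholder.
function checkEmailVerified(user, notifyUser, exitApp) {
  if (!user.emailVerified) {
    // (b) catch the unverified login and tell the user
    notifyUser("Please verify your email address before logging in.");
    // (c) exit the app
    exitApp();
    return false;
  }
  return true;
}

// Example usage with stubbed callbacks:
const messages = [];
const ok = checkEmailVerified(
  { emailVerified: false },
  (msg) => messages.push(msg),
  () => messages.push("exited")
);
console.log(ok, messages.length); // false 2
```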

Updated 22/03/2018 17:04

Plywood Supplier in Portugal? ("Plan B")


Ideally I would like to manufacture all of the furniture from bamboo, because it’s arguably more environmentally sustainable than hardwood ply (see #15).

However, if the shipping & import costs (from China to PT) end up being “prohibitive”, an alternative is to get Birch Plywood from a “local” supplier. There are quite a few companies that sell Plywood (obviously; it’s a popular building material!) The (“big box”) “retail” stores are not “cheap” and none of the B2B suppliers list their prices!!

Aki (the equivalent of “Home Depot” in EU/Portugal) sells plywood (no detail on what type of wood it is!) for €51.99 per 1220 x 2440 x 12 mm sheet. I used AKI plywood in projects (e.g. tables, desks, chairs, bookshelves, etc.) when I lived in PT; the quality is “standard”, but the price per sheet is expensive.

Similarly, Leroy Merlin (akin to “Lowes”) sells plywood for €59.99 per 1220 x 2440 x 18 mm sheet (which would be a good thickness to use for our project…). However, “Tipo de madeira: Okoumé” (the type of wood is okoumé). Okoumé is native to equatorial west Africa, in Gabon and the Republic of the Congo, so it is unlikely to be FSC-certified (if it’s not explicitly stated, it’s a pretty safe bet that it’s not). 😕 Additionally, it is classified as “vulnerable” on the IUCN Red List of Threatened Species, so I would rather not use it if we can avoid it… 🌳 😞

There is a counter-argument that “supporting” the west African wood industry through trade helps the local economy to develop. However, I do not “buy” it because, from both research and personal experience (speaking to west-African farmers in our “Fair Trade” days…), I know that much of the wood being logged in the region is unsustainable. There is an interactive map that shows the scale of world deforestation, and the latest FAO (Food and Agriculture Organization of the United Nations) report covers West and Central Africa.

The B2B supplier with the best SERP for the query “comprar contraplacado portugal” (“buy plywood Portugal”) is: (at the time of writing…). I’ve sent them a “pedido” (quote request) via their contact form, because obviously they don’t have any prices (or a price list) on their site!

Alternative suppliers: + + +

I plan to email each of them to get quotes.

Updated 22/03/2018 17:01

Optimize clickstream demo to use single elasticsearch sink connector


The current implementation of the clickstream demo creates a separate Elasticsearch connector per topic.

curl http://localhost:8083/connectors

I’m not sure this is necessary. Consider using one Elasticsearch connector and specifying multiple topics.
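The Elasticsearch sink connector accepts a comma-separated topics list, so a single connector definition along these lines should be able to cover all the streams (the connector name and topic names are illustrative, not taken from the demo):

```json
{
  "name": "es-sink-clickstream",
  "config": {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "topics": "clickstream,clickstream_codes,errors_per_min",
    "connection.url": "http://localhost:9200",
    "type.name": "_doc",
    "key.ignore": "true"
  }
}
```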

Updated 22/03/2018 17:02

Failure to create the LFS progress tracking file should not abort a clone


#4304 describes a scenario where the clone operation fails because we’re unable to create a temporary file for Git LFS to write its progress output to. I think we should be much more lenient and default to logging the error and not tracking LFS progress in the rare case where we’re unable to create that file.

The function which creates the file is createLFSProgressFile:

It’s used from executionOptionsWithProgress

We should catch any errors from createLFSProgressFile inside of executionOptionsWithProgress and continue on without LFS progress tracking.
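A sketch of that fallback in plain JavaScript (the names mirror the issue text; the real desktop code is TypeScript and shaped differently):

```javascript
// Hypothetical sketch: fall back to "no progress tracking" instead of
// failing the clone. Not the actual desktop code.
async function getLFSProgressPath(createLFSProgressFile, log) {
  try {
    return await createLFSProgressFile();
  } catch (err) {
    // Log and continue: the clone still works, we just lose LFS progress.
    log(`Failed to create LFS progress file, continuing without progress: ${err}`);
    return null;
  }
}
```

executionOptionsWithProgress would then simply skip LFS progress reporting whenever the returned path is null.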

Updated 22/03/2018 16:59

Containerize camayoc


Issue type

  • Enhancement (improvement to Camayoc framework other than new test cases)

Description of issue

Currently camayoc only supports certain versions of python 3. We’ve chosen to do this to take advantage of the latest packages and features of python. However, this complicates running camayoc on RHEL 6 and 7 where recent versions of python can be difficult to come by.

I think we should look into what it would take to run camayoc against quipucords when camayoc is in a container. This would allow us to run any tests in camayoc against the container on all platforms we are installing on.

Communicating with the quipucords API over the network seems like it would work “out of the box”. I’m not sure this is practical from the perspective of needing access to the qpc and rho binaries installed on the host machine. Perhaps we could somehow mount /usr/bin/qpc and/or /usr/bin/rho into the container.

Completion checklist

  • [ ] discuss feasibility
  • [ ] if we decide to move forward, flesh out this todo list
Updated 22/03/2018 16:57

Configurable log level & formatter


We need to allow the log level and the formatter to be configurable in sensu-backend & sensu-agent.

It can be easily customized in Logrus. Example:

package main

import (
  log "github.com/sirupsen/logrus"
)

// Environment would be set from an environment variable or command-line flag.
var Environment = "development"

func init() {
  if Environment == "production" {
    log.SetFormatter(&log.JSONFormatter{})
  } else {
    // The TextFormatter is the default, you don't actually have to set it.
    log.SetFormatter(&log.TextFormatter{})
  }

  // Only log the warning severity or above.
  log.SetLevel(log.WarnLevel)
}
Updated 22/03/2018 16:55

Jackett search cannot be set as default search engine


What build of DuckieTV are you using (Standalone / Chrome Extension (New Tab / Browser Action)) …Standalone

What version of DuckieTV are you using (Stable 1.1.x / Nightly yyyymmddHHMM) …Stable 1.1.15

What is your Operating System (Windows, Mac, Linux, Android) …Windows 10

Describe the problem you are having and steps to reproduce if available …When I set a Jackett search engine (IPTorrents) as the default, it keeps switching back to ThePirateBay.

Attach any DuckieTV statistics or Developer Console logs if available

Updated 22/03/2018 18:45 1 Comments

Interface between GUI front-end program and backbone program


I can’t seem to track our chosen interface rules down in my Promethean notes, so let’s hash this out one last time on the record here!

GUI sending a job to the backbone

DONE This one is easy, and we decided on it in December: the GUI will pass data to the backbone as a command-line argument. The data will consist of a String naming the folder location which contains the files to be analyzed.

  • Ex. honestpanther-backbone.exe C:\Documents\Assign-2018-03-22

Backbone giving results back to GUI



The Backbone will report data back to the GUI by simply printing on standard output. In Java, it’s just good ol' System.out.println()


  • In what format should the backbone report results?
    • Presumably, it could report a similarity score for each pair of files. This means each singular result is two file name strings and a number of some kind. How should this be expressed?
  • Should there be any line breaks?
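For discussion, one possible line-oriented format (purely illustrative; the file names are invented): one result per println, with the two file names and the similarity score separated by tabs.

```
Assign1-alice.java	Assign1-bob.java	0.87
Assign1-alice.java	Assign1-carol.java	0.12
```

This also answers the line-break question for free: exactly one line per pair.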

Adding data to wiki

Once this conversation has reached a consensus, someone needs to post the relevant data to a wiki page that details the interface between the two programs.

Updated 22/03/2018 16:47

Test csw:csw-2.0.2-GetCapabilities-tc1.3 fails



SUT: TODO

Tested with production:
  • ETS CSW 1.16
  • TEAM Engine 4.10

and local docker environment:
  • ETS CSW 1.18-SNAPSHOT
  • TEAM Engine 5.2


Validating the response

    <ExceptionReport xmlns="" version="1.2.0">
      <Exception exceptionCode="VersionNegotiationFailed" locator="acceptVersions">
        <ExceptionText>
          [Version negotiation failed, acceptVersions]: locator={1}
        </ExceptionText>
        <ExceptionText>
          None of the supplied parameter values are supported.
        </ExceptionText>
      </Exception>
      <!--Response received in [84] milliseconds-->
    </ExceptionReport>

fails with:

  • Error 1: cvc-elt.1.a: Cannot find the declaration of element ‘ExceptionReport’.

The reference implementation passes the test, the response is an OWS 2.0.0 ExceptionReport.

Updated 22/03/2018 16:45

Create Basic Dashboard View


Once the user creates the project with the basic info, a new view should be presented to the user. This view should contain four main components.

  1. Top Bar with actions related to: saving, undo, redo, running, sharing and configuring the project.

  2. Left container with the view hierarchy, the option to switch between views and a left-bottom container with the drag-able objects.

  3. Right Container with the Properties for the selected view/object

  4. Center container containing the canvas where the “Smartphone” will appear. This is the main container of the application, as the user will be creating everything inside this component. The user should be able to switch between phone and tablet sizes, switch between platforms (iOS, Android), and change the current zoom of the canvas.

Updated 22/03/2018 16:33

Track Kubespray long-polling proxy timeout issues


On one side there’s and

Given history, there’s and

May need to work with @xaf-scality to figure out how to tackle this correctly.

Updated 22/03/2018 16:25

Model Data


I just had a first look at the CMIP5 data and as far as I can see, these are only available on a monthly scale. Making predictions on such a scale doesn’t make any sense in my eyes. I think this is what Stefan was referring to at lunch, but we didn’t understand…

Just for my own interest, I had a look at the DWD data available via FTP, and there we could get hourly data for some measurements like temperature, pressure, rain, etc. Maybe we should rather try these…

Updated 22/03/2018 16:07

Additional CDN (AWS S3)


Current site deploys to Netlify.

Would be good to at least break out the CDN to be able to use a different IaaS (AWS comes to mind).

This issue would probably be good to break out into separate tasks and do in small chunks.

  • CloudFormation
  • Netlify forms replacement
  • CI/CD
Updated 22/03/2018 15:59

Module translation support

  • All translatable strings are to be stored in a single file at src/Utility/data/lang.php, making translation somewhat easier. Each string has the necessary comment lines, and notes in specific contexts.
  • These are acquired from the Jumplinks\Utility\Lang static class via get() and getf()
  • Jumplinks\Utility\Inputfields also makes use of Lang via the @ signal - this will be documented for reference purposes.

… and perhaps some actual translation files, if I can get the help… Google Translate is good, but not good enough, so I’ll need a human to assist with translations.

If you’re interested in assisting, please give a shout-out below, and I’ll send the lang file to you when it becomes ‘stable’.

Updated 22/03/2018 15:56 1 Comments

Version numbers not being interpreted as expected


Hey there, I’ve been trying out knex-migrator (and also studying the code to look at possible Postgres support). While running against a MySQL database I noticed that version numbers such as 1.0.11 are ranked higher than 1.1.0 in the migrator.

I think this is due to the utils function isGreaterThanVersion. It converts 1.0.11 to 1011 and 1.1.0 to 110; the final check then concludes that 1011 > 110.

I think this can be fixed fairly easily using the popular library compare-ver and just modifying the isGreaterThanVersion function to make use of the library. Alternatively a re-implementation of the library can be used, but it seems like a good fit for this usage.
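The fix amounts to comparing the dot-separated segments numerically instead of concatenating them. A minimal sketch of the idea (not the actual knex-migrator or compare-ver code):

```javascript
// Compare dotted version strings segment by segment, numerically,
// left to right; missing segments count as 0.
function isGreaterThanVersion(a, b) {
  const as = a.split('.').map(Number);
  const bs = b.split('.').map(Number);
  for (let i = 0; i < Math.max(as.length, bs.length); i++) {
    const x = as[i] || 0;
    const y = bs[i] || 0;
    if (x !== y) return x > y;
  }
  return false; // versions are equal
}

console.log(isGreaterThanVersion('1.0.11', '1.1.0')); // false
console.log(isGreaterThanVersion('1.1.0', '1.0.11')); // true
```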

I’d be happy to sit down and create a formal PR for this if the approach seems sane.

On a small off-topic question, how would the maintainers feel about me possibly submitting a PR with Postgres support for this module? I’m aware you do not support Postgres across your Ghost product so it may not be something you’re willing to maintain, in which case a fork may be a better idea for my uses.

Updated 22/03/2018 17:00 1 Comments

Implement faster and more convenient "intersects with interval" checks


Currently, if I want to test whether a bitmap has values contained in an interval, I have to do something ugly like this:

    roaring_bitmap_t *r = roaring_bitmap_of(4, 1, 2, 3, 1000);
    roaring_bitmap_t *range = roaring_bitmap_from_range(10, 1000 + 1, 1);
    bool intersection = roaring_bitmap_intersect(r, range);

This is not optimal, if only because we have to create a temporary bitmap.

Updated 22/03/2018 15:40

Microsoft Planner automation



I work for a company that would like to pull as much information as possible from Planner into Microsoft Graph. I have done a lot of research on this and came up with many solutions; the only problem is that much of the software suggested by Microsoft I can’t download because of privacy restrictions within the company. What is the best way to automate Planner with Microsoft Graph? I do have access to Python 2.7 and SQL Server. I can download Node.js as well, but that comes with other software and databases to use in a REST web API. One of the project requirements is to use a REST API, but I know that doesn’t work with Python, so what can I use, or what’s the best way to use Graph with Planner?

thank you !

Updated 22/03/2018 15:40

Continuous integration failure


This is the error Travis gives:

    The command "eval ./mvnw install -DskipTests=true -Dmaven.javadoc.skip=true -B -V" failed. Retrying, 2 of 3.
    /home/travis/.travis/job_stages: line 236: ./mvnw: Permission denied
    The command "eval ./mvnw install -DskipTests=true -Dmaven.javadoc.skip=true -B -V" failed. Retrying, 3 of 3.
    /home/travis/.travis/job_stages: line 236: ./mvnw: Permission denied
    The command "eval ./mvnw install -DskipTests=true -Dmaven.javadoc.skip=true -B -V" failed 3 times.
    The command "./mvnw install -DskipTests=true -Dmaven.javadoc.skip=true -B -V" failed and exited with 126 during .
    Your build has been stopped.
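For what it’s worth, ./mvnw: Permission denied with exit code 126 usually means the Maven wrapper script lost its executable bit in git. Below is a self-contained demo of the likely fix in a throwaway repository (assumption: the real fix is the same chmod/update-index/commit run in this project’s repo):

```shell
# Demo in a throwaway repo: restore the executable bit on mvnw and commit
# it, so CI checks the wrapper out as executable.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q . && git config user.email ci@example.com && git config user.name ci
printf '#!/bin/sh\necho maven wrapper\n' > mvnw
git add mvnw && git commit -qm "add wrapper"

# The fix: mark the file executable in the index and commit the mode change.
chmod +x mvnw
git update-index --chmod=+x mvnw
git commit -qm "Make mvnw executable"

git ls-files -s mvnw   # mode 100755 means CI will check it out executable
```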

Updated 22/03/2018 15:25

Annoying 'Gradle Test Executor' finished with non-zero exit value 137


Hey team!

Can someone take a look at the issue that intermittently hits us on CI? I’m seeing this issue a couple of times per week and I’m forced to re-run builds.

> Process 'Gradle Test Executor 2' finished with non-zero exit value 137

Example full build log:

Suggestions:

  • SO and Google show hits for this problem; perhaps we need to fiddle with Java/memory settings for test execution?
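Exit value 137 is 128 + SIGKILL, i.e. the test JVM was killed by the OS (often the OOM killer), so bounding test-JVM memory is a reasonable first thing to fiddle with. A sketch for build.gradle (values are illustrative, not tuned for this project):

```groovy
// Illustrative values; tune for the CI machine.
test {
    maxHeapSize = '1g'       // cap each test JVM's heap
    forkEvery = 100          // recycle worker JVMs to bound memory growth
    maxParallelForks = 2     // fewer concurrent test JVMs
}
```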

Updated 22/03/2018 22:04 1 Comments

Exception when using Microsoft.Azure.Amqp 2.2


Steps to reproduce

  1. Install the Microsoft.Azure.ServiceBus package. The package depends on Microsoft.Azure.Amqp (>= 2.1.2), so version 2.1.2 is installed.
  2. Since the latest version of Microsoft.Azure.Amqp is 2.2, upgrade it to this version.
  3. Write code to send a message to a queue, and execute it. I followed roughly this tutorial.

Actual Behavior

An exception is thrown from the queueClient = new QueueClient(ServiceBusConnectionString, QueueName); line (in my case it was in a class library called from an Azure Function, but other than that it was essentially the same). The message text is: “Could not load file or assembly ‘Microsoft.Azure.Amqp, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35’ or one of its dependencies. The system cannot find the file specified.”

Expected Behavior

The message should be sent without errors.


  • OS platform and version: Windows 10 Enterprise
  • .NET Version: 4.6.1
  • NuGet package version or commit ID: 2.0.0
Updated 22/03/2018 23:19 4 Comments

Implement faster and more convenient "intersects with interval" checks


Currently, if I want to test whether a bitmap has values contained in an interval, I have to do something ugly like this:

        // some bitmap
        RoaringBitmap rr = RoaringBitmap.bitmapOf(1,2,3,1000);

        // we want to check if it intersects a given range [10,1000]
        int low = 10;
        int high = 1000;
        RoaringBitmap range = new RoaringBitmap();
        range.add((long)low, (long)high + 1);

        System.out.println(RoaringBitmap.intersects(rr,range)); // prints true if they intersect

This is not optimal, if only because we have to create a temporary bitmap.

Updated 22/03/2018 20:52 6 Comments

Plural localization not working


Just installed this pod and tried plural localization via "eventCount".l10n(arg: count). Added these lines to Localizable.strings:

```
"" = "%d event";
"eventCount.many" = "%d events";
"eventCount.other" = "%d events";
```

Traced the code; self.bundle in this function is nil, so no resource is loaded:

```
private func resource(named resourceName: String?) -> ResourceContainer {
    let resourceName = (resourceName ?? "").isEmpty ? "Localizable" : resourceName!

    return self.resources[resourceName] ?? {
        let resource = ResourceContainer(bundle: self.bundle, name: resourceName)
        self.resources[resourceName] = resource
        return resource
    }()
}
```

Version 5.1.0

Updated 22/03/2018 17:17 4 Comments

Improve LB rebuild behavior when health status changes occur


Today, when health checks transition host status or new health status arrives in EDS, we do expensive (at least O(n²)) rebuilds in various places of the host lists, healthy host lists, locality lists, subsets, WRR, etc.

This isn’t great when we scale the number of endpoints per cluster, if we have short health check intervals or if we have short EDS intervals. This issue will track work on optimizing this behavior.

Updated 22/03/2018 15:12 1 Comments

Buffalo g r people generates invalid names


It seems that the Buffalo resource generator is not using correct naming in some cases when using irregular plurals such as people:

$ buffalo g r people first_name:string last_name:string email:string identification:string


  • Uses Persons as the plural in person.go model.
  • Auto renderer looks for templates/persons/*.html template instead of templates/people/*.html
Updated 22/03/2018 20:26 10 Comments

CustomJS /CSS


I understand that this particular feature was mentioned in one of our earlier calls. I am not quite sure how it works (in connection with the DataSet Configuration App), what process we need to follow to use it, and what results we should expect from it. From our earlier discussion it seemed like we need to include it while using the app. Can we have a brief call to discuss this?

Updated 22/03/2018 15:08

Wanted features


Playback:

  • [ ] Fade in/out
  • [ ] More EQ controls (e.g. pitch, speed)
  • [ ] Cross-fade into the queue of a different instance
  • [ ] More options for controlling the playback? (e.g. hardware)
  • [ ] Hot keys for easy switching between windows (e.g. between tracks and EQ)

Data:

  • [ ] Listen for outside data changes and then sync
  • [ ] More sorting options (e.g. sort by ‘added to collection’)

Maybe:

  • [ ] Allow playback from inside the queue/history

Technologies we can use

Updated 22/03/2018 15:06

fatal error: concurrent map writes when multiple backends

krakend[4734]: goroutine 1053 [running]:
krakend[4734]: runtime.throw(0xc8a990, 0x15)
krakend[4734]: /var/opt/go/1.10/src/runtime/panic.go:619 +0x81 fp=0xc42061d920 sp=0xc42061d900 pc=0x42c921
krakend[4734]: runtime.mapassign_faststr(0xb74440, 0xc4205adda0, 0xc84b14, 0xd, 0xc4201eacb0)
krakend[4734]: /var/opt/go/1.10/src/runtime/hashmap_fast.go:703 +0x3e9 fp=0xc42061d990 sp=0xc42061d920 pc=0x40d529
krakend[4734]:, 0xc4201a0840, 0xc4205ca140, 0xc4205f85a6, 0xc4205f85c0, 0xc42064ac78)
krakend[4734]: /home/bartoszgolek/go/src/ +0x12c fp=0xc42061dbc0 sp=0xc42061d990 pc=0x96f96c
krakend[4734]:, 0xc4201a0840, 0xc4205ca140, 0x0, 0x0, 0x50)
krakend[4734]: /home/bartoszgolek/go/src/ +0xd9 fp=0xc42061dc88 sp=0xc42061dbc0 pc=0x95de49
krakend[4734]:, 0xc4201a0840, 0xc4205ca0f0, 0x412468, 0x10, 0xba0580)
krakend[4734]: /home/bartoszgolek/go/src/ +0x304 fp=0xc42061ddc0 sp=0xc42061dc88 pc=0x6dba54
krakend[4734]:, 0xc4201a0840, 0xc42049ca50, 0xc4201a0840, 0xc4203620d0, 0xc4201282d0)
krakend[4734]: /home/bartoszgolek/go/src/ +0x1df fp=0xc42061deb8 sp=0xc42061ddc0 pc=0x6dcd3f
krakend[4734]:, 0xc420020f60, 0xc4202dd770, 0xc42049ca50, 0xc420020fc0, 0xc420021020)
krakend[4734]: /home/bartoszgolek/go/src/ +0x84 fp=0xc42061dfb0 sp=0xc42061deb8 pc=0x6dad14
krakend[4734]: runtime.goexit()
krakend[4734]: /var/opt/go/1.10/src/runtime/asm_amd64.s:2361 +0x1 fp=0xc42061dfb8 sp=0xc42061dfb0 pc=0x459921
krakend[4734]: created by

I think it is connected to:

I’ve found that a better solution would be:

Updated 22/03/2018 23:37 11 Comments



data_store extension for data_reader_triplet:

  • Modification to data_reader_multi_images to make this extension easier while reusing the code as much as possible. The requirement for data_reader_triplet is mostly the same as for data_reader_multi_images. The only difference is the sample type, pair<vector<string>, label_t>, returned by reader->get_sample(idx). The label_t is int for multi_images and uint8_t for triplet.
  • Addition of data_store_triplet inheriting data_store_multi_images.
  • Override generic_data_reader::setup_data_store() for each derived class such that we do not need to include all the data_reader headers and associated data_store headers in the implementation file of the base class.
  • The data_store for triplet now at least passes the data_store setup(), but still crashes.

Updated 22/03/2018 15:01

WebAssembly API for JavaScript


Finally we have wasm-bindgen to expose Rust structs to JavaScript, so we should be able to create a Handlebars instance from JavaScript and call any API on it.

This makes it possible to create a JavaScript interface for handlebars-rust. The next step will be allowing JavaScript-implemented helpers and directives to be registered in handlebars-rust.

Updated 22/03/2018 14:51

Add a section on the deprecations app to the ember learn page


Currently the page has a section highlighting various internal Ember apps so people can browse the source to see Ember in action. This is also a good place to see what Ember apps we have if people want to contribute. Recently we’ve released the deprecations app at and it uses some interesting technologies such as prember and a JSON static site generator.

@serenaf or @mansona can provide a couple of bullet points to the more interesting aspects of the app.

The file to update is data/showcase.yml. Just follow the pattern set by the others. Name the screenshot showcase-deprecations.png and follow the image guidelines listed in showcase.yml.

Updated 22/03/2018 14:45

Checklist: Markdown Conversion


(spawned via discussion on #23)

Due to the conversion from Blogger, a lot of the posts are still in HTML format, which can cause problems:

  • The formatting will be off
  • They show as one long string of HTML
  • Other considerations (images, code samples, etc.) have not been covered.

So we’ll try to knock these out one at a time in the order of priority, since the site is live.

What we need to do for each post

When converting a post, you’ll need to:

  • Rename the post file from .html to .md
  • Convert HTML formatting to markdown formatting
  • Save images to assets/images/posts and reference them from there
  • Mark all code samples in markdown and indicate the language
  • Revise to clean up in other ways as necessary

We prefer many small PRs here, so tackle one or two posts at a time, and then check that post off on the list.

Prioritized List of Page Conversions

  • [ ] TBD
Updated 22/03/2018 14:36

[Pls send halp!] Cleaning up the test output


♻️ ♻️ ♻️ Please consider the environment before printing this Github issue. ♻️ ♻️ ♻️

<img width="1194" alt="screenshot_3_22_18__1_46_pm" src="">

Currently, the test output is very noisy – even for green builds. This makes it very difficult to find any actual failures, especially on small screens and on phones.

Let’s try to clean it up!

Here is the useless output I have found based on this test run. Leave a comment about what you are looking into (🔒) so others won’t duplicate the effort.

A lot of these probably have the same root cause (for example, because of how the default Ember.onError works) and probably require some refactoring in the test infrastructure.

  • Build phase
    • [ ] 'container' is imported by tmp/rollup-cache_path-I6PjJI6s.tmp/run_loop.js, but could not be resolved – treating it as an external dependency
    • [ ] 'ember-debug' is imported by tmp/rollup-cache_path-l0Z4sygU.tmp/index.js, but could not be resolved – treating it as an external dependency
    • [ ] 'assert' is imported from external module '@glimmer/util' but never used
    • [ ] 'assert' and 'SERIALIZATION_FIRST_NODE_STRING' are imported from external module '@glimmer/util' but never used
  • Test phase
    • [ ] WARNING: Library "magic" is already registered with Ember.
    • [ ] console.trace
    • [ ] DEPRECATION: Should not throw [deprecation id: test]
    • [ ] (node:6689) Warning: Possible EventEmitter memory leak detected. * listeners added. Use emitter.setMaxListeners() to increase limit
    • [ ] (node:6689) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: *): Error: Navigation Timeout Exceeded: 900ms exceeded
    • [ ] Error: Element .does-not-exist not found.
    • [ ] Error: Catch me
    • [ ] Error: the error
    • [ ] Testing paused. Use resumeTest() to continue.
    • [ ] The library version banner on app boot DEBUG: ------------------------------- DEBUG: Ember : * DEBUG: jQuery : * DEBUG: -------------------------------
    • [ ] Error while processing route: *
    • [ ] More context objects were passed than there are dynamic segments for the route: stink-bomb `
    • [ ] WARNING: Binding style attributes may introduce cross-site scripting vulnerabilities; please ensure that values being bound are properly escaped. For more information, including how to disable this warning, see Style affected: *
Updated 22/03/2018 20:56 3 Comments

Occasional 'naturalHeight' error first time app runs


Not sure why this is appearing in the console every so often:

Uncaught TypeError: Cannot read property 'naturalHeight' of undefined
    at VueComponent.objHeight (vuetify.js:13902)
    at VueComponent.imgHeight (vuetify.js:13974)
    at Watcher.get (vue.js:3144)
    at Watcher.evaluate (vue.js:3251)
    at VueComponent.computedGetter [as imgHeight] (vue.js:3507)
    at VueComponent.calcDimensions (vuetify.js:14004)
    at VueComponent.translate (vuetify.js:13990)
    at HTMLImageElement.<anonymous> (vuetify.js:13896)

Doesn’t seem to affect anything, but wonder what triggers it.

Updated 22/03/2018 14:13

Add attachment to note


Hi guys, I’ve seen that #250 introduced a way to drop images into the markdown editor to get them inserted as attachments. They are then also copied to the /images folder in the configured storage folder. It would be quite nice to have this feature for additional attachment types as well. E.g. I would like to include PDF files as attachments in notes. For me it would be enough if they were included as a link that can be opened by the OS (e.g. a file:// link). Any ideas how to do that? The background of the question is that my storage folder is within a folder synchronized by a cloud client, and I want to include attachments in it as well. Thanks for your help :)

Updated 22/03/2018 16:38 1 Comments

Replace lodash functions with ES6 equivalents


We should start using ES6 over lodash when we can. Unfortunately, not all of lodash’s functions have a suitable alternative yet, but a good portion do. Ideally we want as few dependencies as possible so the install is as small as possible.

`_.filter(this.languages, (language: Language): boolean =>` can be replaced with something along the lines of `this.languages.filter((language: Language): boolean =>`. `_.filter` and `_.reduce` both have ES6 equivalents.
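The replacement above can be sketched as follows; the `Language` shape and sample data here are hypothetical, just to make the snippet self-contained:

```typescript
// Hypothetical Language type and sample data, for illustration only.
interface Language {
  name: string;
  enabled: boolean;
}

const languages: Language[] = [
  { name: "TypeScript", enabled: true },
  { name: "CoffeeScript", enabled: false },
];

// lodash: _.filter(this.languages, (language: Language): boolean => language.enabled)
// ES6:    Array.prototype.filter, available on any array
const enabled = languages.filter((language: Language): boolean => language.enabled);

// lodash: _.reduce(enabled, (total, language) => total + 1, 0)
// ES6:    Array.prototype.reduce
const count = enabled.reduce((total: number): number => total + 1, 0);
```

Only the helpers without a native array equivalent would need to keep using lodash; the rest can be swapped out mechanically like this.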

Updated 22/03/2018 19:48 1 Comments

Discord admin commands authorization


So for commands like `$execconsolecommand`, how should CSMM handle permissions?

Currently there are three proposals/implementations:

  1. The Discord server owner can set a specific role (@ Admins, for example) in the Discord settings on the website. People in Discord with this role will have access to admin commands.

  2. CSMM - Discord ID: The server owner adds admins’ Discord IDs (or CSMM profiles) to a list. This list would work much like the current admin list but still be separate! This will not give these people full access to your server on CSMM.

  3. CSMM - Admin list: Server owners add people to the admin list that is currently implemented. This’ll give those people full access to CSMM as well as to Discord commands.

Updated 22/03/2018 14:38 1 Comments

dependency on `jest-validate` breaks tests in create-react-app


If you are using create-react-app and upgrade to lint-staged 7.0.0, all your tests start to fail because of TypeError: environment.dispose is not a function

The reason is that lint-staged 7.0.0 started to depend on jest-validate, which depends on jest-environment-node, and there is a clash between the different versions of jest-environment-node pulled in by CRA and lint-staged.

npm ls jest-environment-node


Updated 22/03/2018 20:09 2 Comments

Unknown BsonType exception when trying to save java.time.Instant field


Hello. I get this error when trying to save a document with a java.time.Instant field (I added the com.fasterxml.jackson.datatype.jsr310.JavaTimeModule to the mappers of JsonObject).

java.lang.IllegalStateException: Unknown BsonType for '1521726105.445000000'
    at io.vertx.ext.mongo.impl.codec.json.AbstractJsonCodec.writeValue(
    at io.vertx.ext.mongo.impl.codec.json.AbstractJsonCodec.lambda$writeDocument$1(
    at io.vertx.ext.mongo.impl.codec.json.JsonObjectCodec.lambda$forEach$0(
    at java.lang.Iterable.forEach(
    at io.vertx.ext.mongo.impl.codec.json.JsonObjectCodec.forEach(
    at io.vertx.ext.mongo.impl.codec.json.JsonObjectCodec.forEach(
    at io.vertx.ext.mongo.impl.codec.json.AbstractJsonCodec.writeDocument(
    at io.vertx.ext.mongo.impl.codec.json.AbstractJsonCodec.encode(
    at org.bson.codecs.BsonDocumentWrapperCodec.encode(
    at org.bson.codecs.BsonDocumentWrapperCodec.encode(
    at com.mongodb.operation.BulkWriteBatch$WriteRequestEncoder.encode(
    at com.mongodb.operation.BulkWriteBatch$WriteRequestEncoder.encode(
    at org.bson.codecs.BsonDocumentWrapperCodec.encode(
    at org.bson.codecs.BsonDocumentWrapperCodec.encode(
    at com.mongodb.connection.BsonWriterHelper.writeDocument(
    at com.mongodb.connection.BsonWriterHelper.writePayload(
    at com.mongodb.connection.CommandMessage.encodeMessageBodyWithMetadata(
    at com.mongodb.connection.RequestMessage.encode(
    at com.mongodb.connection.InternalStreamConnection.sendAndReceiveAsync(
    at com.mongodb.connection.UsageTrackingInternalConnection.sendAndReceiveAsync(
    at com.mongodb.connection.DefaultConnectionPool$PooledConnection.sendAndReceiveAsync(
    at com.mongodb.connection.CommandProtocolImpl.executeAsync(
    at com.mongodb.connection.DefaultServer$DefaultServerProtocolExecutor.executeAsync(
    at com.mongodb.connection.DefaultServerConnection.executeProtocolAsync(
    at com.mongodb.connection.DefaultServerConnection.commandAsync(
    at com.mongodb.operation.MixedBulkWriteOperation.executeCommandAsync(
    at com.mongodb.operation.MixedBulkWriteOperation.executeBatchesAsync(
    at com.mongodb.operation.MixedBulkWriteOperation.access$900(
    at com.mongodb.operation.MixedBulkWriteOperation$2$
    at com.mongodb.operation.OperationHelper.validateWriteRequests(
    at com.mongodb.operation.MixedBulkWriteOperation$
    at com.mongodb.operation.OperationHelper$7.onResult(
    at com.mongodb.operation.OperationHelper$7.onResult(
    at com.mongodb.connection.DefaultServer$1.onResult(
    at com.mongodb.connection.DefaultServer$1.onResult(
    at com.mongodb.internal.async.ErrorHandlingResultCallback.onResult(
    at com.mongodb.connection.DefaultConnectionPool.openAsync(
    at com.mongodb.connection.DefaultConnectionPool.getAsync(
    at com.mongodb.connection.DefaultServer.getConnectionAsync(
    at com.mongodb.binding.AsyncClusterBinding$AsyncClusterBindingConnectionSource.getConnection(
    at com.mongodb.async.client.ClientSessionBinding$SessionBindingAsyncConnectionSource.getConnection(
    at com.mongodb.operation.OperationHelper.withConnectionSource(
    at com.mongodb.operation.OperationHelper.access$100(
    at com.mongodb.operation.OperationHelper$AsyncCallableWithConnectionAndSourceCallback.onResult(
    at com.mongodb.operation.OperationHelper$AsyncCallableWithConnectionAndSourceCallback.onResult(
    at com.mongodb.internal.async.ErrorHandlingResultCallback.onResult(
    at com.mongodb.async.client.ClientSessionBinding$1.onResult(
    at com.mongodb.async.client.ClientSessionBinding$1.onResult(
    at com.mongodb.binding.AsyncClusterBinding$1.onResult(
    at com.mongodb.binding.AsyncClusterBinding$1.onResult(
    at com.mongodb.connection.BaseCluster$ServerSelectionRequest.onResult(
    at com.mongodb.connection.BaseCluster.handleServerSelectionRequest(
    at com.mongodb.connection.BaseCluster.selectServerAsync(
    at com.mongodb.binding.AsyncClusterBinding.getAsyncClusterBindingConnectionSource(
    at com.mongodb.binding.AsyncClusterBinding.getWriteConnectionSource(
    at com.mongodb.async.client.ClientSessionBinding.getWriteConnectionSource(
    at com.mongodb.operation.OperationHelper.withConnection(
    at com.mongodb.operation.MixedBulkWriteOperation.executeAsync(
    at com.mongodb.async.client.AsyncOperationExecutorImpl$2.onResult(
    at com.mongodb.async.client.AsyncOperationExecutorImpl$2.onResult(
    at com.mongodb.async.client.ClientSessionHelper.createClientSession(
    at com.mongodb.async.client.ClientSessionHelper.withClientSession(
    at com.mongodb.async.client.AsyncOperationExecutorImpl.execute(
    at com.mongodb.async.client.MongoCollectionImpl.executeSingleWriteRequest(
    at com.mongodb.async.client.MongoCollectionImpl.executeInsertOne(
    at com.mongodb.async.client.MongoCollectionImpl.insertOne(
    at com.mongodb.async.client.MongoCollectionImpl.insertOne(
    at io.vertx.ext.mongo.impl.MongoClientImpl.insertWithOptions(
    at io.vertx.ext.mongo.impl.MongoClientImpl.insert(
    at com.webgenerals.tsapp.microservices.subscriptions.impl.SubscriptionServiceImpl.saveSubscription(
    at io.vertx.ext.web.impl.RouteImpl.handleContext(
    at io.vertx.ext.web.impl.RoutingContextImplBase.iterateNext(
    at io.vertx.ext.web.handler.impl.BodyHandlerImpl$BHandler.doEnd(
    at io.vertx.ext.web.handler.impl.BodyHandlerImpl$BHandler.end(
    at io.vertx.ext.web.handler.impl.BodyHandlerImpl.lambda$handle$0(
    at io.vertx.core.http.impl.HttpServerRequestImpl.handleEnd(
    at io.vertx.core.http.impl.Http1xServerConnection.handleLastHttpContent(
    at io.vertx.core.http.impl.Http1xServerConnection.handleContent(
    at io.vertx.core.http.impl.Http1xServerConnection.processMessage(
    at io.vertx.core.http.impl.Http1xServerConnection.handleMessage(
    at io.vertx.core.http.impl.HttpServerImpl$ServerHandlerWithWebSockets.handleMessage(
    at io.vertx.core.http.impl.HttpServerImpl$ServerHandlerWithWebSockets.handleMessage(
    at io.vertx.core.impl.ContextImpl.lambda$wrapTask$2(
    at io.vertx.core.impl.ContextImpl.executeFromIO(
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(
    at io.netty.util.concurrent.SingleThreadEventExecutor$
Updated 22/03/2018 14:35
