Contribute to Open Source. Search issue labels to find the right project for you!

Add SSL on AWS domain


AWS Certificate Manager – Deploy SSL/TLS-Based Apps on AWS


Updated 30/04/2017 08:05

kubernetes service account access

proudcity/wp-proudcity: commands are not running in Jenkins because of the RBAC permissions changes introduced in Kubernetes 1.6.0.

I set this up, so this should be fixed, but either the config is wrong, the host servers need to be restarted, or something else is going on.

Updated 29/04/2017 01:08

Fix npm publish-canary


Please walk through the following steps for reporting an issue about Orion:

Short description of your issue:

Whenever CI runs on master, it fails with: fatal: No tags can describe 'b1573875151199ee807070d943059103b121d19f'.

Updated 28/04/2017 20:43

Document on debugging Hadrian


I think it would be nice to have a document on how to debug Hadrian. For example, what tools you typically use to debug it, whether there is some magic switch in either Hadrian or GHC that provides useful information about the build etc.

While it would take quite some time to write a complete guide, we can always start with something trivial and build it up incrementally. It should not be a burden.

Updated 28/04/2017 12:53 1 Comments

Reduce Yapsy log severity when plugins are unavailable


On Windows we are not using hytra for tracking, so the plugin export system is not available there yet. Yapsy nevertheless tries to load the plugins, and when it fails it writes errors to the console and log that might scare our users.

ERROR 2017-04-27 17:35:34,581 PluginManager 58812 57100 Unable to import plugin: C:\conda3\envs\ilastik-1_2\ilastik-meta\ilastik\ilastik\plugins_default\tracking_h5_event_export
Traceback (most recent call last):
  File "c:\conda3\envs\ilastik-1_2\lib\site-packages\yapsy-1.10.423-py2.7.egg\yapsy\", line 488, in loadPlugins
    candidate_module = imp.load_module(plugin_module_name,plugin_file,candidate_filepath+".py",("py","r",imp.PY_SOURCE))
  File "C:\conda3\envs\ilastik-1_2\ilastik-meta\ilastik\ilastik\plugins_default\", line 8, in <module>
    from hytra.core.jsongraph import getMappingsBetweenUUIDsAndTraxels, getMergersDetectionsLinksDivisions, getMergersPerTimestep, getLinksPerTimestep, getDetectionsPerTimestep, getDivisionsPerTimestep
ImportError: No module named hytra.core.jsongraph
ERROR 2017-04-27 17:35:34,584 PluginManager 58812 57100 Unable to import plugin: C:\conda3\envs\ilastik-1_2\ilastik-meta\ilastik\ilastik\plugins_default\tracking_mamut_export
Traceback (most recent call last):
  File "c:\conda3\envs\ilastik-1_2\lib\site-packages\yapsy-1.10.423-py2.7.egg\yapsy\", line 488, in loadPlugins
    candidate_module = imp.load_module(plugin_module_name,plugin_file,candidate_filepath+".py",("py","r",imp.PY_SOURCE))
  File "C:\conda3\envs\ilastik-1_2\ilastik-meta\ilastik\ilastik\plugins_default\", line 5, in <module>
    from mamutexport.mamutxmlbuilder import MamutXmlBuilder
ImportError: No module named mamutexport.mamutxmlbuilder

We might want to make this look less severe, but we shouldn’t hide it completely.
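One way to make this look less severe without hiding it, sketched here with Python's stdlib logging (assuming Yapsy logs through the "yapsy" logger and keeps the "Unable to import plugin" message text shown above): demote just those records from ERROR to WARNING with a logging filter.

```python
import logging

class DemotePluginImportErrors(logging.Filter):
    """Downgrade Yapsy's 'Unable to import plugin' errors to warnings.

    The plugins are genuinely optional on Windows, so a warning is enough;
    any other Yapsy error keeps its original severity.
    """

    def filter(self, record):
        if record.levelno == logging.ERROR and \
                "Unable to import plugin" in record.getMessage():
            record.levelno = logging.WARNING
            record.levelname = logging.getLevelName(logging.WARNING)
        return True  # never drop the record, just relabel it

# One filter on the "yapsy" logger covers all plugin load attempts.
logging.getLogger("yapsy").addFilter(DemotePluginImportErrors())
```

The traceback text still reaches the log, so debugging information is not lost; only the severity label changes.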

Updated 28/04/2017 12:45

add PR preview build


To buildbot, just like the cookbook preview we already have.

It would be good to have previews of:

  • notebooks (only the changed files)
  • doxygen

This will make reviewing much easier, in addition to giving us a test for the notebooks at the PR level.

Has to be done by core dev with buildbot access, so either of @karlnapf @iglesias @lisitsyn @vigsterkr

Fernando, we discussed this, so I assigned you

Updated 26/04/2017 21:19 1 Comments

Optimize Docker Host infrastructure


Pain points:

  • randomly choosing which type of connection will be tested (unix vs. tcp)
  • CV published images with local docker (unix or tcp) require self-registration, but with multiple testing workers this severely interferes with other testing
  • local docker (unix or tcp) has an issue with SELinux, as foreman-selinux is not compatible with docker-selinux, so let's avoid using local docker for tcp at least (always external tcp vs. local unix socket testing)
  • unix socket docker has an outstanding bug: 400 malformed header pending (due to an outdated excon gem)

Remedies:

  • separate unix socket vs. tcp docker host tests (by introducing UnixSocketDockerTestCase)
  • avoid using unix socket docker with CV published images (as you have to register satellite to itself)
  • externalize the tcp docker host to avoid registration to itself; the external tcp docker will be used for testing the CV published images
  • external tcp dockers will be provided the same way as client VMs are, using a context manager DockerHostMachine bound to a libvirt image of a preconfigured docker host (or better: atomic)

Updated 28/04/2017 12:36 3 Comments

Unskip tests once new analyzer is published


A number of dart_style tests are currently disabled because they only pass when dart_style is run against the bleeding-edge analyzer inside the Dart repo.

When used with an analyzer that can parse all the new syntax, they do pass (yay), but we want them to always run. As soon as the analyzer folks publish a version of analyzer to pub that parses all the latest syntax, upgrade dart_style to rely on that and then remove the “(skip: …)” markers from all of the skipped tests.

Updated 25/04/2017 23:54

Suggestion: need label for timed-out issues


IMHO it would be useful to have a separate cemetery for timed-out issues. Sometimes an issue becomes accepted and everyone agrees it is useful, but if it is not implemented for a long time, it gets closed. I can understand this practice, but it would be useful to be able to track such issues back somehow; right now useful ideas just get lost.

I think a new label would solve this problem.


Updated 26/04/2017 03:44 9 Comments

Add FOSSA License Scanner to CI


As part of foundation ops, we’ve been looking into ways to add open source license compliance processes to the org in a way that lets JSF projects maintain their autonomy. We’re currently looking at adopting FOSSA to help us continuously scan for licenses across JSF projects/their deep dependencies and automate compliance work.

FOSSA works like a CI or license linter – on each commit it scans all source files in a project and its dependencies for license violations. It can then automatically trigger Slack notifications, block PRs that bring in deps with incompatible licenses, collect raw copyright headers to generate attribution files/notices and more. They’re focused a lot on making compliance really easy to run in the background for developers, and they’re already running on a bunch of open source JavaScript projects.

Updated 25/04/2017 15:27 1 Comments

CI improvements


I think we may need to drop the Mac builds. They take a very long time, probably due to limited hardware from Travis.

It would also be interesting to look into tooling for code coverage reports. We have a lot of untested code at the moment, but I’d like to improve that over time.

Updated 19/04/2017 20:47 3 Comments

Look into modeling protocols on Wikidata


Perhaps start with examples that are cited in the literature or on Wikipedia.

Note that protocols can have DOIs, but not all of them do.

Updated 19/04/2017 06:25

Unhandled Exception: System.IO.FileNotFoundException


Hello, I successfully built a native application with /t:LinkNative. Unfortunately I get the exception below when I try to run the application. What should I do?

    tablonzo@coco:~/Desktop/DJ_PRAWDZIWY_NATIVE/bin/Release/netcoreapp1.1/native$ ./DJCompositeFeedSrv_NETCore
    Unhandled Exception: System.IO.FileNotFoundException: Could not load file or assembly 'System.Xml.XmlDocument'. The system cannot find the file specified.
    File name: 'System.Xml.XmlDocument'
       at DJCompositeFeedSrv_NETCore!<BaseAddress>+0x49af62
       at DJCompositeFeedSrv_NETCore!<BaseAddress>+0x496f7d
       at DJCompositeFeedSrv_NETCore!<BaseAddress>+0x48457d
       at DJCompositeFeedSrv_NETCore!<BaseAddress>+0x48378d
       at DJCompositeFeedSrv_NETCore!<BaseAddress>+0x47ac5a
    Aborted (core dumped)

Updated 25/04/2017 16:50 1 Comments

Better exception handling for wagon's serve command when using -f option


To catch those situations when the daemonized process is no longer running but the file is still hanging around…

# Error description:
No such process

# Backtrace:

    /usr/local/src/locomotivecms_wagon/lib/locomotive/wagon/commands/serve_command.rb:55:in `kill'
    /usr/local/src/locomotivecms_wagon/lib/locomotive/wagon/commands/serve_command.rb:55:in `stop'
    /usr/local/src/locomotivecms_wagon/lib/locomotive/wagon/commands/serve_command.rb:22:in `start'
    /usr/local/src/locomotivecms_wagon/lib/locomotive/wagon/commands/serve_command.rb:13:in `start'
    /usr/local/src/locomotivecms_wagon/lib/locomotive/wagon.rb:39:in `serve'
    /usr/local/src/locomotivecms_wagon/lib/locomotive/wagon/cli.rb:286:in `serve'
    ~/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/thor-0.19.4/lib/thor/command.rb:27:in `run'
    ~/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/thor-0.19.4/lib/thor/invocation.rb:126:in `invoke_command'
    ~/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/thor-0.19.4/lib/thor.rb:369:in `dispatch'
    ~/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/thor-0.19.4/lib/thor/base.rb:444:in `start'
    /usr/local/src/locomotivecms_wagon/bin/wagon:12:in `<top (required)>'
    ~/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/bin/wagon:23:in `load'
    ~/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/bin/wagon:23:in `<top (required)>'
    ~/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/bundler-1.14.6/lib/bundler/cli/exec.rb:74:in `load'
    ~/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/bundler-1.14.6/lib/bundler/cli/exec.rb:74:in `kernel_load'
    ~/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/bundler-1.14.6/lib/bundler/cli/exec.rb:27:in `run'
    ~/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/bundler-1.14.6/lib/bundler/cli.rb:335:in `exec'
    ~/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/bundler-1.14.6/lib/bundler/vendor/thor/lib/thor/command.rb:27:in `run'
    ~/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/bundler-1.14.6/lib/bundler/vendor/thor/lib/thor/invocation.rb:126:in `invoke_command'
    ~/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/bundler-1.14.6/lib/bundler/vendor/thor/lib/thor.rb:359:in `dispatch'
    ~/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/bundler-1.14.6/lib/bundler/cli.rb:20:in `dispatch'
    ~/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/bundler-1.14.6/lib/bundler/vendor/thor/lib/thor/base.rb:440:in `start'
    ~/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/bundler-1.14.6/lib/bundler/cli.rb:11:in `start'
    ~/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/bundler-1.14.6/exe/bundle:32:in `block in <top (required)>'
    ~/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/bundler-1.14.6/lib/bundler/friendly_errors.rb:121:in `with_friendly_errors'
    ~/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/bundler-1.14.6/exe/bundle:24:in `<top (required)>'
    ~/.rbenv/versions/2.3.1/bin/bundle:23:in `load'
    ~/.rbenv/versions/2.3.1/bin/bundle:23:in `<main>'
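The fix amounts to treating a dead process behind the PID file as “already stopped”. A minimal sketch of that behaviour, written in Python rather than wagon’s Ruby (the function and messages are illustrative, not wagon’s API):

```python
import os
import signal

def stop_daemon(pid_file):
    """Stop a daemonized server, tolerating a stale PID file."""
    try:
        with open(pid_file) as f:
            pid = int(f.read().strip())
        os.kill(pid, signal.SIGTERM)
    except (FileNotFoundError, ValueError):
        print("No PID file (or it is unreadable); nothing to stop.")
    except ProcessLookupError:
        # The daemon died but left its PID file behind: the "No such process"
        # case from the backtrace above. Clean up instead of crashing.
        print("Process is no longer running; removing stale PID file.")
        os.remove(pid_file)
```

In wagon itself the equivalent would be rescuing Errno::ESRCH around the Process.kill call in serve_command.rb and deleting the leftover file.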
Updated 24/04/2017 18:05

`az` help text


When there are mistakes in uses of the az cli, the output often comes out in a way that is not too helpful. As a random example, see the below. It looks like the output is made by printing the basic command+subs and then the list of arguments with some automatic line wrapping. The thing is that the commands tend to be very nested and therefore very long, so there’s little room for the argument list. In particular, note that in my example the destination and source mandatory arguments are wrapped in a very confusing way.

It would be much better if the arguments were:

  • specified at a fixed indentation from the left side (or maybe following some “arguments:” header)
  • listed with only one argument per line
  • listed in a better and consistent order, e.g. mandatory arguments first, optional ones next, and finally generic ones (-h / --output / etc.)

(As a side note, this style of command line is very fitting for someone coming from a PowerShell background, but the overall design is weird in a Linux world: for example the lack of non-flagged (“positional”) arguments, and flags that take multiple values, as --metadata does below.)

$ az storage blob upload-batch --destination "" --source "..."
usage: az storage blob upload-batch [-h] [--output {json,tsv,table,jsonc}]
                                    [--verbose] [--debug] [--query JMESPATH]
                                    [--lease-id LEASE_ID]
                                    [--account-key ACCOUNT_KEY]
                                    [--content-language CONTENT_LANGUAGE]
                                    [--connection-string CONNECTION_STRING]
                                    [--account-name ACCOUNT_NAME]
                                    [--max-connections MAX_CONNECTIONS]
                                    [--if-modified-since IF_MODIFIED_SINCE]
                                    [--pattern PATTERN] --destination
                                    DESTINATION [--type BLOB_TYPE] --source
                                    SOURCE [--validate-content]
                                    [--content-encoding CONTENT_ENCODING]
                                    [--content-md5 CONTENT_MD5]
                                    [--if-match IF_MATCH]
                                    [--content-disposition CONTENT_DISPOSITION]
                                    [--metadata METADATA [METADATA ...]]
                                    [--dryrun] [--if-none-match IF_NONE_MATCH]
                                    [--maxsize-condition MAXSIZE_CONDITION]
                                    [--sas-token SAS_TOKEN]
                                    [--content-type CONTENT_TYPE]
                                    [--content-cache-control CONTENT_CACHE_CONTROL]
                                    [--timeout TIMEOUT]
                                    [--if-unmodified-since IF_UNMODIFIED_SINCE]
az storage blob upload-batch: error: incorrect usage: destination cannot be a blob url
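Since az appears to be built on Python’s argparse, the proposed layout can be approximated with argument groups, which render in creation order with one argument per line under their own headers. This is a hedged sketch using a few of the flags above, not the actual az implementation:

```python
import argparse

# Hypothetical re-grouping of the flags shown above; not the real az code.
parser = argparse.ArgumentParser(prog="az storage blob upload-batch",
                                 add_help=False)

required = parser.add_argument_group("required arguments")
required.add_argument("--destination", required=True, metavar="DESTINATION")
required.add_argument("--source", required=True, metavar="SOURCE")

optional = parser.add_argument_group("optional arguments")
optional.add_argument("--pattern", metavar="PATTERN")
optional.add_argument("--type", metavar="BLOB_TYPE")
optional.add_argument("--dryrun", action="store_true")

generic = parser.add_argument_group("generic arguments")
generic.add_argument("-h", "--help", action="help")
generic.add_argument("--output", choices=["json", "tsv", "table", "jsonc"])
generic.add_argument("--verbose", action="store_true")

print(parser.format_help())
```

The usage line still wraps, but the help body now lists mandatory arguments first under a clear header, one per line.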
Updated 14/04/2017 14:46 1 Comments

Make comparisons linkable


e.g. someone could navigate directly to “highqualitygifs - reactiongifs”, or any other valid combination.

This could be tough to configure:

  • Get Apache or Flask to return the index page for URLs like the one above.
  • Update the Flask server to host the API endpoints at /api. They aren’t prefixed in development because I’m not sure how to change the prefix for different environments with Flask. In production, the WSGI app is mounted at /api because I wasn’t able to figure out how to mount it at root (/) and still serve the index.html from Apache.

On the frontend, it would mean adapting to use the latest version of react-router.
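A stdlib-only sketch of the routing idea (the real stack is Flask behind Apache; INDEX_HTML and the /api prefix handling here are illustrative): anything that is not an /api path falls through to the index page, so a URL like /highqualitygifs-reactiongifs loads the app and lets the frontend router resolve it.

```python
# Placeholder for the built index.html; in production Apache would serve it.
INDEX_HTML = b"<!doctype html><div id='root'></div>"

def app(environ, start_response):
    """WSGI app: API under /api, everything else falls back to the SPA index."""
    path = environ.get("PATH_INFO", "/")
    if path.startswith("/api/"):
        start_response("200 OK", [("Content-Type", "application/json")])
        return [b'{"endpoint": "%s"}' % path.encode()]
    # Any other path ("/", "/highqualitygifs-reactiongifs", ...) is assumed
    # to be a client-side route, so return the index page and let the
    # frontend router take over.
    start_response("200 OK", [("Content-Type", "text/html")])
    return [INDEX_HTML]
```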

Updated 14/04/2017 03:30

[flow] upgrade from 0.43 to 0.44


There are three minor issues:

  yarn flow v0.21.3
$ flow 
Launching Flow server for /home/ubuntu/debugger.html
Spawned flow server (pid=18553)
Logs will go to /tmp/flow/zShomezSubuntuzSdebugger.html.log
179:   if (Object.keys(prefsTabs).length == 0) {
           ^^^^^^^^^^^^^^^^^^^^^^ element of Object.keys. Expected object instead of
179:   if (Object.keys(prefsTabs).length == 0) {
                       ^^^^^^^^^ empty array literal

204:     let cursor = getSearchCursor(cm, state.query, pos, modifiers);
                                          ^^^^^^^^^^^ null. This type is incompatible with the expected param type of
 12: function getSearchCursor(cm, query: string, pos, modifiers: SearchModifiers) {
                                         ^^^^^^ string

211:       cursor = getSearchCursor(cm, state.query, location, modifiers);
                                        ^^^^^^^^^^^ null. This type is incompatible with the expected param type of
 12: function getSearchCursor(cm, query: string, pos, modifiers: SearchModifiers) {
                                         ^^^^^^ string

Found 3 errors
error Command failed with exit code 2.

yarn flow returned exit code 1
Updated 13/04/2017 21:07

Run prettier on CI


It would be great if we could run prettier on CI before we do any linting.

This helps with the use case where the code is not already prettified. I mention this because it is somewhat annoying to prettify code pre-commit these days. It’s a small thing, but it would be nice to move it to CI.

Updated 13/04/2017 21:04

Use inbuilt python element tree


For some reason we used the libxml ElementTree Python compatibility module instead of the actual cElementTree from Python 2 and 3. I have no idea why. It makes the builds fail when lxml is missing, and we aren’t really using anything special from libxml. Let’s use what’s included in Python.

This should help in the future when we want to enable SSG builds on Windows and MacOSX. It’s not as simple to install libxml2 on these platforms as it is on Linux distributions.

I haven’t replaced it in all the tools, there are some tools that we no longer use that I skipped over. I will create another PR to remove them from the repo.
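The replacement import boils down to a two-line fallback (the element names in the snippet are illustrative, not taken from the SSG sources):

```python
# Prefer the C implementation where it still exists (Python 2 / <= 3.8) and
# fall back to the pure-Python one; either way, no lxml is needed.
try:
    import xml.etree.cElementTree as ElementTree
except ImportError:
    import xml.etree.ElementTree as ElementTree

# Illustrative XML, just to show the stdlib parser at work.
root = ElementTree.fromstring('<Benchmark><Rule id="rule_1"/></Benchmark>')
print(root.find("Rule").get("id"))
```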

Updated 26/04/2017 17:34 3 Comments

Restructure the tests


Turn the current contents of tests/ into examples, and put the JS tests under test/.
Unit-testing frameworks such as mocha seem ill-suited to the GUI, so keep the current (HTML-style) tests.

Also consider publishing the tests via gh-pages.

Updated 30/04/2017 02:05

App user login & transaction security [13]

  • [ ] user signs in with username and password
  • [ ] store credentials in android keystore (given devices are encrypted):
  • [ ] secure app with 4 digit pin (-> move to own issues)
  • [ ] define how exactly authentication should work - probably basic auth
  • [ ] define how to deal with lost passwords (see hzi comment below)

Authentication research

When (not) to Use OAuth2

You should only use OAuth if you actually need it. If you are building a service where you need to use a user’s private data that is stored on another system, use OAuth. If not, you might want to rethink your approach!

Don’t use OAuth2 for sensitive data: if you’re building an application that holds sensitive data (like social security numbers, etc.), consider using OAuth 1.0a instead of OAuth2; it’s much more secure.

There are other forms of authentication for both websites and API services that don’t require as much complexity and can offer similar levels of protection in certain cases, namely HTTP Basic Authentication and HTTP Digest Authentication. source:

OAuth2 introductions & examples

  • intro to oauth2:
  • Spring REST API + OAuth2 + AngularJS:
  • Android OAuth2 with AccountManager & Retrofit:

    Using basic auth vs. digest auth

Use digest auth only when it’s not possible to force SSL. source:

    Things to deal with when using basic auth

  • In order to use the service, the client needs to keep the password somewhere in clear text to send it along with each request.
  • The verification of a password should be very slow (to counter brute-force attacks), which would hamper the scalability of your service. Security token validation, on the other hand, can be quick. source:
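The last point can be made concrete with a stdlib-only sketch: a deliberately slow PBKDF2 password check next to a cheap HMAC token check. The iteration count and token scheme are illustrative, not a recommendation for this app:

```python
import hashlib
import hmac
import os

# Deliberately slow: a high PBKDF2 iteration count makes brute force
# expensive, but also makes every basic-auth request expensive to verify.
def hash_password(password, salt, iterations=200_000):
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

SECRET = os.urandom(32)  # server-side key; key management is out of scope here

def make_token(user):
    # Cheap to create and to verify: one HMAC over the user name.
    return hmac.new(SECRET, user.encode(), "sha256").hexdigest()

def check_token(user, token):
    # Constant-time comparison; orders of magnitude faster than PBKDF2.
    return hmac.compare_digest(make_token(user), token)
```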
Updated 11/04/2017 14:30 1 Comments

Reduce bundle size


Right now, we have a bundle size of 777KB for the minified version of this library. That is probably too much.

Mid-term goals:

  • [ ] Move to webpack ( #103 )
  • [ ] Remove dependencies that are obsolete
  • [ ] Investigate minification options
  • [ ] Exclude tests (are they included right now?)
  • [ ] Split up the library into modules that can be included as required
  • [ ] Investigate further possibilities

Updated 10/04/2017 23:48 2 Comments
