Contribute to Open Source. Search issue labels to find the right project for you!

qt5-qtwebengine for aarch64 and armhf isn't in the binary repo


qt5-qtwebengine is the whole Chromium browser, with a bit of Qt glue. It takes hours to compile, even with a fast CPU. I’ve tried to compile it for x86_64, armhf and aarch64.


  • x86_64: compiled, and is available in the repo now (I did not test using it; doing so would be helpful)
  • armhf: failed at the very end when linking libQt5WebEngineCore (linking ..../lib/): collect2: fatal error: ld terminated with signal 11 [Segmentation fault], core dumped; compilation terminated; distcc[12345] ERROR: compile (null) on localhost failed
  • aarch64: hangs forever (I left it there for hours) on [327/237] LINK gn, fairly early, for some reason.

NOTE: I gave it 50 GB of swap just in case, so running out of memory shouldn’t be the problem (an out-of-memory failure would also have produced different output).

How to help

Try to compile it on your own PC for foreign arches. Especially for aarch64 it fails early for me, so it would be good to know whether it’s just me or not. If possible, trying to compile on a native device (e.g. @opendata26 with his Shield TV) would also be helpful (does the build run through there?).

To build it, change arch="x86_64" back to arch="all" in aports/main/qt5-qtwebengine/APKBUILD, then run either:

  • pmbootstrap build --arch=aarch64 qt5-qtwebengine
  • pmbootstrap build --arch=armhf qt5-qtwebengine

Updated 14/12/2017 20:57

Look into registries of conferences


A quick web search brings up listing pages (…charlottesville/conferences/ and the like), but none of them provide open data, and some of them look less trustworthy than others.

Updated 14/12/2017 18:20

[WIP] Add fossildb


DO NOT MERGE. This is not yet ready, because:

  • we need to adapt our development server to use fossildb, too, and
  • braingames-libs should probably be merged and the current master version used here then.

Mailable description of changes:

  • TODO

Steps to test:

  • TODO


  • related to scalableminds/kube#65

  • [ ] Ready for review
Updated 14/12/2017 19:52

Mat-icon: broken links for Material icons font


Bug, feature request, or proposal:


What is the expected behavior?

References to the Material icons font should link to the correct URL:

  • on the Overview tab, in the Font icons with ligatures paragraph
  • on the API tab, in the Directives paragraph

What is the current behavior?

The current links do not point to the correct URL.

Is there anything else we should know?

The text on the API tab is correct, but the link is not. FYI, the same erroneous link is used on the Overview tab.

Updated 14/12/2017 17:05

Introduce API-Docu, build automatically


User Story

As a:


I want:

to have an API-Documentation

so that:

I can see all URL-Paths and their functions.

Acceptance criteria:

  • [ ] All API functions are commented
  • [ ] Build the documentation at build time
  • [ ] The documentation should be generated only on tagged builds
  • [ ] Publish the build (as GH Pages)
  • [ ] It should be possible to switch between versions of the documentation

Additional info:

Please tag this issue if you are sure which tag(s) it belongs to.

Updated 14/12/2017 11:32

Lean issues


I updated to the latest Lean version and it doesn’t run very well. At first it was taking ages to display tactic goals, and I now get the error `[Lean] excessive memory consumption detected at 'expression equality test' (potential solution: increase memory consumption threshold)` for all of the theorems in my xnat.lean file. Scott seemed to have a similar issue but I wasn’t quite sure how he solved it. When I ran leanpkg build to install mathlib (I don’t know if I need to do this every time I update?) it froze parsing line 12 of 20171124-decimals.lean (and eventually froze my computer).

Edit: Restarting seems to fix the memory issue. I wonder if there is some kind of memory leak.

Updated 14/12/2017 03:47

Preprocessor: Per-token extension queries dramatically slow down preprocessor


Context: I’m trying to switch to glslang preprocessor from mojoshader preprocessor that we use in our build pipeline; performance is important since full preprocessed source is the compilation cache key, so we preprocess all our shaders to check if they need rebuilding. On a specific shader pack, mojoshader takes ~90ms including various overheads, glslang preprocessor takes ~280ms.

I found that a very significant portion of this time is spent in TPpContext::tStringInput::scan (and possibly other call sites) calling extensionTurnedOn (which does std::string construction and a std::map lookup).

Replacing extensionTurnedOn body with return false drops preprocessing time from 280ms to 160ms which makes me think that all supported extensions should be cached in primitive variables somewhere.
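The caching suggestion can be sketched as follows (in Python for brevity; the extension names are just examples, and the real fix would live in glslang’s C++ preprocessor): resolve each extension flag once up front, so the per-token hot path reads a primitive boolean instead of constructing a string and doing a map lookup.

```python
# Illustrative sketch only: names are examples, and the real change
# would be made in glslang's C++ code, not Python.
class ExtensionCache:
    def __init__(self, extension_map):
        # Pay the string/map lookup cost once, at construction time.
        self.include_directive = extension_map.get(
            "GL_GOOGLE_include_directive", False)
        self.shading_language_420pack = extension_map.get(
            "GL_ARB_shading_language_420pack", False)

def scan_token(cache):
    # The per-token hot path now reads plain attributes.
    return cache.include_directive
```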

Updated 14/12/2017 07:24 1 Comments

Deprecation warnings in _theming.scss


Bug, feature request, or proposal:


What is the expected behavior?

No warnings during build.

What is the current behavior?

I see the following self-explanatory deprecation warnings:

```
ng build --prod
10% building modules 4/5 modules 1 active ...\<project_root>l\src\theme.scss
DEPRECATION WARNING: Passing null, a non-string value, to unquote() will be an error in future versions of Sass.
        on line 1483 of <project_root>/node_modules/@angular/material/theming.scss
(the same warning is repeated many times throughout the build)
Date: 2017-12-13T20:43:31.018Z Hash: 0fe6fd8d1d7e608d6e79 Time: 112769ms
chunk {0} polyfills.0c6625b3fc183e5d4767.bundle.js (polyfills) 96.4 kB [initial] [rendered]
chunk {1} main.9b8205e180b9751fc93c.bundle.js (main) 1.47 MB [initial] [rendered]
chunk {2} styles.525ebb3f34a68462be9d.bundle.css (styles) 46 kB [initial] [rendered]
chunk {3} inline.b240d1c2aa548af8fc2a.bundle.js (inline) 1.45 kB [entry] [rendered]
```

What are the steps to reproduce?

1. Create a new app with CLI.
2. Add Material packages to `package.json`:

```
"dependencies": {
  "@angular/animations": "^5.1.1",
  "@angular/cdk": "^5.0.1",
  "@angular/common": "^5.1.1",
  "@angular/compiler": "^5.1.1",
  "@angular/core": "^5.1.1",
  "@angular/flex-layout": "2.0.0-beta.10-4905443",
  "@angular/forms": "^5.1.1",
  "@angular/http": "^5.1.1",
  "@angular/material": "^5.0.1",
  "@angular/platform-browser": "^5.1.1",
  "@angular/platform-browser-dynamic": "^5.1.1",
  "@angular/router": "^5.1.1",
  "core-js": "^2.5.3",
  "rxjs": "^5.5.5",
  "zone.js": "^0.8.14"
},
```

3. Create a custom `theme.scss` in `src` folder and register it in `.angular-cli.json`:

```
@import '~@angular/material/theming';

$typography-config-default: mat-typography-config();
$typography-config-custom: (
  button: mat-typography-level(18px, 18px, bold) /* This line causes warnings */
);

$typography-config: map-merge($typography-config-default, $typography-config-custom);
@include mat-core($typography-config);
```

4. Run `ng serve` and see the warnings.

What is the use-case or motivation for changing an existing behavior?

Which versions of Angular, Material, OS, TypeScript, browsers are affected?

Angular 5.1.1, Angular Material 5.0.1, TypeScript 2.4.2.

Is there anything else we should know?

Updated 14/12/2017 22:38 7 Comments

Feature suggestion: Netkan source for forum threads



SearchAndRescue has had errors for some time now, and the latest indexed version is out of date. (Fixed by KSP-CKAN/NetKAN#6092)

This mod is hosted on DropBox because the author prefers not to use SpaceDock or GitHub. This requires manual maintenance of the metadata. (There’s a SearchAndRescue.netkan file, which essentially just automates the process of populating download_size and download_hash, because everything else has to be filled in manually.)


I was trying to think of ways to improve this, and hit upon the idea of trying to get download links from forum threads, with a value like this in a netkan file:

    "$kref": "#/ckan/forum/123456-Topic/",

Proposed format, broken out by pieces of text between forward slashes:

  • Standard #/ckan kref prefix
  • forum to indicate the link is on a KSP forum thread
  • 123456-Topic to indicate the thread-specific part of the thread’s URL, to be appended to
  • to specify a link search string to be matched

Netkan could:

  1. Download the HTML for the forum thread (or even better, use an API if one exists)
  2. Parse it looking for links
  3. Return the first link that matches the search string from the kref
  4. Download and process the file as normal to generate a ckan file
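Steps 2–3 above could be sketched like this (Python, hypothetical helper name; the real implementation would live in Netkan): scan the thread HTML for hrefs and return the first one matching the kref’s search string.

```python
import re

def first_matching_link(html, search):
    """Return the first href in the thread HTML that contains `search`."""
    for match in re.finditer(r'href="([^"]+)"', html):
        url = match.group(1)
        if search in url:
            return url
    return None
```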

This might be somewhat more automated than the current process for a mod like SearchAndRescue.


This method would probably be a bit error-prone. It would be sensitive to the exact formatting of a post; an author might rearrange their list of downloads and find that the wrong ones were now being checked. But as long as the requirements were simple and clear, it ought to be possible to keep a thread formatted in a parseable way.

Less clear are the expectations that users might develop. Authors might expect that dependencies or version requirements could be pulled from their threads, which probably isn’t feasible given the requirement of free form natural language processing. We could try inventing a simplified metadata language for specifying such things, but that could turn this into a very large project with requirements for reporting syntax errors, etc.

CKAN’s currently indexed downloads are overwhelmingly on SpaceDock, GitHub, and


However, since nearly all mods have forum threads, some authors may be tempted to change their mods' metadata to check the forum thread. Obviously this should be avoided whenever possible; a forum thread should only be used for DropBox-style hosts that have no formal organization of releases.

Updated 14/12/2017 01:57

Refactor everything config to profile


Right now, it’s hard to distinguish and name things because the concepts of request configuration, command line options and profile are mixed together. With more complex features like the daemon (#26) it will get harder to understand how things work.

This issue is to identify which parts should live where and refactor everything to go in the right place. So far, there are at least the following responsibilities:

  • [ ] command line options: anything that the user can pass in the CLI as arguments
  • [ ] profiles: where to find profile files, what is loaded from profiles
  • [ ] request configuration: from the previous two, replace variables, complete URLs, and build a request
  • [ ] daemon configuration: where to store PID and log files
Updated 13/12/2017 03:18

Login / Logout is buggy


While login works in general, there are some usability issues:

  • [ ] One cannot log out from discuss itself. You have to log out from wK and are then automatically logged out of discuss as well.
  • [ ] When logged out, navigating to it shows you the 401 error message. There is no link / redirect to log in.

Perhaps, we can find a way around these issues within the new Silhouette framework.


Updated 11/12/2017 13:31

Convert old XmlHTTPRequest + jWorkflow loading procedure to pure fetch and Promise



Better to use a standard API, which is documented and correctly implemented in all major browsers, instead of non-standard stuff like jWorkflow (which uses horrible ways of realizing the async/sync workflow stuff).


  • Use the fetch API instead of the XMLHttpRequest API
  • Use the Promise Chain API instead of the jWorkflow library
Updated 11/12/2017 11:42

Proper testing


The tests really need to be better. Some categories of tests to include (in no particular order):

  • Functionality of each EVM operation
  • Variables and lattice objects
  • Def sites
  • constant folding
  • propagation of variable references between distant blocks
  • Memory and state system (once memory_abstraction is eventually merged)
  • Widening
  • Stack Freezing
  • Generic decompilation of a graph, checking that it has the right structure
  • jump mutation / throw generation
  • disassembly
  • TSV output
  • settings, both that they are set correctly, and that they have their desired effects
Updated 11/12/2017 11:41

Improved graph graphics


Desiderata:

  • A legend for graph colours;
  • Less obstructive side panel;
  • Better interactivity (zooming/panning, search, improved display format for stacks etc., links between def and use sites);
  • Better style.

Something like Plotly or Bokeh may be handy here if the thing is to be interactive, but perhaps it’s just better to have a JS viewer which just reads a json dump of the CFG.

Updated 11/12/2017 11:32

Analytics input and output


A bunch of analytics are collected during dataflow analysis. Add a flag to toggle whether this occurs or not, and make it clear how to output this stuff. Maybe make an exporter which produces the analytics only, and nothing else.

The analytics information should be a graph object member; then exporters can be modified to easily output this data along with the rest of it.

Updated 11/12/2017 09:29

Automated CI


We need automated Windows and Unix builds.

I could use my personal accounts for this, but if I recall correctly, the scalameta org has accounts in place.

@olafurpg Would you be able to help with this?

Updated 11/12/2017 19:19 2 Comments

Look into KDE Slimbook


“With the KDE Slimbook the KDE Community, in cooperation with Slimbook, takes a new leap in providing us all - users and creators alike - with a safe, private and productive experience. We combined the productive environment of Plasma, KDE applications, the technological advances in KDE Frameworks and the agile pace of KDE software releases with the Slimbook hardware in a unique cooperation to bring you the KDE Slimbook. We didn’t make it just for you - we made it for us. A laptop that we, as creators and makers, lacked. A machine that ships with us in mind. Private from the first boot, secure for all users and where productivity is its main focus. A machine for creators.”

Updated 12/12/2017 23:39 1 Comments

Login screen to TabBasedApp and then log out back to Login screen


Issue Description

I haven’t found a proper solution for displaying a login screen before a TabBasedApp. I haven’t found a way to switch between a single-screen app and a tab-based one, which would be the ideal approach: I could have the login/signup process as a “single screen app” and, after logging in, switch to a tab-based app. I’ve tried having a modal show before the tabs show, but have had no luck. I’ve also tried a conditional: if the user is logged in, show tabs; if not, show the single-app login. However, there’s no way to log in or out, which would require switching between the two app types. Maybe I’m missing something, but if someone can give a concrete example, that would be great. I’ve only found open-ended answers in past issues, or answers that don’t solve logging in and out.

Steps to Reproduce / Code Snippets / Screenshots

I’ve tried:

```
if (!isLoggedIn) {
  Navigation.showModal({
    screen: "example.Login",
  });
} else {
  Navigation.startTabBasedApp({ tabs: [] });
}
```

Something like this was suggested in #1442, but if someone is not logged in and brought to example.Login, there is no way to switch to the tabs after logging in, and after logging out, no way to go back to example.Login.


  • React Native Navigation version: 1.1.300
  • React Native version: 0.49.2
  • Platform(s) (iOS, Android, or both?): Both
  • Device info (Simulator/Device? OS version? Debug/Release?): iPhone 8 iOS 11.1 Simulator
Updated 14/12/2017 10:42 2 Comments

bug(build): nightly build not publishing for material


Bug, feature request, or proposal:

The nightly builds are not getting updated for material, but they are getting updated for the cdk

What is the expected behavior?

Both packages should be updated on the nightly build with each commit

What is the current behavior?

Nightly builds are not being published for material2

What are the steps to reproduce?

npm i angular/material2-builds

Updated 12/12/2017 08:12 1 Comments

Provide fresh auto updater in releases


This pull request replaces #2204, which was closed because its approach wasn’t appropriate. See that PR for details of the problem being solved.

New approach:

  • Add AutoUpdater.exe to the list of downloadable files when creating a new release on GitHub
  • Update client to look for this file and use it, falling back to the old URL if it’s missing

This change assumes that the list of assets in the release will be in the same order as they are in the .travis.yml file. I believe this would be the case because it’s an array rather than a dictionary, and the easiest way for Travis to implement it would be a simple foreach loop, but I was not able to find documentation or a concrete example to confirm this.

We should mention in the next release notes that users do not need to download AutoUpdater.exe manually.

Partially fixes #2140. The error will still occur when updating to this release, because the old clients will still use the 2-year-old auto updater exe, but updating from this release to the next release will use the latest auto update code. If we publish one last manual release to , then the current clients will update properly as well.

Updated 09/12/2017 19:09

RPM package for Fedora

  • [x] Create basic RPM SPEC for Fedora.
  • [ ] Rename temporary package name to final.
  • [ ] Create CI task to build package on Koji after each commit.
  • [ ] Remove all third-party libraries from repository.
  • [ ] Use packaged dependencies: minizip, libtgvoip, etc.
Updated 09/12/2017 14:51

CKAN's GitHub downloads are breaking the rules


GitHub downloading needs a rewrite

(I debated whether to add this as a comment to #1817, but it seems like too much text and detail for that.)


Currently if CKAN downloads many files from GitHub at the same time, they often fail with HTTP status code 403-Forbidden. #1817 contains an example, but these reports are common and I’ve definitely seen it happen myself several times.


The GitHub API uses 403 codes for throttling; you get 60 unauthenticated requests per hour, and any beyond that return a 403. I encountered this while working on an unrelated project, and I had to use a GitHub token to allow 5000/hour, passed in the HTTP request headers:

Authorization: token <OAuth token here>

Currently CKAN’s downloads do not go through the GitHub API, so this does not necessarily indicate exactly what’s going on with them. However, it establishes that 403-Forbidden is sometimes used for throttling, and it becomes more relevant later in discussion of the API.

Sample API data for releases, minus the author and uploader fields since they’re long and not relevant to this issue:

  "url": "",
  "assets_url": "",
  "upload_url": "{?name,label}",
  "html_url": "",
  "id": 7538924,
  "tag_name": "v0.7.8",
  "target_commitish": "master",
  "name": "Frictionless toilet",
  "draft": false,
  "prerelease": false,
  "created_at": "2017-08-28T06:00:32Z",
  "published_at": "2017-08-28T06:09:07Z",
  "assets": [
      "url": "",
      "id": 4682631,
      "name": "",
      "label": null,
      "content_type": "application/zip",
      "state": "uploaded",
      "size": 114372,
      "download_count": 5243,
      "created_at": "2017-08-28T06:08:01Z",
      "updated_at": "2017-08-28T06:08:02Z",
      "browser_download_url": ""
  "tarball_url": "",
  "zipball_url": "",
  "body": "Fix glitches when the settings file is invalid"

The zip file that we want to download is associated with assets[0], and there are two fields for it, url and browser_download_url. This becomes important later.

Investigation summary

I used the “Contact GitHub” link to reach out to GitHub about how their download throttling works. Surprisingly, the person who replied understood exactly what I was talking about and how to fix it :+1:. It turns out that these problems happen because CKAN is not using GitHub as intended. From my conversation with the very helpful support person:

If I understood your message correctly, it seems like you’re programmatically downloading resources from, is that right? If that’s so, then you shouldn’t be doing that. wasn’t built for programmatic use like that, it was built for humans. For programmatic use, you should be using the API. The API has well defined rate limits and caching behavior you can rely on, while doesn’t. That doesn’t mean that doesn’t have any rate limits, it only means that you can be rate limited at any time and without warning.

So, we’d like to ask you to switch and use the API for downloading the data you need, and respect the defined rate limits (that’s what a good citizen app should be doing, instead of hitting

(“good citizen” was my phrasing in my original message, so don’t take that as an unprovoked criticism of our civic virtues.)

If I’m interpreting that code snippet correctly, you’re using the browser_download_url link, which, as the name suggests, is intended to be used by human users via a browser.

For downloading release assets via the API, you should be using this endpoint:

Notice this note: “To download the asset’s binary content, set the Accept header of the request to application/octet-stream. The API will either redirect the client to the location, or stream it directly if possible. API clients should handle both a 200 or 302 response.”

That would be the “url” field of a particular asset (which are listed when you fetch a release e.g. via, but with the addition of the special Accept header.

Key points:

  • The URL from the field we’re using currently (browser_download_url) is for users and browsers only, not applications. It can be throttled, but there is no explicit policy or workaround.
  • We should be using the GitHub API for downloads. Currently we use it in the Netkan code that finds new releases, but for downloads we effectively impersonate a browser.
  • This can be done by requesting the url field instead of browser_download_url and setting a custom HTTP header:

    Accept: application/octet-stream

    I tested this with wget, and setting the Accept header did indeed give me the download. Without this header, it returns a JSON object describing the asset.
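A sketch of the suggested download path (Python with only the standard library; the helper names are mine, and token handling is optional):

```python
import urllib.request

def asset_headers(token=None):
    """Headers for downloading a release asset via the GitHub API."""
    headers = {"Accept": "application/octet-stream"}
    if token:
        # Note: the header name is "Authorization".
        headers["Authorization"] = "token " + token
    return headers

def download_asset(asset_api_url, token=None):
    # The API answers with a 302 redirect to the binary (or streams it
    # directly); urllib follows redirects, covering both documented cases.
    req = urllib.request.Request(asset_api_url, headers=asset_headers(token))
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```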

Changes needed to stop abusing browser_download_url

GitHub-specific downloading metadata & logic

When downloading from GitHub, we need to send the custom HTTP header. This cannot be accomplished simply by swapping out the bad URL for the good URL in the download metadata field.

Proposed new metadata field:

  • github_download - The assets[0].url value from the API

Specific changes:

  • The spec/schema would need to be updated to allow this field.
  • Netkan would need to be updated to generate this field.
  • CKAN would need to be updated to check for the presence of this field, which would then trigger an alternate download method that sets the custom header.

UI to handle 403 statuses

If a GitHub download returns a 403 status, we should handle the exception and notify the user that their downloads are being throttled. We could direct them to the setting (see below) and a web page dealing with GitHub auth tokens, and/or advise them to wait 60 minutes for their limit to reset. The rate limit endpoint can be used to get the exact limit and timing numbers.
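For example (hypothetical helper; the GitHub API reports the reset time in the X-RateLimit-Reset response header as epoch seconds), the notification could be derived like this:

```python
def throttle_message(reset_epoch, now_epoch):
    """Turn a 403-throttled response into a user-facing message.

    `reset_epoch` would come from the X-RateLimit-Reset response header.
    """
    wait_minutes = max(0, reset_epoch - now_epoch + 59) // 60  # round up
    return ("GitHub is throttling your downloads. Configure a GitHub auth "
            "token in the settings, or wait about %d minute(s) for the "
            "limit to reset." % wait_minutes)
```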

GitHub token handling

Users will be limited to 60 GitHub downloads per hour, because this is the limit of the GitHub API. 140+ mod installs are pretty commonly mentioned on the forums, and reinstalling everything from scratch is a common method for dealing with compatible upgrades, so some users would probably encounter this limit and not appreciate the 60-minute wait to be able to download more. The only way around this is to use a GitHub auth token, which boosts the limit to 5000/hour per token.

It would be nice to ship a single internal auth token for all of CKAN, since then users would have the 5000/hour limit by default without having to worry about any of the details. More responses from the GitHub contact person:

Including a single pre-defined token with the app so that this token is used by all users of the app is possible. You could create a scopeless token here and include that. A scopeless token doesn’t have any special permissions – it can be used for read-only access of public data. So, it would be safe in that way. However, someone could easily take your token from the app, and then drain the API quota for the user who owns the token by making lots of unnecessary API requests. At that point, the app would stop working for everyone who uses the app.

Deliberate abuse like that is unlikely, but assuming 200 downloads per active user per hour, a 5000/hour limit across all CKAN users would support 25 active users in a given hour. The number of active users at a given time isn’t known, but the latest CKAN release has over 60000 downloads, so it’s probably more than 25. If we were able to determine the limit we needed per hour, we could divide it by 5000 and then generate that many tokens and pick one randomly per request, but that might not be in the spirit of the API’s rules.

A setting

We could create a new settings field called GitHub Auth Token, where the user could fill in their own tokens to allow more downloads. This could be instead of or in addition to any built-in tokens we may or may not use, and it should support all the UIs.

Multi-pass approach

  1. Try with no authentication at all. This would succeed for the first 60 requests per user per hour, probably the majority but not all.
  2. For the remaining requests that fail, retry with a single hard-coded auth token. As long as we only use this as a fallback, the 5000/hour limit would only apply to downloads in excess of the 60/hour.
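The two-pass policy might look like this (Python sketch; the HTTP call is injected so the policy itself stays testable without network access):

```python
def fetch_with_fallback(get, url, fallback_token):
    """Try unauthenticated first; on 403, retry once with the shared token.

    `get` is any callable returning (status, body) for (url, token).
    """
    status, body = get(url, None)
    if status == 403 and fallback_token:
        status, body = get(url, fallback_token)
    return status, body
```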

Migration concerns

If Netkan was updated to use this new scheme tomorrow, current CKAN clients would break unless the old download field was still populated. So we should not remove support for the old metadata immediately; GitHub downloads should use both download and github_download until all clients are updated.
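A sketch of the client-side selection during the migration window (Python; field names are the ones proposed above):

```python
def pick_download_url(metadata, client_supports_github_download):
    """During migration both fields are populated: new clients prefer
    github_download (the API asset URL), old clients keep using download."""
    if client_supports_github_download and "github_download" in metadata:
        return metadata["github_download"]
    return metadata["download"]
```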

Or just download serially

The API docs say:

Make requests for a single user or client ID serially. Do not make requests for a single user or client ID concurrently.

So even with a token, CKAN’s parallel download method would still be in violation of the letter of the law.

As a halfway measure, we could try scaling back the parallelization of downloads.

  1. Check whether a download URL contains “”
  2. If so, add it to a pool of downloads to be handled serially
  3. Handle all other downloads normally
  4. When a download finishes, if it contains “”, then start a new download from the pool
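The partitioning in steps 1–2 could be as simple as the following (Python sketch; the host string to match is an assumption, since the original elides it):

```python
def partition_downloads(urls, github_host="github.com"):
    """Split a download batch into a serial GitHub queue and the rest.

    GitHub URLs get fetched one at a time per the API guidance; everything
    else keeps the existing parallel behaviour.
    """
    serial = [u for u in urls if github_host in u]
    parallel = [u for u in urls if github_host not in u]
    return serial, parallel
```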

This might solve the problem without messing with all the API/token stuff. We would still technically be misusing GitHub, but users should no longer encounter failed downloads as frequently.

Updated 14/12/2017 04:49 4 Comments

Host a minimum viable product in AWS


Begin working on the infrastructure to host this application once the basic functionality is in place. I will be ready to work on this bug when the following work is complete:

  • [ ] Companies can be added and a listing can be retrieved
  • [ ] Domains can be added under companies and a listing can be retrieved
  • [ ] A specific domain can be clicked on to return an empty overview page (page content will be populated with plugin output later)
  • [ ] The website should be locked behind some simple authentication until authorization and access control work is completed
  • [ ] Celery tasks should function (domain enumeration triggered via /scan endpoint) and update database with scan results

With that functionality complete, I can evaluate the AWS infrastructure in this bug. This bug will be complete when I have a pipeline in place with automated testing, linting and deployment.

Updated 08/12/2017 03:38

Configuration object creation (similar to powershell)


There is a small but growing divide between the capability of AZ CLI and Azure PowerShell.

Of particular note is object creation in PowerShell. In PowerShell, a complete configuration object can be created using multiple commands. Example ($gw):

```
# Get the application gateway
$gw = Get-AzureRmApplicationGateway -Name AdatumAppGateway -ResourceGroupName AdatumAppGatewayRG

# Get the existing HTTPS listener
$httpslistener = Get-AzureRmApplicationGatewayHttpListener -Name appgatewayhttplistener -ApplicationGateway $gw

# Get the existing front end IP configuration
$fipconfig = Get-AzureRmApplicationGatewayFrontendIPConfig -Name appgatewayfrontendip -ApplicationGateway $gw

# Add a new front end port to support HTTP traffic
Add-AzureRmApplicationGatewayFrontendPort -Name appGatewayFrontendPort2 -Port 80 -ApplicationGateway $gw

# Get the recently created port
$fp = Get-AzureRmApplicationGatewayFrontendPort -Name appGatewayFrontendPort2 -ApplicationGateway $gw

# Create a new HTTP listener using the port created earlier
Add-AzureRmApplicationGatewayHttpListener -Name appgatewayhttplistener2 -Protocol Http -FrontendPort $fp -FrontendIPConfiguration $fipconfig -ApplicationGateway $gw

# Get the new listener
$listener = Get-AzureRmApplicationGatewayHttpListener -Name appgatewayhttplistener2 -ApplicationGateway $gw

# Add a redirection configuration using a permanent redirect and targeting the existing listener
Add-AzureRmApplicationGatewayRedirectConfiguration -Name redirectHttptoHttps -RedirectType Permanent -TargetListener $httpslistener -IncludePath $true -IncludeQueryString $true -ApplicationGateway $gw

# Get the redirect configuration
$redirectconfig = Get-AzureRmApplicationGatewayRedirectConfiguration -Name redirectHttptoHttps -ApplicationGateway $gw

# Add a new rule to handle the redirect and use the new listener
Add-AzureRmApplicationGatewayRequestRoutingRule -Name rule02 -RuleType Basic -HttpListener $listener -RedirectConfiguration $redirectconfig -ApplicationGateway $gw

# Update the application gateway
Set-AzureRmApplicationGateway -ApplicationGateway $gw
```

Why does this matter?

Application Gateway can take a whole day(!) to deploy, because each individual change takes anywhere from 5 to 45 minutes to apply.

PowerShell avoids this by building up the configuration object locally and then applying it to the Application Gateway in bulk, resulting in only one operation.
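The batch pattern can be sketched abstractly in Python (all names here are hypothetical illustrations, not a real Azure SDK — the point is that mutations stay local until one final apply):

```python
# Hypothetical sketch of the "build locally, apply once" pattern the
# PowerShell cmdlets use. None of these names are a real Azure API.

class GatewayConfig:
    """Accumulates changes in memory; nothing is sent until apply()."""

    def __init__(self, name):
        self.name = name
        self.frontend_ports = {}
        self.listeners = {}
        self.rules = {}
        self.applied_operations = 0

    def add_frontend_port(self, name, port):
        self.frontend_ports[name] = port  # local mutation only

    def add_listener(self, name, port_name, protocol):
        self.listeners[name] = (port_name, protocol)

    def add_rule(self, name, listener_name):
        self.rules[name] = listener_name

    def apply(self):
        # One round-trip for all accumulated changes, instead of one
        # 5-45 minute operation per individual change.
        self.applied_operations += 1


gw = GatewayConfig("AdatumAppGateway")
gw.add_frontend_port("appGatewayFrontendPort2", 80)
gw.add_listener("appgatewayhttplistener2", "appGatewayFrontendPort2", "Http")
gw.add_rule("rule02", "appgatewayhttplistener2")
gw.apply()
print(gw.applied_operations)  # → 1
```

A per-command CLI, by contrast, would pay the full application cost on every one of those calls.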

Presumably this will continue to be the case with more and more PaaS offerings in Azure. Is the right answer to build this capability into the AZ CLI, or to improve the management of PaaS offerings that are this kludgy? This issue suggests the former, since to some extent it is understandable why Application Gateway takes time to apply changes (multiple nodes, etc.).

Updated 14/12/2017 21:47 1 Comment

Improve Sentry error handling


User Story

As a:


I want:

to only catch Errors that represent a real error.

so that:

I can see quickly when something goes wrong

Acceptance criteria:

  • [x] BadRequestErrors etc. in backend are not pushed to Sentry
  • [ ] Add source maps to Sentry
  • [ ] Tag Backend and Frontend as environments
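For the first criterion, a filtering hook could look roughly like this (a sketch only: `BadRequestError`/`NotFoundError` are hypothetical application exceptions, and a real setup would register this as a before-send callback in the Sentry client):

```python
# Sketch: drop "expected" errors before they reach Sentry.
# BadRequestError / NotFoundError are hypothetical application exceptions.

class BadRequestError(Exception):
    pass

class NotFoundError(Exception):
    pass

# Exception types that represent client mistakes, not real server errors.
IGNORED_EXCEPTIONS = (BadRequestError, NotFoundError)

def before_send(event, exc):
    """Return the event to report it, or None to drop it."""
    if isinstance(exc, IGNORED_EXCEPTIONS):
        return None
    return event

print(before_send({"message": "boom"}, BadRequestError("bad input")))  # → None
print(before_send({"message": "boom"}, RuntimeError("real error")))
```

With expected errors filtered out, everything that does reach Sentry signals a real problem.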
Updated 14/12/2017 12:23

Look into FOX: Federated knOwledge eXtraction Framework


as per

“FOX is a framework for RDF extraction from text based on Ensemble Learning. It makes use of the diversity of NLP algorithms to extract entities with a high precision and a high recall. Moreover, it provides functionality for Keyword Extraction and Relation Extraction.”

Came up at .

Updated 07/12/2017 07:12

Testsuite: Run UIs in Qemu and check running processes and more



  • When pmbootstrap qemu gets killed, it now takes down the Qemu process with it
  • test/ got a new optional --build parameter, which makes it build all changed packages instead of just checking the checksums
  • We run this before running the testsuite now, so all changed packages get built before running tests (otherwise tests would hang without any output while a changed package is building)
  • New test case that zaps all chroots, installs a specific UI (currently xfce4 and plasma-mobile, easy to extend), runs it via Qemu and checks the running processes via SSH.
  • Version checking testcase: rewritten to include Alpine’s testsuite file in our source tree, so we don’t need to clone their git repo anymore. Now it is enabled for Travis.
  • All this gives us a nice 8% code coverage boost
  • Increased the hello-world pkgrel to verify that the Travis job is working.
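The process check in the new UI test case could be sketched like this (a hypothetical helper, not the actual testsuite code; the `ssh` invocation is illustrative):

```python
# Sketch of the process check: fetch `ps -o comm=`-style output over SSH
# (one process name per line) and report which expected UI processes
# are missing. The exact ssh command in the real testsuite may differ.
import subprocess

def fetch_ps(host):
    # e.g. ssh user@host ps -o comm=
    return subprocess.run(
        ["ssh", host, "ps", "-o", "comm="],
        capture_output=True, text=True, check=True,
    ).stdout

def check_processes(ps_output, expected):
    """Return the expected process names that are NOT running."""
    running = {line.strip() for line in ps_output.splitlines() if line.strip()}
    return [name for name in expected if name not in running]

# With Plasma under --display=none we can only expect polkitd:
sample = "init\npolkitd\nsshd\n"
print(check_processes(sample, ["polkitd"]))        # → []
print(check_processes(sample, ["xfce4-session"]))  # → ['xfce4-session']
```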


Plasma doesn’t work with Qemu’s --display=none yet, so we can only test whether polkitd is running.


  • [x] pmbootstrap build device-samsung-i9100 and other device packages: automatically build for the right arch instead of telling the user that it needs to be specified. Only that way does it work automatically, and it’s more user-friendly anyway. We can add a NOTE in the debug log (pmbootstrap log) saying that we assumed this is what the user meant.
  • [x] I bet this won’t work on the first try with Travis 😉: for some reason, it seems to run the hello-world package build in the background, so it clashes with the Python test cases.
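The arch auto-detection from the first checkbox could work roughly as below (a simplified sketch; the real pmbootstrap code parses the APKBUILD more thoroughly, and the function name is made up):

```python
# Sketch: if a package's APKBUILD restricts arch= to exactly one
# architecture (as device packages do, e.g. arch="armhf"), build for
# that arch automatically instead of requiring --arch from the user.
import re

def guess_build_arch(apkbuild_text, default_arch="x86_64"):
    match = re.search(r'^arch="([^"]+)"', apkbuild_text, re.MULTILINE)
    if match:
        arches = match.group(1).split()
        if len(arches) == 1 and arches[0] not in ("all", "noarch"):
            return arches[0]
    return default_arch

print(guess_build_arch('pkgname=device-samsung-i9100\narch="armhf"\n'))  # → armhf
print(guess_build_arch('arch="all"\n'))  # → x86_64
```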


Read through the code; you don’t necessarily need to run it locally (it runs on Travis anyway).

Updated 09/12/2017 13:57 5 Comments

Look into Disqover


As per and .

“DISQOVER is a semantic search platform that integrates disparate life sciences data. The data is made navigable through a novel indexing engine. Complex queries can be run intuitively and are delivered at speed, enabling you to find smarter data faster with DISQOVER.”

Seen a demo at .

Updated 06/12/2017 09:05

Organize Styling through Project


Right now, we have inline styles and a CSS file. With approval from @knod, we’d like to standardize on just the CSS file(s). There are three steps in our CSS journey, each better than the last, though not all of them may be necessary.

  1. Move all inline styles to a CSS file.
  2. Compact our myriad inline style blocks into reusable classes so that the CSS file is organized and not a mile long.
  3. Introduce some sense of modules/scoping/variables (LESS?) or other tools that may help organize the complexity of a large amount of styling. This step is probably not necessary unless the CSS turns out to be unexpectedly gnarly.
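As an illustration of steps 1 and 2, a repeated inline style block would collapse into one named class (selector name and values here are made up, not taken from the project):

```css
/* Hypothetical reusable class replacing a repeated inline style block.
   Name and values are illustrative only. */
.field-error {
  margin-top: 1em;
  color: #c00;
}
```

Components would then reference the class (e.g. `className="field-error"`) instead of carrying their own `style` objects.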
Updated 10/12/2017 20:43
