Contribute to Open Source. Search issue labels to find the right project for you!

Refactor versioning


The first versioning library is semver. We also need to add a Docker one (#2025); others to add include PEP 440.

We should wrap these with utility functions to reduce the number of “native” functions each versioning library must implement. For example, a versioning library can supply a single “compare” function, and the wrapper library then implements isLessThan, isLessThanOrEqualTo, isGreaterThan, and so on.
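As a sketch of that idea (the function names below are illustrative, not the project's actual API), a single native compare can be wrapped to derive all the comparison helpers:

```python
# Sketch: each versioning library supplies only compare(a, b) -> -1/0/1,
# and the wrapper derives every other comparison from it.
def make_comparators(compare):
    return {
        "is_less_than": lambda a, b: compare(a, b) < 0,
        "is_less_than_or_equal_to": lambda a, b: compare(a, b) <= 0,
        "is_greater_than": lambda a, b: compare(a, b) > 0,
        "is_greater_than_or_equal_to": lambda a, b: compare(a, b) >= 0,
        "equals": lambda a, b: compare(a, b) == 0,
    }

# Example with a trivial integer "version" comparator:
ops = make_comparators(lambda a, b: (a > b) - (a < b))
assert ops["is_less_than"](1, 2)
assert ops["is_greater_than_or_equal_to"](2, 2)
```

The same shape works for semver, Docker tags, or PEP 440, since each only has to expose its own compare.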

Updated 27/05/2018 08:22

nrf52: remove bitfield dependency


Pull Request Overview

This pull request removes an unused dependency in the nrf52 crate.

Testing Strategy


TODO or Help Wanted

Blocked on the two PRs porting nrf52 to the new register interface.

Documentation Updated

  • [x] Kernel: Updated the relevant files in /docs, or no updates are required.
  • [x] Userland: Added/updated the application README, if needed.


  • [x] Ran make formatall.
Updated 26/05/2018 23:12

Documentation links broken


The documentation links in both the README and Cargo are broken: I get a mostly empty website saying “The requested resource does not exist”.

Updated 27/05/2018 08:20 4 Comments

(yarn install in example) error An unexpected error occurred: "Reduce of empty array with no initial value".


While following the install and setup instructions to run tabler-react, I ran into trouble when running yarn install in the example folder; it shows the following error on screen.

error An unexpected error occurred: “Reduce of empty array with no initial value”

Arguments: /usr/local/bin/node /usr/local/Cellar/yarn/1.7.0/libexec/bin/yarn.js install

PATH: /Users/jackalkao/anaconda3/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin

Yarn version: 1.7.0

Node version: 8.11.2

Platform: darwin x64

Trace:

    TypeError: Reduce of empty array with no initial value
        at Array.reduce (<anonymous>)
        at PackageLinker.<anonymous> (/usr/local/Cellar/yarn/1.7.0/libexec/lib/cli.js:50461:43)
        at (<anonymous>)
        at step (/usr/local/Cellar/yarn/1.7.0/libexec/lib/cli.js:98:30)
        at /usr/local/Cellar/yarn/1.7.0/libexec/lib/cli.js:109:13
        at <anonymous>
        at process._tickCallback (internal/process/next_tick.js:188:7)

npm manifest:

    {
      "name": "tabler-react-example",
      "homepage": "",
      "version": "0.0.0",
      "private": true,
      "license": "MIT",
      "dependencies": {
        "d3-scale": "^2.0.0",
        "prop-types": "^15.6.1",
        "react": "^16.2.0",
        "react-c3js": "^0.1.20",
        "react-dom": "^16.2.0",
        "react-google-maps": "^9.4.5",
        "react-router-dom": "^4.2.2",
        "react-scripts": "^1.1.1",
        "react-simple-maps": "^0.12.0",
        "react-syntax-highlighter": "^7.0.2",
        "tabler-react": "link:.."
      },
      "scripts": {
        "start": "react-scripts start",
        "watch": "react-scripts start --watch",
        "build": "react-scripts build",
        "test": "react-scripts test --env=jsdom",
        "eject": "react-scripts eject",
        "predeploy": "npm run build",
        "deploy": "gh-pages -d build"
      },
      "devDependencies": {
        "bootstrap": "^4.1.0",
        "jquery": "^3.3.1",
        "popper.js": "^1.14.3"
      }
    }

yarn manifest: No manifest
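The failure itself is easy to reproduce in isolation: JavaScript's Array.prototype.reduce throws exactly this TypeError when called on an empty array without an initial value, and Python's functools.reduce behaves the same way. The sketch below only illustrates the mechanism; it says nothing about why yarn's internal array ended up empty:

```python
from functools import reduce

# reduce over an empty sequence with no initial value raises, mirroring
# yarn's "Reduce of empty array with no initial value"
try:
    reduce(lambda a, b: a + b, [])
except TypeError as err:
    print("raised:", err)

# supplying an initial value makes the empty case well-defined
total = reduce(lambda a, b: a + b, [], 0)
print(total)
```

In JavaScript the equivalent fix is passing a second argument to reduce, e.g. `arr.reduce(fn, 0)`.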

Updated 26/05/2018 16:44 1 Comments

Update webpack to the latest version 🚀


Version 4.9.0 of webpack was just published.

<table> <tr> <th align=left> Dependency </th> <td> <code>webpack</code> </td> </tr> <tr> <th align=left> Current Version </th> <td> 3.12.0 </td> </tr> <tr> <th align=left> Type </th> <td> dependency </td> </tr> </table>

The version 4.9.0 is not covered by your current version range.

If you don’t accept this pull request, your project will work just like it did before. However, you might be missing out on a bunch of new features, fixes and/or performance improvements from the dependency update.

It might be worth looking into these changes and trying to get this project onto the latest version of webpack.

If you have a solid test suite and good coverage, a passing build is a strong indicator that you can take advantage of these changes directly by merging the proposed change into your project. If the build fails or you don’t have such unconditional trust in your tests, this branch is a great starting point for you to work on the update.

<details> <summary>Release Notes</summary> <strong>v4.9.0</strong>

<h1>Features</h1> <ul> <li><code>BannerPlugin</code> supports a function as <code>banner</code> option</li> <li>allow <code>serve</code> property in configuration schema</li> <li>add <code>entryOnly</code> option to <code>DllPlugin</code> to only expose modules in the entry point</li> <li>Allow to choose between <code>webpack-cli</code> and <code>webpack-command</code></li> <li>improve error message when JSON parsing fails</li> <li>allow BOM in JSON</li> <li>sort <code>usedIds</code> in <code>records</code> for stablility</li> </ul> <h1>Bugfixes</h1> <ul> <li>align module not found error message with node.js</li> <li>fix behavior of <code>splitChunks</code> when request limit has reached (caused suboptimal splitting)</li> <li>fix handling of RegExp in records (caused absolute path in records)</li> <li>fix handling of circular chunks (caused missing <code>webpack_require.e</code>)</li> <li><code>runtimeChunk</code> is even generated when all modules are moved by <code>splitChunks</code> (caused multiple runtime chunks instead of single one)</li> <li>string ids are no longer recorded (caused duplicate chunk ids)</li> <li>fix link to migration guide in error message</li> </ul> <h1>Internal changes</h1> <ul> <li>add more typings</li> <li>Use travis stages</li> <li>add <code>many-pages</code> example</li> </ul> </details>

<details> <summary>Commits</summary> <p>The new version differs by 1757 commits ahead by 1757, behind by 8.</p> <ul> <li><a href=“”><code>bb0731d</code></a> <code>4.9.0</code></li> <li><a href=“”><code>be6bdff</code></a> <code>Merge pull request #7385 from moondef/moondef-patch-1</code></li> <li><a href=“”><code>b77addd</code></a> <code>Merge pull request #7187 from byzyk/enhancement/prettierignore</code></li> <li><a href=“”><code>2f3e7d4</code></a> <code>Merge pull request #7331 from dev-drprasad/add-jsdoc-annotations-cached-merge</code></li> <li><a href=“”><code>70c608c</code></a> <code>Merge pull request #7387 from webpack/bugfix/record-string-ids</code></li> <li><a href=“”><code>69567a1</code></a> <code>update test case to reflect change</code></li> <li><a href=“”><code>8af0320</code></a> <code>Merge pull request #7344 from asapach/master</code></li> <li><a href=“”><code>713292f</code></a> <code>update bot for jest tests</code></li> <li><a href=“”><code>79aa13d</code></a> <code>Merge pull request #7386 from webpack/bugfix/runtime-chunk</code></li> <li><a href=“”><code>67717ab</code></a> <code>Merge pull request #7383 from webpack/ci/improvements</code></li> <li><a href=“”><code>72a45ab</code></a> <code>speed up CI</code></li> <li><a href=“”><code>f026310</code></a> <code>only record number ids</code></li> <li><a href=“”><code>25c7b07</code></a> <code>Fix link</code></li> <li><a href=“”><code>374376d</code></a> <code>fixes #7382</code></li> <li><a href=“”><code>aa99385</code></a> <code>added a note about production mode</code></li> </ul> <p>There are 250 commits in total.</p> <p>See the <a href=“…bb0731d7897fb8a7369efd9d2f9bf0a8c08d546d”>full diff</a></p> </details>

<details> <summary>FAQ and help</summary>

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper. </details>

Your Greenkeeper bot :palm_tree:

Updated 26/05/2018 10:44 2 Comments

doc: add jdalton to collaborators


Fixes: #20828


Updated 26/05/2018 23:51 6 Comments

Error when run ipfs-cluster-service on BeagleBone or Raspberry PI


I downloaded ipfs-cluster-service for arm onto my Raspberry Pi and ran ipfs-cluster-service daemon, but I received this error:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x4 pc=0x11540]

goroutine 46 [running]:
sync/atomic.addUint64(0x131c005c, 0x1, 0x0, 0x131c9150, 0x8)
    /usr/lib64/go/1.9/src/sync/atomic/64bit_arm.go:31 +0x4c
gx/ipfs/QmduQtUFqdq2RhM84yM2mYsdgJRAH8Aukb15viDxWkZtvP/go-libp2p-floodsub.(*PubSub).Publish(0x131c0000, 0x716b54, 0xf, 0x13124630, 0x5d, 0xa2, 0x0, 0x1e)
    gx/ipfs/QmduQtUFqdq2RhM84yM2mYsdgJRAH8Aukb15viDxWkZtvP/go-libp2p-floodsub/pubsub.go:556 +0x64
(*Monitor).PublishMetric(0x12d5caa0, 0x70dc03, 0x4, 0x12d16c60, 0x22, 0x0, 0x0, 0xa5f1d075, 0x1531f22e, 0x1, ...) +0x39c
(*Cluster).pushPingMetrics(0x12d64320) +0xc0
created by (*Cluster).run +0x48
Updated 25/05/2018 20:33 6 Comments

Implement rules that directly produce Subsystems from options


Blocked by #5788 and #5831.

Once both prerequisites are in place, we’ll be able to write rules that accept environment variables and the contents of files outside the buildroot as Params that propagate down into subgraphs. With that infrastructure in place, we should be able to install rules that:

  1. trigger options parsing
  2. produce the relevant set of Subsystems for the rules that are in use

Updated 25/05/2018 16:37

Whitelisting of Parole Board IPs for Intranet and PeopleFinder


Service name

Intranet and PeopleFinder

Service environment

  • [ ] Dev / Development
  • [ ] Staging
  • [x] Prod / Production
  • [ ] Other

Impact on the service

Parole Board users are currently unable to access the MoJ Intranet and PeopleFinder.

Problem description

Users at the Parole Board have recently migrated to Office 365 and, as a result, have new IP addresses. These IP addresses need to be whitelisted for the Intranet and PeopleFinder.

The current Intranet team do not know how to add IP addresses and need assistance. As part of the work, the team would like some guidance on how to do this in future.

Contact person

Kiera Poland, @Kiera,

Updated 25/05/2018 15:04 1 Comments

Zero length app-data in 0-rtt handshake


New test script idea

What TLS message this idea relates to?

Application Data

What TLS extension this idea relates to?


What is the behaviour the test script should test?

The script should negotiate a 0-RTT handshake in which the early data is accepted by the server, send zero-length application data messages encrypted with the early data keys, and validate that they were accepted by the server.

Are there scripts that test related functionality?


Additional information

blocked by #205

Updated 25/05/2018 12:30

util: improve display of iterators and weak entries


This is based on #20831 and I am going to rebase when that lands.

This patch changes the way map iterator entries are displayed, using the same style as for regular maps. It also improves the display of iterators in general, as well as WeakSets and WeakMaps, by indicating how many entries exist in total. This information was not available before, but with the new preview implementation it is, and it should be displayed as well.
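The gist of the change can be illustrated with a small sketch (illustration only; the actual Node.js util.inspect implementation is considerably more involved):

```python
# Sketch of the preview idea: show the first few entries of an iterator
# plus a count of how many more exist in total.
def preview(iterable, max_items=3):
    items = list(iterable)
    shown = ", ".join(repr(x) for x in items[:max_items])
    extra = len(items) - max_items
    suffix = f", ... {extra} more item(s)" if extra > 0 else ""
    return f"[{shown}{suffix}]"

print(preview(iter(range(5))))  # [0, 1, 2, ... 2 more item(s)]
```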



  • [x] make -j4 test (UNIX), or vcbuild test (Windows) passes
  • [x] tests and/or benchmarks are included
  • [x] documentation is changed or added
  • [x] commit message follows commit guidelines
Updated 26/05/2018 13:44 5 Comments

VBMS Claim Service no longer accepts long addresses


For some reason, the VBMS claim service now requires the address line 1 to be less than 20 characters.

However, the address line 1 is often greater than 20 characters in BGS, so we are experiencing a handful of errors with the message:

The maximum data length for AddressLine1 was not satisfied: The AddressLine1 must not be greater than 20 characters.

We need to resolve this issue.
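Until the root cause is resolved, one possible mitigation is validating the field client-side before submitting to the claim service. This is a hedged sketch; the constant and function name are invented here, and the 20-character limit is simply what this report observed:

```python
MAX_ADDRESS_LINE1 = 20  # limit reportedly now enforced by the VBMS claim service

def check_address_line1(line1: str) -> str:
    """Fail early on our side instead of failing at the claim service."""
    if len(line1) > MAX_ADDRESS_LINE1:
        raise ValueError(
            f"AddressLine1 must not be greater than {MAX_ADDRESS_LINE1} "
            f"characters (got {len(line1)})"
        )
    return line1
```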

Updated 24/05/2018 17:40 2 Comments

Close Button/TypeAhead on Click outside


I have added an onblur event to the inputtext atom that changes the button's state to not expanded if you click away from the input. However, it is now competing with the button's onclick event - any help/intelligence would be appreciated.

Blocker: Intelligence Levels.

I have added a gif of the issue I can't solve relating to the click event below.


Updated 24/05/2018 18:42

Fix for dragging of rotation handles of BoundBoxRig


The current method of rotation is not consistent; the object is rotated in different directions depending on its orientation.

This commit adds an option (rotateAroundPivot) which allows the user to rotate an object in a more natural way.

To avoid breaking existing usage, this new rotation is enabled via a bool in BoundingBoxRig.

Existing behaviour: boundingboxgizmoold

New behaviour: boundingboxgizmonew

Updated 24/05/2018 19:05 2 Comments

Add a crude trace-based scoping mechanism for sample names


Blocked by #1161

This PR adds a new handler contrib.autoname.scope that adds a prefix to a sample site name (I needed something like this for a different project):

```python
@scope(prefix="a")
def model():
    return pyro.sample("x", Normal(0., 1.))

assert "a/x" in poutine.trace(model).get_trace()
```

If no prefix is provided but a function is, poutine.scope will try to use the function's name as a prefix:

```python
@scope
def model():
    return pyro.sample("x", Normal(0., 1.))

assert "model/x" in poutine.trace(model).get_trace()
```

Beyond naive prepending, it supports recursion:

```python
@scope
def model(r=True):
    return model(r=False) if r else pyro.sample("x", Normal(0., 1.))

assert "model/model/x" in poutine.trace(model).get_trace()
```

Additionally, the PR includes a modification to TraceMessenger that allows using the trace to add a counter suffix to names that have already appeared:

```python
@scope
def model():
    pyro.sample("x", Normal(0., 1.))  # x
    pyro.sample("x", Normal(0., 1.))  # x_0
    pyro.sample("x", Normal(0., 1.))  # x_1

# the PR currently makes strict=False the new default;
# set strict=True for the old behavior (raising an error on duplicate sites)
tr = poutine.trace(model, strict=False).get_trace()
assert "model/x" in tr
assert "model/x_0" in tr
assert "model/x_1" in tr
```

Updated 25/05/2018 01:05 7 Comments

Can't build on Ubuntu 18.04


dotnet --info:

```
.NET Command Line Tools (2.1.200)

Product Information:
 Version:            2.1.200
 Commit SHA-1 hash:  2edba8d7f1

Runtime Environment:
 OS Name:     ubuntu
 OS Version:  18.04
 OS Platform: Linux
 RID:         ubuntu.18.04-x64
 Base Path:   /usr/share/dotnet/sdk/2.1.200/

Microsoft .NET Core Shared Framework Host

  Version  : 2.0.7
  Build    : 2d61d0b043915bc948ebf98836fefe9ba942be11
```

mono --version:

```
Mono JIT compiler version (tarball Thu May 3 09:42:09 UTC 2018)
Copyright (C) 2002-2014 Novell, Inc, Xamarin Inc and Contributors.
	TLS:           __thread
	SIGSEGV:       altstack
	Notifications: epoll
	Architecture:  amd64
	Disabled:      none
	Misc:          softdebug
	Interpreter:   yes
	LLVM:          supported, not enabled.
	GC:            sgen (concurrent by default)
```

I am using the latest master (Last commit - . I followed the steps as listed to build omnisharp-roslyn, and on running ./, I get the error:

Got a SIGSEGV while executing native code. This usually indicates
a fatal error in the mono runtime or one of the native libraries 
used by your application.

Error: One or more errors occurred.
    GitVersion: Process returned an error (exit code 134).

Please help.

Updated 24/05/2018 20:42 1 Comments

[2.1.1] Handle incoming HTTP requests being canceled gracefully (#2314)


Port #2314 to release/2.1

The diff is a little funky because the original PR (#2314) was based on ( which had been merged prior to that. Both are candidates for 2.1.1 though, so the final merge will include both changes if both are approved.

⚠️⚠️⚠️ DO NOT MERGE ⚠️⚠️⚠️ release/2.1 has not yet been opened for servicing fixes. ⚠️⚠️⚠️ DO NOT MERGE ⚠️⚠️⚠️

Updated 25/05/2018 21:38

Update to Qt 5.9 LTS


Cannot update to Qt 5.9 because of the bug in highlighter

And the bug about having underlines only of white color:

Found duplicates:

Probably will need to rebuild Qt and fix as suggested: 4ffdd865b09c8f595dcfc034ea6f3b5e07469b9f in qtbase

Revert the change in QTextDocumentLayout::documentChanged:

    for (QTextBlock blockIt = startIt; blockIt.isValid() && blockIt != endIt; blockIt =
        emit updateBlock(blockIt);
Updated 24/05/2018 07:32 1 Comments

Adding shape and color to line chart renders two legends


Hi guys,

I would like to setup a line chart with predefined color and shape scales. But I struggle already with the default scales, because vega-lite renders two different legends. One for each of the two encoding channels.

  "$schema": "",
  "description": "A scatterplot showing horsepower and miles per gallons for various cars.",
  "data": {"url": "data/cars.json"},
  "mark": {"type": "line"},
  "encoding": {
    "x": {"field": "Year", "type": "temporal"},
    "y": {
      "aggregate": "mean",
      "field": "Miles_per_Gallon",
      "type": "quantitative"
    "shape": {"field": "Origin", "type": "nominal"},
    "color": {"field": "Origin", "type": "nominal"}

(screenshot: two separate legends are rendered)

I would expect the chart to have a single, combined legend, which is the case for mark type “point”.

  "$schema": "",
  "description": "A scatterplot showing horsepower and miles per gallons for various cars.",
  "data": {"url": "data/cars.json"},
  "mark": {"type": "point"},
  "encoding": {
    "x": {"field": "Year", "type": "temporal"},
    "y": {
      "aggregate": "mean",
      "field": "Miles_per_Gallon",
      "type": "quantitative"
    "shape": {"field": "Origin", "type": "nominal"},
    "color": {"field": "Origin", "type": "nominal"}

(screenshot: a single combined legend)

This issue might be related to issue F with custom shapes.

BTW, the vega editor shows the warning [Warning] shape dropped as it is incompatible with "line"., although it seems to be valid since (Macro) adding shape to line should automatically add another point layer has been resolved. ;-)

Updated 24/05/2018 17:17 8 Comments

Insert classification to database


This PR depends on PR #234. This branch aims to insert classifications into the ProjectClassification table. I am facing a problem while adding the form element to Projects/settings.html.twig: I tried to add a classification field to the ProjectType form builder, but I get an error (see screenshot). I tried another method to insert the classification data on another page (the Project Classification page), and there I am able to insert the data.

Updated 27/05/2018 11:21 1 Comments

All servers - security patching


Several vulnerabilities were recently discovered in the Linux kernel which may allow escalation of privileges or denial of service (via Kernel Crash) from an unprivileged process. These CVEs are identified with tags CVE-2018-1000199, CVE-2018-8897 and CVE-2018-1087.

Waiting for the security team to assess the risk and need for performing updates and restarts.

Updated 23/05/2018 09:10

zfs command hangs


Sometimes zfs commands that create or destroy datasets hang, which in turn makes the agent hang too:

    root  4781  0.0  0.0  35244  3432  ?  S  May22  0:00  zfs destroy -r subutai/fs/wp-39D-2gl-3-4
    root  4785  0.0  0.0  23624  1240  ?  D  May22  0:00  /bin/umount -t zfs /var/lib/lxc/wp-39D-2gl-3-4/opt
    root  9954  0.0  0.0  35244  3380  ?  D  05:13  0:00  zfs create subutai/fs/Container-1-ugZ-oq1-3-4
    root 18635  0.0  0.0  35244  3460  ?  D  May22  0:00  zfs create subutai/fs/Container-1-y0d-c7a-6-3
    root 18665  0.0  0.0  35244  3384  ?  D  May22  0:00  zfs create subutai/fs/Container-1-pdc-g2j-3-3
    root 32215  0.0  0.0  35244  3344  ?  S  May22  0:00  zfs destroy -r subutai/fs/ansible-server-152-mgL-yoc-3-3
    root 32219  0.0  0.0  23624  1284  ?  D  May22  0:00  /bin/umount -t zfs /var/lib/lxc/ansible-server-152-mgL-yoc-3-3/rootfs

Updated 23/05/2018 07:37 5 Comments

Teach retry() about payerErrors


BUILD ON #715 - This is part 2. It adds:

  • retry() argument for validation errors: retry(PaymentValidationErrors errorFields)
  • PaymentValidationErrors dictionary
  • PayerErrorFields dictionary

The following tasks have been completed:

  • [x] Confirmed there are no ReSpec errors/warnings.
  • [ ] Added Web platform tests (link)
  • [ ] added MDN Docs (link)

Implementation commitment:

  • [ ] Safari (link to issue)
  • [ ] Chrome (link to issue)
  • [ ] Firefox (link to issue)
  • [ ] Edge (public signal)

Impact on Payment Handler spec?


Via payerErrors, the user now knows what’s actually wrong with the payment… however, there is no eventing model, so validation of user input cannot be done incrementally: That’s part 3.

async function doPaymentRequest() {
  const request = new PaymentRequest(methodData, details, options);
  const response = await;
  try {
    await recursiveValidate(request, response);
  } catch (err) {
    // retry aborted.
    return;
  }
  await response.complete("success");
}

async function recursiveValidate(request, response) {
  const promisesToFixThings = [];
  const payerErrors = await validatePayerInput(response);
  if (!payerErrors) {
    return; // nothing to fix, stop retrying
  }
  await response.retry({ payerErrors });
  return recursiveValidate(request, response);
}


Updated 23/05/2018 04:36

Concurrency limit per action


After #2795, which only provides system-wide concurrency settings, this will allow a per-action concurrency limit.

To use it:

  • specify whisk.concurrency-limit configs, e.g.:

        whisk.concurrency-limit {
          min = 1
          max = 500
          std = 1
        }

  • enable concurrency in actions using "limits":{"concurrency":125} in the action JSON

In a later PR, additional changes will be useful in cli, e.g. to enable a concurrency flag: wsk action create --concurrency 125
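The interplay between the config and a per-action setting presumably amounts to clamping the requested value into the configured range and falling back to the standard value. A minimal sketch under that assumption (the function name is invented):

```python
def effective_concurrency(requested, min_limit=1, max_limit=500, std=1):
    """Resolve a per-action concurrency value against configured limits.

    Defaults mirror the example config above: min = 1, max = 500, std = 1.
    Actions that don't request a limit get the standard value.
    """
    if requested is None:
        return std
    return max(min_limit, min(max_limit, requested))

print(effective_concurrency(None))  # standard value
print(effective_concurrency(125))   # within range
print(effective_concurrency(9999))  # clamped to max
```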

Related issue and scope

  • [ ] I opened an issue to propose and discuss this change (#????)

My changes affect the following components

  • [ ] API
  • [x] Controller
  • [ ] Message Bus (e.g., Kafka)
  • [ ] Loadbalancer
  • [x] Invoker
  • [ ] Intrinsic actions (e.g., sequences, conductors)
  • [ ] Data stores (e.g., CouchDB)
  • [ ] Tests
  • [ ] Deployment
  • [ ] CLI
  • [ ] General tooling
  • [ ] Documentation

Types of changes

  • [ ] Bug fix (generally a non-breaking change which closes an issue).
  • [x] Enhancement or new feature (adds new functionality).
  • [ ] Breaking change (a bug fix or enhancement which changes existing behavior).



  • [x] I signed an Apache CLA.
  • [x] I reviewed the style guides and followed the recommendations (Travis CI will check :).
  • [ ] I added tests to cover my changes.
  • [ ] My changes require further changes to the documentation.
  • [ ] I updated the documentation where necessary.
Updated 25/05/2018 16:21 1 Comments

Project-level gear visibility


Adds a projects key to GearDoc that affects /gears visibility. No other routes are affected.

With dcm2niix containing projects: [ "5" ] and dicom-mr-classifier containing projects: [ ]:

$ 'api/gears' | jq '.[]'

$ 'api/gears?project=5' | jq '.[]'

$ 'api/gears?project=6' | jq '.[]'


It’s possible that the projects filtering could find its way into dbutil.paginate_find syntax instead; patches accepted.

Updated 23/05/2018 20:47 6 Comments

Failed to resolve: org.mozilla.telemetry:telemetry:1.2.0


Steps to reproduce

  1. Downloaded the .zip and extracted it.
  2. Opened Android Studio 3.1.2 and imported the extracted folder as a new project.
  3. Let Android Studio build the app.

Expected behavior

The application should build without error

Actual behavior

The app does not build. The message “Failed to resolve: org.mozilla.telemetry:telemetry:1.2.0” is displayed. This references line 168 in the app module, which states “implementation ‘org.mozilla.telemetry:telemetry:1.2.0’”.

Device information

Android Studio 3.1.2 on Ubuntu 18.04

Updated 22/05/2018 18:22 1 Comments

Identify best practice for managing MFA for root accounts


How do we manage the credentials assigned by AWS? For example, do all AWS users have access to the Google group account that could receive the password reset for the root account? What options are we looking at for storing the MFA seed for the root account (and potentially others), since we shouldn’t store both the password and the MFA seed alongside each other?

Speak with the TAM and get the best practices, policies/permissions.

Updated 22/05/2018 12:08 1 Comments

Per-bucket access keys


This change enables support for having a separate access key/secret per bucket.


Minio is quite useful, but it’s challenging to properly manage bucket access when there is only a single root-level API access key. Handing out the all-access-pass master key to users is risky, as it invites abuse or mistakes from any actor using the service. Users have also been asking for such a feature, as in #5199, #5305, #4186, #2998, #811, and #2751. Responses to these requests have usually been a “no” with responses such as “this feature doesn’t fit in the microservices architecture”, “it’s in the federated service”, or “spin up multiple instances instead”. I believe that with only minor changes to the overall architecture it’s possible to add support for multiple keys, in a way that doesn’t bloat the code or ruin the microservice flavor of minio.


  • The primary set of master credentials is still used for administrative backend tasks, RPCs, and as an all-access key to all buckets.
  • In addition, a single key/secret can be set for each bucket. This key can be used on the web, in v2 and v4 requests, and with the streaming API. The per-bucket keys cannot enumerate the contents or existence of other buckets. The per-bucket key is currently stored in the main config.json file.
  • The per-bucket key cannot be used to delete the bucket or change the key.
  • For the purposes of ListBuckets, if the same API key is set for multiple buckets, then ListBuckets shows all buckets that correspond to the key used.

While this might not satisfy every possible use case, I believe it will be useful for a large population of users asking for this feature. Or at least me :)
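The authorization and ListBuckets rules described above could be sketched as follows (a hypothetical structure for illustration; the actual PR stores the per-bucket key in config.json):

```python
def authorized(access_key, bucket, creds):
    """creds: {"master": <key>, "buckets": {<bucket-name>: <key>}}"""
    if access_key == creds["master"]:
        return True  # master credentials retain all-access
    # a per-bucket key only opens its own bucket(s)
    return creds["buckets"].get(bucket) == access_key

def list_buckets(access_key, creds):
    """ListBuckets shows every bucket the presented key can open."""
    if access_key == creds["master"]:
        return sorted(creds["buckets"])
    return sorted(b for b, k in creds["buckets"].items() if k == access_key)
```

Note how a key shared by several buckets lists all of them, matching the ListBuckets behaviour described above.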

Outstanding issues

  • Currently only one key per bucket can be stored. This could probably be expanded with a little more effort, and some UI changes.
  • I’ve hijacked the password change web UI since I’m no expert in web stuff. The password change UI now controls the per-bucket key, leaving no way to change the master key/secret. This could probably be tweaked with a minimal UI change (eg a switch saying “Change per-bucket key”, or maybe two buttons instead of one for “Update Bucket Key” and “Update Master Key”).
  • No unit tests.
  • No documentation.

How Has This Been Tested?

I’ve tested this on my local machine manually, with the web UI and the “mc” tool.

Types of changes

  • New feature (non-breaking change which adds functionality)


  • [x] My code follows the code style of this project.
  • [x] My change requires a change to the documentation.
  • [ ] I have updated the documentation accordingly.
  • [ ] I have added unit tests to cover my changes.
  • [ ] I have added/updated functional tests in mint. (If yes, add mint PR # here: )
  • [x] All new and existing tests passed.
Updated 25/05/2018 21:01 4 Comments

Stop updating OD user data when MicroMasters updates


Currently, when a user updates their profile (manually) or gets an email change from edX (automatically on login), we synchronize the change to open discussions. edX + MicroMasters is the source of truth.

When open discussions allows users to modify their profiles and account settings, we want to let it be its own source of truth. We need to stop updating user data from MicroMasters.

This issue is blocked by,, and an issue that hasn’t been written yet to allow users to change their e-mail in open discussions.

Updated 22/05/2018 01:36

CORE-AAM: Add conditional platform mappings for nameless forms


From @matatk on January 24, 2017 19:7

This is a spin-off from #513 which resolves that region landmarks should only be considered as such if they are named (via aria-labelledby or aria-label).

Background: I have been researching how screen-readers expose landmarks to users, because I work on a WebExtension that does the same. This issue has been filed as a spin-off from issues filed in other W3C specs (linked below).

This issue proposes that the same approach be applied to form regions, for the reasons mentioned in that thread, and in the related HTML-AAM issue #82 but repeated here for convenience:

  1. There are instances in the wild of <form> elements wrapping whole pages [1,2,3] - these are not really valid landmarks, so considering them as such would add noise and confusion.
  2. When a <form> is ‘genuine’: if it lacks a label, there’s not really any useful landmark information there (it’s easy enough to navigate to the form/first form control for most users anyway), so adding all unlabelled forms would add too much noise to the landmark navigation.

Related issues:

  • Related remote issue:
  • Analogous issue in this repo, relating to region: #513

Copied from original issue: w3c/aria#514

Updated 21/05/2018 23:20 3 Comments

Report CSV Export


Allow users to request a CSV of the records included within a report. This will be the same format as exposed in #226, just adjusting which records are included.

The export option should be present when viewing a report.

Is blocked by impactasaurus/server#103.

Updated 21/05/2018 20:31 1 Comments

Automated Pocket Feed On The Homepage


This issue captures pre-reqs to delivering an automated feed of stories from mozillaHQ pocket account. It will be unblocked once we reach a decision on the homepage.

User Accounts

The current mozilla account is a shared login among a few mozilla employees. This should be a proper ldap account with 2 factor authentication enabled.

  1. Request from IT an LDAP user with Janis and any other folks who need access assigned (they have done this before).
  2. Once that is done sign up for pocket via -> and choose the "Sign Up with Google" option.
  3. Reach out to either Justin or Matt to migrate the existing account over.

Retrieving Data

Below is the note from Matt regarding the API:

Re: the API - the endpoint you specified will return you “saves” - basically anything that has saved to the specified account. It will not return you the recommended posts you put on If you want those, you will either need to use the approach Nick outlined -or- if you want to use the API, I think we could provide you access to a non-publicized endpoint that would return the recommended posts in JSON format (similar to the response you referenced).

Pocket Contacts

  • - Can assist with all things technical.
  • - Can assist with anything account or community related.
Updated 21/05/2018 15:46

TLS 1.3 Finished under application data keys


New test script idea

What TLS message this idea relates to?


What is the behaviour the test script should test?

The server can change its write keys as soon as it has sent the Finished message, provided it did not send a CertificateRequest message; at that point it has already calculated the application data keys.

Check that the server does not accept a Finished message encrypted using the application data keys instead of the handshake keys (i.e. that it does not fall back to trying decryption under the new set of keys).
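For context on why this check is meaningful: in the TLS 1.3 key schedule (RFC 8446 section 7.1), handshake and application traffic keys are derived from different secrets via HKDF-Expand-Label, so the two key sets necessarily differ. A minimal Python sketch with placeholder secrets (not real key-schedule values) illustrating the derivation:

```python
import hashlib
import hmac
import struct

def hkdf_expand_label(secret, label, context, length):
    """HKDF-Expand-Label from RFC 8446 section 7.1 (SHA-256, length <= 32)."""
    full_label = b"tls13 " + label
    hkdf_label = (
        struct.pack(">H", length)
        + struct.pack("B", len(full_label)) + full_label
        + struct.pack("B", len(context)) + context
    )
    # Single-block HKDF-Expand: T(1) = HMAC(secret, info || 0x01), truncated
    return hmac.new(secret, hkdf_label + b"\x01", hashlib.sha256).digest()[:length]

# Placeholder secrets: real values come from the TLS 1.3 key schedule.
handshake_secret = b"\x01" * 32
master_secret = b"\x02" * 32
transcript_hash = hashlib.sha256(b"placeholder transcript").digest()

# Server handshake traffic key vs. server application traffic key.
server_hs_traffic = hkdf_expand_label(handshake_secret, b"s hs traffic", transcript_hash, 32)
server_ap_traffic = hkdf_expand_label(master_secret, b"s ap traffic", transcript_hash, 32)
hs_write_key = hkdf_expand_label(server_hs_traffic, b"key", b"", 16)
app_write_key = hkdf_expand_label(server_ap_traffic, b"key", b"", 16)

assert hs_write_key != app_write_key
```

Because the key sets differ, a Finished record encrypted under the application data keys fails AEAD decryption under the handshake keys; the test should verify the server treats that as a fatal error rather than retrying decryption with the new keys.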

Are there scripts that test related functionality?


Additional information

Depends on #388

Updated 21/05/2018 14:02

Define PaymentResponse.retry() method


part of #705

The following tasks have been completed:

Implementation commitment:

  • [ ] Safari (link to issue)
  • [ ] Chrome (link to issue)
  • [x] Firefox - currently P3 bug
  • [ ] Edge (public signal)

Impact on Payment Handler spec?



This pull request gets us here… the user doesn’t yet know what’s actually wrong with the payment, but at least they know something is wrong.

async function doPaymentRequest() {
  const request = new PaymentRequest(methodData, details, options);
  const response = await request.show();
  try {
    await recursiveValidate(request, response);
  } catch (err) {
    // Retry was aborted.
    return;
  }
  await response.complete("success");
}

async function recursiveValidate(request, response) {
  const errors = await validate(request, response);
  if (!errors) {
    return;
  }
  await response.retry();
  return recursiveValidate(request, response);
}


Updated 23/05/2018 04:35 6 Comments

Shared map disappears when dashboard sorted by date


See First solution (reverted)

Second solution (blocked by Rails 4): it seems the best solution for this is this hack (which only works with Rails 4):

shared_vizs = Carto::SharedEntity.where(recipient_id: recipient_ids(user)).select(:entity_id).uniq
query = query.from(["visualizations JOIN (#{shared_vizs.to_sql}) shared_ent ON visualizations.id = shared_ent.entity_id"])

So we are probably going to take some weeks (or even a couple of months) to get this one fixed. It has a workaround that is not too bad, so I think that’s ok. But there is really no good way to solve this :(

I’d also add this in a comment: ```
The problem here is to manage to generate a query that PSQL will correctly optimize. The
optimizer seems to have problems determining the best plan when there are many JOINs, such as
when VQB is called with many prefetch options, e.g. from the visualizations index.
This is hacky but works, without a performance hit. Other approaches:
- Use a WHERE IN (SELECT entity_id…).
  psql does a very bad query plan on this, but only when adding the LEFT OUTER JOIN synchronizations
- Use a CTE (WITH shared_vizs AS (SELECT entity_id …) SELECT FROM visualizations JOIN shared_vizs)
  This generates a nice query plan, but I was unable to generate this in postgresql
- Create a view for shared_entities grouped by entity_id, and then create a fake model to join to the
  view instead of the table. Should work, but adds a view just to cover for a failure in Rails
- Use GROUP BY, ...
  For some reason, Rails generates a wrong result when combining group with count
- Use a JOIN: query.joins("JOIN (#{shared_vizs.to_sql} ...)")
  Rails insists on putting custom SQL joins at the end, and psql fails at optimizing. This would work
  if this JOIN was written as the first JOIN in the query; psql uses order to inform the optimizer.
  This is precisely what this hack achieves, by tricking the FROM part of the query.
```
Updated 21/05/2018 08:23

Update pyflakes to 2.0.0


This PR updates pyflakes from 1.6.0 to 2.0.0.

<details> <summary>Changelog</summary>

### 2.0.0

- Drop support for EOL Python <2.7 and 3.2-3.3
- Check for unused exception binding in `except:` block
- Handle string literal type annotations
- Ignore redefinitions of `_`, unless originally defined by import
- Support `__class__` without `self` in Python 3
- Issue an error for `raise NotImplemented(...)`


<details> <summary>Links</summary>

  • PyPI:
  • Changelog:
  • Repo: </details>
Updated 24/05/2018 19:38

WIP: tools: update ESLint to 5.0.0


This is still a work in progress. ESLint 5.x hasn’t actually been released yet. This PR contains ESLint v5.0.0-alpha.3. The two commits also need to be rebased in the opposite order to avoid a broken commit from being in master.

  • [x] make -j4 test (UNIX), or vcbuild test (Windows) passes
  • [x] tests and/or benchmarks are included
  • [x] commit message follows commit guidelines
Updated 25/05/2018 20:10 6 Comments

Establish "key" naming scheme


In issue #314, “keys” will be used to uniquely identify different CanonicalItems.

Right now, the CanonicalItem seeds file has a bunch of defaults set for it, but these are very arbitrary. Since a separate system will be coupled with this one via these keys, this will be difficult to change once in motion, so we should get it right the first time. :)

We should really have some programmatic way to determine a key string from the CanonicalItem name. This method should be put into the CanonicalItem model so that it can be reliably calculated.
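For illustration, one plausible convention for deriving a key programmatically (a hypothetical `name_to_key` helper, not an existing method in the codebase): lowercase the name, strip punctuation, and join words with underscores so the result is stable and identifier-safe.

```python
import re

def name_to_key(name):
    """Derive a stable key string from a CanonicalItem name.

    Hypothetical convention: lowercase, drop punctuation,
    collapse whitespace into underscores.
    """
    key = name.strip().lower()
    key = re.sub(r"[^a-z0-9\s]", "", key)
    return re.sub(r"\s+", "_", key)

name_to_key("Diapers (Size 4)")  # -> "diapers_size_4"
```

Putting the rule in one function on the model means the key can always be recomputed from the name, and any change to the convention happens in exactly one place.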

Updated 19/05/2018 22:26

Rewrite the Workspaces widget


Workspaces is taffybar’s most complex widget, and I wrote it at a time when I had not yet established some of the best practices that I have put in place today. It desperately needs a rewrite (not only because it still uses gtk2hs). Here is a breakdown of what needs to be done:

  • [ ] Use wnck haskell-gi/haskell-gi#167
  • [ ] Use a record instead of a type class for the workspaceWidgetController data type
  • [ ] Get rid of the strange behavior of keeping the same image objects around and swapping their pixbufs. Instead, each window should have its own image that is always associated with that window
  • [ ] Move icon handling out into its own module. Share an icon cache across all of taffybar
  • [ ] Migrate to gi-gtk
Updated 23/05/2018 06:48

PD Bugs: New File -> white screen


steps to reproduce

  • Create a new file
  • Add some labware & steps
  • Go back to the file page & create a new file again
  • Get white screen


Creating a new file currently doesn’t clear out existing data/state, so the app blows up when you create a new file while there is already data in the app state.

implementation notes

Laying the groundwork for the ‘load file’ functionality will probably make this go away, so I’m marking it blocked.

Updated 18/05/2018 21:27

Use buffer to periodically apply updates to search index


In issues #605 and #606, our initial approach for incrementally updating the ES index based on user actions was to perform real-time asynchronous updates. In the process of testing different scenarios, we found that there was no comprehensive way of dealing with race conditions/document version conflicts using this approach. Instead of updating the index in real time, we are going to maintain buffers of IDs for objects that need to be updated in the index, and periodically fetch those objects from reddit, serialize them, and update them in the index. This indexing approach is discussed in more detail in the OD search technical doc here:

Acceptance criteria:
  • Change current indexing actions to add document IDs to time-increment-specific ID sets in Redis
  • Create a cron job that processes the Redis ID sets after the given time increment has passed
  • ‘processes’ = fetches all objects from Reddit that correspond to those IDs, serializes those objects, and updates/creates them in the index
  • ID sets should be deleted after they are processed
  • All changes to posts/comments should be reflected in the index after the cron job is done processing for each time increment. This includes changes that trigger updates in related objects (e.g.: if a post is set to removed=true, all related comment documents need to be updated to have removed=true as well)
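The buffered-update flow above can be sketched in a few lines. This is an illustrative Python sketch only: a plain in-memory dict stands in for the Redis sets, and `enqueue`, `process_bucket`, and `BUCKET_SECONDS` are hypothetical names, not the project’s actual code (in production the set operations would be SADD/SMEMBERS/DEL against Redis, and `fetch` would hit the Reddit API).

```python
import time
from collections import defaultdict

BUCKET_SECONDS = 300  # hypothetical 5-minute time increment

# Stand-in for Redis: one set of document IDs per time bucket.
buffers = defaultdict(set)

def bucket_key(kind, ts=None):
    """Name of the ID set for the time bucket containing ts."""
    ts = ts if ts is not None else time.time()
    return f"index-buffer:{kind}:{int(ts // BUCKET_SECONDS)}"

def enqueue(kind, doc_id, ts=None):
    """Called from the real-time code path instead of indexing directly."""
    buffers[bucket_key(kind, ts)].add(doc_id)

def process_bucket(key, fetch, index):
    """Cron job: fetch fresh objects for the buffered IDs, reindex them,
    then drop the set so each bucket is processed exactly once."""
    ids = buffers.pop(key, set())
    for obj in fetch(ids):
        index(obj)
    return len(ids)
```

Because every change in a bucket is re-fetched fresh at processing time, two conflicting updates to the same document collapse into a single indexing call with the latest state, which is what sidesteps the version-conflict problem described above.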

Updated 23/05/2018 22:52 1 Comments

Do not display shared queries in "Dashboard - My saved queries"


Currently, when a user shares a query, that query shows up in “My saved queries” on the dashboard. However, that section should only show queries “saved” by the user. will provide backend support for a flag that can distinguish shared vs saved queries. Make sure the “My saved queries” section on the dashboard UI only shows saved queries.

Steps to reproduce:

  1. Go to File Repo and select Tissue Type is Saliva
  2. Click on Share - copy short URL
  3. Repeat step 2 four additional times

The query is displayed in My Saved queries 5 times (see screenshot, 2018-05-18 7:33 am).

Updated 24/05/2018 19:35 1 Comments

Authorization code is requested a second time after inputting the credentials


Build: 1.0 (1128)
Device: iPhone 7
iOS: 11.4b5

Steps to reproduce:

  1. Go to Sign In screen
  2. Log in with invalid credentials until “Authorize this sign-in” message is shown
  3. Enter the authorization code received by email
  4. Enter the correct credentials

Results:

  • The “Authorize this sign-in” message is shown a second time and a new email is received. After inputting the second authorization code, the user is signed in to FxA.

Updated 23/05/2018 20:53 5 Comments
