Contribute to Open Source. Search issue labels to find the right project for you!

Can we replace Paranoia functionality with module_permissions?


We utilise the Paranoia module in Drupal 7, but it can have undesirable side effects, usually as a result of overly zealous permission setting by module maintainers.

We can potentially replace the core of its functionality with module_permissions, allowing the delegation of the ability to control modules (and their permissions) whilst still maintaining a module/permission blacklist.

Any thoughts/comments?

Updated 22/03/2018 23:39

Slack for StockadeBrewCo and HeardAgency


I have created a Slack account for StockadeBrewCo. Invitations have been sent:

  • abyrnes@
  • dheard@
  • sascha@

I have integrated Shopify, GitHub and ZOHO. Please join when you have some time; I need to transfer it to you guys, and then you can transfer it to Sascha.

This is another communication channel that can significantly improve workflow and ultimately a professional tool that will empower your client and Heard.

If you invite me to the HeardAgency Slack as an admin, I will integrate GitHub into the development channel for you guys as well.

This will take 15 minutes in total and will help innovate, streamline and improve your workflow. Communication has been an issue for Heard Agency because you are using primitive tools like email. For the life of the project (and others for Heard moving forward), you can significantly improve the communication and notification process with developers and see in real time what is being worked on and done, using the GitHub integration. There are many more benefits.

That way, much of the anxiety that is expressed daily would no longer be an issue, thanks to improved notifications and the ability to investigate things yourself. It empowers you guys!

Updated 22/03/2018 23:40

Support deserializing a flattened internally tagged enum


This would be valuable to support if possible. Serialization already works but deserialization does not.

#[macro_use]
extern crate serde_derive;

extern crate serde;
extern crate serde_json;

#[derive(Serialize, Deserialize, Debug)]
struct Data {
    id: i32,
    #[serde(flatten)]
    payload: Enum,
}

#[derive(Serialize, Deserialize, Debug)]
#[serde(tag = "type")]
enum Enum {
    A(A),
    B(B),
}

#[derive(Serialize, Deserialize, Debug)]
struct A {
    field_a: i32,
}

#[derive(Serialize, Deserialize, Debug)]
struct B {
    field_b: i32,
}

fn main() {
    let data = Data {
        id: 0,
        payload: Enum::A(A { field_a: 0 }),
    };

    let j = serde_json::to_string(&data).unwrap();
    println!("{}", j);

    println!("{}", serde_json::from_str::<Data>(&j).unwrap_err());
}

The second println! prints the deserialization error:

can only flatten structs and maps at line 1 column 31
Updated 22/03/2018 23:35

Limit number of partitions that data chunks of a "large" blob can reside on


Currently Ambry sprays the data chunks of a given “large” blob across all writable partitions (i.e. any partition is eligible to receive a data chunk). This can create some problems:

  1. If one chunk is lost, the whole blob is considered lost. Spraying chunks across multiple partitions spread over multiple nodes increases the area of failure.
  2. One disk being slow/unavailable can have a multiplying effect if it stores a single chunk of many large blobs.

In order to restrict these effects, it may be useful to limit the number of partitions that the data chunks of a particular “large” blob can reside on. At present, GET operations request 4 chunks in parallel and maintain that parallelism until the blob is fully served. It may be enough to have 4 * x partitions receive the data chunks (4, 8, 12, etc.) to serve any parallelism requirements.

The partition picking can be random to start with and can be made more intelligent based on requirements (which need to be evaluated):

  1. Pick from distinct disks and nodes.
  2. Pick from distinct disks but limit the number of nodes across which these disks are spread (to lessen the impact of node failures).
  3. …

Each of the options comes with its own upsides and downsides.
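The random-picking option could be sketched as follows; the function names and the round-robin routing of chunks within the chosen subset are illustrative assumptions, not Ambry's actual API:

```python
import random

def pick_partitions(writable_partitions, parallelism=4, x=2, rng=random):
    """Choose 4 * x partitions up front for one large blob, so all of its
    chunks land on a small subset while preserving GET parallelism of 4."""
    k = min(parallelism * x, len(writable_partitions))
    return rng.sample(writable_partitions, k)

def partition_for_chunk(chosen, chunk_index):
    """Route chunk i deterministically within the chosen subset."""
    return chosen[chunk_index % len(chosen)]
```

With x = 2 a blob's chunks touch at most 8 partitions instead of every writable partition, which bounds the blast radius of a single lost chunk or slow disk.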

Updated 22/03/2018 23:35

New elements and repos ideas


This is a list of ideas for elements that could be implemented in this org:

  • <integrated-terminal> - A terminal that could be fully customized and greatly styled (Huge Project)
  • <screen-capture> - A custom element that allows the user to select parts of the screen to capture

Other repos that might be implemented:

  • @electron-elements/utils - A utility module that could provide basic utilities like the TemplateManager and AttributeManager in send-feedback, and much more

Updated 22/03/2018 23:25 1 Comments

Armor defense stat range


Is there any interest in including the range of the defense stat that the armor pieces have? That is, the non-upgraded base defense value and the fully upgraded end value?

I could really use the info to show the result of the mixed set on my set builder/sharer.

Updated 22/03/2018 23:35 1 Comments

Record Federalist's Design Philosophy


Description of feature or bug

Federalist has a lot of people rotating on and off the project right now.

While we have a program README, naming Federalist’s priorities, we don’t have design principles.

We do have:

  • Content Rules
  • Identified content values from a content review
  • A lot of opinions from Will scattered across e-mails and Slack

Definition of done

  • [ ] We have recorded Federalist’s design philosophy components.

After evaluating, edit this part:

Level of effort - medium

Implementation outline (if higher than “low” effort):

  • [ ] We articulate the components of a design philosophy
  • [ ] Workshop led by @kategarklavs
  • [ ] Synthesis of workshop
  • [ ] Components drafted
  • [ ] Components reviewed
Updated 22/03/2018 23:20

Fix general visual aspects

  • Make the logo tighter
  • Me section:
    • Change to violet so it remains visible
  • Move ‘the first’ to the right, closer to Angular
  • Review typography. Use Poppins for ng 2017 and the same for buttons
  • In ‘Get the latest news’, add a period
  • Change the footer color to the right one
  • Add a notification bar at the top
Updated 22/03/2018 23:17

Exception using client.download_media


While downloading a great number of media files I’m getting this exception, which is neither handled nor raised to the caller:

Traceback (most recent call last):
  File "/home/user/tmp/telegram_backup/lib64/python3.6/site-packages/pyrogram/client/", line 338, in download_worker
  File "/home/user/tmp/telegram_backup/lib64/python3.6/site-packages/pyrogram/client/", line 2143, in get_file
  File "/home/user/tmp/telegram_backup/lib64/python3.6/site-packages/pyrogram/client/", line 554, in send
    r = self.session.send(data)
  File "/home/user/tmp/telegram_backup/lib64/python3.6/site-packages/pyrogram/session/", line 402, in send
    return self._send(data)
  File "/home/user/tmp/telegram_app/lib64/python3.6/site-packages/pyrogram/session/", line 388, in _send
    Error.raise_it(result, type(data))
  File "/home/user/tmp/telegram_backup/lib64/python3.6/site-packages/pyrogram/api/errors/", line 67, in raise_it
pyrogram.api.errors.exceptions.flood_420.FloodWait: [420 FLOOD_WAIT_X]: A wait of 1369 seconds is required

and the exception is reported again and again until the flood-wait time passes; the file is then downloaded, and the exception appears again with every download command.

The code that produces the exception is:

filename = client.download_media(message, block=True, file_name=name)
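A hedged sketch of how user code could cope in the meantime, assuming the exception exposes the required wait in seconds (the FloodWait class below is a stand-in for illustration, not pyrogram's actual class):

```python
import time

class FloodWait(Exception):
    """Stand-in for the server's flood-wait error; carries the wait time."""
    def __init__(self, seconds):
        self.seconds = seconds

def download_with_retry(download, retries=3, sleep=time.sleep):
    """Call download(), honouring flood-wait pauses between attempts."""
    for _ in range(retries):
        try:
            return download()
        except FloodWait as e:
            sleep(e.seconds)  # wait exactly as long as the server demands
    return download()         # final attempt; any error propagates
```

Ideally the library itself would sleep and resume, so the caller never sees the repeated exceptions described above.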

Updated 22/03/2018 23:40 3 Comments

Revisit trait vs fitness policy


For the types fwdpp::additive_diploid and fwdpp::multiplicative_diploid, #49 (fwdpp 0.5.6) introduced constructor arguments to allow these types to handle “fitness” vs “trait” calculations. Internally, these types store a std::function<double(const double)> to handle the final conversion of genetic values. Storing a std::function makes it impossible to determine which policy was used to construct an object.

To fix this:

  1. Add an enum class enumerating the 4 possible policies.
  2. Replace the std::function with a class holding an enum value plus the std::function.
  3. Throw an exception from the constructor if an invalid choice is passed in.
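The three steps above could be sketched like this (Python for brevity; the real fix would be C++, and the names are assumptions):

```python
from enum import Enum

class GvaluePolicy(Enum):
    """The 4 possible genetic-value policies (assumed names)."""
    ADDITIVE_FITNESS = 1
    ADDITIVE_TRAIT = 2
    MULTIPLICATIVE_FITNESS = 3
    MULTIPLICATIVE_TRAIT = 4

class GvalueConverter:
    """Pairs the conversion callable with the policy used to build it,
    so the policy stays recoverable (unlike a bare std::function)."""
    def __init__(self, policy, fn):
        if not isinstance(policy, GvaluePolicy):
            raise ValueError("invalid genetic value policy")
        self.policy = policy
        self.fn = fn

    def __call__(self, genetic_value):
        return self.fn(genetic_value)
```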
Updated 22/03/2018 23:27

Separate Level and Tree mode from Aggregation


Right now we have the following behavior:

  • Tree layout is always unaggregated
  • Level layout starts aggregated but can be unaggregated

We should separate that out with dedicated menu options.

Aggregated / not-aggregated behavior in level mode stays the same, i.e., it applies recursively.

Aggregation in tree mode is added, lower levels of aggregated nodes are not shown.

This has some important implications for partial aggregation, which we do want to support.

Updated 22/03/2018 23:10

Add race committee documents for scoring


Not quite fitting the name of this repo, but this is probably the best place for creating checklists for the race committee so they can make sure all details needed are announced in the morning and all data required for scoring is available to them.

Updated 22/03/2018 23:09

Issue 1489


Fixes #1489

Changes proposed in this pull request:

  • Puts certain checks under the ExpertMode parameter.
  • Makes a few tweaks to CheckIDs and priority numbers.
  • Adds a file of Checks to the documentation folder.

How to test this code:

  • Default mode (Expert = 0) should be a bit faster.
  • Running with Expert = 1 should, in some cases, give you extra warnings.

Has been tested on (remove any that don’t apply):

  • Case-sensitive SQL Server instance
  • SQL Server 2016
  • SQL Server 2017

Updated 22/03/2018 23:07

*-UnitySetup API should align with the Package/Module APIs that are already well known.

  • [ ] Find-UnitySetupInstaller -> Find-UnitySetupInstance, should find all versions if called without flags.
  • [ ] Install-UnitySetupInstance -> Shouldn’t require installers from Find.
  • [ ] Get-UnitySetupInstance -> Should support filters from Select-UnitySetupInstance
  • [ ] Select-UnitySetupInstance -> Drop cmdlet entirely
  • [ ] Update-UnitySetupInstance -> Should install latest version, use flags to signal patch or beta allowed.
Updated 22/03/2018 23:02

Allow folding a files content in diff view


Currently there is no way to fold up a large diff for a file in the diff view, so you end up having to scroll through all its content to get to the diff of the next file in a commit’s diff view. It would be nice to be able to fold one and/or all files and then unfold them individually.

Updated 22/03/2018 22:59

stopover type


Probably the most important feature FPTF is missing at the moment (IMHO) is the possibility to express departures and arrivals at a specific station (almost every public transport API has an interface for this).

I therefore propose a new type that could look like this (using fptf@1 keywords, disregarding the discussions in #27, #33 or #34 about possible changes in fptf@2 for now).

    {
        type: 'stopover', // alternative proposals: 'arrival', 'departure', but these could be misleading since the accurate term in English would be "arrival or departure"
        station: '12345678', // station/stop object, required; the name could be misleading since stops should also be valid. Maybe 'halt' would be a better name, or two different keys "station" and "stop", but that's also probably not the best solution
        platform: '4-1', // string, optional
        arrival: '2017-03-17T15:00:00+02:00', // ISO 8601 string (with destination timezone), required if `departure` is null
        arrivalDelay: -45, // seconds relative to scheduled arrival, optional
        departure: '2017-03-16T20:00:00+01:00', // ISO 8601 string (with station/stop timezone), required if `arrival` is null
        departureDelay: 120, // seconds relative to scheduled departure, optional
        schedule: '1234', // schedule id or object
        mode: 'train', // see section on modes, overrides `schedule` mode
        subMode: …, // reserved for future use, overrides `schedule` subMode
        public: true, // publicly accessible?, overrides `schedule` public
        operator: 'sncf' // operator id or operator object, overrides `schedule` operator
    }
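A minimal validation sketch of the mutual-requirement rule (arrival is required when departure is null and vice versa); Python for illustration only, names taken from the proposal:

```python
def validate_stopover(obj):
    """Check the core constraints of the proposed stopover type."""
    if obj.get("type") != "stopover":
        return False
    if not obj.get("station"):          # station/stop reference is required
        return False
    # at least one of arrival/departure must be present
    if obj.get("arrival") is None and obj.get("departure") is None:
        return False
    return True
```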

Any opinions / further proposals?

Updated 22/03/2018 22:58

NLX 1.1.3 - re-try to use prompt param for autologin


See also: where prompt was disabled for NLX 1.1.3

TL;DR:

  • RP GET /authorize… 302
  • … GET /login: we detect there is no prompt setting and that we should try autologin, and we 302
  • … GET /authorize?prompt=none
  • The account is a Google account, so auth0 302s to Google
  • GET <= the prompt param is forwarded by auth0 here
  • Google 302s to the auth0 callback with an access_denied code (which is not a valid OpenID Connect code)
  • auth0 302s to the RP, forwarding the error code
  • The user is stuck / can’t log in

Possible fixes:

A) Google sends a valid error code such as login_required or interaction_required and auth0 sees it, then retries with prompt=login (this might already work, but I haven’t tested) (most preferred)

B) We disable the Google social connector, add a new custom connector that uses Google OIDC, and drop the prompt= param ourselves, or handle the return code ourselves

C) We don’t follow the standard and don’t implement prompt (least preferred)

Note: GitHub may suffer from the same issue; not tested.

Updated 22/03/2018 22:55

Record time spent on various sites


This is a(n):

  • [x] New feature proposal
  • [ ] Reporting a keyword
  • [ ] Error
  • [ ] Proposal to the Search Engine
  • [ ] Other


Record the time :clock10: spent by the user on various domains.

A separate page should list this data so the user can arrive at his/her own insights.

We can use this data to provide useful insights to the user and develop an understanding of the user’s mental state to provide better results/services. The user’s privacy will not be compromised, as the data will be used locally and no one will have access to it unless the user shares it.

This feature will pave the path towards providing user-specific environment/ features in Quark. :+1:
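One possible shape for the tracking core, sketched under assumed names (not Quark's actual code): accumulate seconds per domain as the active site changes, keeping everything local.

```python
import time

class TimeTracker:
    """Accumulates time spent per domain; data never leaves the machine."""
    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.totals = {}      # domain -> seconds spent
        self.current = None   # domain currently in focus
        self.since = None     # timestamp when focus started

    def switch(self, domain):
        """Record that the user is now on `domain` (None = idle)."""
        now = self.clock()
        if self.current is not None:
            self.totals[self.current] = (
                self.totals.get(self.current, 0.0) + now - self.since)
        self.current = domain
        self.since = now
```

The insights page would then just render `totals`, and any analysis stays opt-in and on-device, matching the privacy promise above.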

Updated 22/03/2018 22:53

Extended year format in date strings


Feature Request


The ECMAScript simplified ISO 8601 date-time string specification includes support for signed six-digit extended years. Currently only four-digit years are supported. Enhance the date and datetime validation types to support extended years according to ES5.


Some date strings that should be accepted:

  • -270999-03-21T15:00:59.008Z
  • +060964-11-05T06:53:16.366Z
  • +000000-01-01T00:00Z
  • -012345-02-22
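A sketch of the accepting grammar in regex form (Python for illustration; a simplified reading of the ES5 Date Time String Format, not the library's actual validator):

```python
import re

YEAR = r"(?:\d{4}|[+-]\d{6})"            # four digits, or signed six-digit extended year
DATE = YEAR + r"(?:-\d{2}(?:-\d{2})?)?"  # optional month and day
TIME = r"\d{2}:\d{2}(?::\d{2}(?:\.\d{3})?)?"
ZONE = r"(?:Z|[+-]\d{2}:\d{2})?"
DATETIME = re.compile(DATE + r"(?:T" + TIME + ZONE + r")?\Z")

def is_es5_datetime(s):
    """Accept date strings with either plain or extended years."""
    return DATETIME.match(s) is not None
```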
Updated 22/03/2018 22:52

Create a TCP reordering module


In some cases, TCP packets get scrambled on the network and don’t arrive at the other side in the same order that they were sent. The purpose of this module is to re-order the packets in the same way they were sent, meaning according to the TCP stream order. This sounds a little like TCP reassembly, but the difference is that this module should only re-order packets, nothing more. It should be simpler and more lightweight than TCP reassembly.
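The core of such a module could look like this sketch (Python for illustration; the class and method names are assumptions): buffer out-of-order segments and release them strictly in sequence order, with no reassembly of the payload.

```python
class TcpReorderer:
    """Releases TCP segments in stream (sequence-number) order."""
    def __init__(self, initial_seq):
        self.next_seq = initial_seq
        self.pending = {}  # seq -> payload of segments that arrived early

    def push(self, seq, payload):
        """Accept one segment; return the segments now deliverable in order."""
        released = []
        self.pending[seq] = payload
        while self.next_seq in self.pending:
            data = self.pending.pop(self.next_seq)
            released.append(data)
            self.next_seq += len(data)  # advance past the released bytes
        return released
```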

Updated 22/03/2018 22:50

ScreenGridLayer: Make `colorRange` and `colorDomain` experimental


Background

  • Make colorRange and colorDomain experimental.
  • Un-deprecate minColor and maxColor.
  • Use minColor and maxColor when they are provided; otherwise use colorRange and colorDomain; if none are provided, fall back to the default minColor and maxColor.
  • Update examples to use minColor and maxColor.
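The precedence in the third bullet could be sketched as follows (prop names from the issue; the default values are placeholders, not deck.gl's real defaults):

```python
DEFAULT_MIN_COLOR = [0, 0, 0, 255]      # placeholder default
DEFAULT_MAX_COLOR = [0, 255, 0, 255]    # placeholder default

def resolve_color_props(props):
    """Decide which color props drive rendering, in priority order."""
    if "minColor" in props or "maxColor" in props:
        return {"minColor": props.get("minColor", DEFAULT_MIN_COLOR),
                "maxColor": props.get("maxColor", DEFAULT_MAX_COLOR)}
    if "colorRange" in props or "colorDomain" in props:
        return {"colorRange": props.get("colorRange"),
                "colorDomain": props.get("colorDomain")}
    return {"minColor": DEFAULT_MIN_COLOR, "maxColor": DEFAULT_MAX_COLOR}
```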

    To Do List

    • [ ] Add label and assign to milestone
    • [ ] Coding
    • [ ] Doc update
    • [ ] What’s new update
    • [ ] Test
Updated 22/03/2018 22:48

Proposal: SSN-Extensions ontology


I’ve drafted a proposal for a small ontology which extends SSN in two directions:

  1. Add ObservationCollection class, for sets of observations that share a common foi|op|sensor|procedure etc - aids discovery and also streamlines serialization
  2. Add hasUltimateFeatureOfInterest to link an Observation|Sampling|Actuation to the intended subject, useful for cases where a Sample is the proximate feature of interest, to support intentionality and data discovery

The goal would be for this to be issued by the SDWIG as a W3C Note, to supplement the SSN Ontology.


Updated 22/03/2018 22:59 1 Comments

clustering: support own node id in cluster peers


Right now in order to start a 3 node nats-streaming-server cluster you must do:

node0: --cluster_id a --cluster_peers b,c
node1: --cluster_id b --cluster_peers a,c
node2: --cluster_id c --cluster_peers a,b

Doing this doesn’t work:

node0: --cluster_id a --cluster_peers a,b,c
node1: --cluster_id b --cluster_peers a,b,c
node2: --cluster_id c --cluster_peers a,b,c

This doesn’t make life easy on a kubernetes statefulset.

If nats-streaming simply pruned its own ID from the cluster_peers list, it would be very easy to write:

node0: --cluster_id $(HOSTNAME) --cluster_peers ns-0,ns-1,ns-2
node1: --cluster_id $(HOSTNAME) --cluster_peers ns-0,ns-1,ns-2
node2: --cluster_id $(HOSTNAME) --cluster_peers ns-0,ns-1,ns-2
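The requested pruning is tiny; in sketch form (Python for illustration, though the server itself is Go):

```python
def effective_peers(own_id, cluster_peers):
    """Drop our own ID so every node can be given an identical peer list."""
    return [p for p in cluster_peers.split(",") if p and p != own_id]
```

With this, a Kubernetes StatefulSet can pass the same `--cluster_peers ns-0,ns-1,ns-2` to every pod and let each node exclude itself.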
Updated 22/03/2018 22:47 2 Comments

[WIP] Add firebase login sample


The first sample added 🚀

Notes:

  • I did not invest too much time in the GoogleFirebaseAuth class. Since all the calls from Google and Firebase are static, I did not want to overcomplicate the job because of those third-party libraries.
  • Please add the google-services.json to build the sample.

Updated 22/03/2018 22:58

Display rotation


Adds 0, 90, 180 and 270 degree rotations to the display. Usable like so:

let mut disp = Builder::new()

Also adds the get_dimensions() method so consumers of this driver can position stuff relative to the display size.

Updated 22/03/2018 22:39

Filter: prepend


A prepend filter would add a given number of a character to a sprite. This would be useful for shifting sprites without having to change the original sprite. The arguments would be the character you want to prepend and the number of times you want it to be prepended. As an example: "Filter": ["0", 64] would add the character 0 64 times before the start of a sprite.
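The proposed filter is essentially this one-liner (a sketch in Python; the host project's filter interface is an assumption):

```python
def prepend_filter(sprite, char, count):
    """Return the sprite with `char` repeated `count` times in front."""
    return char * count + sprite
```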

Updated 22/03/2018 22:40

Improve multi-ikey support


Currently, only one TelemetryContext object can be constructed at the appInsights level, and only the instrumentation key set in the initial config is honored in many places inside the SDK. At the same time, a user can create their own TelemetryProcessor that overrides the ikey under given conditions. This means that, in theory, a multi-ikey scenario can be constructed, but it is not supported correctly at all levels.

Updated 22/03/2018 22:27

Long-term exploratory idea: on-site prompts to provide translations


This is a sketched out idea for a big feature set that would enable crowdsourced translations on plots2; please don’t begin implementation until we think it through and consider the pros/cons/feasibility. Thanks!

Internationalization/localization right now are handled through the i18n gem, and stored in this directory:


There are several problems:

  1. There’s not a great workflow for: soliciting translations
  2. …copying translations into the relevant .yml files (as above)
  3. …checking that they’re correct by a fluent speaker
  4. …attempting to start new languages and reaching out to speakers
  5. …dealing with what to display if a translation is not present
  6. …making it easy for coding newcomers to add new features with text that ought to be translated


A few break-out ideas:

  • [ ] currently translatable text is represented like: <%= t('') %>. What if the “key”, instead of being this mysterious series of letters, could be the English phrase itself, so in this case, “Blog”? Then, to add a new section, you’d just add: <%= t('Blog') %>, which is readable
  • [ ] what if the t() method defaulted to the passed string if the actual translation entry weren’t available? So if, as above, no English or other translation is found at /config/locales/en.yml, it would just show the string “Blog”. This would allow people to begin making text translation-ready without needing to add an extra file or entry
  • [ ] our Dangerfile could try to detect t(...) additions and suggest adding the corresponding entry to the English translation.
  • [ ] (extra) we should have a script to find unused translations and remove them
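The fallback behaviour sketched in the second bullet reduces to a one-line lookup; as a language-agnostic sketch (Python here, while plots2 itself is Rails):

```python
def i(key, translations):
    """Translate `key`, falling back to the key itself (the English phrase)."""
    return translations.get(key, key)
```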

The key idea, though, would be:

  • [ ] for each use of the t(...) translation method, we could append a little globe icon next to text with the link (if the text is “Blog”) – for the currently set language of the user, if any.
    • this could be done by using another helper in /app/helpers/application_helper.rb like i(...) which searches for a translation and, if it finds one, passes on to t(...) or defaults to the passed string
  • [ ] initially, this could just go to a page where it guides you step-by-step how to submit a translation via GitHub, linking to the file like
  • [ ] Later, this could instead prompt the user (via a web form, a bot, or something) to translate the text, which would be submitted in the appropriate place in /config/locales/___.yml once they choose a language to translate to, without having to use GitHub.
  • [ ] Once done, they could be prompted to translate other strings. There really aren’t that many per language, to be honest.

Why this is a good idea

In this way, we could:

  1. make adding new code which is compatible with the translation system easy
  2. recruit translators through itself
  3. make adding translations pretty easy (and a good first-timers-only task!)

I think a starting point would be to make the i(...) internationalization replacement for i18n’s t(...), which defaults to a string, and to begin moving translations into a single long file to make it easier to find the right one.

Again, please don’t yet start on this; we have lots of big projects going already. But it’s an idea for the future.

Also note: Transifex bot and Transifex CLI

Updated 22/03/2018 22:26
