Contribute to Open Source. Search issue labels to find the right project for you!

[FEATURE]: Improve Overall Design (color palette, disposition, docs, etc.)




All the work done so far has come from developers (I think 🤔 ), so it would be really nice if we could get some help improving our design and user experience.

Right now only the Home Page is visible in the live version; to see the other pages, the project needs to be run locally (some screenshots attached).

I think it would be cool to create a /docs/design/ folder and add some documents like:

  • [ ] Design Guidelines
  • [ ] Color Palette
  • [ ] Screen design files
  • [ ] Contributing to design
  • [ ] Mobile versions (future)

Some useful links for the task:


Home Page image

DateTimeFormat image


I know this is a lot to ask, and the project is still changing fast but this would be really great 🎉 I know the community is 💪 so this will 🚢 for sure 🚀

For any question regarding this issue, just tag me in the comments and I will get back to you as soon as possible.

👐 👐 👐

Updated 14/10/2019 19:22

Replacing chromedriver-helper with webdrivers

    NOTICE: chromedriver-helper is deprecated after 2019-03-31.

    Please update to use the 'webdrivers' gem instead.

The article Replacing chromedriver-helper with webdrivers might help with migration to the webdrivers gem.
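The migration itself is typically a small Gemfile change, sketched here (group placement depends on the project):

```ruby
group :test do
  # gem 'chromedriver-helper'   # deprecated after 2019-03-31
  gem 'webdrivers'
end
```

The webdrivers gem downloads a chromedriver matching the installed browser automatically, so no rake task is needed to update it.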

Updated 14/10/2019 19:21

Pin messages


Is your feature request related to a problem? Please describe. Sometimes people like a message a lot because it contains valuable information or a link. Pocket enabled a feature in the Book Of The Month channel where a message that gets 3 pin reacts is pinned in the channel. Can we have this feature at least in the discussion wings channels, since they are open to all, and maybe in citadel?

Describe the solution you’d like If three or more people react with a pin to a message, the message gets pinned.
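The proposed rule is simple enough to state as a predicate; a minimal sketch with a hypothetical helper that receives a message's reaction counts (the threshold and emoji key are from the description above, everything else is an assumption):

```python
PIN_THRESHOLD = 3  # proposed number of pin reacts

def should_pin(reaction_counts):
    """Return True once the pin reaction reaches the threshold."""
    return reaction_counts.get("📌", 0) >= PIN_THRESHOLD
```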

Describe alternatives you’ve considered Haven’t really thought of alternatives apart from pinging a helper every time you wish to pin something.

Additional context I’ll try to find a screenshot and get back to you.

Updated 14/10/2019 19:20

create a makefile target for running tests with code coverage and visualizing the results


The Go language has built-in support for creating and displaying code coverage of unit tests.


  • research this interesting feature
  • create a makefile target like make coverage that executes the tests with code coverage. By default we don’t need it, so make coverage will enable coverage while make test keeps running without it. The make coverage target will run go test with some flags.

  • this task should create an HTML Go coverage report and then open a window in the web browser to visualize it.


We don’t want to enable code coverage or similar in Travis or CI, because code coverage can be a really misleading metric when run on PRs or in automation.

We just want to use code coverage from time to time to see what we could perhaps cover but there should be a human analyzing it.

A use-case could be:

As a dev, I run make coverage and it automatically opens a browser window showing the Go code coverage in HTML.

Then I can analyze from time to time what perhaps isn’t covered by tests. (We can’t cover everything with tests, since we don’t have a binary.)

The code-coverage check should remain a manual, human process.
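A sketch of what the two targets could look like (file and flag names are assumptions, not from the repo):

```make
test:
	go test ./...

coverage:
	go test ./... -coverprofile=coverage.out
	# renders the profile as HTML and opens it in the default web browser
	go tool cover -html=coverage.out
```

This keeps `make test` free of coverage overhead; `go tool cover -html` is what handles both generating the report and opening the browser window.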

Updated 14/10/2019 19:05

Travis-CI should upload builds to a temporary file locker like for more streamlined testing.


Requirements for completion

  • [ ] Windows binary upload to
  • [ ] Linux binary upload to
  • [ ] Discord API connected push notification to send the returned URLs and a message with details about which PR the uploaded test binaries are from, and potentially a link back to said PR as a shortcut for anyone reviewing the binary.
Updated 14/10/2019 18:53

Joins are not working


Hi, I’m trying to GET posts with joined tables, like in the example in the documentation. But as a response I get a list without the joined elements. Should I do something more to make joins work?

Updated 14/10/2019 19:17 2 Comments

Details needed on the homework assignment


@coursar Did I understand correctly that I need to create a parent class "Tariff" and several child classes from it, in which the necessary data is initialized and new data is added as needed?

By and large, the only things they all have in common are the name, the price, and the "More details" link:

So I need to create a parent class Tariff, then around 15 more child classes, and for each of them define the missing properties and initialize them?

Updated 14/10/2019 18:21

Indexed source information could not be retrieved from Autofac.Extensions.DependencyInjection.pdb. Error: Symbol indexes could not be retrieved.


Describe the bug This is the same as but for Autofac.Extensions.DependencyInjection.

The PDB that is built is the “portable” one, but we need the “full” PDB for many TFS configurations (e.g. TFS 2012; the other ticket said other TFS versions are affected).

To reproduce Build on older TFS server e.g. TFS 2012, which doesn’t support portable PDBs for indexed sourcing.

Resolution As with , build the project with the PDB type set to full instead of portable.
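A sketch of the kind of change described, assuming an MSBuild/SDK-style project file:

```xml
<!-- Emit full (Windows-native) PDBs instead of portable ones -->
<PropertyGroup>
  <DebugType>full</DebugType>
</PropertyGroup>
```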

Updated 14/10/2019 18:22 1 Comments

Create an item factory


I would like to have an object called ItemFactory that can generate instances of different game items. It should be able to take a JSON file which defines the items that can be generated, giving each an id and all of the parameters needed for generating it. An item can then be generated by id. Items can also be registered with the factory by a method call.
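A minimal sketch of the idea (the names, the JSON shape, and the dict-based item representation are all assumptions, not decisions):

```python
import json

class ItemFactory:
    """Registry of item blueprints, keyed by id."""

    def __init__(self):
        self._blueprints = {}

    def register(self, item_id, **params):
        # Items can also be registered directly by a method call.
        self._blueprints[item_id] = params

    def load_json(self, text):
        # Expects e.g. {"sword": {"damage": 5}, "potion": {"heal": 20}}
        for item_id, params in json.loads(text).items():
            self.register(item_id, **params)

    def create(self, item_id):
        # Generate a fresh item instance (here a plain dict copy) by id.
        return dict(self._blueprints[item_id])
```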

This issue is open to discussion so please comment if you have a better solution.

Updated 14/10/2019 18:13

32-bit support


KRF currently makes x86 specific assumptions, and may make x86_64-specific assumptions. We should try to eliminate the latter (if they exist) so that it can be built with support for x86_32-specific syscalls (e.g., iopl(2) in #6).

This has two parts:

  • Investigating the general feasibility of x86_32 bit builds + ensuring that KRF actually works when built in 32-bit
  • Refactoring the codegen/specs to only generate syscalls for the relevant platform (e.g., not emitting iopl(2) on x86_64)
Updated 14/10/2019 18:10

Make links to comments have the full comment link in the URL



Currently, a "(username) commented on your pull request" DM message from the GitHub bot will link to the associated pull request or issue, but will not include the full link to the comment. Because of this, the user is not taken directly to the comment when they click the link.

To improve UX of this feature, the user should be taken to the comment or review that caused the notification.

Ticket Link

Updated 14/10/2019 18:08

Step 7: Converting to binary form.


What is the correct (more elegant) way to serialize data into binary form? Do I need to create a separate data structure for this? I tried using unsafe code, but it didn't work out. Or is it all done by converting each field into a byte[] array, putting them into a List<byte>() and finally calling ToArray()? I'm specifically interested in an alternative approach.

Updated 14/10/2019 18:09 1 Comments

Implement Random button for Tiles.


We have a few buttons on the web page which need to be implemented.

Please implement the Random button for this issue.

  • [ ] On selecting the Random button, change its color to yellow
  • [ ] On selecting Random, set all the tiles to random images from the data set we have

We want to have as many people as possible contribute to open-connect-art, and to teach people how to contribute and take part in Hacktoberfest.

Take a look at the project’s for more details.

Updated 14/10/2019 17:58

Implement Flicker button on web page


We have a few buttons on the web page which need to be implemented.

Please implement the Flicker button for this issue.

  • [ ] On selecting the Flicker button, change its color to yellow
  • [ ] On selecting Flicker, change each tile's background to a random color

We want to have as many people as possible contribute to open-connect-art, and to teach people how to contribute and take part in Hacktoberfest.

Take a look at the project’s for more details.

Updated 14/10/2019 18:09 3 Comments

design decentralized network protocol for joinmarket


There has been some informal discussion about designing a truly decentralized network for joinmarket (see ), but so far no specific protocol design efforts have been made.

In this issue I would like to discuss and design a specification for such a decentralized protocol for joinmarket.

The below is a proposal draft. It is not perfect yet. Please review it and share any thoughts or issues on it.


Design a secure, decentralized protocol suitable for joinmarket. Makers must be able to publish their coinjoin offers; Takers must be able to retrieve a list of all available Makers' offers, along with information on how to contact them.

Specific design goals:

  • not less secure than the current IRC-based protocol
  • decentralized
  • providing strong anonymity for all parties

Protocol Idea

All communication must happen over Tor onion v3 services using WebSocket connections.

We assume three different roles in the network:

Directory Server

  • Maintains list of available offers from all Makers
  • Maintains list of other Directory Servers
  • Uses a gossip protocol to share new offers/Directory Servers with other known Directory Servers
  • Shares above lists with interested parties on demand (most importantly Takers)
  • Registers itself periodically with other Directory Servers
  • Has a (preferably) long-lived onion address


Maker

  • Publishes coinjoin offers periodically to Directory Servers
  • Has an ephemeral onion address linked to their offers


Taker

  • Bootstraps with a JM-shipped list of available Directory Servers
  • Retrieves additional servers from known Directory Servers
  • Retrieves Maker offers from known Directory Servers
  • Contacts Makers based on the received offers to carry out coinjoins

Protocol Details

Directory Server

  • Runs a (non-TLS) WebSocket server behind a Tor onion v3 service, running on a port in the range 55000 to 60000

    Rationale: Tor onion services provide fail-safe anonymity for all parties with strong e2e encryption. Moreover, Tor onion services provide reliable message routing and basic network-level attack defenses. WebSocket is a relatively low-overhead network protocol with broad support in not only all major programming languages but also web browsers. This may allow running a full JM client in a (tor-capable) browser. Ports are limited to avoid abusing the network for DDoS against unrelated onion services (eg web services).

  • Provides pluggable modules (or features) that can be used to share information. Basic modules shipped with joinmarket at the current stage could be directory_servers, jm_offers and a base module to list available modules. Module names must match ^[a-z_]+$.

    Additional modules could be: snicker, tx_broadcast

    Rationale: Keeping the features explicitly modular makes it easy to deploy new features. The implementation overhead should be negligible.

  • Modules provide an unauthenticated way to add new entries. Old entries will time out after a certain time (1h?). New entries will be forwarded to other known Directory Servers.

    Open question: Verify submitted entries to some extent?

    Open question: How to design the gossip protocol to allow renewing timeouts without flooding the network? Attach a timestamp? How about attacks?

    Rationale: All Directory Servers should always be able to accurately reflect the current state of the network.

  • Exact data submitted/provided by each module depends on the module:


directory_servers:

    • onion v3 hostname (without .onion)
    • supported modules


jm_offers:

    • onion v3 hostname (without .onion)
    • same data as current IRC offer implementation
    • (possibly switch out [sw](rel|abs)offer with some kind of flags system); each offer could allow attaching a list of arbitrary flags like [name[=value],…] where name and value match ^[A-Za-z0-9_]$, with flags like type=lsw or type=sw (for legacy/p2sh segwit or native segwit offers). This also easily allows backward-compatible extensions, e.g. for better sybil resistance. Flag names must be sorted by ASCII value and each name must occur at most once; this reduces leaking of identifying information. There should be a length restriction of (100?) characters.
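As an illustration of the flag scheme sketched above, here is a hypothetical helper that validates, de-duplicates, sorts, and encodes a flag list (the character class and the 100-character limit come from the draft; applying the class to whole names, and everything else, is an assumption):

```python
import re

# Draft gives [A-Za-z0-9_] for names/values; applying it to whole
# strings (with +) is an assumption made here for the sketch.
FLAG_PATTERN = re.compile(r"^[A-Za-z0-9_]+$")
MAX_ENCODED_LEN = 100  # the proposed "(100?)" length restriction

def canonicalize_flags(flags):
    """Validate name[=value] flags, reject duplicates, sort by name, encode."""
    parsed = []
    for flag in flags:
        name, _, value = flag.partition("=")
        if not FLAG_PATTERN.match(name):
            raise ValueError(f"invalid flag name: {flag!r}")
        if value and not FLAG_PATTERN.match(value):
            raise ValueError(f"invalid flag value: {flag!r}")
        parsed.append((name, value))
    if len(parsed) != len({name for name, _ in parsed}):
        raise ValueError("duplicate flag name")
    encoded = ",".join(n + ("=" + v if v else "") for n, v in sorted(parsed))
    if len(encoded) > MAX_ENCODED_LEN:
        raise ValueError("encoded flag list too long")
    return encoded
```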


Maker

  • Runs a (non-TLS) WebSocket server behind a Tor onion v3 service, running on a port in the range 60000 to 65000

  • Provides interface to initiate coinjoins with this Maker, similar to the existing IRC protocol.

    Todo: Update existing protocol to strip out unneeded encryption/authentication


Taker

  • Contacts hardcoded Directory Servers for more Directory Servers
  • (?maintains own list of Directory Servers after initial bootstrap?)
  • Contacts a random subset of (3?) known Directory Servers to retrieve offers
  • Selects appropriate makers based on existing selection algorithms and contacts them to do coinjoins

Todo: Individual protocol messages

The current protocol is documented at

Suitable protocol messages need to be designed for each known IRC command.

Currently no data serialization format is defined for the data sent through the WebSocket connection. I think protobuf would fill this gap nicely.

Related Projects

Several interesting projects have been mentioned in , and I would like to give a short rationale on why I excluded the other mentioned projects from this design draft.


Subspace

dead; could be used by Takers to contact Makers, but would seemingly only make things more difficult to implement while not providing an obvious benefit.

Not suited for collecting list of Maker offers.

Bittorrent DHT

not dead, but otherwise same as Subspace


It just looks complicated; I haven’t tried to understand how exactly it works, and I don’t know if it’s suited at all.


Centralized infrastructure? It could work, but seems kludgy for joinmarket’s purposes.


seems to be designed for running a centrally managed service in a decentralized manner

Further Thoughts

This is somewhat related to #371 but not necessarily dependent on it. The protocol should be designed in a way that makes it easy to extend.

Are there any known shortcomings in the protocol messages we have right now that should be addressed in a protocol upgrade?

Updated 14/10/2019 17:45

fix: Use different database for each user


Right now all the messages are stored in the same SQLite database, message_store.sqlite. This approach is wrong and will lead to the mixing of user messages.


Make a new directory in which we store a separate database for each user. The SQLite database filename will be based on the user's username. This way we can separate users who are using the same instance of tmessage locally.
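A minimal sketch of the proposed layout (the directory name and the sanitization step are assumptions):

```python
import os
import re
import sqlite3

DB_DIR = "user_databases"  # assumed directory name

def open_user_store(username):
    """Open (creating if needed) a per-user message database."""
    # Sanitize so usernames can't escape the directory or break filenames.
    safe = re.sub(r"[^A-Za-z0-9_-]", "_", username)
    os.makedirs(DB_DIR, exist_ok=True)
    return sqlite3.connect(os.path.join(DB_DIR, f"{safe}.sqlite"))
```

Sanitizing the username matters here because the filename is derived from user input; otherwise a name like `../x` could write outside the directory.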

Updated 14/10/2019 17:37

os: Windows: Stat fails on special files like 'con'



What version of Go are you using (go version)?

<pre>
$ go version
go version go1.13.1 windows/386
</pre>

Does this issue reproduce with the latest release?


What operating system and processor architecture are you using (go env)?

<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
set GOHOSTARCH=386
set GOHOSTOS=windows
</pre></details>

What did you do?

Try the following test_Stat-con.go to reproduce the issue:

```go
package main

import (
	"os"
)

func main() {
	_, err := os.Stat("con")
	if err != nil {
		panic(err)
	}
}
```

I have debugged and located that GetFileAttributesEx fails on special files like ‘con’:

and that only ‘nul’ is special-cased:

reserved names CON, PRN, AUX, NUL, COM1, COM2, COM3, COM4, COM5, COM6, COM7, COM8, COM9, LPT1, LPT2, LPT3, LPT4, LPT5, LPT6, LPT7, LPT8, and LPT9; CONIN$ and CONOUT$:


What did you expect to see?

We should be able to access these special Windows files, especially con for printing to stdout.

What did you see instead?

I found that sometimes we need to use stdout as a file: But con doesn’t work. By doing some research, I located the code here as described. GetFileAttributesEx and CreateFile in Stat fail on the reserved file name con, and only ‘nul’ is treated as a special case.

Updated 14/10/2019 17:47 1 Comments

Deterministic instance names for operators


What would you like to be added: Make kudo install use the operator name for the instance by default, instead of a randomized name.

Given the following operator.yaml, kubectl kudo install should create an instance named spark instead of e.g. spark-ftzp6f:

    name: "spark"
    version: "0.0.1"
    kudoVersion: 0.7.3
    kubernetesVersion: 1.15.0
    appVersion: "2.4.4"
    maintainers:
      # maintainers
    url:
    tasks:
      deploy:
        resources:
          # list of resources
    plans:
      # plans

Why is this needed: When users run kubectl kudo install multiple times, they end up with multiple operator instances with indistinguishable (not human-distinguishable) names. To make this deterministic, users have to provide the --instance flag. The default behavior of randomized instance names complicates the UX and requires users to pass extra parameters just to make installs deterministic.

Other package installation CLIs (e.g. helm or dcos) install a package with a default name and don’t allow creating two instances with the same name, so the user needs to provide a different name for a non-default service instance.

Another aspect of this is preventing users from accidentally installing multiple operator instances in a single namespace when only one instance is expected. If users want to install another instance of the same operator in a single namespace, they should provide the new instance name explicitly, so there’s no random naming involved.

Updated 14/10/2019 17:39 1 Comments

Outdated information about black incompatibilities


Bug report

What’s wrong

The documentation states that the project is not compatible with black. As far as I know, some of those points are no longer relevant.

for some reasons black uses “ that almost no one uses in the python world

There is an option for skipping this check now:

Line length. Violating rules by 10%-15% is not ok. You either violate them or not. black violates line-length rules.

Black now supports changing the line length too. In our setup it has no problems with flake8.

There is still an issue with trailing commas, I guess, though this does not trigger flake8 itself.

How it should be

The sentence "And there’s no configuration to fix it!" must be corrected somehow, for example by adding instructions for making black work with the project.

System information

Not relevant.

Updated 14/10/2019 19:23 3 Comments

Make access to KV store atomically safe when saving/removing subscriptions


If you’re interested please comment here and come join our “Contributors” community channel on our daily build server, where you can discuss questions with community members and the Mattermost core team. For technical advice or questions, please join our “Plugin: GitHub” community channel.

New contributors please see our Developer’s Guide and our Plugins Guide.


For subscriptions of GitHub projects to MM channels, all of the subscription configuration is stored in the subscriptions portion of the KV store slice used by the GitHub plugin. In order to avoid race conditions and conflicts when two users edit subscriptions concurrently, we should wrap the operations in a function like the atomicModify used in the Jira plugin.
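The atomicModify referenced here lives in the (Go) Jira plugin; as a language-neutral sketch, the idea is a retry loop around a compare-and-set, shown here with a toy in-memory store (none of these names are the Mattermost plugin API):

```python
class InMemoryKV:
    """Toy stand-in for the plugin KV store (illustration only)."""
    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def compare_and_set(self, key, old, new):
        # Succeeds only if nobody changed the value since we read it.
        if self._data.get(key) != old:
            return False
        self._data[key] = new
        return True

def atomic_modify(kv, key, modify):
    # Optimistic concurrency: re-read and re-apply until no
    # concurrent writer interferes between the get and the set.
    while True:
        old = kv.get(key)
        new = modify(old)
        if kv.compare_and_set(key, old, new):
            return new
```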

Ticket Link

Updated 14/10/2019 16:41

RFC: Managing tags without extra markup



Currently, react-head utilizes data-rh attributes in order to manage the tags that it creates. By default, react-head avoids touching tags that it doesn’t know about (that don’t have data-rh attributes). This is so that static tags that are part of a static HTML template are not touched.

There have been a few issues (both in this project and react-helmet) from folks who are interested in: 1) avoiding the extra markup, 2) having react-head manage tags that it doesn’t explicitly own (e.g. a default fallback title tag that’s defined as part of a static HTML template).

While these two items are certainly separate, they are closely related and we might be able to kill two birds with one stone depending on our approach.

Potential Solutions

Option 1: Manually created whitelist definition

One solution, as proposed in #84 is to define an explicit whitelist prop:

/* on server */
const headTags = [];
<HeadProvider headTags={headTags} whitelist={['title', '[name="description"]', '[property^="og:"]']}>
  <App />
</HeadProvider>

/* on client */
<HeadProvider whitelist={['title', '[name="description"]', '[property^="og:"]']}>
  <div id="app">
    <Title>Title of page</Title>
    // ...
  </div>
</HeadProvider>

This solution has the following benefits:

1) Easily implementable
2) Completely opt-in

While this works, it has the following drawbacks:

1) Creating that whitelist prop isn’t straightforward (you have to be familiar with uncommon CSS selectors, [foo^=bar])
2) You have to provide the same selector set to the server and the client (more manual overhead)
3) It doesn’t explicitly solve Problem 2 stated above (although it could, depending on implementation).

Implementation Details

The implementation can be explored in the WIP PR by @CanRau

Option 2: Auto-Whitelist

Expanding on Option 1, we could auto-generate the necessary “selector set” as you go (opt-in to this behavior with autoWhitelist prop on HeadProvider):

/* server */
const headTags = [];
<HeadProvider headTags={headTags} autoWhitelist>
    // ... within any component, head tags are rendered as normal
    <Meta name="blah-long-name" content="lots of content here" />
    <Title>Title of page</Title>
</HeadProvider>

/* on client */
<HeadProvider autoWhitelist>
  <App />
</HeadProvider>

Note that autoWhitelist prop isn’t very expressive/meaningful without context. Maybe we can come up with a better name? raw? clean? Any other ideas?

Implementation Details

On the server, as we render head tags, if any of them include a whitelist prop, we build up the selector set necessary to identify which tags were inserted by react-head (basically the selector set that was manually created in Option 1 above). We return this as a renderable DOM element as part of the headTags[] array with some unique ID that we grab from the DOM and provide as context during client rendering:

const headTags = [
  // normal head tags
  '<meta name="blah" content="foo" />',
  '<title>Title of page</title>',

  // new: the generated whitelist selector set, to be injected during SSR
  // we'd basically JSON.parse() this content on the client and use it
  // as part of the querySelectorAll:
  `<script id="react-head-whitelist" type="text/template">
      'meta[name="blah-long-name"][content="lots of content here"]',
      // ...
   </script>`,
];

CONs:

1) Sort of bloats the server-rendered payload (we’d basically duplicate all tags twice). There are some optimizations we could do here, e.g. select just the first few characters of each attribute instead of the full thing: meta[name^="blah"][content^="lots"]

PROs:

1) No additional setup required (as @CanRau suggested, the API doesn’t need to change much!)
2) Completely opt-in, so it’s non-breaking (current behavior would be the default). Maybe we can flip this in a future major release.
3) Depending on how we generate the whitelist selector set, we could also solve Problem 2 (e.g. in the above example the selector for the title tag is just title, instead of also including the full text content). Alternatively we could force opt-in to mangling by requiring an additional specific prop, <title mangle>Title of page</title>, but I'm not crazy about that idea.

Option 3: Individually opt-in to whitelist

As a tweak to Option 2, we could require opt-in per element: if we wanted to support whitelisting only individual elements, we could support an explicit whitelist prop on each head tag instead of an autoWhitelist prop on the HeadProvider.

  <title whitelist>Title of Page</title>

In this scenario you’d have a mix of data-rh and #react-head-whitelist selectors.

Not sure how popular this option would be and it complicates the implementation slightly so I’d almost rather have it be all or nothing. Thoughts?


PROs:

  1. Opt-in
  2. Selective SSR bloat vs all

CONs:

  1. Complicates implementation
  2. Potentially confuses the mental model for users

Option 4: The Nuclear Option

The above assume that folks don’t want react-head to manage tags that it hasn’t explicitly rendered. There is a “nuclear” option of allowing folks to opt-in to having react-head mangle all tags:

/* server */
const headTags = [];
<HeadProvider headTags={headTags} manageAllTags>
    // ... within any component, head tags are rendered as normal
    <Meta name="blah-long-name" content="lots of content here" />
    <Title>Title of page</Title>
</HeadProvider>

/* on client */
<HeadProvider manageAllTags>
  <App />
</HeadProvider>

If manageAllTags is set, we can avoid rendering data-rh attributes and just use a very greedy selector on the client, something like head > title, head > meta, which would select all tags in <head /> whether or not they were created by react-head. This would work, but would require that users express all their tags in the React tree and not rely on static HTML templates. This is okay if we make it opt-in and explain it thoroughly in the documentation.


PROs:

  1. Simple to implement
  2. Simple mental model
  3. No SSR payload bloat

CONs:

  1. All or nothing
  2. May be surprising for folks who don’t read the docs :)
Updated 14/10/2019 19:11 7 Comments

Trigger IP localization for sub-pages in Pimcore


Use case: We are offering a mix of localized and non-localized content on our website. For the assignment of localized content to relevant users we are using Global Targeting Rules and the respective Condition called Geographic Point which comes as a standard feature with Pimcore (our CMS system).

Current approach: The targeting rules are active and the web browser prompts every user that arrives on our landing page with a pop-up asking to allow localization of the IP address. While this is generally fine, we would like this popup to appear only when users hit a localized section of our website.

Existing configuration: Pimcore 6.1.2; Global Targeting Rule/ Settings/Action Scope = Hit, ../Conditions = URL (RegExp) AND Target Group AND Geographic Point, ../Actions = Assign Target Group AND Redirect

Question: Are there any options to trigger the IP localization only when localized sub-pages of the website are visited, and if so, how do we need to change the conditions of the Global Targeting Rules?

Updated 14/10/2019 16:28

Experiment with whether we can hand off code to a preinstalled optimized runtime


From some rough tests, Watt macro expansion is about 15x faster when the runtime is compiled in release mode than when it is compiled in debug mode.

Maybe we can set it up such that users can run something like cargo install watt-runtime, and our debug-mode runtime can then detect whether that optimized runtime is installed; if it is, it hands the program off to it.

Updated 14/10/2019 18:49 3 Comments

Set up a benchmark of the runtime


We’ll want to take some representative proc macro(s), compile them to wasm, and set up a benchmark that times how long it takes to expand some representative inputs.

This would be a necessary first step toward beginning to optimize the Watt runtime.

Updated 14/10/2019 16:16



Discriminators need:

  • better tests (because tests are also examples of how to use it)
  • better / actual documentation (#16)
  • rework of the already existing test(s) to make them easier to understand

A specific question for @B-Stefan (I already asked on Discord, but am re-posting it here): why does the (base) model of inheritanceClass (tests/models) have a discriminatorKey which is of type number and always 100? I’m asking in order to better understand the test.

Updated 14/10/2019 16:13

Weekly Translation of Pontoon Projects (Week 42/19)


Strings of the week

❗️ The list of strings to translate is updated several times during the week

Project information

This issue is used to manage the translations of Mozilla websites on the Pontoon platform.

Every week a new issue is opened with a list of the projects that have missing strings. You can always find the current week's issue on the project board. Anyone who wants to step up can, depending on their availability, "book" a set of strings. This helps organize the translation work and prevents several people from working on the same strings.


Before starting:

Taking charge of a project:

  • leave a message in the Comments section saying which project you want to work on (important to let the others know you are working on it). Also write your username next to the project name in the list above (use the pencil icon to edit the text; if you are not yet among the Collaborators you don't have permission to edit the list, so just write in the comments and someone with permissions will add your name to the list)
  • follow the instructions in chapter 3 and chapter 4
  • for any problem or question, feel free to write in this issue
  • when your translation is ready, leave a comment in the issue to give the OK for QA
  • stay available during the QA; you will be asked whether or not to confirm the reviewer's changes
  • and happy translating! 🎊

Useful links

Updated 14/10/2019 16:36 1 Comments

Open discussion on collection normalization



While working on #20 to fix a bug related to Sequence handling, I noticed that depending on whether we are building the AST, interpreting the code, creating a local variable, a parameter, an attribute, etc., collections are not handled the same way, which can confuse tools, maintainers, and users.

For instance:

class HelloWorld {

    Sequence(Integer) fieldInts;    // creates an EList of type ClassType(BasicEList)
    0 ..* Integer multipleInts;

    def void run() {
        Sequence(Integer) list2 := Sequence{1, 2, 3};    // creates an EList of type SequenceType

        self.fieldInts->sum();         // typechecker error, interpreter OK
        list2->sum();                  // typechecker OK, interpreter OK
        self.multipleInts->sum();      // typechecker OK, interpreter OK
        self.mysum(list2);             // typechecker & interpreter errors
        self.mysum(self.fieldInts);    // typechecker & interpreter OK
        self.mysum(self.multipleInts); // typechecker error, interpreter OK
    }

    def int mysum(Sequence(Integer) list3) { // expects an EList of type ClassType(BasicEList)
        result := list3->sum();  // typechecked with an error but interpreted as expected
    }
}

Handling all those cases makes the code harder to understand and can lead to duplications.


As already discussed with @dvojtise I believe we should think of a unified way to handle collections (for both users and maintainers).

In my opinion, the best we could do is to consider that min..max and Sequence(T) are different concrete syntaxes of the same abstraction, and hence:

  • either manage to implement them equally
  • or provide a kind of abstraction that would allow using AQL Sequences and EMF EList in exactly the same way.

Since we heavily rely on AQL to interpret expressions I don’t know if that would be really easy to implement, though.

@fcoulon, @pjeanjean, would you have any suggestion on this?

Updated 14/10/2019 15:34

[Enhancement] Add a validation for email address in validate_email


Currently we have only one validation for email address verification in validate_email: whether @ is present in the string or not. We could use the parseaddr Python utility to check whether the string is a valid email address.
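A minimal sketch of what that check could look like (the function name and the exact heuristics here are illustrative, not the project's actual code; note that parseaddr alone does not reject all malformed addresses, so an extra domain check is added):

```python
from email.utils import parseaddr

def validate_email(address: str) -> bool:
    """Rough email check using the stdlib parser (a sketch only)."""
    _, addr = parseaddr(address)
    # parseaddr extracts the addr-spec from forms like "Name <user@host>";
    # additionally require an '@' and a dotted domain as a minimal sanity check.
    return "@" in addr and "." in addr.split("@")[-1]

print(validate_email("Jane Doe <jane@example.com>"))  # True
print(validate_email("not-an-email"))                 # False
```

For stricter validation, a dedicated library or an RFC 5322-aware parser would still be preferable.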

Updated 14/10/2019 15:35 2 Comments

Global values.yaml no longer working


<!– Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks!

Please do not report security vulnerabilities with public GitHub issue reports. Please report security issues here: –>

What happened: We are providing global values from the parent chart to the ConfigMap, but they are not being picked up; instead they are overridden by the local values.yaml values.

What you expected to happen: We would expect the global variables set in the values.yaml of the parent chart to be picked and assigned

How to reproduce it (as minimally and precisely as possible): Deploying any one of the helm chart

Anything else we need to know?:

Environment:
  • Kubernetes version (use kubectl version): 1.13
  • Ruby version (use ruby --version):
  • OS (e.g. cat /etc/os-release): Mac OS
  • Splunk version: 1.2.0
  • Others:
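For reference, a minimal sketch of the expected precedence (chart and key names here are hypothetical, not the actual charts): in Helm, values under a parent chart's global: key should be visible to every subchart as .Values.global.* and should not be shadowed by the subchart's own values.yaml.

```yaml
# parent-chart/values.yaml
global:
  indexName: main          # expected to reach every subchart

# subchart/values.yaml
indexName: local-default   # local, non-global key; should not shadow the global

# subchart/templates/configmap.yaml would then reference:
#   {{ .Values.global.indexName }}  -> expected "main" from the parent,
#                                      but observed to be overridden locally
```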

Updated 14/10/2019 19:15 3 Comments

Add alembic migrations to the project


The Bitcart backend API uses the GINO ORM (a kind of ORM), which executes SQLAlchemy Core queries via an async database driver. But there are no migrations yet, so user data might be broken if we don't add migrations. All that is needed is to set up Alembic for this project and add code that applies the migrations somewhere in . But as Bitcart uses an async ORM, we might need to find an alternate solution for that, if possible. Also, any ideas on using a better ORM are appreciated. In the future we might switch to EdgeDB altogether.
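As a sketch of what the setup could look like (the connection URL below is a placeholder, not the project's real one), `alembic init alembic` generates an alembic.ini along these lines; note that Alembic itself runs synchronously, so migrations can use an ordinary sync driver URL even though the application talks to the database through an async driver:

```ini
# alembic.ini (generated by `alembic init alembic`; URL is a placeholder)
[alembic]
script_location = alembic
# Alembic runs synchronously, so a sync driver URL works here even if the
# application itself uses an async database driver.
sqlalchemy.url = postgresql://user:pass@localhost/bitcart
```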

Updated 14/10/2019 15:00

Add authentification to API


Our Bitcart backend API is written in FastAPI. It is located in file and the api/ folder. Currently it is accessible by everyone. We should add authentication: Token, JWT, or some other kind of auth. Some useful links:

The user model is defined in (validation for output in the API), and the database model in . Also, when a user is logged in, the API should be restricted to displaying only those wallets, stores, products, and invoices that the user has. The user has an is_superuser value; if it is True, then all data should be displayed. If the user is not a superuser, the users list might not be accessible at all, or a new endpoint could be added: /users/current or /profile, returning the logged-in user's info.

Updated 14/10/2019 14:56

Add batch routine to find and record sites


I have a desire to accumulate the results of a number of Google searches in a database of Sites or Businesses. I’m wondering how easy it is to do this using Google search. Do I need an API key or can I simply perform a load of searches from the command line and parse the results? How many people have done this before?

Updated 14/10/2019 14:52

Build: Linkerd install fails if the username contains an underscore


Bug Report

I have been trying to setup Linkerd for local development on my Arch Linux machine following the comprehensive development configuration.

According to the guide, building the docker images using DOCKER_TRACE=1 bin/mkube bin/docker-build runs fine. However, attempting to install Linkerd using bin/linkerd install | kubectl apply -f - fails with the message Error: dev-55e4fd18-srv_twry is not a valid version.

What is the issue?

I was able to track the issue down to the bin/ script. Basically, the issue is that my username on the system is srv_twry, which contains an underscore. The docker images are tagged with the username concatenated with the SHA hash of the HEAD commit, so the image name also contains an underscore.

The linkerd install command validates the image name and doesn't allow underscores in it, hence the error.

How can it be reproduced?

Change your username to have an underscore and follow the comprehensive development configuration instructions.

Logs, error output, etc

Error: dev-55e4fd18-srv_twry is not a valid version

linkerd check output



  • Kubernetes Version: 1.16
  • Cluster Environment: Minikube
  • Host OS: Arch Linux
  • Linkerd version: N/A

Possible solution

  1. Force me to change my username: Please don’t :)
  2. Remove underscores and other forbidden characters from the username before using it in the tag.
  3. Allow underscores in the version names.
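Option 2 could be as small as the following sketch (the function name is made up, the real tag logic lives in the bin/ scripts, and the allowed character set would need to match Linkerd's actual version validation):

```python
import re

def sanitize_tag_component(name: str) -> str:
    """Replace characters the version check rejects
    (here assumed to be anything outside [A-Za-z0-9.-])."""
    return re.sub(r"[^A-Za-z0-9.-]", "-", name)

print(sanitize_tag_component("srv_twry"))  # srv-twry
```

Applied before building the tag, this would turn dev-55e4fd18-srv_twry into dev-55e4fd18-srv-twry, which passes validation.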
Updated 14/10/2019 16:00

Make a hierarchy file for the used components.



Since the WhatsApp Web page is made with React, every time they re-deploy, the class names change and the whole project falls apart. 💔


A great idea from Naveen Prasanth was to make a file with the hierarchy of the components, so we can calibrate and update the class names automatically when the application starts.


The left panel that holds the chats has the following hierarchy: html/body/div/div/div/div/div/div/div/div/div/div
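A minimal sketch of what such a hierarchy file could contain (the key name and helper are hypothetical; the left-panel path is the one above). At startup, each structural path could be converted into a selector and used to rediscover the current generated class names:

```python
# Hypothetical hierarchy map: stable structural paths instead of class names.
HIERARCHY = {
    "left_panel": "html/body/div/div/div/div/div/div/div/div/div/div",
}

def to_css(path: str) -> str:
    """Turn a slash-separated hierarchy into a direct-child CSS selector."""
    return " > ".join(path.strip("/").split("/"))

print(to_css(HIERARCHY["left_panel"]))
```

The calibration step would then query the element via this selector and record whatever class names React generated for it in the current deployment.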

Updated 14/10/2019 14:39

ADCS Plugin :: ADCS Web enrollment deprecation


The Lemur ADCS plugin uses ADCS Web Enrollment (certsrv) for performing certificate management. I have learned that Microsoft ADCS Web Enrollment is outdated and probably deprecated. Can you please let me know if there is any alternative to ADCS Web Enrollment that Lemur already provides or plans to provide?

Thanks, Sudhakar

Updated 14/10/2019 16:50 1 Comments

Checking base during edgelist_from_base()


There is a bit of code in NetworkBuilding.R that has some adverse effects:

    ## Checking base
    message("Checking base...")
    base = try({
      checkBase(base)
    })
    if (class(base)[[1]] == "try-error") {
      stop("Cannot compute the network: the database is not correctly formated or contains errors. The database must first be checked with the function 'checkBase()'. See the vignettes for more details on the workflow of the package.")
    }

I think our workflow is designed to allow checkBase() to be performed separately, to allow the users to think about what is wrong in or with the database. However, the above code forces the complete function to be run as a first step, even if the input data has already been checked.

To me there seem to be three options here:
  1. Don't check the database in this function (which creates the possibility that the user will pass an unchecked database to the function, with all the potential problems that might create).
  2. Create an input flag for checking the database. (No guarantee that the above will not happen, but by actively having to set it to FALSE the user at least has to think about it.)
  3. Leave it as it is. This can be rather slow, as the database checking does take quite a bit of time for larger databases. On the other hand, it does create a fully integrated function to move from raw data to network in one go… The question is whether this is what we want.

I’d prefer option 1 or 2, but I'd like to hear your thoughts before adjusting the code.
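Option 2 could be as small as this sketch (the check argument name is hypothetical, and inherits() is used in place of the class()[[1]] comparison):

```r
edgelist_from_base <- function(base, check = TRUE) {
  if (check) {
    message("Checking base...")
    base <- try(checkBase(base))
    if (inherits(base, "try-error")) {
      stop("The database must first be checked with 'checkBase()'.")
    }
  }
  # ... build the edgelist as before ...
}
```

Users who have already run checkBase() separately could then pass check = FALSE to skip the slow re-check.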

Updated 14/10/2019 14:51 3 Comments

Targets try to evaluate GetDirectoryName('') and cause "path is not of a legal form" error


C:\Users\appveyor.nuget\packages\tunnelvisionlabs.referenceassemblyannotator\1.0.0-alpha.77\build\TunnelVisionLabs.ReferenceAssemblyAnnotator.targets(129,47): error MSB4184: The expression “[System.IO.Path]::GetDirectoryName(‘’)” cannot be evaluated. The path is not of a legal form.


Line in question:

It appears that _NetStandardReferences.Identity is not present. Maybe because of <Project Sdk="Microsoft.NET.Sdk.WindowsDesktop">?

Is there a way to avoid using an underscore name? I think I remember hearing that they use underscores as a way to communicate that they can and will break you if you use the name.

Updated 14/10/2019 15:22

Issue with jekyll serve after installation



I’m running Jekyll 3.8.5 and Ruby 2.5 on Windows. I installed version 1.0.2 of the plugin.

After installing it and adding it to my Gemfile as per the RubyGems website, when I run jekyll serve locally I get the following error.

Traceback (most recent call last):
        14: from C:/Ruby25-x64/bin/jekyll:23:in `<main>'
        13: from C:/Ruby25-x64/bin/jekyll:23:in `load'
        12: from C:/Ruby25-x64/lib/ruby/gems/2.5.0/gems/jekyll-3.8.5/exe/jekyll:11:in `<top (required)>'
        11: from C:/Ruby25-x64/lib/ruby/gems/2.5.0/gems/jekyll-3.8.5/lib/jekyll/plugin_manager.rb:51:in `require_from_bundler'
        10: from C:/Ruby25-x64/lib/ruby/gems/2.5.0/gems/bundler-1.17.1/lib/bundler.rb:114:in `require'
         9: from C:/Ruby25-x64/lib/ruby/gems/2.5.0/gems/bundler-1.17.1/lib/bundler/runtime.rb:65:in `require'
         8: from C:/Ruby25-x64/lib/ruby/gems/2.5.0/gems/bundler-1.17.1/lib/bundler/runtime.rb:65:in `each'
         7: from C:/Ruby25-x64/lib/ruby/gems/2.5.0/gems/bundler-1.17.1/lib/bundler/runtime.rb:76:in `block in require'
         6: from C:/Ruby25-x64/lib/ruby/gems/2.5.0/gems/bundler-1.17.1/lib/bundler/runtime.rb:76:in `each'
         5: from C:/Ruby25-x64/lib/ruby/gems/2.5.0/gems/bundler-1.17.1/lib/bundler/runtime.rb:81:in `block (2 levels) in require'
         4: from C:/Ruby25-x64/lib/ruby/gems/2.5.0/gems/bundler-1.17.1/lib/bundler/runtime.rb:81:in `require'
         3: from C:/Ruby25-x64/lib/ruby/gems/2.5.0/gems/jekyll-google-tag-manager-1.0.2/lib/jekyll-google-tag-manager.rb:4:in `<top (required)>'
         2: from C:/Ruby25-x64/lib/ruby/gems/2.5.0/gems/jekyll-google-tag-manager-1.0.2/lib/jekyll-google-tag-manager.rb:4:in `require'
         1: from C:/Ruby25-x64/lib/ruby/gems/2.5.0/gems/jekyll-google-tag-manager-1.0.2/lib/jekyll-google-tag-manager/version.rb:5:in `<top (required)>'
C:/Ruby25-x64/lib/ruby/gems/2.5.0/gems/jekyll-google-tag-manager-1.0.2/lib/jekyll-google-tag-manager/version.rb:6:in `<module:Jekyll>': superclass mismatch for class GoogleTagManager (TypeError)

I’m a real novice at Ruby and Jekyll. Could you advise whether this is an issue with the plugin or something to do with my setup?

Gemfile and config pasted below.

Many Thanks,


GEMFILE:

```
source ""

# Hello! This is where you manage which Jekyll version is used to run.
# When you want to use a different version, change it below, save the
# file and run `bundle install`. Run Jekyll with `bundle exec`, like so:
#
#     bundle exec jekyll serve
#
# This will help ensure the proper Jekyll version is running.
# Happy Jekylling!
gem "jekyll", "~> 3.8"

# If you want to use GitHub Pages, remove the "gem "jekyll"" above and
# uncomment the line below. To upgrade, run `bundle update github-pages`.
# gem "github-pages", group: :jekyll_plugins

# If you have any plugins, put them here!
group :jekyll_plugins do
  gem "jekyll-feed", "~> 0.6"
  gem "jekyll-paginate", "~> 1.1"
  gem "jekyll-google-tag-manager", "~> 1.0", ">= 1.0.2"
end

# Windows does not include zoneinfo files, so bundle the tzinfo-data gem
gem "tzinfo-data", platforms: [:mingw, :mswin, :x64_mingw, :jruby]

# Performance-booster for watching directories on Windows
gem "wdm", "~> 0.1.0" if Gem.win_platform?
```


# Welcome to Jekyll!
# This config file is meant for settings that affect your whole blog, values
# which you are expected to set up once and rarely edit after that. If you find
# yourself editing this file very often, consider using Jekyll's data files
# feature for the data you need to update frequently.
# For technical reasons, this file is *NOT* reloaded automatically when you use
# 'bundle exec jekyll serve'. If you change this file, please restart the server process.

# Site settings
# These are used to personalize your new site. If you look in the HTML files,
# you will see them accessed via {{ site.title }}, {{ }}, and so on.
# You can create any custom variable you would like, and they will be accessible
# in the templates via {{ site.myvariable }}.
title: Jekyll Netlify Boilerplate
description: >- # this means to ignore newlines until "baseurl:"
  Write an awesome description for your new site here. You can edit this
  line in _config.yml.
baseurl: "" # the subpath of your site if applicable, e.g. /blog
url: "" # the base hostname & protocol for your site, e.g.

# Plugins
#plugins: ["jekyll-paginate, jekyll-google-tag-manager, jekyll-feed"]
plugins: ["jekyll-paginate"]

# Permalink format (/blog/ is ignored for pages)
permalink: /blog/:title

# Enable section IDs in frontmatter, useful for identifying current page
# (used as a hook for styling etc)
section: true

# set to 'true' to enable Netlify CMS (/admin) in production builds
netlifycms: true

# set to 'true' to enable Google Analytics tracking code in production builds
analytics: false

    container_id: GTM-XXXXXX

# Compress HTML (in liquid via layouts/compress.html)
  clippings: all

# set some common post defaults
defaults:
  - scope:
      path: "" # an empty string here means all files in the project
      type: "posts" # previously `post` in Jekyll 2.2.
    values:
      layout: "post" # set the correct default template for a post
      section: "post" # set the root section name

# Build settings
markdown: kramdown
sass:
  style: compressed
  sass_dir: assets/sass

# Kramdown options
kramdown:
  # Prevent IDs from being added to h1-h6 tags
  auto_ids: false

# Include in processing (e.g. Netlify directives)
# Uncomment before use
# include:
#  - _redirects
#  - _headers

# Exclude from processing.
# The following items will not be processed.
exclude:
  - LICENSE.txt
  - netlify.toml
  - feed.xml
  - Gemfile
  - Gemfile.lock
  - node_modules
  - vendor/bundle/
  - vendor/cache/
  - vendor/gems/
  - vendor/ruby/

        output: true
        output: true
        output: false
        output: false
        output: false
        output: false
        output: false
        output: false
Updated 14/10/2019 18:27 2 Comments

Can dependent terraform templates reference other template inputs in mock?


I have a terraform template environment that is dependent on the inputs and outputs of various modules in a separate terraform template environment.

Here is the tree structure:

    └── multiple-cluster
        ├── common-infra (base)
        │   ├── 
        │   ├── terragrunt.hcl
        │   ├── 
        │   └── common-infra-west (deployment)
        │       ├── 
        │       ├── terragrunt.hcl
        │       ├── 
        │       ├── 
        │       └── 
        └── single-keyvault (base)
            ├── 
            ├── terragrunt.hcl
            └── single-keyvault-west (deployment)
                ├── 
                ├── terragrunt.hcl
                ├── 
                └── 

Here the single-keyvault environment has a dependency on the modules in the common-infra environment (specifically the keyvault). I need to obtain both the input config and the outputs from the common-infra environment before deploying. I see there are mock_outputs blocks for dependencies in Terragrunt, but is there also support for mock_inputs, so that the configuration from the other deployment is passed to this new Terraform template environment?

Here is a sample terragrunt.hcl of the single-keyvault template

    inputs = {
        # keyvault, vnet, and subnets are created separately by azure-common-infra
        keyvault_resource_group =
        keyvault_name =
        address_space =
        subnet_prefixes =
        vnet_name =
        vnet_subnet_id =

        # Cluster variables
        agent_vm_count = "3"
        agent_vm_size = "Standard_D4s_v3"

        cluster_name = "single-keyvault"
        dns_prefix = "single-keyvault"

        resource_group_name = "single-keyvault-rg"

        ssh_public_key = "<ssh public key>"

        service_principal_id = "${get_env("AZURE_CLIENT_ID", "")}"
        service_principal_secret = "${get_env("AZURE_CLIENT_SECRET", "")}"
    }

    include {
        path = "${path_relative_to_include()}/../azure-common-infra/terragrunt.hcl"
    }

    dependency "common-infra" {
      config_path = "../common-infra"

      mock_outputs = {
        # keyvault_name = "mock-Vault"
        # global_resource_group_name = "mock-rg"
        # address_space = ""
        # subnet_prefixes = ""
        # vnet_name = "mock-Vnet"
        vnet_subnet_id = "/subscriptions/<subscriptionId>/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/mock-Subnet"
      }
      mock_outputs_allowed_terraform_commands = ["validate", "plan"]
      # Dependency on vnet_subnet_id
      skip_outputs = true
    }
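One possible workaround, rather than a mock_inputs feature: re-export the needed inputs as ordinary Terraform outputs in the common-infra module, so the downstream deployment can consume (and mock) them through the normal dependency outputs. A sketch (the output name and wiring are hypothetical):

```hcl
# In the common-infra module: re-expose an input as an output
output "address_space" {
  value = var.address_space
}

# In single-keyvault/terragrunt.hcl: consume it like any other output,
# which mock_outputs can then stand in for during validate/plan
inputs = {
  address_space = dependency.common-infra.outputs.address_space
}
```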
Updated 14/10/2019 18:17 1 Comments

Editor: more info about local "new changes"


Is your feature request related to a problem? Please describe.

I wrote an article on multiple devices over the course of a week.

I finished it on one device and wanted to publish it on another after the weekend.

When I opened the article on the other device, it showed me an incomplete version of the article with the hint that this version has “new changes”.

I freaked out and drove home to see if I had the complete version as local “new changes” on one of my other devices.

Turns out the article was saved to the backend complete, and some of my devices had old versions which they sold me as “new changes”.

Long story short, it cost me 5€ and a 1h subway ride to find out everything was okay and I could simply have “clear”ed the “new changes”.

Describe the solution you’d like

The nice dev-style solution would be a Git(Hub)-style diff of the local version and my remote version, haha. Because it’s nice to see what’s old and what’s new, but it’s nicer to see whether the old stuff includes some things missing in the new stuff.
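Such a diff view could be built on an ordinary line diff; a minimal sketch (the article lines here are made up):

```python
import difflib

# Hypothetical local and remote snapshots of the same article.
local = ["Intro paragraph", "Old conclusion"]
remote = ["Intro paragraph", "New conclusion", "Extra section"]

# unified_diff marks lines only in the local copy with '-' and lines
# only in the remote copy with '+', exactly the view requested above.
for line in difflib.unified_diff(local, remote,
                                 fromfile="local", tofile="remote",
                                 lineterm=""):
    print(line)
```

Lines prefixed with "-" would immediately show whether the "new changes" are actually missing content that the other version has.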

Describe alternatives you’ve considered

A simple solution would be some dates and times.

Local version  15:12:11 15.10.2019 - old
Remote version 12:11:00 16.10.2019 - new

Additional context

5€!!! 💩

Updated 14/10/2019 16:44 2 Comments

Consider renaming Store Gateway.


It keeps coming up a lot recently:

There is huge confusion between store vs store gateway vs StoreAPI. We need to do something about it.

  • storeAPI: a gRPC API which serves as an interface for reading a stream of chunked series
  • store gateway: a service, a StoreAPI implementation, that allows streaming series from object storages
  • store: used both for the API and the gateway, depending on context :man_facepalming:

Proposals/ideas — rename the store gateway to:
  • bucketGateway — we might even be able to hook it into thanos bucket serve, as we already have thanos bucket which allows different things
  • bucketBrowser
  • blockReader

Updated 14/10/2019 13:58 2 Comments

Issue connecting to data


Thanks for the quick response. I am still not able to get the data in the preview either. image

I think there is some change in how credentials are obtained which is not discussed in the help file.

When I created a credential at the global level, both the YouTube Analytics and YouTube Data v3 APIs were enabled; I also enabled the YouTube Reporting API and created the JSON file. image

In the OAuth Consent screen tab,

I was asked to add an authorised domain, without which the redirect URL could not be entered, so I did that.

It also asks for scopes. If I do not include a scope, I get the forbidden-access error above; but if I add the YouTube Data and Analytics scopes, it asks me to submit the app for verification, which is nowhere discussed in the video explaining how to get OAuth credentials. I need your support. I request that you create a new credential for yourself to see this issue; your old OAuth file may still be working for you.


I am not able to use this. Please suggest how I can get the data. Yes, I am the primary owner of the channel, and I am doing this from the same email ID.

Originally posted by @sachintholia in

Updated 14/10/2019 14:49 1 Comments

Travis CI ?


I remembered that GitHub plays nicely with Travis CI, and builds can be configured through it. In addition to #100 we could consider Travis, but I personally have no experience with it, so I can't say much. Something tells me it isn't hard ;-)

Updated 14/10/2019 13:10

Add new awesome testing tool


Before adding a new testing tool, please follow the contributing guide:

  • Search previous suggestions before making a new one, as yours may be a duplicate.
  • Make an individual pull request for each suggestion.
  • Choose the corresponding section.
  • New categories, or improvements to the existing categorization, are welcome.
  • Research whether the tool you're including is actually awesome and useful.

Thanks!

Updated 14/10/2019 13:03
