
Complex pages


We want these eventually. We really need complex IDs though.

Maybe we can do a “complex” display where we just display all of the annotations with a specific GO complex ID as an interim, if it's quick.

So, it would look a little like a publication page

but instead of “Annotations from this publication” it would be “annotations to this complex”

This would be useful/interesting for seeing phenotypes like this, because complex members usually pheno-copy each other… and obviously we would expect similar processes too.

It would be handy to also be able to see all of the extensions for a complex together, because sometimes the extensions are attached to different subunits.

Updated 29/04/2017 15:53

draft toots


We should add functionality to save (autosave?) toots as drafts. Upon opening a new toot-compose buffer, we could offer the choice to resume composing a previously saved toot.
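A minimal sketch of what the draft store could look like, assuming a JSON file as storage (the file name, format, and function names here are all hypothetical; the actual client would persist drafts in its own way):

```python
import json
import time
from pathlib import Path

DRAFTS_FILE = Path("drafts.json")  # hypothetical storage location

def load_drafts():
    """Return all saved drafts, oldest first."""
    if DRAFTS_FILE.exists():
        return json.loads(DRAFTS_FILE.read_text())
    return []

def save_draft(text):
    """Append the current buffer contents as a timestamped draft."""
    drafts = load_drafts()
    drafts.append({"saved_at": time.time(), "text": text})
    DRAFTS_FILE.write_text(json.dumps(drafts))

def resume_latest():
    """Offer the most recent draft when opening a new compose buffer."""
    drafts = load_drafts()
    return drafts[-1]["text"] if drafts else None
```

Autosave would then just call `save_draft` on a timer or on buffer changes.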

Updated 28/04/2017 18:43

CMSIS kernels for neural networks



We have been writing some computation kernels for running different neural networks on Cortex-M based systems using some of the CMSIS functions.

We think it might be good to include these in CMSIS-DSP. What would be the best way to do this?

Here is an example of a convolution function using CMSIS functions: convolution_example.txt

Thanks, Liangzhen

Updated 25/04/2017 07:18 1 Comments

Support law firms


Very much one for the future (it came up in testing, so I'm writing it down just so it doesn't get lost).

The tester tended to search not for an individual lawyer but for a law firm whose name he knew from the newspapers. This suggests we could work with law firms in some way. Concretely, it would mean:
  • for Radim: extending the information crawled from the ČAK to include the right-hand column, i.e. the company details, and then (for whoever the task falls to)
  • creating a law firm page with a list of the lawyers belonging to it and information about the firm (+ links, e.g. to … or something like that, + to Hlídač smluv, etc.)
  • on the lawyer's page, adding the name of the firm where the lawyer currently works (and ideally also where they worked in the past, which we will know because we now track changes in the ČAK)
  • probably supporting search in law-firm names right away as well
  • and displaying law-firm results separately from lawyers, for example

Updated 25/04/2017 16:24

Ecmascript ... Atom?


For the last few weeks I’ve been experimenting with alternatives to the regex-based approach to highlighting used by tmLanguage / Sublime syntax definitions, to finally get around the limitations baked into these systems.

Unfortunately the Sublime API doesn’t provide the necessary hooks to perform scoping by other means — but it seems that, although it’s not immediately obvious from the docs, Atom does.

I’m now working on an initial proof-of-concept and wanted interested folks (e.g. @aziz) to know.

If all goes well, the idea is that one will be able to choose ES/TS version from the menu and also toggle support for extensions like JSX and flow. I have a few other ideas, too, but I probably shouldn’t get too ahead of myself yet — until I’ve finished the vertical POC this new project remains somewhat tentative.

Update: The techniques that will be required to do this in Atom will really put the hack in “the hackable text editor”. In addition to providing a custom subclass of Grammar (kinda neat), we’ll need to place some sort of selective wrapper/patch/interceptor over TokenizedBuffer (filthy dirty hack). The viability of this might change if it turns out there are a lot more pieces in play that have strong expectations about how grammar behaviors are implemented.

This is needed because Grammar.tokenizeLine (note: so far I see no indication of Grammar.tokenizeLines ever being called, even on initial load of a file) receives a context stack (which we can shove anything into — good) but doesn’t receive its own line number, only whether it is the first line, so we are still cut off from knowledge of subsequent lines. In our case we will want to either (a) always retokenize all lines or (b) optimize by retokenizing from the last point prior to the first changed line which had no alternative productions that depended on content from the changed line on. I want to try “a” first because, although it sounds heavy, I believe a tight parser will be way faster than the node-oniguruma solution currently used in Atom grammars*, and also because in many cases doing “b” won’t actually help much (the last point of interest could be pretty far back). Indeed, I’ve already optimized the lexer to run super hot, upwards of 7500 tokens per ms.

* This surprised me. Oniguruma is awesome, and I guess they did this for backwards compatibility with existing textmate defs, but syntax highlighting is pretty performance sensitive … weird to not take advantage of how crazy fast native v8 regex is, no?
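Strategy (b) above could be sketched roughly like this, assuming a hypothetical per-line `self_contained` flag recording whether any alternative production depended on later lines (Atom exposes no such record; this is just the shape of the idea):

```python
from dataclasses import dataclass

@dataclass
class LineState:
    # True when no alternative production on this line depended on
    # content from any later line (hypothetical bookkeeping).
    self_contained: bool

def restart_point(line_states, first_changed):
    """Return the index from which retokenization must restart:
    scan back from the first changed line to the last line whose
    tokenization could not have depended on the change."""
    for i in range(first_changed - 1, -1, -1):
        if line_states[i].self_contained:
            return i + 1  # safe to resume tokenizing from the next line
    # No safe point found: fall back to retokenizing everything,
    # i.e. strategy (a).
    return 0
```

When the safe point is far back, `restart_point` approaches 0 and (b) degenerates into (a), which is the case the text argues makes (a) worth trying first.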

Updated 23/04/2017 09:24

Create CMS concept for WAVE


Need to have CMS functionality for portals. Based on a virtual file system, with revisions and approve/unapprove by a content admin. Basically a catalog of structured resources: files, configs, NLS maps, menus, and page templates. Email templates TBD.

Updated 22/04/2017 01:18 1 Comments

continue to think about how to capture dependencies


e.g. component A formation dependent on component B; localization of A dependent on process B (is cytokinetic ring location dependent on membrane trafficking?)

We often have cases where these are demonstrated by drug treatment (TBZ, latrunculin, etc.) rather than by a mutant, and so cannot be attached to a specific gene product.

Updated 21/04/2017 16:22 2 Comments

PEP8 Compliance


I would like the project to be PEP8 compliant. This fix will probably be the last commit before launching the ‘beta’ release. Until then, it’ll be ‘yep.’ compliant.
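For illustration, a couple of the most common PEP8 fixes (naming, spacing, blank lines) look like this:

```python
# Before (not PEP8 compliant):
#   def addNumbers( x,y ):
#       return x+y

# After: snake_case names, single spaces around binary operators,
# no space just inside parentheses, and two blank lines between
# top-level definitions.
def add_numbers(x, y):
    return x + y


def multiply_numbers(x, y):
    return x * y
```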

Updated 20/04/2017 03:18

CursorSeries implementation



This issue is to track (re)implementation of CursorSeries. The new approach makes all CursorSeries implementations sealed (or at least non-virtual) and allows for cursor type specialization, which often improves performance significantly.

Unary Transformations

  • [x] Range - TODO: should add MovingNext, MovingPrevious, AtTheEnd, AtTheStart to the CursorState enum; this is useful not only in RangeSeries
  • [x] MapValues - TODO should accept a selector Func<TKey,TValue,TResult>
  • [x] Unary Arithmetic - this commit and a couple before implement the arithmetic operators. It will be beneficial at some point to split ArithmeticSeries into AddSeries, MultiplySeries, etc, but not likely before v1.0. Hopefully someone will send a PR for this - it’s one of the first issues that doesn’t require deep knowledge of Spreads internals.
  • [x] Unary comparison (LT, LE, EQ, NEQ, GE, GT) - Derived from MapValuesSeries with a simple lambda. TODO tests, compare performance with ArithmeticSeries implementation.
  • [ ] Cast
  • [ ] Filter - maybe it should be just FilterMap with two lambdas like in LINQ.
  • [ ] Window - should be a struct; must track origin version (not possible for cursors); should have a count (which could vary for incomplete windows) and throw if the Window cursor evaluation count is not equal or start/end are not equal (this is the only way to detect an order version change after Window structs are created, but it misses Set for an existing key). WindowSeries could be a class but its values should be structs. Possibly Range should also be a struct.
  • [ ] WindowWhile - including special extensions for TimeSeries, e.g. Hourly. Both Window and WindowWhile should support a step size, and WindowWhile some rule for the initial step. E.g. we usually do not want to calculate an hour-long window for every tick (sometimes we do and can, but it is rather wasteful and requires rethinking the original reason why one would do this), but we do want an Hourly window every several minutes.
  • [ ] Repeat - the current version is not optimized for lookup near the end value, and ZipN uses it incorrectly (it calls TryGetValue of the underlying series for every continuous series; for persistent ones this hits storage every time, a big issue).
  • [ ] RepeatWithKey - should return not only the value but also the previous key
  • [ ] Lag, Shift, ZipLag
  • [ ] Cache/CacheInto
  • [ ] Scan
  • [ ] Moving statistics - SMA, EWMA, MovingMedian, MovingRegression, etc. Could generalize the calculations in the same way SMA is implemented currently, but should specialize for hot cursors such as SMA.
  • [ ] Bind cursor - a flexible template to implement any complex cursor at the cost of too many virtual calls. Good for prototyping and less important cursors. (perf is OK in absolute terms anyways!)

  • [ ] TODO Many more, at least what Deedle has already. This issue will probably never be closed and will turn into documentation at some point.

Binary transformations (via ZipN)

  • [ ] Binary arithmetic
  • [ ] Binary comparison


  • [ ] Fold
  • [ ] Sum/Product/SumProduct/Average/etc. - should specialize on most important inputs (foreach optimization). Should match MovingStatistics.


  • [ ] ZipN should not accept lambda, it should be internal and then wrapped into any specialized implementations.
  • [ ] Zip2 for arithmetic ops with two series should be specialized.
  • [ ] Rewrite async move using Updated property of underlyings.
  • [ ] Support indexed series.
  • [ ] Support parallel calculations for large N (review the idea of adding a complexity attribute to each series). This would only be beneficial when we calculate batches in parallel; otherwise synchronization and method calls will kill any potential benefits. (There was a funny and interesting article recently about how a guy wrote code in C that ran faster on a single thread of his laptop than a Hadoop cluster - we should optimize for single-thread performance first as well!)


  • [ ] RCursor - prepare some data structure that could be consumed by Spreads.R and provide a function name. Will be done in the R project.
  • [ ] RactorCursor - same as R, but sends data to Ractor
  • [ ] AsyncMap() is enough for cases that do not require pinned blittable data; any complex work can be done in a lambda.

General issues

  • [ ] Try to make CursorSeries structs, at least the stateless ones. Stateless ones are those whose state is defined at construction time, e.g. Range start and end values, and whose state doesn't change during cursor moves. Mutable structs are evil and there will be a lot of problems, but in very limited cases (Window) this could be useful.
Updated 18/04/2017 19:30

Add bench and trace scripts for h


I’m adding a quick way to evaluate how h is doing, so we have some material for our discussions.

  • yarn bench will start some benchmarks (you are welcome to add more cases).
  • yarn trace will trace the code and generate artifacts that can be used with

I’m going to switch to the official repo but I started this branch on my fork.

Updated 15/04/2017 21:28 6 Comments

Optimizing h


Continuing from #182 and focusing on h.

h is in charge of returning a new vnode. Currently it exposes two additional features, if I may say: 1. it handles splats (varargs) of children; 2. it concatenates contiguous string and number children into one text node.

Both features have a performance impact.


It’s basically here to support how some jsx transformers work. My feeling is they do it this way because it makes parsing/codegen easier for them.

Pros:
  • Supports all JSX transformers (React-compliant)

Cons:
  • Slow
  • More bytes

It was discussed for the Babel JSX transformer a while ago: the Babel team didn't do anything about it, and @substack developed his own Babel transformer for this specific case:

We could simply drop splat support and align with a clean vdom function signature. This would involve changing the Babel transformer and investigating whether the webpack transformer supports this.

Or we could, as mentioned here, wrap h in an hjsx function that just handles splats, for those who really want to use it.


It concatenates string and number children into one text node.

Pros:
  • Fewer nodes to diff

Cons:
  • Additional memory
  • Additional processing
  • More bytes

It is less than obvious that there are ever enough contiguous string/number nodes for merging them to yield a real performance benefit. Still, it allocates a stack and processes it for each h call, which has a non-negligible performance impact and GC stress.

EDIT: I’m talking about the internal stack here. I wrongly believed it was needed for concatenation, but it’s in fact because of how hyperx makes calls to h. So everything here applies to hyperx’s nested-children handling instead of concatenation.

My advice on this would be to simply drop it and compare hyperapp diff benchmarks before and after.


Updated 15/04/2017 17:49 27 Comments

URL shortening for schedule links


Schedule links are messy as they are, with a massive list of section ids visible in the URL. It would be nice to have some form of link shortening made available on-request (when someone clicks the Copy Schedule Link button).

This would ask the backend to store the current section IDs along with a hash that uniquely identifies them. Thus, one could visit an address like, and that last parameter would be sent to the backend instead of the section ID list and used to generate the schedules.
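A rough sketch of the backend side, assuming a SHA-1 prefix as the hash and an in-memory table standing in for real storage (the names and hash length are arbitrary choices here, not a proposed design):

```python
import hashlib

# In-memory stand-in for the backend store; a real implementation
# would persist this table server-side.
_schedules = {}

def shorten(section_ids):
    """Store a schedule's section IDs under a short deterministic hash
    and return the hash for use as the short URL parameter."""
    canonical = ",".join(str(s) for s in sorted(section_ids))
    key = hashlib.sha1(canonical.encode()).hexdigest()[:8]
    _schedules[key] = sorted(section_ids)
    return key

def resolve(key):
    """Given the short parameter from a visited link, return the stored
    section IDs (or None) so the backend can regenerate the schedules."""
    return _schedules.get(key)
```

Sorting before hashing makes the same set of sections always map to the same short key, so repeated clicks of the Copy Schedule Link button don't mint new entries.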

Updated 13/04/2017 18:28

WIP Rewriting SortedMap with Span


SM should be rewritten with Spans (or anything that returns a ref location by an index).

There are three Spans: keys, values and fields (locker, versions, size). The third one is needed to support off-heap scenario with IPC synchronization (however this is quite complex and for “future” only). (that will be a subclass with its own sync logic, unless sealing is really important)

SortedMap should support:
  • Shared/Borrowed keys - useful for equal-keys optimization and for using SMs as columns of panels (also shared keys); #4
  • Internal pooling - when SCM owns a SM, it is safe to pool the instances. Panels should also pool columns. No pooling for SMs created with new in any other place (i.e. no pooling from Dispose()/finalizer, only via an explicit Release() method); #87
  • Specialized comparers - and maybe specialized implementations; #100
  • Cleaner regular-keys code - currently it is spaghetti code with ifs. Should also consider dropping this thing altogether - real data is either small (hours, days) or irregular (sub-second). Only seconds and minutes could benefit from this, but all other cases suffer from if branches, virtual calls, and great code complexity.
  • Cursor counting (and an exception/warning if the finalizer sees a non-zero counter); #87
  • SM should own buffers and release arrays on disposal (in the shared-keys case PooledOwnedBuffer will just decrement its counter);
  • Non-synchronized mode - in the latest commits the BaseSeries implementation doesn’t support it yet, but this is needed for SCM, which has its own locking and versions. BeforeWrite() could accept a bool and use it where we have while(true); version could be set to -1L and a negative version should be ignored in AfterWrite.


Updated 20/04/2017 04:31 2 Comments

Open post's links in a new tab


It isn’t good practice to open external links in the same tab. Nofish said:

Actually this is because of a technical limitation. ZeroNet runs sites in a sandboxed iframe, where links with target=“_blank” do not work properly in every browser. Probably related to—the-most-underestimated-vulnerability-ever/

iframes can communicate with the main frame via JavaScript, so ZeroNet could create an API for this. When the user clicks a link (modified so that the code works), the main frame receives a message with the link metadata and then does the work.

Updated 10/04/2017 19:23 1 Comments

Library-Validation issues


mySIMBL does not inject into applications with Library Validation, which currently includes:

Safari 10.1+, Xcode 8.0+

Not sure if/when a fix will come. You can unsign these applications to allow injection, but Safari is protected by System Integrity Protection, which means SIP must be off in order to unsign Safari. A host of issues related to unsigning Safari 10.1 have also been reported.

-Wolf :wolf:

Updated 23/04/2017 20:25

add possibility to index factors by key value


[migrated from private repo]

At the moment factors are indexed from 1 to K, where K is the number of factors.

In some cases there may be another indexing the user wants to use. For example, if the factors correspond to entries of a matrix, it's more convenient to refer to a factor using a tuple (i, j).

In order to do that, the only thing we care about is having a mapping from key to index, so I don't think it's necessary to modify Factor:

  • In the definition of the FactorGraph there should be the possibility for the user to pass an array of keys
  • that should build a mapping which should then be used when required

I believe this should be rather easy to do although it’s late and I need to think a bit more carefully about it.
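The key-to-index mapping could be as simple as the following sketch (the function name and error handling are illustrative, not a proposed API):

```python
def build_key_index(keys):
    """Map user-supplied factor keys (e.g. (i, j) tuples for matrix
    entries) to the internal 1..K indexing, leaving Factor untouched."""
    if len(set(keys)) != len(keys):
        raise ValueError("factor keys must be unique")
    return {key: index for index, key in enumerate(keys, start=1)}
```

The FactorGraph constructor would build this dictionary once from the user's array of keys and consult it whenever a factor is referenced by key.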

Updated 07/04/2017 16:47

Report on named ranges


Now that readxl can read from a range, it would be nice to extract the named ranges, for possible use in a subsequent call to read_excel(..., range = ...). There could be a metadata function, like excel_sheets(), but excel_ranges().

Related to and #79 (in which I say we won’t do this, but maybe I was wrong).

cc @sz-cgt @nacnudus

Updated 04/04/2017 21:25



Let’s make a mobile app! That way websockets will work properly on mobile devices.

Maybe even change up the page to recognize mobile devices and show a CTA to our mobile app?

Edit: websockets work fine on mobile, but it should still be an option to have a native app/widget.

Updated 13/04/2017 14:51

Remove Router from core?


Hello! Just looking through HyperApp and I am very impressed! 👍 I am just wondering why Router is in the core package that everyone downloads, since it's just a plugin. Wouldn't it be better to separate Router from core and put Router and similar plugins elsewhere? It's just a question, as I was wondering about community plugins/extensibility. E.g. the core hyperapp repo would contain just h and app, and Router and other similar plugins could live in other repos, so the user could choose which ones to install alongside hyperapp: npm install hyperapp, npm install hyperapp-router, ...

What do you think?

Updated 19/04/2017 11:19 9 Comments

using F-P links on ontology term pages to show the "full story"


In GO there are two ways to get Molecular Function -> Biological Process links.

They can either be instantiated in the ontology like so:

f-p links


Or we can create them at annotation time, like:

has substrate fkh2 involved in negative regulation of conjugation with cellular fusion

So if we have

cdc2 MF has substrate drc1 involved in positive regulation of mitotic cell cycle DNA replication

and BP involved in positive regulation of mitotic cell cycle DNA replication

we don’t bother to reiterate the extension (that isn’t a great example, will find a better one).

Anyway, a more complete picture at the process level would show connected activities as well.

So, for example, this view today would show, instead of “tRNA methylation: cpd1, gcd10”:

tRNA methylation
  cpd1: tRNA (adenine-N1-)-methyltransferase activity
  gcd10: tRNA (adenine-N1-)-methyltransferase activity

This might be “a bit much” for all F-P links, but it could end up being very useful for the F-P links that are ontology-embedded.

Especially if GO goes the route of removing all single-step processes, because there would be a loss of specificity (for example, all of the different tRNA methylation processes would collapse into tRNA methylation).

This is really only a vague and not well thought out proposal, but I wanted to note it as a possibility for the future.

Updated 31/03/2017 16:04



What do we want to get done for the next release? Presently, I see a few pain points on our projects:

  • [ ] Big Domains. How can we break up domains into smaller pieces?
  • [ ] Data queries, more efficiency in Presenter models
  • [x] Dispatching multiple actions at the same time and daisy chaining actions. The only place to do this now is at the call point of repo.push(action), or inside of an action as the functional form of actions
  • [ ] Importing actions in a bunch of places, particularly with use in ActionButton/ActionForm

Big Domains

I want to be able to nest domains. I’ve laid out some work here:, but here are my goals:

  • [ ] Make it easier to break up domains that track large, nested, state objects.
  • [ ] Allow reuse of domains that operate on common data structures, like a collection of records, a tree of nodes, or a map of data.
  • [ ] Improve the ergonomics of dealing with nested data, like updating fields from a record within an array


Described in, I want to be able to quickly fetch information out of a repo, track that specific data dependency in a Presenter, and possibly run some sort of additional processing on it.

Dispatching multiple actions, or sequential actions

We’ve never had a good story around this. I’ve laid out some thoughts here: This is what I want to accomplish:

  • [ ] The presentation layer should never have to daisy chain to push actions
  • [x] Dispatching multiple actions should be trackable as a single unit of work
  • [ ] It should be easy to add/remove dependencies of an action, like if later on in a project, you discover you need to make an API call before you do another behavior. You should only have to update that once: in the action layer

Importing actions all over the place

I wrote up some ideas in, but I don’t have an awesome plan for this. I just want a way to prevent the presentation layer from needing to import actions all over the place.

  • [ ] It should be pretty static. I want to catch errors really early if you reference an undefined action, or get an action by the wrong name
Updated 12/04/2017 10:28 3 Comments

any methods to deprecate?


I am curious if we want to deprecate any functionality in RMG. I am thinking that not-often-used and poorly understood functionalities could be removed to allow easier bug checking, code support, and development. We may also be able to just cut some redundant methods. One example I came across would be to remove the method rmgpy/molecule/molecule/Bond/isSpecificCaseOf and integrate its code into rmgpy/molecule/molecule/Bond/equivalent.

If people think it may be worthwhile to have a list of methods to cut, we may want to place a warning in docstrings, similar to how Cantera placed some in their code to warn users.
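For what it's worth, the standard Python pattern for such a warning is `warnings.warn` with `DeprecationWarning`; a sketch with stand-in functions (not RMG's real signatures):

```python
import warnings

def equivalent(bond_a, bond_b):
    # Stand-in for the surviving comparison method.
    return bond_a == bond_b

def isSpecificCaseOf(bond_a, bond_b):
    """Deprecated alias kept for a release or two, so callers get a
    warning pointing at the replacement instead of a hard break."""
    warnings.warn(
        "isSpecificCaseOf is deprecated; use equivalent() instead",
        DeprecationWarning,
        stacklevel=2,
    )
    return equivalent(bond_a, bond_b)
```

`stacklevel=2` makes the warning point at the caller's line rather than the shim itself, which is what makes migration lists easy to compile.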

Updated 28/03/2017 21:17

why the use of http.HandlerFunc


Why does Negroni use both http.Handler and http.HandlerFunc in its API? It’s rather inconsistent.

func New(handlers ...Handler) *Negroni

compared with the third parameter in:

type Handler interface {
    ServeHTTP(rw http.ResponseWriter, r *http.Request, next http.HandlerFunc)
}

Updated 24/04/2017 21:10 2 Comments

Feature request: Support for C and C++ threads


Your CMSIS-RTOS documentation states:

The CMSIS-RTOS API is a generic RTOS interface for ARM® Cortex®-M processor-based devices. CMSIS-RTOS provides a standardized API for software components that require RTOS functionality and gives therefore serious benefits to the users and the software industry.

  • CMSIS-RTOS provides basic features that are required in many applications or technologies such as UML or Java (JVM).
  • The unified feature set of the CMSIS-RTOS API simplifies sharing of software components and reduces learning efforts.
  • Middleware components that use the CMSIS-RTOS API are RTOS agnostic. CMSIS-RTOS compliant middleware is easier to adapt.
  • Standard project templates (such as motor control) of the CMSIS-RTOS API may be shipped with freely available CMSIS-RTOS implementations.

I agree with the intention of the generic RTOS interface, but I personally would suggest a different approach.

As you know, the C11 and C++11 standards specify thread support, more or less in compliance with and derived from POSIX pthreads.

I would prefer support for threads at the language level as proposed by the C11 and C++11 standards. CMSIS could provide the required runtime environment to support C11 and C++11 threads, rather than proposing its own threading system.

I am aware that the C11 and C++11 features must be extended by functions which allow signaling threads from interrupt context, i.e. a way to wake a thread from an interrupt. But in general I think it would be very beneficial if the engineering effort went into the compiler and its runtime environment to support multithreading at the language level as intended by the standard.

Is there any plan to support C11 and C++11 threads in the future?

Updated 29/03/2017 13:49 12 Comments

Prevent duplicates in port list


Future Enhancement:

Prevent duplicate “names” from appearing in the port list. This is a possibility now since users can custom-name their Wi-Fi Modules. Since BlocklyProp’s Editor doesn’t keep track of the port’s IP address, only the name is passed between BPClient and BP. This means that BPClient can be asked to download to a port name that resolves to two or more IP addresses if there happen to be two or more Wi-Fi Modules with exactly the same name.

Suggestion: adjust names upon the get_ports() request so they are guaranteed unique (within that particular list) by appending part of the device’s MAC address to the name(s) when, and only when, a duplicate is found. Keep the modification as short as practical.

Sample, but as yet imperfect, code (copy and paste into IDLE and run). This code doesn’t handle the possibility that the new modification can match another existing module name; rather than spend more time now, it’s added here as just a potential starting point for a future enhancement.

NOTE: This includes some helper functions that already exist in… they are only included here to facilitate quick, isolated development in IDLE. See the “Main Code” section for the real point.

# Helper Functions              

def isWiFiName(string, wifiName):
    # Return True if string contains a Wi-Fi Module record named wifiName
    return getWiFiName(string) == wifiName

def getWiFiName(string):
    # Return the Wi-Fi Module name from string, or None if not found
    return strBetween(string, "Name: '", "', IP: ")

def getWiFiIP(string):
    # Return the Wi-Fi Module IP address from string, or None if not found
    return strBetween(string, "', IP: ", ", MAC: ")

def getWiFiMAC(string):
    # Return the Wi-Fi Module MAC address from string, or None if not found
    return strAfter(string, ", MAC: ")

def strBetween(string, startStr, endStr):
    # Return the substring of string between startStr and endStr, or None if no match
    # Find startStr
    sPos = string.find(startStr)
    if sPos == -1: return None
    sPos += len(startStr)
    # Find endStr
    ePos = string.find(endStr, sPos)
    if ePos == -1: return None
    # Return middle
    return string[sPos:ePos]

def strAfter(string, startStr):
    # Return the substring of string after startStr, or None if no match
    # Find startStr
    sPos = string.find(startStr)
    if sPos == -1: return None
    sPos += len(startStr)
    # Return everything after startStr (not [sPos:-1], which drops the last char)
    return string[sPos:]

# Main Code

wports = ["Name: 'Jeff's WX1', IP:, MAC: 18:fe:34:f9:ed:c0",
          "Name: 'Jeff's WX2', IP:, MAC: 18:fe:34:f9:ed:c1",
          "Name: 'Jeff's WX1', IP:, MAC: 18:fe:34:f9:ed:c2"]

com_port = "Jeff's WX1"

targetWiFi = [l for l in wports if isWiFiName(l, com_port)]
print(len(targetWiFi))

if len(targetWiFi) == 1:
    print(com_port, "is unique")
else:
    print(com_port, "is a duplicate")
    print(wports)
    print("Adjusting...")
    for i in range(len(targetWiFi)):
        print("removing occurrence", i)
        wports.remove(targetWiFi[i])
        print(wports)
        print("adding modified occurrence")
        wports.extend(["Name: '" + com_port + "(" + targetWiFi[i][-2:] + ")" + targetWiFi[i][len("Name: '" + com_port):]])
        print(wports)
    print("Done!")
Updated 24/03/2017 19:42

Incorporate plugins


plugins are here as of Go 1.8. Despite some of the general issues and difficulties with getting plugins to compile using objects shared with the main program, it seems likely that we’ll need to do something with them as part of dependency management.

Problem is, I’m not sure how much we could really do via static analysis to determine what plugins are necessary. The import names are just strings, and if the focal point during analysis is an individual file in a particular package (with a plugin.Open call site), then the origin of those strings could easily be in a package from an entirely separate project, or generated based on the input to a running program.

If we can’t provide any assurances about the completeness of analysis, then it suggests dep should handle plugins - if it does at all - through an entirely different mechanism than typical import-based dependencies. Perhaps explicit declaration in the manifest? I don’t know.

In any case, this issue is here for some open discussion on the topic.

Updated 25/03/2017 23:16 2 Comments

[Implemented soonTM] New package.json


As we’re approaching v4.1, I’d say we should get a new package.json. There are things that don’t need to be there (like bufferutil) and things that work just fine in newer versions.

~I’ve been working on a “new” one, and I’ll PR it and wait for opinions~

  • [ ] Remove bufferutil
  • [ ] Remove bluebird
  • [ ] Add String ("string": "*")
  • [ ] Change random-puppy to random-animal
  • [ ] ~Replace “money”~ Fix money
Updated 28/04/2017 17:34 1 Comments

Notice bar for posting information to users


It would be really nice to have a notice bar so we can inform users of planned downtime, new updates, that they should clear their cache, or anything else.

This issue entails creating a bar that will sit below the main navbar, probably at full page width, to display this information. Also, a new API resource /status should be created that returns a small object containing a simple string set by the server administrators. In the future this endpoint can be extended to include other information, such as uptime.
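A sketch of the response shape the /status resource could return (the field names here are assumptions, not a settled schema):

```python
import json

def status_payload(message, uptime_seconds=None):
    """Build the small object a /status resource could return; the
    optional uptime shows how the endpoint might be extended later."""
    payload = {"message": message}
    if uptime_seconds is not None:
        payload["uptime"] = uptime_seconds
    return json.dumps(payload)
```

The notice bar would poll this endpoint and render the string whenever it is non-empty.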

Updated 22/03/2017 00:20

Slack integration


Integrate laundree with Slack as an admin tool. Notify when someone creates a laundry and when someone signs up. This should include links and maybe contact info?

What do you think, @malenesoeholm?

Updated 23/04/2017 17:49 2 Comments

SEO optimization of the school profile


Wouldn't it be desirable for the school profiles to also rank well in Google, so that they can be found through that channel as well?


Examples of possible optimizations:
  • Customize the html title tag with the school's name, and probably also the city/neighborhood. Example: ZUCKMAYER-SCHULE Berlin Neukölln
  • School profile on (also useful for og:title and Twitter Cards)
  • Make the URL “readable”. Example:
  • Mobile optimization
  • Cross-linking “schools nearby” and/or a (search-engine) sitemap of all school profiles / index pages

(More keywords at

Updated 30/03/2017 08:58 1 Comments
