Contribute to Open Source. Search issue labels to find the right project for you!

check for android compatibility

kryptokrauts/aepp-sdk-java

We need to check whether the SDK can work with Android.

There was a thread opened in the forum: https://forum.aeternity.com/t/android-sdk-for-aeternity/4367

I think 3 dependencies could cause problems:

  • BouncyCastle: classpath conflicts (often solved by using SpongyCastle)
  • argon2-jvm: see https://github.com/phxql/argon2-jvm/issues/44
  • tuweni: we need libsodium working on Android

Updated 18/08/2019 18:13

Create live demo

tongoose/tongoose

Provide a live demo website that would showcase what tongoose can do.

  • A simple (& editable?) mongoose schema on the left
  • The (soon to be) generated typescript output on the right
  • A button to convert

We can host it on GitHub Pages, or I could host it on my own server & domain @ kipras.org

Updated 18/08/2019 18:06

Implement DB Indexes

NIAEFEUP/nijobs-be

See https://github.com/NIAEFEUP/nijobs-be/pull/20#discussion_r314994151

TL;DR: Should we have indexes in the DB? My feeling is that the answer is yes, but the complex searching across several fields makes this not entirely clear.
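For illustration only (this uses pymongo rather than the project's mongoose setup, and the database, collection, and field names are hypothetical), indexes covering multi-field search could look roughly like this:

```python
from pymongo import ASCENDING, TEXT, MongoClient

# Hypothetical database/collection and field names, for illustration only.
offers = MongoClient()["nijobs"]["offers"]

# A compound index for common structured filters, plus a text index over the
# free-text fields that the search would query.
offers.create_index([("location", ASCENDING), ("publishDate", ASCENDING)])
offers.create_index([("title", TEXT), ("description", TEXT)])
```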

Updated 18/08/2019 17:43 1 Comments

Support alternateScroll escape

jwilm/alacritty

Currently Alacritty allows sending arrow keys in the alternate screen buffer through the scrolling.faux_multiplier configuration option. However, there is a terminal escape sequence dedicated to setting and unsetting this option.

According to the XTerm documentation CSI ? 1007 h should enable sending arrow keys in the alternate screen mode and CSI ? 1007 l should disable it.

XTerm has this option disabled by default, so CSI ? 1007 h is required first to enable it. With Termite it is enabled by default, so CSI ? 1007 l has to be used to disable it. URxvt does not seem to support this.
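For reference, a minimal manual check of the two sequences (a sketch assuming a Unix terminal; it enters the alternate screen with the standard CSI ? 1049 h first, then dumps whatever bytes the terminal sends while you scroll the mouse wheel):

```python
import os
import select
import sys
import termios
import time
import tty

fd = sys.stdin.fileno()
saved = termios.tcgetattr(fd)
received = b""
try:
    tty.setraw(fd)
    sys.stdout.write("\x1b[?1049h")  # enter the alternate screen buffer
    sys.stdout.write("\x1b[?1007h")  # CSI ? 1007 h: enable alternate-scroll mode
    sys.stdout.flush()
    end = time.time() + 5
    while time.time() < end:         # scroll the mouse wheel during these 5 seconds
        if select.select([fd], [], [], 0.1)[0]:
            received += os.read(fd, 64)
finally:
    sys.stdout.write("\x1b[?1007l")  # CSI ? 1007 l: disable alternate-scroll mode
    sys.stdout.write("\x1b[?1049l")  # leave the alternate screen buffer
    sys.stdout.flush()
    termios.tcsetattr(fd, termios.TCSADRAIN, saved)

# With alternate-scroll enabled, wheel scrolling should arrive as arrow-key
# sequences (ESC [ A / ESC [ B) rather than as mouse events.
print(received)
```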

Alacritty should respect this escape and use it to enable/disable what we call faux scrolling. It might also make sense to get rid of our configuration option, though that would remove the ability to have separate scroll modifiers for normal and faux scrolling.

Alacritty's default should be to enable it, since that follows Termite's example and is consistent with what Alacritty has been doing by itself so far.

Updated 18/08/2019 17:11

A long list of issues/ideas

betalike/MS2start

----START SCREEN----

  • [ ] Fix icon size to 48x48
  • [ ] Soft-code it so that people can use it as their main start
  • [ ] Add animations
  • [ ] Add the ability to add other icons
    • [ ] Find out what colors they use
  • [ ] Fix the "weird" pixelation of the desktop wallpaper on start
  • [ ] Add the ability to change the start wallpaper via the registry
    • [ ] Possibly make a tweakutility extension for that

----SETTINGS----

  • [ ] Make it work
  • [ ] "Devices" has a list of the devices plugged into the PC; implement that, hacky or not
  • [ ] Make it pixel-perfect

----IMMERSIVE BROWSER----

  • [ ] Add the buttons
    • [ ] Have them hover

----TaskUI (prototype task manager)----

  • [ ] Make it functional
  • [ ] Make it show how much CPU, RAM, hard drive, etc. each app is using
  • [ ] Have the "graphs"; these existed in this earlier version of Task Manager

Updated 18/08/2019 18:11

About the Spring1944.net domain

spring1944/spring1944.github.io

Hello,

I wanted to know if it's possible to have access to domain administration (record…) for future spring1944 websites/services. It's becoming more and more difficult to add an address or subdomain to the website. For example, I wanted to add a subdomain for the media gallery on the website, and perhaps one day we will want a proper wiki or something like that. Having access to the domain is essential for this, and I don't think only one person, even an active member, should have sole access to it.

@tvo, are you the one who owns the domain? Is it possible to give access to @jal76, @yuritch, @sanguinariojoe, @PepeAmpere, @specing and/or me (frju365)?

Thanks in advance for the answer and points of view,

frju365

Updated 18/08/2019 17:02

Build error: QDialog

debauchee/barrier

Hey, I am new to Raspbian and Linux in general, so this could very well be user error, but when following the setup steps from the wiki I get the following error on build.

Appreciate the help!

Operating Systems

Server: Windows10:1809 Client: Linux raspberrypi 4.19.58-v7+ #1245 SMP Fri Jul 12 17:25:51 BST 2019 armv7l GNU/Linux

Barrier Version

Current

Steps to reproduce bug

Build from a fresh pull of the repo on a fresh OS install, etc., then run the Linux build procedure (from the wiki).

When running "sudo make install" the following error is produced:

[ 78%] Building CXX object src/gui/CMakeFiles/barrier.dir/src/AboutDialog.cpp.o
In file included from /home/pi/barrier/build/barrier/src/gui/src/AboutDialog.cpp:19:
/home/pi/barrier/build/barrier/src/gui/src/AboutDialog.h:23:10: fatal error: QDialog: No such file or directory
 #include <QDialog>
          ^~~~~~~~~
compilation terminated.
make[2]: *** [src/gui/CMakeFiles/barrier.dir/build.make:63: src/gui/CMakeFiles/barrier.dir/src/AboutDialog.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:917: src/gui/CMakeFiles/barrier.dir/all] Error 2
make: *** [Makefile:152: all] Error 2

Updated 18/08/2019 17:27 3 Comments

Error with cfgWeapons/SMA_Custom_HK416CustomVFG

2bnb/2bnb-extras

Upon entering a mission we have created:

Taking a regular slot in 1-Section (or any section besides Zeus) allows the mission to load with the titled error. However, when a Zeus loads into his slot, the map gets the error and won't load.

This is possibly due to vehicle loadouts, but that wouldn't explain why a Zeus can't load in whereas a regular infantryman can.

Updated 18/08/2019 16:04

Confusion with track IDs

MarshalX/yandex-music-api

Some hero will someday figure out where which ID should be used: just track.id, or track.id + ":" + track.album_id.

There was an idea to move away from their API and create our own ID class supporting the different variants. The TODO is here: https://github.com/MarshalX/yandex-music-api/blob/master/yandex_music/utils/difference.py#L39
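A minimal sketch of what such an ID class could look like (names are hypothetical; the real implementation would live in the library's utils and be wired into the existing models):

```python
class TrackId:
    """Hypothetical wrapper over the two ID forms the API uses."""

    def __init__(self, track_id, album_id=None):
        self.track_id = str(track_id)
        self.album_id = str(album_id) if album_id is not None else None

    @classmethod
    def from_string(cls, value):
        # Accepts both "12345" and "12345:67890".
        track_id, _, album_id = str(value).partition(":")
        return cls(track_id, album_id or None)

    def __str__(self):
        # Renders back to whichever form the endpoint expects.
        if self.album_id:
            return "{}:{}".format(self.track_id, self.album_id)
        return self.track_id
```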

Updated 18/08/2019 15:32

Deserialization of decomposed on Artist

MarshalX/yandex-music-api

The following TODO has been sitting in the code for a long time: https://github.com/MarshalX/yandex-music-api/blob/master/yandex_music/artist/artist.py#L69

No idea which method returned this field for an artist; it was too long ago and I didn't write it down.

Help is welcome!

Updated 18/08/2019 15:19

Missing brief-info method for artists

MarshalX/yandex-music-api

A method was just noticed that is called when navigating to an artist, for example from search results or simply from the artist list in the Yandex.Music app from the Microsoft Store.

Example request URL: https://api.music.yandex.net/artists/1613497/brief-info. The response is substantial (a huge wall of JSON): https://codepaste.ml/28832948/

If you feel like helping, wrap this method and write class(es) for the result (most of them already exist).
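For orientation, the raw call that would need wrapping looks roughly like this (a standalone requests sketch, assuming the usual OAuth token header; the real method should of course go through the library's own client/request machinery and deserialize into result classes):

```python
import requests

def artists_brief_info(artist_id, token):
    # Hypothetical standalone helper; the library method would reuse its own
    # session, auth handling, and de/serialization.
    response = requests.get(
        "https://api.music.yandex.net/artists/{}/brief-info".format(artist_id),
        headers={"Authorization": "OAuth {}".format(token)},
    )
    response.raise_for_status()
    return response.json()["result"]
```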

Updated 18/08/2019 15:14

Create linedrawings for all patterns

freesewing/freesewing

We have a (new) LineDrawing React component which is supposed to show a linedrawing of a given pattern. Essentially, like an icon: a basic outline to give you an idea of what the pattern looks like.

The only problem is: the component doesn't (yet) have linedrawings in it, and returns the outline of the GitHub icon (because we had to return something):

(screenshot of the current placeholder output)

What we need to do is:

  • [ ] Draw linedrawing for Aaron
  • [ ] Draw linedrawing for Benjamin
  • [ ] Draw linedrawing for Bent
  • [ ] Draw linedrawing for Brian
  • [ ] Draw linedrawing for Bruce
  • [ ] Draw linedrawing for Carlita
  • [ ] Draw linedrawing for Carlton
  • [ ] Draw linedrawing for Cathrin
  • [ ] Draw linedrawing for Florent
  • [ ] Draw linedrawing for Huey
  • [ ] Draw linedrawing for Hugo
  • [ ] Draw linedrawing for Jaeger
  • [ ] Draw linedrawing for Sandy
  • [ ] Draw linedrawing for Shin
  • [ ] Draw linedrawing for Simon
  • [ ] Draw linedrawing for Sven
  • [ ] Draw linedrawing for Tamiko
  • [ ] Draw linedrawing for Theo
  • [ ] Draw linedrawing for Trayvon
  • [ ] Draw linedrawing for Wahid
  • [ ] Integrate these linedrawings into LineDrawing component

What’s a linedrawing?

A linedrawing (line drawing?) is a simplified/technical view of the sewing pattern. Example:

Example linedrawing

Updated 18/08/2019 15:01

Proposal: Drop Hydra vocabulary in favour of JSON [Hyper-]Schema

urbanobservatory/standards

Now that I have actually started trying to use some of the things we’ve been discussing in a real world project, I have concerns that Hydra as a hypermedia doesn’t provide enough of the descriptive power we need.

90% of our API functionality is providing access to sensor data through some sensible ontologies, but there are several instances where clients will need to go beyond following links, and instead compose IRIs to describe the filters, constraints, pages, etc. they want.

We know as a minimum that our hypermedia controls need to allow:

  • Filtering collections based on properties (#10)
  • Partial views on collections based on pagination or time slices (#6)

I have no doubt more complex scenarios will arise. Especially if people implement API endpoints for sensor management.

To be clear, there is JSON Schema (generally describing the shape of a document), and JSON Hyper-Schema (describing how to interact with an API). I am suggesting implementations MUST use link descriptions (LDO) from Hyper-Schema, but MAY also use bits of JSON Schema (which would be useful for allowing client-side validation for example).

We would use:

  • JSON-LD to reference our data to common vocabularies and ontologies, as a way of providing interoperability between the observatories and with other linked data APIs, RDF, etc.
  • JSON Hyper-Schema to describe how to search and filter our APIs.
  • JSON Schema if people want to implement client-side validation, or to constrain what is allowed in their JSON-LD

Advantages

  • A basic implementation would probably only need to use two schemas, one for pagination and one for filtering. The pagination one I've already done below :-)
  • JSON [Hyper-]Schema is an active standard, much more so than Hydra seems to be. The latest version, draft-08, is in final review.
  • JSON Schema is near-identical to OpenAPI (formerly Swagger), which can be used in all sorts of testing and documentation frameworks
  • Extensibility. Almost any API can be described using JSON [Hyper-]Schema. It can even be used to validate JSON-LD documents.
  • Automated documentation using Doca.
  • Greater likelihood of libraries that work out of the box for us, such as ajv.

Disadvantages

  • Complexity
  • Learning curve

Crash course in JSON Hyper-Schema

I've taken an example from the 9.5 Collections section in the Hyper-Schema standard, and tried to make it more applicable to us. Assuming a request like GET http://example.webof.uk/sensor/ to obtain a list of all the sensors, the response would look something like:

```json
Link: http://example.webof.uk/schemas/v1.0.0/sensor-collection.json; rel="describedBy"

{
  "elements": [
    {"@id": "http://example.webof.uk/sensor/sensor-a"},
    {"@id": "http://example.webof.uk/sensor/sensor-b"}
  ],
  "meta": {
    "current": { "offset": 0, "limit": 2 },
    "next": { "offset": 3, "limit": 2 }
  }
}
```

The elements in the body could of course contain more properties, or could be left as `@id` with the onus on the clients to dereference.

The schema to describe the interaction patterns associated with the above document would be obtained with GET http://example.webof.uk/schemas/v1.0.0/sensor-collection.json, with a semantic version number.

```json
{
  "properties": {
    "elements": {
      "type": "array",
      "items": { "$ref": "#/definitions/jsonld-id" }
    },
    "meta": {
      "type": "object",
      "properties": {
        "prev": { "$ref": "#/definitions/pagination" },
        "current": { "$ref": "#/definitions/pagination" },
        "next": { "$ref": "#/definitions/pagination" }
      }
    }
  },
  "links": [
    {
      "rel": "self",
      "href": "things{?offset,limit}",
      "templateRequired": ["offset", "limit"],
      "templatePointers": {
        "offset": "/meta/current/offset",
        "limit": "/meta/current/limit"
      },
      "targetSchema": { "$ref": "#" }
    },
    {
      "rel": "prev",
      "href": "things{?offset,limit}",
      "templateRequired": ["offset", "limit"],
      "templatePointers": {
        "offset": "/meta/prev/offset",
        "limit": "/meta/prev/limit"
      },
      "targetSchema": { "$ref": "#" }
    },
    {
      "rel": "next",
      "href": "things{?offset,limit}",
      "templateRequired": ["offset", "limit"],
      "templatePointers": {
        "offset": "/meta/next/offset",
        "limit": "/meta/next/limit"
      },
      "targetSchema": { "$ref": "#" }
    }
  ],
  "definitions": {
    "jsonld-id": {
      "type": "object",
      "properties": {
        "@id": { "type": "string", "format": "uri" }
      },
      "required": ["@id"]
    },
    "pagination": {
      "type": "object",
      "properties": {
        "offset": { "type": "integer", "minimum": 0, "default": 0 },
        "limit": { "type": "integer", "minimum": 1, "maximum": 100, "default": 10 }
      }
    }
  }
}
```

This describes all of the links associated with the collection, those being the previous/next/current pages. It also describes the maximum number of items that can be requested per page etc., and the default values. The IRIs for the pages are generated using the IRI templates, with pointers to the data within the document in the meta object.

You can try the two above snippets in the JSON Schema validator.

The client would, with the above two documents, be able to infer the links available, including recognising that prev was absent from the meta object in the document, so no previous page link is available. This would provide the following:

```json
[
  {
    "contextUri": "https://example.webof.uk/sensor",
    "contextPointer": "",
    "rel": "self",
    "targetUri": "https://example.webof.uk/sensor?offset=0&limit=2",
    "attachmentPointer": ""
  },
  {
    "contextUri": "https://example.webof.uk/sensor",
    "contextPointer": "",
    "rel": "next",
    "targetUri": "https://example.webof.uk/sensor?offset=3&limit=2",
    "attachmentPointer": ""
  }
]
```

Updated 18/08/2019 14:47

Need to fix the output of rows with identical dates in the forms/steps homework

NataliaGracheva/ra-homeworks

Creating a new array in which the km values with identical dates are summed isn't working for me. https://github.com/NataliaGracheva/ra-homeworks/blob/c2313916236341a17cdb3915938ef3514f3fd42d/forms/steps/src/components/StepCounter.jsx#L10 Is there a simpler way to do this?
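The merging itself is just an accumulation keyed by date; a sketch of the idea (shown in Python for brevity, with a hypothetical record shape; the actual component is JSX, where the same approach works with reduce):

```python
from collections import OrderedDict

def merge_by_date(records):
    # records: e.g. [{"date": "2019-07-20", "km": 5.7}, ...] (hypothetical shape)
    totals = OrderedDict()
    for record in records:
        totals[record["date"]] = totals.get(record["date"], 0) + record["km"]
    # Back to a list with one row per date, km values summed.
    return [{"date": date, "km": km} for date, km in totals.items()]
```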

Updated 18/08/2019 14:37

/bin/sh: 1: ./mvnw: Permission denied

meeteor-13/core

Step 5/13 : RUN ./mvnw -Dmaven.test.skip=true install
 ---> Running in aab4f751b16d
/bin/sh: 1: ./mvnw: Permission denied
ERROR: Service 'core' failed to build: The command '/bin/sh -c ./mvnw -Dmaven.test.skip=true install' returned a non-zero code: 126

Updated 18/08/2019 14:35

windows dark mode reader

mdlincoln/darkly

I need to get onto a Windows machine to test, but I believe it should be possible to read the system dark mode setting via something like

win_theme <- readRegistry("SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Themes\\Personalize", hive = "HCU", view = "32-bit")$AppsUseLightTheme
Updated 18/08/2019 14:20

[Bug] yarn add dep & devDep duplicate the dependency.

yarnpkg/berry

Describe the bug

Sometimes I forget to add --dev when trying to add devDependencies. Adding the package again will duplicate the dependency entry in package.json.

IMO, v1 would stop us and ask to remove the package first; for v2, we can just automatically move it.

To Reproduce

const {promises: {readFile}} = require(`fs`);

await packageJsonAndInstall({
  devDependencies: {lodash: `*`},
});

await yarn(`add`, `lodash`);

const pkgJson = JSON.parse(await readFile(`package.json`, `utf8`));
expect(pkgJson.dependencies).not.toHaveProperty(`lodash`);

Environment if relevant (please complete the following information):

  • OS: OSX
  • Node version: 10.16.2
  • Yarn version: 2.0.0-rc1 (tag 2019-08-16)
Updated 18/08/2019 14:18 2 Comments

Containerise the development environment

freshwebio/apydox

To improve developer experience for those contributing to the project, it would be good to containerise the entire development environment, packaging up both the API and the portal web UI.

This will help create consistency in the environment that developers work with and increase the speed of adoption and productivity, saving contributors from spending hours or even days working through environment setup and issues.

Updated 18/08/2019 13:11

add py3 multithreading example, fix py2 and multiprocessing

tqdm/tqdm

tqdm works fine with multithreading on Python 3:

from time import sleep
from tqdm import tqdm
from concurrent.futures import ThreadPoolExecutor

def foo(a):
    tqdm.write("Working on %d" % a)
    sleep(0.1)
    return a

tqdm.get_lock()  # ensures locks exist
with ThreadPoolExecutor(max_workers=2) as executor:
    list(tqdm(executor.map(foo, range(8)), total=8))
Working on 0
Working on 1
Working on 2
Working on 3
Working on 4
Working on 5
Working on 6
Working on 7
100%|██████████| 8/8 [00:00<00:00, 20.05it/s]
[0, 1, 2, 3, 4, 5, 6, 7]

however the py2 output is:

Working on 0
Working on 1
0%| | 0/8 [00:00<?, ?it/s]Working on 2
Working on 3
Working on 4
38%|███▊ | 3/8 [00:00<00:00, 15.12it/s]Working on 5
62%|██████▎ | 5/8 [00:00<00:00, 16.29it/s]Working on 6
Working on 7
100%|██████████| 8/8 [00:00<00:00, 19.27it/s]
[0, 1, 2, 3, 4, 5, 6, 7]

while there's still an issue (on both py2 and py3) with multiprocessing not sharing tqdm._instances:

```python
from time import sleep
from tqdm import tqdm
from multiprocessing import Pool

def foo(a):
    tqdm.write("Working on %d" % a)
    sleep(0.1)
    return a

tqdm.get_lock()  # ensures locks exist
p = Pool(2)
list(tqdm(p.imap(foo, range(8)), total=8))
p.close()
```

py3:

```bash
Working on 0
Working on 1
0%| | 0/8 [00:00<?, ?it/s]
Working on 2
Working on 3
Working on 4
Working on 5
38%|███▍ | 3/8 [00:00<00:00, 15.05it/s]
Working on 6
62%|██████▋ | 5/8 [00:00<00:00, 16.25it/s]
Working on 7
100%|██████████| 8/8 [00:00<00:00, 19.98it/s]
```

py2:

```bash
Working on 0
Working on 1
0%| | 0/8 [00:00<?, ?it/s]Working on 2
Working on 3
12%|█▎ | 1/8 [00:00<00:00, 9.96it/s]Working on 4
Working on 5
38%|███▊ | 3/8 [00:00<00:00, 11.72it/s]Working on 6
Working on 7
100%|██████████| 8/8 [00:00<00:00, 19.28it/s]
```
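For reference, the multiprocessing case can be made to behave today by handing the lock to the workers through the pool initializer (a sketch of the set_lock approach mentioned below; worth double-checking against the tqdm docs):

```python
from time import sleep
from multiprocessing import Pool, RLock
from tqdm import tqdm

def foo(a):
    tqdm.write("Working on %d" % a)
    sleep(0.1)
    return a

if __name__ == '__main__':
    tqdm.set_lock(RLock())  # create the lock before forking so children can share it
    p = Pool(2, initializer=tqdm.set_lock, initargs=(tqdm.get_lock(),))
    list(tqdm(p.imap(foo, range(8)), total=8))
    p.close()
    p.join()
```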

As of now it means that:

  • py3 threading is fully supported
  • py2 threading, py2/3 multiprocessing require things like set_lock and even position
    • can this be fixed/made easier?
  • documentation needs updating
Updated 18/08/2019 13:09

Consolidate mini and verbose reporters

avajs/ava

Currently we have two different reporter implementations. Code is duplicated across them which leads to various bugs. I once tried to consolidate them but that work stalled.

The idea is that we would have only one reporter. By default it provides minimal output: we show a spinner and the status of the last test, so you get a sense of activity, and only once an entire test file completes do we print failures. In verbose mode, we'd print the results of all tests & hooks, including logs. Various counts for passing and failing tests are always shown at the end of the output.

If a TTY is not available, we wouldn’t print a spinner and only write the appropriate output for a test file once it completes.

We’d use ink to construct the output.

This should deal with the following issues:

  • https://github.com/avajs/ava/issues/2046
  • https://github.com/avajs/ava/issues/1953
  • https://github.com/avajs/ava/issues/653
  • https://github.com/avajs/ava/issues/2171
  • https://github.com/avajs/ava/issues/2194
  • https://github.com/avajs/ava/issues/1337
  • https://github.com/avajs/ava/issues/1917
  • https://github.com/avajs/ava/issues/2201
  • https://github.com/avajs/ava/issues/2191

Possibly also these issues, but they could be done as a follow-up:

  • https://github.com/avajs/ava/issues/698

We may want to remove duration logging for now: https://github.com/avajs/ava/issues/1668

Updated 18/08/2019 12:44 1 Comments

Install script for Raspbian Buster - first steps and help appreciated!

MiczFlor/RPi-Jukebox-RFID

Hi everybody, thanks to @ckuetbach and @AdmiralVS I got something to work with when making a new one-line install script for the latest Raspbian release: buster.

It's currently only on the develop branch. To use this version of the "one line install", see below or go straight to the wiki.

Help would be welcome to test and improve it, and even more welcome to migrate the new version to the +Spotify edition of the Phoniebox.

Things which might soon become known issues (at the moment the script works for me, though):

  • file upload does not work, because the php.ini was not replaced
  • static IP address assignment
  • … add your findings here in the thread

Phoniebox Classic (buster): ~~~ cd; rm buster-install-*; wget https://raw.githubusercontent.com/MiczFlor/RPi-Jukebox-RFID/develop/scripts/installscripts/buster-install-default.sh; chmod +x buster-install-default.sh; ./buster-install-default.sh ~~~

Updated 18/08/2019 12:06

[BUG] Translation to CZ

stijnwop/guidanceSteering

Describe the bug The translation to CZ is not complete.

To Reproduce Steps to reproduce the behavior:

  1. Change language of the game to CZ
  2. Install GPS mod
  3. Go to GPS menu
  4. Examine text

Expected behavior All text and tooltips should be in the CZ language.

Screenshots
  • https://drive.google.com/open?id=1CaHjWJGnJ128pLF_66lhsgScUY-TM6r5
  • screenshot of correction: https://drive.google.com/open?id=1BntZi-cN7xZS0CSAR5NteOgioMf3__JY

Desktop (please complete the following information):
  • OS: Windows 10
  • FS patch version: 1.4.1.0

Additional context I propose this solution: add one new label in the offset part of the settings frame (GuidanceSteeringSettingsFrame.xml line 145); instead of $l10n_guidanceSteering_setting_widthIncrement, use $l10n_guidanceSteering_setting_offsetIncrement. It should be added in each language…

Then apply this translation file: https://drive.google.com/open?id=1qkX4bs6yAuJVXg25TKCZhmVm3uvc70Ih

Result https://drive.google.com/open?id=1BntZi-cN7xZS0CSAR5NteOgioMf3__JY

Updated 18/08/2019 15:32 2 Comments

Sequence ignored when used in extends of an unused class

terser-js/terser

Bug report or Feature request?

Bug

Version (complete output of terser -V or specific git commit)

latest master

Complete CLI command or minify() options used

npx terser issue.js -c --toplevel

terser input

let x = "FAIL";
class A extends(x = "PASS", Object){}
console.log(x); // PASS

terser output or error

let x = "FAIL";
console.log(x); // FAIL

Expected result

let x = "FAIL";
x = "PASS";
console.log(x); // PASS

or

let x = "PASS";
console.log(x); // PASS

or

console.log("PASS"); // PASS
Updated 18/08/2019 16:37 3 Comments

Persist `InstallationId`

kowainik/hintman

I'm not sure how it's going to work, but if we deploy the GitHub app manually as a runnable server, we have to figure out how to persist installation IDs to be able to restart the server smoothly. If we store them only in memory, we lose all the information on restart and can't run hintman over repos where it's already installed.

Updated 18/08/2019 10:14

Identifiers containing digits are not allowed for `enum`s and their variants

lpil/gleam
pub enum T1 =
| T(Integer)

will result in

error: Syntax error
- </home/nmelzer/projects/tlob/src/tlob/combi.gleam>:1:11
  |
1 | pub enum T1 =
  |           ^ Unexpected token
  |

Expected one of "(", "=", "enum", "external", "fn", "import", "pub"


pub enum T =
| T1(Integer)

will result in

- </home/nmelzer/projects/tlob/src/tlob/combi.gleam>:2:4
  |
2 | | T1(Integer)
  |    ^ Unexpected token
  |

Expected one of "(", "enum", "external", "fn", "import", "pub", "|"
Updated 18/08/2019 10:43 1 Comments

Slow startup

endaaman/tym

Hi, I noticed that tym starts up slowly (and maybe it is getting gradually slower?). Here is a comparison with urxvt:

```
time urxvt -e "bash -c exit"    2019-08-18T11:20:42 CEST
0.03user 0.01system 0:00.04elapsed 95%CPU (0avgtext+0avgdata 13424maxresident)k 0inputs+0outputs (0major+1055minor)pagefaults 0swaps

time tym -e "bash -c exit"    2019-08-18T11:20:47 CEST
0.18user 0.05system 0:00.25elapsed 91%CPU (0avgtext+0avgdata 40592maxresident)k 1824inputs+0outputs (5major+4956minor)pagefaults 0swaps
```

200 ms might be OK for some users, but I spawn terminals in a tiling WM (i3) pretty often. The slow startup causes tym to miss several keystrokes when I start typing immediately after launching it (with a keyboard shortcut).

Using Arch Linux, tym version 2.2.1.

EDIT:

Under heavy CPU load:

```
time tym -e "bash -c exit"    2019-08-18T12:04:34 CEST
0.36user 0.05system 0:00.50elapsed 84%CPU (0avgtext+0avgdata 40948maxresident)k 0inputs+0outputs (0major+4970minor)pagefaults 0swaps

time urxvt -e "bash -c exit"    506ms  2019-08-18T12:04:37 CEST
0.04user 0.01system 0:00.06elapsed 88%CPU (0avgtext+0avgdata 13448maxresident)k 0inputs+0outputs (0major+1009minor)pagefaults 0swaps
```

Updated 18/08/2019 18:00 2 Comments

format of location line in error messages

lpil/gleam

The current location line wraps the file name in angle brackets, followed by the line and column number separated by colons (</foo/bar.gleam>:1:1).

Many editors can make output in a similar form (without the angle brackets) clickable in shell or other sub frames/windows to jump to the error location.

Therefore I suggest removing the angle brackets from the location line.

Updated 18/08/2019 10:40 1 Comments

Replace cheerio with native API

hexojs/hexo

Check List

Please check the following before submitting a new feature request.

  • [x] I have already read Docs page
  • [x] I have already searched existing issues

Feature Request

From the benchmark (performed by @SukkaW) we can see that cheerio can cause a significant performance hit.

While cheerio is not necessary for the core functions of Hexo, it is currently utilized by several default plugins.

We are hoping to re-implement them without depending on cheerio, if possible.

Related PRs:
  • meta_generator: https://github.com/hexojs/hexo/pull/3671, https://github.com/hexojs/hexo/pull/3669
  • open_graph: https://github.com/hexojs/hexo/pull/3670

Updated 18/08/2019 13:22 5 Comments

Think about using Sylabs cloud for singularity images?

nf-core/tools

Revisiting an old topic. Previously, we used Singularity Hub (a.k.a. Singularity Container Registry) to host Singularity images directly. This is good because:

  • Better reproducibility, as the conversion can introduce variance (though this is likely very small and very unlikely)
  • No need to convert from docker to singularity, which can take considerable computational resources and time
  • Users can directly download the singularity image without having Singularity installed (good for nf-core download when you’re not on the system that will run the pipeline)

However, we hit problems and abandoned this approach, mostly because the automated GitHub link only triggered if the Singularity build file changed; ours basically never changes, it's only the environment.yml file that changes. Secondly, to get versioned images you had to have a directory full of tagged build files, which just didn't make any sense for our use case.

Now, we also have the option of Sylabs cloud. Nextflow has built-in support for this, much like the support for Docker Hub. We need to look into how this would work. It may be possible to automate the building and pushing of singularity images using the new GitHub Actions; there is also the sylabs remote build service, which could be useful.

Updated 18/08/2019 08:42

As a User, I'd like to challenge a matchmaker with a Discord emote react

monkiepaws/polybot

Is your feature request related to a problem? Please describe. There’s no direct and quick way to challenge a user with an active beacon.

Describe the solution you’d like A quick way to challenge, especially on Discord’s mobile app, would be to activate a react emoji.

Describe alternatives you've considered The alternative, starting your own beacon with !g sfv 1, is annoying on phones and could be improved upon on desktop.

Updated 18/08/2019 08:31

Use consistent button style

vega/editor

Here are our button styles:

  • https://user-images.githubusercontent.com/589034/63222011-ed801380-c1a1-11e9-83df-bf5fcaf226de.png
  • https://user-images.githubusercontent.com/589034/63222012-ed801380-c1a1-11e9-89f5-0f30fcb76be2.png
  • https://user-images.githubusercontent.com/589034/63222013-ed801380-c1a1-11e9-9419-b8ec3b644ec4.png
  • https://user-images.githubusercontent.com/589034/63222014-ee18aa00-c1a1-11e9-971a-3b561220ce93.png

Updated 18/08/2019 08:21

Fix show for IndexMap

JuliaOpt/MathOptInterface.jl

Now that IndexMap is a subtype of AbstractDict, displaying it results in an error:

julia> MOI.Utilities.IndexMap()
Error showing value of type MathOptInterface.Utilities.IndexMap:
ERROR: MethodError: no method matching length(::MathOptInterface.Utilities.IndexMap)
Closest candidates are:
  length(::Core.SimpleVector) at essentials.jl:561
  length(::Base.MethodList) at reflection.jl:801
  length(::Core.MethodTable) at reflection.jl:875
  ...
Stacktrace:
 [1] summary(::IOContext{REPL.Terminals.TTYTerminal}, ::MathOptInterface.Utilities.IndexMap) at ./abstractdict.jl:34
 [2] show(::IOContext{REPL.Terminals.TTYTerminal}, ::MIME{Symbol("text/plain")}, ::MathOptInterface.Utilities.IndexMap) at ./show.jl:81

Updated 18/08/2019 07:57

ADDomain: Get-TargetResource is not returning actual value for some properties

PowerShell/ActiveDirectoryDsc

The function Get-TargetResource of the resource ADDomain is not returning the actual value for some of the properties in the schema.mof.

Missing values for the properties:

  • DnsDelegationCredential
  • DatabasePath
  • LogPath
  • SysvolPath

A way of determining the actual values should be found to make sure the correct values are returned in the hashtable when the domain exists.

Updated 18/08/2019 07:21

Home: Could not load platform updates

platformio/platformio-core

PIO Core Call Error: "Error: VCS: Could not receive an output from ['git', 'branch'] command ({'returncode': 128, 'err': u'fatal: \xe4\xb8\x8d\xe6\x98\xaf\xe4\xb8\x80\xe4\xb8\xaa git \xe4\xbb\x93\xe5\xba\x93\xef\xbc\x88\xe6\x88\x96\xe8\x80\x85\xe7\x9b\xb4\xe8\x87\xb3\xe6\x8c\x82\xe8\xbd\xbd\xe7\x82\xb9 / \xe7\x9a\x84\xe4\xbb\xbb\xe4\xbd\x95\xe7\x88\xb6\xe7\x9b\xae\xe5\xbd\x95\xef\xbc\x89\n\xe5\x81\x9c\xe6\xad\xa2\xe5\x9c\xa8\xe6\x96\x87\xe4\xbb\xb6\xe7\xb3\xbb\xe7\xbb\x9f\xe8\xbe\xb9\xe7\x95\x8c\xef\xbc\x88\xe6\x9c\xaa\xe8\xae\xbe\xe7\xbd\xae GIT_DISCOVERY_ACROSS_FILESYSTEM\xef\xbc\x89\xe3\x80\x82', 'out': u''})"

(The escaped bytes in 'err' decode to the Chinese-localized git message meaning "fatal: not a git repository (or any parent up to mount point /). Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).")

Updated 18/08/2019 08:12 1 Comments

Automation for Zenodo DOI

nf-core/tools

Zenodo DOIs are an excellent way to cite nf-core pipelines, especially as they give a specific DOI per version of the pipeline. However, there are two points with the current setup which are quite annoying:

  1. We (one of the nf-core admins) have to manually set up the automated GitHub link for each new pipeline
  2. DOIs are given after a release. This means that the master branch then has to be updated to show the badge for the new DOI after the release is pushed. This changes the commit hash on master so that it no longer matches the release.
    • This is very slightly bad practice as we’re no longer exactly the same as the release. But worse, it messes up functionality in nf-core list and elsewhere, which checks commit hashes of local clones to see if the latest release is being run.
    • Also bad - if people properly run the release (with the -r nextflow flag or by manually downloading), the bundled code cannot include any information about the proper DOI for citation. This will become more of an issue as we try to improve the ease of access to this information (see #361)

After a very, very quick skim read of the docs, I think that we should be able to solve both of these problems with what seems to be an excellent Zenodo API. I see two approaches:

Approach 1: Fully automate releases

  • We can create new resources for new pipelines: https://developers.zenodo.org/#create
  • We can reserve DOIs before publication. This can be done on the website and in the API (with the prereserve_doi flag), but not with the GitHub linkage. (A rough sketch of this call follows this list.)
  • We can then update the code with the new Zenodo badge and any other references to the DOI, commit this, then trigger the GitHub release using the GitHub API.
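As a rough sketch of the pre-reservation call in approach 1 (hedged: the exact payload and response fields should be checked against the Zenodo docs linked above, and ZENODO_TOKEN is a placeholder for a personal access token with the deposit scope):

```python
import requests

ZENODO_TOKEN = "..."  # placeholder: a personal access token with the deposit scope

# Create a new deposition and ask Zenodo to pre-reserve a DOI for it.
response = requests.post(
    "https://zenodo.org/api/deposit/depositions",
    params={"access_token": ZENODO_TOKEN},
    json={"metadata": {"prereserve_doi": True}},
)
response.raise_for_status()
doi = response.json()["metadata"]["prereserve_doi"]["doi"]

# The reserved DOI could then be written into the README badge and citation info
# before the release is tagged, so master and the release stay in sync.
print(doi)
```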

The downside is that this has to be done before the release. This means that we can't use the GitHub release web interface, but instead have to trigger the release programmatically somehow. This probably needs a little thought as to how to do it nicely. Also, whether it's worth it!

Approach 2: More manual DOI fetching, with lint checks

An alternative to this is that we can go fully the other way, and instead of using the automated linkage, manually pre-reserve the DOI on the Zenodo website before release. This would have to be done by the pipeline authors. We could potentially then get the lint tests to check for this when running with the --release flag to ensure that it happens properly.

Welcome for thoughts and feedback!

Phil

Updated 18/08/2019 07:09

sonar-coffeelint plugin does not work with SonarQube 7.6 and above

notimewaste/sonar-coffeelint-plugin

Issue: the sonar-coffeelint plugin does not work with SonarQube 7.6 and above, but it used to work with SonarQube 6.7 LTS.

Error:

2019.08.18 06:39:00 WARN web[][o.s.c.p.PluginLoader] API compatibility mode is no longer supported. In case of error, plugin Sonar Coffeelint Plugin [coffeelint] should package its dependencies.
2019.08.18 06:39:00 INFO web[][o.s.s.p.d.m.c.PostgresCharsetHandler] Verify that database charset supports UTF8
2019.08.18 06:39:00 INFO web[][o.s.s.p.w.MasterServletFilter] Initializing servlet filter org.sonar.server.ws.WebServiceFilter@1b37dc5d [pattern=UrlPattern{inclusions=[/api/system/migrate_db.*, ...], exclusions=[/api/properties*, ...]}]
2019.08.18 06:39:00 INFO web[][o.s.s.a.EmbeddedTomcat] HTTP connector enabled on port 9000
2019.08.18 06:39:01 INFO web[][n.f.s.p.i.w.ExportAction] Defining plugin ...
2019.08.18 06:39:01 INFO web[][n.f.s.p.i.w.ExportAction] Plugin defined
2019.08.18 06:39:01 ERROR web[][o.s.s.p.Platform] Background initialization failed. Stopping SonarQube
java.lang.IllegalStateException: Fail to load plugin Sonar Coffeelint Plugin [coffeelint]
    at org.sonar.server.plugins.ServerExtensionInstaller.installExtensions(ServerExtensionInstaller.java:82)
    at org.sonar.server.platform.platformlevel.PlatformLevel4.start(PlatformLevel4.java:573)
    at org.sonar.server.platform.Platform.start(Platform.java:211)
    at org.sonar.server.platform.Platform.startLevel34Containers(Platform.java:185)
    at org.sonar.server.platform.Platform.access$500(Platform.java:46)
    at org.sonar.server.platform.Platform$1.lambda$doRun$0(Platform.java:119)
    at org.sonar.server.platform.Platform$AutoStarterRunnable.runIfNotAborted(Platform.java:371)
    at org.sonar.server.platform.Platform$1.doRun(Platform.java:119)
    at org.sonar.server.platform.Platform$AutoStarterRunnable.run(Platform.java:355)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NoClassDefFoundError: org/sonar/api/batch/Sensor
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:468)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
    at org.sonar.classloader.ClassRealm.loadClassFromSelf(ClassRealm.java:125)
    at org.sonar.classloader.ParentFirstStrategy.loadClass(ParentFirstStrategy.java:37)
    at org.sonar.classloader.ClassRealm.loadClass(ClassRealm.java:87)
    at org.sonar.classloader.ClassRealm.loadClass(ClassRealm.java:76)
    at org.sonar.plugins.coffeelint.CoffeelintPlugin.getExtensions(CoffeelintPlugin.java:31)
    at org.sonar.api.SonarPlugin.define(SonarPlugin.java:51)
    at org.sonar.server.plugins.ServerExtensionInstaller.installExtensions(ServerExtensionInstaller.java:72)
    ... 9 common frames omitted
Caused by: java.lang.ClassNotFoundException: org.sonar.api.batch.Sensor
    at org.sonar.classloader.ParentFirstStrategy.loadClass(ParentFirstStrategy.java:39)
    at org.sonar.classloader.ClassRealm.loadClass(ClassRealm.java:87)
    at org.sonar.classloader.ClassRealm.loadClass(ClassRealm.java:76)
    ... 25 common frames omitted
2019.08.18 06:39:02 INFO web[][o.s.p.StopWatcher] Stopping process

Updated 18/08/2019 07:21 1 Comments

Provide Kotlin Extension Functions (with reified type) for GatewayFilterSpec

spring-cloud/spring-cloud-gateway

GatewayFilterSpec (https://github.com/spring-cloud/spring-cloud-gateway/blob/master/spring-cloud-gateway-core/src/main/java/org/springframework/cloud/gateway/route/builder/GatewayFilterSpec.java) has the Java methods modifyRequestBody and modifyResponseBody with arguments of type Class<T>. It would be nice to have Kotlin Extension Functions with reified types just as implemented by @sdeleuze for the Spring Framework.

Then in Kotlin one could write e.g. modifyResponseBody<String, String> { … } instead of modifyResponseBody(String::class.java, String::class.java) { … }.

Updated 18/08/2019 12:15 1 Comments

prefer-default-export False Positive

benmosher/eslint-plugin-import

Description eslint-plugin-import v2.18.2 throws an error (error Prefer default export import/prefer-default-export) when there are multiple export statements in a TypeScript file (this is probably not limited to TypeScript).

Expected Results eslint-plugin-import v2.18.1 does not throw this error. This behavior aligns with the example page: prefer-default-export

Basic Example:

export interface SubmitHandler {}
export function anything() {} // error Prefer default export import/prefer-default-export

Updated 18/08/2019 06:52

Propose to open a discussion group for general discussions

williamFalcon/pytorch-lightning

Is your feature request related to a problem? Please describe.

I think it might be a good idea to have a discussion group on IM for general discussions.

Describe the solution you’d like

I don't know; Discord or Telegram, perhaps? Some of my own projects are maintained with discussions on these platforms, and they seem equally good to me.

Describe alternatives you’ve considered

Or maybe slack or gitter :)

Updated 18/08/2019 05:13

Add type definitions for Typescript

Pustur/whatsapp-chat-parser

I’d like to add type definitions but I don’t have any experience with Typescript yet.
The definition would be for the public API, so just this function:

https://github.com/Pustur/whatsapp-chat-parser/blob/9d78a686ea2604475779866019d5eec098c2dce9/src/index.js#L7

The API is pretty well documented in the readme.

I’d like to have a types folder in the root of the project, with a index.d.ts file inside.
When done, a PR can be opened against the develop branch.

Anyone interested in taking this? Would be very appreciated.

Updated 18/08/2019 02:43

SENSORS Identifier

Thorium-Sim/thorium

Requested By: Natalie Anderson

Priority: 1

Version: 1.17.3

It would be really nice if, when they click on a ship on sensors, the sensors field in the sensors core showed which ship they currently have selected, to specify which ship they are scanning.

Updated 18/08/2019 02:09

Create process for generating variable TTF

tphinney/science-gothic

Placeholder for what may very well become an "epic" (with component tasks). Basically, although being able to export variable fonts directly from FontLab is fine for some testing/dev purposes, we need to play with an open source final toolchain.

May involve some scripting.

Likely includes:
  • export VF as UFO from FontLab, whether directly or indirectly
  • export .designspace files (or write separately)
  • other steps…

Updated 18/08/2019 01:45
