Contribute to Open Source. Search issue labels to find the right project for you!

Modification of main page

cands1/cands1.github.io

@mukulmodi what do you think of this idea:

  • modify the front page to have a small description and then link to the engagement site?
  • have another tab that has stuff about “our story”?

Or would this be too complicated? I think this would also make the front page look really plain, which is not good. Thoughts?

Updated 20/08/2017 07:39

Updating wordpress integration

Lambda-Cartel/Nikeza

Code is in branch wordpress-db

  • Removed old integration that passed the parsed XML feed as JSON through route.

  • Began adding SQL query strings to insert feed data once processed.

  • Began adding functions to insert data into DB.

Nothing is tested

Issues

  • Have not been able to connect to the MSSQL DB running on Docker. Currently it seems to be set up for the Express version running with the Visual Studio integration. Maybe @marukami can help?

  • RSS feed models still don’t feel right.

  • Functions that process the RSS feed are still not very good and don’t handle exceptions.

  • Need to add code to call this as a background (cron) job. I guess this depends on Azure. @odytrice any ideas?

Updated 20/08/2017 07:43 1 Comments

translateDoc() not called on changing src or dst Language when on docTranslation Interface

goavki/apertium-html-tools

When the src or dst language is changed on the translateText interface, an ongoing request is aborted and the translateText() method gets called. However, doing the same on the docTranslation interface neither aborts an ongoing request nor calls the translateDoc() method, because the code only calls translateText() when either of these languages is changed. Would implementing this be the right behaviour?

Updated 20/08/2017 07:11

Issues backing up with zfs send/receive

iocage/iocage

iocage version 0.99.1 FreeBSD 11.1

I created a handful of jails on their own iocage zpool with these commands:

  • iocage activate iocage (iocage is the zpool)
  • iocage fetch 11.1-RELEASE
  • iocage create --name jailName host_hostname="jailName.home" jail_zfs=on ip4_addr="bge0|172.16.0.2/28" defaultrouter=172.16.0.1 -r 11.1-RELEASE

I have a handful of these, some of which have snapshots taken with the snapshot command iocage uses.

I then set up my backup drive (connected USB external HDD) as such:

  • zpool create backups /dev/da0
  • zpool export backups
  • zpool import -o altroot=/mnt/backup_drive backups
  • zfs create backups/iocage
  • zfs set readonly=on compression=on backups/iocage

I then take a snapshot: zfs snapshot -r iocage@$(date +"%Y-%m-%d")

My initial zfs send goes like this:

  • zfs send -R iocage@2017-08-19 | zfs receive -vF backups/iocage

In the future, I plan to do incremental snapshots as such:

  • zfs snapshot -r iocage@$(date +"%Y-%m-%d")
  • zfs send -R -I iocage@dateOld iocage@dateNew | zfs receive -vF backups/iocage

Currently, while doing this with a dry run, I get this as my error output:

root@homeServer:~ # zfs send -R iocage@2017-08-19 | zfs receive -vnF backups/iocage
would receive full stream of iocage@2017-08-19 into backups/iocage@2017-08-19
would receive full stream of iocage/iocage@2017-08-19 into backups/iocage/iocage@2017-08-19
would receive full stream of iocage/iocage/download@2017-08-19 into backups/iocage/iocage/download@2017-08-19
would receive full stream of iocage/iocage/download/11.1-RELEASE@2017-08-19 into backups/iocage/iocage/download/11.1-RELEASE@2017-08-19
would receive full stream of iocage/iocage/download/10.3-RELEASE@2017-08-19 into backups/iocage/iocage/download/10.3-RELEASE@2017-08-19
would receive full stream of iocage/iocage/releases@2017-08-19 into backups/iocage/iocage/releases@2017-08-19
would receive full stream of iocage/iocage/releases/10.3-RELEASE@2017-08-19 into backups/iocage/iocage/releases/10.3-RELEASE@2017-08-19
would receive full stream of iocage/iocage/releases/10.3-RELEASE/root@2017-08-19 into backups/iocage/iocage/releases/10.3-RELEASE/root@2017-08-19
would receive full stream of iocage/iocage/releases/11.1-RELEASE@2017-08-19 into backups/iocage/iocage/releases/11.1-RELEASE@2017-08-19
would receive full stream of iocage/iocage/releases/11.1-RELEASE/root@nextcloud into backups/iocage/iocage/releases/11.1-RELEASE/root@nextcloud
cannot receive incremental stream: destination 'backups/iocage/iocage/releases/11.1-RELEASE/root' does not exist
warning: cannot send 'iocage/iocage/releases/11.1-RELEASE/root@minecraft': signal received
warning: cannot send 'iocage/iocage/releases/11.1-RELEASE/root@webServer': Broken pipe
warning: cannot send 'iocage/iocage/releases/11.1-RELEASE/root@plexMedia': Broken pipe
warning: cannot send 'iocage/iocage/releases/11.1-RELEASE/root@homeBackup': Broken pipe
warning: cannot send 'iocage/iocage/releases/11.1-RELEASE/root@reverseProxy': Broken pipe
warning: cannot send 'iocage/iocage/releases/11.1-RELEASE/root@2017-08-19': Broken pipe

So for some reason, zfs thinks there is some sort of incremental stream involved for this dataset. Additionally, if I create that whole tree as different zfs filesystems on backups, I get a new problem:

cannot receive: local origin for clone backups/iocage/iocage/jails/webServer/root@2017-08-19 does not exist
warning: cannot send 'iocage/iocage/jails/webServer/root@2017-08-19': signal received

Any thoughts on a solution?

Updated 20/08/2017 06:50 1 Comments

Why GET /ehr/{ehrId}/versioned_ehr_status returns only revision_history?

openEHR/specifications-ITS

Wouldn’t it be more RESTful to have a dimension per method of VERSIONED_OBJECT, and leave GET /ehr/{ehrId}/versioned_ehr_status to return just uid, owner_id and time_created?

GET /ehr/{ehrId}/versioned_ehr_status/revision_history
GET /ehr/{ehrId}/versioned_ehr_status/all_versions
GET /ehr/{ehrId}/versioned_ehr_status/latest_trunk_version
GET /ehr/{ehrId}/versioned_ehr_status/version_at_time
etc.

This is similar to https://github.com/openEHR/specifications-ITS/issues/27 but focuses on the current Body of the 200 response from https://github.com/openEHR/specifications-ITS/blob/master/apiary.apib#L561-L611

Updated 20/08/2017 06:14

PUT /ehr/{ehrId}/ehr_status status 412 implications / questions

openEHR/specifications-ITS

Current:

  1. The existing versionUid of the EHR_STATUS resource must be specified in the Match-If header. Ref https://github.com/openEHR/specifications-ITS/blob/master/apiary.apib#L436

  2. “Response 412: 412 Conflict is returned when the Match-If header doesn’t match the latest trunk version. Returns latest trunk version in the Content-Location and ETag headers.” Ref https://github.com/openEHR/specifications-ITS/blob/master/apiary.apib#L548-L551

Does 2 imply that the versionUid described in 1 must be the current trunk version? Or can versionUid be the UID of any version of the EHR_STATUS?

If it can only be the trunk version, is the versionUid needed at all? Knowing the ehrUid and that the status is the trunk version already identifies exactly one version of the status.

Updated 20/08/2017 06:10

Contributing and conduct guidelines aren't visible when a user creates an issue

limonte/sweetalert2

On this page https://github.com/limonte/sweetalert2/issues/new I expect to see the block with links to contributing guide and code of conduct:

Example: https://github.com/vuejs/vue/issues/new

Corresponding files are in the repo: https://github.com/limonte/sweetalert2/tree/master/.github

Does anybody have any idea why that block isn’t visible for the SweetAlert2 repo?

Updated 20/08/2017 06:08

Investigate making Formation usable for Node-based server-side validation

ozzyogkush/formation

Since lots of websites nowadays use node and node-based server software, Formation could be quite useful on that end of the spectrum. Server-side validation is arguably more important than client-side.

This is to investigate what changes would be necessary in order to make it work.

My initial thought on implementing it is to create 3 new repositories:

  • formation-core : the core rule validation engine API
  • formation-client : the client-side process and event system API
  • formation-server : the server-side process and event system API

This repository would be a shorthand for using both formation-client and formation-server.

Updated 20/08/2017 05:44

Repeated routines

PennBBL/xcpEngine

A number of common processing routines are repeated, typically with minor changes, across modules. I am opening this issue to track these routines so that the possibility of functionalisation can be assessed for each.

Updated 20/08/2017 05:04 1 Comments

Close the project or leave it open?

CHEF-KOCH/BarbBlock-filter-list

I’m not sure whether I should close this project and start a ‘global’ parsing list, which doesn’t exist yet. Overall my personal goal isn’t to end up with only one list; I’m more interested in a project which lists them all, instead of 100 separate little projects which are difficult to maintain in the end (I’m talking about a system which integrates into Pi-Hole and other known systems). In most cases such projects end with someone giving up because of time, lack of interest or other reasons.

Sadly I’m not aware of any system which fetches the latest updates automatically and integrates them into your own project without any manual adjustments. Of course you can just fetch the URL, trigger a system which gets the latest lists every x hours and then work with regex to add/exclude your entries, but you can’t then automatically split them into HOSTS, uBlock, pfSense or iptables related tables so that people don’t need to mess with that.


That said, I’m looking for an all-in-one solution. If anyone knows a system which can give me exactly this, let me know. I think JSON might help, but I’m interested in opinions on pushing something which doesn’t exist yet.

Updated 20/08/2017 03:37

Containers on AWS - Elastic Beanstalk - Dockerrun.aws.json

ansrivas/angular2-flask

I love this app. I’m trying to run it in containers on AWS Elastic Beanstalk. Every time I upload and deploy Dockerrun.aws.json I get different errors, and I keep changing the file. Now I’m getting this error:

Encountered error starting new ECS task: { "failures": [ { "reason": "RESOURCE:MEMORY", "arn": "arn:aws:ecs:us-west-2:016511600475:container-instance/cc20f5bf-7349-48c7-94a3-512df68b9962" } ], "tasks": [] }

Does anyone know the correct configuration for Dockerrun.aws.json? This is what I have now:

{
  "AWSEBDockerrunVersion": 2,
  "volumes": [
    { "name": "backend-v" },
    { "name": "frontend-v" }
  ],
  "containerDefinitions": [
    {
      "name": "backend-c",
      "image": ".dkr.ecr.us-west-2.amazonaws.com/apptbot/server",
      "essential": true,
      "memory": 4096,
      "mountPoints": [
        { "sourceVolume": "backend-v", "containerPath": "/usr/src/app/backend", "readOnly": true }
      ]
    },
    {
      "name": "frontend-c",
      "image": ".dkr.ecr.us-west-2.amazonaws.com/apptbot/frontend",
      "essential": true,
      "portMappings": [
        { "hostPort": 3000, "containerPort": 80 }
      ],
      "links": [ "backend-c" ],
      "mountPoints": [
        { "sourceVolume": "frontend-v", "containerPath": "/var/www/front" }
      ],
      "memory": 4096
    }
  ]
}

Updated 20/08/2017 07:49 5 Comments

Issues encountered during first release

gordon-cs/Project-Phoenix

These are some of the issues I encountered during the first checkin:

Documenting them here to keep track:

New RCIs for the fall semester were not showing up

What the problem was:

We look for new RCIs to be generated by looking through the RoomAssign table for records that match the current session…except the official session was still “Summer Term”. The Fall semester officially starts on the 30th of August, but RAs need to access the system and pre-fill stuff beforehand.

What I did:

TEMPORARY FIX - Edit the stored procedure to ignore the passed-in parameter and look for RoomAssign records for the Fall semester.

Long-term fix:

Not sure yet 💡 Still thinking it over.

New RAs can’t log in!

What the problem was:

The CurrentRA table is not up to date. I talked with Jay, and that view is based on user roles within Active Directory, not Jenzabar. This means that something was off there. We have contacted NSG and are waiting till Monday morning for their reply.

How I fixed it:

Hopefully there will be nothing to fix once the Active Directory records are correct.

Long-Term Fix

Consider using our own CurrentRA and CurrentRD tables that are not based on Gordon systems? Then have the Housing Director update that.

Polly can’t log in!

What the problem was:

They renamed her Job_Title_Hall from “Chase Hall” to “Chase” in the CurrentRD table 🙅‍♂️ so the system wasn’t finding a matching Building code in BuildingAssign.

What I did:

Renamed the BuildingAssign entry from “Chase Hall” to “Chase”. 🎉

Long-Term fix:

This would be solved if we control what gets put into the CurrentRD table. (See earlier suggested solution)

This Lewis Guy shows up multiple times for his room!

What the problem was:

The dude had 10+ RCIs with a CreationDate of August 18th. It turns out his RoomAssign record also had an AssignDate of August 18th, but with more precision (hours, minutes and seconds). So each time he or his RAs logged in on the 18th, the system compared the two dates and found that his RoomAssign record was more recent than his last RCI.
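To illustrate the precision mismatch, here is a tiny sketch with hypothetical timestamps (Python only for illustration; the real comparison happens in the database/stored procedure):

from datetime import datetime

# Hypothetical values reproducing the mismatch described above.
rci_creation_date = datetime(2017, 8, 18)              # date-only column: effectively midnight
room_assign_date = datetime(2017, 8, 18, 10, 23, 45)   # full timestamp on the RoomAssign record

# The generation check: a RoomAssign record newer than the last RCI triggers a new RCI.
# With a date-only CreationDate this stays True for the rest of the day,
# so every login on the 18th generated yet another RCI.
print(room_assign_date > rci_creation_date)  # True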

How I fixed it:

I didn’t really. After the 18th had passed, new RCIs stopped being generated. So I just deleted all his old ones.

Long-Term Fix:

Make the CreationDate field more precise by adding hours, minutes and seconds.

A certain RA can see all her residents' RCIs, but can’t see her own!

What the problem was:

Remember how we fake-generated checkout RCIs at the end of the year last semester? Well we generated her RCI AFTER her new RoomAssign record had been created/updated. So this semester, the system was correctly seeing that rci.CreationDate > roomAssign.AssignDate and not generating anything.

How I fixed it:

I altered her RCI’s creation date.

Long-Term Fix:

Nothing. As long as we don’t fake-generate RCIs like we did in the beginning, everything should be Gucci. ⚙️

Updated 20/08/2017 03:53 1 Comments

places.sqlite integrity checks?

MoonchildProductions/Pale-Moon

Hi, I have a question which may not be entirely appropriate here, but nonetheless I feel here I have the best chance of getting an answer, or at least a hint.

I’m attacking the age-old issue of syncing the places data among multiple profiles on multiple computers. Actually all I care about is the bookmarks, but I understand that the bookmark, history and other data is hopelessly jumbled together in places.sqlite and I don’t mind copying the other data too. Now for my own reasons (which I can explain if needed but which I don’t want to argue about), the following “official” ways to share bookmarks are not acceptable to me:

  1. the Sync
  2. manual bookmark.json backup and restore

Now I tried just copying over the places.sqlite* files. That’s what the couple of existing discussions I found recommend, on Stack Overflow and similar sites. It does work, and it is acceptable. But there is the nagging drawback that those are binary files, so of course there is no hope of merging should I ever be so careless as to modify the bookmarks on multiple computers before syncing :-(

So, the next thing I tried was to take a SQL dump of the database using the sqlite3 shell on computer 1, copy the dump to computer 2, delete 2’s places.sqlite* and recreate it using sqlite3. But here I hit a snag: when I start PM after that, it thinks the database is corrupt, renames it to places.sqlite.corrupt and just creates a new one from scratch using (I think!) one of the automatic .jsonlz4 backups.

So clearly, the process of dumping and un-dumping the database loses some crucial information that PM checks on startup to make sure the database is kosher. Does anyone here know what that information might be, so I can amend the dumping or un-dumping process to make PM grok the data without problems?
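Not an answer, but a diagnostic idea: a plain sqlite3 .dump does not carry over database header settings such as page_size, user_version or application_id, and Places keeps its schema version in user_version, so comparing those settings between the original file and the recreated one might point at what PM is checking. A minimal Python sketch (file names are hypothetical):

from sqlite3 import connect

# Hypothetical file names; point these at the original and the recreated database.
ORIGINAL = "places.sqlite.orig"
RECREATED = "places.sqlite"

def header_settings(path):
    # Settings that a plain .dump / .read round trip does not restore.
    pragmas = ("page_size", "user_version", "application_id", "auto_vacuum", "journal_mode")
    with connect(path) as db:
        return {p: db.execute("PRAGMA " + p).fetchone()[0] for p in pragmas}

orig, copy = header_settings(ORIGINAL), header_settings(RECREATED)
for pragma in orig:
    marker = "DIFFERS" if orig[pragma] != copy[pragma] else "same"
    print("%-15s %-7s original=%s recreated=%s" % (pragma, marker, orig[pragma], copy[pragma]))

If user_version turns out to be the culprit, re-applying the original value with PRAGMA user_version = <old value> after restoring the dump would be the obvious next experiment.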

Updated 20/08/2017 05:32 1 Comments

Unearthed Arcana sort order

Sklore/HL_DD_5e_Colab

I was looking at how the unearthed arcana sources are getting pretty blocky and was going to take a crack at sorting or reorganizing that section since it can only get more unwieldy.

I’m noticing that many of the sources created by the community aren’t in the .1st file - does anyone know the default sort order for these? Are they always going to show up after whatever is defined in the .1st file or do they sort at a default number we can manipulate?

I could start a task to get these put into the .1st file where we can control sort orders a bit better and then make a rule that everything gets sorted alpha by using the same sort order # in the .1st file.

Secondary question/poll: If I start splitting these up, any opinions on how you would like to see these?

  1. By Year/Date
  2. By “PHB chapter” (a sub category for classes, races, feats, miscellany?)
Updated 20/08/2017 00:07

tenorshare.com, oceanofgames.com -Ads- Windows

AdguardTeam/AdguardFilters

Description / Current behavior: Windows beta (6.2.390.2018). Why isn't the pop-up blocked by Adguard?

(1) http://www.tenorshare.com/windows-care/free-windows-8-cleaner-to-clean-up-junk-files-on-windows-8.html

(2) http://oceanofgames.com/ipl-6-pc-game-free-download/

(Screenshots 1 and 2 attached.)

Reproduced on our side (Mac OS)

Information value
Operating system: Mac OS
Browser: Chrome
Adguard version: 1.4.1
Adguard DNS: None
Adguard filters: 0,1,2,3,4,11,14,
Ticket ID (if exists): 553706
Updated 19/08/2017 23:53

www.onegoodthingbyjillee.com - Ad - Windows

AdguardTeam/AdguardFilters

Description: * Current behavior:

Why isn’t the ad redirecting to Amazon blocked by Adguard (Windows, beta version 6.2.390.2018) on: https://fossbytes.com/top-10-free-registry-cleaners-for-microsoft-windows/

Is it considered to be the integrated part of the website?

(Screenshot attached.)

Reproduced on our side (Mac OS)

Information value
Operating system: Mac OS
Browser: Chrome
Adguard version: 1.4.1
Adguard DNS: None
Adguard filters: 0,1,2,3,4,11,14,
Ticket ID (if exists): 553706
Updated 19/08/2017 23:28

fossbytes.com -Ads - Windows

AdguardTeam/AdguardFilters

Description / Current behavior: Why aren't the ads automatically blocked by Adguard (Windows, beta version 6.2.390.2018) on: https://fossbytes.com/top-10-free-registry-cleaners-for-microsoft-windows/

Are they considered to be an integrated part of the website?

(Screenshot attached.)

Reproduced on our side (Mac OS)

Information value
Operating system: Mac OS
Browser: Chrome
Adguard version: 1.4.1
Adguard DNS: None
Adguard filters: 0,1,2,3,4,11,14,
Ticket ID (if exists): 553706
Updated 19/08/2017 23:19

https://free-windows-cleanup-tool.en.softonic.com/? - Ads- Windows

AdguardTeam/AdguardFilters

Description:

The user is wondering why these kinds of pop-ups are not blocked by Adguard on the website https://free-windows-cleanup-tool.en.softonic.com/? on Windows beta 6.2.390.2018. Reproduced on our side (Mac OS; attaching the system details of the Mac).

(Two screenshots attached.)

System configuration

Information value
Operating system: Mac OS
Browser: Chrome
Adguard version: 1.4.1
Adguard DNS: None
Adguard filters: 0,1,2,3,4,11,14,
Ticket ID (if exists): 553706
Updated 19/08/2017 23:18

Epel version

adaltas/node-masson

Hi, is there a reason why we are using EPEL from CentOS 6 and not CentOS 7?

core/yum/configure.coffee.md: Instead of
http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm we could use http://download.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-10.noarch.rpm

Thanks

Updated 19/08/2017 22:38

About settings page

EyeSeeTea/FIRE-WiFiCalling

@adrianq I am looking at the settings page

I understand that we are going to make the base URL for all requests dynamic, right?

The settings page has 3 fields; 2 of them are user and current password, which we get from the user endpoint. But where do we get the server IP? The current user interface does not have a server property.

I also noticed that the menu for unauthenticated users shows the settings page. In this case user and current password would not make sense!

What do you think? thanks

Updated 19/08/2017 22:57 1 Comments

is there a workshop for chatbots?

nodeschool/discussions

Notes

  • If you have a question about the NodeSchool organization: Please open an issue in nodeschool/organizers
  • If you want to improve or contribute to workshoppers: Please open an issue in workshopper/org
  • Did you see the guide for questions? https://github.com/nodeschool/discussions#when-you-have-a-problem

Please delete this section after reading it

If you have a problem with an error message:

  • node version: <run “node -v” in your terminal and replace this>
  • npm version: <run “npm -v” in your terminal and replace this>
  • os: windows 7/windows 10/mac/linux

Error output

 // copy your error output here

My Code

 // copy your code here
Updated 20/08/2017 06:41

[Lesson 2] Question about Challenge 2

da2k/curso-javascript-ninja

When I try to clone, this message appears:

$ git clone git@github.com:wdgraphael/curso-javascript-ninja.git
Cloning into ‘curso-javascript-ninja’…
Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights and the repository exists.

@fdaciuk

Updated 20/08/2017 00:16 1 Comments

[byu-footer] Left/Right Footer Margin

byuweb/byu-theme-components

Description

This is another question for @Aleniras: Is the padding supposed to be that small between the edge of the browser and the gray footer? I know it seems like no one would notice this because they won’t be resizing their browser, but if they’re like me and sometimes have their browser this width, they may notice it. (Screenshot: https://user-images.githubusercontent.com/9389424/29490215-4c59ed66-84f2-11e7-9a92-d7ab15b5074e.png)

Websites Affected

Issue Type

Is this (add an x in the boxes that apply)

  • [ ] A difference between the components and the Official Specification?
  • [x] A bug, such as a Javascript error, or the UI not rendering properly on a page?
  • [ ] Inconsistent appearance/behavior between browsers?
  • [ ] An issue on mobile browsers?
  • [ ] A request for a new feature/enhancement?

Browsers Affected

Add an x in all the boxes that apply. Please mark desktop and mobile browsers separately.

We support the last two versions of Chrome, Firefox, Safari, and Edge, plus Internet Explorer 11.

Desktop Browsers

  • [x] Google Chrome
  • [x] Mozilla Firefox
  • [x] Apple Safari
  • [ ] Microsoft Edge
  • [ ] Microsoft Internet Explorer 11
  • [ ] Other (please specify)

Mobile Browsers

  • [ ] Safari for iOS
  • [ ] Chrome for iOS
  • [ ] Firefox Mobile for Safari
  • [ ] Chrome for Android
  • [ ] Firefox Mobile for Android
  • [ ] Other (please specify)

Web Site Platform

What is hosting your website?

  • [ ] Drupal 7
  • [x] Drupal 8
  • [ ] Wordpress
  • [x] Custom Site
  • [ ] Don’t Know
Updated 19/08/2017 21:24

Is ExtCore core 2.0 compatible?

ExtCore/ExtCore

Hi Dmitry,

I tried to upgrade our application to .NET Core 2.0, as you can see on this branch, but the application crashes at startup when activating ExtCore.

The error message says:

System.MissingMethodException: Method not found: ‘System.IServiceProvider Microsoft.Extensions.DependencyInjection.ServiceCollectionContainerBuilderExtensions.BuildServiceProvider(Microsoft.Extensions.DependencyInjection.IServiceCollection)

I double-checked the NuGet packages and the compiled extension DLLs. Do you think that ExtCore is not compatible with .NET Core 2.0, or could it be something else?

Updated 19/08/2017 21:51 2 Comments

fgdc Time Period - Current tag

adiwg/mdTranslator

The time period (timeperd) section has a current (current) element which has the domain [‘ground condition’ | ‘publication date’ | free text]. Any thoughts on where this might transfer to in mdJson? Do we even need to worry about it? All other time period fields will be kept by the reader.

Updated 19/08/2017 21:10

Provided access code was rejected by Harvest, no token was returned

log0ymxm/node-harvest

Hello there,

Is it possible that because I’m working locally I can’t request an access token?

When I log the response in setAccessToken on line 107 I can see a 400 Bad Request.

{ server: 'nginx',
        date: 'Sat, 19 Aug 2017 20:46:47 GMT',
        'content-type': 'application/json',
        'content-length': '85',
        connection: 'close',
        status: '400 Bad Request',
        'x-frame-options': 'SAMEORIGIN',
        'x-xss-protection': '1; mode=block',
        'x-content-type-options': 'nosniff',
        'cache-control': 'private, no-store, no-cache, max-age=0, must-revalidate',
        p3p: 'CP="Our privacy policy is available online: https://www.getharvest.com/services/privacy-policy"',
        'x-app-server': 'app12',
        'x-robots-tag': 'noindex, nofollow',
        'content-security-policy': 'report-uri /csp_reports; default-src *; img-src *; font-src data: cache.harvestapp.com; script-src \'self\' \'unsafe-inline\' \'unsafe-eval\' https://*.google-analytics.com https://*.nr-data.net https://ajax.googleapis.com cache.harvestapp.com https://js-agent.newrelic.com https://js.appcenter.intuit.com https://platform.twitter.com https://www.google.com https://www.googleadservices.com https://www.googletagmanager.com https://connect.facebook.net; style-src \'self\' \'unsafe-inline\' cache.harvestapp.com https://js.appcenter.intuit.com https://www.google.com',
        'x-request-id': '4a90bfe1d14f28f3da199a7ab882bb20',
        'x-runtime': '0.015621',
        'strict-transport-security': 'max-age=15552000; includeSubDomains' } },
  read: [Function] }

However, my redirect URL in Harvest is set to http://127.0.0.1:8080/oauth_redirect

Updated 20/08/2017 01:59 3 Comments

hash function and proxies (this is complicated!)

openpathsampling/openpathsampling

This is more a discussion than really an issue. I realized that we use different concepts of equality and wanted an opinion. These are the different ways of defining equality we use; below I will explain why we need both and what we could do about it.

  1. Compare features or attributes of two objects: This is intuitive and the pythonic way. If two things represent the same information, they are equal. For example, for volumes and ensembles we compare the string representation; if these are the same, we consider the objects equal. The problem is to find a suitable hash function.

  2. Compare using the UUID, which should be truly unique. So, if these are the same, we know the objects are equal. But if you make a deep copy, the UUID is different even though the objects are equal in the first sense.

For most objects we use 2, but for some we use 1. So why use 2 if the other one is better?

The problem is that we also use shallow proxy objects that contain no information but the UUID and a place to load the object from. Of course, you would like to compare a proxy with other objects, and if the UUID is the same, they are equal. If you use 1 you will have to load every object for the comparison; if you use 2 you do not.

Also, with 2 you have a very good hash function, which you need to place objects in a set or use them as keys in a dict. If you put an object and its proxy in a set, with 2 you end up with only one element; with 1 you end up with two, which is redundant for e.g. caching. Also, with 1 the hash function might be very inefficient.
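A minimal sketch of option 2 with hypothetical classes (not the actual openpathsampling types), just to spell out why a proxy and its full object collapse to a single entry in a set while a deep copy does not:

import uuid

class UUIDEqualityMixin:
    # Option 2: identity via UUID, so proxies and full objects compare without loading.
    def __eq__(self, other):
        return hasattr(other, "uuid") and self.uuid == other.uuid

    def __hash__(self):
        return hash(self.uuid)

class FullObject(UUIDEqualityMixin):
    def __init__(self, data):
        self.data = data
        self.uuid = uuid.uuid4()

class Proxy(UUIDEqualityMixin):
    # Shallow proxy: knows only the UUID and where the real object could be loaded from.
    def __init__(self, uid, store=None):
        self.uuid = uid
        self.store = store

obj = FullObject(data=[1, 2, 3])
proxy = Proxy(obj.uuid)

print(obj == proxy)       # True: a plain UUID comparison, no loading required
print(len({obj, proxy}))  # 1: set/dict see a single entry, which is what caching wants
print(obj == FullObject(data=[1, 2, 3]))  # False: a fresh/deep copy gets a new UUID (the downside of 2)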

Solutions

  • Keep it the way it is and have special equality for some objects: Trajectory, Ensemble, Volume, etc.
  • Use only the UUID hash. Equality will still work, but dicts will consider deep copies different, although eq might say otherwise.
  • Change storage (at some point) and save the hash with the objects. This would be a big change, but technically pretty cool and advanced. I have not seen any of this considered in NoSQL DBs (if that rings a bell).

For now I leave it the way it is, but especially for Trajectory and upcoming trajectory CVs this could be problematic…

Updated 19/08/2017 21:12 6 Comments

Change processId without activityId?

imixs/imixs-workflow

Hi Ralph,

can you please tell me whether it’s possible to change the processId of a workitem without an event / activityId? This would be useful for cancelling a workitem in any process step without adding a cancel event to all workflow tasks. It would also be great if an admin could set the workflow task back from any process step to one that came before the current step. Is it possible to modify the processId without the activityId of an event?

Thank you. Chris

Updated 19/08/2017 20:38 1 Comments

Mobiles

kataras/iris

Is running Iris on mobiles and desktops considered plausible?

The MVC functionality is server side, so the client will refresh each page. But if it’s fast enough it can be enough for many apps.

Updated 19/08/2017 19:26 2 Comments

Symbolic Calculations

josdejong/mathjs

Hey there,

I am working with mathjs and what I would like to do is something like this: invert(imatrix(2)*s - A), where A should also be a 2x2 matrix

and s is a symbol.

Any ideas how something like this can be implemented?

Anyway, I would like to ask how to work with symbols in mathjs.

Updated 19/08/2017 20:33 1 Comments

.NET Standard 2.0 Support

RoushTech/RollbarDotNet

The new .NET Standard adds much-needed functionality for tracing…

So what do we do, do we support both NetStandard 2.0 and 1.0, with 1.0 being a reduced functionality set, or do we only support 2.0?

I’ll have to start working on NetStandard 2.0 and get this lib on 1.0 soon anyway.

Updated 19/08/2017 18:18

Integrated update

chriscamicas/girr

= being able to update the application from within the application

Several options:

  • fetch the latest available release of the GitHub project and deploy it in place of the existing one (but how do we handle files that need to be deleted? likewise, how do we handle config files that have been modified locally?)
  • git clone: use git to do the deployment (how do we handle config files that have been modified locally, and how do we handle building the sources if we integrate vue/webpack?)

Updated 19/08/2017 16:38

The filename, directory name, or volume label syntax is incorrect.

webpack/webpack-dev-server

Do you want to request a feature or report a bug? Bug

What is the current behavior?

Jonas@JONAS-PC /c/Sites
$ cd ~/app_name

Jonas@JONAS-PC ~/app_name
$ ./bin/webpack-dev-server

Jonas@JONAS-PC ~/app_name
yarn run v0.27.5
$ "c:\Users\Jonas\app_name\node_modules\.bin\webpack-dev-server" "--progress" "--color" "--config" "c:/Users/Jonas/app_name/config/webpack/development.js"
The filename, directory name, or volume label syntax is incorrect.
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.

If the current behavior is a bug, please provide the steps to reproduce.

$ rails new app_name --webpack
$ cd app_name
$ ./bin/webpack-dev-server

What is the expected behavior? No errors.

Please mention your webpack and Operating System version. webpacker: 2.0, webpack: 3.5.5, Windows 10 with latest updates.

Updated 20/08/2017 01:11 1 Comments

Migration Vue+Webpack

chriscamicas/girr

Initially the project was kept as simple as possible. However, in order to build a real webapp, and in particular to be able to include third-party components, it would be desirable to migrate the frontend to a ‘real’ vue/webpack project. See https://github.com/vuejs/vue-cli and https://github.com/vuejs-templates/webpack

Be careful to integrate this into the current project, which contains both the front end and the back end.

Updated 19/08/2017 16:28 1 Comments

Image EXIF/Location Surfacing

WatchNature/watchnature-nuxt

So we can get location data from photos pretty easily, which we could then use to populate the location portion of the post form. I’m currently reading through docs on looking up places via lat/lng, since the actual image location might be different from the observation location.

I know that we’ve talked in the past about a post having its own location (it does), with that location being geared more toward the Google place name, and the observation locations being the actual location the photo was taken. I’m thinking we update the location part of the form to set the Post’s generic location, and then silently (if we can) get the real lat/lng from photos. We could flag images that don’t have the needed EXIF data with “needs location” or something for processing on the backend.

@bradyswenson Just pinging you for your thoughts.

Updated 19/08/2017 16:00

ACCOUNT RealdebridCom: Could not login user | Login handshake has failed

pyload/pyload

I get the following error:

19.08.2017 11:21:52 INFO ACCOUNT RealdebridCom: Adding user ozo...
19.08.2017 11:21:52 DEBUG ACCOUNT RealdebridCom: Reached login timeout for user ozo
19.08.2017 11:21:52 INFO ACCOUNT RealdebridCom: Login user ozo...
19.08.2017 11:21:52 DEBUG ACCOUNT RealdebridCom: LOAD URL https://api.real-debrid.com/rest/1.0/user | redirect=True | cookies=True | get={'auth_token': '**********'} | req=None | decode=True | multipart=False | post={} | ref=True | just_header=False
19.08.2017 11:21:53 ERROR ACCOUNT RealdebridCom: Could not login user ozo | Login handshake has failed
Traceback (most recent call last):
  File "/opt/pyload/pyLoadCore.py", line 667, in <module>
    main()
  File "/opt/pyload/pyLoadCore.py", line 658, in main
    pyload_core.start()
  File "/opt/pyload/pyLoadCore.py", line 433, in start
    self.accountManager.getAccountInfos()
  File "/opt/pyload/module/utils.py", line 165, in new
    return func(*args)
  File "/opt/pyload/module/plugins/AccountManager.py", line 176, in getAccountInfos
    data[p.__name__] = p.getAllAccounts(force)
  File "/opt/pyload/module/plugins/internal/misc.py", line 230, in new
    return fn(*args, **kwargs)
  File "/opt/pyload/module/plugins/internal/Account.py", line 279, in getAllAccounts
    self.init_accounts()  # @TODO: Recheck in 0.4.10
  File "/opt/pyload/module/plugins/internal/misc.py", line 230, in new
    return fn(*args, **kwargs)
  File "/opt/pyload/module/plugins/internal/Account.py", line 267, in init_accounts
    self.add(user, info['password'], info['options'])
  File "/opt/pyload/module/plugins/internal/misc.py", line 230, in new
    return fn(*args, **kwargs)
  File "/opt/pyload/module/plugins/internal/Account.py", line 313, in add
    result = u['plugin'].choose(user)
  File "/opt/pyload/module/plugins/internal/misc.py", line 230, in new
    return fn(*args, **kwargs)
  File "/opt/pyload/module/plugins/internal/Account.py", line 430, in choose
    self.relogin()
  File "/opt/pyload/module/plugins/internal/Account.py", line 177, in relogin
    return self.login()
  File "/opt/pyload/module/plugins/internal/Account.py", line 118, in login
    self.info['data'])
  File "/opt/pyload/module/plugins/accounts/RealdebridCom.py", line 56, in signin
    self.fail_login()
  File "/opt/pyload/module/plugins/internal/Account.py", line 446, in fail_login
    return self.fail(msg)
  File "/opt/pyload/module/plugins/internal/Plugin.py", line 158, in fail
    raise Fail(encode(msg))  # @TODO: Remove encode in 0.4.10
Fail: Login handshake has failed

Updated 20/08/2017 02:54 5 Comments

huge slowdown compared to parsec

mrkkrp/megaparsec

I switched from parsec to megaparsec and noticed a severe slowdown. This is probably my fault, since I have a somewhat complicated setup (I parse a stream of tokens, not characters) and have written my own Stream instance. The Stream instance is here. A comparison of the slowdown can be seen here for megaparsec and here for parsec. Here is an overview of the profiling: https://gist.github.com/LeanderK/bf471e4cbf0d049f99976ad41ded7882

What has changed? I switched from parsec to megaparsec and replaced L with Located (in Position).

The parser is not very optimised yet (many try-statements etc.); I first want to regain the speed I had with parsec and then start working on optimising other parts.

Updated 20/08/2017 07:26 1 Comments

Question about key generation

riboseinc/rnp

@dewyatt, as far as I can see you’re the last one who touched this code, so I think this question is for you, but I may be wrong.

Let’s take this sample of code:

bool
pgp_generate_primary_key(rnp_keygen_primary_desc_t *desc,
                         bool                       merge_defaults,
                         pgp_key_t *                primary_sec,
                         pgp_key_t *                primary_pub,
                         pgp_seckey_t *             decrypted_seckey)
{
    ....
    // generate the raw key pair
    if (!pgp_generate_seckey(&desc->crypto, &seckey)) {
        goto end;
    }

    // write the secret key, userid, and self-signature
    if (!pgp_setup_memory_write(NULL, &output, &mem, 4096)) {
        goto end;
    }
    if (!pgp_write_struct_seckey(
          PGP_PTAG_CT_SECRET_KEY, &seckey, desc->crypto.passphrase, output) ||
        !pgp_write_struct_userid(output, desc->cert.userid) ||
        !pgp_write_selfsig_cert(output, &seckey, desc->crypto.hash_alg, &desc->cert)) {
        RNP_LOG("failed to write out generated key+sigs");
        goto end;
    }
    // load the secret key back in
    if (!load_generated_key(&output, &mem, primary_sec)) {
        goto end;
    }
    ...
    return ok;
}

Here the code generates a new key, writes it as a PGP key to memory, and afterwards loads it back from memory. I feel there might be a security reason to keep the time during which the secret numbers sit around unprotected as short as possible. Am I right?

If yes, this breaks G10 storage, because it requires the unencrypted numbers for storing.

So, I see two possible ways to fix this issue:

  1. Keep the generated key unprotected in memory until it is saved by the key store.
  2. G10 should know that the key might be encrypted by PGP, and it should decrypt it before saving.

I vote for the first option, because the last one requires one more password input for encrypting.

Have you got anything in mind about this?

cc: @ronaldtse

Updated 19/08/2017 16:56 1 Comments

imprecise result in 'search package files' option

excalibur1234/pacui

In the devel branch, I have just rewritten the “search package files” option of pacui.

The code is now more complicated and I use more files (a.k.a. the code is uglier), but I managed to get rid of a long awk command. This resulted in a dramatic performance increase. For example, say you want to search for files ending with “config” and search for “config$”: on my system, pacui 1.6 needs longer than 2 min to display the result (15135 lines), while the rewritten code needs less than 10 sec to display the result (15219 lines).

Pros:

  • dramatic performance increase!
  • gets rid of a long and complicated awk command

Cons:

  • more and uglier code
  • the result shows duplicate entries. To be precise, the “package files in system repositories” section of the results sometimes shows the same results as the “local package files” section.

In my opinion, showing duplicate results (in some cases) is acceptable when I get such a performance increase in return. What do you think?

Updated 19/08/2017 14:58

Lower performance compared with the paper

bluemonk482/tdparse

Hi, I ran the code to reproduce the results on the data of Dong et al. I found that the results reported in the paper are different from (better than) the results I got. I downloaded the data from their official link and moved it to the data directory. I also used TweeboParser to get the lidong.train.conll and lidong.test.conll files.

Then, I ran the run.sh script to do CV. The command used is:

./run.sh lidong tdparse liblinear scale,tune,pred ../data/lidong/parses/lidong.train.conll ../data/lidong/parses/lidong.test.conll

The results are:

extracting features for training
(6248, 3600) 0
Parse source: ../data/lidong/parses/lidong.train.conll
extracting features for testing
(692, 3600) 0
Parse source: ../data/lidong/parses/lidong.test.conll

---Feature scaling
Scaling features
---Parameter tuning
When C=1e-05, acc is 0.531360, 2-class-f1 is 0.097125 and 3-class-f1 is 0.293077
When C=3e-05, acc is 0.601280, 2-class-f1 is 0.346280 and 3-class-f1 is 0.470578
When C=5e-05, acc is 0.634240, 2-class-f1 is 0.448400 and 3-class-f1 is 0.543794
When C=7e-05, acc is 0.651680, 2-class-f1 is 0.497264 and 3-class-f1 is 0.579096
When C=9e-05, acc is 0.657280, 2-class-f1 is 0.518538 and 3-class-f1 is 0.593594
When C=0.0001, acc is 0.661280, 2-class-f1 is 0.529563 and 3-class-f1 is 0.601490
When C=0.0003, acc is 0.683520, 2-class-f1 is 0.585372 and 3-class-f1 is 0.642016
When C=0.0005, acc is 0.686720, 2-class-f1 is 0.594281 and 3-class-f1 is 0.648487
When C=0.0007, acc is 0.688000, 2-class-f1 is 0.600096 and 3-class-f1 is 0.652133
When C=0.0009, acc is 0.688480, 2-class-f1 is 0.602174 and 3-class-f1 is 0.653542
When C=0.001, acc is 0.686560, 2-class-f1 is 0.601640 and 3-class-f1 is 0.652392
When C=0.003, acc is 0.682240, 2-class-f1 is 0.602399 and 3-class-f1 is 0.651145
When C=0.005, acc is 0.679680, 2-class-f1 is 0.604536 and 3-class-f1 is 0.650880
When C=0.007, acc is 0.674720, 2-class-f1 is 0.601054 and 3-class-f1 is 0.646703
When C=0.009, acc is 0.671200, 2-class-f1 is 0.598069 and 3-class-f1 is 0.643595
When C=0.01, acc is 0.669440, 2-class-f1 is 0.596605 and 3-class-f1 is 0.641960
When C=0.03, acc is 0.651520, 2-class-f1 is 0.580306 and 3-class-f1 is 0.625306
When C=0.05, acc is 0.641600, 2-class-f1 is 0.573112 and 3-class-f1 is 0.616739
When C=0.07, acc is 0.635840, 2-class-f1 is 0.569867 and 3-class-f1 is 0.612235
When C=0.09, acc is 0.629280, 2-class-f1 is 0.563507 and 3-class-f1 is 0.605964
When C=0.1, acc is 0.626400, 2-class-f1 is 0.560738 and 3-class-f1 is 0.603180
When C=0.3, acc is 0.608960, 2-class-f1 is 0.544979 and 3-class-f1 is 0.587095
When C=0.5, acc is 0.601440, 2-class-f1 is 0.535666 and 3-class-f1 is 0.579105
When C=0.7, acc is 0.600480, 2-class-f1 is 0.533680 and 3-class-f1 is 0.577613
When C=0.9, acc is 0.591840, 2-class-f1 is 0.531436 and 3-class-f1 is 0.572010
When C=1.0, acc is 0.595040, 2-class-f1 is 0.532123 and 3-class-f1 is 0.573961
When C=3.0, acc is 0.588960, 2-class-f1 is 0.528539 and 3-class-f1 is 0.569136
When C=5.0, acc is 0.589440, 2-class-f1 is 0.529499 and 3-class-f1 is 0.569597
When C=7.0, acc is 0.588480, 2-class-f1 is 0.527653 and 3-class-f1 is 0.568528
When C=9.0, acc is 0.589280, 2-class-f1 is 0.528592 and 3-class-f1 is 0.569407

Five-fold CV on ../data/lidong/output/train.scale, the best accuracy is 0.688480 at c=0.000900
---Model fitting and prediction
Macro-F1 score: 0.662785519652
Accuracy score: 0.695086705202
Macro-F1 score (2 classes): 0.614282718121

Five-fold CV on ../data/lidong/output/train.scale, the best 3classf1 is 0.653542 at c=0.000900
---Model fitting and prediction
Macro-F1 score: 0.662785519652
Accuracy score: 0.695086705202
Macro-F1 score (2 classes): 0.614282718121

Five-fold CV on ../data/lidong/output/train.scale, the best 2classf1 is 0.604536 at c=0.005000
---Model fitting and prediction
Macro-F1 score: 0.677383882802
Accuracy score: 0.702312138728
Macro-F1 score (2 classes): 0.636346094474

In the paper, the accuracy and 3-class macro-F1 score are 72.5 and 70.3 respectively on the data of Dong et al. However, I got 70.2 and 67.7 as shown above.

Is there anything wrong or missing in my attempt?

Many thanks

Updated 19/08/2017 21:57 1 Comments

number data type implements or not a Number type?

Microsoft/TypeScript

I'm writing a detailed tutorial for TypeScript for its language segment, but there is one thing that I cannot understand: does the data type t implement the data type T, or is the data type t merely compatible with the data type T? (t === number, string, boolean, symbol, object) (T === Number, String, Boolean, Symbol, Object)

Updated 19/08/2017 16:05 2 Comments

Why is spack re-installing a second CMake?

LLNL/spack

If I run an explicit install of a CMake version, and then a following install of a package built with CMake, why does spack install yet another CMake?

e.g.

spack install cmake@3.7.2
spack install pngwriter@0.6.0
# builds cmake@3.9.0 first

This looks illogical to me, since the pngwriter package makes no claim about a minimum version of CMake. Should it?
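If pngwriter should state a requirement, a minimal sketch of what that could look like in its package.py (the version bound here is a hypothetical guess, not pngwriter's documented minimum; depends_on with type='build' is the standard Spack way to declare a build dependency):

# Sketch of an addition to the existing pngwriter package.py; only the depends_on line is new,
# and the '@3.7:' lower bound is assumed for illustration.
from spack import *

class Pngwriter(CMakePackage):
    # ... existing versions, variants, etc. stay as they are ...

    depends_on('cmake@3.7:', type='build')

Note that a constraint alone may not stop the rebuild: the concretizer generally picks the newest CMake allowed rather than an already-installed one, so reusing the existing install is usually requested explicitly, e.g. spack install pngwriter@0.6.0 ^cmake@3.7.2, or via a packages.yaml preference.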

Updated 20/08/2017 00:36 3 Comments

Deprecation of scoring functions

ryanmcgreevy/ModelMaker

Rosetta has dropped default support for older scoring functions in newer builds/weekly releases.

We’re using talaris2013 as the scoring function when using densities. In my build, I get an error and Rosetta tells me to add another flag to the command to still use that scoring function.

I suggest we should use the current default scoring function from now on, which is ref2015.

Updated 19/08/2017 13:39 1 Comments

errNum=13 errorCode=18 cannot download, requesting help

Mapaler/PixivUserBatchDownload

Creating the task works fine, but errors occur during the download and all images fail. Windows 10 system; the console output is as follows:

08/19 21:00:48 [NOTICE] Download GID#59bfbfa43640c0a9 not complete: D:/PixivDownload//53872504_p0.jpg

08/19 21:00:48 [ERROR] CUID#27 - Download aborted. URI=https://i.pximg.net/img-original/img/2015/12/04/09/27/45/53872482_p0.jpg
Exception: [AbstractCommand.cc:403] errorCode=18 URI=https://i.pximg.net/img-original/img/2015/12/04/09/27/45/53872482_p0.jpg
-> [RequestGroup.cc:760] errorCode=18 Download aborted.
-> [util.cc:1597] errNum=13 errorCode=18 Failed to make the directory D:/PixivDownload/, cause: Permission denied

I'm a newbie and at a loss; please help.

Updated 20/08/2017 03:05 2 Comments

Using traversals in L.pick

calmm-js/partial.lenses

I am trying to map a data structure received from one API to a format that would be suitable for another API. L.pick works well when using lenses with a single focus, but in my case I would also need to be able to pick properties from list items. Consider the following example:

const data = {
  a: [
    {b: 1, c: 1},
    {b: 2, c: 2},
    {b: 3, c: 3}
  ],
  d: {e: 'a'}
}

L.get(
  L.pick({
    stuff: L.pick({
      b: ['a', L.elems, 'b'] // L.elems does not work here
    }),
    otherStuff: ['d', 'e']
  }), data)

const desiredResult = {
  stuff: [
    {b: 1},
    {b: 2},
    {b: 3},
  ],
  otherStuff: 'a'
}

The example will not run, because L.get throws the following error: partial.lenses: `elems` requires an applicative. Given: { map: [Function: sndU] }

I do realize that L.pick expects lenses to be passed in and thus this use of the API is invalid. Anyhow being able to transform data structures in this manner would be very useful imho.

Updated 20/08/2017 04:44 1 Comments

Pausing Pushback?

skiselkov/BetterPushbackC

Hey, as ATC I often run into situations where I have to cancel or pause a pushback because of traffic using the wrong taxiways or something else. Is there a possibility to add that feature? For example, when you set the parking brake back to on?

Thanks :)

Updated 19/08/2017 12:47 2 Comments

spack load: default module system

LLNL/spack

I searched the docs and etc/ but did not find a way to do the following:

When running spack load and only having lmod configured (in modules.yaml), is there a way to configure spack load -m lmod by default so I don’t need to pass it? Otherwise it fails, complaining tcl is not available:

$ spack load cmake
spack module loads: error: argument -m/--module-type: invalid choice: 'tcl' choose from: lmod
$ spack module loads cmake
# output ok, but does not load the module env
$ spack load -m lmod cmake
# nope, not an option
$ spack module loads -m lmod cmake
# output ok, but does not load the module env

Update: oh wait, that does not seem to work with lmod at all… (I used module load directly with lmod before).

Updated 19/08/2017 14:23 1 Comments

'pip' is not recognized as an internal or external command

ritiek/spotify-downloader


  • [x] Using latest version as provided on the master branch

  • [x] Searched for similar issues including closed ones

What is the purpose of your issue?

  • [x] Script won’t run
  • [ ] Encountered bug
  • [ ] Feature request
  • [ ] Question
  • [ ] Other

System information

  • Your python version: python 3.5.2
  • Your operating system: Windows 10

Description


So I followed the instructions: I downloaded Python 3.5.2 and extracted the zip file from the master branch. I also downloaded FFmpeg from the link given and put the .exe file in my System32 folder.

Then I opened a normal Windows CMD, typed in pip install -u -r requirements.txt and got this:

C:\Users\Dance>pip install -u -r requirements.txt
'pip' is not recognized as an internal or external command,
operable program or batch file.

What am I doing wrong?


Updated 19/08/2017 16:16 10 Comments

Separate sensor.['armWarn'] for ARMAWAY and ARMHOME

allan-gam/ideAlarm

For each sensor in the configuration file, there is an ['armWarn'] element. The wiki says:

armWarn: Boolean. Can be set to false if you wish to exclude the sensor from being checked when arming a zone with tripped sensors. Default value is true.

As it works now, if armWarn is set to true for a sensor, you will get a warning telling you that the sensor is tripped when arming a zone. Furthermore, if canArmWithTrippedSensors is set to false for the zone, the arming attempt is refused.

That also means that if armWarn is set to false for a sensor you won’t get a warning telling you if the sensor is tripped when arming a zone. Furthermore, even if canArmWithTrippedSensors is set to false, the sensor’s armWarn setting overrides the zone’s canArmWithTrippedSensors and arming can be done.

My thoughts today are the following:

  1. Is the sensor configuration element armWarn named in a way that can be misleading? The setting certainly affects not only warnings but also the ability to block an arming attempt (e.g. bypassing the zone's canArmWithTrippedSensors setting). Suggestions from native English speakers for a better name for the armWarn element are welcome!

  2. Should we have different armWarn settings for “Arming Home” and “Arming Away”? For example, I have a bedroom window that I want to keep a bit open at night when I normally Arm Home. I don't wish to have any warning or blocking when I Arm Home. However, if I Arm Away, I'd certainly like the arming attempt to be blocked. (See the sketch after this list.)
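Purely as a discussion aid for point 2, a hypothetical sketch of a per-mode setting; the zone/sensor structure shown here is illustrative, only armWarn and canArmWithTrippedSensors come from the current configuration, and the armWarnHome/armWarnAway names are invented:

# Hypothetical configuration sketch only; armWarnHome/armWarnAway do not exist yet.
ALARM_ZONES = [
    {
        'name': 'House',                      # illustrative zone entry
        'canArmWithTrippedSensors': False,
        'sensors': [
            {
                'name': 'Bedroom_Window_Contact',
                # today: 'armWarn': True,
                'armWarnHome': False,  # open window is fine when arming Home: no warning, no block
                'armWarnAway': True,   # arming Away with it tripped should warn and be refused
            },
        ],
    },
]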

Updated 19/08/2017 11:48

adding package description to list of packages

excalibur1234/pacui

I have just added package descriptions to func_i (in the devel branch). (You have to delete the /tmp/pacui-packages-install file before you can test the new func_i.)

Pros:

  • you can now search for the package name and package description (for packages from the system repositories). You were able to do the same before pacui version 1.6.

Cons:

  • I have not found an easy and fast method to download the package descriptions of all AUR packages. This means package descriptions for AUR packages cannot be searched with the current layout of pacui's code structure.

How do you like this?

Updated 19/08/2017 14:58

dmenu freeze

way-cooler/way-cooler

A few times across 6.1 and 6.2 I have experienced a sudden random freeze when launching apps from dmenu: everything blocks, I can't switch TTY, can't move the cursor, can't type anything.

This is all I got from log: DEBUG [layout::commands] src/layout/commands.rs:374 Layout.SwitchWorkspace("2") TRACE [layout::actions::workspace] src/layout/actions/workspace.rs:79 Switching to workspace 2 TRACE [layout::actions::workspace] src/layout/actions/workspace.rs:44 Adding workspace Workspace { name: "2", geometry: Geometry { origin: Point { x: 0, y: 0 }, size: Size { w: 1920, h: 1080 } }, fullscreen_c: [], id: Uuid("1f0ed350-ff38-45bf-9d0d-397537c8f9b6") } TRACE [layout::core::graph_tree] src/layout/core/graph_tree.rs:508 Normalized edge weights for: NodeIndex(1) INFO [layout::core::graph_tree] src/layout/core/graph_tree.rs:240 Added new child NodeIndex(5) for Workspace { name: "2", geometry: Geometry { origin: Point { x: 0, y: 0 }, size: Size { w: 1920, h: 1080 } }, fullscreen_c: [], id: Uuid("1f0ed350-ff38-45bf-9d0d-397537c8f9b6") } TRACE [layout::core::graph_tree] src/layout/core/graph_tree.rs:508 Normalized edge weights for: NodeIndex(5) INFO [layout::core::graph_tree] src/layout/core/graph_tree.rs:240 Added new child NodeIndex(6) for Container { layout: Horizontal, floating: false, fullscreen: false, output_handle: WlcOutput { handle: 1, name: "eDP-1", views: [WlcView { handle: 2, title: "Code School Free Weekend | Code School - Mozilla Firefox", class: "Firefox" }] }, apparent_geometry: Geometry { origin: Point { x: 0, y: 0 }, size: Size { w: 1920, h: 1080 } }, geometry: Geometry { origin: Point { x: 0, y: 0 }, size: Size { w: 1920, h: 1080 } }, id: Uuid("f785e646-b443-4cc9-a433-173b9bec5df3"), borders: Some(Borders { title: "", surface: ImageSurface(Surface(0x7ffb471d0080)), geometry: Geometry { origin: Point { x: -8, y: -8 }, size: Size { w: 1928, h: 1088 } }, output: WlcOutput { handle: 1, name: "eDP-1", views: [WlcView { handle: 2, title: "Code School Free Weekend | Code School - Mozilla Firefox", class: "Firefox" }] }, color: None, title_color: None, title_font_color: None }) } TRACE [layout::actions::workspace] src/layout/actions/workspace.rs:188 Focusing on next container TRACE [layout::actions::focus] src/layout/actions/focus.rs:260 Active container set to container NodeIndex(6) INFO [layout::core::tree] src/layout/core/tree.rs:231 Active container was 6, is now 6 TRACE [callbacks] src/callbacks.rs:61 view_focus: WlcView { handle: 2, title: "Code School Free Weekend | Code School - Mozilla Firefox", class: "Firefox" } false INFO [modes::default] src/modes/default.rs:256 [key] Found an action for Keypress(MOD_MOD4, Some("Return")), blocking event TRACE [lua::thread] src/lua/thread.rs:212 Handling a request TRACE [lua::thread] src/lua/thread.rs:299 Lua: handling keypress Keypress(MOD_MOD4, Some("Return")) TRACE [lua::thread] src/lua/thread.rs:309 Handled keypress okay. TRACE [lua::thread] src/lua/thread.rs:201 Lua: awaiting request Read 36 .desktop files, found 24 apps. 
TRACE [callbacks] src/callbacks.rs:49 view_created: WlcView { handle: 3, title: "", class: "" }: "" TRACE [layout::commands] src/layout/commands.rs:423 Adding view: WlcView { handle: 3, title: "", class: "" } w/ bit: 1 has parent: false title: "" class: "" appid: "" TRACE [layout::core::graph_tree] src/layout/core/graph_tree.rs:508 Normalized edge weights for: NodeIndex(6) INFO [layout::core::graph_tree] src/layout/core/graph_tree.rs:240 Added new child NodeIndex(7) for View { handle: WlcView { handle: 3, title: "", class: "" }, floating: false, effective_geometry: Geometry { origin: Point { x: 0, y: 0 }, size: Size { w: 1920, h: 19 } }, id: Uuid("0db27119-a620-4132-884f-6367d79ccfbd"), borders: None } INFO [layout::core::tree] src/layout/core/tree.rs:220 Active container was 6, is now 7 INFO [layout::core::tree] src/layout/core/tree.rs:231 Active container was 6, is now 7 TRACE [callbacks] src/callbacks.rs:61 view_focus: WlcView { handle: 3, title: "", class: "" } true TRACE [callbacks] src/callbacks.rs:99 View WlcView { handle: 3, title: "", class: "" } requested geometry Geometry { origin: Point { x: 0, y: 0 }, size: Size { w: 1920, h: 19 } } TRACE [callbacks] src/callbacks.rs:55 view_destroyed: WlcView { handle: 3, title: "", class: "" } TRACE [layout::core::graph_tree] src/layout/core/graph_tree.rs:508 Normalized edge weights for: NodeIndex(6) TRACE [layout::actions::focus] src/layout/actions/focus.rs:260 Active container set to container NodeIndex(6) INFO [layout::core::tree] src/layout/core/tree.rs:220 Active container was not set, is now 6 INFO [layout::core::tree] src/layout/core/tree.rs:231 Active container was not set, is now 6 User input is: telegram/desktop /bin/bash -i -c 'telegram/desktop' TRACE [layout::core::tree] src/layout/core/tree.rs:590 Removed container Ok(View { handle: WlcView { handle: 3, title: "", class: "" }, floating: true, effective_geometry: Geometry { origin: Point { x: 0, y: 0 }, size: Size { w: 1920, h: 19 } }, id: Uuid("0db27119-a620-4132-884f-6367d79ccfbd"), borders: None }), index NodeIndex(7) TRACE [layout::commands] src/layout/commands.rs:479 Removed container WlcView { handle: 3, title: "", class: "" } with id Uuid("0db27119-a620-4132-884f-6367d79ccfbd") bash: telegram/desktop: No such file or directory INFO [modes::default] src/modes/default.rs:256 [key] Found an action for Keypress(MOD_MOD4, Some("Return")), blocking event TRACE [lua::thread] src/lua/thread.rs:212 Handling a request TRACE [lua::thread] src/lua/thread.rs:299 Lua: handling keypress Keypress(MOD_MOD4, Some("Return")) TRACE [lua::thread] src/lua/thread.rs:309 Handled keypress okay. TRACE [lua::thread] src/lua/thread.rs:201 Lua: awaiting request Read 36 .desktop files, found 24 apps. 
TRACE [callbacks] src/callbacks.rs:49 view_created: WlcView { handle: 3, title: "", class: "" }: "" TRACE [layout::commands] src/layout/commands.rs:423 Adding view: WlcView { handle: 3, title: "", class: "" } w/ bit: 1 has parent: false title: "" class: "" appid: "" TRACE [layout::core::graph_tree] src/layout/core/graph_tree.rs:508 Normalized edge weights for: NodeIndex(6) INFO [layout::core::graph_tree] src/layout/core/graph_tree.rs:240 Added new child NodeIndex(7) for View { handle: WlcView { handle: 3, title: "", class: "" }, floating: false, effective_geometry: Geometry { origin: Point { x: 0, y: 0 }, size: Size { w: 1920, h: 19 } }, id: Uuid("b72293fd-c2fe-4b27-b16c-5e4d8a46aba7"), borders: None } INFO [layout::core::tree] src/layout/core/tree.rs:220 Active container was 6, is now 7 INFO [layout::core::tree] src/layout/core/tree.rs:231 Active container was 6, is now 7 TRACE [callbacks] src/callbacks.rs:61 view_focus: WlcView { handle: 3, title: "", class: "" } true TRACE [callbacks] src/callbacks.rs:99 View WlcView { handle: 3, title: "", class: "" } requested geometry Geometry { origin: Point { x: 0, y: 0 }, size: Size { w: 1920, h: 19 } } TRACE [callbacks] src/callbacks.rs:55 view_destroyed: WlcView { handle: 3, title: "", class: "" } TRACE [layout::core::graph_tree] src/layout/core/graph_tree.rs:508 Normalized edge weights for: NodeIndex(6) TRACE [layout::actions::focus] src/layout/actions/focus.rs:260 Active container set to container NodeIndex(6) INFO [layout::core::tree] src/layout/core/tree.rs:220 Active container was not set, is now 6 INFO [layout::core::tree] src/layout/core/tree.rs:231 Active container was not set, is now 6 User input is: telegram-desktop /bin/bash -i -c 'telegram-desktop' Froze here.

Updated 19/08/2017 23:14 4 Comments

trouble with HBL water dynamics analysis

MDAnalysis/mdanalysis

Expected behaviour

To get the hydrogen bond lifetimes of water in an enzyme active site from a trajectory XTC file I used waterdynamics.

Actual behaviour

After running the analysis I get a warning (below); after that the process seems correct. Finally I print the results, but all are zero. I tested with different selections but it's the same. How can I manage this issue? Do I have to specify donor and acceptor indices?

HBL_analysis.run()
/anaconda/envs/mdaenv/lib/python2.7/site-packages/MDAnalysis/analysis/hbonds/hbond_analysis.py:594: DeprecationWarning: The donor and acceptor indices being 1-based is deprecated in favor of a zero-based index. These can be accessed by 'donor_index' or 'acceptor_index', removal of the 1-based indices is targeted for version 0.17.0
  " version 0.17.0", category=DeprecationWarning)
/anaconda/envs/mdaenv/lib/python2.7/site-packages/MDAnalysis/analysis/hbonds/hbond_analysis.py:718: SelectionWarning: No donors found in selection 1. You might have to specify a custom 'donors' keyword. Selection will update so continuing with fingers crossed.
  warnings.warn(errmsg, category=SelectionWarning)
/anaconda/envs/mdaenv/lib/python2.7/site-packages/MDAnalysis/analysis/hbonds/hbond_analysis.py:718: SelectionWarning: No acceptors found in selection 1. You might have to specify a custom 'acceptors' keyword. Selection will update so continuing with fingers crossed.
  warnings.warn(errmsg, category=SelectionWarning)
/anaconda/envs/mdaenv/lib/python2.7/site-packages/MDAnalysis/analysis/hbonds/hbond_analysis.py:718: SelectionWarning: No acceptors found in selection 2. You might have to specify a custom 'acceptors' keyword. Selection will update so continuing with fingers crossed.
  warnings.warn(errmsg, category=SelectionWarning)
/anaconda/envs/mdaenv/lib/python2.7/site-packages/MDAnalysis/analysis/hbonds/hbond_analysis.py:718: SelectionWarning: No donors found in selection 2. You might have to specify a custom 'donors' keyword. Selection will update so continuing with fingers crossed.
  warnings.warn(errmsg, category=SelectionWarning)
HBonds frame  5000:  5001/5001 [100.0%]

Code to reproduce the behaviour

import numpy as np
import MDAnalysis
from MDAnalysis.analysis.waterdynamics import HydrogenBondLifetimes as HBL
u = MDAnalysis.Universe('ref_2ypi.pdb', '2ypi_center.xtc')

water = "byres name SOL and sphzone 6.0 protein"
activesite = "resid 12 171 210 212 230 232 233 234"

HBL_analysis = HBL(u, water, activesite, 0, 5000, 30)
HBL_analysis.run()
time = 0

for HBLc, HBLi in HBL_analysis.timeseries:
    print("{time} {HBLc} {time} {HBLi}".format(time=time, HBLc=HBLc, HBLi=HBLi))
    time += 1
0 1.0 0 0.0
1 0.0 1 0.0
2 0.0 2 0.0
3 0.0 3 0.0
4 0.0 4 0.0
5 0.0 5 0.0
6 0.0 6 0.0
7 0.0 7 0.0
8 0.0 8 0.0
9 0.0 9 0.0
10 0.0 10 0.0
11 0.0 11 0.0
12 0.0 12 0.0
13 0.0 13 0.0
14 0.0 14 0.0
15 0.0 15 0.0
16 0.0 16 0.0
17 0.0 17 0.0
18 0.0 18 0.0
19 0.0 19 0.0
20 0.0 20 0.0
21 0.0 21 0.0
22 0.0 22 0.0
23 0.0 23 0.0
24 0.0 24 0.0
25 0.0 25 0.0
26 0.0 26 0.0
27 0.0 27 0.0
28 0.0 28 0.0
29 0.0 29 0.0
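Not a fix, but given the "No donors/acceptors found" warnings above, the all-zero timeseries most likely just means the selections never matched the default donor/acceptor atom names. A small diagnostic sketch (reusing the universe and selection strings from the code above) to see what the selections actually contain:

# Diagnostic only: inspect what the two selections resolve to before rerunning HBL.
water_group = u.select_atoms(water)
site_group = u.select_atoms(activesite)

print("water selection:", water_group.n_atoms, "atoms")
print("water atom names:", sorted(set(water_group.names)))
print("active site selection:", site_group.n_atoms, "atoms")
print("active site atom names:", sorted(set(site_group.names)))

# The default donor/acceptor tables in the hydrogen bond analysis are based on
# CHARMM27-style names (water as OH2/H1/H2), while GROMACS SOL water is usually
# OW/HW1/HW2, so a custom 'donors'/'acceptors' list may indeed be needed if the names differ.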

Current version of MDAnalysis:

0.16.2

Updated 20/08/2017 06:12 1 Comments

Get the parameters defined in a route

skipperbent/simple-php-router

Hi, how can I identify whether a named route has one or multiple parameters?

ex :

SimpleRouter::get('/{query}.html', 'SimpleController@search')
    ->where(['query' => '[A-Za-z0-9-]+'])
    ->name('search');

so I can create, for example:

SimpleRouter::getParams('search');

which would return

return array('query');

If the route doesn't have any parameters, it would return null, etc.

thanks :)

Updated 19/08/2017 12:05 1 Comments
