Contribute to Open Source. Search issue labels to find the right project for you!

resource.raccess.public is set without setting isPublic AVU, so iRODS and Django disagree on public status

hydroshare/hydroshare

@hyi @pkdash @mjstealey @dtarb In issue #2031 @hyi asked why the isPublic AVU doesn’t always match the ResourceAccess.public flag according to check_irods_files. I found out why just now.

In the REST API, the setting of the isPublic AVU is carefully modified to always match the setting of ResourceAccess.public using the private routine hs_core/views/__init__.py:_set_resource_sharing_status.

However, there are several places in the code (hs_core/HydroShare/resource.py, hs_core/views/utils.py, and hs_core/views/__init__.py) in which this routine is not used: ResourceAccess.public is modified, but the AVU is not set to match it. There are several places where ResourceAccess.public is set to False, and one place where it is set to True, and HYRAX updates are not scheduled as documented in _set_resource_sharing_status.
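The fix amounts to funnelling every change of the public flag through a single routine that also writes the AVU. A minimal, self-contained sketch of that invariant (all class and function names here are hypothetical stand-ins, not HydroShare's actual API):

```python
class IrodsStub:
    """Minimal stand-in for the iRODS catalog (its AVU store)."""
    def __init__(self):
        self.avus = {}

    def setAVU(self, name, value):
        self.avus[name] = value


class Resource:
    """Stand-in for the Django-side resource with its public flag."""
    def __init__(self, irods):
        self.public = False
        self._irods = irods


def set_public_flag(resource, value):
    """Single choke point: every change to the Django flag also updates
    the isPublic AVU, so the two stores cannot silently diverge."""
    resource.public = value
    resource._irods.setAVU("isPublic", "true" if value else "false")
```

Any code path that assigns `resource.public` directly, bypassing such a routine, reintroduces the disagreement that check_irods_files reports.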

Will attempt a hotfix for 1.10.0.

Updated 30/04/2017 14:36

[Important] Stop working on non-high-priority library adding issues.

cdnjs/cdnjs

As you guys can see, we have many dropped pull requests in the list:

https://github.com/cdnjs/cdnjs/pulls?utf8=%E2%9C%93&q=is%3Aopen%20is%3Apr%20sort%3Aupdated-asc%20s

The authors of these pull requests simply left and dropped their PRs; whatever the reasons, I think it's time to clean them up! From now on, we'll stop working on non-high-priority issues, focus on the high-priority ones, and take care of the old pull requests.

In the first stage, we target pull requests that are:

  1. not authored by the assignees in this issue;
  2. not in "need response from the upstream" status;
  3. not responded to after two pings from @PeterBot.

Note that the "wait for response" status may not be that reliable, especially if the waiting is longer than a month.

What we are going to do with the dropped pull requests is use new, working pull requests to close the old, dropped ones. We treat these libraries as high-priority issues, do our best to host them, and try to fix the problems manually; since those pull requests have been pending for so long, maybe more than 50% of them have problems that need to be solved by hand. To prevent wasted and overlapping effort, please leave a comment to tell everybody that you're going to pick up and fix a given pull request.

4686 & eee3ed4630aa9151f7d1ceabee51e6538ea862a8 are an example of how we would deal with them.

Suggestions and discussions are very welcome. Thank you all!

Updated 30/04/2017 15:28

LESS CSS + @supports bubbling + escaping

FriendsOfEpub/Blitz

Tried to implement progressive enhancements (branch https://github.com/FriendsOfEpub/Blitz/tree/progressive-enhancements) and, to my surprise, it seems LESS' support for feature queries (@supports) is a little bit "rough around the edges."

To sum up:

  1. bubbling works OK, but for some reason LESS won't compile escaped strings for @supports: unlike with @media (cf. http://lesscss.org/3.x/#escaping), you get the variable name (cf. https://github.com/FriendsOfEpub/Blitz/blob/progressive-enhancements/Blitz_framework/LESS/core/features.less) in the output CSS…
  2. as a result, feature queries have been tightly coupled to mixins (cf. https://github.com/FriendsOfEpub/Blitz/blob/progressive-enhancements/Blitz_framework/LESS/reference/mixins.less#L75), which may sound like a good idea at first, since "hey, you don't even have to manage feature queries, it's automagic!"…
  3. except that in real-case scenarios, it means you can't override values when you need to (say, margins, if you're using flexbox to align vertically); I was stuck about two minutes after starting a template to check the DX.
  4. @supports ends up all over the place, obviously, since one mixin = one feature query, so you can't gather declarations that could be gathered within the same feature query.
  5. when using arguments in the nested feature query (mixin), you get an error. ¯\_(ツ)_/¯

And man, some feature queries are just plain awful (OTF, I'm looking at you). I spent about two hours designing them; it's not humanly possible to type them by hand every time you need them.

Of course, there is practically no documentation about this. I checked and only found that @supports bubbling is supposed to work since 2.5…

Any idea how to get around this temporarily?

Updated 30/04/2017 12:22 2 Comments

IOError crash

LolexOrg/Lolex-Tools

Issue description (if this or anything else isn’t provided, your issue will be closed)

<!-- Write a short description about the issue -->

https://github.com/python/cpython/commit/55fe1ae9708d81b902b6fe8f6590e2a24b1bd4b0

Steps to reproduce the issue (every step, please, and in detail)

<!-- Help us find the problem by adding steps to reproduce the issue -->

IOError and EnvironmentError are no longer used upstream, so the `try ... except IOError` blocks should be updated.
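For context, since Python 3.3 IOError and EnvironmentError are just aliases of OSError, which is what the linked cpython commit reflects; catching OSError covers everything older code caught as IOError. A minimal illustration:

```python
# Since Python 3.3, IOError and EnvironmentError are simply aliases of
# OSError, so catching OSError is the forward-compatible spelling.

def read_first_line(path):
    """Return the first line of a file, or None if it cannot be read."""
    try:
        with open(path) as fh:
            return fh.readline()
    except OSError:  # also catches what used to be spelled IOError
        return None
```

Code that still writes `except IOError` keeps working (it's the same class), but `except OSError` matches what cpython itself now uses.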

OS and versions

<!-- Versions MUST be included -->

  • Lolex Tools: 9.0nann3
  • OS (eg Windows 10): ALL
  • Python Version (eg 3.5.2): 3.7 alpha 1

What error do you get? Include error types, line numbers and filenames

<!-- Paste in the below block in between the backticks -->

NameError

Updated 29/04/2017 15:34

getHeader in MockHttpServletResponse

GoogleCloudPlatform/google-cloud-eclipse

Servlet 3.0 (or 3.1) added getHeader and several other methods to HttpServletResponse that we need to implement in our sample code.

The type MockHttpServletResponse must implement the inherited abstract method HttpServletResponse.getHeader(String) MockHttpServletResponse.java /fred/src/test/java line 19 Java Problem

This broke the project on conversion to Java 8.


@Override
public void setContentLengthLong(long len) {
    // TODO Auto-generated method stub

}

@Override
public int getStatus() {
    // TODO Auto-generated method stub
    return 0;
}

@Override
public String getHeader(String name) {
    // TODO Auto-generated method stub
    return null;
}

@Override
public Collection<String> getHeaders(String name) {
    // TODO Auto-generated method stub
    return null;
}

@Override
public Collection<String> getHeaderNames() {
    // TODO Auto-generated method stub
    return null;
}
Updated 29/04/2017 14:32 2 Comments

Testing suite is frequently broken by rebuilds

Codewars/codewars.com

Yesterday we encountered a situation where the CSS stylesheet fetch returned a 404 error some of the time, resulting in unformatted test output (the horror!). And today it happened again with manifest.js, which caused the entire test screen to go white, breaking the test suite as well as the code autosave functionality:

[screenshots]

Both occurrences happened at roughly the same time of day. Is there any explanation for them? The same thing happening for half an hour or so every day is pretty inconvenient (especially for power users).

Updated 29/04/2017 20:32 8 Comments

Need more testers

LolexOrg/Lolex-Tools

Issue description (if this or anything else isn’t provided, your issue will be closed)

<!-- Write a short description about the issue -->

I can do development but not test anything

Steps to reproduce the issue (every step, please, and in detail)

<!-- Help us find the problem by adding steps to reproduce the issue -->

Run it. Something will crash!!!

OS and versions

<!-- Versions MUST be included -->

  • Lolex Tools: 9.0nann
  • OS (eg Windows 10): All
  • Python Version (eg 3.5.2): All

What error do you get? Include error types, line numbers and filenames

<!-- Paste in the below block in between the backticks -->

OSError, and silly things caused by typos.

Updated 29/04/2017 06:49

Automated Password reset with email update feature

gctools-outilsgc/gcconnex

As public servants move across government and change email addresses frequently and in high numbers, the Helpdesk is receiving high volumes of requests to update users' email addresses and reset their passwords because:

  1. they forgot their password;
  2. they no longer have access to the email address associated with their profile, so they cannot request a password reset through the "forgot password" link.

Users will often omit their old email address (or further details) that would let us easily find their profile, creating additional back-and-forth email between the Helpdesk and the user to make sure we update the right profile. It would be beneficial to have an automated password reset feature that also allows the user to update their email address if it has changed. This could include some type of security question which, once validated, would allow a password reset link to be sent to the user's new email address.

Updated 28/04/2017 13:55

Storing RASH documents via browser

essepuntato/rash

By using the Core layer of RAJE (i.e. by opening a RASH document that includes the editor script with a browser) it should be possible to save the RASH document edited by means of the usual save button of the browser. There are two possible alternatives (in order of preference):

  1. we store the RASH_source HTML by intercepting the save event in the browser and by substituting the HTML page visualised with the RASH_source;
  2. we store the RASH_view (which is visualised) adding a meta tag saying explicitly that the HTML document is a RASH_view file.

In the latter case, RAJE core is expected to recognise the RASH_view file and to convert it back to a RASH_source (storing it in an appropriate variable). If this option is chosen, it should result in appropriately extending ROCS so as to handle the case of submitting RASH_view documents.
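The second alternative could be sketched as a small detector for the proposed meta tag. This is an illustrative sketch only; the meta tag name and content value below are assumptions for illustration, not part of any RASH specification:

```python
# Hypothetical detector for a <meta name="generator-format"
# content="RASH_view"> tag; both the name and the value are assumptions.
from html.parser import HTMLParser


class RashViewDetector(HTMLParser):
    """Scans an HTML document for the hypothetical RASH_view marker tag."""

    def __init__(self):
        super().__init__()
        self.found = False

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            if d.get("name") == "generator-format" and d.get("content") == "RASH_view":
                self.found = True


def is_rash_view(html):
    """Return True if the document declares itself as a RASH_view file."""
    parser = RashViewDetector()
    parser.feed(html)
    return parser.found
```

On a positive match, RAJE core would then run its RASH_view-to-RASH_source conversion before editing proceeds.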

Updated 28/04/2017 13:28

Add thrift@0.10.0 w/ git auto-update

cdnjs/cdnjs

Pull request for issue: #8848 Related issue(s): # #

Checklist to confirm the pull request or lib adding request issue follows the conventions.

Note that if you are using a distribution purpose repository/package, please also provide the url and other related info like popularity of the source code repo/package.

Profile of the lib

  • Git repository (required): https://git-wip-us.apache.org/repos/asf/thrift.git
  • Official website (optional, not the repository): https://thrift.apache.org/
  • NPM package url (optional): https://www.npmjs.com/package/thrift
  • License and its reference: Apache-2.0, ref
  • GitHub / Bitbucket popularity (required):
    • Count of watchers: 355
    • Count of stars: 3552
    • Count of forks: 2060
  • NPM download stats (optional):
    • Downloads in the last day: 1974
    • Downloads in the last week: 9889
    • Downloads in the last month: 45575

Essential checklist

  • [ ] I’m the author of this library
    • [ ] I would like to add link to the page of this library on CDNJS on website / readme
  • [x] This lib was not found on cdnjs repo
  • [x] No already existing / duplicated issue or PR
  • [x] The lib has notable popularity
    • [x] More than 100 [Stars / Watchers / Forks] on [GitHub / Bitbucket]
    • [x] More than 500 downloads stats per month on npm registry
  • [x] Project has a public repository on a famous online hosting platform (or is hosted on npm)

Auto-update checklist

  • [x] Has valid tags for each version (for git auto-update)
  • [x] Auto-update setup
  • [x] Auto-update target/source is valid.
  • [x] Auto-update filemap is correct.

Git commit checklist

  • [x] The first line of the commit message is less than 50 chars, and is clean, clear, and easy to understand.
  • [x] The parent of the commit(s) in the PR is not older than 3 days.
  • [x] Pull request is sent from a non-master branch with a meaningful name.
  • [x] Separate unrelated changes into different commits.
  • [x] Use rebase to squash/fixup dummy/unnecessary commits into only one commit.
  • [x] Close corresponding issue in commit message
  • [x] Mention related issue(s), people in commit message, comment.

close #8848, cc @jeking3

Updated 30/04/2017 11:00 4 Comments

Add node-forge w/ npm auto-update

cdnjs/cdnjs

Pull request for issue: #11152 Related issue(s): # #

Checklist to confirm the pull request or lib adding request issue follows the conventions.

Note that if you are using a distribution purpose repository/package, please also provide the url and other related info like popularity of the source code repo/package.

Profile of the lib

  • Git repository (required): https://github.com/digitalbazaar/forge
  • Official website (optional, not the repository):
  • NPM package url (optional): https://www.npmjs.com/package/node-forge
  • License and its reference: BSD-3-Clause OR GPL-2.0
  • GitHub / Bitbucket popularity (required):
    • Count of watchers: 106
    • Count of stars: 1862
    • Count of forks: 302
  • NPM download stats (optional):
    • Downloads in the last day: 40233
    • Downloads in the last week: 218411
    • Downloads in the last month: 898968

Essential checklist

  • [ ] I’m the author of this library
    • [ ] I would like to add link to the page of this library on CDNJS on website / readme
  • [x] This lib was not found on cdnjs repo
  • [x] No already existing / duplicated issue or PR
  • [x] The lib has notable popularity
    • [x] More than 100 [Stars / Watchers / Forks] on [GitHub / Bitbucket]
    • [x] More than 500 downloads stats per month on npm registry
  • [ ] Project has a public repository on a famous online hosting platform (or is hosted on npm)

Auto-update checklist

  • [x] Has valid tags for each version (for git auto-update)
  • [x] Auto-update setup
  • [x] Auto-update target/source is valid.
  • [x] Auto-update filemap is correct.

Git commit checklist

  • [x] The first line of the commit message is less than 50 chars, and is clean, clear, and easy to understand.
  • [x] The parent of the commit(s) in the PR is not older than 3 days.
  • [x] Pull request is sent from a non-master branch with a meaningful name.
  • [x] Separate unrelated changes into different commits.
  • [x] Use rebase to squash/fixup dummy/unnecessary commits into only one commit.
  • [x] Close corresponding issue in commit message
  • [x] Mention related issue(s), people in commit message, comment.
Updated 30/04/2017 10:11 2 Comments

OpenLDAP certificates are signed with wrong CN

GluuFederation/community-edition-setup

When working on replication issues in 3.0.1, I discovered that the OpenLDAP certificates are signed with the wrong CN. The CN is supposed to be the domain name, but it is localhost. The other certificates are properly signed with domain names.

root@gluu:/etc/certs# openssl x509 -noout -subject -in openldap.crt
subject= /C=IN/ST=TN/L=Chennai/O=Test Organization/CN=localhost/emailAddress=test@example.com

root@gluu:/etc/certs# openssl x509 -noout -subject -in asimba.crt
subject= /C=IN/ST=TN/L=Chennai/O=Test Organization/CN=gluu.example.com/emailAddress=test@example.com
root@gluu:/etc/certs# openssl x509 -noout -subject -in idp-encryption.crt
subject= /C=IN/ST=TN/L=Chennai/O=Test Organization/CN=gluu.example.com/emailAddress=test@example.com
root@gluu:/etc/certs# openssl x509 -noout -subject -in shibIDP.crt
subject= /C=IN/ST=TN/L=Chennai/O=Test Organization/CN=gluu.example.com/emailAddress=test@example.com
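A quick way to flag the mismatch programmatically: the small helper below (an assumption for illustration, not part of Gluu's setup scripts) parses the one-line subject printed by `openssl x509 -noout -subject` and compares the CN against the expected domain.

```python
# Parses OpenSSL's one-line subject format, e.g.
#   subject= /C=IN/ST=TN/.../CN=localhost/emailAddress=test@example.com

def subject_cn(subject_line):
    """Return the CN component of an OpenSSL one-line subject, or None."""
    for field in subject_line.split("/"):
        if field.startswith("CN="):
            return field[len("CN="):]
    return None


def cn_matches(subject_line, expected_domain):
    """True if the certificate's CN equals the deployment's domain name."""
    return subject_cn(subject_line) == expected_domain
```

Running this over the outputs above would flag openldap.crt (CN=localhost) and pass the other three (CN=gluu.example.com).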
Updated 28/04/2017 16:17 4 Comments

template system in BEAUti is broken

CompEvol/beast2

@rbouckaert

The template system in BEAUti seems totally broken. I tested StarBEAST and MultiType Tree.

Two problems:

  1. Load alignments => change the template to StarBEAST => click Yes => BEAUti is broken; for example, if you switch between the Partitions panel and other panels, all data are lost or a "no input" error appears;

  2. Change the template to StarBEAST before loading alignments => Load alignments => BEAUti is broken; for example, the template shown at the top of the GUI still says "Standard", and no data is loaded in the Partitions panel.

Updated 28/04/2017 02:40

[Request] Add angular-mocks

cdnjs/cdnjs

Library name: angular-mocks
Git repository url: https://github.com/angular/bower-angular-mocks
npm package name or url (if there is one): https://www.npmjs.com/package/angular-mocks
License (List them all if it's multiple): MIT
Official homepage:
Wanna say something? Leave message here:


Notes from cdnjs maintainer: Please read the README.md and CONTRIBUTING.md document first.

We encourage you to add a library by sending a pull request; it'll be faster than just opening a request issue. Since there are tons of issues, please wait patiently, and please don't forget to read the guidelines for contributing. Thanks!!

Updated 30/04/2017 14:18 3 Comments

Harvest jobs eventually failing during certificate/proxy check

dmwm/WMCore

Interesting case… the harvesting job fails most of the time during the authentication step [*]. I assume it cannot find my proxy in the X509_USER_PROXY env var, though it's there according to the condor.stdout logs. Even more interesting, it works just fine with DQMHarvest workflows, but mostly fails in the other workflows that have harvesting enabled.

BTW, Matteo has just hit this issue in production as well.

[*]

INFO:root:HTTP Upload is about to start:
 => URL: https://cmsweb-testbed.cern.ch/dqm/dev;https://cmsweb.cern.ch/dqm/relval-test
 => Filename: /storage/local/data1/condor/execute/dir_3057657/glide_GVbWPk/execute/dir_3686180/job/WMTaskSpace/cmsRun1/DQM_V0001_R000202209JetHTCMSSW_7_2_0-RECODreHLT_TaskChain_LumiMask_multiRun_HG1705_Validation_TEST_Alan_v5-v11-202205-202209__DQMIO.root

INFO:root:HTTP POST upload arguments:
 ==> checksum: md5:bff80c3eafc583b07b48c926edb66aed
 ==> size: 63537411

ERROR:root:HTTP upload failed with response: Problem unknown. Error: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:662)>
Traceback (most recent call last):
  File "/storage/local/data1/condor/execute/dir_3057657/glide_GVbWPk/execute/dir_3686180/job/WMCore.zip/WMCore/WMSpec/Steps/Executors/DQMUpload.py", line 161, in httpPost
    (headers, data) = self.upload(uploadURL, args, filename)
  File "/storage/local/data1/condor/execute/dir_3057657/glide_GVbWPk/execute/dir_3686180/job/WMCore.zip/WMCore/WMSpec/Steps/Executors/DQMUpload.py", line 255, in upload
    result = urllib2.build_opener(HTTPSCertAuthenticate()).open(datareq)
  File "/cvmfs/cms.cern.ch/COMP/slc6_amd64_gcc493/external/python/2.7.13/lib/python2.7/urllib2.py", line 429, in open
    response = self._open(req, data)
  File "/cvmfs/cms.cern.ch/COMP/slc6_amd64_gcc493/external/python/2.7.13/lib/python2.7/urllib2.py", line 441, in _open
    'default_open', req)
  File "/cvmfs/cms.cern.ch/COMP/slc6_amd64_gcc493/external/python/2.7.13/lib/python2.7/urllib2.py", line 407, in _call_chain
    result = func(*args)
  File "/storage/local/data1/condor/execute/dir_3057657/glide_GVbWPk/execute/dir_3686180/job/WMCore.zip/WMCore/WMSpec/Steps/Executors/DQMUpload.py", line 241, in default_open
    return self.do_open(HTTPSCertAuth, req)
  File "/cvmfs/cms.cern.ch/COMP/slc6_amd64_gcc493/external/python/2.7.13/lib/python2.7/urllib2.py", line 1198, in do_open
    raise URLError(err)
URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:662)>
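For what it's worth, this class of failure usually means the SSL context was built without the grid CA directory and/or the proxy certificate. The job above uses Python 2's urllib2 and WMCore's own HTTPSCertAuthenticate, so the sketch below is only illustrative, in modern-Python terms; the paths and the wiring are assumptions, not how WMCore actually does it:

```python
# Hedged sketch: point urllib's SSL context at a CA directory and a
# grid proxy file. The default paths are assumptions for illustration.
import os
import ssl
import urllib.request


def make_https_opener(ca_dir=None, proxy_file=None):
    """Build an opener whose SSL context trusts the given CA directory
    and presents the given proxy certificate, when those paths exist."""
    ctx = ssl.create_default_context()  # verification on by default
    if ca_dir and os.path.isdir(ca_dir):
        ctx.load_verify_locations(capath=ca_dir)
    if proxy_file and os.path.isfile(proxy_file):
        # a proxy file typically holds both the certificate and the key
        ctx.load_cert_chain(certfile=proxy_file, keyfile=proxy_file)
    return urllib.request.build_opener(urllib.request.HTTPSHandler(context=ctx))
```

If the CA directory is not loaded, the handshake fails with exactly the CERTIFICATE_VERIFY_FAILED error seen in the log, even when X509_USER_PROXY is set correctly.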

Updated 27/04/2017 15:50

PascalLanguage.java

Aspect26/TrufflePascal

This class extends the TruffleLanguage interface, but it does not really implement its methods, which makes our interpreter unusable in the Truffle virtual machine (i.e. in cooperation with other Truffle-based languages).

The whole class must be reimplemented and refactored.

Updated 27/04/2017 15:30

WiFi Provisioning: drop WiFI AP after getting config command

blynkkk/blynk-library

We're facing provisioning issues with the SparkFun Blynk Board from iOS devices. From the logs, it looks like the board just can't connect:

SSID: freud
Pass: annafreud2013
Auth: 04b3b38ab7bb4ffab779d3275e620d6c
Host: qa.blynk.cc
Port: 8442
Connecting to qa.blynk.cc
Connecting to: freud
..............
Timed out connecting to WiFi.

I've noticed that in such cases, the iPhone is still connected to the board's WiFi AP (e.g. "BlynkMe-BBYR"). This seems to be the issue.

This is not the case with Android, because Android drops the connection to the board after sending the config HTTP request. Unfortunately, there is no way to do the same on iOS.

As an experiment, we tried to provision the board from Android without explicitly dropping the connection to it after the config HTTP request. The result was the same as on iOS: the board couldn't connect to the instructed WiFi network.

Updated 28/04/2017 11:43

Empties window without pre-selection possible

metasfresh/metasfresh-webui-api

Is this a bug or feature request?

Feature Request

What is the current behavior?

Currently, you cannot run the Quick actions "Empties …" without selecting a material receipt candidate line.

Which are the steps to reproduce?

Open the Material Receipt candidates window and try.

What is the expected or desired behavior?

Allow the Quick actions for empties also without preselecting a line. [screenshot]

Updated 28/04/2017 15:32 3 Comments

"Auto-request stable" cannot be disabled once enabled.

fedora-infra/bodhi

Hi, I created an erratum and checked "Auto-request stable"; the next day I changed my mind and wanted to disable this option. So in the WebUI I clicked Edit, unchecked "Auto-request stable", and hit "Save". However, "Auto-request stable" is still enabled.

The temporary workaround is to raise the karma threshold to an insanely high number so the threshold is never reached.

Updated 27/04/2017 13:07

Segmentation fault (Python 3.6.0, Anaconda 4.3.0, Ubuntu 16.04.01)

pytorch/pytorch

Steps to reproduce:

  • Install Ubuntu 16.04.01 LTS
  • Install Anaconda 4.3.0 (https://repo.continuum.io/archive/Anaconda3-4.3.0-Linux-x86_64.sh)
  • Install PyTorch (cpu version) with conda install pytorch torchvision -c soumith
  • Get ResNeXt code with git clone https://github.com/prlz77/ResNeXt.pytorch/
  • Apply this patch to enable running on cpu:
diff --git a/train.py b/train.py
index dc5a31d..ceec980 100644
--- a/train.py
+++ b/train.py
@@ -84,9 +84,9 @@ if __name__ == '__main__':
         test_data = dset.CIFAR100(args.data_path, train=False, transform=test_transform, download=True)
         nlabels = 100
     train_loader = torch.utils.data.DataLoader(train_data, batch_size=args.batch_size, shuffle=True,
-                                               num_workers=args.prefetch, pin_memory=True)
+                                               num_workers=args.prefetch, pin_memory=False)
     test_loader = torch.utils.data.DataLoader(test_data, batch_size=args.test_bs, shuffle=False,
-                                              num_workers=args.prefetch, pin_memory=True)
+                                              num_workers=args.prefetch, pin_memory=False)

     # Init checkpoints
     if not os.path.isdir(args.save):
@@ -109,7 +109,7 @@ if __name__ == '__main__':
         net.train()
         loss_avg = 0.0
         for batch_idx, (data, target) in enumerate(train_loader):
-            data, target = torch.autograd.Variable(data.cuda()), torch.autograd.Variable(target.cuda())
+            data, target = torch.autograd.Variable(data), torch.autograd.Variable(target)

             # forward
             output = net(data)
  • Run with python train.py --ngpu 0 --batch_size 8 data cifar10

Result: Segmentation fault (core dumped)

Debugging info:

ResNeXt.pytorch$ gdb python
GNU gdb (Ubuntu 7.11.1-0ubuntu1~16.04) 7.11.1
Copyright (C) 2016 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from python...done.
(gdb) r train.py --ngpu 0 --batch_size 8 data cifar10
Starting program: /home/alex/anaconda3/bin/python train.py --ngpu 0 --batch_size 8 data cifar10
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Files already downloaded and verified
Files already downloaded and verified
CifarResNeXt (
  (conv_1_3x3): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  (bn_1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True)
  (stage_1): Sequential (
    (stage_1_bottleneck_0): ResNeXtBottleneck (
      (conv_reduce): Conv2d(64, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn_reduce): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)
      (conv_conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=8, bias=False)
      (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)
      (conv_expand): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn_expand): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True)
      (shortcut): Sequential (
        (shortcut_conv): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (shortcut_bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True)
      )
    )
    (stage_1_bottleneck_1): ResNeXtBottleneck (
      (conv_reduce): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn_reduce): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)
      (conv_conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=8, bias=False)
      (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)
      (conv_expand): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn_expand): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True)
      (shortcut): Sequential (
      )
    )
    (stage_1_bottleneck_2): ResNeXtBottleneck (
      (conv_reduce): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn_reduce): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)
      (conv_conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=8, bias=False)
      (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)
      (conv_expand): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn_expand): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True)
      (shortcut): Sequential (
      )
    )
  )
  (stage_2): Sequential (
    (stage_2_bottleneck_0): ResNeXtBottleneck (
      (conv_reduce): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn_reduce): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True)
      (conv_conv): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=8, bias=False)
      (bn): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True)
      (conv_expand): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn_expand): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)
      (shortcut): Sequential (
        (shortcut_conv): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (shortcut_bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)
      )
    )
    (stage_2_bottleneck_1): ResNeXtBottleneck (
      (conv_reduce): Conv2d(512, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn_reduce): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True)
      (conv_conv): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=8, bias=False)
      (bn): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True)
      (conv_expand): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn_expand): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)
      (shortcut): Sequential (
      )
    )
    (stage_2_bottleneck_2): ResNeXtBottleneck (
      (conv_reduce): Conv2d(512, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn_reduce): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True)
      (conv_conv): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=8, bias=False)
      (bn): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True)
      (conv_expand): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn_expand): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)
      (shortcut): Sequential (
      )
    )
  )
  (stage_3): Sequential (
    (stage_3_bottleneck_0): ResNeXtBottleneck (
      (conv_reduce): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn_reduce): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True)
      (conv_conv): Conv2d(2048, 2048, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=8, bias=False)
      (bn): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True)
      (conv_expand): Conv2d(2048, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn_expand): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True)
      (shortcut): Sequential (
        (shortcut_conv): Conv2d(512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (shortcut_bn): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True)
      )
    )
    (stage_3_bottleneck_1): ResNeXtBottleneck (
      (conv_reduce): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn_reduce): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True)
      (conv_conv): Conv2d(2048, 2048, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=8, bias=False)
      (bn): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True)
      (conv_expand): Conv2d(2048, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn_expand): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True)
      (shortcut): Sequential (
      )
    )
    (stage_3_bottleneck_2): ResNeXtBottleneck (
      (conv_reduce): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn_reduce): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True)
      (conv_conv): Conv2d(2048, 2048, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=8, bias=False)
      (bn): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True)
      (conv_expand): Conv2d(2048, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn_expand): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True)
      (shortcut): Sequential (
      )
    )
  )
  (classifier): Linear (1024 -> 10)
)
[New Thread 0x7fffbefa5780 (LWP 18919)]
[New Thread 0x7fffbeba4800 (LWP 18920)]
[New Thread 0x7fffbe7a3880 (LWP 18921)]
[New Thread 0x7fffab9b1700 (LWP 18923)]
[New Thread 0x7fffab1af980 (LWP 18924)]
[New Thread 0x7fffaadaea00 (LWP 18925)]
[New Thread 0x7fffaa9ada80 (LWP 18926)]

Thread 5 "python" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffab9b1700 (LWP 18923)]
0x00007fffedba4d04 in torch::autograd::cat (tensors=..., dim=dim@entry=0) from /home/alex/anaconda3/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so
(gdb) where
#0  0x00007fffedba4d04 in torch::autograd::cat (tensors=..., dim=dim@entry=0) from /home/alex/anaconda3/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so
#1  0x00007fffedba6f1c in torch::autograd::ConvBackward::apply (this=0x2db1a278, grad_outputs=...) from /home/alex/anaconda3/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so
#2  0x00007fffedb8d138 in torch::autograd::call_function (task=...) from /home/alex/anaconda3/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so
#3  torch::autograd::Engine::evaluate_function (this=this@entry=0x7fffee408d00 <engine>, task=...) at torch/csrc/autograd/engine.cpp:136
#4  0x00007fffedb8ed3a in torch::autograd::Engine::thread_main (this=this@entry=0x7fffee408d00 <engine>, queue=...) from /home/alex/anaconda3/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so
#5  0x00007fffedb9f89a in PythonEngine::thread_main (this=0x7fffee408d00 <engine>, queue=...) from /home/alex/anaconda3/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so
#6  0x00007fffd20a5870 in ?? () from /home/alex/anaconda3/lib/python3.6/site-packages/torch/lib/../../../../libstdc++.so.6
#7  0x00007ffff76bc6ba in start_thread (arg=0x7fffab9b1700) at pthread_create.c:333
#8  0x00007ffff6ada82d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109

Versions:
$ python --version
Python 3.6.0 :: Anaconda 4.3.0 (64-bit)
$ python -c "import torch; print(torch.__version__)"
0.1.11+b13b701

Note: other networks seem to work okay, for example some of the pytorch examples. It is just this network that crashes.

Updated 27/04/2017 21:01

Unable to remove Custom Script from oxTrust UI

GluuFederation/oxTrust

OS: CentOS 6.8 (possibly affecting all others)
Package: 2.4.4 sp2 upgraded to 3.0.1
Ticket: https://support.gluu.org/upgrade/3982/migrated-244sp2-301-unable-to-delete-unwanted-person-authentication-scripts/
Steps to reproduce: Install 2.4.4 and update to 2.4.4-sp2 by replacing the `identity.war` and `oxauth.war` from maven, then upgrade to 3.0.1. Log in and try to remove custom scripts; it will throw an error.
Report: https://github.com/GluuFederation/gluu-qa/wiki/Itemized-Reports#upgrade-script
Observation: Issue confirmed with a stack trace similar to the one given in the ticket.

Updated 26/04/2017 20:13

[of Web Users] Neurological

w3c/wai-people-use-web

https://www.w3.org/WAI/intro/people-use-web/diversity#cognitive was “Cognitive and neurological”

https://w3c.github.io/wai-people-use-web/diversity#cognitive has “Cognitive and learning”

The content of that section includes important non-cognitive neurological issues, including:

They may affect any part of the nervous system and impact how well people hear, move, see, speak, …

Multiple sclerosis - causes damage to nerve cells in the brain and spinal cord, and can affect auditory, cognitive, physical, or visual abilities, in particular during relapses.

While I am undoubtedly personally biased on this point :-/, I think it’s important to keep the broader neurological in the document. Perhaps we go back to how it was before, and consider different ways of addressing it in later revision? Maybe for now change the title to:

Cognitive, learning, and neurological

Updated 27/04/2017 15:06 2 Comments

[in Web Use] solutions → features

w3c/wai-people-use-web

Was:

Accessibility solutions benefit people with and without disabilities

Changed to:

Accessibility features benefit people with and without disabilities

“feature” seems to me like it is an added on extra, which of course we don’t want to convey. Why change from “solutions”?

Could do something like:

Accessible design and coding benefits people with and without disabilities

Updated 27/04/2017 16:41 4 Comments

Add emojione@3.0.1 & update emojione auto-update config

cdnjs/cdnjs

Pull request for issue: #11104 Related issue(s): # #

Checklist: the pull request (or lib adding request issue) follows the conventions.

Note that if you are using a distribution purpose repository/package, please also provide the url and other related info like popularity of the source code repo/package.

Profile of the lib

  • Git repository (required): https://github.com/Ranks/emojione
  • Official website (optional, not the repository): http://www.emojione.com
  • NPM package url (optional): https://www.npmjs.com/package/emojione
  • License and its reference: MIT, https://github.com/Ranks/emojione/blob/master/LICENSE.md

Essential checklist

  • [ ] I’m the author of this library
    • [ ] I would like to add link to the page of this library on CDNJS on website / readme
  • [ ] This lib was not found on cdnjs repo
  • [x] No already exist / duplicated issue and PR
  • [x] The lib has notable popularity
    • [x] More than 100 [Stars / Watchers / Forks] on [GitHub / Bitbucket]
    • [x] More than 500 downloads stats per month on npm registry
  • [x] Project has public repository on famous online hosting platform (or been hosted on npm)

Auto-update checklist

  • [x] Has valid tags for each versions (for git auto-update)
  • [x] Auto-update setup
  • [x] Auto-update target/source is valid.
  • [x] Auto-update filemap is correct.

Git commit checklist

  • [ ] The first line of the commit message is less than 50 chars, clean, clear, and easy to understand.
  • [x] The parent of the commit(s) in the PR is not older than 3 days.
  • [x] Pull request is sending from a non-master branch with meaningful name.
  • [x] Separate unrelated changes into different commits.
  • [x] Use rebase to squash/fixup dummy/unnecessary commits into only one commit.
  • [x] Close corresponding issue in commit message
  • [x] Mention related issue(s), people in commit message, comment.
Updated 27/04/2017 13:27

Seg fault from tide-index with many mods

crux-toolkit/crux-toolkit

I tried to do a search with many mods, and I got a core dump. Here is the command line:

crux tide-index --mods-spec 1A+3.010065,1A+4.007099,1F+10.027228,1G+3.003745,1I+7.017164,1K+8.014199,1K+6.020129,1K+1.99407,1K+42.010565,1K+226.077598,1K+27.994915,1K+70.041865,1K+114.042927,1K+86.036779,1K+114.031694,1K+68.026215,1K+86.000394,1K+59.045045,1K+28.0313,1K+42.04695,1K+56.026215,1K+100.016044,1L+7.017164,1M+15.994915,1R+10.008269,1R+6.020129,1R+0.984016,1R+14.01565,1R+28.0313,1S+79.966331,1T+5.010454,1T+79.966331,1V+6.013809,1Y+10.027228,1Y+79.966331,1K+45.029395 --overwrite T --output-dir foo sample.fasta foo

The fasta file is attached.

sample.txt

Updated 26/04/2017 12:51

Regarding the transient plots

vtyulb/BSA-Analytics

1) In the far-right tab (where the pulse colour can be changed), additionally display the sidereal time of the pulse (to one-second precision) and the pulse's UT time to within a point (or a fraction of a point). 2) The figure sent to print has an information tab; add the pulse's sidereal time and its UT time there as well.

Updated 26/04/2017 16:54

Attributes set manually in production, receipt get lost

metasfresh/metasfresh

Is this a bug or feature request?

Bug

What is the current behavior?

When you set attributes manually in production, when receiving a product, those attributes are gone when you check the HU again.

Which are the steps to reproduce?

  • Create a PP_Order, set an attribute there, e.g. Bio
  • In production, use a product in issue that has another attribute set, e.g. CH (only issue this one product, to keep it simple)
  • Open Receipt to receive the product from your PP_Order: Bio & CH are set already, OK
  • Add another attribute manually, e.g. MHD or Inland, and receive the product, note the HU value
  • Check the HU you received: Bio & Inland are set, but not the one you set manually!

What is the expected or desired behavior?

The attribute set manually in the production receipt should be kept as well.

Updated 29/04/2017 13:09

zenith() and ramp() need calculations checking and tidying.

AtChem/AtChem

The workings of zenith() and ramp() are very flaky. I’m not entirely sure of the calculation that should be going on - @rs028 are you able to point me to a standard work, or other place, where this calculation is defined?

Issues I’ve spotted:

  • [ ] What is ramp() calculating? It seems to output max{0,x} - is that what’s required?
  • [ ] zenith() has an input argument theta which is calculated but never used.
  • [ ] The secx calculation seems like a fudge, adding 1e-30 to avoid a division by zero - what is the real calculation supposed to be?
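For reference, the behaviour described above can be sketched as follows. This is a hypothetical Python sketch of what the code appears to be doing, not the AtChem Fortran source; the epsilon value is the one quoted in the issue:

```python
import math

def ramp(x):
    # As observed in the issue: ramp() appears to just return max(0, x),
    # i.e. the positive part of its input.
    return max(0.0, x)

def secx(zenith_angle_rad, eps=1e-30):
    # The current "fudge": sec(theta) = 1/cos(theta), with a tiny epsilon
    # added to dodge division by zero as theta approaches 90 degrees.
    return 1.0 / (math.cos(zenith_angle_rad) + eps)
```

If ramp() really is just a positive-part clamp, documenting it as such (or inlining max(0, x)) would make the intent explicit; the epsilon guard in secx should probably be replaced by whatever limiting behaviour the underlying photolysis parameterisation actually requires.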

Related to #14 ?

Updated 26/04/2017 11:05 1 Comments

Assess feasibility of integrating geocoder into ESRI Portal

bcgov/ols-devkit

The workflows should use the geocoder API specification, not the Arc Geocoder API, and should be built in such a way as to be usable in ESRI Portal deployments.

There should be workflows for:

  • addresses
  • intersections/nearest
  • geocoder/sites/nearest

so that ESRI developers can take full advantage of geocoder features such as multiple interpolation methods, multiple location types, etc.

Updated 26/04/2017 00:51

localHydroProxy files lost during file migration

hydroshare/hydroshare

@dtarb @hyi @mjstealey @mseul @aphelionz When I wrote the ResourceFile fix, I didn’t account for a category of files for which there are few instances. I accounted for:

  • local data resources, accessed by the HydroShare proxy user.
  • federated resources, accessed via a federation path and federated proxy user.

I did not account for a third category of resources, that are stored on the local user server, but accessed through a different proxy account (like localHydroProxy). These look a lot like federated resources, but there is one important difference: Their paths are dynamically generated based upon deployment setup, and not fixed. They also live on our servers rather than on federated servers.

Thanks to @mjstealey who just spent an hour with me and helped me get to the bottom of this.

In short, in the ResourceFile makeover, I mistakenly treated these as federated resources, and because of this difference, they became inaccessible. There are about 20 of these. Fortunately, these resources are easily recognizable via the embedded keyword localHydroProxy in their paths.

Fix is to treat them differently than federated resources, creating a flag that indicates that their access paths should be dynamically generated. Thus, these resources do not have a resource_federation_path (which by definition is statically defined), but instead, a Boolean flag that marks them as in the local user proxy zone, from which their paths can be dynamically constructed via setup information.

Otherwise, their files look like files of local resources.
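The proposed fix can be sketched roughly as below. All names here are illustrative (`LOCAL_PROXY_PATH`, `is_local_user_zone`, `ResourceFileSketch` are hypothetical, not the actual HydroShare/Django fields); the point is only that the local-proxy path is derived from deployment settings at runtime, while a federation path stays statically stored:

```python
# Assumed deployment setting; in practice this would come from Django settings.
LOCAL_PROXY_PATH = "/localZone/home/localHydroProxy"

class ResourceFileSketch:
    """Sketch of a resource file that distinguishes three storage categories."""

    def __init__(self, short_path, is_local_user_zone=False,
                 resource_federation_path=""):
        self.short_path = short_path
        self.is_local_user_zone = is_local_user_zone
        self.resource_federation_path = resource_federation_path

    def storage_path(self):
        if self.is_local_user_zone:
            # Dynamically generated from deployment setup, never stored.
            return f"{LOCAL_PROXY_PATH}/{self.short_path}"
        if self.resource_federation_path:
            # Federated resources keep a statically defined prefix.
            return f"{self.resource_federation_path}/{self.short_path}"
        # Plain local resources accessed by the HydroShare proxy user.
        return self.short_path
```

Under this scheme the ~20 affected resources would carry only the boolean flag, so a change of deployment configuration changes their access paths without any data migration.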

Updated 27/04/2017 17:04 2 Comments

Initiate discussion of relocation project

jasonwebb/tc-maker-4x4-router

Jon Alt (wood shop manager) has requested that we move (or develop a plan to move) the CNC machine into the adjacent “Scary Bathroom” space.

This would have clear benefits to both the wood shop and to the CNC machine, specifically:

1. Wood shop gaining much-needed floor space to better serve its members.
2. CNC machine being better isolated from dust and noise.
3. CNC machine having a dedicated and clearly-defined workspace so that members can keep relevant materials and tools on-hand as well as stay focused on their projects without disruption (or causing disruption) to the wood shop.

Some of the challenges of moving the machine include:

1. Readying the bathroom to be capable of housing the CNC machine: dust collection, power (120V and 240V), and evaluating practical space requirements for working with materials.
2. Physically moving the machine into the room, possibly requiring disassembly, reassembly, and recalibration. This is potentially the most crucial topic, requiring strong coordination and planning.

Given the complexity of the machine and the delicate nature of its calibration, we ought to figure this out before continuing on any further work, especially involving mechanical tuning and tweaking.

Jon and Pete need to discuss options and plans, and Jason will be involved as desired to facilitate discussion and coordinate work.

Given that this discussion involves the re-appropriation of shop space (the bathroom in particular), I feel it would be best to be as transparent as possible with the membership base and the board. Before plans are acted upon, they should perhaps be approved by the membership and board (though maybe that doesn’t need to be done?).

Updated 25/04/2017 15:47 1 Comments
