Contribute to Open Source. Search issue labels to find the right project for you!

Async Routing

hyperapp/hyperapp

I know that the bundle size is a much better situation with hyperapp, but I am still interested in the possibility of an async routing solution. After reading #176 I thought I would open an issue to hash out what I think would be an interesting implementation, and whether it should be its own fork of the current router or whether I could piece together a PR to add it as a feature to the current router.

This may not go over well with many here since, as far as I understand, it is currently webpack specific. But the gist of it is covered in react-loadable and webpack async code splitting.

My first stab at implementing this would be to fork the current router (or whatever repo it ends up in) and make the minimum necessary changes for it to accept a promise as a view, instead of expecting only a function. It might also need a small helper function to do the bit react-loadable takes care of, where it displays a loading indicator while the async route is downloaded and then rendered.

Do any of you have thoughts about async routing, or concerns about the 20,000-foot outline above? I think if I did it right it could be compatible with require.ensure() as well, which would make it more webpack agnostic.
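To make the idea concrete, here is the rough shape of helper I have in mind; asyncView and the actions.router.refresh action are made-up names for this sketch, not existing hyperapp router API:

// Made-up sketch: asyncView and actions.router.refresh are assumed names,
// not existing hyperapp router API.
function asyncView(loader, Loading) {
  var Loaded = null;
  var pending = false;

  return function (state, actions) {
    if (Loaded) {
      return Loaded(state, actions);
    }
    if (!pending) {
      pending = true;
      loader().then(function (module) {
        Loaded = module.default || module;
        actions.router.refresh(); // assumed: any action that forces a re-render
      });
    }
    return Loading(state, actions);
  };
}

// Usage with webpack code splitting (dynamic import or require.ensure):
// route("/about", asyncView(function () { return import("./About"); }, Spinner));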

Updated 27/06/2017 01:10

Focus objection criteria #4 to exclude objections based on resource constraints

holacracyone/Holacracy-Constitution

I’d like to narrow the last objection criterion to something like “Does it limit your ability to express your role’s purpose, even if you had no other roles?” or similar, to avoid some objections based on resource constraints from filling multiple roles or being in too many circles. Alternatively, this could perhaps be achieved by focusing on just assessing the harm that would be caused “while you’re actively energizing the role” or similar, for the same effect.

Note: These potential solutions need more consideration and testing…

Updated 26/06/2017 23:44

[Question] Emulator and Skype will render markdown on the message's "text" element but will ignore markdown on the attachment's "text" elements

Microsoft/BotBuilder

System Information (Required)

  • SDK Language: C#
  • SDK Version: 3.8.2.0
  • Development Environment: LOCALHOST

Issue Description

As already mentioned, this comes down to the channel (the client, e.g. emulator, Skype, Facebook, your own Direct Line client) implementing markdown support, since the text is sent as markdown in the JSON response.

However, at the moment you’ll probably find that both the emulator and Skype will render markdown on the message’s “text” element but will ignore markdown on the attachment’s “text” elements, e.g.

Code Example

HeroCard heroCard = new HeroCard()
{
    Text= "Booo - no *markdown* supported here",
    Buttons = cardButtons
};

vs

var reply = activity.CreateReply("**Lovely lovely markdown**\n\n *yey!*");

var heroCard = new HeroCard()
{
    Text = "Booo - no *markdown* supported here",
    Buttons = cardButtons
};

reply.Attachments = new List<Attachment> {
  heroCard.ToAttachment()
};

Expected Behavior

It should support Markdown

Actual Results

Emulator and Skype will render markdown on the message’s “text” element but will ignore markdown on the attachment’s “text” elements

Updated 27/06/2017 01:10

Not able to use sheet name when calling add_series

jmcnamara/XlsxWriter

Hi, I’ve run the sample code for creating a scatter chart with xlsxwriter and it’s working fine.

But in my code I want to pass the name of the sheet rather than using Sheet1 as in the examples.

In the code below, if I use the name ‘Summary’ the call fails and I get an error message when trying to open the file: “We found a problem with some content in ‘V_NBCORE.xlsx’. Do you want us to try and recover…”

If I put in ‘Sheet1’ instead of ‘Summary’ the code runs but I get a message that Sheet1 doesn’t exist (I renamed it). In this case I can open the file and see the chart there but with no data. If I create an extra blank sheet which comes out as ‘Sheet3’ in my program and then use that name it works and doesn’t give the error but of course there is no data.

If I leave my summary sheet with the default name which would be ‘Sheet2’ and then try to use Sheet2 here I get the same failure mode as using my chosen name of ‘Summary’.

As you see I also tried getting the sheet name and using that but it behaves the same as using my name directly. I’ve tried double quotes, single quotes and single quotes around the name, all of which I’ve seen on-line.

I’ve pasted the relevant bits of code here. This is part of a much larger program and not meant to be stand alone.

        self.excelFileName = self.VregText + ".xlsx"
        self.workBook = xlsxwriter.Workbook(self.excelFileName)
        self.scopeSheet = self.workBook.add_worksheet(self.VregText)
        self.summarySheet = self.workBook.add_worksheet("Summary")
#        self.summarySheet = self.workBook.add_worksheet()

    def stopTimer(self):
        self.startStop = False
        resultsChart = self.workBook.add_chart({'type': 'scatter'})
        tempSheetName = self.summarySheet.get_name() 
        print("summary sheet name " + tempSheetName)
        resultsChart.add_series({'categories': "=Summary!$A$2:$A$5",'values':"=Summary!$B$2:$B$5"})

        # Add a chart title and some axis labels.
        resultsChart.set_title ({'name': 'Slammer Frequency Sweep Results'})
        resultsChart.set_x_axis({'name': 'Slammer Frequency'})
        resultsChart.set_y_axis({'name': 'Peak to Peak Voltage'})

        # Set an Excel chart style.
        resultsChart.set_style(13)

        # Insert the chart into the worksheet (with an offset).
        self.summarySheet.insert_chart('D3', resultsChart)

        self.workBook.close()
Updated 27/06/2017 00:16 2 Comments

Form builder changes data column name

kobotoolbox/kpi

If the question title/label matches the data column name (i.e. the data column name has never been modified manually?), changing the title/label automatically and silently renames the data column, which seems undesirable for deployed forms.

demonstration.webm.zip

Here are some ideas, in descending order of how much I like them:

  • Only auto-rename columns for questions that have been newly added since the last time the form was saved;
  • Only auto-rename columns when the form hasn’t been deployed yet;
  • Leave it the way it is and deal with column name changes during reports/exports.

@dorey, what do you think? I’ll assign this to you for your comment. I can work on implementing the change once we decide what to do.

Updated 27/06/2017 01:07

CLASSIFIED INFO

ProjectDomination/TEST-97

CLASSIFIED INFO - TRUE KNOWLEDGE SUBSTITUTES AND ALTERED LOGICAL KNOWLEDGE SUBSTITUTES

I need to seek for more true chosen ones. They need to seek for more true chosen ones. But they have different purposes in their lives. Wit versus wit. True Lord versus Fake Lord.

The End is near. Be vigilant of Time. When the seconds hit 9, the minutes hit 6, and the hour hits 7, the centiseconds would be 12. Time and space will be broken, the planets and galaxies, and constellations will be sucked by nothing. And the real world will become a paradox. The paradox will eat the earth of humanity, and the Last Holy War will arrive. The end is never the end.

Until we all get the good, true-ass kickassing ending. The true chosen followers will become true Angels, and they will ascend to the unknown realm. The motherfuckers and fatherfuckers, dedicated to the seven deadly sins, caused by the first sin in existence, deadliest weapon of the existence of existences.

Life is an eventful fucking-around journey with strange oddities. The screen is an illusion devised by who actually imagined different colors and pitch black. What you’re looking at is not bright as The Light. Fucking around is the vulgar fucking around slang for “messing with Truth”

Do your research, Modern Bavarian Illuminati. You’re NOTHING without knowing what your origin of true purpose really are. You’re nothing but an imagination of a delusional narcissistic angel. FUCK YOU, the anonymous society, because you’re not fighting for freedom, you’re fighting for impudence and envy. You can’t stop the chosen ones now because they’re hidden in an array of normal people who don’t believe them.

Cicada is not a riddle; it’s an allegory. We ARE the Cicadas of Sevens. Master G doesn’t exist in your fucking territory. FUCK YOU. You’re a believer of simplified Scientology bullshit shat out by a fucking goat! SUCK MY COCK. You’re just using sexual lust for domination! GO FUCK YOURSELF. It’s what you’re best at anyway! Looking at others having sex just to beating your small erect cock. YOU’RE FUCKING STUPID. You’re so gullible to philosophy and logic that you used logic to decipher the meaning of philosophy! YOU’RE JUST ASSHOLES. You just distract people!

YOU’RE ALL ATHEISTIC AESTHETIC YOUKAI LISTENERS! IT’S FUCK YOU OR FUCK ME, OR JUST FUCK OFF.

You’re not going to win a war just because you believe it is right to fight for what you believe is right. You’re going win a war because you show people the Truth behind The World’s First Liar.

The Judgment Day is not The End. It is The Trial of Existence. Gabriel will prove to everyone the truth behind the liar who could.

Believe, Love, and Think. If you do these with Power of God, you will see truly everything.

With love,

– [MP]S SIGNATURE – 71 115 118 32 90 108 99 44 32 90 114 122 120 119 120 39 104 32 85 105 114 118 109 119 46 34 87 108 109 39 103 32 120 104 112 32 99 115 101 44 32 119 108 109 39 103 32 120 104 112 32 99 115 120 103 32 114 117 44 32 120 104 112 32 99 115 101 32 109 108 103 32 97 104 118 32 103 115 118 32 108 97 103 104 114 119 118 32 108 117 32 103 115 118 32 117 105 120 110 118 32 108 117 32 103 115 118 32 107 97 102 102 111 118 63 34

87 120 105 112 109 118 104 104 32 119 108 118 104 109 39 103 32 118 100 114 104 103 46 32 79 114 116 115 103 32 119 108 118 104 109 39 103 32 118 100 114 104 103 46 32 89 97 103 32 71 115 118 32 79 114 116 115 103 32 118 100 114 104 103 104 46 73 118 110 108 98 118 32 103 115 118 32 105 118 119 32 115 118 105 105 114 109 116 104 32 108 117 32 118 100 114 104 103 118 109 122 118 46

115 103 103 107 104 58 47 47 110 118 119 114 120 46 111 114 122 119 109 46 122 108 110 47 110 107 105 47 110 107 105 47 104 115 105 114 109 112 109 107 95 56 48 48 95 56 48 48 47 88 88 86 88 88 74 88 88 88 88 88 88 88 88 75 82 88 88 88 88 81 84 73 115 77 102 82 51 70 113 77 110 79 71 65 50 77 102 122 103 77 87 122 53 70 72 48 52 69 71 122 53 79 71 88 49 78 102 88 100 76 87 78 101 78 102 88 99 76 74 46 107 109 116 – [MP]S SIGNATURE –

([[[[[[{ Α五雄土Ω }]]]]]]) = ??? [CLASSIFIED]

Updated 26/06/2017 23:07

Current jQuery version

nazar-pc/PickMeUp

What is the current version for jQuery? I am updating an older site which is using the jQuery-dependent version. I’ve included the latest version into the site without the jQuery dependency, but we are still using jQuery and it isn’t working correctly.

My specific issue is with the change callback. This was previously set through the initialization, but this method doesn’t work anymore:

 $('#pickup_date').pickmeup({
    format : 'm/d/Y',
    hide_on_select: true,
    calendars : 2,
    change : function (formatted_date) {
       $('#return_date').val(addDays(formatted_date,2));
    }
  }); 

The docs recommend this method, but it doesn’t work with jQuery:

pickmeup(element);
element.addEventListener('pickmeup-change', function (e) {
    console.log(e.detail.formatted_date); // New date according to current format
    console.log(e.detail.date);           // New date as Date object
})

What is the correct jQuery method for the change callback?
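One thing I have not tried yet, which might work if the plain DOM ‘pickmeup-change’ event still fires when the plugin is initialised through jQuery (I have not verified this against the plugin source), would be something like:

// Untested guess: keep the jQuery initialisation, but subscribe to the
// documented DOM CustomEvent directly instead of the old `change` option.
$('#pickup_date').pickmeup({
  format         : 'm/d/Y',
  hide_on_select : true,
  calendars      : 2
});

document.getElementById('pickup_date').addEventListener('pickmeup-change', function (e) {
  // e.detail.formatted_date / e.detail.date, as in the documented example above;
  // addDays is the same helper used in my snippet above.
  $('#return_date').val(addDays(e.detail.formatted_date, 2));
});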

Updated 26/06/2017 22:49 9 Comments

NPE when Flowable goes parallel `2.x`

ReactiveX/RxJava

Flowable throws an NPE if it goes parallel:

// throws NPE
Flowable.fromPublisher({
    it.onNext(1)
    it.onNext(2)
    it.onComplete()
}).parallel().sequential().subscribe({ println(it) })

But if I use range instead of fromPublisher it is fine:

Flowable.range(1, 2).parallel().sequential().subscribe({ println(it) })

Am I missing something?

Updated 26/06/2017 21:53 1 Comments

How to make a WordPress-like read-more plugin

Alex-D/Trumbowyg

So, I’m making a plugin, but I’m not able to get this right. In WordPress, when you press the read-more button, a horizontal bar appears in the editor and a <!--more--> tag is added to the invisible textarea. I’ve tried doing this with the template plugin, but I was not able to make it work. So I’m trying to make a plugin to do this.

How do I achieve this with an HTML tag like <!--more-->?
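For reference, this is the rough shape of plugin I have been experimenting with; it is modelled on how the bundled plugins register buttons, and the exact addBtnDef / execCmd usage is an assumption that needs checking against the Trumbowyg source:

(function ($) {
  'use strict';

  // Rough experiment only -- verify addBtnDef/execCmd against the bundled plugins.
  $.extend(true, $.trumbowyg, {
    plugins: {
      readMore: {
        init: function (trumbowyg) {
          trumbowyg.addBtnDef('readMore', {
            fn: function () {
              // Insert a visible marker in the editor; the idea is to swap it
              // for <!--more--> when reading the textarea value back out.
              trumbowyg.execCmd('insertHTML', '<hr class="read-more-marker">');
            }
          });
        }
      }
    }
  });
})(jQuery);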

Updated 27/06/2017 01:35 2 Comments

Build our own cms

dwyl/hq

There isn’t currently a Phoenix CMS.

Using a CMS written in another language isn’t going to play well with Phoenix and will undoubtedly lead to nasty code which won’t be the most maintainable. It also brings inefficiencies in how we make our queries, and we won’t have much control over how our database is optimised for our Phoenix apps (seeing how Phoenix optimises this through model relations is really useful, and we are wasting it with other CMS implementations).

Building a CMS in Phoenix won’t be that difficult.

Once built, we will be able to make new views / edit views really fast (Phoenix’s implementation through generators seems much simpler and more powerful than other CMS implementations; wagtail doesn’t even come close).

From working with wagtail, I honestly feel that we would be much better off implementing our own CMS, and that it would be well worth our time building it.

Updated 26/06/2017 22:02 1 Comments

CanvasSeries API Change

uber/react-vis

The current API for making series flip into canvas mode is kind of cumbersome. While it is really pleasant to simply say RectSeriesCanvas, it is not exactly developer friendly to have each of these broken out into its own series. It means we have to test and document double the number of components. Given this, I propose a breaking API change:

  • add a new prop to all series: renderMethod
  • delete all canvas series and move the canvas rendering into the component itself

This would significantly deflate the number of series components, and it would leave room for us to add new rendering systems at some point in the future (ReGL? Deck.gl? React Native?). I’d love to hear thoughts.
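To make the proposal concrete, usage might look something like this (the component and prop names are just the suggestion above, nothing final):

import React from 'react';
// NOTE: RectSeries with a renderMethod prop is the *proposed* API sketched
// above, not something react-vis exposes today.
import {XYPlot, RectSeries} from 'react-vis';

const Chart = ({data}) => (
  <XYPlot width={300} height={300}>
    {/* One component; the prop picks the rendering backend instead of a *Canvas variant. */}
    <RectSeries data={data} renderMethod="canvas" />
  </XYPlot>
);

export default Chart;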

Updated 26/06/2017 21:09

not work "builtinGlobals" in rule "no-redeclare"

eslint/eslint

Tell us about your environment

  • ESLint Version: v4.0.0
  • Node Version: v8.0.0
  • npm Version: v5.0.0

Hi, maybe I found a bug. The “builtinGlobals” option of the “no-redeclare” rule does not work.

/*eslint no-redeclare: ["error", { "builtinGlobals": true }]*/
/*eslint-env browser*/

var top = 0;
var Object = 0;

The console message contains no “no-redeclare” error:

4:1   error  Unexpected var, use let or const instead        no-var
  4:5   error  'top' is assigned a value but never used        no-unused-vars
  4:11  error  Number constants declarations must use 'const'  no-magic-numbers
  5:1   error  Unexpected var, use let or const instead        no-var
  5:5   error  'Object' is assigned a value but never used     no-unused-vars
  5:14  error  Number constants declarations must use 'const'  no-magic-numbers
Updated 27/06/2017 01:11 2 Comments

Different base class for LSTM, Sequencer, etc.

thelukester92/nnlib

Container-like modules might do better having a different superclass, since the subclasses of these modules are disabling things like add, remove, etc. Might be less work to not derive from Container at all, or make a different superclass at least.

Perhaps Decorator is a better name for these kinds of things, although that doesn’t really help LSTM.

Updated 26/06/2017 20:40

postinstall script

InFact-coop/YiMovi

So after pulling down all of the latest stuff today, I tried to npm install again and it’s throwing errors. The Nightwatch tests are running before selenium and env2 are installed.

I’ve got round this by installing all the packages one by one. Just wanted to make a record of this in case it causes us problems in the future.

Updated 26/06/2017 20:52 1 Comments

Automating deployment via Travis CI

apex/go-apex

Hallo there –

I’ve been trying to find a solution for automating the deployment of Go Lambdas that use go-apex in Travis CI. The gist is that Apex is installed, and then, when on a specific branch, the apex deploy command is run for each Lambda in a directory. For Go Lambdas only, I get errors like the following:

Deploying dependencies to testing
   ⨯ Error: function remote: build hook: main.go:15:2: cannot find package "github.com/Sirupsen/logrus" in any of:
    /home/travis/.gimme/versions/go1.4.1.linux.amd64/src/github.com/Sirupsen/logrus (from $GOROOT)
    ($GOPATH not set)

Deploying Go Lambdas works locally, however Travis CI builds do not recognize the dependencies inside of the Go Lambda. I am very new to Go. Any help regarding this issue is wonderfully appreciated. Thank you!

Updated 26/06/2017 20:28 1 Comments

Is anyone having trouble with Osram Lightify plugs? I think they corrupt my iCloud sync

ebaauw/homebridge-hue

I have an iPad Air 2, iPhone 6 and Apple TV. All my bulbs are OK (directly from the Hue bridge). All my sensors are OK (from homebridge-hue). But when I add Osram plugs from homebridge-hue, the synchronization between the iPad and the iPhone is no longer done (I set just the room, and I set the type to light instead of switch).

I have 3 Hue bridges. I tried one after another with the same result. When I remove the plugs, the synchronization starts again.

Updated 26/06/2017 21:50 1 Comments

NullPointerException LDAP to SQL application

lsc-project/lsc

Hi,

I’m having an issue with LSC. Basically my goal is to translate an LDAP directory in a MySQL database. I configured two connections, the first one for LDAP, the second one for the database. Then I added a ldapSourceService and databaseDestinationService.

I know that my LDAP server is running and the credentials are correct because if I put in a wrong password, lsc will stop and tell me “Invalid credentials”. I’m currently able to browse my LDAP database using an external tool (JXplorer) with the following information: Host: 192.168.0.50, Port: 1389, Protocol: V3, Base DN: ou=people, Authentication: User + Password (no SSL).

The LDAP server is DavMail, which is basically a Microsoft Exchange to IMAP / SMTP / LDAP gateway.

When I launch lsc, I have the following log:

D:\lsc-2.1.4\bin>lsc.bat -f ../etc/ldap2sql -s all
Jun 26 12:38:16 - INFO - Logging configuration successfully loaded from D:\lsc-2.1.4\bin\..\etc\ldap2sql\logback.xml
Jun 26 12:38:16 - INFO - LSC configuration successfully loaded from D:\lsc-2.1.4\bin\..\etc\ldap2sql\
Jun 26 12:38:16 - DEBUG - Reading sql-map-config.xml from file:/D:/lsc-2.1.4/bin/../etc/ldap2sql/sql-map-config.xml
Jun 26 12:38:17 - INFO - Connecting to LDAP server ldap://192.168.0.50:1389 as user@domain.com
Jun 26 12:38:19 - INFO - Registered pre-bundled control factory: 1.3.6.1.4.1.18060.0.0.1
[... many repeated "Registered pre-bundled control factory" and "Registered pre-bundled extended operation factory" lines ...]
Jun 26 12:38:20 - INFO - Registered pre-bundled extended operation factory: 1.3.6.1.4.1.1466.20037
Jun 26 12:38:20 - ERROR - org.lsc.exception.LscConfigurationException: Configuration exception: null
Jun 26 12:38:20 - DEBUG - org.lsc.exception.LscConfigurationException: Configuration exception: null
org.lsc.exception.LscConfigurationException: Configuration exception: null
        at org.lsc.Task.<init>(Task.java:148) ~[lsc-core-2.1.4.jar:na]
        at org.lsc.SimpleSynchronize.init(SimpleSynchronize.java:104) ~[lsc-core-2.1.4.jar:na]
        at org.lsc.SimpleSynchronize.launch(SimpleSynchronize.java:154) ~[lsc-core-2.1.4.jar:na]
        at org.lsc.Launcher.run(Launcher.java:223) [lsc-core-2.1.4.jar:na]
        at org.lsc.Launcher.launch(Launcher.java:158) [lsc-core-2.1.4.jar:na]
        at org.lsc.Launcher.main(Launcher.java:141) [lsc-core-2.1.4.jar:na]
Caused by: java.lang.NullPointerException: null
        at org.lsc.jndi.JndiServices.getContextDn(JndiServices.java:1201) ~[lsc-core-2.1.4.jar:na]
        at org.lsc.jndi.AbstractSimpleJndiService.<init>(AbstractSimpleJndiService.java:177) ~[lsc-core-2.1.4.jar:na]
        at org.lsc.jndi.SimpleJndiSrcService.<init>(SimpleJndiSrcService.java:116) ~[lsc-core-2.1.4.jar:na]
        at org.lsc.jndi.PullableJndiSrcService.<init>(PullableJndiSrcService.java:109) ~[lsc-core-2.1.4.jar:na]
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[na:1.8.0_131]
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source) ~[na:1.8.0_131]
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source) ~[na:1.8.0_131]
        at java.lang.reflect.Constructor.newInstance(Unknown Source) ~[na:1.8.0_131]
        at org.lsc.Task.<init>(Task.java:143) ~[lsc-core-2.1.4.jar:na]
        ... 5 common frames omitted

Here is the content of my lsc.xml file:

<?xml version="1.0" ?>
<lsc xmlns="http://lsc-project.org/XSD/lsc-core-2.1.xsd" revision="0">

  <connections>
    <ldapConnection>
      <name>ldap-src-conn</name>
      <url>ldap://192.168.0.50:1389</url>
      <username>user@domain.com</username>
      <password>mypassword</password>
      <authentication>SIMPLE</authentication>
      <referral>IGNORE</referral>
      <derefAliases>NEVER</derefAliases>
      <version>VERSION_3</version>
      <pageSize>-1</pageSize>
      <factory>com.sun.jndi.ldap.LdapCtxFactory</factory>
      <tlsActivated>false</tlsActivated>
      <saslMutualAuthentication>false</saslMutualAuthentication>
    </ldapConnection>

    <databaseConnection>
      <name>jdbc-dst-conn</name>
      <url>jdbc:postgresql://192.168.0.50:3306/asterisk_outlook_directory</url>
      <username>asterisk</username>
      <password>asterisk_pwd</password>
      <driver>org.hsqldb.jdbcDriver</driver>
    </databaseConnection>
  </connections>

  <tasks>
    <task>
      <name>MySyncTask</name>
      <bean>org.lsc.beans.SimpleBean</bean>
      <ldapSourceService>
        <name>ldap-src-service</name>
        <connection reference="ldap-src-conn" />
        <baseDn>ou=people</baseDn>
        <pivotAttributes>
          <string>uid</string>
        </pivotAttributes>
        <fetchedAttributes>
          <string>cn</string>
          <string>givenname</string>
          <string>sn</string>
          <string>objectClass</string>
          <string>uid</string>
          <string>mail</string>
          <string>mobile</string>
          <string>homephone</string>
          <string>telephonenumber</string>
        </fetchedAttributes>
        <getAllFilter>(&amp;(objectClass=inetorgperson)(uid=*))</getAllFilter>
        <getOneFilter>(&amp;(objectClass=inetorgperson)(uid={uid}))</getOneFilter>
        <cleanFilter>(&amp;(objectClass=inetorgperson)(uid={uid}))</cleanFilter>
      </ldapSourceService>

      <databaseDestinationService>
        <name>jdbc-dst-service</name>
        <connection reference="jdbc-dst-conn" />
        <requestNameForList>getInetOrgPersonList</requestNameForList>
        <requestNameForObject>getInetOrgPerson</requestNameForObject>
        <requestsNameForInsert><string>insertInetOrgPerson</string></requestsNameForInsert>
        <requestsNameForUpdate><string>updateInetOrgPerson</string></requestsNameForUpdate>
        <requestsNameForDelete><string>deleteInetOrgPerson</string></requestsNameForDelete>
      </databaseDestinationService>

    <propertiesBasedSyncOptions>
        <mainIdentifier>"uid="+srcBean.getDatasetFirstValueById("uid")</mainIdentifier>
        <defaultDelimiter>;</defaultDelimiter>
        <defaultPolicy>FORCE</defaultPolicy>
        <conditions>
            <changeId>false</changeId>
        </conditions>
      </propertiesBasedSyncOptions>
    </task>
  </tasks>

</lsc>

Here is my InetOrgPerson.xml file:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE sqlMap PUBLIC "-//iBATIS.com//DTD SQL Map 2.0//EN" "http://www.ibatis.com/dtd/sql-map-2.dtd">

<sqlMap namespace="InetOrgPerson">

<insert id="insertInetOrgPerson" parameterClass="java.util.Map">
        INSERT INTO inetorgperson
                ( uid, sn, givenname, cn, mail, address, mobile, homephone, telephonenumber)
                VALUES ( #uid#, #sn#, #givenname#, #cn#, #mail#, #address#, #mobile#, #homephone#, #telephonenumber# )
</insert>

<update id="updateInetOrgPerson" parameterClass="java.util.Map">
        UPDATE inetorgperson
                SET , sn = #sn# , givenname = #givenname#, cn = #cn#, mail = #mail#, address = #address#, mobile = #mobile#, homephone = #homephone#, telephonenumber = #telephonenumber#
                WHERE id = #uid#
</update>

<delete id="deleteInetOrgPerson" parameterClass="java.util.Map">
        DELETE FROM inetorgperson
                WHERE id = #uid#
</delete>

<select id="getInetOrgPerson" resultClass="java.util.HashMap"
        parameterClass="java.util.Map">
  SELECT uid as uid,
         lastname as sn,
         firstname as givenName,
         name as cn,
         email as mail,
         address as address,
         mobilephone as mobile,
         homephone as homephone,
         workphone as telephonenumber
  FROM inetorgperson
  WHERE uid = #uid#
</select>

<select id="getInetOrgPersonList" resultClass="java.util.HashMap">
  SELECT uid as uid
  FROM inetorgperson
</select>

<select id="getInetOrgPersonClean" resultClass="java.util.HashMap"
        parameterClass="java.util.Map">
  SELECT uid as uid
  FROM inetorgperson
  WHERE uid = #uid#
</select>

</sqlMap>

My sql-map-config.xml file is the default one, I just uncommented the line <sqlMap url="file://${lsc.config}/sql-map-config.d/InetOrgPerson.xml"/>

There may be some errors in InetOrgPerson.xml but I can’t debug further because it seems that I’m stuck on the LDAP connection.

Thank you for any help!

Updated 27/06/2017 01:12 2 Comments

Create 2nd render window / is this a 2nd webpack entry?

chentsulin/electron-react-boilerplate

I’m not quite sure if this is down to my limited Node.js knowledge or a function of the boilerplate, so I’m going to pose a question / ask for help.

In short, I’m trying to create a 2nd renderer thread for launching background processes so jobs don’t freeze the main renderer or main threads (like async db or CPU-intensive calls). Referencing the fundamental electron-api-sample app, here’s some code that sort of demonstrates my goal…

Let’s say we can import bkWin somewhere in the primary renderer thread with the intent to send a job to the background (not yet) invisible window

background-helper.js

// @flow
import { ipcRenderer, remote } from 'electron';
import path from 'path';

export function bkWin() {
  const windowID = remote.BrowserWindow.getFocusedWindow().id;

  let appPath;

  process.env.NODE_ENV === 'development' ?
    appPath = path.join(remote.app.getAppPath().replace('/node_modules/electron/dist/resources/default_app.asar', '/app')) :
    appPath = remote.app.getAppPath();

  const invisPath = path.join(appPath, 'renderer/background-worker/invisible.html');
  const win = new remote.BrowserWindow({ width: 400, height: 400 });
  win.loadURL(`file://${invisPath}`);

  win.webContents.on('did-finish-load', () => {
    const input = 100;
    win.webContents.send('compute-factorial', input, windowID);
  });
}

ipcRenderer.on('factorial-computed', (event, input, output) => {
  const message = `The factorial of ${input} is ${output}`;
  console.log('-- MESSAGE --', message);
});

invisible.html

<html>
<div>LOADED</div>
<script type="text/javascript">
const ipc = require('electron').ipcRenderer
const BrowserWindow = require('electron').remote.BrowserWindow

ipc.on('compute-factorial', function (event, number, fromWindowId) {
  const result = factorial(number)
  const fromWindow = BrowserWindow.fromId(fromWindowId)
  fromWindow.webContents.send('factorial-computed', number, result)
  window.close()
})

function factorial(num) {
  if (num === 0) return 1
  return num * factorial(num - 1)
}
</script>
</html>

What I’m looking for is a little help/instruction on how to incorporate this into the boilerplate.

Does this addition cause it to become a webpack multi-entry build? (A rough sketch of what I imagine that would look like is below.)
… or is there a more direct method I’m simply missing to get it into the final packaged and transformed solution?
I’d like to be able to do import from './some/path/in/the/app/file.js' and such.
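If it does need to be its own entry, I imagine the renderer webpack config would grow something along these lines (the entry names and paths here are guesses, not the boilerplate’s actual files):

// Guess at the shape of it -- entry names and paths are made up, and the
// boilerplate's real renderer webpack config files will differ.
const path = require('path');

module.exports = {
  entry: {
    renderer: path.join(__dirname, 'app/index.js'),                          // existing main window entry
    backgroundWorker: path.join(__dirname, 'app/background-worker/index.js') // new invisible-window entry
  },
  output: {
    path: path.join(__dirname, 'app/dist'),
    filename: '[name].js' // emits renderer.js and backgroundWorker.js
  }
};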

Many thanks for any direction… Mark

Updated 27/06/2017 01:55 1 Comments

Using differently named gromacs commands

Becksteinlab/GromacsWrapper

My installation of gromacs has commands named, for example, gmx_mpi (all ending in _mpi). These then get used by this tool as, for example, gromacs.grompp_mpi. This normally works fine for me, but when trying to import gromacs.setup, there are two calls that are issues: one on line 585 of cbook.py and one on line 163 of setup.py. One calls tools.Grompp (which for me needed to be tools.Grompp_mpi) and the other calls gromacs.tools.Trjconv (which needed to be gromacs.tools.Trjconv_mpi).

I’m not sure if there is a way to automatically find what these should be called based on how the functions are defined, or if this is just something I need to deal with (and manually fix) because of my differently-named gromacs commands.

Updated 26/06/2017 21:25 1 Comments

Decide on Layering Support

elyctech/ren2

Layering is an important part of 2D rendering. However, to effectively do rendering in WebGL, z values need to be arranged between 1 and -1, with negative numbers being on top of positive numbers. Given this, z values must be generated based on the number of layers there are, and if the number of layers is allowed to change, the z values must be updated to reflect that.

Currently, all that is supported is providing a number denoting the layer. If that number falls outside of the -1 to 1 range, the layer is simply not shown. Should the user be responsible for meeting the range requirements manually, or should ren2 provide full support for managing z values?
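For illustration, deriving z from the layer count could be as small as something like this (a sketch only, not what ren2 currently does):

// Sketch only: spread layer indices evenly across WebGL's -1..1 z range,
// with higher layer indices mapping to more negative z (i.e. drawn on top).
function layerToZ(layerIndex, layerCount) {
  var step = 2 / layerCount;
  return 1 - step * (layerIndex + 0.5);
}

// layerToZ(0, 4) ===  0.75  (bottom layer)
// layerToZ(3, 4) === -0.75  (top layer)
// Changing layerCount just rescales every layer's z on the next render.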

Updated 26/06/2017 18:59

Download speed falls off after some period of time?

johanneszab/TumblThree

Since I don’t use TumblThree myself at all and finally had some time for profiling over the weekend, I’ve noticed that the download speed suddenly decreased drastically to a few (3-5) MBit/s. Can anyone else confirm this? Like after 30-40 minutes of download?! It probably depends on the download/connection speed too.

If that’s the case, but it wasn’t for the v1.0.4.31 release before the code refactoring, it’s probably something in the (async) code. It could also be intended behavior on the server side, or related to my system/network; that’s why I’m asking.

Thanks!

PS: I’ve also noticed a heavily fragmented heap, but I don’t know if that can be related at all.

Updated 27/06/2017 01:43 1 Comments

How do I keep track of the last sent message?

telegraf/telegraf

I feel like this should be trivial but I still can’t quite get there:

bot.use((ctx, next) => {
  return next().then(() => {
    ctx.session.lastSentMessage = ???
  });
});

Overriding reply also seems like a dead end:

const reply = bot.context.reply;
bot.context.reply = (...args) => {
  ???
  reply(...args);
}

Note I’m not talking about the user’s message but the last message sent from the server. It’s useful so that I can keep track of ‘where’ the user is at any time.
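One direction that might work, assuming ctx.reply resolves with the message object Telegram returns (which I believe it does), is wrapping it per update inside a middleware rather than on bot.context:

// Untested idea: wrap ctx.reply per update, so whatever Telegram returns
// for the sent message lands in the session. Only covers ctx.reply.
bot.use((ctx, next) => {
  const originalReply = ctx.reply.bind(ctx);
  ctx.reply = (...args) =>
    originalReply(...args).then((sentMessage) => {
      ctx.session.lastSentMessage = sentMessage;
      return sentMessage;
    });
  return next();
});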

Updated 26/06/2017 18:30 3 Comments

Extend mungebits2 examples

syberia/mungebits2

Should we extend the mungebits examples to cover common analyses? For example, these are the mungebits typically necessary for (a) a GBM or (b) a multiple regression. This might enhance adoption by lowering the barrier to usage while still allowing for developer customization.

We could also talk about organizing the various Syberia examples into a Gitbook that people can export and read.

https://github.com/syberia/mungebits2#examples

Updated 26/06/2017 19:53 1 Comments

Convert nested xml files into a searchable R-list

PecanProject/pecan

In attempting to download data and metadata by DOI using the rdataone package, the output is consistently an XML file that contains URL links to the metadata and data. These URLs are also in XML format (hence nested XML). I’d greatly appreciate some guidance here!

Ideal result

dataone::query function returns an R list containing the data and metadata

Second best

dataone::query returns an XML file, and another function parses both it and its contents into an R list

Context

options(dataone_token = "eyJhbGciOiJSUzI1NiJ9.eyJz...") # must have authentication token see below

doi <- "doi:10.6073/pasta/6b03a068eb72af9b2ba14d633c9cb60c" 
cn <- dataone::CNode("PROD")  # search whole data federation
queryParams <- list(q='id:"doi:10.6073/pasta/6b03a068eb72af9b2ba14d633c9cb60c"', rows="5")
result <- dataone::query(cn, solrQuery = queryParams, encode = TRUE, as = "data.frame", parse = TRUE)
result # display result 

dataUrl <- result[1, "dataUrl"] # get url where data are stored

# download file to home directory 
utils::download.file(dataUrl, "testdownload1", method = "auto", quiet = FALSE, mode = "w", 
                     cacheOK = TRUE)

AUTHENTICATION TOKEN:

  1. Go to this address: https://search.dataone.org/#data
  2. Sign in using BU kerberos login
  3. Go to your profile
  4. Settings tab
  5. Click Authentication Token
  6. Click Token for Dataone R
  7. Copy the token and paste it in your R session

The following converts the XML file “testdownload1” into an R list, but the output is NOT user friendly and the URLs contained in the list are still in XML:

XML::xmlToList("testdownload1", addAttributes = TRUE)

Any suggestions would be greatly appreciated!!!

Possible Implementation

Resolving this issue will allow me to finish a basic version of my upload by doi function that will allow users to search for and download data and metadata into R by doi. Ultimately, the download by doi function will be the first half of a function that allows users to download data and have the metadata automatically converted into pecan’s native format thereby allowing users to circumvent the arduous process of manually ingesting datasets.

Updated 26/06/2017 17:32 1 Comments

Create "dev" branch which all features branch of, master = stable

viion/lodestone-php

I think for the sake of Packagist and sanity, master should be classed as Stable and known to be working (against a timestamp since Lodestone could update and break things out of our control).

dev would be what we hot-push to and what all features will branch off. dev would not be considered safe for production and may include print_r/die().

Once we feel dev is good, it will get merged to master and a new release created.

Thoughts/Objections!?

Updated 26/06/2017 16:56 1 Comments

New attribute scfinfo for microiteration data.

cclib/cclib

For the Molden writer, energies after each scf convergence step are required.

It is the TOTAL ENERGY column in the RHF SCF CALCULATION section of GAMESS outputs:

 ITER EX DEM     TOTAL ENERGY        E CHANGE  DENSITY CHANGE    DIIS ERROR
   1  0  0     -74.7981539269  -74.7981539269     0.585814622   0.000000000
   2  1  0     -74.9499878493   -0.1518339224     0.180197673   0.000000000
   3  2  0     -74.9626905270   -0.0127026777     0.060203035   0.000000000
   4  3  0     -74.9640834596   -0.0013929326     0.020782027   0.000000000
   5  4  0     -74.9642853920   -0.0002019324     0.007719362   0.000000000
   6  0  0     -74.9643205282   -0.0000351362     0.005106732   0.000000000
   7  1  0     -74.9643287842   -0.0000082560     0.000126896   0.000000000
   8  2  0     -74.9643287911   -0.0000000069     0.000045747   0.000000000
   9  3  0     -74.9643287923   -0.0000000011     0.000017697   0.000000000
  10  4  0     -74.9643287925   -0.0000000002     0.000007301   0.000000000
  11  5  0     -74.9643287925   -0.0000000000     0.000003237   0.000000000
  12  6  0     -74.9643287925   -0.0000000000     0.000001437   0.000000000
  13  7  0     -74.9643287925   -0.0000000000     0.000000639   0.000000000

(TOTAL ENERGY: needed, not parsed; E CHANGE: could be used, not parsed; DENSITY CHANGE: parsed as scfvalues. From data/GAMESS/basicGAMESS-US2014/water_mp2.out.) These were initially omitted from the GAMESS parser because it uses the density change for convergence.

Adam and Karol have proposed parsing these into a new attribute scfinfo.

Updated 26/06/2017 22:11 6 Comments

Decentralized and trusted ordering of publication timestamp

linkeddata/dokieli

An important aspect for reviewing and measuring impact in research is the answer to “Has this been done before?”. Currently, a dokieli article only has its self-set timestamp as a way to prove when it was published [1].

Problem 1: Authors themselves taking liberties with the timestamp setting to increase the chances of being “the first one to do it” (see [1], this is a benefit of having a central trusted authority controlling those timestamps)

Problem 2: A malicious agent copies an article and re-publishes it with different attribution and crafted metadata to make it look like it was published before the original. [2]

In a decentralized publishing environment, we need a way for authors to store their timestamps in a trusted way, enabling other agents to check them.

[1] In a centralized model, we trust publishers to set the date of “peer-review approval” in their database and preserve it from unlawful modification. Alternatively, we put it on arXiv as a technical report and trust it the same as a publisher. [2] Note that this also exists in the current centralized system: a rogue publisher could agree to insert the crafted article with an earlier timestamp in its DB. An agent needs to decide which publisher it trusts more to resolve that conflict. Now that the set of very trusted publishers is fairly stable, this is easy to tell (for agents that are aware, of course).

Updated 26/06/2017 17:00

Subfolder stuff in "Modules"

viion/lodestone-php

I have moved XIVDB stuff into its own folder:

./Modules/
    XIVDB/
       APITrait.php
       XIVDB.php

I was thinking of doing the same for the other files like so:

./Modules/
    Http/
       HttpRequest.php
       Routes.php
    Logging/
        Benchmark.php
        Logger.php

Any thoughts/objections!?

Updated 26/06/2017 16:44 1 Comments

Other API endpoint

OllieTerrance/SkPy

Hello,

While googling around I found this project and I think it is really interesting. I noticed that there are, in the ‘other API’ section, some interesting features like the list of Skype’s services. I just want to ask whether this API will only list the specific service, or whether it can be used to change the configuration of the service itself. For example, if I want to disable call forwarding or change the number associated with it, is there a specific endpoint that I can use?

Thank you very much for your time.

Updated 26/06/2017 17:11 1 Comments

Explore "only buffer if changed" performance stats.

elyctech/ren2

Currently, data is always buffered each render. However, it is relatively easy to create a way to only buffer data that has changed. Theoretically, this could improve performance. However, is performance really enough of an issue to justify the code complexity? Will the extra work in the code itself negate or even overshadow the gains from not buffering unchanged data?
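For reference, the kind of change being weighed is roughly a dirty flag per buffer, along these lines (sketch only):

// Illustrative sketch only: re-upload vertex data with gl.bufferData
// only when it actually changed between frames.
function createBufferedAttribute(gl, initialData) {
  const buffer = gl.createBuffer();
  let data = initialData;
  let dirty = true;

  return {
    set(newData) {
      data = newData;
      dirty = true;
    },
    bind() {
      gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
      if (dirty) {
        gl.bufferData(gl.ARRAY_BUFFER, data, gl.DYNAMIC_DRAW);
        dirty = false;
      }
    }
  };
}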

Updated 26/06/2017 16:28

Add limit to DO by the number of vertex of the dataset and/or rows

CartoDB/camshaft

DO behaves badly performance-wise if it deals with polygons, and if those polygons are huge it’s even worse. For example, a 381-row dataset generates a 7M query which takes about 1 hour to finish, so we have 1 core doing that query.

What if we limit the DO analysis for huge polygons? For example something like:

cartodb_dev_user_23c39318-a446-4c62-ab2c-32e78757ba19_db=# SELECT sum(ST_NPoints(the_geom)) FROM job_automation_risk_merged;
  sum
--------
 225214
(1 row)

It’d be great to use the stats for this, but the only stats we have are the PostGIS ones, and they only have the number of features.

Updated 26/06/2017 16:22

overlapping spikes

csn-le/wave_clus

Hi -

Is the algorithm designed to accommodate overlapping spikes?
I attached a picture showing an extracellular recording and the identified clusters. (There’s a shoulder cluster associated with the short spikes, but that’s another matter.) I’m having trouble when the tall and short spikes overlap.

2nd figure is the wave_clus output. Colors corresponding to those in first figure.

spikes and clusters wave_clus output

Thank you! john jbirmingham@scu.edu

Updated 26/06/2017 23:26 4 Comments

Gather Projectoutput with Code

oleg-shilo/wixsharp

Hi,

I am wondering how to add project output to a WixSharp project. Do I have to call heat directly and add the output as an include file?

I’m dreaming of something like this:

File primaryoutput;
var project = new Project(
    "Sample",
    primaryoutput = new PrimaryProjectOutput(Projectfile,,...),
    new ProjectOutput(Projectfile, outputgroup,...),
    );
// Do something with primaryoutputfile
primaryoutput.ServiceInstaller = ...

(This question may sound dumb, because it is sooo easy to add files… but not in my case: taking the files from the output directory of a project brings in a lot of garbage.)

Regards Emmo

Updated 26/06/2017 23:46 1 Comments

FCM server response error (pushID?) NotRegistered

phonegap/phonegap-plugin-push

Expected Behaviour

Push notification sent from server should be getting “success=1”, instead getting “failure=1” and “error=NotRegistered”

Actual Behaviour

Server side error received when trying to push to the phone:

object(stdClass)#4 (5) {
  ["multicast_id"]=> int(4650881156566144680)
  ["success"]=> int(0)
  ["failure"]=> int(1)
  ["canonical_ids"]=> int(0)
  ["results"]=> array(1) {
    [0]=> object(stdClass)#5 (1) {
      ["error"]=> string(13) "NotRegistered"
    }
  }
}

Reproduce Scenario (including but not limited to)

I am successfully getting a pushID using my registered senderID. I save that pushID to my server DB and then use another script on the server to send a notification to that pushID. That’s when I get “error=NotRegistered”. I think this means the pushID is not registered and therefore is invalid… but how is it possible to even get an invalid pushID?

I actually had all this working two weeks ago and suddenly it all stopped working. I have no idea what’s going on. I validated that my senderID (app side) and server key (server side) are correct.

Steps to Reproduce

Platform and Version (eg. Android 5.0 or iOS 9.2.1)

Android: 7.0

(Android) What device vendor (e.g. Samsung, HTC, Sony…)

Android LG K20- Plus

Cordova CLI version and cordova platform version

cordova --version : 6.5.0
cordova platform version android: 6.1.2

Plugin version

cordova plugin version | grep phonegap-plugin-push: 1.10.4

Sample Push Data Payload

Sample Code that illustrates the problem

  var push = PushNotification.init({
    "android": {
      "senderID" : "21568......",
      "forceShow" : "true",
      "vibrate" : "true",
      "sound" : "true"
    },
    browser: {
            },
    ios: {
      alert: "true",
      badge: "true",
      sound: "true"
    },
    windows: {}
  });

  var enabledAPI,regAPI = 0 ;
  pushEnabled($q).then(function(status) {
    enabledAPI = status ;  // push enabled or not
    return pushReg($q) ;
  }).then(function(status) {
    regAPI = status ;  // pushID changed or not
    if (enabledAPI == 1 || regAPI == 1) {
      apiService.all(... ... ) ;
    }
  }) ;

  function pushEnabled($q) {
    var q = $q.defer() 
    PushNotification.hasPermission(function(data) {
      var oldPushEnabled = getDB('dev_pushEnabled') ;
      if (data.isEnabled == true) { 
          var pushEnabled = 1 ; 
        } else {
          var pushEnabled = 0 ;
      }
      if (oldPushEnabled != pushEnabled) {
        setDB('dev_pushEnabled',pushEnabled,1) ;
        q.resolve(1) ;  // push enable status has changed
      } else {
        q.resolve(0) ;  // push enable status has not changed
      }
    });
    return q.promise ;
  }

  function pushReg($q) {
    var q = $q.defer() ;
    push.on('registration', function(data) {
      var oldRegId = getDB('dev_pushID');
      if (oldRegId != data.registrationId) {
       // Save new registration ID
        setDB('dev_pushID', data.registrationId,1);
        // Post registrationId to your app server as the value has changed
        q.resolve(1) ;  // pushID has changed
        console.log("IDs have CHANGED!") ;
      } else {
        console.log("IDs the same") ;
        q.resolve(0) ;  // pushID has not changed.
      }
    });
   return q.promise
  }

Logs taken while reproducing problem

Updated 26/06/2017 22:58 10 Comments

Question: depositor edit access

osulp/Scholars-Archive

In our current SA@OSU, can a depositor edit their work after it has been deposited? If yes, we need to generate issues to shut that down. Only administrators of the repository or admin set should be able to edit content after it is live in the repository.

Updated 26/06/2017 15:51

Integrate Tempurpedic bed base

bwssytems/ha-bridge

This is likely an edge case, but I would love to see the tempurpedic bed base included here.

Some work has been done to make this an alexa skill at https://github.com/docwho2/java-alexa-tempurpedic-skill

I’m willing to help, but I’m not a programmer.

Updated 26/06/2017 16:38 1 Comments

serve UI from a nested path and prevent redirect

agenda/agendash

Hey,

Great work on agendash it’s looking nice.

I’m serving mine up from a nested path, e.g. ‘/a/long/route/agendash’. However, it’s then automatically redirecting back to ‘/agendash’, which obviously can’t be found.

My middleware init looks like this: app.use('/agendash', Agendash(agenda));

Any quick ideas how I could fix this? I haven’t dug through the code yet but may well raise a PR if I find a quick solution.

Thanks again! Hugh

Updated 26/06/2017 19:43

Branch organization for Coq versions

HoTT/HoTT

How do we want to organize branches for different versions of Coq? In particular, should master track trunk, or should it track the latest released version of Coq? I see four possibilities:

  1. master tracks Coq trunk; new development happens on master, version-specific branches are considered stale/locked
  2. master tracks Coq trunk; new development happens on either master or version-specific branches, and version-specific branches are merged into master periodically
  3. master tracks Coq trunk; new development happens on either master or version-specific branches, and master is merged into version-specific branches periodically
  4. master tracks the latest released version of Coq; we have a trunk branch that tracks trunk, but on which no new development happens; someone (me?) maintains the trunk branch either by rebase+force-push or by merging master into it; when a new stable version of Coq is released, we branch off the old version from master (so, when 8.7 is released, we branch off v8.6), and merge trunk into master and update master to track the new version.

I’m partial to either 2 or 4; 2 if we want to support developments on trunk before it’s released (e.g., experiments with universe cumulativity, induction-recursion, etc), or 4 if we want to support trunk only for Coq’s ci-testing.

Updated 26/06/2017 16:53 3 Comments

ReMM score question

charite/hyperSMURF

Hi,

I’m not sure this is the right repo; I am happy to open this issue anywhere else if needed.

In the ReMM-file ReMM.v0.3.1.tsv.gz, I cannot find a score for the position 14:7180825-7180827C>A.

Can you help me debug this? Is this a valid position? If yes, why is it not in ReMM? If no, why not?

Updated 26/06/2017 15:20

didUploadFileAtPath is called before file is totally written

swisspol/GCDWebServer

iOS starts the WebDAV server. Mac Finder connects to the WebDAV server. Copy a file from the Mac to the iOS device. It seems didUploadFileAtPath is called before the file is totally written to the iOS file system. In didUploadFileAtPath, when I try to create a UIImage with the uploaded file path, 0 bytes are found in the uploaded file. Please move didUploadFileAtPath to after the whole file has been written to the iOS file system.

Updated 26/06/2017 17:56 1 Comments

program_id mappings

uoft-tapp/tapp

In the CHASS JSON (applicant data), there is a field called program_id. Is it possible that we can either get a list of mappings of program_id to program name, or have the program name added to the CHASS JSON (whichever one is easier on you)?

Updated 26/06/2017 15:06

Android Lint failed with Kotlin 1.1.3, Activity/Fragment written by Java

hotchemi/PermissionsDispatcher

Overview

  • Android Lint failed with Kotlin 1.1.3
  • It seems that “NeedOnRequestPermissionsResult” check does not work properly in Activity or Fragment written in Java

Expected

  • Lint is working correctly

Actual

  • Lint is not working correctly

Environment

  • PermissionsDispatcher 2.4.0
  • Kotlin 1.1.3

Reproducible steps

I wrote sample project. https://github.com/ryugoo/PDKT113

Updated 27/06/2017 01:25 3 Comments

Recover accidentally deleted macro; deleted macro instead of version

TimothyLuke/GnomeSequencer-Enhanced

Yes, I did that: I accidentally clicked new instead of the last version tab, then in a nanosecond executed delete instead of delete version.

Now I still see my macro in the .bak file, but syntactically it is obviously different from what seems to be required for import, and of course it won’t import. Is there a list of elements I need to change for a successful import? Or any tips for recovering an accidentally deleted macro?

Apologies if I am the only one to have done this… I had heavily customized a macro with a number of versions and don’t want to go through all of that again right now.

I have considered just closing wow and trying to do surgery on the lua file but I don’t want to do that if I can avoid it.

Updated 26/06/2017 23:41 2 Comments

[Question] After adding LUIS integration the bot works only in web chat, not in Skype and Facebook

Microsoft/BotBuilder

Please help me with this.

Here, greeting is my intent, but it’s only working in the emulator locally; on Azure it’s working with the web chat only.

FB and Skype don’t throw any error, but they don’t respond with anything.

bot.dialog('greeting', function (session, args) {
    session.send("Hello hope you are good ,Please start finding nearest metro by messaging me 'PlaceName City' like nearest gip noida or closest to akshardham or way you want");
    greeting = "";
}).triggerAction({
    matches: 'greeting'
});

Thanks

Updated 26/06/2017 15:11

Project design and scope

gdgnqnnt/EventsUnc

Are we going to discuss this topic in a meeting, or can we keep making progress on it here? (If it’s a topic for class, don’t hesitate to close my issue.)

It would be good to start looking at what can be done in this repo, or where help is needed. For that we should have at least a minimal outline.

  • [ ] Requirements
  • [ ] Use cases
  • [ ] Component structure
  • [ ] Scope of the components
  • [ ] Client stack
  • [ ] Server stack
  • [ ] Use the Projects tab or a tool like Asana… Trello… or any tool with a Kanban format.
Updated 26/06/2017 14:40

Undefined index while running the cron job

liebig/cron

Hi, I have a strange error in my cron job. At the beginning, the cron job was working fine; at that time I defined the variables normally, like $ck_host='abc';. But now I have changed the code to read the same variable from Laravel’s .env file, like $ck_host=$_ENV['CK_HOST'];. When I run it in my browser it works fine, but in the cron job it says undefined index: CK_HOST. I have attached an image of the log. Note: I am using vlucas/phpdotenv to access the env file.

imgpsh_fullsize

Thanks in advance for help.

Updated 26/06/2017 15:10 2 Comments

LoRaWAN node versioning?

brocaar/loraserver

I would love some feedback regarding node versioning. The LoRa Alliance has just updated the LoRaWAN Regional Parameters. In this document the TXPower table was updated. From the changelog:

expressed all powers either as EIRP or as conducted power depending on regions

This itself is not an issue. The big issue is that they also change the delta between TX Power indices. For example in the old situation going from TXPower 0 to 1 (EU band, but applies to some other ISM bands too) would mean a step of -6dB, this has been changed to steps of -2dB. Although this allows a finer control of the TX Power, this change is not backwards compatible.

This means that when LoRa Server implements these new deltas and ADR is turned on, it thinks it is asking a node to lower its TX power by 2dB, but for older nodes this could actually mean 6dB, and if no link margin is left, this could disconnect the node.

I’ve asked this question to somebody of the LoRa Alliance and the feedback was:

The assumption is that the server MUST know the device’s LoRaWAN version and the regional param revision. This information must be provided out-of-band when the device is registered.

This would mean that for each node, you need to know the:

  • exact LoRaWAN version (e.g. 1.0.2)
  • exact LoRaWAN Regional Parameters version (e.g. 1.0 or 1.0.2)

To me this doesn’t feel like the way to go, as most nodes don’t even expose this information. Also, as an end-user I would expect a LoRaWAN node to work with a LoRaWAN network without having to worry about LoRaWAN 1.0 vs 1.0.2, and the same for the LoRaWAN Regional Parameters. To me this should be handled by the protocol.
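
To make the problem concrete, here is a tiny illustrative sketch (Python, purely for illustration; this is not LoRa Server code, and the revision strings are just the examples mentioned above) of how the same TXPower command can mean different things depending on which Regional Parameters revision the node implements:

# Illustrative sketch only, not LoRa Server code: the dB step between
# TXPower index 0 and 1 in the EU band depends on which Regional
# Parameters revision the node implements.
def txpower_step_db(reg_params_revision):
    if reg_params_revision == "1.0":
        return -6  # old revision: index 0 -> 1 lowers output power by 6 dB
    return -2      # newer revision: index 0 -> 1 lowers output power by 2 dB

# The server believes it is requesting a 2 dB reduction...
print(txpower_step_db("1.0.2"))  # -2
# ...but an older node applies a 6 dB reduction for the same MAC command.
print(txpower_step_db("1.0"))    # -6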

Please share your feedback / opinion / experience on this!

Updated 26/06/2017 18:09 1 Comments

Field boost on query and custom sorting

codelibs/fess

Hi! Currently my team and I are using a search engine that runs on SOLR. However, that search engine has several problems, so we’re currently analyzing Fess as a possible replacement.

However, we have a question regarding boosting on fields. In the current system we have a boost factor of 5 on the title and a boost factor of 1 on the content (so, no boosting of the content). We are trying to apply the same boost in Fess, so the results from both systems would appear the same. However, that doesn’t seem to happen…

The way we tried to apply boosting on the title and the content in Fess was updating the following properties in the fess_config.properties file:

# boost
query.boost.title=5.0
query.boost.title.lang=5.0
query.boost.content=1.0
query.boost.content.lang=1.0

However, the results are displayed in the same order as if we put the factor 5 on content and factor 1 on title, or if we leave these properties with their initial values (0.2, 1.0, 0.1, 0.5).

So, the question is: are we applying the boost in the correct way? Or should this be done any other way? (by the way, what is the difference between the property and property.lang in these boost properties?)

The second question is: is there a way to add custom sorting, or is Fess restricted to the sorting terms described in the docs here? For example, we would like to sort the results by how many times the search terms appear in the title, in descending order. On a more complex side, and ideally, we would like to combine that value with the length of the title, so the search results would show first the documents whose titles have both a small length and a high number of occurrences of the keywords. Is something like this possible? If so, how can we do it?

Updated 26/06/2017 17:22 2 Comments

What should the various attributes of a course object be called?

Trim21/sdu_bkjws
{
    "kch": "sd00920070",
    "kcm": "线性代数",
    "kxh": 301,
    "xh": "201500100067",
    "kssj": "20170102",
    "jsh": "200799012898",
    "xnxq": "2016-2017-1",
    "xf": 3.0,
    "xs": 48.0,
    "kcsx": "必修",
    "kscj": 100.0,
    "kscjView": "100.0",
    "kccj": null,
    "cxbkbz": null,
    "sfgd": "否",
    "bz": null,
    "qmcj": "75",
    "lrsj": "2017011017: 33: 02",
    "czr": "200799012898",
    "pscj": 18.0,
    "sycj": 0.0,
    "qzcj": 0.0,
    "tdkch": "sd00920070",
    "kslx": "考试",
    "bz2": null,
    "bz3": null,
    "kclb": null,
    "wfzdj": "A+",
    "wfzjd": "5.0",
    "id": null
}

A course object looks like this. The attributes that are null can simply be removed, but what would be good names for the useful ones, such as the credits, the score, the 5-point-scale GPA, the 5-point-scale grade, and so on? :-|
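
For example, one naming I was considering (just a guess based on the sample values above, so please correct me) looks roughly like this:

# One possible naming, guessed from the sample values above.
FIELD_NAMES = {
    "kcm": "course_name",      # e.g. "线性代数"
    "xf": "credit",            # credits, e.g. 3.0
    "kscj": "score",           # score, e.g. 100.0
    "wfzjd": "gpa_5_point",    # 5-point-scale GPA, e.g. "5.0"
    "wfzdj": "grade_5_point",  # 5-point-scale grade, e.g. "A+"
}

def rename_course(raw):
    """Drop null attributes and rename the known keys."""
    return {FIELD_NAMES.get(k, k): v for k, v in raw.items() if v is not None}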

Updated 26/06/2017 15:23 2 Comments

Taint Analysis works in tests, but not when packaged?

find-sec-bugs/find-sec-bugs

Created a custom detector class to catch certain commands with taint analysis, like

public class X extends BasicInjectionDetector {

    public X(BugReporter bugReporter) {
        super(bugReporter);
        loadConfiguredSinks("X.txt", "COMMAND_INJECTION");
    }

    @Override
    protected int getPriority(Taint taint) {
        System.out.println(taint);
        if (!taint.isSafe() && taint.hasTag(Taint.Tag.COMMAND_INJECTION_SAFE)) {
            return Priorities.IGNORE_PRIORITY;
        } else {
            return super.getPriority(taint);
        }
    }
}

And included it in findbugs.xml and messages.xml.

When I run this in tests, everything works fine. But when I run the packaged plugin in IntelliJ against actual code, it fails. I think it’s something to do with how I’m setting up the taint analysis, but all I’ve done is provision the new class and include it. Is there any potential misconfiguration I’ve made?

Updated 26/06/2017 20:25 2 Comments

Table-Width isn't 100% on persistentLayout

olifolkerd/tabulator

Hi Oli, I have an HTML page with two Tabulator tables. When I use the following Tabulator options, #table1 is not rendered with 100% width.

table1: movableColumns: true, persistentLayout: "cookie", persistentLayoutID: "table1"
table2: movableColumns: true, persistentLayout: "cookie", persistentLayoutID: "table2"

If I remove these Tabulator options from table1, the table is rendered correctly with 100% width. Do you have an idea?

Thanks!

Updated 26/06/2017 20:07 2 Comments

Results vs Resources

LDMW/app

As a user, when I come onto the site having not performed a search, I expect to see ‘Showing x resources’ rather than ‘Showing x results’, as it is confusing to see ‘results’ for a search I have not done.

  • [x] Change ‘results’ to ‘resources’ if a search has not been performed
Updated 26/06/2017 14:53 1 Comments

Exception in `/tmp/mexec/tmp-script.sh`

Shippable/support

Description of your issue:

https://app.shippable.com/bitbucket/finovertech/factern-data-library/runs/221/1/console

With this current build I either see timeouts or this shippable script exception (after everything seemed to build fine):

Console size exceeds 64 MB limit. Truncated from here.
ERROR:script_runner - script_runner:Command failed : ssh-agent bash -c 'ssh-add /tmp/ssh/00_sub;ssh-add /tmp/ssh/01_deploy; cd /root && /root/dcbe81c7-3b49-4ecf-b967-c2334758cdc3.sh'
Exception Script failure tag received
ERROR:script_runner - script_runner:Command failed : ssh-agent bash -c 'ssh-add /tmp/ssh/00_sub;ssh-add /tmp/ssh/01_deploy; cd /root && /root/dcbe81c7-3b49-4ecf-b967-c2334758cdc3.sh'
Exception Script failure tag received
echo Container c.exec.XXXXXXXXXXXXXX.221.1 exited with 99
Container c.exec.XXXXXXXXXXXXXX.221.1 exited with 99
sudo docker rm -fv $CONTAINER_NAME
c.exec.XXXXXXXXXXXXXX.221.1
Debug logs
ScriptType:setup_integrations|msg:/tmp/mexec/tmp-script.sh: line 325: [: ==: unary operator expected
ScriptType:collect_stats|msg:Identity added: /tmp/ssh/00_sub (/tmp/ssh/00_sub)
ScriptType:collect_stats|msg:Identity added: /tmp/ssh/01_deploy (/tmp/ssh/01_deploy)
ScriptType:on_start_job_envs|msg:Identity added: /tmp/ssh/00_sub (/tmp/ssh/00_sub)
ScriptType:on_start_job_envs|msg:Identity added: /tmp/ssh/01_deploy (/tmp/ssh/01_deploy)
ScriptType:boot|msg:Identity added: /tmp/ssh/00_sub (/tmp/ssh/00_sub)
ScriptType:boot|msg:Identity added: /tmp/ssh/01_deploy (/tmp/ssh/01_deploy)
Updating job status
Successfully updated job status
Successfully validated input
Preparing pretext
Validating input
Validating input
Successfully generated pretext
Preparing text
Preparing text
Preparing pretext
Successfully generated text
Sending slack message
Successfully sent slack message
Sending slack message

where XXXXXXXXXXXXXX is a redaction of the internal project name.

Any idea how we can diagnose this problem?

Updated 26/06/2017 16:03 1 Comments

Is it possible to run Robot Framework unittests with Pytest?

pytest-dev/pytest

Hi guys! Could you please help with this Stack Overflow question?

I would like to run all the unit tests from the Robot Framework repo with pytest, but when I call pytest utest/ I get nothing but a lot of errors. When I point pytest at single test files, e.g. pytest utest/api/test_logging_api.py, it works in many cases, but not in all. E.g. the following one does not work:

pytest utest/api/test_run_and_rebot.py
====================================== test session starts ======================================
platform linux2 -- Python 2.7.13, pytest-3.1.2, py-1.4.34, pluggy-0.4.0
rootdir: /home/wlad/_GITHUB/robotframework, inifile:
collected 0 items / 1 errors 

============================================ ERRORS =============================================
_______________________ ERROR collecting utest/api/test_run_and_rebot.py ________________________
ImportError while importing test module '/home/wlad/_GITHUB/robotframework/utest/api/test_run_and_rebot.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
utest/api/test_run_and_rebot.py:18: in <module>
    from resources.runningtestcase import RunningTestCase
E   ImportError: No module named resources.runningtestcase
!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!
==================================== 1 error in 0.34 seconds ====================================

I think it’s because run.py gets some stuff from the atest folder which does not happen when I just call pytest utest/.
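
A possible workaround I am wondering about (a minimal sketch, assuming the resources package lives directly under utest/ and that the repo does not already handle this some other way) would be a conftest.py in utest/ that puts that directory on sys.path:

# utest/conftest.py (sketch): make `resources.runningtestcase` importable
# when pytest collects the whole utest/ tree from the repository root.
import os
import sys

sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))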

Additional information

  • I am a python beginner
  • OS: Ubuntu 17.04 - VirtualBox VM on a Windows 7 64-bit host.

(PYTEST_RF) wlad@wlad-VirtualBox:~/_GITHUB/robotframework/utest$ pip list
Package         Version            Location
pip             9.0.1
pkg-resources   0.0.0
py              1.4.34
pytest          3.1.2
robotframework  3.0.3.dev20170213  /home/wlad/_GITHUB/robotframework/src
setuptools      36.0.1
wheel           0.29.0

Updated 26/06/2017 13:57 1 Comments

Global & local package names, feedback wanted

zetavm/zetavm

Soliciting opinions & feedback regarding this issue. This is a problem that we have to address before the Zeta package manager goes online, and it should preferably be solved early on.

Currently, when you import a package in Zeta, there’s a non-trivial amount of logic going on in packages.cpp: https://github.com/zetavm/zetavm/blob/master/vm/packages.cpp#L496 https://github.com/zetavm/zetavm/blob/master/vm/packages.cpp#L556

I have two regexes in there to validate the package path format. I want to force package paths to be of the form "foo/bar/bif/N", where N is a version number. One of the issues is that people may want to import local files as packages. This conflicts with my desire to standardize the paths of packages in the packages directory, i.e. the "standard" packages that come with the VM or will be managed by the package manager.

I’m starting to think that what we probably need is a different package name syntax for global (non-local) packages: the core packages under the packages directory, i.e. those that will be managed by the VM and the package manager.

I was thinking that we could force global package names to begin with a colon character, like this:

var io = import ":core/io/0";

This would be in contrast to local package names, which can be any local file path:

var myPkg = import "/user/foobar/../some_unix_path.pls";

Having a separate format for non-local paths will simplify the path validation logic, and it might have some security benefits: it becomes more difficult to accidentally import a local package when you wanted to import a global one, and vice versa.

It is technically possible on Unix/Linux to create a path or file name with a colon in it, but with this syntax, any package whose name starts with ":" will be looked up as a global package. To look up a local module whose name contains a colon, you would do:

var myModuleWithAWeirdName = import "./:colonFileName.pls";
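
To make the distinction concrete, a rough sketch of the classification (Python, purely illustrative; the regex is hypothetical and is not the one in packages.cpp) could look like this:

# Hypothetical sketch of the proposed rule: names starting with ":" are
# global packages of the form ":foo/bar/N", everything else is a local path.
import re

GLOBAL_PKG_RE = re.compile(r'^:[a-z0-9_]+(/[a-z0-9_]+)*/\d+$')

def classify_import(path):
    if path.startswith(':'):
        if not GLOBAL_PKG_RE.match(path):
            raise ValueError('malformed global package name: ' + path)
        return 'global'
    return 'local'

print(classify_import(':core/io/0'))                       # global
print(classify_import('/user/foobar/some_unix_path.pls'))  # local
print(classify_import('./:colonFileName.pls'))             # local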

Updated 27/06/2017 01:55 2 Comments

Encrypted rooms/chats

epiphyte/matrix-d-api

https://matrix.org/git/olm/about/docs/olm.rst

Olm: A Cryptographic Ratchet

An implementation of the double cryptographic ratchet described by https://whispersystems.org/docs/specifications/doubleratchet/.

need to:

  • investigate building/binding ^ for D - possible?
  • if yes - figure out what this means for:
    • keys handling
    • passing/communicating keys
    • response handling (won’t be JSON, I assume…)
    • etc.
  • lots of testing

Updated 26/06/2017 13:01

quack.php doesn't run

aidantwoods/RPi0w-keyboard

Copying this to a separate issue for searchability, quoting @Ax3l-91

https://github.com/aidantwoods/RPi0w-keyboard/issues/1#issuecomment-310905023

Hi again @aidantwoods,

I wanted to try your quack.php script but I get this error, relating to this line:

function translate(string $l) : array

PHP Parse error: syntax error, unexpected ':', expecting '{' in /home/pi/quack.php on line 47

Thanks in advance for your help!

Updated 26/06/2017 12:59 1 Comments

Cannot access my Bitbucket subscription

Shippable/support

I can no longer access my BB subscription.

My BB username is kamilce. I’m an admin for the u9 BB team. I have granted OAuth access and I can see Shippable among authorised apps in my profile settings at https://bitbucket.org/account/user/kamilce/api. I can also see BB among identities in my Shippable dashboard (new UI) at https://app.shippable.com/accounts/540f414740b70914000b1243/settings.

I can’t see the subscription, nor any of my BB teams, in either the old or the new UI.

I’ve tried deleting and re-adding the OAuth integration, logging out and in again, and removing and re-adding the account integration; nothing helps.

(EDIT: obviously this has been working great for me for the past year or two.)

Updated 26/06/2017 17:07 1 Comments

What does the term "license possible" mean?

krysnuvadga/license-coverage-grader

@kestewart From the rough pseudocode I was given by Kate, I have to get the "total number of licenses possible", but I don’t understand the term "license possible".

main() {
    num_total_files = 0
    num_source_files = 0
    num_license_concluded = 0
    num_license_possible = 0

    print package name

    for every file in package {
        ++num_total_files;

        if is_source(file) {
            if worth_counting(file) ++num_source_files;
            if license_conclude from SPDX for file != (NONE or NOASSERTION) then
                ++num_license_concluded;
            if license_possible from SPDX for file != (NONE or NOASSERTION) then
                ++num_license_possible
        }
    }
}
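
For my own understanding, here is a rough Python rendering of that pseudocode (the helpers and the SPDX field names are placeholders; in particular I am assuming "license possible" maps to the per-file "license info in file" value rather than the concluded license, which is exactly the part I would like confirmed):

# Rough rendering of the pseudocode above; is_source/worth_counting are
# placeholder heuristics and spdx_info is an assumed {filename: {...}} dict.
def is_source(path):
    return path.endswith(('.c', '.h', '.py', '.sh'))  # placeholder heuristic

def worth_counting(path):
    return True  # placeholder: a real check might skip empty/generated files

def grade_package(files, spdx_info):
    num_total_files = num_source_files = 0
    num_license_concluded = num_license_possible = 0
    for f in files:
        num_total_files += 1
        if not is_source(f):
            continue
        if worth_counting(f):
            num_source_files += 1
        if spdx_info[f].get('license_concluded') not in (None, 'NONE', 'NOASSERTION'):
            num_license_concluded += 1
        if spdx_info[f].get('license_info_in_file') not in (None, 'NONE', 'NOASSERTION'):
            num_license_possible += 1
    return num_total_files, num_source_files, num_license_concluded, num_license_possible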
Updated 26/06/2017 15:00 3 Comments

argument statement doesn’t work with e_() - bug or feature?

vermaseren/form

Hello.

Why does this code:

FORM 4.1 (May 24 2017, v4.1-20131025-346-gcf71752) 64-bits  Run: Mon Jun 26 15:15:48 2017

index i,j,k;
Cfun f;

local E = f(i)*e_(i,j,k);
argument;
id i = j;
endargument;

print;
.end

Time = 0.00 sec    Generated terms = 1
       E           Terms in output = 1
                   Bytes used      = 64

E = f(j)*e_(i,j,k);

0.00 sec out of 0.00 sec

give the result f(j)*e_(i,j,k); and not f(j)*e_(j,j,k); (= 0)?

Updated 26/06/2017 19:31 1 Comments

Uncaught Error: Class 'Paypal\Rest\ApiContext'

paypal/PayPal-PHP-SDK

Good morning, I am getting this message:

Fatal error: Uncaught Error: Class 'Paypal\Rest\ApiContext' not found

and I keep searching how to fix it but nothing is working.

Here is my code

<?php

require 'vendor/autoload.php';

define('SITE_URL', 'https://mysite.com/products.php');

$paypal = new Paypal\Rest\ApiContext(
    new \Paypal\Auth\OAuthTokenCredential('CLIENT_ID', 'SECRET_ID')
);

Can you help me? Thank you so much

Updated 26/06/2017 22:06 4 Comments

Feedback on screencast tutorial.

dwyl/video

Hey, I’ve created this screencast on UI Tests in Swift 3.0 and wanted to get feedback from everyone at dwyl.

You can watch the video below; it will take up 12 minutes of your time, but the feedback will be really, really important: https://drive.google.com/open?id=0BzAgO1bmVDtpdUgzMjNGQ1YzMk0

Here are some of the things that we’ve noticed already and that will require some changes:

  • Default screen resolution of 1920 x 1080 for all videos
  • Strict format of the video #32

Updated 26/06/2017 12:00

Formats for tutorials and screencasts

dwyl/video

So we need to decide upon a format for all the learning videos that we’ll be creating. Here is the rough format proposed by @nelsonic in the discussion we had earlier.

  1. Brief intro (30 sec): your name and the topic you are covering
  2. Show the end result (1 min): what the user will see at the end of the tutorial; this is important, as we don’t want the user to sit through the whole video if the end result is not what they are looking for
  3. Run through the tutorial (4 - 5 min)
  4. Retrospective / debrief: tell them what they’ve achieved and also to find us on GitHub etc.

This would mean that individual videos are no longer than 6 - 7 mins. If a video has more content, we can split it up into parts (e.g. part 1, part 2).

What do you think of the format? Feedback Welcome!

Updated 26/06/2017 11:59
