Contribute to Open Source. Search issue labels to find the right project for you!

Demo site is down

beancount/fava

The demo site is failing with the following error:

This website is hosted by PythonAnywhere, an online hosting environment. Something went wrong while trying to load it; please try again later.

Updated 30/04/2017 11:21

Segfault when reading wrong type from channel

ocsigen/lwt
OS: macOS 10.12.4
OCaml: 4.04.1 (installed from Homebrew)
Lwt: 2.7.1

I’m trying to do the following (this is from a utop session):

```
#require "lwt";;
open Lwt.Infix;;

let (ic, oc) = Lwt_io.pipe ();;

let read_data ic =
  Lwt.async (fun () ->
    let rec loop () =
      Lwt_io.read_value ic >>= fun v ->
      print_endline ("Data: " ^ v);
      loop ()
    in
    loop ());;

read_data ic;;
Lwt_io.write_value oc "msg1";;  (* This works fine, i.e. the message is being consumed *)
Lwt_io.write_value oc "msg2";;  (* This works fine, i.e. the message is being consumed *)
Lwt_io.write_value oc 5;;
zsh: segmentation fault  utop
```

Not sure what I’m missing here; also, apologies if this was already opened, but I wasn’t able to find it. Thanks.

Updated 29/04/2017 18:56 7 Comments

Doc: Clarify count for index.mapping.total_fields.limit

vagnerclementino/elasticsearch

https://www.elastic.co/guide/en/elasticsearch/reference/5.3/mapping.html#mapping-limit-settings

index.mapping.total_fields.limit The maximum number of fields in an index. The default value is 1000.

Currently, the documentation simply says that index.mapping.total_fields.limit is the maximum number of fields in an index. End users who look at the mappings API output and count fields for validation will notice that ES returns a limit exception before the perceived number of fields hits 1000.

This is because index.mapping.total_fields.limit includes the count of meta fields, and most of these do not show up in the mappings API output. It would be nice to clarify this in the documentation, similar to how we clarify that index.mapping.depth.limit counts the root level as one level.
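
In the meantime, the limit is a dynamic index setting and can be raised per index. A minimal sketch with the Python client (index name and new limit are illustrative):

```python
from elasticsearch import Elasticsearch  # assumes elasticsearch-py is installed

es = Elasticsearch()

# index.mapping.total_fields.limit is a dynamic index setting, so it can be
# raised on a live index. The counted total includes meta fields, so leave
# headroom above the number of fields visible in the mappings API output.
es.indices.put_settings(
    index="my_index",  # hypothetical index name
    body={"index.mapping.total_fields.limit": 2000},
)
```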

Updated 29/04/2017 15:58

Update the More Like This Query documentation for the unlike parameter

vagnerclementino/elasticsearch

Update the documentation on the “More Like This Query” page to make it clear that the “unlike” parameter can use the Multi GET API, just like the “like” parameter.

Also, in the first example on the page, the “like” parameter is being used the way I imagine “like_text” is supposed to be used. Perhaps the documentation for the More Like This Query is in need of some larger updates overall.

Original issue: Currently the More Like This Query can operate on two types of input: text via the like and unlike parameters, or full documents through the ids parameter.

The ids parameter allows you to input a number of document ids into the query, which will return documents similar to the ones provided. There is, however, in contrast to the text input options, no way to provide document IDs for documents unlike the desired results.

I propose we break the ids parameter out into two parameters, much like the text parameters:

  • likeIds
  • unlikeIds
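
For reference, here is roughly what a More Like This query mixing like and unlike document specifications looks like in 5.x, sketched with the Python client (index, type, ids, and fields are hypothetical):

```python
from elasticsearch import Elasticsearch  # assumes elasticsearch-py is installed

es = Elasticsearch()

# "like" and "unlike" both accept Multi GET-style document specifications
# as well as free text; all names below are hypothetical.
resp = es.search(index="imdb", body={
    "query": {
        "more_like_this": {
            "fields": ["title", "description"],
            "like": [{"_index": "imdb", "_type": "movies", "_id": "1"}],
            "unlike": [{"_index": "imdb", "_type": "movies", "_id": "2"}],
            "min_term_freq": 1,
        }
    }
})
print(resp["hits"]["total"])
```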

Updated 29/04/2017 15:58

Update index_.asciidoc: "result" : created -> "result" : "created"

vagnerclementino/elasticsearch

"result" : created -> "result" : "created"


  • Have you signed the contributor license agreement?
  • Have you followed the contributor guidelines?
  • If submitting code, have you built your formula locally prior to submission with gradle check?
  • If submitting code, is your pull request against master? Unless there is a good reason otherwise, we prefer pull requests against master and will backport as needed.
  • If submitting code, have you checked that your submission is for an OS that we support?
  • If you are submitting this code for a class then read our policy for that.
Updated 29/04/2017 15:58

Scoring doc should warn about precision

vagnerclementino/elasticsearch



Elasticsearch version: 5.3

Plugins installed: []

JVM version: N/A

OS version: N/A

Describe the feature: The documentation on scoring should warn that scores are represented as single-precision floating-point numbers and hence have only 24 bits of precision. For example, constant-score queries assigning scores from 100,000,001 to 100,000,010 will all have their scores rounded to 100,000,000.

There doesn’t seem to be anywhere obvious to write this in the documentation. Perhaps a page devoted to the topic could be added after the page on “Query and filter context”: https://www.elastic.co/guide/en/elasticsearch/reference/5.3/query-filter-context.html

The following example highlights the problem:

POST /foo/foo/_bulk
{"index": {}}
{"a": "apple"}
{"index": {}}
{"a": "pear"}
{"index": {}}
{"a": "banana"}
{"index": {}}
{"a": "grape"}
{"index": {}}


GET /foo/foo/_search
{
  "query": {
    "bool": {
      "should": [
        {
          "constant_score": {
            "filter": {
              "match": {
                "a": "apple"
              }
            },
            "boost": 100000001
          }
        },
        {
          "constant_score": {
            "filter": {
              "match": {
                "a": "pear"
              }
            },
            "boost": 100000002
          }
        },
        {
          "constant_score": {
            "filter": {
              "match": {
                "a": "banana"
              }
            },
            "boost": 100000003
          }
        },
        {
          "constant_score": {
            "filter": {
              "match": {
                "a": "grape"
              }
            },
            "boost": 100000004
          }
        }
      ]
    }
  }
}

The scores for all four documents actually end up as 100,000,000 and so the actual ranking becomes arbitrary.
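
The rounding is easy to confirm outside Elasticsearch; a quick sketch in Python (assuming NumPy is available):

```python
import numpy as np

# A float32 significand has 24 bits, so near 1e8 the gap between adjacent
# representable values is 8; all four boosts collapse to the same score.
for boost in (100000001, 100000002, 100000003, 100000004):
    print(boost, "->", float(np.float32(boost)))  # prints 100000000.0 each time
```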

Updated 29/04/2017 15:56

[feature request] Make the request cache more memory efficient

vagnerclementino/elasticsearch

Describe the feature: Currently, when the request cache is enabled, each node holds its request results on its own. Could we make this more memory efficient in the following ways:

  1. If the request cache is enabled, route each request to a node based on a hash of the request, so that identical requests land on the same node and can use/hit the same cache entry.
  2. Alternatively, leave routing unchanged, but at the time we fetch from or put to the cache, make it a distributed cache built on the current facilities.
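
A toy sketch of idea 1, purely illustrative (node names hypothetical): deterministic hash-based routing sends identical requests to the same node, so they can share one cache entry.

```python
import hashlib

nodes = ["node-1", "node-2", "node-3"]  # hypothetical node names

def route(request_body: str) -> str:
    # The same request body always hashes to the same node.
    digest = hashlib.sha1(request_body.encode("utf-8")).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

# Identical requests land on the same node and can hit the same cache.
assert route('{"query": {"match_all": {}}}') == route('{"query": {"match_all": {}}}')
```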

Updated 29/04/2017 15:56

Fast vector highlighting span queries for nested documents

vagnerclementino/elasticsearch



Elasticsearch version: 5.3.1

Plugins installed: [mapper-murmur3, mapper-size, x-pack]

JVM version: OpenJDK 1.8.0_131

OS version: CentOS Linux release 7.3.1611 (Core)

Description of the problem including expected versus actual behavior: Fast vector highlighting does not work on a nested document with span_near or span_first queries (other span queries may be affected).

Steps to reproduce:

  1. Create the index:

```
PUT /test
{
  "settings": {
    "index": {
      "number_of_shards": 1,
      "number_of_replicas": 0
    }
  },
  "mappings": {
    "object": {
      "properties": {
        "document_part": {
          "type": "nested",
          "properties": {
            "text": {
              "type": "text",
              "term_vector": "with_positions_offsets"
            }
          }
        }
      }
    }
  }
}
```

  2. POST /test/object/1
    {
      "document_part": {
        "text": "Here is some text to be analyzed and later queried on.  Let us use everyones favourite... The cat sat on the mat near the dog.  The cat saw the dog."
      }
    }
    
  3. GET /test/object/_search
    {
    "_source": false,
    "query": {
     "nested": {
       "path": "document_part",
       "query": {
         "span_near": {
           "clauses": [
             {
               "span_term": { "document_part.text": "cat" }
             },
             {
               "span_term": { "document_part.text": "dog" }
             }
           ],
           "slop": 5,
           "in_order": false
         }
       },
       "inner_hits": {
         "_source": false,
         "highlight": {
           "fields": {"document_part.text": {}}
         }
       }
     }
    }
    }
    
  4. Note that in the response to the _search in step 3, a hit is found, but no highlight. With just a span_term query the highlight is seen, and when forcing the highlight type to be “plain” the highlight is seen.
Updated 29/04/2017 15:55

QueryDSLDocumentationTests should be imported into the java API docs

vagnerclementino/elasticsearch

Right now there is a big comment at the top of QueryDSLDocumentationTests saying that we hope the lines in there match the docs in the java API documentation. We should really use the include-tagged::{doc-tests}/DeleteDocumentationIT.java[delete-request] style syntax to include it directly into the asciidoc. The tags that you’d end up putting in QueryDSLDocumentationTests would make it super clear that this file is included in documentation. That way someone who missed the comment won’t break the tests.

Updated 29/04/2017 15:55

Build the java query DSL api docs from a test

vagnerclementino/elasticsearch

We’ve had QueryDSLDocumentationTests for a while, but it had a very hopeful comment at the top about how we want to make sure that the examples in the query-dsl docs match up with the tests, yet we never had anything that made sure they did. This changes that!

Now the examples from the query-dsl docs are all built from the QueryDSLDocumentationTests. All except for the percolator example because that is hard to do as it stands now.

To make this easier, this change moves QueryDSLDocumentationTests from core into the high level rest client. This is useful for two reasons:

  1. We expect the high level rest client to be able to use the builders.
  2. The code that builds the docs doesn’t check out all of Elasticsearch. It only checks out certain directories. Since we’re already including snippets from that directory, we don’t have to make any changes to that process.

Closes #24320

Updated 29/04/2017 15:55

Update production notes in docs for Docker

vagnerclementino/elasticsearch

Add info about the base image used and the github repo of elasticsearch-docker.

Clarify that setting memlock=-1:-1 is only a requirement when bootstrap_memory_lock=true and the alternatives we document elsewhere in docs for disabling swap are valid for Docker as well.

Additionally, with the latest versions of docker-ce shipping with unlimited (or high enough) defaults for nofile and nproc, clarify that explicitly setting those per ES container is not required unless they are not defined in the Docker daemon.

Finally, simplify the production docker-compose.yml example by removing unneeded options. One such option is cap_add: IPC_LOCK, which no longer seems to be required in combination with memlock=-1:-1.

Updated 29/04/2017 15:54

Move content about processors to filtering section in the doc

elastic/beats

@monicasarbu I’d like for you to take a quick look at this before I change the docs for the other beats. I’ve streamlined the intro because I want users to know that the prospector-level options exist, but I don’t think we should cover them in very much detail since the main point of the section (after the reorg) is to cover processors. Here’s what the TOC looks like (with some slight changes from your suggested titles because they made the TOC entries harder to scan):

[screenshot: proposed TOC]

Updated 29/04/2017 03:14

Note on updating the astropy conda channel for affiliated packages

astropy/astropy

I was releasing a new version of astroplan this week and noticed that the astropy docs on releasing affiliated packages have no instructions for updating the package version number in the conda channel.

In this PR, I add one extra step to the Releasing an affiliated package section of the docs, directing readers to the astropy/conda-channel-astropy README for details on how to update the package version in the conda channel. At the very least, this will ensure that I don’t bother @mwcraig and @bsipocz for that again 😄

Updated 29/04/2017 12:40

Latest version does not hold multiple errors per field

logaretm/vee-validate

Versions:

  • VueJs: 2.3.0
  • Vee-Validate: 2.0.0-beta.25

Description:

I noticed that the ErrorBag in the latest version does not hold all errors for a field. In other words, if I set rules such as v-validate="'email|min:6'", only one error will display at a time.

I browsed around and found another fiddle that displayed multiple errors per field. Saw that it was using an older version. I’ve provided fiddles for both versions below.

I’m assuming this is not expected, since http://vee-validate.logaretm.com/examples.html#selectors-example clearly shows multiple errors per field? Or maybe I’m not defining something correctly?

Steps To Reproduce:

https://jsfiddle.net/devpake/yrbw8dL5/6/ - This uses vee-validate 2.0.0-beta.25

https://jsfiddle.net/devpake/dqoL3b5r/4/ - This uses vee-validate 2.0.0-beta.18

Updated 28/04/2017 22:14 1 Comments

Sphinx test failing with deprecation warning

astropy/astropy

It has been failing for the past several builds, so it can hardly be transient. I suspect this is the culprit:

DeprecationWarning: Old configuration for backreferences detected 
using the configuration variable `mod_example_dir`

Example failed build: https://travis-ci.org/astropy/astropy/jobs/226804941

@adrn?

Updated 28/04/2017 19:46 1 Comments

Add a whole lot of logos.

DataDog/integrations-core

Docs site, corpsite, and perhaps the App will eventually pull from these.

Perhaps we’ll need more than one size/resolution for each logo? (e.g. logo-small.png, logo-large.png) These are all pretty small right now. Some come from the corporate site, some from the App.

Updated 28/04/2017 19:38

Readme improvements

Azure/BatchLabs
  • Need a description of who the tool is for and a summary of what it does.
  • Make it clear that after cloning the repo, they need to ‘cd’ to the directory where the code was cloned; maybe even state how they control the code destination folder.
Updated 28/04/2017 15:58

Added various linked Stream examples

ioam/holoviews

A number of relatively simple linked stream examples with synthetic data. It might be nice to also have more complex examples which use real data. Feedback on the structure of the examples is welcome. It is also worth considering whether I should add gifs to these notebooks so you can see what they’re meant to do when viewed statically.

Updated 28/04/2017 16:16 1 Comments

Update production notes in docs for Docker

elastic/elasticsearch

Add info about the base image used and the github repo of elasticsearch-docker.

Clarify that setting memlock=-1:-1 is only a requirement when bootstrap_memory_lock=true and the alternatives we document elsewhere in docs for disabling swap are valid for Docker as well.

Additionally, with the latest versions of docker-ce shipping with unlimited (or high enough) defaults for nofile and nproc, clarify that explicitly setting those per ES container is not required unless they are not defined in the Docker daemon.

Finally, simplify the production docker-compose.yml example by removing unneeded options. One such option is cap_add: IPC_LOCK, which no longer seems to be required in combination with memlock=-1:-1.

Updated 28/04/2017 15:45

Remove docker mention in system module docs

elastic/beats

In the system module there is a misplaced paragraph. I think it should be completely removed.

It is strongly recommended to not run docker metricsets with a period smaller than 3 seconds. The request to the docker API already takes up to 2 seconds. Otherwise all the requests would time out and no data is reported.

Updated 28/04/2017 13:17

Polishing and improving the tutorials added in 1.7

ioam/holoviews

This PR aims to improve our new tutorials before 1.7.1 is released, hopefully getting the improved versions of the tutorials live before then. The tutorials to improve are: DynamicMap, Streams and Linked_Streams and they are expected to be read in that order.

I think any of us can use this PR to make the improvements we feel are necessary. Obviously anything that might be controversial should be discussed first!

My first commit improves the listing of available linked stream classes.

Updated 28/04/2017 13:18

Dimension ordering is not preserved when defined

ioam/holoviews

Hi,

According to the docs (http://holoviews.org/Tutorials/Elements.html >> Bars),

To preserve the initial ordering specify the Dimension with values set to ‘initial’, or you can supply an explicit list of valid dimension keys

In dimension.Dimension.values, the doc says:

Optional specification of the allowed value set for the dimension that may also be used to retain a categorical ordering

But the code reads:

```
values = params.get('values', [])
if isinstance(values, basestring) and values == 'initial':
    self.warning("The 'initial' string for dimension values is no longer supported.")
    values = []
all_params['values'] = sorted(list(unique_array(values)))
```

So (1) ‘initial’ cannot be used, and (2) ordering is not preserved (due to the call to sorted).
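
A minimal repro of point (2), assuming holoviews is importable:

```python
import holoviews as hv

# The supplied ordering is lost because the constructor sorts the values.
dim = hv.Dimension('fruit', values=['pear', 'apple', 'banana'])
print(dim.values)  # ['apple', 'banana', 'pear'], not the supplied order
```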

Updated 28/04/2017 11:46 3 Comments

Expo.io setup

infinitered/reactotron

If anyone has made Reactotron work with Expo/Exponent, do you mind sharing the full Exponent setup here?

If it is really working, it would be useful for future reference to have it included in the readme.

Thanks

Updated 29/04/2017 16:43 6 Comments

Doc errors

hyperledger/composer


Context

There are some errors in the tutorial docs

getting started:

https://fabric-composer.github.io/tutorials/getting-started-playground.html

If everything started OK, you should be able to access Fabric Composer Playground by clicking on this link: http://localhost:8080

That’s not true… It’ll be available at docker-machine-ip:8080

Tutorial

https://hyperledger.github.io/composer/tutorials/defining-a-business-network.html

```
{
  "$class": "org.acme.vehicle.auction.Offer",
  "bidPrice": "1000",
  "listing": "listing_1",
  "member": "alice@biznet.org",
}
```

The final comma after "member": "alice@biznet.org" needs to be removed.
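
The trailing comma makes the snippet invalid JSON, which a quick check confirms:

```python
import json

snippet = '{"member": "alice@biznet.org",}'  # note the trailing comma
try:
    json.loads(snippet)
except json.JSONDecodeError as err:
    print("invalid JSON:", err)
```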

Managing your Hyperledger Composer Solution

https://hyperledger.github.io/composer/managing/managingindex.html

A typo in the hyperlink to Issuing an identity results in a 404.

Updated 28/04/2017 09:00 1 Comments

Questions about message persistence

emqtt/emqttd

Environment

  • OS: centos 6.5
  • Erlang/OTP: 19
  • EMQ: 2.1.2

Description

Hi, I have tested EMQ version 2.1.2 for the message and subscription persistence feature. I found that as long as one node survives, the subscriptions and message routing relationships will sync to other restarted nodes. But when all nodes restart at the same time, the subscriptions and message routing relationships disappear. Does EMQ support persisting subscriptions and message routing relationships when all nodes restart?

There is another question: I found that when one node restarts, it can persist messages published from the local node, but it can’t persist messages routed from other nodes. Can EMQ support this situation now? Thanks.

Updated 29/04/2017 01:13

Research on drag and drop

juandjara/open-crono

This task groups the work related to researching the react-dnd library, whose documentation can be found here.

Several tutorials have been worked through and several examples developed in small side projects to understand how this library works, so that it can later be incorporated into the project for the Kanban board card functionality.

Updated 28/04/2017 23:02

Mention that dynamic setter for view's duration won't override everything

fullcalendar/fullcalendar

As mentioned in #3630

$('#calendar').fullCalendar({
  views: {
    flexibleView: {
      type: 'agenda',
      duration: { days: 3 } // won't change
    }
  }
});
$('#calendar').fullCalendar('option', 'duration', { weeks: 1 });

That dynamic setter won’t work because a view’s duration will always take precedence.

Instead do this:

$('#calendar').fullCalendar({
  duration: { days: 3 }, // the default duration for flexible views
  views: {
    flexibleView: {
      type: 'agenda'
    }
  }
});
$('#calendar').fullCalendar('option', 'duration', { weeks: 1 });

Document this!

Updated 27/04/2017 22:25

LS Monitoring UI docs page

elastic/logstash

Our Logstash docs don’t currently link to or mention the monitoring UI. My proposal would be to have a top level Monitoring section that encompasses a new Monitoring UI page (which can link off to the relevant x-pack docs) and the monitoring API endpoints and response payloads.

/cc @dedemorton

Updated 28/04/2017 22:34 3 Comments

Doc of how to adjust op status polling interval

Azure/azure-sdk-for-python

Is there a document and example showing how to control the interval at which the Azure Python SDK polls operations for completion? For example, creation of a managed disk should take only a few seconds, but waiting for the Azure Python SDK to complete appears to take 30 seconds (likely because the polling interval is ~30 seconds). It would be great to be able to adjust the poll interval to decrease the overall run time of the operation.
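
For what it’s worth, clients built on msrestazure expose the long-running-operation poll interval on their configuration object; a sketch under that assumption (credential values are placeholders):

```python
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.compute import ComputeManagementClient

# Placeholder credentials; fill in real values for your tenant.
credentials = ServicePrincipalCredentials(
    client_id="...", secret="...", tenant="..."
)
client = ComputeManagementClient(credentials, "my-subscription-id")

# Default poll interval is 30 seconds; poll every 2 seconds instead so that
# short operations (like managed disk creation) return sooner.
client.config.long_running_operation_timeout = 2
```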

Updated 27/04/2017 21:02 1 Comments

Expanding use of flow for propType, include flow types in package, add flow-typed

callemall/material-ui
  • [x] PR has tests / docs demo, and is linted.
  • [x] Commit and PR titles begin with [ComponentName], and are in imperative form: “[Component] Fix leaky abstraction”.
  • [x] Description explains the issue / use-case resolved, and auto-closes the related issue(s) (http://tr.im/vFqem).

Prerequisite: generated api docs as a baseline, that’s why DialogTitle.md and Typography.md changed.

These initial changes for flow type prop support include:

  1. .babelrc - two plugins to generate propTypes from flow and strip flow types from code
  2. generate-docs-markdown.js - alter to obtain type information from prop.flowType || prop.type
  3. Layout.js - convert to flow and re-gen docs
  4. copy-files.js - add the flow-copy-source to shadow copy source files as .js.flow. This will distribute flow type definitions with the package.
  5. flow-typed for external libdefs - e.g. mocha.

Given this setup, I don’t believe the user of the library will see any changes. For those that use flow, their use of material-ui will automatically be type-checked for any file using flow types with no setup needed - flow auto-recognizes the shadow files.

Review items requested

  • Switching stateless function to PureComponent<DefaultProps, Props, void> in order to get defaultProps type checking. I think this is the right way to go as it is the most bullet-proof. We can stick to stateless functions, but any reference to defaultProps would then not be type-checked.

Implementation notes

The definition of type DefaultProps and type Props is quite specific. This is the only way I have seen it work properly; more info here. Specifically, DefaultProps are all made required and Props relists those same props as optional. The ordering of this statement is important: type Props = DefaultProps & {...}. Any other ordering causes flow errors. This doesn’t seem to be a problem; it’s just a quirk I uncovered.

Docs are generated from comments in whatever type name is bound to props, e.g. type Props

Updated 29/04/2017 10:58 7 Comments

Documentation: Term "development mode" is unclear

pallets/flask

I’m doing some work which requires defining some context hooks for our flask app. When reading the documentation for request contexts* I came across the passage:

In production mode if an exception is not caught, the 500 internal server handler is called. In development mode however…

As a user, I want to know: what is “development mode”? I think it’s debug mode (e.g. app.run(debug=True)). I’ve seen several people use the terms interchangeably when searching for help, but never the flask developers or the official documentation. The flask docs don’t appear to have an index (is it hidden somewhere?), and quick search does not appear to work. A Google site search of the dev docs does not immediately turn up an obvious answer.

IMO, this is a bug in the documentation. Assuming for the sake of argument that I’m right about dev=debug, that passage should be one of:

  1. “In development mode (i.e. with debug=True) however…” or
  2. The text ‘development mode’ is a link to a list of terms & their definitions
  3. Something along those lines.

Maybe if I had carefully read all of the docs from beginning to end, or at least carefully read the tutorial twice, I might not be confused about this. But lots of your users won’t do that. That’s why I call it a bug (it’s sorta like failing to validate input).

*I was in fact reading the docs for the version of flask I’m using, but I’m pointing to the dev-version docs because I presume they represent the current state of the documentation.
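
For reference, the distinction the passage hinges on is the debug flag; a minimal sketch:

```python
from flask import Flask

app = Flask(__name__)

@app.route('/boom')
def boom():
    # With debug=True ("development mode"), an uncaught exception brings up
    # the interactive debugger; without it, the 500 handler is called.
    raise RuntimeError('uncaught')

if __name__ == '__main__':
    app.run(debug=True)
```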

Updated 28/04/2017 02:33 4 Comments

Update the access token usage documentation for the v2 API

release-monitoring/anitya

The work in #401 added support for OpenID Connect, which is built on top of OAuth 2.0 (RFC 6749) and Bearer Token Usage (RFC 6750). While perusing the Bearer Token RFC, I noticed it states that passing the access token as a query parameter SHOULD NOT be used unless it is impossible to transport the access token using the other two options (the Authorization header or the HTTP form-encoded body parameter).

I poked around in flask-oidc and it will search the request in the correct order (Authorization, form, then query parameter), but we only document the query parameter method.

Furthermore, if we do accept a query parameter, the RFC states that we SHOULD add a Cache-Control header with the private option, which I don’t believe we do.

I think we should update the documentation to use the Authorization header approach, and we should push a patch upstream to flask-oidc to automatically attach the correct Cache-Control header if the token arrived via a query parameter.
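
A sketch of the header-based approach with the requests library (token value and endpoint URL are illustrative):

```python
import requests

token = "my-api-token"  # hypothetical token value

# Preferred per RFC 6750: carry the token in the Authorization header
# instead of an access_token query parameter.
resp = requests.get(
    "https://release-monitoring.org/api/v2/projects/",  # illustrative endpoint
    headers={"Authorization": "Bearer " + token},
)
print(resp.status_code)
```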

Updated 27/04/2017 19:11

Document popular processing capabilities

elastic/logstash

Adds doc enhancement requested in https://github.com/elastic/logstash/issues/6820

@acchen97 Note that this content can’t be formatted in tables because the examples would cause text overruns in the formatted doc. The best compromise is definition lists. I’ve broken the content into separate topics, but we could just as easily have one topic. I think separate topics are easier to skim.

@suyograo or @jordansissel Hoping that one of you can also review this. Our plugin docs are woefully lacking in examples and some of the descriptions are hard to follow. :-/ Where possible, I just reused examples from the docs, but in some cases, I had to create new examples. There are a couple of places where I need help coming up with a good example because I didn’t have time to create it myself or felt the example in the existing doc was too complicated for an overview.

ALL: Please respond to questions that I’ve added as comments (addressed to either you or general reviewers).

Here’s a screen grab that shows the basic info architecture of these topics:

[screenshot: info architecture of the topics]

Updated 28/04/2017 19:17

Improve Gammapy conda install docs

gammapy/gammapy

In #994 the install docs were improved a bit, they now look like this: https://gammapy.readthedocs.io/en/latest/install/index.html

The next steps are to introduce an environment.yml for Gammapy as already mentioned in https://gammapy.readthedocs.io/en/latest/install/index.html#development-version and to improve the conda install docs here: https://gammapy.readthedocs.io/en/latest/install/conda.html

This content should be moved over to the Sphinx docs, so that we don’t maintain installation / getting started instructions in the notebooks in parallel: https://nbviewer.jupyter.org/github/gammapy/gammapy-extra/blob/master/index.ipynb#Set-up https://nbviewer.jupyter.org/github/gammapy/gammapy-extra/blob/master/notebooks/tutorial_setup.ipynb

ctapipe has this: https://cta-observatory.github.io/ctapipe/getting_started/index.html#step-4-set-up-your-package-environment but what is missing is how people who just want the stable version, and never clone the code repo, can get the environment.yml.

I’m putting this under the v0.7 milestone.

@joleroi - You or me?

Updated 27/04/2017 17:30

Named route throws Cannot assign to read only property error

vuejs/vue-router

Version

2.5.2

Reproduction link

https://github.com/fandaa/vue-router-bug/commits/master

Steps to reproduce

  1. Clone repo
  2. npm i and npm run dev
  3. Navigate to http://localhost:8080/x and click on go to y

What is expected?

Browser navigated to route /y.

What is actually happening?

Getting TypeError: Cannot assign to read only property 'path' of object '#<Object>' as the error.


You can also start at http://localhost:8080/y and try to navigate to /x by clicking on the links prefixed with go to x.

vue-router seems not to cooperate with vue@2.3.0; it works just fine in my project using vue@2.2.6.

The problem occurs only with named routes; you can try http://localhost:8080/a, where you can navigate to /b and vice versa without any difficulty.


Updated 30/04/2017 13:07 7 Comments

Broken link

pagarme/pagarme-js

https://pagarme.github.io/pagarme-js/module-transactions.html

The update API reference link is pointing to the wrong anchor.

Updated 28/04/2017 20:38 2 Comments

Missing reference links on docs

pagarme/pagarme-js

All endpoints must have a link to the API reference docs. This is used to document the payloads that are available on most endpoints. Without these links, the documentation becomes obscure and provides a poor experience for developers.

Current missing links:

  • https://pagarme.github.io/pagarme-js/module-balance.html
  • https://pagarme.github.io/pagarme-js/module-acquirersConfigurations.html#~all
  • https://pagarme.github.io/pagarme-js/module-acquirers.html
  • https://pagarme.github.io/pagarme-js/module-bulkAnticipations.html
  • https://pagarme.github.io/pagarme-js/module-company.html
  • https://pagarme.github.io/pagarme-js/module-events.html
  • https://pagarme.github.io/pagarme-js/module-invites.html
  • https://pagarme.github.io/pagarme-js/module-payables.html
  • https://pagarme.github.io/pagarme-js/module-subscriptions.html
  • https://pagarme.github.io/pagarme-js/module-user.html
Updated 28/04/2017 20:38 1 Comments

Explain why Dredd tests just 2xx responses by default

apiaryio/dredd

See https://github.com/apiaryio/dredd/issues/557#issuecomment-297723295. The reason is that HTTP defines idempotent methods: OPTIONS, GET, HEAD, PUT, DELETE. When using these methods, by specification you should always get the same response for the same request. Therefore Dredd cannot expect your API to answer an identical request once with a 200 response and a second time with a 400 response. Possibly it could with non-idempotent methods, such as POST and PATCH, but in general, to get a different response, most of the time you need a different request.

In an API description, such as API Blueprint or Swagger, you usually specify a request, a successful response, and possible unsuccessful responses. But to get an unsuccessful response, some special conditions usually need to be met, either in the request sent (invalid data) or related to the state of the server (e.g. a missing database entry, causing a 404). During testing, these special conditions usually cannot be met until manually simulated by hooks.

Therefore in Swagger, Dredd skips non-2xx responses by default, and in API Blueprint, Dredd ignores multiple responses for a single request (it always takes the first one), at the moment. In the case of Swagger, the extra test cases are just skipped and you can unskip them in hooks. In API Blueprint, there’s no way to unskip them; you need to reformat multiple responses into multiple request-response pairs.
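
For the Swagger case, unskipping one of those extra test cases looks roughly like this with the Python hooks package (the transaction name is hypothetical; dredd --names lists the real ones):

```python
import dredd_hooks as hooks

@hooks.before("Users > Create user > 400")  # hypothetical transaction name
def activate_error_response(transaction):
    # Non-2xx Swagger responses are skipped by default; opt this one in.
    transaction['skip'] = False
```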

Updated 27/04/2017 17:11 1 Comments

Reorganize docs

scalameta/scalameta

Currently there’s

  • http://scalameta.org/
  • http://scalameta.org/tutorial
  • https://github.com/scalameta/sbt-semantic-example (see readme)
  • https://github.com/scalameta/scalameta/blob/master/notes/quasiquotes.md
  • https://github.com/scalameta/sbt-macro-example

It would be nice if these documents were available from a single page.

I propose that we

  1. rename tutorial to docs so that it will have the url http://scalameta.org/docs. It’s not really a tutorial anymore, since I recently removed the exercise sessions because they were outdated and based on an ancient fork of scalafix.
  2. inline the quasiquote table into the tutorial around trees http://scalameta.org/tutorial/#Trees
  3. add a section in the tutorial on sbt-scalahost and move the contents from the sbt-semantic-example readme to that new section
  4. move sbt-semantic-example to sbt-scalahost.g8 and make it a giter8 template
  5. move sbt-macro-example to paradise.g8 and make it a giter8 template
Updated 27/04/2017 23:05 2 Comments

"Creating an icon" example section has dead link

Leaflet/Leaflet

What behaviour I’m expecting and which behaviour I’m seeing

The http://leafletjs.com/examples/custom-icons/#creating-an-icon example section has an “L.Icon” link pointing to http://leafletjs.com/examples/reference.html#icon, which returns a 404 File not found.

Updated 27/04/2017 13:04

Adding a top-level examples directory

ioam/holoviews

A major part of improving our docs is to add various examples. The way many libraries structure their examples is in a top-level examples directory (see bokeh and matplotlib). These examples then usually get built into a gallery (see bokeh, matplotlib, and cartopy).

Adding an examples directory also encourages adding examples when new features are added. I’d even suggest that in future new features should be accompanied by small example notebooks or scripts. We are going to be splitting out the different plotting backends but I’d still strongly argue examples for officially supported backends should live on the core repo, where they are all in one place.

We’ve also had various definitions of examples in the past. What I think it should mean in this context is small, self-contained notebooks with at most one or two examples, which are focused on the code, not on telling a story or explaining deeper concepts. That contrasts with quickstart guides, tutorials, and the “examples” that are linked to from holoviews.org/Examples, which are really case studies. My suggestion for the different types of notebooks:

  1. Tutorials - Long, detailed notebooks explaining particular concepts in detail, living in doc/Tutorials. New tutorials should be added to holoviews-contrib and can move to the main repo once polished.
  2. Quickstart guides - Shorter notebooks getting the user started on using a particular feature without providing extensive background. Again should start out in holoviews-contrib but once we have a few I’d suggest creating a User Guide (see bokeh) that provides a quick introduction to holoviews.
  3. Examples - These are what this issue is about, they are short and self-contained and generally should just go straight into the main repo since they don’t need detailed explanation.
  4. Case studies - These are what’s currently on holoviews.org/Examples, and basically show how to apply holoviews to a particular domain or problem. I believe these should all live in holoviews-contrib providing a wide-ranging collection of user examples. Keeping them all in one place this way will encourage us to test and update them for each release.

If we agree on these different formats and where they live, we should settle the structure of the examples. My suggestion is that each example should be implemented for all backends that support the features used in the example. Then each example links to the equivalent versions for other backends. Each example should contain the following information:

  • Table with links to the example implemented using other backends using unicode tickmarks to show supported and unsupported backends
  • List of requirements, e.g. if the data uses bokeh sample data lists that as a requirement
  • A link to the original source of the example if any
  • (Optional) A list of tags to make the examples more searchable

We also want to structure the examples into sensible subfolders. Here’s some subfolders I can currently imagine:

  • apps - bokeh apps and in future maybe matplotlib webagg based apps
  • elements - All the supported elements split out into individual notebooks
  • plotting - Basic examples showing off specific plotting features
  • streams - Various examples using regular streams and linked streams
Updated 27/04/2017 12:31 6 Comments

Update timepicker docs

elastic/kibana

This PR updates the docs for recent changes to the time picker.

Screenshots that contain the time picker have been updated to show the move forward/backward in time buttons, and an explanation of the buttons has been added.

In addition, the “relative” timepicker docs have been updated to explain that both the start and end time can be relative, and both can be in the future as well as in the past.

Closes https://github.com/elastic/kibana/issues/11252.

Updated 28/04/2017 21:25 1 Comments
