Contribute to Open Source. Search issue labels to find the right project for you!

Support for multiple channels and tuners

endquote/VideoGallery
  • Multiple “channels”, each channel has a different set of videos
  • Multiple “tuners” within a channel, which share the same set of videos, but have a different current video.

http://endquote.tv/channel/channelname/tuner/tunername/admin

  • If no “admin”, show videos
  • If no “tuner/tunername”, show default tuner
  • If no “channel/channelname”, show default channel.
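The fallback rules above could be sketched as a small path resolver. This is an illustrative sketch only; the function name and the "default" channel/tuner names are assumptions, not taken from the project:

```typescript
// Hypothetical sketch of the URL fallback rules: a missing "tuner/<name>" or
// "channel/<name>" segment falls back to the default, and "admin" toggles the
// admin view instead of the video view.
function resolveRoute(path: string): { channel: string; tuner: string; admin: boolean } {
    const parts = path.split("/").filter(p => p.length > 0);
    let channel = "default";
    let tuner = "default";
    let admin = false;
    for (let i = 0; i < parts.length; i++) {
        if (parts[i] === "channel" && parts[i + 1]) channel = parts[++i];
        else if (parts[i] === "tuner" && parts[i + 1]) tuner = parts[++i];
        else if (parts[i] === "admin") admin = true;
    }
    return { channel, tuner, admin };
}
```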

Changes needed:

  • 5, controller service, should get done first

  • Add a channel field to the video schema
  • Downloads should go into channel-specific folders
  • The clients should use rooms/namespaces when connecting
  • 14, config files, would be needed for the following item

  • Different username/password for each channel’s admin page
Updated 26/06/2017 23:46

Liquid Warning

kubernetes/kubernetes.github.io

<!-- Thanks for filing an issue! Before submitting, please fill in the following information. -->

<!--Required Information-->

This is a… <!-- choose one by changing [ ] to [x] -->
  • [ ] Feature Request
  • [x] Bug Report

Problem: There is a warning during the Netlify preview build process:

Liquid Warning: Liquid syntax error (line 82): [:dot, "."] is not a valid expression in "{{.spec.terminationGracePeriodSeconds}}" in docs/tutorials/stateful-application/cassandra.md
Liquid Warning: Liquid syntax error (line 420): [:dot, "."] is not a valid expression in "{{.spec.terminationGracePeriodSeconds}}" in docs/tutorials/stateful-application/cassandra.md

Looks like Liquid is trying to parse {{.spec.terminationGracePeriodSeconds}} in the line: grace=$(kubectl get po cassandra-0 --template '{{.spec.terminationGracePeriodSeconds}}') \

Proposed Solution: Find out how to escape { in a code block for Liquid.
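A common way to handle this in Jekyll, which runs Liquid before rendering Markdown, is to wrap the snippet in `{% raw %}` tags so the braces pass through verbatim; whether this site's build accepts it in this spot is untested:

```liquid
{% raw %}
grace=$(kubectl get po cassandra-0 --template '{{.spec.terminationGracePeriodSeconds}}')
{% endraw %}
```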

Page to Update: http://kubernetes.io/docs/tutorials/stateful-application/cassandra/

<!--Optional Information (remove the comment tags around information you would like to include)--> <!--Kubernetes Version:--> Kubernetes 1.6

<!--Additional Information:-->

Updated 26/06/2017 23:21

Harmonize common_type with C++17's [meta.trans.other]/p4

ericniebler/stl2

Looks like the language drifted since we lifted it for the PR of #235. C++17’s [meta.trans.other]/p4 now reads:

Note B: Notwithstanding the provisions of (\cxxref{meta.type.synop}), and pursuant to \cxxref{namespace.std}, a program may specialize common_type<T1, T2> for types T1 and T2 such that is_same_v<T1, decay_t<T1>> and is_same_v<T2, decay_t<T2>> are each true. [ Note: Such specializations are needed when only explicit conversions are desired between the template arguments. —end note ] Such a specialization need not have a member named type, but if it does, that member shall be a typedef-name for an accessible and unambiguous cv-unqualified non-reference type C to which each of the types T1 and T2 is explicitly convertible. Moreover, common_type_t<T1, T2> shall denote the same type, if any, as does common_type_t<T2, T1>. No diagnostic is required for a violation of this Note’s rules.

This is much better, IMO.

Proposed Resolution

Replace our Note B (see #235) with the one in the current IS draft, quoted above, except changing “explicitly convertible” to “convertible”.

In addition, make the following change to the (newly added) paragraph about basic_common_reference (see #235):

-A program may specialize the basic_common_reference trait for
-two cv-unqualified non-reference types if at least one of them depends on a
-user-defined type. Such a specialization need not have a member named
-type.
+Notwithstanding the provisions of \cxxref{meta.type.synop}, and pursuant to
+\cxxref{namespace.std}, a program may specialize
+basic_common_reference<T, U, TQual, UQual> for types T and U such that
+is_same_v<T, decay_t<T>> and is_same_v<U, decay_t<U>> are each true.
+[ Note: Such specializations are needed when only explicit conversions are
+desired between the template arguments. —end note ] Such a specialization
+need not have a member named type, but if it does, that member shall be a
+typedef-name for an accessible and unambiguous type C to which each of the
+types TQual<T> and UQual<U> is convertible. Moreover,
+basic_common_reference<T, U, TQual, UQual>::type shall denote the same type,
+if any, as does basic_common_reference<U, T, UQual, TQual>::type. A program
+may not specialize basic_common_reference on the third or fourth parameters,
+TQual or UQual. No diagnostic is required for a violation of these rules.
Updated 26/06/2017 22:51 2 Comments

Required integer query parameter with default 0 invalidates "Try it out" form with value of 0

swagger-api/swagger-ui
  • 3.0.16

```yaml
swagger: "2.0"
info:
  description: "This is a sample server Petstore server."
  version: "1.0.0"
  title: "Swagger Petstore"
host: "petstore.swagger.io"
basePath: "/v2"
schemes:
  - "http"
paths:
  /pet:
    get:
      summary: "Get pet"
      description: ""
      operationId: "addPet"
      parameters:
        - name: "offset"
          in: "query"
          type: "integer"
          format: "int32"
          description: "offset"
          required: true
          default: 0
      responses:
        405:
          description: "Invalid input"
```

Note that removing the default and entering value 0 will allow the form to be submitted without problem.
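A plausible mechanism (an assumption, not traced to the actual swagger-ui source) is a truthiness check on the parameter value, which makes the integer 0 indistinguishable from an empty field:

```typescript
// Hypothetical illustration of the bug: checking a required parameter with
// `!value` wrongly flags the legitimate value 0 as missing.
function isMissingBuggy(value: unknown): boolean {
    return !value; // 0, "", and false all read as "missing"
}

// A fix would test explicitly for absent values only.
function isMissingFixed(value: unknown): boolean {
    return value === undefined || value === null || value === "";
}
```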


Updated 26/06/2017 22:12

Remove stack trace when trying to register existing UDF

hortonworks/streamline

```
ERROR [20:34:22.686] [dw-43 - POST /api/v1/catalog/streams/udfs] c.h.s.s.s.UDFCatalogResource - Got exception: [RuntimeException] / message [UDF with the same name already exists, use update (PUT) api instead] / related resource location: com.hortonworks.streamline.streams.service.UDFCatalogResource.checkDuplicate
java.lang.RuntimeException: UDF with the same name already exists, use update (PUT) api instead
    at com.hortonworks.streamline.streams.service.UDFCatalogResource.checkDuplicate(UDFCatalogResource.java:432)
    at com.hortonworks.streamline.streams.service.UDFCatalogResource.processUdf(UDFCatalogResource.java:327)
    at com.hortonworks.streamline.streams.service.UDFCatalogResource.addUDF(UDFCatalogResource.java:214)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$1.invoke(ResourceMethodInvocationHandlerFactory.java:81)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:144)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:161)
    at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:160)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:99)
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:389)
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:347)
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:102)
    at org.glassfish.jersey.server.ServerRuntime$2.run(ServerRuntime.java:326)
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271)
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:267)
    at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:317)
    at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:305)
    at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:1154)
    at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:473)
    at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:427)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:388)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:341)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:228)
    at io.dropwizard.jetty.NonblockingServletHolder.handle(NonblockingServletHolder.java:49)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1689)
    at io.dropwizard.servlets.ThreadNameFilter.doFilter(ThreadNameFilter.java:34)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
    at io.dropwizard.jersey.filter.AllowedMethodsFilter.handle(AllowedMethodsFilter.java:50)
    at io.dropwizard.jersey.filter.AllowedMethodsFilter.doFilter(AllowedMethodsFilter.java:44)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
    at com.hortonworks.registries.auth.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:582)
    at com.hortonworks.registries.auth.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:541)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1174)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1106)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
    at com.codahale.metrics.jetty9.InstrumentedHandler.handle(InstrumentedHandler.java:240)
    at io.dropwizard.jetty.RoutingHandler.handle(RoutingHandler.java:51)
    at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:396)
```

Updated 26/06/2017 19:03

Transition forum to public content

crowdresearch/daemo

In the same way that many other crowd work forums are public, making ours public would help people outside our community understand our culture.

It would also allow for public linking of documents stored within the forum, as was mentioned in #881.

Updated 26/06/2017 18:07 1 Comments

Overriding basePath

swagger-api/swagger-js

Could you add support for overriding the basePath in the specification when loading it via SwaggerUIBundle, please?

This could fix the issue with multiple basePaths requested here: https://github.com/OAI/OpenAPI-Specification/issues/562

At least for our case it would: since we are using hapi views, we can modify the HTML context before rendering, and that context could then be used to set the basePath property when instantiating SwaggerUIBundle.
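From the caller's side, the requested override could look like the sketch below; `withBasePath` is a hypothetical helper for illustration, not an existing swagger-js API:

```typescript
interface SwaggerSpec {
    basePath?: string;
    [key: string]: unknown;
}

// Hypothetical helper: return a copy of the spec with basePath overridden,
// e.g. with a value injected into the HTML context by hapi views.
function withBasePath(spec: SwaggerSpec, basePath: string): SwaggerSpec {
    return { ...spec, basePath };
}
```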

Updated 26/06/2017 20:58 1 Comments

More Views

haslo/lists_juggle_browser
  • [ ] It’d be nice if it was possible to list the top 4 from each of all recent tournaments (filtered by type).
  • [ ] A tournament view would be lovely - showing just the results from one tournament. That could then also be linked to from elsewhere.
Updated 26/06/2017 11:28

Icons and elm in Asset Pipeline

haslo/lists_juggle_browser
  • Icons for #60 don’t work yet (Font Awesome)
  • elm doesn’t work in the asset pipeline, example:
F, [2017-06-25T14:54:44.855802 #17550] FATAL -- : [83ef95dc-3da0-48a2-83c4-084bbad0eaad] ActionView::Template::Error (Can't find hello_elm.js in /home/deploy/lists_juggle_browser/current/public/packs/manifest.json. Is webpack still compiling?):
F, [2017-06-25T14:54:44.855952 #17550] FATAL -- : [83ef95dc-3da0-48a2-83c4-084bbad0eaad]     1: - content_for :elm do
[83ef95dc-3da0-48a2-83c4-084bbad0eaad]     2:   = javascript_pack_tag "hello_elm"
Updated 25/06/2017 16:58 1 Comments

postgres_database.present never works because call to postgres_user.present fails

saltstack/salt

Description of Issue/Question

When postgres_database.present tries to create a database and assign it to a user, it always fails with a python exception and a stack trace.

----------
          ID: postgres_database-provisioner_local
    Function: postgres_database.present
        Name: provisioner_local
      Result: False
     Comment: An exception occurred in this state: Traceback (most recent call last):
                File "/usr/lib/python2.7/dist-packages/salt/state.py", line 1745, in call
                  **cdata['kwargs'])
                File "/usr/lib/python2.7/dist-packages/salt/loader.py", line 1702, in wrapper
                  return f(*args, **kwargs)
                File "/usr/lib/python2.7/dist-packages/salt/states/postgres_database.py", line 98, in present
                  dbs = __salt__['postgres.db_list'](**db_args)
                File "/usr/lib/python2.7/dist-packages/salt/modules/postgres.py", line 461, in db_list
                  password=password)
                File "/usr/lib/python2.7/dist-packages/salt/modules/postgres.py", line 417, in psql_query
                  password=password)
                File "/usr/lib/python2.7/dist-packages/salt/modules/postgres.py", line 364, in _psql_prepare_and_run
                  rcmd, runas=runas, password=password, host=host, port=port, user=user)
                File "/usr/lib/python2.7/dist-packages/salt/modules/postgres.py", line 181, in _run_psql
                  ret = __salt__['cmd.run_all'](cmd, python_shell=False, **kwargs)
                File "/usr/lib/python2.7/dist-packages/salt/modules/cmdmod.py", line 1649, in run_all
                  **kwargs)
                File "/usr/lib/python2.7/dist-packages/salt/modules/cmdmod.py", line 394, in _run
                  'User \'{0}\' is not available'.format(runas)
              CommandExecutionError: User 'provisioner_local' is not available
     Started: 14:44:29.277236
    Duration: 5.524 ms
     Changes:  

Setup

Using postgres-formula from saltstack-formulas at revision 5cbf920fa7f7d277a5dd874f08157f15aad3a19e

Pillar Data:

```yaml
postgres:
  #pg_hba.conf: salt://postgres/pg_hba.conf
  conf_dir: /etc/postgresql/9.4/main
  lookup:
    pkg: 'postgresql-9.4'
    pkg_client: 'postgresql-client-9.4'
    pkg_dev: 'postgresql-server-dev-9.4'
    pkg_contrib: 'postgresql-contrib-9.4'
    pg_hba: '/etc/postgresql/9.4/main/pg_hba.conf'
  version: 9.4

  users:
    provisioner_local:
      password: 'blabla'
      createdb: False

  acls:
    - ['host', 'provisioner_local', 'provisioner_local', '0.0.0.0/0', 'md5']

  databases:
    provisioner_local:
      owner: 'provisioner_local'
      user: 'provisioner_local'
      template: 'template0'
      lc_ctype: 'C.UTF-8'
      lc_collate: 'C.UTF-8'

  postgresconf: |
    listen_addresses = '0.0.0.0'
```

State:

```yaml
local:
  - postgres
```

Steps to Reproduce Issue

Running salt-call state.apply will do everything correctly except creating the database (and setting the user as the owner). The user is created fine. This issue has been happening since Debian 8 and Salt 2016.3.x

Postgres says:

```
postgres=# \du
                                   List of roles
     Role name     |                   Attributes                   | Member of
-------------------+------------------------------------------------+-----------
 postgres          | Superuser, Create role, Create DB, Replication | {}
 provisioner_local |                                                | {}
```

This is done in a fresh vagrant box, and fails every time.

Versions Report

Salt Version:
           Salt: 2016.11.2

Dependency Versions:
           cffi: 1.9.1
       cherrypy: 3.5.0
       dateutil: 2.5.3
          gitdb: 2.0.0
      gitpython: Not Installed
          ioflo: Not Installed
         Jinja2: 2.8
        libgit2: 0.24.5
        libnacl: 1.5.0
       M2Crypto: 0.24.0
           Mako: 1.0.6
   msgpack-pure: Not Installed
 msgpack-python: 0.4.8
   mysql-python: 1.3.7
      pycparser: 2.17
       pycrypto: 2.6.1
         pygit2: 0.24.2
         Python: 2.7.13 (default, Jan 19 2017, 14:48:08)
   python-gnupg: 0.3.9
         PyYAML: 3.12
          PyZMQ: 16.0.2
           RAET: Not Installed
          smmap: 2.0.1
        timelib: Not Installed
        Tornado: 4.4.3
            ZMQ: 4.2.1

System Versions:
           dist: debian 9.0 
        machine: x86_64
        release: 4.9.0-3-amd64
         system: Linux
        version: debian 9.0 
Updated 26/06/2017 19:36 1 Comments

webui: my reviews are missing reviews by_package

openSUSE/open-build-service

osc my rq shows 10 requests:

Examples:

```
505567  State:review By:AndreasStieger When:2017-06-21T21:14:01
        maintenance_release: openSUSE:Maintenance:6846/update-test-trivial.openSUSE_Leap_42.3_Update -> openSUSE:Leap:42.3:Update/update-test-trivial.6846
        maintenance_release: openSUSE:Maintenance:6846/patchinfo -> openSUSE:Leap:42.3:Update/patchinfo.6846
        Review by Group is new: qam-openqa(openqa-maintenance)

504636  State:review By:jberry_factory When:2017-06-19T20:11:46
        submit: openSUSE:Factory/perl-Mojolicious@69 -> openSUSE:Leap:42.3
        Review by Package is new: devel:languages:perl/perl-Mojolicious
```

The webui agrees with the number, saying I have 10 tasks.

But /user/show/coolo shows 6 open reviews and nothing else. What is missing are all the ‘Review by Package’ reviews; I only have group reviews in the list.

This is quite problematic, as the Leap development process adds reviews for the devel project maintainer, and I only get to know about them once I get a reminder.

Updated 26/06/2017 12:16 1 Comments

bazel fails to compile tensorflow when gcc points to /etc/alternatives on Ubuntu

bazelbuild/bazel

System information

TensorFlow 1.2, Ubuntu 17.04 on GCE, CUDA 8.0, gcc 4.9.4

Describe the problem

bazel fails to compile tensorflow when multiple versions of gcc are installed through update-alternatives and the default points to /etc/alternatives/gcc. Specifically, it fails to find the C and C++ header files. However, everything works well when configure is set to use /usr/bin/gcc directly.

Updated 26/06/2017 11:51 1 Comments

[feature request] Custom markdown syntax for "variables" in code snippets

kubernetes/kubernetes.github.io

<!-- Thanks for filing an issue! Before submitting, please fill in the following information. -->

<!--Required Information-->

This is a… <!-- choose one by changing [ ] to [x] -->
  • [x] Feature Request
  • [ ] Bug Report

Problem:

In code snippets it can be unclear whether a value is part of the “actual code”, or an example value that the reader should swap out.

For instance, dapi-test-pod in:

<img width="1040" alt="screen shot 2017-06-23 at 4 24 26 pm" src="https://user-images.githubusercontent.com/25401650/27503395-7fe9fd68-5830-11e7-90bd-56648c5df063.png">

While the reader can figure this out by reading the instructions carefully, by and large we should assume that there’s a lot of skimming going on. 😝 There are probably quite a few cases where users directly copy-and-paste lines of code without altering them.

Proposed Solution:

DigitalOcean highlights “variables” in code snippets (see https://www.digitalocean.com/community/tutorials/digitalocean-s-writing-guidelines#variables). It looks like the following:

<img width="1135" alt="screen shot 2017-06-23 at 3 43 10 pm" src="https://user-images.githubusercontent.com/25401650/27503335-0c40dcec-5830-11e7-9e02-36b4695ca5a0.png">

We could likewise implement custom markdown syntax to enable something like this.
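A minimal sketch of how such markers could be rewritten at render time; the `<^>…<^>` delimiter is borrowed from DigitalOcean's convention, and the CSS class is an illustrative choice, not an existing kubernetes.github.io feature:

```typescript
// Hypothetical post-processing step: turn <^>variable<^> markers inside
// rendered snippets into spans that CSS can highlight.
function highlightVariables(html: string): string {
    return html.replace(/<\^>(.+?)<\^>/g, '<span class="variable">$1</span>');
}
```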

Page to Update:

Generally the whole site.

<!--Optional Information (remove the comment tags around information you would like to include)--> <!--Kubernetes Version:-->

<!--Additional Information:-->

Updated 25/06/2017 19:27 3 Comments

Reduce data consumption / implement filter usage

vector-im/riot-android

Currently Riot is the app with the most data consumption on my device. I have the following idea for a “bandwidth-friendly mode”:
  • reduce traffic (especially on mobile data) to the absolute minimum (e.g. no presence, typing, …)
  • catch up on filtered-out data (e.g. presence) when on WiFi (or allow disabling this option), as sometimes I would like to see the data

Updated 26/06/2017 16:53 5 Comments

get_weapon() in init.lua assumes that unit contains weapon types.

TorchCraft/TorchCraft

In https://github.com/TorchCraft/TorchCraft/blob/master/lua/init.lua#L230, it is assumed that the unit contains awtype and gwtype fields. These fields seem to have been replaced with damage types, i.e. awdmgtype and gwdmgtype. While the remaining code correctly treats these as “damage type” (in contrast to its name, weapontype), the non-existent fields will produce wrong results when passed to dmg_multiplier(). This can lead to wrong results for compute_dmg() and apply_attack().

Updated 23/06/2017 15:14

Create a page that shows % online sellers per league

poeapp/poeapp

Thank you, just allocating a small corner showing this information would already be really helpful. And when browsers click on the small chart it can take them to another page showing the bigger chart with actual number of unique active online sellers, etc. I got the background color a bit wrong, but I think it can look really good after integration to the page. Example: http://imgur.com/a/FoDKW

Updated 23/06/2017 13:52

Infection game does not seem to translate well into blocks

Microsoft/pxt

I am taking a look at the infection game, but I guess it is one example of code that cannot be converted into blocks: I get an error if I try to see the Blocks equivalent. I thought that everything could be converted, although some things would become grey blocks, but there seem to be exceptions. If that is the case, it would be nice to get a warning, either on the infection project page or in the pxt editor itself: https://pxt.microbit.org/projects/infection

Updated 23/06/2017 16:53 1 Comments

Code refactoring for e2e tests

vmware/docker-volume-vsphere
  • [ ] Create an util verification.CheckVolume() which will check against a bunch of volume properties. https://github.com/vmware/docker-volume-vsphere/pull/1381#discussion_r123506846

  • [ ] Create three separate test files for vmgroup tests - one for default vmgroup tests, another for user vmgroup tests and one for miscellaneous tests related to vmgroups https://github.com/vmware/docker-volume-vsphere/pull/1381#pullrequestreview-45707021

  • [ ] Create a single suite for all tests that are doing config init/remove so we initialize and remove the DB once at the test-suite level rather than each test doing it.

Updated 23/06/2017 21:27 2 Comments

Type for "date-time" property is displayed as "string" Instead of "date" even after setting the dataType attribute

swagger-api/swagger-ui

Hi,

The JSON rendered by Swagger UI displays the type for a date-time property as “string”. The Swagger documentation states that the type has to be string and doesn't have a special way of showing dates, but is there any way we can display the type as “date” instead of “string”? I have tried hard-coding the type using the annotation @ApiModelProperty(dataType=“date”), but it didn't work. Any suggestions? The comments on tickets #1183 and #462 about the same issue state that it would be solved in the next version. Can someone let me know if this has been resolved, and if so, in which version?

Updated 23/06/2017 16:20 3 Comments

Remote Worker failures on bazel build

bazelbuild/bazel

When trying to build Bazel using the RemoteWorker, I get intermittent errors such as:

```
ERROR: /usr/local/google/home/olaola/full-bazel/third_party/BUILD:75:1: Extracting interface //third_party:android_common_25_0_0 failed: ijar failed: error executing command
  (cd /usr/local/google/home/olaola/.cache/bazel/bazel_olaola/098d19699a9b31c0ee0da4e8c41e90b9/execroot/io_bazel && \
  exec env - \
    PATH=/usr/local/google/home/olaola/google-cloud-sdk/bin:/usr/local/google/home/olaola/work/bin:/usr/local/google/home/olaola/depot_tools:/usr/local/google/home/olaola/bin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/google/home/olaola/google-cloud-sdk/bin:/usr/lib/google-golang/bin:/usr/local/buildtools/java/jdk/bin:/usr/local/sbin \
  external/bazel_tools/tools/jdk/ijar/ijar third_party/android_common/com.android_annotations_25.0.0.jar bazel-out/local-fastbuild/genfiles/third_party/ijar/android_common_25_0_0/third_party/android_common/com.android_annotations_25.0.0-ijar.jar): Exit -1.
ERROR: /usr/local/google/home/olaola/full-bazel/third_party/BUILD:75:1: output 'third_party/_ijar/android_common_25_0_0/third_party/android_common/com.android.tools.lint_lint-api_25.0.0-ijar.jar' was not created.
```

The command actually failed on the RemoteWorker; this is not a gRPC issue.

Updated 26/06/2017 11:54 1 Comments

custom.ts bug with statement handlers

Microsoft/pxt

in makecode.adafruit.com, the block that is created from this code:

```typescript
/**
 * Gesture blocks
 */
//% weight=100 color=#d3a226 icon=""
namespace custom {
    let is_initialized: boolean = false;
    let MY_EVENT_SRC: number = 8137;

    //% block
    export function onGestureRecognized(a: () => {}) {
        if (!is_initialized) initialize_predictor();
        control.onEvent(MY_EVENT_SRC, 1, a);
    }

    function initialize_predictor() {
        is_initialized = true;

        control.runInBackground(() => {
            if (input.acceleration(Dimension.Strength) > 500) {
                control.raiseEvent(MY_EVENT_SRC, 1);
            }

            loops.pause(33);    // almost 30fps
        });
    }
}
```

looks like this: image

which should probably look like one of the event blocks, similar to this: image

Updated 23/06/2017 17:43 3 Comments

Code window flashes white when navigating between questions

google/tie

Expected behavior: code window does not flash white for a split second when switching between questions.

Observed behavior: code window flashes white for a split second when switching between questions.

Steps to reproduce: navigate between questions and note code window flashes white for a split second.

Updated 22/06/2017 16:47

remote_worker fails in the most mysterious ways when run as a deploy jar

bazelbuild/bazel

First an easy one, it doesn’t load the unix JNI library when run via a deploy jar:

philwo@philwo:~/src/bazel (master)$ rm -rf /usr/local/google/tmp/worker
philwo@philwo:~/src/bazel (master)$ mkdir /usr/local/google/tmp/worker
philwo@philwo:~/src/bazel (master)$ bazel build //src/tools/remote_worker:remote_worker_deploy.jar
philwo@philwo:~/src/bazel (master)$ java -jar bazel-bin/src/tools/remote_worker/remote_worker_deploy.jar --work_path=/usr/local/google/tmp
INFO: Analysed target //src/tools/remote_worker:remote_worker_deploy.jar (0 packages loaded).
INFO: Found 1 target...
Target //src/tools/remote_worker:remote_worker_deploy.jar up-to-date:
  bazel-bin/src/tools/remote_worker/remote_worker_deploy.jar
INFO: Elapsed time: 2.258s, Critical Path: 2.06s
INFO: Build completed successfully, 2 total actions
170622 15:13:29.246:I 1 [com.google.devtools.build.remote.RemoteWorker.main] Initializing in-memory cache server.
170622 15:13:29.250:W 1 [com.google.devtools.build.remote.RemoteWorker.main] Not using remote cache. This should be used for testing only!
Exception in thread "main" java.lang.UnsatisfiedLinkError: no unix in java.library.path
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867)
    at java.lang.Runtime.loadLibrary0(Runtime.java:870)
    at java.lang.System.loadLibrary(System.java:1122)
    at com.google.devtools.build.lib.UnixJniLoader.loadJni(UnixJniLoader.java:28)
    at com.google.devtools.build.lib.unix.NativePosixFiles.<clinit>(NativePosixFiles.java:136)
    at com.google.devtools.build.lib.unix.UnixFileSystem.createDirectory(UnixFileSystem.java:310)
    at com.google.devtools.build.lib.vfs.Path.createDirectory(Path.java:829)
    at com.google.devtools.build.lib.vfs.FileSystemUtils.createDirectoryAndParentsWithCache(FileSystemUtils.java:692)
    at com.google.devtools.build.lib.vfs.FileSystemUtils.createDirectoryAndParents(FileSystemUtils.java:652)
    at com.google.devtools.build.remote.RemoteWorker.<init>(RemoteWorker.java:85)
    at com.google.devtools.build.remote.RemoteWorker.main(RemoteWorker.java:170)

OK, after commenting that out so that it uses JavaIoFileSystem, we get further, but then hit extremely weird errors during a build:

philwo@philwo:~/src/bazel (master)$ rm -rf /usr/local/google/tmp/worker
philwo@philwo:~/src/bazel (master)$ mkdir /usr/local/google/tmp/worker
philwo@philwo:~/src/bazel (master)$ bazel build //src/tools/remote_worker:remote_worker_deploy.jar
philwo@philwo:~/src/bazel (master)$ java -jar bazel-bin/src/tools/remote_worker/remote_worker_deploy.jar --work_path=/usr/local/google/tmp
[...]
170622 15:14:52.938:I 1 [com.google.devtools.build.remote.RemoteWorker.main] Initializing in-memory cache server.
170622 15:14:52.943:W 1 [com.google.devtools.build.remote.RemoteWorker.main] Not using remote cache. This should be used for testing only!
170622 15:14:53.066:I 1 [com.google.devtools.build.remote.RemoteWorker.startServer] Starting gRPC server on port 8080.

philwo@philwo:~/src/bazel (master)$ bazel --host_jvm_args=-Dbazel.DigestFunction=SHA1 --blazerc=/dev/null build --spawn_strategy=remote --strategy=Javac=remote --remote_cache=localhost:8080 --remote_executor=localhost:8080 //src:bazel
[...]
WARNING: CppCompile remote work failed (io.grpc.StatusRuntimeException: INVALID_ARGUMENT: invalid argument(s): resource_name: Received digest hash: "0c73037e90c21d3a7533aafc5d9c3bb2b793cc69"
size_bytes: 1679
 does not match computed digest hash: "9101deb151a8c13f3cb8e2a4dda3da05"
size_bytes: 1679
).
WARNING: CppCompile remote work failed (io.grpc.StatusRuntimeException: INVALID_ARGUMENT: invalid argument(s): resource_name: Received digest hash: "58479428b3ca345000ea0c4f434447f033a403c5"
size_bytes: 1691
 does not match computed digest hash: "6e060d468f449ad413d19981de2a9a32"
size_bytes: 1691
).
ERROR: /usr/local/google/home/philwo/.cache/bazel/_bazel_but_philwo/591320842d308bc057c7950eeb368324/external/com_google_protobuf/BUILD.bazel:233:1: C++ compilation of rule '@com_google_protobuf//:protobuf_lite' failed: io.grpc.StatusRuntimeException: INVALID_ARGUMENT: invalid argument(s): resource_name: Received digest hash: "58479428b3ca345000ea0c4f434447f033a403c5"
size_bytes: 1691
 does not match computed digest hash: "6e060d468f449ad413d19981de2a9a32"
size_bytes: 1691
.
WARNING: CppCompile remote work failed (io.grpc.StatusRuntimeException: INVALID_ARGUMENT: invalid argument(s): resource_name: Received digest hash: "243c4aba3fa42fc6e2ec360169d7dd0683f96e29"
size_bytes: 1683
 does not match computed digest hash: "e8b7801be6e4d508f9b1f83edf0532e8"
size_bytes: 1683
).
WARNING: CppCompile remote work failed (io.grpc.StatusRuntimeException: INVALID_ARGUMENT: invalid argument(s): resource_name: Received digest hash: "c66a98af5f1185670c7577231828e49ee08e7497"
size_bytes: 1687
 does not match computed digest hash: "1f8b692c0ef1c6468f23b68924f82909"
size_bytes: 1687
).
WARNING: CppCompile remote work failed (io.grpc.StatusRuntimeException: INVALID_ARGUMENT: invalid argument(s): resource_name: Received digest hash: "754a004e743d653028e3e87c47ae06e2742fedca"
size_bytes: 1655
 does not match computed digest hash: "c04eeb2404869e30625c8dc720f1f890"
size_bytes: 1655
).
WARNING: CppCompile remote work failed (io.grpc.StatusRuntimeException: INVALID_ARGUMENT: invalid argument(s): resource_name: Received digest hash: "cfbf6f53b560254b3ad7e1395228197214d14d96"
size_bytes: 1655
 does not match computed digest hash: "2034f56f3e4e138e338fc37e7507d15a"
size_bytes: 1655
).
WARNING: CppCompile remote work failed (io.grpc.StatusRuntimeException: INVALID_ARGUMENT: invalid argument(s): resource_name: Received digest hash: "ecc40c8fb72fefa72a9f30a7e23ec6fbead7c6e6"
size_bytes: 1710
 does not match computed digest hash: "8c3e32fa5977f31539ea0b55459f591e"
size_bytes: 1710
).
WARNING: CppCompile remote work failed (io.grpc.StatusRuntimeException: INVALID_ARGUMENT: invalid argument(s): resource_name: Received digest hash: "4b9037da73a64a27a3aad37013823fc4fb72551b"
size_bytes: 1671
 does not match computed digest hash: "77cc1c83ce51acbb629c9a1a22256b21"
size_bytes: 1671
).
WARNING: CppCompile remote work failed (io.grpc.StatusRuntimeException: CANCELLED).
Target //src:bazel failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 2.042s, Critical Path: 0.31s
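As an aside, the received digests above are 40 hex characters (SHA-1-sized) while the computed digests are 32 hex characters (MD5-sized), which suggests the client and the remote worker may disagree on the hash function, not merely on the content. A hedged Python sketch of that observation (the payload is made up; only the digest lengths matter):

```python
import hashlib

# Hypothetical payload; any 1691-byte blob shows the same digest-length mismatch.
payload = b"x" * 1691

sha1_digest = hashlib.sha1(payload).hexdigest()  # 40 hex chars, like "Received digest hash"
md5_digest = hashlib.md5(payload).hexdigest()    # 32 hex chars, like "computed digest hash"

print(len(sha1_digest), len(md5_digest))  # 40 32
```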
Updated 26/06/2017 11:54 4 Comments

Auto-fix for callable-types removes export qualifier

palantir/tslint

Bug Report

  • TSLint version: 5.4.3
  • TypeScript version: 2.3.4
  • Running TSLint via: VSCode

TypeScript code being linted

export interface Runnable {
    (): void;
}

with tslint.json configuration:

{
  "extends": "tslint:recommended",
}

Actual behavior

When running fix, the callable-types rule changes this to

type Runnable = () => void;

Expected behavior

It should preserve the export, changing this instead to

export type Runnable = () => void;
Updated 23/06/2017 18:21

Protractor Test Addition for ID Page

hyperledger/composer

863 component and building from #1309

  • add tests for the “Test and ID Page”, focusing on the ID aspects, as per the current manual validation steps located at https://github.com/hyperledger/composer/blob/master/contrib-notes/release-process/playground-validation.md

Context

Manual validation is a waste of time and prone to error. Let’s make this automated.

Expected Behavior

No human validation required = more time coding features.

Actual Behavior

time wasted

Updated 21/06/2017 18:53

Protractor Test Addition for Test Business Network Page

hyperledger/composer

863 component and building from #1309

  • add tests for the “Test and ID Page”, focusing on the Test page, as per the current manual validation steps located at https://github.com/hyperledger/composer/blob/master/contrib-notes/release-process/playground-validation.md

Context

Manual validation is a waste of time and prone to error. Let’s make this automated.

Expected Behavior

No human validation required = more time coding features.

Actual Behavior

time wasted

Updated 21/06/2017 18:52

Protractor Editor File Test Addition

hyperledger/composer

863 component and building from #1309

  • add tests for the editor “define” page file-editor, as per the current manual validation steps located at https://github.com/hyperledger/composer/blob/master/contrib-notes/release-process/playground-validation.md

Context

Manual validation is a waste of time and prone to error. Let’s make this automated.

Expected Behavior

No human validation required = more time coding features.

Actual Behavior

time wasted

Updated 21/06/2017 18:47

check-pipe breaks with ngFor

mgechev/codelyzer

It looks like the new check-pipe rule breaks with ngFor (as of 3.1.1):

<div *ngFor="let user of users | slice:0:4">

Throws:

The pipe operator should be surrounded by one space on each side, i.e. " | "

See #347 for a unit test reproducing the issue.

Updated 23/06/2017 14:31 3 Comments

Windows: Don't add python paths into INCLUDE and LIB environment variables (and how to link python library after that)

bazelbuild/bazel

Currently, cc_configure.bzl adds the Python include directory and Python libs directory to the INCLUDE and LIB environment variables. The directory is calculated based on BAZEL_PYTHON. This makes header files like Python.h available at compile time and lib files like python35.lib available at link time. See https://github.com/bazelbuild/bazel/blob/master/tools/cpp/windows_cc_configure.bzl#L324

However, Bazel obviously should not do that. Users configure their own Python header and lib rules, and if Bazel still adds these Python paths, it may cause a linking issue where headers from one version of Python are included but a different version is linked against. It may be the cause of this issue. @abergmeier

TensorFlow has https://github.com/tensorflow/tensorflow/blob/master/third_party/py/python_configure.bzl, which generates @local_config_python//:python_headers for header files but nothing for lib files, so at link time it still relies on Bazel adding the libs directory to LIB; this needs to change.

Here’s a minimal example of how to link the Python library after removing the Python paths from INCLUDE and LIB. My layout:

```
pcloudy@pcloudy0-w MSYS ~/workspace/my_tests/python_linking_test $ tree .
.
├── BUILD
├── main.cpp
├── python
│   ├── include -> /c/Program Files/Anaconda3/include
│   └── libs -> /c/Program Files/Anaconda3/libs
└── WORKSPACE (empty)
```

As you can see, python/include points to /c/Program Files/Anaconda3/include. You can also copy the files into your source tree, or use Skylark to configure them as an external repository, as TF did.

main.cpp: https://gist.github.com/meteorcloudy/fd5eb08c0467916f3ec0b89cdffb5301

BUILD:

```
cc_binary(
    name = "bin",
    srcs = ["main.cpp"],
    deps = [":python_headers", ":python_lib"],
)

cc_library(
    name = "python_headers",
    hdrs = glob(["python/include/**/*.h"]),
    includes = ["python/include"],
)

cc_library(
    name = "python_lib",
    srcs = ["python/libs/python35.lib"],
)
```

This should work when building an executable binary.

But if you are building a shared library, like:

```
cc_binary(
    name = "bin.dll",
    srcs = ["main.cpp"],
    deps = [":python_headers", ":python_lib"],
    linkstatic = 1,
    linkshared = 1,
)
```

linking will fail with:

```
C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/amd64/link.exe /nologo /DLL /OUT:bazel-out/msvc_x64-fastbuild/bin/bin.dll /WHOLEARCHIVE:python/libs/tkinter.lib /WHOLEARCHIVE:python/libs/python3.lib /WHOLEARCHIVE:python/libs/python35.lib /MACHINE:X64 /SUBSYSTEM:CONSOLE @bazel-out/msvc_x64-fastbuild/bin/bin.dll-2.params /DEFAULTLIB:libcmt.lib /DEBUG:FASTLINK /INCREMENTAL:NO
python3.lib(python3.dll) : error LNK2005: __NULL_IMPORT_DESCRIPTOR already defined in tkinter.lib(_tkinter.pyd)
python/libs/python35.lib : fatal error LNK1000: Internal error during CImplib::EmitThunk
```

This is because Bazel adds /WHOLEARCHIVE to the Python libraries, but these libraries are just import libraries for their actual DLLs and should not be whole-archived.

A workaround is to put them into linkopts instead of specifying them as deps:

```
cc_binary(
    name = "bin.dll",
    srcs = ["main.cpp"],
    deps = [":python_headers"],
    data = [":python_lib_file"],
    linkopts = ["$(location :python_lib_file)"],
    linkstatic = 1,
    linkshared = 1,
)

cc_library(
    name = "python_headers",
    hdrs = glob(["python/include/**/*.h"]),
    includes = ["python/include"],
)

filegroup(
    name = "python_lib_file",
    srcs = ["python/libs/python35.lib"],
)
```

But eventually this problem should be solved by Bazel recognizing which lib files are static libraries and which are just import libraries for DLLs.

Updated 23/06/2017 08:31

java memory issue on sandbox enabled debian

bazelbuild/bazel

Running bazel build on a fat Java/Scala project (several thousand targets) fails on Debian Linux with user namespaces enabled.

Issue

Trying to run bazel build with user namespaces enabled:

```
$ sysctl kernel.unprivileged_userns_clone=1
```

The build runs alright, but at some point it crashes with a weird memory issue:

```
ERROR: <target-path>/BUILD:35:1: error executing shell command: '
  rm -rf bazel-out/local-fastbuild/bin/<package>/<target>.jar_temp_resources_dir
  set -e
  mkdir -p bazel-out/local-fastbuild/bin/<target>' failed: Process terminated by signal 6 [sandboxed].
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGBUS (0x7) at pc=0x00007f094606874b, pid=5, tid=0x00007f09472e0700
#
# JRE version:  (8.0_131-b11) (build )
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.131-b11 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# V  [libjvm.so+0x96874b]  PerfMemory::alloc(unsigned long)+0x7b
#
# Core dump written. Default location: /home/builduser/.cache/bazel/_bazel_builduser/bc0e462ab01ac9379d22ad058ca1cb1f/bazel-sandbox/4864102460254154064/execroot/main/core or core.5
#
# An error report file with more information is saved as:
# /home/builduser/.cache/bazel/_bazel_builduser/bc0e462ab01ac9379d22ad058ca1cb1f/bazel-sandbox/4864102460254154064/execroot/main/hs_err_pid5.log
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.java.com/bugreport/crash.jsp
#
```

Environment info

The machine is a Docker container based on a Debian image:

```
$ uname -a
Linux 167-docker99 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2 (2017-04-30) x86_64 GNU/Linux
builduser@167-docker99:~/ws/bazel-port-isolation$ cat /etc/*-release
PRETTY_NAME="Debian GNU/Linux 8 (jessie)"
NAME="Debian GNU/Linux"
VERSION_ID="8"
VERSION="8 (jessie)"
ID=debian
HOME_URL="http://www.debian.org/"
SUPPORT_URL="http://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
```

Bazel version

  • The project is fat (several thousands of java / scala targets)
  • Bazel was built from a81264e1043dd90e984d9fcef5ce9962dce90d1d
  • rules_scala - https://github.com/wix/rules_scala/commit/d66c9d7506ecc6a1b454055b35bd10ef064b9d98 (basically https://github.com/bazelbuild/rules_scala/commit/5d6ff512652b8b55f5d26f6ea69e05d86582d996 with small changes around specs2 versions and test runner env preparation)

additional information

  • issue does not happen when unprivileged_userns_clone=0 (but clearly, that’s not a solution)
  • with user namespaces enabled, bazel 0.5.1 showed this issue. May also be related to #3064.
Updated 26/06/2017 13:54 21 Comments

Documentation for orphaned action

bazelbuild/bazel

Please provide the following information. The more we know about your system and use case, the more easily and likely we can help.

Description of the problem / feature request / question:

I am a new user of Bazel. I ran into a “The following files have no generating action:” error, which turned out to be because my tests were not generating an executable output. I am submitting this issue as a request to make error messages and beginner tasks easier to understand.

1) It would be really helpful if the “The following files have no generating action:” error were clearer about what is going on. It would be helpful if it specified which output is missing its generating action (e.g. ctx.outputs.executable) and possibly what to do to fix it (or linked to documentation about the error).

2) I was writing a test which just executes an existing lint tool (e.g. lint <filenames>, which doesn’t need to generate an output binary). I would find it helpful to have a documentation example of how to write a Skylark rule for this case (e.g. generating a one-line shell script which executes the test). Digging through the existing Skylark rules, there are not many examples of this.
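For reference, a minimal, untested sketch of such a rule (hypothetical names; Skylark API as of Bazel ~0.5) that avoids the "no generating action" error by writing a one-line shell script as the required executable output:

```
def _lint_test_impl(ctx):
    # Write a one-line shell script as the required executable output.
    script = "lint %s\n" % " ".join([f.short_path for f in ctx.files.srcs])
    ctx.file_action(
        output = ctx.outputs.executable,  # without this, "no generating action"
        content = script,
        executable = True,
    )
    return struct(runfiles = ctx.runfiles(files = ctx.files.srcs))

lint_test = rule(
    implementation = _lint_test_impl,
    attrs = {"srcs": attr.label_list(allow_files = True)},
    test = True,  # test rules must produce ctx.outputs.executable
)
```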

If asking a question or requesting a feature, also tell us about the underlying problem you’re trying to solve.

If possible, provide a minimal example to reproduce the problem:

Environment info

  • Operating System:

  • Bazel version (output of bazel info release):

  • If bazel info release returns “development version” or “(@non-git)”, please tell us what source tree you compiled Bazel from; git commit hash is appreciated (git rev-parse HEAD):

Have you found anything relevant by searching the web?

(e.g. StackOverflow answers, GitHub issues, email threads on the bazel-discuss Google group)

Anything else, information or logs or outputs that would be helpful?

(If they are large, please upload as attachment or provide link).

Updated 22/06/2017 07:43

Composer playground not importing model cto files

hyperledger/composer

I’m trying to drag-n-drop individual .cto files into the Composer playground. The files are those provided with the vehicle-lifecycle-model. When I drag-n-drop the file into the “Add a file…” dialog, it correctly detects how many transactions, etc there are, but oftentimes, when I click the “Add” button, nothing happens. I have not found a way to import the vda.cto file at all. Sometimes the vehicle.cto file will import, but often it won’t

BTW, given the direction to create more modular/reusable models, it might be good to add a bulk upload capability. Maybe it already exists and I haven’t found it yet.

Context

I consider this a bug because I should be able to add cto files and be told of errors. This silently fails to add the file.

Expected Behavior

I should be able to add .cto files

Actual Behavior

The cto files I add are often ignored (don’t appear to import but I don’t get an error message).

Possible Fix

Make the “add” work.

Steps to Reproduce

  1. Checkout the composer-sample-models repository
  2. Visit http://composer-playground.mybluemix.net
  3. Add each of the cto files from composer-sample-models/packages/vehicle-lifecycle-model/models
  4. Check to see if all the files have imported.

Existing issues

Context

Means I can’t experiment with model changes easily on the online playground.

Your Environment

  • Version used: N/A
  • Environment name and version (e.g. Chrome 39, node.js 5.4): Tried with Firefox and Chrome
  • Operating System and version (desktop or mobile): Windows 10
  • Link to your project: N/A

Updated 26/06/2017 09:55

bazel fetch with --keep_going doesn't correctly report overall fetch error (via exit code or final message)

bazelbuild/bazel

Description of the problem / feature request / question:

When running bazel fetch with --keep_going (to see all errors), errors are printed, but the final message printed is:

“INFO: All external dependencies fetched successfully.”

and the exit code (e.g., via “echo $?”) is 0, indicating no error. This makes it difficult to use in a script that should both show all errors and stop further actions.
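Until this is fixed, one workaround is to scan the fetch output ourselves. A hedged Python sketch (it assumes error lines are prefixed with "ERROR:", which matches the output I see):

```python
import subprocess
import sys

def fetch_failed(output):
    """Return True if any line of the bazel output reports an error."""
    return any(line.startswith("ERROR:") for line in output.splitlines())

def checked_fetch(targets):
    """Run `bazel fetch --keep_going` and return a trustworthy exit code."""
    proc = subprocess.run(
        ["bazel", "fetch", "--keep_going"] + list(targets),
        capture_output=True, text=True,
    )
    # Surface bazel's own output so all errors remain visible.
    sys.stderr.write(proc.stderr)
    if proc.returncode != 0 or fetch_failed(proc.stderr):
        return 1
    return 0
```

Used as `sys.exit(checked_fetch(["//..."]))` in place of a bare bazel fetch invocation.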

Note that running bazel build, I see these two final messages:

ERROR: command succeeded, but there were loading phase errors.
INFO: Elapsed time: 0.667s, Critical Path: 0.01s

And the exit code is nonzero (as desired).

If possible, provide a minimal example to reproduce the problem:

Attached is a hello-world example with a fake dependency that is intentionally not present in the WORKSPACE file. Running "bazel fetch --keep_going ... && echo 'hi'" should exhibit the issue.

helloworld_keep_going.tar.gz

Environment info

  • Operating System: Ubuntu 14.04

  • Bazel version (output of bazel info release): release 0.5.1

Have you found anything relevant by searching the web?

Sadly nope.

Updated 26/06/2017 21:51 1 Comments

A failed composer identity issue command still outputs Command succeeded after Command Failed

hyperledger/composer


Context


Expected Behavior


The command failed, but I still get the message “Command succeeded”:

    root@ruts1:/composer-sample-networks/packages/marbles-network/models# composer identity issue -n 'marbles-network' -i admin -s adminpw -u xuzhao -p hlfv1 -a "org.hyperledger_composer.marbles.Player#xuzhao@ie.ibm.com"

    Error: fabric-ca request register failed with errors [[{"code":400,"message":"Authorization failure"}]]
    Command failed.

    Command succeeded

Actual Behavior


Possible Fix


Steps to Reproduce

  1. composer identity issue -n 'marbles-network' -i admin -s adminpw -u xuzhao -p hlfv1 -a "org.hyperledger_composer.marbles.Player#xuzhao@ie.ibm.com"

The output is:

    Error: fabric-ca request register failed with errors [[{"code":400,"message":"Authorization failure"}]]
    Command failed.

    Command succeeded
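The fix pattern is straightforward: only print the success message when the command handler actually completed. A hedged, illustrative Python sketch (the real CLI is Node.js, and the function names here are assumptions):

```python
def run_cli(command):
    """Run a command handler; print exactly one final status line."""
    try:
        command()
    except Exception as err:  # the fabric-ca error would surface here
        print("Error: %s" % err)
        print("Command failed.")
        return 1
    # Only reached when the handler completed without error.
    print("Command succeeded")
    return 0
```

A caller would then do `sys.exit(run_cli(handler))`, so the exit code matches the message.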

Existing issues

  • [ ] Stack Overflow issues
  • [ ] GitHub Issues
  • [ ] Rocket Chat history


Context


Your Environment

  • Version used:

    root@ruts1:/composer-sample-networks/packages/marbles-network/models# composer --version
    composer-cli v0.8.0
    composer-admin v0.8.0
    composer-client v0.8.0
    composer-common v0.8.0
    composer-runtime-hlf v0.8.0
    composer-runtime-hlfv1 v0.8.0

  • Environment name and version (e.g. Chrome 39, node.js 5.4):
  • Operating System and version (desktop or mobile): ubuntu 16.4
  • Link to your project:
Updated 26/06/2017 09:54

The force-unique parameter does not work when the duplicated topicref is chunked

dita-ot/dita-ot
  • I believe this to be a bug, not a question about using DITA-OT.
  • I read the CONTRIBUTING file.
<map>
    <title>Map</title>
    <topicref href="topic.dita"/>
    <topicref href="topic.dita" chunk="to-content">
        <topicref href="subtopic.dita"/>
    </topicref>
</map>

When transforming the above DITA Map to XHTML, with the force-unique parameter set to true, a single HTML file is generated for both topicrefs that reference the "topic.dita" topic.

force-unique.zip

Expected Behavior

  • Two HTML files should be generated for topic.dita: topic.html and topic_2.html
  • The content of the index.html file should be:
<div>
  <ul class="map">
    <li class="topicref"><a href="topic.html">Topic</a></li>
    <li class="topicref"><a href="topic_2.html#topic">Topic</a><ul>
        <li class="topicref"><a href="topic_2.html#subtopic">Subtopic</a></li>
      </ul></li>
  </ul>
</div>
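The numbering the force-unique parameter promises can be sketched in Python (a hypothetical helper, not DITA-OT code): the first reference to a topic keeps its name, and later references of the same topic get _2, _3, and so on, regardless of chunking:

```python
def unique_filenames(hrefs):
    """Map repeated topic hrefs to unique output names: topic.html, topic_2.html, ..."""
    seen = {}
    out = []
    for href in hrefs:
        base = href.rsplit(".", 1)[0]
        seen[base] = seen.get(base, 0) + 1
        if seen[base] == 1:
            out.append(base + ".html")
        else:
            out.append("%s_%d.html" % (base, seen[base]))
    return out
```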

Actual Behavior

  • Only one HTML file is generated, namely, topic_2.html (the one with the chunked content).
  • index.html contains:
<div>
  <ul class="map">
    <li class="topicref"><a href="topic_2.html#topic">Topic</a></li>
    <li class="topicref"><a href="topic_2.html#topic">Topic</a><ul>
        <li class="topicref"><a href="topic_2.html#subtopic">Subtopic</a></li>
      </ul></li>
  </ul>
</div>

Steps to Reproduce

  1. Transform the attached DITA map to XHTML setting the force-unique parameter to true.

Copy of the error message, log file or stack trace

Environment

  • DITA-OT version: 2.4.4
  • Operating system and version (Linux, macOS, Windows): Windows 10 Pro
  • How did you run DITA-OT? oXygen 19
  • Transformation type (HTML5, PDF, custom, etc.): XHTML
Updated 25/06/2017 20:43

Thorium: unable to execute runners

saltstack/salt

Description of Issue/Question

I have set up Thorium to send a HipChat message through the salt.cmd runner (calling hipchat.send_message does not depend on a particular minion).

Setup

Thorium configuration:

too_many_commits:
  runner.cmd:
    - fun: salt.cmd
    - arg:
      - hipchat.send_message
    - kwargs:
      - room_id: 1717
      - message: too many commits
      - from_name: ''
      - api_key: Ag56uXGGB6jTh1Lc8sEpZOgX6rMCm7M5wN6dPLFd
      - api_version: v2

While from the CLI, the command works fine and posts indeed to HipChat:

salt-run salt.cmd hipchat.send_message 1717 'too many commits' '' api_key='Ag56uXGGB6jTh1Lc8sEpZOgX6rMCm7M5wN6dPLFd' api_version='v2'

Steps to Reproduce Issue

Error traceback (from master logs):

[DEBUG   ] LazyLoaded runner.cmd
[INFO    ] Running state [too_many_commits] at time 09:32:15.553993
[INFO    ] Executing state runner.cmd for too_many_commits
[ERROR   ] An exception occurred in this state: Traceback (most recent call last):
  File "/home/admin/salt/salt/state.py", line 1750, in call
    **cdata['kwargs'])
  File "/home/admin/salt/salt/loader.py", line 1705, in wrapper
    return f(*args, **kwargs)
  File "/home/admin/salt/salt/thorium/runner.py", line 47, in cmd
    client.cmd_async(low)
  File "/home/admin/salt/salt/runner.py", line 117, in cmd_async
    reformatted_low = self._reformat_low(low)
  File "/home/admin/salt/salt/runner.py", line 73, in _reformat_low
    verify_fun(self.functions, fun)
  File "/home/admin/salt/salt/utils/lazy.py", line 23, in verify_fun
    raise salt.exceptions.CommandExecutionError(lazy_obj.missing_fun_string(fun))
CommandExecutionError: 'cmd' is not available.

Versions Report

Salt Version:
           Salt: 2016.11.5-120-ge7fc30f

Dependency Versions:
           cffi: 1.10.0
       cherrypy: Not Installed
       dateutil: 2.2
      docker-py: Not Installed
          gitdb: 0.5.4
      gitpython: 0.3.2 RC1
          ioflo: Not Installed
         Jinja2: 2.9.6
        libgit2: Not Installed
        libnacl: Not Installed
       M2Crypto: Not Installed
           Mako: Not Installed
   msgpack-pure: Not Installed
 msgpack-python: 0.4.2
   mysql-python: 1.2.3
      pycparser: 2.17
       pycrypto: 2.6.1
   pycryptodome: Not Installed
         pygit2: Not Installed
         Python: 2.7.9 (default, Jun 29 2016, 13:08:31)
   python-gnupg: Not Installed
         PyYAML: 3.12
          PyZMQ: 14.4.0
           RAET: Not Installed
          smmap: 0.8.2
        timelib: Not Installed
        Tornado: 4.2.1
            ZMQ: 4.0.5

System Versions:
           dist: debian 8.6
        machine: x86_64
        release: 3.16.0-4-amd64
         system: Linux
        version: debian 8.6
Updated 21/06/2017 15:56 2 Comments

[spi] SPI bus can only read and write array and string

01org/zephyr.js

Description

Now, the SPI bus can only read and write arrays and strings. When I test buffer data as a number or boolean, the SPI bus hangs.

Test Code


Change "Hello World\0" to true or 1024 in /samples/SPI.js on line 10.

Steps to Reproduction


Actual Result

Logs: SPI test starting..

Expected Result


Can read and write numbers, booleans, or other buffer data.

Test Builds

Branch Commit Id Target Device Test Date Result
master 15d3847 Arduino 101 June 21, 2017 Fail

Additional Information


Updated 27/06/2017 01:24 4 Comments

L7Policy tests can fail due to timing issue and lb status behavior in mitaka

F5Networks/f5-openstack-lbaasv2-driver

OpenStack Release

Exists in mitaka and newton

Bug Severity

For bugs enter the bug severity level. Do not set any labels.

Severity: <Fill in level: 1 through 5>

Severity level definitions:

  1. Severity 1 (Critical): Defect is causing systems to be offline and/or nonfunctional; immediate attention is required.
  2. Severity 2 (High): Defect is causing major obstruction of system operations.
  3. Severity 3 (Medium): Defect is causing intermittent errors in system operations.
  4. Severity 4 (Low): Defect is causing infrequent interruptions in system operations.
  5. Severity 5 (Trivial): Defect is not causing any interruptions to system operations, but is nonetheless a bug.

Description

This is a combination of a test issue and an actual issue in neutron-lbaas (although some might purport it was a design decision in neutron-lbaas). The intermittent failures of the l7policy tests below are often caused by the policy not yet existing on the BIG-IP device. This is because, in the test, the lb provisioning status is used as a guide for when objects are changing beneath it. This means that if a listener is created or updated or deleted, the lb’s provisioning status goes into PENDING_UPDATE. Yet when a policy is created (only on the creation event), the lb’s status is not set to PENDING_UPDATE. This is true for mitaka and newton.

Check it out:

https://github.com/openstack/neutron-lbaas/blob/stable/mitaka/neutron_lbaas/services/loadbalancer/plugin.py#L1093

We can simply wait in the test, but there could be confusion for log-readers and test debuggers if they observe the service object during these events. We could also try the following: in the service builder transaction, we could iterate over the policies and rules, identify those in the PENDING_CREATE state, and update the lb’s provisioning status in that transaction. Then, when the agent has deployed the policy or rule, the agent will update the status accordingly.

Updated 21/06/2017 21:33

[fs] Please complete FS APIs on fs.md

01org/zephyr.js

Description

I find the FS APIs in /docs/fs.md are not complete; there is just one interface, for state. And there is no implementation for /samples/FsAsync.js. Will we add it? If not, please remove it.

Test Code

/docs/fs.md
/samples/FsAsync.js

Steps to Reproduction


Actual Result

The FS APIs are not complete.

Expected Result

The FS APIs should be complete.

Test Builds

Branch Commit Id Target Device Test Date Result
master 15d38476 Arduino 101 June 21, 2017 Fail

Additional Information


Updated 23/06/2017 06:28 2 Comments

Build and register ISimpleDOM proxy ourselves (AKA fix broken math in Firefox/Chrome on some systems)

nvaccess/nvda

Steps to reproduce:

  1. Open this URL in Firefox or Chrome: data:text/html,<math><mi>x</mi></math>
  2. Use the cursor keys to read the document.

Expected behavior:

“x” should be reported.

Actual behavior:

On some systems/builds of software, nothing is reported and the following is logged:

DEBUGWARNING - NVDAObjects.IAccessible.ia2Web.Math._get_mathMl (11:45:38.736):
Error retrieving math. Not supported in this browser or ISimpleDOM COM proxy not registered.
Traceback (most recent call last):
  File "NVDAObjects\IAccessible\ia2Web.pyc", line 63, in _get_mathMl
  File "comtypes\__init__.pyc", line 1078, in QueryInterface
COMError: (-2147467262, 'No such interface supported', (None, None, None, 0, None))

Additional info:

This occurs when the 32 bit and/or 64 bit ISimpleDOM COM proxy is not registered. (For 64 bit browsers, we need both.) This can happen for several reasons:

  1. 64 bit builds of Firefox don’t install the 32 bit proxy. This will only become more prevalent as more users migrate to 64 bit builds.
  2. I suspect (but am not certain) that Chrome doesn’t ever install the proxy, regardless of bitness.
  3. If you uninstall a Mozilla app while other Mozilla apps are still installed, that might unregister the proxy.
  4. Possibly other obscure edge cases I haven’t thought of.

The key point is that we simply cannot rely on this being installed correctly. Instead, we should build it and register it ourselves using CoRegisterClassObject/CoRegisterPSClsid as we do for IAccessible2 (see installIA2Support in nvdaHelper/remote/IA2Support.cpp).

P2 because this breaks math support for Firefox and Chrome for an increasing number of users.

Updated 21/06/2017 01:56

Add hinting system

google/tie

When the user submits code multiple times but continues to fail the unit tests, display a “Would you like a hint?” prompt. Note this means not showing a hint button from the start (requiring the user to try at least several times before offering a hint).
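A minimal sketch of the gating logic (the threshold value is an assumption, not a decided number):

```python
HINT_THRESHOLD = 3  # assumed number of failed submissions before offering a hint

def should_offer_hint(failed_submissions, threshold=HINT_THRESHOLD):
    """Offer "Would you like a hint?" only after repeated failed attempts."""
    return failed_submissions >= threshold
```

The counter would reset whenever the user passes the unit tests or moves to a new question.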

Updated 21/06/2017 21:18

Raw value serializer for Spatial type not correct

OData/odata.net


In RawValueWriter.cs there is a method named WriteRawValue(object value), which contains the following code to write a Spatial object:

```csharp
else if (value is Geometry || value is Geography)
{
    PrimitiveConverter.Instance.TryWriteAtom(value, textWriter);
}
```

However, it should call WriteJsonLight, not TryWriteAtom.

Assemblies affected

OData .Net lib

Reproduce steps

For example:

```csharp
RawValueWriter target = new RawValueWriter(this.settings, this.stream, new UTF32Encoding());
var value2 = GeometryPoint.Create(1.2, 3.16);
target.WriteRawValue(value2);
```

Expected result

{"type":"Point","coordinates":[1.2,3.16],"crs":{"type":"name","properties":{"name":"EPSG:4326"}}}

Actual result

SRID=4326;POINT (1.2 3.16)

Additional detail

Updated 23/06/2017 22:59 1 Comments

Can't select a custom C compiler binary

bazelbuild/bazel

Hello! I’m doing builds of a large iOS app (a mix of ObjC, C++, C, and ObjC++) with several thousands of files. Certain builds depend on using a custom compiler binary. In practice, this will be a custom build of Clang that isn’t provided by the system.

Right now under macOS, Bazel depends on xcrunwrapper.sh and consequently on Xcode’s version of Clang.

I’d like to select my custom compiler, and use all of the existing infrastructure in Bazel.

Examples of how this could look:

Select the LANG compiler via an env variable ( like Xcode’s API for custom C/C++/Swift compilers )

__LANG__C=/path/to/my/clang bazel build MyTarget or

Select the compiler via the workspace

  # In WORKSPACE
  __LANG__compiler_tool(
    path = "/path/to/my/__LANG__compiler"
  )

Crosstool looks like a potentially viable solution but I really don’t need or want to change any other functionality than the actual compiler binary.

This affects all bazel versions I’ve tried.

Have you found anything relevant by searching the web?

Here is a potentially related issue: #3037. This issue potentially works around this by using crosstool. I think this adds significant complexity to the build system to select a compiler tool.

Temporarily, I’ve worked around this issue by compiling object files with a custom compiler rule, which bypasses a lot of great features that Bazel has and adds a lot of complexity. This isn’t a viable long term solution, and really slows down the build.

Any suggestions are greatly appreciated 🚀 !

Updated 26/06/2017 17:57 7 Comments

Evaluate FB Experiment

mozilla/Reps

Goal:

The FB Experiment is evaluated and if successful, moved out of the “Experiment” phase and officially documented.

Roles:

Responsible: @MichaelKohler Accountable: Reps Council Supporting: FB Experiment Team Consulted: Informed: All Reps

Required:

  • [ ] Ask FB Team for latest numbers and recommendation (@MichaelKohler, Due: June 23th)
  • [ ] Evaluate Experiment (@MichaelKohler, Due: June 28th)
  • [ ] Decide on Continuation (Council, Due: June 29th)
  • [ ] Update Communication Page to include outcome (and Team) (@MichaelKohler, Due: June 30th)
Updated 20/06/2017 19:23 5 Comments

Some formats have wrong component order

google/gapid

It seems like most of the pack formats have their components reversed. Take for example:

case VkFormat_VK_FORMAT_A2B10G10R10_UNORM_PACK32:
    return image.NewUncompressed("VK_FORMAT_A2B10G10R10_UNORM_PACK32", fmts.ABGR_U2U10U10U10_NORM), nil

According to these docs:

VK_FORMAT_A2B10G10R10_UNORM_PACK32 specifies a four-component, 32-bit packed unsigned normalized format that has a 2-bit A component in bits 30..31, a 10-bit B component in bits 20..29, a 10-bit G component in bits 10..19, and a 10-bit R component in bits 0..9.

The stream package always declares the components from lowest bit / lowest byte to highest. That would make VK_FORMAT_A2B10G10R10_UNORM_PACK32 fmts.RGBA_U10U10U10U2_NORM for a little-endian architecture (which is what we currently assume throughout GAPID).
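The packing can be checked directly. A Python sketch unpacking a VK_FORMAT_A2B10G10R10_UNORM_PACK32 word per the quoted spec (A in bits 30..31, B in 20..29, G in 10..19, R in 0..9) shows that R is the lowest field, i.e. read lowest-bit-first the component order is R, G, B, A:

```python
def unpack_a2b10g10r10(word):
    """Unpack a VK_FORMAT_A2B10G10R10_UNORM_PACK32 value into (r, g, b, a)."""
    r = word & 0x3FF            # bits 0..9
    g = (word >> 10) & 0x3FF    # bits 10..19
    b = (word >> 20) & 0x3FF    # bits 20..29
    a = (word >> 30) & 0x3      # bits 30..31
    return r, g, b, a

# Example: R=1, G=2, B=3, A=1 packed per the spec.
packed = 1 | (2 << 10) | (3 << 20) | (1 << 30)
```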

There are many others that have the same issue.

Updated 20/06/2017 16:02 1 Comments

Overlay using CDN

palantir/blueprint


Bug report

  • Package version(s): Core from 1.11 to 1.20, using react-with-addons from 15.3.1 to 15.6.1
  • Browser and OS versions: Chrome 59

Steps to reproduce

  1. Build a simple page with a buildless react (with addons)
  2. Add the CDN support of Blueprint
  3. Add an overlay to the view

Actual behavior

Overlay doesn’t appear at all. The console yields the error: Element type is invalid: expected a string (for built-in components) or a class/function (for composite components) but got: undefined. You likely forgot to export your component from the file it’s defined in. Check the render method of Blueprint.Overlay.

Expected behavior

An overlay appearing?

Evidence

https://jsfiddle.net/yjsb0np9/3/

Updated 22/06/2017 20:30 4 Comments

BasePath ignored when scheme and host aren't set

swagger-api/swagger-js

Currently, if a spec doesn’t have a scheme and host set, the basePath is not used when executing an operation.

The source responsible for this can be found here: https://github.com/swagger-api/swagger-js/blob/master/src/execute.js#L212-L227

I have a fix on my own fork where I do essentially the same thing that line 224 does, but with the computedPath. I will create a pull request, hopefully later this week.
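To illustrate the intended behavior, here is a sketch of the URL-building logic in Python (not the actual swagger-js code; build_url and its parameters are hypothetical names): the basePath should be prepended to the operation path whether or not the spec declares a scheme and host.

```python
# Sketch of the intended URL-building logic (not actual swagger-js code).
# basePath must be applied regardless of whether scheme/host are present.

def build_url(path, base_path="", scheme=None, host=None):
    full_path = (base_path.rstrip("/") + path) if base_path else path
    if scheme and host:
        return f"{scheme}://{host}{full_path}"
    # No scheme/host: return a host-relative URL, but keep the basePath.
    return full_path

# With scheme and host set:
print(build_url("/pets", "/api/v2", "https", "example.com"))  # https://example.com/api/v2/pets
# Without scheme/host, the basePath must still be used (the reported bug
# is that it gets dropped in this case):
print(build_url("/pets", "/api/v2"))  # /api/v2/pets
```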

Updated 20/06/2017 17:59

Success toaster when bug report submitted

vector-im/riot-web


Description

IIRC, there’s a dialog/toaster for when a bug report could not be sent. It would be nice to have a prompt for when one is submitted successfully.

Updated 22/06/2017 03:24 2 Comments

[SPI] "Hello World\0" is identified as invalid ascii

01org/zephyr.js

Description

When running SPI.js on an Arduino 101, “Hello World\0” is identified as invalid ASCII.
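For context, every byte of “Hello World\0” is within the 7-bit ASCII range, so a strict printable-only check is the likely culprit. A quick sketch of the distinction (Python, not the zephyr.js implementation):

```python
# Sketch (not the zephyr.js implementation): "Hello World\0" is valid
# 7-bit ASCII (every byte < 128), but the trailing NUL fails a check
# that only accepts printable characters (0x20..0x7E).

data = b"Hello World\x00"

is_ascii = all(b < 0x80 for b in data)               # True: all bytes ASCII
is_printable = all(0x20 <= b <= 0x7E for b in data)  # False: NUL not printable

print(is_ascii, is_printable)  # True False
```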

Test Code

SPI.js

Steps to Reproduce

  1. Add console.log to SPI.js as below:

[Screenshot: selection_060]

  2. make JS=samples/SPI.js
  3. make dfu

Actual Result

The error SPI error: buffer has invalid ascii is printed.

[Screenshot: selection_062]

Expected Result

The SPI sample runs successfully.

Test Builds

Branch | Commit Id | Target Device | Test Date | Result
---|---|---|---|---
master | 78c9749 | Arduino 101 | Jun 20, 2017 | Fail

Additional Information

When running SPI.js on a K64F, the error below is printed. Is the SPI module supported on the K64F, too?

[Screenshot: selection_046]

Updated 27/06/2017 01:36 3 Comments

Filtering rooms in the left sidebar does not display rooms from collapsed categories

vector-im/riot-web


Description

When filtering rooms using the input box in the left sidebar, matches from collapsed room categories are not shown.

Steps to reproduce

  • Collapse a room category in the left sidebar.
  • Enter a search string in the room filter input box for which results from the collapsed category would be shown.

Searching for rooms should probably also show matches from collapsed categories, without requiring them to be manually un-collapsed.


[Screenshot: riot-room-filter-collapsed]

Version information

  • Platform: Both web and Electron application.

For the web app:

  • Browser: Any.
  • OS: All.
  • URL: riot.im/develop + riot.im/app

For the desktop app:

  • OS: All.
  • Version: 0.9.10
Updated 20/06/2017 21:43 3 Comments

zsh completion does not work

bazelbuild/bazel

Please provide the following information. The more we know about your system and use case, the more easily and likely we can help.

Description of the problem / feature request / question:

I’m using bazel 0.5.1 and zsh 5.3.2.

This is the error I get when I try to invoke tab completion:

    ..._bazel_get_options:10: command not found: b
    _get_commands:5: command not found: b

Updated 26/06/2017 12:09 1 Comments

spawn action execution directly depends on runfiles artifacts

bazelbuild/bazel

Back in 0.4.x and earlier, SpawnActions created by ctx.action always had an empty runfiles supplier. At some point (I think 4a877386b0d647885dbba48714d1be36a36362f4), such SpawnActions started having non-empty runfiles suppliers. As ActionExecutionFunction dutifully declares deps on all action inputs and runfiles supplier artifacts, one of the major reasons for the runfiles middleman is defeated. This hits actions that have tools with many runfiles hard. In one large build I have, this bug increased the total number of skyframe edges by 10x (1.6 million to 17 million), leading to a major memory (and GC time) regression.

The simplest solution is to not depend on runfiles artifacts in ActionExecutionFunction, i.e.:

    diff --git a/src/main/java/com/google/devtools/build/lib/skyframe/ActionExecutionFunction.java b/src/main/java/com/google/devtools/build/lib/skyframe/ActionExecutionFunction.java
    index 1b4bb2950..70c7ffe4b 100644
    --- a/src/main/java/com/google/devtools/build/lib/skyframe/ActionExecutionFunction.java
    +++ b/src/main/java/com/google/devtools/build/lib/skyframe/ActionExecutionFunction.java
    @@ -231,8 +231,7 @@ public class ActionExecutionFunction implements SkyFunction, CompletionReceiver
       @Nullable
       private AllInputs collectInputs(Action action, Environment env)
           throws ActionExecutionFunctionException, InterruptedException {
    -    Iterable<Artifact> allKnownInputs = Iterables.concat(
    -        action.getInputs(), action.getRunfilesSupplier().getArtifacts());
    +    Iterable<Artifact> allKnownInputs = action.getInputs();
         if (action.inputsDiscovered()) {
           return new AllInputs(allKnownInputs);
         }

This appears to fix the problem. It also mostly passes tests; however, remote execution requires runfiles to be in the PerActionFileCache. I’m not sure if there’s a good way around this. (Can we treat the runfiles middleman as an aggregating middleman and pull the transitive ArtifactValues through, as this comment suggests?)

Updated 26/06/2017 12:10 2 Comments

Swagger-ui json incorrect for embedded objects

swagger-api/swagger-ui

swagger-ui version: 3.0.10
Spec: 2.0

In the case of embedded objects, the Model display is correct, but the example JSON used for firing calls does not match it.

Model

    DummyRequest {
        requestId: string
            minLength: 0
            maxLength: 30
        entity: Entity {
            entityName: EntityName { firstName, lastName }
            citizenshipCountry: string
            phoneNumbers: [type]
        }
    }

Example value

{ "requestId": "string", "entity": { "entityName": {} }, }

Note that EntityName inside Entity is also an object; example value generation crashes in such a configuration, and everything after that fails.
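For comparison, a correctly generated example value for the model above would presumably expand the nested objects along these lines (a sketch; the placeholder values are assumed, not taken from the spec):

```python
# Sketch: roughly what the example value should look like for the model
# above, with nested objects expanded (placeholder values assumed).
import json

expected_example = {
    "requestId": "string",
    "entity": {
        "entityName": {"firstName": "string", "lastName": "string"},
        "citizenshipCountry": "string",
        "phoneNumbers": ["string"],
    },
}
print(json.dumps(expected_example, indent=2))
```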

Updated 21/06/2017 02:59 10 Comments
