Contribute to Open Source. Search issue labels to find the right project for you!

`@turf/concave` improvements

Turfjs/turf
  • [ ] make the maxEdge parameter optional, defaulting to Infinity (sketched below). A common expected output of a concave function (like concaveman) is to always return a single concave hull Polygon. @turf/concave has the capability to return a MultiPolygon (with holes! 👍 ), but this should not be the default behaviour.

  • [ ] remove the Error output when there is no hull to return; it should return an empty Feature (helpers.feature(null)) instead. Currently the Error “too few polygons found to compute concave hull” is usually due to a small maxEdge. This shouldn’t be considered an error, because an error should mean a misuse of the module (which isn’t the case here).

Ref. #907.
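
A sketch of the proposed default behaviour, assuming the current concave(points, maxEdge, units) signature; the coordinates below are illustrative only:

const concave = require('@turf/concave');
const helpers = require('@turf/helpers');

const points = helpers.featureCollection([
    helpers.point([10.195, 43.755]),
    helpers.point([10.404, 43.842]),
    helpers.point([10.579, 43.659]),
    helpers.point([10.360, 43.516]),
    helpers.point([10.140, 43.588])
]);

// proposed: omitting maxEdge behaves as maxEdge = Infinity and yields a single Polygon
const hull = concave(points);

// proposed: when no hull can be computed (e.g. maxEdge too small),
// return an empty feature instead of throwing, i.e. the equivalent of helpers.feature(null)
const empty = concave(points, 0.001, 'kilometers');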

Updated 20/08/2017 07:48

Add optional Dark theme

StratusNetwork/XML-Documentation

Writing XML is painstaking, and doing it at night with a dark theme would be much easier on the eyes and a better user experience for many users, myself included. Dark themes are very popular and accommodate users who need them, so I would love an option for a dark theme.

Updated 20/08/2017 07:14

in release-safe mode, in functions that return an error, generate an error return instead of crashing

zig-lang/zig

When you create a release build of software, you don’t want it to crash. Some people even argue in favor of not asserting in release builds, because they’d rather have the program in an undefined state than crash.

In release-fast mode, we don’t do safety checks. Full optimization + undefined behavior. However, you can override this default on a per-block basis by using @setDebugSafety(this, true);. Then you’ll have some release-safe parts of your code even in release-fast mode.

In release-safe mode, we do optimization but we turn on debug safety checks to prevent undefined behavior, such as integer overflow checking. We don’t really know what else we could do when an integer overflow occurs besides crash the program.

…or do we?

In a function whose return value is error or %T, we could have that function return an error code, such as error.UndefinedBehavior. Assuming the programmer correctly uses the cleanup idioms, this error will be handled correctly, like the other possible errors in the function, even though the programmer never explicitly considered that this particular error was possible.

If this sounds like hidden control flow, I want to point out that we’re talking about how to handle undefined behavior, and crashing the program is certainly also an example of hidden control flow.

When this situation happens, it’s certainly a bug. But because the programmer (presumably) has a path to handle errors in these cases, it can be a bug that does not bring down the entire program, pose a security threat, or destabilize the defined behavior of the program.

In the same way that you can override the default panic behavior by making pub fn panic(msg: []const u8) -> noreturn in the root source file, we can have another optional function such as pub fn warn(err: error) which defaults to printing a stack trace to stderr, but can be used to log these bugs. Obtaining a stack trace from the warn function would yield exactly the location that caused the problem.

Because Zig is so adamant about edge cases - for example insisting that memory allocation can fail - it is very common for functions to return a possible error. This makes the surface area that this covers fairly large.

Updated 20/08/2017 07:19

Deprecate the start.sh script in favor of using the underlying tools

phase2/generator-outrigger-drupal

The start.sh script was created early in the life of the environment sub-generator’s original code. The intention was to:

  • Work out all the details necessary to get a new local development environment from 0 to running as a single step.
  • Guarantee all the steps used to set up an environment on a local machine match the steps used by Jenkins for CI and deploying QA environments.
  • Create a single place to capture all the wacky gyrations needed to make containerized Drupal work the way we want it to.
  • Demonstrate to developers the “core path” to interacting with the containers to perform the basic operations of working with this setup.

However, over time a few more things crept in: sophisticated error handling, options to do different things, extra comments to further explain individual steps and what their output meant… until the start.sh script became something developers felt they had to learn, instead of a collection of steps they could feel comfortable with, used via the script as a sort of convoluted alias.

This has had a number of downsides:

  • Many developers are using the start.sh script instead of using docker-compose, grunt, composer, and drush when the specific tool is called for. This means the helper script is getting in the way of the learnability of standard tools and how they play together.
  • The start.sh script is large and convoluted-looking, but is really fairly simple. This misleads developers into thinking setup is more complicated than it is.
  • Some of the complexity that does exist came from a time when Docker, Outrigger, or Outrigger Drupal were less stable.
  • Some of the complexity that does exist came from a simpler time, when local environments were always a simpler case than QA environments. With the addition of tools such as Unison to Outrigger, that is no longer necessarily the case.

As a breaking change, it’s time to remove start.sh.

  • Replace start.sh contents with a few nice deprecation warnings.
  • Make sure the generated outrigger commands cover all the essential pieces of the start.sh script. This should adequately take over the discoverability/learnability function the start.sh script had.
  • Be comfortable with ~5 clear steps to deploy an environment instead of 1 multi-functional step, and incorporate those into the necessary Jenkins job templates.

What about existing projects?

Projects can continue to use the start.sh script they have, or not, as they like. As with any decision to run the generator with --replay to collect optional updates, there are things any project will need to skip. This will be a bigger, more obvious one, but the worst case scenario is that a project ends up with 3 sources of truth: the start.sh script, the rig project setup script, and whatever happens in Jenkins. That seems easy enough to rectify for a project willing to be “progressive” about updating code from the generator.

Updated 20/08/2017 06:55

Updates kube-dns to latest (1.14.4)

kubernetes-digitalocean-terraform/kubernetes-digitalocean-terraform

kube-dns was pretty out of date… latest version includes CVE fixes [1].

Slightly modified from [2] to work here. Removed serviceAccountName and strategy block.

Tested with the following command: kubectl run -i --tty testdns --rm --restart=Never --image=busybox --command -- nslookup kubernetes.default

[1] https://github.com/kubernetes/kubernetes/pull/47877 [2] https://github.com/kubernetes/kubernetes/tree/e633a1604f00908a1dcc898b206c3404db4d82ed/cluster/addons/dns

Updated 20/08/2017 06:39

Poor worker management

Still34/PsvDecryptCore

In the current state of the program, all decryption/file-related tasks get queued up in _taskQueue and all execute simultaneously. In other words, hundreds of workers get congested by the I/O.

Ideally, we’d want to maximize throughput by limiting the number of concurrently running tasks, instead of queuing all tasks and waiting for all of them to finish.

I have no idea how we might go about implementing this; any help is welcome.

Updated 20/08/2017 06:30

implement nodemon as devDependency

ideahacks/ideahacks.la

Steps

  1. Create a new branch following the naming convention

  2. Run npm install --save-dev nodemon. Yes, npm install requires internet.

  3. Within package.json, within the scripts section, create a new script called devServer that uses nodemon to run server.js. The script will basically look like this: "devServer": "./node_modules/.bin/nodemon server.js"

  4. Keep the scripts in alphabetical order. This means “devServer” goes before all the other scripts (see the sketch after these steps). Don’t forget to add a comma after the script to separate fields.

  5. Test to make sure you did everything right. You should be able to run npm run devServer, which will start the server.
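
For reference, a sketch of what the scripts section could end up looking like after steps 3 and 4; "start" here is only a stand-in for whatever scripts the project already defines:

"scripts": {
    "devServer": "./node_modules/.bin/nodemon server.js",
    "start": "node server.js"
}

(npm also adds node_modules/.bin to the PATH when running scripts, so "nodemon server.js" would work as well, but the explicit path matches step 3.)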

Updated 20/08/2017 06:27

Check matrix properties

genetics-statistics/GEMMA

GEMMA should give hints if matrix properties are invalid, such as discussed in https://github.com/genetics-statistics/GEMMA/issues/45#issuecomment-323524276. E.g.

  1. Fail if K has negative eigenvalues
  2. Fail if K is not symmetric
  3. Fail if K is not positive definite
  4. Warn if eigenvalues are very small
  5. Warn if K is ill conditioned
  6. Warn on related pairs
  7. Warn on MAF problems

@xiangzhou @pcarbo anything else we can think of?

Failures and warnings should be reported in the log file. These checks can be disabled with the --no-checks switch (i.e., dangerous mode but faster).
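
For reference, checks 1, 3 and 5 are all statements about the spectrum of K; for a symmetric K the properties being tested are:

K \succ 0 \iff \lambda_i(K) > 0 \ \text{for all } i, \qquad \kappa(K) = \frac{\lambda_{\max}(K)}{\lambda_{\min}(K)}

so negative eigenvalues (check 1) already rule out positive definiteness (check 3), and a very large condition number κ(K) is what check 5 flags.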

Updated 20/08/2017 06:20

Some endpoints still use singular resources

openEHR/specifications-ITS
GET /ehr/{ehrId}/versioned_ehr_access/version{?versionSelector}
POST /ehr/{ehrId}/versioned_ehr_access/version
POST /ehr/{ehrId}/versioned_compositions/{uid}/version{?format}
GET /ehr/{ehrId}/versioned_compositions/{uid}/version/{versionUid}{?format}
GET /ehr/{ehrId}/versioned_compositions/{uid}/version{?versionSelector,format}
GET /ehr/{ehrId}/versioned_ehr_status/version?{versionSelector}
GET /ehr/{ehrId}/versioned_ehr_status/version/{versionUid}

“version” should be “versions”.
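
That is, after pluralisation the paths would read:

GET /ehr/{ehrId}/versioned_ehr_access/versions{?versionSelector}
POST /ehr/{ehrId}/versioned_ehr_access/versions
POST /ehr/{ehrId}/versioned_compositions/{uid}/versions{?format}
GET /ehr/{ehrId}/versioned_compositions/{uid}/versions/{versionUid}{?format}
GET /ehr/{ehrId}/versioned_compositions/{uid}/versions{?versionSelector,format}
GET /ehr/{ehrId}/versioned_ehr_status/versions?{versionSelector}
GET /ehr/{ehrId}/versioned_ehr_status/versions/{versionUid}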

From https://github.com/openEHR/specifications-ITS/blob/master/apiary.apib

Updated 20/08/2017 06:18

Failure to send mail for deactivation requests kills the entire call

cloudveiltech/CitadelManager

Recommend isolating the mail subsystem calls made when deactivation requests are modified by the administrator or requested by the user. If mailing fails (as in a local sandbox), the action only partially fails: the admin gets an error when editing the record, and the user gets an error when trying to consume a granted deactivation request, yet the granted request is still treated as consumed by the user and deleted. So,

  • The admin gets an error, but the record will look fine.
  • User requests deactivation again, record is consumed on server.
  • User gets stuck not being permitted to deactivate because the HTTP response was an error response.

I haven’t looked at the code but hopefully the mail calls can just be wrapped in a try/catch that gets logged.

Updated 20/08/2017 06:17 1 Comment

MGK-006 Spell- and mini-game packages

Magikcraft/product-board

From @jwulf on July 22, 2017 6:0

User Story

As a user I can add a module to my package.json, save it, then run spells from it directly in Minecraft with a command like /call <module_name> <exported_entrypoint> - eg: /call mct1 start.

I can also install and run mini-games from packages without writing any additional spell code to load them.

Background

At the moment you can import packages, but you still have to write a spell to use anything from the package. Developers currently cannot write packages of reusable spells that require nothing more than importing. With this feature, developers can write packages of spells and mini-games that end-users can import and cast “out of the box”.

Feature Description

Spell packages

Package interface definition

This is a specification for exporting spells from a package, and a wrapper method to call those exported entry points. A package of reusable spells must export spells:

module.exports.spells = {
    _default: require('./lib/lightning.js'),
    lightning : require('./lib/lightning.js'),
    fireball: require('./lib/fireball.js')
};

Calling spells from a spell package

A prototype call spell can work like this:

function call(module, spell) {
    require(module)[spell]();
}

It is used like this: /cast call mct1 start. An alternative syntax is: /cast call mct1:start or /cast call mct1.start.

The implementation for this is:

function call(module, spell = '_default') {
    // support the alternative "module:spell" and "module.spell" syntaxes
    [':', '.'].forEach(separator => {
        if (module.indexOf(separator) != -1) {
            const _args = module.split(separator);
            module = _args[0];
            spell = _args[1];
        }
    });
    require(module).spells[spell]();
}

[TODO - In or out of scope?] A further consideration is passing further arguments through to the exported spell, for example:

/cast call sitapati lstrike <playername>
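
A minimal sketch of how argument pass-through could work, assuming the spell runtime supports rest parameters (the snippets above already rely on ES2015 syntax); this is a possible extension of the prototype rather than a settled design:

function call(module, spell = '_default', ...args) {
    // forward any extra /cast arguments to the exported spell
    require(module).spells[spell](...args);
}

// e.g. /cast call sitapati lstrike <playername> would end up invoking
// require('sitapati').spells['lstrike'](playername)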

Inspecting a spell (or mini-game) package

Packages can be inspected for the spells they contain like this:

const magik = magikcraft.io;

function inspect(module) {
    const spells = require(module).spells;
    magik.dixit(`Module ${module} contains the following spells:`);
    magik.dixit(Object.keys(spells));
}

Mini-game packages

This is a specification for mini-game packages.

Naming convention

They should be named magikcraft-minigame-*.

Mini-game package interface definition

A mini-game package must export:

module.exports = { spells: { _default: require('./entrypoint.js') } };

It can optionally export additional entrypoints, but the spells._default export is mandatory, and should start the game.

Loading a mini-game package

The following spell will load a mini-game package that conforms to this specification:

function game(module, spell = '_default') {
    const _module = `magikcraft-minigame-${module}`;
    require(_module).spells[spell]();
}

Out of Scope

Out of scope for the initial implementation are:

  • A Java plugin command /call to call the spell - it will be /cast call mct1 start in the first cut.
  • Fusing the spells into the users' spells, so they don’t need namespacing to cast them.

Acceptance Criteria

A module with a conforming spells export can be added to the package.json, and then /cast call <modulename>:<spell> and /cast call <modulename> <spell> execute the spell.

/cast call <modulename> <spell> <arg1> <arg2> passes the arguments through to the exported spell.

User Acceptance Test Plan

Here is the process for testing this feature:

End-User Documentation

[Docs that can be copypasta to the user docs]

Copied from original issue: Magikcraft/product-board#6

Updated 20/08/2017 06:05

Async HTTP methods

Magikcraft/product-board

From @jwulf on July 21, 2017 10:47

User Story

As a user, in my Magikcraft code I can do non-blocking, asynchronous HTTP calls with GET and POST methods that take a callback argument that receives the response to the HTTP method call.

Background

Currently: HTTP calls from inside Nashorn are blocking (synchronous) if they use the Java plugin http helpers. This halts the entire server while the call executes, and if the call does not return it causes the server to panic and threadlock. HTTP calls from inside Nashorn that use the endpoint helper methods are asynchronous, but there is no return method. These methods are dispatched by delegating them over HTTP to the endpoint, which responds immediately with “OK”.

Feature Description

This feature is for asynchronous HTTP methods in the Java plugin. These methods will use the Apache Async HTTP library. They will take a JavaScript callback as an argument, and call that callback with the response.

This allows Magikcraft code running in Nashorn to make asynchronous calls to HTTP endpoints and receive and process the results of these calls in a callback.

The following should be implemented:

  • Async HTTP GET and POST methods in the Java Plugin
  • A wrapper library that provides API-compatible implementations for Node and Java, returning decoded responses to the callback
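
A rough sketch of the call shape this implies from spell code; the http.get/http.post names, their placement on the magik namespace, and the response format are assumptions rather than the plugin's confirmed API:

const magik = magikcraft.io;

// hypothetical async GET: the callback receives the decoded response
magik.http.get('http://example.com/api/status', response => {
    magik.dixit(`GET returned: ${response}`);
});

// hypothetical async POST with a body
magik.http.post('http://example.com/api/score', { player: 'sitapati', score: 42 }, response => {
    magik.dixit(`POST returned: ${response}`);
});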

Out of Scope

Great things that can be done in version 2:

Acceptance Criteria

The following need to function to consider this feature complete:

User Acceptance Test Plan

Here is the process for testing this feature:

End-User Documentation

[Docs that can be copypasta to the user docs]

Copied from original issue: Magikcraft/product-board#4

Updated 20/08/2017 06:05

Snipcart Button for Servers

Magikcraft/product-board

From @triyuga on July 21, 2017 8:51

User Story

As a user, so that I can have my own awesome Magikcraft Server and give money to Magikcraft, I want to read (or watch a video) about having my own Magikcraft Server, and purchase a Subscription to a Server Management Console, including requesting my own subdomain, from the https://play.magikcraft.io/servers page of the play app.

Feature Description

  • Add a “Buy a Server” panel to the Server List page, containing info about buying a server and a “Buy Now” Snipcart subscription button (see https://docs.snipcart.com/configuration/recurring-and-subscription-plans-definition).
  • Add a basic form collecting the requested subdomain (defaults to minecraftUsername) and, optionally, some info about why the user wants the server.
  • Add the ability for admins to create new servers and add users to servers.
  • Everything else is Wizard of Oz

Acceptance Criteria / User Acceptance Test Plan

I can learn about and buy a Server Console Subscription from the Servers page via a Snipcart Subscription button.

End-User Documentation

[Docs that can be copypasta to the user docs]

Copied from original issue: Magikcraft/product-board#3

Updated 20/08/2017 06:05

Snipcart Button for Membership

Magikcraft/product-board

From @triyuga on July 21, 2017 8:38

User Story

As a user, so that I can give money to Magikcraft, I can click a “Become a Member” Snipcart button to buy a monthly recurring subscription as a Magikcraft Member.

Feature Description

Use Snipcart Subscriptions (https://docs.snipcart.com/configuration/recurring-and-subscription-plans-definition). Configure a basic membership subscription product and put the button in the play app.

Acceptance Criteria / User Acceptance Test Plan

On every page I can see/access a “Become a Member” Snipcart button, enter purchasing details, and start paying a monthly subscription to Magikcraft. Josh and Tim to both subscribe upfront.

End-User Documentation

[Docs that can be copypasta to the user docs]

Copied from original issue: Magikcraft/product-board#2

Updated 20/08/2017 06:04

Modular lore - core

Magikcraft/product-board

From @jwulf on July 21, 2017 7:54

User Story

As a developer, I can write lore that is loaded into the core magik. namespace on engine initialisation. I can do this in my own repository with no dependency on the bootloader.

Feature Description

  • A new repository with a new package - magikcraft-lore-core. This repo and package contain the magik. lore.
  • This package is specified as a dependency in the server’s root package.json.
  • magikcraft-api loads all dependencies in the server root package.json that have a package name of the pattern magikcraft-lore-*.
  • The magikcraft-lore-core main entry in its own package.json specifies a file that exports a loreToLoad object that is an array of lore.
  • The lore items are objects of this shape: { name: string, cost: number, code: (ICanon) => () => any } (see the sketch below)
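
A sketch of what the package's main entry file could export under this spec; the lightning lore, its cost, and the canon.strikeLightning() call are illustrative assumptions, not part of the spec:

// index.js of magikcraft-lore-core (illustrative)
module.exports.loreToLoad = [
    {
        name: 'lightning',
        cost: 5,
        // code receives the canon (ICanon) and returns the callable lore
        code: canon => () => canon.strikeLightning()
    }
];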

Out of Scope

Explicitly out of scope for this feature are the following:

  • Updating the server root package.json file to add the dependency. This should eventually be exposed to end-users, but for now it can be done by logging in via ssh and docker to prove the feature.

Acceptance Criteria

  • Lore are dynamically loaded at initialisation
  • Lore are dynamically loaded when the engine is reloaded via /npm reload, /npm update, /enginereload, or /spellreload.

User Acceptance Test Plan

  1. Add the magikcraft-lore-core package to the server root package.json as a dependency.
  2. Add a new lore to the appropriate branch of magikcraft-lore-core, and git push to that branch on GitHub. This new lore should have a visible side effect, like printing to the console.
  3. Run /npm update root on the testing server to cause the npm installation of dependencies.
  4. Write a test spell that calls the new lore.
  5. /cast the test spell.
  6. Observe the side-effect.

End-User Documentation

Copied from original issue: Magikcraft/product-board#1

Updated 20/08/2017 06:04 2 Comments

mix of reordering ∧ and ∨

jonaprieto/agda-metis

One of the problems in trying to justify clausify and canonicalize is the ordering of the subformulas after applying one of these inference rules. We recently added support for reordering a conjunction and a disjunction; as a next step, we want to support a mix of those cases.

Here is a sample problem.

Original:
p ∨ (q ∨ (r ∧ (s ∨ t)))

One of many reorderings:
(((t ∨ s) ∧ r) ∨ p) ∨ q
Updated 20/08/2017 06:02

Replace regex with BeautifulSoup in fetches

Galarzaa90/NabBot

Regex patterns are a hassle to maintain and are prone to errors. BeautifulSoup explores the page’s HTML tags and has built-in methods to fetch an element’s text, making it a lot easier to parse information.

However, it seems to be a bit slower than regex in some cases; for instance, when it replaced regex in get_character, parsing took around 40ms, compared to the 2ms it takes using regex. But since the fetching part already takes 400-500ms, the difference is negligible.

Functions to update:

  • [x] get_character
  • [ ] get_guild_online
  • [ ] get_highscores
  • [ ] get_world_info
  • [ ] get_world_online

Updated 20/08/2017 05:58

Investigate making Formation usable for Node-based server-side validation

ozzyogkush/formation

Since lots of websites nowadays use node and node-based server software, Formation could be quite useful on that end of the spectrum. Server-side validation is arguably more important than client-side.

This is to investigate what changes would be necessary in order to make it work.

My initial thought on implementing it is to create 3 new repositories:

  • formation-core : the core rule validation engine API
  • formation-client : the client-side process and event system API
  • formation-server : the server-side process and event system API

This repository would be a shorthand for using both formation-client and formation-server.

Updated 20/08/2017 05:44

UI for API role based permissions in API Publisher

wso2/carbon-apimgt

Proposed changes in this pull request

  • UI component for specifying role based permissions for an API.
  • Disabling API edit, delete options based on role permissions.
  • API delete option is moved into the API’s overview page.

When should this PR be merged

ASAP

Follow up actions

Checklist (for reviewing)

General

  • [x] Is this PR explained thoroughly? All code changes must be accounted for in the PR description.
  • [ ] Is the PR labeled correctly?

Functionality

  • [ ] Are all requirements met? Compare implemented functionality with the requirements specification.
  • [ ] Does the UI work as expected? There should be no Javascript errors in the console; all resources should load. There should be no unexpected errors. Deliberately try to break the feature to find out if there are corner cases that are not handled.

Code

  • [ ] Do you fully understand the introduced changes to the code? If not ask for clarification, it might uncover ways to solve a problem in a more elegant and efficient way.
  • [ ] Does the PR introduce any inefficient database requests? Use the debug server to check for duplicate requests.
  • [ ] Are all necessary strings marked for translation? All strings that are exposed to users via the UI must be marked for translation.

Tests

  • [ ] Are there sufficient test cases? Ensure that all components are tested individually; models, forms, and serializers should be tested in isolation even if a test for a view covers these components.
  • [ ] If this is a bug fix, are tests for the issue in place? There must be a test case for the bug to ensure the issue won’t regress. Make sure that the tests break without the new code to fix the issue.
  • [ ] If this is a new feature or a significant change to an existing feature, has the manual testing spreadsheet been updated with instructions for manual testing?

Security

  • [ ] Confirm this PR doesn’t commit any keys, passwords, tokens, usernames, or other secrets.
  • [ ] Are all UI and API inputs run through forms or serializers?
  • [ ] Are all external inputs validated and sanitized appropriately?
  • [ ] Does all branching logic have a default case?
  • [ ] Does this solution handle outliers and edge cases gracefully?
  • [ ] Are all external communications secured and restricted to SSL?

Documentation

  • [ ] Are changes to the UI documented in the platform docs? If this PR introduces new platform site functionality or changes existing ones, the changes should be documented.
  • [ ] Are changes to the API documented in the API docs? If this PR introduces new API functionality or changes existing ones, the changes must be documented.
  • [ ] Are reusable components documented? If this PR introduces components that are relevant to other developers (for instance a mixin for a view or a generic form) they should be documented in the Wiki.
Updated 20/08/2017 05:48 1 Comment

Validate Markdown content

bernardodiasc/filestojson

It looks like broken Markdown can cause trouble with gh-pages, and probably with data generation as well. We need to include a validation step somewhere to prevent invalid content from being published.

Resources:

  • https://github.com/markdownlint/markdownlint
  • https://github.com/wooorm/remark-lint
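
If remark-lint is the route taken, a minimal validation step could look something like the sketch below; remark-preset-lint-recommended and the content/post.md path are assumptions about this project's setup:

const fs = require('fs');
const remark = require('remark');
const lintRecommended = require('remark-preset-lint-recommended');
const report = require('vfile-reporter');

const markdownSource = fs.readFileSync('content/post.md', 'utf8');

remark()
    .use(lintRecommended)
    .process(markdownSource, (err, file) => {
        // print any Markdown problems found, and fail if there are any
        console.error(report(err || file));
        if (err || file.messages.length > 0) process.exit(1);
    });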
Updated 20/08/2017 06:08

Scottish Deancross Sector Boundary

VATSIM-UK/UK-Sector-File

Summary of issue/change

Map out the Deancross sector in the format demonstrated in _data\Old\Static Boundaries.txt

Lines should be drawn for the surrounding lateral boundaries, and then any lines within the main sector where the vertical boundary changes. The best source of this will be the agreed levels diagram for Deancross and the sector lines drawn out for the sector in EuroScope.

There’s also a sector diagram, which may be useful.

Reference (amendment doc, official source) including page number(s)

Scottish Agreed Levels Diagrams: http://www.vatsim-uk.co.uk/download/fetch/?downloadID=00344

Scottish Sector Diagrams: http://www.vatsim-uk.co.uk/download/fetch/?downloadID=00343

Affected areas of the sector file (if known)

Sectors\Static\SCO\Deancross.txt

Updated 20/08/2017 05:02

Scottish Rathlin Sector Boundary

VATSIM-UK/UK-Sector-File

Summary of issue/change

Map out the Rathlin sector in the format demonstrated in _data\Old\Static Boundaries.txt

Lines should be drawn for the surrounding lateral boundaries, and then any lines within the main sector where the vertical boundary changes. The best source of this will be the agreed levels diagram for Rathlin and the sector lines drawn out for the sector in EuroScope.

There’s also a sector diagram, which may be useful.

Reference (amendment doc, official source) including page number(s)

Scottish Agreed Levels Diagrams: http://www.vatsim-uk.co.uk/download/fetch/?downloadID=00344

Scottish Sector Diagrams: http://www.vatsim-uk.co.uk/download/fetch/?downloadID=00343

Affected areas of the sector file (if known)

Sectors\Static\SCO\Rathlin.txt

Updated 20/08/2017 05:00

Maintenance of the client samples

OpenTouryoProject/OpenTouryo

Requirement

  • Implement ServiceForSb and JsonController, and consume them from the UWP and SPA samples.
  • In this case, use the B and D layers of the WSServerSample that is used by the other WSClientSample.
  • In this case, use the token obtained by the authentication described in #248.

Reference information

  • WebAPI design points (WebAPI設計のポイント) - Open Touryo Wiki: https://opentouryo.osscons.jp/index.php?WebAPI%E8%A8%AD%E8%A8%88%E3%81%AE%E3%83%9D%E3%82%A4%E3%83%B3%E3%83%88
Updated 20/08/2017 04:59

some queries that have `DISTINCT` and `ORDER BY` should be invalid

pingcap/tidb

1. What did you do?

drop table if exists t;
create table t(a bigint, b bigint, c bigint);
insert into t values(1, 2, 1), (1, 2, 2), (1, 3, 1), (1, 3, 2);
select distinct a, b from t order by c;

To order the result, duplicates must be eliminated first. But to do so, which row should we keep? This choice influences the retained value of c, which in turn influences ordering and makes it arbitrary as well.

In MySQL, a query that has DISTINCT and ORDER BY is rejected as invalid if any ORDER BY expression does not satisfy at least one of these conditions:

  • The expression is equal to one in the select list
  • All columns referenced by the expression and belonging to the query’s selected tables are elements of the select list

For example, select distinct a, b from t order by a; would remain valid because a appears in the select list, whereas ordering by c does not.

2. What did you expect to see?

MySQL > select distinct a, b from t order by c;
ERROR 3065 (HY000): Expression #1 of ORDER BY clause is not in SELECT list, references column 'test.t.c' which is not in SELECT list; this is incompatible with DISTINCT

3. What did you see instead?

TiDB > select distinct a, b from t order by c;
+------+------+
| a    | b    |
+------+------+
|    1 |    2 |
|    1 |    3 |
+------+------+
2 rows in set (0.00 sec)
TiDB > desc select distinct a, b from t order by c;
+---------------+--------------+---------------+------+--------------------------------------------------------------------------------------------------------------+-------+
| id            | parents      | children      | task | operator info                                                                                                | count |
+---------------+--------------+---------------+------+--------------------------------------------------------------------------------------------------------------+-------+
| TableScan_7   | HashAgg_6    |               | cop  | table:t, range:(-inf,+inf), keep order:false                                                                 |     4 |
| HashAgg_6     |              | TableScan_7   | cop  | type:complete, group by:test.t.a, test.t.b, funcs:firstrow(test.t.a), firstrow(test.t.b), firstrow(test.t.c) |     1 |
| TableReader_9 | HashAgg_8    |               | root | data:HashAgg_6                                                                                               |     1 |
| HashAgg_8     | Sort_4       | TableReader_9 | root | type:final, group by:, , funcs:firstrow(col_0), firstrow(col_1), firstrow(col_2)                             |     1 |
| Sort_4        | Projection_5 | HashAgg_8     | root | test.t.c:asc                                                                                                 |     1 |
| Projection_5  |              | Sort_4        | root | test.t.a, test.t.b                                                                                           |     1 |
+---------------+--------------+---------------+------+--------------------------------------------------------------------------------------------------------------+-------+
6 rows in set (0.00 sec)

4. What version of TiDB are you using (tidb-server -V)?

$./bin/tidb-server -V
Git Commit Hash: a0017eda04a1d48e9ec088457afe279a8cd064f4
UTC Build Time:  2017-08-19 01:35:40
Updated 20/08/2017 04:46

Improve method for refreshing travel time

w1res/should-i-leave-yet

I was having a hard time finding a way to request an updated travel time estimate for the selected route in the Bing Maps documentation.

I thought I could just call the ‘calculateDirections’ function on the ‘DirectionsManager’ instance, but it would complain that it doesn’t have any waypoints. I think this has something to do with the fact that the waypoints weren’t added manually, but were added using the directions input panel.

My next thought was to simply remove the waypoints added by the input panel, and then add them back manually, but that had an issue where the ‘removeWaypoint’ function wasn’t working.

I ended up having to basically reinitialize the entire ‘DirectionsManager’ instance. There has to be a better way of doing this.
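
One possible alternative (untested, and assuming the standard Bing Maps V8 Directions module) is to add the waypoints programmatically, so that calculateDirections can simply be re-run on a timer; the addresses and refresh interval below are placeholders:

// assumes `map` is an existing Microsoft.Maps.Map instance
Microsoft.Maps.loadModule('Microsoft.Maps.Directions', () => {
    const directionsManager = new Microsoft.Maps.Directions.DirectionsManager(map);

    directionsManager.addWaypoint(new Microsoft.Maps.Directions.Waypoint({ address: 'Start address' }));
    directionsManager.addWaypoint(new Microsoft.Maps.Directions.Waypoint({ address: 'Destination address' }));
    directionsManager.calculateDirections();

    // refresh the travel time estimate every 5 minutes
    setInterval(() => directionsManager.calculateDirections(), 5 * 60 * 1000);
});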

Updated 20/08/2017 04:29
