For long term stability, updates, and security
Will have to update to 1.10 as first incremental update: https://docs.djangoproject.com/en/1.11/howto/upgrade-version/
Created test suite for parsing module #986 -> tests Validator -> tests Digestor -> tests utils
Updated schools mappers to use new directory structure -> looks for courses.py, evals.py, textbooks.py -> Moved school settings to school local config files
Restructured logging infrastructure
-> Use logging config yaml files instead of Python dicts
-> Removed custom loggers written in parsing module
-> Use the Python logging module
-> load loggers in
-> Incorporated updates into makeschool command
-> TODO - email notification logger on parsing error #552
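Loading logger configuration through the standard library can be sketched as below. The config dict here is a hypothetical stand-in for what `yaml.safe_load()` would return from one of the project's logging config YAML files (PyYAML assumed); the logger name is illustrative.

```python
import logging
import logging.config

# Hypothetical config dict; in the project this would be produced by
# yaml.safe_load() on a logging config YAML file (PyYAML).
LOGGING_CONFIG = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "simple": {"format": "%(asctime)s %(name)s %(levelname)s %(message)s"},
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "simple",
            "level": "DEBUG",
        },
    },
    "loggers": {
        "parsing": {"handlers": ["console"], "level": "DEBUG", "propagate": False},
    },
}

# dictConfig replaces hand-written logger setup in one call.
logging.config.dictConfig(LOGGING_CONFIG)
logger = logging.getLogger("parsing")
logger.debug("logger configured from dict")
```

This keeps the custom-logger code out of the parsing module entirely; commands like makeschool only need the `dictConfig` call at startup.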
Minor API changes to parsing module
-> pass year_and_terms_filter dictionary to start method of parse
-> Load sensitive.py information inside
-> moved and refactored much of extractor into utils
-> Parsing command arguments use composable actions
-> Iterated on parsing inline documentation
-> removed yet iterated on extract_info
(partially) added uoft to parsing infrastructure
Added sensitive.py to .gitignore
fixed absolute file import errors to pass doc build
++ many more things that I don't want to enumerate :-)
Let's not bother with manual releasing; this is already implemented in https://github.com/Codearte/gradle-nexus-staging-plugin/.
composer create-project is broken!
Updating to version 1.5.1 (stable channel) produces:
cd /home/travis/build/SemanticMediaWiki/SemanticMediaWiki/../mw/extensions
composer create-project mediawiki/semantic-media-wiki SemanticMediaWiki dev-master -s dev --prefer-dist --no-dev
No composer.json in current directory, do you want to use the one at /home/travis/build/SemanticMediaWiki/mw? [Y,n]?
Getting this popup in Zendesk
Starting August 21, Zendesk is removing Non-Secure Help Center Domains, SLAs v1 in Zendesk Support, and Plain Text Editor in Zendesk Support. You are affected by one or more of these features.
This should build with Rust stable, beta and nightly.
As part of this ticket, Cargo.toml should be updated with Travis-CI badges.
Basically, we need to replicate https://github.com/gotham-rs/gotham/pull/21 here.
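A sketch of the Travis configuration that would cover all three channels, mirroring the linked gotham PR's approach (the exact contents are an assumption; allowing nightly to fail is optional):

```yaml
language: rust
rust:
  - stable
  - beta
  - nightly
matrix:
  allow_failures:
    - rust: nightly
```

The badge itself would go in Cargo.toml's `[badges]` table as a `travis-ci` entry, though the repository slug to use isn't given here.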
@ata2001 has pointed out two things:
* Right now, we ignore the max-characters-per-line limit (for legacy reasons; I couldn't convert it with autopep8. Is there another tool to do it, or did I use it wrong?)
* Python developers have reversed their position on operators at the end of line breaks. It used to be the recommended style, now they recommend the reverse. Our autopep8/Travis CI still suggests the reverse (do we have an old version on Travis CI? maybe we could use another tool instead?)
What does everyone think, would it be OK for you if we did the max. characters in one line enforcement, and tried to put all operators at the beginning of the next line?
blocks and rawgit are good resources for submitting debugger examples that can easily be debugged.
We should have docs for the best process and point to them in our ISSUE_TEMPLATE.md
I was working with a dev trying to create a custom command with confirmation. He set “confirmation=True” in the command registration but then the command failed with “<operation> received unexpected keyword argument ‘yes’.”
The confirmation mechanism is built into the command execution template, so it should never be required (or desired) within a custom command. Instead of being a kwargs.get operation, it should be a kwargs.pop operation. If the parameter should be accessible within the command, we should probably have a less vague name than "yes".
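A minimal sketch of the difference; `execute_command` and `custom_command` are hypothetical names standing in for the real framework, not its actual API:

```python
# Hypothetical sketch of the command-execution wrapper. The framework
# injects a "yes" kwarg for confirmation; kwargs.pop() consumes it so it
# is never forwarded to the user's command function.
def execute_command(command, **kwargs):
    confirmed = kwargs.pop("yes", False)  # pop, not get: remove before forwarding
    if not confirmed:
        return "aborted"
    return command(**kwargs)

def custom_command(name):
    return f"ran {name}"

# With kwargs.get() the "yes" key would remain in kwargs, and custom_command
# would fail with: TypeError: ... unexpected keyword argument 'yes'.
result = execute_command(custom_command, yes=True, name="deploy")
```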
<!-- Provide a general summary of your changes in the Title above -->
<!-- Describe your changes in detail -->
<!-- Why is this change required? What problem does it solve? --> <!-- If it fixes an open issue, please link to the issue here. -->
This change adds the required methods to allow plugins to register migrations with Guice, which are executed (once) like migrations from the core.
<!-- Please describe in detail how you tested your changes. --> <!-- Include details of your testing environment, and the tests you ran to --> <!-- see how your change affects other areas of the code, etc. -->
<!-- What types of changes does your code introduce? Put an x in all the boxes that apply: -->
- [ ] Bug fix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)
<!-- Go over all the following points, and put an x in all the boxes that apply. -->
<!-- If you’re unsure about any of these, don’t hesitate to ask. We’re here to help! -->
- [x] My code follows the code style of this project.
- [ ] My change requires a change to the documentation.
- [ ] I have updated the documentation accordingly.
- [ ] I have read the CONTRIBUTING document.
- [ ] I have added tests to cover my changes.
- [x] All new and existing tests passed.
From their About page:
MicroNow is a digital platform that aggregates and curates the microbial sciences. A trusted, reliable resource with vetted information in real time. MicroNow distributes the most up-to-date news, journal articles, preprints, policy updates and other content relating to the microbial sciences. The site pulls relevant preprints from BioRxiv, a free online archive and distribution service for unpublished preprints in the life sciences.
Additionally, MicroNow houses themed communities, allowing users to connect with others of similar interests globally. Experts and users spark discussions relating to each theme. Importantly, MicroNow allows users to build a personal, professional profile, and publish articles, blogs and opinions on the site to enhance their profile over time.
As an extension of the American Society for Microbiology, MicroNow’s ultimate goal is to advance the microbial sciences. MicroNow will address the looming challenges of digital data overload, by leveraging and building upon the American Society for Microbiology’s intellectual resources, to curate and aggregate the most relevant and compelling information.
“The Chan Zuckerberg Initiative has acquired Meta to help bring its technologies to the entire scientific community.”
@r4j4h would you please take this one?
See also discussion in #1.
/dist. Assume node/npm/webpack are installed.
/HACKING.md or some such.
For now, please do not minify, and please use non-minified libraries. I would like to introduce minification later, probably in v1.0 or v1.1.
After some more testing based on this, I am planning to manually update /dist whenever I update the version on the Chrome Web Store. That way we won't have to manually skip-worktree new files in /dist, but the actual running source will still be available to browse online.
I want to run a perf try-job with a catapult change, but this fails:
$ cd $SRC/third_party/catapult
[... make my changes ...]
$ git cl upload
[ ... ]
Upload server: https://codereview.chromium.org (change with -s/--server)
Issue created. URL: https://codereview.chromium.org/2996023002 (patchset: 1)
Uploading base file for telemetry/telemetry/page/shared_page_state.py
$ $SRC/tools/perf/run_benchmark try linux loading.desktop.network_service --pageset-repeat=10 --story-filter=Mercadolivre --repo_path=$SRC/third_party/catapult
(WARNING) 2017-08-16 11:29:33,439 trybot_command.IsBenchmarkDisabledOnTrybotPlatform:321 Benchmark loading.desktop.network_service has ShouldDisable() method defined. If your trybot run does not produce any results, it is possible that the benchmark is disabled on the target trybot platform.
(ERROR) Perf Try Job: Failed to get branch.fix_cache_temperature.remote from git config.
Am I missing something, or doing something wrong?
There are many advantages to moving towards a JS-run bundler:
- Easy integration of new ES6 JS features (modules, const, let, arrow functions, etc.)
- Easy integration of JS unit and integration tests
- A more active community to solve problems or issues
- Easy splitting of bundles, for instance per page, if the current JS bundle (application.js) grows considerably
- and many more…
Related to https://github.com/datatogether/archivertools/issues/5#issuecomment-320113956
I’m not able to see the linked issue, it’s giving me a 404. Is there text beyond the quote that I’m not seeing?
The “Probot: Workflow” plugin could be enabled across the GitHub org, and dropping a specific custom .probot.js file could:
Alternatively, we could decide not to disable issue queues on existing repos.
So that the AWS Batch Terraform module is reusable both internally and by others outside Wellcome, we should move it into its own repo and consume it as an external module.
In addition it should be switched to use the new CloudFormation provisioning for AWS Batch.
A (private) repo currently exists here:
When work begins it should be made public.
Why is this try/except block needed to report errors in the code that it contains? It seems wrong.
Ideally we would fix how these commands are being defined so they don't fail silently. If that's not possible, then all of our commands need a try/except block.
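If every command really does need its own error handling, one alternative to hand-writing a try/except in each command body is a shared decorator that reports and re-raises. This is a hypothetical sketch, not the project's actual command machinery:

```python
import functools
import logging

logger = logging.getLogger(__name__)

# Hypothetical sketch: a shared decorator reports the error and re-raises,
# so failures are never swallowed silently and commands stay uncluttered.
def report_errors(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception:
            logger.exception("command %s failed", func.__name__)
            raise  # re-raise so the failure is not silent
    return wrapper

@report_errors
def flaky_command():
    raise ValueError("boom")
```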
Easier to understand and maintain
https://github.com/opendatakit/briefcase/blob/master/build.gradle#L36 notes that instead of compile, the command should be testCompile, and yet if testCompile is used, it fails as shown below.
This is not desirable because we care about binary size and we don’t want to bundle junit as part of the jar if we don’t have to.
:generateBuildConfig UP-TO-DATE
:compileBuildConfig UP-TO-DATE
:compileJava
warning: [options] bootstrap class path not set in conjunction with -source 1.7
/Users/yanokwa/odk-briefcase/src/test/java/org/opendatakit/briefcase/model/BriefcaseAnalyticsTest.java:3: error: package org.junit does not exist
import org.junit.Test;
^
/Users/yanokwa/odk-briefcase/src/test/java/org/opendatakit/briefcase/model/BriefcaseAnalyticsTest.java:5: error: package org.junit does not exist
import static org.junit.Assert.assertNotNull;
^
/Users/yanokwa/odk-briefcase/src/test/java/org/opendatakit/briefcase/model/BriefcaseAnalyticsTest.java:5: error: static import only from classes and interfaces
import static org.junit.Assert.assertNotNull;
^
/Users/yanokwa/odk-briefcase/src/test/java/org/opendatakit/briefcase/model/BriefcaseAnalyticsTest.java:8: error: cannot find symbol
@Test
^
symbol: class Test
location: class BriefcaseAnalyticsTest
/Users/yanokwa/odk-briefcase/src/test/java/org/opendatakit/briefcase/model/BriefcaseAnalyticsTest.java:11: error: cannot find symbol
assertNotNull(ba);
^
symbol: method assertNotNull(BriefcaseAnalytics)
location: class BriefcaseAnalyticsTest
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
5 errors
1 warning
:compileJava FAILED
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':compileJava'.
> Compilation failed; see the compiler error output for details.
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.
BUILD FAILED
We have received a suggestion that we provide some IDE template projects for people to use with PSL.
I am not too keen on this idea, since it will mean a lot of maintenance across different PSL and IDE versions. However, we still probably need more guidance on starting PSL projects. We currently tell people to copy an example and start from there. We used to use Maven archetypes, but this was pretty messy and confusing for users starting projects.
e.g. as per https://github.com/mapbox/workshops/tree/gh-pages/wikimania-mapathon-2017
Currently Telepresence lives in datawireio/telepresence when RPMs and debs are uploaded, which is technically fine, but as we build additional Datawire tool packages for Linux, following that pattern means we will end up with datawireio/kubernaut among others. This means people need to continually add and install new repository files for each project, which is unnecessary and slows down update processes, as caches need to be pulled from all the different repositories.
We should consolidate everything under datawireio/stable and mirror to datawireio/telepresence for backward compatibility.
We currently only support up to Fedora 25. Unfortunately, the change is a little more involved than just adding 26 to the respective package build and upload code, as the alanfranz/fwd-* Docker images have been superseded by
I’ve already made this change in Kubernaut. It just needs to be ported over to here.
For #946, we disabled a large number of tests.
This needs environment variables set and an EJB client properties file in the config folder.
This has been requested: https://mail.google.com/mail/u/0/#inbox/15de6724651ce2ad
pytest-html recently switched over to using Build Stages in pytest-dev/pytest-html#127. Consider if this would be worthwhile for us.
We need to update our installer to our changed situation regarding libs and dependencies. I’ve created a new advanced installer project from scratch. Please check it out on my new branch (https://github.com/flokuep/Vocaluxe/tree/installer).
We need to think about
What needs to be done
Edit: It seems that the installer currently doesn't work properly… I will investigate what happens later this week; for now it's too late.
Following the discussion in the chat: we currently use a development model where we push everything to master, keep that semi-stable, and then use feature/bugfix branches for everything.
I propose we use gitflow, which we already follow for 80%: make a develop branch and make that the default branch. The documentation for gitflow is http://nvie.com/posts/a-successful-git-branching-model/
The changes from the current process:
The major part of development, the PRs, would still be the same. We should also decide what the release cycle for pmbootstrap should be.
One way would be a fully rolling release with an unstable branch (develop) and a stable branch (master), plus longer-supported versions, which are the tags. The Linux kernel basically follows a similar release cycle with its weekly RC releases.
The basic rules for gitflow:
A hotfix/[name] branch is made from master instead of develop, and it is merged back to both master and develop.
from @altavir https://github.com/twosigma/beakerx/issues/5836#issuecomment-322266351:
I just tried to use Grape itself in the Groovy kernel and it does not work, which is strange. @Grab(group='commons-io', module='commons-io', version='2.5') produces an error:
It worked in the old beaker.
Fix suggested by @jpallas:
I don’t know much about the old beaker, so I couldn’t say why Grape worked in it. The BeakerX Groovy kernel is built using the groovy-all artifact, which has Ivy marked as an optional dependency, and the current kernel build does not explicitly add an Ivy dependency. So the failure above would be expected.
I usually spend the first 20 minutes of a hacknight intro explaining the history of EDGI / Guerrilla Archiving. It always seems to help ground the later specifics of projects.
As it stands, this quick fly-through only seems possible when an EDGI member is available in-person. I wonder whether we might benefit from having some visual timeline for new people (or media) to cruise.
cc: @kgunette @dcwalk
Dev Test Prod
Dev - DONE
Test - DONE
Prod
Moving dev stuff from #1380 for more targeted work…
We either need to finish #980 – which still needs some work, possibly on the antwar level – or port the site to a more stable build tool like phenomic or gatsby. Porting the site would also offer a few things, like full HMR, that #980 doesn’t address but will require some serious thought. Either way these are the infrastructure issues we need to solve:
<title> issues (e.g. #720).
The last one refers to outsourcing much of the custom dev-related stuff to more modular packages as we’ve done with the voting app and extracted banner (though neither are actually being consumed yet).
Other Dev-Related Issues
/content structure with less configuration.
cc @montogeek @probablyup @geoffdavis92 @MoOx @bebraw @hulkish. If I’ve missed anyone please let me know, but I’ve tried to include everyone who has shown interest in site dev or has previously worked on site dev regularly.
Typing an RG or an NSG that doesn't exist will not show any error. As a user I think this is confusing.
az network nsg show --resource-group myResourceGroupxxxxx --name myNetworkSecurityGroupxxxx
Probably adding an error message would be more helpful.
Install Method: CLI / Linux
CLI Version: What version of the CLI and modules are installed? (Use az --version):
azure-cli (2.0.13)
acr (2.0.10)
acs (2.0.12)
appservice (0.1.12)
batch (3.1.1)
billing (0.1.3)
cdn (0.0.6)
cloud (2.0.7)
cognitiveservices (0.1.6)
command-modules-nspkg (2.0.1)
component (2.0.7)
configure (2.0.10)
consumption (0.1.3)
container (0.1.8)
core (2.0.13)
cosmosdb (0.1.11)
dla (0.0.10)
dls (0.0.12)
eventgrid (0.1.1)
feedback (2.0.6)
find (0.2.6)
interactive (0.3.7)
iot (0.1.10)
keyvault (2.0.8)
lab (0.0.9)
monitor (0.0.8)
network (2.0.12)
nspkg (3.0.1)
profile (2.0.10)
rdbms (0.0.5)
redis (0.2.7)
resource (2.0.12)
role (2.0.10)
sf (1.0.6)
sql (2.0.9)
storage (2.0.12)
vm (2.0.12)
Python (Linux) 2.7.12 (default, Nov 19 2016, 06:48:10) [GCC 5.4.0 20160609]
OS Version: What OS and version are you using?
```
cat /etc/lsb-release
DISTRIB_ID=LinuxMint
DISTRIB_RELEASE=18.1
DISTRIB_CODENAME=serena
DISTRIB_DESCRIPTION="Linux Mint 18.1 Serena"
```
Shell Type: What shell are you using? (e.g. bash, cmd.exe, Bash on Windows) :
Design a system to profile the “goodness” of a parse. In its current idealization, the Profiler will be a set of metrics (potentially visual charts as well).
Some ideas:
- [ ] Sections per unique course
- [ ] Unique professors
- [ ] Distribution of 12hr and 24hr time ranges
- [ ] Splitting up name into first and last
- [ ] Unique buildings
- [ ] Lists are not built up as the parse progresses (i.e. failure to clear information)
The general principle is that the more detailed information we can get, the better we can harness information to develop features and affect the product at its core!
This Profiler should be subclassed from parsing.library.viewer.Viewer and be able to iteratively add on more metrics as more are developed.
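A rough sketch of what such a Profiler might look like. The real parsing.library.viewer.Viewer isn't shown in this issue, so a stub stands in for it, and the metric registration API and names here are assumptions:

```python
# Hypothetical stand-in for parsing.library.viewer.Viewer.
class Viewer:
    def receive(self, obj):
        raise NotImplementedError

class Profiler(Viewer):
    """Accumulates parse 'goodness' metrics, registered iteratively."""

    def __init__(self):
        self.metrics = {}   # name -> update function
        self.results = {}   # name -> accumulated value

    def register_metric(self, name, update, initial=0):
        self.metrics[name] = update
        self.results[name] = initial

    def receive(self, obj):
        # Each parsed object flows through every registered metric.
        for name, update in self.metrics.items():
            self.results[name] = update(self.results[name], obj)

profiler = Profiler()
# e.g. tracking unique courses, one of the ideas listed above
profiler.register_metric(
    "unique_courses", lambda acc, obj: acc | {obj.get("course")}, initial=set()
)
profiler.receive({"course": "CSC108"})
profiler.receive({"course": "CSC108"})
profiler.receive({"course": "MAT137"})
```

New metrics (time-range distributions, name splitting, etc.) would then be additional `register_metric` calls rather than changes to the class.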
Similar but not the same as #552
A few changes are required in the style guide due to the auto-formatting support added by @Others.
@Others do you remember any other changes that require style guide updates?
Setting CMAKE_INSTALL_PREFIX is very bad practice.
Especially since it sets it to a path relative to the build directory, it breaks most packaging systems entirely, since the result of make DESTDIR=<dir> install starts to depend on the absolute path of the build directory.
Removing this shouldn’t break (sensible) workflows at all.
2xx MB currently.
master and releasing on pypy.
“WITNESS makes it possible for anyone, anywhere to use video and technology to protect and defend human rights.”
https://github.com/cxong/cdogs-sdl/tree/master/build/linux has 48px max which isn’t big enough and therefore won’t get displayed on modern Linux menus.
Basic placeholder folder structure
It would be awesome if, with each finished commit, Travis could look at the Strings folders (client and server) and re-activate any tracking-issue bugs for files that match the pattern : englishSubstitutesForNotYetTranslated (note the colon; otherwise it might match the comment at the top of the file).
The following table (also regex-able) contains the list of files to translate: https://github.com/OfficeDev/script-lab/blob/master/TRANSLATING.md/#incremental
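The colon-sensitive match can be sketched as a small regex check. The helper name is hypothetical; the real Travis job would apply it to each file listed in TRANSLATING.md:

```python
import re

# The colon-prefixed pattern from the issue: matching the bare word would
# also hit the explanatory comment at the top of each strings file.
PATTERN = re.compile(r":\s*englishSubstitutesForNotYetTranslated")

def needs_reactivation(file_text):
    """Return True if the file still contains untranslated placeholders."""
    return bool(PATTERN.search(file_text))
```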
Once #104, #105, #106 are done this should be relatively straightforward
Slightly more finicky in C++, because it'd be nice to auto-register all nodes and not need to call into some sort of initialization routine.
We can do this with some cleverness around static initializers though.
<!-- BUGS: Please use this template. --> <!-- QUESTIONS: This is not a general support forum! Ask Qs at http://stackoverflow.com/questions/tagged/typescript --> <!-- SUGGESTIONS: See https://github.com/Microsoft/TypeScript-wiki/blob/master/Writing-Good-Design-Proposals.md -->
TypeScript Version: master, commit f64b8ad902087
We currently implement the ability to serialize types into “text” in two ways:
1) Generate an AST for the type and emit: this was added to support writing quickfixes in terms of manipulating ASTs in #14709 and #15790.
2) SymbolDisplayBuilder: this is used for quickinfo requests, where we need to annotate text with a kind so the editor can colorize the quickinfo.
(De-duplicating printing from ASTs via the emitter and declaration emitter is not part of this issue.)
We could remove the second of these if we could emit display parts with the emitter, which involves making the emitter work with a DisplayPartsSymbolWriter, possibly extending it to include write events for each SymbolDisplayPartKind, and cleaning up the emitter to consistently call the correct writeX methods on the writer.
Besides numeric units (#104), we’ll need Color and Range. And potentially others.
Roughly: template ranged-value class with automatic conversion to a canonicalized representation.
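As a language-neutral illustration (sketched in Python rather than a C++ template, with assumed bounds and wrap-vs-clamp policy), a ranged value that canonicalizes on construction might look like:

```python
from dataclasses import dataclass

# Illustrative sketch only: the bounds and the wrap/clamp policy here are
# assumptions, not the project's actual design.
@dataclass(frozen=True)
class Ranged:
    value: float
    lo: float = 0.0
    hi: float = 1.0
    wrap: bool = False  # wrap (e.g. hue angle) vs clamp (e.g. opacity)

    def __post_init__(self):
        span = self.hi - self.lo
        v = self.value
        if self.wrap:
            v = self.lo + (v - self.lo) % span  # canonicalize by wrapping
        else:
            v = max(self.lo, min(self.hi, v))   # canonicalize by clamping
        object.__setattr__(self, "value", v)

hue = Ranged(370.0, lo=0.0, hi=360.0, wrap=True)  # wraps past 360
opacity = Ranged(1.5)                             # clamps into [0, 1]
```

A Color type could then be a tuple of such ranged components, each with its own canonical range.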
Validated going into VSO; no build was kicked off, so PRs are stuck in a pending state.
Does this also affect Dev?
Is this a useful amendment? I’m happy to fix this up with suggestions, too, as I intend to use it for my daily development.
Carried over from #852.
E.g. we should always test/mount STL before third-party libraries. Currently we do it in whatever order the includes happen to come in.
This is the case where the conductor fails during execution due to unexpected issues (a system crash, or file permission errors from unexpected file system problems). When the conductor crashes, the user currently must manually restart the study by pointing the conductor to the pkl file in the output directory. If this is done soon enough after a crash, the scheduler most likely still contains job states and the conductor can resume.
However, in the usual case, the conductor crashes without the user's knowledge. The scheduler usually loses the state of the jobs by the time the conductor is restarted, meaning that the ExecutionGraph doesn't recover because it cannot find the states of jobs that were previously running. The best case is that the jobs are long running and are still being managed by the scheduler. The worst case is that the jobs have finished and their state is no longer kept.
The ExecutionGraph needs a more graceful way to handle these conditions. There are a couple of options here:
* If a step in the graph is detected to have been running and the job isn't detected, restart the step from scratch.
* If a step was pending, restart it from scratch.
* If a step had failed, either consider it failed (make sure all dependent steps are marked as such) OR attempt to restart the step from scratch.
* Otherwise, treat the step normally.
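The options above can be sketched as a small decision function. The state names and the scheduler-query flag are assumptions for illustration, not the real ExecutionGraph API:

```python
# Hypothetical sketch of the recovery policy described above.
def recovery_action(recorded_state, job_known_to_scheduler):
    if recorded_state == "RUNNING" and not job_known_to_scheduler:
        return "RESTART"      # was running, job lost: restart from scratch
    if recorded_state == "PENDING":
        return "RESTART"      # pending steps restart from scratch
    if recorded_state == "FAILED":
        return "MARK_FAILED"  # or "RESTART", depending on the chosen policy
    return "RESUME"           # otherwise treat the step normally
```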
Currently, Zoom calls must be manually added to the EDGI organizational calendar. This is prone to being forgotten about, for many valid reasons.
There was some conversation around having halpy's call-creation command (zoom me) auto-add things to the calendar, but that's a little messy.
There might be a more unixy approach: write a tiny Heroku app that uses the Zoom API key to rewrite event listings into an ical feed. That feed could then be rendered as an HTML Google calendar, or passed to one of these general ical hubot scripts: https://github.com/edgi-govdata-archiving/edgi-hubot/issues/10#issuecomment-299395603
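A minimal sketch of the feed-generation half of that idea. The Zoom API fetch itself is omitted, and the meeting field names here are assumptions about what the API would return:

```python
from datetime import datetime, timedelta

# Hypothetical meeting dicts as they might come back from the Zoom API
# (field names assumed); the HTTP fetch is omitted.
def to_ical(meetings):
    lines = ["BEGIN:VCALENDAR", "VERSION:2.0", "PRODID:-//edgi//zoom-feed//EN"]
    for m in meetings:
        start = m["start"]
        end = start + timedelta(minutes=m["duration_minutes"])
        fmt = "%Y%m%dT%H%M%SZ"
        lines += [
            "BEGIN:VEVENT",
            f"UID:{m['id']}@zoom",
            f"DTSTART:{start.strftime(fmt)}",
            f"DTEND:{end.strftime(fmt)}",
            f"SUMMARY:{m['topic']}",
            "END:VEVENT",
        ]
    lines.append("END:VCALENDAR")
    return "\r\n".join(lines)  # iCalendar (RFC 5545) uses CRLF line endings

feed = to_ical([{
    "id": "123", "topic": "EDGI call",
    "start": datetime(2017, 8, 15, 18, 0), "duration_minutes": 60,
}])
```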
Slack context: https://edgi.slack.com/archives/C4H23J40J/p1502400481729122?thread_ts=1502381075.026913&cid=C4H23J40J
For ease of updating, I have the text in one place in a HackMD notepad: https://hackmd.io/s/BJWOLQBrZ
We are currently building tests but not yet running them.
Currently blocked by buildtools issues found in… https://github.com/dotnet/buildtools/pull/1627
Currently UAP variations have been disabled due to missing resources in PRI files, which is being fixed by dotnet/buildtools#1630; there, the issue results in a build failure. Once the fix is merged, we will need to update our Master branch to the buildtools version containing the fix and verify that our UAP tests pass.
@mconnew has more context on this issue.
Once it is verified fixed in Master, update the uwp6.0 branch.
Load the JSON blob description of a project and initialize all the relevant structures on the backend.
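A minimal sketch of the loading step, assuming a hypothetical schema since the real JSON blob format isn't specified in this issue:

```python
import json
from dataclasses import dataclass, field

# Hypothetical schema: the real project description format is not shown
# here, so these field names are assumptions.
@dataclass
class Project:
    name: str
    tasks: list = field(default_factory=list)

def load_project(blob):
    data = json.loads(blob)
    return Project(name=data["name"], tasks=data.get("tasks", []))

project = load_project('{"name": "demo", "tasks": ["ingest", "train"]}')
```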
Presently, we define PEGASUS_REPORTING_DB in various places throughout our codebase. Define them with PEGASUS_DB, with another PR converting our codebase to use these.
Instead of just going through all supported libraries; this should speed up configuration.
Currently the IISHosted variation is enabled, with roughly 30% of the UWP ILC tests run in VSTS failing with the following issue…
System.MissingMethodException : Method 'HttpBaseProtocolFilter.add_ServerCustomValidationRequested(TypedEventHandler<HttpBaseProtocolFilter, HttpServerCustomValidationRequestedEventArgs>)' was not included in compilation, but was referenced in HttpClientHandler.SetOperationStarted(). There may have been a missing assembly. Stack Trace: E:\A\_work\499\s\corefx\src\System.Net.Http\src\uap\System\Net\HttpClientHandler.cs(584,0): at System.Net.Http.HttpClientHandler.<SendAsync>d__109.MoveNext() --- End of stack trace from previous location where exception was thrown --- f:\dd\ndp\fxcore\CoreRT\src\System.Private.CoreLib\src\System\Runtime\ExceptionServices\ExceptionDispatchInfo.cs(61,0): at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() f:\dd\ndp\fxcore\CoreRT\src\System.Private.CoreLib\src\System\Runtime\CompilerServices\TaskAwaiter.cs(178,0): at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) f:\dd\ndp\fxcore\CoreRT\src\System.Private.CoreLib\src\System\Runtime\CompilerServices\TaskAwaiter.cs(147,0): at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) f:\dd\ndp\fxcore\CoreRT\src\System.Private.CoreLib\src\System\Runtime\CompilerServices\TaskAwaiter.cs(304,0): at System.Runtime.CompilerServices.ConfiguredTaskAwaitable$1<System.__Canon>.ConfiguredTaskAwaiter.GetResult() E:\A\_work\499\s\corefx\src\System.Net.Http\src\System\Net\Http\HttpClient.cs(489,0): at System.Net.Http.HttpClient.<FinishSendAsyncUnbuffered>d__59.MoveNext$catch$0() --- End of stack trace from previous location where exception was thrown --- f:\dd\ndp\fxcore\CoreRT\src\System.Private.CoreLib\src\System\Runtime\ExceptionServices\ExceptionDispatchInfo.cs(61,0): at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() f:\dd\ndp\fxcore\CoreRT\src\System.Private.CoreLib\src\System\Runtime\CompilerServices\TaskAwaiter.cs(178,0): at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) 
f:\dd\ndp\fxcore\CoreRT\src\System.Private.CoreLib\src\System\Runtime\CompilerServices\TaskAwaiter.cs(147,0): at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) f:\dd\ndp\fxcore\CoreRT\src\System.Private.CoreLib\src\System\Runtime\CompilerServices\TaskAwaiter.cs(304,0): at System.Runtime.CompilerServices.ConfiguredTaskAwaitable$1<System.__Canon>.ConfiguredTaskAwaiter.GetResult() at System.ServiceModel.Channels.HttpChannelFactory$1<System.__Canon>.HttpClientRequestChannel.HttpClientChannelAsyncRequest.<SendRequestAsync>d__13.MoveNext() --- End of stack trace from previous location where exception was thrown --- f:\dd\ndp\fxcore\CoreRT\src\System.Private.CoreLib\src\System\Runtime\ExceptionServices\ExceptionDispatchInfo.cs(61,0): at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() f:\dd\ndp\fxcore\CoreRT\src\System.Private.CoreLib\src\System\Runtime\CompilerServices\TaskAwaiter.cs(178,0): at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) f:\dd\ndp\fxcore\CoreRT\src\System.Private.CoreLib\src\System\Runtime\CompilerServices\TaskAwaiter.cs(147,0): at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at System.ServiceModel.Channels.RequestChannel.<RequestAsync>d__33.MoveNext() --- End of stack trace from previous location where exception was thrown --- f:\dd\ndp\fxcore\CoreRT\src\System.Private.CoreLib\src\System\Runtime\ExceptionServices\ExceptionDispatchInfo.cs(61,0): at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() f:\dd\ndp\fxcore\CoreRT\src\System.Private.CoreLib\src\System\Runtime\CompilerServices\TaskAwaiter.cs(178,0): at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) f:\dd\ndp\fxcore\CoreRT\src\System.Private.CoreLib\src\System\Runtime\CompilerServices\TaskAwaiter.cs(147,0): at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) 
f:\dd\ndp\fxcore\CoreRT\src\System.Private.CoreLib\src\System\Runtime\CompilerServices\TaskAwaiter.cs(304,0): at System.Runtime.CompilerServices.ConfiguredTaskAwaitable$1<System.__Canon>.ConfiguredTaskAwaiter.GetResult() at System.ServiceModel.Channels.RequestChannel.<RequestAsyncInternal>d__32.MoveNext() --- End of stack trace from previous location where exception was thrown --- f:\dd\ndp\fxcore\CoreRT\src\System.Private.CoreLib\src\System\Runtime\ExceptionServices\ExceptionDispatchInfo.cs(61,0): at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() f:\dd\ndp\fxcore\CoreRT\src\System.Private.CoreLib\src\System\Runtime\CompilerServices\TaskAwaiter.cs(178,0): at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) f:\dd\ndp\fxcore\CoreRT\src\System.Private.CoreLib\src\System\Runtime\CompilerServices\TaskAwaiter.cs(147,0): at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at System.Runtime.TaskHelpers.WaitForCompletionNoSpin<System.__Canon>(Task$1<__Canon> task) at System.ServiceModel.Channels.RequestChannel.Request($Message message) E:\A\_work\353\s\wcf\src\System.Private.ServiceModel\tests\Scenarios\Client\ChannelLayer\RequestReplyChannelShapeTests.4.0.0.cs(105,0): at RequestReplyChannelShapeTests.IRequestChannel_Http_CustomBinding() at _$ILCT$.$ILT$ReflectionDynamicInvoke$.InvokeRetV(Object thisPtr, IntPtr methodToCall, ArgSetupState argSetupState, Boolean targetIsThisCall) at System.InvokeUtils.CalliIntrinsics.Call(IntPtr dynamicInvokeHelperMethod, Object thisPtrForDynamicInvokeHelperMethod, Object thisPtr, IntPtr methodToCall, ArgSetupState argSetupState) f:\dd\ndp\fxcore\CoreRT\src\System.Private.CoreLib\src\System\InvokeUtils.cs(400,0): at System.InvokeUtils.CallDynamicInvokeMethod(Object thisPtr, IntPtr methodToCall, Object thisPtrDynamicInvokeMethod, IntPtr dynamicInvokeHelperMethod, IntPtr dynamicInvokeHelperGenericDictionary, Object targetMethodOrDelegate, Object parameters, 
BinderBundle binderBundle, Boolean invokeMethodHelperIsThisCall, Boolean methodToCallIsThisCall)
In order for the site’s gems not to fall behind and get outdated, we need to unlock their versions at some point before MVP release.
In issue #24 the domain transfer to a new registrar and host is nearly complete.
Once complete, we should deploy the site to oslc.co instead of its current home at oslc.github.io.