waiting for review result
OWIN is currently required to use the old SignalR, so we are stuck with a Windows-only app for now. Once the project is upgraded to .NET Core 2.0 and SignalR is updated to the Core version, this OWIN dependency can be removed and we can build on Linux using Travis.
This is another stab at caching parsing operations. This comes at the expense of memory, but YOLO.
This cannot be made opt-out, so the patch is just an example: it should only be handled when building source locators via dependency injection.
The results are as follows:
```
Time: 19.22 seconds, Memory: 70.00MB
OK, but incomplete, skipped, or risky tests!
Tests: 1658, Assertions: 29353, Skipped: 7.

real    0m19.263s
user    0m19.104s
sys     0m0.144s
```

```
Time: 17.56 seconds, Memory: 70.00MB
OK, but incomplete, skipped, or risky tests!
Tests: 1658, Assertions: 29353, Skipped: 7.

real    0m17.593s
user    0m17.436s
sys     0m0.136s
```
I’ll check parsing large class trees in a sec.
Before eFolder is enabled for Reader, the RetrieveDocumentsForReader batch job needs to be adjusted to limit the number of requests made by eFolder to VBMS.
The batch job should run nightly to:
1. Retrieve all active appeals assigned to Reader users (this will need to be limited in the future as well, when opened to 700 users).
2. Loop through the appeals.
3. For each appeal, retrieve the list of documents for the case from eFolder.
   - This request triggers eFolder to cache these documents from VBMS into S3.
Currently, the job has no way to tell if eFolder has to make requests to VBMS for the documents. It will not stop until all documents for all appeals have been retrieved.
This ticket may need a tech spec for how to implement the required changes to the eFolder API to return a field indicating whether an appeal still has to be fetched from VBMS or not.
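A rough sketch of the nightly job described above (the client class, method names, and the per-document `cached_in_s3` field are hypothetical, not the real eFolder API):

```python
# Hypothetical sketch of the nightly batch job. The eFolder client interface,
# method names, and the proposed `cached_in_s3` field are assumptions.
def warm_reader_document_cache(efolder, reader_appeals, max_vbms_requests=1000):
    vbms_requests = 0
    for appeal in reader_appeals:                           # loop through active appeals
        docs = efolder.list_documents(appeal_id=appeal.id)  # triggers caching into S3
        # Proposed API change: a flag telling us whether eFolder had to go
        # out to VBMS (i.e. the documents were not yet cached).
        if any(not doc.get("cached_in_s3", False) for doc in docs):
            vbms_requests += 1
        if vbms_requests >= max_vbms_requests:              # cap nightly load on VBMS
            break
    return vbms_requests
```

The cap gives the job a stopping condition, instead of running until every document for every appeal has been retrieved.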
Everything can be expressed via typed props, just as I had it in old Este. Relay Modern is going to replace the use cases for local state anyway. Meanwhile, we can persist app state manually. The song explains it.
It was limited to Flow 0.52. Wait for the update; the current code should work.
Needs testing, as this is a direct port of the PR made against staging.
Implement the API required to enable the dashboard to pull instance-level metrics from the GM-Fabric-JVM
This should follow the API pattern established by #318
The goal of this issue is to replicate all of the external dependencies that gm-fabric-jvm uses in the production environment in local docker-compose infrastructure on the developer's workstation.
This should include the following:
- Developing an NGINX / Gatekeeper container that runs the dashboard being developed. This should be integrated into the webpack build pipeline and maintain hot reloading.
- Dockerizing multiple fake GM-Fabric-JVM microservices using the mock-json Node.js app in the project.
- Using Node.js to mock out the API of the new microservice Josh is creating???
- Adding ZooKeeper
- ????
Note: This should be implemented separately from the docker-compose infrastructure that Josh built containing Envoy, Graphite, statsd, etc.
Add support (meaning: continuous integration testing) for Ubuntu 18.04.
This is blocked on Ubuntu 18.04 actually existing.
Pending deployment container work.
Decide on weekly meeting times for Fall 2017.
ACM needs to appoint one or more Club Safety Officers. Traditionally this is the same person as the President, but it is potentially a position we can offer to anyone in the club.
Due September 14, 2017.
This is the email received from SA:
```
Dear engineering club leaders,
With the start of the academic year approaching, your club may already be considering some of the exciting projects and activities you will be offering your membership. Due to the potential hazards of many hands-on club activities, each engineering student club is required to have at least one Club Safety Officer.
It is mandatory that each club select at least one Club Safety Officer for the year. Larger clubs should select two or three individuals to hold Club Safety Officer responsibilities. While your club’s Safety Officer(s) may be a member of your executive board, do consider offering this position to a non-elected member of your organization to more widely distribute opportunities for leadership and professional development.
Club Safety Officers play a critical role in the operations of any club that seeks to fabricate physical devices, tools, models, etc. They are expected to establish and maintain close working relationships with faculty advisors and UB Environment, Health & Safety (EH&S) officers to ensure that all club activities are pursued in safe and appropriate ways. Further, the Club Safety Officer plays an important role in the training and experiential learning of other club members. Holding the office of Club Safety Officer is a valuable addition to any student resume!
The responsibilities of a Club Safety Officer include:
- Attend mandatory EH&S training arranged through the Dean's Office
- Presence at all times when club members are building or carrying out any potentially hazardous activity (hence why larger clubs with many planned activities may need more than one officer)
- Promote safety awareness, ensure club members comply with UB safety policies
- Write Standard Operating Procedures where necessary (using chemicals or hazardous operations)
- Identify unsafe operations and practices, and act to stop/shut down operations if necessary
- Ensure all club members attend applicable training sessions (EHS, department, etc.) and maintain training records
- Work with EH&S to help identify risks and hazards associated with certain procedures; report problems/issues/incidents to faculty advisor and UB EH&S if necessary
- Assure that the proper personal protective equipment (PPE) is obtained based on club projects and ensure that PPE is used when working with hazardous materials
- Ensure that all students are aware of hazards and safe operating methods for club projects
- Ensure that students and advisors know how to get help if needed
If you have already selected one or more Club Safety Officers for your organization, please reply to this e-mail with the name and e-mail address of the individual(s).
If you are still in the process of appointing a club member to this role, please make your selections by September 14.
The mandatory safety training session and a secondary make-up date are currently being organized for weekdays in late September or early October. Information regarding exact date, time, and location of the training sessions will be sent to Club Safety Officers and club leaders in the coming weeks.
Please reach out if you have any questions or concerns. We look forward to seeing you in the fall!
Chelsea Montrois
Student Affairs Assistant
410 Bonner Hall
716-645-0958
engineering.buffalo.edu
```
It's disabled for now because it can't recognize the connection directive. Otherwise, it's super awesome, because it enables live linting and auto-completion within Atom.
It’s blocked by Next.js, which depends on React 15 internals.
See #1435 for previous discussion
Based on further understanding of the EDS date range/facet, I’m replacing the original UI design. Intent is to later add a histogram, parallel to catalog. Interaction details, labels etc. should be the same in both.
<img width="388" alt="screen shot 2017-08-17 at 12 42 02 pm" src="https://user-images.githubusercontent.com/6945691/29430650-061395ae-834a-11e7-8dde-40ef1430eb0d.png">
<img width="379" alt="screen shot 2017-08-17 at 12 40 33 pm" src="https://user-images.githubusercontent.com/6945691/29430661-15536828-834a-11e7-8093-f03987e8315f.png">
<img width="379" alt="screen shot 2017-08-17 at 12 47 24 pm" src="https://user-images.githubusercontent.com/6945691/29430700-418acb0c-834a-11e7-8fc8-081bc6cbff6d.png">
<img width="384" alt="screen shot 2017-08-17 at 12 39 28 pm" src="https://user-images.githubusercontent.com/6945691/29430710-4eaf2490-834a-11e7-8e32-50668fc1faf1.png">
Hey, I tried using the plugin by adding the compiled jar to the plugins folder. It shows up in Sonar no problem, and I just used the defaults; I didn't change any options. Is there anything you have to set to get this to work other than adding the plugin? When I run `gradle build sonarqube`, I get the following:
```
Execution failed for task ':sonarqube'.
ruleKey is mandatory on issue
full stack:* Exception is: org.gradle.api.tasks.TaskExecutionException: Execution failed for task ‘:sonarqube’. at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:100) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:70) at org.gradle.api.internal.tasks.execution.SkipUpToDateTaskExecuter.execute(SkipUpToDateTaskExecuter.java:64) at org.gradle.api.internal.tasks.execution.ResolveTaskOutputCachingStateExecuter.execute(ResolveTaskOutputCachingStateExecuter.java:54) at org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:58) at org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:88) at org.gradle.api.internal.tasks.execution.ResolveTaskArtifactStateTaskExecuter.execute(ResolveTaskArtifactStateTaskExecuter.java:52) at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:52) at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:54) at org.gradle.api.internal.tasks.execution.ExecuteAtMostOnceTaskExecuter.execute(ExecuteAtMostOnceTaskExecuter.java:43) at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:34) at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker$1.run(DefaultTaskGraphExecuter.java:242) at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:317) at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:309) at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:185) at 
org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:95) at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker.execute(DefaultTaskGraphExecuter.java:235) at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker.execute(DefaultTaskGraphExecuter.java:224) at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.processTask(DefaultTaskPlanExecutor.java:121) at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.access$200(DefaultTaskPlanExecutor.java:77) at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker$1.execute(DefaultTaskPlanExecutor.java:102) at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker$1.execute(DefaultTaskPlanExecutor.java:96) at org.gradle.execution.taskgraph.DefaultTaskExecutionPlan.execute(DefaultTaskExecutionPlan.java:612) at org.gradle.execution.taskgraph.DefaultTaskExecutionPlan.executeWithTask(DefaultTaskExecutionPlan.java:567) at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.run(DefaultTaskPlanExecutor.java:96) at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63) at org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:46) at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:55) Caused by: java.lang.NullPointerException: ruleKey is mandatory on issue at org.sonar.api.internal.google.common.base.Preconditions.checkNotNull(Preconditions.java:226) at org.sonar.api.batch.sensor.issue.internal.DefaultIssue.doSave(DefaultIssue.java:144) at org.sonar.api.batch.sensor.internal.DefaultStorable.save(DefaultStorable.java:43) at io.gitlab.arturbosch.detekt.sonar.sensor.DetektSensor$reportIssues$1.accept(DetektSensor.kt:57) at 
io.gitlab.arturbosch.detekt.sonar.sensor.DetektSensor$reportIssues$1.accept(DetektSensor.kt:21) at io.gitlab.arturbosch.detekt.sonar.sensor.DetektSensor.reportIssues(DetektSensor.kt:49) at io.gitlab.arturbosch.detekt.sonar.sensor.DetektSensor.execute(DetektSensor.kt:32) at org.sonar.scanner.sensor.SensorWrapper.analyse(SensorWrapper.java:53) at org.sonar.scanner.phases.SensorsExecutor.executeSensor(SensorsExecutor.java:57) at org.sonar.scanner.phases.SensorsExecutor.execute(SensorsExecutor.java:49) at org.sonar.scanner.phases.AbstractPhaseExecutor.execute(AbstractPhaseExecutor.java:78) at org.sonar.scanner.scan.ModuleScanContainer.doAfterStart(ModuleScanContainer.java:175) at org.sonar.core.platform.ComponentContainer.startComponents(ComponentContainer.java:143) at org.sonar.core.platform.ComponentContainer.execute(ComponentContainer.java:128) at org.sonar.scanner.scan.ProjectScanContainer.scan(ProjectScanContainer.java:262) at org.sonar.scanner.scan.ProjectScanContainer.scanRecursively(ProjectScanContainer.java:257) at org.sonar.scanner.scan.ProjectScanContainer.doAfterStart(ProjectScanContainer.java:247) at org.sonar.core.platform.ComponentContainer.startComponents(ComponentContainer.java:143) at org.sonar.core.platform.ComponentContainer.execute(ComponentContainer.java:128) at org.sonar.scanner.task.ScanTask.execute(ScanTask.java:47) at org.sonar.scanner.task.TaskContainer.doAfterStart(TaskContainer.java:86) at org.sonar.core.platform.ComponentContainer.startComponents(ComponentContainer.java:143) at org.sonar.core.platform.ComponentContainer.execute(ComponentContainer.java:128) at org.sonar.scanner.bootstrap.GlobalContainer.executeTask(GlobalContainer.java:118) at org.sonar.batch.bootstrapper.Batch.executeTask(Batch.java:117) at org.sonarsource.scanner.api.internal.batch.BatchIsolatedLauncher.execute(BatchIsolatedLauncher.java:63) at org.sonarsource.scanner.api.internal.IsolatedLauncherProxy.invoke(IsolatedLauncherProxy.java:60) at 
com.sun.proxy.$Proxy168.execute(Unknown Source) at org.sonarsource.scanner.api.EmbeddedScanner.doExecute(EmbeddedScanner.java:233) at org.sonarsource.scanner.api.EmbeddedScanner.runAnalysis(EmbeddedScanner.java:151) at org.sonarqube.gradle.SonarQubeTask.run(SonarQubeTask.java:99) at org.gradle.internal.reflect.JavaMethod.invoke(JavaMethod.java:73) at org.gradle.api.internal.project.taskfactory.DefaultTaskClassInfoStore$StandardTaskAction.doExecute(DefaultTaskClassInfoStore.java:141) at org.gradle.api.internal.project.taskfactory.DefaultTaskClassInfoStore$StandardTaskAction.execute(DefaultTaskClassInfoStore.java:134) at org.gradle.api.internal.project.taskfactory.DefaultTaskClassInfoStore$StandardTaskAction.execute(DefaultTaskClassInfoStore.java:121) at org.gradle.api.internal.AbstractTask$TaskActionWrapper.execute(AbstractTask.java:711) at org.gradle.api.internal.AbstractTask$TaskActionWrapper.execute(AbstractTask.java:694) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$1.run(ExecuteActionsTaskExecuter.java:122) at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:317) at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:309) at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:185) at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:95) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeAction(ExecuteActionsTaskExecuter.java:111) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:92) … 27 more ```
Now that the functions throw errors, the testing script needs to be rewritten to contain try/catch blocks.
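For instance (sketched in Python purely for illustration; the real script's language and function names will differ), each test call gets wrapped so that a thrown error is caught and reported instead of aborting the whole run:

```python
# Illustrative sketch only: `parse` is a stand-in for the real functions that
# now raise errors instead of returning them.
def run_test(name, fn, *args):
    try:
        fn(*args)
        return (name, "ok")
    except Exception as exc:        # catch the thrown error per test
        return (name, f"failed: {exc}")

def parse(s):
    if not s:
        raise ValueError("empty input")
    return s.split()
```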
Set up CI to run `exo template test` for each of the following repos:
- https://github.com/Originate/exoservice-go
- https://github.com/Originate/exoservice-js
- https://github.com/Originate/exoservice-js-mongodb
- https://github.com/Originate/exosphere-htmlserver-express
Blocked on 0.23.0.alpha.3 being released with this command
Blocked by #157
Added support for enabling/disabling includes per repository. Added functions for removing and resetting includes and excludes. Added functions for getting includes and excludes.
Expose all functions to Python.
Blocked by #149
I reworked a couple of things in util inspect for performance. This is based on #14880, as the bug fix will likely land earlier and it is independent of the performance changes. #14790 should also land before this one.
Please have a close look at the
EDIT: The code is fully covered. EDIT2: #14790 landed
```
util/inspect.js showHidden=0 method="boxed_string" n=200000    52.97 % *** 2.364127e-06
util/inspect.js showHidden=0 method="buffer" n=200000           6.74 %     5.193261e-02
util/inspect.js showHidden=0 method="date" n=200000             3.37 % *   3.455711e-02
util/inspect.js showHidden=0 method="empty_object" n=200000    23.81 % **  2.800295e-03
util/inspect.js showHidden=0 method="error" n=200000            0.86 %     4.565854e-01
util/inspect.js showHidden=0 method="object" n=200000         144.15 % *** 2.092473e-06
util/inspect.js showHidden=0 method="set" n=200000             54.05 % *** 8.493353e-07
util/inspect.js showHidden=0 method="string" n=200000          29.99 % **  1.564397e-03
util/inspect.js showHidden=1 method="boxed_string" n=200000    67.66 % *** 7.416857e-08
util/inspect.js showHidden=1 method="buffer" n=200000           6.31 % *   4.634528e-02
util/inspect.js showHidden=1 method="date" n=200000             7.48 % *** 2.100219e-05
util/inspect.js showHidden=1 method="empty_object" n=200000    16.49 % *** 5.941544e-05
util/inspect.js showHidden=1 method="error" n=200000           47.59 % *** 2.244601e-10
util/inspect.js showHidden=1 method="object" n=200000         119.68 % *** 3.111998e-06
util/inspect.js showHidden=1 method="set" n=200000             71.64 % *** 2.485425e-08
util/inspect.js showHidden=1 method="string" n=200000          37.75 % *** 1.472355e-07
util/inspect-array.js type="denseArray showHidden" len=100000 n=500  11.63 % *** 5.295949e-08
util/inspect-array.js type="denseArray" len=100000 n=500             79.84 % *** 1.625585e-23
util/inspect-array.js type="mixedArray" len=100000 n=500             10.48 % **  7.362620e-03
util/inspect-array.js type="sparseArray" len=100000 n=500             5.31 %     1.864732e-01
util/inspect-proxy.js n=100000 v=1                                   66.10 % *** 6.020332e-06
util/inspect-proxy.js n=100000 v=2                                   59.75 % *** 6.662180e-04
```
The following changes got in:
- Only use try/catch if necessary
- Use if/else to prevent checking for things when not necessary
- Prevent using `arguments` for legacy usage
- Use a lazily initiated set instead of an array to reduce complexity
- Prevent checking for RegExp and others twice
- Improve primitive detection
- Move boxed detection for the common case
- Optimize formatProperty code paths
- Use const if possible
- Do not reassign nextRecurseTimes
- Improve recurseTimes handling
- Use plain for loops instead of built-ins
- Use faster number detection (no RegExp)
- Do not concat symbols
- Inline key length check
- Add object fast path
- Only calculate str length if not definitely out of range
- Improve string format fast path
- Only check extra array keys if necessary
- Directly expose isRegExp and isDate instead of wrapping them

The circular reference check is a bit special for backwards compatibility reasons: as the `ctx` is passed to the customInspect function, it has to stay an array in that case. Internally, a set is always used.
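The "lazily initiated set" idea can be sketched like this (in Python for illustration; the real change is in Node's `lib/util.js`): the seen-set for circular-reference detection is only allocated once a container value is actually encountered, so primitives never pay for it.

```python
# Illustration of a lazily created seen-set for circular reference detection.
# A simplified inspect, not Node's actual implementation.
def inspect(value, seen=None):
    if not isinstance(value, (dict, list)):
        return repr(value)               # fast path: primitives need no set
    if seen is None:
        seen = set()                     # lazily created on the first container
    if id(value) in seen:
        return "[Circular]"
    seen.add(id(value))
    if isinstance(value, list):
        return "[" + ", ".join(inspect(v, seen) for v in value) + "]"
    return "{" + ", ".join(f"{k!r}: {inspect(v, seen)}" for k, v in value.items()) + "}"
```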
- `make -j4 test` (UNIX) or `vcbuild test` (Windows) passes
Affected core subsystems: util, benchmark
Once the Relay compiler works without Watchman. 1.2.0 was supposed to fix it, but still…
Then we can remove the generated files from the repo and generate them at build time instead.
Blocked by #8; somewhat blocked by needing to know which storage option (#10). (But perhaps we could start with Postgres or MySQL and change it later?)
One possibility is to set up honeybadger with a slack integration to dlss-infrastructure channel for -stage and -prod deploys ….
obviously, need VMs to deploy to first, so blocked by sul-dlss/preservation2017#35
Once https://github.com/DoSomething/gambit/issues/946 is in place, we’ll want to use the values from the Campaigns API instead of the virtual properties used for Campaign Menu / Continue messaging.
This may be split into multiple tickets: one to get the druids, and another to present them via a REST call?
Blocked by #2
Once there is a settings.yml file used by PCC (and perhaps once we have a VM? – See sul-dlss/preservation2017#34) we should start using shared_configs.
Note that we should also have the capistrano deploy (#2) grab the latest configs from the correct branch. This is done with a line or two in deploy.rb, I think … as Jessie’s gem for this is now part of dlss-capistrano.
blocked until we have VMs (sul-dlss/preservation2017#34 - how settled does our VM recipe need to be?)
blocked by sul-dlss/preservation2017/issues/32
Currently the dashboard begins polling a statically configured endpoint upon browser load. This feeds a single microservice instance’s metrics into Redux as timeseries data (stored in memory).
This approach works fine for a single microservice, but it likely does not scale well for multiple microservices because:
- Each microservice pulls a fairly large metrics.json file (~98 KB for the ESS service) pretty often (now every 5 seconds by default). This works out to roughly 20 KB/s per microservice.
- Each poll must be parsed and refactored into timeseries metrics, some of which happens on the main UI thread.
- Because the timeseries metrics are stored in Redux, this information is held in memory. The memory footprint for a single microservice grows substantially over time (at a theoretical rate of ~4 KB/s for ESS, likely less than half that due to JS runtime optimizations). This amounts to somewhere between 175 and 350 MB after collecting 24 hours of metrics.
In order to support multiple microservices before a full TSDB implementation, it makes sense to limit the app to polling a single microservice at a time. This should be whatever microservice the user is actively looking at. Initially, metrics should just be purged and replaced when switching between active microservices (bringing us to feature parity with the current implementation).
However, in a future PR, Redux can namespace metrics by service name and instance IDs to preserve previously captured metrics.
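The proposed namespacing can be sketched roughly like this (plain Python standing in for the Redux state tree; names are illustrative): metrics are keyed by service name and instance ID, so switching the active microservice no longer has to purge previously captured history.

```python
from collections import defaultdict

# service name -> instance ID -> list of (timestamp, metrics snapshot)
metrics_store = defaultdict(lambda: defaultdict(list))

def record_poll(service, instance, timestamp, snapshot):
    metrics_store[service][instance].append((timestamp, snapshot))

def series_for(service, instance):
    return metrics_store[service][instance]
```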
The dashboard currently only monitors a single instance of a microservice, using static configs stored in meta tags in their respective pages. Having a Fabric-wide view of microservices relies on the presence of the gm-services-gateway as a landing page that links to the separate dashboards. The landing page is essentially a static website that has microservice information injected by ZooKeeper/NGINX via Mustache tags.

We intend to enhance gm-fabric-dashboard to allow a single instance to monitor an entire microservice fabric.
To make this possible, the following UI changes must occur:
- The SummaryBar component must behave differently based on whether it's showing the entire Fabric, a microservice composed of a cluster of instances, or a single instance
- Routing must add parameters for microservices and instances, like `/:serviceName/:instanceID/threads`, to support the ability to render different microservices
- New components must be created to show something in the main view area. Absent a TSDB, the mocks for dashboards based on metrics rollup across a cluster of instances are not possible, but intermediate content should be generated.
We only want to be able to install the experiment when ActivityStream is installed, so on 57+.
Currently, Nightly reports an add-on with minVersion 57.0 as incompatible, so this is blocked until that changes.
This is blocked: I want to finalize our other library READMEs before embarking on this.
A non-ASCII character in the password when creating a user via the admin interface causes a traceback.
The user is created.
<pre>
192.168.121.157 - - [16/Aug/2017 12:36:48] "POST /admin/add HTTP/1.1" 500 -
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1997, in __call__
    return self.wsgi_app(environ, start_response)
  File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1985, in wsgi_app
    response = self.handle_exception(e)
  File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1540, in handle_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1982, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1614, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1517, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1612, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1598, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/vagrant/securedrop/journalist.py", line 96, in wrapper
    return func(*args, **kwargs)
  File "/vagrant/securedrop/journalist.py", line 188, in admin_add_user
    otp_secret=otp_secret)
  File "&lt;string&gt;", line 4, in __init__
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/state.py", line 414, in _initialize_instance
    manager.dispatch.init_failure(self, args, kwargs)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", line 66, in __exit__
    compat.reraise(exc_type, exc_value, exc_tb)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/state.py", line 411, in _initialize_instance
    return manager.original_init(*mixed[1:], **kwargs)
  File "/vagrant/securedrop/db.py", line 256, in __init__
    self.set_password(password)
  File "/vagrant/securedrop/db.py", line 290, in set_password
    self.pw_hash = self.scrypt_hash(password, self.pw_salt)
  File "/vagrant/securedrop/db.py", line 274, in scrypt_hash
    return scrypt.hash(str(password), salt, **params)
UnicodeEncodeError: 'ascii' codec can't encode character u'\xe9' in position 15: ordinal not in range(128)
</pre>
Passwords should accept non-ASCII characters: non-English speakers will need them to have good passphrases that they can memorize.
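A minimal sketch of the fix, assuming the hash function can take arbitrary bytes: encode the password to UTF-8 before hashing, instead of calling `str()` on it, which is what raises `UnicodeEncodeError` for characters like 'é'. (Python 3's stdlib `hashlib.scrypt` is used here for illustration; the parameters are illustrative, not SecureDrop's.)

```python
import hashlib

def scrypt_hash(password, salt):
    # Encode explicitly to UTF-8 so non-ASCII passphrases hash cleanly.
    return hashlib.scrypt(password.encode("utf-8"), salt=salt, n=2**14, r=8, p=1)
```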
F&M have rejected the approach of using illustrations in our case studies, so the images in this one need a rework. To be signed off by F&M before entering development
This work was already done in another ticket, before being reverted, so there should be some reuse value. This ticket cannot be done until we have the Webinar landing page in place
Existing PR - https://github.com/redbadger/website-honestly/pull/485
In an effort to make sure demonstrations of the EPUB reader at various meetings/presentations are stable, we will record a screencast.
The screencast would include:
Please note: we are waiting for the director of NYUP to say it's OK for us to use NYUP's Show Sold Separately as the subject of our screencast.
Update images on the F&M main case study to use website images, in line with the old case studies. This will go to F&M for pre-approval before moving into dev
Makes it easier to search for users.
It should be fixed soon.
We're currently using just the fact that the user is logged in; we also need to ensure that they have the appropriate affiliation.
See https://github.com/sul-dlss/SearchWorks/issues/1348#issuecomment-311776066 for affiliations.
Creating this issue to track progress on fixing issues related to the recent Bootstrap move to beta.
Would be great to enjoy the benefits of sbt 1.0.
All other plugins are already compatible.
Upgrade to Apollo 2.0 when it is released, with the aim of replacing the fix put in for #55 with a more robust approach.
As a user, I don't want to type or select the name of my contact again after selecting the `send transaction` action from their profile.

Summary: currently, when selecting the `send transaction` action from a contact's profile, I still have to type or select their name before being able to select an amount and send it. The `/send` command is selected, but you still have to input the user before being able to select an amount.
Wait for issue #1617 to be resolved
Once HydroShare 1.12 is released, containing the fix for https://github.com/hydroshare/hydroshare/issues/2257, revert #2145.
This card should be ready once this URL resolves correctly: https://www.hydroshare.org/hsapi/resource/?full_text_search=water&north=47.599816005559326&west=-122.32020458774991&east=-122.30687060756253&south=47.608806399331975&coverage_type=box
With #15 almost merged, there are some other modules that need their colors updated.
<img width="956" alt="screen shot 2017-08-15 at 6 38 23 am" src="https://user-images.githubusercontent.com/3325985/29314770-1cf85258-8185-11e7-89f7-c6179e022c26.png">
Specifically, the borders on the modals and the active state for the control buttons.
Work on this should not start until #481 has been completed and merged
In the Dataflow new project wizard, the name template defaults to the artifact ID. This should be shown in the dropdown menu when the wizard opens.
Of course, we may prefer to simply remove this field and ask the user to type a project name instead. Blocking until we decide that.
Compared with NVDA, Narrator is able to speak at faster rates with the OneCore voices. It can also access a much wider pitch range. In addition, the rate set in Narrator is not affected by the rate set in Windows Settings, whereas NVDA's is. This is because Narrator uses an API which was previously private. That API has now been made public, so NVDA will be able to use it.
See the Options property on the SpeechSynthesizer class and the SpeechSynthesizerOptions class. Note that AudioPitch, AudioVolume and SpeakingRate (the properties we want) were only introduced in Windows 10 Insider 10.0.16257.0.
Unfortunately, we can’t use these just yet for a few reasons:
`_OcSsmlConverter` if the new API is supported, instead of passing the cached user settings, but I'm not certain. Either way, this part of the code is going to get a bit ugly, because we have to support these two cases.
part of #3467
When clicked, open the create channel modal, #3473
After talking with Frank, we decided we wanted to add `appVersion` to the Telemetry events.
This is going to be blocked by:
- Adding the property to the schema
- Adding support to the Telemetry library
When a complaint is created, the complaint type (Reklamationsart) and the intake type (Eingangsart) are not carried over. Possibly a binding is not set up correctly.
github.com/minio/minio contains the main.go (and some build files, etc.), and everything else is located under /cmd.
This has the following consequences:
- It's tough for external contributors to dive into minio and understand the layer structure of the minio server/gateway; it took me quite a while, and I'm currently working full time on the minio server.
- Build times are quite long, even though the Go compiler is pretty fast.
- Currently, /cmd exports things more or less at random.
- It is very hard to test functionality properly, because /cmd is one global namespace and you cannot rely on invariants: anybody can modify any state from anywhere in the code.
Go natural approach to face these kind of problems is the package. Packages provide:
- A structured way to organize code. It’s easier to understand the architecture of a project if you can look at it’s components and see how they interact. This helps new (and old :wink: ) contributors to figure out which part of the code is responsible for what.
- Faster build and test times. The go compiler does incremental builds by default. It only compiles packages which have changed. This sames time during development.
- Go supports internal (private) packages. Currently anybody can import
github.com/minio/minio/cmd and may raise issues because we break that code with every release. We can prevent misuse of minio/cmd by using internal packages.
- A package exports a public API used by other packages to build minio’s functionality, but we can also protect state and rely on certain invariants and assumptions. Packages will help us prevent bugs caused by accidentally changed invariants.
I propose introducing packages in minio to address the problems listed above. The packages should mirror the architecture of minio. Minio consists of 3 independent layers: storage, object and handler. Further, minio supports FS and XL as well as Gateway (GCS, Azure and AWS). Within XL, erasure coding and bitrot protection form a special/complex code base.
We should discuss the right structure of packages and keep the balance between useful abstractions and over-engineering, but I think the general structure should look like this:
```
minio/
- main.go
- …
- storage/
  - storage.go
  - storage_test.go
  - posix/
    - posix.go
    - …
    - posix_test.go
  - rpc/
    - rpc.go
    - …
- object/
  - object.go
  - fs/
    - fs.go
  - xl/
    - xl.go
  - ….
```
/cc @fwessels @harshavardhana @abperiasamy @krishnasrinivas @donatello @krisis
Assigning fixed IP addresses makes replacing resources harder if necessary. It also makes updating CIDR blocks more complicated.
Blocked until .net standard versions can line up properly, probably after moving to 2.0
I have issues with overly bright screens, and I know I’m not the only one. Having a night mode would help people not only at night but also those with sensitive eyes.
(I can look at coding this, but thought I’d put this up here)
The functionality is already in place for this in the backend, so this will be quick and simple to implement.
We just need to decide where to put it on the settings page.
Blocked until we talk to @adachiu about this
Some people like different colored hair, skin, features. It shouldn’t be that hard to add atlas loading from file so users could modify how characters and enemies look. Let’s just make sure they want that and that there is a big enough audience to vote on it first.
At minimum the whole section can be turned off but the full feature is to be able to individually turn off bookmarks and visited sites.
<img width="357" alt="image" src="https://user-images.githubusercontent.com/438537/29232281-b1370696-7e9f-11e7-90f4-b4c30b8c38eb.png">
Strings:
- Highlights
- Find your way back to interesting things you’ve recently visited or bookmarked.
- Bookmarks
- Visited Sites
https://github.com/dotnet/roslyn/pull/21407 changes the BinaryOperatorKind enum public API, which breaks this analyzer at https://github.com/dotnet/roslyn-analyzers/blob/4e298f6afd156d5e958d21be8afe2c3cc99fdea3/src/Microsoft.NetCore.Analyzers/Core/Runtime/TestForNaNCorrectly.cs#L40
This analyzer should be fixed to consume the new API and re-enabled in Roslyn.
When we see a closed case, we need to update the classifier and tell it the final SR type.
When we need to update Redux, we need to mark what’s been updated so we can know whether or not to send a request to the backend.
Add ‘edited’ at the same level as id, user_id, appeal_id, etc.
Acceptance Criteria:
- After editing a hearing’s field, add a redux field ‘edited’ on the hearing and set it to true
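A minimal sketch of what the reducer change could look like; the action type, field names, and state shape here are assumptions for illustration, not the project’s actual code:

```javascript
// Hypothetical reducer sketch: whenever a hearing field changes, set an
// 'edited' flag on that hearing, alongside id, user_id, appeal_id, etc.
// The flag can later be used to decide whether to send a backend request.
function hearingsReducer(state = {}, action) {
  switch (action.type) {
    case 'SET_HEARING_FIELD': {
      const hearing = state[action.hearingId] || {};
      return {
        ...state,
        [action.hearingId]: {
          ...hearing,
          [action.field]: action.value,
          edited: true,
        },
      };
    }
    default:
      return state;
  }
}
```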
Fixed by #90
Acceptance Criteria:
- Ensure addOn is initially saved to redux store on pageload
- On user input change, update the redux store accordingly
Technical Notes:
- Do not update any of the backend code
- Do not update any backend calls
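A minimal sketch under those constraints (frontend state only, no backend calls); the action types and state shape are assumptions, not the project’s actual code:

```javascript
// Hypothetical reducer: seed addOn into the store on pageload, then keep it
// in sync with user input. No backend calls are made, per the technical notes.
const initialState = { addOn: null };

function addOnReducer(state = initialState, action) {
  switch (action.type) {
    case 'SET_ADD_ON':    // dispatched once on pageload with the initial value
    case 'UPDATE_ADD_ON': // dispatched on every user input change
      return { ...state, addOn: action.payload };
    default:
      return state;
  }
}
```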
- Fill as many cards on Highlights as possible with Bookmarks (first-class citizens) up to 5 days old.
- Fill the rest with the most recent history items (history items don’t show bookmarks).
- Looks like the total is 9 cards, with no timestamp but a “Bookmarked” or “Visited” label with matching icon.
- The prefs section from #3155 allows for turning off bookmarks and/or history.
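The fill rule described above could be sketched roughly like this; the data shapes, field names, and function name are all made up for illustration:

```javascript
const MAX_CARDS = 9;
const FIVE_DAYS_MS = 5 * 24 * 60 * 60 * 1000;

// bookmarks: [{ title, bookmarkedAt }], history: [{ title, visitedAt }]
function fillHighlights(bookmarks, history, now) {
  // Bookmarks are first-class: take as many as fit, up to 5 days old.
  const cards = bookmarks
    .filter(b => now - b.bookmarkedAt <= FIVE_DAYS_MS)
    .slice(0, MAX_CARDS)
    .map(b => ({ ...b, label: 'Bookmarked' }));
  // Fill the remaining slots with the most recent history items.
  const rest = history
    .slice() // avoid mutating the caller's array when sorting
    .sort((a, b) => b.visitedAt - a.visitedAt)
    .slice(0, MAX_CARDS - cards.length)
    .map(h => ({ ...h, label: 'Visited' }));
  return cards.concat(rest);
}
```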
<insert empty state string here> – see this link that should be updated https://mozilla.invisionapp.com/share/FUD0ADOYE#/screens/248319677_NewTab-Empty_States
CORS was enabled in https://github.com/DoSomething/gambit-conversations/pull/23 to quickly develop the Gambit Admin prototype. Once (https://github.com/DoSomething/aurora/issues/157) is done, we should disable it.
Natalie working on first draft, meet again on Monday the 21st.
This closes #3112.
Right now the search on the welcome page works the same way as the search in the document list (type-ahead). This is the wrong behavior for this task, because a user can match a valid (but wrong) ID belonging to a shorter number before they have finished typing the full ID.
@bradyswenson commented on Thu Jun 01 2017
article: https://support.rackspace.com/how-to/use-pivot-tables-with-your-cloud-billing-invoice/ (aka rackspace-how-to/content/general/use-pivot-tables-with-your-cloud-billing-invoice.md)
This is due Aug 15. I have email attachments with content suggestions in Word and PDF, plus new images to include.
I’ll convert to markdown and submit PR for review.
```javascript
var steps = [new TourStep("#selector", "Title of step", "Content of step")];
var tour = new Tour("sample", steps);
```
Implement user controls for historic range and percentile, for indicators that accept both parameters (and not just one or the other).
Allow user to modify percentile, on those indicators that use just percentile (and not historic range).
Allow user to modify historic range for those indicators that use just historic range (and not percentile).
For cooling and heating degree days, add controls above chart to modify the base temperature and its units.
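For context, a hedged sketch of the arithmetic behind those controls: a single day’s degree days relative to a base temperature, plus a unit-conversion helper for the control’s input. The function names and the F/C handling are my assumptions, not the project’s actual code:

```javascript
// Convert a user-entered base temperature to Celsius if needed.
function toCelsius(value, unit) {
  return unit === 'F' ? (value - 32) * 5 / 9 : value;
}

// Standard single-day degree-day definitions relative to a base temperature.
function heatingDegreeDays(avgTempC, baseTempC) {
  return Math.max(0, baseTempC - avgTempC);
}

function coolingDegreeDays(avgTempC, baseTempC) {
  return Math.max(0, avgTempC - baseTempC);
}
```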
A reference to this is being added to a paper that needs to be submitted soon. Content is at https://github.com/ianfoster/MRDP2016/blob/master/website/page-ascii.adoc
numpy > 1.11 is now supported. We should change setup.py to reflect the change.
We need to update the changelog too (credits to @g-weatherill).
In addition to #139, we also need to output the value of
response.error when we receive an invalid response.
response.error.code is particularly interesting, as it sometimes shows up in cases like this.
See https://dareid.github.io/chakram/jsdoc/global.html#ChakramResponse so the API I’m talking about is clear.
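As a hedged sketch, the output could be built by a small helper like the one below; `formatResponseError` is a hypothetical name, and the response shape follows the ChakramResponse doc linked above:

```javascript
// Build a log line from a ChakramResponse-like object. response.error is
// populated on invalid responses; error.code (e.g. ECONNREFUSED) is the
// most interesting part to surface.
function formatResponseError(res) {
  if (!res.error) {
    return null; // valid response, nothing to report
  }
  const code = res.error.code || 'unknown';
  const message = res.error.message || String(res.error);
  return `request failed with error code ${code}: ${message}`;
}
```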
ACCEPTANCE
- [ ] on the batch status view page, give me a button / link that takes me to a search with results that consist of all the items in the batch
- [ ] from the primary search box, give me an option to search by batch ID that returns all of the items in that batch