AppImage does not work on Debian 9

mujx/nheko

System:

  • Nheko commit/version: Nightly AppImage from Releases
  • Operating System: Debian 9
  • Desktop Environment: MATE

Actual behavior

    10.341: [warning] - QSslSocket: cannot resolve CRYPTO_num_locks
    10.341: [warning] - QSslSocket: cannot resolve CRYPTO_set_id_callback
    10.341: [warning] - QSslSocket: cannot resolve CRYPTO_set_locking_callback
    10.341: [warning] - QSslSocket: cannot resolve ERR_free_strings
    10.341: [warning] - QSslSocket: cannot resolve EVP_CIPHER_CTX_cleanup
    10.341: [warning] - QSslSocket: cannot resolve EVP_CIPHER_CTX_init
    10.341: [warning] - QSslSocket: cannot resolve sk_new_null
    10.341: [warning] - QSslSocket: cannot resolve sk_push
    10.341: [warning] - QSslSocket: cannot resolve sk_free
    10.341: [warning] - QSslSocket: cannot resolve sk_num
    10.341: [warning] - QSslSocket: cannot resolve sk_pop_free
    10.341: [warning] - QSslSocket: cannot resolve sk_value
    10.341: [warning] - QSslSocket: cannot resolve SSL_library_init
    10.341: [warning] - QSslSocket: cannot resolve SSL_load_error_strings
    10.341: [warning] - QSslSocket: cannot resolve SSL_get_ex_new_index
    10.341: [warning] - QSslSocket: cannot resolve SSLv3_client_method
    10.341: [warning] - QSslSocket: cannot resolve SSLv23_client_method
    10.341: [warning] - QSslSocket: cannot resolve SSLv3_server_method
    10.341: [warning] - QSslSocket: cannot resolve SSLv23_server_method
    10.341: [warning] - QSslSocket: cannot resolve X509_STORE_CTX_get_chain
    10.341: [warning] - QSslSocket: cannot resolve OPENSSL_add_all_algorithms_noconf
    10.341: [warning] - QSslSocket: cannot resolve OPENSSL_add_all_algorithms_conf
    10.341: [warning] - QSslSocket: cannot resolve SSLeay
    10.341: [warning] - QSslSocket: cannot resolve SSLeay_version
    10.341: [warning] - Incompatible version of OpenSSL
    10.519: [warning] - QSslSocket: cannot call unresolved function SSLv23_client_method
    10.547: [warning] - QSslSocket: cannot call unresolved function SSL_library_init
    18.562: [warning] - QSslSocket: cannot call unresolved function SSLv23_client_method
    18.562: [warning] - QSslSocket: cannot call unresolved function SSL_library_init
    18.562: [warning] - Malformed JSON response parse error - unexpected end of input
    19.982: [warning] - QSslSocket: cannot call unresolved function SSLv23_client_method
    19.982: [warning] - QSslSocket: cannot call unresolved function SSL_library_init
    19.982: [warning] - Malformed JSON response parse error - unexpected end of input
    20.881: [warning] - QSslSocket: cannot call unresolved function SSLv23_client_method
    20.881: [warning] - QSslSocket: cannot call unresolved function SSL_library_init
    20.881: [warning] - Malformed JSON response parse error - unexpected end of input
    39.682: [warning] - QSslSocket: cannot call unresolved function SSLv23_client_method
    39.682: [warning] - QSslSocket: cannot call unresolved function SSL_library_init
    42.066: [warning] - Malformed JSON response parse error - unexpected end of input

Expected behavior

The AppImage launches and can be used successfully.

Steps to reproduce

  1. Install Debian 9.
  2. Download and launch the AppImage.
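
For anyone triaging: the unresolved symbols in the log (SSLeay, SSL_library_init, CRYPTO_num_locks, …) are OpenSSL 1.0-era APIs that were removed in OpenSSL 1.1.0, the version Debian 9 ships by default, so a Qt build linked against OpenSSL 1.0.x cannot resolve them. A diagnostic sketch, assuming a stock Debian 9 host (installing libssl1.0.2 is a workaround to try, not a confirmed fix):

```
# Which OpenSSL runtime does the host provide?
openssl version
dpkg -l 'libssl*'            # Debian 9 (stretch) ships libssl1.1 by default

# Qt linked against OpenSSL 1.0.x looks for the 1.0.2 compatibility libraries
sudo apt-get install libssl1.0.2
```
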
Updated 11/12/2017 11:34 3 Comments

Bootstrap Automated Testing System

GreenMachineReloaded/great-scoutt

Concept

Now that the app is in a somewhat more stable state, writing integration and/or unit tests for many of its core systems would help preserve stability.

Goals

  1. Ability to run tests on service-level systems and below without using an emulator.
  2. Ability to run tests on UI components without using an emulator.
Updated 08/12/2017 15:31

Move anatomical workflow out into niworkflows

poldracklab/fmriprep

The anatomical workflow is mature enough to be detached from fmriprep’s base code.

This would make the anatomical workflow more visible and would allow us to: 1) move the anatomical workflow tests to niworkflows; 2) cache the anatomical workflow between Circle builds (speeding up fmriprep's build time).

WDYT?

(option 2 could happen before 1, that would also be okay)

Updated 11/12/2017 17:37 2 Comments

[CI] Integrate with Fastlane

RocketChat/Rocket.Chat.iOS
  • [ ] Integrate with Fastlane;
  • [ ] Generate TestFlight builds when merging to Develop;
  • [ ] Generate TestFlight builds when merging to Beta (send to external testers);
  • [ ] Notify Rocket.Chat channel (#ios-fastlane) on build success/failures;
Updated 05/12/2017 14:58 3 Comments

Cross-language interaction failed

RobotWebTools/rclnodejs

Recently, when running the script node ./scripts/compile_cpp.js && node ./scripts/run_test.js, the AppVeyor CI fails consistently (see PR #221).

I noticed that the failing cases are the ones that test compatibility between rclnodejs and rclpy/rclcpp clients. We verify the messages by reading what has been written to stdout, which can go wrong when the console outputs more than expected; e.g., if we enable the debug build, a lot of debug messages will also influence the test result.

@qiuzhong, would you please change the strategy for verifying the interaction-related functions? Maybe we could create another publisher/subscription in rclpy/rclcpp that forwards what has been received/published back to rclnodejs, instead of reading from console output.

Updated 08/12/2017 05:49 9 Comments

Unknown command “serve” in Makefile

xogeny/ModelicaBook

I came across the make rule “serve”, which is supposed to launch a command called serve, but that command does not exist (at least not on Linux). Just a typo, or something Mac-specific?

https://github.com/xogeny/ModelicaBook/blob/8c5a7db0d76309a846943899366d80b0ed0575af/Makefile#L53-L54

Updated 05/12/2017 13:49 3 Comments

Add static analysis to CI

espressomd/espresso

This should be done on GitLab, since it takes some time. CMake has to be run first to get the compile commands, so we can split the job into multiple parts, forwarding the build dir as an artifact between stages for each config: starting with cmake, then static analysis, then build, and finally running the tests. This would also give more detailed and potentially quicker feedback in case of failure. Tool-wise I’d use clang-tidy for this, as it seems to have the most momentum at the moment and has some MPI-related analysis in newer releases.
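
A sketch of the staged pipeline described above; the check set and paths are illustrative, and run-clang-tidy is the helper script that ships with the clang tools:

```
# Stage 1: configure only, exporting the compile database clang-tidy needs
mkdir -p build && cd build
cmake -DCMAKE_EXPORT_COMPILE_COMMANDS=ON ..

# Stage 2 (same build dir, forwarded as an artifact): static analysis
run-clang-tidy -p . -checks='-*,clang-analyzer-*,mpi-*'
```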

Updated 12/12/2017 14:38 1 Comments

Problem: sampledata isn't a submodule

artefactual-labs/am

How to reproduce:

  • Run make bootstrap twice
  • You’ll see the following error:

docker-compose exec --user root archivematica-storage-service \
    git clone https://github.com/artefactual/archivematica-sampledata.git \
    /home/archivematica/archivematica-sampledata
fatal: destination path '/home/archivematica/archivematica-sampledata' already exists and is not an empty directory.
make: *** [Makefile:27: bootstrap-storage-service] Error 128

The sampledata repo should be part of src/ and tracked as a submodule, then shared with the containers using volumes.
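
A minimal sketch of the proposed change (paths follow the issue's suggestion; untested against the actual Makefile):

```
# Track the sampledata repo as a submodule under src/ instead of cloning it
git submodule add https://github.com/artefactual/archivematica-sampledata.git \
    src/archivematica-sampledata
git commit -m "Track sampledata as a submodule"

# `make bootstrap` could then initialize it idempotently, so running it twice
# no longer fails:
git submodule update --init src/archivematica-sampledata
```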

Updated 07/12/2017 22:36

Automatic graphics testing environment

hbirchtree/coffeecutie

Should allow the application to run automatically in an isolated environment, with a controlled window manager/display server. Should support screenshots for testing purposes (a Linux/X11 sketch follows the list below).

  • [x] Android (using adb-auto toolset, captures screenshots, built on top of adb+monkeyrunner)
  • [ ] Linux/X11 (VirtualGL, multi-GPU configuration possible)
  • [ ] Windows
  • [ ] macOS
  • [ ] iOS (simulator?)
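
For the Linux/X11 item, a minimal sketch of the isolated-display idea, assuming Xvfb and ImageMagick's import are installed (the binary name is a placeholder; VirtualGL would wrap the app launch):

```
Xvfb :99 -screen 0 1280x720x24 &                  # controlled display server
XVFB_PID=$!
DISPLAY=:99 ./coffee-app &                        # placeholder application binary
APP_PID=$!
sleep 5                                           # give the app time to render a frame
DISPLAY=:99 import -window root screenshot.png    # capture for comparison
kill $APP_PID $XVFB_PID
```
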
Updated 28/11/2017 15:01

Build and tests more versions of the snapshots with different compilers, options...

mitls/hacl-star

We currently test with Low* tests and Unit tests for snapshots/hacl-c. Something like this should do…

snapshots/snapshot-gcc/libhacl.so: snapshots/snapshot-gcc
    $(MAKE) -C snapshots/snapshot-gcc CC="$(GCC) $(GCC_OPTS) -fPIC" libhacl.so

snapshots/snapshot-gcc/libhacl32.so: snapshots/snapshot-gcc
    $(MAKE) -C snapshots/snapshot-gcc CC="$(GCC) $(GCC_OPTS) -fPIC" libhacl32.so

snapshots/snapshot-gcc-unrolled/libhacl.so: snapshots/snapshot-gcc-unrolled
    $(MAKE) -C snapshots/snapshot-gcc-unrolled CC="$(GCC) $(GCC_OPTS) -fPIC" libhacl.so

snapshots/snapshot-gcc-unrolled/libhacl32.so: snapshots/snapshot-gcc-unrolled
    $(MAKE) -C snapshots/snapshot-gcc-unrolled CC="$(GCC) $(GCC_OPTS) -fPIC" libhacl32.so

# BB: The options for Compcert and MSVC must be adjusted here
snapshots/hacl-c-compcert/libhacl.so: snapshots/hacl-c-compcert
    $(MAKE) -C snapshots/hacl-c-compcert CC="$(CCOMP) $(CCOMP_OPTS) -fPIC" libhacl.so

snapshots/hacl-c-compcert/libhacl32.so: snapshots/hacl-c-compcert
    $(MAKE) -C snapshots/hacl-c-compcert CC="$(CCOMP) $(CCOMP_OPTS) -fPIC" libhacl32.so

snapshots/snapshot-msvc/libhacl.so: snapshots/snapshot-msvc
    $(MAKE) -C snapshots/snapshot-msvc CC="$(MSVC) $(MSVC_OPTS) -fPIC" libhacl.so

snapshots/snapshot-msvc/libhacl32.so: snapshots/snapshot-msvc
    $(MAKE) -C snapshots/snapshot-msvc CC="$(MSVC) $(MSVC_OPTS) -fPIC" libhacl32.so
Updated 28/11/2017 08:53

LE CI

espressomd/espresso

Fixes #1657. Closes #1608.

Description of changes:

  • Added a testing stage with Lees-Edwards to gitlab-ci.

There is still a lot to do from the looks of it. But the LE test works.

Updated 04/12/2017 13:13 3 Comments

"Failed to connect to bus: No such file or directory" on Ubuntu CI

dj-wasabi/ansible-zabbix-server

Issue when Ubuntu is used during CI build:

    TASK [ansible-zabbix-server : Zabbix-server started] ***************************
    fatal: [zabbix-server-mysql-ubuntu]: FAILED! => {"changed": false, "cmd": "/bin/systemctl", "failed": true, "msg": "Failed to connect to bus: No such file or directory", "rc": 1, "stderr": "Failed to connect to bus: No such file or directory\n", "stderr_lines": ["Failed to connect to bus: No such file or directory"], "stdout": "", "stdout_lines": []}
    fatal: [zabbix-server-pgsql-ubuntu]: FAILED! => {"changed": false, "cmd": "/bin/systemctl", "failed": true, "msg": "Failed to connect to bus: No such file or directory", "rc": 1, "stderr": "Failed to connect to bus: No such file or directory\n", "stderr_lines": ["Failed to connect to bus: No such file or directory"], "stdout": "", "stdout_lines": []}
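
“Failed to connect to bus” usually means systemd is not running as PID 1 inside the CI container, so systemctl has no D-Bus system bus to talk to. A workaround sketch if the build runs in Docker (the image is illustrative and needs systemd installed):

```
# Boot the test container with systemd as init so `systemctl` works
docker run -d --name zabbix-ubuntu \
    --privileged \
    -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
    ubuntu:16.04 /sbin/init
```
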
Updated 23/11/2017 19:59

Failing Test: BrokerLeaderChangeTest

zeebe-io/zeebe

IDEA 2017.2, Java8, maven 3.3.9

[INFO] -------------------------------------------------------
[INFO]  T E S T S
[INFO] -------------------------------------------------------
[INFO] Running io.zeebe.broker.it.clustering.BrokerLeaderChangeTest
[WARNING] Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.027 s - in io.zeebe.broker.it.clustering.BrokerLeaderChangeTest
[INFO] Running io.zeebe.broker.it.clustering.CreateTopicClusteredTest
[ERROR] Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 26.655 s <<< FAILURE! - in io.zeebe.broker.it.clustering.CreateTopicClusteredTest
[ERROR] shouldReplicateNewTopic(io.zeebe.broker.it.clustering.CreateTopicClusteredTest)  Time elapsed: 19.349 s  <<< FAILURE!
java.lang.AssertionError: Failed to wait for [localhost:51015, localhost:41015] become leader of partition 1
    at io.zeebe.test.util.TestUtil$Invocation.until(TestUtil.java:118)
    at io.zeebe.test.util.TestUtil$Invocation.until(TestUtil.java:80)
    at io.zeebe.broker.it.clustering.TopologyObserver.waitForLeader(TopologyObserver.java:57)
    at io.zeebe.broker.it.clustering.CreateTopicClusteredTest.shouldReplicateNewTopic(CreateTopicClusteredTest.java:113)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
    at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:748)

[INFO] Running io.zeebe.broker.it.clustering.DeploymentClusteredTest
[WARNING] Tests run: 5, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 11.056 s - in io.zeebe.broker.it.clustering.DeploymentClusteredTest
[INFO] Running io.zeebe.broker.it.incident.IncidentTest
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.115 s - in io.zeebe.broker.it.incident.IncidentTest
[INFO] Running io.zeebe.broker.it.network.ClientReconnectTest
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.511 s - in io.zeebe.broker.it.network.ClientReconnectTest
[INFO] Running io.zeebe.broker.it.network.MultipleClientTest
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.77 s - in io.zeebe.broker.it.network.MultipleClientTest
[INFO] Running io.zeebe.broker.it.startup.BrokerRecoveryTest
[INFO] Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 50.904 s - in io.zeebe.broker.it.startup.BrokerRecoveryTest
[INFO] Running io.zeebe.broker.it.startup.BrokerRestartTest
[INFO] Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.035 s - in io.zeebe.broker.it.startup.BrokerRestartTest
[INFO] Running io.zeebe.broker.it.subscription.IncidentTopicSubscriptionTest
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.613 s - in io.zeebe.broker.it.subscription.IncidentTopicSubscriptionTest
[INFO] Running io.zeebe.broker.it.subscription.PersistentTopicSubscriptionTest
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.731 s - in io.zeebe.broker.it.subscription.PersistentTopicSubscriptionTest
[INFO] Running io.zeebe.broker.it.subscription.TaskTopicSubscriptionTest
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.347 s - in io.zeebe.broker.it.subscription.TaskTopicSubscriptionTest
[INFO] Running io.zeebe.broker.it.subscription.TopicSubscriptionRaftEventTest
[WARNING] Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.002 s - in io.zeebe.broker.it.subscription.TopicSubscriptionRaftEventTest
[INFO] Running io.zeebe.broker.it.subscription.TopicSubscriptionTest
[WARNING] Tests run: 17, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 33.75 s - in io.zeebe.broker.it.subscription.TopicSubscriptionTest
[INFO] Running io.zeebe.broker.it.subscription.WorkflowInstanceTopicSubscriptionTest
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.256 s - in io.zeebe.broker.it.subscription.WorkflowInstanceTopicSubscriptionTest
[INFO] Running io.zeebe.broker.it.subscription.WorkflowTopicSubscriptionTest
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.107 s - in io.zeebe.broker.it.subscription.WorkflowTopicSubscriptionTest
[INFO] Running io.zeebe.broker.it.task.TaskQueueTest
[WARNING] Tests run: 3, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 8.154 s - in io.zeebe.broker.it.task.TaskQueueTest
[INFO] Running io.zeebe.broker.it.task.TaskSubscriptionTest
[INFO] Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.606 s - in io.zeebe.broker.it.task.TaskSubscriptionTest
[INFO] Running io.zeebe.broker.it.topic.CreateTopicTest
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.425 s - in io.zeebe.broker.it.topic.CreateTopicTest
[INFO] Running io.zeebe.broker.it.workflow.CancelWorkflowInstanceTest
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.632 s - in io.zeebe.broker.it.workflow.CancelWorkflowInstanceTest
[INFO] Running io.zeebe.broker.it.workflow.CreateDeploymentTest
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.849 s - in io.zeebe.broker.it.workflow.CreateDeploymentTest
[INFO] Running io.zeebe.broker.it.workflow.CreateWorkflowInstanceTest
[WARNING] Tests run: 7, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 13.651 s - in io.zeebe.broker.it.workflow.CreateWorkflowInstanceTest
[INFO] Running io.zeebe.broker.it.workflow.ExclusiveGatewayTest
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.001 s - in io.zeebe.broker.it.workflow.ExclusiveGatewayTest
[INFO] Running io.zeebe.broker.it.workflow.ServiceTaskTest
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.09 s - in io.zeebe.broker.it.workflow.ServiceTaskTest
[INFO] Running io.zeebe.broker.it.workflow.UpdatePayloadTest
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.698 s - in io.zeebe.broker.it.workflow.UpdatePayloadTest
[INFO] Running io.zeebe.broker.it.workflow.YamlWorkflowTest
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.491 s - in io.zeebe.broker.it.workflow.YamlWorkflowTest
[INFO] 
[INFO] Results:
[INFO] 
[ERROR] Failures: 
[ERROR]   CreateTopicClusteredTest.shouldReplicateNewTopic:113 Failed to wait for [localhost:51015, localhost:41015] become leader of partition 1
[INFO] 
[ERROR] Tests run: 124, Failures: 1, Errors: 0, Skipped: 10
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Zeebe Root ......................................... SUCCESS [ 15.931 s]
[INFO] Zeebe Core Parent .................................. SUCCESS [  1.257 s]
[INFO] Zeebe Protocol Test Util ........................... SUCCESS [ 12.834 s]
[INFO] Zeebe Broker Core .................................. SUCCESS [05:07 min]
[INFO] Zeebe Client Java .................................. SUCCESS [01:00 min]
[INFO] Zeebe QA ........................................... SUCCESS [  0.493 s]
[INFO] Zeebe QA Integration Tests ......................... FAILURE [05:20 min]
Updated 22/11/2017 14:06

DeploymentClusteredTest is unstable

zeebe-io/zeebe
[ERROR] shouldDeployWorkflowAndCreateInstances(io.zeebe.broker.it.clustering.DeploymentClusteredTest)  Time elapsed: 9.867 s  <<< FAILURE!
org.junit.ComparisonFailure: expected:<[tru]e> but was:<[fals]e>
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at io.zeebe.test.util.TestUtil$Invocation.until(TestUtil.java:87)
        at io.zeebe.test.util.TestUtil$Invocation.until(TestUtil.java:65)
        at io.zeebe.test.util.TestUtil.waitUntil(TestUtil.java:36)
        at io.zeebe.broker.it.clustering.DeploymentClusteredTest.shouldDeployWorkflowAndCreateInstances(DeploymentClusteredTest.java:89)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
        at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
        at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
        at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
        at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
        at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.lang.Thread.run(Thread.java:748)
Updated 14/12/2017 17:57 3 Comments

Benefits of Continuous Delivery

tieubao/til

https://henrikwarne.com/2017/11/19/benefits-of-continuous-delivery/


During my career as a software developer, I have seen the release frequency increasing steadily. When I started, it would take 12 to 18 months for new features to reach the customer. Years later the frequency increased, so deployment to production happened every three weeks. For the past two years, we have been using continuous delivery at work. This means that as soon as a feature is ready (implemented, code-reviewed and tested), it is deployed to production. Continuous delivery is by far the best way in my opinion, and here is why:

Benefits

Lower risk. The number one reason I like to deploy each new feature as soon as it is done is that it lowers the risk. Every time you change the software there is a risk that bugs are introduced. If the change deployed is small, there is less code to look through in case of a problem. If you only deploy new software every three weeks, there is a lot more code that could be causing a problem. If a problem can’t be found or fixed quickly, it is also a lot easier to revert a small deploy than a large deploy.

Fresh in my mind. If I deploy a new feature as soon as it is ready, everything about it is fresh in my mind. So if there is a problem, troubleshooting is easier than if I had worked on other features in between. Being completely done with a feature (including deploying it to production) also frees up mental energy. I can concentrate on one thing at a time, instead of multitasking.

Features reach customers faster. All things being equal, the faster a feature reaches the customer, the better. Having a feature ready for production, but not deploying it, is wasteful.

Faster feedback. The sooner the customer starts using the new feature, the sooner you hear what works, what doesn’t work, and what improvements they would like. Very often, the next features to be developed are not known until the customer has tried out the current feature and provided feedback. Furthermore, as valuable as testing is, it is never as good as running new code in production. The configuration and data in the production environment will reveal problems that you would never find in testing. Therefore, the sooner it is deployed, the sooner you can find and work out remaining problems in the code.

Prerequisites

For continuous deployment to work, there are many preconditions that have to be met. Without these, it is hard or impossible to deploy new code continuously.

Central servers. The system must be cloud based or run from central servers. If the system is run on the customer premises and under their control, it will obviously not work to deploy new versions many times a day.

DevOps culture. Continuous delivery works best when the developers creating the new features are the ones deploying them. There are no hand-offs – the same person writes the code, tests, deploys and debugs if necessary. This quote (from Werner Vogels, CTO of Amazon) sums it up perfectly: “You built it, you run it.”

Automation. When you deploy to production many times a day, deploys must be quick and easy. This means that almost all of the mechanics of building a new release and deploying it must be automated to avoid manual steps. The rest of the system must also support rapid builds and deploys. At work we use docker and kubernetes, which works very well.

Rolling upgrades. If the system is unavailable during a software deploy, you will think twice before deploying. To avoid this, the system should be set up so you can deploy new features server by server, without service interruption.

Revertible. It should be easy to go back to the previous version of the software in case there are problems with the new deploy. If it is easy to deploy a new version, then this is usually not a problem – you just use the same system to deploy the old version again.

Knowing what is running. When the software version running changes several times a day, it is important to be able to tell what it is. This means both knowing what is currently running, and knowing when changes were made. At work we use version numbers in combination with the git hashes of the software. Also, each software deploy is committed in a separate versions file.

But…

Bugs. Some people, especially when used to scheduled releases, feel uneasy about continuous deliveries. Aren’t there more bugs as a result of these frequent releases? In my experience, no, there are not more bugs now. There were occasional bugs in the scheduled releases, and there are occasional bugs when deploying continuously. The difference now is that when there are bugs, they are easier and faster to find and remedy.

Soaking. But what about letting new features “soak” in the test environment for a while before releasing them? Doesn’t that uncover hidden bugs? This is an argument that works better in theory than in practice. Before you consider a feature done, you perform all the tests you can think of to convince yourself that it works as expected. It is possible that there are bugs that could be found in a test environment even when you aren’t looking for them, but in practice this almost never happens. Much more likely is that any remaining bugs will only be found when the feature is used in production, with real configuration, data and traffic patterns. Thus the other advantages of frequent deploys outweigh the potential of finding lurking bugs by delaying deployment.

Conclusion

As a developer, I want to do everything I can to make sure I create bug-free features as fast as possible. Continuous delivery is a way of working that helps in this respect. Deploying one feature at a time lowers the risk of each deploy significantly. If there is a problem, the code is fresh in my mind, and the changes compared to the previous deploy are small, so troubleshooting is much easier. In addition, it gets features to the customer faster, enabling faster feedback as well. Comparing scheduled releases to continuous delivery, I much prefer continuous delivery.

Updated 21/11/2017 07:32

Restore CI targets

mitls/hacl-star
  • [x] Restore make package in the top-level Makefile target (was killing Hacl-Windows-CI) https://github.com/mitls/hacl-star/commit/b52380fbdce6f6344d63b6564146728d8d73cbb3

  • [ ] Restore the OpenSSL engine target (was killing Hacl-Nigthly-Linux) https://github.com/mitls/hacl-star/commit/66050943ba3c457e6b55c48bc4d8a5ffba6096bf

Updated 27/11/2017 21:02

port chart-logging to gitlab

samsung-cnct/chart-logging
  • [x] Follow the instructions to create and edit .gitlab-ci.yml from the template
  • [x] also create build/build.sh and build/test.sh according to instructions
  • [x] make build.sh and test.sh executable
  • [x] move Chart.yaml.in into build/
  • [x] create secret variable in gitlab
  • [x] add pipeline status badge to README
  • [x] Remove the Jenkinsfile in the current repo.
  • [x] When the PR has merged, make sure that the new image is being created on quay. Afterwards, head back to the umbrella ticket and check off chart-logging.
Updated 13/12/2017 19:10

XCTest Performance Testing

JohnSundell/ImagineEngine

We should investigate whether we can use XCTest’s performance testing features to implement automatic performance tests on CI. This would prevent regressions in performance-sensitive code when adding new features.

APIs that would be good to performance test:

  • [ ] Timeline, since it’s the backbone of all updates, including actions & animations.
  • [ ] Action, since it is the base class for all actions.
  • [ ] TextureManager, since it loads all textures, which is a very common operation.

Questions to discuss

  • [ ] Can we run these types of performance tests on CI without causing flakiness?
Updated 18/11/2017 21:31 2 Comments

Turn parallel_tests back on for CI

github/octocatalog-diff

Recently the CI for octocatalog-diff has been pretty flaky due to random tests getting killed. From the Travis CI documentation, I concluded that the individual tests may be exhausting the container’s resources, and so in https://github.com/github/octocatalog-diff/pull/161 I disabled the parallel_tests gem for CI. Since then, no problems. :crossed_fingers: But unfortunately the CI build time per Ruby version increased by about 5 minutes (20 to 25) as a result of this change.

I’m entering this issue to keep track of turning this back on. Things that would probably need to be done here: evaluate whether the tests can be effectively parallelized (perhaps the spec tests), or whether the number of simultaneous processes can be reduced to keep from bumping up against any limits that exist.

Right now nobody is actively working on this (there are higher priorities). However if test parallelization is something that interests you and you’d like to have a go at making the CI faster, please comment in the issue! 😸
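
If someone picks this up, one cheap first experiment is capping the worker count instead of disabling parallelism entirely; PARALLEL_TEST_PROCESSORS is honored by the parallel_tests gem, and the value here is only a guess:

```
# Re-enable parallel specs with fewer workers than the CPUs the container reports
PARALLEL_TEST_PROCESSORS=2 bundle exec parallel_rspec spec/
```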

Updated 17/11/2017 05:25

Incremental check for coding standards in pull requests

doctrine/dbal

The existing approach of running phpcs over the entire codebase on pull requests is not really helpful because:

  1. The build stage always fails due to existing standard violations.
  2. It’s unlikely that they will all be fixed any time soon, given their number (looks like thousands).
  3. Fixing some violations may be BC-breaking-ish (e.g. renaming protected methods/properties starting with an underscore).
  4. Developers and reviewers waste time manually finding and then fixing violations.
  5. If not found and fixed, new violations are introduced. It’s a vicious circle.
  6. Without proper automation, many new contributors (including myself until recently) don’t even know that the project has its own custom coding standards.

Incremental checks could be done using morozov/diff-sniffer-pull-request like the following:

```
# assuming we’re in the dbal repo with installed dependencies
$ wget https://github.com/morozov/diff-sniffer-pull-request/releases/download/3.1.1/pull-request.phar
$ php pull-request.phar doctrine dbal 2494

FILE: lib/Doctrine/DBAL/Driver/SQLSrv/SQLSrvStatement.php

FOUND 4 ERRORS AFFECTING 2 LINES

 203 | ERROR | [x] There must be a single space before a NOT operator; 0 found
 203 | ERROR | [x] There must be a single space after a NOT operator; 0 found
 210 | ERROR | [x] There must be a single space before a NOT operator; 0 found
 210 | ERROR | [x] There must be a single space after a NOT operator; 0 found

PHPCBF CAN FIX THE 4 MARKED SNIFF VIOLATIONS AUTOMATICALLY
```

On Travis CI, it could be a conditional build job which is only executed for pull requests. Some pull requests will inevitably fail (e.g. if they override a method starting with an underscore). Even in this case, I’d prefer not to allow this stage to fail, since a pull request with a failed build can still be merged.
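
A sketch of the conditional step for .travis.yml’s script phase; TRAVIS_PULL_REQUEST is the standard Travis variable holding the PR number (or "false" on push builds):

```
# Run the incremental sniff only on pull request builds
if [ "$TRAVIS_PULL_REQUEST" != "false" ]; then
    wget https://github.com/morozov/diff-sniffer-pull-request/releases/download/3.1.1/pull-request.phar
    php pull-request.phar doctrine dbal "$TRAVIS_PULL_REQUEST"
fi
```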

Any volunteers to implement this?

Updated 08/12/2017 20:57 5 Comments

Travis build fails for PHP 7.2

yriveiro/dot

Travis build for PHP 7.2 fails with a segmentation fault.

/home/travis/.travis/job_stages: line 57:  5464 Segmentation fault      (core dumped) vendor/bin/phpunit --coverage-clover build/logs/clover.xml tests/
Updated 16/11/2017 10:44

Testing examples

opws/opws-schemata

I just checked and I’m pretty sure the examples listed for v0.1 and v0.2 are invalid.

While the fixes needed to address this should be minor, the fact remains that each schema should be tested against its example profiles / legacies.
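
One possible shape for such a test, assuming the schemata are JSON Schema files and using the ajv-cli validator (the repo layout here is a guess):

```
npm install -g ajv-cli
# Validate every example profile against its version's schema
for v in v0.1 v0.2; do
    ajv validate -s "schemata/$v/profile.json" -d "examples/$v/*.json"
done
```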

Updated 15/11/2017 12:10

CircleCI integration

legshort/django-demo
  • [x] Add the file .circleci/config.yml
  • [x] Add the file contents:

```yml
# Python CircleCI 2.0 configuration file
#
# Check https://circleci.com/docs/2.0/language-python/ for more details
#
version: 2
jobs:
  build:
    docker:
      # specify the version you desire here
      # use -browsers prefix for selenium tests, e.g. 3.6.1-browsers
      - image: circleci/python:3.6.2

      # Specify service dependencies here if necessary
      # CircleCI maintains a library of pre-built images
      # documented at https://circleci.com/docs/2.0/circleci-images/
      # - image: circleci/postgres:9.4

    working_directory: ~/repo

    steps:
      - checkout

      # Download and cache dependencies
      - restore_cache:
          keys:
          - v1-dependencies-{{ checksum "requirements.txt" }}
          # fallback to using the latest cache if no exact match is found
          - v1-dependencies-

      - run:
          name: install dependencies
          command: |
            python3 -m venv venv
            . venv/bin/activate
            pip install -r requirements.txt

      - save_cache:
          paths:
            - ./venv
          key: v1-dependencies-{{ checksum "requirements.txt" }}

      # run tests!
      - run:
          name: run tests
          command: |
            . venv/bin/activate
            python django_demo/manage.py test

      - store_artifacts:
          path: test-reports
          destination: test-reports
```

Updated 14/11/2017 13:55 1 Comments

Cleanup script for multi node automation

bigchaindb/bigchaindb
  • BigchainDB version: N/A
  • Operating System: N/A
  • Deployment Type: Ansible/Vagrant
  • BigchainDB driver: N/A

Description

Currently the multi node automation using Vagrant and Ansible does not have a clean up script/playbook. We need to introduce a script which removes/cleans up all the processes/containers spawned by the playbook.
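
A sketch of what the cleanup entry points might look like (the playbook and inventory names are placeholders):

```
# Heavy-handed: tear the Vagrant nodes down entirely
vagrant destroy -f

# Targeted: a playbook that removes only the processes/containers the
# provisioning playbook spawned
ansible-playbook -i inventory cleanup.yml
```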

Updated 14/11/2017 10:12

Add zbctl to Zeebe distribution

zeebe-io/zeebe
  • zbctl for Linux and Windows (64-bit) is included in the zeebe-distribution bin folder
  • requires zbctl to be released before the Zeebe release
  • options for packaging (see the sketch below):
    • add zbctl download/unpacking to the maven build; may require a zbc SNAPSHOT build for Zeebe SNAPSHOTS
    • add zbctl download/unpacking to the jenkins release pipeline; the release of zbc should then also be part of the pipeline: https://github.com/zeebe-io/zbc-go/blob/master/RELEASE.md
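
A sketch of the download/unpack step usable from either option (the release URL, artifact name, and version are placeholders; the actual zbc-go release layout may differ):

```
# Fetch a released zbctl and drop it into the distribution's bin folder
ZBCTL_VERSION=0.1.0   # placeholder
curl -sL "https://github.com/zeebe-io/zbc-go/releases/download/v${ZBCTL_VERSION}/zbctl-linux-amd64.tar.gz" \
    | tar -xz -C zeebe-distribution/bin
```
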
Updated 15/11/2017 12:24

Introduce TypeDoc

Takumon/mean-blog

Purpose

  • Make the source code’s specification explicit
  • Hand-writing documentation is tedious, so keep it to a minimum with tooling

TaskList

  • [ ] Understand the TypeDoc basics
  • [ ] Try documenting one class with TypeDoc
  • [ ] Document the remaining classes
  • [ ] Set up automatic documentation generation on CI (see the sketch below)
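
For the CI item, a minimal sketch; typedoc is the npm package’s real name, but the output and source paths are assumptions:

```
# Generate the API docs as part of the CI build
npm install --save-dev typedoc
npx typedoc --out docs/api src/
```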

References

Qiita article

Notes

Rather than doing this task all at once, write TypeDoc comments incrementally when writing tests or refactoring.

Updated 14/11/2017 06:09

Keeping .copld.yaml and .json synchronized

opws/opws-schemata

What I’m thinking here is to make it possible to propose schema changes by just modifying the .copld.yaml in a branch; then, if the JSON doesn’t match, CI will rebuild it from the .copld.yaml and rewrite the commit accordingly. (CI will also reject any change to the JSON that doesn’t correspond to bringing it up-to-date with the YAML.)

That might be a little too brutal - maybe CI should just propose pull requests against branches when this happens, I don’t know. (And, of course, this can’t be done for branches outside the OPWS org.)
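
A sketch of the CI-side consistency check (file names are placeholders; the converter is a plain YAML-to-JSON round-trip assuming PyYAML is available):

```
# Rebuild the JSON from the YAML source of truth...
python -c 'import sys, yaml, json; json.dump(yaml.safe_load(open(sys.argv[1])), open(sys.argv[2], "w"), indent=2, sort_keys=True)' \
    profiles.copld.yaml profiles.json

# ...and fail the build if the committed JSON was out of date
git diff --exit-code -- profiles.json
```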

Updated 13/11/2017 03:02

Add a TeaCI build

Metrolog/marks

Yet another build server on Linux, but for building Windows applications. Hopefully faster than AppVeyor. https://docs.tea-ci.org/usage/overview/

MSYS2 comes preinstalled.

Builds can be tested on Cygwin, msys2, and mingw32, for both x64 and x86.

Updated 11/11/2017 12:07 1 Comments

Fix building SDL2DisplayPlugin on Linux

OpenSmalltalk/opensmalltalk-vm

SDL2DisplayPlugin on Linux CI fails with:

libtool: link: gcc -m64 -shared -fPIC -DPIC .libs/SDL2DisplayPlugin.o -lSDL2 -m64 -O2 -msse2 -Wl,-z -Wl,now -Wl,-soname -Wl,SDL2DisplayPlugin.so -o .libs/SDL2DisplayPlugin.so

/usr/bin/ld: cannot find -lSDL2

TODO: Find the proper place and way to add -L<place to the thirdparty cache/lib> so that the plugin can be built.
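
Until the proper place is found, one illustrative way to unblock the link step (the cache path is a placeholder for wherever the CI stores the third-party SDL2 build):

```
# Point the linker at the cached third-party libs at configure time
LDFLAGS="-L${THIRDPARTY_CACHE}/lib" ./configure
make
```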

Updated 15/11/2017 10:32

Staged protoDC2 image generation

LSSTDESC/DC2_Repo

Since we will not be able to generate the full protoDC2 data set before the Sprint Week, we would like to produce an initial subset that would still be useful for the Working Groups in the near term.

We would like to do the full 25 sq degrees, so downscoping for the initial stage would mean fewer bands and a shorter observation time frame.

Questions:

  • How many visits (or sensor-visits) seem feasible to have done by Sprint Week? @TomGlanzman?
  • How many and which bands? @cwwalter?
  • What depth? 1 year? @cwwalter?

Updated 14/12/2017 16:14 271 Comments

HttpConnectorTest intermittent CI failure

Porter-connectors/HttpConnector

From @Bilge on March 16, 2017 16:37

Travis occasionally fails to pass HttpConnectorTest with an error similar to the following.

There was 1 error:

1) ScriptFUSIONTest\Functional\Porter\Net\Http\HttpConnectorTest::testConnectionToLocalWebserver
ScriptFUSION\Retry\FailingTooHardException: Operation failed after 5 attempt(s).

/home/travis/build/ScriptFUSION/Porter/vendor/scriptfusion/retry/src/retry.php:29
/home/travis/build/ScriptFUSION/Porter/test/Functional/Porter/Net/Http/HttpConnectorTest.php:96
/home/travis/build/ScriptFUSION/Porter/test/Functional/Porter/Net/Http/HttpConnectorTest.php:34

Caused by
ScriptFUSION\Porter\Net\Http\HttpConnectionException: file_get_contents(http://[::1]:12345/test?baz=qux): failed to open stream: Connection refused

/home/travis/build/ScriptFUSION/Porter/src/Net/Http/HttpConnector.php:65
/home/travis/build/ScriptFUSION/Porter/src/Connector/CachingConnector.php:62
/home/travis/build/ScriptFUSION/Porter/test/Functional/Porter/Net/Http/HttpConnectorTest.php:110
/home/travis/build/ScriptFUSION/Porter/test/Functional/Porter/Net/Http/HttpConnectorTest.php:86
/home/travis/build/ScriptFUSION/Porter/vendor/scriptfusion/retry/src/retry.php:26
/home/travis/build/ScriptFUSION/Porter/test/Functional/Porter/Net/Http/HttpConnectorTest.php:96
/home/travis/build/ScriptFUSION/Porter/test/Functional/Porter/Net/Http/HttpConnectorTest.php:34

This never used to be a problem, and thanks to the five retries there should be plenty of time to spin up the server. However, this is also the first test in the suite, so it may have something to do with PHPUnit start-up time. We should consider moving slower tests to the end of the suite, and if that doesn’t work, we’ll have to increase the retry delay coefficient.

Copied from original issue: ScriptFUSION/Porter#34

Updated 06/11/2017 02:09

Uniform option specification

babeloff/fql

Every expression should be annotatable with options in a uniform way. I would like to associate a set of allowable options to each AQL expression, but I’m not sure how to best do this and I’ve stopped keeping careful track of how options are used - options are proliferating and interacting with each other as we continue to deploy AQL to more realistic projects.
In fact, I’m not even sure how to figure out which options can be used where, because options get passed around a lot.
In the interim the ANTLR grammar should probably allow all options everywhere.

Updated 03/11/2017 17:04

Change lambda binding notation delimiter

babeloff/fql

Change from . to , in the binding notation (e.g., “lambda x, ”). See AqlMapping.g4:

mappingLambda
  : LAMBDA mappingGen (COMMA mappingGen)* DOT evalMappingFn ;

mappingGen : symbol (COLON mappingGenType)? ;
mappingGenType : symbol ;

The DOT should be a COMMA.

Updated 03/11/2017 17:42 3 Comments

Allow reordering of sections

babeloff/fql

This is easy enough to do in ANTLR. Rather than a : b c d e ; we say a : (b | c | d | e)* ; This second form allows any order. It also allows sections to repeat. Specifically, which sections should allow any order?

Updated 03/11/2017 17:07

How to get the package reference after conan create

conan-io/conan

As part of an automated build process, I am trying to build a conan package and then upload only that package to an artifact repository (specifically I don’t want to use upload all, because I don’t want to redundantly push dependencies that I just downloaded).

If I don’t (redundantly) store the package name and version in the pipeline, is there any way to parse them from the conanfile.py? I couldn’t find any option to conan info that displays only the package name, although the --only None option comes close, as the reference with the @PROJECT annotation has what I need. However, there isn’t a particularly easy way to parse the output of a conan command using the Artifactory plugin (that I am aware of), since it just echoes it and returns a buildInfo object for Artifactory.

Any suggestions are welcome.
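
One stopgap sketch, scraping the reference out of the conan info output mentioned above (fragile by construction; the sed pattern just strips the @PROJECT suffix):

```
# Extract "name/version" from the @PROJECT reference line
REF=$(conan info . --only None | grep '@PROJECT' | sed 's/@PROJECT$//')
echo "package reference: ${REF}"
```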

Updated 03/11/2017 18:12 5 Comments

Fix IL Linker

Kentico/cloud-generators-net

The current publishing script:

dotnet publish --framework netcoreapp2.0 --runtime win7-x64 -o ..\..\artifacts /p:ShowLinkerSizeComparison=true /p:LinkDuringPublish=false

-> /p:LinkDuringPublish=false: linking had to be disabled because exceptions were being thrown at runtime. Most likely it was not possible to load some types that are accessed via reflection, e.g. Configuration, which is loaded via DI.

It should be possible to name assemblies that shouldn’t be linked:

<ItemGroup>
  <LinkerRootAssemblies Include="MyAssembly" />
</ItemGroup>

For more info please refer to the documentation: https://github.com/dotnet/announcements/issues/30

Updated 01/11/2017 21:43

Action required: Greenkeeper could not be activated 🚨

TECLIB/react-winjs

🚨 You need to enable Continuous Integration on all branches of this repository. 🚨

To enable Greenkeeper, you need to make sure that a commit status is reported on all branches. This is required by Greenkeeper because it uses your CI build statuses to figure out when to notify you about breaking changes.

Since we didn’t receive a CI status on the greenkeeper/initial branch, it’s possible that you don’t have CI set up yet. We recommend using Travis CI, but Greenkeeper will work with every other CI service as well.

If you have already set up a CI for this repository, you might need to check how it’s configured. Make sure it is set to run on all new branches. If you don’t want it to run on absolutely every branch, you can whitelist branches starting with greenkeeper/.

Once you have installed and configured CI on this repository correctly, you’ll need to re-trigger Greenkeeper’s initial pull request. To do this, please delete the greenkeeper/initial branch in this repository, and then remove and re-add this repository to the Greenkeeper integration’s white list on Github. You’ll find this list on your repo or organization’s settings page, under Installed GitHub Apps.

Updated 21/11/2017 02:53

Stabilize the tests

fidals/stroyprombeton

Sometimes the tests fail because Selenium has not finished the requested actions by the time the django tests try to read the result:

======================================================================
FAIL: test_new_entity_creation (stroyprombeton.tests.tests_selenium_admin.TableEditor)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/unittest/case.py", line 59, in testPartExecutor
    yield
  File "/usr/local/lib/python3.6/unittest/case.py", line 601, in run
    testMethod()
  File "/drone/src/github.com/fidals/stroyprombeton/pull/135/stroyprombeton/tests/tests_selenium_admin.py", line 531, in test_new_entity_creation
    self.assertEqual(name_cell.get_attribute('title'), new_entity_text)
  File "/usr/local/lib/python3.6/unittest/case.py", line 821, in assertEqual
    assertion_func(first, second, msg=msg)
  File "/usr/local/lib/python3.6/unittest/case.py", line 1194, in assertMultiLineEqual
    self.fail(self._formatMessage(msg, standardMsg))
  File "/usr/local/lib/python3.6/unittest/case.py", line 666, in fail
    raise self.failureException(msg)
AssertionError: 'A' != 'A New stuff'
- A
+ A New stuff

Artemiy made the SE tests more or less stable in https://github.com/fidals/shopelectro/pull/167, and many of them share common parts with STB. So it’s worth looking at what was done in SE to stabilize the tests.

Updated 01/11/2017 07:24

[Do Not Merge Yet] Add gitlab builder support

samsung-cnct/kraken-tools

This PR adds gitlab-ci support.

It changes the test runner slightly to no longer pull credentials from an S3 bucket (access to which is granted through an IAM role on the nodes running the job) in favor of environment variables set in the job itself.

It can also remove images generated on the local container registry if you provide valid credentials.

Please see the instructions in the .gitlab-ci.yml file.

I marked this “do not merge” because I commented out the aws s3 commands, which are still needed until fail-fast integration is complete. I guess I could leave them in with those steps failing, but I’m unsure.

LMK

Updated 28/11/2017 23:28 1 Comments

Add a means of skipping CI on Circle

JuliaLang/julia

This PR has two distinct commits:

  1. Add a name attribute to most of the steps that run on Circle. This improves readability: to see the exact command, one need only expand the step in the Circle web UI. Further, remove unnecessary &&s in the commands. These were presumably there to ensure that each step stops at the first failure, but Circle steps run in /bin/bash -eo pipefail, so they are unnecessary. This also improves readability.

  2. Add recognition for the standard [ci skip] and [skip ci] tags available on Travis, AppVeyor, and our custom FreeBSD CI. While this doesn’t avoid running the build at all as it does on the other services, it does stop the build early, before building Julia and running the tests. This should free up Circle workers for other PRs. Also add [circle skip] and [skip circle], akin to those available on AppVeyor and our FreeBSD CI. A sketch of such an early-exit step follows.
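
A minimal sketch of such an early-exit step; CircleCI 2.0 exposes a circleci step halt helper inside the build, and the grep pattern mirrors the tags above:

```
# Halt the Circle job early when the commit message opts out of CI
git log -1 --pretty=%B \
    | grep -qiE '\[(ci skip|skip ci|circle skip|skip circle)\]' \
    && circleci step halt || true
```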

For ease of review it might help to view the diffs of the two commits separately.

I can remove the first commit if it’s for some reason controversial, but I really think it would be helpful for improving readability both in the web UI and in simply reading the YAML config.

Updated 03/11/2017 22:06 2 Comments

The team’s expenses must be read from a Google spreadsheet at build time

teamdigitale/teamdigitale.governo.it

Expected behavior

The team’s expenses must be read from a Google spreadsheet at build time.

Actual behavior

The expenses are read from the expenses.yml file.

Notes

A Google spreadsheet with two sheets must be set up, one for the spending areas and the other for the contracts; the columns must mirror the structure of the expenses.yml file.

During the site build, the expenses must be read from that spreadsheet.

Updated 22/11/2017 09:17

Testing

RIOT-OS/RIOT

Automated unit tests with hardware in the loop (SAMR21 plugged into the CI server?)

  • Related issues
    • [ ] #3363
    • [ ] #3392
    • [ ] #7871
  • Related PRs
    • [ ] #7653
    • [x] #7845
    • [x] #7906

Automated network functionality tests (e.g. RPL + UDP/PING tests through a border router, multi-hop) in IoT-LAB dev sites? Leverage PiFleet more?

  • Related issues
    • [ ] #3252

On-board CI testing in IoT-LAB (as it will soon provide the possibility to add custom nodes)

  • Related issues
    • [ ]

General CI testing

  • Related issues
    • [ ] #2143
    • [ ] #5319
  • Related PRs
    • [ ] #7258
    • [ ] #7786

Updated 30/11/2017 16:57
