Contribute to Open Source. Search issue labels to find the right project for you!

start building helm nightly to cover gaps in helm release cycle

samsung-cnct/k2

helm does not cut official releases at the same time as kubernetes. we rely heavily on helm for providing ancillary services on kubernetes (monitoring, logging, CI that targets kubernetes, etc). as such we should be doing our best to be:
- testing against alpha and beta versions of kubernetes
- papering over the release lag between kubernetes and helm

this is a result of a discussion raised in #222
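A rough sketch of what the nightly build step could look like, assuming the upstream helm repo's documented make targets and a working Go/glide toolchain (the clone path and install location are assumptions):

mkdir -p "${GOPATH}/src/k8s.io"
cd "${GOPATH}/src/k8s.io"
git clone https://github.com/kubernetes/helm.git || (cd helm && git pull)
cd helm
make bootstrap build                      # bootstrap fetches build deps, build produces bin/helm
cp bin/helm /usr/local/bin/helm-nightly   # install under a distinct name so the released helm stays usable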

Updated 22/03/2017 23:14

Create static definition for kubedns

samsung-cnct/k2

due to issues raised in #222 we need to create a pod definition for deploying a DNS solution to our kubernetes clusters. as a first pass this solution should have the following traits:
- use the newest version of KubeDNS (which is just called DNS now, which is kind of annoying)
- be manually tested against v1.4, v1.5 and v1.6
- be installed via an ansible role before helm is installed

Updated 22/03/2017 22:52 1 Comments

Migrate fabric stanza

samsung-cnct/k2

This ticket is to remove the old stanza, "#/deployment/fabric", from ansible/roles/kraken.config/files/config.yaml and update the dependent Ansible tasks to access #/deployment/clusters/fabricConfig instead. When doing this the k2 schema, ansible/roles/kraken.config/files/schema.json, should be updated as well.
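A quick way to find the dependent tasks (a sketch only; the exact way tasks reference the stanza is a guess, so the patterns may need adjusting):

grep -rn 'deployment\.fabric' ansible/    # tasks/templates still pointing at the old stanza
grep -rn 'fabricConfig' ansible/          # tasks already using the new per-cluster path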

Updated 22/03/2017 20:23 2 Comments

Migrate clusterServices to helmConfig

samsung-cnct/k2

clusterServices stanza

This ticket is to remove the old stanza, "#/deployment/clusterServices", from ansible/roles/kraken.config/files/config.yaml and update the dependent Ansible tasks to access #/deployment/clusters/{{clusterIndex}}/helmConfig instead. When doing this the k2 schema, ansible/roles/kraken.config/files/schema.json, should be updated as well.

Updated 22/03/2017 20:24 1 Comments

Migrate keyPair stanza

samsung-cnct/k2

keyPair

This ticket is to remove the old stanza, "#/deployment/keyPair", from ansible/roles/kraken.config/files/config.yaml and update the dependent Ansible tasks to access #/deployment/clusters/{{clusterIndex}}/nodes/{{nodeIndex}}/keyPair instead. When doing this the k2 schema, ansible/roles/kraken.config/files/schema.json, should be updated as well.

Updated 22/03/2017 20:24 2 Comments

Migrate containerConfig stanza

samsung-cnct/k2

containerConfig stanza

This ticket is to remove the old stanza, "#/deployment/containerConfig", from ansible/roles/kraken.config/files/config.yaml and update the dependent Ansible tasks to access #/deployment/clusters/{{clusterIndex}}/nodes/{{nodeIndex}}/containerConfig instead. When doing this the k2 schema, ansible/roles/kraken.config/files/schema.json, should be updated as well.

Updated 22/03/2017 20:24 1 Comments

Migrate kubeConfig stanza

samsung-cnct/k2

kubeConfig stanza

This ticket is to remove the old stanza, "#/deployment/kubeConfig", from ansible/roles/kraken.config/files/config.yaml and update the dependent Ansible tasks to access #/deployment/clusters/{{clusterIndex}}/nodes/{{nodeIndex}}/kubeConfig instead. When doing this the k2 schema, ansible/roles/kraken.config/files/schema.json, should be updated as well.

Updated 22/03/2017 20:25 1 Comments

Migrate osConfig stanza

samsung-cnct/k2

coreos stanza

This ticket is to remove the old stanza, "#/deployment/coreos", from ansible/roles/kraken.config/files/config.yaml and update the dependent Ansible tasks to access #/deployment/clusters/{{clusterIndex}}/nodes/{{nodeIndex}}/osConfig instead. When doing this the k2 schema, ansible/roles/kraken.config/files/schema.json, should be updated as well.

Updated 22/03/2017 20:25 1 Comments

Migrate providerConfig

samsung-cnct/k2

Boilerplate

The k2recon project involves creating a versioned schema to validate configuration files. As a result, some of the configuration properties have been renamed and stanzas have been reorganized. For backwards compatibility while we migrate to the new config, the old stanzas have been left in place alongside the new stanzas.

providerConfig stanza

This ticket is to remove the old stanza, "#/deployment/providerConfig", from ansible/roles/kraken.config/files/config.yaml and update the related Ansible tasks to match. When doing this the k2 schema, ansible/roles/kraken.config/files/schema.json, should be updated as well.

Updated 22/03/2017 20:25 2 Comments

K2-Down fails to bring down clusters with ELBs

samsung-cnct/k2

Currently k2-down is failing when we deploy an ELB with an external IP.

I can spin up a default cluster and bring it down successfully, but when I deploy the above service, k2 down fails at certain tasks (they seem to be a bit different each time; see the above comment for a list of common ones I see).

Bug repro:
1. Spin up default k2 cluster
2. Deploy service: https://gist.github.com/leahnp/7240e1fc4c2a73a75d0403f734e8555a
3. Run k2-down (it should fail at some point)

More details on the issue here: https://github.com/samsung-cnct/kraken-ci-jobs/issues/166
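As a workaround sketch (not the fix itself), deleting LoadBalancer-type services first lets AWS remove the ELBs and the network interfaces/security groups they hold, so terraform destroy has less to trip over. The jq dependency and paths are assumptions:

export KUBECONFIG=~/.kraken/<cluster name>/admin.kubeconfig
kubectl get svc --all-namespaces -o json \
  | jq -r '.items[] | select(.spec.type=="LoadBalancer") | "\(.metadata.namespace) \(.metadata.name)"' \
  | while read ns name; do kubectl delete svc "$name" --namespace "$ns"; done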

Updated 23/03/2017 22:22 12 Comments

Add "control plane cluster update"

samsung-cnct/k2

Control plane cluster update should be a new ansible role that is not part of the default set, must be called explicitly, and has a selector based on provider. The ansible role should perform the upgrade/downgrade tasks (upgrade and downgrade should be the same process) as laid out in the following tickets:

- gke: this is a no-op, gke controls its own control plane with no user input
- aws: #219 – this may be completed best by completing #60 first

Updated 17/03/2017 05:59

Validate format of properties

samsung-cnct/k2

Task

The k2 schema defines a format for some properties. Some of these formats (e.g. "ip4addr") are part of the JSON Schema standard and should be handled by conforming implementations. The existing JSON Schema validator should be extended to validate the non-standard formats (e.g. cidr, semver). Until this is done, non-standard formats should be ignored by all validators.

Updated 16/03/2017 04:14

Warn on deprecated k2 properties

samsung-cnct/k2

Task

Per #221, before a property can be removed it must be deprecated first to give the user a chance to stop depending on it. Programmatically this is done by adding the property to the depreciated list within the k2 schema. For example, [1] gives a hypothetical schema stanza. Since depreciated is an extension to JSON Schema (see PR #173 for the current discussion), no standard validator will print warnings. This ticket is to create an Ansible task which will print warnings as part of the kraken.config role.

[1]

"kubeConfig": {
"title": "A Kubernetes configuration",
"description": "The location and version of a container containing the Kubernetes hyperkube binary.",
"properties": {
"name": {
"default": "defaultKubeConfig",
"description": "Name of the Kubernetes configuration.",
"type": "string"
},
"hyperkubeLocation": {
"default": "gcr.io/google_containers/hyperkube",
"description": "Location of the Kubernetes container.",
"format": "uri",
"type": "string"
},
"version": {
"default": "v1.5.2",
"description": "Version of the hyperkube binary.",
"format": "symver",
"type": "string"
}
},
"required": [
"name"
],
"depreciated": [
"hyperkubeLocation"
],
"type": "object"
},
Updated 16/03/2017 04:15 1 Comments

Add checksums to Dockerfile to verify integrity

samsung-cnct/k2

The k2 Dockerfile currently contains lines like this:

wget http://storage.googleapis.com/kubernetes-helm/helm-${K8S_HELM_VERSION}-linux-amd64.tar.gz && tar -zxvf helm-${K8S_HELM_VERSION}-linux-amd64.tar.gz && mv linux-amd64/helm /usr/bin/ && rm -rf linux-amd64 helm-${K8S_HELM_VERSION}-linux-amd64.tar.gz

We should follow the best practices implemented in goglide and careen-goglide and verify the checksum of the tarball.
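A sketch of what the verified version of that line could look like (the sha256 value is a placeholder that would be pinned per helm version; K8S_HELM_VERSION comes from the existing Dockerfile):

HELM_TARBALL=helm-${K8S_HELM_VERSION}-linux-amd64.tar.gz
HELM_SHA256=<expected sha256 for this helm version>
wget http://storage.googleapis.com/kubernetes-helm/${HELM_TARBALL} \
  && echo "${HELM_SHA256}  ${HELM_TARBALL}" | sha256sum -c - \
  && tar -zxvf ${HELM_TARBALL} \
  && mv linux-amd64/helm /usr/bin/ \
  && rm -rf linux-amd64 ${HELM_TARBALL}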

Updated 22/03/2017 20:15

create CI job that verifies non-supported kubernetes templates are removed

samsung-cnct/k2

as of https://github.com/samsung-cnct/k2/issues/222#issuecomment-284583580 we will be introducing template directories and ansible version checks that should be removed once we no longer support the corresponding kubernetes version.

create a CI job that checks the source code for templates and version checks that target versions we no longer support. at the moment I am unsure how we are tracking which versions of kubernetes are supported, so that may also need to be determined here.
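A minimal sketch of what the check could look like, assuming version references appear as strings like v1.4 in the ansible tree and that the supported list can be sourced from wherever we end up tracking it (both are assumptions):

SUPPORTED="1.4 1.5 1.6"                                   # placeholder; read this from wherever support is tracked
for v in $(grep -rhoE 'v1\.[0-9]+' ansible/ | sort -u); do
  case " ${SUPPORTED} " in
    *" ${v#v} "*) ;;                                      # still supported
    *) echo "unsupported kubernetes version referenced: ${v}"; exit 1 ;;
  esac
done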

Updated 20/03/2017 17:30 2 Comments

Documentation should match code

samsung-cnct/k2

The current documentation is out of date and also includes features which have never been implemented. For example: https://github.com/samsung-cnct/k2/blob/master/Documentation/kraken-configs/kubelabels.md

We can fix this by reviewing the documentation, or, depending on the results of #221, we can generate the documentation from the config schema.
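If we go the generation route, something along these lines could produce a stub per top-level property (a sketch only; it assumes schema.json keeps descriptions under .properties, and the output path is made up):

jq -r '.properties | to_entries[] | "## \(.key)\n\n\(.value.description // "(no description)")\n"' \
  ansible/roles/kraken.config/files/schema.json > Documentation/kraken-configs/generated.md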

Updated 22/03/2017 20:17

Consolidate apiserver readiness checks

samsung-cnct/k2

We currently wait for the API server to be ready in 3 different places:

rodin:k2 dwat$ grep -r 'Wait for api server' ansible
ansible/roles/kraken.fabric/kraken.fabric.flannel/tasks/main.yml:- name: Wait for api server to become available in case it's not
ansible/roles/kraken.readiness/tasks/do-wait.yaml:- name: Wait for api server to become available in case it's not
ansible/roles/kraken.services/tasks/run-services.yaml:- name: Wait for api server to become available in case it's not
rodin:k2 dwat$ 

One of these has diverged from the rest, which suggests that it may be wrong. We should remove duplication like this from the code to make it easier to maintain.
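One option is a single shared wait task (or script) that the three roles include; a minimal sketch, assuming an API_SERVER endpoint variable and that /healthz is reachable without extra auth:

until curl -ksf "${API_SERVER}/healthz" >/dev/null; do
  echo "waiting for api server to become available..."
  sleep 5
done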

Updated 03/03/2017 00:25

tpc provider: terraform templates

samsung-cnct/k2

Attempt to follow the design of the aws terraform templates (see aws-template.yaml) to build a template to terraform using the triton resources.

This is a tracking issue for the following:
- [ ] vpc #193
- [ ] keypair (= key) #194
- [ ] subnet (= fabric) #195
- [ ] cluster_secgroup (= firewall) #196
- [ ] etcd (= machine) #197
- [ ] master (= machine) #197
- [ ] node (= machine) #197

Updated 15/03/2017 21:36 1 Comments

create jenkins-ci project plan

samsung-cnct/k2

our current jenkins CI system (kraken-ci) is unmaintained, based on jenkins v1, and the instructions to rebuild it no longer work. we are going to abandon this project in favor of a kubernetes-based jenkins build system built on jenkins v2.

this ticket will cover the basic planning and initial tickets for the new ci system

Updated 27/03/2017 05:29 12 Comments

Look into CoreRoller for managing CoreOS based installs

samsung-cnct/k2

CoreRoller ( https://github.com/coreroller/coreroller ) is a project that lets you manage the packages installed on a CoreOS machine as well as which version of CoreOS (ContainerLinux) is deployed where.

This service could be deployed to a kubernetes cluster (in a self-hosted sort of way) to help manage the nodes that the cluster is running on. It could also be used to keep a specific version of docker running on the fleet.

Updated 08/03/2017 22:05 1 Comments

Determine plan for K8S core compatibility

samsung-cnct/k2

As K2 uses a larger breadth of options for kubelet, apiserver, etc., compatibility across versions of kubernetes can become problematic.

For example, in 1.6, --register-schedulable is deprecated in favor of taints (see https://github.com/kubernetes/kubernetes/pull/31647 ), so how we register our nodes and use taints (see #151 ) will change in 1.6 in a way that is not compatible with 1.5 and below.

What is our plan? Do we identify that some features (like taints) are incompatible with the version of K8S you want to run? Do we keep versions of templates based on the minor version of K8S that will be run? And how do we handle upgrades (going from K8S 1.5 to 1.6; a new ASG with a new launchconfig in AWS, for example)?

Updated 22/03/2017 23:17 17 Comments

Add cluster autoscaling support

samsung-cnct/k2

On cloud environments (AWS, GCE, Azure) adding additional nodes can be done through APIs.

Having a right-sized kubernetes environment would reduce costs while at the same time being responsive to the cluster's need for additional hardware when appropriate.

Kubernetes supports cluster autoscaling for GCE, GKE and AWS. We should make this an option in K2. See https://github.com/kubernetes/contrib/tree/master/cluster-autoscaler for reference.

Updated 24/01/2017 21:43

config file usability failure - empty fields required

samsung-cnct/k2

I removed part of the ‘authentication’ section of the aws provider and received an error complaining about it not being present. this was probably wrong.

the authentication section allows for specifying keys either directly (hard-coded or via environment variables) or through a credentialsFile. when I commented out the credentialsFile key and supplied valid values to the accessKey and accessSecret keys, I received an error complaining about the missing credentialsFile key.

Updated 24/01/2017 21:43

Canal install pulls directly from canal

samsung-cnct/k2

the line: https://github.com/samsung-cnct/k2/blob/6d8a7b43a7ccf2a49f331ca239a418d88e27b2ca/ansible/roles/kraken.master/kraken.master.docker/templates/units.kubelet.part.jinja2#L13 pulls directly from an external source. this means we can not have an airgapped deploy with this process.

low priority but when air gapping comes up this will need to be addressed.

why is this there: https://github.com/tigera/canal/issues/14

Updated 24/01/2017 21:43

canal networking questions

samsung-cnct/k2

There are a number of questions unresolved after the Canal PR. They were deemed large enough to follow up on but small enough to not impede the PR.
- ask Canal team what the appropriate resource restrictions are. these applications run inside of k8s so we should have some accounting for them ahead of time
- why is the tool install not an init container? one less container running forever

Updated 25/01/2017 22:03

update user's default kubectl with cluster context

samsung-cnct/k2

Currently we create a separate kubeconfig file for each cluster under the cluster's folder, along with a bunch of other cluster state. It would be most convenient for users of kubectl and helm to have that information (optionally) exported into their default kubectl config under ~/.kube/config

This might be more easily accomplished in https://github.com/samsung-cnct/k2cli
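A sketch of how the export could work, using kubectl's multi-file KUBECONFIG merge and flattening the result back into one file (paths follow the ~/.kraken/<cluster name> layout used elsewhere in these tickets):

export KUBECONFIG=~/.kube/config:~/.kraken/<cluster name>/admin.kubeconfig
kubectl config view --flatten > /tmp/kubeconfig.merged   # merge and inline certificates
mv /tmp/kubeconfig.merged ~/.kube/config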

Updated 24/01/2017 21:43 3 Comments

pre-allocate ips for etcd nodes

samsung-cnct/k2

pre-allocating ips for etcd nodes will allow us to create DNS entries for the etcd nodes before they are finished spinning up. This will lower the total time for a cluster to start and will give us more control over when services become visible to the rest of the cluster.

This is a follow on ticket from https://github.com/samsung-cnct/k2/issues/99

Updated 24/01/2017 21:43

Kubernetes Nodes Are Not Labelled By Type

samsung-cnct/k2

There is no label that allows determining which are special nodes and which are regular cluster nodes. Previously the "special nodes" had specific names, e.g. node-001, node-002, and the "regular" nodes just had "autoscale".

mikeln-bender:kubernetes Mikel_Nelson$ kubectl --kubeconfig=/Users/Mikel_Nelson/.kraken/mikeln-k2/admin.kubeconfig get node --show-labels
NAME                                         STATUS                     AGE       LABELS
ip-10-0-100-135.us-west-2.compute.internal   Ready                      5m        beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m3.medium,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=us-west-2,failure-domain.beta.kubernetes.io/zone=us-west-2b,kubernetes.io/hostname=10.0.100.135
ip-10-0-122-137.us-west-2.compute.internal   Ready                      5m        beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m3.medium,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=us-west-2,failure-domain.beta.kubernetes.io/zone=us-west-2b,kubernetes.io/hostname=10.0.122.137
ip-10-0-122-138.us-west-2.compute.internal   Ready                      5m        beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m3.medium,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=us-west-2,failure-domain.beta.kubernetes.io/zone=us-west-2b,kubernetes.io/hostname=10.0.122.138
ip-10-0-122-139.us-west-2.compute.internal   Ready                      5m        beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m3.medium,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=us-west-2,failure-domain.beta.kubernetes.io/zone=us-west-2b,kubernetes.io/hostname=10.0.122.139
ip-10-0-13-86.us-west-2.compute.internal     Ready,SchedulingDisabled   5m        beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m3.medium,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=us-west-2,failure-domain.beta.kubernetes.io/zone=us-west-2a,kubernetes.io/hostname=10.0.13.86
ip-10-0-180-246.us-west-2.compute.internal   Ready,SchedulingDisabled   5m        beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m3.medium,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=us-west-2,failure-domain.beta.kubernetes.io/zone=us-west-2c,kubernetes.io/hostname=10.0.180.246
ip-10-0-251-38.us-west-2.compute.internal    Ready                      5m        beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m3.medium,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=us-west-2,failure-domain.beta.kubernetes.io/zone=us-west-2c,kubernetes.io/hostname=10.0.251.38
ip-10-0-251-39.us-west-2.compute.internal    Ready                      5m        beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m3.medium,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=us-west-2,failure-domain.beta.kubernetes.io/zone=us-west-2c,kubernetes.io/hostname=10.0.251.39
ip-10-0-251-40.us-west-2.compute.internal    Ready                      3m        beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m3.medium,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=us-west-2,failure-domain.beta.kubernetes.io/zone=us-west-2c,kubernetes.io/hostname=10.0.251.40
ip-10-0-34-92.us-west-2.compute.internal     Ready                      5m        beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m3.medium,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=us-west-2,failure-domain.beta.kubernetes.io/zone=us-west-2a,kubernetes.io/hostname=10.0.34.92
ip-10-0-34-93.us-west-2.compute.internal     Ready                      5m        beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m3.medium,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=us-west-2,failure-domain.beta.kubernetes.io/zone=us-west-2a,kubernetes.io/hostname=10.0.34.93
ip-10-0-34-94.us-west-2.compute.internal     Ready                      5m        beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m3.medium,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=us-west-2,failure-domain.beta.kubernetes.io/zone=us-west-2a,kubernetes.io/hostname=10.0.34.94
ip-10-0-34-95.us-west-2.compute.internal     Ready                      5m        beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m3.medium,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=us-west-2,failure-domain.beta.kubernetes.io/zone=us-west-2a,kubernetes.io/hostname=10.0.34.95
ip-10-0-62-131.us-west-2.compute.internal    Ready                      5m        beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m3.medium,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=us-west-2,failure-domain.beta.kubernetes.io/zone=us-west-2a,kubernetes.io/hostname=10.0.62.131
ip-10-0-87-69.us-west-2.compute.internal     Ready,SchedulingDisabled   5m        beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m3.medium,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=us-west-2,failure-domain.beta.kubernetes.io/zone=us-west-2b,kubernetes.io/hostname=10.0.87.69

Need a label on each node to allow determination of the k2 node type.
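A hedged sketch of labeling an existing node by its k2 node pool (the label key and value are examples, not an agreed convention); longer term the kubelet's --node-labels flag could apply these at registration time:

kubectl --kubeconfig=~/.kraken/<cluster name>/admin.kubeconfig \
  label node ip-10-0-100-135.us-west-2.compute.internal nodepool=clusterNodes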

Updated 24/01/2017 21:43 2 Comments

K2 Up Ending Usage Could Be Clearer

samsung-cnct/k2

K2 should be able to customize this output with the values for the system that just started:

PLAY RECAP *********************************************************************
localhost                  : ok=123  changed=86   unreachable=0    failed=0

To use kubectl:
kubectl --kubeconfig=~/.kraken/<cluster name>/admin.kubeconfig <kubectl command>

For example:
kubectl --kubeconfig=~/.kraken/krakenCluster/admin.kubeconfig get services --all-namespaces
To use helm:
KUBECONFIG=~/.kraken/<cluster name>/admin.kubeconfig; helm <helm command> --home ~/.kraken/<cluster name>/.helm
For example:
KUBECONFIG=~/.kraken/krakenCluster/admin.kubeconfig; helm list --home ~/.kraken/krakenCluster/.helm

To ssh:
ssh <node pool name>-<number> -F ~/.kraken/<cluster name>/ssh_config
For example:
ssh masterNodes-3 -F ~/.kraken/krakenCluster/ssh_config

e.g. <cluster name> and ~/ should be replaced with the actual values for the cluster that just came up.

~/ in --kubeconfig does not work (on some systems). It needs to be the full pathname.

Updated 24/01/2017 21:43 1 Comments

Add "kubelet cluster update"

samsung-cnct/k2

Kubelet cluster update should be a new ansible role that is not part of the default set, must be called explicitly, and has a selector based on provider. The ansible role should perform the upgrade/downgrade tasks (upgrade and downgrade should be the same process) as laid out in the following tickets:
- gke: https://github.com/samsung-cnct/k2/issues/217#issuecomment-285467380
- aws: https://github.com/samsung-cnct/k2/issues/218 (see last comment for additional detail. if there is confusion, please speak to @coffeepac) – this may be completed best by completing #60 first

Updated 17/03/2017 06:01 2 Comments

ssh to instances

samsung-cnct/k2

I'm not sure whether this is working as intended. I am using the image "quay.io/samsung_cnct/k2:latest".

there is no ssh client in the image, so I use my laptop host's ssh client.

the problem is: 1) the owner of the RSA key is root, not mine. 2) the IdentityFile path in ssh_config is under root's home, not my home.

So I just manually edited the ssh_config under .kraken and tried ssh with the root account.

is there a more convenient way to do this?
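A hedged sketch of a manual workaround (the key filename glob is a guess and GNU sed is assumed):

sudo chown "$USER" ~/.kraken/<cluster name>/*key*                    # take ownership of the generated key
sed -i "s|/root/|${HOME}/|g" ~/.kraken/<cluster name>/ssh_config     # point IdentityFile at your own home
ssh masterNodes-1 -F ~/.kraken/<cluster name>/ssh_config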

below is my host info

Linux keyolk-book 4.4.2-6ph #1 SMP Mon Jun 13 20:01:22 KST 2016 x86_64 GNU/Linux
Containers: 24
 Running: 2
 Paused: 0
 Stopped: 22
Images: 1
Server Version: 1.12.2
Storage Driver: overlay
 Backing Filesystem: xfs
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: null bridge host overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: apparmor seccomp
Kernel Version: 4.4.2-6ph
Operating System: Apricity OS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 15.61 GiB
Name: keyolk-book
ID: AKMT:LO4B:TAXU:M6K3:RYEQ:GE5B:GLZ6:ISBQ:WBN7:FYKF:CMN6:TN2W
Docker Root Dir: /home/keyolk/mount/flash/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Insecure Registries:
 127.0.0.0/8
Updated 25/01/2017 20:56 1 Comments

Expose debug logs

samsung-cnct/k2

I have been having trouble for about a week bringing down a k2 cluster using down.sh

I usually get stuck here:

TASK [/kraken/ansible/roles/kraken.provider/kraken.provider.aws : Run terraform destroy] ***
FAILED - RETRYING: TASK: /kraken/ansible/roles/kraken.provider/kraken.provider.aws : Run terraform destroy (10 retries left).
FAILED - RETRYING: TASK: /kraken/ansible/roles/kraken.provider/kraken.provider.aws : Run terraform destroy (9 retries left).
FAILED - RETRYING: TASK: /kraken/ansible/roles/kraken.provider/kraken.provider.aws : Run terraform destroy (8 retries left).

At this point in the down.sh process, there are always things from the cluster still alive on AWS, usually Load Balancers and a VPC, sometimes a subnet.
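Until down.sh surfaces debug output itself, one approach is to re-run the failing step by hand with terraform's own debug logging; this is a sketch, and the state directory path is a guess based on the ~/.kraken/<cluster name> layout:

cd ~/.kraken/<cluster name>
TF_LOG=DEBUG terraform destroy      # TF_LOG=DEBUG prints the AWS API calls and the exact dependency errors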

Updated 24/01/2017 21:43

Get Sysdig-Agent Working Correctly.

samsung-cnct/k2

Get K2 working similarly to the original kraken for sysdigcloud:
1) get sysdig-agent installed and running if the user specifies a sysdigcloud_access_key, otherwise do not run the agent.
2) assume each user will have their own unique sysdigcloud_access_key associated with their sysdigcloud.com account.
3) it should run on all nodes and all control plane machines (master, apiserver, etcd, …)

Note: originally the agent HAD to be installed via cloudconfig for it to work correctly. Also, the key to seeing kubernetes data was having it running on the APIServer.
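A minimal sketch of point 1, assuming the agent is launched as a plain docker container (the mounts follow sysdig's documented docker run; how k2 actually launches system containers may differ):

if [ -n "${SYSDIGCLOUD_ACCESS_KEY}" ]; then        # only run the agent when a key is supplied
  docker run -d --name sysdig-agent --privileged --net host --pid host \
    -e ACCESS_KEY="${SYSDIGCLOUD_ACCESS_KEY}" \
    -v /var/run/docker.sock:/host/var/run/docker.sock \
    -v /dev:/host/dev \
    -v /proc:/host/proc:ro \
    sysdig/agent
fi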

Updated 25/01/2017 21:28 4 Comments

provide method for supplying additional arguments to k8s daemons

samsung-cnct/k2

currently in kraken v1 there doesn’t seem to be a way to supply additional flags to the k8s daemons. I would like to be able to, at execution time, specify things such as ‘allow-privileged’ or ‘cloud-provider’ or any other available option. These options may not be a good idea as defaults for the reference stack but are necessary for certain applications.

Updated 24/01/2017 21:43
