Kops


Dump a cluster config

There are two types of configs that make up a kops-created cluster:

1. the cluster config

2. an instance group config, one for each instance group.

The easiest way to save a copy of these is to run

kops edit cluster

then write the config to a file.

kops edit ig <instance group name>

and save that to a file.
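Alternatively, kops can dump both specs directly without going through an editor (a sketch, assuming a reasonably recent kops; ${cn} is the cluster name, defined in the script below):

kops get cluster --name ${cn} -o yaml > cluster.yaml

kops get ig --name ${cn} nodes -o yaml > ig-nodes.yaml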

The Shiva (AWS)

#!/bin/sh

# Source:
# https://kubernetes.io/docs/setup/custom-cloud/kops/
# prerequisite: a Route 53 domain registered and hosted

# check existing status:

echo subdomain: ${subdomain}
echo bucketconfig: ${bucketconfig}
echo prefix: ${prefix}
echo cn: ${cn}
echo pubkey: ${pubkey}

export subdomain="dev.thedomain.com"
export bucketconfig="dthornton-clusters"
export prefix="lab001"
export cn="${prefix}.${subdomain}" # clustername
export pubkey="${HOME}/.ssh/dthornton.pub" # use ${HOME}: a quoted ~ does not expand

# check again:

echo subdomain: ${subdomain}
echo bucketconfig: ${bucketconfig}
echo prefix: ${prefix}
echo cn: ${cn}
echo pubkey: ${pubkey}


# does the bucket exist?
aws s3api list-buckets --output table | grep ${bucketconfig}.${subdomain}
# if not make a bucket:
echo aws s3 mb s3://${bucketconfig}.${subdomain}
# aws s3 mb s3://${bucketconfig}.${subdomain}
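# kops recommends versioning on the state store bucket so that older
# cluster state can be recovered (a sketch; same bucket name as above):
# aws s3api put-bucket-versioning --bucket ${bucketconfig}.${subdomain} --versioning-configuration Status=Enabled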

# sync a local copy
# aws s3 sync s3://${bucketconfig}.${subdomain} s3bucket

export KOPS_STATE_STORE="s3://${bucketconfig}.${subdomain}"

echo KOPS_STATE_STORE: ${KOPS_STATE_STORE}
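# alternatively, kops commands accept --state if you would rather not
# rely on the environment variable (a sketch):
# kops get clusters --state ${KOPS_STATE_STORE}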

# example:
# kops create cluster --zones=us-east-1c useast1.dev.quadratic.net

# cluster creation, chicken and egg:
# this command creates the kops cluster object but fails to create all the AWS cloud objects, because there is no key to give the instances.
kops create cluster --zones="ca-central-1a,ca-central-1b" "${cn}"

# image ami-9526abf1: latest ca-central-1 ubuntu as of Tue 18 Sep 2018 10:52:50 EDT
kops create cluster \
--zones ca-central-1a,ca-central-1b \
--master-zones ca-central-1a \
--image ami-9526abf1 \
--ssh-public-key ${pubkey} \
--node-size t2.medium \
--node-count 2 \
--master-size t2.medium \
--network-cidr 10.10.0.0/16 \
--dns-zone ${subdomain} \
--cloud-labels "owner=dthornton,managedby=kops" \
--name ${cn} \
--yes
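
# wait for the cluster to come up before using it (a sketch):
# kops validate cluster --name ${cn}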

The kube config is: /Users/david/.kube/config

api url: https://api.${cn}/api/v1/nodes?limit=500

optional:

kops update cluster --name $cn --yes

Note that above we explicitly tell it what public key to use at the outset.

# now make a kops secret of type sshpublickey. This assumes you already have a
# private/public key pair and you are giving kops the public part so that it
# can hand it to Kubernetes and AWS.

# kops create secret --name ${cn} sshpublickey admin -i ${pubkey} # ${pubkey} already holds the full path
# kops edit cluster ${cn}

# ig = instance group

kops edit ig --name=${cn} nodes
kops edit ig --name=${cn} master-ca-central-1a
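
For reference, the spec you land in looks roughly like this (a sketch of a kops v1alpha2 InstanceGroup; the sizes and subnets match the create command above, other fields will vary by cluster):

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: lab001.dev.thedomain.com
  name: nodes
spec:
  machineType: t2.medium
  maxSize: 2
  minSize: 2
  role: Node
  subnets:
  - ca-central-1a
  - ca-central-1b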

#Suggestions:
# * validate cluster: kops validate cluster
# * list nodes: kubectl get nodes --show-labels
# * ssh to the master: ssh -i ~/.ssh/id_rsa admin@api.cacentral.dev.quadratic.net
# * the admin user is specific to Debian. If not using Debian please use the appropriate user based on your OS.
# * read about installing addons at: https://github.com/kubernetes/kops/blob/master/docs/addons.md.

kops update cluster ${cn} --yes

kops rolling-update cluster

# another example: specify master zones, node count, and image at creation:

#kops create cluster \
#--master-zones=us-east-1a,us-east-1b,us-east-1c \
#--zones=us-east-1a,us-east-1b,us-east-1c \
#--node-count=2 \
#--image ami-32705b49 \
#${cn}

kubectl -n kube-system get po

This sets the namespace to kube-system and gets the pods.
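
To make kube-system the default namespace for subsequent commands, newer kubectl can set it on the current context (a sketch):

kubectl config set-context --current --namespace=kube-system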

# Note that kops delete cleans the cluster's entries out of the kubectl config file: "/Users/david/.kube/config"
kops delete cluster ${cn} --yes

Updating apiserver

At this time kops update cluster will not detect changes to apiserver settings that need cluster updates; you need --force.
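
A sketch of the sequence, with ${cn} as above:

kops update cluster ${cn} --force --yes

kops rolling-update cluster --yes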


Tricks for making small updates to a host

In a kops environment you might want to make a change to the cluster spec that implies a change to a node (Master or Node).

Normally you would:

kops edit cluster # edit the text description of the cluster as per kops' spec ( see https://github.com/kubernetes/kops/blob/master/docs/cluster_spec.md )
kops update cluster # review your change
kops update cluster --yes # apply your change to the files in the KOPS bucket.
kops rolling-update cluster # review the list of hosts that will be updated.
kops rolling-update cluster --yes # actually reboot, recreate, restart the nodes that need it.

This can be a long process, especially since kops waits a long time (5m) between per-host changes, to be sure that a given host has had enough time to reboot, for example.
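
If the waits themselves are the problem, the intervals are tunable (a sketch; the interval values here are illustrative):

kops rolling-update cluster --yes --master-interval=3m --node-interval=1m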

There is another way.

( I found it here: https://github.com/kubernetes/kops/issues/3645 )

You can "test" a change easily on a per-host basis by logging into the host and telling it to update itself manually (do this after the "kops update cluster --yes").

sudo SKIP_PACKAGE_UPDATE=1 /var/cache/kubernetes-install/nodeup --conf=/var/cache/kubernetes-install/kube_env.yaml --v=8

nodeup is the tool that kops uses to get kubernetes configured on the host.

Note that in the example I found, the path to the nodeup binary and the kube_env file was different from what we had in uk-blue.

I used locate to find the correct binary and config file.
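
A sketch of that search, run on the host (assuming locate is installed and its database is current):

sudo updatedb
locate nodeup
locate kube_env.yaml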

It's worth looking at the config file to get a better idea of what it is that the nodeup command is doing.

I figured this out in the process of getting auditing to work. I wanted to define an audit file on the host file system for the kube apiserver to use.

I was able to make this change to one host by hand and examine the effect without interrupting service otherwise.

Also see