The cluster has to interact with a cloud provider.
This installer renders assets locally, copies them to the corresponding machines, installs dependencies on those machines, and runs scripts. For this purpose it uses SSH to connect to the machines and therefore requires passwordless SSH access to them.
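Passwordless access is typically set up once per machine with an SSH key; the user and host below are placeholders, not names used by the installer:

```
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519   # skip if you already have a key
ssh-copy-id <user>@<host>                          # repeat for every machine in the cluster
```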
It works in two phases:
On the first pass it brings up an etcd ring; having a working etcd ring allows us to bootstrap all the other control-plane components in HA mode.
On the second pass, the script runs
`kubeadm init --config=OUR_MASTER_CONFIG.yaml` on every master node. During that phase kubeadm will show warnings like this:
```
[preflight] Running pre-flight checks
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
[WARNING FileExisting-crictl]: crictl not found in system path
```
This is entirely normal and expected: we generated
etcd.yaml ourselves and started the kubelet before
`kubeadm init` (hence the port-in-use warning). In normal kubeadm operation these warnings are fatal errors, but for our use case (a kubeadm-based HA setup) they can safely be ignored.
Finally, the script runs
`kubeadm join` on every worker node.
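On each worker this boils down to a join command of the following form; the token and CA-certificate hash are placeholders for the values printed when the control plane was initialized, and the exact flags depend on your kubeadm version:

```
sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```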
See the cloudconfig-<providername>.sample.conf files for a reference.
Copy the config-example.sh script to
config.sh, edit the variables, and run the installer.
To add worker nodes, simply add them to the node list in
config.sh and execute the installer again.
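For example, adding a worker usually means appending its hostname to the node list; the variable name below is illustrative, so check config-example.sh for the one your copy actually uses:

```shell
# config.sh (fragment) -- WORKERS is an illustrative name, see config-example.sh
WORKERS="worker1 worker2 worker3"   # worker3 is the newly added node
```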
First drain the node you want to update:

```
kubectl drain <node name>
```
Then edit
/etc/kubernetes/kubeadm-config.yaml and set the desired Kubernetes version.
Now you can simply initialize this node with the new Kubernetes version:

```
sudo kubeadm init --config /etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors all
```
Once that’s done you should see the apiserver, controller-manager and scheduler restart. These components are now running the new version.
Don’t forget to uncordon the node again:

```
kubectl uncordon <node name>
```
Repeat for all other nodes one by one.
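The per-node cycle above (drain, re-init, uncordon) can be sketched as a small loop. Everything here is an assumption for illustration: the node names, the `root` SSH user, and the `upgrade_nodes` helper are not part of the installer. By default `RUN=echo` makes it a dry run that only prints the commands it would execute:

```shell
#!/usr/bin/env bash
set -eu

NODES="${NODES:-master1 master2 master3}"   # assumption: replace with your node names
RUN="${RUN:-echo}"                          # set RUN="" to actually execute the commands

upgrade_nodes() {
  for node in $NODES; do
    # Drain the node, run kubeadm init with the updated config, then uncordon it.
    $RUN kubectl drain "$node"
    $RUN ssh "root@$node" \
      "sudo kubeadm init --config /etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors all"
    $RUN kubectl uncordon "$node"
  done
}

upgrade_nodes
```

Keeping the loop a dry run by default makes it easy to review the exact commands before letting them touch a live cluster.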