Kubernetes 1.21.0 has been officially released, and highly available clusters can be upgraded to it directly (the mirror on hub.docker.com is no longer updated, so registry.cn-hangzhou.aliyuncs.com/google_containers is used instead). The fast upgrade (including quick-download links for images inside China) consists of three main steps: upgrading the kubeadm/kubectl/kubelet packages, pulling the container images, and upgrading the Kubernetes cluster. See "Holding a software package at a fixed version on Ubuntu" for installing a specific version of Docker CE.
- Image version changes from K8s 1.20.x to 1.21.x:
- k8s.gcr.io/kube-apiserver:v1.21.0
- k8s.gcr.io/kube-controller-manager:v1.21.0
- k8s.gcr.io/kube-scheduler:v1.21.0
- k8s.gcr.io/kube-proxy:v1.21.0
- k8s.gcr.io/pause:3.4.1
- k8s.gcr.io/etcd:3.4.13-0
- k8s.gcr.io/coredns/coredns:v1.8.0
If you encounter problems, refer to the troubleshooting notes listed below.
Transfer the images to the corresponding nodes in advance, then run the following command on any master node to complete the upgrade:
kubeadm upgrade apply v1.21.0
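Transferring the images to a node can be done by exporting them on a machine that already has them and loading them on the target; a minimal sketch, where the node name `node1`, the `root` login, and the archive path are assumptions to adjust for your cluster:

```shell
# Hypothetical target node and archive name; adjust for your environment
NODE=node1
ARCHIVE=k8s-v1.21.0.tar

# Bundle the v1.21.0 images into a single tar archive
docker save -o "$ARCHIVE" \
  k8s.gcr.io/kube-apiserver:v1.21.0 \
  k8s.gcr.io/kube-controller-manager:v1.21.0 \
  k8s.gcr.io/kube-scheduler:v1.21.0 \
  k8s.gcr.io/kube-proxy:v1.21.0 \
  k8s.gcr.io/pause:3.4.1 \
  k8s.gcr.io/etcd:3.4.13-0 \
  k8s.gcr.io/coredns/coredns:v1.8.0

# Copy the archive to the node and load it into the local docker daemon
scp "$ARCHIVE" "root@${NODE}:/tmp/"
ssh "root@${NODE}" "docker load -i /tmp/${ARCHIVE}"
```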
Add new nodes in the current cluster:
- Step 1: recreate the certificate key and token:
sudo kubeadm init phase upload-certs --upload-certs
# Output:
# [upload-certs] Using certificate key:
# 2ffe5bbf7d2e670d5bcfb03dac194e2f21eb9715f2099c5f8574e4ba7679ff78

# Use the certificate key when creating the join command for additional master nodes
kubeadm token create --print-join-command --certificate-key 2ffe5bbf7d2e670d5bcfb03dac194e2f21eb9715f2099c5f8574e4ba7679ff78
- Step 2: add a Worker node:
kubeadm join 192.168.199.173:6443 --token rlxvkn.2ine1loolri50tzt --discovery-token-ca-cert-hash sha256:86e68de8febb844ab8f015f6af4526d78a980d9cdcf7863eebb05b17c24b9383
- Step 3: add a master node:
kubeadm join 192.168.199.173:6443 --token rlxvkn.2ine1loolri50tzt --discovery-token-ca-cert-hash sha256:86e68de8febb844ab8f015f6af4526d78a980d9cdcf7863eebb05b17c24b9383 --control-plane --certificate-key 440a880086e7e9cbbcebbd7924e6a9562d77ee8de7e0ec63511436f2467f7dde
Deploying kubernetes on arm reference:
Some minor errors occurred during the upgrade and were later resolved:
- etcd errors when upgrading a highly available Kubernetes cluster
- Troubleshooting Ubuntu cross-version upgrade errors
- Fixing the "no_pubkey" error during Ubuntu apt upgrades
1. Upgrade kubeadm / kubectl / kubelet
Set the software source in China, refer to: kubernetes for china
sudo apt install kubeadm=1.21.0-00 kubectl=1.21.0-00 kubelet=1.21.0-00
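Since a plain `apt upgrade` would otherwise move these packages past the cluster version, they can be pinned after installation — the same version-locking idea referenced at the top of the article. A minimal sketch:

```shell
# Pinned package version for the 1.21.0 release (Ubuntu/Debian packaging)
KUBE_PKG_VERSION=1.21.0-00

# Install the exact versions, then hold them so `apt upgrade` skips them
sudo apt-get install -y \
  kubeadm=${KUBE_PKG_VERSION} kubectl=${KUBE_PKG_VERSION} kubelet=${KUBE_PKG_VERSION}
sudo apt-mark hold kubeadm kubectl kubelet

# Before the next cluster upgrade, release the hold again:
# sudo apt-mark unhold kubeadm kubectl kubelet
```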
To view the container image versions for this release:
kubeadm config images list
The output is as follows:
~# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.21.0
k8s.gcr.io/kube-controller-manager:v1.21.0
k8s.gcr.io/kube-scheduler:v1.21.0
k8s.gcr.io/kube-proxy:v1.21.0
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns/coredns:v1.8.0
2. Pull container image
The official Kubernetes images are hosted on gcr.io and cannot be pulled directly from China. I originally mirrored them to Alibaba Cloud's container registry in the Hangzhou region, and pulls from it are still fairly fast. The mirrorgcrio mirror on hub.docker.com is no longer updated, but some community members maintain up-to-date mirrors of the images, which can be used directly.
#MY_REGISTRY=mirrorgcrio   # mirrorgcrio has not been updated
MY_REGISTRY=registry.cn-hangzhou.aliyuncs.com/google_containers
K8S_VERSION="1.21.0"

echo ""
echo "=========================================================="
echo "Pull Kubernetes for x64 v$K8S_VERSION Images from docker.io ......"
echo "=========================================================="
echo ""

## Pull images
docker pull ${MY_REGISTRY}/kube-apiserver:v$K8S_VERSION
docker pull ${MY_REGISTRY}/kube-controller-manager:v$K8S_VERSION
docker pull ${MY_REGISTRY}/kube-scheduler:v$K8S_VERSION
docker pull ${MY_REGISTRY}/kube-proxy:v$K8S_VERSION
docker pull ${MY_REGISTRY}/etcd:3.4.13-0
docker pull ${MY_REGISTRY}/pause:3.4.1
#docker pull ${MY_REGISTRY}/coredns-arm64:1.8.0
docker pull coredns/coredns:1.8.0

## Add tags
docker tag ${MY_REGISTRY}/kube-apiserver:v$K8S_VERSION k8s.gcr.io/kube-apiserver:v$K8S_VERSION
docker tag ${MY_REGISTRY}/kube-scheduler:v$K8S_VERSION k8s.gcr.io/kube-scheduler:v$K8S_VERSION
docker tag ${MY_REGISTRY}/kube-controller-manager:v$K8S_VERSION k8s.gcr.io/kube-controller-manager:v$K8S_VERSION
docker tag ${MY_REGISTRY}/kube-proxy:v$K8S_VERSION k8s.gcr.io/kube-proxy:v$K8S_VERSION
docker tag ${MY_REGISTRY}/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
docker tag ${MY_REGISTRY}/pause:3.4.1 k8s.gcr.io/pause:3.4.1
#docker tag ${MY_REGISTRY}/coredns-arm64:1.8.0 k8s.gcr.io/coredns:1.8.0
# kubeadm 1.21 expects k8s.gcr.io/coredns/coredns with a v-prefixed tag
docker tag coredns/coredns:1.8.0 k8s.gcr.io/coredns/coredns:v1.8.0

echo ""
echo "=========================================================="
echo "Pull Kubernetes for x64 v$K8S_VERSION Images FINISHED."
echo "into docker.io/mirrorgcrio, "
echo "  by openthings@https://my.oschina.net/u/2306127."
echo "=========================================================="
echo ""
Save as a shell script and execute.
- Alternatively, download the script: https://github.com/openthings/kubernetes-tools/blob/master/kubeadm/2-images/
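Before upgrading, it may help to confirm that every required image is actually present in the local cache; a sketch that checks the list from the `kubeadm config images list` output above:

```shell
# The images kubeadm 1.21.0 expects (from `kubeadm config images list`)
REQUIRED_IMAGES="k8s.gcr.io/kube-apiserver:v1.21.0 \
k8s.gcr.io/kube-controller-manager:v1.21.0 \
k8s.gcr.io/kube-scheduler:v1.21.0 \
k8s.gcr.io/kube-proxy:v1.21.0 \
k8s.gcr.io/pause:3.4.1 \
k8s.gcr.io/etcd:3.4.13-0 \
k8s.gcr.io/coredns/coredns:v1.8.0"

# Report any image that is not in the local docker cache
for img in $REQUIRED_IMAGES; do
  if docker image inspect "$img" >/dev/null 2>&1; then
    echo "OK      $img"
  else
    echo "MISSING $img"
  fi
done
```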
3. Upgrade Kubernetes cluster
New installation:
# Specify the IP address and version 1.21.0:
sudo kubeadm init --kubernetes-version=v1.21.0 --apiserver-advertise-address=192.168.90.100 --pod-network-cidr=10.244.0.0/16
High availability installation (multiple master nodes):
sudo kubeadm init --kubernetes-version=v1.21.0 \
    --apiserver-advertise-address=192.168.90.100 \
    --control-plane-endpoint=192.168.90.100:6443 \
    --pod-network-cidr=10.244.0.0/16 \
    --upload-certs
First check the version of each component that needs to be upgraded.
Running kubeadm upgrade plan outputs the version upgrade information as follows:
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     1 x v1.20.4   v1.21.0
            8 x v1.20.4   v1.21.0

Upgrade to the latest version in the v1.20 series:

COMPONENT            CURRENT    AVAILABLE
API Server           v1.20.4    v1.21.0
Controller Manager   v1.20.4    v1.21.0
Scheduler            v1.20.4    v1.21.0
Kube Proxy           v1.20.4    v1.21.0
CoreDNS              1.7.0      1.8.0
Etcd                 3.4.13-0   3.4.13-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.21.0
Ensure that the container images above have already been downloaded (if they were not downloaded in advance, the upgrade may hang on blocked network access), then perform the upgrade:
kubeadm upgrade apply v1.21.0
When you see the following message, the upgrade succeeded:
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.21.0". Enjoy!
Then, configure the current user environment:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You can use kubectl version to view the version and kubectl cluster-info to view the service addresses.
- If the service doesn't work, troubleshoot with:
- Check the service version: kubectl version
- View cluster information: kubectl cluster-info
- Check the service status: sudo systemctl status kubelet
- View the service log: journalctl -xefu kubelet
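In an HA cluster, `kubeadm upgrade apply` only needs to run on the first master; the remaining control-plane nodes use the per-node command instead. A sketch (package version as above):

```shell
# Run on each additional master node after `kubeadm upgrade apply` succeeded
# on the first one; it upgrades the local static pod manifests in place.
sudo kubeadm upgrade node

# Then bring this node's kubelet to the matching version and restart it
KUBELET_PKG=kubelet=1.21.0-00
sudo apt-get install -y "$KUBELET_PKG"
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```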
4. Upgrade the worker nodes
Each worker node needs to pull the images of the corresponding version above and install the matching version of kubelet.
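The per-node steps can be sketched as follows; the node name `node1` is an example, and draining the node first is optional but avoids disrupting workloads:

```shell
# From a machine with kubectl access: move workloads off the node
NODE=node1
kubectl drain "$NODE" --ignore-daemonsets

# On the worker node itself: upgrade the packages and the node config
sudo apt-get install -y kubelet=1.21.0-00 kubectl=1.21.0-00
sudo kubeadm upgrade node
sudo systemctl daemon-reload
sudo systemctl restart kubelet

# Back on the kubectl machine: make the node schedulable again
kubectl uncordon "$NODE"
```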
Check version:
~$ kubectl version
To view Pod information:
kubectl get pod --all-namespaces
Done.
⚠️ Note: since version 1.17, if the cluster was installed with kubeadm in high-availability mode, all master nodes can be upgraded directly to the latest version (the k8s container images need to be placed on the nodes in advance).
More references:
- Kubernetes 1.17.4 quick upgrade
- Kubernetes 1.17.2 quick upgrade
- Kubernetes 1.17.1 quick upgrade
- Kubernetes 1.17.0 released
- Deploying highly available Kubernetes 1.17.0 using kubeadm
- Kubernetes 1.17.0 management interface Dashboard 2
- Set the Master node of Kubernetes to run the application pod
- Failure of systemctl status probe in Kubernetes pod
- Use Jupyter Notebook for system management
- Run Jupyter/JupyterHub/JupyterLab as system service
- Quick setup JupyterHub for K8s
- Using GlusterFS storage in JupyterHub for K8s