Kubernetes version upgrade strategy

Posted by phphead on Thu, 10 Oct 2019 11:51:56 +0200

Kubernetes Version Compatibility

Before upgrading, you need to understand the relationship between versions:

  1. The Kubernetes version is named X.Y.Z, where X is the major version, Y is the minor version, and Z is the patch version.
    For example, 1.16.0.
  2. The versions of the other core components (kube-controller-manager, kube-scheduler, and kubelet) must not be higher than that of kube-apiserver.
  3. These components may be at most one minor version lower than kube-apiserver; for example, with kube-apiserver at 1.16.0, the other components can be 1.16.x or 1.15.x.
  4. In an HA cluster, the versions of the kube-apiserver instances may differ by at most one minor version, such as 1.16 and 1.15.
  5. Ideally, all components run exactly the same version as kube-apiserver (a quick way to check the running versions is sketched after this list).
  6. Therefore, when upgrading a Kubernetes cluster, the first core component to upgrade is kube-apiserver.
  7. A cluster can only be upgraded one minor version at a time.
  8. The kubectl version can be at most one minor version higher or lower than the kube-apiserver version.
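
A quick way to check the versions currently running before planning an upgrade is sketched below (a minimal sketch; flags and output formats vary slightly between releases):

# client and API server versions
kubectl version --short
# kubeadm version on this node
kubeadm version -o short
# kubelet version on this node
kubelet --version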

Macro Upgrade Process

  1. Upgrade the primary (first) control plane node.
  2. Upgrade the other control plane nodes.
  3. Upgrade the worker (Node) nodes.

Micro Upgrade Steps

  1. Upgrade kubeadm first.
  2. Upgrade the control plane components on the first (primary) control plane node.
  3. Upgrade kubelet and kubectl on the first control plane node.
  4. Upgrade the other control plane nodes.
  5. Upgrade the worker nodes.
  6. Verify the cluster.

Notes Before Upgrading

  1. Determine the kubeadm cluster version before upgrading.
  2. kubeadm upgrade does not touch workloads, only Kubernetes components, but backing up the etcd database beforehand is best practice (see the sketch after this list).
  3. After the upgrade, all control plane containers are restarted because their pod hashes have changed.
  4. Because of version compatibility rules, you can only upgrade from one minor version to the next; skipping minor versions is not supported.
  5. The cluster control plane should run as static Pods, with etcd either as a static Pod or as an external cluster.
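
Item 2 above recommends an etcd backup; a hedged sketch for a kubeadm "stacked" etcd is shown below. The endpoint and certificate paths are kubeadm defaults and may differ in your cluster; for an external etcd, point at its own endpoints and certificates.

ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-$(date +%F).db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key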

Explanation of the kubeadm upgrade command

By querying command line help:

$ kubeadm upgrade -h

Upgrade your cluster smoothly to a newer version with this command.

Usage:
  kubeadm upgrade [flags]
  kubeadm upgrade [command]

Available Commands:
  apply       Upgrade your Kubernetes cluster to the specified version.
  diff        Show what differences would be applied to existing static pod manifests. See also: kubeadm upgrade apply --dry-run
  node        Upgrade commands for a node in the cluster. Currently only supports upgrading the configuration, not the kubelet itself.
  plan        Check which versions are available to upgrade to and validate whether your current cluster is upgradeable. To skip the internet check, pass in the optional [version] parameter.

Command parsing:

  • apply: Upgrade the Kubernetes cluster to the specified version.
  • diff: Show the differences between the static Pod manifests that would be applied and the ones currently running.
  • node: Upgrade a node in the cluster; currently (v1.16) this only supports upgrading the kubelet configuration file (/var/lib/kubelet/config.yaml), not the kubelet binary itself.
  • plan: Check whether the current cluster can be upgraded and which versions it can be upgraded to (a combined usage sketch follows).
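
A minimal sketch of how these subcommands are typically combined (v1.14.0 is simply the target version used later in this post):

# check upgradeability and list the versions you can upgrade to
kubeadm upgrade plan
# preview the changes to the static Pod manifests
kubeadm upgrade diff v1.14.0
# perform the control plane upgrade
kubeadm upgrade apply v1.14.0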

The node subcommand supports the following subcommands and options:

$ kubeadm upgrade node  -h
Upgrade commands for a node in the cluster. Currently only supports upgrading the configuration, not the kubelet itself.

Usage:
  kubeadm upgrade node [flags]
  kubeadm upgrade node [command]

Available Commands:
  config                     Downloads the kubelet configuration from the cluster ConfigMap kubelet-config-1.X, where X is the minor version of the kubelet.
  experimental-control-plane Upgrades the control plane instance deployed on this node. IMPORTANT. This command should be executed after executing `kubeadm upgrade apply` on another control plane instance

Flags:
  -h, --help   help for node

Global Flags:
      --log-file string   If non-empty, use this log file
      --rootfs string     [EXPERIMENTAL] The path to the 'real' host root filesystem.
      --skip-headers      If true, avoid header prefixes in the log messages
  -v, --v Level           number for the log level verbosity

Command parsing:

  • config: Download the kubelet configuration from the cluster ConfigMap kubelet-config-1.X, where X is the kubelet's minor version.
  • experimental-control-plane: Upgrade the control plane components deployed on this node; it should be executed after kubeadm upgrade apply has been run on the first control plane instance.

Operating environment description:

  • OS: Ubuntu 16.04
  • k8s: one master node, one worker node

Upgrading Kubernetes from 1.13.x to 1.14.x

The cluster in this environment was created with kubeadm and is running version 1.13.1; this walkthrough upgrades it to 1.14.0.

Execute the upgrade process

Upgrade the first control plane node

First, operate on the first control plane node, i.e. the primary control plane:

1. Determine the pre-upgrade cluster version:

root@k8s-master:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:39:04Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:31:33Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}

2. Find upgradable versions:

apt update
apt-cache policy kubeadm
# find the latest 1.14 version in the list
# it should look like 1.14.x-00, where x is the latest patch
1.14.0-00 500
500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
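
Before pinning or unpinning packages, it can help to see which packages are currently held (a small sketch; apt-mark is part of standard apt on Ubuntu):

apt-mark showhold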

3. Upgrade kubeadm to 1.14.0

# replace x in 1.14.x-00 with the latest patch version
apt-mark unhold kubeadm kubelet && \
apt-get update && apt-get install -y kubeadm=1.14.0-00 && \
apt-mark hold kubeadm

When upgrading kubeadm to 1.14, apt on Ubuntu may automatically upgrade kubelet to the latest version (1.16.0 at the time of writing) as a dependency, so upgrade kubelet explicitly as well:

apt-get install -y kubeadm=1.14.0-00 kubelet=1.14.0-00

If that has already happened, kubeadm and kubelet are at inconsistent versions and the subsequent cluster upgrade will fail; in that case, remove both packages and reinstall the expected versions.

Remove:

apt-get remove kubelet kubeadm

Install the expected version again:

apt-get install -y kubeadm=1.14.0-00 kubelet=1.14.0-00
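
To confirm that the reinstall left both packages at the expected version, the installed versions can be listed with dpkg (a small sketch):

dpkg -l kubeadm kubelet | grep '^ii'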

Make sure that kubeadm has been upgraded to the expected version:

root@k8s-master:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:51:21Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
root@k8s-master:~# 

4. Run the upgrade plan command to check whether the cluster can be upgraded and which versions it can be upgraded to:

kubeadm upgrade plan

The output is as follows:

root@k8s-master:~# kubeadm upgrade plan
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.13.1
[upgrade/versions] kubeadm version: v1.14.0

Awesome, you're up-to-date! Enjoy!

This tells you that the cluster can be upgraded.

5. Upgrade control plane components, including etcd.

root@k8s-master:~# kubeadm upgrade apply v1.14.0
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/version] You have chosen to change the cluster version to "v1.14.0"
[upgrade/versions] Cluster version: v1.13.1
[upgrade/versions] kubeadm version: v1.14.0
// After confirming with y, the upgrade begins.
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.14.0"...
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-controller-manager-k8s-master hash: 31a4d945c251e62ac94e215494184514
Static pod: kube-scheduler-k8s-master hash: fefab66bc5a8a35b1f328ff4f74a8477
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests696355120"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-10-03-20-30-46/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-apiserver-k8s-master hash: bb799a8d323c1577bf9e10ede7914b30
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[apiclient] Found 0 Pods for label selector component=kube-apiserver
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-10-03-20-30-46/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-k8s-master hash: 31a4d945c251e62ac94e215494184514
Static pod: kube-controller-manager-k8s-master hash: 54146492ed90bfa147f56609eee8005a
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-10-03-20-30-46/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-k8s-master hash: fefab66bc5a8a35b1f328ff4f74a8477
Static pod: kube-scheduler-k8s-master hash: 58272442e226c838b193bbba4c44091e
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.3.1.20]
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.14.0". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
root@k8s-master:~# 

In the last two lines, you can see that the cluster upgrade was successful.

kubeadm upgrade apply performs the following operations:

  • Checks whether the cluster can be upgraded:
    Is the API server available?
    Are all nodes in the Ready state?
    Is the control plane healthy?
  • Enforces the version skew policies.
  • Ensures the control plane images are available and pulled onto the machine.
  • Upgrades the control plane components by updating the manifest files under /etc/kubernetes/manifests, and restores the old manifests if the upgrade fails (see the sketch after this list).
  • Applies the new kube-dns/CoreDNS and kube-proxy manifests and creates the relevant RBAC rules.
  • Creates new certificates and keys for the API server and backs up the old ones (if they would expire within 180 days).
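
The old manifests are backed up under /etc/kubernetes/tmp, as shown in the upgrade log above. A sketch of inspecting them, and of manually rolling a single component back if ever needed (the timestamped directory name will differ in your run):

ls /etc/kubernetes/tmp/
# copy a backed-up manifest over the live one; the kubelet then restarts that static Pod
cp /etc/kubernetes/tmp/kubeadm-backup-manifests-<timestamp>/kube-apiserver.yaml \
   /etc/kubernetes/manifests/kube-apiserver.yaml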

As of v1.16, kubeadm upgrade apply must be executed on the primary control plane node.

6. Verify the cluster version after running:

root@k8s-master:~# kubectl version 
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:39:04Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:45:25Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}

As you can see, although the kubectl version is 1.13.1, the control plane of the server has been upgraded to 1.14.0.

Master components are working properly:

root@k8s-master:~# kubectl get componentstatuses 
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
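
To double-check, you can also confirm that the static control plane Pods are now running v1.14.0 images; the component label below is the same one kubeadm used in the upgrade log above (a sketch):

kubectl -n kube-system get pods -l component=kube-apiserver \
  -o jsonpath='{.items[0].spec.containers[0].image}{"\n"}'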

At this point, the control plane components on the first control plane node have been upgraded. Control plane nodes usually also run kubelet and kubectl, so those need to be upgraded as well.

7. Upgrade CNI plug-ins.

This step is optional; check whether your CNI plugin needs to be upgraded.

8. Upgrade kubelet and kubectl on the control plane

kubelet can now be upgraded; running workload Pods are not affected during the upgrade.

8.1. Upgrade kubelet, kubectl

# replace x in 1.14.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.14.0-00 kubectl=1.14.0-00 && \
apt-mark hold kubelet kubectl 

8.2. Restart kubelet:

sudo systemctl restart kubelet
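
Optionally confirm that the kubelet came back up at the new version (a small sketch):

systemctl is-active kubelet   # should print "active"
kubelet --version             # should now report Kubernetes v1.14.0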

9. Check the kubectl version; it matches expectations:

root@k8s-master:~# kubectl version 
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:45:25Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
root@k8s-master:~# 

The first control plane node has been upgraded.

Upgrade other control plane nodes

10. Upgrade other control plane nodes.

On the other control plane nodes the procedure is the same as on the first control plane node, except that you run:

sudo kubeadm upgrade node experimental-control-plane

instead of:

sudo kubeadm upgrade apply

Running sudo kubeadm upgrade plan is not necessary there.

kubeadm upgrade node experimental-control-plane performs the following operations:

  • Fetches the kubeadm ClusterConfiguration from the cluster.
  • Optionally backs up the kube-apiserver certificate.
  • Upgrades the static Pod manifests of the three core control plane components.

Upgrade Node

Now upgrade the components on the worker nodes: kubeadm, kubelet, and kube-proxy.

To avoid affecting access to the cluster, upgrade one node at a time.

1. Mark the node as under maintenance.

The node is still at 1.13:

root@k8s-master:~# kubectl get node
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   292d   v1.14.0
k8s-node01   Ready    node     292d   v1.13.1

Before upgrading the node, mark it unschedulable and evict all Pods:

kubectl drain $NODE --ignore-daemonsets
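
$NODE is simply the node's name; for the cluster in this post that would look like the following sketch, and the node then shows SchedulingDisabled while drained:

NODE=k8s-node01
kubectl drain "$NODE" --ignore-daemonsets
kubectl get node "$NODE"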

2. Upgrade kubeadm and kubelet

Now install the same versions of kubeadm and kubelet on each worker node; kubeadm is needed to upgrade the kubelet configuration.

# replace x in 1.14.x-00 with the latest patch version
apt-mark unhold kubeadm kubelet && \
apt-get update && apt-get install -y kubeadm=1.14.0-00 kubelet=1.14.0-00 && \
apt-mark hold kubeadm kubelet

3. Upgrade the configuration file of kubelet

$ kubeadm upgrade node config --kubelet-version v1.14.0
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
root@k8s-master:~# 

4. Restart kubelet

$ sudo systemctl restart kubelet

5. Finally, mark the node schedulable again so it rejoins the cluster:

kubectl uncordon $NODE

Now that the Node has been upgraded, you can see that the versions of kubelet and kube-proxy have changed to the expected version v1.14.0.

Verify Cluster Version

root@k8s-master:~# kubectl get node
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   292d   v1.14.0
k8s-node01   Ready    node     292d   v1.14.0

The STATUS column should display Ready for all nodes, and the version number has been updated.

At this point, the entire upgrade process is complete.

Recovery from failure

If kubeadm upgrade fails and cannot be rolled back (for example, due to an unexpected shutdown during execution), you can run kubeadm upgrade again. This command is idempotent and ensures that the actual state is ultimately the state you declare.

To recover from a bad state without changing the version the cluster is running, execute:

kubeadm upgrade apply --force

See the official upgrade documentation for more information.

Upgrading Kubernetes from 1.14.x to 1.15.x

The upgrade process from 1.14.0 to 1.15.0 is similar, except that the upgrade commands are slightly different.

Upgrade the master control plane node

The upgrade process is the same as from 1.13 to 1.14.0.

1. Query the upgradable versions and install kubeadm at the expected version, v1.15.0:

apt-cache policy kubeadm
apt-mark unhold kubeadm kubelet
apt-get install -y kubeadm=1.15.0-00

kubeadm has reached the expected version:

root@k8s-master:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:37:41Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

2. Run the upgrade plan

Starting with v1.15, kubeadm automatically renews all the certificates it manages on the node during a control plane upgrade. If you do not want the certificates renewed automatically, add the flag --certificate-renewal=false (a usage sketch follows).
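
A usage sketch of both options, assuming the v1.15.0 target used in this section (check-expiration is an alpha subcommand in v1.15 and was promoted to "kubeadm certs check-expiration" in later releases):

# upgrade without automatically renewing the kubeadm-managed certificates
kubeadm upgrade apply v1.15.0 --certificate-renewal=false
# inspect the expiry dates of the kubeadm-managed certificates
kubeadm alpha certs check-expiration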

Upgrade plan:

kubeadm upgrade plan

You can see the following output:

root@k8s-master:~# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.14.0
[upgrade/versions] kubeadm version: v1.15.0
I1005 20:45:04.474363   38108 version.go:248] remote version is much newer: v1.16.1; falling back to: stable-1.15
[upgrade/versions] Latest stable version: v1.15.4
[upgrade/versions] Latest version in the v1.14 series: v1.14.7

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     1 x v1.14.0   v1.14.7
            1 x v1.15.0   v1.14.7

Upgrade to the latest version in the v1.14 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.14.0   v1.14.7
Controller Manager   v1.14.0   v1.14.7
Scheduler            v1.14.0   v1.14.7
Kube Proxy           v1.14.0   v1.14.7
CoreDNS              1.3.1     1.3.1
Etcd                 3.3.10    3.3.10

You can now apply the upgrade by executing the following command:

    kubeadm upgrade apply v1.14.7

_____________________________________________________________________

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     1 x v1.14.0   v1.15.4
            1 x v1.15.0   v1.15.4

Upgrade to the latest stable version:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.14.0   v1.15.4
Controller Manager   v1.14.0   v1.15.4
Scheduler            v1.14.0   v1.15.4
Kube Proxy           v1.14.0   v1.15.4
CoreDNS              1.3.1     1.3.1
Etcd                 3.3.10    3.3.10

You can now apply the upgrade by executing the following command:

    kubeadm upgrade apply v1.15.4

Note: Before you can perform this upgrade, you have to update kubeadm to v1.15.4.

_____________________________________________________________________

3. Upgrade control plane

Upgrade the control plane as planned:

kubeadm upgrade apply v1.15.0

Since the installed kubeadm version is v1.15.0, the target cluster version can only be v1.15.0.

The output is as follows:

root@k8s-master:~# kubeadm upgrade apply v1.15.0
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/version] You have chosen to change the cluster version to "v1.15.0"
[upgrade/versions] Cluster version: v1.14.0
[upgrade/versions] kubeadm version: v1.15.0
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
...
## Pulling the images
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-scheduler.
...
## Images for all components have been pulled
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
...
...
## All certificates are automatically renewed, as follows
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests353124264"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
...
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.15.0". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

4. Verify the successful upgrade.

The upgrade succeeded. Now query the cluster's core component versions again:

root@k8s-master:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

Check the node version:

NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   295d   v1.14.0
k8s-node01   Ready    node     295d   v1.14.0

5. Upgrade kubelet and kubectl on the control plane

The core control plane components have been upgraded to v1.15.0. Now upgrade kubelet and kubectl on this node; running workload Pods are not affected during the upgrade.

# replace x in 1.15.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.15.0-00 kubectl=1.15.0-00 && \
apt-mark hold kubelet kubectl 

6. Restart kubelet:

sudo systemctl restart kubelet

7. Verify the kubelet and kubectl versions, which are consistent with expectations.

root@k8s-master:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

Check the node version:

root@k8s-master:~# kubectl get node
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   295d   v1.15.0
k8s-node01   Ready    node     295d   v1.14.0

Upgrade other control plane nodes

On the other control plane nodes, the command used to upgrade the control plane components is different.

1. Upgrade the control plane components with:

$ sudo kubeadm upgrade node

2. Then upgrade kubelet and kubectl.

# replace x in 1.15.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.15.x-00 kubectl=1.15.x-00 && \
apt-mark hold kubelet kubectl

3. Restart kubelet

$ sudo systemctl restart kubelet

Upgrade Node

Upgrading the worker nodes is the same as before and is only summarized here.

Execute the following on every worker node.

1. Upgrade kubeadm:

# replace x in 1.15.x-00 with the latest patch version
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm=1.15.x-00 && \
apt-mark hold kubeadm

Query the kubeadm version:

root@k8s-node01:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:37:41Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

2. Mark the node as under maintenance:

kubectl cordon $NODE
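
Note that cordon only marks the node unschedulable and does not evict running Pods; to also evict them, as in the 1.13 to 1.14 section, drain can be used instead (a sketch using this cluster's node name):

kubectl drain k8s-node01 --ignore-daemonsets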

3. Update the kubelet configuration file

$ sudo kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Skipping phase. Not a control plane node
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.

4. Upgrade the kubelet and kubectl packages.

# replace x in 1.15.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.15.x-00 kubectl=1.15.x-00 && \
apt-mark hold kubelet kubectl

5. Restart kubelet

sudo systemctl restart kubelet

At this point, kube-proxy will also be automatically upgraded and restarted.
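
This can be verified by checking the image of the kube-proxy DaemonSet, which kubeadm manages in the kube-system namespace (a sketch):

kubectl -n kube-system get daemonset kube-proxy \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'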

6. Cancel maintenance status

kubectl uncordon $NODE

Node upgrade completed.

Verify Cluster Version

root@k8s-master:~# kubectl get node
NAME         STATUS     ROLES    AGE    VERSION
k8s-master   Ready      master   295d   v1.15.0
k8s-node01   NotReady   node     295d   v1.15.0

kubeadm upgrade node

In this upgrade process, kubeadm upgrade node is used to upgrade both the other control plane nodes and the worker nodes.

When kubeadm upgrade node runs on another control plane node, it:

  • Fetches the kubeadm ClusterConfiguration from the cluster.
  • Optionally backs up the kube-apiserver certificate.
  • Upgrades the static Pod manifests of the three core control plane components.
  • Upgrades the kubelet configuration on the control plane node.

When kubeadm upgrade node runs on a worker node, it:

  • Fetches the kubeadm ClusterConfiguration from the cluster.
  • Upgrades the kubelet configuration of the worker node.

Upgrading Kubernetes from 1.15.x to 1.16.x

Upgrading from 1.15.x to 1.16.x follows the same procedure as the upgrade from 1.14.x to 1.15.x; the commands are the same and are not repeated here.
