ARM64 platform deploys Kubernetes based on openEuler + iSula environment

Posted by wendymelon on Thu, 16 Dec 2021 22:15:15 +0100

Why deploy Kubernetes on an arm64 platform, and on the Kunpeng 920 architecture at that? It's a long story... 5,000 words are omitted here.

System information:

• Architecture: Kunpeng 920
• OS: openEuler 20.03 (LTS-SP1)
• CPU: 4c
• Memory: 16G
• Hard disk: several

Although the whole process follows a post on the Kunpeng forum [1], it still took quite a few twists and turns.

TL;DR

Throughout the process, the key point is that Kubernetes and the network components on the arm64 platform must use arm64 versions of the images.

Environment configuration

1. Disable SELinux

# Temporarily disable
setenforce 0
# Permanently disable: set SELINUX=disabled in the file below
vim /etc/sysconfig/selinux
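
If you prefer not to edit the file by hand, a sed one-liner makes the same change (a sketch, assuming the stock single SELINUX= line in that file):

# Assumption: the file contains one SELINUX=... line; back it up first if unsure
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux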

2. Disable the swap partition

# Temporarily disable
swapoff -a
# Permanently disable: comment out the swap line(s)
vim /etc/fstab
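
The fstab edit can likewise be done non-interactively (a sketch; verify the result before rebooting):

# Comment out every uncommented line whose mount type is swap
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab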

3. Turn off the firewall

systemctl stop firewalld
systemctl disable firewalld

4. Network configuration

Bridged traffic needs to be passed to iptables (the bridge-nf-call mechanism), so enable it:

vim /etc/sysctl.d/k8s.conf

Add the following:

net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0

Execute after modification:

modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
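
Note that modprobe only loads br_netfilter for the current boot. To load it automatically on every boot, one common approach (an addition beyond the original post) is a modules-load.d entry, after which you can verify the sysctl took effect:

echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
# Should print: net.bridge.bridge-nf-call-iptables = 1
sysctl net.bridge.bridge-nf-call-iptables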

5. Add Kubernetes source

Add the following to the file /etc/yum.repos.d/openEuler.repo:

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-aarch64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
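
If you prefer a self-contained step, the same definition can be dropped in as its own repo file with a heredoc (equivalent to appending it to openEuler.repo):

cat <<'EOF' > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-aarch64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF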

Installing and configuring iSula

yum install -y iSulad

Modify the iSula configuration: open /etc/isulad/daemon.json and set it as follows:

{
  "registry-mirrors": [
    "docker.io"
  ],
  "insecure-registries": [
    "rnd-dockerhub.huawei.com"
  ],
  "pod-sandbox-image": "k8s.gcr.io/pause:3.2", // Modify according to the corresponding Kubernetes version, which will be described later
  "network-plugin": "cni",
  "cni-bin-dir": "",
  "cni-conf-dir": "",
  "hosts": [
    "unix:///var/run/isulad.sock"
  ]
}

Set pod-sandbox-image to match your Kubernetes version (the required tag is listed in the image section below); note that daemon.json is plain JSON and does not allow comments. After modifying, restart isulad:

systemctl restart isulad
systemctl enable isulad
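
A quick sanity check that the daemon is running and the CLI can reach it:

systemctl status isulad --no-pager
isula version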

Kubernetes deployment

1. Install kubelet, kubeadm and kubectl

yum install -y kubelet-1.20.0 kubeadm-1.20.0 kubectl-1.20.0
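
kubeadm expects the kubelet service to be enabled so it can take over after init (a standard kubeadm prerequisite):

systemctl enable kubelet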

2. Prepare the images

Due to some mysterious network problems, pulling images from k8s.gcr.io will fail, so you need to download them in advance.

Run kubeadm config images list --kubernetes-version 1.20.0 to get the images required for initialization. Note that the version is pinned with the --kubernetes-version parameter; otherwise kubeadm prints the images for the latest 1.20.x release (at the time of writing, 1.20.4).

k8s.gcr.io/kube-apiserver:v1.20.0
k8s.gcr.io/kube-controller-manager:v1.20.0
k8s.gcr.io/kube-scheduler:v1.20.0
k8s.gcr.io/kube-proxy:v1.20.0
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0

The corresponding arm64 version image is:

k8s.gcr.io/kube-apiserver-arm64:v1.20.0
k8s.gcr.io/kube-controller-manager-arm64:v1.20.0
k8s.gcr.io/kube-scheduler-arm64:v1.20.0
k8s.gcr.io/kube-proxy-arm64:v1.20.0
k8s.gcr.io/pause-arm64:3.2
k8s.gcr.io/etcd-arm64:3.4.2-0   # the highest 3.4.x release with an arm64 image
k8s.gcr.io/coredns:1.7.0        # no separate arm64 version is required
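
A sketch of pulling all of these through isula in one loop (assuming your network can reach k8s.gcr.io, directly or via a mirror; adjust the registry prefix if you use one):

for img in \
    kube-apiserver-arm64:v1.20.0 \
    kube-controller-manager-arm64:v1.20.0 \
    kube-scheduler-arm64:v1.20.0 \
    kube-proxy-arm64:v1.20.0 \
    pause-arm64:3.2 \
    etcd-arm64:3.4.2-0 \
    coredns:1.7.0; do
  isula pull k8s.gcr.io/$img
done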

Once the images are downloaded (with a bit of "luck"), retag them to the names kubeadm expects with isula tag:

isula tag k8s.gcr.io/kube-apiserver-arm64:v1.20.0 k8s.gcr.io/kube-apiserver:v1.20.0
isula tag k8s.gcr.io/kube-controller-manager-arm64:v1.20.0 k8s.gcr.io/kube-controller-manager:v1.20.0
isula tag k8s.gcr.io/kube-scheduler-arm64:v1.20.0 k8s.gcr.io/kube-scheduler:v1.20.0
isula tag k8s.gcr.io/kube-proxy-arm64:v1.20.0 k8s.gcr.io/kube-proxy:v1.20.0
isula tag k8s.gcr.io/pause-arm64:3.2 k8s.gcr.io/pause:3.2
isula tag k8s.gcr.io/etcd-arm64:3.4.2-0 k8s.gcr.io/etcd:3.4.13-0
# coredns is already pulled under its final name, so no retag is needed

3. Initialize the master node

Note that you need to specify the --cri-socket parameter so that kubeadm talks to isulad:

kubeadm init --kubernetes-version v1.20.0 --cri-socket=/var/run/isulad.sock --pod-network-cidr=10.244.0.0/16

If the installation is successful, you will see output like the following:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 12.0.0.9:6443 --token 0110xl.lqzlegbduz2qkdhr \
    --discovery-token-ca-cert-hash sha256:42b13f5924a01128aac0d6e7b2487af990bc82701f233c8a6a4790187ea064af
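
If init fails partway (a wrong image tag, leftover state from an earlier attempt), the node can be cleaned up before retrying; note that kubeadm reset needs the same socket flag:

kubeadm reset --cri-socket=/var/run/isulad.sock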

4. Configure cluster environment

Then configure kubectl as the output above instructs:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Alternatively, if you are root:
export KUBECONFIG=/etc/kubernetes/admin.conf

5. Add a Node to the cluster

On each worker node, repeat the previous steps: environment configuration, installing and configuring iSula, and steps 1 and 2 of the Kubernetes deployment.

Then use the join command printed above, plus the --cri-socket parameter:

kubeadm join 12.0.0.9:6443 --token 0110xl.lqzlegbduz2qkdhr \
    --discovery-token-ca-cert-hash sha256:42b13f5924a01128aac0d6e7b2487af990bc82701f233c8a6a4790187ea064af \
    --cri-socket=/var/run/isulad.sock
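
If the original token has expired (kubeadm tokens are valid for 24 hours by default), print a fresh join command on the master:

kubeadm token create --print-join-command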

Configure network plug-ins

After initializing the master node and configuring the cluster environment, you can execute the kubectl command.

kubectl get nodes
NAME            STATUS     ROLES                  AGE    VERSION
host-12-0-0-9   NotReady   control-plane,master   178m   v1.20.0

The node is NotReady because no network plug-in has been installed yet. If you check the kubelet log with journalctl -u kubelet -f at this point, you will see a message saying the network plug-in is not ready:

kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:iSulad: network plugin is not ready: cni config uninitialized

Remember the configuration of isulad?

"network-plugin": "cni",
"cni-bin-dir": "", //Use default / opt/cni/bin
"cni-conf-dir": "", //Use the default / etc / CNI / net d

In fact, both directories are empty. If they do not exist, create them first:

mkdir -p /opt/cni/bin
mkdir -p /etc/cni/net.d

Here, Calico is used as the network plug-in. First, download the manifest:

wget https://docs.projectcalico.org/v3.14/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

Because this is arm64 hardware, you again need the corresponding arm64 images. First check which images the manifest uses:

grep 'image:' calico.yaml | uniq
          image: calico/cni:v3.14.2
          image: calico/pod2daemon-flexvol:v3.14.2
          image: calico/node:v3.14.2
          image: calico/kube-controllers:v3.14.2

Pull and retag the corresponding arm64 versions following the same steps as above (a sketch follows the list):

calico/cni:v3.14.2-arm64
calico/pod2daemon-flexvol:v3.14.2-arm64
calico/node:v3.14.2-arm64
calico/kube-controllers:v3.14.2-arm64
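
A minimal pull-and-retag loop for these, following the same pattern as the Kubernetes images (assuming the -arm64 tags listed above exist on Docker Hub):

for img in cni pod2daemon-flexvol node kube-controllers; do
  isula pull calico/$img:v3.14.2-arm64
  isula tag calico/$img:v3.14.2-arm64 calico/$img:v3.14.2
done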

Once the images are ready, apply the manifest:

kubectl apply -f calico.yaml

After that, you can see that the node becomes Ready.
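
An optional check while waiting: watch the calico pods come up, then confirm the node status.

kubectl get pods -n kube-system -w
kubectl get nodes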

Test

A test pod is usually created with the nginx image, but the nginx image did not offer an arm64 build in our environment. Here we use Docker's official hello-world image, which does support arm64.

Note: the process in the container exits after printing its message, so the pod will restart continuously; that is enough for a test, though.

kubectl run hello-world --image hello-world:latest

kubectl logs hello-world --previous

You should see:

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (arm64v8)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

Summary

With that, we have completed the deployment of Kubernetes based on openEuler + iSula on the Kunpeng platform.

Reference link

[1] Kunpeng forum post: https://bbs.huaweicloud.com/forum/thread-94271-1-1.html

Topics: Kubernetes Cloud Native