WeChat Official Account: Operations Development Story, by Joke
Preface
In December last year, when the Kubernetes community announced version 1.20, it deprecated dockershim, and many media outlets played this up as "Kubernetes abandoning Docker." Actually, I think that framing is misleading and mostly hype.
dockershim is the Kubernetes component that talks to Docker. Docker was born in 2013 and Kubernetes in 2014, so Docker was not designed with orchestration in mind and did not foresee that Kubernetes would become such a big thing (had it known, it would not have lost the orchestration war so quickly). But because early Kubernetes ran its containers through Docker, much of its operational logic was written directly against Docker. As the community grew, that Docker-specific logic was split out into dockershim so that Kubernetes could be compatible with more container runtimes.
Because of this, dockershim has to be maintained whenever either Kubernetes or Docker changes, so that adequate support can be guaranteed. Yet operating Docker through dockershim ultimately means operating containerd, the runtime that underlies Docker itself, and containerd already supports CRI (Container Runtime Interface). So why take the detour through Docker? Why not talk to containerd directly through CRI? That is one of the reasons the community wants to deprecate dockershim.
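To make this concrete, here is a rough sketch (assuming the default socket path used later in this article) of how the kubelet is pointed straight at containerd through CRI; kubeadm writes something similar into /var/lib/kubelet/kubeadm-flags.env when containerd is the detected runtime:
# Sketch of /var/lib/kubelet/kubeadm-flags.env when containerd is the runtime.
# The exact flags and socket path may differ slightly between versions.
KUBELET_KUBEADM_ARGS="--container-runtime=remote --container-runtime-endpoint=/run/containerd/containerd.sock"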
So what is Containerd?
Containerd is a project that was split out of Docker to provide Kubernetes with a container runtime that manages images and the container lifecycle. Containerd can work independently, without Docker. Its features are as follows (a short `ctr` usage sketch follows the list):
- Supports the OCI image specification
- Supports the OCI runtime specification (runc)
- Supports image pull
- Supports container network management
- Supports multi-tenant storage
- Supports container runtime and lifecycle management
- Supports network namespace management
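As a quick illustration, containerd can also be driven directly with its own `ctr` CLI. The image reference, container name, and namespace below are only examples and nothing in the rest of this walkthrough depends on them:
# Pull an image and start a container directly against containerd with ctr.
ctr images pull docker.io/library/nginx:alpine
ctr run -d docker.io/library/nginx:alpine web1
ctr tasks ls
# Containers created by Kubernetes through CRI live in the "k8s.io" namespace:
ctr -n k8s.io containers ls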
Some of the differences between Containerd and Docker in command usage are as follows:
Function | Docker | Containerd (crictl) |
---|---|---|
List local images | docker images | crictl images |
Pull an image | docker pull | crictl pull |
Push an image | docker push | none |
Remove a local image | docker rmi | crictl rmi |
Inspect an image | docker inspect IMAGE-ID | crictl inspecti IMAGE-ID |
List containers | docker ps | crictl ps |
Create a container | docker create | crictl create |
Start a container | docker start | crictl start |
Stop a container | docker stop | crictl stop |
Delete a container | docker rm | crictl rm |
Inspect a container | docker inspect | crictl inspect |
attach | docker attach | crictl attach |
exec | docker exec | crictl exec |
logs | docker logs | crictl logs |
stats | docker stats | crictl stats |
As you can see, their usage is largely the same.
The following describes the steps for installing a Kubernetes cluster with kubeadm, using containerd as the container runtime.
Environment description
Host nodes
IP Address | OS | Kernel |
---|---|---|
192.168.0.5 | CentOS 7.6 | 3.10 |
192.168.0.125 | CentOS 7.6 | 3.10 |
Software versions
Software | Version |
---|---|
kubernetes | 1.20.5 |
containerd | 1.4.4 |
Environment preparation
(1) Add hosts information on each node:
$ cat /etc/hosts
192.168.0.5 k8s-master
192.168.0.125 k8s-node01
(2) Disable firewalls:
$ systemctl stop firewalld
$ systemctl disable firewalld
(3) Disable SELINUX:
$ setenforce 0
$ cat /etc/selinux/config
SELINUX=disabled
(4) Create the /etc/sysctl.d/k8s.conf file and add the following content:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
(5) Execute the following commands for the modification to take effect:
$ modprobe br_netfilter
$ sysctl -p /etc/sysctl.d/k8s.conf
(6) Install ipvs
$ cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
$ chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
The /etc/sysconfig/modules/ipvs.modules file created by the script above ensures that the required modules are loaded automatically after the node restarts. Use the `lsmod | grep -e ip_vs -e nf_conntrack_ipv4` command to check whether the required kernel modules have been loaded correctly.
(7) Install the ipset package:
$ yum install ipset -y
To facilitate viewing the proxy rules for ipvs, it is best to install the management tool ipvsadm:
$ yum install ipvsadm -y
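Later, once kube-proxy is running in ipvs mode, ipvsadm can confirm that virtual servers were actually created. The output below is only an illustrative sample; the Service and endpoint addresses will differ in your cluster:
# List the ipvs virtual server table once the cluster is up (sample output).
$ ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 192.168.0.5:6443             Masq    1      0          0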
(8) Synchronize server time
$ yum install chrony -y
$ systemctl enable chronyd
$ systemctl start chronyd
$ chronyc sources
(9) Turn off swap partitions:
$ swapoff -a
(10) Modify the /etc/fstab file to comment out the swap mount entry, and use free -m to confirm that swap is off. To adjust the swappiness parameter, add the following line to /etc/sysctl.d/k8s.conf:
vm.swappiness=0
Execute `sysctl -p /etc/sysctl.d/k8s.conf` to make the change take effect.
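If you prefer to make the fstab edit from the shell, a minimal sketch is below; the sed pattern simply comments out any line mentioning swap, which assumes a standard CentOS fstab:
# Comment out the swap entry in /etc/fstab (assumes a standard CentOS fstab),
# then confirm that the Swap line in free -m shows 0.
$ sed -ri 's/.*swap.*/#&/' /etc/fstab
$ free -m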
(11) Next, install containerd:
$ yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2
$ yum-config-manager \
  --add-repo \
  https://download.docker.com/linux/centos/docker-ce.repo
$ yum list | grep containerd
You can choose which version to install; here we install the latest available at the time of writing:
$ yum install containerd.io-1.4.4 -y
(12) Create a containerd configuration file:
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
# Replace the configuration
sed -i "s#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/google_containers#g" /etc/containerd/config.toml
sed -i '/containerd.runtimes.runc.options/a\ \ \ \ \ \ \ \ \ \ \ \ SystemdCgroup = true' /etc/containerd/config.toml
sed -i "s#https://registry-1.docker.io#https://registry.cn-hangzhou.aliyuncs.com#g" /etc/containerd/config.toml
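To sanity-check that the three sed replacements above landed where expected, you can grep the generated file; the expected matches are sketched as comments, and the exact lines (for example the pause image tag) can vary between containerd versions:
$ grep -n "SystemdCgroup" /etc/containerd/config.toml
# expected:   SystemdCgroup = true
$ grep -n "sandbox_image" /etc/containerd/config.toml
# expected:   sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:..."
$ grep -n "registry.cn-hangzhou.aliyuncs.com" /etc/containerd/config.toml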
(13) Start Containerd:
systemctl daemon-reload
systemctl enable containerd
systemctl restart containerd
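Before moving on, a quick optional check that containerd is really up; the version output will depend on what you installed:
# Confirm the service is active and the daemon answers over its socket.
$ systemctl is-active containerd
active
$ ctr version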
Once containerd is installed, the environment configuration above is complete and we can install kubeadm. Here we install it from a specified yum repository, using the Alibaba Cloud mirror:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Then install kubeadm, kubelet and kubectl (I'm installing the latest version here, 1.20.5; if you need a different version, specify it yourself):
$ yum install -y kubelet-1.20.5 kubeadm-1.20.5 kubectl-1.20.5
Set the crictl runtime endpoint:
$ crictl config runtime-endpoint /run/containerd/containerd.sock
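This command simply records the endpoint in /etc/crictl.yaml so that crictl knows which runtime socket to talk to. A sketch of what the resulting file typically looks like (additional keys such as image-endpoint or timeout may also appear depending on the crictl version):
# /etc/crictl.yaml as written by "crictl config" (contents may vary by version)
runtime-endpoint: /run/containerd/containerd.sock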
You can see that we have installed v1.20.5 here. Then set kubelet to start automatically on boot:
$ systemctl daemon-reload
$ systemctl enable kubelet && systemctl start kubelet
"
All of the above operations up to this point require configuration to be performed on all nodes.
"
Initialize Cluster
Initialize Master
Next, configure the kubeadm initialization file on the master node. Export the default initialization configuration with the following command:
$ kubeadm config print init-defaults > kubeadm.yaml
Then modify the configuration to suit our needs, for example changing the value of imageRepository and setting the kube-proxy mode to ipvs. Note that because we use containerd as the runtime, we need to set cgroupDriver to systemd [1] when initializing the node.
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.0.5
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: k8s-master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.5
networking:
  dnsDomain: cluster.local
  podSubnet: 172.16.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
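Optionally, before running init you can pre-pull the control-plane images with the same configuration file (the init output below also hints at this), which shortens the actual initialization:
# Pre-pull the images referenced by the config (optional).
$ kubeadm config images pull --config kubeadm.yaml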
Then initialize with the configuration file above:
$ kubeadm init --config=kubeadm.yaml
[init] Using Kubernetes version: v1.20.5
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.5]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.0.5 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.0.5 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 70.001862 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.5:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:446623b965cdb0289c687e74af53f9e9c2063e854a42ee36be9aa249d3f0ccec
Copy the kubeconfig file
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
Add Node
Remember to perform the same environment preparation steps above on the node first. Copy the $HOME/.kube/config file from the master node to the corresponding location on the node, install kubeadm, kubelet and kubectl, and then execute the join command printed at the end of the initialization above:
# kubeadm join 192.168.0.5:6443 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:446623b965cdb0289c687e74af53f9e9c2063e854a42ee36be9aa249d3f0ccec
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
"
If you forget the join command above, you can retrieve it using the command kubeadm token create --print-join-command.
"
Run the get nodes command after successful execution:
$ kubectl get no
NAME         STATUS     ROLES                  AGE   VERSION
k8s-master   NotReady   control-plane,master   29m   v1.20.5
k8s-node01   NotReady   <none>                 28m   v1.20.5
You can see that the nodes are in the NotReady state because the network plugin has not been installed yet. Next, install a network plugin; you can choose one from the document https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/. Here we install Calico:
$ wget https://docs.projectcalico.org/v3.8/manifests/calico.yaml
# Since the nodes have multiple network interfaces, you need to specify the internal NIC in the resource manifest
$ vi calico.yaml
......
spec:
  containers:
  - env:
    - name: DATASTORE_TYPE
      value: kubernetes
    - name: IP_AUTODETECTION_METHOD  # Add this environment variable to the DaemonSet
      value: interface=eth0          # Specify the internal NIC
    - name: WAIT_FOR_DATASTORE
      value: "true"
    - name: CALICO_IPV4POOL_CIDR     # Since the 172.16 subnet was configured at init time, it needs to be modified here as well
      value: "172.16.0.0/16"
......
Install the Calico network plugin:
$ kubectl apply -f calico.yaml
Check the Pods' running status after a while:
# kubectl get pod -n kube-system
NAME                                      READY   STATUS              RESTARTS   AGE
calico-kube-controllers-bcc6f659f-zmw8n   0/1     ContainerCreating   0          7m58s
calico-node-c4vv7                         1/1     Running             0          7m58s
calico-node-dtw7g                         0/1     PodInitializing     0          7m58s
coredns-54d67798b7-mrj2b                  1/1     Running             0          46m
coredns-54d67798b7-p667d                  1/1     Running             0          46m
etcd-k8s-master                           1/1     Running             0          46m
kube-apiserver-k8s-master                 1/1     Running             0          46m
kube-controller-manager-k8s-master        1/1     Running             0          46m
kube-proxy-clf4s                          1/1     Running             0          45m
kube-proxy-mt7tt                          1/1     Running             0          46m
kube-scheduler-k8s-master                 1/1     Running             0          46m
Once the network plugin is running successfully, the node status becomes normal:
# kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   47m   v1.20.5
k8s-node01   Ready    <none>                 46m   v1.20.5
Add another node in the same way.
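To double-check that the nodes really are running on containerd rather than Docker, `kubectl get nodes -o wide` reports the container runtime per node; the output below is trimmed to the relevant columns and is only illustrative:
# The CONTAINER-RUNTIME column should report containerd, not docker.
$ kubectl get nodes -o wide
NAME         STATUS   ROLES                  VERSION   CONTAINER-RUNTIME
k8s-master   Ready    control-plane,master   v1.20.5   containerd://1.4.4
k8s-node01   Ready    <none>                 v1.20.5   containerd://1.4.4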
Configure command auto-completion
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
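If you want the same convenience for the other tools used in this article, kubeadm can generate bash completion the same way; recent crictl versions can too, so treat that line as version-dependent:
echo "source <(kubeadm completion bash)" >> ~/.bashrc
echo "source <(crictl completion bash)" >> ~/.bashrc   # only if your crictl version has a completion subcommand
source ~/.bashrc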
Reference Documents
[1]: https://github.com/containerd/containerd/issues/4857
[2]: https://github.com/containerd/containerd