Completing Kubernetes binary deployment step by step (I) -- building the etcd cluster (single-master setup)
Preface
The basic theory and core components of Kubernetes have already been briefly introduced. The first step in deploying a single-master Kubernetes cluster from binaries is to build the etcd cluster, and this article walks through that experiment.
Cluster plan (the following articles will configure it step by step according to this plan)
1. Deploy etcd cluster on three nodes
2. Deploy the Docker environment and flannel on the two worker nodes (cross-host container communication relies on VXLAN technology)
3. Deploy kube-apiserver, kube-controller-manager and kube-scheduler on the master node
4. Deploy kubelet and kube-proxy on the worker nodes
Server IP address plan
master01 address: 192.168.0.128
node01 address: 192.168.0.129
node02 address: 192.168.0.130
This article covers the configuration process for building the etcd cluster.
Construction process
1. Environment preparation
On all three servers it is recommended to set the hostname, bind a static IP and turn off the network management service. You also need to stop the firewall, disable SELinux and flush the iptables rules.
Take master01 as an example; the other nodes follow the same settings.
[root@localhost ~]# hostnamectl set-hostname master01
[root@localhost ~]# su
[root@master01 ~]# systemctl stop firewalld
[root@master01 ~]# setenforce 0
[root@master01 ~]# iptables -F
Set static ip address
[root@master01 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
[root@master01 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="7c933cbb-b29c-4a36-bb12-d1ac1c505524"
DEVICE="ens33"
ONBOOT="yes"
IPADDR="192.168.0.128"
NETMASK="255.255.255.0"
GATEWAY="192.168.0.2"
DNS1=192.168.0.2
[root@master01 ~]# systemctl restart network
[root@master01 ~]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:06:97:04 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.128/24 brd 192.168.0.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::5bd9:44c7:cf2c:ef20/64 scope link
       valid_lft forever preferred_lft forever
[root@master01 ~]# ping www.baidu.com    #Test whether outbound Internet access works
PING www.a.shifen.com (180.101.49.12) 56(84) bytes of data.
64 bytes from 180.101.49.12 (180.101.49.12): icmp_seq=1 ttl=128 time=13.4 ms
64 bytes from 180.101.49.12 (180.101.49.12): icmp_seq=2 ttl=128 time=11.2 ms
64 bytes from 180.101.49.12 (180.101.49.12): icmp_seq=3 ttl=128 time=11.1 ms
^C
--- www.a.shifen.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2008ms
rtt min/avg/max/mdev = 11.135/11.967/13.476/1.068 ms
#Turn off network management
[root@master01 ~]# systemctl stop NetworkManager
[root@master01 ~]# systemctl disable NetworkManager
Removed symlink /etc/systemd/system/multi-user.target.wants/NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service.
2. Download the certificate tools, then create, generate and inspect the CA certificates
First, on the master node, install the tools for creating and inspecting certificates: cfssl, cfssljson and cfssl-certinfo.
#Create and enter a working directory
[root@master01 ~]# mkdir k8s
[root@master01 ~]# cd k8s/
#Write a script that downloads the three command tools to the target directory
#(the script itself is shown in the next code block)
[root@master01 k8s]# vim cfssl.sh
[root@master01 k8s]# bash cfssl.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 9.8M  100 9.8M     0     0    131k      0  0:01:17  0:01:17 --:--:-- 47667
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 2224k  100 2224k    0     0    823k      0  0:00:02  0:00:02 --:--:--  823k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 6440k  100 6440k    0     0    426k      0  0:00:15  0:00:15 --:--:--  523k
The download script looks like this:
[root@master01 k8s]# cat cfssl.sh
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
#Check that the commands were installed to the target directory
[root@master01 k8s]# ls /usr/local/bin/
cfssl  cfssl-certinfo  cfssljson
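Before going further it can be handy to confirm that all three tools really landed on the PATH. The helper below is my own sketch, not part of the original article; `check_tools` is a generic name I made up:

```shell
# check_tools: report any binary from the list that is not on PATH.
# Returns non-zero if anything is missing, so it can gate the next step.
check_tools() {
  local missing=0 t
  for t in "$@"; do
    if ! command -v "$t" >/dev/null 2>&1; then
      echo "missing: $t" >&2
      missing=1
    fi
  done
  return $missing
}

# Example: re-run cfssl.sh if any tool is absent
check_tools cfssl cfssljson cfssl-certinfo || echo "re-run cfssl.sh"
```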
The tools for creating the CA certificates are now in place; next you need the materials (several files) from which the certificates are created.
Create a directory for certificate generation and generate all the required certificates inside it.
[root@master01 k8s]# mkdir etcd-cert
[root@master01 k8s]# cd etcd-cert/
[root@master01 etcd-cert]# ls
#Write the materials required for certificate creation -- several files, all produced
#by a single shell script; the next code block explains the script and its parameters
[root@master01 etcd-cert]# vim etcd-cert.sh
#Execute the script
[root@master01 etcd-cert]# bash etcd-cert.sh
2020/05/03 20:05:57 [INFO] generating a new CA key and certificate from CSR
2020/05/03 20:05:57 [INFO] generate received request
2020/05/03 20:05:57 [INFO] received CSR
2020/05/03 20:05:57 [INFO] generating key: rsa-2048
2020/05/03 20:05:57 [INFO] encoded CSR
2020/05/03 20:05:57 [INFO] signed certificate with serial number 594719677485784071979843988457153533430072455164
2020/05/03 20:05:57 [INFO] generate received request
2020/05/03 20:05:57 [INFO] received CSR
2020/05/03 20:05:57 [INFO] generating key: rsa-2048
2020/05/03 20:05:58 [INFO] encoded CSR
2020/05/03 20:05:58 [INFO] signed certificate with serial number 603393813825663113730743133623713339445920555574
2020/05/03 20:05:58 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
#List the files generated by the script; their origin and contents are analyzed below
[root@master01 etcd-cert]# ls
ca-config.json  ca-csr.json  ca.pem        server.csr       server-key.pem
ca.csr          ca-key.pem   etcd-cert.sh  server-csr.json  server.pem
The shell script and related explanations follow (remove my comments after copying the script -- JSON does not allow comments, so leaving them in will cause an error):
#First json file: the CA configuration
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"     #valid for 10 years
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",     #the client may use this CA to verify certificates presented by the server
          "client auth"      #the server may use this CA to verify certificates presented by the client
        ]
      }
    }
  }
}
EOF

#Second json file: the CA certificate signing request
cat > ca-csr.json <<EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",           #asymmetric key algorithm
    "size": 2048             #key length in bits
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF

#Generate the CA from the json files above, producing ca.pem and ca-key.pem
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------
#Third json file: the server signing request; the "hosts" list must contain the IPs
#of all three etcd nodes so that communication between them can be verified
cat > server-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "192.168.0.128",
    "192.168.0.129",
    "192.168.0.130"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
EOF

#Sign the server certificate with the CA; -profile selects the "www" scenario
#defined in ca-config.json, producing server.pem and server-key.pem
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
The script is a .sh file containing three JSON files; the commands then generate four certificate/private-key files (a certificate and key for the CA, a certificate and key for the server) plus the corresponding signing requests (ca.csr and server.csr).
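A missing IP in the server certificate's SAN list only surfaces later as a TLS handshake failure, so it is worth checking now. The helper below uses plain openssl; `check_sans` is my own hypothetical name, not a cfssl command:

```shell
# check_sans: inspect a PEM certificate with openssl and fail if any of the
# expected IPs is absent from the Subject Alternative Name extension.
check_sans() {
  local cert=$1; shift
  local sans ip rc=0
  # -text prints the SAN values on the line after the extension header
  sans=$(openssl x509 -in "$cert" -noout -text | grep -A1 'Subject Alternative Name')
  for ip in "$@"; do
    if ! printf '%s\n' "$sans" | grep -q "IP Address:${ip}"; then
      echo "missing SAN: $ip" >&2
      rc=1
    fi
  done
  return $rc
}

# Example (run inside the etcd-cert directory):
# check_sans server.pem 192.168.0.128 192.168.0.129 192.168.0.130 && echo "SANs OK"
```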
The certificates we need have now been created, so we can start building the etcd cluster.
3. Build the etcd cluster from the certificates, the software package and the scripts
First we need the etcd software package. I downloaded the official release with a third-party tool; the package link is shared below for convenience.
Link: https://pan.baidu.com/s/1jMeYk2oAYNE9woDBlgnZYA
Extraction code: wwz4
The official etcd release page is also linked: etcd binary package address
https://github.com/etcd-io/etcd/releases
Now that the resources are ready, let's get to work!
First extract the package
[root@master01 k8s]# ls
cfssl.sh  etcd-cert  etcd-v3.3.10-linux-amd64.tar.gz
[root@master01 k8s]# tar zxf etcd-v3.3.10-linux-amd64.tar.gz
[root@master01 k8s]# cd etcd-v3.3.10-linux-amd64/
#Take a look at the files in the package
[root@master01 etcd-v3.3.10-linux-amd64]# ls
Documentation  etcd  etcdctl  README-etcdctl.md  README.md  READMEv2-etcdctl.md
#What we need are the two command-line tools etcd and etcdctl; they will be moved into the cluster directories we are about to create. First create the directories, then save or write the required files into them.
First, create various directories for cluster building
[root@master01 ~]# mkdir /opt/etcd/{cfg,bin,ssl} -p    #config, binary and certificate directories
[root@master01 ~]# ls -R /opt/etcd/
/opt/etcd/:
bin  cfg  ssl

/opt/etcd/bin:

/opt/etcd/cfg:

/opt/etcd/ssl:
#The directories contain no files yet
Move related files
[root@master01 etcd-v3.3.10-linux-amd64]# mv etcd etcdctl /opt/etcd/bin/
[root@master01 etcd-v3.3.10-linux-amd64]# cd /root/k8s/etcd-cert/
[root@master01 etcd-cert]# ls
ca-config.json  ca-csr.json  ca.pem        server.csr       server-key.pem
ca.csr          ca-key.pem   etcd-cert.sh  server-csr.json  server.pem
[root@master01 etcd-cert]# cp *.pem /opt/etcd/ssl
[root@master01 etcd-cert]# ls -R /opt/etcd/
/opt/etcd/:
bin  cfg  ssl

/opt/etcd/bin:
etcd  etcdctl

/opt/etcd/cfg:

/opt/etcd/ssl:
ca-key.pem  ca.pem  server-key.pem  server.pem
Now only the configuration file and the systemd service unit are still missing. Once again a shell script (you can really appreciate the power of shell scripts here) writes them both. Be sure to remove the comments before executing the script.
[root@master01 etcd-cert]# cd /opt/etcd/cfg/
[root@master01 cfg]# ls
[root@master01 cfg]# vim etcd.sh
#!/bin/bash
# example: ./etcd.sh etcd01 192.168.1.10 etcd02=https://192.168.1.11:2380,etcd03=https://192.168.1.12:2380

#Positional parameters
ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3

#Working directory
WORK_DIR=/opt/etcd

#Write the configuration file via redirection
cat <<EOF >$WORK_DIR/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"     #peer port for traffic between etcd cluster servers
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"   #client port for external access to the node

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"           #the token must be identical on every cluster node
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=${WORK_DIR}/cfg/etcd
ExecStart=${WORK_DIR}/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=${WORK_DIR}/ssl/server.pem \
--key-file=${WORK_DIR}/ssl/server-key.pem \
--peer-cert-file=${WORK_DIR}/ssl/server.pem \
--peer-key-file=${WORK_DIR}/ssl/server-key.pem \
--trusted-ca-file=${WORK_DIR}/ssl/ca.pem \
--peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd
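The third argument to etcd.sh is just a comma-separated list of name=peer-URL pairs, which is easy to mistype. A small convenience helper can assemble the string; this is my own sketch, and etcd.sh itself does not require it:

```shell
# build_cluster_arg: turn "name ip" pairs into the
# "etcd02=https://IP:2380,etcd03=https://IP:2380" string etcd.sh takes as $3.
build_cluster_arg() {
  local out="" name ip
  while [ "$#" -ge 2 ]; do
    name=$1; ip=$2; shift 2
    # prepend a comma only when out is already non-empty
    out="${out:+${out},}${name}=https://${ip}:2380"
  done
  printf '%s\n' "$out"
}

# Example invocation matching the plan in this article:
# ./etcd.sh etcd01 192.168.0.128 "$(build_cluster_arg etcd02 192.168.0.129 etcd03 192.168.0.130)"
```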
Now you can run the script (an example invocation is given at the top of the script). It will block, waiting for the other nodes to join; if no node joins within a certain period, the blocked start is terminated by a timeout.
[root@master01 cfg]# ./etcd.sh etcd01 192.168.0.128 etcd02=https://192.168.0.129:2380,etcd03=https://192.168.0.130:2380
Open another terminal to check the status of the etcd service
[root@master01 ~]# ps -ef | grep etcd
root      50829  15209  0 20:46 pts/1    00:00:00 /bin/bash ./etcd.sh etcd01 192.168.0.128 etcd02=https://192.168.0.129:2380,etcd03=https://192.168.0.130:2380
root      50874  50829  0 20:46 pts/1    00:00:00 systemctl restart etcd
root      50880      1  4 20:46 ?        00:00:01 /opt/etcd/bin/etcd --name=etcd01 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.0.128:2380 --listen-client-urls=https://192.168.0.128:2379,http://127.0.0.1:2379 --advertise-client-urls=https://192.168.0.128:2379 --initial-advertise-peer-urls=https://192.168.0.128:2380 --initial-cluster=etcd01=https://192.168.0.128:2380,etcd02=https://192.168.0.129:2380,etcd03=https://192.168.0.130:2380 --initial-cluster-token=etcd-cluster --initial-cluster-state=new --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --peer-cert-file=/opt/etcd/ssl/server.pem --peer-key-file=/opt/etcd/ssl/server-key.pem --trusted-ca-file=/opt/etcd/ssl/ca.pem --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
root      50937  50896  0 20:46 pts/2    00:00:00 grep --color=auto etcd
#If no node joins within the timeout, the blocked start is abandoned; since the other
#two nodes have not been set up yet, the service start fails with a timeout
[root@master01 cfg]# ./etcd.sh etcd01 192.168.0.128 etcd02=https://192.168.0.129:2380,etcd03=https://192.168.0.130:2380
Job for etcd.service failed because a timeout was exceeded.
See "systemctl status etcd.service" and "journalctl -xe" for details.
Next, copy the etcd directory (binaries, certificates, scripts) and the systemd service file from the master node to the corresponding directories on the two node servers; node01 serves as the example here.
[root@master01 cfg]# scp -r /opt/etcd root@192.168.0.129:/opt
The authenticity of host '192.168.0.129 (192.168.0.129)' can't be established.
ECDSA key fingerprint is SHA256:bkzGRcdP2iJrSTerWtyuqSDENF2mKLWUZHMRkzJZBFI.
ECDSA key fingerprint is MD5:b0:9b:9f:31:de:da:51:8a:d3:ff:87:86:fa:19:63:2c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.0.129' (ECDSA) to the list of known hosts.
root@192.168.0.129's password:
etcd.sh                 100% 1812     1.5MB/s   00:00
etcd                    100%  509   583.4KB/s   00:00
etcd                    100%   18MB 109.8MB/s   00:00
etcdctl                 100%   15MB 104.3MB/s   00:00
ca-key.pem              100% 1679   689.5KB/s   00:00
ca.pem                  100% 1265     1.1MB/s   00:00
server-key.pem          100% 1675     1.1MB/s   00:00
server.pem              100% 1338     2.2MB/s   00:00
[root@master01 cfg]# scp /usr/lib/systemd/system/etcd.service root@192.168.0.129:/usr/lib/systemd/system/
root@192.168.0.129's password:
etcd.service            100%  923   430.4KB/s   00:00
Next, modify the configuration on each node (mainly the etcd node name and the IP addresses in the configuration file).
[root@node01 ~]# cd /opt/etcd/cfg/
[root@node01 cfg]# ls
etcd  etcd.sh
#The file to modify is the configuration file "etcd"
[root@node01 cfg]# vim etcd
#Change the node name and the IP addresses
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.0.129:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.129:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.129:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.129:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.0.128:2380,etcd02=https://192.168.0.129:2380,etcd03=https://192.168.0.130:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
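Editing the copied config by hand on every node invites typos. Under the assumption that the file was copied from the master (name etcd01, IP 192.168.0.128), a one-pass sed rewrite can do the same job; `rewrite_etcd_cfg` is my own hypothetical helper, not part of the article's scripts:

```shell
# rewrite_etcd_cfg: set this node's name, and replace the master's IP with this
# node's IP everywhere EXCEPT the ETCD_INITIAL_CLUSTER line, which must keep
# the addresses of all three members.
rewrite_etcd_cfg() {
  local new_name=$1 new_ip=$2 cfg=$3
  sed -i \
    -e "s/^ETCD_NAME=.*/ETCD_NAME=\"${new_name}\"/" \
    -e "/^ETCD_INITIAL_CLUSTER=/!s/192\.168\.0\.128/${new_ip}/g" \
    "$cfg"
}

# Example on node01:
# rewrite_etcd_cfg etcd02 192.168.0.129 /opt/etcd/cfg/etcd
```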
Now run the cluster start script on the master again, then quickly start the etcd service on the two node servers; once they join, the master automatically leaves the blocked state.
[root@master01 cfg]# ./etcd.sh etcd01 192.168.0.128 etcd02=https://192.168.0.129:2380,etcd03=https://192.168.0.130:2380
#Start the etcd service on both nodes
[root@node01 cfg]# systemctl start etcd
[root@node02 cfg]# systemctl start etcd
Finally, check the status of the etcd cluster. If every member reports healthy, the etcd cluster has been built successfully.
[root@master01 cfg]# cd ../ssl/    #the certificates are required to query the cluster status
[root@master01 ssl]# ls
ca-key.pem  ca.pem  server-key.pem  server.pem
[root@master01 ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.0.128:2379,https://192.168.0.129:2379,https://192.168.0.130:2379" cluster-health
member a25c294d3a391c7c is healthy: got healthy result from https://192.168.0.128:2379
member b2db359ffad36ee5 is healthy: got healthy result from https://192.168.0.129:2379
member eddae83baed564ba is healthy: got healthy result from https://192.168.0.130:2379
cluster is healthy
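If the health check needs to run from a script (for example a cron-based monitor), the textual cluster-health output can be turned into an exit status. A hedged sketch assuming the v2 etcdctl output format shown above; `cluster_ok` is my own name:

```shell
# cluster_ok: read `etcdctl cluster-health` output on stdin and succeed only
# when the final verdict line is exactly "cluster is healthy" and no member
# is reported unhealthy or unreachable.
cluster_ok() {
  local out
  out=$(cat)
  printf '%s\n' "$out" | tail -n 1 | grep -qx 'cluster is healthy' &&
    ! printf '%s\n' "$out" | grep -Eq 'unhealthy|unreachable'
}

# Example:
# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem \
#   --key-file=server-key.pem --endpoints="https://192.168.0.128:2379" \
#   cluster-health | cluster_ok && echo "etcd cluster OK"
```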
OK, etcd has been built successfully; the following articles will deploy the remaining components. In this lab environment it is best to suspend the virtual machines rather than shut them down: once more and more services are running, a full shutdown and restart can cause all kinds of problems.