Preface
This article walks you through installing a Kubernetes 1.18 multi-master, high-availability cluster. The previous article covered installing a single-master high-availability cluster; if you have worked through and verified that one, you are ready to continue here with the multi-master setup. Even beginners can complete the installation as long as they follow along step by step. Now let's officially start our installation journey. There is a lot of material here, a long article full of practical detail, so feel free to bookmark it first and work through it slowly~
Why install a high availability cluster with multiple master nodes?
In a production environment, if you want a k8s cluster to run stably, you need to make it highly available. As we all know, the k8s master nodes run the apiserver, scheduler, controller-manager, etcd, coredns and other components. The apiserver handles resource requests and related processing; if the apiserver fails, the whole k8s cluster stops working. To prevent this, the master nodes must be made highly available: keepalived + LVS provides high availability and load balancing across multiple master nodes, so that when one master node goes down, the VIP drifts to another master node and continues to serve requests, keeping the k8s cluster working at all times. See below for details~
Soul soother
If you feel tired, take a moment for this thought: there is never a shortcut to where we want to go. Only those who stay down-to-earth, step by step, reach poetry and the distance!
Data download
1. GitHub address of the YAML files needed below:
https://github.com/luckylucky421/kubernetes1.17.3/tree/master
You can fork my GitHub repository into your own account so that it is permanently saved. If you cannot reach the YAML addresses provided below, clone this GitHub repository to your own computer. The repository above is named for 1.17.3, but the same files work for 1.18; there is no separate branch, so you can use it directly.
The experiments below use these YAML files. Clone or download them from the GitHub repository above to your local machine, and then transfer the YAML files to the master node of the k8s cluster. Copying and pasting the contents directly may cause problems.
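For reference, a typical way to fetch the repository and copy the YAML files to master1 might look like the following sketch; it assumes the YAML files sit at the top level of the repository, and the destination directory /root/ is only an example:
git clone https://github.com/luckylucky421/kubernetes1.17.3.git
scp kubernetes1.17.3/*.yaml root@master1:/root/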
2. The images used to initialize the k8s cluster below can be obtained from a Baidu network disk; the link is as follows:
Link: https://pan.baidu.com/s/1k1heJy8lLnDk2JEFyRyJdA Extraction code: udkj
Main text
1, Prepare the experimental environment
1. Prepare four CentOS 7 virtual machines on which to install the k8s cluster. The configuration of the four virtual machines is as follows:
master1 (192.168.0.6) configuration: OS: CentOS 7.6 or later; 4-core CPU, 6 GB memory, two 60 GB disks; network: bridged
master2 (192.168.0.16) configuration: OS: CentOS 7.6 or later; 4-core CPU, 6 GB memory, two 60 GB disks; network: bridged
master3 (192.168.0.26) configuration: OS: CentOS 7.6 or later; 4-core CPU, 6 GB memory, two 60 GB disks; network: bridged
node1 (192.168.0.56) configuration: OS: CentOS 7.6 or later; 4-core CPU, 4 GB memory, two 60 GB disks; network: bridged
2, Initialize the experimental environment
1. Configure static ip
Configure each virtual machine or physical machine with a static IP address so that the address does not change after the machine is restarted.
Note: explanation of the settings in the /etc/sysconfig/network-scripts/ifcfg-ens33 file:
NAME=ens33
#The name of the network card can be consistent with the name of the DEVICE
DEVICE=ens33
#Network card device name. You can see your own network card device name through ip addr. Everyone's machine may be different and you need to write your own
BOOTPROTO=static
#Static stands for static ip address
ONBOOT=yes
#The network is started automatically after startup. It must be yes
1.1 configure the network at the master1 node
Modify the /etc/sysconfig/network-scripts/ifcfg-ens33 file as follows:
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
IPADDR=192.168.0.6
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
DNS1=192.168.0.1
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
DEVICE=ens33
ONBOOT=yes
After modifying the configuration file, you need to restart the network service to make the configuration effective. The command to restart the network service is as follows:
service network restart
1.2 configure the network at the master2 node
Modify the /etc/sysconfig/network-scripts/ifcfg-ens33 file as follows:
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
IPADDR=192.168.0.16
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
DNS1=192.168.0.1
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
DEVICE=ens33
ONBOOT=yes
After modifying the configuration file, you need to restart the network service to make the configuration effective. The command to restart the network service is as follows:
service network restart
1.3 configure the network at the master3 node
Modify the /etc/sysconfig/network-scripts/ifcfg-ens33 file as follows:
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
IPADDR=192.168.0.26
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
DNS1=192.168.0.1
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
DEVICE=ens33
ONBOOT=yes
After modifying the configuration file, you need to restart the network service to make the configuration effective. The command to restart the network service is as follows:
service network restart
Note: ifcfg-ens33 file configuration explanation:
IPADDR=192.168.0.6 #The IP address must be in the same network segment as your computer
NETMASK=255.255.255.0 #The subnet mask must match your computer's network segment
GATEWAY=192.168.0.1 #Gateway; open cmd on your computer and run ipconfig /all to find it
DNS1=192.168.0.1 #DNS; open cmd on your computer and run ipconfig /all to find it
1.4 configure the network at node1 node
Modify the /etc/sysconfig/network-scripts/ifcfg-ens33 file as follows:
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
IPADDR=192.168.0.56
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
DNS1=192.168.0.1
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
DEVICE=ens33
ONBOOT=yes
After modifying the configuration file, you need to restart the network service to make the configuration effective. The command to restart the network service is as follows:
service network restart
2. Modify the yum source and operate on each node
(1) Back up the original yum source
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
(2) Download Alibaba's yum source
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
(3) Generate new yum cache
yum makecache fast
(4) Configure yum source for installation k8s
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF
(5) Clean up yum cache
yum clean all
(6) Generate new yum cache
yum makecache fast
(7) Update yum source
yum -y update
(8) Install package
yum -y install yum-utils device-mapper-persistent-data lvm2
(9) Add new software source
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Clean up the cache as follows to generate new yum source data
yum clean all
yum makecache fast
3. Install the basic software package and operate each node
yum -y install wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate
4. Turn off the firewalld firewall and operate on all nodes. CentOS 7 uses firewalld by default; stop the firewalld service and disable it:
systemctl stop firewalld && systemctl disable firewalld
5. Install iptables and operate on each node. If you are not used to firewalld, you can install iptables. This step can be omitted according to your actual needs
5.1 installing iptables
yum install iptables-services -y
5.2 disable iptables
service iptables stop && systemctl disable iptables
6. Time synchronization, operation of each node
6.1 time synchronization
ntpdate cn.pool.ntp.org
6.2 edit scheduled tasks and synchronize them every hour
1)crontab -e
0 */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org
2) Restart the crond service process:
service crond restart
7. Close selinux and operate on each node
Close selinux and disable it permanently, so that it stays off after the machine is restarted.
Modify the /etc/sysconfig/selinux and /etc/selinux/config files:
Change SELINUX=enforcing to SELINUX=disabled. You can also make the change with the following commands:
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
After the above file is modified, the virtual machine needs to be restarted, which can be forcibly restarted:
reboot -f
8. Turn off the swap partition and operate on each node
swapoff -a
Permanently disable it. Open / etc/fstab and comment out the swap line.
sed -i 's/.*swap.*/#&/' /etc/fstab
9. Modify kernel parameters and operate each node
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
10. Modify host name
On 192.168.0.6:
hostnamectl set-hostname master1
On 192.168.0.16:
hostnamectl set-hostname master2
On 192.168.0.26:
hostnamectl set-hostname master3
On 192.168.0.56:
hostnamectl set-hostname node1
11. Configure the hosts file for each node operation
Add the following lines to the / etc/hosts file:
192.168.0.6 master1
192.168.0.16 master2
192.168.0.26 master3
192.168.0.56 node1
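For convenience, the same entries can be appended on every node with a single heredoc; this is just an equivalent shortcut to editing /etc/hosts by hand:
cat >> /etc/hosts <<EOF
192.168.0.6 master1
192.168.0.16 master2
192.168.0.26 master3
192.168.0.56 node1
EOF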
12. Configure passwordless login from master1 to node1, and from master1 to master2 and master3
Operate on master1
ssh-keygen -t rsa
#Just keep pressing Enter
ssh-copy-id -i .ssh/id_rsa.pub root@master2
#Type yes when prompted, then enter the root password of the master2 machine
ssh-copy-id -i .ssh/id_rsa.pub root@master3
#Type yes when prompted, then enter the root password of the master3 machine
ssh-copy-id -i .ssh/id_rsa.pub root@node1
#Type yes when prompted, then enter the root password of the node1 machine
3, Install the Kubernetes 1.18.2 high availability cluster
1. Install Docker 19.03 and operate on each node
1.1 view the supported docker versions
yum list docker-ce --showduplicates |sort -r
1.2 install version 19.03.7
yum install -y docker-ce-19.03.7-3.el7
systemctl enable docker && systemctl start docker
#Check the docker status. If the status is active (running), it indicates that the docker is running normally
systemctl status docker
1.3 modify docker configuration file
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
1.4 restart docker to make the configuration effective
systemctl daemon-reload && systemctl restart docker
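To confirm that the cgroup driver change in daemon.json took effect, an optional quick check is:
docker info | grep -i cgroup
#Should report: Cgroup Driver: systemd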
1.5 make bridged traffic pass through iptables and make the kernel settings take effect permanently
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
echo """
vm.swappiness = 0
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
""" > /etc/sysctl.conf
sysctl -p
1.6 enable IPVS. If IPVS is not enabled, kube-proxy falls back to iptables mode, which is less efficient, so the official documentation recommends loading the IPVS kernel modules
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in \${ipvs_modules}; do
  /sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
  if [ \$? -eq 0 ]; then
    /sbin/modprobe \${kernel_module}
  fi
done
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
2. Install Kubernetes 1.18.2
2.1 install kubeadm and kubelet on master1, master2, master3 and node1
yum install kubeadm-1.18.2 kubelet-1.18.2 -y
systemctl enable kubelet
2.2 upload the images to the master1, master2, master3 and node1 nodes, then load them manually with docker load -i as shown below. The images are on the Baidu network disk linked at the top of the article; I pulled them from the official registry, so you can use them with confidence.
docker load -i 1-18-kube-apiserver.tar.gz
docker load -i 1-18-kube-scheduler.tar.gz
docker load -i 1-18-kube-controller-manager.tar.gz
docker load -i 1-18-pause.tar.gz
docker load -i 1-18-cordns.tar.gz
docker load -i 1-18-etcd.tar.gz
docker load -i 1-18-kube-proxy.tar.gz
Explanation:
The pause version is 3.2; the image used is k8s.gcr.io/pause:3.2
The etcd version is 3.4.3; the image used is k8s.gcr.io/etcd:3.4.3-0
The coredns version is 1.6.7; the image used is k8s.gcr.io/coredns:1.6.7
The apiserver, scheduler, controller-manager and kube-proxy versions are 1.18.2; the images used are:
k8s.gcr.io/kube-apiserver:v1.18.2
k8s.gcr.io/kube-controller-manager:v1.18.2
k8s.gcr.io/kube-scheduler:v1.18.2
k8s.gcr.io/kube-proxy:v1.18.2
Why load the images manually?
1) Many students' companies run in an intranet environment and cannot reach the Docker Hub registry, so we upload the images to each machine and load them manually. Many students ask: what if there are many machines? Doesn't copying the images to each one take a lot of time? Indeed, with many machines you only need to push these images to an internal private image registry; then, when kubeadm initializes Kubernetes, it can pull the images via "--image-repository=<private registry address>". That way there is no need to copy the images to every machine by hand. This is described later.
2) The images stored on the Baidu network disk can be used permanently, in case the official images stop being maintained and can no longer be downloaded; students with a private registry can also push these images into their own private image registry.
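As a sketch of the private-registry approach mentioned above (the registry address registry.example.com:5000 is a placeholder, not part of this setup), re-tagging and pushing one of the loaded images would look like this:
#Hypothetical internal registry address; replace with your own
REGISTRY=registry.example.com:5000
docker tag k8s.gcr.io/kube-apiserver:v1.18.2 ${REGISTRY}/kube-apiserver:v1.18.2
docker push ${REGISTRY}/kube-apiserver:v1.18.2
#kubeadm can then pull from it with --image-repository=${REGISTRY}, or the imageRepository field shown in Section 2.4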
2.3 deploy keepalived + LVS to make the master nodes highly available - apiserver high availability
(1) Install keepalived + LVS and operate on each master node
yum install -y socat keepalived ipvsadm conntrack
(2) Modify the keepalived.conf file of master1 as follows
Modify /etc/keepalived/keepalived.conf
The keepalived.conf of the master1 node after modification is as follows:
global_defs {
   router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state BACKUP
    nopreempt
    interface ens33
    virtual_router_id 80
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass just0kk
    }
    virtual_ipaddress {
        192.168.0.199
    }
}
virtual_server 192.168.0.199 6443 {
    delay_loop 6
    lb_algo loadbalance
    lb_kind DR
    net_mask 255.255.255.0
    persistence_timeout 0
    protocol TCP
    real_server 192.168.0.6 6443 {
        weight 1
        SSL_GET {
            url {
                path /healthz
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.0.16 6443 {
        weight 1
        SSL_GET {
            url {
                path /healthz
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.0.26 6443 {
        weight 1
        SSL_GET {
            url {
                path /healthz
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
(3) Modify the keepalived.conf file of master2 as follows
Modify /etc/keepalived/keepalived.conf
The keepalived.conf of the master2 node after modification is as follows:
global_defs {
   router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state BACKUP
    nopreempt
    interface ens33
    virtual_router_id 80
    priority 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass just0kk
    }
    virtual_ipaddress {
        192.168.0.199
    }
}
virtual_server 192.168.0.199 6443 {
    delay_loop 6
    lb_algo loadbalance
    lb_kind DR
    net_mask 255.255.255.0
    persistence_timeout 0
    protocol TCP
    real_server 192.168.0.6 6443 {
        weight 1
        SSL_GET {
            url {
                path /healthz
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.0.16 6443 {
        weight 1
        SSL_GET {
            url {
                path /healthz
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.0.26 6443 {
        weight 1
        SSL_GET {
            url {
                path /healthz
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
(4) Modify the keepalived.conf file of master3 as follows
Modify /etc/keepalived/keepalived.conf
The keepalived.conf of the master3 node after modification is as follows:
global_defs {
   router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state BACKUP
    nopreempt
    interface ens33
    virtual_router_id 80
    priority 30
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass just0kk
    }
    virtual_ipaddress {
        192.168.0.199
    }
}
virtual_server 192.168.0.199 6443 {
    delay_loop 6
    lb_algo loadbalance
    lb_kind DR
    net_mask 255.255.255.0
    persistence_timeout 0
    protocol TCP
    real_server 192.168.0.6 6443 {
        weight 1
        SSL_GET {
            url {
                path /healthz
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.0.16 6443 {
        weight 1
        SSL_GET {
            url {
                path /healthz
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.0.26 6443 {
        weight 1
        SSL_GET {
            url {
                path /healthz
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
Important knowledge point - read this, or you will hit serious pits in production:
keepalived must be configured with state BACKUP and the non-preemptive mode nopreempt. Suppose master1 goes down: after it comes back up, the VIP will not automatically drift back to master1, which keeps the k8s cluster healthy at all times. When master1 boots, the apiserver and the other components are not running immediately; if the VIP drifted back to master1 at that moment, the whole cluster would go down. That is why the non-preemptive mode must be configured.
The startup sequence is master1 -> master2 -> master3. Execute the following command on master1, master2 and master3:
systemctl enable keepalived && systemctl start keepalived && systemctl status keepalived
After keepalived is started successfully, you can see that vip has been bound to the network card ens33 through ip addr on master1
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:9d:7b:09 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.6/24 brd 192.168.0.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.0.199/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::e2f9:94cd:c994:34d9/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:61:b0:6f:ca brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
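You can also confirm that LVS has registered the virtual server for the VIP. This is an optional sanity check on master1; the exact output depends on your environment:
ipvsadm -Ln
#Once the apiservers are up, 192.168.0.199:6443 should list the healthy masters as real servers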
2.4 initialize k8s cluster on master1 node. The operations on master1 are as follows
If you uploaded the images to each node manually as described in Section 2.2, initialize the cluster with the following YAML file. Make sure the images have been uploaded and loaded on every machine this way, so that the later experiments work correctly.
cat kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.2
controlPlaneEndpoint: 192.168.0.199:6443
apiServer:
  certSANs:
  - 192.168.0.6
  - 192.168.0.16
  - 192.168.0.26
  - 192.168.0.56
  - 192.168.0.199
networking:
  podSubnet: 10.244.0.0/16
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

kubeadm init --config kubeadm-config.yaml
Note: if you did not upload the images to each node as described in Section 2.2, use the following YAML file, which adds the parameter imageRepository: registry.aliyuncs.com/google_containers so that the images are pulled from the Alibaba Cloud mirror, which we can reach directly. This approach is simpler, but we only note it here for understanding and do not use it first; otherwise there may be problems later when manually adding nodes to the k8s cluster.
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.2
controlPlaneEndpoint: 192.168.0.199:6443
imageRepository: registry.aliyuncs.com/google_containers
apiServer:
  certSANs:
  - 192.168.0.6
  - 192.168.0.16
  - 192.168.0.26
  - 192.168.0.56
  - 192.168.0.199
networking:
  podSubnet: 10.244.0.0/16
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

kubeadm init --config kubeadm-config.yaml

After the initialization command is executed successfully, the following output is displayed, indicating that the initialization succeeded:

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.0.199:6443 --token 7dwluq.x6nypje7h55rnrhl \
    --discovery-token-ca-cert-hash sha256:fa75619ab0bb6273126350a9dbda9aa6c89828c2c4650299fe1647ab510a7e6c \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

  kubeadm join 192.168.0.199:6443 --token 7dwluq.x6nypje7h55rnrhl \
    --discovery-token-ca-cert-hash sha256:fa75619ab0bb6273126350a9dbda9aa6c89828c2c4650299fe1647ab510a7e6c
Note: remember the kubeadm join ... command; we need to run it on master2, master3 and node1 to join them to the cluster. The output is different every time this command is generated, so record the result of your own run. It is used below.
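If the join command is lost or the token has expired (kubeadm tokens are valid for 24 hours by default), a new worker join command can be generated on master1. This is a standard kubeadm helper, shown here only as an optional aid:
kubeadm token create --print-join-command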
2.5 on the master1 node, execute the following steps to have permission to operate k8s resources
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Execute on the master1 node
kubectl get nodes
As shown below, the master1 node is NotReady
NAME      STATUS     ROLES    AGE     VERSION
master1   NotReady   master   8m11s   v1.18.2

kubectl get pods -n kube-system
As shown below, you can see that coredns is also in Pending status
coredns-7ff77c879f-j48h6   0/1   Pending   0   3m16s
coredns-7ff77c879f-lrb77   0/1   Pending   0   3m16s
As can be seen from the above, the node STATUS is NotReady and coredns is Pending because no network plug-in is installed yet; Calico or Flannel needs to be installed. Next we install the Calico network plug-in on the master1 node:
The images required to install Calico are quay.io/calico/cni:v3.5.3 and quay.io/calico/node:v3.5.3; they are on the Baidu network disk linked at the beginning of the article.
Manually upload the archives of these two images to each node and load them with docker load -i:
docker load -i cni.tar.gz
docker load -i calico-node.tar.gz
On the master1 node, execute the following steps:
kubectl apply -f calico.yaml
calico.yaml file content is at the address provided below. Open the following link to copy the content:
https://raw.githubusercontent.com/luckylucky421/kubernetes1.17.3/master/calico.yaml
If you cannot open the link above, visit the following GitHub address, clone or download the repository, unzip it, and then transfer the file to the master1 node:
https://github.com/luckylucky421/kubernetes1.17.3/tree/master
Execute on the master1 node
kubectl get nodes
As shown below, you can see that the STATUS is Ready
NAME      STATUS   ROLES    AGE   VERSION
master1   Ready    master   98m   v1.18.2

kubectl get pods -n kube-system
coredns is now in Running status as well, which indicates that the Calico installation on the master1 node is complete
NAME                              READY   STATUS    RESTARTS   AGE
calico-node-6rvqm                 1/1     Running   0          17m
coredns-7ff77c879f-j48h6          1/1     Running   0          97m
coredns-7ff77c879f-lrb77          1/1     Running   0          97m
etcd-master1                      1/1     Running   0          97m
kube-apiserver-master1            1/1     Running   0          97m
kube-controller-manager-master1   1/1     Running   0          97m
kube-proxy-njft6                  1/1     Running   0          97m
kube-scheduler-master1            1/1     Running   0          97m
2.6 copy the certificate of master1 node to master2 and master3
(1) Create a certificate storage directory on master2 and master3
cd /root && mkdir -p /etc/kubernetes/pki/etcd && mkdir -p ~/.kube/
Copy the certificates from master1 to master2 and master3; operate on master1 as follows. The scp commands below are best run line by line so that no errors occur:
scp /etc/kubernetes/pki/ca.crt master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.key master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.key master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.pub master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.crt master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.key master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.crt master2:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/ca.key master2:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/ca.crt master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.key master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.key master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.pub master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.crt master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.key master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.crt master3:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/ca.key master3:/etc/kubernetes/pki/etcd/
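Equivalently, a small loop can copy the same file list to both masters; this is just a convenience sketch that relies on the passwordless SSH configured earlier:
for host in master2 master3; do
  for f in ca.crt ca.key sa.key sa.pub front-proxy-ca.crt front-proxy-ca.key; do
    scp /etc/kubernetes/pki/$f $host:/etc/kubernetes/pki/
  done
  scp /etc/kubernetes/pki/etcd/ca.crt /etc/kubernetes/pki/etcd/ca.key $host:/etc/kubernetes/pki/etcd/
done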
After the certificates are copied, run the kubeadm join command you recorded earlier on master2 and master3 (each node runs its own copy of the command) so that master2 and master3 join the cluster as control-plane nodes:
kubeadm join 192.168.0.199:6443 --token 7dwluq.x6nypje7h55rnrhl \
    --discovery-token-ca-cert-hash sha256:fa75619ab0bb6273126350a9dbda9aa6c89828c2c4650299fe1647ab510a7e6c --control-plane
--control-plane: this flag indicates that the node joins the k8s cluster as a master (control-plane) node
On master2 and master3:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes
The display is as follows:
NAME      STATUS   ROLES    AGE     VERSION
master1   Ready    master   39m     v1.18.2
master2   Ready    master   5m9s    v1.18.2
master3   Ready    master   2m33s   v1.18.2
2.7 add node1 node to k8s cluster and operate on node1 node
kubeadm join 192.168.0.199:6443 --token 7dwluq.x6nypje7h55rnrhl \
    --discovery-token-ca-cert-hash sha256:fa75619ab0bb6273126350a9dbda9aa6c89828c2c4650299fe1647ab510a7e6c
Note: the kubeadm join command used to add the node is the one generated during initialization in Section 2.4
2.8 view the cluster node status on the master1 node
kubectl get nodes
As shown below
NAME      STATUS   ROLES    AGE     VERSION
master1   Ready    master   3m36s   v1.18.2
master2   Ready    master   3m36s   v1.18.2
master3   Ready    master   3m36s   v1.18.2
node1     Ready    <none>   3m36s   v1.18.2
This shows that node1 has also joined the k8s cluster. With that, the multi-master high-availability k8s cluster is built.
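To convince yourself that the HA setup behaves as described, you can optionally simulate a master1 outage and watch the VIP drift; this is only a test sketch, not part of the installation (remember to start keepalived again afterwards):
#On master1: stop keepalived to simulate a failure
systemctl stop keepalived
#On master2 or master3: the VIP 192.168.0.199 should now appear on ens33
ip addr show ens33 | grep 192.168.0.199
#The apiserver should still answer through the VIP
kubectl get nodes
#On master1: restore keepalived
systemctl start keepalived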
2.9 installation of traefik
Upload the traefik image to each node and load it with docker load -i as shown below. The image is on the Baidu network disk linked at the beginning of the article and can be downloaded from there.
docker load -i traefik_1_7_9.tar.gz
The image used by traefik is k8s.gcr.io/traefik:1.7.9
1) Generate traefik certificate and operate on master1
mkdir ~/ikube/tls/ -p
echo """
[req]
distinguished_name = req_distinguished_name
prompt = yes
[ req_distinguished_name ]
countryName                  = Country Name (2 letter code)
countryName_value            = CN
stateOrProvinceName          = State or Province Name (full name)
stateOrProvinceName_value    = Beijing
localityName                 = Locality Name (eg, city)
localityName_value           = Haidian
organizationName             = Organization Name (eg, company)
organizationName_value       = Channelsoft
organizationalUnitName       = Organizational Unit Name (eg, p)
organizationalUnitName_value = R & D Department
commonName                   = Common Name (eg, your name or your server\'s hostname)
commonName_value             = *.multi.io
emailAddress                 = Email Address
emailAddress_value           = lentil1016@gmail.com
""" > ~/ikube/tls/openssl.cnf
openssl req -newkey rsa:4096 -nodes -config ~/ikube/tls/openssl.cnf -days 3650 -x509 -out ~/ikube/tls/tls.crt -keyout ~/ikube/tls/tls.key
kubectl create -n kube-system secret tls ssl --cert ~/ikube/tls/tls.crt --key ~/ikube/tls/tls.key
2) Execute yaml file and create traefik
kubectl apply -f traefik.yaml
The content of the traefik.yaml file can be copied from the following link:
https://raw.githubusercontent.com/luckylucky421/kubernetes1.17.3/master/traefik.yaml
3) Check whether traefik is successfully deployed:
kubectl get pods -n kube-system
As shown below, the deployment is successful
traefik-ingress-controller-csbp8   1/1   Running   0   5s
traefik-ingress-controller-hqkwf   1/1   Running   0   5s
traefik-ingress-controller-wtjqd   1/1   Running   0   5s
3. Install version 2.0 of kubernetes dashboard (web ui interface of kubernetes)
Upload the kubernetes dashboard images to each node and load them with docker load -i as shown below. The images are on the Baidu network disk linked at the beginning of the article and can be downloaded from there.
docker load -i dashboard_2_0_0.tar.gz
docker load -i metrics-scrapter-1-0-1.tar.gz
Operate on the master1 node
kubectl apply -f kubernetes-dashboard.yaml
The content of the kubernetes-dashboard.yaml file can be copied from the following link: https://raw.githubusercontent.com/luckylucky421/kubernetes1.17.3/master/kubernetes-dashboard.yaml
If you can't access the above, you can visit the following link, clone and download the following branches, and manually transfer the yaml file to master1:
https://github.com/luckylucky421/kubernetes1.17.3
Check whether the dashboard is successfully installed:
kubectl get pods -n kubernetes-dashboard
As shown below, dashboard is installed successfully
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-694557449d-8xmtf   1/1     Running   0          60s
kubernetes-dashboard-5f98bdb684-ph9wg        1/1     Running   2          60s
View the Service in front of the dashboard
kubectl get svc -n kubernetes-dashboard
The display is as follows:
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   10.100.23.9      <none>        8000/TCP   50s
kubernetes-dashboard        ClusterIP   10.105.253.155   <none>        443/TCP    50s
Change the service type to NodePort:
kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
Change type: ClusterIP to type: NodePort, save and exit
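If you prefer not to edit the Service interactively, an equivalent one-liner (shown as an alternative, not the article's original step) is:
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'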
kubectl get svc -n kubernetes-dashboard
The display is as follows:
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.100.23.9      <none>        8000/TCP        3m59s
kubernetes-dashboard        NodePort    10.105.253.155   <none>        443:31175/TCP   4m
You can see that the Service type is now NodePort. The kubernetes dashboard can be reached via port 31175 on the master1 node's IP. In my environment the address is:
https://192.168.0.6:31175/
You can see that the dashboard interface appears
3.1 log in to the dashboard through the default token specified in the yaml file
1) View the secrets under the kubernetes-dashboard namespace
kubectl get secret -n kubernetes-dashboard
The display is as follows:
NAME                               TYPE                                  DATA   AGE
default-token-vxd7t                kubernetes.io/service-account-token   3      5m27s
kubernetes-dashboard-certs         Opaque                                0      5m27s
kubernetes-dashboard-csrf          Opaque                                1      5m27s
kubernetes-dashboard-key-holder    Opaque                                2      5m27s
kubernetes-dashboard-token-ngcmg   kubernetes.io/service-account-token   3      5m27s
2) Find the secret kubernetes-dashboard-token-ngcmg, which contains the token
kubectl describe secret kubernetes-dashboard-token-ngcmg -n kubernetes-dashboard
The display is as follows:
...
...
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IjZUTVVGMDN4enFTREpqV0s3cDRWa254cTRPc2xPRTZ3bk8wcFJBSy1JSzgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1uZ2NtZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImYwMDFhNTM0LWE2ZWQtNGQ5MC1iMzdjLWMxMWU5Njk2MDE0MCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.WQFE0ygYdKkUjaQjFFU-BeWqys07J98N24R_azv6f-o9AB8Zy1bFWZcNrOlo6WYQuh-xoR8tc5ZDuLQlnZMBSwl2jo9E9FLZuEt7klTfXf4TkrQGLCxzDMD5c2nXbdDdLDtRbSwQMcQwePwp5WTAfuLyqJPFs22Xi2awpLRzbHn3ei_czNuamWUuoGHe6kP_rTnu6OUpVf1txi9C1Tg_3fM2ibNy-NWXLvrxilG3x3SbW1A3G6Y2Vbt1NxqVNtHRRQsYCvTnp3NZQqotV0-TxnvRJ3SLo_X6oxdUVnqt3DZgebyIbmg3wvgAzGmuSLlqMJ-mKQ7cNYMFR2Z8vnhhtA
Remember the value after the token, and copy the following token value to the browser token login to log in:
eyJhbGciOiJSUzI1NiIsImtpZCI6IjZUTVVGMDN4enFTREpqV0s3cDRWa254cTRPc2xPRTZ3bk8wcFJBSy1JSzgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1uZ2NtZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImYwMDFhNTM0LWE2ZWQtNGQ5MC1iMzdjLWMxMWU5Njk2MDE0MCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.WQFE0ygYdKkUjaQjFFU-BeWqys07J98N24R_azv6f-o9AB8Zy1bFWZcNrOlo6WYQuh-xoR8tc5ZDuLQlnZMBSwl2jo9E9FLZuEt7klTfXf4TkrQGLCxzDMD5c2nXbdDdLDtRbSwQMcQwePwp5WTAfuLyqJPFs22Xi2awpLRzbHn3ei_czNuamWUuoGHe6kP_rTnu6OUpVf1txi9C1Tg_3fM2ibNy-NWXLvrxilG3x3SbW1A3G6Y2Vbt1NxqVNtHRRQsYCvTnp3NZQqotV0-TxnvRJ3SLo_X6oxdUVnqt3DZgebyIbmg3wvgAzGmuSLlqMJ-mKQ7cNYMFR2Z8vnhhtA
Click Sign in to log in. The display is as follows; by default you can only see resources in the default namespace.
3.2 grant administrator permissions so the token can view resources in any namespace
kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:kubernetes-dashboard
1) View the secrets under the kubernetes-dashboard namespace
kubectl get secret -n kubernetes-dashboard
The display is as follows:
NAME                               TYPE                                  DATA   AGE
default-token-vxd7t                kubernetes.io/service-account-token   3      5m27s
kubernetes-dashboard-certs         Opaque                                0      5m27s
kubernetes-dashboard-csrf          Opaque                                1      5m27s
kubernetes-dashboard-key-holder    Opaque                                2      5m27s
kubernetes-dashboard-token-ngcmg   kubernetes.io/service-account-token   3      5m27s
2) Find the secret kubernetes-dashboard-token-ngcmg, which contains the token
kubectl describe secret kubernetes-dashboard-token-ngcmg -n kubernetes-dashboard
The display is as follows:
...
...
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IjZUTVVGMDN4enFTREpqV0s3cDRWa254cTRPc2xPRTZ3bk8wcFJBSy1JSzgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1uZ2NtZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImYwMDFhNTM0LWE2ZWQtNGQ5MC1iMzdjLWMxMWU5Njk2MDE0MCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.WQFE0ygYdKkUjaQjFFU-BeWqys07J98N24R_azv6f-o9AB8Zy1bFWZcNrOlo6WYQuh-xoR8tc5ZDuLQlnZMBSwl2jo9E9FLZuEt7klTfXf4TkrQGLCxzDMD5c2nXbdDdLDtRbSwQMcQwePwp5WTAfuLyqJPFs22Xi2awpLRzbHn3ei_czNuamWUuoGHe6kP_rTnu6OUpVf1txi9C1Tg_3fM2ibNy-NWXLvrxilG3x3SbW1A3G6Y2Vbt1NxqVNtHRRQsYCvTnp3NZQqotV0-TxnvRJ3SLo_X6oxdUVnqt3DZgebyIbmg3wvgAzGmuSLlqMJ-mKQ7cNYMFR2Z8vnhhtA
Remember the value after the token, and copy the following token value to the browser token login to log in:
eyJhbGciOiJSUzI1NiIsImtpZCI6IjZUTVVGMDN4enFTREpqV0s3cDRWa254cTRPc2xPRTZ3bk8wcFJBSy1JSzgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1uZ2NtZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImYwMDFhNTM0LWE2ZWQtNGQ5MC1iMzdjLWMxMWU5Njk2MDE0MCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.WQFE0ygYdKkUjaQjFFU-BeWqys07J98N24R_azv6f-o9AB8Zy1bFWZcNrOlo6WYQuh-xoR8tc5ZDuLQlnZMBSwl2jo9E9FLZuEt7klTfXf4TkrQGLCxzDMD5c2nXbdDdLDtRbSwQMcQwePwp5WTAfuLyqJPFs22Xi2awpLRzbHn3ei_czNuamWUuoGHe6kP_rTnu6OUpVf1txi9C1Tg_3fM2ibNy-NWXLvrxilG3x3SbW1A3G6Y2Vbt1NxqVNtHRRQsYCvTnp3NZQqotV0-TxnvRJ3SLo_X6oxdUVnqt3DZgebyIbmg3wvgAzGmuSLlqMJ-mKQ7cNYMFR2Z8vnhhtA
Click Sign in to log in, as shown below. This time you can see and operate resources in any namespace.
4. Install plug-ins related to metrics monitoring
Upload the metrics-server-amd64_0_3_1.tar.gz and addon.tar.gz images to each node and load them with docker load -i as shown below. The images are on the Baidu network disk linked at the beginning of the article and can be downloaded from there.
docker load -i metrics-server-amd64_0_3_1.tar.gz
docker load -i addon.tar.gz
The metrics-server version is 0.3.1; the image used is k8s.gcr.io/metrics-server-amd64:v0.3.1
The addon-resizer version is 1.8.4; the image used is k8s.gcr.io/addon-resizer:1.8.4
Operate on the master1 node
kubectl apply -f metrics.yaml
The content of the metrics.yaml file can be copied from the following link:
https://raw.githubusercontent.com/luckylucky421/kubernetes1.17.3/master/metrics.yaml
If you can't access the above, you can access the following links, clone and download the following branches, and manually transfer the yaml file to master1 for normal use:
https://github.com/luckylucky421/kubernetes1.17.3
After the above components are installed, run kubectl get pods -n kube-system -o wide to check whether they are running normally. A STATUS of Running means the component is healthy, as shown below:
NAME                                    READY   STATUS    RESTARTS   AGE
calico-node-6rvqm                       1/1     Running   10         14h
calico-node-cbrvw                       1/1     Running   4          14h
calico-node-l6628                       0/1     Running   0          9h
coredns-7ff77c879f-j48h6                1/1     Running   2          16h
coredns-7ff77c879f-lrb77                1/1     Running   2          16h
etcd-master1                            1/1     Running   37         16h
etcd-master2                            1/1     Running   7          9h
kube-apiserver-master1                  1/1     Running   52         16h
kube-apiserver-master2                  1/1     Running   11         14h
kube-controller-manager-master1         1/1     Running   42         16h
kube-controller-manager-master2         1/1     Running   13         14h
kube-proxy-dq6vc                        1/1     Running   2          14h
kube-proxy-njft6                        1/1     Running   2          16h
kube-proxy-stv52                        1/1     Running   0          9h
kube-scheduler-master1                  1/1     Running   37         16h
kube-scheduler-master2                  1/1     Running   15         14h
kubernetes-dashboard-85f499b587-dbf72   1/1     Running   1          8h
metrics-server-8459f8db8c-5p59m         2/2     Running   0          33s
traefik-ingress-controller-csbp8        1/1     Running   0          8h
traefik-ingress-controller-hqkwf        1/1     Running   0          8h
traefik-ingress-controller-wtjqd        1/1     Running   0          8h
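Once metrics-server is running, you can optionally verify that resource metrics are being collected (it may take a minute or two before data appears):
kubectl top nodes
kubectl top pods -n kube-system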