1. Environment
The files required by this document are available at: https://github.com/cby-chen/Kubernetes/releases/tag/cby
Host name | IP address | Role | Software |
---|---|---|---|
Master01 | 192.168.1.30 | master node | kube-apiserver,kube-controller-manager,kube-scheduler,etcd,kubelet,kube-proxy,nfs-client |
Master02 | 192.168.1.31 | master node | kube-apiserver,kube-controller-manager,kube-scheduler,etcd,kubelet,kube-proxy,nfs-client |
Master03 | 192.168.1.32 | master node | kube-apiserver,kube-controller-manager,kube-scheduler,etcd,kubelet,kube-proxy,nfs-client |
Node01 | 192.168.1.33 | worker node | kubelet,kube-proxy,nfs-client |
Node02 | 192.168.1.34 | worker node | kubelet,kube-proxy,nfs-client |
Node03 | 192.168.1.35 | worker node | kubelet,kube-proxy,nfs-client |
Node04 | 192.168.1.36 | worker node | kubelet,kube-proxy,nfs-client |
Node05 | 192.168.1.37 | worker node | kubelet,kube-proxy,nfs-client |
Lb01 | 192.168.1.38 | load balancer | haproxy,keepalived |
Lb02 | 192.168.1.39 | load balancer | haproxy,keepalived |
VIP | 192.168.1.88 | floating virtual IP for the load balancers | |
Software | Version |
---|---|
kernel | 5.16.7-1.el8.elrepo.x86_64 |
CentOS 8 | v8 |
kube-apiserver,kube-controller-manager,kube-scheduler,kubelet,kube-proxy | v1.23.4 |
etcd | v3.5.2 |
docker-ce | v20.10.9 |
containerd | v1.6.0 |
cfssl | v1.6.1 |
cni | v1.0.1 |
crictl | v1.23.0 |
haproxy | v1.8.27 |
keepalived | v2.1.5 |
Network segment
Physical host: 192.168.1.0/24
service: 10.96.0.0/12
pod: 172.16.0.0/12
If resources allow, it is recommended to run the etcd cluster on machines separate from the k8s cluster.
1.1. k8s basic system environment configuration
1.2. Configure IP
ssh root@192.168.1.76 "nmcli con mod ens18 ipv4.addresses 192.168.1.30/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns 8.8.8.8; nmcli con up ens18"
ssh root@192.168.1.77 "nmcli con mod ens18 ipv4.addresses 192.168.1.31/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns 8.8.8.8; nmcli con up ens18"
ssh root@192.168.1.78 "nmcli con mod ens18 ipv4.addresses 192.168.1.32/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns 8.8.8.8; nmcli con up ens18"
ssh root@192.168.1.79 "nmcli con mod ens18 ipv4.addresses 192.168.1.33/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns 8.8.8.8; nmcli con up ens18"
ssh root@192.168.1.80 "nmcli con mod ens18 ipv4.addresses 192.168.1.34/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns 8.8.8.8; nmcli con up ens18"
ssh root@192.168.1.86 "nmcli con mod ens18 ipv4.addresses 192.168.1.35/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns 8.8.8.8; nmcli con up ens18"
ssh root@192.168.1.87 "nmcli con mod ens18 ipv4.addresses 192.168.1.36/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns 8.8.8.8; nmcli con up ens18"
ssh root@192.168.1.166 "nmcli con mod ens18 ipv4.addresses 192.168.1.37/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns 8.8.8.8; nmcli con up ens18"
ssh root@192.168.1.100 "nmcli con mod ens18 ipv4.addresses 192.168.1.38/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns 8.8.8.8; nmcli con up ens18"
ssh root@192.168.1.191 "nmcli con mod ens18 ipv4.addresses 192.168.1.39/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns 8.8.8.8; nmcli con up ens18"
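A quick way to confirm that every host picked up its new address (an optional sketch, not part of the original steps; adjust the IP list to your environment):

for IP in 192.168.1.3{0..9}; do
    # -c 1: one probe, -W 1: one-second timeout per host
    ping -c 1 -W 1 $IP >/dev/null 2>&1 && echo "$IP is up" || echo "$IP is DOWN"
done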
1.3. Set host name
# Run the matching command on each host
hostnamectl set-hostname k8s-master01
hostnamectl set-hostname k8s-master02
hostnamectl set-hostname k8s-master03
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02
hostnamectl set-hostname k8s-node03
hostnamectl set-hostname k8s-node04
hostnamectl set-hostname k8s-node05
hostnamectl set-hostname lb01
hostnamectl set-hostname lb02
1.4. Configure yum source
sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \
         -e 's|^#baseurl=http://mirror.centos.org/$contentdir|baseurl=http://192.168.1.123/centos|g' \
         -i.bak \
         /etc/yum.repos.d/CentOS-*.repo
1.5. Install some necessary tools
yum -y install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git network-scripts tar curl
1.6. Install docker tool
curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
1.7. Turn off the firewall
systemctl disable --now firewalld
1.8. Disable SELinux
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
1.9. Disable the swap partition
sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a && sysctl -w vm.swappiness=0

cat /etc/fstab
# /dev/mapper/centos-swap swap                    swap    defaults        0 0
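An optional check that swap is really off (not in the original, which only shows /etc/fstab):

free -h | grep -i swap
# Swap: 0B 0B 0B
swapon --show
# no output means no active swap device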
1.10. Disable NetworkManager and enable network
systemctl disable --now NetworkManager
systemctl start network && systemctl enable network
1.11. Time synchronization
# Server (k8s-master01)
yum install chrony -y
cat > /etc/chrony.conf << EOF
pool ntp.aliyun.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 192.168.1.0/24
local stratum 10
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF

systemctl restart chronyd
systemctl enable chronyd

# Clients (all other nodes)
yum install chrony -y
vim /etc/chrony.conf
cat /etc/chrony.conf | grep -v "^#" | grep -v "^$"
pool 192.168.1.30 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony

systemctl restart chronyd ; systemctl enable chronyd

# Or, as a one-liner on each client:
yum install chrony -y ; sed -i "s#2.centos.pool.ntp.org#192.168.1.30#g" /etc/chrony.conf ; systemctl restart chronyd ; systemctl enable chronyd

# Verify synchronization on a client
chronyc sources -v
1.12. Configure ulimit
ulimit -SHn 65535
cat >> /etc/security/limits.conf <<EOF
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF
1.13. Configure password free login
yum install -y sshpass
ssh-keygen -f /root/.ssh/id_rsa -P ''
export IP="192.168.1.30 192.168.1.31 192.168.1.32 192.168.1.33 192.168.1.34 192.168.1.35 192.168.1.36 192.168.1.37 192.168.1.38 192.168.1.39"
export SSHPASS=123123
for HOST in $IP; do
     sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOST
done
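An optional sanity check that password-free login works on every host (reuses the IP list exported above; BatchMode makes ssh fail instead of prompting for a password):

for HOST in $IP; do
    ssh -o BatchMode=yes $HOST "hostname" || echo "ssh to $HOST failed"
done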
1.14. Add the ELRepo source
# For RHEL-8 or CentOS-8, configure the source
yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm

# For RHEL-7, SL-7 or CentOS-7, install ELRepo
yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm

# View the available packages
yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
1.15. Upgrade the kernel to a version above 4.18
# Install the latest kernel
# I choose the stable mainline kernel (kernel-ml) here; if you need the long-term maintenance kernel, install kernel-lt instead
yum --enablerepo=elrepo-kernel install kernel-ml

# View which kernels are installed
rpm -qa | grep kernel
kernel-core-4.18.0-358.el8.x86_64
kernel-tools-4.18.0-358.el8.x86_64
kernel-ml-core-5.16.7-1.el8.elrepo.x86_64
kernel-ml-5.16.7-1.el8.elrepo.x86_64
kernel-modules-4.18.0-358.el8.x86_64
kernel-4.18.0-358.el8.x86_64
kernel-tools-libs-4.18.0-358.el8.x86_64
kernel-ml-modules-5.16.7-1.el8.elrepo.x86_64

# View the default kernel
grubby --default-kernel
/boot/vmlinuz-5.16.7-1.el8.elrepo.x86_64

# If it is not the new kernel, set it as the default
grubby --set-default /boot/vmlinuz-「Your kernel version」.x86_64

# Reboot to take effect
reboot

# As a single combined command:
yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y ; yum --disablerepo="*" --enablerepo="elrepo-kernel" list available ; yum --enablerepo=elrepo-kernel install kernel-ml -y ; grubby --default-kernel ; reboot
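After the reboot, confirm the running kernel (the expected output assumes the kernel-ml version installed above):

uname -r
# 5.16.7-1.el8.elrepo.x86_64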
1.16. Install ipvsadm
yum install ipvsadm ipset sysstat conntrack libseccomp -y

cat >> /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF

systemctl restart systemd-modules-load.service

lsmod | grep -e ip_vs -e nf_conntrack
ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs                 180224  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          176128  1 ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  3 nf_conntrack,xfs,ip_vs
1.17. Modify kernel parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
sysctl --system
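An optional spot check that the new values are active (a sketch; the bridge-nf keys only resolve once br_netfilter is loaded in section 2.1.1):

sysctl net.ipv4.ip_forward net.core.somaxconn
# net.ipv4.ip_forward = 1
# net.core.somaxconn = 16384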
1.18. Configure hosts local resolution for all nodes
cat > /etc/hosts <<EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.1.30 k8s-master01
192.168.1.31 k8s-master02
192.168.1.32 k8s-master03
192.168.1.33 k8s-node01
192.168.1.34 k8s-node02
192.168.1.35 k8s-node03
192.168.1.36 k8s-node04
192.168.1.37 k8s-node05
192.168.1.38 lb01
192.168.1.39 lb02
192.168.1.88 lb-vip
EOF
2. k8s basic component installation
2.1. Install containerd as the runtime on all k8s nodes
yum install containerd -y
2.1.1 configure the modules required by containerd
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
2.1.2 load the modules
systemctl restart systemd-modules-load.service
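To verify that both modules are actually loaded (an optional check, not in the original):

lsmod | grep -e overlay -e br_netfilter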
2.1.3 configure the kernel parameters required by containerd
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Load the kernel parameters
sysctl --system
2.1.4 create the containerd configuration file
mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml

# Modify the containerd configuration file
sed -i '/containerd\.runtimes\.runc\.options/ a\\t\tSystemdCgroup\ =\ true' /etc/containerd/config.toml
cat /etc/containerd/config.toml

# Find [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options] and add SystemdCgroup = true under it:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true

# In the [plugins."io.containerd.grpc.v1.cri"] section, change the default sandbox_image address to a matching version:
sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"
2.1.5 start containerd and enable it at boot
systemctl daemon-reload
systemctl enable --now containerd
2.1.6 configure the runtime endpoint for the crictl client
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
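An optional check that containerd is running and crictl can reach it over the configured socket (not part of the original steps):

systemctl is-active containerd
# active
crictl version
# prints the client version plus the containerd runtime version if the socket works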
2.2. k8s and etcd download and installation (only on master01)
2.2.1 download the k8s installation packages (download whichever you need)
1. Download the kubernetes 1.23.+ binary package
Download address: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md
wget https://dl.k8s.io/v1.23.4/kubernetes-server-linux-amd64.tar.gz

2. Download the etcdctl binary package
Download address: https://github.com/etcd-io/etcd/releases
wget https://github.com/etcd-io/etcd/releases/download/v3.5.2/etcd-v3.5.2-linux-amd64.tar.gz

3. Download the docker-ce binary package
Download address: https://download.docker.com/linux/static/stable/x86_64/
A 20.10.+ version is needed here
wget https://download.docker.com/linux/static/stable/x86_64/docker-20.10.9.tgz

4. Download the containerd binary package
Download address: https://github.com/containerd/containerd/releases
When downloading containerd, take the binary package that bundles the cni plugins
wget https://github.com/containerd/containerd/releases/download/v1.6.0-rc.2/cri-containerd-cni-1.6.0-rc.2-linux-amd64.tar.gz

5. Download the cfssl binary packages
Download address: https://github.com/cloudflare/cfssl/releases
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd64

6. Download the cni plugins
Download address: https://github.com/containernetworking/plugins/releases
wget https://github.com/containernetworking/plugins/releases/download/v1.0.1/cni-plugins-linux-amd64-v1.0.1.tgz

7. Download the crictl client binary
Download address: https://github.com/kubernetes-sigs/cri-tools/releases
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.23.0/crictl-v1.23.0-linux-amd64.tar.gz

# Unpack the k8s install files
tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}

# Unpack the etcd install files
tar -xf etcd-v3.5.2-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.5.2-linux-amd64/etcd{,ctl}

# View the contents of /usr/local/bin
ls /usr/local/bin/
etcd  etcdctl  kube-apiserver  kube-controller-manager  kubectl  kubelet  kube-proxy  kube-scheduler
2.2.2 view versions
[root@k8s-master01 ~]# kubelet --version
Kubernetes v1.23.4
[root@k8s-master01 ~]# etcdctl version
etcdctl version: 3.5.1
API version: 3.5
2.2.3 sending components to other k8s nodes
Master='k8s-master02 k8s-master03'
Work='k8s-node01 k8s-node02 k8s-node03 k8s-node04 k8s-node05'

for NODE in $Master; do
  echo $NODE
  scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/
  scp /usr/local/bin/etcd* $NODE:/usr/local/bin/
done

for NODE in $Work; do
  scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/
done
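An optional spot check that the binaries landed on every node and are executable (not in the original; reuses the Master and Work variables above):

for NODE in $Master $Work; do
    echo $NODE
    ssh $NODE "kubelet --version"
done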
2.2.4 clone the certificate-related files
git clone https://github.com/cby-chen/Kubernetes.git
2.2.5 create directories for all k8s nodes
mkdir -p /opt/cni/bin
3. Generation of relevant certificates
# Download the certificate generation tools on the master01 node
wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64" -O /usr/local/bin/cfssl
wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64" -O /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
3.1. Generate etcd certificate
Unless otherwise specified, the following operations are performed on all master nodes
3.1.1 create a certificate storage directory for all master nodes
mkdir /etc/etcd/ssl -p
3.1.2 master01 node generates etcd certificates
cd Kubernetes/pki/

# Generate the etcd CA certificate and key (if you expect the cluster to be expanded later, you can list extra IPs in -hostname to reserve them)
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca

cfssl gencert \
   -ca=/etc/etcd/ssl/etcd-ca.pem \
   -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
   -config=ca-config.json \
   -hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,192.168.1.30,192.168.1.31,192.168.1.32 \
   -profile=kubernetes \
   etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd
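Optionally, the SANs baked into the etcd certificate can be inspected with openssl to confirm all master names and IPs made it in (an extra check, not part of the original flow):

openssl x509 -in /etc/etcd/ssl/etcd.pem -noout -text | grep -A 1 "Subject Alternative Name"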
3.1.3 copy certificates to other nodes
Master='k8s-master02 k8s-master03'

for NODE in $Master; do
  ssh $NODE "mkdir -p /etc/etcd/ssl"
  for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do
    scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}
  done
done
3.2. Generate k8s relevant certificates
Unless otherwise specified, the following operations are performed on all master nodes
3.2.1 create certificate storage directory for all k8s nodes
mkdir -p /etc/kubernetes/pki
3.2.2 master01 node generates k8s certificates
# Generate a root certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca

# 10.96.0.1 is the first address of the service network segment and needs to be calculated; 192.168.1.88 is the highly available vip address
cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -hostname=10.96.0.1,192.168.1.88,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,x.oiox.cn,k.oiox.cn,l.oiox.cn,o.oiox.cn,192.168.1.30,192.168.1.31,192.168.1.32,192.168.1.33,192.168.1.34,192.168.1.35,192.168.1.36,192.168.1.37,192.168.1.38,192.168.1.39,192.168.1.40,192.168.1.41 \
  -profile=kubernetes \
  apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver
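How 10.96.0.1 is derived: it is the first usable host address of the 10.96.0.0/12 service segment (the network address plus one). A quick way to compute it, assuming python3 is available on the host:

python3 -c "import ipaddress; print(next(ipaddress.ip_network('10.96.0.0/12').hosts()))"
# 10.96.0.1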
3.2.3 generate apiserver aggregation certificate
cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca

# There is a warning that can be ignored
cfssl gencert \
  -ca=/etc/kubernetes/pki/front-proxy-ca.pem \
  -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client
3.2.4 generate the controller-manager, scheduler and admin certificates
cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager

# Set a cluster entry
kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/pki/ca.pem \
     --embed-certs=true \
     --server=https://192.168.1.88:8443 \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# Set a context
kubectl config set-context system:kube-controller-manager@kubernetes \
    --cluster=kubernetes \
    --user=system:kube-controller-manager \
    --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# Set a user entry
kubectl config set-credentials system:kube-controller-manager \
     --client-certificate=/etc/kubernetes/pki/controller-manager.pem \
     --client-key=/etc/kubernetes/pki/controller-manager-key.pem \
     --embed-certs=true \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# Set the default context
kubectl config use-context system:kube-controller-manager@kubernetes \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# Generate the scheduler certificate and kubeconfig
cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler

kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/pki/ca.pem \
     --embed-certs=true \
     --server=https://192.168.1.88:8443 \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
     --client-certificate=/etc/kubernetes/pki/scheduler.pem \
     --client-key=/etc/kubernetes/pki/scheduler-key.pem \
     --embed-certs=true \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config set-context system:kube-scheduler@kubernetes \
     --cluster=kubernetes \
     --user=system:kube-scheduler \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config use-context system:kube-scheduler@kubernetes \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

# Generate the admin certificate and kubeconfig
cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin

kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.1.88:8443 --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config set-credentials kubernetes-admin --client-certificate=/etc/kubernetes/pki/admin.pem --client-key=/etc/kubernetes/pki/admin-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config set-context kubernetes-admin@kubernetes --cluster=kubernetes --user=kubernetes-admin --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config use-context kubernetes-admin@kubernetes --kubeconfig=/etc/kubernetes/admin.kubeconfig
3.2.5 create ServiceAccount Key - secret
openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
3.2.6 send certificates to other master nodes
for NODE in k8s-master02 k8s-master03; do
  for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do
    scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE}
  done
  for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do
    scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE}
  done
done
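An optional count check on the other masters, mirroring the wc -l check in 3.2.7 below (not part of the original steps):

for NODE in k8s-master02 k8s-master03; do
    echo $NODE
    ssh $NODE "ls /etc/kubernetes/pki | wc -l"
done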
3.2.7 viewing certificates
ls /etc/kubernetes/pki/
admin.csr      apiserver-key.pem  ca.pem                      front-proxy-ca.csr      front-proxy-client-key.pem  scheduler.csr
admin-key.pem  apiserver.pem      controller-manager.csr      front-proxy-ca-key.pem  front-proxy-client.pem      scheduler-key.pem
admin.pem      ca.csr             controller-manager-key.pem  front-proxy-ca.pem      sa.key                      scheduler.pem
apiserver.csr  ca-key.pem         controller-manager.pem      front-proxy-client.csr  sa.pub

# A total of 23 files is correct
ls /etc/kubernetes/pki/ | wc -l
23
4. k8s system component configuration
4.1.etcd configuration
4.1.1 master01 configuration
cat > /etc/etcd/etcd.config.yml << EOF
name: 'k8s-master01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.1.30:2380'
listen-client-urls: 'https://192.168.1.30:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.1.30:2380'
advertise-client-urls: 'https://192.168.1.30:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.1.30:2380,k8s-master02=https://192.168.1.31:2380,k8s-master03=https://192.168.1.32:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF
4.1.2 master02 configuration
cat > /etc/etcd/etcd.config.yml << EOF
name: 'k8s-master02'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.1.31:2380'
listen-client-urls: 'https://192.168.1.31:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.1.31:2380'
advertise-client-urls: 'https://192.168.1.31:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.1.30:2380,k8s-master02=https://192.168.1.31:2380,k8s-master03=https://192.168.1.32:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF
4.1.3 master03 configuration
cat > /etc/etcd/etcd.config.yml << EOF
name: 'k8s-master03'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.1.32:2380'
listen-client-urls: 'https://192.168.1.32:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.1.32:2380'
advertise-client-urls: 'https://192.168.1.32:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.1.30:2380,k8s-master02=https://192.168.1.31:2380,k8s-master03=https://192.168.1.32:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF
4.2. Create services (operate on all master nodes)
4.2.1 create the etcd service and start it
cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service
EOF
4.2.2 create etcd certificate directory
mkdir /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
systemctl daemon-reload
systemctl enable --now etcd
4.2.3 viewing etcd status
export ETCDCTL_API=3
etcdctl --endpoints="192.168.1.32:2379,192.168.1.31:2379,192.168.1.30:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|     ENDPOINT      |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 192.168.1.32:2379 | 56875ab4a12c94e8 |   3.5.1 |   25 kB |     false |      false |         2 |          8 |                  8 |        |
| 192.168.1.31:2379 | 33df6a8fe708d3fd |   3.5.1 |   25 kB |      true |      false |         2 |          8 |                  8 |        |
| 192.168.1.30:2379 | 58fbe5ec9743048f |   3.5.1 |   20 kB |     false |      false |         2 |          8 |                  8 |        |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
5. High availability configuration
5.1 operate on lb01 and lb02 servers
5.1.1 install keepalived and haproxy services
systemctl disable --now firewalld
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
yum -y install keepalived haproxy
5.1.2 modify the haproxy configuration file (identical on both lb hosts)
# cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak

cat >/etc/haproxy/haproxy.cfg<<"EOF"
global
 maxconn 2000
 ulimit-n 16384
 log 127.0.0.1 local0 err
 stats timeout 30s

defaults
 log global
 mode http
 option httplog
 timeout connect 5000
 timeout client 50000
 timeout server 50000
 timeout http-request 15s
 timeout http-keep-alive 15s

frontend monitor-in
 bind *:33305
 mode http
 option httplog
 monitor-uri /monitor

frontend k8s-master
 bind 0.0.0.0:8443
 bind 127.0.0.1:8443
 mode tcp
 option tcplog
 tcp-request inspect-delay 5s
 default_backend k8s-master

backend k8s-master
 mode tcp
 option tcplog
 option tcp-check
 balance roundrobin
 default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
 server k8s-master01 192.168.1.30:6443 check
 server k8s-master02 192.168.1.31:6443 check
 server k8s-master03 192.168.1.32:6443 check
EOF
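Before starting the service, the syntax of the new configuration can be validated (an optional step; -c runs haproxy in check mode against the file given with -f):

haproxy -c -f /etc/haproxy/haproxy.cfg
# Configuration file is valid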
5.1.3 lb01 configure the keepalived MASTER node
# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
    router_id LVS_DEVEL
}

vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}

vrrp_instance VI_1 {
    state MASTER
    interface ens18
    mcast_src_ip 192.168.1.38
    virtual_router_id 51
    priority 100
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.1.88
    }
    track_script {
        chk_apiserver
    }
}
EOF
5.1.4 lb02 configure the keepalived BACKUP node
# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
    router_id LVS_DEVEL
}

vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens18
    mcast_src_ip 192.168.1.39
    virtual_router_id 51
    priority 50
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.1.88
    }
    track_script {
        chk_apiserver
    }
}
EOF
5.1.5 health check script configuration (both lb hosts)
cat > /etc/keepalived/check_apiserver.sh << EOF
#!/bin/bash

err=0
for k in \$(seq 1 3)
do
    check_code=\$(pgrep haproxy)
    if [[ \$check_code == "" ]]; then
        err=\$(expr \$err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ \$err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
EOF

# Make the script executable
chmod +x /etc/keepalived/check_apiserver.sh
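The script can be exercised by hand before relying on it (an optional check; note that if haproxy is stopped, the script will stop keepalived and return 1, which is the intended failover trigger):

bash /etc/keepalived/check_apiserver.sh ; echo $?
# 0 while haproxy is running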
5.1.6 start service
systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived
5.1.7 test high availability
# The vip should be reachable
[root@k8s-node02 ~]# ping 192.168.1.88

# telnet access
[root@k8s-node02 ~]# telnet 192.168.1.88 8443

# Shut down the primary lb node and check whether the vip drifts to the standby node
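A way to see which lb host currently holds the vip during the failover test (a sketch; assumes the ens18 interface name used in the keepalived configuration above):

ip addr show ens18 | grep -w 192.168.1.88
# the line appears only on the lb node that currently owns the vip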