WeChat official account: operation and development story, author: double winter
Recently, the company needed a Kubernetes 1.18 high-availability cluster in the test environment, so I built it with kubeadm. If you want to become more familiar with the individual k8s components, I recommend building with binaries instead for learning purposes. I built and tested this locally, so it is safe and reliable, and I hope it helps you! If you find it useful, please follow or forward.
Resource download
1. The yaml files required below are in the following github repository: https://github.com/luckylucky421/kubernetes1.17.3/tree/master — you can fork my github repository to your own account so you keep a permanent copy; if the yaml addresses given later are not accessible, clone this github content to your own computer.
2. The images required to initialize the k8s cluster mentioned below are on a Baidu network disk at the following link: https://pan.baidu.com/s/1k1heJy8lLnDk2JEFyRyJdA  Extraction code: udkj
1. Node planning information
role | IP address | system |
---|---|---|
k8s-master01 | 10.211.55.3 | 「CentOS7.6.1810」 |
k8s-master02 | 10.211.55.5 | 「CentOS7.6.1810」 |
k8s-master03 | 10.211.55.6 | 「CentOS7.6.1810」 |
k8s-node01 | 10.211.55.7 | 「CentOS7.6.1810」 |
k8s-lb | 10.211.55.10 | 「CentOS7.6.1810」 |
2 basic environment preparation
- Environment information

software | version |
---|---|
kubernetes | 1.18.2 |
docker | 19.03 |
2.1 environment initialization
1) Configure the host name, taking k8s-master01 as an example (modify the host name for each node in turn according to the node plan)
❝
k8s-lb no setting required
❞
[root@localhost ~]# hostnamectl set-hostname k8s-master01
2) Configure host hosts mapping
[root@localhost ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.211.55.3  k8s-master01
10.211.55.5  k8s-master02
10.211.55.6  k8s-master03
10.211.55.7  k8s-node01
10.211.55.10 k8s-lb
After configuration, you can use the following command to test
[root@localhost ~]# for host in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-lb;do ping -c 1 $host;done
PING k8s-master01 (10.211.55.3) 56(84) bytes of data.
64 bytes from k8s-master01 (10.211.55.3): icmp_seq=1 ttl=64 time=0.063 ms

--- k8s-master01 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms
PING k8s-master02 (10.211.55.5) 56(84) bytes of data.
64 bytes from k8s-master02 (10.211.55.5): icmp_seq=1 ttl=64 time=0.369 ms

--- k8s-master02 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms
PING k8s-master03 (10.211.55.6) 56(84) bytes of data.
64 bytes from k8s-master03 (10.211.55.6): icmp_seq=1 ttl=64 time=0.254 ms
.....
❝
ping k8s-lb doesn't work here because we haven't configured VIP yet
❞
3) Disable firewall
[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld
4) Close selinux
[root@localhost ~]# setenforce 0
[root@localhost ~]# sed -i "s/^SELINUX=.*/SELINUX=disabled/g" /etc/selinux/config
5) Close swap partition
[root@localhost ~]# swapoff -a  # temporary
[root@localhost ~]# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab  # permanent
6) Time synchronization
[root@localhost ~]# yum install chrony -y
[root@localhost ~]# systemctl enable chronyd
[root@localhost ~]# systemctl start chronyd
[root@localhost ~]# chronyc sources
7) Configure ulimit
[root@localhost ~]# ulimit -SHn 65535
8) Configure kernel parameters
[root@localhost ~]# cat >> /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
[root@localhost ~]# sysctl --system
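Note: the two bridge-nf-call settings only exist once the br_netfilter kernel module is loaded. If sysctl reports that the key is unknown, load the module first — a small extra step that is not in the original list:

```bash
# Load br_netfilter now and on every boot so the bridge sysctls are available
modprobe br_netfilter
cat > /etc/modules-load.d/br_netfilter.conf << EOF
br_netfilter
EOF
sysctl --system
```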
2.2 kernel upgrade
Because the default kernel of CentOS 7.6 is 3.10, which has many bugs (the most common being the cgroup memory leak), upgrade the kernel. This should be done on all four hosts.
1) Download the required kernel version. I use rpm installation here, so download the rpm package directly
[root@localhost ~]# wget https://cbs.centos.org/kojifiles/packages/kernel/4.9.220/37.el7/x86_64/kernel-4.9.220-37.el7.x86_64.rpm
2) Just perform rpm upgrade
[root@localhost ~]# rpm -ivh kernel-4.9.220-37.el7.x86_64.rpm
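On most CentOS 7 systems the newly installed kernel becomes the default boot entry automatically. If yours does not, you can set it yourself before rebooting — a hedged sketch; the entry order may differ on your machine:

```bash
# List the GRUB menu entries with their index, then make the 4.9 kernel the default
awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
grub2-set-default 0
grub2-mkconfig -o /boot/grub2/grub.cfg
```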
3) Reboot, then check whether the kernel was upgraded successfully
[root@localhost ~]# reboot
[root@k8s-master01 ~]# uname -r
3 component installation
3.1 installing ipvs
1) Install the software required for ipvs
Since I intend to use ipvs as the kube-proxy proxy mode, the corresponding packages need to be installed.
[root@k8s-master01 ~]# yum install ipvsadm ipset sysstat conntrack libseccomp -y
2) Load module
[root@k8s-master01 ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
modprobe -- ip_tables
modprobe -- ip_set
modprobe -- xt_set
modprobe -- ipt_set
modprobe -- ipt_rpfilter
modprobe -- ipt_REJECT
modprobe -- ipip
EOF
❝
Note: in kernel version 4.19 nf_conntrack_ipv4 has been changed to nf_conntrack
❞
3) Make the module script executable so it also loads on boot, then load the modules now and verify
[root@k8s-master01 ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
3.2 installing docker CE
❝
Docker CE needs to be installed on all hosts
❞
[root@k8s-master01 ~]# # Install required software
[root@k8s-master01 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@k8s-master01 ~]# # Add yum source
[root@k8s-master01 ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
- Check whether the docker-ce package is available
[root@k8s-master01 ~]# yum list | grep docker-ce
containerd.io.x86_64           1.2.13-3.1.el7      docker-ce-stable
docker-ce.x86_64               3:19.03.8-3.el7     docker-ce-stable
docker-ce-cli.x86_64           1:19.03.8-3.el7     docker-ce-stable
docker-ce-selinux.noarch       17.03.3.ce-1.el7    docker-ce-stable
- Install docker CE
[root@k8s-master01 ~]# yum install docker-ce-19.03.8-3.el7 -y
[root@k8s-master01 ~]# systemctl start docker
[root@k8s-master01 ~]# systemctl enable docker
- Configure mirror acceleration
[root@k8s-master01 ~]# curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io
[root@k8s-master01 ~]# systemctl restart docker
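The kubeadm preflight check later warns that Docker uses the cgroupfs cgroup driver while systemd is recommended. If you want to remove that warning, you can switch the driver — a sketch that assumes you merge it with whatever the mirror script already wrote to /etc/docker/daemon.json:

```bash
# Set Docker's cgroup driver to systemd (recommended by kubeadm) and keep the mirror
cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["http://f1361db2.m.daocloud.io"]
}
EOF
systemctl restart docker
docker info | grep -i "cgroup driver"   # should now report: Cgroup Driver: systemd
```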
3.3 installing kubernetes components
❝
These operations need to be performed on all nodes
❞
- Add yum source
[root@k8s-master01 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
- Install software
[root@k8s-master01 ~]# yum install -y kubelet-1.18.2-0 kubeadm-1.18.2-0 kubectl-1.18.2-0 --disableexcludes=kubernetes
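You can optionally confirm that the expected 1.18.2 versions were installed before continuing (a quick sanity check, not part of the original steps):

```bash
kubeadm version -o short           # expect v1.18.2
kubelet --version                  # expect Kubernetes v1.18.2
kubectl version --client --short   # expect Client Version: v1.18.2
```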
- Set kubelet to start automatically on boot
[root@k8s-master01 ~]# systemctl enable kubelet.service
4 cluster initialization
4.1 configure cluster high availability
High availability uses HAProxy + Keepalived: Keepalived provides the virtual IP (the k8s-lb address) and HAProxy load-balances traffic across the master nodes' apiservers. Both are deployed on all master nodes as daemons.
- Install software
[root@k8s-master01 ~]# yum install keepalived haproxy -y
- Configure haproxy
The configuration of all master nodes is the same, as follows:
❝
Note: change the apiserver addresses in the backend to the master addresses from your own node plan
❞
[root@k8s-master01 ~]# vim /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxys to the backends
#---------------------------------------------------------------------
frontend kubernetes
    mode                 tcp
    bind                 *:16443
    option               tcplog
    default_backend      kubernetes-apiserver

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode        tcp
    balance     roundrobin
    server  k8s-master01 10.211.55.3:6443 check
    server  k8s-master02 10.211.55.5:6443 check
    server  k8s-master03 10.211.55.6:6443 check

#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
    bind                 *:9999
    stats auth           admin:P@ssW0rd
    stats refresh        5s
    stats realm          HAProxy\ Statistics
    stats uri            /admin?stats
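Before starting the service you can ask HAProxy to validate the configuration file (an optional check):

```bash
haproxy -c -f /etc/haproxy/haproxy.cfg   # prints "Configuration file is valid" on success
```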
- Configure keepalived
k8s-master01
[root@k8s-master01 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

# Define script
vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 2
    weight -5
    fall 3
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.211.55.10
    }

    # Call script
    #track_script {
    #    check_apiserver
    #}
}
k8s-master02 node configuration
[root@k8s-master02 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

# Define script
vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 2
    weight -5
    fall 3
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.211.55.10
    }

    # Call script
    #track_script {
    #    check_apiserver
    #}
}
k8s-master03 node configuration
[root@k8s-master03 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

# Define script
vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 2
    weight -5
    fall 3
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 98
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.211.55.10
    }

    # Call script
    #track_script {
    #    check_apiserver
    #}
}
Write the health check script
[root@k8s-master01 ~]# vim /etc/keepalived/check_apiserver.sh
#!/bin/bash

function check_apiserver(){
    for ((i=0;i<5;i++))
    do
        apiserver_job_id=$(pgrep kube-apiserver)
        if [[ ! -z ${apiserver_job_id} ]];then
            return
        else
            sleep 2
        fi
    done
    apiserver_job_id=0
}

# 1->running 0->stopped
check_apiserver
if [[ $apiserver_job_id -eq 0 ]];then
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
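The script is referenced by the vrrp_script block above (the track_script section is commented out by default). If you enable it, keepalived needs the file to be executable:

```bash
chmod +x /etc/keepalived/check_apiserver.sh
```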
Start haproxy and keepalived
[root@k8s-master01 ~]# systemctl enable --now keepalived
[root@k8s-master01 ~]# systemctl enable --now haproxy
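On each master you can quickly confirm that HAProxy is listening on 16443 and that the VIP has landed on the highest-priority node, k8s-master01 (an optional check using standard iproute2 tools):

```bash
ss -lntp | grep 16443                   # haproxy should be listening on *:16443
ip addr show eth0 | grep 10.211.55.10   # the VIP appears only on the current keepalived MASTER
```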
4.2 deploying master
1) On k8s-master01, write the kubeadm.yaml configuration file, as follows:
[root@k8s-master01 ~]# cat >> kubeadm.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.2
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
controlPlaneEndpoint: "k8s-lb:16443"
networking:
  dnsDomain: cluster.local
  podSubnet: 192.168.0.0/16
  serviceSubnet: 10.211.0.0/12
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
EOF
2) Download Image
[root@k8s-master01 ~]# kubeadm config images pull --config kubeadm.yaml
The image repository is set to the Alibaba Cloud mirror, so pulling should in theory be fast. You can also download the images provided at the beginning of the article and load them on each node:
docker load -i 1-18-kube-apiserver.tar.gz
docker load -i 1-18-kube-scheduler.tar.gz
docker load -i 1-18-kube-controller-manager.tar.gz
docker load -i 1-18-pause.tar.gz
docker load -i 1-18-cordns.tar.gz
docker load -i 1-18-etcd.tar.gz
docker load -i 1-18-kube-proxy.tar.gz

Explanation:
pause version 3.2, image k8s.gcr.io/pause:3.2
etcd version 3.4.3, image k8s.gcr.io/etcd:3.4.3-0
coredns version 1.6.7, image k8s.gcr.io/coredns:1.6.7
apiserver, scheduler, controller-manager and kube-proxy version 1.18.2, images:
k8s.gcr.io/kube-apiserver:v1.18.2
k8s.gcr.io/kube-controller-manager:v1.18.2
k8s.gcr.io/kube-scheduler:v1.18.2
k8s.gcr.io/kube-proxy:v1.18.2
3) Initialize
[root@k8s-master01 ~]# kubeadm init --config kubeadm.yaml W0514 01:09:20.846675 11871 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io] [init] Using Kubernetes version: v1.18.2 [preflight] Running pre-flight checks [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [certs] Using certificateDir folder "/etc/kubernetes/pki" [certs] Generating "ca" certificate and key [certs] Generating "apiserver" certificate and key [certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local k8s-lb] and IPs [10.208.0.1 10.211.55.3] [certs] Generating "apiserver-kubelet-client" certificate and key [certs] Generating "front-proxy-ca" certificate and key [certs] Generating "front-proxy-client" certificate and key [certs] Generating "etcd/ca" certificate and key [certs] Generating "etcd/server" certificate and key [certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [10.211.55.3 127.0.0.1 ::1] [certs] Generating "etcd/peer" certificate and key [certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [10.211.55.3 127.0.0.1 ::1] [certs] Generating "etcd/healthcheck-client" certificate and key [certs] Generating "apiserver-etcd-client" certificate and key [certs] Generating "sa" key and public key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address [kubeconfig] Writing "admin.conf" kubeconfig file [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address [kubeconfig] Writing "kubelet.conf" kubeconfig file [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address [kubeconfig] Writing "controller-manager.conf" kubeconfig file [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address [kubeconfig] Writing "scheduler.conf" kubeconfig file [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" W0514 01:09:26.356826 11871 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" [control-plane] Creating static Pod manifest for "kube-scheduler" W0514 01:09:26.358323 11871 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". 
This can take up to 4m0s [apiclient] All control plane components are healthy after 21.018365 seconds [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster [upload-certs] Skipping phase. Please see --upload-certs [mark-control-plane] Marking the node k8s-master01 as control-plane by adding the label "node-role.kubernetes.io/master=''" [mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule] [bootstrap-token] Using token: q4ui64.gp5g5rezyusy9xw9 [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key [addons] Applied essential addon: CoreDNS [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address [addons] Applied essential addon: kube-proxy Your Kubernetes control-plane has initialized successfully! To start using your cluster, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ You can now join any number of control-plane nodes by copying certificate authorities and service account keys on each node and then running the following as root: kubeadm join k8s-lb:16443 --token q4ui64.gp5g5rezyusy9xw9 \ --discovery-token-ca-cert-hash sha256:1b7cd42c825288a53df23dcd818aa03253b0c7e7e9317fa92bde2fb853d899d1 \ --control-plane Then you can join any number of worker nodes by running the following on each as root: kubeadm join k8s-lb:16443 --token q4ui64.gp5g5rezyusy9xw9 \ --discovery-token-ca-cert-hash sha256:1b7cd42c825288a53df23dcd818aa03253b0c7e7e9317fa92bde2fb853d899d1
❝
Record the kubeadm join commands from the final output; they are needed later to join the remaining master nodes and the worker nodes.
❞
4) Configure environment variables
[root@k8s-master01 ~]# cat >> /root/.bashrc <<EOF
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
[root@k8s-master01 ~]# source /root/.bashrc
5) View node status
[root@k8s-master01 ~]# kubectl get node
NAME           STATUS     ROLES    AGE     VERSION
k8s-master01   NotReady   master   3m47s   v1.18.2
6) Install network plug-in
❝
If a node has multiple network interfaces, you need to specify the internal interface in the manifest; with a single interface, no modification is needed.
❞
[root@k8s-master01 ~]# wget https://docs.projectcalico.org/v3.8/manifests/calico.yaml
[root@k8s-master01 ~]# vi calico.yaml
......
      containers:
        # Runs calico-node container on each Kubernetes node.  This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: calico/node:v3.8.8-1
          env:
            # Use Kubernetes API as the backing datastore.
            - name: DATASTORE_TYPE
              value: "kubernetes"
            # Wait for the datastore.
            - name: IP_AUTODETECTION_METHOD   # Add this environment variable to the DaemonSet
              value: interface=ens33          # Specify the internal interface
            - name: WAIT_FOR_DATASTORE
              value: "true"
            # Set based on the k8s node name.
            - name: NODENAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
......
# Install the calico network plug-in
[root@k8s-master01 ~]# kubectl apply -f calico.yaml
After installing the network plug-in, view the node information as follows:
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    master   10m   v1.18.2
You can see that the status has changed from NotReady to Ready.
7) Join master02 to the cluster
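Before k8s-master02 can join as a control-plane node, the certificates generated on k8s-master01 must be present on it; the init output above notes that joining control-plane nodes requires "copying certificate authorities and service account keys". A minimal sketch, run from k8s-master01 and assuming root SSH access to k8s-master02:

```bash
# Copy the CA, service-account and front-proxy keys plus the etcd CA to k8s-master02
ssh root@k8s-master02 "mkdir -p /etc/kubernetes/pki/etcd"
scp /etc/kubernetes/pki/{ca.crt,ca.key,sa.key,sa.pub,front-proxy-ca.crt,front-proxy-ca.key} \
    root@k8s-master02:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/{ca.crt,ca.key} root@k8s-master02:/etc/kubernetes/pki/etcd/
scp /root/kubeadm.yaml root@k8s-master02:/root/   # so the image pull below can reuse the same config
```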
- Download Image
[root@k8s-master02 ~]# kubeadm config images pull --config kubeadm.yaml
- Join cluster
[root@k8s-master02 ~]# kubeadm join k8s-lb:16443 --token q4ui64.gp5g5rezyusy9xw9 \
    --discovery-token-ca-cert-hash sha256:1b7cd42c825288a53df23dcd818aa03253b0c7e7e9317fa92bde2fb853d899d1 \
    --control-plane
- The output is as follows:
...
This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
...
- Configure environment variables
[root@k8s-master02 ~]# cat >> /root/.bashrc <<EOF
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
[root@k8s-master02 ~]# source /root/.bashrc
- The operation on the other machine is the same: join k8s-master03 to the cluster in the same way.
- View the cluster status
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    master   41m   v1.18.2
k8s-master02   Ready    master   29m   v1.18.2
k8s-master03   Ready    master   27m   v1.18.2
- View cluster component status
If all components are Running, everything is normal; if any component is abnormal, check its pod logs to troubleshoot.
[root@k8s-master01 ~]# kubectl get pod -n kube-system NAME READY STATUS RESTARTS AGE NODE NOMINATED NODE READINESS GATES calico-kube-controllers-77c5fc8d7f-stl57 1/1 Running 0 26m k8s-master01 <none> <none> calico-node-ppsph 1/1 Running 0 26m k8s-master01 <none> <none> calico-node-tl6sq 1/1 Running 0 26m k8s-master02 <none> <none> calico-node-w92qh 1/1 Running 0 26m k8s-master03 <none> <none> coredns-546565776c-vtlhr 1/1 Running 0 42m k8s-master01 <none> <none> coredns-546565776c-wz9bk 1/1 Running 0 42m k8s-master01 <none> <none> etcd-k8s-master01 1/1 Running 0 42m k8s-master01 <none> <none> etcd-k8s-master02 1/1 Running 0 30m k8s-master02 <none> <none> etcd-k8s-master03 1/1 Running 0 28m k8s-master03 <none> <none> kube-apiserver-k8s-master01 1/1 Running 0 42m k8s-master01 <none> <none> kube-apiserver-k8s-master02 1/1 Running 0 30m k8s-master02 <none> <none> kube-apiserver-k8s-master03 1/1 Running 0 28m k8s-master03 <none> <none> kube-controller-manager-k8s-master01 1/1 Running 1 42m k8s-master01 <none> <none> kube-controller-manager-k8s-master02 1/1 Running 1 30m k8s-master02 <none> <none> kube-controller-manager-k8s-master03 1/1 Running 0 28m k8s-master03 <none> <none> kube-proxy-6sbpp 1/1 Running 0 28m k8s-master03 <none> <none> kube-proxy-dpppr 1/1 Running 0 42m k8s-master01 <none> <none> kube-proxy-ln7l7 1/1 Running 0 30m k8s-master02 <none> <none> kube-scheduler-k8s-master01 1/1 Running 1 42m k8s-master01 <none> <none> kube-scheduler-k8s-master02 1/1 Running 1 30m k8s-master02 <none> <none> kube-scheduler-k8s-master03 1/1 Running 0 28m k8s-master03 <none> <none>
- View CSR
[root@k8s-master01 ~]# kubectl get csr
NAME        AGE   SIGNERNAME                                    REQUESTOR                  CONDITION
csr-cfl2w   42m   kubernetes.io/kube-apiserver-client-kubelet   system:node:k8s-master01   Approved,Issued
csr-mm7g7   28m   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:3k4vr0    Approved,Issued
csr-qzn6r   30m   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:3k4vr0    Approved,Issued
4.3 deploy node
- Worker nodes only need to join the cluster
[root@k8s-node01 ~]# kubeadm join k8s-lb:16443 --token q4ui64.gp5g5rezyusy9xw9 \
    --discovery-token-ca-cert-hash sha256:1b7cd42c825288a53df23dcd818aa03253b0c7e7e9317fa92bde2fb853d899d1
- The output log is as follows:
W0509 23:24:12.159733   10635 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
- Finally, view the cluster node information
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    master   47m   v1.18.2
k8s-master02   Ready    master   35m   v1.18.2
k8s-master03   Ready    master   32m   v1.18.2
k8s-node01     Ready    node01   55s   v1.18.2
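A freshly joined worker normally shows ROLES as <none> (as in the output of the next section); the ROLES column is driven by node-role.kubernetes.io/* labels. If you want a role displayed for the worker, you can label it yourself — an optional sketch:

```bash
kubectl label node k8s-node01 node-role.kubernetes.io/node01=''
```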
5 test cluster high availability
Stop keepalived on the master01 host to simulate a failure, then check whether the entire cluster still works.
# Analog shutdown keepalived systemctl stop keepalived # Then check whether the cluster is available [root@k8s-master02 ~]# ip addr 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 00:1c:42:ab:d3:44 brd ff:ff:ff:ff:ff:ff inet 10.211.55.5/24 brd 10.211.55.255 scope global noprefixroute dynamic eth0 valid_lft 1429sec preferred_lft 1429sec inet 10.211.55.10/32 scope global eth0 valid_lft forever preferred_lft forever inet6 fdb2:2c26:f4e4:0:72b2:f577:d0e6:50a/64 scope global noprefixroute dynamic valid_lft 2591676sec preferred_lft 604476sec inet6 fe80::c202:94c6:b940:2d6b/64 scope link noprefixroute ...... [root@k8s-master02 ~]# kubectl get nodes NAME STATUS ROLES AGE VERSION k8s-master01 Ready master 64m v1.18.2 k8s-master02 Ready master 52m v1.18.2 k8s-master03 Ready master 50m v1.18.2 k8s-node01 Ready <none> 18m v1.18.2 [root@k8s-master02 ~]# kubectl get pod -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-77c5fc8d7f-stl57 1/1 Running 0 49m calico-node-8t5ft 1/1 Running 0 19m calico-node-ppsph 1/1 Running 0 49m calico-node-tl6sq 1/1 Running 0 49m calico-node-w92qh 1/1 Running 0 49m coredns-546565776c-vtlhr 1/1 Running 0 65m coredns-546565776c-wz9bk 1/1 Running 0 65m etcd-k8s-master01 1/1 Running 0 65m etcd-k8s-master02 1/1 Running 0 53m etcd-k8s-master03 1/1 Running 0 51m kube-apiserver-k8s-master01 1/1 Running 0 65m kube-apiserver-k8s-master02 1/1 Running 0 53m kube-apiserver-k8s-master03 1/1 Running 0 51m kube-controller-manager-k8s-master01 1/1 Running 2 65m kube-controller-manager-k8s-master02 1/1 Running 1 53m kube-controller-manager-k8s-master03 1/1 Running 0 51m kube-proxy-6sbpp 1/1 Running 0 51m kube-proxy-dpppr 1/1 Running 0 65m kube-proxy-ln7l7 1/1 Running 0 53m kube-proxy-r5ltk 1/1 Running 0 19m kube-scheduler-k8s-master01 1/1 Running 2 65m kube-scheduler-k8s-master02 1/1 Running 1 53m kube-scheduler-k8s-master03 1/1 Running 0 51m
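As the ip addr output shows, the VIP 10.211.55.10 has moved to k8s-master02 and kubectl still works. You can also probe the apiserver directly through the VIP (an optional check; /version is readable anonymously by default):

```bash
curl -k https://10.211.55.10:16443/version   # should return the apiserver version JSON (v1.18.2)
```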
6 install kubectl command auto-completion
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
7 "install kubernetes-dashboard-2 Version (web ui interface of kubernetes)"
Upload the kubernetes-dashboard images to each node and load them with docker load -i as shown below. The image files are in the Baidu network disk linked at the beginning of the article.
docker load -i dashboard_2_0_0.tar.gz
docker load -i metrics-scrapter-1-0-1.tar.gz
The loaded images are kubernetesui/dashboard:v2.0.0-beta8 and kubernetesui/metrics-scraper:v1.0.1.
7.1 operation at the master01 node
[root@k8s-master01 ~]# kubectl apply -f kubernetes-dashboard.yaml
❝
The kubernetes-dashboard.yaml file content can be copied from the following address: https://raw.githubusercontent.com/luckylucky421/kubernetes1.17.3/master/kubernetes-dashboard.yaml
❞
If you can't access the address above, visit the following link, clone the repository, and manually copy the yaml file to k8s-master01:
https://github.com/luckylucky421/kubernetes1.17.3
- verification
[root@k8s-master01 ~]# kubectl get pods -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-694557449d-8xmtf   1/1     Running   0          60s
kubernetes-dashboard-5f98bdb684-ph9wg        1/1     Running   2          60s
- View the service that fronts the dashboard
[root@k8s-master01 ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   10.211.23.9      <none>        8000/TCP   3m59s
kubernetes-dashboard        ClusterIP   10.211.253.155   <none>        443/TCP    50s
- Change the service type to NodePort
[root@k8s-master01 ~]# kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
Change type: ClusterIP to type: NodePort, then save and exit.
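If you prefer a non-interactive change, the same edit can be made with kubectl patch (an equivalent alternative):

```bash
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
```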
- View exposed ports
[root@k8s-master01 ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.211.23.9      <none>        8000/TCP        3m59s
kubernetes-dashboard        NodePort    10.211.253.155   <none>        443:31175/TCP   4m
You can see that the service type is now NodePort. The dashboard can be reached on port 31175 of any node IP (or of the VIP). In my environment the address is:
https://10.211.55.10:31175/
7.2 "log in to dashboard through the default token specified in yaml file"
"1) view the secret under the kubernetes dashboard namespace"
[root@k8s-master01 ~]# kubectl get secret -n kubernetes-dashboard
NAME                               TYPE                                  DATA   AGE
default-token-vxd7t                kubernetes.io/service-account-token   3      5m27s
kubernetes-dashboard-certs         Opaque                                0      5m27s
kubernetes-dashboard-csrf          Opaque                                1      5m27s
kubernetes-dashboard-key-holder    Opaque                                2      5m27s
kubernetes-dashboard-token-ngcmg   kubernetes.io/service-account-token   3      5m27s
"2) find the corresponding kubernetes dashboard token ngcmg with token"
[root@k8s-master01 ~]# kubectl describe secret kubernetes-dashboard-token-ngcmg -n kubernetes-dashboard
"Remember the value after the token, and copy the following token value to the browser token login to log in:"
Click Sign in to log in. By default you can only see resources in the default namespace.
**"Create an administrator token to view any space permissions"
[root@k8s-master01 ~]# kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:kubernetes-dashboard
"Find the corresponding kubernetes dashboard token ngcmg with token"
[root@k8s-master01 ~]# kubectl describe secret kubernetes-dashboard-token-ngcmg -n kubernetes-dashboard
"Remember the value behind the token, and copy the following token value to the browser token login to log in, so you have permission to view all resources"
"8 installing metrics components"
Upload the metrics-server-amd64_0_3_1.tar.gz and addon.tar.gz images to each node and load them with docker load -i as shown below. The image files are in the Baidu network disk linked at the beginning of the article.
[root@k8s-master01 ~]# docker load -i metrics-server-amd64_0_3_1.tar.gz
[root@k8s-master01 ~]# docker load -i addon.tar.gz
metrics-server version 0.3.1, image k8s.gcr.io/metrics-server-amd64:v0.3.1
addon-resizer version 1.8.4, image k8s.gcr.io/addon-resizer:1.8.4
8.1 operation on the k8s-master01 node
[root@k8s-master01 ~]# kubectl apply -f metrics.yaml
The metrics.yaml file content can be copied from the following address:
https://raw.githubusercontent.com/luckylucky421/kubernetes1.17.3/master/metrics.yaml
If you can't access the address above, visit the following link, clone the repository, and manually copy the yaml file to k8s-master01:
https://github.com/luckylucky421/kubernetes1.17.3
- verification
After the above components are installed, check whether they are running normally. If STATUS is Running, the component is healthy, as shown below:
[root@k8s-master01 ~]# kubectl get pods -n kube-system -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATE calico-node-h66ll 1/1 Running 0 51m 192.168.0.56 node1 <none> calico-node-r4k6w 1/1 Running 0 58m 192.168.0.6 master1 <none> coredns-66bff467f8-2cj5k 1/1 Running 0 70m 10.244.0.3 master1 <none> coredns-66bff467f8-nl9zt 1/1 Running 0 70m 10.244.0.2 master1 <none> etcd-master1 1/1 Running 0 70m 192.168.0.6 master1 <none> kube-apiserver-master1 1/1 Running 0 70m 192.168.0.6 master1 <none> kube-controller-manager-master1 1/1 Running 0 70m 192.168.0.6 master1 <none> kube-proxy-qts4n 1/1 Running 0 70m 192.168.0.6 master1 <none> kube-proxy-x647c 1/1 Running 0 51m 192.168.0.56 node1 <none> kube-scheduler-master1 1/1 Running 0 70m 192.168.0.6 master1 <none> metrics-server-8459f8db8c-gqsks 2/2 Running 0 16s 10.244.1.6 node1 <none> traefik-ingress-controller-xhcfb 1/1 Running 0 39m 192.168.0.6 master1 <none> traefik-ingress-controller-zkdpt 1/1 Running 0 39m 192.168.0.56 node1 <none>
If metrics-server-8459f8db8c-gqsks is in Running status, the metrics-server component has been deployed successfully. You can now run kubectl top pods -n kube-system or kubectl top nodes on the master1 node.
Official account: operation and development story
github: https://github.com/orgs/sunsharing-note/dashboard
Love life, love operation and maintenance