Binary installation of Kubernetes (k8s)

Posted by figo2476 on Sat, 13 Jun 2020 10:39:21 +0200


1, Kubernetes platform environment planning

1. Environment

Software                  Version
Linux operating system    CentOS7.6_x64
Kubernetes                1.15.3
Docker                    19.03.1
Etcd                      3.x
Flannel                   0.10

2. Component allocation planning

Role                     IP                                   Components
Master01                 192.168.1.244                        etcd, kube-apiserver, kube-controller-manager, kube-scheduler, docker, flannel
Master02                 192.168.1.245                        etcd, kube-apiserver, kube-controller-manager, kube-scheduler, docker, flannel
Node01                   192.168.1.246                        etcd, kubelet, kube-proxy, docker, flannel
Node02                   192.168.1.247                        kubelet, kube-proxy, docker, flannel
Load Balancer (Master)   192.168.1.248, 192.168.1.241 (VIP)   nginx, keepalived
Load Balancer (Backup)   192.168.1.249, 192.168.1.241 (VIP)   nginx, keepalived
  • Single cluster architecture


  • Multi Master cluster architecture


2, Three official deployment methods

1.minikube

Minikube is a tool that quickly runs a single-node Kubernetes cluster locally; it is intended only for users trying out Kubernetes or for day-to-day development. Deployment address: https://kubernetes.io/docs/setup/minikube/

2.kubeadm

Kubeadm is a tool that provides the kubeadm init and kubeadm join commands for rapidly deploying Kubernetes clusters. Deployment address: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

3. Binary package

The recommended approach: download the official release binary packages and manually deploy each component to build a Kubernetes cluster. Download address: https://github.com/kubernetes/kubernetes/releases

4. Preparation before deployment (important!!)

#Disable SELinux (temporary)
1. setenforce 0

#Turn off the firewall
2. systemctl stop firewalld

#Modify the host name
3. hostname master01

#Time synchronization
4.
yum -y install ntpdate
ntpdate time2.aliyun.com

#5. Disable the swap partition
#(1) Permanently disable swap in /etc/fstab (takes effect after reboot)
sed -ri 's/.*swap.*/#&/' /etc/fstab

#(2) Temporarily disable swap (does not persist across reboots)
swapoff -a
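
A common complement to the steps above is to make the SELinux and firewall changes permanent as well (both are standard CentOS 7 commands):

#Permanently disable SELinux (takes effect after the next reboot)
sed -ri 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

#Prevent firewalld from starting on boot
systemctl disable firewalld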




3, Self-signed SSL certificates

Component        Certificates used
etcd             ca.pem, server.pem, server-key.pem
flannel          ca.pem, server.pem, server-key.pem
kube-apiserver   ca.pem, server.pem, server-key.pem
kubelet          ca.pem, ca-key.pem
kube-proxy       ca.pem, kube-proxy.pem, kube-proxy-key.pem
kubectl          ca.pem, admin.pem, admin-key.pem

1. Generate etcd certificate

$mkdir k8s
$cd k8s
$mkdir etcd-cert k8s-cert
$cd etcd-cert
//After updating the etcd node IPs, run the following two scripts (cfssl.sh, etcd-cert.sh)
$sh ./cfssl.sh
$sh ./etcd-cert.sh

cfssl.sh #cfssl is the tool used to generate certificates
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
etcd-cert.sh #Generate the CA and etcd server certificates
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------

cat > server-csr.json <<EOF
{
    "CN": "etcd",
    "hosts": [
    "192.168.1.244",
    "192.168.1.245",
    "192.168.1.246"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

4, Etcd database cluster deployment

1. Binary package download address

https://github.com/etcd-io/etcd/releases

etcd-v3.3.15-linux-amd64.tar.gz

2. Decompress and install

wget https://github.com/etcd-io/etcd/releases/download/v3.3.15/etcd-v3.3.15-linux-amd64.tar.gz
tar -zxvf etcd-v3.3.15-linux-amd64.tar.gz
cd etcd-v3.3.15-linux-amd64
mkdir -p /opt/etcd/{ssl,cfg,bin}
mv etcd etcdctl /opt/etcd/bin/
#Copy certificate to specified directory
cp /root/k8s/etcd-cert/{ca,server-key,server}.pem /opt/etcd/ssl

Deploy and configure etcd: create the configuration file and the systemd unit file

sh ./etcd.sh etcd01 192.168.1.244 etcd02=https://192.168.1.245:2380,etcd03=https://192.168.1.246:2380

etcd.sh

#!/bin/bash
# example: ./etcd.sh etcd01 192.168.1.10 etcd02=https://192.168.1.11:2380,etcd03=https://192.168.1.12:2380

ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3

WORK_DIR=/opt/etcd

cat <<EOF >$WORK_DIR/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=${WORK_DIR}/cfg/etcd
ExecStart=${WORK_DIR}/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=${WORK_DIR}/ssl/server.pem \
--key-file=${WORK_DIR}/ssl/server-key.pem \
--peer-cert-file=${WORK_DIR}/ssl/server.pem \
--peer-key-file=${WORK_DIR}/ssl/server-key.pem \
--trusted-ca-file=${WORK_DIR}/ssl/ca.pem \
--peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd


#Copy to other etcd nodes
scp -r /opt/etcd/ root@192.168.1.245:/opt/
scp -r /opt/etcd/ root@192.168.1.246:/opt/

scp /usr/lib/systemd/system/etcd.service root@192.168.1.245:/usr/lib/systemd/system/etcd.service

scp /usr/lib/systemd/system/etcd.service root@192.168.1.246:/usr/lib/systemd/system/etcd.service


##On each of the other nodes, edit the etcd name and IPs accordingly
vim /opt/etcd/cfg/etcd

ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.245:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.245:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.245:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.245:2379"              

vim /opt/etcd/cfg/etcd

ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.246:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.246:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.246:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.246:2379"              



#Start etcd on every node
systemctl start etcd

3. View the cluster status

/opt/etcd/bin/etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://192.168.1.244:2379,https://192.168.1.245:2379,https://192.168.1.246:2379" \
cluster-health
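
The command above uses the etcdctl v2 API (the default in etcd 3.3). Roughly the same health check with the v3 API looks like this:

ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
--endpoints="https://192.168.1.244:2379,https://192.168.1.245:2379,https://192.168.1.246:2379" \
--cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem \
endpoint health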

5, Install Docker on the nodes


Official website: https://docs.docker.com

Step 1: Install required system tools
sudo yum install -y yum-utils device-mapper-persistent-data lvm2

Step 2: Add software source information
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Step 3: Update and install Docker-CE
sudo yum makecache fast
sudo yum -y install docker-ce

Step 4: Image download acceleration configuration: https://www.daocloud.io/mirror

curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io

Step 5: Start the Docker service
sudo systemctl restart docker 
sudo systemctl enable docker

Step 6: Check the Docker version
docker version
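
The kubelet configuration used later in this guide sets cgroupDriver: cgroupfs, so it is worth confirming that Docker reports the same cgroup driver (a quick sanity check; docker-ce defaults to cgroupfs):

docker info | grep -i cgroup
#Should report: Cgroup Driver: cgroupfs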






6, Deploy Kubernetes network

Basic requirements of Kubernetes network model design

  • One Pod, one IP
  • Each Pod has its own independent IP; all containers in a Pod share the network namespace (the same IP)
  • All containers can communicate with all other containers
  • All nodes can communicate with all containers

Container Network Interface (CNI): the container network interface standard, led by Google and CoreOS.

Mainstream technology:


Overlay Network

An overlay network is a virtual network layered on top of the underlying physical network; hosts in it are connected by virtual links.

Installing Flannel

Flannel is one kind of overlay network: it encapsulates the source packet inside another network packet for routing, forwarding, and communication. It currently supports UDP, VXLAN (most commonly used), Host-GW (does not work across subnets), AWS VPC, GCE routing, and other data forwarding methods.

1. Write the subnet range allocated for flanneld into etcd


/opt/etcd/bin/etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://192.168.1.244:2379,https://192.168.1.245:2379,https://192.168.1.246:2379" \
set /coreos.com/network/config '{ "Network": "10.0.0.0/16", "Backend": {"Type": "vxlan"}}'
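
To confirm the network configuration was written correctly, read it back with the same etcdctl options:

/opt/etcd/bin/etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://192.168.1.244:2379,https://192.168.1.245:2379,https://192.168.1.246:2379" \
get /coreos.com/network/config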


2. Download binary package

https://github.com/coreos/flannel/releases

3. Deployment and configuration of Flannel

wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz

mkdir /opt/kubernetes/{bin,cfg,ssl} -p 
tar -zxvf flannel-v0.11.0-linux-amd64.tar.gz
mv flanneld mk-docker-opts.sh /opt/kubernetes/bin/


###Manage Flannel with systemd
###Configure Docker to use the subnet generated by Flannel

sh ./flannel.sh https://192.168.1.244:2379,https://192.168.1.245:2379,https://192.168.1.246:2379

flannel.sh

#!/bin/bash

ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/flanneld

FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"

EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

EOF

cat <<EOF >/usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd \$DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
systemctl restart docker

4. Start Flannel

systemctl start flanneld.service

#######Copy to another node
scp -r /opt/etcd/ root@192.168.1.246:/opt/
scp -r /opt/kubernetes/ root@192.168.1.246:/opt/

scp -r /usr/lib/systemd/system/{docker,flanneld}.service root@192.168.1.246:/usr/lib/systemd/system/

#Start Flannel on the other nodes as well

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
systemctl restart docker




#View the allocated subnets (run on the master)

/opt/etcd/bin/etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://192.168.1.244:2379,https://192.168.1.245:2379,https://192.168.1.246:2379" \
ls /coreos.com/network/subnets


/opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.1.244:2379,https://192.168.1.245:2379,https://192.168.1.246:2379" get /coreos.com/network/subnets/172.17.19.0-24

ip route

5. Test communication between containers

docker run -it busybox
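
A minimal cross-node test looks roughly like this (the flannel-assigned container IP will differ in your environment; 172.17.19.x matches the subnet shown above):

#On node01
docker run -it busybox
/ # ip addr show eth0        #note this container's IP, e.g. 172.17.19.2

#On node02
docker run -it busybox
/ # ping 172.17.19.2         #replace with the IP noted on node01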

7, Deploy Master components

Official website: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.16.md

wget https://dl.k8s.io/v1.16.1/kubernetes-server-linux-amd64.tar.gz

Generate apiserver certificate

#Execute script to generate certificate
$sh k8s-cert.sh

#Copy the certificate to the corresponding directory
$cp ca-key.pem ca.pem server.pem server-key.pem /opt/kubernetes/ssl


# Create TLS Bootstrapping Token
#Use the following command to generate random characters
#BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
BOOTSTRAP_TOKEN=8440d1ad1c6184d4ca456eb345d0feff

cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

$mv token.csv /opt/kubernetes/cfg/

k8s-cert.sh

#Modify the IPs in the script: these are the IPs allowed to access the apiserver
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
      	    "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------

cat > server-csr.json <<EOF
{
    "CN": "kubernetes",
    "hosts": [
      "192.168.1.244",
      "127.0.0.1",
      "10.0.0.1",
      "192.168.1.241",
      "192.168.1.242",
      "192.168.1.243",
      "192.168.1.245",
      "192.168.1.246",
      "192.168.1.247",
      "192.168.1.248",
      "192.168.1.249",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

#-----------------------

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

#-----------------------

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

1. Kube apiserver installation

tar -zxvf kubernetes-server-linux-amd64.tar.gz
mkdir /opt/kubernetes/{bin,cfg,ssl} -p
cd kubernetes/server/bin/
cp kube-controller-manager kube-apiserver kube-scheduler /opt/kubernetes/bin/
cp kubectl /usr/bin/


#Change to the script directory, then run:
sh apiserver.sh 192.168.1.244 https://192.168.1.244:2379,https://192.168.1.245:2379,https://192.168.1.246:2379

apiserver.sh

#!/bin/bash

MASTER_ADDRESS=$1
ETCD_SERVERS=$2

cat <<EOF >/opt/kubernetes/cfg/kube-apiserver

KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=${ETCD_SERVERS} \\
--bind-address=${MASTER_ADDRESS} \\
--secure-port=6443 \\
--advertise-address=${MASTER_ADDRESS} \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver

Change the log path

#View the apiserver configuration file
cat /opt/kubernetes/cfg/kube-apiserver

#########By default, logs are written to /var/log/messages. To customize the log location, do the following:

mkdir /opt/kubernetes/logs
vim /opt/kubernetes/cfg/kube-apiserver

#Change
KUBE_APISERVER_OPTS="--logtostderr=true \
#to
KUBE_APISERVER_OPTS="--logtostderr=false \
--log-dir=/opt/kubernetes/logs \

2. Kube Controller Manager installation

sh controller-manager.sh 127.0.0.1

#########By default, logs are written to /var/log/messages. To customize, see the apiserver installation above

controller-manager.sh

#!/bin/bash

MASTER_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager


KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect=true \\
--address=127.0.0.1 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager

3. Kube scheduler installation

sh scheduler.sh 127.0.0.1
#########By default, logs are written to /var/log/messages. To customize, see the apiserver installation above
#!/bin/bash

MASTER_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-scheduler

KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler

For each component the pattern is: configuration file -> systemd unit -> start

To list resource types and their abbreviations:
kubectl api-resources

4. Add a second Master

Copy the files from master01 to the new master

scp -r /opt/kubernetes root@192.168.1.245:/opt/

scp /usr/lib/systemd/system/{kube-apiserver,kube-scheduler,kube-controller-manager}.service root@192.168.1.245:/usr/lib/systemd/system/

scp /usr/bin/kubectl root@192.168.1.245:/usr/bin/

scp -r /opt/etcd/ssl/ root@192.168.1.245:/opt/etcd/

Modify the configuration file

[root@master02 cfg]# grep 244 *

vim kube-apiserver
//Change --bind-address and --advertise-address to the new master's IP

Start the services


systemctl daemon-reload
systemctl restart kube-apiserver
systemctl restart kube-scheduler
systemctl restart kube-controller-manager

kubectl get componentstatus

8, Deploy Node components


1. Bind the kubelet bootstrap user to the system cluster role (executed on the Master)

#Grant permissions to the kubelet-bootstrap user defined in token.csv

kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
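
To confirm the binding was created:

kubectl describe clusterrolebinding kubelet-bootstrap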





2. Create the kubeconfig file (executed on the Master)

##Usage: sh kubeconfig.sh <apiserver IP> <certificate directory>

$sh kubeconfig.sh 192.168.1.244  /root/k8s/k8s-cert/


kubeconfig.sh

APISERVER=$1
SSL_DIR=$2
#Fill in the random string generated earlier for token.csv
BOOTSTRAP_TOKEN=8440d1ad1c6184d4ca456eb345d0feff
# Create kubelet bootstrapping kubeconfig 
export KUBE_APISERVER="https://$APISERVER:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Set default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------

# Create the Kube proxy kubeconfig file

kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=$SSL_DIR/kube-proxy.pem \
  --client-key=$SSL_DIR/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

###Copy the generated bootstrap.kubeconfig and kube-proxy.kubeconfig to the nodes###

#Copy to node1:
scp bootstrap.kubeconfig  kube-proxy.kubeconfig  root@192.168.1.246:/opt/kubernetes/cfg/

##Also copy the kubelet and kube-proxy binaries to the node (found in kubernetes-server-linux-amd64.tar.gz)
scp kubelet kube-proxy  root@192.168.1.246:/opt/kubernetes/bin/


#Copy to node2:
scp bootstrap.kubeconfig  kube-proxy.kubeconfig  root@192.168.1.247:/opt/kubernetes/cfg/
##Also copy the kubelet and kube-proxy binaries (found in kubernetes-server-linux-amd64.tar.gz)
scp kubelet kube-proxy  root@192.168.1.247:/opt/kubernetes/bin/

3. Deploy the kubelet and kube-proxy components (run on node 192.168.1.246 to join it to the master)

Scripts: (kubelet.sh, proxy.sh)
$sh kubelet.sh 192.168.1.246
$sh proxy.sh 192.168.1.246

kubelet.sh

#!/bin/bash

NODE_ADDRESS=$1
DNS_SERVER_IP=${2:-"10.0.0.2"}

cat <<EOF >/opt/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_ADDRESS} \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet.config \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=docker.io/kubernetes/pause:latest"

EOF

cat <<EOF >/opt/kubernetes/cfg/kubelet.config

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: ${NODE_ADDRESS}
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- ${DNS_SERVER_IP} 
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
EOF

cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet

proxy.sh

#!/bin/bash

NODE_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-proxy

KUBE_PROXY_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_ADDRESS} \\
--cluster-cidr=10.0.0.0/24 \\
--proxy-mode=ipvs \\
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy

4. Approve the certificate requests on the master node

$kubectl get csr

$kubectl certificate approve node-csr-NK3xFo5gaa3-k6gLyytKmUW2sUHZxnouyD9Kn2arJmk
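
If several nodes are waiting at once, one way to approve all pending requests in a single pass (rather than copying each CSR name) is:

kubectl get csr -o name | xargs kubectl certificate approve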




##Validate on the node
//In the node's ssl directory you will now see four additional kubelet certificate files
ll /opt/kubernetes/ssl



#########By default, logs are written to /var/log/messages. To customize the log location, do the following:
$vim  /opt/kubernetes/cfg/kubelet
$vim /opt/kubernetes/cfg/kube-proxy

$mkdir -p /opt/kubernetes/logs

#Change
KUBELET_OPTS="--logtostderr=true \
#to
KUBELET_OPTS="--logtostderr=false \
--log-dir=/opt/kubernetes/logs \




To rejoin a node to the cluster, first delete its kubelet.kubeconfig and SSL certificates.

9, Deploy a test example

# kubectl run nginx --image=nginx --replicas=3 
# kubectl get pod
# kubectl scale deployment nginx --replicas=5
# kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort 
# kubectl get svc nginx


#Authorization: without this, kubectl exec cannot log in to containers and kubectl logs cannot view container logs.
kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
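
To verify the exposed nginx service from outside the cluster, look up the NodePort that was assigned and curl any node IP on that port (a quick sketch; 192.168.1.246 is one of this cluster's nodes):

NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl http://192.168.1.246:${NODE_PORT}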

#Start kubelet manually (useful for troubleshooting)

/opt/kubernetes/bin/kubelet --logtostderr=false --log-dir=/opt/kubernetes/logs --v=4 --hostname-override=192.168.1.246 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=docker.io/kubernetes/pause:latest

Delete a node and rejoin it to the Master

kubectl delete nodes  192.168.1.246
systemctl stop kubelet kube-proxy

#Remove the SSL certificates
rm -fr /opt/kubernetes/ssl/*

#Regenerate the certificates, rejoin, and start
sh kubelet.sh 192.168.1.246
sh proxy.sh 192.168.1.246


#Approve the join request on the master
$kubectl get csr

$kubectl certificate approve node-csr-NK3xFo5gaa3-k6gLyytKmUW2sUHZxnouyD9Kn2arJmk

10, Deploy the cluster internal DNS resolution service (CoreDNS)

1. Modify some parameters

Three parameters need to be modified: the kubernetes zone (in-addr.arpa / ip6.arpa), the image source (switch to one reachable from China), and the clusterIP, as follows

  • The kubernetes plugin zone is set to: kubernetes cluster.local. in-addr.arpa ip6.arpa

  • The image is changed to the Docker Hub image coredns/coredns:1.2.6

  • The clusterIP must fall within the service IP range of the cluster. My cluster's service range is 10.0.0.0/24, so it is set to 10.0.0.2 (the DNS address already specified in the kubelet configuration when the nodes were deployed, and an IP that is not otherwise in use)

    The full YAML is as follows (you only need to adjust the clusterIP so that it falls within your own cluster's service range and does not collide with an existing IP):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local. in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      serviceAccountName: coredns
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      containers:
      - name: coredns
        image: coredns/coredns:1.2.6
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.0.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

2. Create dns

kubectl create -f coredns.yaml

3. Check pod and svc

[root@K8S-M1 ~]# kubectl get all -n kube-system
NAME                           READY   STATUS    RESTARTS   AGE
pod/coredns-57b8565df8-nnpcc   1/1     Running   1          9h

NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
service/kube-dns   ClusterIP   10.10.10.2   <none>        53/UDP,53/TCP   9h

NAME                      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns   1         1         1            1           9h

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-57b8565df8   1         1         1       9h

4. Check the CoreDNS service; it is running now

[root@K8S-M1 ~]# kubectl  cluster-info
Kubernetes master is running at http://localhost:8080
Heapster is running at http://localhost:8080/api/v1/namespaces/kube-system/services/heapster/proxy
CoreDNS is running at http://localhost:8080/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubernetes-dashboard is running at http://localhost:8080/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
monitoring-influxdb is running at http://localhost:8080/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy

5. Verification method 1

Create a simple CentOS pod for testing (busybox has some pitfalls and certain versions have problems with DNS testing; see verification method 2 for a pinned busybox image).

cat >centos.yaml<<EOF
apiVersion: v1
kind: Pod
metadata:
  name: centoschao
  namespace: default
spec:
  containers:
  - image: centos
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
    name: centoschao
  restartPolicy: Always
EOF

5.1. Testing

kubectl create -f centos.yaml
[root@K8S-M1 ~]# kubectl get svc
NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes       ClusterIP   10.10.10.1     <none>        443/TCP          15d
nginx            ClusterIP   10.10.10.252   <none>        80/TCP           9h
    
[root@master-a yaml]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
centoschao               1/1     Running   0          76s
nginx-6db489d4b7-cxljn   1/1     Running   0          4h55m

[root@K8S-M1 ~]#  kubectl exec -it centoschao sh
sh-4.2# yum install bind-utils -y
sh-4.2# nslookup kubernetes
Server:     10.10.10.2
Address:    10.10.10.2#53

Name:   kubernetes.default.svc.cluster.local
Address: 10.10.10.1

sh-4.2# nslookup nginx     
Server:     10.10.10.2
Address:    10.10.10.2#53

Name:   nginx.default.svc.cluster.local
Address: 10.10.10.252

sh-4.2# nslookup nginx.default.svc.cluster.local
Server:     10.10.10.2
Address:    10.10.10.2#53

Name:   nginx.default.svc.cluster.local
Address: 10.10.10.252

ok, it's a success

6. Verification method 2

cat >busybox.yaml<<EOF
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

6.1 Create the pod and test resolution of kubernetes.default

kubectl create -f busybox.yaml
kubectl get pods busybox
kubectl exec busybox -- cat /etc/resolv.conf
kubectl exec -ti busybox -- nslookup kubernetes.default

11, Deploy Web UI (Dashboard)

1. Download

https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dashboard

-rw-r--r-- 1 root root  264 Oct  9 15:22 dashboard-configmap.yaml
-rw-r--r-- 1 root root 1784 Oct  9 15:22 dashboard-controller.yaml
-rw-r--r-- 1 root root 1353 Oct  9 15:22 dashboard-rbac.yaml
-rw-r--r-- 1 root root  551 Oct  9 15:22 dashboard-secret.yaml
-rw-r--r-- 1 root root  322 Oct  9 15:22 dashboard-service.yaml

1. vim dashboard-controller.yaml
#The default dashboard image k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1 is only reachable through a proxy from mainland China, so change it to the following mirror address:
registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.0

2. Expose the service externally via NodePort
kubectl edit svc -n kube-system kubernetes-dashboard


3. Create an admin account for logging in
cat > k8s-admin.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: dashboard-admin
subjects:
  - kind: ServiceAccount
    name: dashboard-admin
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
EOF

  
  
4. Apply all the manifests
kubectl apply -f .

2. View TOKEN

#View account
kubectl get secrets  -n kube-system 
#View account TOKEN
kubectl describe secrets  -n kube-system  dashboard-admin-token-9g9hp

#kubectl get secrets  -n kube-system dashboard-admin-token-9g9hp -o yaml
#echo TOKEN | base64 -d
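
The secret name suffix (9g9hp here) is random, so one convenient way to print the token without looking the name up first is:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep dashboard-admin | awk '{print $1}') | grep ^token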


3. Fix the dashboard certificate expiry problem

The solution is simply to replace the default certificate.

#Generate the certificate
vim shengche.sh

cat > dashboard-csr.json <<EOF
{
    "CN": "Dashboard",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
EOF

K8S_CA=$1
cfssl gencert -ca=$K8S_CA/ca.pem -ca-key=$K8S_CA/ca-key.pem -config=$K8S_CA/ca-config.json -profile=kubernetes dashboard-csr.json | cfssljson -bare dashboard
kubectl delete secret kubernetes-dashboard-certs -n kube-system
kubectl create secret generic kubernetes-dashboard-certs --from-file=./ -n kube-system

sh shengche.sh <ca certificate directory>

After execution, dashboard-key.pem and dashboard.pem are generated.
# Add the two certificate lines to dashboard-controller.yaml, then apply it again
#        args:
#          # PLATFORM-SPECIFIC ARGS HERE
#          - --auto-generate-certificates
#          - --tls-key-file=dashboard-key.pem
#          - --tls-cert-file=dashboard.pem



12, LB configuration (keepalived + nginx)

1. Install keepalived + nginx

yum -y install keepalived nginx

2. Configure nginx

vim /etc/nginx/nginx.conf  


//Add at the top level of the file (outside the http block):
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;
    
    upstream k8s-apiserver {
        server 192.168.1.244:6443;
        server 192.168.1.245:6443;
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}   
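
The stream block requires nginx's stream module; on CentOS 7 it may be packaged separately (for example as nginx-mod-stream in EPEL), so install it if the check below complains. Validate the configuration before starting:

nginx -t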


//Start nginx
systemctl start nginx

3. Configure keepalived

vim /etc/keepalived/keepalived.conf


! Configuration File for keepalived 
 
global_defs { 
   notification_email { 
     acassen@firewall.loc 
     failover@firewall.loc 
     sysadmin@firewall.loc 
   } 
   notification_email_from Alexandre.Cassen@firewall.loc  
   smtp_server 127.0.0.1 
   smtp_connect_timeout 30 
   router_id NGINX_MASTER 
} 

vrrp_script check_nginx {
    script "/usr/local/nginx/sbin/check_nginx.sh"
}

vrrp_instance VI_1 { 
    state MASTER      # set to BACKUP on the backup server
    interface eth0    # network interface name
    virtual_router_id 51    # VRRP router ID; each instance is unique
    priority 100      # priority; set to 90 on the backup server
    advert_int 1      # VRRP heartbeat advertisement interval, 1 second by default
    authentication { 
        auth_type PASS      
        auth_pass 1111 
    }  
    virtual_ipaddress { 
        192.168.1.241/24
    } 
    track_script {
        check_nginx
    } 
}

systemctl start keepalived

4. Create a health check script

mkdir -p /usr/local/nginx/sbin/
vim /usr/local/nginx/sbin/check_nginx.sh

count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
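
keepalived can only run the track script if it is executable, so make it so:

chmod +x /usr/local/nginx/sbin/check_nginx.sh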

5. On the nodes, change the apiserver address to point to the LB VIP

cd /opt/kubernetes/cfg
vi bootstrap.kubeconfig
vi kubelet.kubeconfig
vi kube-proxy.kubeconfig

//Change the server address to the VIP: https://192.168.1.241:6443

systemctl restart kubelet
systemctl restart kube-proxy
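
Instead of editing the three files by hand, a one-liner like the following also works, assuming they currently point at master01 (192.168.1.244):

cd /opt/kubernetes/cfg
sed -i 's#https://192.168.1.244:6443#https://192.168.1.241:6443#g' bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig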

13, Connect to the K8S cluster remotely with kubectl

Enter the certificate directory and create kubectl.sh to generate the config file:

kubectl config set-cluster kubernetes \
--server=https://192.168.1.241:6443 \
--embed-certs=true \
--certificate-authority=/root/k8s/k8s-cert/ca.pem \
--kubeconfig=config

kubectl config set-credentials cluster-admin \
--certificate-authority=/root/k8s/k8s-cert/ca.pem \
--embed-certs=true \
--client-key=/root/k8s/k8s-cert/admin-key.pem \
--client-certificate=/root/k8s/k8s-cert/admin.pem \
--kubeconfig=config

kubectl config set-context default --cluster=kubernetes --user=cluster-admin --kubeconfig=config  

kubectl config use-context default --kubeconfig=config

Execute on the remote kubectl host:

kubectl --kubeconfig=./config get node
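
To avoid passing --kubeconfig on every invocation, the generated file can also be placed in kubectl's default location:

mkdir -p ~/.kube
cp ./config ~/.kube/config
kubectl get node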

Topics: Kubernetes SSL kubelet Docker