Installation and deployment of kubernetes (binary package mode)
- 1, Introduction to installation and deployment
- 2, kubernetes (binary package mode) installation and deployment
- 1. Deploy etcd cluster
- 1.1 generate etcd certificate
- 1.2 download / decompress etcd binary package
- 1.3 create etcd configuration file
- 1.4 systemd manages etcd and creates etcd service files
- 1.5 start the etcd service and enable it at boot
- 1.6 verify the success of etcd deployment
- 2. Install Docker
- 3. Deploy the Flannel network
- 3.1 write the predefined subnet segment into etcd
- 3.2 download / decompress the flannel binary package
- 3.3 configuring Flannel
- 3.4 systemd manages flannel and creates the flanneld service
- 3.5 configure Docker to start the specified subnet segment and recreate docker.service
- 3.6 restart flannel and docker
- 3.7 verify whether flannel and docker are effective
- 4. Install and deploy kubernetes
- 4.1 generate certificate
- 4.2 download / decompress kubernetes binary package
- 4.3 master node deploys the apiserver
- 4.4 master node deploys the scheduler
- 4.5 master node deploys the controller-manager
- 4.6 view the current cluster component status
- 5. Bind the system cluster role on the master node
- 6. Add the master node as a node
- 7. Create an Nginx Web to determine whether the cluster works normally
1, Introduction to installation and deployment
This guide installs and deploys kubernetes from binary packages. The required components are etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy, and flannel.
- Operating system: CentOS 7.5
- 1 machine: 172.27.19.143
- etcd file directory structure
/opt
└── etcd
    ├── bin  (etcd executables and commands, such as etcd, etcdctl)
    ├── cfg  (etcd configuration files, such as etcd.conf)
    ├── ssl  (etcd certificate files, such as *.pem files)
    ├── logs (etcd service log files)
    └── data (etcd data files)
- kubernetes file directory structure
/opt
└── kubernetes
    ├── bin  (kubernetes executables and commands, such as kube-apiserver, kubelet, kubectl, etc.)
    ├── cfg  (kubernetes configuration files)
    ├── ssl  (kubernetes certificate files, such as *.pem files)
    ├── logs (kubernetes service log files)
    └── data (kubernetes data files)
- Preparation in advance
1. Turn off the firewall and SELinux, and disable swap (swapoff -a);
2. Download the cfssl tool
- kubernetes environment installation and deployment process
1, Deploy etcd cluster
├ - 1. Generate the etcd certificate
├ - 1.1 create CA certificate configuration ca-config.json
├ - 1.2 create CA certificate signature request configuration ca-csr.json
├ - 1.3 create etcd certificate signing request configuration server-csr.json
├ - 1.4 execute cfssl command to generate certificate
├ - 1.5 put the generated certificate in / opt/etcd/ssl directory
├ - 2. Download / unzip etcd binary package
├ - 3. Create etcd configuration file
├ - 4.systemd manages etcd and creates etcd service files
├ - 5. Start and set up etcd service
├ - 6. Verify the success of etcd deployment
2, Install Docker
3, Deploy the Flannel network
├ - 1. Flannel uses etcd to store its subnet information, so the predefined subnet segment must be written into etcd
├ - 2. Download / unzip the flannel binary package
├ - 3. Configure Flannel
├ - 4. systemd manages flannel and creates the flanneld service
├ - 5. Configure Docker to start the specified subnet segment and recreate docker.service
├ - 6. Restart flannel and docker
├ - 7. Verify whether flannel and docker are effective
4, Install and deploy kubernetes (deploy apiserver, scheduler, controller manager components on the master node)
├ - 1. Generate certificate
├ - 1.1 create CA certificate configuration ca-config.json
├ - 1.2 create CA certificate signature request configuration ca-csr.json
├ - 1.3 create kubernetes certificate signing request configuration server-csr.json
├ - 1.4 create kube-proxy certificate signing request configuration kube-proxy-csr.json
├ - 1.5 execute cfssl command to generate certificate
├ - 1.6 put the generated certificate in / opt/kubernetes/ssl directory
├ - 2. Download / unzip kubernetes binary package
├ - 3. master node deploys the apiserver
├ - 3.1 create token file token.csv
├ - 3.2 create the apiserver configuration file kube-apiserver.conf
├ - 3.3 systemd manages the apiserver and creates the kube-apiserver service
├ - 3.4 start the kube-apiserver service
├ - 3.5 verify whether the kube-apiserver service started successfully
├ - 4. master node deploys the scheduler
├ - 4.1 create the kube-scheduler configuration file kube-scheduler.conf
├ - 4.2 systemd manages the scheduler and creates the kube-scheduler service
├ - 4.3 start the kube-scheduler service
├ - 4.4 verify whether the kube-scheduler service started successfully
├ - 5. master node deploys the controller-manager
├ - 5.1 create the controller-manager configuration file kube-controller-manager.conf
├ - 5.2 systemd manages the controller-manager and creates the kube-controller-manager service
├ - 5.3 start the kube-controller-manager service
├ - 5.4 verify whether the kube-controller-manager service started successfully
├ - 6. After all components are started successfully, check the current cluster component status through the kubectl tool
5, Bind the system cluster role on the master node
├ - 1. Bind kubelet bootstrap user to system cluster role
├ - 2. Create kubeconfig file
6, Add the master node as a node (deploy the kubelet and kube-proxy components)
├ - 1. Deploy the kubelet component on the master node
├ - 1.1 create kubelet configuration file kubelet.conf
├ - 1.2 systemd manages kubelet and creates the kubelet service
├ - 1.3 start the kubelet service
├ - 1.4 verify whether the kubelet service started successfully
├ - 1.5 approve the Node joining the cluster on the Master
├ - 1.6 add a role label to the master node to distinguish it from worker nodes (the master is generally not used to schedule pods)
├ - 2. Deploy kube-proxy on the master node
├ - 2.1 create the kube-proxy kubeconfig file
├ - 2.2 create the kube-proxy configuration file kube-proxy.conf
├ - 2.3 systemd manages kube-proxy and creates the kube-proxy service
├ - 2.4 start the kube-proxy service
├ - 2.5 verify whether the kube-proxy service started successfully
Note: adding a new worker node to the cluster works the same way: deploy kubelet and kube-proxy on that node. To access the cluster from the worker node, kubectl must also be installed and deployed there.
7, Create an Nginx Web to determine whether the cluster is working properly
2, kubernetes (binary package mode) installation and deployment
- Turn off the firewall, SELinux, and swap in advance
# Turn off firewall
systemctl stop firewalld
systemctl disable firewalld
# Disable selinux temporarily
setenforce 0
# Disable selinux permanently
sed -i 's/enforcing/disabled/' /etc/selinux/config
# Turn off swap
swapoff -a
# Synchronize system time
ntpdate time.windows.com
- Download the cfssl tool in advance
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
For background on cfssl and certificates, see:
https://blog.51cto.com/liuzhengwei521/2120535?utm_source=oschina-app
https://www.jianshu.com/p/944f2003c829
https://www.bbsmax.com/A/RnJWLj8R5q/
1. Deploy etcd cluster
- Create a directory for etcd to store relevant data
mkdir /opt/etcd/{bin,cfg,ssl,data,logs} -p
1.1 generate etcd certificate
- Create CA certificate configuration ca-config.json
# Create CA (Certificate Authority) configuration file
cat > /opt/etcd/data/ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "etcd": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
Knowledge points:
expiry: the certificate expiration time; if it is not set, the value in default applies;
ca-config.json: multiple profiles can be defined with different expiration times, usage scenarios, and other parameters; a profile is referenced later when signing certificates; this example has only the etcd profile.
signing: indicates that the certificate can be used to sign other certificates; CA=TRUE appears in the generated ca.pem certificate;
server auth: indicates that a client can use this CA to verify the certificate presented by a server;
client auth: indicates that a server can use this CA to verify the certificate presented by a client;
Pay attention to punctuation: the last field in each JSON block must not be followed by a trailing comma.
- Create CA certificate signing request configuration ca-csr.json
# CA certificate signing request file
cat > /opt/etcd/data/ca-csr.json <<EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing"
    }
  ]
}
EOF
Knowledge points:
CN: Common Name; kube-apiserver extracts this field from the certificate as the requesting user name
key: the algorithm used to generate the certificate;
names: other attributes; C, ST, and L stand for country, province/state, and city respectively; O: Organization, which kube-apiserver extracts from the certificate as the requesting user's group and binds in RBAC;
- Create etcd certificate signing request configuration server-csr.json
# Note that hosts needs to be changed to the host IP addresses of the etcd cluster
cat > /opt/etcd/data/server-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "172.27.19.143"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
EOF
Knowledge points:
hosts: lists which host names (domain names) or IPs may use the certificate requested by this CSR; if it is empty or "", any host may use it;
- Execute cfssl commands to generate the certificates
# First enter the directory where the etcd certificate configuration files are stored: /opt/etcd/data
cd /opt/etcd/data
# Initialize the CA and generate: ca-key.pem (private key), ca.pem (public key), ca.csr (certificate signing request)
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
# Generate the etcd server certificate using the existing CA
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=etcd server-csr.json | cfssljson -bare server
# View the generated certificates (ca-key.pem, ca.pem, server-key.pem, server.pem)
ls *.pem
Knowledge point: cfssljson only takes the JSON output of cfssl and writes it out as files; the main purpose of -bare is to set the prefix of the generated certificate file names.
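Before moving the files, the issued server certificate can be inspected to confirm its validity period and that the host IP ended up in the SANs. This is an optional check run from /opt/etcd/data, using openssl or the cfssl-certinfo tool downloaded earlier:
# Show the validity period of the etcd server certificate
openssl x509 -in server.pem -noout -dates
# Show the Subject Alternative Names (should include 172.27.19.143)
openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"
# The same information as JSON, via cfssl-certinfo
cfssl-certinfo -cert server.pem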
- Put the generated certificates in the /opt/etcd/ssl directory
mv ca*pem server*pem /opt/etcd/ssl
1.2 download / decompress etcd binary package
# Return to the home directory and download the package there
cd ~
# Download the etcd binary package
wget https://github.com/etcd-io/etcd/releases/download/v3.2.12/etcd-v3.2.12-linux-amd64.tar.gz
# Extract the etcd binary package
tar zxvf etcd-v3.2.12-linux-amd64.tar.gz
# Move etcd and etcdctl to /opt/etcd/bin
mv etcd-v3.2.12-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
1.3 create etcd configuration file
Note: the etcd data directory /var/lib/etcd/default.etcd (ETCD_DATA_DIR) needs to be created in advance, otherwise an error may be reported;
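A minimal way to create it, assuming the default path from the configuration below is kept:
# Create the etcd data directory in advance
mkdir -p /var/lib/etcd/default.etcd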
cat > /opt/etcd/cfg/etcd.conf <<EOF
#[Member]
ETCD_NAME="etcd"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.27.19.143:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.27.19.143:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.27.19.143:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.27.19.143:2379"
ETCD_INITIAL_CLUSTER="etcd=https://172.27.19.143:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
* ETCD_NAME: node name
* ETCD_DATA_DIR: data directory
* ETCD_LISTEN_PEER_URLS: cluster communication listening address
* ETCD_LISTEN_CLIENT_URLS: client access listening address
* ETCD_INITIAL_ADVERTISE_PEER_URLS: peer address advertised to the cluster
* ETCD_ADVERTISE_CLIENT_URLS: client address advertised to the cluster
* ETCD_INITIAL_CLUSTER: cluster node addresses
* ETCD_INITIAL_CLUSTER_TOKEN: cluster token
* ETCD_INITIAL_CLUSTER_STATE: state when joining the cluster; "new" means a new cluster, "existing" means joining an existing cluster
1.4 systemd manages etcd and creates etcd service files
# Note: the heredoc delimiter is quoted so that ${...} variables are written literally and expanded by systemd from the EnvironmentFile
cat > /usr/lib/systemd/system/etcd.service <<'EOF'
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
1.5 start the etcd service and enable it at boot
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
1.6 verify the success of etcd deployment
/opt/etcd/bin/etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://172.27.19.143:2379" \
cluster-health
2. Install Docker
- Install docker directly from the yum source
yum install docker -y
systemctl start docker
systemctl enable docker
- Configure the yum source and then install docker
# Configure one of the two image sources
# Configure the Alibaba image source
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Configure the official Docker image source
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# Install docker
# Show all installable versions of docker-ce
yum list docker-ce --showduplicates | sort -r
# Install the specified docker version
yum install docker-ce-18.06.1.ce-3.el7 -y
# Start docker and enable it at boot
systemctl daemon-reload
systemctl enable docker
systemctl start docker
# Check whether the docker service started successfully
systemctl status docker
- Local rpm package installation
# Download the rpm packages from the stable repository (for docker 17, also download the matching docker-ce-selinux package)
wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/
# Create the docker data directory and a daemon.json pointing at an Aliyun registry mirror
mkdir -p /data/docker-root
mkdir -p /etc/docker
touch /etc/docker/daemon.json
chmod 700 /etc/docker/daemon.json
cat > /etc/docker/daemon.json << EOF
{
  "graph":"/data/docker-root",
  "registry-mirrors": ["https://7bezldxe.mirror.aliyuncs.com"]
}
EOF
# Install docker
yum localinstall ./docker* -y
# Start docker and enable it at boot
systemctl enable docker
systemctl start docker
systemctl status docker
3. Deploy the Flannel network
3.1 Write the predefined subnet segment into etcd
# Flannel uses etcd to store its own subnet information, so make sure etcd is reachable and write the predefined subnet segment:
/opt/etcd/bin/etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://172.27.19.143:2379" \
set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
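The key can be read back to confirm the write succeeded, using the same etcdctl, certificates, and endpoint as above:
# Read back the flannel network configuration from etcd
/opt/etcd/bin/etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://172.27.19.143:2379" \
get /coreos.com/network/config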
3.2 download / decompress the flannel binary package
# Return to the home directory and download the package there
cd ~
# Download the flannel binary package
wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
# Unzip the flannel binary package
tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
# Create the target directory if it does not exist yet (it is also created in section 4)
mkdir -p /opt/kubernetes/bin
# Move flanneld and mk-docker-opts.sh to /opt/kubernetes/bin
mv flanneld mk-docker-opts.sh /opt/kubernetes/bin
3.3 configuring Flannel
cat > /opt/kubernetes/cfg/flanneld.conf <<EOF
FLANNEL_OPTIONS="-etcd-endpoints=https://172.27.19.143:2379 \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF
3.4 systemd manages flannel and creates the flanneld service
# Write the unit file (the heredoc delimiter is quoted so $FLANNEL_OPTIONS is kept literally for systemd)
cat > /usr/lib/systemd/system/flanneld.service <<'EOF'
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld.conf
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
3.5 configure Docker to start the specified subnet segment and recreate docker.service
# The heredoc delimiter is quoted so $DOCKER_NETWORK_OPTIONS and $MAINPID are written literally
cat > /usr/lib/systemd/system/docker.service <<'EOF'
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
EOF
3.6 restart flannel and docker
# Start flannel
systemctl daemon-reload
systemctl start flanneld
systemctl enable flanneld
# Check whether flannel started successfully
systemctl status flanneld
# Restart docker so it picks up the flannel subnet
systemctl restart docker
systemctl status docker
3.7 verify whether flannel and docker are effective
# Make sure that docker0 and flannel.1 are in the same network segment (dockerd is started with the subnet from DOCKER_NETWORK_OPTIONS)
ps -ef | grep docker
# Test connectivity between nodes: from the current node, ping the docker0 IP of another node
ping <docker0 IP of another node>
# If the ping succeeds, the flannel deployment is successful. If not, check the log:
journalctl -u flanneld
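A more direct way to compare the two interfaces, and to see the per-host subnet flannel handed to docker, is sketched below; it assumes the /run/flannel/subnet.env path configured in the flanneld unit above:
# docker0 and flannel.1 should both sit inside the 172.17.0.0/16 range written to etcd
ip -4 addr show flannel.1
ip -4 addr show docker0
# The per-host subnet written by mk-docker-opts.sh for docker
cat /run/flannel/subnet.env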
4. Install and deploy kubernetes
- Create a directory for kubernetes to store relevant data
mkdir /opt/kubernetes/{bin,cfg,ssl,data,logs} -p
4.1 generate certificate
- Create CA certificate configuration ca-config.json
# Create CA (Certificate Authority) configuration file
cat > /opt/kubernetes/data/ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
- Create CA certificate signing request configuration ca-csr.json
# CA certificate signing request file
cat > /opt/kubernetes/data/ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
- Create kubernetes certificate signing request configuration server-csr.json
# Notes on the hosts list:
#   10.0.0.1      - the first address of the service cluster IP range, later used by DNS; keep it as-is
#   127.0.0.1     - localhost; keep it as-is
#   172.27.19.143 - change this to match the IP of the machine being added
cat > /opt/kubernetes/data/server-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "172.27.19.143",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
Knowledge points:
hosts: lists which host names (domain names) or IPs may use the certificate requested by this CSR; if it is empty or "", any host may use it;
- Create kube-proxy certificate signing request configuration kube-proxy-csr.json
# kube-proxy certificate signing request
cat > /opt/kubernetes/data/kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
- Execute cfssl commands to generate the certificates
# First enter the directory where the kubernetes certificate configuration files are stored: /opt/kubernetes/data
cd /opt/kubernetes/data
# Initialize the CA and generate: ca-key.pem (private key), ca.pem (public key), ca.csr (certificate signing request)
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
# Generate the kubernetes server certificate using the existing CA
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
# Generate the kube-proxy certificate using the existing CA
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
# View the generated certificates
ls *pem
- Put the generated certificates in the /opt/kubernetes/ssl directory
mv *.pem /opt/kubernetes/ssl
4.2 download / decompress kubernetes binary package
# Note: the package may not be directly downloadable from linux; download it under windows first and then upload it to linux
# wget https://dl.k8s.io/v1.11.10/kubernetes-server-linux-amd64.tar.gz
# Download address: https://github.com/kubernetes/kubernetes (contains the necessary k8s components: kube-apiserver, kubelet, kube-scheduler, kube-controller-manager, etc.)
# 1. Under windows, open https://github.com/kubernetes/kubernetes and click a file such as CHANGELOG-1.16.md to view the corresponding version (here 1.16) and its download files;
# 2. Select kubernetes-server-linux-amd64.tar.gz under Server Binaries and download it;
# 3. After downloading under windows, upload it to linux with rz from the lrzsz tool;
# 4. Extracting it creates a kubernetes directory; copy the executables under kubernetes/server/bin/ to /opt/kubernetes/bin
# Return to the home directory where the package was uploaded
cd ~
# Extract the kubernetes binary package
tar zxvf kubernetes-server-linux-amd64.tar.gz
# Copy the executables under kubernetes/server/bin/ to /opt/kubernetes/bin
cp kubernetes/server/bin/{kube-apiserver,kube-scheduler,kube-controller-manager,kubectl,kube-proxy,kubelet} /opt/kubernetes/bin
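As an optional sanity check, assuming the copy above succeeded, the binaries can report their versions before the components are configured:
# Confirm the copied binaries run and match the downloaded release
/opt/kubernetes/bin/kube-apiserver --version
/opt/kubernetes/bin/kubelet --version
/opt/kubernetes/bin/kubectl version --client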
4.3 master node deploys the apiserver
- Create token file token.csv
# Columns: random string (self generated), user name, UID, user group
cat > /opt/kubernetes/cfg/token.csv <<EOF
674c457d4dcf2eefe4920d7dbb6b0ddc,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
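The token value above is only an example. A fresh random token can be generated as sketched below with standard coreutils; whatever value ends up in token.csv must also be used as BOOTSTRAP_TOKEN when creating bootstrap.kubeconfig in section 5.2:
# Generate a random 32-character hex string to use as the bootstrap token
head -c 16 /dev/urandom | od -An -t x | tr -d ' '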
- Create the apiserver configuration file kube-apiserver.conf
cat > /opt/kubernetes/cfg/kube-apiserver.conf <<EOF
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://172.27.19.143:2379 \
--bind-address=172.27.19.143 \
--secure-port=6443 \
--advertise-address=172.27.19.143 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF
- systemd manages the apiserver and creates the kube-apiserver service
cat > /usr/lib/systemd/system/kube-apiserver.service <<'EOF'
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
- Start the kube-apiserver service
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
- Verify that the kube-apiserver service started successfully
systemctl status kube-apiserver
4.4 master node deploys the scheduler
- Create the scheduler configuration file kube-scheduler.conf
cat > /opt/kubernetes/cfg/kube-scheduler.conf <<EOF
KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect"
EOF
- systemd manages the scheduler and creates the kube-scheduler service
cat > /usr/lib/systemd/system/kube-scheduler.service <<'EOF'
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
- Start the kube-scheduler service
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
- Verify that the kube-scheduler service started successfully
systemctl status kube-scheduler
4.5 master node deploys the controller-manager
- Create the controller-manager configuration file kube-controller-manager.conf
cat > /opt/kubernetes/cfg/kube-controller-manager.conf <<EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem"
EOF
- systemd manages the controller-manager and creates the kube-controller-manager service
cat > /usr/lib/systemd/system/kube-controller-manager.service <<'EOF'
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
- Start the kube-controller-manager service
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
- Verify that the kube-controller-manager service started successfully
systemctl status kube-controller-manager
4.6 view the current cluster component status
# Symlink kubectl into /usr/bin so the kubectl command can be used directly
ln -s /opt/kubernetes/bin/kubectl /usr/bin/
# View the status of the scheduler, etcd, and controller-manager components
kubectl get cs -o yaml
5. Bind the system cluster role on the master node
# Enter the directory /opt/kubernetes/ssl first
cd /opt/kubernetes/ssl
5.1 bind kubelet bootstrap user to system cluster role
/opt/kubernetes/bin/kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
5.2 create kubeconfig file
# Specify the apiserver address (use the load balancer address if the apiserver is behind a load balancer)
KUBE_APISERVER="https://172.27.19.143:6443"
BOOTSTRAP_TOKEN=674c457d4dcf2eefe4920d7dbb6b0ddc
# Set cluster parameters
/opt/kubernetes/bin/kubectl config set-cluster kubernetes \
--certificate-authority=./ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
# Set client authentication parameters
/opt/kubernetes/bin/kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig
# Set context parameters
/opt/kubernetes/bin/kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
# Set the default context
/opt/kubernetes/bin/kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
# Put bootstrap.kubeconfig in the /opt/kubernetes/cfg directory
mv *.kubeconfig /opt/kubernetes/cfg
ls /opt/kubernetes/cfg/*.kubeconfig
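Optionally, the generated file can be inspected to confirm the apiserver address and the embedded CA before continuing, using the kubectl symlinked in section 4.6:
# Review the bootstrap kubeconfig just written
kubectl config view --kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig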
6. Add the master node as a node
The basic component a node needs in order to join the cluster is kubelet; a worker node is mainly used to schedule Pods. If its Services need to be reachable through a cluster IP or a NodePort, the kube-proxy component must also be deployed.
6.1. Deploy kubelet components on the master node
- Create kubelet configuration file kubelet.conf
cat > /opt/kubernetes/cfg/kubelet.conf <<EOF
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=172.27.19.143 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.yaml \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF
# The /opt/kubernetes/cfg/kubelet.yaml configuration file is as follows
cat > /opt/kubernetes/cfg/kubelet.yaml <<EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 172.27.19.143
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
  webhook:
    enabled: false
EOF
- systemd manages kubelet and creates the kubelet service
cat > /usr/lib/systemd/system/kubelet.service <<'EOF'
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF
- Start the kubelet service
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
- Verify that the kubelet service started successfully
systemctl status kubelet
- Approve the Node joining the cluster on the Master
# After kubelet starts, the Node must be approved manually before it can join the cluster. View the CSRs pending approval on the Master:
/opt/kubernetes/bin/kubectl get csr
# Approve the pending request (replace XXXXID with the NAME shown by 'get csr')
/opt/kubernetes/bin/kubectl certificate approve XXXXID
# Confirm the node has joined
/opt/kubernetes/bin/kubectl get node
- Add a role label to the master node to distinguish it from worker nodes (the master is generally not used to schedule pods)
# Modify the role label of a node
kubectl label node 172.27.19.143 node-role.kubernetes.io/master=172.27.19.143
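Labeling alone does not prevent Pods from being scheduled onto the master. If the master should really stay free of ordinary workloads, a taint can be added as well; this is an optional extra step, not part of the original procedure:
# Prevent ordinary Pods from being scheduled onto the master (Pods with a matching toleration can still run there)
kubectl taint node 172.27.19.143 node-role.kubernetes.io/master=:NoSchedule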
6.2 Deploy kube-proxy on the master node
- Create the kube-proxy kubeconfig file
# Execute the following commands in the /opt/kubernetes/ssl directory (KUBE_APISERVER was set in section 5.2)
/opt/kubernetes/bin/kubectl config set-cluster kubernetes \
--certificate-authority=./ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig

/opt/kubernetes/bin/kubectl config set-credentials kube-proxy \
--client-certificate=./kube-proxy.pem \
--client-key=./kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig

/opt/kubernetes/bin/kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig

/opt/kubernetes/bin/kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

mv *.kubeconfig /opt/kubernetes/cfg
ls /opt/kubernetes/cfg/*.kubeconfig
- Create the kube-proxy configuration file kube-proxy.conf
cat > /opt/kubernetes/cfg/kube-proxy.conf <<EOF
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=172.27.19.143 \
--cluster-cidr=10.0.0.0/24 \
--config=/opt/kubernetes/cfg/kube-proxy-config.yaml"
EOF
# The kube-proxy-config.yaml file is as follows (the file name must match the --config path above):
cat > /opt/kubernetes/cfg/kube-proxy-config.yaml <<EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 172.27.19.143
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
clusterCIDR: 10.0.0.0/24
healthzBindAddress: 172.27.19.143:10256
hostnameOverride: 172.27.19.143
kind: KubeProxyConfiguration
metricsBindAddress: 172.27.19.143:10249
mode: "ipvs"
EOF
- systemd manages kube-proxy and creates the kube-proxy service
cat > /usr/lib/systemd/system/kube-proxy.service <<'EOF'
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
- Start the kube-proxy service
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
- Verify that the kube-proxy service started successfully
systemctl status kube-proxy
7. Create an Nginx Web to determine whether the cluster works normally
- Create an Nginx Web
# Run an nginx deployment
/opt/kubernetes/bin/kubectl run nginx --image=docker.io/nginx:latest --replicas=1 --image-pull-policy=IfNotPresent
# Create an nginx service
/opt/kubernetes/bin/kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
- View Pod, Service
[root@VM_19_143_centos ~]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
nginx-7c5cf9bcfc-t992w   1/1     Running   0          21m
[root@VM_19_143_centos ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP        3d16h
nginx        NodePort    10.0.0.156   <none>        88:35419/TCP   8m4s
- Access the nginx service
[root@VM_19_143_centos ~]# curl 172.27.19.143:35419
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Note: on the machine 172.27.19.143 where the pod runs, the service cannot be reached directly via curl 127.0.0.1:35419 or curl localhost:35419; some configuration is probably still missing.
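One possible cause worth checking (an assumption, since kube-proxy was configured with mode "ipvs" above) is that the IPVS kernel modules are not loaded; depending on the kube-proxy version it then either falls back to iptables or fails to set up the service rules:
# Check whether the IPVS kernel modules are loaded
lsmod | grep ip_vs
# Load them if missing (nf_conntrack_ipv4 is the module name on CentOS 7)
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do modprobe $m; done
# Restart kube-proxy and retry the curl
systemctl restart kube-proxy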