Before deploying the Kubernetes master components, make sure that etcd, flannel and Docker (set up in the first part) are all working normally; if not, fix those problems first and then continue.
Three main components are deployed on the master: kube-apiserver, kube-controller-manager and kube-scheduler.
1. Generate certificates (on the master)
1 create a directory to store the certificates, then generate them:
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------

cat > server-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.18.103",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

#-----------------------

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

#-----------------------

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2 finally, the following certificate files are generated:
[yx@tidb-tidb-03 k8sssl]$ ls *.pem
admin-key.pem  admin.pem  ca-key.pem  ca.pem  kube-proxy-key.pem  kube-proxy.pem  server-key.pem  server.pem
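As an optional sanity check (not part of the original steps), you can confirm that the server certificate really contains the apiserver addresses listed in server-csr.json by inspecting its Subject Alternative Names with the standard openssl tool:

openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"
# Expected to list 10.0.0.1, 127.0.0.1, 192.168.18.103 and the kubernetes.* DNS names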
2. Deploy the kube-apiserver component
1 download
First, download the binary package from the v1.12 changelog page: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md
You only need the kubernetes-server-linux-amd64.tar.gz package; it contains all the components required here.
mkdir /home/yx/kubernetes/{bin,cfg,ssl} -p
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager kubectl /home/yx/kubernetes/bin
cp ca*.pem /home/yx/kubernetes/ssl/       # copy the certificates
cp server*.pem /home/yx/kubernetes/ssl/
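As a quick optional check (not in the original steps), confirm the copied binaries run and report the expected version:

/home/yx/kubernetes/bin/kube-apiserver --version
/home/yx/kubernetes/bin/kubectl version --client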
2 create a token file
head -c 16 /dev/urandom | od -An -t x | tr -d ' '
71b6d986c47254bb0e63b2a20cfaf560

cat /home/yx/kubernetes/cfg/token.csv
71b6d986c47254bb0e63b2a20cfaf560,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
Column 1: random string (self-generated)
Column 2: user name
Column 3: UID
Column 4: user group
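If you prefer not to paste the random string by hand, the token file can be written in one step. This is just a sketch assuming the same path, user name, UID and group as above:

# Generate the bootstrap token and write token.csv in one go
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /home/yx/kubernetes/cfg/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF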
3 create the apiserver configuration file; it uses some of the certificates generated above.
cat /home/yx/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.18.103:2379,https://192.168.18.104:2379,https://192.168.18.105:2379 \
--bind-address=192.168.18.103 \
--secure-port=6443 \
--advertise-address=192.168.18.103 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/home/yx/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/home/yx/kubernetes/ssl/server.pem \
--tls-private-key-file=/home/yx/kubernetes/ssl/server-key.pem \
--client-ca-file=/home/yx/kubernetes/ssl/ca.pem \
--service-account-key-file=/home/yx/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/home/yx/etcd/ssl/ca.pem \
--etcd-certfile=/home/yx/etcd/ssl/server.pem \
--etcd-keyfile=/home/yx/etcd/ssl/server-key.pem"

Parameter description:
--logtostderr                  log to stderr
--v                            log level
--etcd-servers                 etcd cluster addresses
--bind-address                 listen address
--secure-port                  https secure port
--advertise-address            address advertised to the rest of the cluster
--allow-privileged             allow privileged containers
--service-cluster-ip-range     Service virtual IP address range
--enable-admission-plugins     admission control plugins
--authorization-mode           authorization mode; enables RBAC authorization and node self-management
--enable-bootstrap-token-auth  enable the TLS bootstrap feature (discussed later)
--token-auth-file              token file
--service-node-port-range      default port range assigned to NodePort-type Services
4 create the systemd startup script
[yx@tidb-tidb-03 cfg]$ cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/home/yx/kubernetes/cfg/kube-apiserver
ExecStart=/home/yx/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
5 start kube-apiserver
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver

# Verification
ps -ef | grep apiserver
root 12768 1 99 14:45 ? 00:00:02 /home/yx/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://192.168.18.103:2379,https://192.168.18.104:2379,https://192.168.18.105:2379 --bind-address=192.168.18.103 --secure-port=6443 --advertise-address=192.168.18.103 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --enable-bootstrap-token-auth --token-auth-file=/home/yx/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/home/yx/kubernetes/ssl/server.pem --tls-private-key-file=/home/yx/kubernetes/ssl/server-key.pem --client-ca-file=/home/yx/kubernetes/ssl/ca.pem --service-account-key-file=/home/yx/kubernetes/ssl/ca-key.pem --etcd-cafile=/home/yx/etcd/ssl/ca.pem --etcd-certfile=/home/yx/etcd/ssl/server.pem --etcd-keyfile=/home/yx/etcd/ssl/server-key.pem
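Besides checking the process, you can probe the apiserver itself. Since the scheduler and controller-manager below connect to the local insecure port 127.0.0.1:8080, that port can also be used for a quick health check; this is just a sketch assuming the defaults of this 1.12-style deployment:

# Confirm the secure and insecure ports are listening
ss -tlnp | grep -E '6443|8080'
# Health endpoint on the local insecure port; should return "ok"
curl http://127.0.0.1:8080/healthz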
3. Deploy the kube-scheduler component
1 create the scheduler configuration file:
[yx@tidb-tidb-03 cfg]$ cat kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect"

--master        connect to the local apiserver
--leader-elect  automatic leader election when multiple instances of the component run (HA)
2 configure the startup script
[yx@tidb-tidb-03 cfg]$ cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/home/yx/kubernetes/cfg/kube-scheduler
ExecStart=/home/yx/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
3 start the service
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler

# Verification:
ps -ef | grep scheduler
root 13296 1 0 14:49 ? 00:00:03 /home/yx/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect
yx 14450 25931 0 14:57 pts/0 00:00:00 grep --color=auto scheduler
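Optionally, the scheduler also exposes a health endpoint on its default HTTP port (10251 in this version); assuming that default was not changed:

curl http://127.0.0.1:10251/healthz   # should return "ok"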
4. Deploy the kube-controller-manager component
1 create the controller-manager configuration file
cat kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/home/yx/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/home/yx/kubernetes/ssl/ca-key.pem \
--root-ca-file=/home/yx/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/home/yx/kubernetes/ssl/ca-key.pem"
2 configure the startup script
cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/home/yx/kubernetes/cfg/kube-controller-manager
ExecStart=/home/yx/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
3 start the service
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager

# Verification
ps -ef | grep kube-controller-manager
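Similarly, the controller-manager's default HTTP port (10252 in this version) offers a health endpoint; assuming the default port, you can also confirm the unit is running:

curl http://127.0.0.1:10252/healthz         # should return "ok"
systemctl status kube-controller-manager    # should show active (running)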
5. View the current cluster component status through the kubectl tool:
/home/yx/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}

# The above output indicates that all components are healthy
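To avoid typing the full path to kubectl every time, you can optionally add the bin directory to PATH (a convenience sketch, not part of the original steps):

echo 'export PATH=$PATH:/home/yx/kubernetes/bin' >> ~/.bash_profile
source ~/.bash_profile
kubectl get cs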