master02 node deployment
Copy the certificate files, configuration files, and service management files of each master component from the master01 node to the master02 node
scp -r /opt/etcd/ root@192.168.80.20:/opt/
scp -r /opt/kubernetes/ root@192.168.80.20:/opt
scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.80.20:/usr/lib/systemd/system/
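A quick sanity check on master02 that the copies landed, assuming the usual /opt/etcd/ssl and /opt/kubernetes/cfg layout from the master01 setup:

#run on master02 after the scp commands complete
ls /opt/etcd/ssl /opt/kubernetes/cfg /usr/lib/systemd/system/kube-*.service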
1. Modify the IP addresses in the kube-apiserver configuration file
vim /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.80.10:2379,https://192.168.80.11:2379,https://192.168.80.12:2379 \
--bind-address=192.168.80.20 \        #modify
--secure-port=6443 \
--advertise-address=192.168.80.20 \   #modify
......
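To confirm that both address fields were updated, a simple check against the file edited above:

grep -E 'bind-address|advertise-address' /opt/kubernetes/cfg/kube-apiserver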
2. Start each service on the master02 node and enable it to start automatically at boot
systemctl start kube-apiserver.service
systemctl enable kube-apiserver.service
systemctl start kube-controller-manager.service
systemctl enable kube-controller-manager.service
systemctl start kube-scheduler.service
systemctl enable kube-scheduler.service
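An optional verification that all three components came up and the apiserver is listening on its secure port:

systemctl is-active kube-apiserver kube-controller-manager kube-scheduler
netstat -natp | grep 6443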
3. View node status
ln -s /opt/kubernetes/bin/* /usr/local/bin/
kubectl get nodes
kubectl get nodes -o wide    #-o wide: output additional information; for a Pod, the name of the Node it runs on is also shown

At this point, the node status seen from master02 is only the information queried from etcd; the node nodes have not actually established a communication connection with master02, so a VIP is needed to associate the node nodes with the master nodes.
Load balancing deployment
Configure the load balancer cluster with active/standby high availability (nginx provides the load balancing, keepalived provides the active/standby failover)
Operate on nodes lb01 and lb02
1. Configure the official online nginx yum repository locally and install nginx
cat > /etc/yum.repos.d/nginx.repo << 'EOF'
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
EOF

yum install nginx -y
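The stream block configured in the next step requires nginx to be built with the stream module, which the official nginx.org package should include; a quick check:

nginx -V 2>&1 | grep -o with-stream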
2. Modify the nginx configuration file to set up Layer 4 reverse-proxy load balancing, specifying the node IPs and port 6443 of the two master servers in the k8s cluster
vim /etc/nginx/nginx.conf
events {
    worker_connections  1024;
}

#add the following stream block
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 192.168.80.10:6443;
        server 192.168.80.20:6443;
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}

http {
......
3. Check the configuration file syntax
nginx -t
4. Start the nginx service and check that it is listening on port 6443
systemctl start nginx
systemctl enable nginx
netstat -natp | grep nginx
5. Deploy the keepalived service
yum install keepalived -y
6. Modify the keepalived configuration file
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   # Recipient email addresses
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   # Sender email address
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER       #lb01 node is NGINX_MASTER, lb02 node is NGINX_BACKUP
}

#Add a script to be executed periodically
vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"   #Specify the path of the script that checks whether nginx is alive
}

vrrp_instance VI_1 {
    state MASTER             #MASTER on node lb01, BACKUP on node lb02
    interface ens33          #Specify the network interface name ens33
    virtual_router_id 51     #Specify the vrid; the two nodes must use the same value
    priority 100             #100 on node lb01, 90 on node lb02
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.80.100/24    #Specify the VIP
    }
    track_script {
        check_nginx          #Specify the script configured in vrrp_script
    }
}
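On lb02 the same file is reused with the three fields noted in the comments flipped; a hypothetical one-liner sketch to apply those changes:

#run on lb02 only, after copying the lb01 configuration over
sed -i -e 's/NGINX_MASTER/NGINX_BACKUP/' -e 's/state MASTER/state BACKUP/' -e 's/priority 100/priority 90/' /etc/keepalived/keepalived.conf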
7. Create nginx status check script
vim /etc/nginx/check_nginx.sh
#!/bin/bash
#egrep -cv "grep|$$" counts the lines that contain neither "grep" nor the current shell's PID ($$), filtering those processes out
count=$(ps -ef | grep nginx | egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi

chmod +x /etc/nginx/check_nginx.sh
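An equivalent, simpler liveness count that sidesteps the grep self-match problem entirely (assuming pgrep is available, as it is on CentOS 7):

count=$(pgrep -c nginx)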
8. Start the keepalived service (be sure to start the nginx service before starting the keepalived service)
systemctl start keepalived
systemctl enable keepalived
ip a    #Check whether the VIP is generated
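With the VIP up, the apiserver should answer through the proxy; a quick sanity check (-k skips certificate verification, and even an Unauthorized/Forbidden response proves the TCP path through nginx works):

curl -k https://192.168.80.100:6443/version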
9. On the node nodes, modify the bootstrap.kubeconfig, kubelet.kubeconfig, and kube-proxy.kubeconfig configuration files so that the server address is the VIP
cd /opt/kubernetes/cfg/
vim bootstrap.kubeconfig
server: https://192.168.80.100:6443

vim kubelet.kubeconfig
server: https://192.168.80.100:6443

vim kube-proxy.kubeconfig
server: https://192.168.80.100:6443
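The same edit can be scripted across all three files; a sketch assuming the server lines currently point at master01 (192.168.80.10):

cd /opt/kubernetes/cfg/
sed -i 's#https://192.168.80.10:6443#https://192.168.80.100:6443#' bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig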
10. Restart the kubelet and kube-proxy services
systemctl restart kubelet.service
systemctl restart kube-proxy.service
11. View nginx's k8s access log on lb01
tail /var/log/nginx/k8s-access.log
Operate on the master01 node
Test creating a pod
kubectl run nginx --image=nginx
1. View the status information of the Pod
kubectl get pods
NAME                    READY   STATUS              RESTARTS   AGE
nginx-dbddb74b8-nf9sk   0/1     ContainerCreating   0          33s    #Creating

kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-dbddb74b8-nf9sk   1/1     Running   0          80s    #Creation completed, running

kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP            NODE            NOMINATED NODE
nginx-dbddb74b8-26r9l   1/1     Running   0          10m   172.17.36.2   192.168.80.15   <none>
READY 1/1 indicates that this Pod contains one container and it is ready
2. On a node node in the same network segment, the Pod can be accessed directly with a browser or the curl command
curl 172.17.36.2
3. At this point, trying to view the nginx log on the master01 node fails with a permission error
kubectl logs nginx-dbddb74b8-nf9sk
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( nginx-dbddb74b8-nf9sk)
4. On the master01 node, grant the cluster-admin role to the user system:anonymous
kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
clusterrolebinding.rbac.authorization.k8s.io/cluster-system-anonymous created
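Note: binding cluster-admin to system:anonymous is acceptable in a lab but very permissive; the binding can be removed later with:

kubectl delete clusterrolebinding cluster-system-anonymous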
5. Check the nginx log again
kubectl logs nginx-dbddb74b8-nf9sk
Deploy Dashboard UI
Introduction to Dashboard
Dashboard is a web-based Kubernetes user interface. You can use it to deploy containerized applications to a Kubernetes cluster, troubleshoot containerized applications, and manage the cluster itself along with its accompanying resources. You can use Dashboard to get an overview of the applications running on a cluster, and to create or modify individual Kubernetes resources (such as Deployments, Jobs, and DaemonSets). For example, you can use the deployment wizard to scale a Deployment, start a rolling update, restart a Pod, or deploy a new application. Dashboard also provides information about the status of Kubernetes resources in the cluster and about any errors that may have occurred.
Operate on the master01 node
1. Create a dashboard working directory inside the k8s working directory
mkdir /opt/k8s/dashboard
cd /opt/k8s/dashboard
//Upload the dashboard archive and unzip it. It contains seven files: six yaml files (five core files for building the interface, plus a self-written k8s-admin.yaml used to generate the token for logging in from the browser later) and a dashboard-cert.sh script used to quickly generate the certificate files that solve the encrypted-communication problem in the Chrome browser
2. Official download address of the core files:
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dashboard
dashboard-configmap.yaml
dashboard-rbac.yaml
dashboard-service.yaml
dashboard-controller.yaml
dashboard-secret.yaml
k8s-admin.yaml
dashboard-cert.sh
(1) dashboard-rbac.yaml: sets up access control, configuring the access permissions of each role and the role bindings (binding roles to service accounts); its content includes the rules configured for each role
(2) dashboard-secret.yaml: provides the token used to access the API server (it can be understood as a security authentication mechanism)
(3) dashboard-configmap.yaml: configuration template file, responsible for the Dashboard settings; a ConfigMap provides a way to inject configuration data into containers, keeping the application configuration inside the container decoupled from the image content
(4) dashboard-controller.yaml: responsible for creating the controller and service account that manage the pod replicas
(5) dashboard-service.yaml: responsible for exposing the service running in the container for external access
3. Create the resources with the kubectl create command
cd /opt/k8s/dashboard
1, Specify the permissions of the kubernetes-dashboard-minimal role: for example, it has permissions such as get, update, and delete
kubectl create -f dashboard-rbac.yaml
4. If the yaml file contains several kinds, that many resources will be created; the output format is kind.apiGroup/name
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
5. Check whether the role and rolebinding resource objects named kubernetes-dashboard-minimal were generated
kubectl get role,rolebinding -n kube-system
//-n kube-system means to view resources in the specified namespace; the default namespace is default
2, Create the certificate and key
kubectl create -f dashboard-secret.yaml
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-key-holder created
6. Check whether the Secret resource objects kubernetes-dashboard-certs and kubernetes-dashboard-key-holder were generated
kubectl get secret -n kube-system
3, Create the configuration file that holds the cluster Dashboard settings
kubectl create -f dashboard-configmap.yaml
configmap/kubernetes-dashboard-settings created
7. Check whether the ConfigMap resource object kubernetes-dashboard-settings was generated
kubectl get configmap -n kube-system
4, Create the controller and service account required by the container
kubectl create -f dashboard-controller.yaml
serviceaccount/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
Check whether the serviceaccount and deployment resource objects named kubernetes-dashboard were generated
kubectl get serviceaccount,deployment -n kube-system
5, Provide the service for external access
kubectl create -f dashboard-service.yaml
service/kubernetes-dashboard created
1. View the status information of the pod and service created in the kube-system namespace
kubectl get pods,svc -n kube-system -o wide
//svc is the abbreviation of service; abbreviations can be listed with kubectl api-resources

NAME                                        READY   STATUS    RESTARTS   AGE   IP            NODE            NOMINATED NODE
pod/kubernetes-dashboard-7dffbccd68-c6d24   1/1     Running   1          11m   172.17.26.2   192.168.80.11   <none>

NAME                           TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE   SELECTOR
service/kubernetes-dashboard   NodePort   10.0.0.75    <none>        443:30001/TCP   11m   k8s-app=kubernetes-dashboard
2. The dashboard was scheduled onto the node01 server and the access port is 30001; open a browser and visit https://nodeIP:30001 to test
3. Firefox can access it directly: https://192.168.80.11:30001
4. Chrome cannot access it directly because the certificate used for encrypted communication is not trusted. You can check the reason for the access failure via Menu -> More tools -> Developer tools -> Security.
5. To solve the encrypted-communication problem in Chrome, use the dashboard-cert.sh script to quickly generate the certificate files
cd /opt/k8s/dashboard/
vim dashboard-controller.yaml
......
        args:
          # PLATFORM-SPECIFIC ARGS HERE
          - --auto-generate-certificates
          #Add the following two lines under line 47 of the file to specify the private key and certificate files used for encryption (TLS)
          - --tls-key-file=dashboard-key.pem
          - --tls-cert-file=dashboard.pem
6. Execute script
cd /opt/k8s/dashboard/
chmod +x dashboard-cert.sh
./dashboard-cert.sh /opt/k8s/k8s-cert/
7. Two certificates will be generated in the dashboard working directory
ls *.pem
dashboard.pem    dashboard-key.pem
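Optionally inspect the generated certificate's subject and validity period:

openssl x509 -in dashboard.pem -noout -subject -dates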
8. Redeploy (Note: if apply does not take effect, first use delete to remove the resources, then apply to recreate them)
kubectl apply -f dashboard-controller.yaml
9. Since the pod may be rescheduled onto a different node, check the assigned node's server address and port number again
kubectl get pods,svc -n kube-system -o wide
10. Perform the access test again, choose to log in with a token, and use the k8s-admin.yaml file to create the token
cd /opt/k8s/dashboard/
kubectl create -f k8s-admin.yaml
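The k8s-admin.yaml used above is self-written; a minimal sketch of what such a file typically contains is a service account bound to cluster-admin. The dashboard-admin name matches the token secret shown below; everything else here is an assumption, not the file's confirmed content:

cat > k8s-admin.yaml << 'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
EOF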
Get the brief information of the token; its name is dashboard-admin-token-XXXXX
kubectl get secrets -n kube-system
NAME                               TYPE                                  DATA   AGE
dashboard-admin-token-kpmm8        kubernetes.io/service-account-token   3
default-token-7dhwm                kubernetes.io/service-account-token   3
kubernetes-dashboard-certs         Opaque                                11
kubernetes-dashboard-key-holder    Opaque                                2
kubernetes-dashboard-token-jn94c   kubernetes.io/service-account-token   3
1. View the token's details and copy the content after token:
kubectl describe secrets dashboard-admin-token-kpmm8 -n kube-system
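A one-liner to print just the token value (assuming the secret name from the listing above):

kubectl describe secrets dashboard-admin-token-kpmm8 -n kube-system | awk '/^token:/{print $2}'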
5. Copy the token into the browser login page and click login
6. First check whether there are resources running in the cluster with the kubectl get pods command. Then, in the Dashboard UI, select default in the namespace dropdown and click "container group" in the sidebar; click a container name to enter its detail page. Click the "run command" or "log" control in the upper right to open another page; you can enter a curl command in "run command" to access the container, then view the updated log result through the dashboard page.