1. Install Docker (the easy way)
1.1 Install Docker

sudo apt install docker.io
docker version
1.2 Allow a non-root user to run Docker

sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker
docker version   # or: docker info
1.3 Change Docker's default cgroup driver
Official tutorial: https://kubernetes.io/docs/setup/production-environment/container-runtimes/
Check the current cgroup driver with docker info | grep Cgroup, then change it to systemd:

vim /etc/docker/daemon.json   # add the following contents:
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
1.4 Restart Docker

systemctl enable docker
systemctl daemon-reload
systemctl restart docker
# Verify the cgroup driver again:
docker info | grep Cgroup
2. Install the Kubernetes components: kubectl / kubeadm / kubelet

The official installation commands are listed below for reference. Note, however, that they are not usable from the domestic network environment, so do not actually run them there:
sudo apt-get update && sudo apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl
2.1 Install basic tools
sudo apt update && sudo apt install -y apt-transport-https
2.2 Configure the Alibaba Cloud apt source

curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
2.3 Update apt and install

# Install the latest version (no version pinned):
apt install -y kubelet kubeadm kubectl

# Or pin the cluster to a specific version:
sudo apt update
apt install -y kubelet=1.18.5-00
apt install -y kubectl=1.18.5-00
apt install -y kubeadm=1.18.5-00

# Or, equivalently, in one line:
sudo apt update && apt install -y kubelet=1.18.5-00 kubectl=1.18.5-00 kubeadm=1.18.5-00
2.4 Verify the installation

kubectl version   # or: kubectl version --client
kubeadm version
kubelet --version
3. Build the Kubernetes cluster

3.1 Initialize the cluster
kubeadm init --image-repository=registry.aliyuncs.com/google_containers \
  --apiserver-advertise-address=masterNodeIP \
  --pod-network-cidr=10.244.0.0/16 \
  --kubernetes-version=v1.18.5 \
  --v=5

(Replace masterNodeIP with the master node's IP address.)
On success, the output ends as follows:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:
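The join command printed at the very end of kubeadm init is easy to lose. A small sketch of fishing it back out of a saved log; the log text below is a simulated excerpt with made-up values, and in practice you would capture the real output with something like kubeadm init ... | tee kubeadm-init.log:

```shell
# Simulated tail of a saved `kubeadm init` log; the IP, token and hash
# below are made-up placeholders, not real values.
log='Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:deadbeef'

# Grab the join line plus its backslash-continued second line.
join_cmd=$(printf '%s\n' "$log" | grep -A1 '^kubeadm join')
printf '%s\n' "$join_cmd"
```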
3.2 Post-initialization configuration

Non-root user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
root user:
export KUBECONFIG=/etc/kubernetes/admin.conf
In addition, for convenience, enable kubectl command auto-completion:
echo "source <(kubectl completion bash)" >> ~/.bashrc
3.3 Check the cluster status

systemctl status kubelet    # check kubelet
kubectl cluster-info        # view cluster information
kubectl get nodes           # list nodes
kubectl get nodes -o wide   # node details
kubectl get cs              # kube-scheduler and kube-controller-manager status; Healthy is normal
At this point the node status is NotReady because, as the init output said, "You should now deploy a pod network to the cluster".
3.4 Install a Pod network

A Pod network must be installed for the cluster to work; without one, Pods cannot communicate with each other.
Kubernetes supports a variety of network plugins. I use Flannel here because it is simple and convenient.
Official documents of Flannel: https://github.com/flannel-io/flannel/blob/master/Documentation/kubernetes.md
Per the official tutorial, run:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Note: you may hit the error "Unable to connect to the server".
Cause: the file at https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml is not directly reachable.
Solution: use the crude but effective approach: download the file on a machine with unrestricted network access (for example through a VPN) and copy it over.
Then run kubectl apply -f kube-flannel.yml. If you cannot download the file at all, the contents of kube-flannel.yml are reproduced below:
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.14.0-rc1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.14.0-rc1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
You can verify with the following commands:

kubectl get nodes    # nodes should now be Ready
kubectl get daemonset -n kube-system -l app=flannel
kubectl get pod -n kube-system -o wide -l app=flannel
kubectl get cm -n kube-system -l app=flannel
kubectl get cm -n kube-system -o yaml kube-flannel-cfg
ip -d link show flannel.1
route -n
arp -n
3.5 Join worker nodes to the cluster
kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>
The <token> value can be obtained with:
kubeadm token list
The <hash> for --discovery-token-ca-cert-hash can be obtained with:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
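To see what that pipeline actually computes (the SHA-256 digest of the CA's DER-encoded public key), here is the same derivation run against a throwaway self-signed certificate generated on the spot, so it works on any machine with openssl installed; on a real control plane the input would be /etc/kubernetes/pki/ca.crt:

```shell
# Generate a disposable CA cert to stand in for /etc/kubernetes/pki/ca.crt.
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=kubernetes" \
  -keyout "$tmpdir/ca.key" -out "$tmpdir/ca.crt" 2>/dev/null

# Same pipeline as above: SHA-256 over the DER-encoded public key.
hash=$(openssl x509 -pubkey -in "$tmpdir/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
rm -rf "$tmpdir"
```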
Then, on the master node, view and verify:
kubectl get nodes
Tokens expire after 24 hours by default. If yours has expired, generate a fresh join command directly with:
kubeadm token create --print-join-command
3.6 Verify by creating a Pod

kubectl get nodes    # all nodes should be Ready

# Verification:
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc
Then open the following address in a browser:
http://nodeIp:NodePort/
Check the Pod status:
kubectl get pod --all-namespaces
3.7 Deploy the Web UI (Dashboard)

Deploying the Dashboard is no different from deploying any other application: it runs on Kubernetes as an ordinary web application.
Official Web UI dashboard tutorial: https://kubernetes.io/zh/docs/tasks/access-application-cluster/web-ui-dashboard/
Note: the official steps may not succeed (again for network reasons), so do not follow the official tutorial blindly.
Dashboard's official GitHub releases page: https://github.com/kubernetes/dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
Steps for the domestic network environment:

1. Download and modify recommended.yaml:

Two changes are needed:
In the kubernetes-dashboard Service, set type: NodePort and add nodePort: 30001 under ports, so the UI can be reached from outside the cluster.
Comment out the imagePullPolicy line; the default policy is IfNotPresent, so the locally pulled image will be used.
The details are as follows:
# In the Deployment:
          #imagePullPolicy: Always

# In the Service:
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
2. Prepare the required images

Creating the resources directly may leave the Pods stuck in ContainerCreating because of network problems, so download the images first.
Searching for "image" in the file shows two images to download:
one is kubernetesui/dashboard:v2.2.0,
the other is kubernetesui/metrics-scraper:v1.0.6.

# Pull them with docker:
docker pull kubernetesui/dashboard:v2.2.0
docker pull kubernetesui/metrics-scraper:v1.0.6
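Rather than eyeballing the file in vim, the image list can be extracted mechanically. A sketch run against a short inline excerpt standing in for recommended.yaml; in practice you would point the grep at the downloaded file itself:

```shell
# Inline excerpt standing in for the real recommended.yaml.
excerpt='      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.2.0
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.6'

# List unique image references (on the real file: grep -o ... recommended.yaml).
images=$(printf '%s\n' "$excerpt" | grep -o 'image: .*' | sed 's/^image: //' | sort -u)
printf '%s\n' "$images"

# On a machine with Docker and network access, the pull loop would then be:
#   for img in $images; do docker pull "$img"; done
```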
3. Create the Pods from recommended.yaml

If you have no network access at all, skip steps 1 and 2 and use my copy directly. Its contents are as follows:
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.2.0
          #imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
            # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.6
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
Execute command:
kubectl apply -f recommended.yaml
View results:
kubectl get pods --namespace=kubernetes-dashboard
# Troubleshooting:
kubectl describe pod <dashboard-pod-name> --namespace=kubernetes-dashboard
Dashboard creates its own Deployment and Service in the kubernetes-dashboard namespace:

# Deployment
kubectl get deployments kubernetes-dashboard --namespace=kubernetes-dashboard
# Service
kubectl get service kubernetes-dashboard --namespace=kubernetes-dashboard
If redeployment is required:
kubectl delete -f recommended.yaml
kubectl apply -f recommended.yaml
4. Create the token required to log in to the Dashboard UI
Official tutorial: https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md
I use the token method:
cat <<EOF > account.yml
# Create Service Account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
# Create ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF
Then execute the command:
kubectl apply -f account.yml
5. Access the Dashboard UI from a browser

URL format: https://nodeIp:nodePort/
View the nodePort:
kubectl get service kubernetes-dashboard --namespace=kubernetes-dashboard
Find which node the Dashboard Pod is running on (for the node IP):
kubectl get pod --all-namespaces -o wide
Note:
Kubernetes system components are placed in the kube-system namespace, while the Dashboard lives in kubernetes-dashboard:
kubectl get pod -o wide --namespace=kubernetes-dashboard
Find the worker node the Pod is running on, then use that node's IP.
Find the login token with the following command:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
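The inner part of that command is plain grep/awk text processing over `kubectl get secret` output. To make the pipeline concrete, here it is run against simulated output; the secret name suffix is normally random, and the one below is made up for illustration:

```shell
# Simulated `kubectl -n kube-system get secret` output.
secrets='NAME                     TYPE                                  DATA   AGE
admin-user-token-x7k2p   kubernetes.io/service-account-token   3      5m
default-token-abcde      kubernetes.io/service-account-token   3      1h'

# Pick the admin-user secret name (first column of the matching row).
name=$(printf '%s\n' "$secrets" | grep admin-user | awk '{print $1}')
echo "$name"

# On a live cluster, kubectl would then be pointed at it:
#   kubectl -n kube-system describe secret "$name"
```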