Pod Controller: DaemonSet
This chapter explains the second controller, the DaemonSet.
You'll learn what a DaemonSet is and how to work with its configuration. At the end I also introduce the proper terms taint and toleration; if you already have that foundation, you can skip ahead to that chapter directly.
- What is a DaemonSet?
- Command Supplement
- Hands-on Configuration
- Supplementary Knowledge
- Remarks
1. What is a DaemonSet?
A DaemonSet is a controller that ensures every eligible node (by default, every worker node) runs exactly one copy of a Pod. You should pay attention to the following two points (see the sketch after this list):
- 1. When a new node joins the cluster, a Pod is automatically added to it
- 2. When a node goes offline, its Pod is also reclaimed
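As a minimal sketch of that behavior (assuming a DaemonSet whose Pods carry the label app=filebeat, like the one built later in this chapter), you can watch the Pod count follow the node count:

```bash
# List the DaemonSet Pods together with the node each one runs on;
# every eligible node should show exactly one Pod.
kubectl get pods -l app=filebeat -o wide

# List the nodes; after a node joins or leaves, re-run the command
# above and the Pod count should track the node count.
kubectl get nodes
```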
2. Command Supplement
```
# You can view DaemonSets using kubectl get ds
[root@centos-1 mainfasts]# kubectl get ds -A
NAMESPACE     NAME                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                 AGE
kube-system   kube-flannel-ds-amd64     3         3         3       3            3           <none>                        4d1h
kube-system   kube-flannel-ds-arm       0         0         0       0            0           <none>                        4d1h
kube-system   kube-flannel-ds-arm64     0         0         0       0            0           <none>                        4d1h
kube-system   kube-flannel-ds-ppc64le   0         0         0       0            0           <none>                        4d1h
kube-system   kube-flannel-ds-s390x     0         0         0       0            0           <none>                        4d1h
kube-system   kube-proxy                3         3         3       3            3           beta.kubernetes.io/os=linux   4d1h
```
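If you need more detail than the list view, the following sketch (using the kube-proxy DaemonSet shown above, and assuming the default kubeadm label k8s-app=kube-proxy on its Pods) shows how to drill into a single DaemonSet:

```bash
# Show the full spec, scheduling counts, and recent events of one DaemonSet
kubectl -n kube-system describe ds kube-proxy

# List the Pods managed by that DaemonSet and the node each one runs on
kubectl -n kube-system get pods -l k8s-app=kube-proxy -o wide
```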
3. Hands-on Configuration
1) Edit filebeat-daemonset.yaml. Here we create a Filebeat DaemonSet, which deploys one Filebeat Pod (container) on each worker node, just as you would in day-to-day operations. You should note that:
Here we use a node selector, logcollecting: "on", and nodes do not carry this label by default!
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat-ds
  labels:
    app: filebeat
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      containers:
      - name: filebeat
        image: prima/filebeat:6.4.2
        env:
        - name: REDIS_HOST
          value: db.ikubernetes.is:6379
        - name: LOG_LEVEL
          value: info
      nodeSelector:           # Node selector
        logcollecting: "on"   # Custom label
```
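Before applying the manifest, you can check whether any node already carries the selector label; a short sketch:

```bash
# List only the nodes labeled logcollecting=on; an empty result means
# the DaemonSet will not schedule any Pods yet.
kubectl get nodes -l logcollecting=on
```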
2) Load the YAML with kubectl apply -f and observe. You can see that no Pod was created, because the DaemonSet defines a custom label selector and no node matches it!
```
[root@centos-1 mainfasts]# kubectl apply -f filebeat-daemonset.yaml
daemonset.apps/filebeat-ds created
[root@centos-1 mainfasts]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
ngx-new-cb79d555-gqwf8   1/1     Running   0          29h
ngx-new-cb79d555-hcdr9   1/1     Running   0          30h
[root@centos-1 mainfasts]# kubectl get ds
NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR      AGE
filebeat-ds   0         0         0       0            0           logcollecting=on   8s
```
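To confirm why nothing was scheduled, a hedged sketch is to inspect the DaemonSet itself:

```bash
# The desired and current Pod counts stay at 0 as long as no node
# matches the logcollecting=on node selector.
kubectl describe ds filebeat-ds
```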
3) Next, we label one of the worker nodes (centos-2) and find that a Pod is immediately scheduled to that node
```
[root@centos-1 mainfasts]# kubectl label node centos-2.shared logcollecting="on" --overwrite
node/centos-2.shared labeled
[root@centos-1 mainfasts]# kubectl get pod
NAME                     READY   STATUS             RESTARTS   AGE
filebeat-ds-dlxwn        0/1     CrashLoopBackOff   1          5s
ngx-new-cb79d555-gqwf8   1/1     Running            0          29h
ngx-new-cb79d555-hcdr9   1/1     Running            0          30h
[root@centos-1 mainfasts]# kubectl get node --show-labels
NAME              STATUS   ROLES    AGE   VERSION   LABELS
centos-1.shared   Ready    master   4d    v1.16.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=centos-1.shared,kubernetes.io/os=linux,node-role.kubernetes.io/master=
centos-2.shared   Ready    <none>   4d    v1.16.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=centos-2.shared,kubernetes.io/os=linux,logcollceting=true,logcollecting=on
centos-3.shared   Ready    <none>   4d    v1.16.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=centos-3.shared,kubernetes.io/os=linux
```
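Note that the new Pod shows CrashLoopBackOff in the output above; the scheduling itself worked, the container is just failing to stay up. A short sketch for confirming the placement and investigating the crash (the Pod name filebeat-ds-dlxwn is taken from the output above):

```bash
# Confirm that the Filebeat Pod landed on the freshly labeled node
kubectl get pod -l app=filebeat -o wide

# Inspect the container logs to find out why it keeps restarting
kubectl logs filebeat-ds-dlxwn
```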
4) To remove the label, you can use the following command
```
kubectl label node centos-2.shared logcollecting-
```
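Once the label is removed, the DaemonSet controller evicts the Filebeat Pod from that node; a minimal sketch to verify:

```bash
# After the label is gone, the Filebeat Pod on centos-2 is terminated
# and the DaemonSet's desired count drops back to 0.
kubectl get ds filebeat-ds
kubectl get pod -l app=filebeat -o wide
```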
4. Supplementary Knowledge
Nodes can carry taints, which affect the scheduling policy. Taints and tolerations will be explained in detail in the Taints and Tolerations chapter.
```
[root@centos-1 mainfasts]# kubectl describe node centos-1.shared
Name:               centos-1.shared
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=centos-1.shared
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"6a:82:c9:37:15:dd"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.0.104
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 25 Nov 2019 17:00:45 +0800
Taints:             node-role.kubernetes.io/master:NoSchedule   # Taint: an advanced Pod scheduling feature; without a matching toleration, Pods are not scheduled onto the master node
```
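For context only (the details belong to the Taints and Tolerations chapter): if you did want the Filebeat DaemonSet to also run on the tainted master node, you could add a toleration under the Pod template. A hedged sketch, not part of the manifest used above:

```yaml
# Goes under spec.template.spec of the DaemonSet; it tolerates the
# master taint shown above so the Pod may also be scheduled there.
tolerations:
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: NoSchedule
```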
5. Remarks
The original version of this article lives in my GitHub repository. I will keep updating all the topics one after another, including Docker, k8s, Ceph, Istio, and Prometheus, to share cloud technology knowledge and hands-on experience. If it is useful to you, please follow, star, and share my GitHub repo; that is also the motivation for me to keep updating and sharing. Thank you~