Five controllers of Kubernetes

Posted by skhale on Mon, 03 Jan 2022 09:59:03 +0100

Controller types in Kubernetes

Kubernetes has many built-in controllers. Each one works like a state machine that drives Pods toward a specific desired state and behavior.

  1. Deployment: suitable for deploying stateless services
  2. StatefulSet: suitable for deploying stateful services
  3. DaemonSet: once deployed, a copy runs on every node. Some typical application scenarios:
    Run a cluster storage daemon on each Node, such as glusterd or ceph
    Run a log collection daemon on each Node, such as Fluentd or Logstash
    Run a monitoring daemon on each Node, such as Prometheus Node Exporter
  4. Job: a one-off task
  5. CronJob: performs tasks periodically

Deployment controller

Deployment overview

The Deployment object, as its name suggests, is used to deploy applications, and it is the most commonly used object in Kubernetes. It provides a declarative way to create ReplicaSets and Pods, so there is no need to create ReplicaSet and Pod objects by hand as in the previous two articles (a Deployment is used instead of creating a ReplicaSet directly because the Deployment object offers many features that a ReplicaSet does not, such as rolling upgrades and rollbacks).

Through the Deployment object, you can easily do the following things:

  • Create ReplicaSets and Pods
  • Perform rolling upgrades (upgrade without stopping the old service) and rollbacks (return the application to a previous version)
  • Scale up and down smoothly (see the sketch after this list)
  • Pause and resume a Deployment
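
Scaling, for instance, is a one-line kubectl operation. A minimal sketch, assuming the nginx-deployment object created in the next section:

kubectl scale deployment/nginx-deployment --replicas=5   # scale out to 5 Pods
kubectl scale deployment/nginx-deployment --replicas=3   # scale back in to 3 Pods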

Creation of Deployment

Taking the following deploy.yml file as an example, create an nginx Deployment with these commands:

[root@master ~]# vi deploy.yml
[root@master ~]# cat deploy.yml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
   matchLabels:
     app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80




[root@master ~]# kubectl create -f deploy.yml  --record
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/nginx-deployment created

The --record flag records which command produced the current revision of the Deployment.
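
The deprecation warning above means the same change-cause information can instead be recorded by setting the kubernetes.io/change-cause annotation yourself. A minimal sketch (the annotation text is just an illustrative choice):

kubectl annotate deployment/nginx-deployment kubernetes.io/change-cause="created from deploy.yml"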

Execute the get command immediately after creation to view the Deployment:

[root@master ~]# kubectl get deployments
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           2m14s
[root@master ~]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-74d589986c-kxcvx   1/1     Running   0          2m11s
nginx-deployment-74d589986c-s277p   1/1     Running   0          2m11s
nginx-deployment-74d589986c-zlf8v   1/1     Running   0          2m11s

NAME is the name of the Deployment, READY shows how many replicas are ready out of the desired number, UP-TO-DATE is the number of replicas that have been updated to the latest Pod template, AVAILABLE is the number of replicas currently available to users, and AGE is how long the Deployment has been running.

Wait a few seconds and run the get command again to see the changes:

[root@master ~]# kubectl get deployments
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           3m4s

View the ReplicaSet objects in the system with kubectl get rs; this shows that the Deployment automatically created a ReplicaSet object.

[root@master ~]# kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-74d589986c   3         3         3       4m29s

Use the kubectl get pods --show-labels command to view the Pod objects in the current system. You can see the three Pods created by nginx-deployment.

[root@master ~]# kubectl get pods --show-labels
NAME                                READY   STATUS    RESTARTS   AGE     LABELS
nginx-deployment-74d589986c-kxcvx   1/1     Running   0          3m55s   app=nginx,pod-template-hash=74d589986c
nginx-deployment-74d589986c-s277p   1/1     Running   0          3m55s   app=nginx,pod-template-hash=74d589986c
nginx-deployment-74d589986c-zlf8v   1/1     Running   0          3m55s   app=nginx,pod-template-hash=74d589986c

Update of Deployment

Suppose we want the nginx Pods to use the nginx:1.9.1 image instead of the original nginx image. Run the following command:

[root@master ~]# kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
deployment.apps/nginx-deployment image updated

Or we can use the edit command to edit the Deployment and change the image from nginx to nginx:1.9.1.

kubectl edit deployment/nginx-deployment

View update progress:

[root@master ~]# kubectl rollout status deployment/nginx-deployment
Waiting for deployment "nginx-deployment" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "nginx-deployment" rollout to finish: 1 old replicas are pending termination...
deployment "nginx-deployment" successfully rolled out

When the Deployment is updated, a new ReplicaSet is created. The Pods in the new ReplicaSet are gradually scaled up to the desired number of replicas while the old ReplicaSet is gradually scaled down to 0. The old service therefore never stops completely during the update; this is a rolling update.
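
How aggressively the two ReplicaSets are scaled can be tuned through the strategy section of the Deployment spec. A minimal sketch (the values shown are the Kubernetes defaults, as the describe output below also confirms):

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%         # how many extra Pods may be created above the desired replica count
      maxUnavailable: 25%   # how many Pods may be unavailable during the update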

Rollback of Deployment

Suppose that after updating the Deployment as above we find that the nginx:1.9.1 image is not stable and want to return to the previous image. We do not need to edit the Deployment file by hand; instead, we can use the rollback feature of Deployment.

Use the rollout history command to view the revisions of the Deployment:

[root@master ~]# kubectl rollout history deployment/nginx-deployment
deployment.apps/nginx-deployment 
REVISION  CHANGE-CAUSE
1         kubectl create --filename=deploy.yml --record=true
2         kubectl create --filename=deploy.yml --record=true

Because we used the --record flag when creating the Deployment, the command for each revision is recorded and we can easily see what changed in each one.

To view the details of a single revision:

[root@master ~]# kubectl rollout history deployment/nginx-deployment --revision=2
deployment.apps/nginx-deployment with revision #2
Pod Template:
  Labels:       app=nginx
        pod-template-hash=658d7f4b4b
  Annotations:  kubernetes.io/change-cause: kubectl create --filename=deploy.yml --record=true
  Containers:
   nginx:
    Image:      nginx:1.9.1
    Port:       80/TCP
    Host Port:  0/TCP
    Environment:        <none>
    Mounts:     <none>
  Volumes:      <none>

Now, you can use the rollout undo command to roll back to the previous revision:

[root@master ~]# kubectl rollout undo deployment/nginx-deployment
deployment.apps/nginx-deployment rolled back

[root@master ~]# kubectl describe deployment/nginx-deployment
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Fri, 24 Dec 2021 22:24:10 +0800
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 3
                        kubernetes.io/change-cause: kubectl create --filename=deploy.yml --record=true
Selector:               app=nginx
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>

You can also roll back to a specific historical revision using the --to-revision parameter:

[root@master ~]#  kubectl rollout undo deployment/nginx-deployment --to-revision=2
deployment.apps/nginx-deployment rolled back

[root@master ~]# kubectl describe deployment/nginx-deployment
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Fri, 24 Dec 2021 22:24:10 +0800
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 4
                        kubernetes.io/change-cause: kubectl create --filename=deploy.yml --record=true
Selector:               app=nginx
Replicas:               3 desired | 3 updated | 4 total | 3 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:1.9.1
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>

You can set spec.revisionHistoryLimit to specify the maximum number of revision history records the Deployment keeps. By default all revisions are retained; if this item is set to 0, the Deployment cannot be rolled back.
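
A minimal sketch of where this field sits in the manifest (the value 10 is just an illustrative choice):

spec:
  revisionHistoryLimit: 10   # keep at most 10 old ReplicaSets available for rollback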

Note that a revision is created only when a Deployment rollout is triggered, and a rollout is triggered only when the Pod template of the Deployment changes, for example when a label or container image in the template is updated.

More uses of the rollout command:

  • history (view historical revisions)
  • pause (pause a Deployment; see the sketch after this list)
  • resume (resume a paused Deployment)
  • status (view rollout status)
  • undo (roll back to a previous revision)
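
A minimal sketch of pausing a rollout, queuing a change, and resuming it (the nginx:1.20 tag is just an illustrative choice):

kubectl rollout pause deployment/nginx-deployment
kubectl set image deployment/nginx-deployment nginx=nginx:1.20   # recorded, but not rolled out while paused
kubectl rollout resume deployment/nginx-deployment
kubectl rollout status deployment/nginx-deployment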

ReplicaSet controller

ReplicaSet overview

ReplicaSet (rs) is the replica controller in Kubernetes. Its main job is to keep the number of Pod replicas it manages at the preset value, ensuring that a given number of Pods are always running normally in the cluster. It continuously monitors the state of these Pods, restarts Pods when they fail, and creates new replicas when the number of Pods drops. It is officially recommended not to use ReplicaSet directly but to use Deployment instead. Deployment is a higher-level concept than ReplicaSet: it manages ReplicaSets and provides many other useful features, the most important being declarative updates, whose advantage is that the change history is not lost. The Deployment controller therefore does not manage Pod objects directly; the Deployment manages the ReplicaSet, and the ReplicaSet in turn manages the Pods.

How ReplicaSet works

The core function of ReplicaSet is to create the specified number of Pod replicas and make sure the replica count always matches the user's expectation: extra Pods are removed and missing Pods are recreated. It can also be used to scale the replica count up and down.
The ReplicaSet controller is mainly composed of three parts:

  1. Desired number of Pod replicas: defines how many Pod replicas this controller should maintain
  2. Label selector: selects which Pods this ReplicaSet manages. If the number of Pods matched by the label selector is lower than the specified number, the next component is used
  3. Pod resource template: if the Pods existing in the cluster are not enough to meet the desired number of replicas, new Pods are created based on this template

ReplicaSet use case

#Write a ReplicaSet resource manifest
[root@k8s-master1 ~]# cat replicaset.yml 
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: nginx
    tier: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy:  IfNotPresent


[root@master ~]# kubectl apply -f replicaset.yml
replicaset.apps/frontend created
[root@master ~]# kubectl  get pods
NAME             READY   STATUS    RESTARTS   AGE
frontend-7rrp6   1/1     Running   0          9s
frontend-drmcf   1/1     Running   0          9s
frontend-qnlz6   1/1     Running   0          9s
[root@master ~]# kubectl get rs
NAME       DESIRED   CURRENT   READY   AGE
frontend   3         3         3       41s

DaemonSet controller

About DaemonSet

DaemonSet: a daemon controller. Its main function is to run the daemons we deploy on every node of the Kubernetes cluster, which is equivalent to placing a copy of the Pod on each cluster node. If a new node joins the cluster, the DaemonSet automatically runs the required Pod copy on that node; conversely, if a node leaves the cluster, the DaemonSet removes the Pod copy that was deployed on it.

Main features of DaemonSet

  • This Pod runs on every Node in the Kubernetes cluster;
  • Only one such Pod instance will run on each node;
  • If a new node joins the Kubernetes cluster, the Pod will be automatically created on the new node;
  • When the old node is deleted, the Pod on it will be recycled accordingly.

Scheduling characteristics of Daemon Pods

By default, the specific Node a Pod runs on is decided by the Scheduler (which is responsible for assigning Pods to nodes in the cluster: it watches the ApiServer for Pods that have not yet been assigned a node and assigns nodes to them according to its scheduling policy). However, Pods created by a DaemonSet object have some special characteristics:

  • The unschedulable attribute of a Node is ignored by the DaemonSet Controller.
  • The DaemonSet Controller can create and run Pods even if the Scheduler has not been started.
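
If the daemon should run only on a subset of nodes instead of every node, a node selector can be added to the Pod template. A minimal sketch (the label disktype=ssd is just an illustrative choice):

spec:
  template:
    spec:
      nodeSelector:
        disktype: ssd   # the daemon Pod is only created on nodes carrying this label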

Daemon Pods respect taints and tolerations. When these Pods are created, the following tolerations are added to them automatically (for the NoExecute effects, tolerationSeconds is not set):

Each toleration is listed below as key (effect, minimum version): description.

  • node.kubernetes.io/not-ready (NoExecute, 1.13+): DaemonSet pods will not be evicted when there are node problems such as a network partition.
  • node.kubernetes.io/unreachable (NoExecute, 1.13+): DaemonSet pods will not be evicted when there are node problems such as a network partition.
  • node.kubernetes.io/disk-pressure (NoSchedule, 1.8+): ...
  • node.kubernetes.io/memory-pressure (NoSchedule, 1.8+): ...
  • node.kubernetes.io/unschedulable (NoSchedule, 1.12+): DaemonSet pods tolerate the unschedulable attribute when scheduled by the default scheduler.
  • node.kubernetes.io/network-unavailable (NoSchedule, 1.12+): DaemonSet pods that use host networking tolerate the network-unavailable attribute when scheduled by the default scheduler.

DaemonSet common scenarios

  • The agent components of network plug-ins, such as Flannel and Calico, need to run on each node to handle the container network on that node;
  • The agent components of storage plug-ins, such as Ceph and GlusterFS, need to run on each node to mount the remote storage directory on that node;
  • The data collection components of a monitoring system, such as Prometheus Node Exporter or cAdvisor, need to run on each node to collect monitoring information from that node;
  • The data collection components of a log system, such as Fluentd or Logstash, need to run on each node to collect log information from that node.

Create a DaemonSet object

The following description file creates a DaemonSet object that runs the fluentd-elasticsearch image:

[root@master kubenetres]# vi daemonset.yml
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
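
Apply the manifest above (it was saved as daemonset.yml); the Pods created by the DaemonSet then show up in the kube-system namespace:

[root@master kubenetres]# kubectl apply -f daemonset.yml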



[root@master ~]# kubectl get pod -n kube-system
NAME                                         READY   STATUS    RESTARTS      AGE
coredns-6d8c4cb4d-6n2xc                      1/1     Running   2 (53m ago)   3d1h
coredns-6d8c4cb4d-hjznw                      1/1     Running   2 (53m ago)   3d1h
etcd-master.example.com                      1/1     Running   8 (53m ago)   3d1h
fluentd-elasticsearch-6sgnt                  1/1     Running   0             44s
fluentd-elasticsearch-chfhc                  1/1     Running   0             42s
kube-apiserver-master.example.com            1/1     Running   9 (53m ago)   3d1h
kube-controller-manager-master.example.com   1/1     Running   8 (53m ago)   3d1h
kube-flannel-ds-67kht                        1/1     Running   3 (53m ago)   3d1h
kube-flannel-ds-hr47p                        1/1     Running   2 (53m ago)   3d1h
kube-flannel-ds-k678m                        1/1     Running   2 (53m ago)   3d1h
kube-proxy-44zx6                             1/1     Running   2 (53m ago)   3d1h
kube-proxy-knkbm                             1/1     Running   2 (53m ago)   3d1h
kube-proxy-n875j                             1/1     Running   3 (53m ago)   3d1h
kube-scheduler-master.example.com            1/1     Running   8 (53m ago)   3d1h

Job controller

Job Controller

The Job Controller is responsible for creating Pods according to the Job spec and continuously monitoring their status until they end successfully. If a Pod fails, the controller decides whether to create a new Pod and retry the task based on the restart policy (only OnFailure and Never are supported, not Always).

Job is responsible for batch processing of short-lived, one-off tasks, that is, tasks that are executed only once. It ensures that one or more Pods of the batch task complete successfully.

Kubernetes supports the following types of jobs:

  • Non-parallel Job: usually one Pod is created and the Job ends when the Pod finishes successfully
  • Job with a fixed completion count: set spec.completions; Pods are created until spec.completions Pods have completed successfully
  • Parallel Job with a work queue: set spec.parallelism but not spec.completions; the Job is considered successful when all Pods have ended and at least one has succeeded

Depending on how spec.completions and spec.parallelism are set, Jobs fall into the following patterns:

Each pattern is listed below as job type (completions, parallelism), use example: behavior.

  • One-time Job (completions: 1, parallelism: 1), e.g. a database migration: create one Pod and run it until it ends successfully.
  • Job with a fixed completion count (completions: 2+, parallelism: 1), e.g. Pods processing a work queue: create Pods one after another and run them until the required number of completions succeed.
  • Parallel Job with a fixed completion count (completions: 2+, parallelism: 2+), e.g. multiple Pods processing a work queue at the same time: create multiple Pods and run them until the required number of completions succeed.
  • Parallel Job (completions: 1, parallelism: 2+), e.g. multiple Pods processing a work queue at the same time: create one or more Pods and run them until one ends successfully.
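
As a sketch, a Job with a fixed completion count sets both fields in its spec (the name, image, command and the values 5 and 2 are just illustrative):

apiVersion: batch/v1
kind: Job
metadata:
  name: queue-worker
spec:
  completions: 5    # the Job succeeds once 5 Pods have finished successfully
  parallelism: 2    # at most 2 Pods run at the same time
  template:
    spec:
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo processing one work item"]
      restartPolicy: Never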

Use of job

[root@master ~]# vi job.yml 
---
apiVersion: batch/v1
kind: Job
metadata:
  name: myjob
spec:
  template:
    spec:
      containers:
      - name: myjob
        image: busybox
        command: ["echo",  "hello k8s job"]
      restartPolicy: Never


[root@master ~]# kubectl apply -f job.yml 
job.batch/myjob created
[root@master ~]# kubectl get pods
NAME          READY   STATUS      RESTARTS   AGE
myjob-gq27p   0/1     Completed   0          37s

#View the Job that created this pod
[root@master ~]# kubectl get job
NAME    COMPLETIONS   DURATION   AGE
myjob   1/1           19s        5m11s

#Check the log of this pod
[root@master ~]# kubectl logs myjob-gq27p
hello k8s job

CronJob controller

A CronJob performs scheduled tasks based on a time schedule, similar to crontab entries on Linux/Unix systems.

CronJob is very useful for periodic, repetitive tasks such as backing up data or sending mail. A CronJob can also be used to schedule a single task for a specific point in the future, for example running a task when the system load is expected to be low.

A CronJob object is like a line in a crontab (cron table) file: it is written in Cron format and runs Jobs periodically at the scheduled times.

Note:

All CronJob schedules are interpreted in the time zone of the kube-controller-manager.

If your control plane runs the kube-controller-manager in a Pod or a bare container, the time zone set for that container determines the time zone used by the CronJob controller.

When creating a manifest for a CronJob resource, make sure the name you provide is a valid DNS subdomain name and is no longer than 52 characters. This is because the CronJob controller automatically appends 11 characters to the Job names it generates, and the maximum length of a Job name is 63 characters.

CronJob is used to perform periodic actions, such as backup, report generation, etc. Each of these tasks should be configured to repeat periodically (e.g. daily / weekly / monthly); you can define the time interval at which the task starts to execute.
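
The schedule field uses the standard five-field cron syntax (minute, hour, day of month, month, day of week). A few illustrative values:

# minute  hour  day-of-month  month  day-of-week
schedule: "*/1 * * * *"   # every minute (used in the sample below)
schedule: "0 3 * * 1"     # at 03:00 every Monday
schedule: "30 2 1 * *"    # at 02:30 on the first day of every month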

The following CronJob example manifest prints the current time and a greeting message every minute:

[root@master kubenetres]# vi cronjob.yml
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello world
          restartPolicy: OnFailure

Create it and view the Pods:

[root@master ~]# kubectl apply -f cronjob.yml 
Warning: batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
cronjob.batch/hello created

#Wait a minute to see
[root@master ~]# kubectl get pods
NAME                   READY   STATUS      RESTARTS   AGE
hello-27339330-kkfxv   0/1     Completed   0          2s

#view log
[root@master ~]# kubectl logs hello-27339330-kkfxv
Fri Dec 24 19:00 UTC 2021
Hello world
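
Besides schedule, a few other CronJob spec fields are commonly tuned. A minimal sketch (the values shown are illustrative and not taken from the example above):

spec:
  schedule: "*/1 * * * *"
  concurrencyPolicy: Forbid         # do not start a new Job while the previous one is still running
  startingDeadlineSeconds: 60       # skip a run if it cannot start within 60s of its scheduled time
  successfulJobsHistoryLimit: 3     # keep at most 3 finished Jobs
  failedJobsHistoryLimit: 1         # keep at most 1 failed Job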

Topics: Java Docker Kubernetes