k8s - foundation and classification of pod

Posted by ball420 on Thu, 04 Nov 2021 17:58:09 +0100


1, Basic concepts of Pod

1. Overview

  • Pod is the smallest resource management unit in Kubernetes and the smallest resource object for running containerized applications. A Pod represents a process running in the cluster. Most other components in Kubernetes support and extend Pod functionality: for example, controller objects such as StatefulSet and Deployment manage how Pods run, Service objects expose Pod applications to the network, and PersistentVolume objects provide storage resources for Pods.

2. In a Kubernetes cluster, a Pod can be used in the following two ways

  • Run a single container in a Pod. The "one container per Pod" model is the most common usage: in this mode, you can think of the Pod as a wrapper around a single container, and Kubernetes manages the Pod rather than managing the container directly.
  • Run multiple containers in a Pod. A Pod can also encapsulate several tightly coupled containers that need to cooperate and share resources. These containers form a single service unit: for example, one container serves files from a shared volume while a separate "sidecar" container updates those files. The Pod manages the storage resources of these containers as a single entity.

All containers in a Pod must run on the same node. Modern container practice recommends that each container run only one process, which runs as PID 1 in the container's PID namespace so it can directly receive and handle signals; when that process terminates, the container's life cycle ends. To run multiple processes in one container, you would need a supervisor similar to the Linux init process to manage the life cycle of a tree of processes. Processes running in separate containers cannot communicate directly over the network because of the isolation between containers. The Pod resource abstraction in k8s solves this problem: a Pod object is a collection of containers that share the Network, UTS, and IPC namespaces, so they have the same domain name, host name, and network interfaces, and can communicate directly via IPC.

In a Pod, an underlying infrastructure container called pause (also known as the parent container) provides shared namespaces, such as the network namespace, to the other containers. Pause manages the shared resources among the Pod's containers: the parent container needs to know exactly how to create containers that share a running environment and how to manage their life cycles. To realize this parent-container concept, Kubernetes uses the pause container as the parent of all containers in a Pod. The pause container has two core functions: first, it provides the basis for the Pod's shared Linux namespaces; second, with PID namespace sharing enabled, it acts as the process with PID 1 (the init process) in each Pod and reaps zombie processes.

3. The pause container allows all containers in the Pod to share two resources: network and storage.

(1) Network

  • Each Pod is assigned a unique IP address. All containers in the Pod share the same network namespace, including the IP address and ports. Containers inside the Pod can communicate with each other via localhost. When containers in the Pod communicate with the outside world, shared network resources must be allocated (for example, ports mapped on the host).
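A minimal sketch of this sharing (the Pod name and the sidecar command are illustrative, not from this article): the busybox sidecar reaches nginx on localhost:80 because both containers share the Pod's IP address and port space.

```yaml
# Illustrative Pod: two containers sharing one network namespace.
apiVersion: v1
kind: Pod
metadata:
  name: shared-net-demo
spec:
  containers:
  - name: web
    image: nginx:1.14
    ports:
    - containerPort: 80
  - name: curl-sidecar
    image: busybox:1.28
    # localhost here is the Pod's own network namespace, shared with nginx
    command: ['sh', '-c', 'sleep 5; wget -qO- http://localhost:80; sleep 3600']
```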

(2) Storage

  • You can specify multiple shared volumes for a Pod. All containers in the Pod can access the shared volumes. Volumes can also be used to persist storage in the Pod, preventing file loss when a container restarts.
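A minimal sketch, assuming an emptyDir volume (the names are illustrative): both containers mount the same volume, so files written by one are visible to the other and survive a container restart (though an emptyDir is removed when the Pod itself is deleted).

```yaml
# Illustrative Pod: "writer" and "reader" share the same emptyDir volume.
apiVersion: v1
kind: Pod
metadata:
  name: shared-vol-demo
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: writer
    image: busybox:1.28
    command: ['sh', '-c', 'echo hello > /data/msg; sleep 3600']
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: busybox:1.28
    # Reads the file written by the other container via the shared volume
    command: ['sh', '-c', 'sleep 5; cat /data/msg; sleep 3600']
    volumeMounts:
    - name: shared-data
      mountPath: /data
```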

4. Summary

  • Each Pod has a special Pause container called the "basic container". The image corresponding to the Pause container belongs to the Kubernetes platform. In addition to the Pause container, each Pod also contains one or more closely related user application containers.

5. The pause container in kubernetes mainly provides the following functions for each container:

  • Serve as the basis for sharing Linux namespaces (such as the network namespace) within the Pod
  • Enable PID namespace sharing, acting as the init process (PID 1) for the Pod

6. Why Kubernetes designed the Pod concept with this special structure

  • Reason 1: when a group of containers is treated as a unit, it is difficult to judge the state of the unit as a whole and act on it. For example, if one container dies, should the whole group be considered dead? Introducing the business-independent pause container as the Pod's infrastructure container solves this problem: its state represents the state of the entire container group.
  • Reason 2: the application containers in a Pod share the pause container's IP address and the volumes mounted by the pause container, which simplifies communication between the application containers and solves the file-sharing problem between them.

2, Classification of Pod containers

1. Classification of pod

(1) Autonomous Pod

  • This kind of Pod cannot repair itself. After a Pod is created (whether directly by you or by a controller), it is scheduled by Kubernetes onto a node in the cluster. The Pod remains on that node until its process terminates, it is deleted, it is evicted due to a lack of resources, or the node fails. Pods do not heal themselves: if the node the Pod is running on fails, or the scheduling operation itself fails, the Pod is deleted. Likewise, if the node lacks resources or enters maintenance, the Pod is evicted.

(2) Controller managed Pod

  • Kubernetes uses a higher-level abstraction called a controller to manage Pod instances. A controller can create and manage multiple Pods, providing replica management, rolling upgrades, and cluster-level self-healing. For example, if a node fails, the controller automatically reschedules that node's Pods onto other healthy nodes. Although Pods can be used directly, controllers are usually used to manage Pods in Kubernetes.
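As a hedged sketch of a controller-managed Pod (the name and image are illustrative), a Deployment that keeps three nginx replicas running and recreates them on healthy nodes if one fails might look like this. Note it uses apps/v1, the current Deployment API; the older cluster later in this article uses extensions/v1beta1.

```yaml
# Illustrative Deployment: the controller maintains 3 Pod replicas.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.14
        ports:
        - containerPort: 80
```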

2. Classification of Pod containers

(1) Infrastructure container

  • Maintains the entire Pod's network and storage space
  • Runs on the node
  • Whenever a Pod is started, k8s automatically starts this infrastructure container first

 

cat /opt/kubernetes/cfg/kubelet
......
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0

 

  • The infrastructure container is created every time a Pod is created. Every running Pod has a pause-amd64 infrastructure container, which runs automatically and is transparent to users
docker ps -a
registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0   "/pause"

(2) Initialize containers

  • Init containers must run to completion before the application containers start, whereas application containers run in parallel with each other, so Init containers provide a simple way to block or delay the startup of the application containers.

The init container is very similar to an ordinary container, except for the following two points

  • An Init container always runs until it completes successfully
  • Each Init container must complete successfully before the next Init container starts

If a Pod's Init container fails, k8s restarts the Pod repeatedly until the Init container succeeds. However, if the Pod's restart policy is Never, the Pod will not be restarted.
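A minimal sketch of that interaction (the names and commands are illustrative): with restartPolicy: Never, a failing Init container leaves the Pod in a failed state instead of being retried, and the application container never starts.

```yaml
# Illustrative Pod: the Init container exits non-zero; because
# restartPolicy is Never, k8s does not retry and the Pod fails.
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: must-succeed
    image: busybox:1.28
    command: ['sh', '-c', 'exit 1']
  containers:
  - name: app
    image: busybox:1.28
    command: ['sh', '-c', 'echo app started; sleep 3600']
```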

Functions of Init containers

Because Init containers have images separate from the application containers, their startup-related code has the following advantages:

  • Init containers can contain utilities or custom setup code that is not present in the application image. For example, there is no need to build a new image FROM a base image just to use a tool like sed, awk, python, or dig during setup.
  • Init containers can run these tools safely, avoiding making the application image less secure.
  • The builder and the deployer of the application image can work independently, without needing to jointly build a single combined image.
  • Init containers can run with a different view of the filesystem than the application containers in the same Pod. They can therefore be given access to Secrets that the application containers cannot access.
  • Because Init containers must run to completion before the application containers start, they provide a mechanism to block or delay the start of the application containers until a set of preconditions is met. Once the preconditions are met, all application containers in the Pod start in parallel.

(3) Application containers (started in parallel)

Example of official website:
https://kubernetes.io/docs/concepts/workloads/pods/init-containers/

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done;']
  - name: init-mydb
    image: busybox:1.28
    command: ['sh', '-c', 'until nslookup mydb; do echo waiting for mydb; sleep 2; done;']

This example defines a simple Pod with two Init containers. The first waits for myservice to start, and the second waits for mydb. Once both Init containers complete, the Pod starts the application container from its spec.
kubectl describe pod myapp-pod
kubectl logs myapp-pod -c init-myservice

vim myservice.yaml
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
    
kubectl create -f myservice.yaml
kubectl get svc
kubectl get pods -n kube-system
kubectl get pods

vim mydb.yaml
apiVersion: v1
kind: Service
metadata:
  name: mydb
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9377
    
kubectl create -f mydb.yaml
kubectl get pods

Special note

  • During Pod startup, Init containers start sequentially after the network and data volumes are initialized. Each Init container must exit successfully before the next one starts.
  • If a container fails to start due to a runtime error or exits with failure, it is retried according to the Pod's restartPolicy. However, if the Pod's restartPolicy is set to Always, Init containers effectively use the OnFailure policy when they fail.
  • A Pod will not become Ready until all of its Init containers have succeeded. The ports of Init containers are not aggregated in a Service. A Pod that is initializing is in the Pending state, with the Initialized condition set to false.
  • If the Pod restarts, all Init containers must be re-executed.
  • Changes to an Init container's spec are limited to the container image field; modifications to other fields will not take effect. Changing an Init container's image field is equivalent to restarting the Pod.
  • Init containers have all the fields of application containers except readinessProbe, because an Init container cannot define a readiness state distinct from completion. This is enforced during validation.
  • The name of each application and Init container in a Pod must be unique; sharing a name with any other container raises an error during validation.

3, Image pull strategy

The core job of a Pod is to run containers, which requires a container engine such as Docker. When a container starts, its image must be pulled. The image pull policy in k8s can be specified by the user:

  • IfNotPresent: the kubelet pulls the image only when it is missing locally; if the image already exists, it is not pulled again. This is the default policy.
  • Always: the image is pulled again every time the Pod is created;
  • Never: the image is never actively pulled; only a local image is used.

Note: for images tagged ":latest", the default pull policy is "Always"; for images with other tags, the default policy is "IfNotPresent".

1. Official example

https://kubernetes.io/docs/concepts/containers/images

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: private-image-test-1
spec:
  containers:
    - name: uses-private-image
      image: $PRIVATE_IMAGE_NAME
      imagePullPolicy: Always
      command: [ "echo", "SUCCESS" ]
EOF

2. Operation on master 01

kubectl edit deployment/nginx-deployment
......
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.15.4
        imagePullPolicy: IfNotPresent                            #The image pull policy is IfNotPresent
        name: nginx
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always                                        #The restart policy of Pod is Always, which is the default value
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
......

3. Create test cases

mkdir /opt/demo
cd /opt/demo

vim pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-test1
spec:
  containers:
    - name: nginx
      image: nginx
      imagePullPolicy: Always
      command: [ "echo", "SUCCESS" ]

kubectl create -f pod1.yaml

kubectl get pods -o wide
pod-test1                         0/1     CrashLoopBackOff   4          3m33s
//The Pod status is abnormal here because once echo finishes executing, the container's life cycle ends

kubectl describe pod pod-test1
......
Events:
  Type     Reason     Age                 From                    Message
  ----     ------     ----                ----                    -------
  Normal   Scheduled  2m10s               default-scheduler       Successfully assigned default/pod-test1 to 192.168.80.11
  Normal   Pulled     46s (x4 over 119s)  kubelet, 192.168.80.11  Successfully pulled image "nginx"
  Normal   Created    46s (x4 over 119s)  kubelet, 192.168.80.11  Created container
  Normal   Started    46s (x4 over 119s)  kubelet, 192.168.80.11  Started container
  Warning  BackOff    19s (x7 over 107s)  kubelet, 192.168.80.11  Back-off restarting failed container
  Normal   Pulling    5s (x5 over 2m8s)   kubelet, 192.168.80.11  pulling image "nginx"
//You can see that when the container's life cycle ends, because the Pod's restart policy is Always, the container restarts and pulls the image again

4. Modify the pod1.yaml file

cd /opt/demo
vim pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-test1
spec:
  containers:
    - name: nginx
      image: nginx:1.14                            #Modify nginx image version
      imagePullPolicy: Always
      #command: [ "echo", "SUCCESS" ]            #delete

//Delete existing resources
kubectl delete -f pod1.yaml 

//Update resources
kubectl apply -f pod1.yaml 

//View the Pod status
kubectl get pods -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP            NODE            NOMINATED NODE
pod-test1                         1/1     Running   0          33s   172.17.36.4   192.168.80.11   <none>

//On any node, use curl to view the response headers
curl -I http://172.17.36.4
HTTP/1.1 200 OK
Server: nginx/1.14.2
......

4, Deploy harbor to create a private project

//Operate on the Docker Harbor node (192.168.80.30)
systemctl stop firewalld.service
systemctl disable firewalld.service
setenforce 0

yum install -y yum-utils device-mapper-persistent-data lvm2 
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo 
yum install -y docker-ce
systemctl start docker.service
systemctl enable docker.service
docker version

//Upload docker-compose and harbor-offline-installer-v1.2.2.tgz to the /opt directory
cd /opt
chmod +x docker-compose
mv docker-compose /usr/local/bin/

//deploy Harbor service
tar zxvf harbor-offline-installer-v1.2.2.tgz -C /usr/local/
vim /usr/local/harbor/harbor.cfg
#Line 5 -- modify: set to the Harbor server's IP address or domain name
hostname = 192.168.80.30

cd /usr/local/harbor/
./install.sh

//Create a new project in Harbor
(1)Browse to http://192.168.80.30 and log in to the Harbor web UI. The default administrator username and password are admin/Harbor12345
(2)After logging in, you can create a new project. Click the "+Project" button
(3)Fill in the project name as "kgc-project" and click OK to create the new project

//On each node, configure the connection to the private registry (note the comma separating the entries)
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://6ijb8ubo.mirror.aliyuncs.com"],
  "insecure-registries":["192.168.80.30"]
}
EOF

systemctl daemon-reload
systemctl restart docker

//Log in to the Harbor private registry on each node
docker login -u admin -p Harbor12345 http://192.168.80.30

//On one node, download the Tomcat image and push it to Harbor
docker pull tomcat:8.0.52
docker images

docker tag tomcat:8.0.52 192.168.80.30/kgc-project/tomcat:v1
docker images
docker push 192.168.80.30/kgc-project/tomcat:v1

//View login credentials
cat /root/.docker/config.json | base64 -w 0            #base64 -w 0: base64-encode without line wrapping (encoding, not encryption)
ewoJImF1dGhzIjogewoJCSIxOTIuMTY4LjE5NS44MCI6IHsKCQkJImF1dGgiOiAiWVdSdGFXNDZTR0Z5WW05eU1USXpORFU9IgoJCX0KCX0sCgkiSHR0cEhlYWRlcnMiOiB7CgkJIlVzZXItQWdlbnQiOiAiRG9ja2VyLUNsaWVudC8xOS4wMy41IChsaW51eCkiCgl9Cn0=
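As a minimal sketch of what `base64 -w 0` does here (the sample JSON is illustrative, not the real credential file), encoding produces a single unwrapped line that decodes back to the original:

```shell
# Illustrative only: round-trip a sample Docker config through base64.
# base64 encodes, it does not encrypt -- anyone can decode the credential.
sample='{"auths":{"192.168.80.30":{"auth":"YWRtaW46SGFyYm9yMTIzNDU="}}}'
encoded=$(printf '%s' "$sample" | base64 -w 0)   # -w 0 disables line wrapping
echo "$encoded"
printf '%s' "$encoded" | base64 -d               # decodes back to the JSON
```

The single-line output is what gets pasted into the Secret manifest's .dockerconfigjson field below.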

//Create the Harbor login-credential resource manifest
vim harbor-pull-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: harbor-pull-secret
data:
  .dockerconfigjson: ewoJImF1dGhzIjogewoJCSIxOTIuMTY4LjE5NS44MCI6IHsKCQkJImF1dGgiOiAiWVdSdGFXNDZTR0Z5WW05eU1USXpORFU9IgoJCX0KCX0sCgkiSHR0cEhlYWRlcnMiOiB7CgkJIlVzZXItQWdlbnQiOiAiRG9ja2VyLUNsaWVudC8xOS4wMy41IChsaW51eCkiCgl9Cn0=            #Copy and paste the login credentials viewed above
type: kubernetes.io/dockerconfigjson

//Create the secret resource
kubectl create -f harbor-pull-secret.yaml

//View the secret resource
kubectl get secret

//Create a resource that pulls its image from Harbor
cd /opt/demo
vim tomcat-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-tomcat
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: my-tomcat
    spec:
      imagePullSecrets:                        #Add the option to pull secret resources
      - name: harbor-pull-secret            #Specify the secret resource name
      containers:
      - name: my-tomcat
        image: 192.168.80.30/kgc-project/tomcat:v1        #Specifies the image name in the harbor
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-tomcat
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 31111
  selector:
    app: my-tomcat

//Delete the Tomcat images previously downloaded on the node
docker rmi tomcat:8.0.52
docker rmi 192.168.80.30/kgc-project/tomcat:v1
docker images

//Create resource
kubectl create -f tomcat-deployment.yaml

kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
my-tomcat-d55b94fd-29qk2   1/1     Running   0         
my-tomcat-d55b94fd-9j42r   1/1     Running   0         

//View the Pod's description; you can see the image was pulled from Harbor
kubectl describe pod my-tomcat-d55b94fd-29qk2

//Refresh the Harbor page; you can see that the image pull count has increased