K8s basic concepts
pod classification
Autonomous pod
- An autonomous (self-managed) pod definition still has to be submitted to the apiserver. Once the apiserver accepts it, the scheduler assigns the pod to a suitable node, and the kubelet on that node starts the pod
- If the pod fails and its container needs to be restarted, the kubelet on that node takes care of it
- If the node itself fails, the pod simply disappears; it is not rescheduled anywhere else, so global scheduling cannot be achieved. For this reason, this kind of pod is not recommended
Controller managed pod
- ReplicationController: ensures that a specified number of pod replicas is running at any time. The controller manages all replicas and related objects of the same kind of pod; if there are too few copies it adds more, and if there are too many it removes the extras ("remove the surplus, make up the shortage"), so the actual state exactly matches the expectations we defined. It also supports rolling updates
- ReplicaSet: normally not used directly, but managed by a declarative update controller called Deployment
- Deployment: a declarative controller built on top of ReplicaSets; it can only manage stateless applications (see the manifest sketch after this list)
- StatefulSet: a stateful replica set, used to manage stateful applications
- DaemonSet: used when exactly one replica has to run on every node
- Job: the pod it creates exits as soon as its task completes and does not need to be restarted or rebuilt; it is used for one-off tasks
- CronJob: the pod it creates handles periodic tasks and does not need to run continuously in the background. All of the controllers above exist to manage one specific kind of application workload.
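To make the controller-to-pod relationship concrete, below is a minimal sketch of a Deployment manifest. The name, label, and image are illustrative (they mirror the `kubectl create deployment web --image nginx --replicas 3` example later in this document), not a prescribed configuration.

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3              # desired number of pod copies; the controller adds or removes pods to match
  selector:
    matchLabels:
      app: web             # the Deployment (through its ReplicaSet) manages pods carrying this label
  template:                # pod template used to stamp out the replicas
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```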
Core components
HPA
- A Deployment can itself be managed by a second-level controller, the HPA
- HPA (Horizontal Pod Autoscaler): normally we might keep, say, two pods of an application running on a node. If user traffic grows and two pods can no longer carry that many requests, more pod resources have to be added, but how many should be added?
- The HPA controller monitors the pods automatically and scales the number of replicas out (and back in) as needed; see the sketch below.
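A minimal sketch of the imperative way to create such an autoscaler, assuming a Deployment named web already exists and metrics-server is installed (both are assumptions, not part of the setup described above):

```
# Keep between 2 and 10 replicas, scaling on ~80% average CPU utilization.
kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80

# Inspect the HorizontalPodAutoscaler object that was created.
kubectl get hpa
```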
service
- Suppose there are two pods. A pod has its own life cycle: if the node it runs on goes down, the pod has to be rebuilt on another node, and the rebuilt pod is no longer the same pod as the original one, even though it runs the same service. Each container has its own IP address, and the IP address of the container in the rebuilt pod differs from the IP address of the container in the old pod. This raises a problem: how does a client keep reaching the containers in these pods?
- The answer is service discovery. Think of stalls at a market that register their location: customers shop at the registered stall, and when the vendor moves, a notice is left at the old spot telling customers where the stall has gone. The goods sold are still the same; they are simply bought in a new place. That, in essence, is service discovery.
- Pods have a life cycle: a pod may go away at any time and new pods may be added at any time. Even if they all provide the same service, a client cannot reach them through any fixed address, because the pods themselves are not fixed and may be replaced at any moment, regardless of host name or IP address.
- To reduce the coordination complexity between clients and pods as much as possible, k8s inserts a fixed intermediate layer between each group of pods that provide the same service and their clients. This intermediate layer is called a Service.
- As long as the Service is not deleted, its address and name stay fixed. A client that needs to reach the service no longer has to discover the pods itself; it only writes the name of the Service in its configuration file. The Service acts as a scheduler: it provides a stable access entry point and also works as a reverse proxy. When the Service receives a client request, it proxies it to a back-end pod; if that pod dies, a new pod is created immediately and is associated with the Service as one of its available back-end pods.
- Client programs access a Service through IP + port or host name + port. The pods behind a Service are associated with it not by their IP addresses or host names but by the pod label selector: as long as pods are created with matching labels, the Service recognizes them no matter how their IP addresses or hosts change. Every pod whose labels match the selector falls within the management scope of the Service; when a pod is associated dynamically, the Service detects its IP address and port and adds it as a schedulable back-end server object. A client request therefore goes to the Service, and the Service proxies it to a container in a real back-end pod.
- A Service is neither a program nor a component; on each node it is essentially just a set of iptables DNAT rules
- As a k8s object, a Service has its own name, and that name is effectively the name of the service it fronts and can be resolved; a minimal Service manifest is sketched below.
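A minimal sketch of a Service manifest showing how the label selector does the association. The name, label, and ports are illustrative and match the Deployment sketch earlier, not a required configuration:

```
apiVersion: v1
kind: Service
metadata:
  name: web              # the stable name clients write in their configuration
spec:
  type: ClusterIP
  selector:
    app: web             # any pod carrying this label becomes a back end of this service
  ports:
  - port: 80             # port the service listens on
    targetPort: 80       # container port inside the matching pods
```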
AddOns (cluster add-ons)
- The first thing to do after installing k8s is to deploy a DNS pod on the cluster, so that the name of every Service can be resolved
- The DNS records change dynamically: they are created, deleted, and modified on the fly
- For example, if the name of a Service is changed, the DNS pod is triggered automatically and the name in the DNS resolution record changes with it; if we manually change the IP address of a Service, the resolution record in the DNS service is likewise updated automatically after the change.
- In this way, a client that wants to reach pod resources can simply use the Service name, and the DNS service dedicated to the cluster takes care of resolving it; see the example below.
- Pods of this kind are used by k8s's own services, so we call them system-level, infrastructure pod objects; they are also known as cluster add-ons
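As a quick illustration of name resolution through the cluster DNS add-on (the service name nginx and the default cluster.local domain are assumptions here; substitute any Service that exists in your cluster):

```
# Start a throwaway busybox pod and resolve a Service name through the cluster DNS.
kubectl run dns-test --image busybox --rm -it --restart=Never -- nslookup nginx.default.svc.cluster.local
```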
Three network models of K8s
- Node network
- service cluster network
- pod network (all three are illustrated below)
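A quick way to see the three networks side by side; the example CIDRs are taken from the outputs later in this document and will differ in other clusters:

```
kubectl get nodes -o wide   # node network: the hosts' own addresses, e.g. 192.168.47.0/24
kubectl get svc             # service cluster network: virtual ClusterIPs, e.g. 10.96.0.0/12
kubectl get pods -o wide    # pod network: addresses assigned by the CNI plugin, e.g. 10.244.0.0/16
```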
Flannel
Flannel is a network planning service designed by the CoreOS team for Kubernetes. In short, it does the following:
- Gives Docker containers created on different node hosts in the cluster virtual IP addresses that are unique across the whole cluster.
- Establishes an overlay network through which packets are delivered, unchanged, to the target container. An overlay network is a virtual network built on top of another network and supported by its infrastructure; it separates the network service from the underlying infrastructure by encapsulating one packet inside another. After the encapsulated packet has been forwarded to the endpoint, it is de-encapsulated there.
- Creates a virtual network interface named flannel0 that receives traffic from the docker bridge and, by maintaining a routing table, encapsulates (VXLAN) and forwards the data it receives.
- Routing information is usually stored in etcd: the flanneld daemons on the various nodes rely on an etcd cluster as a centralized configuration service, and etcd guarantees that the configuration seen by flanneld on every node is consistent. At the same time, the flanneld on each node watches for data changes in etcd, so it notices changes in cluster membership in real time.
- On each node, Flannel first creates a bridge named flannel0 (a VXLAN-type device) and runs an agent called flanneld. The flannel agent on each node requests a CIDR address block for the current node from etcd and uses it to assign addresses to the pods on that node.
- Flannel is dedicated to providing a layer-3 network between the nodes of a k8s cluster. It does not control how containers inside a node are networked; it only cares about how traffic flows between nodes. A typical configuration is sketched below.
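For reference, a sketch of the network configuration Flannel reads. In a kubeadm/kube-flannel deployment it usually lives in the kube-flannel ConfigMap as net-conf.json (in etcd-based setups the same JSON sits under a key such as /coreos.com/network/config). The 10.244.0.0/16 CIDR matches the pod addresses shown later in this document:

```
# Excerpt from the kube-flannel ConfigMap: cluster-wide pod CIDR and VXLAN backend.
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }
```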
kubectl common operations
kubectl command official document
kubeconfig configuration file
```
[root@master ~]# kubectl config
Available Commands:
  current-context   Display the current context
  delete-cluster    Delete the specified cluster from the kubeconfig
  delete-context    Delete the specified context from the kubeconfig
  delete-user       Delete the specified user from the kubeconfig
  get-clusters      Display the clusters defined in the kubeconfig
  get-contexts      Describe one or more contexts
  get-users         Display the users defined in the kubeconfig
  rename-context    Rename a context in the kubeconfig file
  set               Set an individual value in the kubeconfig file
  set-cluster       Set a cluster entry in the kubeconfig file
  set-context       Set a context entry in the kubeconfig file
  set-credentials   Set a user entry in the kubeconfig file
  unset             Unset an individual value in the kubeconfig file
  use-context       Set the current context in the kubeconfig file
  view              Display the merged kubeconfig settings or a specified kubeconfig file

Usage:
  kubectl config SUBCOMMAND [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).

[root@master ~]# kubectl config view
//cluster
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.47.115:6443
  name: kubernetes
//cluster context
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
//current context
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
//client authentication
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
```
kubectl management commands
Type | Command | Description |
---|---|---|
Basic commands | create expose run set explain get edit delete | Create a resource from a file or stdin; create a Service for a Deployment or Pod; run a particular image in the cluster; set specific features on objects; show documentation for a resource; display one or more resources; edit a resource with the system editor; delete resources by file name, stdin, resource name, or label selector |
Deployment commands | rollout rolling-update scale autoscale | Manage the rollout of Deployment and DaemonSet resources (status, history, rollback, etc.); rolling upgrade (ReplicationController only); scale the number of pods of a Deployment, ReplicaSet, RC, or Job up or down; configure autoscaling rules for a Deployment, RS, or RC (relies on metrics-server and the HPA) |
Cluster management commands | certificate cluster-info top cordon uncordon drain taint | Modify certificate resources; display cluster information; view resource utilization (relies on metrics-server); mark a node as unschedulable; mark a node as schedulable; evict the workloads on a node in preparation for maintenance; modify a node's taints |
kubectl help
Use kubectl --help to view the available kubectl commands and what they do
```
[root@master ~]# kubectl --help
kubectl controls the Kubernetes cluster manager.

Find more information at: https://kubernetes.io/docs/reference/kubectl/overview/

Basic Commands (Beginner):
  create         Create a resource from a file or from stdin.
  expose         Take a replication controller, service, deployment or pod and expose it as a new Kubernetes Service
  run            Run a particular image in the cluster
  set            Set specific features on objects

Basic Commands (Intermediate):
  explain        View documentation for a resource
  get            Display one or more resources
  edit           Edit a resource on the server
  delete         Delete resources by filenames, stdin, resources and names, or by resources and label selector

Deploy Commands:
  rollout        Manage the rollout of a resource
  scale          Set a new number of replicas for a Deployment, ReplicaSet, Replication Controller, or Job
  autoscale      Automatically adjust the number of replicas of a Deployment, ReplicaSet, or ReplicationController

Cluster Management Commands:
  certificate    Modify certificate resources.
  cluster-info   Display cluster information
  top            Display Resource (CPU/Memory/Storage) usage.
  cordon         Mark a node as unschedulable
  uncordon       Mark a node as schedulable
  drain          Drain a node in preparation for maintenance
  taint          Update the taints on one or more nodes

Troubleshooting and Debugging Commands:
  describe       Show details of a specific resource or group of resources
  logs           Print the logs of a container in a pod
  attach         Attach to a running container
  exec           Execute a command in a container
  port-forward   Forward one or more local ports to a pod
  proxy          Run a proxy to the Kubernetes API server
  cp             Copy files and directories to and from containers.
  auth           Inspect authorization

Advanced Commands:
  diff           Diff the live version against the would-be applied version
  apply          Apply a configuration to a resource by filename or stdin
  patch          Update field(s) of a resource using a strategic merge patch
  replace        Replace a resource by filename or stdin
  wait           Experimental: Wait for a specific condition on one or many resources.
  convert        Convert config files between different API versions

Settings Commands:
  label          Update the labels on a resource
  annotate       Update the annotations on a resource
  completion     Output shell completion code for the specified shell (bash or zsh)

Other Commands:
  alpha          Commands for features in alpha
  api-resources  Print the supported API resources on the server
  api-versions   Print the supported API versions on the server, in the form of "group/version"
  config         Modify kubeconfig files
  plugin         Provides utilities for interacting with plugins.
  version        Print the client and server version information

Usage:
  kubectl [flags] [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
```
Using the kubectl command
kubectl command official document
create command
```
Syntax:
kubectl create deployment NAME --image=image -- [COMMAND] [args...]

Options:
  --image      Specify the image
  --replicas   Create the specified number of pod replicas

[root@master ~]# kubectl create deployment test1 --image busybox
deployment.apps/test1 created
#Create a test1 pod using the busybox image
[root@master ~]# kubectl get pod
NAME                     READY   STATUS      RESTARTS   AGE
test1-78d64fd9b9-22p2j   0/1     Completed   0          7s
#You can see that it has exited: busybox runs sh, and with no task to perform it exits

//Create a deployment named test2 that runs busybox
[root@master ~]# kubectl create deployment test2 --image busybox -- sleep 60
deployment.apps/test2 created
[root@master ~]# kubectl get pods
NAME                     READY   STATUS             RESTARTS   AGE
test1-78d64fd9b9-22p2j   0/1     CrashLoopBackOff   2          76s
test2-7c95bf5bcb-mlvh2   1/1     Running            0          37s
#Running

//Create a deployment called web that runs the nginx image with three replicas
[root@master ~]# kubectl create deployment web --image nginx --replicas 3
deployment.apps/web created
[root@master ~]# kubectl get pods
NAME                  READY   STATUS    RESTARTS   AGE
web-96d5df5c8-7psw6   1/1     Running   0          31s
web-96d5df5c8-hc66p   1/1     Running   0          31s
web-96d5df5c8-jgwrm   1/1     Running   0          31s

//Create a deployment named web01 that runs the nginx image and exposes port 80
[root@master ~]# kubectl create deployment web01 --image nginx --port=80
deployment.apps/web01 created
//View the node each pod runs on
[root@master ~]# kubectl get pods -o wide
test1-78d64fd9b9-22p2j   0/1   Completed   4   2m29s   10.244.2.2   node2   <none>   <none>
test2-7c95bf5bcb-mlvh2   1/1   Running     1   110s    10.244.2.3   node2   <none>   <none>
web-96d5df5c8-7psw6      1/1   Running     0   47s     10.244.1.6   node1   <none>   <none>
web-96d5df5c8-hc66p      1/1   Running     0   47s     10.244.2.4   node2   <none>   <none>
web-96d5df5c8-jgwrm      1/1   Running     0   47s     10.244.1.7   node1   <none>   <none>
```
get command
```
//List all pods in ps output format
[root@master ~]# kubectl get pods
NAME                     READY   STATUS             RESTARTS   AGE
nginx-6799fc88d8-7rs5s   1/1     Running            3          2d
test1-78d64fd9b9-22p2j   0/1     CrashLoopBackOff   6          7m46s
test2-7c95bf5bcb-mlvh2   1/1     Running            4          7m7s
web-96d5df5c8-7psw6      1/1     Running            0          6m4s
web-96d5df5c8-hc66p      1/1     Running            0          6m4s
web-96d5df5c8-jgwrm      1/1     Running            0          6m4s

//View a resource of the type you specify: type plus resource name
[root@master ~]# kubectl get deployment web
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
web    3/3     3            3           6m22s

//List all pods in ps output format with more information (such as the node name)
[root@master ~]# kubectl get pods -o wide
NAME                     READY   STATUS             RESTARTS   AGE     IP           NODE    NOMINATED NODE   READINESS GATES
nginx-6799fc88d8-7rs5s   1/1     Running            3          2d      10.244.1.5   node1   <none>           <none>
test1-78d64fd9b9-22p2j   0/1     CrashLoopBackOff   6          8m22s   10.244.2.2   node2   <none>           <none>
test2-7c95bf5bcb-mlvh2   1/1     Running            4          7m43s   10.244.2.3   node2   <none>           <none>
web-96d5df5c8-7psw6      1/1     Running            0          6m40s   10.244.1.6   node1   <none>           <none>
web-96d5df5c8-hc66p      1/1     Running            0          6m40s   10.244.2.4   node2   <none>           <none>
web-96d5df5c8-jgwrm      1/1     Running            0          6m40s   10.244.1.7   node1   <none>           <none>

//List all replication controllers and services in ps output format
[root@master ~]# kubectl get rc,svc
NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        2d
service/nginx        NodePort    10.110.26.29   <none>        80:30598/TCP   2d
#svc is short for service; it can also be written out in full

kubectl get cs                          # View cluster component status
kubectl get nodes                       # View cluster node information
kubectl get ns                          # View cluster namespaces
kubectl get svc -n kube-system          # View the services in the specified namespace
kubectl get pod <pod-name> -o wide      # View pod details
kubectl get pod <pod-name> -o yaml      # View pod details in yaml format
kubectl get pods                        # View the list of all pods
kubectl get rc,service                  # View the rc and service lists
kubectl get pod,svc,ep --show-labels    # View pod, svc, ep and label information
kubectl get all --all-namespaces        # View everything in all namespaces
```
expose command
```
Options:
  --port          Port that the Service listens on
  --target-port   Container port the Service forwards to

//Run an nginx pod
[root@master ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
//Expose port 80
[root@master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
[root@master ~]# kubectl get pods,svc
NAME                         READY   STATUS              RESTARTS   AGE
pod/nginx-6799fc88d8-7rs5s   0/1     ContainerCreating   0          11s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        12m
service/nginx        NodePort    10.110.26.29   <none>        80:30598/TCP   5s

[root@master ~]# curl http://10.110.26.29
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```
delete command
```
[root@master ~]# kubectl get pods
NAME                     READY   STATUS             RESTARTS   AGE
nginx-6799fc88d8-54n9q   1/1     Running            0          9m6s
test1-78d64fd9b9-22p2j   0/1     CrashLoopBackOff   9          23m
test2-7c95bf5bcb-mlvh2   0/1     CrashLoopBackOff   7          23m
web-96d5df5c8-7psw6      1/1     Running            0          22m
web-96d5df5c8-hc66p      1/1     Running            0          22m
web-96d5df5c8-jgwrm      1/1     Running            0          22m
[root@master ~]# kubectl delete deployment test1
deployment.apps "test1" deleted
[root@master ~]# kubectl get pods
NAME                     READY   STATUS             RESTARTS   AGE
nginx-6799fc88d8-54n9q   1/1     Running            0          9m24s
test2-7c95bf5bcb-mlvh2   0/1     CrashLoopBackOff   7          23m
web-96d5df5c8-7psw6      1/1     Running            0          22m
web-96d5df5c8-hc66p      1/1     Running            0          22m
web-96d5df5c8-jgwrm      1/1     Running            0          22m

//Delete a service
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        2d1h
nginx        NodePort    10.109.226.76   <none>        80:30220/TCP   6m7s
[root@master ~]# kubectl delete svc nginx
service "nginx" deleted
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   2d1h

//Delete all pods
[root@master ~]# kubectl delete pods --all

//Force-delete a pod
[root@master ~]# kubectl delete pod foo --force
```
run command
```
Syntax:
kubectl run NAME --image=image [--env="key=value"] [--port=port] [--replicas=replicas] [--dry-run=bool] [--overrides=inline-json] [--command] -- [COMMAND] [args...]

Options:
  --image              Specify the image
  --port               Container port to expose
  --labels key=value   Specify labels

//Start an httpd pod
[root@master ~]# kubectl run httpd --image httpd
pod/httpd created
[root@master ~]# kubectl get pods -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
httpd   1/1     Running   0          45s   10.244.1.12   node1   <none>           <none>

//Delete httpd
[root@master ~]# kubectl delete pods httpd
pod "httpd" deleted

//Expose port 80 of the container
[root@master ~]# kubectl run nginx --image nginx --port 80
pod/nginx created
[root@master ~]# kubectl get pods -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
httpd   1/1     Running   0          21s   10.244.1.14   node1   <none>           <none>
[root@master ~]# curl 10.244.1.14
<html><body><h1>It works!</h1></body></html>

//Detailed information
[root@master ~]# kubectl describe pod httpd
Name:         httpd
Namespace:    default
Priority:     0
Node:         node1/192.168.47.120
Start Time:   Mon, 20 Dec 2021 03:11:39 +0800
Labels:       aap=nginx
              env=prod
Annotations:  <none>
Status:       Running
IP:           10.244.1.14
IPs:
  IP:  10.244.1.14
Containers:
  httpd:
    Container ID:   docker://f9b47b86cadd7f559bd6fbb54d177782a6ba398bce29deae1ef81b4d4fc6e3ed
    Image:          httpd
    Image ID:       docker-pullable://httpd@sha256:0c8dd1d9f90f0da8a29a25dcc092aed76b09a1c9e5e6e93c8db3903c8ce6ef29
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Mon, 20 Dec 2021 03:11:56 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-mrc8p (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-mrc8p:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-mrc8p
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  2m50s  default-scheduler  Successfully assigned default/httpd to node1
  Normal  Pulling    2m49s  kubelet            Pulling image "httpd"
  Normal  Pulled     2m33s  kubelet            Successfully pulled image "httpd" in 15.605983923s
  Normal  Created    2m33s  kubelet            Created container httpd
  Normal  Started    2m33s  kubelet            Started container httpd

//Test (dry run)
#It won't really run
[root@master ~]# kubectl run nginx --image nginx --dry-run client
W1220 03:15:51.357914   78444 helpers.go:553] --dry-run is deprecated and can be replaced with --dry-run=client.
pod/nginx created (dry run)
```