Common deployment schemes of Kubernetes (XIV)

Posted by someone2088 on Sun, 30 Jan 2022 09:20:32 +0100


1. Common deployment schemes

  • Rolling update

    • The service is never stopped; old and new pods coexist throughout the update.
  • Recreate

    • All old pods are stopped first, then new ones are created; the service is interrupted during this window.
  • Blue-green (no downtime, low risk)

    • Deploy version 1 (the initial state); all external request traffic is routed to this version.
    • Deploy version 2, whose application code differs from version 1 (new features, bug fixes, etc.).
    • Switch the traffic from version 1 to version 2.
    • If version 2 tests out normal, delete the resources (such as instances) still used by version 1 and use version 2 officially.
  • Canary

    • Run a small number of new-version pods alongside the old version, so only a share of the traffic reaches the new version while old and new coexist.

1.1. Rolling update

  • maxSurge: the maximum number of extra pods that can be started above the desired replica count during a rolling upgrade

  • maxUnavailable: the maximum number of pods allowed to be unavailable during a rolling upgrade
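
For percentage values, Kubernetes rounds maxSurge up and maxUnavailable down. As a plain-shell sketch (no kubectl involved), here are the resulting pod-count bounds for the 4-replica, 25% example used below:

```shell
# Pod-count bounds for replicas=4, maxSurge=25%, maxUnavailable=25%.
# Kubernetes rounds a percentage maxSurge UP and maxUnavailable DOWN.
replicas=4
surge_pct=25
unavail_pct=25

max_surge=$(( (replicas * surge_pct + 99) / 100 ))   # ceil  -> 1
max_unavailable=$(( replicas * unavail_pct / 100 ))  # floor -> 1

echo "at most $(( replicas + max_surge )) pods during the update"      # 5
echo "at least $(( replicas - max_unavailable )) pods stay available"  # 3
```

So at any moment during the update there are between 3 and 5 pods, which is why the service never goes down.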

(1) Create the file rollingupdate.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: rollingupdate
spec:
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  selector:
    matchLabels:
      app: rollingupdate
  replicas: 4
  template:
    metadata:
      labels:
        app: rollingupdate
    spec:
      containers:
      - name: rollingupdate
        image: registry.cn-hangzhou.aliyuncs.com/ghy/test-docker-image:v1.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: rollingupdate
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: rollingupdate
  type: ClusterIP

(2) Execute script

kubectl apply -f rollingupdate.yaml

(3) View pods

kubectl get pods

(4) View svc

kubectl get svc

(5) Once the above succeeds, you can access the service behind the pods directly through the cluster IP

curl cluster-ip/dockerfile

(6) If the previous steps succeeded, the next thing to do is the rolling update itself. First modify the rollingupdate.yaml file, changing the image tag to v2.0, then save the file

(7) On w1, keep accessing the service and observe the output

while sleep 0.2;do curl cluster-ip/dockerfile;echo "";done

(8) On w2, monitor the pod

kubectl get pods -w

(9) Execute the apply operation again so the file takes effect and the deployment becomes version 2.0

kubectl apply -f rollingupdate.yaml

(10) Run the following command to observe the new version's pods replacing the old ones

kubectl get pods

1.2. Recreate

If we don't want new and old versions to coexist, but instead want to stop the old version completely before starting the new one, we need this strategy.
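
As a toy illustration (plain shell, no kubectl), the Recreate timeline can be sketched like this; the key point is the window in which zero pods are available:

```shell
# Recreate strategy, simulated: all old pods terminate before any new
# pod starts, so the endpoint count drops to zero in between.
old=4; new=0

old=0                      # phase 1: every old pod is terminated
during=$(( old + new ))    # no endpoints -> downtime
echo "available during switch: $during"   # 0

new=4                      # phase 2: new pods are created
after=$(( old + new ))
echo "available after switch: $after"     # 4
```

Compare this with the rolling update above, where availability never drops below replicas minus maxUnavailable.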

(1) Write the recreate.yaml file

apiVersion: apps/v1
kind: Deployment
metadata:
  name: recreate
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: recreate
  replicas: 4
  template:
    metadata:
      labels:
        app: recreate
    spec:
      containers:
      - name: recreate
        image: registry.cn-hangzhou.aliyuncs.com/ghy/test-docker-image:v1.0
        ports:
        - containerPort: 8080
        livenessProbe:
          tcpSocket:
            port: 8080

(2) Execute script

kubectl apply -f recreate.yaml

(3) View pods

kubectl get pods

(4) As before, modify the image version number in the recreate.yaml file, then apply it

kubectl apply -f recreate.yaml

(5) View pod

kubectl get pods

(6) Execute the apply operation again so the file takes effect and the deployment becomes version 2.0

kubectl apply -f recreate.yaml

(7) Run the following command to see the old version stop completely before the new version starts

kubectl get pods

1.3. Blue-green

Blue-green deployment is essentially a labeling scheme: one deployment is marked blue and the other green, and the version switch is declared by pointing the Service at one color or the other.
(1) Create a bluegreen.yaml

#deploy
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue
spec:
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  selector:
    matchLabels:
      app: bluegreen
  replicas: 4
  template:
    metadata:
      labels:
        app: bluegreen
        version: v1.0
    spec:
      containers:
      - name: bluegreen
        image: registry.cn-hangzhou.aliyuncs.com/ghy/test-docker-image:v1.0
        ports:
        - containerPort: 8080

(2) Execute script

kubectl apply -f bluegreen.yaml

(3) View pods

kubectl get pods

(4) Write a Service manifest with the file name bluegreen-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: bluegreen
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: bluegreen
    version: v1.0
  type: ClusterIP

(5) Apply the Service manifest

kubectl apply -f bluegreen-service.yaml

(6) View svc

kubectl get svc

(7) On w1, keep accessing the service and observe the output

while sleep 0.3;do curl cluster-ip/dockerfile;echo "";done

(8) The next step is to upgrade version 1.0 to version 2.0 by modifying bluegreen.yaml. The changed lines (highlighted in color in the original post) are the Deployment name (blue to green), the version label (v1.0 to v2.0), and the image tag (v1.0 to v2.0)

#deploy
apiVersion: apps/v1
kind: Deployment
metadata:
  name: green
spec:
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  selector:
    matchLabels:
      app: bluegreen
  replicas: 4
  template:
    metadata:
      labels:
        app: bluegreen
        version: v2.0
    spec:
      containers:
      - name: bluegreen
        image: registry.cn-hangzhou.aliyuncs.com/ghy/test-docker-image:v2.0
        ports:
        - containerPort: 8080

(9) Apply the updated manifest

kubectl apply -f bluegreen.yaml

(10) View pod

kubectl get pods

(11) Meanwhile, watch whether the output of the loop from step (7) has changed. You will find that the two versions' pods coexist, yet the traffic has not moved: requests still reach v1.0, because the bluegreen-service.yaml Service written earlier still selects version: v1.0. To switch versions, simply modify that selector line. The modified bluegreen-service.yaml file is as follows

apiVersion: v1
kind: Service
metadata:
  name: bluegreen
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: bluegreen
    version: v2.0  # that is, the traffic is switched to version 2.0
  type: ClusterIP

(12) Apply the bluegreen-service.yaml file again

kubectl apply -f bluegreen-service.yaml

(13) View svc

kubectl get svc

(14) Meanwhile, watch the loop output again: the traffic has now been switched entirely to version v2.0
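
The switch works because a Service routes only to pods whose labels match every key in its selector. A toy illustration in plain shell (no kubectl), assuming four pods from each deployment:

```shell
# Each entry stands for one pod's version label.
pods="v1.0 v1.0 v1.0 v1.0 v2.0 v2.0 v2.0 v2.0"
selector="v2.0"   # the Service's version selector after the switch

matched=0
for p in $pods; do
  if [ "$p" = "$selector" ]; then
    matched=$(( matched + 1 ))
  fi
done
echo "pods receiving traffic: $matched"   # 4 (only the green pods)
```

Flipping `selector` back to v1.0 would instantly send all traffic to the blue pods again, which is what makes blue-green rollback so fast.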

1.4. Canary

Having introduced the first three strategies, we now come to a deployment style commonly used in development. In some scenarios we want multiple versions to coexist and serve traffic at the same time; the method explained next does exactly that.

(1) It is actually very simple: just make a small change to the blue-green deployment from 1.3. Modify the bluegreen-service.yaml file as follows

apiVersion: v1
kind: Service
metadata:
  name: bluegreen
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: bluegreen  # the version line is deleted, so pods are selected by app: bluegreen only
  type: ClusterIP

(2) Apply the bluegreen-service.yaml file again

kubectl apply -f bluegreen-service.yaml

(3) Meanwhile, watch the loop output again: the old and new versions can now be accessed at the same time. (Weighted traffic splitting of this kind is more convenient in Istio.) This suits A/B testing and deploying a new feature to only a few instances
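
With both deployments still at 4 replicas, traffic splits roughly 50/50. For a true canary you would typically scale the new deployment down, e.g. to 1 replica, so only a small share of requests hits it. A rough estimate in plain shell, assuming the Service balances evenly across matching pods (the 1-replica sizing is a hypothetical, not from the article's yaml):

```shell
# Expected share of requests hitting v2.0, assuming even balancing
# across all pods matched by app: bluegreen.
v1_replicas=4
v2_replicas=1   # hypothetical canary sizing; the yaml above uses 4

total=$(( v1_replicas + v2_replicas ))
v2_share=$(( 100 * v2_replicas / total ))
echo "approx ${v2_share}% of requests hit v2.0"   # 20
```

Growing v2_replicas while shrinking v1_replicas gradually shifts the split until the canary takes all traffic.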
