Take notes: how Kubernetes ingress-nginx does blue-green, canary, and gray releases

Posted by hrdyzlita on Mon, 07 Mar 2022 15:22:44 +0100

Background introduction

In some scenarios we use Kubernetes as the cloud platform for business applications and want blue-green deployment to iterate application versions. Istio is too heavy and complex, and it is positioned for traffic control and mesh governance; ingress-nginx introduced the canary feature in version 0.21, which lets a single gateway entry point front multiple versions of an application and uses annotations to control how traffic is split across the backend services.

Introduction to the ingress-nginx canary annotations

To enable the canary feature, you must first set nginx.ingress.kubernetes.io/canary: "true"; then the following annotations configure canary behavior

  • nginx.ingress.kubernetes.io/canary-weight ― the percentage of requests routed to the service specified in the canary Ingress, an integer from 0 to 100. Based on this value, the corresponding share of traffic is allocated to the backend service specified in the canary Ingress

  • nginx.ingress.kubernetes.io/canary-by-header ― traffic segmentation based on a request header, suitable for gray releases or A/B testing. When the header's value is always, the request is routed to the canary entry; when it is never, the request is not routed to the canary entry; any other header value is ignored and the request is allocated to the other rules by priority

  • nginx.ingress.kubernetes.io/canary-by-header-value ― used together with nginx.ingress.kubernetes.io/canary-by-header. When the request's header named by canary-by-header carries exactly this value, the request is routed to the canary Ingress entry; any other header value is ignored and the request is allocated to the other rules by priority

  • nginx.ingress.kubernetes.io/canary-by-cookie ― traffic segmentation based on a cookie, also suitable for gray releases or A/B testing. When the named cookie's value is always, the request is routed to the canary Ingress entry; when it is never, it is not; any other value is ignored and the request is allocated to the other rules by priority

Canary rules are evaluated in the following priority order: canary-by-header -> canary-by-cookie -> canary-weight

 

1. Small-scale version testing based on weight

  • v1 version orchestration file

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  labels:
    app: echoserverv1
  name: echoserverv1
  namespace: echoserver
spec:
  rules:
  - host: echo.chulinx.com
    http:
      paths:
      - backend:
          serviceName: echoserverv1
          servicePort: 8080
        path: /
---
kind: Service
apiVersion: v1
metadata:
  name:  echoserverv1
  namespace: echoserver
spec:
  selector:
    name:  echoserverv1
  type:  ClusterIP
  ports:
  - name:  echoserverv1
    port:  8080
    targetPort:  8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name:  echoserverv1
  namespace: echoserver
  labels:
    name:  echoserverv1
spec:
  template:
    metadata:
      labels:
        name:  echoserverv1
    spec:
      containers:
      - image:  mirrorgooglecontainers/echoserver:1.10
        name:  echoserverv1 
        ports:
        - containerPort:  8080
          name:  echoserverv1
  • View the resources created for the v1 version

$ [K8sSj] kubectl get pod,service,ingress -n echoserver
NAME                                READY   STATUS    RESTARTS   AGE
pod/echoserverv1-657b966cb5-7grqs   1/1     Running   0          24h

NAME                   TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
service/echoserverv1   ClusterIP   10.99.68.72   <none>        8080/TCP   24h

NAME                              HOSTS              ADDRESS   PORTS   AGE
ingress.extensions/echoserverv1   echo.chulinx.com             80      24h
  • When accessing the v1 service, you can see that all 10 requests land on the same pod, i.e. the v1 version of the service

$ [K8sSj] for i in `seq 10`;do curl -s echo.chulinx.com|grep Hostname;done
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
  • Create the v2 version of the service

We enable the canary feature and set the v2 version's weight to 50%. This percentage does not split requests precisely between the two versions; the actual split fluctuates around 50%

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "50"
  labels:
    app: echoserverv2
  name: echoserverv2
  namespace: echoserver
spec:
  rules:
  - host: echo.chulinx.com
    http:
      paths:
      - backend:
          serviceName: echoserverv2
          servicePort: 8080
        path: /
---
kind: Service
apiVersion: v1
metadata:
  name:  echoserverv2
  namespace: echoserver
spec:
  selector:
    name:  echoserverv2
  type:  ClusterIP
  ports:
  - name:  echoserverv2
    port:  8080
    targetPort:  8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name:  echoserverv2
  namespace: echoserver
  labels:
    name:  echoserverv2
spec:
  template:
    metadata:
      labels:
        name:  echoserverv2
    spec:
      containers:
      - image:  mirrorgooglecontainers/echoserver:1.10
        name:  echoserverv2 
        ports:
        - containerPort:  8080
          name:  echoserverv2
  • View the created resources again

$ [K8sSj] kubectl get pod,service,ingress -n echoserver
NAME                                READY   STATUS    RESTARTS   AGE
pod/echoserverv1-657b966cb5-7grqs   1/1     Running   0          24h
pod/echoserverv2-856bb5758-f9tqn    1/1     Running   0          4s

NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/echoserverv1   ClusterIP   10.99.68.72      <none>        8080/TCP   24h
service/echoserverv2   ClusterIP   10.111.103.170   <none>        8080/TCP   4s

NAME                              HOSTS              ADDRESS   PORTS   AGE
ingress.extensions/echoserverv1   echo.chulinx.com             80      24h
ingress.extensions/echoserverv2   echo.chulinx.com             80      4s
  • Access test

Four of the requests landed on v2 and six on v1. In theory, the more requests are made, the closer the share landing on v2 gets to the configured weight of 50%

$ [K8sSj] for i in `seq 10`;do curl -s echo.chulinx.com|grep Hostname;done
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
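
The fluctuation around the configured weight can be illustrated with a stand-alone simulation (plain shell and awk, independent of the cluster setup; the v1/v2 variables here are just counters): each request is routed to the canary independently with probability 0.5.

```shell
# Simulate 1000 requests against a 50% canary weight: each request
# independently goes to the canary (v2) with probability 0.5,
# mirroring per-request weighted routing.
v2=$(awk 'BEGIN { srand(); n = 0
  for (i = 0; i < 1000; i++) if (rand() < 0.5) n++
  print n }')
v1=$((1000 - v2))
echo "v1=$v1 v2=$v2"
```

Repeated runs land near, but rarely exactly on, a 500/500 split, which matches the uneven 6/4 result in the 10-request loop above.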

2. A/B test based on header

  • Change the orchestration file of v2 version

Add the annotation nginx.ingress.kubernetes.io/canary-by-header: "v2"

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "50"
    nginx.ingress.kubernetes.io/canary-by-header: "v2"
  labels:
    app: echoserverv2
  name: echoserverv2
  namespace: echoserver
spec:
  rules:
  - host: echo.chulinx.com
    http:
      paths:
      - backend:
          serviceName: echoserverv2
          servicePort: 8080
        path: /
---
kind: Service
apiVersion: v1
metadata:
  name:  echoserverv2
  namespace: echoserver
spec:
  selector:
    name:  echoserverv2
  type:  ClusterIP
  ports:
  - name:  echoserverv2
    port:  8080
    targetPort:  8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name:  echoserverv2
  namespace: echoserver
  labels:
    name:  echoserverv2
spec:
  template:
    metadata:
      labels:
        name:  echoserverv2
    spec:
      containers:
      - image:  mirrorgooglecontainers/echoserver:1.10
        name:  echoserverv2 
        ports:
        - containerPort:  8080
          name:  echoserverv2
  • Update access test

Three header values are tested: v2:always, v2:never and v2:true. When the header is v2:always, all traffic flows to v2; when it is v2:never, all traffic flows to v1; and when it is v2:true, i.e. anything other than always/never, traffic flows to each version according to the configured weight

$ [K8sSj] kubectl apply -f appv2.yml
ingress.extensions/echoserverv2 configured
service/echoserverv2 unchanged
deployment.extensions/echoserverv2 unchanged

$ [K8sSj] for i in `seq 10`;do curl -s -H "v2:always" echo.chulinx.com|grep Hostname;done
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn

$ [K8sSj] for i in `seq 10`;do curl -s -H "v2:never" echo.chulinx.com|grep Hostname;done
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs

$ [K8sSj] for i in `seq 10`;do curl -s -H "v2:true" echo.chulinx.com|grep Hostname;done
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
  • Custom header value

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "50"
    nginx.ingress.kubernetes.io/canary-by-header: "v2"
    nginx.ingress.kubernetes.io/canary-by-header-value: "true"
  labels:
    app: echoserverv2
  name: echoserverv2
  namespace: echoserver
spec:
  rules:
  - host: echo.chulinx.com
    http:
      paths:
      - backend:
          serviceName: echoserverv2
          servicePort: 8080
        path: /
---
kind: Service
apiVersion: v1
metadata:
  name:  echoserverv2
  namespace: echoserver
spec:
  selector:
    name:  echoserverv2
  type:  ClusterIP
  ports:
  - name:  echoserverv2
    port:  8080
    targetPort:  8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name:  echoserverv2
  namespace: echoserver
  labels:
    name:  echoserverv2
spec:
  template:
    metadata:
      labels:
        name:  echoserverv2
    spec:
      containers:
      - image:  mirrorgooglecontainers/echoserver:1.10
        name:  echoserverv2 
        ports:
        - containerPort:  8080
          name:  echoserverv2
  • Update test

With canary-by-header-value set, only a header of v2:true routes request traffic to the v2 version; any other header value (including always and never) no longer has a special meaning, and traffic flows to the versions according to the weight setting


$ [K8sSj] kubectl apply -f appv2.yml
ingress.extensions/echoserverv2 configured
service/echoserverv2 unchanged
deployment.extensions/echoserverv2 unchanged

$ [K8sSj] for i in `seq 10`;do curl -s -H "v2:true" echo.chulinx.com|grep Hostname;done
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn

$ [K8sSj] for i in `seq 10`;do curl -s -H "v2:always" echo.chulinx.com|grep Hostname;done
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn

$ [K8sSj] for i in `seq 10`;do curl -s -H "v2:never" echo.chulinx.com|grep Hostname;done
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
3. Traffic segmentation based on cookie

  • Access test

The behavior is the same as with the header-based rule, except that a cookie's match value cannot be customized (there is no canary-by-cookie-value annotation): only a cookie value of always routes traffic to the canary, and never routes it away
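
The updated manifest is not shown in the original at this step; judging from the curl commands below, the v2 Ingress was presumably switched to cookie-based routing with the canary-by-cookie annotation, roughly as follows (a sketch; the cookie name user_from_shanghai is inferred from the tests):

```yaml
# Hypothetical echoserverv2 Ingress for cookie-based canary routing.
# Only a user_from_shanghai cookie with value "always" is routed to
# the canary; other values fall through to the 50% weight rule.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "50"
    nginx.ingress.kubernetes.io/canary-by-cookie: "user_from_shanghai"
  labels:
    app: echoserverv2
  name: echoserverv2
  namespace: echoserver
spec:
  rules:
  - host: echo.chulinx.com
    http:
      paths:
      - backend:
          serviceName: echoserverv2
          servicePort: 8080
        path: /
```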

$ [K8sSj] kubectl apply -f appv2.yml
ingress.extensions/echoserverv2 configured
service/echoserverv2 unchanged
deployment.extensions/echoserverv2 unchanged

$ [K8sSj] for i in `seq 10`;do curl -s --cookie "user_from_shanghai" echo.chulinx.com|grep Hostname;done
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn

$ [K8sSj] for i in `seq 10`;do curl -s --cookie "user_from_shanghai:always" echo.chulinx.com|grep Hostname;done
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn

$ [K8sSj] for i in `seq 10`;do curl -s --cookie "user_from_shanghai=always" echo.chulinx.com|grep Hostname;done
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn

Summary

A gray release helps keep the overall system stable: while the new version only receives a small share of traffic, it can be tested and problems can be found and fixed, limiting their impact. The examples above walked through the canary annotations of ingress-nginx in detail; with their help, blue-green and canary releases are easy to implement

Other

About blue-green release, canary release, and A/B testing

  • Blue-green release

In a blue-green deployment there are two systems: one currently serving traffic, marked "green", and one ready for release, marked "blue". Both are fully functional, running systems; they differ only in version and in whether they serve external traffic.

At first, there is no system, and neither blue nor green exists.

Then the first system is developed and launched directly. There is only one system at this point, so there is still no blue or green.

Later, a new version is developed to replace the old one online. Alongside the system already serving traffic, a new system built from the new version's code is deployed, so two systems are running in total: the old one serving traffic is the green system, and the newly deployed one is the blue system.

The blue system does not serve external traffic. What is it for?

It is used for pre-release testing. Any problems found during testing can be fixed directly on the blue system without disturbing the system users are on. (Note that only when the two systems are decoupled can non-interference be fully guaranteed.)

After repeated testing, modification and verification, once the blue system is deemed to meet the release standard, users are switched over to it directly.

For a period after the switch, the blue and green systems still coexist, but users are now hitting the blue system. During this period, observe the working state of the blue (new) system; if a problem appears, switch straight back to the green system.

Once we are sure that the blue system serving external traffic works normally and the idle green system is no longer needed, the blue system officially becomes the serving system, i.e. the new green system. The original green system can then be destroyed, freeing resources for the deployment of the next blue system.

Blue-green deployment is only one release strategy, not a universal solution for every situation. The prerequisite for implementing it simply and quickly is that the target system is highly cohesive; if the target system is complex, questions such as how to switch over and whether the two systems' data need to be synchronized must be considered carefully.

  • Canary release

Canary release is another release strategy, of the same kind as the gray release commonly used in China. Where blue-green deployment prepares two systems and switches between them, the canary strategy keeps only one system and replaces it gradually

For example, suppose the target system is a very large group of stateless web servers, say 10,000 of them.
Blue-green deployment cannot be used here, because you cannot requisition 10,000 servers to deploy a blue system (by the definition of blue-green deployment, the blue system must be able to take all traffic).

One approach is:

Prepare only a few servers, deploy the new version of the system on them, and test and verify it. After the tests pass, rather than risk updating every server at once, first update 10 of the 10,000 online servers to the new system, then observe and verify. Once it is confirmed that there are no anomalies, update all the remaining servers.
This approach is the canary release.

In practice, more control can be applied: for example, give the 10 initially updated servers a lower weight to limit the requests sent to them, then gradually raise the weight and the request volume.
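
With ingress-nginx, that gradual ramp-up amounts to rewriting the canary-weight annotation in steps; a hedged sketch against the echoserverv2 example above (the step values and the fixed sleep are illustrative, not a recommendation):

```shell
# Hypothetical progressive rollout: raise the canary weight in steps,
# observing the new version at each stage before sending it more traffic.
for weight in 10 30 50 100; do
  kubectl annotate ingress echoserverv2 -n echoserver \
    "nginx.ingress.kubernetes.io/canary-weight=${weight}" --overwrite
  echo "canary weight now ${weight}%; watch metrics and logs before continuing"
  sleep 300   # in practice, gate on health checks rather than a timer
done
```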

This kind of control is called "traffic splitting"; it is used not only for canary releases but also for the A/B testing described next.
Blue-green deployment and canary release are both release strategies, and neither is omnipotent: sometimes either can be used, sometimes only one fits.

  • A/B test

First of all, be clear that A/B testing is completely different from blue-green deployment and canary release.

Blue-green deployment and canary release are release strategies: their goal is the stability of the newly launched system, and they focus on bugs and hidden risks in the new system.

An A/B test is an effectiveness test: multiple versions serve users at the same time, all of them sufficiently tested and meeting the release standard. They differ from one another, but not as old versus new (they may well have been launched via blue-green deployment).

A/B test focuses on the actual effect of different versions of services, such as conversion rate, order status, etc.

During A/B testing, multiple versions of services are running online at the same time. These services usually have some experience differences, such as different page styles, colors and operation processes. Relevant personnel select the best version by analyzing the actual effect of each version of the service.

In an A/B test you need to be able to control traffic allocation: for example, send 10% of traffic to version A, 10% to version B, and 80% to version C.
