7. Kubernetes - Service details

Posted by phpdragon on Fri, 04 Mar 2022 13:52:01 +0100

catalogue

1. Service introduction

2. Service type

3. Service usage

3.1 Preparation of the experimental environment

3.2 ClusterIP type Service

3.3 Headless type Service

3.4 NodePort type Service

3.5 LoadBalancer type Service

3.6 ExternalName type Service

4. Introduction to ingress

5. Use of ingress

5.1 Environment preparation - build an ingress environment

5.2 HTTP proxy

5.3 HTTPS proxy

1. Service introduction

In kubernetes, a pod is the carrier of an application: you can access the application through the pod's IP, but pod IP addresses are not fixed, which makes it inconvenient to access the service directly via pod IPs.

To solve this problem, kubernetes provides the Service resource. A Service aggregates multiple pods that provide the same service and exposes a single, unified entry address; the pods behind it can then be accessed through that entry address.


In many cases, a Service is only a concept; what really does the work is the kube-proxy process. A kube-proxy process runs on every Node. When a Service is created, its information is written to etcd through the API server; kube-proxy notices the change through its watch mechanism and converts the latest Service information into the corresponding access rules.

kube-proxy currently supports three working modes:

userspace mode

In userspace mode, kube-proxy creates a listening port for each Service. Requests sent to the Cluster IP are redirected by iptables rules to kube-proxy's listening port; kube-proxy then selects a backing Pod according to the LB algorithm, establishes a connection with it, and forwards the request. In this mode kube-proxy acts as a layer-4 load balancer. Because kube-proxy runs in user space, forwarding involves extra data copies between kernel space and user space; the mode is relatively stable but inefficient.

iptables mode

In iptables mode, kube-proxy creates iptables rules for each Pod behind the Service, redirecting requests sent to the Cluster IP directly to a Pod IP. In this mode kube-proxy does not act as a layer-4 load balancer; it is only responsible for creating the iptables rules. The advantage is higher efficiency than userspace mode, but it cannot provide a flexible LB strategy and cannot retry when a backend Pod is unavailable.

ipvs mode

ipvs mode is similar to iptables mode: kube-proxy watches for Pod changes and creates the corresponding ipvs rules. ipvs forwards more efficiently than iptables and also supports more LB algorithms.

#ipvs mode requires the ipvs kernel modules to be installed; otherwise kube-proxy falls back to iptables mode
#Turn on ipvs
[root@k8s-master01 ~]# kubectl edit cm kube-proxy -n kube-system
mode: "ipvs"   #Modify mode
[root@k8s-master01 ~]# kubectl delete pod -l k8s-app=kube-proxy -n kube-system
pod "kube-proxy-4qpj7" deleted
pod "kube-proxy-7s4bs" deleted
pod "kube-proxy-pxbkz" deleted
[root@k8s-master01 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 10.10.10.130:6443            Masq    1      0          0         
TCP  10.96.0.10:53 rr
  -> 10.244.0.4:53                Masq    1      0          0         
  -> 10.244.0.5:53                Masq    1      0          0         
TCP  10.96.0.10:9153 rr
  -> 10.244.0.4:9153              Masq    1      0          0         
  -> 10.244.0.5:9153              Masq    1      0          0         
TCP  10.102.96.95:443 rr
  -> 10.10.10.110:443             Masq    1      0          0         
UDP  10.96.0.10:53 rr
  -> 10.244.0.4:53                Masq    1      0          0         
  -> 10.244.0.5:53                Masq    1      0          0  
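#In the output above, each ClusterIP:Port line is an ipvs virtual server using the rr (round robin) scheduler, and the indented lines below it are the real backend (pod or node) addresses it forwards to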

2. Service type

Resource manifest file for Service:

kind: Service  # Resource type
apiVersion: v1  # Resource version
metadata: # metadata
  name: service # Resource name
  namespace: dev # Namespace
spec: # description
  selector: # Label selector, used to determine which pods the current service represents
    app: nginx
  type: # Service type, which specifies the access method of the service
  clusterIP:  # ip address of virtual service
  sessionAffinity: # session affinity, which supports ClientIP and None
  ports: # port information
    - protocol: TCP 
      port: 3017  # service port
      targetPort: 5003 # pod port
      nodePort: 31122 # Host port

The type field has four options:

  • ClusterIP: the default value; a virtual IP assigned by the Kubernetes system, accessible only from inside the cluster
  • NodePort: exposes the Service on a port of each Node, so that the Service can be accessed from outside the cluster
  • LoadBalancer: uses an external load balancer to distribute load to the Service; this mode requires the support of an external cloud environment
  • ExternalName: brings a service from outside the cluster into the cluster so that it can be used directly
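As an aside, a Service like the manifest above does not have to be written by hand; kubectl expose can generate one from an existing workload. A minimal sketch, assuming the pc-deployment Deployment that is created in section 3.1 below (the service name service-demo is just an example):

[root@k8s-master01 ~]# kubectl expose deployment pc-deployment --name=service-demo --type=ClusterIP --port=80 --target-port=80 -n dev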

3. Service usage

3.1 Preparation of the experimental environment

Before using a Service, first create three pods with a Deployment. Note that the pods must carry the label app=nginx-pod. Create deployment.yaml as follows:

[root@k8s-master01 ~]# vim deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pc-deployment
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.2
        ports:
        - containerPort: 80
#Create pod
[root@k8s-master01 ~]# kubectl  create -f deployment.yaml
deployment.apps/pc-deployment created
#View pod details
[root@k8s-master01 ~]# kubectl  get pods -n dev -o wide --show-labels
NAME                             READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES   LABELS
pc-deployment-6499c8d644-7khrf   1/1     Running   0          22s   10.244.2.9   k8s-node02   <none>           <none>            app=nginx-pod,pod-template-hash=6499c8d644
pc-deployment-6499c8d644-9bp6f   1/1     Running   0          22s   10.244.1.8   k8s-node01   <none>           <none>            app=nginx-pod,pod-template-hash=6499c8d644
pc-deployment-6499c8d644-p8qpw   1/1     Running   0          22s   10.244.2.8   k8s-node02   <none>           <none>            app=nginx-pod,pod-template-hash=6499c8d644

#For the convenience of later tests, change the index.html page of each of the three nginx pods to its own IP
# kubectl exec -it <pod-name> -n dev -- /bin/sh
# echo "<pod-ip>" > /usr/share/nginx/html/index.html
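For example, for the first pod listed above (the pod name and IP are taken from the kubectl get pods output; repeat the same for the other two pods, writing each pod's own IP):

[root@k8s-master01 ~]# kubectl exec -it pc-deployment-6499c8d644-9bp6f -n dev -- /bin/sh
# echo "10.244.1.8" > /usr/share/nginx/html/index.html
# exit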

#Check whether the modification is successful
[root@k8s-master01 ~]# curl 10.244.1.8
10.244.1.8
[root@k8s-master01 ~]# curl 10.244.2.9
10.244.2.9
[root@k8s-master01 ~]# curl 10.244.2.8
10.244.2.8

3.2 ClusterIP type Service

Create service-clusterip.yaml

[root@k8s-master01 ~]# vim service-clusterip.yaml
apiVersion: v1
kind: Service
metadata:
  name: service-clusterip
  namespace: dev
spec:
  selector:
    app: nginx-pod
  clusterIP:       #If the ip address of the service is not written, it will generate one by default
  type: ClusterIP
  ports:
  - port: 80 #Service port
    targetPort: 80 #pod port
#Create service
[root@k8s-master01 ~]# kubectl create -f service-clusterip.yaml 
service/service-clusterip created
#View service
[root@k8s-master01 ~]# kubectl get svc -n dev -o wide
NAME                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service-clusterip   ClusterIP   10.98.88.209   <none>        80/TCP    10s   app=nginx-pod

#View service details
#The Endpoints listed here are the pod addresses that this Service load-balances across
[root@k8s-master01 ~]# kubectl describe svc service-clusterip -n dev
Name:              service-clusterip
Namespace:         dev
Labels:            <none>
Annotations:       <none>
Selector:          app=nginx-pod
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.98.88.209
IPs:               10.98.88.209
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.8:80,10.244.2.8:80,10.244.2.9:80
Session Affinity:  None
Events:            <none>

#View the mapping rules of ipvs
[root@k8s-master01 ~]# ipvsadm -Ln
TCP  10.98.88.209:80 rr
  -> 10.244.1.8:80                Masq    1      0          0         
  -> 10.244.2.8:80                Masq    1      0          0         
  -> 10.244.2.9:80                Masq    1      0          0        
#Visit 10.98.88.209 to observe the effect
[root@k8s-master01 ~]# curl 10.98.88.209
10.244.2.9

Endpoint

Endpoint is a resource object in Kubernetes, stored in etcd, that records the access addresses of all the pods backing a Service. It is generated according to the selector in the Service configuration file.

A Service is backed by a group of pods that are exposed through Endpoints; the Endpoints object is the collection of endpoints that actually implement the service. In other words, the connection between a Service and its pods is realized through Endpoints.
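The Endpoints object can be inspected directly; for the service-clusterip Service created above, the following command should list the same three pod addresses that appear in the describe output:

[root@k8s-master01 ~]# kubectl get endpoints service-clusterip -n dev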

Load distribution policy

Access to a Service is distributed to the backend Pods. Kubernetes currently provides two load distribution strategies:

  • If nothing is defined, kube-proxy's own policy is used by default, e.g. random or round-robin
  • Session affinity based on the client address, i.e. all requests from the same client are forwarded to a fixed Pod. This mode is enabled by adding the 'sessionAffinity: ClientIP' option to the spec (see the sketch further below)
# View the mapping rules of ipvs [rr polling]
[root@k8s-master01 ~]# ipvsadm -Ln
TCP  10.98.88.209:80 rr
  -> 10.244.1.8:80                Masq    1      0          0         
  -> 10.244.2.8:80                Masq    1      0          0         
  -> 10.244.2.9:80                Masq    1      0          0        

#Cyclic access test
[root@k8s-master01 ~]# while true;do curl 10.98.88.209;sleep 5;done;
10.244.2.9
10.244.2.8
10.244.1.8
10.244.2.9
10.244.2.8
10.244.1.8

#Modify the distribution policy: add sessionAffinity: ClientIP to the spec and re-apply the service

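A minimal sketch of the modified spec, assuming the service-clusterip.yaml created above (only the sessionAffinity line is new; apply it with kubectl apply -f service-clusterip.yaml, or add the field directly with kubectl edit svc service-clusterip -n dev):

spec:
  selector:
    app: nginx-pod
  sessionAffinity: ClientIP  #Forward all requests from the same client IP to the same pod
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80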
#View the ipvs rules again ["persistent 10800" below shows that session affinity is in effect]
[root@k8s-master01 ~]# ipvsadm -Ln
TCP  10.98.88.209:80 rr persistent 10800
  -> 10.244.1.8:80                Masq    1      0          0         
  -> 10.244.2.8:80                Masq    1      0          0         
  -> 10.244.2.9:80                Masq    1      0          0       

#Cyclic access test
[root@k8s-master01 ~]# while true;do curl 10.98.88.209;sleep 5;done;
10.244.2.9
10.244.2.9
10.244.2.9

#Delete the service
[root@k8s-master01 ~]# kubectl  delete -f service-clusterip.yaml 
service "service-clusterip" deleted

3.3 Headless type Service

In some scenarios, developers may not want to use the load balancing provided by a Service and would rather control the load balancing strategy themselves. For these cases, kubernetes provides the Headless Service. A Headless Service is not assigned a Cluster IP, and it can only be accessed by querying the Service's domain name.

Create service-headliness.yaml

[root@k8s-master01 ~]# vim service-headliness.yaml
apiVersion: v1
kind: Service
metadata:
  name: service-headliness
  namespace: dev
spec:
  selector:
    app: nginx-pod
  clusterIP: None  #Set clusterIP to None to create a headless service
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80

#Create service
[root@k8s-master01 ~]# kubectl  create -f service-headliness.yaml 
service/service-headliness created
#Get the service and find that CLUSTER-IP is not allocated
[root@k8s-master01 ~]# kubectl  get svc service-headliness -n dev -o wide
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service-headliness   ClusterIP   None         <none>        80/TCP    17s   app=nginx-pod
#View service details
[root@k8s-master01 ~]# kubectl  describe svc service-headliness -n dev 
Name:              service-headliness
Namespace:         dev
Labels:            <none>
Annotations:       <none>
Selector:          app=nginx-pod
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                None
IPs:               None
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.8:80,10.244.2.8:80,10.244.2.9:80
Session Affinity:  None
Events:            <none>

#Check the resolution of the domain name
[root@k8s-master01 ~]# kubectl  exec -it pc-deployment-6499c8d644-7khrf -n dev /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@pc-deployment-6499c8d644-7khrf:/# cat /etc/resolv.conf 
nameserver 10.96.0.10
search dev.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
root@pc-deployment-6499c8d644-7khrf:/# exit
exit
[root@k8s-master01 ~]# dig @10.96.0.10 service-headliness.dev.svc.cluster.local
;; ANSWER SECTION:
service-headliness.dev.svc.cluster.local. 30 IN	A 10.244.2.9
service-headliness.dev.svc.cluster.local. 30 IN	A 10.244.1.8
service-headliness.dev.svc.cluster.local. 30 IN	A 10.244.2.8

3.4 NodePort type Service

In the previous cases, the IP of the created Service can only be reached from inside the cluster. To expose a Service outside the cluster, you need another Service type: NodePort. NodePort works by mapping the Service's port onto a port of the Node, after which the Service can be accessed via NodeIP:NodePort.

Create service-nodeport.yaml

[root@k8s-master01 ~]# vim service-nodeport.yaml 
apiVersion: v1
kind: Service
metadata:
  name: service-nodeport
  namespace: dev
spec:
  selector:
    app: nginx-pod
  type: NodePort #service type
  ports:
  - port: 80
    nodePort: 30003 #Specify the port of the bound node (the default value range is 30000-32767). If it is not specified, it will be assigned by default
    targetPort: 80
#Create service
[root@k8s-master01 ~]# kubectl create -f service-nodeport.yaml 
service/service-nodeport created
#View service
[root@k8s-master01 ~]# kubectl  get svc -n dev -o wide
NAME               TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE   SELECTOR
service-nodeport   NodePort   10.102.216.64   <none>        80:30003/TCP   25s   app=nginx-pod
#Now you can reach the pods from a browser on the host machine via port 30003 on any node IP in the cluster
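For a quick check from the host instead of a browser (the node IP below is a placeholder; substitute the address of any of your nodes):

# curl http://<node-ip>:30003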


3.5 LoadBalancer type Service

LoadBalancer is similar to NodePort: the purpose is to expose a port outside the cluster. The difference is that LoadBalancer places a load balancing device outside the cluster; this device requires support from the external (cloud) environment, and requests sent to it are load-balanced and forwarded into the cluster.
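A minimal sketch of such a Service, assuming a cloud environment whose controller can actually provision the external load balancer (the name service-loadbalancer is just an example; without cloud support the EXTERNAL-IP will stay pending):

apiVersion: v1
kind: Service
metadata:
  name: service-loadbalancer
  namespace: dev
spec:
  selector:
    app: nginx-pod
  type: LoadBalancer  #The external environment provisions a load balancer and routes it to the node ports
  ports:
  - port: 80
    targetPort: 80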

3.6 ExternalName type Service

An ExternalName type Service is used to bring a service from outside the cluster into the cluster: its externalName attribute specifies the address of an external service, and accessing this Service from inside the cluster reaches the external resource.

[root@k8s-master01 ~]# vim  service-externalname.yaml
apiVersion: v1
kind: Service
metadata:
  name: service-externalname
  namespace: dev
spec:
  type: ExternalName #service type
  externalName: www.baidu.com #The DNS name of the external service (using an IP address here is not recommended, since the value is published as a CNAME record)
#Create the service
[root@k8s-master01 ~]# kubectl create -f service-externalname.yaml 
service/service-externalname created
#Domain name resolution
[root@k8s-master01 ~]# dig @10.96.0.10 service-externalname.dev.svc.cluster.local
www.baidu.com.		30	IN	CNAME	www.a.shifen.com.
www.a.shifen.com.	30	IN	A	110.242.68.4
www.a.shifen.com.	30	IN	A	110.242.68.3

4. Introduction to ingress

There are two main ways for a Service to expose applications outside the cluster, NodePort and LoadBalancer, but both have certain drawbacks:

  • The drawback of NodePort is that it occupies a port on the cluster machines for every exposed service; the more services there are, the more obvious this becomes
  • The drawback of LoadBalancer is that every service needs its own LB, which is wasteful and cumbersome, and it requires support from devices outside of kubernetes

Given this situation, kubernetes provides the Ingress resource object. Ingress needs only one NodePort or one LB to expose many services. Its working mechanism is roughly as follows:

In fact, ingress is the equivalent of a layer-7 load balancer; it is Kubernetes' abstraction of a reverse proxy, and its working principle is similar to Nginx. You can think of it as establishing many mapping rules in Ingress objects; the Ingress Controller watches these rules, converts them into Nginx's reverse proxy configuration, and then serves traffic. There are two core concepts here:

  • Ingress: an object in kubernetes that defines the rules for how requests are forwarded to Services
  • Ingress controller: the concrete program that implements the reverse proxying and load balancing. It parses the rules defined by Ingress objects and forwards requests accordingly. There are many implementations, such as Nginx, Contour, HAProxy and so on

The working principle of Ingress (taking Nginx as an example) is as follows:

  1. The user writes an Ingress rule to specify which domain name corresponds to which Service in the kubernetes cluster

  2. The Ingress controller dynamically senses the changes of Ingress service rules, and then generates a corresponding Nginx reverse proxy configuration

  3. The Ingress controller will write the generated Nginx configuration to a running Nginx service and update it dynamically

  4. So far, what is really working is an Nginx, which is internally configured with user-defined request forwarding rules

5. Use of ingress

5.1 Environment preparation - build an ingress environment

#Download yaml file
[root@k8s-master01 ingress-nginx]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.0/deploy/static/provider/cloud/deploy.yaml

#The official images are hosted abroad and can be hard to pull directly; pull the mirrored images below from a domestic registry instead (do this on all three nodes)
[root@k8s-master01 ingress-nginx]# docker pull dyrnq/controller:v1.1.0
[root@k8s-master01 ingress-nginx]# docker pull dyrnq/kube-webhook-certgen:v1.1.1

#Modify the deploy.yaml configuration file, changing the image addresses to the mirrored images pulled above
image: dyrnq/kube-webhook-certgen:v1.1.1
image: dyrnq/controller:v1.1.0

#The ingress-nginx-controller Service in deploy.yaml also needs to be modified (change its type to NodePort and pin the node ports):
spec:
  type: NodePort
  externalTrafficPolicy: Cluster
  ipFamilyPolicy: SingleStack
  ipFamilies:
    - IPv4
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      appProtocol: http
      nodePort: 30080
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
      appProtocol: https
      nodePort: 30443

#Create ingress-nginx
[root@k8s-master01 ingress-nginx]# kubectl  apply -f deploy.yaml 
#Check whether the creation is successful
[root@k8s-master01 ingress-nginx]# kubectl get svc -n ingress-nginx 
NAME                                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.105.40.71   <none>        80:30080/TCP,443:30443/TCP   6m40s
ingress-nginx-controller-admission   ClusterIP   10.97.46.45    <none>        443/TCP                      6m40s
[root@k8s-master01 ingress-nginx]# kubectl get po -n ingress-nginx
NAME                                        READY   STATUS             RESTARTS   AGE
ingress-nginx-admission-create-67sv5        0/1     Completed          0          8m7s
ingress-nginx-admission-patch-jczwb         0/1     ImagePullBackOff   0          8m7s
ingress-nginx-controller-67f6979dd8-7gpmj   1/1     Running            0          8m7s
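#Note: the ingress-nginx-admission-patch job above is stuck in ImagePullBackOff, which usually means the replacement image was not available on the node it was scheduled to; making sure dyrnq/kube-webhook-certgen:v1.1.1 has been pulled on every node (and is referenced in deploy.yaml) should let the job complete on its next retry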

Prepare service and pod

Create tomcat-nginx.yaml

[root@k8s-master01 ~]# vim tomcat-nginx.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.2
        ports:
        - containerPort: 80

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deployment
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tomcat-pod
  template:
    metadata:
      labels:
        app: tomcat-pod
    spec:
      containers:
      - name: tomcat
        image: tomcat:8.5-jre10-slim
        ports:
        - containerPort: 8080

---

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: dev
spec:
  selector:
    app: nginx-pod
  clusterIP: None
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80

---

apiVersion: v1
kind: Service
metadata:
  name: tomcat-service
  namespace: dev
spec:
  selector:
    app: tomcat-pod
  clusterIP: None
  type: ClusterIP
  ports:
  - port: 8080
    targetPort: 8080

#Create
[root@k8s-master01 ~]# kubectl create -f tomcat-nginx.yaml
deployment.apps/nginx-deployment created
deployment.apps/tomcat-deployment created
service/nginx-service created
service/tomcat-service created
#View
[root@k8s-master01 ~]# kubectl  get svc -n dev
NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
nginx-service    ClusterIP   None         <none>        80/TCP     16s
tomcat-service   ClusterIP   None         <none>        8080/TCP   16s

5.2 HTTP proxy

Create ingress-http.yaml

[root@k8s-master01 ~]# vim ingress-http.yaml 
apiVersion:  networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-http
  namespace: dev
spec:
  ingressClassName: nginx
  rules:
  - host: nginx.xiaohan.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80
  - host: tomcat.xiaohan.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tomcat-service
            port: 
              number: 8080


#Create
[root@k8s-master01 ~]# kubectl create -f ingress-http.yaml
ingress.networking.k8s.io/ingress-http created
#View
[root@k8s-master01 ~]# kubectl get ing ingress-http -n dev
NAME           CLASS   HOSTS                                  ADDRESS        PORTS   AGE
ingress-http   nginx   nginx.xiaohan.com,tomcat.xiaohan.com   10.105.40.71   80      92s
[root@k8s-master01 ~]# kubectl describe ing ingress-http  -n dev
Name:             ingress-http
Labels:           <none>
Namespace:        dev
Address:          10.105.40.71
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host                Path  Backends
  ----                ----  --------
  nginx.xiaohan.com   
                      /   nginx-service:80 (10.244.1.2:80,10.244.1.6:80,10.244.1.7:80 + 3 more...)
  tomcat.xiaohan.com  
                      /   tomcat-service:8080 (10.244.1.5:8080,10.244.1.8:8080,10.244.2.6:8080)
Annotations:          <none>
Events:
  Type    Reason  Age                 From                      Message
  ----    ------  ----                ----                      -------
  Normal  Sync    86s (x2 over 118s)  nginx-ingress-controller  Scheduled for sync


# Next, add entries to the hosts file on your local computer, resolving the two domain names above to the IP of a cluster node (e.g. the master)
# Then you can open nginx.xiaohan.com:30080 and tomcat.xiaohan.com:30080 in a browser to see the effect
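Alternatively, the rules can be tested without touching the hosts file by sending the Host header explicitly (the node IP below is a placeholder for one of your own nodes):

# curl -H "Host: nginx.xiaohan.com" http://<node-ip>:30080
# curl -H "Host: tomcat.xiaohan.com" http://<node-ip>:30080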

5.3 HTTPS proxy

Create certificate

#Generate certificate
[root@k8s-master01 ~]# openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/C=CN/ST=BJ/L=BJ/O=nginx/CN=xiaohan.com"
Generating a 2048 bit RSA private key
...............................................................................+++
...................................+++
writing new private key to 'tls.key'
-----
#Create the tls secret
[root@k8s-master01 ~]# kubectl create secret tls tls-secret --key tls.key --cert tls.crt
secret/tls-secret created

Create ingress-https.yaml

[root@k8s-master01 ~]# cat ingress-https.yaml 
apiVersion:  networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-https
  namespace: dev
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - nginx.xiaohan.com
        - tomcat.xiaohan.com
      secretName: tls-secret #Specify the secret that holds the certificate
  rules:
  - host: nginx.xiaohan.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80
  - host: tomcat.xiaohan.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tomcat-service
            port: 
              number: 8080
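To round off the example, the HTTPS ingress can be created and tested the same way as the HTTP one; the curl below assumes the hosts entries from section 5.2 and the 30443 nodePort configured earlier, and -k skips verification of the self-signed certificate:

#Create
[root@k8s-master01 ~]# kubectl create -f ingress-https.yaml
#View
[root@k8s-master01 ~]# kubectl get ing ingress-https -n dev
#Access over HTTPS
[root@k8s-master01 ~]# curl -k https://nginx.xiaohan.com:30443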

Topics: Docker Kubernetes Container