Kubernetes Advanced: ingress-nginx
Catalog:
First, the best way to expose applications to external access
Second, configuration management
Third, data volumes and persistent data volumes
Fourth, stateful application deployment revisited
Fifth, the K8s security mechanism
First of all, if you choose NodePort to expose a service, you need to check whether the port you want to expose is already in use; every newly created application must be allocated a free port. NodePort itself relies on the default iptables proxy mode for network forwarding, i.e. SNAT/DNAT. It works only at Layer 4 and cannot do Layer 7, and its performance is relatively poor because every packet has to pass through iptables forwarding and filtering.
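For reference, exposing an application this way uses a Service of type NodePort. A minimal sketch, where the app: web selector and the port numbers are illustrative values, not taken from the examples below:

apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web              # must match the Pods' labels
  ports:
  - port: 80              # the Service's cluster port
    targetPort: 80        # the container port
    nodePort: 30080       # must be free on every node; default range 30000-32767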
First, the best way to expose applications to external access
- The relationship between Pod and Ingress
  Connected through a Service
  Pods are load balanced through the Ingress Controller
- Supports TCP/UDP at Layer 4 and HTTP at Layer 7
- Ingress Controller
Like other k8s controllers, the Ingress Controller constantly interacts with the API server: it watches for Ingress-related information and refreshes its own rules accordingly, much like the other built-in controllers.
With Ingress, k8s provides something close to a global load balancer. To be precise, an Ingress is just a rule in k8s; what implements that rule is a controller, commonly known as the Ingress Controller.
The Ingress Controller's main job is to handle the traffic that reaches it and forward it to the right Pods: it knows which applications the rules are associated with and which Pod IPs back them, and it exposes ports 80 and 443 to the outside.
1. Deploying the Ingress Controller
Deployment documentation: https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md
2. Create Ingress rules: expose a port and a domain name for your application, and let users reach it through the Ingress Controller.
3. Choosing the controller type
https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/
Notes:
* The image address is changed to a domestic mirror: zhaocheng172/nginx-ingress-controller:0.20.0
* The host network is used: hostNetwork: true

[root@k8s-master demo]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
[root@k8s-master demo]# kubectl apply -f mandatory.yaml
[root@k8s-master demo]# kubectl get pod -n ingress-nginx
NAME                                        READY   STATUS    RESTARTS   AGE
nginx-ingress-controller-5654f58c87-r5vcq   1/1     Running   0          46s
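Before the kubectl apply above, the two notes translate into edits like the following inside the controller's Pod template in mandatory.yaml (a fragment only; the surrounding fields of the stock manifest are unchanged and elided here):

spec:
  template:
    spec:
      hostNetwork: true   # bind directly to the node's ports 80/443
      containers:
      - name: nginx-ingress-controller
        image: zhaocheng172/nginx-ingress-controller:0.20.0   # domestic mirror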
The controller Pod was scheduled to node2; there we can use netstat to check the ports 80/443 it listens on.
[root@k8s-master demo]# kubectl get pod -n ingress-nginx -o wide
NAME                                        READY   STATUS    RESTARTS   AGE     IP              NODE        NOMINATED NODE   READINESS GATES
nginx-ingress-controller-5654f58c87-r5vcq   1/1     Running   0          3m51s   192.168.30.23   k8s-node2   <none>           <none>
[root@k8s-master demo]# vim ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: www.dagouzi.com
    http:
      paths:
      - backend:
          serviceName: deployment-service
          servicePort: 80
[root@k8s-master demo]# kubectl create -f ingress.yaml
[root@k8s-master demo]# kubectl get ingress -o wide
NAME              HOSTS             ADDRESS   PORTS   AGE
example-ingress   www.dagouzi.com             80      49m
To test access, I added an entry to my hosts file; with real DNS, resolution would likewise point the domain at the ingress IP.
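A hosts entry like this is enough for the test (192.168.30.23 is k8s-node2's IP from the kubectl get pod -o wide output above):

192.168.30.23 www.dagouzi.com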
With this deployment, ingress-nginx runs on only one node; if that ingress-nginx instance goes down, our application services become unreachable.
To solve this we could simply scale up the replicas, but the DaemonSet form fits better: each of our nodes starts one Pod, and the replicas field is deleted because a DaemonSet has no need for it. The change is sketched below.
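The edit to mandatory.yaml looks roughly like this (a sketch: only the fields that change are shown; the selector labels and the full Pod template stay exactly as in the stock manifest):

apiVersion: apps/v1
kind: DaemonSet           # was: kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  # the replicas field is removed; a DaemonSet runs one Pod per eligible node
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  template:
    # ... unchanged Pod template (hostNetwork: true, image, args) ...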
The previous resources need to be deleted before the modified manifest can take effect:

[root@k8s-master demo]# kubectl delete -f mandatory.yaml
[root@k8s-master demo]# kubectl get pod -n ingress-nginx -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP              NODE        NOMINATED NODE   READINESS GATES
nginx-ingress-controller-4s5ck   1/1     Running   0          38s   192.168.30.22   k8s-node1   <none>           <none>
nginx-ingress-controller-85rlq   1/1     Running   0          38s   192.168.30.23   k8s-node2   <none>           <none>
Checking the listening ports, both node1 and node2 are now serving, but a setup like this is better suited to small clusters.
In production we would typically run a pair of Layer 4 load balancers in front of the DaemonSet-based controllers, like this:
User --> LB (nginx VM / LVS / HAProxy) --> node1/node2 IP (polled by the chosen algorithm) --> Pod

[root@k8s-node1 ~]# netstat -anpt | grep 80
tcp        0      0 0.0.0.0:18080           0.0.0.0:*               LISTEN      63219/nginx: master
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      63219/nginx: master
tcp        0      0 127.0.0.1:33680         127.0.0.1:18080         TIME_WAIT   -
tcp        0      0 127.0.0.1:33700         127.0.0.1:18080         TIME_WAIT   -
tcp        0      0 127.0.0.1:33696         127.0.0.1:18080         TIME_WAIT   -
tcp        0      0 127.0.0.1:33690         127.0.0.1:18080         TIME_WAIT   -
tcp        0      0 127.0.0.1:18080         127.0.0.1:33580         TIME_WAIT   -
tcp        0      0 127.0.0.1:33670         127.0.0.1:18080         TIME_WAIT   -
tcp        0      0 127.0.0.1:33660         127.0.0.1:18080         TIME_WAIT   -
tcp        0      0 127.0.0.1:33676         127.0.0.1:18080         TIME_WAIT   -
tcp        0      0 127.0.0.1:33666         127.0.0.1:18080         TIME_WAIT   -
tcp        0      0 127.0.0.1:33686         127.0.0.1:18080         TIME_WAIT   -
tcp        0      0 127.0.0.1:33656         127.0.0.1:18080         TIME_WAIT   -
tcp6       0      0 :::18080                :::*                    LISTEN      63219/nginx: master
tcp6       0      0 :::80                   :::*                    LISTEN      63219/nginx: master
[root@k8s-node1 ~]# netstat -anpt | grep 443
tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      63219/nginx: master
tcp        0      0 192.168.30.22:34798     192.168.30.21:6443      ESTABLISHED 1992/kube-proxy
tcp        0      0 192.168.30.22:44344     10.1.0.1:443            ESTABLISHED 6556/flanneld
tcp        0      0 192.168.30.22:44872     192.168.30.21:6443      ESTABLISHED 1718/kubelet
tcp        0      0 192.168.30.22:58774     10.1.0.1:443            ESTABLISHED 63193/nginx-ingress
tcp6       0      0 :::443                  :::*                    LISTEN      63219/nginx: master
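A minimal sketch of such a front end using the nginx stream module (the node IPs come from this cluster's output above; the upstream names are illustrative):

stream {
    upstream ingress_http {
        server 192.168.30.22:80;    # k8s-node1
        server 192.168.30.23:80;    # k8s-node2
    }
    upstream ingress_https {
        server 192.168.30.22:443;
        server 192.168.30.23:443;
    }
    server {
        listen 80;
        proxy_pass ingress_http;    # round-robin by default
    }
    server {
        listen 443;
        proxy_pass ingress_https;   # TLS is terminated by the ingress controller, not here
    }
}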
Access over HTTPS
[root@k8s-master cert]# cat cfssl.sh
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
[root@k8s-master cert]# sh cfssl.sh
[root@k8s-master cert]# ls
certs.sh  cfssl.sh
[root@k8s-master cert]# chmod +x certs.sh
[root@k8s-master cert]# sh certs.sh
This generates the certificates: a key file and a pem file for our domain name.
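certs.sh itself is not reproduced here; a minimal sketch of such a script with cfssl, assuming a self-signed CA and producing the file names listed below:

cat > ca-config.json <<EOF
{
  "signing": {
    "default": { "expiry": "87600h" },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}
EOF
cat > ca-csr.json <<EOF
{ "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 } }
EOF
# Create the self-signed CA: ca.pem / ca-key.pem / ca.csr
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

cat > blog.ctnrs.com-csr.json <<EOF
{ "CN": "blog.ctnrs.com", "hosts": ["blog.ctnrs.com"], "key": { "algo": "rsa", "size": 2048 } }
EOF
# Issue the server certificate for the domain: blog.ctnrs.com.pem / blog.ctnrs.com-key.pem
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json \
  -profile=kubernetes blog.ctnrs.com-csr.json | cfssljson -bare blog.ctnrs.com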
[root@k8s-master cert]# ls
blog.ctnrs.com.csr       blog.ctnrs.com-key.pem  ca-config.json  ca-csr.json  ca.pem    cfssl.sh
blog.ctnrs.com-csr.json  blog.ctnrs.com.pem      ca.csr          ca-key.pem   certs.sh
Load the key and certificate into k8s as a Secret; the Ingress will reference it:
[root@k8s-master cert]# kubectl create secret tls blog-ctnrs-com --cert=blog.ctnrs.com.pem --key=blog.ctnrs.com-key.pem
[root@k8s-master cert]# kubectl get secret
NAME                  TYPE                                  DATA   AGE
blog-ctnrs-com        kubernetes.io/tls                     2      3m1s
default-token-m6b7h   kubernetes.io/service-account-token   3      9d
[root@k8s-master demo]# vim ingress-https.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tls-example-ingress
spec:
  tls:
  - hosts:
    - blog.ctnrs.com
    secretName: blog-ctnrs-com
  rules:
  - host: blog.ctnrs.com
    http:
      paths:
      - path: /
        backend:
          serviceName: deployment-service
          servicePort: 80
[root@k8s-master demo]# kubectl create -f ingress-https.yaml
ingress.extensions/tls-example-ingress created
[root@k8s-master demo]# kubectl get ingress
NAME                  HOSTS             ADDRESS   PORTS     AGE
example-ingress       www.dagouzi.com             80        3h26m
tls-example-ingress   blog.ctnrs.com              80, 443   5s
The browser reports the connection as unsafe because we are authenticating with a self-signed certificate. If we replace it with a purchased certificate, the site can be visited normally.
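To verify without touching the hosts file, curl can pin the name resolution and skip verification of the self-signed certificate (assuming 192.168.30.23 is one of the nodes running the controller):

curl -k --resolve blog.ctnrs.com:443:192.168.30.23 https://blog.ctnrs.com/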
Summary:
Two ways of exposing external access:
1. User --> LB (external load balancer + keepalived) --> Ingress Controller (node1/node2) --> Pod
2. User --> node (VIP: Ingress Controller + keepalived backup) --> Pod
Ingress (http/https) --> Service --> Pod