1, Enterprise scheduler LVS (Linux Virtual Server)
- Cluster concept
- LVS model
- LVS scheduling algorithm
- LVS implementation
1.1 Cluster and distributed systems
There are two ways to scale system performance:
- Scale up: vertical scaling; run the same services on a machine with stronger hardware
- Scale out: horizontal scaling; add more machines and run multiple service instances in parallel
1.2 Cluster
- LB: Load Balancing cluster. It offers some degree of high availability, but it is not a high availability cluster; its fundamental focus is to increase the concurrent processing capacity of a service
- HA: High Availability cluster (increases service availability). Its goal is to keep the service online at all times, so that the service does not become unavailable when a node goes down
- SLA: Service Level Agreement. A mutually recognized agreement between a service provider and its users that guarantees the performance and availability of the service at a certain cost. This cost is usually the main factor driving the quality of the service provided. Conventionally the level is expressed as "three nines", "four nines", and so on; when the agreed level is not met, a series of penalty measures apply. Meeting this service level is the main goal of operations work.
```
1 year = 365 days = 8760 hours
99.9%   availability: downtime = 8760 * 0.1%   = 8.76 hours
99.99%  availability: downtime = 8760 * 0.01%  = 0.876 hours  = 52.56 minutes
99.999% availability: downtime = 8760 * 0.001% = 0.0876 hours = 5.256 minutes
```
Downtime falls into two categories: planned downtime and unplanned downtime. Operations work focuses mainly on unplanned downtime.
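As a cross-check of the figures above, here is a minimal shell sketch (assuming awk is available) that computes the allowed yearly downtime for each availability level:

```
#!/bin/bash
# Allowed downtime per year for a given availability percentage:
# downtime_hours = 8760 * (1 - availability/100)
for sla in 99.9 99.99 99.999; do
    awk -v sla="$sla" 'BEGIN {
        hours = 8760 * (1 - sla / 100)
        printf "%s%%: %.4f hours = %.2f minutes per year\n", sla, hours, hours * 60
    }'
done
```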
- HPC: high performance computing
1.3 LB cluster: load balancing cluster
- A load balancing cluster improves the responsiveness of the application system: it processes as many access requests as possible while reducing latency, yielding high concurrency and high load capacity overall (LB)
- How the load is distributed depends on the scheduling algorithm of the master (director) node
1.3.1 classification by implementation mode
- Hardware
- F5 BIG-IP
- Citrix NetScaler
- A10
- Software
- lvs: Linux Virtual Server; Alibaba's layer-4 SLB (Server Load Balancer) is built on it
- nginx: supports layer-7 scheduling; Alibaba's layer-7 SLB uses Tengine
- haproxy: supports layer-7 scheduling
1.3.2 classification by protocol layer
- Transport layer (general purpose): DNAT and DPORT
    - LVS
    - nginx: stream module (see the configuration sketch after this list)
    - haproxy: mode tcp
- Application layer (dedicated): targets a specific protocol, often called a proxy server
    - http: nginx, httpd, haproxy (mode http), ...
    - fastcgi: nginx, httpd, ...
    - mysql: mysql-proxy, ...
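To make the layer-4 vs. layer-7 distinction concrete, here is a hedged nginx sketch: the stream context relays raw TCP (transport layer), while a server in the http context parses HTTP requests (application layer). The paths, ports, and upstream addresses are illustrative assumptions, not part of the lab:

```
# Layer 7 (http): droppable into /etc/nginx/conf.d/, which the default
# nginx.conf includes inside its http {} context.
cat > /etc/nginx/conf.d/l7_demo.conf <<'EOF'
upstream web_pool {
    server 192.168.1.161:80;      # hypothetical back ends
    server 192.168.1.162:80;
}
server {
    listen 8080;
    location / {
        proxy_pass http://web_pool;   # nginx parses the HTTP request (layer 7)
    }
}
EOF

# Layer 4 (stream): must sit at the top level of nginx.conf, outside http {}.
cat >> /etc/nginx/nginx.conf <<'EOF'
stream {
    upstream tcp_pool {
        server 192.168.1.161:3306;    # hypothetical back ends
        server 192.168.1.162:3306;
    }
    server {
        listen 13306;
        proxy_pass tcp_pool;          # bytes relayed without parsing (layer 4)
    }
}
EOF
nginx -t && systemctl reload nginx    # validate, then reload
```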
1.3.3 session persistence for load balancing
- session sticky: the same user is always scheduled to a fixed server, keyed by source IP (the LVS sh algorithm, for a specific service) or by cookie; see the ipvsadm sketch after this list
- session replication: every server holds all sessions (session multicast cluster)
- session server: a dedicated session store (Memcached, Redis)
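For the source-IP flavor, LVS offers both the sh (source hashing) scheduler and the -p (persistence) option; a minimal sketch, borrowing the VIP and RIPs from the lab in section 3:

```
# Source-hash scheduler: a given client IP always maps to the same real server
ipvsadm -A -t 192.168.2.151:80 -s sh
ipvsadm -a -t 192.168.2.151:80 -r 192.168.1.161 -m
ipvsadm -a -t 192.168.2.151:80 -r 192.168.1.162 -m

# Alternative: keep round robin, but pin each client to its server for 300 s
# ipvsadm -A -t 192.168.2.151:80 -s rr -p 300
```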
1.4 HA cluster: high availability cluster implementations
- An HA cluster improves the reliability of the application system and reduces interruption time as much as possible, ensuring service continuity and achieving the fault-tolerant effect of high availability (HA). HA working modes include duplex (active/active) and master-slave (active/standby)
- keepalived: VRRP protocol (see the configuration sketch after this list)
- AIS (Application Interface Specification): heartbeat
    - cman + rgmanager (RHCS)
    - corosync + pacemaker
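To illustrate the VRRP approach, a minimal keepalived configuration sketch for the MASTER node follows; the interface name and the floating VIP are assumptions, not values from this lab:

```
# Hypothetical /etc/keepalived/keepalived.conf (assumes keepalived is installed)
cat > /etc/keepalived/keepalived.conf <<'EOF'
vrrp_instance VI_1 {
    state MASTER             # the peer node would use BACKUP
    interface ens33          # assumed interface name
    virtual_router_id 51     # must match on both nodes
    priority 100             # higher priority wins the election
    advert_int 1             # VRRP advertisement interval in seconds
    virtual_ipaddress {
        192.168.2.200        # assumed floating VIP
    }
}
EOF
systemctl restart keepalived
```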
1.5 HPC cluster: high performance computing cluster
- An HPC cluster raises CPU computing speed and expands the hardware resources and analysis capacity of the application system, obtaining computing power comparable to mainframes and supercomputers
- The high performance of an HPC cluster relies on "distributed computing" and "parallel computing": dedicated hardware and software pool the CPUs, memory, and other resources of many servers to achieve computing power that formerly only mainframes and supercomputers had
2, Introduction to Linux Virtual Server
2.1 introduction to LVS
- LVS (Linux Virtual Server) is a virtual server cluster system
- LVS runs on a server and acts as the Director (load balancer). It provides no service itself; instead it forwards each request to the corresponding real server (the host that actually provides the service), achieving load balancing within the cluster
2.2 NAT forwarding mode (Network Address Translation)
working principle
- The client sends a request to the front-end load balancer. The source address of the request packet is the client IP (hereinafter CIP) and the destination address is the VIP (the load balancer's front-end address, hereinafter VIP)
- When the load balancer receives the packet and finds that the requested address matches one of its rules, it rewrites the destination address to the RIP of the back-end real server chosen by the scheduling algorithm and forwards the packet
- When the packet arrives at the Real Server, its destination address is the server's own, so the server handles the request and returns the response to the LVS
- The LVS then rewrites the source address of the response packet to the VIP and sends it on to the client
Note: in NAT mode the gateway of every Real Server must point to the LVS, otherwise response packets cannot be delivered back to the client
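Tracing one request with the addresses used in the lab below (CIP 192.168.2.181, VIP 192.168.2.151, RIP 192.168.1.161) makes both rewrites visible; this is a schematic, not captured traffic:

```
# Request  (client -> director):  src 192.168.2.181  dst 192.168.2.151:80   CIP -> VIP
# DNAT     (director -> real):    src 192.168.2.181  dst 192.168.1.161:80   CIP -> RIP
# Response (real -> director):    src 192.168.1.161  dst 192.168.2.181      via gateway = LVS
# Un-NAT   (director -> client):  src 192.168.2.151  dst 192.168.2.181      VIP -> CIP
```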
2.3 advantages and disadvantages
- Advantages: saves IP addresses and provides secure network isolation
- Disadvantages: the LVS is likely to become the performance bottleneck of the system, because every request and every response must pass through it
3, LVS+NAT hands-on practice
3.1 experimental environment
Five hosts, all running CentOS Linux release 8.3.2011:
- Client: bridged (local) 192.168.2.181
- LVS: bridged (local) 192.168.2.151, VMnet2 192.168.1.151
- Web1: VMnet2 192.168.1.161
- Web2: VMnet2 192.168.1.162
- Web3: VMnet2 192.168.1.163
3.2 experimental steps
3.2.1 web site configuration and routing
```
[root@localhost ~]# yum install nginx -y
[root@localhost ~]# systemctl enable nginx --now && systemctl stop firewalld
[root@localhost ~]# cat /etc/selinux/config
SELINUX=disabled
[root@localhost ~]# nmcli con mod ens33 ipv4.gateway 192.168.1.151 && nmcli con up ens33
[root@localhost ~]# echo "192.168.1.161" > /usr/share/nginx/html/index.html
```
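The same steps run on Web2 and Web3, each echoing its own RIP into index.html (the test output in 3.2.3 confirms this). A quick sanity check on each web server before putting it behind LVS, output abridged:

```
# nginx should answer locally with this host's RIP
[root@localhost ~]# curl -s http://127.0.0.1
192.168.1.161
# The default route must point at the LVS, a hard requirement for NAT mode
[root@localhost ~]# ip route show default
default via 192.168.1.151 dev ens33
```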
3.2.2 LVS-NAT: configure routing and the load balancing policy
Note: the load balancing configuration must be saved to /etc/sysconfig/ipvsadm before the ipvsadm service is started for the first time, otherwise startup fails; the log shows that the file or directory does not exist
```
[root@localhost ~]# yum install ipvsadm -y
[root@localhost ~]# systemctl start ipvsadm
Job for ipvsadm.service failed because the control process exited with error code.
See "systemctl status ipvsadm.service" and "journalctl -xe" for details.
[root@localhost ~]# ipvsadm-save > /etc/sysconfig/ipvsadm
[root@localhost ~]# systemctl start ipvsadm
```
- Configure SNAT forwarding rules
```
[root@localhost ~]# echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
[root@localhost ~]# sysctl -p
net.ipv4.ip_forward = 1
[root@localhost ~]# iptables -t nat -F
[root@localhost ~]# iptables -F
[root@localhost ~]# iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o ens37 -j SNAT --to-source 192.168.2.151
```
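To confirm that forwarding and the SNAT rule took effect, two quick checks:

```
# IP forwarding should now report 1
[root@localhost ~]# cat /proc/sys/net/ipv4/ip_forward
1
# List the nat table's POSTROUTING chain with counters to see the SNAT rule
[root@localhost ~]# iptables -t nat -L POSTROUTING -n -v
```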
- Load LVS kernel module
```
[root@localhost ~]# modprobe ip_vs    #Load the ip_vs module
[root@localhost ~]# cat /proc/net/ip_vs    #View ip_vs version information
[root@localhost ~]# for i in $(ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs | grep -o "^[^.]*"); do echo $i; /sbin/modinfo -F filename $i >/dev/null 2>&1 && /sbin/modprobe $i; done    #Load all ipvs modules
```
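A quick way to confirm the modules actually loaded:

```
# ip_vs plus its scheduler modules (ip_vs_rr, ip_vs_wrr, ip_vs_sh, ...) should appear
[root@localhost ~]# lsmod | grep ip_vs
```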
- Configure and enable policies
```
[root@localhost ~]# systemctl start ipvsadm
[root@localhost ~]# ipvsadm -C    #Clear any existing rules
[root@localhost ~]# ipvsadm -A -t 192.168.2.151:80 -s rr    #-A: add virtual service (externally exposed address); -t: tcp; -s: scheduler; rr: round robin
[root@localhost ~]# ipvsadm -a -t 192.168.2.151:80 -r 192.168.1.161 -m    #-a: add real server; -r: real server address; -m: masquerading (NAT)
[root@localhost ~]# ipvsadm -a -t 192.168.2.151:80 -r 192.168.1.162 -m
[root@localhost ~]# ipvsadm -a -t 192.168.2.151:80 -r 192.168.1.163 -m
[root@localhost ~]# ipvsadm    #List the policy to verify
```
- Check the node status. Masq represents NAT mode
```
[root@localhost ~]# ipvsadm -ln    #View the node status; Masq means NAT mode
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.2.151:80 rr
  -> 192.168.1.161:80             Masq    1      0          0
  -> 192.168.1.162:80             Masq    1      0          0
  -> 192.168.1.163:80             Masq    1      0          0
```
3.2.3 Test the effect from client 192.168.2.181
```
[root@localhost ~]# curl 192.168.2.151
192.168.1.163
[root@localhost ~]# curl 192.168.2.151
192.168.1.162
[root@localhost ~]# curl 192.168.2.151
192.168.1.161
[root@localhost ~]# curl 192.168.2.151
192.168.1.163
[root@localhost ~]# curl 192.168.2.151
192.168.1.162
```
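Rather than repeating curl by hand, a short loop shows the rr scheduler cycling through the three RIPs:

```
# Ten requests; responses rotate through 192.168.1.161/.162/.163
[root@localhost ~]# for i in $(seq 10); do curl -s 192.168.2.151; done
```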
- View 192.168.1.161 server log
```
[root@localhost ~]# tail -f /var/log/nginx/access.log
192.168.2.181 - - [05/May/2021:01:39:42 -0400] "GET / HTTP/1.1" 200 14 "-" "curl/7.61.1" "-"
```
- View 192.168.1.162 server log
```
[root@localhost nginx]# tail -f /var/log/nginx/access.log
192.168.2.181 - - [05/May/2021:01:39:43 -0400] "GET / HTTP/1.1" 200 14 "-" "curl/7.61.1" "-"
192.168.2.181 - - [05/May/2021:01:39:48 -0400] "GET / HTTP/1.1" 200 14 "-" "curl/7.61.1" "-"
```
- View 192.168.1.163 server log
```
[root@localhost nginx]# tail -f /var/log/nginx/access.log
192.168.2.181 - - [05/May/2021:01:16:27 -0400] "GET / HTTP/1.1" 200 14 "-" "curl/7.61.1" "-"
192.168.2.181 - - [05/May/2021:01:16:33 -0400] "GET / HTTP/1.1" 200 14 "-" "curl/7.61.1" "-"
```
- The connection counters on the LVS confirm the distribution:
```
[root@localhost network-scripts]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.2.151:80 rr
  -> 192.168.1.161:80             Masq    1      0          1
  -> 192.168.1.162:80             Masq    1      0          2
  -> 192.168.1.163:80             Masq    1      0          2
```
3.2.4 clear policy and restore
```
[root@localhost ~]# ipvsadm-save > /etc/sysconfig/ipvsadm    #Back up the LVS policy
[root@localhost ~]# ipvsadm -C
[root@localhost ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
[root@localhost ~]# ipvsadm-restore < /etc/sysconfig/ipvsadm    #Restore the LVS policy
[root@localhost ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.2.151:80 rr
  -> 192.168.1.161:80             Masq    1      0          0
  -> 192.168.1.162:80             Masq    1      0          0
  -> 192.168.1.163:80             Masq    1      0          0
```
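On CentOS the ipvsadm service reloads /etc/sysconfig/ipvsadm at startup, so enabling the service makes the saved policy survive a reboot; a minimal sketch:

```
[root@localhost ~]# ipvsadm-save -n > /etc/sysconfig/ipvsadm    #-n: save addresses numerically
[root@localhost ~]# systemctl enable ipvsadm
```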