Cluster concept
A cluster is composed of multiple hosts but behaves externally as a single whole. It exposes only one access entry (a domain name or an IP address), so to clients it appears to be one large machine.
Background
In Internet applications, as sites place ever higher demands on hardware performance, response speed, service stability, and data reliability, a single server can no longer keep up. The pressure comes both from market (enterprise) demand and from the need for rational, efficient management and maintenance. To meet these requirements, sites need the load balancing and high availability that a single server cannot provide.
Solutions
- Use expensive minicomputers or mainframes
- Build a service cluster out of ordinary servers
Cluster classification
- Load balancing cluster
- High availability cluster
- High performance computing cluster
① Load balancing cluster
The goal is to improve the responsiveness of the application system, handle as many access requests as possible with low latency, and achieve high concurrency and high overall load capacity (LB, Load Balance).
Load distribution in an LB cluster depends on the scheduling (traffic-splitting) algorithm of the master node, which spreads client access requests across multiple server nodes and so relieves the load on the system as a whole.
② High availability cluster
The goal is to improve the reliability of the application system and minimize interruption time, ensuring service continuity and achieving the fault-tolerant effect of high availability (HA).
HA clusters work in either duplex mode (both nodes active) or master-slave mode (one active node, one standby).
③ High performance computing cluster
The goal is to increase the CPU computing speed of the application system, expand hardware resources and analysis capability, and obtain high performance computing (HPC) capability comparable to mainframes and supercomputers.
High performance relies on "distributed computing" and "parallel computing": dedicated hardware and software combine the CPU, memory, and other resources of many servers to achieve computing power that would otherwise require mainframes or supercomputers.
Load balancing cluster architecture
- Load scheduler: externally, the scheduler provides a VIP (virtual IP) as the single entry point; internally, it distributes traffic/requests to the server pool according to the scheduling algorithm
- Server pool: the real servers that receive and process the requests forwarded by the load scheduler and return the responses
- Shared storage: provides a common storage space for all servers in the pool so that every node serves consistent content (see the sketch after this list)
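As a rough illustration of how the three parts map onto machines, here is a minimal sketch; the interface name, NFS path, and all addresses are assumptions rather than values from a real deployment.

```bash
# Load scheduler: holds the VIP that clients connect to (assumed VIP and interface)
ip addr add 192.168.10.100/24 dev ens33

# Server pool: the real servers that actually answer requests,
# e.g. 192.168.220.30 and 192.168.220.35 running httpd

# Shared storage: every pool node mounts the same NFS export,
# so all nodes serve identical content (assumed NFS server and path)
mount -t nfs 192.168.220.40:/opt/share /var/www/html
```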
Load balancing cluster working modes
The cluster's load scheduling technology has three working modes:
- Address translation (NAT mode)
- IP tunnel (TUN mode)
- Direct routing (DR mode)
① NAT mode
Network Address Translation, NAT mode for short
Similar to a firewall's private-network structure, the load scheduler acts as the gateway for all server nodes: it is both the entry point for client access and the exit through which each node's responses return to the client.
The server nodes use private IP addresses and sit on the same physical network as the load scheduler; security is better than in the other two modes.
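A minimal sketch of the scheduler-side rules for NAT mode (the complete deployment appears later in this article); the VIP and real-server addresses are assumptions.

```bash
# The scheduler must forward packets, since it is the nodes' gateway
echo 1 > /proc/sys/net/ipv4/ip_forward

# Virtual service on the VIP, two real servers added in NAT (masquerade, -m) mode
ipvsadm -A -t 192.168.10.100:80 -s rr
ipvsadm -a -t 192.168.10.100:80 -r 192.168.220.30:80 -m
ipvsadm -a -t 192.168.10.100:80 -r 192.168.220.35:80 -m
```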
② TUN mode
IP Tunnel, referred to as TUN mode
An open network structure is adopted: the load scheduler serves only as the client's entry point, and each node responds to the client directly over its own Internet connection rather than back through the scheduler.
The server nodes are scattered across the Internet, each with its own public IP address, and they communicate with the load scheduler over dedicated IP tunnels.
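A minimal sketch of TUN mode, assuming a VIP of 192.168.10.100 and one real server at 203.0.113.10; the exact sysctl tuning on the real server varies by distribution, so treat this as an outline rather than a complete recipe.

```bash
# On the load scheduler: add the real server in tunnel (-i) mode
ipvsadm -A -t 192.168.10.100:80 -s rr
ipvsadm -a -t 192.168.10.100:80 -r 203.0.113.10:80 -i

# On each real server: terminate the IP tunnel and bind the VIP to it
modprobe ipip
ip addr add 192.168.10.100/32 dev tunl0
ip link set tunl0 up
echo 0 > /proc/sys/net/ipv4/conf/tunl0/rp_filter   # accept packets arriving via the tunnel
```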
③ DR mode
Direct Routing, DR mode for short
A semi-open network structure is adopted, similar to TUN mode, except that the nodes are not scattered across the Internet but sit on the same physical network as the scheduler.
The load scheduler is connected to each node server through the local network, so no dedicated IP tunnel is required.
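A minimal sketch of DR mode on one local network, assuming a VIP of 192.168.10.100 and a real server at 192.168.10.30; the ARP-suppression sysctls shown are the values commonly used on LVS-DR real servers.

```bash
# On the load scheduler: add the real server in direct-routing (-g) mode
ipvsadm -A -t 192.168.10.100:80 -s rr
ipvsadm -a -t 192.168.10.100:80 -r 192.168.10.30:80 -g

# On each real server: hold the VIP on a loopback alias and suppress ARP for it
ifconfig lo:0 192.168.10.100 netmask 255.255.255.255 up
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
route add -host 192.168.10.100 dev lo:0
```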
Load scheduling algorithms of LVS
Scheduling based on requests
- Round robin (rr)
  - Requests simply go to the nodes one by one, in order: incoming access requests are distributed in turn to each node (real server) in the cluster, and every server is treated equally, regardless of its actual number of connections or system load.
- Weighted round robin (wrr)
  - Requests are distributed according to the weights set on the scheduler; nodes with higher weights are scheduled first and receive more requests.
  - This ensures that servers with stronger performance carry more of the access traffic (a wrr sketch follows this list).
Scheduling based on the number of connections
- Least connections (lc)
  - Requests are allocated according to the number of established connections on each real server, giving priority to the node with the fewest connections.
- Weighted least connections (wlc)
  - When the performance of the server nodes differs greatly, the weights of the real servers can be adjusted automatically.
  - Higher-performance nodes bear a larger share of the active connection load.
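As a small illustration of weighted round robin, the sketch below gives one node three times the weight of the other; the VIP and real-server addresses are the same assumed values used elsewhere in this article.

```bash
# Weighted round robin: node .30 receives roughly 3 requests for every 1 sent to .35
ipvsadm -A -t 192.168.10.100:80 -s wrr
ipvsadm -a -t 192.168.10.100:80 -r 192.168.220.30:80 -m -w 3
ipvsadm -a -t 192.168.10.100:80 -r 192.168.220.35:80 -m -w 1
```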
ipvsadm tool
Option | Function |
---|---|
-A | Add a virtual server |
-D | Delete the entire virtual server |
-s | Specify the load scheduling algorithm (round robin: rr, weighted round robin: wrr, least connections: lc, weighted least connections: wlc) |
-a | Add a real server (node server) |
-d | Delete a node |
-t | Specify the VIP address and TCP port |
-r | Specify the RIP address and TCP port |
-m | Use NAT cluster mode |
-g | Use DR mode |
-i | Use TUN mode |
-w | Set the weight (a weight of 0 pauses the node) |
-p 60 | Keep connections persistent for 60 seconds |
-l | List the LVS virtual servers (all of them by default) |
-n | Display addresses, ports, etc. in numeric form; often combined with "-l", e.g. ipvsadm -ln |
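To illustrate the removal options from the table, a short sketch using the same assumed VIP and node addresses as the rest of this article:

```bash
ipvsadm -d -t 192.168.10.100:80 -r 192.168.220.35:80   # remove one node (-d) from the virtual server
ipvsadm -D -t 192.168.10.100:80                        # delete the entire virtual server (-D)
ipvsadm -ln                                            # confirm with a numeric listing
```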
LVS-NAT deployment
Experimental preparation
The addresses used below: NFS shared storage server 192.168.220.40; web nodes 192.168.220.30 and 192.168.220.35; load scheduler with an internal NIC on 192.168.220.0/24 and an external NIC (ens36) holding the VIP 192.168.10.100.
Deploy shared storage (NFS server)
```bash
systemctl stop firewalld.service
systemctl disable firewalld.service
setenforce 0

yum -y install nfs-utils rpcbind
systemctl start rpcbind.service
systemctl start nfs.service

mkdir /opt/ljm
mkdir /opt/lucien
chmod 777 /opt/ljm
chmod 777 /opt/lucien

vim /etc/exports
# add the following two export lines:
/opt/ljm 192.168.220.0/24(rw,sync)
/opt/lucien 192.168.220.0/24(rw,sync)

exportfs -rv
```
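A quick, optional check on the NFS server itself that the two shares are exported as expected:

```bash
showmount -e localhost   # should list /opt/ljm and /opt/lucien for 192.168.220.0/24
exportfs -v              # show the active export options (rw, sync)
```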
Deploy the web servers (two nodes)
```bash
systemctl stop firewalld.service
systemctl disable firewalld.service
setenforce 0

yum install httpd -y
systemctl start httpd.service

yum -y install nfs-utils rpcbind
systemctl start rpcbind
showmount -e 192.168.220.40          # confirm the NFS exports are visible

# Web node 1: mount the first share and create its test page
mount.nfs 192.168.220.40:/opt/ljm /var/www/html
vim /var/www/html/index.html
<html>
<body>
<meta http-equiv="Content-Type" content="text/html;charset=utf-8">
<h1>dzw</h1>
</body>
</html>

# Web node 2: mount the second share and create its test page
mount.nfs 192.168.220.40:/opt/lucien /var/www/html
vim /var/www/html/index.html
<html>
<body>
<meta http-equiv="Content-Type" content="text/html;charset=utf-8">
<h1>dzw1</h1>
</body>
</html>
```
On both web nodes, comment out the DNS entry in the NIC configuration and change the gateway address to the load scheduler's address.
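An optional sanity check on each web node before moving on to the scheduler (assumes httpd is already running):

```bash
df -h /var/www/html      # the NFS export should appear as the mounted filesystem
curl http://localhost/   # should return this node's test page
```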
Configure the load scheduler
```bash
systemctl stop firewalld.service
systemctl disable firewalld.service
setenforce 0

# Enable IP forwarding
vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
# or, temporarily:
echo '1' > /proc/sys/net/ipv4/ip_forward
sysctl -p

# SNAT so that the nodes' replies leave through the scheduler's external address
iptables -t nat -F
iptables -F
iptables -t nat -A POSTROUTING -s 192.168.220.0/24 -o ens36 -j SNAT --to-source 192.168.10.100

modprobe ip_vs                 # load the ip_vs kernel module
cat /proc/net/ip_vs            # view ip_vs version information

yum -y install ipvsadm
ipvsadm-save > /etc/sysconfig/ipvsadm       # or: ipvsadm --save > /etc/sysconfig/ipvsadm
systemctl start ipvsadm.service

ipvsadm -C                                  # clear the original policy
ipvsadm -A -t 192.168.10.100:80 -s rr       # virtual server on the VIP, round robin
ipvsadm -a -t 192.168.10.100:80 -r 192.168.220.30:80 -m
ipvsadm -a -t 192.168.10.100:80 -r 192.168.220.35:80 -m
ipvsadm                                     # enable and list the policy
ipvsadm -ln                                 # view node status; Masq indicates NAT mode
ipvsadm-save > /etc/sysconfig/ipvsadm       # save the policy
```
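Once the scheduler rules are in place, the cluster can be verified from a client on the external network (assumed to be able to reach the VIP 192.168.10.100); with round robin, repeated requests should alternate between the two test pages.

```bash
curl http://192.168.10.100/    # first request, served by one web node
curl http://192.168.10.100/    # second request, served by the other node

# On the scheduler, inspect the forwarded connection entries
ipvsadm -lnc
```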
Summary
Differences between the three working modes
Working mode | NAT mode | TUN mode | DR mode |
---|---|---|---|
Number of servers | Low (10 to 20) | High (up to 100) | High (up to 100) |
Real server gateway | Load scheduler | Node's own router | Node's own router |
IP address | Public + private | Public | Private |
Advantages | High security | Secure and fast | Best performance |
Disadvantages | Low efficiency, heavy pressure on the scheduler | Requires dedicated tunnels, higher cost | Cannot span a LAN (scheduler and nodes must share the local network) |