Nginx+Tomcat load balancing, dynamic and static separation

Posted by Guldstrand on Sun, 23 Jan 2022 00:13:34 +0100

Contents

1, Deploy Nginx load balancer

2, Deploy 2 Tomcat application servers

3, Dynamic and static separation configuration

4, Test effect

5, Nginx load balancing modes

6, Nginx layer-4 proxy configuration

1, Deploy Nginx load balancer

systemctl stop firewalld
setenforce 0

yum -y install pcre-devel zlib-devel openssl-devel gcc gcc-c++ make

useradd -M -s /sbin/nologin nginx

cd /opt
tar zxvf nginx-1.12.0.tar.gz -C /opt/

cd nginx-1.12.0/
./configure \
--prefix=/usr/local/nginx \
--user=nginx \
--group=nginx \
--with-file-aio \									#Enable asynchronous file I/O (AIO) support
--with-http_stub_status_module \					#Enable status statistics
--with-http_gzip_static_module \					#Enable gzip static compression
--with-http_flv_module \							#Enable the flv module to provide pseudo-streaming support for flv video
--with-http_ssl_module \							#Enable the SSL module to provide SSL encryption
--with-stream										#Enable the stream module to provide layer-4 scheduling
#Note: the inline comments above are annotations only; run the single-line command below to actually configure
----------------------------------------------------------------------------------------------------------
./configure --prefix=/usr/local/nginx --user=nginx --group=nginx --with-file-aio --with-http_stub_status_module --with-http_gzip_static_module --with-http_flv_module --with-http_ssl_module --with-stream

make && make install

ln -s /usr/local/nginx/sbin/nginx /usr/local/sbin/

vim /lib/systemd/system/nginx.service
[Unit]
Description=nginx
After=network.target
[Service]
Type=forking
PIDFile=/usr/local/nginx/logs/nginx.pid
ExecStart=/usr/local/nginx/sbin/nginx
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true
[Install]
WantedBy=multi-user.target

chmod 754 /lib/systemd/system/nginx.service
systemctl start nginx.service
systemctl enable nginx.service

2, Deploy 2 Tomcat application servers

systemctl stop firewalld
setenforce 0

tar zxvf jdk-8u91-linux-x64.tar.gz -C /usr/local/

vim /etc/profile
export JAVA_HOME=/usr/local/jdk1.8.0_91
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:${JRE_HOME}/bin:$PATH

source /etc/profile

cd /opt
tar zxvf apache-tomcat-8.5.16.tar.gz

mv /opt/apache-tomcat-8.5.16/ /usr/local/tomcat

/usr/local/tomcat/bin/shutdown.sh 
/usr/local/tomcat/bin/startup.sh

netstat -ntap | grep 8080
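The load-balancing test in section 4 requests /index.jsp, but the source does not show creating it. A minimal sketch of a test page you could place at /usr/local/tomcat/webapps/ROOT/index.jsp on each Tomcat instance (the page text is an assumption; vary it per instance so the responses are distinguishable):

```jsp
<%@ page language="java" contentType="text/html; charset=UTF-8" %>
<html>
<body>
<%-- Change this heading on each Tomcat instance, e.g. "Tomcat server 2" --%>
<h1>Tomcat server 1 (dynamic page)</h1>
<%-- Print the server time to confirm the JSP is executed, not served from cache --%>
<%= new java.util.Date() %>
</body>
</html>
```

Refreshing the balancer's /index.jsp should then cycle through the different headings.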

3, Dynamic and static separation configuration

#Prepare a static page and a static image
echo '<html><body><h1>This is a static page</h1></body></html>' > /usr/local/nginx/html/index.html
mkdir /usr/local/nginx/html/img
cp /root/game.jpg /usr/local/nginx/html/img

vim /usr/local/nginx/conf/nginx.conf
......
http {
......
	#gzip on;
	
	#Configure the list of servers for load balancing. The weight parameter indicates the weight. The higher the weight, the greater the probability of being assigned
	upstream tomcat_server {
		server 192.168.80.100:8080 weight=1;
		server 192.168.80.101:8080 weight=1;
		server 192.168.80.101:8081 weight=1;
	}
	
	server {
		listen 80;
		server_name www.kgc.com;
	
		charset utf-8;
	
		#access_log logs/host.access.log main;
		
		#Configure Nginx to handle dynamic page requests: .jsp requests are forwarded to the Tomcat servers for processing
		location ~ .*\.jsp$ {
			proxy_pass http://tomcat_server;
#Let the back-end Web server obtain the real information of the remote client
##Set the Host header (domain name or IP and port) of the request received by the back-end Web server. By default the Host value is the hostname set by the proxy_pass directive. If the reverse proxy does not rewrite the request header, the back-end real server will think all requests come from the reverse proxy, and if the back end has an anti-attack policy, the proxy machine will be blocked.
			proxy_set_header Host $host;
##Assign $remote_addr to X-Real-IP so the back end can obtain the real source IP
			proxy_set_header X-Real-IP $remote_addr;
##When Nginx acts as a proxy server, the X-Forwarded-For list records the client IP and the IPs of the proxies the request passed through
			proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
		}
		
		#Configure Nginx to handle static image requests
		location ~ .*\.(gif|jpg|jpeg|png|bmp|swf|css)$ {
			root /usr/local/nginx/html/img;
			expires 10d;
		}
		
		location / {
			root html;
			index index.html index.htm;
		}
......
	}
......
}
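The X-Real-IP and X-Forwarded-For headers set above are only useful if the back end reads them. A hedged sketch for Tomcat, assuming the default server.xml layout: adding Tomcat's RemoteIpValve inside the `<Host>` element makes `request.getRemoteAddr()` and the access log report the original client IP instead of the proxy's (the internalProxies value is an assumption matching the balancer IP used in this article):

```xml
<!-- Inside <Host> in /usr/local/tomcat/conf/server.xml (sketch) -->
<Valve className="org.apache.catalina.valves.RemoteIpValve"
       remoteIpHeader="X-Forwarded-For"
       internalProxies="192\.168\.80\.10" />
```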



4, Test effect

Test the static page effect
Browser access http://192.168.80.10/
Browser access http://192.168.80.10/game.jpg

Test the load-balancing effect by refreshing the browser repeatedly
Browser access http://192.168.80.10/index.jsp

5, Nginx load balancing modes

● rr (round-robin) load balancing mode:
Each request is assigned to a different back-end server in turn, in order. If a node fails more than the maximum number of times (max_fails, default 1), then within the failure window (fail_timeout, default 10 seconds) its effective weight becomes 0 and it receives no requests; after the window expires it returns to normal and is probed again. If all nodes are down, all of them are restored to service and probing continues. In general, rr distributes requests evenly according to the configured weights.
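The failure parameters described above are set per server in the upstream block. A sketch reusing the IPs from the earlier configuration (the weight, max_fails, and fail_timeout values are illustrative, not from the original article):

```nginx
upstream tomcat_server {
    #This node receives twice as many requests and is marked failed after 2 errors within 15s
    server 192.168.80.100:8080 weight=2 max_fails=2 fail_timeout=15s;
    server 192.168.80.101:8080 weight=1;
    server 192.168.80.101:8081 weight=1;
}
```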

● least_conn (least connections) load balancing mode:
Client requests are preferentially scheduled to the server with the fewest current connections.

● ip_hash load balancing mode:
Each request is assigned according to the hash of the client IP, so each visitor consistently reaches the same back-end server, which solves the session problem. However, ip_hash can cause uneven load: some servers receive more requests and others fewer. For this reason ip_hash is not recommended; session sharing implemented by the back-end services can replace Nginx's ip_hash.

● fair load balancing mode:
Requests are allocated according to the response time of the back-end server, and those with short response time are allocated first.

● url_hash (third party) load balancing mode:
Requests are hashed on the requested URI. Similar to the ip_hash algorithm, each request is assigned according to the hash of the URL, so each URL is directed to the same back-end server, which can also cause uneven distribution. This mode works better when the back-end servers are caching.
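Each of these modes is selected by a single directive at the top of the upstream block. A sketch reusing the upstream from section 3 (only one scheduling directive should be uncommented at a time; note that newer Nginx versions provide the built-in `hash` directive, while the original third-party url_hash module used its own syntax):

```nginx
upstream tomcat_server {
    #least_conn;               #least connections
    #ip_hash;                  #hash on the client IP
    #hash $request_uri;        #url_hash via the built-in hash directive
    server 192.168.80.100:8080;
    server 192.168.80.101:8080;
    server 192.168.80.101:8081;
}
```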

6, Nginx layer-4 proxy configuration

./configure --with-stream

The stream block is at the same level as the http block, so it is generally placed right before the http block:
stream {
	
    upstream appserver {
		server 192.168.80.100:8080 weight=1;
		server 192.168.80.101:8080 weight=1;
		server 192.168.80.101:8081 weight=1;
    }
    server {
        listen 8080;
        proxy_pass appserver;
    }
}

http {
......

Topics: Linux Load Balance Nginx Tomcat