1. Complete Nginx reverse proxy example
Suppose you have an Nginx server (192.168.56.101) to use as a reverse proxy, and behind it three web servers that provide the same service, each running Nginx+PHP. The three web servers are 192.168.56.103, 192.168.56.105, and 192.168.56.106.
# User and group the worker processes run as
user  nginx nginx;
# Number of worker processes (generally equal to the total number of CPU cores)
worker_processes  2;
# Path of the error log; the logging level can be debug|info|notice|warn|error|crit
error_log  /var/log/nginx/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;
# Path of the pid file
pid  /usr/local/nginx/log/nginx.pid;

events {
    # Event model to use: epoll is recommended on Linux, kqueue on FreeBSD
    use epoll;
    # Maximum number of connections per worker process
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    # Maximum size of a client request body (upload size)
    client_max_body_size 8m;
    sendfile        on;
    #tcp_nopush     on;
    #keepalive_timeout  0;
    keepalive_timeout  60;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    #fastcgi
    fastcgi_connect_timeout 300;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;
    fastcgi_buffer_size 64k;
    fastcgi_buffers 4 64k;
    fastcgi_busy_buffers_size 128k;
    fastcgi_temp_file_write_size 128k;

    #gzip
    gzip on;
    gzip_min_length 1k;
    gzip_buffers 4 16k;
    gzip_http_version 1.1;
    gzip_comp_level 2;
    gzip_types text/plain application/x-javascript text/css application/xml;
    gzip_vary on;

    # Timeout for establishing a connection (handshake) with a back-end server
    proxy_connect_timeout 600;
    # Timeout for reading a response from the back-end server once the request has been queued there
    proxy_read_timeout 600;
    # Timeout for sending the request to the back-end server (the back end must accept all data within this time)
    proxy_send_timeout 600;
    # Buffer for the first part of the back-end response; usually only the response headers are kept here
    proxy_buffer_size 16k;
    # Number and size of buffers Nginx uses for a single connection to the back end
    proxy_buffers 4 32k;
    # Buffers that may stay busy sending data to the client; can be raised on a busy system
    proxy_busy_buffers_size 64k;
    # Size of data written to a temporary proxy cache file at a time
    proxy_temp_file_write_size 64k;

    upstream php_server_pool {
        server 192.168.56.103:80 weight=4 max_fails=2 fail_timeout=30s;
        server 192.168.56.105:80 weight=4 max_fails=2 fail_timeout=30s;
        server 192.168.56.106:80 weight=4 max_fails=2 fail_timeout=30s;
    }

    server {
        listen       80;
        server_name  localhost;
        #charset utf-8;
        access_log  /usr/local/nginx/log/access.log  main;

        location / {
            proxy_next_upstream http_502 http_504 error timeout invalid_header;
            proxy_pass http://php_server_pool;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
}
The upstream directive defines a group of back-end servers that can be referenced by the proxy_pass and fastcgi_pass directives; round-robin is the default load-balancing method.
The server directive inside an upstream block specifies the name and parameters of a back-end server; the name can be a domain name, an IP address (optionally with a port), or a UNIX socket path.
Within a server {...} virtual host, the proxy_pass and fastcgi_pass directives point requests at the upstream server group, which is how the reverse proxy to the cluster is set up.
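The same pool mechanism also works for FastCGI back ends. A minimal sketch, assuming the back ends run PHP-FPM on port 9000 (the pool name, ports and document root below are illustrative, not part of the example above):

upstream php_fpm_pool {
    server 192.168.56.103:9000;
    server 192.168.56.105:9000;
}

server {
    listen 80;
    server_name localhost;

    location ~ \.php$ {
        include        fastcgi_params;
        # Pass PHP requests to the FastCGI pool instead of an HTTP back end
        fastcgi_pass   php_fpm_pool;
        fastcgi_param  SCRIPT_FILENAME /var/www/html$fastcgi_script_name;
    }
}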
The proxy_set_header directive adds the specified header to the request that the reverse proxy forwards to the back-end web server.
When the back-end web server hosts multiple name-based virtual hosts, the Host header is added to carry the requested domain name, so the back end can tell which virtual host the reverse proxy is actually accessing.
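A sketch of the back-end side, assuming one of the web servers hosts two name-based virtual hosts (the domains and roots are placeholders, not part of the setup above); the Host header forwarded by proxy_set_header Host $host decides which server block answers:

# Back-end web server (e.g. 192.168.56.103) with two name-based virtual hosts
server {
    listen 80;
    server_name www.example.com;      # placeholder domain
    root /var/www/example;
}

server {
    listen 80;
    server_name blog.example.com;     # placeholder domain
    root /var/www/blog;
}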
With a reverse proxy in between, the back-end web server can no longer get the user's real IP from $_SERVER['REMOTE_ADDR']; that variable now returns the address of the Nginx load-balancing server. To work around this, the Nginx reverse proxy adds the X-Forwarded-For header to each request, so the back-end web server can read the user's real IP from $_SERVER['HTTP_X_FORWARDED_FOR'].
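As a sketch, the back-end web servers (which also run Nginx in this setup) could log the forwarded client address through the $http_x_forwarded_for variable; the log path and format name here are assumptions:

# Back-end (192.168.56.103/105/106) nginx.conf sketch: log the client address
# passed by the proxy in X-Forwarded-For alongside the connecting address,
# which will be the proxy itself (192.168.56.101)
log_format  proxied  '$http_x_forwarded_for - $remote_addr [$time_local] "$request" $status';
access_log  /var/log/nginx/access.log  proxied;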
Visit 192.168.56.101 at this time and you will find:
[root@localhost ~]# curl 192.168.56.101 | grep SERVER_ADDR
<tr><td class="e">$_SERVER['SERVER_ADDR']</td><td class="v">192.168.56.103</td></tr>
[root@localhost ~]# curl 192.168.56.101 | grep SERVER_ADDR
<tr><td class="e">$_SERVER['SERVER_ADDR']</td><td class="v">192.168.56.105</td></tr>
[root@localhost ~]# curl 192.168.56.101 | grep SERVER_ADDR
<tr><td class="e">$_SERVER['SERVER_ADDR']</td><td class="v">192.168.56.106</td></tr>
[root@localhost ~]# curl 192.168.56.101 | grep SERVER_ADDR
<tr><td class="e">$_SERVER['SERVER_ADDR']</td><td class="v">192.168.56.103</td></tr>
[root@localhost ~]#
2. HTTP Upstream module for Nginx load balancing
The Upstream module is the core of Nginx load balancing. It provides simple load-balancing methods for the back-end servers, such as round-robin and client-IP hashing, and can perform basic health checks on the back ends.
upstream backend {
    server backend1.example.com weight=5;
    server backend2.example.com:8080;
    server unix:/tmp/backend3;
}

server {
    location / {
        proxy_pass http://backend;
    }
}
2.1 ip_hash directive
upstream backend {
    ip_hash;
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com down;
    server backend4.example.com;
}
When load is balanced across multiple back-end application servers, the ip_hash directive uses a hash of the client's IP address so that requests from the same client are always routed to the same back-end server.
The ip_hash directive does not guarantee an even load across the back-end servers: some may receive more requests and some fewer, and methods such as back-end server weights do not take effect.
If a back-end server has to be removed from the load balancing temporarily (for maintenance, for example), it must be marked down rather than deleted or commented out, so that the hash mapping of clients to the remaining servers is not reshuffled.
The benefit of ip_hash is that a user's session stays on the same server, so logins work normally. However, if session sharing is available, it is recommended to use session sharing instead of ip_hash.
For example:
upstream php_server_pool {
    ip_hash;
    server 192.168.56.103:80 max_fails=2 fail_timeout=30s;
    server 192.168.56.105:80 max_fails=2 fail_timeout=30s;
    server 192.168.56.106:80 max_fails=2 fail_timeout=30s;
}
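Following the note above about marking a temporarily unavailable server down instead of removing it, a minimal sketch (assuming 192.168.56.105 is the one under maintenance):

upstream php_server_pool {
    ip_hash;
    server 192.168.56.103:80 max_fails=2 fail_timeout=30s;
    # 192.168.56.105 is out for maintenance: mark it down instead of deleting it,
    # so the IP-to-server hash mapping of the other back ends is preserved
    server 192.168.56.105:80 down;
    server 192.168.56.106:80 max_fails=2 fail_timeout=30s;
}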
2.2 server directive
The server directive is used to specify the name and parameters of a back-end server. The name can be a domain name, an IP address (optionally with a port), or a UNIX socket path.
The back-end server name can be followed by these parameters:
weight=NUMBER - Sets the server's weight. The higher the weight, the more client requests are allocated. If no weight is set, the default weight is 1.
max_fails=NUMBER - The number of failed attempts to communicate with the back-end server, within the time window set by fail_timeout, after which the server is marked as failed (a failure is an unreachable server or a server error; a 404 does not count). The default is 1 if not set; setting it to 0 disables the check.
fail_timeout=TIME - How long the server is considered unavailable after max_fails failures; it is also the time window within which the failures are counted.
down - Marks the server as permanently offline; typically used together with the ip_hash directive.
backup - The server receives requests only when the non-backup servers are down or busy.
upstream backend {
    server backend1.example.com weight=5;
    server 127.0.0.1:8080 max_fails=3 fail_timeout=30s;
    server unix:/tmp/backend3;
}
2.3 upstream directive
The upstream directive defines a group of servers that can be referenced by the proxy_pass and fastcgi_pass directives; the default load-balancing method is round-robin.
2.4 upstream related variables
The upstream module has the following variables:
$upstream_addr - Address of the upstream server that processed the request;
$upstream_status - Response status code of the upstream server;
$upstream_response_time - Response time of the upstream server (in seconds, with millisecond precision);
$upstream_http_$HEADER - Any HTTP header from the upstream response, for example $upstream_http_host.
For example:
..................
upstream php_server_pool {
    server 192.168.56.103:80 weight=4 max_fails=2 fail_timeout=30s;
    server 192.168.56.105:80 weight=4 max_fails=2 fail_timeout=30s;
    server 192.168.56.106:80 weight=2 max_fails=2 fail_timeout=30s;
}

log_format access '$upstream_addr - $upstream_response_time'
                  '$upstream_status'
                  '$upstream_http_host';

server {
    listen 80;
    server_name localhost;
    #charset utf-8;
    access_log /usr/local/nginx/log/access.log access;

    location / {
        proxy_next_upstream http_502 http_504 error timeout invalid_header;
        proxy_pass http://php_server_pool;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
Viewing the log (each line shows $upstream_addr, then $upstream_response_time with $upstream_status and $upstream_http_host run together, since the log_format above puts no separator between them):
[root@localhost ~]# tail -5 /usr/local/nginx/log/access.log
192.168.56.103:80 - 0.007200-
192.168.56.105:80 - 0.020200-
192.168.56.106:80 - 0.004200-
192.168.56.103:80 - 0.005200-
192.168.56.105:80 - 0.003200-