Load balancing is used to schedule requests across nodes: you only need to configure forwarding in the conf file on the server running the load balancer and point it at the relevant backend servers. You can test against a single backend first, but since this is a scheduling server, in practice there will be more than one backend.
Load-balancing conf configuration for forwarding to a single machine
-
In the load-balancing conf file, multiple nodes must not listen on the same port; this avoids conflicts that would affect the experiment.
-
For the proxy to pass along the real client IP, `proxy_set_header Connection "";` and `proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;` should be set together.
```nginx
server {
    listen 80;                    # open listening port 80
    server_name web.oldxu.com;    # domain name
    location / {
        proxy_pass http://10.0.0.7:8080;   # schedule to the backend by IP and port
        proxy_http_version 1.1;            # HTTP protocol version used for proxying
        proxy_set_header Connection "";    # clear the Connection header
        # Pass the original Host header so that requests arriving on the same
        # port are not confused between virtual hosts:
        proxy_set_header Host $http_host;
        # Record the real client IP. The proxy only forwards, so the node
        # server should log the real client IP (e.g. in /var/log/nginx/access.log),
        # not the proxy's address:
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```
-
Other related settings:
Connection related:
Syntax: proxy_connect_timeout time; — timeout for establishing a connection from the proxy to the backend
Default: proxy_connect_timeout 60s; — the default is 60 seconds
Context: http, server, location

Syntax: proxy_read_timeout time; — timeout while the proxy waits for the backend to respond
Default: proxy_read_timeout 60s; — the default is 60 seconds
Context: http, server, location

Syntax: proxy_send_timeout time; — timeout for the proxy to transmit the request to the backend
Default: proxy_send_timeout 60s; — the default is 60 seconds
Context: http, server, location
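Putting the three timeouts together, a location block might look like this (a sketch only; the backend address is taken from the earlier example, and the timeout values are illustrative, not recommendations):

```nginx
location / {
    proxy_pass http://10.0.0.7:8080;
    proxy_connect_timeout 5s;    # give up if the backend cannot be reached within 5 s
    proxy_read_timeout 30s;      # fail if the backend sends nothing for 30 s
    proxy_send_timeout 30s;      # fail if the request cannot be written to the backend for 30 s
}
```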
Cache related: enabling cache can speed up the response
Syntax: proxy_buffering on | off; — enables buffering of backend responses
Default: proxy_buffering on; — the default is on
Context: http, server, location
When enabled, the response is buffered as header plus body:
- proxy_buffer_size 64k; — buffer for the response header, 64 KB
- proxy_buffers 4 64k; — buffers for the body, 4 × 64 KB = 256 KB
- proxy_temp_path path — directory for temporary files when a response does not fit in the memory buffers
- proxy_max_temp_file_size — the maximum total size of the temporary files in that directory
- proxy_temp_file_write_size — the amount of data written to a temporary file at one time
==These directives are written below proxy_buffer_size; if the header buffer size is already set there, they generally do not need to be configured==
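For completeness, a hedged sketch of how the buffering and temporary-file directives above might be set explicitly (the path and sizes here are illustrative, not required values):

```nginx
proxy_buffering on;
proxy_buffer_size 64k;                         # buffer for the response header
proxy_buffers 4 64k;                           # buffers for the response body
proxy_temp_path /var/cache/nginx/proxy_temp;   # where overflowing responses are spooled
proxy_max_temp_file_size 1024m;                # cap the size of a temporary file
proxy_temp_file_write_size 64k;                # how much is written to the temp file at a time
```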
Finally, because there are so many directives involved, it is impractical to rewrite them every time you write a configuration. Instead, write them all to one file and call it with include.
```nginx
# vim /etc/nginx/proxy_params
proxy_http_version 1.1;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_connect_timeout 30;
proxy_send_timeout 60;
proxy_read_timeout 60;
proxy_buffering on;
proxy_buffer_size 64k;
proxy_buffers 4 64k;
```
```nginx
location / {
    proxy_pass http://127.0.0.1:8080;
    include proxy_params;
}
```
How to schedule multiple server nodes for load balancing
It could not be simpler: write an upstream block in the http layer.
```nginx
upstream web_cluster {            # web_cluster is the name of the cluster
    server 172.16.1.7:80;
    server 172.16.1.8:80;
}

server {
    listen 80;
    server_name web.oldxu.com;
    location / {
        proxy_pass http://web_cluster;   # http:// plus the upstream name selects the cluster
        include proxy_params;
    }
}
```
scheduling algorithm
Only one scheduling algorithm can be used in an upstream at a time.
-
Round-robin (rr) scheduling, the default: requests are distributed to the servers in turn.
-
Weighted round-robin (weight): requests are distributed in proportion to the assigned weight.
-
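For example, to give 172.16.1.7 twice the share of requests (the weights here are illustrative):

```nginx
upstream web_cluster {
    server 172.16.1.7:80 weight=2;   # receives roughly 2/3 of the requests
    server 172.16.1.8:80 weight=1;   # receives roughly 1/3 of the requests
}
```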
ip_hash
Requests from the same client IP are always assigned to the same server. This can put too much pressure on a single server, so it is not very practical.
Writing `ip_hash;` on the line after `upstream web_cluster {` is enough.
-
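A minimal sketch, reusing the cluster from the earlier example:

```nginx
upstream web_cluster {
    ip_hash;                  # same client IP always goes to the same backend
    server 172.16.1.7:80;
    server 172.16.1.8:80;
}
```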
Consistent hash scheduling
Smarter than ip_hash: it requires much less recomputation, and if a server fails, its requests automatically move on to the next server.
Writing it on the line after `upstream web_cluster {` is enough:
hash $remote_addr consistent; — $remote_addr is the variable holding the source IP (the variable in use can be checked with less /etc/nginx/nginx.conf)
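In context, again using the earlier cluster as a sketch:

```nginx
upstream web_cluster {
    hash $remote_addr consistent;   # consistent hashing on the client IP
    server 172.16.1.7:80;
    server 172.16.1.8:80;
}
```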
- url_hash
Cache scheduling requires a cache server. When load balancing uses url_hash and a resource is requested, a copy is cached on the cache server; later requests for the same URL are automatically sent to the server holding the cache, which speeds up the response. - least_conn
The request is scheduled to whichever server currently has the fewest connections.
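As a sketch, least_conn is also written on the line after the upstream name:

```nginx
upstream web_cluster {
    least_conn;               # pick the backend with the fewest active connections
    server 172.16.1.7:80;
    server 172.16.1.8:80;
}
```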
Setting the status of back-end servers through the load balancer
max_fails and fail_timeout should be used in combination
Example:
max_fails=2 fail_timeout=5s
After two failed attempts to connect to the server, it is considered unavailable; after 5 s the proxy tries to reconnect, and if the server still fails the number of times set by max_fails, it is again marked unavailable.
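These parameters are set per server inside the upstream block; a sketch using the earlier cluster:

```nginx
upstream web_cluster {
    server 172.16.1.7:80 max_fails=2 fail_timeout=5s;
    server 172.16.1.8:80 max_fails=2 fail_timeout=5s;
}
```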