Detailed explanation of Nginx configuration

Posted by tpl41803 on Tue, 01 Feb 2022 16:11:53 +0100

Nginx

1.nginx introduction

Nginx function classification (Nginx sits at the user access entry point); a minimal configuration sketch follows the list:

1. Reverse proxy: only the nginx server's domain name is exposed to the outside, hiding the actual servers' addresses and ports.

2. Load balancing: a reverse proxy will generally also perform load balancing across the back-end servers.

3. Dynamic/static separation: static resources are deployed on nginx itself, improving the response efficiency of the system.
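A minimal configuration sketch showing all three roles together (the upstream addresses and paths below are placeholders, not taken from this article):

http {
    # Load balancing: requests are spread across two back-end servers
    upstream backend {
        server 192.168.0.11:8080;
        server 192.168.0.12:8080;
    }

    server {
        listen 80;

        # Dynamic/static separation: static files are served by nginx itself
        location /static/ {
            root html;
        }

        # Reverse proxy: only nginx is exposed; the real servers stay hidden
        location / {
            proxy_pass http://backend;
        }
    }
}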

2.nginx installation

Nginx is written in C.

Xftp: used to transfer the source archives to the Linux system.

$ history #Show the history of executed commands
$ gcc -v  #Check the C compiler environment
$ g++ -v  #Check the C++ compiler environment

$ tar -xzvf xx.tar.gz #Extract the archive, showing details

Nginx is installed from source: extract --> configure --> compile --> install
# gcc and g++ (the C and C++ build environments) must be installed in advance
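For example, on a CentOS/RHEL-style system (an assumption; adjust to your distribution's package manager) the build toolchain can be installed with yum:

$ yum install -y gcc gcc-c++ make   #C compiler, C++ compiler, and make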

# 1.1 Install PCRE (pcre-xx.tar.gz)
 Extract
 $ tar -xzvf pcre-xx.tar.gz
 Enter the extracted pcre directory and run the configuration
 $ ./configure

After configuration, compile in the same pcre directory
 $ make

Perform the installation
 $ make install

# 2.1 Install OpenSSL (provides SSL/TLS support)

Extract
 $ tar -xzvf openssl-xx.tar.gz 
Enter the extracted directory and run the configuration
 $ ./config
 Compile and install
 $ make && make install
 
# 3.1 Install zlib

Extract
 $ tar -xzvf zlib-xx.tar.gz 
Enter the extracted directory and run the configuration
 $ ./configure
 Compile and install
 $ make && make install

# 4.1 Install nginx (depends on the three libraries above)

Extract
 $ tar -xzvf nginx-xx.tar.gz 
Enter the extracted directory and run the configuration
 $ ./configure
 Compile and install
 $ make && make install
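Instead of a bare ./configure, the build can be pointed at the libraries and prefix explicitly. The flags below are standard nginx configure options, but the source paths are placeholders for wherever you extracted each library:

$ ./configure --prefix=/usr/local/nginx \
              --with-http_ssl_module \
              --with-pcre=../pcre-xx \
              --with-zlib=../zlib-xx \
              --with-openssl=../openssl-xx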

----------
The default installation location is:
/usr/local/nginx

# Run nginx
$ cd /usr/local/nginx
$ ll
  conf 
  html
  logs
  sbin

$ cd sbin
 Run nginx: 
$ ./nginx
 View the processes
$ ps -ef | grep nginx
 You should see two processes: master and worker

# Access nginx
Enter the server IP address directly in the browser (the default port 80 can be omitted), e.g.:
192.168.140.130
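You can also check from the command line (assuming curl is installed; replace the address with your server's IP):

$ curl -I http://192.168.140.130   #a response whose Server header mentions nginx confirms it is serving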

# Possible problems

nginx fails to start with: libpcre.so.1/libpcre.so.0: cannot open shared object file. Solution (64-bit system):
$ ln -s /usr/local/lib/libpcre.so.1 /lib64

32-bit system:
$ ln -s /usr/local/lib/libpcre.so.1 /lib


# The following directories are already on the PATH; an executable placed in any of them can be run directly without extra configuration.
/usr/local/sbin:  /usr/local/bin :  /usr/sbin :  /usr/bin:  /sbin : /bin: /usr/bin/X11:  /usr/games : /usr/X11R6/bin
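If nginx was installed to the default prefix instead, one common way to make it runnable from anywhere is to link the binary into one of those directories (a sketch; adjust the path to your install):

$ ln -s /usr/local/nginx/sbin/nginx /usr/local/sbin/nginx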

3.nginx related commands

Start command

In the /usr/local/nginx/sbin directory, execute:
$ ./nginx

Stop command

In the /usr/local/nginx/sbin directory, execute:
$ ./nginx -s stop

Reload command (hot reload; the most commonly used)

In the /usr/local/nginx/sbin directory, execute:
$ ./nginx -s reload
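A few other invocations that are often useful (run from the same sbin directory):

$ ./nginx -v        #Print the nginx version
$ ./nginx -t        #Test the configuration file for syntax errors
$ ./nginx -s quit   #Graceful shutdown: finish in-flight requests, then exit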

4. Configure nginx.conf

http{
.....
	upstream myserver{
		ip_hash;
		server 115.28.52.63:8080 weight=1;  #Configure the target host port of nginx reverse proxy
		server 115.28.52.63:8081 weight=1;  #Configure the target host port of nginx reverse proxy
		
	}
....
		server{
				location /{    
						......
						#Proxy requests to the upstream server group "myserver" defined above
						proxy_pass http://myserver;
						proxy_connect_timeout 10;
				}
				.....
		}
}

Common errors and exceptions

When an error occurs, locate it by checking the logs!!!

When nginx -s reload is restarted, the error message is as follows:

nginx: [error] open() "/usr/local/var/run/nginx.pid" failed (2: No such file or directory)

reason:

The nginx.pid file is missing. Every time nginx is stopped (nginx -s stop), it deletes the nginx.pid file under /usr/local/var/run/.
Simply start nginx again to regenerate nginx.pid:

nginx

If starting directly still fails, execute nginx -t to see the path of the nginx configuration file:

$ nginx -t
nginx: the configuration file /usr/local/etc/nginx/nginx.conf syntax is ok
nginx: configuration file /usr/local/etc/nginx/nginx.conf test is successful

Start nginx specifying that conf file explicitly:

nginx -c /usr/local/etc/nginx/nginx.conf

Then running nginx -s reload again will no longer report the error.
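To avoid hitting this again, the pid path can also be set explicitly in nginx.conf (the path below simply mirrors the one in the error message; adjust it to your layout):

pid  /usr/local/var/run/nginx.pid;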
---------

5. Tomcat sessions can be managed by Redis (these elements typically go in Tomcat's conf/context.xml)

<Valve className="com.orangefunction.tomcat.redissessions.RedisSessionHandlerValve"/>
<Manager className="com.orangefunction.tomcat.redissessions.RedisSessionManager"
         host="127.0.0.1"
         port="6379"
         database="0"
         maxInactiveInterval="60" />

6.Nginx principle configuration


Master & worker

The master process only performs control and management; the real work is done by the worker processes, which act as the reverse proxy to the configured Tomcat back-ends.

Each worker is an independent process, so no locking between workers is needed, which saves the overhead of locks and makes programming and troubleshooting much easier.

The worker processes do not affect each other: if one worker exits abnormally, the other workers keep serving and the service is not interrupted, and the master process quickly starts a new worker, so the risk is reduced.

Generally, the number of workers is set equal to the number of CPU cores.
Like Redis, Nginx uses IO multiplexing. Each worker is an independent process with only one main thread, handling requests in an asynchronous, non-blocking way.

# Interview questions

Number of connections: worker_connections
- This value is the maximum number of connections each worker process can establish, so the maximum number of connections a single nginx instance can establish is worker_connections * worker_processes (the number of workers).
- For HTTP requests to local (static) resources, that product is also the maximum supported concurrency. With HTTP/1.1, each browser visit occupies two connections, so the maximum concurrency for static resources is worker_connections * worker_processes / 2.
- When nginx acts as an HTTP reverse proxy, the maximum concurrency is worker_connections * worker_processes / 4, because for each proxied request nginx holds two connections at once: one to the client and one to the back-end server.
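A quick worked example with assumed values (4 worker processes, 1024 connections each):

# Suppose worker_processes = 4 and worker_connections = 1024:
#   total connections               : 4 * 1024 = 4096
#   static resources over HTTP/1.1  : 4096 / 2 = 2048 concurrent visitors
#   nginx as a reverse proxy        : 4096 / 4 = 1024 concurrent requests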

7. nginx.conf structure


8.nginx configuration file

Nginx configuration file structure

If you have the installation files, open the nginx.conf file in the conf folder; the basic configuration of the Nginx server, including the default configuration, is stored there.

In nginx.conf the comment symbol is: #

The contents of the default nginx.conf are as follows:

# Security: running the workers as the low-privilege user nobody means a compromised worker has minimal impact on the host.
#user  nobody;
#Setting the number of workers equal to the number of CPUs on the server is most appropriate.
worker_processes  1;

#error_log path level 
#Levels: debug | info | notice | warn | error | crit 
#Log detail decreases from left to right.
#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;


events {
    worker_connections  1024;
    use epoll; #IO multiplexing; not available on Windows. On Linux epoll is generally used, and kqueue on BSD systems
    
    #When a worker has accepted one connection, whether it should try to accept as many further connections as possible. The default is off.
    multi_accept on; 
    
    #Enable the accept mutex so workers take turns accepting connections; useful when traffic is light (avoids the thundering herd problem).
    accept_mutex on;
}


http {
# When the web server receives a request for a resource file, it looks up the MIME type for the file's suffix in the server's mime configuration file, sets the Content-Type of the HTTP response according to that MIME type, and the browser then processes the file according to the Content-Type.
    include       mime.types;
# If no mapping is found in mime.types, use the following as the default value (a binary byte stream).    
    default_type  application/octet-stream;

#Configure the log format of nginx, (operation and maintenance)
    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';
    
#Access log location. By inspecting the access log you can spot attacking IPs and block them directly to protect the server.
    #access_log  logs/access.log  main;

#Enable sending files from disk directly to the network (sendfile), which suits large file uploads and downloads and greatly improves efficiency.
    sendfile        on;
    #tcp_nopush     on;

#How long a connection is kept open after a request completes; 0 means the connection is closed immediately after the request finishes.
    #keepalive_timeout  0;
    keepalive_timeout  65;
#Turn the gzip module on or off.
    #gzip  on;
#Set the minimum page size (in bytes) eligible for compression; the page size is taken from the Content-Length response header    
		#gzip_min_length 1k;
#gzip compression level (1-9): a lower level compresses less but is faster, a higher level compresses more but is slower. Images and videos are not worth compressing.
		#gzip_comp_level 4;

#Dynamic and static separation		
#Server-side static resource cache: the maximum number of files cached in memory and the inactivity period; entries are evicted from memory when they time out.
		#open_file_cache max=655350 inactive=20s
#The minimum number of times of use within the active period, otherwise it is regarded as inactive		
		#open_file_cache_min_uses 2;
#Time interval to verify whether the cache is active.		
		#open_file_cache_valid 30s;
				
				
  upstream myserver{
      #1. (default) polling: nginx polls the servers in turn by default, each with a default weight of 1. Request order: ABAB...		
      #2. backup (hot standby): with two servers, the second is only used when the first one fails. Request order: AAAA...; when A goes down, BBBB...
      #3. weight weighted polling: requests are distributed to the servers in proportion to the configured weights (default 1). With weights 1 and 2, the request order is: ABBABB...		
      #4. ip_hash: nginx sends requests from the same client IP to the same server.
      #State parameters for nginx load balancing:
        #down: the server temporarily does not participate in load balancing.
        #backup: a reserved standby machine; it is only requested when all other non-backup machines fail or are busy, so it carries the least load.
        #max_fails: the number of failed requests allowed, 1 by default. When it is exceeded, the error defined by proxy_next_upstream in the upstream module is returned.
        #fail_timeout: the time the server is suspended after max_fails failures; max_fails is used together with fail_timeout.
       #5. fair (third-party): requests are assigned preferentially to the server with the shortest response time.
       #6. url_hash (third-party): distributes requests by the hash of the requested URL, so the same URL always goes to the same back-end server; this is more effective when the back-end servers cache content.
    ip_hash;
	  server 127.0.0.1:7878 weight=2 max_fails=2 fail_timeout=2;  #Configure the target host port of nginx reverse proxy
		server 115.28.52.63:8081 weight=1;  #Configure the target host port of nginx reverse proxy
  }
		
    server {
        listen       80;
        server_name  localhost;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        location / {
            root   html;  #The root directory is resolved to: /usr/local/etc/nginx/html 
            index  index.html index.htm;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass   http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    root           html;
        #    fastcgi_pass   127.0.0.1:9000;
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        #    include        fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny  all;
        #}
    }


    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    #    listen       8000;
    #    listen       somename:8080;
    #    server_name  somename  alias  another.alias;

    #    location / {
    #        root   html;  
    #        index  index.html index.htm;
    #    }
    #}


    # HTTPS server
    #
    #server {
    #    listen       443 ssl;
    #    server_name  localhost;

    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;

    #    ssl_session_cache    shared:SSL:1m;
    #    ssl_session_timeout  5m;

    #    ssl_ciphers  HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers  on;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}

}

nginx.conf details

#For security, it is recommended to run workers as nobody instead of root (give the workers the lowest possible privileges)
#user  nobody;

#It is best to set the number of workers equal to the number of CPUs on the server
worker_processes  2;

#Bind workers to CPUs (4 workers bound to 4 CPUs)
worker_cpu_affinity 0001 0010 0100 1000;

#Bind workers to CPUs (4 workers on a machine with 8 CPUs).
worker_cpu_affinity 00000001 00000010 00000100 00001000;
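In newer nginx versions (1.3.8/1.2.5 and later) the worker count can also be derived from the CPU count automatically; this line is an addition for reference, not part of the original file:

#worker_processes  auto;   #one worker per CPU core, detected automatically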



#error_log path level: path is the log file path, level is the log level.
#Levels: [debug | info | notice | warn | error | crit]
#From left to right the log detail decreases step by step: debug is the most detailed, crit the least; error is the default level. 

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;


events {
    #This value is the maximum number of connections each worker process can establish, so the maximum number of connections one nginx instance can establish is worker_connections * worker_processes. 
    #Of course this is the raw connection limit; for HTTP requests to local resources the maximum supported concurrency is worker_connections * worker_processes,
    #and since an HTTP/1.1 browser occupies two connections per visit,
    #the maximum concurrency for ordinary static access is worker_connections * worker_processes / 2.
    #If nginx is used as an HTTP reverse proxy, the maximum concurrency is worker_connections * worker_processes / 4, 
    #because as a reverse proxy server each concurrent request holds one connection to the client and one to the back-end service, occupying two connections.

    worker_connections  1024;  

    #This value selects which IO multiplexing method nginx uses.
    #epoll is generally chosen on Linux; kqueue is used on the *BSD family of systems.
    #The Windows version of nginx does not support these IO multiplexing methods, so this value is not needed there.
    use epoll;

    # When a worker has accepted one connection, whether it should try to accept as many further connections as possible. The default is off.
    multi_accept on;

    # The default is on, which enables the preemptive lock mechanism of nginx.
    accept_mutex  on;
}


http {
    #When the web server receives a static resource file request, find the corresponding MIME Type in the server's mime configuration file according to the suffix of the request file, set the content type of HTTP Response according to the MIME Type, and then the browser processes the file according to the value of content type.

    include       mime.types;

    #If no mapping is found in mime.types, use the following as the default value
    default_type  application/octet-stream;
    
 

     #Log location
     access_log  logs/host.access.log  main;

     #A typical accesslog:
     #101.226.166.254 - - [21/Oct/2013:20:34:28 +0800] "GET /movie_cat.php?year=2013 HTTP/1.1" 200 5209 "http://www.baidu.com" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; MDDR; .NET4.0C; .NET4.0E; .NET CLR 1.1.4322; Tablet PC 2.0); 360Spider"
 
     #1) 101.226.166.254: (user IP)
     #2) [21/Oct/2013:20:34:28 +0800]: (visit time) 
     #3) GET: http request mode, including GET and POST
     #4)/movie_cat.php?year=2013: the currently visited web page is a dynamic web page, movie_cat.php is the requested background interface, and year=2013 is the parameter of the specific interface
     #5) 200: the response status; 200 means OK. Common others: 301 permanent redirect, 4XX request errors, 5XX server internal errors
     #6) 5209: the number of bytes transferred, in bytes
     #7) "http://www.baidu.com": the referer, i.e. the page the visitor came from
     #8)"Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; #.NET CLR 3.0.30729; Media Center PC 6.0; MDDR; .NET4.0C; .NET4.0E; .NET CLR 1.1.4322; Tablet PC 2.0); 360Spider ": agent field: usually used to record operating system, browser version, browser kernel and other information

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                       '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';


	
    #Open the file transfer from disk to network directly, which is applicable to the case of large file upload and download, and improve IO efficiency.
    sendfile        on;
 
   
    #How long the connection is kept open after a request completes; 0 means the connection is closed immediately after the request finishes.
    #keepalive_timeout  0;
    keepalive_timeout  65;
 
 
	
    #Turn the gzip module on or off
    #gzip  on ;

    #Set the minimum page size (in bytes) eligible for compression; the page size is taken from the Content-Length response header.
    #gzip_min_length 1k;

    # gzip compression ratio: 1 compression ratio is the smallest, the processing speed is the fastest, and 9 compression ratio is the largest but the processing speed is the slowest (fast transmission but cpu consumption)
    #gzip_comp_level 4;

    #MIME types to compress (the "text/html" type is always compressed whether listed or not).
    #gzip_types text/plain text/css application/json application/x-javascript text/xml;  

 

    #Dynamic and static separation
    #Server-side static resource cache: the maximum number of files cached in memory and the inactivity period
    open_file_cache max=655350 inactive=20s;   
   
    #The minimum number of times used during the active period, otherwise it is considered inactive.
    open_file_cache_min_uses 2;

    #Time interval to verify whether the cache is active
    open_file_cache_valid 30s;


    
    upstream myserver{

    # 1. Polling (default)
    # Each request is allocated to different back-end servers one by one in chronological order. If the back-end server goes down, it can be automatically eliminated.
    # 2. Specify weights
    # Specifies the polling probability. The weight is directly proportional to the access ratio. It is used in the case of uneven performance of the back-end server.
    #3. IP binding ip_hash
    # Each request is assigned by the hash of the client IP, so each visitor consistently reaches the same back-end server, which solves the session problem.
    #4. backup mode
    # The standby machine set as backup is not accessed under normal conditions. The service will only enter the standby machine when all non standby machines are down.
    #5. fair (third party)
    #Requests are allocated according to the response time of the back-end server, and those with short response time are allocated first.   
    #6,url_hash (third party)
    #The request is allocated according to the hash result of the access url, so that each url is directed to the same back-end server, which is more effective when the back-end server is cache.


      # ip_hash;
             server 192.168.161.132:8080 weight=1;
             server 192.168.161.132:8081 weight=1;
      
      #fair

      #hash $request_uri
      #hash_method crc32
      
      }

    server {
        #port number 
        listen       80;

        #service name
        server_name  192.168.161.130;

        #character set
        #charset utf-8;




	#location [=|~|~*|^~] /uri/ { ... }   
	# =  exact match
	# ~  regex match, case sensitive
	# ~* regex match, case insensitive
	# ^~ prefix match that, when chosen, suppresses regex matching
	
	#Matching principle:
	 
	# 1. Matching happens in two phases: ordinary (prefix) matching first, then regular expression matching.
	# 2. For ordinary matching, an exact "=" location match wins immediately.
	#    2.1 Otherwise the longest matching prefix location is remembered.
	#    2.2 If that location is marked with ^~, it becomes the final result; otherwise it is stored temporarily and regex matching continues.
	# 3. Regex matching: locations prefixed with ~ or ~* are checked from top to bottom; as soon as one matches, it wins and no further regexes are checked.
	# 4. If no regex matches, the prefix location stored temporarily in the ordinary phase is used.
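	# A worked illustration of these rules (hypothetical locations, added for clarity; not part of this config):
	#   location = /login        { ... }   # exact match
	#   location /images/        { ... }   # ordinary prefix match
	#   location ^~ /static/     { ... }   # prefix match that suppresses regex checks
	#   location ~* \.(gif|jpg)$ { ... }   # regex match
	# Request /static/app.css -> ^~ /static/ is the longest prefix, so the regex is never tested.
	# Request /images/a.jpg   -> /images/ matches, but the regex also matches, so the regex location wins.
	# Request /login          -> the exact "=" match wins immediately.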


        location / {   # Matches any request, since all URIs start with /. However, regex locations and longer prefix locations take precedence over it.
	   
	    #Defines the default site root location for the server
            root   html;
            
	    #The name of the default access home page index file
	    index  index.html index.htm;

	    #Reverse proxy path
            proxy_pass http://myserver;

	    #Timeout of reverse proxy
            proxy_connect_timeout 10;

            proxy_redirect default;       

         }

         location  /images/ {    
	    root images ;
	 }

	 location ^~ /images/jpg/ {  # Matches any query starting with /images/jpg/ and stops searching; regular expressions will not be tested. 
	    root images/jpg/ ;


	 }
         location ~* \.(gif|jpg|jpeg)$ { 
	      
	      #All static files are read directly from the hard disk
              root pic ;
	      
	      #expires defines that the user's browser cache time is 3 days. If the static page is not updated frequently, it can be set longer, which can save bandwidth and relieve the pressure on the server
              expires 3d; #Cache for 3 days
         }


        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
 
    }

 

}

nginx file structure

...              #Global block

events {         #events block
   ...
}

http      #http block
{
    ...   #http global block
    server        #server block
    { 
        ...       #server global block
        location [PATTERN]   #location block
        {
            ...
        }
        location [PATTERN] 
        {
            ...
        }
    }
    server
    {
      ...
    }
    ...     #http global block
}
  • 1. Global block: configures directives that affect nginx globally. Typically this includes the user/group that runs the nginx server, the path of the nginx process pid file, the log path, configuration file includes, the number of worker processes allowed, etc.
  • 2. events block: configuration affecting the nginx server's network connections with users, such as the maximum number of connections per process, which event-driven model is used to process connection requests, whether multiple network connections may be accepted at the same time, and whether to serialize the acceptance of multiple connections.
  • 3. http block: can nest multiple servers and configures most features such as proxying, caching, log definitions and third-party modules. Examples: file includes, MIME type definitions, log customization, whether to use sendfile to transfer files, connection timeouts, the number of requests per connection, etc.
  • 4. server block: configure the relevant parameters of the virtual host. There can be multiple servers in one http.
  • 5. location block: configure the routing of requests and the processing of various pages.

Here is a configuration file for your understanding.

########### Each instruction must end with a semicolon.#################
#user administrator administrators;  #Configure users or groups. The default is nobody.
#worker_processes 2;  #The number of processes allowed to be generated. The default is 1
#pid /nginx/pid/nginx.pid;   #Specify the storage address of nginx process running files
error_log log/error.log debug;  #Set the log path and level. This setting can go in the global block, http block and server block. Levels: debug|info|notice|warn|error|crit|alert|emerg
events {
    accept_mutex on;   #Serialize the acceptance of network connections to prevent the thundering herd. The default is on
    multi_accept on;  #Set whether a process accepts multiple network connections at the same time. The default is off
    #use epoll;      #Event driven model, select|poll|kqueue|epoll|resig|/dev/poll|eventport
    worker_connections  1024;    #The maximum number of connections is 512 by default
}
http {
    include       mime.types;   #File extension and file type mapping table
    default_type  application/octet-stream; #The default file type is text/plain
    #access_log off; #Cancel service log    
    log_format myFormat '$remote_addr - $remote_user [$time_local] $request $status $body_bytes_sent $http_referer $http_user_agent $http_x_forwarded_for'; #Custom format
    access_log log/access.log myFormat;  #combined is the default value for log format
    sendfile on;   #It is allowed to transfer files in sendfile mode, which is off by default. It can be in http block, server block and location block.
    sendfile_max_chunk 100k;  #The number of transfers per call of each process cannot be greater than the set value. The default value is 0, that is, there is no upper limit.
    keepalive_timeout 65;  #The connection timeout, which is 75s by default, can be set in http, server and location blocks.

    upstream mysvr {    #The back-end host/port list for the reverse proxy; load balancing can also be configured here,
      server 127.0.0.1:7878;
      server 192.168.10.121:3333 backup;  #Hot standby
    }
    error_page 404 https://www.baidu.com; # Error page
    server {
        keepalive_requests 120; #Maximum number of single connection requests.
        listen       4545;   #Listening port
        server_name  127.0.0.1;   #Listening address       
        location  ~*^.+$ {       #Request URL filtering with regex matching; ~ is case sensitive, ~* is case insensitive.
           #root path;  #root directory
           #index vv.txt;  #Set default page
           proxy_pass  http://mysvr;  # The request goes to the list of servers defined by mysvr
           deny 127.0.0.1;  #Rejected ip
           allow 172.18.5.54; #Allowed ip           
        } 
    }
}

The above is the basic configuration of nginx. The following points need to be noted:

1. Several common configuration items:

  • 1. $remote_addr and $http_x_forwarded_for: record the client's IP address;
  • 2. $remote_user: records the client user name;
  • 3. $time_local: records the access time and time zone;
  • 4. $request: records the request URL and HTTP protocol;
  • 5. $status: records the request status; success is 200;
  • 6. $body_bytes_sent: records the size of the response body sent to the client;
  • 7. $http_referer: records the page from which the request was linked;
  • 8. $http_user_agent: records information about the client browser;

2. Thundering herd phenomenon: when a network connection arrives, multiple sleeping processes are woken at the same time, but only one of them can get the connection, which hurts system performance.

3. Each instruction must end with a semicolon.

Original address: https://www.cnblogs.com/knowledgesea/p/5175711.html

Topics: Java Nginx