Functions and features of Nginx

Posted by php_coder_dvo on Mon, 27 Dec 2021 23:58:41 +0100

Original: 2021-12-22 20:12 · Pears smell like milk

Over the past decade, Nginx has risen rapidly as a new generation of Web server. Nginx was written by the Russian engineer Igor Sysoev, and its first public version, 0.1.0, was released on October 4, 2004. Apache had long held first place in the Web server industry, but since 2008 its market share has been steadily shifting to Nginx. According to Netcraft's statistics, more than 27% of the top one million websites in the world now use Nginx as their Web server. Nginx rose quickly and gained a firm foothold in a long-stable industry.

I. What is Nginx

Nginx is an open-source, high-performance, and reliable HTTP middleware and proxy service.

II. Function description of Nginx

1. Static HTTP server

Nginx is an HTTP server that can serve static files (such as HTML pages and images) on the server to clients over the HTTP protocol.

Configuration example:
server {
       listen 80; # Port number
       location / {
           root D:/frontproject/views; # Static file path (use forward slashes, even on Windows)
       }
}

2. Reverse proxy server

The client sends its request to Nginx, Nginx forwards it to the application server, and the result is returned to the client. Here Nginx acts as a reverse proxy server.

server {
       listen 80;
       location / {
           proxy_pass http://127.0.0.1:8080; # Application server HTTP address
       }
}

3. Load balancing

When site traffic is very high, the same application is deployed on multiple servers and the large volume of user requests is distributed across those machines. A further advantage is that if one server goes down, users are unaffected as long as the other servers keep running. Nginx implements load balancing through its reverse proxy, and can use three built-in policies and two third-party policies.

(1) RR, round-robin (the default: each request is assigned to a different back-end server in turn)

upstream mypro {
       server 192.168.20.1:8080; # Application server 1
       server 192.168.20.2:8080; # Application server 2
}

server {
       listen 80;
       location / {
           proxy_pass http://mypro; # Forward to the upstream group
       }
}

(2) weight (weighted round-robin: the probability of a server being chosen is proportional to its weight; useful when the back-end servers have uneven performance)

upstream mypro {
       server 192.168.20.1:8080 weight=3; # This server handles 3/4 of requests
       server 192.168.20.2:8080; # weight defaults to 1; this server handles 1/4 of requests
}

    server {  

     ... ...

    }  

(3) ip_hash (with the configurations above, requests are rotated among the application servers, so multiple requests from one client may be handled by several different servers; if there is a login session, the user would have to log in repeatedly. ip_hash instead assigns each request to a fixed server according to the hash of the client's IP address)

upstream mypro {  

        ip_hash; # According to the Hash value of the client IP address, the request is allocated to a fixed server for processing  
        server 192.168.20.1:8080;  
        server 192.168.20.2:8080;  
     }  

     server {  

        ... ...

     }  

(4) fair (third-party: allocates requests according to the back-end servers' response time; servers with shorter response times are preferred)

upstream mypro {

        fair;
        server localhost:8080;
        server localhost:8081;
 }

(5) url_hash (third-party: allocates requests according to the hash of the requested URL, so that each URL is directed to the same back-end server; this is most effective when the back-end servers are caches. Add a hash statement in the upstream block; the server statements must not carry weight or other parameters; hash_method specifies the hash algorithm to use)

upstream mypro {

        hash $request_uri;
        hash_method crc32;
        server localhost:8080;
        server localhost:8081;
}

4. Virtual host

Some websites have heavy traffic and need load balancing; others have so little traffic that, to save cost, several sites are deployed on one server. For example, www.a.com and www.b.com are deployed on the same server, and both domain names resolve to the same IP address, yet users reach two completely different websites through the two domain names without interfering with each other, as if they were accessing two separate servers. These are therefore called two virtual hosts.

server {  

        listen 80 default_server;  
        server_name _;  
        return 444; # Filter requests from other domain names and return 444 status code  
    }  

    server {  

        listen 80;  
        server_name www.a.com; # www.a.com domain name  
        location / {  
          proxy_pass http://localhost:8080; #  Corresponding port number: 8080  
       }  
    }  

    server {  

      listen 80;  
      server_name www.b.com; # www.b.com domain name  
      location / {  
          proxy_pass http://localhost:8081; #  Corresponding port number 8081  
      }  
  } 

One application listens on port 8080 and another on 8081. Depending on the domain name the client uses, Nginx matches the server_name and reverse-proxies to the corresponding application server. The principle of virtual hosting is to check whether the Host field in the HTTP request header matches a server_name. In addition, the server_name configuration can filter out requests from domain names that someone has maliciously pointed at your server.

III. Characteristics and advantages of Nginx

1. I/O multiplexing with epoll

Unlike multithreading, I/O multiplexing lets I/O operations on multiple descriptors be completed concurrently and alternately within a single thread; "multiplexing" here means reusing the same thread across system calls.
Implementations of I/O multiplexing: select, poll, epoll.
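As a sketch, the multiplexing model can be selected explicitly in the events block (the connection limit here is illustrative; on Linux, Nginx normally picks epoll by itself):

```nginx
events {
    use epoll;                 # select the epoll I/O multiplexing model (Linux)
    worker_connections 10240;  # illustrative cap on concurrent connections per worker
}
```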

2. Lightweight

Few functional modules
Modular code

3. CPU affinity

CPU affinity is a way of binding CPU cores to Nginx worker processes: each worker process is pinned to one CPU, which reduces the cache misses caused by switching CPUs and yields better performance.
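In Nginx this binding is configured with the worker_cpu_affinity directive; a minimal sketch for a 4-core machine (the masks are illustrative):

```nginx
worker_processes 4;                       # one worker per core
worker_cpu_affinity 0001 0010 0100 1000;  # pin workers to cores 0-3 via CPU bitmasks
# since nginx 1.9.10, "worker_cpu_affinity auto;" computes the binding automatically
```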

4. sendfile

With sendfile, file data is sent from kernel space directly to the network socket without being copied into user space (zero copy), saving memory copies and context switches.
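A minimal sketch of enabling sendfile in the http block (tcp_nopush is an optional companion directive):

```nginx
http {
    sendfile on;    # transfer files in kernel space, skipping the user-space copy
    tcp_nopush on;  # with sendfile, coalesce headers and file data into full packets
}
```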

Topics: Load Balancing, Nginx, Server