Foreword

 

I. Introduction to nginx

 

1. What is nginx and what can it do

  • Nginx is a high-performance HTTP and reverse proxy server that handles high concurrency well and withstands heavy load; reports indicate support for up to 50,000 concurrent connections.
  • It is characterized by low memory usage and strong concurrency; among web servers of its kind, nginx's concurrency performance stands out. Chinese sites using nginx include Baidu, Jingdong (JD), Sina, NetEase, Tencent, and Taobao.

 

2. Nginx as a web server

  • Nginx can serve as a web server for static pages and, via the CGI family of protocols, for dynamic languages such as Perl and PHP. It does not support Java directly; Java programs can only be served by pairing nginx with Tomcat. Nginx was developed with performance as its most important consideration: its implementation is very efficient, it withstands heavy load, and reports indicate support for up to 50,000 concurrent connections.

    https://lnmp.org/nginx.html

 

3. Forward proxy

Nginx can not only act as a reverse proxy and do load balancing; it can also be used as a forward proxy, for example to give LAN clients access to the Internet. Forward proxy: if you think of the Internet outside the LAN as a huge resource library, clients in the LAN must go through a proxy server to reach it; that proxy service is called a forward proxy.

  • Put simply: the process of accessing a target server through a proxy server configured on the client is a forward proxy.
  • The proxy server must be configured on the client in order to access the specified websites.
    Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_0.png
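nginx is not a full forward proxy out of the box (it speaks plain HTTP only; HTTPS CONNECT tunneling needs third-party modules), but a minimal sketch of the idea looks like this. The listening port and resolver address are assumptions for illustration:

```nginx
server {
    listen 8888;                     # clients set this host:port as their HTTP proxy
    resolver 8.8.8.8;                # nginx must resolve arbitrary destination hosts itself
    location / {
        proxy_pass http://$host$request_uri;   # relay the request to whatever site was asked for
    }
}
```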

 

4. Reverse proxy

  • With a reverse proxy, the client is unaware of the proxy, because the client needs no configuration at all to use it.
  • The client simply sends its request to the reverse proxy server; the reverse proxy selects a target server, obtains the data, and returns it to the client. Externally, the reverse proxy and the target server appear as a single server: only the proxy's address is exposed, hiding the real server's IP address.
    Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_1.png
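A minimal reverse proxy server block, as a sketch (the upstream address and server name here are placeholders):

```nginx
server {
    listen       80;
    server_name  www.example.com;             # the only address clients ever see
    location / {
        proxy_pass       http://127.0.0.1:8080;   # the hidden target server
        proxy_set_header Host       $host;        # pass the original host header upstream
        proxy_set_header X-Real-IP  $remote_addr; # let the backend log the real client IP
    }
}
```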

 

5. Load balancing

  • Instead of concentrating all requests on a single server, increase the number of servers and distribute the requests among them, spreading the load across different servers; this is what we call load balancing.
  • The client sends requests to the server; the server processes each request, possibly interacting with a database, and returns the result to the client.

This architectural pattern suits early-stage systems with relatively few concurrent requests, and its cost is low. However, as the amount of information, traffic, and data continues to grow and system services become more complex, this architecture makes the server respond to client requests ever more slowly, and when the volume is particularly large the server may crash outright. This is clearly a server performance bottleneck, so how can it be solved?

The first thing we might think of is upgrading the server's configuration: a faster CPU, more memory, and so on, improving the machine's physical performance. But we know that with Moore's Law increasingly breaking down, hardware performance gains can no longer keep up with growing demand. The clearest example is Tmall's Double Eleven, where the instantaneous traffic to a hot item is enormous; in an architecture like the one above, even a machine at top physical configuration cannot meet that need. So what can be done? Having ruled out scaling up the server's physical configuration, that is, the vertical approach, we can instead increase the number of servers horizontally; this is where the concept of a cluster arises. When a single server cannot cope, we add servers and distribute requests among them: rather than concentrating requests on one server, the load is spread across many, which is what we call load balancing.
Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_2.png

Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_3.png

 

6. Dynamic/static separation

To speed up website response, dynamic pages and static pages can be served by different servers, accelerating parsing and reducing the pressure on the original single server.
Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_4.png

 

II. Installing Nginx (Linux, with CentOS as the example)

For the nginx installation, I have prepared the packages needed, for convenience:

https://download.csdn.net/download/qq_40036754/11891855

I originally wanted to put them on Baidu Cloud, but that was troublesome, so I uploaded them directly to my resources; you can also contact me directly and I will send them to you.

 

1. Preparation work

  • Start the virtual machine and use FinalShell to connect to the Linux operating system.
  • Download nginx from the official site:

    http://nginx.org/
    Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_5.png
  • Install the dependencies first, then install nginx itself.
  • Dependency packages: pcre-8.37.tar.gz, openssl-1.0.1t.tar.gz, zlib-1.2.8.tar.gz, nginx-1.11.1.tar.gz. I provide these as well.

 

2. Start the installation

  • There are two approaches: direct download, or uploading and unpacking the source archive. The unpacking approach is used for most of these packages.
  • My installation path: /usr/feng/
  1. Install pcre
    Method one: wget http://downloads.sourceforge.net/project/pcre/pcre/8.37/pcre-8.37.tar.gz
    Method two (used here): upload the source archive, then decompress, compile, and install (the usual trilogy):
    1) extract the archive and enter the pcre directory;
    2) run ./configure;
    3) run make && make install
  2. Install openssl
    Download address: http://distfiles.macports.org/openssl/
    1) extract the archive and enter the openssl directory;
    2) run ./configure;
    3) run make && make install
  3. Install zlib
    1) extract the archive and enter the zlib directory;
    2) run ./configure;
    3) run make && make install
  4. Install nginx
    1) extract the archive and enter the nginx directory;
    2) run ./configure;
    3) run make && make install
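The decompress/configure/make "trilogy" is identical for each package, so it can be scripted. The sketch below only prints the commands (a dry run) so they can be reviewed before executing; the /usr/feng path and package versions follow this tutorial:

```shell
#!/bin/bash
# Print the build-and-install steps for one source package (dry run).
build_steps() {
    local pkg="$1"
    printf 'tar -zxvf /usr/feng/%s.tar.gz -C /usr/feng\n' "$pkg"
    printf 'cd /usr/feng/%s\n' "$pkg"
    printf './configure\n'
    printf 'make && make install\n'
}

# Dependencies first, nginx last.
for pkg in pcre-8.37 openssl-1.0.1t zlib-1.2.8 nginx-1.11.1; do
    build_steps "$pkg"
done
```

Executing the same four commands per package (instead of printing them) performs the real installation.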

 

3. Run nginx

  • After installation, nginx is placed in the nginx folder under /usr/local; this directory is generated automatically.
  • Enter this directory:
cd /usr/local/nginx

 

 

The contents of the directory are as follows
Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_6.png

  • Go into the sbin folder, which contains two files: nginx and nginx.old.
  • Execute the command ./nginx to start it.
  • Verify the start: ps -ef | grep nginx
    Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_7.png Already started.
  • Check nginx's default port (80 by default) and test it from a web page, as with Tomcat.
  • To view the port, open the nginx.conf file under /usr/local/nginx/conf; this file is also nginx's configuration file. Under vim, it looks as follows:
    Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_8.png
  • Enter IP:80 in the browser to display:
    Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_9.png

 

4. Firewall issues

When accessing nginx on Linux from a Windows system, it cannot be reached by default because of the firewall. Either (1) shut down the firewall, or (2) open the port to be accessed, port 80.

View open port number

firewall-cmd --list-all

 

 

Set the open port number

firewall-cmd --add-service=http --permanent
firewall-cmd --add-port=80/tcp --permanent

 

 

Restart the firewall

firewall-cmd --reload

 

 

 

III. Nginx common commands and configuration files

 

1. Nginx common commands

 

a. Using nginx to operate the command premise

Prerequisite for using the nginx operation commands: you must be in the sbin folder under nginx's automatically generated directory.
nginx involves two directories:
First: the installation directory; I placed it at:

/usr/feng/

 

 

Second: the automatically generated directory:

/usr/local/nginx/

 

 

 

b. View the version number of nginx

./nginx -v

 

 

 

c. Start nginx

./nginx

 

 

Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_11.png

 

d. Close nginx

./nginx -s stop

 

 

Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_12.png

 

e. Reload nginx

Execute the command in the directory /usr/local/nginx/sbin; there is no need to restart the server, the configuration is reloaded automatically.

./nginx -s reload

 

 

 

2. Nginx configuration file

 

a. Configuration file location

/usr/local/nginx/conf/nginx.conf
Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_13.png

 

b. Components of nginx

The configuration file contains many lines beginning with #; these are comments. After removing all the commented sections, the streamlined content is as follows:

worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;

    server {
        listen       80;
        server_name  localhost;

        location / {
            root   html;
            index  index.html index.htm;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}
 

  • The nginx configuration file has three parts

 

Part I: the global block

This part runs from the start of the configuration file to the events block and mainly contains directives that affect the nginx server as a whole: the user (and group) that runs the server, the number of worker processes allowed, the PID file path, the log paths and formats, the inclusion of other configuration files, and so on.
For example, the first line above is configured:

  worker_processes  1;

 

 

This is a key setting for nginx's concurrent processing: the larger the worker_processes value, the more concurrency can be supported, though it is constrained by the hardware, software, and other aspects of the environment.
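On multi-core machines a commonly used value (an addition here, not from the original text) lets nginx size the worker pool to the CPU count:

```nginx
worker_processes  auto;   # spawn one worker per CPU core (supported since nginx 1.3.8)
```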

 

Part II: the events block

For example, the above configuration:

events {
    worker_connections  1024;
}

 

 

The directives in the events block mainly affect the network connections between the nginx server and its users. Common settings include whether to serialize the accepting of connections across multiple worker processes, whether a worker process may accept several connections at once, which event-driven model to use for connection requests, and the maximum number of connections each worker process may hold at the same time.
The example above sets each worker process's maximum number of connections to 1024.
This part of the configuration has a large impact on nginx's performance and should be tuned flexibly in practice.

 

Part III: the http block

http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;

    server {
        listen       80;
        server_name  localhost;

        location / {
            root   html;
            index  index.html index.htm;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

 

 

This is the most frequently configured part of the nginx server: most proxying, caching, and logging features live here, along with the configuration of third-party modules.

Note that the http block in turn contains an http global section and server blocks.

  • Http global section
    Its directives cover file inclusion, MIME-TYPE definitions, log customization, connection timeouts, the maximum number of requests per connection, and so on.
  • Server block
    This is closely tied to virtual hosts. From the user's point of view, a virtual host is indistinguishable from a standalone hardware host; the technology exists to save on the cost of Internet server hardware.
    Each http block can contain multiple server blocks, and each server block is equivalent to one virtual host.
    Each server block in turn has a global server section and can contain multiple location blocks at the same time.
  1. Global server section
    The most common configuration here is this virtual host's listening port and its name or IP.
  2. Location block
    A server block can be configured with multiple location blocks.
    Their main job is to take the request string received by the nginx server (for example server_name/uri-string) and match the part beyond the virtual host name, which can also be an IP alias (for example the /uri-string part), against specific rules to process particular requests. Address redirection, data caching, response control, and the configuration of many third-party modules also happen here.

 

IV. Nginx reverse proxy configuration example 1.1

 

1. Realize the effect

  • Open a browser, enter the address www.123.com in the address bar, and jump to the Tomcat main page on the Linux system.

 

2. Preparation

(1) Install Tomcat on the Linux system, using the default port 8080. In my case 8080 was occupied by another application, so I changed the port to 8081: in the server.xml configuration file in the conf directory, change the port as shown below. (There is a similar Connector tag further down in the file; the one to modify is the one with protocol="HTTP/1.1".)

<Connector port="8081" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443" />

 

 

  • The Tomcat installation file is placed in the Linux system and unzipped.
    Tomcat path: /usr/feng/apach-tomcat/tomcat8081
  • Go to Tomcat's bin directory and start the server with ./startup.sh.

(2) Open the ports for access (I did not need this here)

  • firewall-cmd --add-port=8080/tcp --permanent
  • firewall-cmd --reload
  • View the open port numbers: firewall-cmd --list-all

(3) Access the Tomcat server through a browser on the Windows system.
Don't forget to start Tomcat; in the bin directory, use the command:

./startup.sh

 

 

Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_14.png

 

3. Analysis of the access process

Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_15.png

 

4. The specific configuration

 

a. The first step: configure the domain-name-to-IP mapping in the hosts file of the Windows system.

Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_16.png

Add the entry to the hosts file:
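The entry pairs the server's IP with the test domain; with this tutorial's server IP it would presumably look like:

```
208.208.128.122 www.123.com
```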

 

b. The second step: configure request forwarding in nginx (the reverse proxy configuration)

Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_18.png
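The forwarding rule itself is only shown as a screenshot above; a sketch of what that server block presumably contains (port 8081 matches the Tomcat setup from step 2):

```nginx
server {
    listen       80;
    server_name  208.208.128.122;            # this machine's IP

    location / {
        proxy_pass  http://127.0.0.1:8081;   # hand every request to the local Tomcat
    }
}
```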

 

5. The final test

As configured above, nginx listens on port 80 and we access the domain www.123.com; when no port number is given, port 80 is the default, so accessing the domain jumps to 127.0.0.1:8081. Enter www.123.com in the browser, and the result is as follows:

Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_19.png

 

V. Nginx reverse proxy configuration example 1.2

 

1. Realize the effect

Implementation effect: use an nginx reverse proxy to jump to services on different ports according to the requested path.
nginx listens on port 9001;
visiting http://127.0.0.1:9001/edu/ jumps directly to 127.0.0.1:8081;
visiting http://127.0.0.1:9001/vod/ jumps directly to 127.0.0.1:8082.

 

2. Preparation

 

a. The first step: two Tomcat ports and test pages

  • Prepare two Tomcats, one on port 8081 and one on port 8082. Create two folders, tomcat8081 and tomcat8082, under /usr/feng/apach-tomcat/, upload the Tomcat archive into each, and unpack and install. For the 8081 Tomcat, changing only the HTTP protocol's default port number is enough; then start it.
    For the 8082 Tomcat, three ports must be changed; if you change only one, it will not start. I have tested this: if only the HTTP default port is modified, only one of 8081 and 8082 will start, because the remaining defaults (8080 and the others) clash. (The folders here were not created from scratch; they reuse the setup from the first example above, with small changes.)
  1. Unpack tomcat8081, go to /bin, and start it with the command ./startup.sh
  2. For tomcat8082, edit the file /conf/server.xml (vim server.xml) and modify it as follows:
    1) change the server's default shutdown port, 8005, to 8091Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_20.png
    2) change the HTTP protocol's default port, 8080, to 8082
    Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_21.png

    3) change the AJP protocol's default port, 8009, to 9001
    Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_22.png

  • Also prepare the test pages.
    Write an a.html page.
    For tomcat8081, put it in the directory /webapp/vod, with the content:
<h1>fengfanchen-nginx-8081!!!</h1>

 

 

For tomcat8082, put it in the directory /webapp/edu, with the content:

<h1>fengfanchen-nginx-8082!!!</h1>

 

 

  • Test page
    Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_23.pngNginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_24.png

 

b. The second step is to modify the nginx configuration file.

Modify the nginx configuration file: add a server{} block inside the http block and adjust the comments as shown.
Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_25.png

After the modification is successful
Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_26.png
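Since the change is only shown as screenshots, here is a sketch consistent with the effect described in section 1 (listen on 9001, route by path):

```nginx
server {
    listen       9001;
    server_name  208.208.128.122;

    location ~ /edu/ {
        proxy_pass  http://127.0.0.1:8081;   # paths containing /edu/ go to Tomcat 8081
    }
    location ~ /vod/ {
        proxy_pass  http://127.0.0.1:8082;   # paths containing /vod/ go to Tomcat 8082
    }
}
```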

  • Ports involved: nginx listening port: 9001, tomcat8081 port: 8081, tomcat8082 port: 8082.
  • Test Results
    Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_27.pngNginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_28.png
  • The location directive

This directive is used to match URLs.
The syntax is as follows:

1. = : for a uri without regular expressions, the request string must match the uri exactly; on a successful match, the search stops and the request is processed immediately.
2. ~ : indicates that the uri contains a regular expression, matched case-sensitively.
3. ~* : indicates that the uri contains a regular expression, matched case-insensitively.
4. ^~ : for a uri without regular expressions, asks the nginx server to find the location with the highest prefix match against the request string and use it immediately, instead of going on to try the regular-expression locations in the block.

Note: if the uri contains a regular expression, it must be marked with ~ or ~*.
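The four modifiers side by side, as an illustrative fragment (the paths here are made up):

```nginx
location = /exact        { }   # 1: literal match only; checked first, stops the search
location ~ \.php$        { }   # 2: regex, case-sensitive
location ~* \.(png|jpg)$ { }   # 3: regex, case-insensitive
location ^~ /static/     { }   # 4: prefix match that skips the regex locations
location /               { }   # plain prefix match, the catch-all fallback
```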

 

VI. Nginx load balancing configuration example 2

 

1. Realize the effect

Enter http://208.208.128.122/edu/a.html in the browser address bar; the load-balancing effect distributes the requests evenly between ports 8081 and 8082.

 

2. Preparation

 

a. Prepare two tomcat servers

  • Prepare two Tomcat servers, one on 8081 and one on 8082.
  • The second reverse-proxy example above has already been configured successfully, but a few additions are needed, as follows.

 

b. Modify one place

  • In the webapps directory of both Tomcats, create a folder named edu, and inside it create the page a.html for testing.
  • Since the second example already gave the 8082 Tomcat an edu folder, it only needs to be created under the 8081 one.
    Then, from inside the vod folder, use the command:
cp a.html ../edu/

 

 

This completes it. To view:

cd ../edu/  # enter the edu directory
cat a.html  # view the contents

 

 

 

c. Test page

Test URL

http://208.208.128.122:8081/edu/a.html

 

 

Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_29.png

http://208.208.128.122:8082/edu/a.html

 

 

Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_30.png

 

3. Load balancing configuration in the nginx configuration file

This modifies the configuration from the first example:
Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_31.png

    upstream myserver {
        server 208.208.128.122:8081;
        server 208.208.128.122:8082;
    }
    server {
        listen       80;
        server_name  208.208.128.122;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        location / {
            root   html;
            proxy_pass   http://myserver;
            #proxy_pass   http://127.0.0.1:8081;
            index  index.html index.htm;
        }
    }

 

 

 

4. Final test

Test url

http://208.208.128.122/edu/a.html

 

 

Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_32.png

Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_33.png

 

5. nginx server allocation strategies

With the explosive growth of Internet information, load balancing is no longer an unfamiliar topic. As the name implies, load balancing distributes load across different service units, ensuring both service availability and fast response, and giving users a good experience. Fast-growing traffic and data volumes have spawned a wide range of load-balancing products. Many professional load-balancing hardware devices work well but are expensive, which has made load-balancing software popular; nginx is one such option, alongside LVS, HAProxy, and others. nginx provides several distribution methods (strategies):

 

a. Round robin (the default)

Each request is assigned to a different back-end server in turn, in chronological order; if a back-end server goes down, it is removed automatically.
Configuration method:
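Round robin needs no extra directive: a bare upstream block is already round-robin. Reusing this tutorial's addresses:

```nginx
upstream myserver {
    server 208.208.128.122:8081;   # requests alternate between these two
    server 208.208.128.122:8082;
}
```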

 

b. weight

weight is the server's weight; the default is 1. The higher the weight, the more client requests are assigned to that server.

    upstream myserver {
        server 208.208.128.122:8081 weight=10;   # here
        server 208.208.128.122:8082 weight=10;
    }
    server {
        listen       80;
        server_name  208.208.128.122;
        location / {
            root   html;
            proxy_pass   http://myserver;
            index  index.html index.htm;
        }
    }

 

 

 

c. ip_hash

Each request is assigned according to a hash of the client's IP address, so each visitor consistently reaches the same back-end server.

    upstream myserver {
        ip_hash;                        # here
        server 208.208.128.122:8081;
        server 208.208.128.122:8082;
    }
    server {
        listen       80;
        server_name  208.208.128.122;
        location / {
            root   html;
            proxy_pass   http://myserver;
            index  index.html index.htm;
        }
    }

 

 

 

d. fair (third party)

Requests are assigned according to the back-end servers' response times; servers that respond faster are assigned first. (This requires the third-party fair module.)

    upstream myserver {
        server 208.208.128.122:8081;
        server 208.208.128.122:8082;
        fair;                           # here
    }
    server {
        listen       80;
        server_name  208.208.128.122;
        location / {
            root   html;
            proxy_pass   http://myserver;
            index  index.html index.htm;
        }
    }

 

 

 

VII. Nginx dynamic/static separation configuration example 3

 

1. What is dynamic/static separation?

Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_34.png

In simple terms, nginx dynamic/static separation means separating dynamic requests from static ones; it should not be understood as merely putting dynamic pages and static pages on physically separate machines. Strictly speaking, it is dynamic requests that are separated from static requests: for example, nginx handles the static pages while Tomcat handles the dynamic ones. From the perspective of current implementations, there are roughly two approaches:

  • One is to move the static files onto a separate domain name served by independent servers; this is the mainstream, recommended scheme.
  • The other is to publish the dynamic and static files mixed together and have nginx separate them.

Different request forwarding is achieved through location rules keyed on file suffixes. With the expires parameter you can set a browser-cache expiry time, reducing requests and traffic to the server. The meaning of Expires: it gives a resource an expiration time, so the browser can check freshness by itself without a round trip to the server, generating no extra traffic. This approach is ideal for resources that change rarely (if a file is updated frequently, Expires caching is not recommended). I set 3d here, meaning: for accesses within 3 days, the browser sends a request comparing the file's last-modified time with the server's; if it is unchanged, the file is not fetched again and status code 304 is returned, while if it has been modified, the file is re-downloaded from the server with status code 200.

 

2. Preparation

  • Prepare static resources in the Linux system for access.
    Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_35.png
  1. a.html in the www folder:
<h1>fengfanchen-test-html</h1>

 

 

  2. 01.jpg in the image folder (my photo!)
    Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_36.png

 

3. Specific configuration

 

a. Configure in the nginx configuration file

Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_37.png
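The screenshot is the only record of that configuration; a sketch of the usual form follows. The /data/ root is an assumption; substitute wherever the www and image folders were actually created:

```nginx
server {
    listen       80;
    server_name  208.208.128.122;

    location /www/ {
        root   /data/;                # serves /data/www/a.html
        index  index.html index.htm;
    }
    location /image/ {
        root   /data/;                # serves /data/image/01.jpg
        autoindex on;                 # list the directory contents, as in the /image/ test below
        expires 3d;                   # the 3-day browser cache from the note above
    }
}
```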

 

4. Final test

 

a. Test the image

http://208.208.128.122/image/
http://208.208.128.122/image/01.jpg

 

 

Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_38.png

Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_39.png

 

b. Test www

http://208.208.128.122/www/a.html

 

 

Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_40.png

 

VIII. Nginx's high-availability cluster

 

1. What is nginx high availability?

Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_41.png

Configuration example process:

  1. Two nginx servers are needed
  2. keepalived is needed
  3. A virtual IP is needed

 

2. Configure high availability preparations

  1. Two servers are required: 208.208.128.122 and 208.208.128.85
  2. Install nginx on both servers (following the process above); the second server's default port is changed to 9001. Run and test as follows:
    Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_42.png
  3. Install keepalived on both servers

 

3. Install keepalived on both servers

 

a) Installation:

The first way: command installation

yum install keepalived -y
# check the version:
rpm -q -a keepalived

 

 

The second way: install from the source package (this is what I use here).
Upload the archive to /usr/feng/ and run:

cd /usr/feng/
tar -zxvf keepalived-2.0.18.tar.gz
cd keepalived-2.0.18
./configure
make && make install

 

 

 

b) configuration file

After installation, a keepalived directory is generated under /etc, containing the file keepalived.conf.
This is the main configuration file.
The master-slave setup is configured mainly in this file.

 

Complete high availability configuration (master-slave configuration)

 

a) Modify the keepalived.conf configuration file

Modify the /etc/keepalived/keepalived.conf configuration file:

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 208.208.128.122
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_script chk_http_port {
   script "/usr/local/src/nginx_check.sh"
   interval 2    # interval (seconds) between runs of the detection script
   weight 2
}

vrrp_instance VI_1 {
    state MASTER           # change MASTER to BACKUP on the backup server
    interface ens192       # network interface
    virtual_router_id 51   # must be identical on master and backup
    priority 100           # master and backup take different priorities; the master's is higher
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        208.208.128.50     # VRRP virtual IP address
    }
}
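On the backup server (208.208.128.85 here), the configuration file is the same except for `state` and `priority`; below is a minimal sketch of the backup's vrrp_instance block, with all other values kept identical to the master's. The value priority 90 is an assumption — the source only requires that the backup's priority be lower than the master's 100.

```
vrrp_instance VI_1 {
    state BACKUP           # backup role instead of MASTER
    interface ens192       # network interface
    virtual_router_id 51   # must match the master
    priority 90            # any value lower than the master's 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        208.208.128.50     # same VRRP virtual IP as the master
    }
}
```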
 

b) Add a detection script

Add the detection script (nginx_check.sh, the path referenced in keepalived.conf) under /usr/local/src:

#!/bin/bash
# Count running nginx processes; 0 means nginx is down
A=`ps -C nginx --no-header | wc -l`
if [ $A -eq 0 ];then
    # nginx is down; try to restart it
    /usr/local/nginx/sbin/nginx
    sleep 2
    # if nginx still is not running, kill keepalived so the VIP fails over
    if [ `ps -C nginx --no-header | wc -l` -eq 0 ];then
        killall keepalived
    fi
fi
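The core of the script is the process count. The same liveness check can be run standalone to see what keepalived's script will observe (the process name nginx is assumed, as in the script above):

```shell
#!/bin/bash
# Count running nginx processes by command name.
# ps -C selects by exact command name; --no-header drops the header line,
# so wc -l counts exactly one line per matching process. 0 means nginx is down.
count=$(ps -C nginx --no-header | wc -l)
echo "nginx processes: $count"
```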

 

c) Start nginx and keepalived

Start nginx and keepalived on both servers:
start nginx: ./nginx (from /usr/local/nginx/sbin)
start keepalived: systemctl start keepalived.service
Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_43.png

The 208.208.128.85 server is started the same way.

 

4. Final test

 

a) Enter the virtual IP address 208.208.128.50 in the browser address bar.

Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_44.png

Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_45.png

 

b) Stop nginx and keepalived on the main server (208.208.128.122), then visit 208.208.128.50 again.

Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_46.png

Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_47.png

 

VIII. The principle of Nginx

 

1. Master and worker

  • After nginx starts, it consists of two kinds of processes: a master and workers.
  • An nginx instance has only one master, but it can have multiple workers.
    Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_48.png
  • Incoming requests are managed by the master, and the workers compete with each other to claim and handle them.
    Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_49.pngNginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_50.png

 

2. The benefits of the master-workers mechanism

  • First, each worker being an independent process means no locking is needed, which saves the cost of locks and also makes programming and troubleshooting easier.
  • Hot deployment is supported: nginx -s reload reloads the configuration without interrupting service.
  • Second, independent processes isolate failures from one another. If a worker exits, the others keep working and service is not interrupted while the master starts a new worker. Of course, a worker's abnormal exit is certainly a program bug; when it happens, all requests on that worker fail, but requests on the other workers are unaffected, which reduces risk.

 

3. How many workers should be set?

Like redis, nginx uses an I/O multiplexing mechanism. Each worker is a separate process with only one main thread, handling requests asynchronously and non-blockingly, so even tens of thousands of requests are no problem. Each worker's thread can drive one CPU core to its full performance, so the most appropriate number of workers equals the number of CPU cores on the server: fewer wastes CPU, while more incurs the cost of frequent context switching.

  • Set the number of workers equal to the number of CPU cores on the server.
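In nginx.conf this is a single directive; a minimal sketch is below. The auto value (supported by modern nginx versions) lets nginx detect the core count itself instead of hard-coding a number:

```
# main context of nginx.conf
worker_processes auto;         # one worker per CPU core

events {
    worker_connections 1024;   # maximum connections per worker
}
```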

 

4. Number of connections: worker_connections

First: how many worker connections does sending one request occupy?

  • Answer: 2 or 4. A request for static content occupies 2 connections; when nginx also opens a connection to a backend as a reverse proxy, it occupies 4.

Second: nginx has one master and four workers, and each worker supports a maximum of 1024 connections. What is the maximum supported concurrency?

  • The maximum concurrency for static access is worker_connections * worker_processes / 2.
  • With HTTP as a reverse proxy, the maximum concurrency is worker_connections * worker_processes / 4.

worker_connections is the maximum number of connections each worker process can establish, so the maximum number of connections one nginx instance can hold is worker_connections * worker_processes. Note that this is the connection limit, not the request concurrency. For browsers speaking HTTP/1.1, each access normally occupies two connections, so the maximum concurrency for static access is worker_connections * worker_processes / 2; when HTTP is reverse-proxied, it is worker_connections * worker_processes / 4, because as a reverse proxy each concurrent request establishes both a connection to the client and a connection to the backend service, occupying twice as many connections.
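The two formulas can be checked with a quick worked example using the numbers from the question above (four workers, 1024 connections each):

```shell
worker_processes=4
worker_connections=1024

# total connections one nginx instance can hold
echo $(( worker_connections * worker_processes ))       # 4096
# static access: each request occupies 2 connections
echo $(( worker_connections * worker_processes / 2 ))   # 2048
# reverse proxy: each request occupies 4 connections
echo $(( worker_connections * worker_processes / 4 ))   # 1024
```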

Nginx_install,_forward_proxy,_reverse_proxy,_load_balancing_Common_commands_and_configuration_files_51.png

The article was last published on: 2019-10-25 14:30:23

Original link: https://blog.csdn.net/qq_40036754/article/details/102463099