1. What is Nginx?

Nginx is a high-performance HTTP and reverse proxy server, and also an IMAP/POP3/SMTP mail proxy server.

Nginx is a lightweight web server, reverse proxy server, and email (IMAP/POP3) proxy server.
It is one of the most widely used web and proxy servers; large sites such as Taobao, Sina, NetEase, and Xunlei (Thunder) all use it.

2. Why use Nginx?

Advantages:

Cross-platform, with simple configuration
Non-blocking, high-concurrency connections: it handles 20,000–30,000 concurrent connections in practice, and official figures claim support for up to 50,000 concurrent connections
Low memory consumption: 10 nginx processes take only about 150 MB of memory
Low cost: open source
Built-in health checks: if one backend server goes down, the health check detects it, and subsequent requests are not sent to the failed server but re-routed to another node
Saves bandwidth: supports GZIP compression, and browser-side local caching can be enabled
High stability: the probability of downtime is very small
master/worker architecture: one master process spawns one or more worker processes
Requests are received asynchronously: the browser sends its request to nginx, which first receives the complete request and only then forwards it to the backend web server, greatly reducing the load on the backend
Likewise, nginx receives the data returned by the backend web server and sends it on to the browser client
Low network dependency: as long as the nodes are reachable (ping succeeds), load balancing can be achieved
Multiple nginx servers can be deployed
Event-driven: communication uses the epoll model
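The bandwidth-saving and browser-caching advantages above map to a small amount of configuration. A minimal sketch using standard nginx directives (the specific values are illustrative, not recommendations):

```nginx
# Enable gzip compression for text-based responses (saves bandwidth)
gzip            on;
gzip_min_length 1k;            # don't bother compressing tiny responses
gzip_comp_level 5;             # CPU vs. compression-ratio trade-off
gzip_types      text/plain text/css application/javascript application/json;

# Let browsers cache static assets locally
location ~* \.(css|js|png|jpg)$ {
    expires 7d;                # adds Cache-Control / Expires headers
}
```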
3. Why is Nginx's performance so high?

It benefits from its event-handling mechanism: asynchronous, non-blocking event handling based on the epoll model, which provides a ready queue so that connections are serviced as their events become ready.
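The epoll "ready queue" idea can be illustrated in a few lines of Python using the standard-library select.epoll wrapper (Linux only). This is a toy demonstration of the mechanism, not nginx's implementation:

```python
import select
import socket

# A connected socket pair stands in for a client connection.
server_side, client_side = socket.socketpair()
server_side.setblocking(False)

# epoll provides the ready queue: we register interest in read events
# and are woken only when data is actually available (no busy waiting).
ep = select.epoll()
ep.register(server_side.fileno(), select.EPOLLIN)

assert ep.poll(timeout=0) == []            # nothing ready yet

client_side.send(b"GET / HTTP/1.0\r\n\r\n")
events = ep.poll(timeout=1)                # wakes up: the fd is now readable
for fd, event in events:
    if event & select.EPOLLIN:
        data = server_side.recv(4096)
        print(data.decode().splitlines()[0])   # -> GET / HTTP/1.0

ep.close()
server_side.close()
client_side.close()
```

One worker can multiplex thousands of such registered connections this way, which is what makes the single-threaded worker model viable.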

4. How does Nginx achieve high concurrency?

After starting nginx with service nginx start, run ps -ef | grep nginx and you will see that Nginx has one master process and several worker processes. The workers are equal peers, all forked by the master. In the master, the listening socket (listenfd) is established first, and then multiple worker processes are forked. When a user connects to the nginx service, each worker's listenfd becomes readable, and the workers race to grab a mutex called accept_mutex. The accept_mutex is mutually exclusive: once one worker obtains it, the others back off. The worker holding accept_mutex then runs the cycle "read request – parse request – process request"; after the data has been fully returned to the client (the target page appears on the screen), the request is complete.

In this way nginx has worker processes compete for user requests and, combined with asynchronous non-blocking I/O, achieves high concurrency.
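The master/worker and accept_mutex behavior described above is controlled by a handful of directives; a minimal sketch (the values are illustrative):

```nginx
worker_processes auto;          # master forks one worker per CPU core

events {
    use epoll;                  # event model (Linux)
    worker_connections 10240;   # max simultaneous connections per worker
    accept_mutex on;            # workers take turns accepting new connections
}
```

Note that in recent nginx versions accept_mutex defaults to off (since 1.11.3), as modern kernels handle the "thundering herd" problem better on their own.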

5. Why not use multithreading?

Because thread creation and context switching are very resource-intensive: threads consume a lot of memory, and context switches consume a lot of CPU. Using the epoll model avoids these costs by handling many connections within a single process.

6. How does Nginx process a request?

First, when nginx starts, it parses the configuration file to obtain the ports and IP addresses to listen on. Then, in nginx's master process:

It first initializes the listening socket (creates the socket, sets options such as SO_REUSEADDR, binds it to the specified IP address and port, and then calls listen).

It then forks a number of child processes (an existing process calls the fork function to create a new process; a process created by fork is called a child process).

The child processes then compete to accept new connections. At this point a client can initiate a connection to nginx. Once the client has completed the three-way handshake and established a connection with nginx,

one of the child processes will accept successfully, obtain the socket for the established connection, and create nginx's encapsulation of the connection, the ngx_connection_t struct.

Next, nginx sets the read/write event handlers and registers read/write events to exchange data with the client. Finally, either nginx or the client actively closes the connection; at this point, the connection's life ends.
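The socket setup steps above (create, set SO_REUSEADDR, bind, listen, accept, exchange data, close) can be sketched in plain Python. This illustrates the system-call sequence only, not nginx's actual C code:

```python
import socket

# 1. Create the listening socket and set SO_REUSEADDR,
#    mirroring nginx's master-process initialization.
listenfd = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listenfd.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listenfd.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
listenfd.listen(128)                 # backlog of pending connections
host, port = listenfd.getsockname()

# 2. A client connects; in nginx, worker processes would now
#    compete to accept this connection.
client = socket.create_connection((host, port))
conn, addr = listenfd.accept()       # the "winning" process gets the socket

# 3. Exchange data over the accepted connection socket.
client.sendall(b"hello")
assert conn.recv(1024) == b"hello"

# 4. Either side actively closes; the connection's life ends.
client.close()
conn.close()
listenfd.close()
```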

7. Forward proxy

A forward proxy is a server positioned between the client and the origin server. To fetch content from the origin server, the client sends a request to the proxy and specifies the target (the origin server).

The proxy then forwards the request to the origin server and returns the obtained content to the client. Only the client uses (and is aware of) the forward proxy.

Forward proxy in one sentence: the proxy acts on behalf of the client.
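nginx is not primarily a forward proxy, but a basic plain-HTTP forward proxy can be sketched with standard directives (HTTPS CONNECT tunneling requires third-party modules; the listen port and DNS resolver address here are assumptions):

```nginx
server {
    listen 3128;
    # nginx must resolve whichever host the client asked for
    resolver 8.8.8.8;

    location / {
        # Forward the request to the origin server named by the client
        proxy_pass http://$host$request_uri;
    }
}
```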

8. Reverse proxy

In reverse proxy (Reverse Proxy) mode, the proxy server accepts connection requests from the internet and forwards them to servers on the internal network,

then returns the results from those servers to the internet client that requested the connection. In this case, the proxy server acts as a reverse proxy server.

Reverse proxy in one sentence: the proxy acts on behalf of the server.
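A minimal reverse-proxy sketch (the backend address 127.0.0.1:8080 and server name are assumptions):

```nginx
server {
    listen 80;
    server_name example.com;

    location / {
        # Clients talk to nginx; nginx talks to the internal backend.
        proxy_pass http://127.0.0.1:8080;
        # Preserve the original host and client address for the backend
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```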

9. Separation of dynamic and static resources

Dynamic/static separation means distinguishing, by certain rules, the resources in a dynamic website that rarely change from those that change frequently. Once dynamic and static resources are separated,

we can cache static resources according to their characteristics; this is the core idea behind static optimization of a website.

Dynamic/static separation in one sentence: separate dynamic files from static files.

10. Why separate dynamic and static resources?

In software development, some requests need backend processing (e.g. .jsp, .do), while others do not (e.g. css, html, jpg, js files).

Files that require no backend processing are called static files; the rest are dynamic files. You might ask: why not just let the backend serve the static files too and be done with it?

That works, of course, but the number of requests hitting the backend increases significantly. When we need faster resource response times, we should use this strategy to solve the problem.

Dynamic/static separation deploys static resources (HTML, JavaScript, CSS, images, etc.) separately from the backend application, which speeds up users' access to static content and reduces the number of requests reaching the backend application.

Here we serve static resources directly from nginx and forward dynamic requests to the Tomcat server.
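The static-from-nginx, dynamic-to-Tomcat setup described here can be sketched as follows (the filesystem path and the Tomcat address are assumptions):

```nginx
server {
    listen 80;

    # Static resources: served directly from disk by nginx
    location ~* \.(html|css|js|png|jpg|gif)$ {
        root /data/static;
        expires 7d;                    # let browsers cache static assets
    }

    # Dynamic requests: forwarded to the Tomcat backend
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }
}
```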

11. Load balancing

Load balancing means the proxy server distributes the requests it receives evenly across multiple servers.

Load balancing mainly addresses network congestion, improves server response times, serves content from nearby nodes for better access quality, and reduces the concurrency pressure on backend servers.
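In nginx this is expressed with an upstream block; a minimal sketch (server addresses and weights are illustrative):

```nginx
# Pool of backend servers; the default algorithm is weighted round-robin
upstream backend {
    server 192.168.0.11:8080 weight=3;   # receives ~3x the traffic
    server 192.168.0.12:8080;
    server 192.168.0.13:8080 backup;     # used only if the others fail
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
```

Other built-in balancing methods include ip_hash (sticky by client IP) and least_conn (fewest active connections).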


©2020 ioDraw All rights reserved