As the name suggests, a message queue is mainly used to send, receive, and process messages, but it solves far more than just the problem of communication between applications.
<>1 The origin of message queues
We see all kinds of conveyor belts in factories. In many production lines they have replaced labor-intensive back-and-forth carrying: a job is divided into several steps, and the required materials are passed between steps on the belt. In programming terms, the invention of the conveyor belt solved the "communication" problem between upstream and downstream steps.
The conveyor belt has genuinely reduced the socially necessary labor time of production and significantly improved the efficiency of industrial society. But is it really all benefit and no harm?
We find that each step produces at a different speed. Sometimes upstream material arrives just as a worker is still busy with the previous batch and has no time to receive it. Workers at different steps must coordinate when to place materials on the belt; if upstream and downstream speeds are inconsistent, workers end up waiting for each other to make sure semi-finished material does not pile up on the belt with no one to receive it.
To solve this, a temporary warehouse is set up downstream of each step. Upstream workers no longer wait for downstream workers to be free: finished semi-products can be placed on the belt at any time, anything not picked up is stored temporarily in the warehouse, and downstream workers retrieve it whenever they can.
The warehouse plays a "caching" role in this "communication" process.
This is the real-world version of a message queue.
<>2 Application scenarios of message queues
Having understood the origin of message queues, let's look at their use today: when do we actually need an MQ?
<>2.1 Asynchronous processing
Take the seckill (flash sale) system, a frequent interview topic. A seckill request can involve many steps:
* Risk control
* Lock inventory
* Generate order
* Send SMS notification
* Update statistics
The unoptimized flow is: the APP sends a request to the gateway, which calls the above steps in turn and then returns the result to the APP.
In fact, only risk control and inventory locking determine whether the seckill succeeds. As soon as the user's request passes risk control and the server completes the inventory lock, the seckill result can be returned to the user; the subsequent order creation, SMS notification, and statistics update do not have to be processed within the seckill request.
So once the server finishes the first two steps and the result of the request is determined, it can respond to the user, then put the request data into the MQ, which drives the subsequent operations asynchronously.
Five steps become two. Not only is the response faster, but during the seckill a large share of server resources can be devoted to handling seckill requests; after the seckill, those resources process the subsequent steps, squeezing the most out of limited server resources.
* Results are returned faster
* Less waiting: concurrency between steps arises naturally, improving overall system performance; resources concentrate on the critical work (the synchronous part) while minor work (the asynchronous part) is deferred
* Weaker data consistency: maintaining strong consistency requires costly compensation mechanisms (such as distributed transactions or reconciliation)
* Risk of data loss, e.g. on crash and restart; keeping queued data safe requires extra guarantees (such as disaster recovery)
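The asynchronous split above can be sketched with Python's standard-library `queue` as an in-process stand-in for a real MQ; the function names, the "blocked user" rule, and the single-item inventory check are all illustrative assumptions, not part of any real seckill system.

```python
import queue
import threading

# In-process stand-in for a real MQ (a production system would use
# something like RocketMQ or Kafka).
mq = queue.Queue()
processed = []

def risk_control(req):
    # Step 1 (synchronous): reject obviously bad users.
    return req["user"] != "blocked"

def lock_inventory(req):
    # Step 2 (synchronous): pretend only quantity 1 is available.
    return req["qty"] == 1

def handle_seckill(req):
    """Run only the two decisive steps synchronously; defer the rest to the MQ."""
    if not (risk_control(req) and lock_inventory(req)):
        return "fail"
    mq.put(req)          # order creation, SMS, statistics happen asynchronously
    return "success"     # respond to the user right away

def consumer():
    # Background worker performing the deferred steps.
    while True:
        req = mq.get()
        if req is None:  # shutdown sentinel
            break
        processed.append(("order+sms+stats", req["user"]))

worker = threading.Thread(target=consumer)
worker.start()
print(handle_seckill({"user": "alice", "qty": 1}))   # success: responds before the deferred work runs
mq.put(None)
worker.join()
print(processed)         # the deferred steps ran exactly once, for alice
```

The point of the sketch is the shape of the flow: the user gets an answer after two checks, and everything else happens on the consumer's schedule.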
<>2.2 Flow control
Although the MQ enables asynchronous processing for a good number of services, one problem remains: how do we keep excessive requests from crushing the seckill system?
A good program can protect itself: faced with massive requests, it processes as many as it can within its capacity, rejects what it cannot handle, and keeps itself running normally, much like a thread pool. Simply rejecting requests and returning errors makes for a poor user experience.
The idea is to use the MQ to isolate the gateway from the back-end services, achieving flow control and protecting the back end.
After adding the message queue, the whole seckill flow becomes:
* After the gateway receives a request, it puts the request into the request MQ
* The back-end service fetches APP requests from the request MQ, completes the subsequent seckill steps, and returns the result
After the seckill starts, when a huge number of seckill requests reach the gateway in a short time, they do not hit the back-end seckill service directly; they first accumulate in the MQ, and the back-end service consumes and processes requests from the MQ as fast as it can.
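This isolation can be sketched with a bounded in-process queue standing in for the request MQ; the capacity of 3, the request format, and the "reject when full" policy are illustrative assumptions chosen to make the burst behavior visible.

```python
import queue
import threading

# Bounded in-process queue standing in for the request MQ.
request_mq = queue.Queue(maxsize=3)
results = []

def gateway(req):
    """The gateway only enqueues; bursts pile up in the MQ, not in the backend."""
    try:
        request_mq.put_nowait(req)
        return "accepted"
    except queue.Full:
        return "rejected"    # queue full: shed load instead of crashing

def seckill_backend():
    """Consumes at its own pace, no matter how fast requests arrived."""
    while True:
        req = request_mq.get()
        if req is None:      # shutdown sentinel
            break
        results.append(f"processed {req}")

# A burst of 5 requests hits the gateway before the backend even starts:
responses = [gateway(i) for i in range(5)]
worker = threading.Thread(target=seckill_backend)
worker.start()
request_mq.put(None)         # blocks until the backend drains some space
worker.join()
print(responses)             # first 3 accepted, last 2 rejected
print(results)               # the backend processed exactly the accepted ones
```

The gateway and the backend never call each other directly; the queue absorbs the burst and the backend drains it at its own rate.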
If the volume of messages is especially large, is it better to store them in Redis or in RabbitMQ? A queue is, after all, only a small warehouse; what happens when the volume grows?
First, Redis is definitely not suitable for storing messages. Redis performs very well, but only compared with mainstream databases, generally reaching tens of thousands of TPS, whereas a modern message queue can easily reach hundreds of thousands of TPS.
When there are many messages, choose the MQ carefully: once consumption slows down, large numbers of messages pile up in the MQ, a situation RabbitMQ does not handle well. RocketMQ, Kafka, and Pulsar are worth considering instead.
Timed-out requests can simply be discarded, and the APP treats a timed-out, unanswered request as a seckill failure. Operations staff can also add seckill service instances at any time to scale horizontally, without changing any other part of the system.
The flow automatically adjusts to the downstream processing capacity, achieving "peak shaving and valley filling".
* It adds a link to the call chain, lengthening the overall response latency
* Both upstream and downstream systems must change synchronous calls into asynchronous messages, increasing system complexity
Is there a simpler flow-control method? If you can estimate the capacity of the seckill service, you can use the MQ to implement a token bucket for simpler flow control.
<>2.2.3 Token bucket flow control principle
A fixed number of tokens is issued into the token bucket per unit of time. The service must take a token from the bucket before processing a request; if the bucket has no tokens, the request is rejected.
This guarantees that, per unit of time, at most as many requests as there are issued tokens can be processed, achieving flow control.
The implementation is simple and does not disturb the original call chain; the gateway merely adds a token-acquisition step when processing an APP request.
A token bucket can be implemented simply with a fixed-capacity message queue plus a "token generator": based on the estimated processing capacity, the generator produces tokens at a constant rate and puts them into the token queue (discarding tokens when the queue is full). When the gateway receives a request, it consumes a token from the token queue; if it obtains one, it goes on to call the back-end seckill service, otherwise it directly returns a seckill failure.
The token bucket can be implemented with a message queue, with Redis, or as a simple standalone token-bucket service; the principle is the same.
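The token-queue variant can be sketched as follows, again with a stdlib queue standing in for the MQ; the bucket capacity of 2 and the response strings are illustrative assumptions. In a real system the generator would run on a timer rather than in a loop.

```python
import queue

# Fixed-capacity token queue standing in for an MQ topic of tokens.
token_queue = queue.Queue(maxsize=2)

def token_generator(n):
    """In practice this runs at a constant rate; tokens are dropped when full."""
    for _ in range(n):
        try:
            token_queue.put_nowait("token")
        except queue.Full:
            pass             # bucket full: discard the token

def gateway(req):
    """Take a token before calling the seckill backend; no token means failure."""
    try:
        token_queue.get_nowait()
        return f"seckill ok: {req}"
    except queue.Empty:
        return f"seckill failed: {req}"

token_generator(5)           # 5 tokens issued, but only 2 fit in the bucket
print([gateway(i) for i in range(3)])
# only the first 2 requests obtain tokens; the 3rd is rejected
```

Note that the original call chain is untouched: only the gateway gained a token-acquisition step.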
These are the two commonly used ways of implementing traffic control with a message queue; choose according to their pros, cons, and applicable scenarios.
<>2.3 Service decoupling
For example, when a new order is created:
* The payment system needs to initiate the payment flow
* The risk control system needs to review the order's legitimacy
* The customer service system needs to send an SMS to notify the user
* The business analysis system needs to update its statistics
All these downstream systems need the order data in real time. As the business develops, the set of downstream consumers keeps changing, and each may need only a subset of the order data. The order service team has to keep up with ever more downstream systems, constantly modifying the interfaces between the order system and each of them. Any downstream interface change forces a redeployment of the order module, which is unacceptable for a core service like orders.
E-commerce companies generally use an MQ to solve this kind of tight-coupling problem.
The order service publishes a message to a topic in the MQ named Order, and all downstream systems subscribe to this topic, so every downstream system obtains complete order data in real time.
No matter how downstream systems are added or removed, or how their requirements change, the order service never needs to change: the order service is decoupled from its downstream services.
* Decoupling can be achieved at the module, service, interface, and other granularities
* The subscribe/consume pattern can also decouple at the data granularity
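The publish/subscribe decoupling can be sketched with a minimal in-process topic registry; the topic name "Order" matches the text, while the three handlers and the order fields are illustrative assumptions.

```python
from collections import defaultdict

# Minimal in-process publish/subscribe standing in for an MQ topic.
subscribers = defaultdict(list)

def subscribe(topic, handler):
    subscribers[topic].append(handler)

def publish(topic, message):
    # Every subscriber of the topic receives the complete message.
    for handler in subscribers[topic]:
        handler(message)

log = []
# Downstream systems subscribe independently; the order service knows none of them.
subscribe("Order", lambda o: log.append(f"payment started for order {o['id']}"))
subscribe("Order", lambda o: log.append(f"risk review of order {o['id']}"))
subscribe("Order", lambda o: log.append(f"SMS sent for order {o['id']}"))

# The order service publishes once and never needs a per-consumer interface:
publish("Order", {"id": 42, "amount": 99})
print(log)
```

Adding or removing a downstream system is now just a `subscribe` call; the publisher's code never changes.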
MQs are not limited to these scenarios:
* Acting as a publish/subscribe system to implement the observer pattern across microservices
* Connecting stream-computing tasks with their data
* Broadcasting a message to a large number of recipients
Most problems solved with an in-process queue in a monolithic application can be solved with an MQ in a distributed system.
Overall, message queues suit many scenarios, such as seckills, sending emails, sending SMS, and high-concurrency ordering.
Unsuitable scenarios include bank transfers, telecom account opening, and third-party payment.
The key is to understand the strengths and weaknesses of message queues, then analyze whether your scenario fits.
<>3 Can shared memory or RDMA improve MQ performance?
If by shared memory you mean the PageCache, many message queues already use it. As far as I know, none of the common message queues use RDMA. Kafka, for example, uses zero-copy on the consume path: data is written directly from the PageCache to the NIC buffer without ever entering the application's memory space.
Besides, the bottleneck of a modern message queue is not local in-memory data exchange; it is mainly limited by NIC bandwidth or disk IO. Message queues like JMQ and Kafka can already saturate a 10-gigabit NIC or max out disk read/write speed.
<>4 APP ⇆ gateway –produce–> message queue –consume–> seckill service
<>4.1 With massive requests placed in the MQ, how is the MQ's overall capacity gauged?
A message queue cannot hold unlimited messages, so there should be a rejection policy when it fills up, just like a thread pool's task queue: when the task queue is full and the maximum number of threads has been reached, one of the four rejection policies applies.
Actually, as long as there is enough disk capacity, a message queue can hold a virtually unlimited number of messages. Data like seckill requests has high peak concurrency but a modest total volume, so letting it pile up in the message queue is no problem.
<>4.2 What if the APP response times out, i.e., the gateway does not reply within a certain period?
Is the message still in the queue, or will it be processed by the seckill service anyway? In that case the APP is told the seckill failed, yet the seckill service may have already consumed the message. Should the gateway compensate? If the connection has been broken, will the seckill service roll back its processing of that message?
It can simply be handled as a seckill failure.
<>4.3 The gateway and the seckill service communicate through the message queue; is the response also returned through a queue?
Would the message carry the APP's address, such as an IP? In that case, with the APP holding so many simultaneous connections to the gateway, wouldn't that become a problem?
The response is generally returned via RPC. Until it times out or the seckill result is returned, the gateway and the APP keep their connection open, as the HTTP protocol requires. As for whether the gateway can withstand a huge number of APP connections, don't worry: gateways are designed to absorb massive numbers of connections and have various ways of handling this.
<>4.4 Shouldn't the message queue also have a redundancy strategy? If the service holding the queued messages goes down and all the messages disappear, isn't that a problem?
Yes. In most production systems, message queues are deployed as clusters to ensure availability and data reliability; we will cover this later.
* Reference
《Message Queuing Master Class》