1. Cache processing flow

When a front-end request comes in, the back end first tries to fetch the data from the cache. On a hit, the result is returned directly. On a miss, the data is fetched from the database, the cache is updated, and the result is returned. If the data is not in the database either, an empty result is returned directly.

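As a minimal sketch of this flow (the `Cache` and `Database` interfaces below are hypothetical stand-ins for a real cache client and data-access layer, not any particular library):

```java
// Read-through cache sketch. Cache and Database are hypothetical
// stand-ins for a real cache client and data-access layer.
interface Cache {
    String get(String key);             // null on a miss
    void set(String key, String value);
}

interface Database {
    String query(String key);           // null when the row does not exist
}

class ReadThrough {
    private final Cache cache;
    private final Database db;

    ReadThrough(Cache cache, Database db) {
        this.cache = cache;
        this.db = db;
    }

    String get(String key) {
        String value = cache.get(key);
        if (value != null) {
            return value;               // cache hit: return directly
        }
        value = db.query(key);          // cache miss: fall back to the database
        if (value != null) {
            cache.set(key, value);      // refill the cache for the next request
        }
        return value;                   // null if it was not in the database either
    }
}
```

The two miss cases in this flow are exactly where the three problems below arise: keys that exist nowhere (penetration), a hot key whose cache entry just expired (breakdown), and many keys expiring at once (avalanche).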

2. Cache penetration

Description:

Cache penetration refers to requests for data that exists in neither the cache nor the database, issued over and over, for example with an id of "-1" or an extremely large id that does not exist. Such a user is likely an attacker, and the attack can put excessive pressure on the database.

Solutions:

* Add validation at the interface layer, such as user authentication checks and basic id checks; intercept id <= 0 directly.
* If the data is found neither in the cache nor in the database, write the key into the cache with a null value (key-null) and give it a short expiration time, e.g. 30 seconds (setting it too long would keep the key unusable even once valid data exists). This prevents the same user from repeatedly brute-forcing the database with the same id (a sketch follows this list).
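A minimal sketch of the null-value technique from the second point; `CacheClient` and `Database` are hypothetical, and `setex` simply mirrors the semantics of Redis's SETEX (set a value with an expiration in seconds):

```java
// Null-value caching against cache penetration. CacheClient and
// Database are hypothetical; setex mirrors Redis SETEX semantics.
interface CacheClient {
    String get(String key);
    void set(String key, String value);
    void setex(String key, int seconds, String value); // set with TTL
}

interface Database {
    String query(String key);
}

class PenetrationGuard {
    static final String NULL_MARKER = "<null>"; // sentinel for "known missing"
    static final int NULL_TTL_SECONDS = 30;     // keep null entries short-lived

    private final CacheClient cache;
    private final Database db;

    PenetrationGuard(CacheClient cache, Database db) {
        this.cache = cache;
        this.db = db;
    }

    String get(String key) {
        String value = cache.get(key);
        if (NULL_MARKER.equals(value)) {
            return null;                // known-missing key: no database hit
        }
        if (value != null) {
            return value;               // normal cache hit
        }
        value = db.query(key);
        if (value == null) {
            // Not in the database either: cache the miss briefly so the
            // same id cannot be used to hammer the database repeatedly.
            cache.setex(key, NULL_TTL_SECONDS, NULL_MARKER);
            return null;
        }
        cache.set(key, value);
        return value;
    }
}
```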
 

3. Cache breakdown

Description:

Cache breakdown refers to data that is not in the cache but is in the database (typically because the cached entry has expired). At that moment many concurrent users miss the cache at the same time and all go to the database for the same data, causing an instantaneous spike in database pressure.

Solutions:

* Set hotspot data to never expire.
* Add a mutex lock. A reference sketch of the mutex approach follows:
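This is a minimal sketch, assuming a Redis-like client in which `setnx` (set-if-absent) succeeds only for the first caller and the lock key carries a TTL so a crashed holder cannot block others forever; all names here are illustrative. Note that it uses one global lock, a limitation that point 3 below comes back to:

```java
// Mutex against cache breakdown: only one thread rebuilds an expired
// entry; the others back off briefly and re-read the cache.
// RedisLike and Database are hypothetical stand-ins.
interface RedisLike {
    String get(String key);
    boolean setnx(String key, String value, int ttlSeconds); // set-if-absent with TTL
    void setex(String key, int seconds, String value);
    void del(String key);
}

interface Database {
    String query(String key);
}

class BreakdownGuard {
    private static final String MUTEX_KEY = "cache_mutex"; // one global lock (see note 3)
    private static final int LOCK_TTL_SECONDS = 60;   // lock auto-expires if its holder dies
    private static final int VALUE_TTL_SECONDS = 600;

    private final RedisLike redis;
    private final Database db;

    BreakdownGuard(RedisLike redis, Database db) {
        this.redis = redis;
        this.db = db;
    }

    String get(String key) throws InterruptedException {
        String value = redis.get(key);
        if (value != null) {
            return value;                       // cache hit: return immediately
        }
        // Cache miss: only the thread that wins the mutex queries the database.
        if (redis.setnx(MUTEX_KEY, "1", LOCK_TTL_SECONDS)) {
            try {
                value = db.query(key);
                if (value != null) {
                    redis.setex(key, VALUE_TTL_SECONDS, value);
                }
            } finally {
                redis.del(MUTEX_KEY);           // release the lock even on failure
            }
            return value;
        }
        // Another thread is rebuilding the cache: wait 100 ms and retry.
        Thread.sleep(100);
        return get(key);
    }
}
```

The 100 ms back-off is a tuning knob: long enough that the rebuilder usually finishes before the retry, short enough that waiting requests do not pile up noticeably.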

Explanation:

1) If the data is in the cache, the code returns the result immediately after the initial cache read.

2) If the data is not in the cache, the first thread to arrive acquires the lock and fetches the data from the database. Until that thread releases the lock, other threads arriving in parallel wait 100 ms and then re-read the cache. This prevents the same miss from repeatedly hitting the database and repeatedly rewriting the cache.

3) This is, of course, a simplification. In theory it would be better to lock per key, so that thread A fetching key1 from the database does not block thread B fetching key2; the code above clearly cannot do that. A sketch of that refinement follows.
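A minimal sketch of that refinement, reusing the hypothetical `RedisLike` and `Database` interfaces from the sketch above: the lock key is derived from the data key, so rebuilds of different keys proceed independently.

```java
// Per-key mutex: the lock is scoped to the key being rebuilt, so
// thread A rebuilding "key1" does not block thread B rebuilding "key2".
// RedisLike and Database are the same hypothetical interfaces as above.
class PerKeyBreakdownGuard {
    private static final int LOCK_TTL_SECONDS = 60;
    private static final int VALUE_TTL_SECONDS = 600;

    private final RedisLike redis;
    private final Database db;

    PerKeyBreakdownGuard(RedisLike redis, Database db) {
        this.redis = redis;
        this.db = db;
    }

    String get(String key) throws InterruptedException {
        String value = redis.get(key);
        if (value != null) {
            return value;
        }
        String lockKey = "mutex:" + key;        // lock scoped to this key only
        if (redis.setnx(lockKey, "1", LOCK_TTL_SECONDS)) {
            try {
                value = db.query(key);
                if (value != null) {
                    redis.setex(key, VALUE_TTL_SECONDS, value);
                }
            } finally {
                redis.del(lockKey);
            }
            return value;
        }
        Thread.sleep(100);                      // someone else is rebuilding this key
        return get(key);
    }
}
```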

 

4. Cache avalanche

Description:

Cache avalanche refers to a large amount of cached data reaching its expiration time at once while query volume is huge, putting so much pressure on the database that it may even go down. It differs from cache breakdown: cache breakdown is concurrent querying of the same piece of data, whereas a cache avalanche means many different keys expire at once, so a large number of lookups miss the cache and fall through to the database.

Solutions:

* Randomize the expiration time of cached data to prevent a large number of keys from expiring at the same moment (a sketch follows this list).
* If the cache database is deployed as a distributed cluster, spread hotspot data evenly across the different cache nodes.
* Set hotspot data to never expire.
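A minimal sketch of the randomized-expiration idea; the base TTL and jitter range are arbitrary choices here, and `CacheClient` is the same hypothetical interface as in the earlier sketches:

```java
import java.util.concurrent.ThreadLocalRandom;

// Randomized expiration: spread a nominal 10-minute TTL across
// 10-15 minutes so keys written together do not all expire together.
class JitteredWrite {
    private static final int BASE_TTL_SECONDS = 600;   // nominal TTL
    private static final int MAX_JITTER_SECONDS = 300; // random extra 0-299 s

    static void put(CacheClient cache, String key, String value) {
        int ttl = BASE_TTL_SECONDS
                + ThreadLocalRandom.current().nextInt(MAX_JITTER_SECONDS);
        cache.setex(key, ttl, value);
    }
}
```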
