1. Redis Cache Penetration

(1) Description

A key's data does not exist in the data source, so every request for that key misses the cache and is pushed through to the data source, which may be overwhelmed. For example, querying user information with a non-existent user id finds nothing in either the cache or the database; an attacker exploiting this hole can crush the database.

(2) Symptoms

* Application server pressure increases
* Redis hit rate drops
* Every request queries the database, eventually crashing it
(3) Solutions

* Cache null values
If a query returns no data, still cache the empty result and give it a short expiration time, no more than 5 minutes (see the sketch after this list)
* Set an accessible list (whitelist)
* Use a Bloom filter
* Monitor in real time
When the Redis hit rate starts to drop sharply, inspect the objects and data being accessed and, together with operations staff, set up a blacklist restriction service
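
A minimal sketch of the null-value approach, assuming a redis-py client and a hypothetical load_user_from_db() helper that returns None when the user does not exist:

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

NULL_PLACEHOLDER = "__NULL__"   # sentinel meaning "no such record"
NULL_TTL_SECONDS = 300          # empty results live at most 5 minutes

def get_user(user_id):
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        # Either a real value or the cached "does not exist" marker.
        return None if cached == NULL_PLACEHOLDER else cached

    user = load_user_from_db(user_id)   # hypothetical DB lookup
    if user is None:
        # Cache the miss with a short TTL so repeated lookups for a
        # non-existent id stop hitting the database.
        r.set(key, NULL_PLACEHOLDER, ex=NULL_TTL_SECONDS)
        return None

    r.set(key, user, ex=3600)           # normal data gets a longer TTL
    return user
```

The short TTL on the placeholder keeps stale "does not exist" answers from lingering if the record is created later.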
2. Redis Cache Breakdown

(1) Description

The data for a key exists in the data source, but the key has expired in Redis. If a large number of concurrent requests arrive at that moment, they all find the cache expired, load the data from the backend DB, and write it back to the cache; the burst of concurrent requests may instantly crush the backend DB.

(2) Symptoms

* Database access pressure spikes instantaneously
* A single hot key in Redis has expired and is being accessed heavily (there is no large-scale key expiration in Redis)
* Redis itself is running normally
(3) Solutions

* Preload hot data and extend the TTL of hot keys
* Adjust in real time
Monitor which Redis data is hot in real time and adjust key expiration times accordingly
* Use a lock (see the sketch below)
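
A minimal sketch of the lock approach, assuming a redis-py client and a hypothetical load_from_db() helper; only the request that wins the lock rebuilds the cache, while the rest back off briefly and retry:

```python
import time
import uuid
import redis

r = redis.Redis(decode_responses=True)

def get_with_mutex(key, ttl=3600, lock_ttl=10):
    value = r.get(key)
    if value is not None:
        return value

    lock_key = f"lock:{key}"
    token = str(uuid.uuid4())
    # SET NX EX: acquire the lock only if nobody else holds it.
    if r.set(lock_key, token, nx=True, ex=lock_ttl):
        try:
            value = load_from_db(key)   # hypothetical DB lookup
            r.set(key, value, ex=ttl)
        finally:
            # Release the lock only if we still own it
            # (a Lua script would make this check-and-delete atomic).
            if r.get(lock_key) == token:
                r.delete(lock_key)
        return value

    # Someone else is rebuilding the cache: back off and retry.
    time.sleep(0.05)
    return get_with_mutex(key, ttl, lock_ttl)
```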

3. Redis Cache Avalanche

(1) Description

The data for the keys exists in the data source, but a large number of keys have expired in Redis at the same time. If a flood of concurrent requests arrives at that moment, they all load data from the backend DB and write it back to the cache, and the burst of concurrent requests may instantly crush the backend DB.
The difference between a cache avalanche and a cache breakdown is that an avalanche involves many expired keys, while a breakdown involves a single key.

(2) Symptoms

* Database pressure rises and the server crashes
* A large number of keys expire together within a very short time window
(3) Solutions

* Build a multi-level cache architecture
nginx cache + Redis cache + other caches
* Use locks or queues
A lock or queue ensures that a large number of threads cannot read and write the database at the same moment, so a flood of concurrent requests does not fall on the underlying storage when keys expire. Not suitable for high-concurrency scenarios.
* Set an expiration flag to refresh the cache
Record whether the cached data is about to expire (with a lead time set in advance); if so, notify another thread to refresh the cache of the actual key in the background
* Scatter cache expiration times
Add a random value to the original expiration time, for example 1-5 minutes at random; this reduces the overlap of expiration times and makes collective expiration unlikely (see the sketch below)
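
A minimal sketch of scattering expiration times with redis-py; the 1-5 minute jitter below follows the example in the text:

```python
import random
import redis

r = redis.Redis(decode_responses=True)

BASE_TTL = 3600  # one hour

def cache_with_jitter(key, value):
    # Each key gets a slightly different TTL, so a batch of keys written
    # at the same moment no longer expires in the same instant.
    jitter = random.randint(60, 300)   # 1-5 minutes of random jitter
    r.set(key, value, ex=BASE_TTL + jitter)
```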
