1. View the status of a container and see why it exited
[root@docker ~]# docker inspect a6fb3d53a55b | grep -i status -A 10
    "Status": "exited",
    "Running": false,
    "Paused": false,
    "Restarting": false,
    "OOMKilled": true,
    "Dead": false,
    "Pid": 0,
    "ExitCode": 137,
    "Error": "",
    "StartedAt": "2021-11-11T00:48:00.806908787Z",
    "FinishedAt": "2021-11-11T00:48:39.15824301Z"
2. vm.overcommit_memory

Redis may print a log like the following at startup:
WARNING overcommit_memory is set to 0! Background save may fail under low
memory condition. To fix this issue add 'vm.overcommit_memory = 1' to
/etc/sysctl.conf and then reboot or run the command 'sysctl
vm.overcommit_memory=1' for this to take effect.
Before analyzing this problem, let's first clarify what overcommit is. Linux grants most memory requests so that more programs can run, because memory that has just been requested is usually not used immediately; this technique is called overcommit. If Redis prints the log above at startup, it means that vm.overcommit_memory=0, and Redis suggests setting it to 1.
vm.overcommit_memory sets the memory allocation policy. It has three possible values ("available memory" below means the sum of physical memory and swap):

* 0: the kernel checks whether there is enough available memory; if so, the request succeeds, otherwise it fails and an error is returned to the application process.
* 1: the kernel grants the request regardless of the current memory state, i.e. it always overcommits.
* 2: the kernel never overcommits; the total committed address space cannot exceed swap plus overcommit_ratio (50% by default) of physical RAM.

In the log, "Background save" refers to bgsave and bgrewriteaof; the question is how the operating system should handle the fork operation when available memory is insufficient. With vm.overcommit_memory=0, if there is not enough available memory the memory request fails, which for Redis means the fork fails, and the Redis log will show:
Cannot allocate memory
Redis therefore recommends setting this value to 1 so that fork can still succeed under low-memory conditions.
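
For reference, a minimal sketch of checking and applying the setting on a Linux host, following the commands suggested in the Redis warning itself:

```bash
# Check the current overcommit policy (0, 1 or 2)
cat /proc/sys/vm/overcommit_memory

# Apply the recommended value immediately (takes effect at once, lost on reboot)
sysctl vm.overcommit_memory=1

# Persist the setting across reboots
echo "vm.overcommit_memory = 1" >> /etc/sysctl.conf
sysctl -p
```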

3. oom_badness() function

When an OOM occurs, what criteria does Linux use to choose the process to be killed? This is where the oom_badness() function in the Linux kernel comes in: it defines the criteria for selecting the victim process. The criteria are actually quite simple; the function takes two factors into account:

* First, the number of physical memory pages the process is using.
* Second, the per-process OOM adjustment value oom_score_adj. In the /proc filesystem, each process has a /proc/<pid>/oom_score_adj interface file. We can write any value between -1000 and 1000 into this file to adjust how likely the process is to be killed by the OOM Killer.

The relevant calculation in the kernel looks like this:

    adj = (long)p->signal->oom_score_adj;
    points = get_mm_rss(p->mm) + get_mm_counter(p->mm, MM_SWAPENTS) +
             mm_pgtables_bytes(p->mm) / PAGE_SIZE;
    adj *= totalpages / 1000;
    points += adj;
Combining the two factors above, the final calculation in oom_badness() works like this: take the total number of available pages in the system, multiply it by the OOM adjustment value oom_score_adj (scaled by 1/1000, as in the code above), and add the number of physical memory pages the process has already used. The larger the resulting value, the more likely the process is to be killed by the OOM Killer.
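
To see this mechanism from user space, we can read the score the kernel computes and adjust it through /proc. The kernel exposes the computed result as /proc/<pid>/oom_score, while /proc/<pid>/oom_score_adj is the knob we write to; the PID 1234 below is only a placeholder:

```bash
# Read the badness score the OOM Killer would use for process 1234
cat /proc/1234/oom_score

# Exempt the process from the OOM Killer (-1000 disables OOM killing for it)
echo -1000 > /proc/1234/oom_score_adj

# Or make it the preferred victim instead
echo 1000 > /proc/1234/oom_score_adj
```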

4. Memory Cgroup

Memory Cgroup is one of the Linux Cgroups subsystems; its job is to limit the memory usage of a group of processes.

The first parameter is memory.limit_in_bytes. Note that memory.limit_in_bytes is the most important parameter in each control group, because the maximum amount of memory available to all processes in the control group is directly capped by its value.

The second parameter is memory.oom_control. What does memory.oom_control do? When the memory usage of the processes in the control group reaches the upper limit, this parameter determines whether the OOM Killer is triggered.

If it is not set explicitly, memory.oom_control defaults to triggering the OOM Killer. This is an OOM Killer scoped to the control group, and it works much like the system-wide OOM Killer; the only difference is which processes it can choose from: the in-group OOM Killer can only kill processes inside that control group, not other processes on the node.
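
Assuming cgroup v1 with the memory controller mounted at /sys/fs/cgroup/memory and a hypothetical control group named mygroup (created as sketched further below), this behavior can be inspected and changed roughly like this:

```bash
# Show the current OOM settings of the group;
# "oom_kill_disable 0" means the in-group OOM Killer is enabled (the default)
cat /sys/fs/cgroup/memory/mygroup/memory.oom_control

# Disable the OOM Killer for this group: processes that hit the limit are
# paused ("under_oom 1") instead of being killed
echo 1 > /sys/fs/cgroup/memory/mygroup/memory.oom_control
```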

The third parameter is memory.usage_in_bytes. This parameter is read-only; its value is the total memory actually used by all processes in the current control group. By reading this value and comparing it with memory.limit_in_bytes we can estimate how close the group is to its limit: the closer the two values are, the higher the risk of an OOM. This gives us a way to tell whether the memory currently used in the control group puts it at risk of triggering the OOM Killer.
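
A minimal sketch of the first and third parameters, again assuming cgroup v1 and a hypothetical group named mygroup:

```bash
# Create a memory control group and cap it at 200MB
mkdir /sys/fs/cgroup/memory/mygroup
echo 200M > /sys/fs/cgroup/memory/mygroup/memory.limit_in_bytes

# Move the current shell (and its future children) into the group
echo $$ > /sys/fs/cgroup/memory/mygroup/cgroup.procs

# Compare actual usage against the limit to gauge OOM risk
cat /sys/fs/cgroup/memory/mygroup/memory.usage_in_bytes
cat /sys/fs/cgroup/memory/mygroup/memory.limit_in_bytes
```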

 

[Figure: control group]

Control groups are also organized in a tree hierarchy. In this structure, the memory.limit_in_bytes value in a parent control group limits the memory usage of all processes in its child control groups.

Let me illustrate with a concrete example: suppose memory.limit_in_bytes in group1 is set to 200MB, and memory.limit_in_bytes in its child control group group3 is set to 500MB. Then the total memory used by all processes in group3 cannot exceed 200MB, not 500MB.
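
A sketch of that example under cgroup v1, using hypothetical groups group1 and group3 and assuming hierarchical accounting (memory.use_hierarchy=1, the default on modern kernels):

```bash
# Parent group capped at 200MB, child group nominally allowed 500MB
mkdir -p /sys/fs/cgroup/memory/group1/group3
echo 200M > /sys/fs/cgroup/memory/group1/memory.limit_in_bytes
echo 500M > /sys/fs/cgroup/memory/group1/group3/memory.limit_in_bytes

# group3 reports its own limit of 500MB...
cat /sys/fs/cgroup/memory/group1/group3/memory.limit_in_bytes
# ...but its processes can still never use more than the parent's 200MB
```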

OK, that covers the most basic concepts of Memory Cgroup. To sum up:

First, each control group in Memory Cgroup can limit memory usage for a group of processes. Once the total memory used by all of those processes reaches the limit, the OOM Killer is triggered by default, and one of the processes in the control group will be killed.

Second, the "process to be killed" is selected using the criteria described earlier: the total number of available pages in the control group multiplied by the process's oom_score_adj (scaled by 1/1000), plus the physical memory pages the process has already used; the process with the largest result is selected and killed by the system.

 
