History of concurrency
In the early days, computers had no operating system and ran only one program at a time. That program had exclusive access to all of the machine's resources, so there were no concurrency problems, but the resources were badly wasted. Early programming was therefore based on a single process. As computer technology developed, the operating system appeared and changed this situation: a computer could now run multiple programs, with each program occupying its own share of resources such as memory and CPU.
After the operating system appeared:
* Resource utilization: while one program is waiting, another can execute, which improves utilization
* Fairness: all programs can share the computer's resources; an effective approach is to share them through time slices
* Task communication: when writing multitask programs, one program can perform one task, and the programs communicate with each other when necessary
As computers moved from running a single program to running multiple programs, multithreading was developed. A thread is an execution control flow, or execution path, inside a process. Without an explicit coordination mechanism, each thread runs independently while sharing the process's memory and CPU resources. Multiple processes and multiple threads greatly improve the ability to handle tasks concurrently, but this is still essentially time-sharing: it is not true parallelism in time. The obvious way to solve that is to let multiple CPUs compute tasks simultaneously, achieving real multitask parallelism.
Advantages of concurrency
Concurrency meets the need for multitasking, such as listening to music while writing code. Even though writing multithreaded programs is challenging, the technique is still widely used because it brings the following benefits:
* Better resource utilization
* Simpler program design in some scenarios
* Better program responsiveness
* Multi-process / multitasking: concurrency on a single CPU, for example chatting on QQ while watching a show on iQIYI
* Multithreading / subtasks: concurrency within a single application, for example a blog site handling access requests from different users
A single-core CPU can only execute one task at a time. To multitask, the CPU's running time is cut into time slices, one program runs per slice, and slices are allocated to the different applications in rotation. Because each slice is very short, it looks to the user as if multiple tasks are running at the same time.
In the computer's world, the kernel divides CPU execution time into many time slices; for example, 1 second can be divided into 100 slices of 10 milliseconds each, and each slice is handed to a different process. Usually a process needs several slices to complete one request. So although at the micro level the CPU executes only one process during each 10-millisecond slice, at the macro level the 100 slices within 1 second let the requests of each slice's process all make progress, which amounts to concurrent execution of requests.
The CPU is like a worker on an assembly line, continuously processing the packages that arrive on the belt: it opens a package, reads the instructions, and executes them. When execution hits a slow I/O call (or the time slice runs out), the package is set aside in a waiting area and the CPU moves on to the next package on the line. Many such packages sit in the waiting area waiting for the system to finish their I/O; once the I/O completes, they re-enter the run queue.
Disadvantages of processes:
In the operating system, each process has an independent memory space, so using multiple processes for concurrency has two disadvantages: first, the kernel's management cost is high; second, data cannot be shared simply through memory, which is inconvenient. Hence the multithreading model appeared.
A thread is the smallest unit the operating system can schedule. It is contained in a process and is the actual unit of execution within it. A thread is a single sequential control flow in a process; multiple threads can run concurrently in one process, each performing a different task. Threads always live inside a process, and a process contains at least one thread.
If a process contains only one thread, all of its code executes serially. The first thread of each process is created as the process starts and can be called the process's main thread. Correspondingly, if a process contains multiple threads, its code can execute concurrently. Apart from the first thread, the other threads are created by threads that already exist in the process. Because threads share the process's memory address space, they avoid the kernel management cost and the memory data-sharing problems of multiple processes.
Because running a thread is essentially running functions, and a function's runtime information is stored in stack frames, each thread has its own independent, private stack area.
What is stored in the code area of the process address space? Some readers may have guessed from the name: it holds the code we wrote, or more precisely, the compiled executable machine instructions.
The heap is similar: as long as a thread knows a variable's address, that is, holds a pointer to it, it can access the data the pointer refers to. The heap is therefore also a process resource shared by threads.
Generally speaking, the stack area is thread private , Since there are ordinary times, there are unusual times , Not usually because, unlike strict isolation between process address spaces , There is no strict isolation mechanism to protect the stack area of threads , therefore
If one thread can get the pointer from another thread stack frame , Then this thread can change the stack area of another thread , In other words, these threads can arbitrarily modify variables that belong to another thread stack area .
Thread scheduling and its disadvantages:
Running multiple threads in parallel within one process simulates running multiple processes in parallel on one computer, which is why threads are also called lightweight processes. As with process scheduling, the CPU switches rapidly between threads, creating the illusion that they run in parallel. Because every thread can access every memory address in the process's address space, a thread can read, write, or even clear another thread's stack; in other words, there is no protection between threads. Note, however, that each thread has its own stack, program counter, registers, and similar state, and these are not shared.
Thread switching is controlled by the kernel. When does it happen? Not only when the time slice runs out: to keep the CPU fully utilized, the kernel also switches to another thread when the current one calls a blocking method. A context switch costs from tens of nanoseconds to a few microseconds, and when threads are busy and numerous, these switches can consume most of the CPU's computing power.
A coroutine is a user-mode thread. When a coroutine is created, a section of memory is usually allocated from the process heap to serve as its stack. A thread's stack is typically 8 MB, while a coroutine's stack is usually only a few tens of KB. Moreover, the C library's memory pool does not pre-allocate memory for coroutines; it is unaware of their existence. This low memory footprint is what makes high concurrency affordable: after all, 100,000 concurrent requests means 100,000 coroutines.
In the two-level (hybrid) threading model, user threads and kernel threads are in a many-to-many (M:N) relationship.
Each coroutine has its own stack, which preserves its variables as well as its function-call relationships, parameters, and return values. The CPU's stack-pointer register SP points to the current coroutine's stack, and the instruction register IP holds the address of the next instruction to execute. So to switch from coroutine 1 to coroutine 2, the SP and IP values are first saved for coroutine 1; then coroutine 2's register values, saved at its last switch, are loaded from memory into the CPU registers. That completes the coroutine switch.
In the Go language, the runtime automatically creates and destroys system-level threads for us. "System-level thread" here means the threads provided by the operating system, as discussed above. The corresponding user-level coroutines are built on top of system-level threads, with the code we write fully controlling execution. Creating, destroying, scheduling, and changing the state of user-level coroutines, along with the code and data inside them, are all handled by our own program. This brings many advantages: because they are not created and destroyed through the operating system, these operations are fast; and because we need not wait for the operating system to schedule them, they are easy to control and flexible.
* G - Goroutine, Go's coroutine, the smallest unit of scheduling and execution
* M - Machine, a system-level thread
* P - Processor, a logical processor; each P is associated with a local run queue of runnable Gs (the LRQ), holding at most 256 Gs
Here M is a system-level thread, while P is an intermediary that can carry a number of Gs, connect them with an M at the right time, and thereby get them actually running.
When a G that is running on some M has to pause because of an event (such as waiting for I/O or for a lock to be released), the scheduler notices in time, detaches that G from the M, and frees the computing resources for the Gs waiting to run. When a G needs to resume, the scheduler finds free computing resources (including an M) for it as soon as possible and schedules it to run. In addition, when there are not enough Ms, the scheduler requests new system-level threads from the operating system, and when an M is no longer useful, the scheduler destroys it promptly. Go programs therefore make efficient use of the operating system's and the computer's resources. All goroutines in a program are scheduled fairly and their code runs concurrently, even when there are hundreds of thousands of them.
The differences among processes, threads, and coroutines are mainly a matter of execution granularity. Early tasks were relatively simple, and one process could meet the requirements; as programs took on more and more complex work, multitasking within a process became necessary.
Processes and threads are system-level tasks. Switching a process or thread requires a transition from user mode to kernel mode, and after the switch completes, a transition back from kernel mode to user mode.
To achieve high performance, we should reduce thread switching as much as possible. Coroutines carry much less kernel-managed state, so their space and switching costs are far smaller by comparison, while still meeting our needs.