Algorithm flow

1. The main architecture can be broken down into the following parts:

     
 * First, DDPG is characterized by its actor. Although it follows a policy-gradient (PG) architecture, the actor does not output a probability distribution over actions but a single deterministic continuous action (which is what makes it suitable for continuous action spaces). The network takes the state as input and outputs the action it considers best in that state. The update of this online actor network relies on the critic: given the state s and the action a, the critic computes a Q value, and the gradient of that Q value tells the actor how to adjust its action output next time. [In other words, when training the online actor network, the gradient adjustment is driven by the critic's Q-value evaluation of the chosen action; both the Q value and its gradient are computed by the critic network.] This is the deterministic policy gradient (a network sketch is given after this list):

      ∇_{θ^μ} J ≈ E_s[ ∇_a Q(s, a | θ^Q) |_{a=μ(s|θ^μ)} · ∇_{θ^μ} μ(s | θ^μ) ]

   In the actor, the target network is simply a soft-updated copy of the online network, i.e. it holds parameters saved over a period of time. (The main function of this target actor is to serve the critic's target network: it selects the next action, so the action used for the target value is chosen according to this network.) A question arises here: the actor network takes a state as input and outputs a single action, but what about the critic network? Does it take the state and the action as input and output a Q value, or take only the state, output Q values, and then choose among them? [After searching: the critic here is of the first form, its input is the state and the action, and its output is the Q value corresponding to that action.]

 * In the critic part there are likewise two networks, an online one and a target one. The online part works much like DDQN: the network outputs a Q value for the input state and action, it is then trained on the TD-error computed against the realistic value provided by the target network, and after a period of time the target network is updated. (The critic's structure and its TD-error update are sketched below.)

       
 * The third part is behavior exploration and memory updating. Behavior exploration corresponds to parts 1 and 2 of the diagram: in the early stage it tends to be random exploration with added noise, which increases the agent's ability to explore, and every exploration step produces a state transition and an immediate reward (see the exploration sketch below).

2. The use of memory corresponds to parts 3 and 4 of the diagram: while the actor explores, the environment feedback it receives is saved to the replay memory (mostly random early on, becoming more intelligent later as the policy improves). Online training then continuously samples mini-batches from this memory to train the two networks. Remember, though, that the memory should be sampled in a fragmented way: the transitions used for training should not be strongly correlated with one another, and the two networks only make use of each other at update time (a replay-buffer sketch follows below).
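As a concrete illustration of the two networks described above, here is a minimal PyTorch sketch of the online actor and critic. The author's exact architecture is not given, so the layer sizes and the `state_dim` / `action_dim` / `max_action` values are assumptions. The actor maps a state to one bounded continuous action; the critic takes the state together with the action and returns that action's Q value, i.e. the first of the two forms discussed above. The target networks start as copies of the online ones and are later soft-updated.

```python
import copy

import torch
import torch.nn as nn


class Actor(nn.Module):
    """Deterministic policy: state in, one bounded continuous action out."""

    def __init__(self, state_dim, action_dim, max_action):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, action_dim), nn.Tanh(),  # bounded continuous output
        )
        self.max_action = max_action

    def forward(self, state):
        return self.max_action * self.net(state)


class Critic(nn.Module):
    """Takes state AND action as input, outputs the Q value of that action."""

    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=1))


# Example dimensions (assumed, e.g. a Pendulum-like task); the target networks
# start as exact copies of the online networks and are later soft-updated.
actor = Actor(state_dim=3, action_dim=1, max_action=2.0)
critic = Critic(state_dim=3, action_dim=1)
actor_target, critic_target = copy.deepcopy(actor), copy.deepcopy(critic)
```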
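For the behavior exploration in the third bullet, a common pattern is to let the online actor propose an action and add exploration noise to it. The original DDPG paper uses Ornstein–Uhlenbeck noise; the sketch below uses plain Gaussian noise as a simpler stand-in, and the noise scale and `max_action` bound are assumptions.

```python
import numpy as np
import torch


def select_action(actor, state, noise_std=0.1, max_action=2.0):
    """Online actor proposes an action; exploration noise is added and clipped."""
    with torch.no_grad():
        action = actor(torch.as_tensor(state, dtype=torch.float32).unsqueeze(0))
    action = action.squeeze(0).numpy()
    # Gaussian noise as a simple stand-in for the OU noise of the original paper.
    action = action + np.random.normal(0.0, noise_std * max_action, size=action.shape)
    return np.clip(action, -max_action, max_action)
```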
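And for the memory described in point 2, a minimal replay buffer: every exploration step pushes a (state, action, reward, next_state, done) transition, and training repeatedly draws random mini-batches so the samples used for the updates are not strongly correlated in time. The capacity and batch size are arbitrary choices for illustration.

```python
import random
from collections import deque

import numpy as np


class ReplayBuffer:
    """Stores transitions; random sampling breaks temporal correlation."""

    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=64):
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = map(np.array, zip(*batch))
        return states, actions, rewards, next_states, dones

    def __len__(self):
        return len(self.buffer)
```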

Algorithm structure

 

Symbolic meaning

1: An action is chosen according to the noisy behavior policy β.

2: The environment responds to the selected action with a reward and a new state.

3: Much as in DDQN, the transition is stored in memory, whether it comes from random exploration or from the already-trained policy.

4: Batches are sampled from the memory and used, in different forms, to train the two networks.

5: In the critic, the realistic (target) network takes the next state and the next selected action as input, and the realistic Q value is computed as r + γ·Q(next) (see the sketch after this list).

6: The critic network is updated by computing the gradient of the TD-error.

7: When the actor's online network is updated, the focus is on its current action: the gradient is computed and applied so that, in the same state, the network tends to produce a better action choice. This relies on the Q value of that action, and that Q value has to be computed by the critic network.
 

8: The optimizer applies the computed gradients to update the network parameters.

9: Soft updates between the two forms of network, from the online parameters to the target parameters (see the soft-update sketch below).
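The numbered steps above map to code fairly directly. For steps 4–5, a batch is drawn from the replay memory (see the buffer sketch earlier) and the target networks compute the realistic value y = r + γ·Q′(s′, μ′(s′)). The function below is a sketch: it assumes rewards and done flags arrive as column tensors and applies the usual termination mask.

```python
import torch


def compute_td_target(actor_target, critic_target, rewards, next_states, dones, gamma=0.99):
    """Steps 4-5: realistic value y = r + gamma * Q'(s', mu'(s')).

    rewards and dones are expected as column tensors of shape (batch, 1) so the
    result broadcasts against the critic output; (1 - done) masks terminal states.
    """
    with torch.no_grad():
        next_actions = actor_target(next_states)
        next_q = critic_target(next_states, next_actions)
        return rewards + gamma * (1.0 - dones) * next_q
```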
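For step 6, the online critic is trained by minimising the squared TD-error between its estimate Q(s, a) and the realistic value; `critic_optimizer` is assumed to be a torch.optim optimizer over the online critic's parameters.

```python
import torch.nn.functional as F


def update_critic(critic, critic_optimizer, states, actions, td_targets):
    """Step 6: minimise the squared TD-error of the online critic."""
    q_values = critic(states, actions)
    critic_loss = F.mse_loss(q_values, td_targets)
    critic_optimizer.zero_grad()
    critic_loss.backward()
    critic_optimizer.step()
    return critic_loss.item()
```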
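For steps 7–8, the online critic evaluates the action the online actor would choose in the sampled states, and the actor is pushed toward actions with higher Q by descending the negative of that value; the optimizer step then applies the gradients.

```python
def update_actor(actor, critic, actor_optimizer, states):
    """Steps 7-8: raise the Q value of the actor's own action in these states."""
    actor_loss = -critic(states, actor(states)).mean()  # ascent on Q via descent on -Q
    actor_optimizer.zero_grad()
    actor_loss.backward()
    actor_optimizer.step()
    return actor_loss.item()
```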
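Finally, step 9 is the soft update: both target networks slowly track their online counterparts with a small mixing factor τ (the value 0.005 below is an assumption).

```python
def soft_update(target_net, online_net, tau=0.005):
    """Step 9: theta_target <- tau * theta_online + (1 - tau) * theta_target."""
    for target_param, online_param in zip(target_net.parameters(), online_net.parameters()):
        target_param.data.copy_(tau * online_param.data + (1.0 - tau) * target_param.data)
```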

         

Technology