1 A brief introduction to dropout

1.1 Why dropout came about

In machine learning, if a model has too many parameters and too few training samples, the trained model easily overfits.

Overfitting shows up as follows: the model has a small loss and high prediction accuracy on the training data, but a large loss and low prediction accuracy on the test data.

Dropout can effectively alleviate overfitting and, to some extent, achieves a regularizing effect.

Dropout means that, during the training of a deep network, neural network units are temporarily dropped from the network with a certain probability. Under stochastic gradient descent, because units are dropped at random, each mini-batch effectively trains a different network.

1.2 What is dropout?

During forward propagation, we let the activation of each neuron stop working (i.e., be set to zero) with a certain probability p. This makes the model generalize better, because it can no longer rely too heavily on particular local features.
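As a tiny illustrative sketch (the shapes and random seed are made up for this example), stopping activations with probability p amounts to multiplying them by a Bernoulli mask:

```python
import numpy as np

rng = np.random.default_rng(0)

p = 0.5                     # probability that a unit stops working
y = rng.standard_normal(8)  # activations of one layer (illustrative)

mask = rng.binomial(n=1, p=1 - p, size=y.shape)  # 1 = keep, 0 = drop
y_dropped = y * mask        # dropped units contribute nothing this pass
```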

2 Dropout workflow and usage

2.1 Applying dropout in a neural network

The overall workflow of dropout was described above, but how do we make some neurons stop working with a certain probability, and how is this implemented at the code level?

(1) During training

During training, a probabilistic gating step is added in front of each unit of the network.

The corresponding formulas change as follows:

Network computation without dropout:
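Reconstructed here in the notation of the original dropout paper (Srivastava et al., 2014), where $\mathbf{y}^{(l)}$ is the output of layer $l$, $\mathbf{w}_i^{(l+1)}$ and $b_i^{(l+1)}$ are the weights and bias of unit $i$, and $f$ is the activation function:

$$z_i^{(l+1)} = \mathbf{w}_i^{(l+1)} \mathbf{y}^{(l)} + b_i^{(l+1)}, \qquad y_i^{(l+1)} = f\left(z_i^{(l+1)}\right)$$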

Network computation with dropout:
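In the same notation, with $p$ the drop probability defined above (so each unit is kept with probability $1-p$), a Bernoulli mask $\mathbf{r}^{(l)}$ is first applied to the layer output:

$$r_j^{(l)} \sim \mathrm{Bernoulli}(1-p), \qquad \tilde{\mathbf{y}}^{(l)} = \mathbf{r}^{(l)} \ast \mathbf{y}^{(l)}$$

$$z_i^{(l+1)} = \mathbf{w}_i^{(l+1)} \tilde{\mathbf{y}}^{(l)} + b_i^{(l+1)}, \qquad y_i^{(l+1)} = f\left(z_i^{(l+1)}\right)$$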

Note: after we block some neurons by setting their activations to 0 with probability p, we still need to rescale the vector y1, ..., y1000, i.e., multiply it by 1/(1-p) (this variant is known as inverted dropout). If we do not rescale y1, ..., y1000 after zeroing during training, then at test time we have to scale the weights instead. The operation is as follows:
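A minimal NumPy sketch of this operation, with p as the drop probability (the function name and the test vector are illustrative, not from the original post):

```python
import numpy as np

def dropout(x, p):
    """Inverted dropout, training time only: zero each element with
    probability p, then divide the survivors by (1 - p) so that the
    expected activation value is unchanged."""
    if not 0.0 <= p < 1.0:
        raise ValueError("drop probability p must be in [0, 1)")
    keep_prob = 1.0 - p
    mask = np.random.binomial(n=1, p=keep_prob, size=x.shape)  # 1 = keep
    return x * mask / keep_prob

y = np.random.rand(1000)   # the vector y1, ..., y1000 from the text
y = dropout(y, p=0.4)      # survivors are scaled up by 1/(1 - p)
```

With this rescaling done at training time, no weight scaling is needed at test time.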

(2) During testing

When the model makes predictions, the weight of each neuron is multiplied by the probability that the unit was kept during training, i.e., by 1-p.

Dropout formula for the testing phase:
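In the same notation as above (with $p$ the drop probability, so $1-p$ is the keep probability):

$$\mathbf{W}_{\mathrm{test}}^{(l)} = (1-p)\,\mathbf{W}^{(l)}$$

As a one-line sketch in code (only needed when the activations were not rescaled during training):

```python
W_test = (1.0 - p) * W   # scale the trained weights by the keep probability
```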

3 Why can dropout alleviate overfitting?

(1) The averaging effect

Let's first go back to the standard model without dropout. If we train 5 different neural networks on the same training data, we will get 5 different results, and we can then use the average of the 5 results, or a majority-vote strategy, to decide the final output.

Such a "combine then average" strategy can usually prevent overfitting effectively. Because different networks may overfit in different ways, taking the average can let some "opposite" fits cancel each other out.

Dropping out different hidden neurons is like training different networks: randomly deleting half of the hidden neurons gives a different network structure each time, so the whole dropout procedure is equivalent to averaging over many different neural networks. Different networks overfit in different ways, and some of these mutually "opposite" fits cancel each other out, reducing overfitting overall.
