Supervised learning is mainly divided into two categories:

* Classification: the target variable is discrete. For example, judging whether a watermelon is good or bad, where the target variable can only take the values 1 (good melon) or 0 (bad melon).
* Regression: the target variable is continuous, such as predicting the sugar content of a watermelon (0.00~1.00).

Classification is further divided into:

* Binary classification: for example, judging whether a watermelon is good or bad.
* Multi-class classification: for example, judging a watermelon's variety, such as Black Beauty, Te Xiaofeng, or Annong 2.

The cross-entropy loss is the most commonly used loss function in classification. Cross entropy measures the difference between two probability distributions; here, it measures the difference between the learned distribution and the true distribution.

<> Binary Classification

In the binary case, each prediction can take only one of two values. Suppose the probability of predicting a good melon (1) is $P$ and the probability of predicting a bad melon (0) is $1-P$.

Then the general form of the cross-entropy loss, where $y$ is the label, is:

$$L = -\big[\, y \log P + (1-y)\log(1-P) \,\big]$$

How should we understand this formula?

The probability of predicting a good melon (1) is $P$: $P(y=1 \mid x) = P$.
The probability of predicting a bad melon (0) is $1-P$: $P(y=0 \mid x) = 1-P$.
These two cases can be combined as $P(y \mid x) = P^{y}(1-P)^{1-y}$: when $y=1$, $P(y \mid x) = P$, and when $y=0$, $P(y \mid x) = 1-P$. Taking the negative logarithm of this likelihood gives exactly the cross-entropy loss above.
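To make this concrete, here is a minimal NumPy sketch (the function name and the example probabilities are illustrative, not part of the derivation) that computes the loss as the negative log of $P(y \mid x)$:

```python
import numpy as np

def binary_cross_entropy(y, p, eps=1e-12):
    """Loss = -log P(y|x), where P(y|x) = p^y * (1-p)^(1-y)."""
    p = np.clip(p, eps, 1.0 - eps)                 # avoid log(0)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# A good melon (y=1) predicted with probability 0.9 gives a small loss;
# the same melon predicted with probability 0.1 gives a large loss.
print(binary_cross_entropy(1, 0.9))   # ~0.105
print(binary_cross_entropy(1, 0.1))   # ~2.303
```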

The principle behind the cross-entropy formula:

To learn something well, we need to understand it in depth; otherwise we stay at the surface and cannot go further. So how is the cross entropy derived, and how should we understand it?

First, understand the following concepts:

* Information content

The amount of information measures how much uncertainty a statement eliminates. For example, "China's high-speed rail technology currently ranks first in the world" has probability 1; the statement is already certain, so it eliminates no uncertainty. In contrast, "China's high-speed rail technology will always remain first in the world" is an uncertain event, so it carries a large amount of information.

The amount of information is inversely related to the probability of the event.
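In formula form (the standard definition, using the base-2 logarithm noted in the relative-entropy section below), the information content of an event $x$ with probability $p(x)$ is:

$$I(x) = -\log_2 p(x)$$

An event with probability 1 carries no information, and the information content grows as $p(x)$ shrinks.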

* Information entropy

Information entropy is the expectation of the information content over all possible outcomes, taken before the result is known. The expectation can be understood as the probability of each possible outcome multiplied by the corresponding value, summed over all outcomes.

Information entropy measures the uncertainty of things. The larger the information entropy (the more information, the smaller $p$), the more uncertain and the more complex things are.
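Written out (the standard definition, consistent with the information content above):

$$H(p) = -\sum_{x} p(x)\log_2 p(x) = \mathbb{E}_{x \sim p}\big[-\log_2 p(x)\big]$$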

* Relative entropy (i.e., KL divergence)

Relative entropy is also called mutual entropy. Let $p(x)$ and $q(x)$ be two probability distributions over the same values. Relative entropy measures the difference between the two distributions: when the two distributions are identical, their relative entropy is 0, and as the difference between them grows, their relative entropy grows as well:

$$D_{KL}(p \,\|\, q) = \sum_{x} p(x)\log_2\frac{p(x)}{q(x)}$$

(You can think of it this way: for a binary classification with one-hot labels, $p$ is either 0 or 1. From the shape of the $\log$ curve, when $p$ is 0 the corresponding term is 0; when $p$ is 1, the closer $q$ gets to 1, the smaller $\log\frac{p}{q}$ becomes, approaching 0.) (The base of the $\log$ here is 2.)
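A small numerical sketch of this behavior (the values and function name are illustrative), computing the relative entropy of a one-hot label distribution $p$ against two predicted distributions $q$:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D_KL(p || q) = sum_x p(x) * log2(p(x) / q(x)); terms with p(x) = 0 contribute 0."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / np.clip(q[mask], eps, 1.0)))

p = [1.0, 0.0]                          # true label: good melon
print(kl_divergence(p, [0.99, 0.01]))   # q close to p  -> small KL (~0.014)
print(kl_divergence(p, [0.60, 0.40]))   # q far from p  -> larger KL (~0.737)
```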

* Cross entropy

Expanding the relative entropy gives:

$$D_{KL}(p \,\|\, q) = \sum_{x} p(x)\log_2 p(x) - \sum_{x} p(x)\log_2 q(x) = -H(p) + H(p,q)$$

From the formula above, relative entropy = cross entropy - information entropy, where $H(p,q)$ is the cross entropy:

$$H(p,q) = -\sum_{x} p(x)\log_2 q(x)$$

In machine learning and deep learning the samples and labels are known (i.e., $p$ is known), so the information entropy $H(p)$ is a constant. Minimizing the relative entropy is therefore equivalent to minimizing the cross entropy, so it is enough to fit the cross entropy and drive it toward 0.
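A quick numerical check of this identity (the distributions are illustrative), showing that $D_{KL}(p\|q) = H(p,q) - H(p)$:

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def cross_entropy(p, q, eps=1e-12):
    p, q = np.asarray(p, float), np.asarray(q, float)
    return -np.sum(p * np.log2(np.clip(q, eps, 1.0)))

p = [0.7, 0.2, 0.1]   # "true" distribution
q = [0.5, 0.3, 0.2]   # learned distribution
print(entropy(p))                        # H(p)   ~1.157
print(cross_entropy(p, q))               # H(p,q) ~1.280
print(cross_entropy(p, q) - entropy(p))  # D_KL   ~0.123, >= 0
```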

<> Multi-class Classification

       
Multi-class classification is similar to the binary case. In binary classification the labels are 1 and 0, while in multi-class classification one-hot encoding can be used. Suppose we now need to predict the watermelon variety among Black Beauty, Te Xiaofeng, and Annong 2. If the true label is Te Xiaofeng, i.e. (0, 1, 0), and the predicted label is Annong 2, i.e. (0, 0, 1), we plug the predicted probabilities into the loss. The multi-class case is in fact an extension of the binary one:

$$L = -\frac{1}{N}\sum_{i=1}^{N}\sum_{k=1}^{K} y_{ik}\log p_{i,k}$$

Here $y_{ik}$ indicates that the true label of the $i$-th sample is the $k$-th label value, with $K$ label values and $N$ samples in total, and $p_{i,k}$ is the probability that the $i$-th sample is predicted to be the $k$-th label value. Fitting this loss function also, to some extent, increases the distance between classes.
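A minimal sketch of this formula applied to the watermelon example (the class order, probabilities, and function name are illustrative):

```python
import numpy as np

def categorical_cross_entropy(y_onehot, p_pred, eps=1e-12):
    """L = -(1/N) * sum_i sum_k y_ik * log(p_ik)."""
    y = np.asarray(y_onehot, float)
    p = np.clip(np.asarray(p_pred, float), eps, 1.0)
    return -np.mean(np.sum(y * np.log(p), axis=1))

# Classes: [Black Beauty, Te Xiaofeng, Annong 2]
y_true = np.array([[0, 1, 0]])           # true variety: Te Xiaofeng
p_bad  = np.array([[0.2, 0.1, 0.7]])     # prediction leans toward Annong 2    -> large loss
p_good = np.array([[0.1, 0.8, 0.1]])     # prediction leans toward Te Xiaofeng -> small loss
print(categorical_cross_entropy(y_true, p_bad))    # ~2.303
print(categorical_cross_entropy(y_true, p_good))   # ~0.223
```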
