C. Guo et al. “Zero-Reference Deep Curve Estimation for Low-Light Image
Enhancement,” 2020 IEEE/CVF Conference on Computer Vision and Pattern
Recognition (CVPR), 2020, pp. 1777-1786, doi: 10.1109/CVPR42600.2020.00185.

* This is a CVPR 2020 paper on low-light image enhancement; it is highly cited, and the method is also novel.
<> Brightness adjustment formula

The paper does not directly predict the enhanced result. Instead, it first designs a brightness-adjustment formula with learnable parameters, then uses a network to predict those parameters. The formula is:

$$LE(I(x); \alpha) = I(x) + \alpha\, I(x)\,\big(1 - I(x)\big)$$
This is the simplest version of the formula. $I(x)$ is the input image (normalized to $[0, 1]$), the left-hand side is the brightened result, and $\alpha$ is the parameter that controls the brightness adjustment. Dividing both sides by $I(x)$ shows that the brightening factor is $1 + \alpha(1 - I(x))$: when $\alpha > 0$ this factor is greater than 1, i.e. the image is brightened, and the larger $I(x)$ is, the smaller the factor, so dark pixels receive a large brightening factor and bright pixels a small one. Also, for $\alpha > 0$, the larger $\alpha$ is, the larger the brightening factor. The formula therefore implements adaptive brightening controlled by a single parameter $\alpha$.

In fact, this formula can be applied repeatedly: after brightening once, the result can serve as the input for another pass, which gives an iterative version of the formula:

$$LE_n(x) = LE_{n-1}(x) + \alpha_n\, LE_{n-1}(x)\,\big(1 - LE_{n-1}(x)\big)$$
If $\alpha$ is pixel-wise, i.e. every pixel has its own $\alpha$, the network predicts a map of $\alpha$ values rather than a single scalar, which gives the final version of the brightness-adjustment formula:

$$LE_n(x) = LE_{n-1}(x) + \mathcal{A}_n(x)\, LE_{n-1}(x)\,\big(1 - LE_{n-1}(x)\big)$$
Another point worth noting: the three RGB channels each have their own $\mathcal{A}$ maps. So if the number of iterations is 8 and the image size is $224 \times 224$, the output of the network has shape $24 \times 224 \times 224$.
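To make the iterative curve concrete, here is a minimal NumPy sketch (an illustration, not the authors' code; `apply_curve` and the constant $\alpha = 0.5$ maps are my own choices):

```python
import numpy as np

def apply_curve(image, curve_maps):
    """Iteratively apply LE_n = LE_{n-1} + A_n * LE_{n-1} * (1 - LE_{n-1})."""
    out = image
    for A in curve_maps:                 # one map (or scalar) per iteration
        out = out + A * out * (1.0 - out)
    return out

# A dark pixel gains a larger relative brightening than a bright one.
img = np.array([[[0.1, 0.1, 0.1]],
                [[0.8, 0.8, 0.8]]])      # shape (2, 1, 3), values in [0, 1]
maps = [np.full_like(img, 0.5)] * 8      # 8 iterations, constant alpha = 0.5
enhanced = apply_curve(img, maps)
```

Because $\alpha > 0$, every pixel is brightened, the output stays within $[0, 1]$, and the relative gain of the dark pixel exceeds that of the bright one.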

<> Network

* The network that predicts the curve maps $\mathcal{A}_n(x)$ is a simple 7-layer convolutional neural network, with no BN and no downsampling, composed entirely of $3 \times 3$, 32-channel convolutions and ReLU activation layers.
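A minimal PyTorch sketch of such a network (assumptions flagged: the paper's DCE-Net also uses symmetric skip concatenations, omitted here for brevity; the final `tanh` bounding the curve maps and the name `DCENetSketch` are my own choices):

```python
import torch
import torch.nn as nn

class DCENetSketch(nn.Module):
    """7 plain 3x3 conv layers, 32 channels, ReLU, no BN, no downsampling.

    The last layer outputs 3 * n_iter channels: one RGB curve map A_n per
    iteration, squashed to [-1, 1] by tanh.
    """
    def __init__(self, n_iter=8):
        super().__init__()
        layers = [nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(5):
            layers += [nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(32, 3 * n_iter, 3, padding=1), nn.Tanh()]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

out = DCENetSketch()(torch.rand(1, 3, 64, 64))  # -> shape (1, 24, 64, 64)
```

With padding 1 and no downsampling, the spatial size is preserved, so for a $224 \times 224$ input the output is exactly the $24 \times 224 \times 224$ tensor described above.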

* The paper uses four no-reference losses:
* Spatial Consistency Loss: measures how much the difference between each pixel and its neighboring pixels changes before vs. after brightening; personally, I think this loss serves to keep texture and content unchanged. Here $Y$ is the pixel value after brightening, $I$ is the input value before brightening, and $\Omega(i)$ is the set of neighbors of pixel $i$:

  $$L_{spa} = \frac{1}{K}\sum_{i=1}^{K}\sum_{j \in \Omega(i)} \big(\,|Y_i - Y_j| - |I_i - I_j|\,\big)^2$$

* Exposure Control Loss: $E$ is a manually chosen constant, set to 0.6 (the paper says anything between 0.4 and 0.7 changes the result little), and $Y_k$ is the mean value of a patch. The image is divided into disjoint patches, and the absolute difference between each patch mean and 0.6 is averaged as the loss, which is said to suppress both over-exposure and under-exposure:

  $$L_{exp} = \frac{1}{M}\sum_{k=1}^{M} |Y_k - E|$$

* Color Constancy Loss: computes the differences between the mean values of the R, G, and B channels of the enhanced result, encouraging the three channel means to be as close to each other as possible; in other words, the image as a whole should look neither reddish, bluish, nor greenish.

* Illumination Smoothness Loss: a TV loss that constrains the gradients of the predicted curve maps, making them as smooth as possible.

* Final loss function: the weighted sum of the four terms, $L_{total} = L_{spa} + L_{exp} + W_{col}\,L_{col} + W_{tvA}\,L_{tvA}$.
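The four losses above can be sketched in NumPy roughly as follows (a simplified illustration, not the authors' implementation: the spatial term uses 1-pixel neighbours instead of the paper's local regions, and the weights in `total_loss` are illustrative placeholders):

```python
import numpy as np

def spatial_consistency_loss(Y, I):
    # change in neighbouring-pixel contrast before vs. after enhancement,
    # on the channel-averaged image (simplified to 1-pixel neighbours)
    y, i = Y.mean(axis=-1), I.mean(axis=-1)
    loss = 0.0
    for ax in (0, 1):
        loss += np.mean((np.abs(np.diff(y, axis=ax)) -
                         np.abs(np.diff(i, axis=ax))) ** 2)
    return loss

def exposure_loss(Y, E=0.6, patch=16):
    # average |patch mean - E| over disjoint patches of the gray image
    g = Y.mean(axis=-1)
    H, W = g.shape
    g = g[:H - H % patch, :W - W % patch]
    means = g.reshape(g.shape[0] // patch, patch,
                      g.shape[1] // patch, patch).mean(axis=(1, 3))
    return np.mean(np.abs(means - E))

def color_constancy_loss(Y):
    # squared differences between the per-channel means
    r, g, b = Y[..., 0].mean(), Y[..., 1].mean(), Y[..., 2].mean()
    return (r - g) ** 2 + (r - b) ** 2 + (g - b) ** 2

def tv_loss(A):
    # total-variation smoothness of the predicted curve maps A (H, W, C)
    return (np.mean(np.abs(np.diff(A, axis=0))) +
            np.mean(np.abs(np.diff(A, axis=1))))

def total_loss(Y, I, A, w_col=0.5, w_tv=20.0):
    # w_col and w_tv are placeholder weights, not the paper's values
    return (spatial_consistency_loss(Y, I) + exposure_loss(Y)
            + w_col * color_constancy_loss(Y) + w_tv * tv_loss(A))
```

A flat, well-exposed output (every pixel at 0.6) with flat curve maps drives all four terms to zero, which matches the intuition behind each loss.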
<> Training

Since these four losses require neither paired nor unpaired reference images, the method is zero-reference: any collection of images is enough to train on. The paper therefore trains on existing image datasets; one detail is that the more diverse and balanced the dataset's exposures are, the better the model performs.