What do the mean and std parameters of torchvision.transforms.Normalize() mean?
As I understand it, normalization maps the data in each of an image's 3 channels into the interval [-1, 1]:
x = (x - mean(x))/stddev(x)
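Applied per channel, that formula can be sketched in NumPy (the random image here is purely illustrative):

```python
import numpy as np

# Hypothetical 3-channel image, values in [0, 255], channels-first layout
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(3, 4, 4)).astype(np.float64)

# Per-channel mean and std over the spatial dimensions
mean = img.mean(axis=(1, 2), keepdims=True)
std = img.std(axis=(1, 2), keepdims=True)

normalized = (img - mean) / std

# Each channel now has mean ~0 and std ~1
print(normalized.mean(axis=(1, 2)))
print(normalized.std(axis=(1, 2)))
```

Note that standardizing this way guarantees mean 0 and unit variance per channel; it does not guarantee that every value lands strictly inside [-1, 1].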
Given the input dataset x, mean(x) and stddev(x) are already fixed values, so why does Normalize() still require mean and std to be passed in?
Aren't the values of a single RGB channel in [0, 255]? Then the mean of a channel should be somewhere around 127.
If Normalize() computes x = (x - mean)/std, then since RGB values are in [0, 255], the resulting x cannot possibly fall in the interval [-1, 1].
But I see a lot of code like this:
torchvision.transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
Where does this set of values come from? Why are the means of these three channels all less than 1?
1. The mean and std must be computed in advance and then passed to Normalize; otherwise the program would have to read through all the images and compute them every time it normalizes.
2. There are two cases:
a) If it is the ImageNet dataset, the data has already been converted to [0, 1] when it is loaded.
b) torchvision.transforms.ToTensor has been applied, whose documented behavior is: "Converts a PIL Image or numpy.ndarray (H x W x C) in the range [0, 255] to a torch.FloatTensor of shape (C x H x W) in the range [0.0, 1.0]".
3. The means [0.485, 0.456, 0.406] were computed by sampling the ImageNet training set.