In one of his talks, Yann LeCun stated that the divisive (and subtractive) normalization he does in his convolutional neural nets is like a poor man's whitening. I don't see the parallels here. Whitening, as I understand it, applies a linear transform $x \mapsto C^{-1/2}x$ (where $C$ is the covariance matrix of the data) so that the transformed data have identity covariance. Divisive normalization, on the other hand, divides each value by the standard deviation of the values in a local neighborhood, with the subtractive step first removing the local mean. (I'm assuming 0-mean data, where needed, here.) Whitening and divisive normalization seem like two very different things. Why is one a poor man's version of the other?
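To make the contrast concrete, here is a minimal NumPy sketch of the two operations as described above (the toy data, the eps constant, and the choice of a whole row as the "neighborhood" are illustrative assumptions, not LeCun's exact scheme):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 16)) @ rng.standard_normal((16, 16))  # correlated toy data

# Whitening: remove the mean, then apply C^{-1/2} so the
# transformed data have identity covariance.
Xc = X - X.mean(axis=0)
C = np.cov(Xc, rowvar=False)
vals, vecs = np.linalg.eigh(C)
W = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T   # symmetric C^{-1/2}
X_white = Xc @ W                                   # cov(X_white) is approximately I

# Divisive (and subtractive) normalization: within each data point,
# remove the mean and divide by the standard deviation of a local
# neighborhood (here, simply all 16 values of the point).
eps = 1e-5                                  # avoids division by zero (illustrative)
mu = X.mean(axis=1, keepdims=True)          # subtractive step
sigma = X.std(axis=1, keepdims=True)        # divisive step
X_norm = (X - mu) / (sigma + eps)
```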
Whitening is used in machine learning to refer to making the data points have zero mean and identity covariance. Divisive and subtractive normalization, on the other hand, make them have zero mean and unit variance in each direction, which is an approximation to full whitening (see the sketch below).
(May 15 '13 at 18:44)
Max
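To see why this counts as an approximation, here is a minimal sketch with made-up correlated data: standardizing each direction matches whitening on the diagonal of the covariance but leaves the off-diagonal correlations that full whitening also removes.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[1.0, 0.8, 0.0],
              [0.0, 1.0, 0.5],
              [0.0, 0.0, 1.0]])
X = rng.standard_normal((5000, 3)) @ A   # correlated toy data

# Subtractive + divisive normalization per direction:
Z = (X - X.mean(axis=0)) / X.std(axis=0)
print(np.cov(Z, rowvar=False).round(2))  # unit diagonal, but correlations remain

# Full whitening also removes the off-diagonal structure:
C = np.cov(X - X.mean(axis=0), rowvar=False)
L = np.linalg.cholesky(np.linalg.inv(C))       # L @ L.T = C^{-1}
Z_white = (X - X.mean(axis=0)) @ L
print(np.cov(Z_white, rowvar=False).round(2))  # approximately the identity
```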
I was, sorry. I thought what I described was what Yann did.
(May 15 '13 at 21:18)
Alexandre Passos ♦