Hi, Hello everyone, I have a 100-dimensional feature vector for all the images in my database and I want to design a deep restricted boltzmann machine with real-valued input units. The literature recommends modelling a Gaussian-Bernoulli RBM as the first layer. However, Hinton et. al also recommend in their article that the data be normalized along every feature dimension to have zero mean and unit variance. Without normalization, if I try to model real-valued visible units and binary output units using a variation of code provided here , I get a high classification accuracy. But, if I normalize my data and repeat the process, the performance of the classifier is terrible. Is such a behavior plausible at all? I have checked the learning rates and other model parameters and everything seems to be correct. How important is it to have a data normalization step? |