Hello everyone,

I have a 100-dimensional feature vector for every image in my database, and I want to design a deep restricted Boltzmann machine with real-valued input units. The literature recommends modelling the first layer as a Gaussian-Bernoulli RBM. However, Hinton et al. also recommend in their article that the data be normalized along every feature dimension to have zero mean and unit variance.
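For reference, this is what I understand the recommended per-feature standardization to mean (a minimal NumPy sketch; the array names and shapes are my own, not from the code I am using):

```python
import numpy as np

# Hypothetical feature matrix: one 100-dimensional vector per image.
X = np.random.rand(5000, 100)

# Standardize each feature dimension to zero mean and unit variance,
# as recommended for Gaussian-Bernoulli RBMs.
mean = X.mean(axis=0)
std = X.std(axis=0)
std[std == 0] = 1.0          # guard against constant features
X_norm = (X - mean) / std
```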

Without normalization, if I try to model real-valued visible units and binary hidden units using a variation of the code provided here, I get high classification accuracy. But if I normalize my data and repeat the process, the classifier's performance is terrible.
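For context, here is roughly how I understand a single CD-1 step for the Gaussian-Bernoulli case, assuming unit-variance visible units (a sketch in NumPy, not the actual code from the link; function and variable names are mine):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_vis, c_hid, lr=0.01, rng=np.random):
    """One CD-1 update for a Gaussian-Bernoulli RBM with unit-variance visibles."""
    # Positive phase: binary hidden units driven by real-valued visibles.
    ph0 = sigmoid(v0 @ W + c_hid)
    h0 = (rng.rand(*ph0.shape) < ph0).astype(v0.dtype)

    # Negative phase: the visible reconstruction is the linear Gaussian mean,
    # not a sigmoid as in the binary-binary RBM.
    v1 = h0 @ W.T + b_vis
    ph1 = sigmoid(v1 @ W + c_hid)

    # Parameter updates from the difference of data and model correlations.
    n = v0.shape[0]
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / n
    b_vis += lr * (v0 - v1).mean(axis=0)
    c_hid += lr * (ph0 - ph1).mean(axis=0)
    return W, b_vis, c_hid
```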

Is such behavior plausible at all? I have checked the learning rates and the other model parameters, and everything seems correct. How important is the data normalization step?

asked Jun 08 '14 at 13:52

Deepti Ghadiyaram