Dear all,

I've been having a blast reading about Restricted Boltzmann Machines and (sparse/denoising/contractive) Autoencoders. Putting autoencoders into practice, however, leaves me stranded. I've been trying to reproduce the Denoising Autoencoder results in Figure 2 of An Analysis of Single-Layer Networks in Unsupervised Feature Learning (or Figure 5 of Higher Order Contractive Auto-Encoder).

I'm experimenting with Pylearn2 and I believe the following configuration should reproduce the results in those papers (a minimal sketch of my setup follows the list):

  • CIFAR-10
  • 8x8 patches
  • local contrast normalization
  • ZCA whitening
  • learning rate of 0.01
  • sigmoid activation
  • 800 hidden units
  • tied weights
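
Here's a minimal sketch of how I wired this up with Pylearn2's Python API. The corruption level, batch size, epoch count, init range, and the pickled-dataset path are my own placeholders, not values taken from the papers:

    # Minimal sketch; assumes a pickled DenseDesignMatrix of 8x8 CIFAR-10
    # patches that were contrast-normalized and ZCA-whitened beforehand.
    from pylearn2.corruption import BinomialCorruptor
    from pylearn2.costs.autoencoder import MeanSquaredReconstructionError
    from pylearn2.models.autoencoder import DenoisingAutoencoder
    from pylearn2.termination_criteria import EpochCounter
    from pylearn2.train import Train
    from pylearn2.training_algorithms.sgd import SGD
    from pylearn2.utils import serial

    dataset = serial.load('cifar10_patches_preprocessed.pkl')  # placeholder path

    model = DenoisingAutoencoder(
        corruptor=BinomialCorruptor(corruption_level=0.3),  # guessed level
        nvis=8 * 8 * 3,      # 8x8 patches, 3 colour channels
        nhid=800,
        act_enc='sigmoid',
        act_dec='sigmoid',
        tied_weights=True,
        irange=0.05,         # guessed weight-init range
    )

    algorithm = SGD(
        learning_rate=0.01,
        batch_size=100,                                      # guessed
        monitoring_dataset=dataset,
        cost=MeanSquaredReconstructionError(),
        termination_criterion=EpochCounter(max_epochs=50),   # guessed
    )

    Train(dataset=dataset, model=model, algorithm=algorithm).main_loop()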

I'm coming up empty though. My weights often look like this: [image: learnt weights]

Needless to say, this is nothing like the Gabor-like filters I'd expect. I've checked the whitening step and it seems to be in order, and training RBMs on the same data, by contrast, does yield filters like those described in the papers.
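
For reference, this is the sort of sanity check I ran on the whitening. It's a sketch that assumes the whitened patches sit in an (n_patches, 192) numpy array; after ZCA the empirical covariance should be close to the identity, up to whatever regularization epsilon was used:

    import numpy as np

    def check_zca(X, atol=0.1):
        """X: (n_patches, n_dims) array of ZCA-whitened patches."""
        Xc = X - X.mean(axis=0)                  # center the data
        cov = Xc.T.dot(Xc) / (Xc.shape[0] - 1)   # empirical covariance
        off = cov - np.diag(np.diag(cov))
        print('mean of diagonal:   %.3f' % np.diag(cov).mean())
        print('max |off-diagonal|: %.3f' % np.abs(off).max())
        # with the usual epsilon regularization the diagonal sits a bit
        # below 1.0, so allow some tolerance
        return np.allclose(cov, np.eye(cov.shape[0]), atol=atol)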

What am I doing wrong?

Bonus question: RBMs are said to learn the statistical distribution of the inputs. But isn't that what autoencoders learn too?
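
To make the bonus question concrete, this is how I understand the two training objectives (standard formulations, not taken from the papers above). The RBM is trained by (approximate) maximum likelihood,

    \max_\theta \; \mathbb{E}_{x \sim \hat p_{\mathrm{data}}} \big[ \log p_\theta(x) \big], \qquad p_\theta(x) = \frac{1}{Z(\theta)} \sum_h e^{-E_\theta(x, h)},

while the denoising autoencoder minimizes reconstruction error on corrupted inputs,

    \min_\theta \; \mathbb{E}_{x \sim \hat p_{\mathrm{data}},\; \tilde{x} \sim q(\tilde{x} \mid x)} \big[ \lVert x - g_\theta(f_\theta(\tilde{x})) \rVert^2 \big].

Only the first objective is explicitly fitting p(x), which is what prompts the question.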

asked Jul 17 '13 at 06:58 by haarts

edited Jul 17 '13 at 11:17 by ogrisel

Did you make any progress with this problem?

(Nov 15 '14 at 19:58) Saul Berardo