Dear all, I've been having a blast reading about Restricted Boltzmann Machines and (sparse/denoising/contractive) Autoencoders. Putting autoencoders into practice, however, leaves me stranded. I've been trying to reproduce the Denoising Autoencoder results in Figure 2 of An Analysis of Single-Layer Networks in Unsupervised Feature Learning (or Figure 5 of Higher Order Contractive Auto-Encoder). I'm experimenting with Pylearn2, and I believe a configuration along the following lines should reproduce the results in those papers:
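(The YAML below is a sketch modeled on Pylearn2's stock denoising-autoencoder example; the MNIST dataset and the hyperparameters are placeholders rather than my exact settings, since my actual runs use whitened image patches.)

    !obj:pylearn2.train.Train {
        dataset: &train !obj:pylearn2.datasets.mnist.MNIST {
            which_set: 'train',
        },
        model: !obj:pylearn2.models.autoencoder.DenoisingAutoencoder {
            nvis: 784,
            nhid: 500,
            irange: 0.05,
            # Masking noise; corruption_level is a placeholder value.
            corruptor: !obj:pylearn2.corruption.BinomialCorruptor {
                corruption_level: .2,
            },
            act_enc: 'tanh',
            act_dec: null,  # linear decoder
        },
        algorithm: !obj:pylearn2.training_algorithms.sgd.SGD {
            learning_rate: 1e-3,
            batch_size: 100,
            monitoring_dataset: *train,
            cost: !obj:pylearn2.costs.autoencoder.MeanSquaredReconstructionError {},
            termination_criterion: !obj:pylearn2.termination_criteria.EpochCounter {
                max_epochs: 50,
            },
        },
        save_path: 'dae.pkl',
        save_freq: 1,
    }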
I'm coming up empty, though. My weights often look like this:

[image of the learned filters]
Needless to say, this is nothing like the Gabor-like filters I'd expect. I've checked the whitening step (sketched below) and that seems to be in order. Using RBMs, on the other hand, does yield results like those described in the papers. What am I doing wrong? Bonus question: RBMs are said to learn the statistical distribution of their inputs, but isn't that what autoencoders learn too?
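For reference, the whitening I apply is plain ZCA, along these lines (the eps regularizer is my own choice, not a value taken from the papers):

    import numpy as np

    def zca_whiten(X, eps=1e-2):
        # X: (n_patches, n_pixels) matrix of flattened image patches.
        X = X - X.mean(axis=0)             # remove the per-pixel mean
        cov = np.cov(X, rowvar=False)      # pixel-by-pixel covariance
        d, V = np.linalg.eigh(cov)         # eigendecomposition (cov is symmetric)
        # ZCA transform: rotate to the eigenbasis, rescale, rotate back.
        W = V.dot(np.diag(1.0 / np.sqrt(d + eps))).dot(V.T)
        return X.dot(W)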

Did you make any progress on this problem?