Predictive Sparse Decomposition was developed after sparse autoencoders but, to me, looks very similar to them. Does PSD have any advantages over sparse autoencoders? Is there any empirical evidence?
If I recall correctly, PSD actually does sparse coding and at the same time approximates it with a feed-forward encoding step. Sparse autoencoders only have the feed-forward encoding step. Especially for smaller datasets, "smarter" encoding procedures such as sparse coding tend to result in better features. Adam Coates has done some very interesting work on this topic, particularly his ICML 2011 paper.

> Sparse autoencoders only have the feed-forward encoding step

Nope. Just like PSD (and unlike sparse coding), an autoencoder has a one-step encoder and a one-step decoder. The rather esoteric model without a decoder is called Sparse Filtering.
Max (Nov 03 '13 at 01:43)
PS: ICA also doesn't have a decoder.

Max (Nov 03 '13 at 02:30)
Sorry, I didn't express myself clearly :) I meant that the encoding step is "just feed-forward", i.e. not iterative (so there are no lateral interactions). I didn't mean to say there is no decoder.

Sander Dieleman (Nov 04 '13 at 08:32)
This is the corrected link to the paper: http://yann.lecun.com/exdb/publis/pdf/koray-psd-08.pdf
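The distinction the thread turns on (one-step feed-forward encoding versus iterative sparse inference, with PSD training the former to approximate the latter) can be sketched in a small NumPy toy. This is an illustrative sketch, not the implementation from the paper: the ISTA solver, the ReLU encoder, the random untrained encoder weights, and the fixed random dictionary are all my assumptions for demonstration.

```python
# Toy contrast between the two encoding procedures discussed above:
# - sparse coding infers a code iteratively (here via ISTA),
# - a sparse autoencoder / PSD encoder encodes in a single feed-forward step.
# In PSD the feed-forward encoder would be *trained* to predict the iterative
# codes; here we only run both procedures to show the structural difference.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_atoms = 16, 32

# Fixed random dictionary D with unit-norm atoms (normally learned jointly).
D = rng.standard_normal((n_features, n_atoms))
D /= np.linalg.norm(D, axis=0)

def soft_threshold(v, t):
    """Element-wise shrinkage operator used by ISTA."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code_ista(x, D, lam=0.1, n_iter=100):
    """Iterative encoding: minimize 0.5*||x - D z||^2 + lam*||z||_1 via ISTA."""
    L = np.linalg.norm(D, ord=2) ** 2   # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = soft_threshold(z + D.T @ (x - D @ z) / L, lam / L)
    return z

def feedforward_encode(x, W, b):
    """One-step encoding, as in a sparse autoencoder or a PSD encoder."""
    # ReLU chosen for simplicity; the original PSD encoder used a different
    # nonlinearity.
    return np.maximum(W @ x + b, 0.0)

# A signal that is a sparse combination of dictionary atoms.
x = D @ (rng.standard_normal(n_atoms) * (rng.random(n_atoms) < 0.1))

z_iter = sparse_code_ista(x, D)                       # iterative inference
z_ff = feedforward_encode(x,                          # single matrix multiply
                          rng.standard_normal((n_atoms, n_features)) * 0.1,
                          np.zeros(n_atoms))
print("fraction of nonzero iterative coefficients:", np.mean(z_iter != 0))
```

Training a PSD model would add a term penalizing the difference between `z_ff` and `z_iter`, so that at test time the cheap feed-forward pass can replace the inner optimization loop.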