Hi all,

The auto-encoder is used for dimensionality reduction and as a tool for unsupervised feature learning. Further, auto-encoders have been used for building and training multi-layer neural networks. When we talk about auto-encoders, a sparsity term is often introduced, and I want to know what the purpose of introducing a sparsity term for auto-encoders is. Thanks
With many hidden units and no sparsity term, you can get an auto-encoder that reconstructs the input perfectly, but it's kind of useless (its hidden units are bad feature extractors). With a sparsity term you get good feature extractors. There is also a biological motivation, since biological neurons are active only 1-4% of the time.

+1, thanks a lot. Are there any published papers from which I can extract the above information?
(Jul 22 '12 at 00:48)
Upul Bandara
See section 7.1.2 in http://www.iro.umontreal.ca/~bengioy/papers/ftml_book.pdf
(Jul 22 '12 at 20:16)
Yaroslav Bulatov
Let me answer your question:

1. If the number of units in the hidden layer is equal to the number in the input layer, we may learn a trivial solution: the representation can simply be equal to the input data.
2. If the number of hidden units is less than in the input layer, we obviously learn a compressed representation, which may be more useful than solution 1.
3. If the number is larger than in the input layer, what will we get? From the formulation data = W*a, where a is the representation: typically, for different data we can end up learning the same representation, which is useless too. So under this condition, motivated by solution 2, we add a sparsity constraint.
4. There are also some biological motivations.
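For concreteness, here is a minimal NumPy sketch of the kind of sparsity constraint mentioned in point 3: a KL-divergence penalty that pushes each hidden unit's average activation toward a small target rho (e.g. 0.01-0.05, echoing the 1-4% firing rate mentioned in the answer above). All names and hyper-parameters here are illustrative, not taken from this thread or the linked chapter.

```python
# Minimal sketch of a sparse auto-encoder objective (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sparse_autoencoder_loss(X, W1, b1, W2, b2, rho=0.05, beta=3.0):
    """Reconstruction error plus a KL-divergence sparsity penalty.

    rho  : target average activation per hidden unit (small, e.g. 0.01-0.05)
    beta : weight of the sparsity term in the total loss
    """
    A = sigmoid(X @ W1 + b1)        # hidden activations, shape (n, h)
    X_hat = sigmoid(A @ W2 + b2)    # reconstruction, shape (n, d)

    # Average squared reconstruction error over the batch.
    recon = 0.5 * np.mean(np.sum((X_hat - X) ** 2, axis=1))

    # Average activation of each hidden unit over the batch.
    rho_hat = np.clip(A.mean(axis=0), 1e-8, 1 - 1e-8)

    # KL(rho || rho_hat), summed over hidden units.
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

    return recon + beta * kl

# Over-complete setting from point 3: more hidden units (h) than inputs (d).
d, h, n = 20, 50, 100
X = rng.random((n, d))
W1 = rng.normal(scale=0.1, size=(d, h)); b1 = np.zeros(h)
W2 = rng.normal(scale=0.1, size=(h, d)); b2 = np.zeros(d)
print(sparse_autoencoder_loss(X, W1, b1, W2, b2))
```

Gradient-based training would minimize this combined loss; beta trades off reconstruction accuracy against sparsity, and with beta = 0 an over-complete auto-encoder is free to learn the trivial copying solutions described in points 1 and 3.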
cross-posted to stats.stackexchange.com