Hi all,

Auto-encoders are used for dimensionality reduction and as a tool for unsupervised feature learning. They have also been used for building and pre-training multi-layer neural networks.

When we talk about auto-encoders, a sparsity term is often introduced, and I want to know what the purpose of introducing a sparsity term for auto-encoders is.

Thanks

asked Jul 21 '12 at 09:52


Upul Bandara

cross-posted to stats.stackexchange.com

(Jul 21 '12 at 13:36) alto

2 Answers:

With many hidden units and no sparsity term, you can get an auto-encoder that reconstructs the input perfectly, but it is fairly useless: its hidden units are bad feature extractors. With a sparsity term you get good feature extractors. There is also a biological motivation, since biological neurons are active only about 1-4% of the time.
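As a rough sketch of what such a sparsity term can look like (assuming the KL-divergence penalty used in common sparse auto-encoder formulations; the variable names and toy dimensions here are my own):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sparsity_penalty(hidden_activations, rho=0.05):
    """KL-divergence sparsity penalty: penalizes hidden units whose
    average activation over the batch deviates from the target rho."""
    rho_hat = hidden_activations.mean(axis=0)      # mean activation per hidden unit
    rho_hat = np.clip(rho_hat, 1e-8, 1 - 1e-8)     # avoid log(0)
    return np.sum(rho * np.log(rho / rho_hat)
                  + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

# Toy example: random inputs through a random (overcomplete) encoder
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))             # 100 samples, 20 inputs
W = rng.normal(scale=0.1, size=(20, 50))   # 50 hidden units > 20 inputs
H = sigmoid(X @ W)
penalty = sparsity_penalty(H, rho=0.05)    # added to the reconstruction loss
```

During training this penalty is weighted and added to the reconstruction error, pushing most hidden units toward being inactive on any given input.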

answered Jul 21 '12 at 21:47


Yaroslav Bulatov

+1 thanks a lot. Are there any published papers from which I can extract the above information?

(Jul 22 '12 at 00:48) Upul Bandara

See section 7.1.2 in http://www.iro.umontreal.ca/~bengioy/papers/ftml_book.pdf

(Jul 22 '12 at 20:16) Yaroslav Bulatov

Let me answer your question:

1. If the number of units in the hidden layer is equal to that in the input layer, we can learn a trivial solution: the representation may simply equal the input data.

2. If the number of hidden units is less than the input layer, we obviously learn a compressed representation, which may be more useful than solution 1.

3. If the number is larger than the input layer, what do we get? From the formulation data = W*a, where a is the representation, we can typically learn the same representation for different data, which is useless too. So under this condition, motivated by solution 2, we add a sparsity constraint.

4. There are also some biological motivations.

answered Sep 18 '14 at 01:32


sunkevin


powered by OSQA

User submitted content is under Creative Commons: Attribution - Share Alike; Other things copyright (C) 2010, MetaOptimize LLC.