I've read several different sources on pretraining a neural network using RBMs and auto-encoders. While I understand a neural network's structure (input neurons for the features, hidden layers as an error-minimizing black box, and output neurons for the answer), I'm having difficulty understanding what pretraining accomplishes and how it is done. I do understand that the hidden-layer weights are adjusted to give a better starting point, but I'm not sure whether I should simply run my training data through an RBM until the reconstruction error is minimized and then place those weights into the hidden layers. Is one RBM used for each hidden layer? Without heavy math, and without spoon-feeding me, can someone please explain pretraining to a five-year-old?
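
To show where my mental model currently sits, here is a rough numpy sketch of what I *think* the greedy layer-wise procedure looks like: one RBM per hidden layer, each trained with one-step contrastive divergence (CD-1) on the previous layer's hidden activations, with the learned weights then copied into the corresponding hidden layer before ordinary backprop fine-tuning. The layer sizes, learning rate, and the fake data are placeholders I made up, so please correct whatever is wrong here:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli-Bernoulli RBM trained with 1-step contrastive divergence (CD-1)."""
    def __init__(self, n_visible, n_hidden, rng):
        self.rng = rng
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_vis = np.zeros(n_visible)
        self.b_hid = np.zeros(n_hidden)

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_hid)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_vis)

    def cd1_update(self, v0, lr=0.05):
        # Positive phase: hidden probabilities given the data.
        h0 = self.hidden_probs(v0)
        # Sample hidden states, then reconstruct the visibles (negative phase).
        h0_sample = (self.rng.random(h0.shape) < h0).astype(float)
        v1 = self.visible_probs(h0_sample)
        h1 = self.hidden_probs(v1)
        # Approximate gradient: <v h>_data - <v h>_reconstruction.
        batch = v0.shape[0]
        self.W += lr * (v0.T @ h0 - v1.T @ h1) / batch
        self.b_vis += lr * (v0 - v1).mean(axis=0)
        self.b_hid += lr * (h0 - h1).mean(axis=0)
        # Reconstruction error, only used to monitor training.
        return np.mean((v0 - v1) ** 2)

def pretrain_stack(data, layer_sizes, epochs=10, batch=64, seed=0):
    """Train one RBM per hidden layer; each RBM sees the previous layer's hidden activations."""
    rng = np.random.default_rng(seed)
    rbms, x = [], data
    for n_hidden in layer_sizes:
        rbm = RBM(x.shape[1], n_hidden, rng)
        for _ in range(epochs):
            for i in range(0, len(x), batch):
                rbm.cd1_update(x[i:i + batch])
        rbms.append(rbm)
        # The hidden activations become the "data" for the next RBM.
        x = rbm.hidden_probs(x)
    # Each RBM's W / b_hid would then initialize the matching hidden layer
    # of the feed-forward net before backprop fine-tuning.
    return rbms

if __name__ == "__main__":
    # Placeholder binary "data": 512 examples of 784 features.
    fake = (np.random.default_rng(1).random((512, 784)) > 0.5).astype(float)
    stack = pretrain_stack(fake, layer_sizes=[500, 250], epochs=2)
    print([r.W.shape for r in stack])  # [(784, 500), (500, 250)]
```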

asked Jul 17 '14 at 23:43 by human, edited Jul 17 '14 at 23:44
