Some examples of what I'm looking for are Shapeset and NORB. The actual data is related to the underlying factors of variation by a very complicated function, but those factors (lighting, camera angle, etc.) are known and included as metadata for the dataset. Bonus points if there are published papers where deep learning models perform well on the dataset.

asked Oct 04 '10 at 18:34


Ian Goodfellow


2 Answers:

You could generate 3D scenes using POV-Ray. In that case, you would know all the underlying factors.
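A minimal sketch of that idea: sample every factor of variation (camera position, light position, object colour) yourself, write each sample out as a POV-Ray scene file, and keep the ground-truth factors as metadata. The scene template, factor ranges, and file names below are illustrative choices, not from any published dataset.

```python
import csv
import random

# Tiny POV-Ray scene whose free parameters ARE the factors of variation.
SCENE_TEMPLATE = """\
camera {{ location <{cam_x}, {cam_y}, -5> look_at <0, 0, 0> }}
light_source {{ <{light_x}, {light_y}, -3> color rgb <1, 1, 1> }}
sphere {{ <0, 0, 0>, 1 pigment {{ color rgb <{r}, {g}, {b}> }} }}
"""

def make_scene(rng):
    """Sample the underlying factors and render them into scene text."""
    factors = {
        "cam_x": round(rng.uniform(-2, 2), 3),   # camera position
        "cam_y": round(rng.uniform(-2, 2), 3),
        "light_x": round(rng.uniform(-4, 4), 3),  # lighting direction
        "light_y": round(rng.uniform(1, 5), 3),
        "r": round(rng.random(), 3),              # object colour
        "g": round(rng.random(), 3),
        "b": round(rng.random(), 3),
    }
    return SCENE_TEMPLATE.format(**factors), factors

rng = random.Random(0)
with open("factors.csv", "w", newline="") as meta:
    writer = None
    for i in range(10):
        scene, factors = make_scene(rng)
        with open(f"scene_{i:03d}.pov", "w") as f:
            f.write(scene)
        if writer is None:
            writer = csv.DictWriter(meta, fieldnames=["scene"] + list(factors))
            writer.writeheader()
        writer.writerow({"scene": f"scene_{i:03d}.pov", **factors})

# Each .pov file can then be rendered with the povray command-line tool,
# e.g. `povray +Iscene_000.pov +Oscene_000.png`, while factors.csv holds
# the ground-truth factors for every rendered image.
```

Because you control the sampling, the mapping from factors to pixels is exactly the complicated renderer, but the factors themselves are known perfectly.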

answered Oct 06 '10 at 11:34


Joseph Turian

Yes, on that note, another dataset I know of but forgot to mention is Wiskott's group's OpenGL fish and spheres, which is technically available though not very public.

(Oct 06 '10 at 13:15) Ian Goodfellow

There are papers by Geoff Hinton's group on recognizing emotions: Susskind, J.M., Hinton, G.E., Movellan, J.R., and Anderson, A.K., "Generating Facial Expressions with Deep Belief Nets", 2008. They use a lot of meta-information to learn layer by layer... For example, different regions of the face show different muscular tension when displaying different emotions, so one layer could learn features like frowning of the forehead, while another could learn smiles using just the part of the picture around the lips. I think the data is easily available too... but you should check.

answered Oct 06 '10 at 03:36


kpx


powered by OSQA

User submitted content is under Creative Commons: Attribution - Share Alike; Other things copyright (C) 2010, MetaOptimize LLC.