|
I recently saw a video of Josh Tenenbaum's very interesting talk, How to Grow a Mind: Statistics, Structure and Abstraction. After the talk, Geoff Hinton asked him why not use RBMs all the way up and let the trees emerge as a property of the energy landscape. Josh Tenenbaum replied that he would discuss this at the Transfer Learning with Rich Models workshop. It seems that the videos of the NIPS 2010 workshops have already been posted; however, as of this writing, I don't see anything there related to Transfer Learning with Rich Models. Was this workshop not recorded on video? If there are no videos from the workshop, can anyone tell me what was said about Geoff Hinton's question there?
|
I was one of the speakers at that workshop; there were no video recordings, unfortunately. I missed most of Josh's talk, but I think that was his way of politely saying that he prefers hierarchical Bayesian models to RBMs. |
|
I didn't catch Josh's workshop talk either, but a good reason to use tree priors instead of all-purpose RBMs is that trees can be a strong source of inductive bias, and assuming that structure directly might require less training data (and less optimization) to reach comparable results.
|
I thought Josh Tenenbaum replied to Geoff Hinton directly, but maybe I'm mistaken. I wrote down some notes about the workshop on my blog. I think Josh's reply was in the spirit of what Alexandre said: modeling structure in data more explicitly makes learning it much easier. A given structure might be learned by a deep network, but it is not clear how that would happen. I wrote this right after the talk, so it is probably close to what he actually said.
Thanks for the video. Very interesting work!