What's your favourite NIPS poster/talk so far?

Also: What's your favourite NIPS quote so far?

asked Dec 08 '10 at 01:35

Andreas Mueller


4 Answers:

I liked "Variational Inference over Combinatorial Spaces" because of the breadth of applications; they have at least half a dozen diverse examples.

answered Dec 08 '10 at 01:57

Yaroslav Bulatov

People seemed to say that "Multiple Kernel Learning and the SMO Algorithm" was a very important breakthrough. I can't tell, as I have hardly looked at SVMs and kernels.

This answer is marked "community wiki".

answered Dec 11 '10 at 09:15

Gael Varoquaux

There was a great poster by Honglak Lee and others in the deep learning workshop showing that k-means outperforms all deep architectures. I thought that was pretty neat. But there were many others, too.

answered Dec 12 '10 at 02:31

Andreas Mueller

Which performance was considered? For example, k-means does not seem suitable for learning to compute the parity of N input bits.

(Dec 12 '10 at 04:29) Ivo Danihelka
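A quick toy illustration of the point (my own sketch, not from any of the papers discussed): every individual bit is uncorrelated with the parity label, so the best linear fit on raw bits is just the constant 0.5 — it cannot separate even from odd at all.

```python
import numpy as np

# All 2^N bit vectors and their parity labels (illustrative sketch only).
N = 4
X = np.array([[(i >> b) & 1 for b in range(N)] for i in range(2 ** N)], dtype=float)
y = X.sum(axis=1) % 2  # parity label in {0, 1}

# Least-squares linear fit with an intercept column.
A = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# The bit weights come out (numerically) zero and the intercept is 0.5,
# so the fitted value is 0.5 for every input: chance level.
fitted = A @ w
```

Any feature map that only measures distances to a few centroids faces a similar problem, since parity flips with every single-bit change.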

Are the papers from that workshop available anywhere?

(Dec 12 '10 at 05:36) Alexandre Passos ♦
Ivo: I couldn't find the paper, but Andreas' blog post gives more details -- http://peekaboo-vision.blogspot.com/2010/12/nips-2010-single-layer-networks-in.html

Alexandre: some papers are here -- http://deeplearningworkshopnips2010.wordpress.com/schedule/acceptedpapers/

(Dec 12 '10 at 15:59) Yaroslav Bulatov

This is a very interesting result, but are the different types of RBMs and autoencoders actually stacked to form deep architectures? Otherwise I wouldn't necessarily see this as a failure of the idea behind deep learning, but of the submodules that are used to construct deep networks. Single RBMs are shallow models as well.

(Dec 13 '10 at 06:00) Philemon Brakel
@Alexandre: This particular paper is not online, but was submitted to AISTATS. You should have come to the workshop, I saw you wandering around ;) @Philemon: In this paper, the RBMs and autoencoders were not stacked. But the performance was compared to the literature, which includes many deep architectures that were specifically designed for these datasets. As Yaroslav said: there are more details in my blog.

(Dec 14 '10 at 04:23) Andreas Mueller

I see. In your blog I only saw results for single-layer architectures. If the convolutional RBM was the previous state-of-the-art method, beating deeper architectures, that was already a sign, I guess. A very interesting development, and I'm eager to read the actual paper.

(Dec 14 '10 at 05:35) Philemon Brakel

The convolutional RBM and the mcRBM (mean-covariance RBM) are deep architectures. The convolutional RBM is actually also a paper by Honglak Lee and Andrew Ng (so they beat their own deep method). The mcRBM is work by Hinton's group; I'm not sure at the moment who the first author was.

(Dec 14 '10 at 05:52) Andreas Mueller

@Andreas: I definitely should have gone. I wish I had two or three copies of myself to wander around the workshops; there were so many interesting things going on.

(Dec 14 '10 at 06:09) Alexandre Passos ♦
The paper on the soft-k-means feature extractor by Adam Coates, Honglak Lee and Andrew Ng presented during the NIPS 2010 deep learning workshop is now available online: "An Analysis of Single-Layer Networks in Unsupervised Feature Learning"

(Dec 19 '10 at 08:16) ogrisel

Thanks ogrisel, that's great :)

(Dec 19 '10 at 08:53) Andreas Mueller

By the way, my favourite quote is from David W. Hogg in the Sam Roweis symposium: "In astronomy, we work at the photon level. We don't have big bright letters with numbers written with black markers."

Oh and how could I forget, Jitendra Malik himself on object recognition: "Learning object recognition from bounding boxes is like learning language from a list of sentences."

answered Dec 14 '10 at 10:37

Andreas Mueller

edited Dec 14 '10 at 11:22


Seen: 2,433 times

Last updated: Dec 19 '10 at 08:53

User submitted content is under Creative Commons: Attribution - Share Alike; Other things copyright (C) 2010, MetaOptimize LLC.