I've been taking courses on machine learning at my university this semester. Mostly we have been going through different ways to separate data, but I have yet to learn about the models' weaknesses and strengths.

In particular, we had a class project where the winner, a very simple naive Bayes model, beat out very 'advanced' competition, including SVMs with Gaussian kernels, decision trees, etc., and I started wondering if it'd be more efficient to teach us how and where to use certain models. As of now, we know many different models, but I'd personally feel unprepared if I had to choose a model for some specific task.

So my question is, is there a book or a web page where I could specifically learn about how to apply machine learning models?

asked Feb 06 '11 at 07:04



edited Feb 06 '11 at 07:07


For the specific example of naive Bayes consistently outperforming expectations, the classic text is:

Domingos & Pazzani, 1997: On the optimality of the simple Bayesian classifier under zero-one loss

That paper can at least help you understand when naive Bayes is a good choice.
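As a rough illustration of the kind of comparison the project describes, here is a minimal sketch pitting naive Bayes against an RBF-kernel SVM under cross-validation. The use of scikit-learn and its bundled iris dataset is just one convenient choice, not something from the thread:

```python
# Sketch: compare two classifiers by 5-fold cross-validated accuracy.
# scikit-learn and the iris dataset are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

for name, model in [("naive Bayes", GaussianNB()),
                    ("SVM (RBF kernel)", SVC(kernel="rbf"))]:
    # cross_val_score fits the model on 4 folds and scores on the 5th,
    # rotating through all 5 splits.
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```

On an easy, low-dimensional dataset like this, the two tend to score similarly, which is part of the point: the gap between "simple" and "advanced" models depends heavily on the data.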

(Feb 08 '11 at 08:19) Paul Barba

One Answer:

Good question. There is a whole area of ML that deals with how to choose a model, called regularization and model selection.

A couple of good resources:

The last Asian ML Conference had a good tutorial on model selection: Here it is

Andrew Ng from Stanford actually devotes a couple of his lectures to this specific topic.


I think it's lecture 10, and you can always check the attached notes.
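The basic hold-out procedure covered in lectures like those can be sketched in a few lines. Everything here (function names, the candidate interface) is illustrative, not from any particular course:

```python
# Sketch of hold-out model selection: split the data, fit each
# candidate on the training part, keep the best validation score.
import random

def train_val_split(data, val_frac=0.3, seed=0):
    """Shuffle a list of (x, y) pairs and split off a validation set."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    n_val = int(len(shuffled) * val_frac)
    return shuffled[n_val:], shuffled[:n_val]

def select_model(candidates, data):
    """Pick the candidate with the best validation accuracy.

    `candidates` maps a name to a (fit, predict) pair:
    fit(train) -> state, predict(state, x) -> label.
    """
    train, val = train_val_split(data)
    best_name, best_acc = None, -1.0
    for name, (fit, predict) in candidates.items():
        state = fit(train)
        acc = sum(predict(state, x) == y for x, y in val) / len(val)
        if acc > best_acc:
            best_name, best_acc = name, acc
    return best_name, best_acc
```

To use it, wrap each model as a (fit, predict) pair, e.g. a majority-class baseline versus a threshold rule, and `select_model` returns whichever generalizes better on the held-out split. Cross-validation is the same idea with the split rotated over several folds.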

answered Feb 07 '11 at 04:46


Leon Palafox ♦


