When a classifier outputs a probability for an instance (let's assume binary classification for simplicity), that probability distribution is often used as a measure of how "certain" or "confident" the classifier is in its prediction. For example, if the classifier outputs a [0.49, 0.51] probability for an instance, the classifier is assumed to be uncertain about that instance. However, what if the true probability distribution for that instance were in fact [0.49, 0.51]?

The question is: what work is out there that outputs both a probability and a measure of confidence in that probability estimate? I am especially interested in work on classification, but work on other tasks would be fine as well.

asked Jul 22 '10 at 13:12

mbilgic


One Answer:

Any Bayesian algorithm that performs full Bayesian inference will do just fine. For example, with the naive Bayes classifier, a prior, and a bit of inference you can get a mean probability that the instance is in one class (which is what is usually reported), but it's trivial to get the full posterior distribution over that probability, from which you can compute a confidence interval. The same is true for all other sorts of probabilistic classifiers: with a prior on the weights you can get the same thing from logistic regression, for example, and with judicious use of Gibbs sampling you can get this sort of distribution from deep belief nets.
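To make the naive Bayes case concrete, here is a minimal sketch (mine, not from the answer above) for Bernoulli naive Bayes: Beta(1, 1) priors on the class prior and on each per-class feature probability give conjugate Beta posteriors, and drawing parameters from those posteriors yields a whole distribution over P(y=1|x) rather than a single number. The toy data, the prior choice, and the function name posterior_class1_prob are all illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical toy training set: 40 instances, 3 binary features.
    X = rng.integers(0, 2, size=(40, 3))
    y = (X[:, 0] | X[:, 1]).astype(int)  # synthetic labelling rule

    a, b = 1.0, 1.0  # Beta(1, 1) prior on every Bernoulli parameter

    def posterior_class1_prob(x, n_samples=5000):
        """Sample the posterior distribution of P(y=1 | x) under
        Bernoulli naive Bayes with conjugate Beta priors."""
        n1, n0 = int((y == 1).sum()), int((y == 0).sum())
        c1 = X[y == 1].sum(axis=0)  # per-feature counts in class 1
        c0 = X[y == 0].sum(axis=0)  # per-feature counts in class 0
        probs = np.empty(n_samples)
        for s in range(n_samples):
            pi = rng.beta(a + n1, b + n0)        # draw of the class prior
            th1 = rng.beta(a + c1, b + n1 - c1)  # draws of P(x_j=1 | y=1)
            th0 = rng.beta(a + c0, b + n0 - c0)  # draws of P(x_j=1 | y=0)
            lik1 = np.prod(np.where(x == 1, th1, 1.0 - th1))
            lik0 = np.prod(np.where(x == 1, th0, 1.0 - th0))
            probs[s] = pi * lik1 / (pi * lik1 + (1.0 - pi) * lik0)
        return probs

    samples = posterior_class1_prob(np.array([1, 0, 1]))
    print("mean P(y=1|x):", samples.mean())
    print("95% interval:", np.percentile(samples, [2.5, 97.5]))

The spread of the sampled probabilities (e.g. the width of that 95% interval) is exactly the confidence in the probability estimate the question asks about: it shrinks as more training data is observed, even when the mean probability stays near 0.5.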

However, these are not the true probabilities, only the model's estimates, and they will necessarily be biased in the same way your model is biased.

answered Jul 22 '10 at 13:24

Alexandre Passos ♦
