What is the connection between learning and inference in a Bayesian setting? Are these two the same (i.e., is learning just inference on the parameters of the model), or is there some other connection? For instance, in Markov networks, the learning (MLE) algorithm has inference as an integral step.

Thanks in advance

asked Apr 01 '13 at 02:45


turbo364

edited Apr 02 '13 at 06:07


larsmans


One Answer:

From a purely Bayesian perspective, both what is called learning and what is called inference can be interpreted as Bayesian inference: computing the posterior probability distribution of a set of variables, given their prior and the distributions of the other variables in your model.
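To make this concrete, here is a minimal sketch of posterior computation as the common operation. It uses the standard Beta-Bernoulli conjugate update for a coin's bias; the function name and data are made up for illustration.

```python
# Minimal sketch: Bayesian inference = updating a prior to a posterior.
# Beta(alpha, beta) prior on a coin's bias, updated with observed 0/1 flips
# (Beta-Bernoulli conjugacy, so the posterior has a closed form).

def posterior_params(alpha, beta, flips):
    """Return the Beta posterior parameters after observing a list of 0/1 flips."""
    heads = sum(flips)
    tails = len(flips) - heads
    return alpha + heads, beta + tails

# Start from a uniform Beta(1, 1) prior and observe 3 heads, 1 tail.
a, b = posterior_params(1, 1, [1, 1, 0, 1])
print(a, b)         # posterior is Beta(4, 2)
print(a / (a + b))  # posterior mean of the bias
```

Whether the variable being updated is a "parameter" or a "latent variable", the mechanical operation is the same kind of posterior update.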

However, this perspective can miss some important details. In many parametric models you can split the variables into two sets: one that is constant-sized, containing what are called the parameters, and another whose size scales linearly with the dataset, containing what are called the latent and observed variables. In a mixture model, for example, the mixture weights and each component's distribution parameters are the parameters, and the variables representing each data point's cluster assignment are the latent variables. Then "learning" usually means doing Bayesian inference on the parameters, and "inference" usually means doing Bayesian inference on the latent variables. This is a useful distinction because "learning" is something you do once, when fitting the model, whereas "inference" is something you do every time you see a new datapoint.
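A sketch of that split, using a two-component 1-D Gaussian mixture (the parameter values below are made up, standing in for the output of a one-time "learning" phase such as EM):

```python
import math

# Parameters: fixed after the one-time "learning" phase (values are illustrative).
LEARNED = {
    "weights": [0.6, 0.4],   # mixture weights
    "means":   [0.0, 5.0],   # component means
    "stds":    [1.0, 1.0],   # component standard deviations
}

def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def infer_assignment(x, params):
    """'Inference': posterior over the latent cluster indicator for one point.
    Unlike learning, this runs for every new datapoint."""
    joint = [w * gaussian_pdf(x, m, s)
             for w, m, s in zip(params["weights"], params["means"], params["stds"])]
    z = sum(joint)
    return [p / z for p in joint]

resp = infer_assignment(4.5, LEARNED)
# A point near 5.0 gets assigned almost entirely to the second component.
```

The parameter set stays the same size no matter how much data arrives, while one latent cluster indicator exists per datapoint, which is exactly the constant-sized vs. linearly-growing split described above.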

Specifically, in directed graphical models each factor's conditional distribution over its child variable is assumed to be normalized, so "learning" from fully observed data (either by MLE or by Bayesian methods) can be done just by counting and renormalizing. (This is not true in undirected factor graphs, where normalization is global; that is what requires "inference" as a subroutine of learning.) When some variables are latent, "inference" is always required, in both directed and undirected factor graphs.
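Here is a sketch of that counting-and-renormalizing step for one local conditional in a fully observed directed model; the variable names and data are invented for illustration. Estimating P(rain | season) by MLE is just a count table divided by its row sums:

```python
from collections import Counter, defaultdict

# Fully observed (season, rain) pairs (made-up data).
data = [("winter", True), ("winter", True), ("winter", False),
        ("summer", False), ("summer", False), ("summer", True)]

# Count co-occurrences of parent value and child value.
counts = defaultdict(Counter)
for season, rain in data:
    counts[season][rain] += 1

# Renormalize each row: MLE of the conditional probability table.
cpt = {season: {val: n / sum(c.values()) for val, n in c.items()}
       for season, c in counts.items()}

# cpt["winter"][True] == 2/3. No iterative inference loop was needed,
# because each local conditional normalizes independently of the others.
```

In an undirected model the analogous update cannot be done row by row, because the partition function couples all the factors, which is why learning there needs inference inside the loop.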

answered Apr 01 '13 at 09:29


Alexandre Passos ♦
2554154278421
