In a sufficiently complicated multivariate model where you can't analytically integrate out the parameters, how do you evaluate the evidence (AKA the marginal likelihood, AKA the normalizing constant Z; the ratio of evidences between two models gives the Bayes factor) when sampling, in order to compare different models?

I recently came across Skilling's nested sampling algorithm and had some success with it on a toy problem. I'm curious whether it works well in the real world, and what else does.
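For concreteness, the kind of toy setup I mean looks roughly like this (a minimal sketch: the uniform prior, standard-normal likelihood, and rejection sampling for the constrained draw are just illustrative choices, not part of Skilling's prescription, which would use constrained MCMC):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: uniform prior on [-5, 5] (density 1/10), likelihood N(x; 0, 1),
# so the true evidence is Z = (1/10) * integral of N(x;0,1) over [-5,5] ~= 0.1.
def log_likelihood(x):
    return -0.5 * x**2 - 0.5 * np.log(2.0 * np.pi)

def sample_prior(n):
    return rng.uniform(-5.0, 5.0, size=n)

def nested_sampling(n_live=100, n_iter=600):
    live = sample_prior(n_live)
    live_logl = log_likelihood(live)
    log_z = -np.inf      # running log-evidence
    log_x_prev = 0.0     # log prior volume, starts at log(1)
    for i in range(1, n_iter + 1):
        worst = np.argmin(live_logl)
        log_x = -i / n_live  # deterministic shrinkage E[log X_i] = -i/N
        # weight of the dead point: w_i = X_{i-1} - X_i
        log_w = log_x_prev + np.log1p(-np.exp(log_x - log_x_prev))
        log_z = np.logaddexp(log_z, live_logl[worst] + log_w)
        log_x_prev = log_x
        # replace the dead point with a prior draw subject to L > L_worst;
        # plain rejection is fine for a toy, hopeless in high dimensions
        while True:
            x_new = sample_prior(1)[0]
            if log_likelihood(x_new) > live_logl[worst]:
                live[worst] = x_new
                live_logl[worst] = log_likelihood(x_new)
                break
    # add the leftover mass carried by the remaining live points
    log_mean_l = np.logaddexp.reduce(live_logl) - np.log(n_live)
    return np.logaddexp(log_z, log_mean_l + log_x_prev)

print(np.exp(nested_sampling()))  # roughly 0.1
```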

asked Jun 30 '10 at 17:35 by a1k0n

edited Jul 07 '10 at 10:18 by Alexandre Passos ♦

3 Answers:

I think in general this is very hard. Radford Neal has worked on this -- see his 1993 monograph and his paper on Annealed Importance Sampling (both cited in the first link below). You might find the following blog entries to be a good way into the literature.

Mark

An entry in Radford Neal's blog

An entry in Xi'an's blog
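To give a flavor of annealed importance sampling: you anneal from the prior (beta = 0) to the posterior (beta = 1) and average the importance weights to estimate Z. A minimal sketch on a toy problem (uniform prior on [-5, 5], N(0, 1) likelihood, so Z is about 0.1; the linear temperature schedule and unit-scale Metropolis proposal are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def log_like(x):
    return -0.5 * x**2 - 0.5 * np.log(2.0 * np.pi)

def ais(n_particles=500, n_temps=50):
    betas = np.linspace(0.0, 1.0, n_temps + 1)
    x = rng.uniform(-5.0, 5.0, size=n_particles)  # draws from the prior
    log_w = np.zeros(n_particles)
    for b_prev, b in zip(betas[:-1], betas[1:]):
        # weight update: ratio of successive annealed targets f_b / f_{b_prev}
        log_w += (b - b_prev) * log_like(x)
        # one Metropolis step leaving f_b(x) = prior(x) * L(x)^b invariant
        prop = x + rng.normal(scale=1.0, size=n_particles)
        log_alpha = b * (log_like(prop) - log_like(x))
        log_alpha = np.where(np.abs(prop) <= 5.0, log_alpha, -np.inf)
        accept = np.log(rng.uniform(size=n_particles)) < log_alpha
        x = np.where(accept, prop, x)
    # Z_hat = average importance weight (computed stably in log space)
    m = log_w.max()
    return m + np.log(np.mean(np.exp(log_w - m)))

print(np.exp(ais()))  # roughly 0.1
```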

answered Jun 30 '10 at 18:42 by Mark Johnson

As a first thought, a recent paper by Murray and Salakhutdinov comes to mind (the references within that paper are also worth following). Maybe go straight to section 6 to see whether it applies to your setting.

answered Jun 30 '10 at 18:12 by osdf

Is sampling a necessary requirement? If not, I've had a bit of success in the past with variational Bayes and expectation propagation. To be perfectly honest, they can fail miserably as well; it all depends on the model.
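One thing worth remembering about variational Bayes: it gives a lower bound on log Z (the ELBO), not an unbiased estimate, and the bound is only tight when the posterior is close to the variational family. A small illustration on a toy problem where it works well, since the posterior is essentially Gaussian (the uniform prior, N(0, 1) likelihood, and Gaussian q are illustrative assumptions, and the closed-form ELBO assumes q's mass stays inside the prior's support):

```python
import numpy as np

# Toy problem: uniform prior on [-5, 5] (density 1/10), likelihood N(x; 0, 1),
# so the true log-evidence is about log(0.1). For q(x) = N(mu, sigma^2),
#   ELBO = E_q[log prior] + E_q[log L] + H[q]
#        = log(1/10) - 0.5*(mu^2 + sigma^2) + 0.5 + log(sigma)
def elbo(mu, sigma):
    return np.log(0.1) - 0.5 * (mu**2 + sigma**2) + 0.5 + np.log(sigma)

# crude grid search over the variational parameters
mus = np.linspace(-1.0, 1.0, 201)
sigmas = np.linspace(0.1, 3.0, 291)
best = max(elbo(m, s) for m in mus for s in sigmas)

# The bound is maximized at mu = 0, sigma = 1 and is essentially tight here,
# because the posterior really is (nearly) Gaussian:
print(best, np.log(0.1))
```

On models where the posterior is multimodal or heavily skewed, the gap between the best ELBO and log Z can be large, which is exactly the "fail miserably" case.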

answered Jun 30 '10 at 18:20 by Jurgen


User submitted content is under Creative Commons: Attribution - Share Alike; Other things copyright (C) 2010, MetaOptimize LLC.