|
In a sufficiently complicated multivariate model where you can't analytically integrate out the parameters, how do you evaluate the evidence (AKA the marginal likelihood or normalizing constant Z; ratios of evidences between models give Bayes factors) from samples, in order to compare different models? I recently came across Skilling's Nested Sampling algorithm, had some success with it on a toy problem, and I'm curious whether it works well in the real world, and what else works in the real world.
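For concreteness, here is a minimal sketch of Nested Sampling on a toy problem of my own choosing (a 2-D unit Gaussian likelihood under a uniform prior on [-5, 5]^2, so the true log Z is about log(1/100) = -4.605). The constrained-prior step is the naive random-walk version; real implementations spend most of their effort on that step.

```python
# Minimal Nested Sampling sketch: toy 2-D Gaussian likelihood,
# uniform prior on [-5, 5]^2.  Illustrative settings throughout.
import numpy as np

rng = np.random.default_rng(0)
D, LO, HI = 2, -5.0, 5.0                     # dimension and prior box

def log_likelihood(theta):
    return -0.5 * np.sum(theta**2) - 0.5 * D * np.log(2 * np.pi)

def sample_constrained(start, logl_min, steps=50, scale=0.5):
    # Random walk targeting the prior restricted to logL > logl_min.
    # (For a uniform prior, accepting any in-box move above the
    # threshold is a valid Metropolis step.)
    theta, logl = start.copy(), log_likelihood(start)
    for _ in range(steps):
        prop = theta + scale * rng.normal(size=D)
        if np.all((prop >= LO) & (prop <= HI)):
            logl_prop = log_likelihood(prop)
            if logl_prop > logl_min:
                theta, logl = prop, logl_prop
    return theta, logl

def nested_sampling(n_live=100, n_iter=1000):
    live = rng.uniform(LO, HI, size=(n_live, D))    # draws from the prior
    live_logl = np.array([log_likelihood(t) for t in live])
    log_z, log_x = -np.inf, 0.0                     # evidence, prior volume
    for i in range(n_iter):
        worst = int(np.argmin(live_logl))
        log_x_new = -(i + 1) / n_live               # E[log X_i] shrinks by 1/N per step
        log_w = np.log(np.exp(log_x) - np.exp(log_x_new))  # shell width X_{i-1} - X_i
        log_z = np.logaddexp(log_z, live_logl[worst] + log_w)
        # Replace the worst point with a fresh constrained-prior draw,
        # starting the walk from a randomly chosen *other* live point.
        start = live[(worst + 1 + rng.integers(n_live - 1)) % n_live]
        live[worst], live_logl[worst] = sample_constrained(start, live_logl[worst])
        log_x = log_x_new
    # Contribution of the remaining live points: Z += X_final * mean(L).
    log_z = np.logaddexp(
        log_z, np.logaddexp.reduce(live_logl) - np.log(n_live) + log_x)
    return log_z

print(f"estimated log Z = {nested_sampling():.3f}  (truth ~ -4.605)")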
|
I think this is very hard in general. Radford Neal has worked on this -- see his 1993 monograph and his paper on Annealed Importance Sampling (both cited in the first link below). You might find the following blog entries a good way into the literature. -- Mark
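Since Annealed Importance Sampling came up: a minimal sketch of the idea, on the same toy Gaussian-in-a-box problem as above (my own illustrative example; the temperature ladder and Metropolis settings are arbitrary). You anneal from the prior (beta = 0) to the posterior (beta = 1) along likelihood^beta, and the mean of the accumulated importance weights is an unbiased estimate of Z.

```python
# Minimal Annealed Importance Sampling sketch (after Neal, 2001).
import numpy as np

rng = np.random.default_rng(1)
D, LO, HI = 2, -5.0, 5.0

def log_likelihood(theta):
    return -0.5 * np.sum(theta**2) - 0.5 * D * np.log(2 * np.pi)

def log_prior(theta):
    # Uniform on the box; -inf outside.
    if np.all((theta >= LO) & (theta <= HI)):
        return -D * np.log(HI - LO)
    return -np.inf

def mcmc_step(theta, beta, scale=0.5):
    # One Metropolis step targeting prior * likelihood^beta.
    prop = theta + scale * rng.normal(size=D)
    log_acc = (log_prior(prop) + beta * log_likelihood(prop)
               - log_prior(theta) - beta * log_likelihood(theta))
    return prop if np.log(rng.uniform()) < log_acc else theta

def ais_log_z(n_runs=200, n_temps=100, n_steps=5):
    betas = np.linspace(0.0, 1.0, n_temps)
    log_w = np.zeros(n_runs)
    for r in range(n_runs):
        theta = rng.uniform(LO, HI, size=D)     # exact prior draw at beta = 0
        for j in range(1, n_temps):
            # Weight update: ratio of successive targets at the current
            # point is likelihood^(beta_j - beta_{j-1}).
            log_w[r] += (betas[j] - betas[j - 1]) * log_likelihood(theta)
            for _ in range(n_steps):            # equilibrate at the new temperature
                theta = mcmc_step(theta, betas[j])
    # log Z = log of the mean importance weight.
    return np.logaddexp.reduce(log_w) - np.log(n_runs)

print(f"AIS log Z = {ais_log_z():.3f}  (truth ~ -4.605)")
```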
|
Is sampling a necessary requirement? If not, I've had a bit of success in the past with Variational Bayes (which gives a lower bound on the log evidence) or Expectation Propagation (which gives an approximation to it). To be perfectly honest, they can fail miserably as well; it all depends on the model.
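To illustrate the Variational Bayes route: a minimal sketch on a 1-D conjugate example of my own choosing where log Z is known exactly (prior theta ~ N(0, 1), likelihood y | theta ~ N(theta, 1), observed y = 1, so p(y) = N(1; 0, 2)). The ELBO for any q lower-bounds log Z; real VB optimizes it with gradients rather than the grid search used here, and the bound is tight in this example only because the true posterior lies in the variational family.

```python
# Minimal sketch: the variational lower bound (ELBO) on log Z,
# checked against an exactly computable 1-D conjugate-Gaussian evidence.
import numpy as np

rng = np.random.default_rng(2)
Y = 1.0                                        # observed datum

def log_norm(x, mean, var):
    return -0.5 * np.log(2 * np.pi * var) - 0.5 * (x - mean) ** 2 / var

def elbo(m, s, n_samples=5000):
    # Monte Carlo ELBO = E_q[log p(y|theta) + log p(theta) - log q(theta)]
    theta = m + s * rng.normal(size=n_samples)  # theta ~ q = N(m, s^2)
    val = (log_norm(Y, theta, 1.0)              # log likelihood
           + log_norm(theta, 0.0, 1.0)          # log prior
           - log_norm(theta, m, s**2))          # - log q
    return val.mean()

# Grid-search the variational family; the best ELBO lower-bounds log Z,
# with equality at q = true posterior N(y/2, 1/2).
grid_m = np.linspace(0.0, 1.0, 11)
grid_s = np.linspace(0.3, 1.2, 10)
best = max(elbo(m, s) for m in grid_m for s in grid_s)
exact = log_norm(Y, 0.0, 2.0)                   # log p(y) = log N(1; 0, 2)
print(f"best ELBO = {best:.4f}  vs  exact log Z = {exact:.4f}")
```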