I have learned a number of inference methods from classes and books, but I don't really understand when a particular method is appropriate. I have seen various descriptions of the pros and cons of approximate inference methods, such as this one, and it seems that people just use their favorite. As a beginner, I don't have a favorite; I am roughly familiar with several, though I have yet to use any of them "in reality," beyond textbook examples.

I was wondering whether there are some general heuristics (or a decision-tree sort of thing) for choosing among the various approximate inference methods (Laplace, expectation propagation, sampling (Gibbs, Metropolis, slice, ...), variational, etc.) for practical applications, based on computational cost, ease of use/derivation/coding, popularity, and so on. I don't really know whether there are ways to rule out a particular method for a given model, or to decide right away that a particular inference method is best.

Sorry for such a generic question (I would imagine the answer is fairly application-independent)... I'm looking for a way to go beyond the books, and maybe to try to become an expert in one or two particular methods. Thanks... any help/suggestions/references much appreciated!

Unfortunately, as you suspected, there is no one true answer to this problem. Indeed, even for specific models (such as latent Dirichlet allocation), different people and different applications use different inference methods, and people still argue about which are more appropriate. As a rule of thumb, you should be minimally fluent in all of them: fluent enough to know when you can (or want to) apply each one, and what its disadvantages are.
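For getting minimally fluent, sampling methods are often the cheapest place to start, because a basic random-walk Metropolis sampler needs nothing from the model but its unnormalized log density. Here is a minimal sketch in Python; the function name, step size, and the standard-normal target are illustrative choices of mine, not from any particular library:

```python
import math
import random

def metropolis(log_p, x0, n_steps, step=1.0, seed=0):
    """Random-walk Metropolis for a 1-D target given by its
    unnormalized log density log_p. Returns the full chain."""
    rng = random.Random(seed)
    x, lp = x0, log_p(x0)
    samples = []
    for _ in range(n_steps):
        # Propose a Gaussian perturbation of the current state.
        y = x + rng.gauss(0.0, step)
        lp_y = log_p(y)
        # Accept with probability min(1, p(y) / p(x)).
        if rng.random() < math.exp(min(0.0, lp_y - lp)):
            x, lp = y, lp_y
        samples.append(x)
    return samples

# Example target: standard normal, log p(x) = -x^2/2 (up to a constant).
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 20000)
```

Discarding an initial burn-in portion of `chain` and looking at the mean and variance of the rest is a quick sanity check; the same handful of lines also makes the method's main disadvantages (tuning the step size, slow random-walk mixing) easy to see first-hand.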