It is well known that the Gibbs sampler can fail in certain cases and not produce samples from the target distribution. Good examples were given in http://metaoptimize.com/qa/questions/7831/disadvatage-of-gibbs-sampler

But it still satisfies the detailed balance equation, so it seems it must work. I see a contradiction here and cannot resolve it myself.

Could someone explain in detail (or link to an explanation) how the Gibbs sampler can have theoretical guarantees of convergence to the target distribution and yet get stuck in some region at the same time?

asked Sep 26 '12 at 15:21

Sergey Bartunov

edited Sep 26 '12 at 15:21


One Answer:

Well, detailed balance only guarantees that the target is a stationary distribution of the chain; actually converging to it additionally requires the chain to be ergodic, and even then the guarantee is asymptotic. Just because the sampler is guaranteed to converge in distribution in the limit of infinite time (assuming perfect random numbers) does not mean it will get close in any finite number of steps. Indeed, the usual theorems about Gibbs sampling say nothing about how long convergence takes, and for all we know the mixing time might be exponential in the number of variables, hidden states, dimensionality, etc. for some models, which is why we often see it performing poorly in practice.
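
To make this concrete, here is a minimal sketch (mine, not from the original post) of a chain that is valid in theory but stuck in practice: a two-mode mixture target whose Gibbs sampler satisfies detailed balance yet essentially never leaves the mode it starts in. The target and all parameter values are illustrative assumptions.

    import numpy as np

    # Target: equal mixture of two well-separated 2D Gaussians,
    # N([-m, -m], s^2 I) and N([+m, +m], s^2 I).  The Gibbs sampler
    # that alternates x | y and y | x leaves this target invariant,
    # yet in practice it almost never jumps between the modes.
    m, s = 4.0, 0.5
    rng = np.random.default_rng(0)

    def conditional_sample(other):
        # p(x | y) is itself a mixture of N(-m, s^2) and N(+m, s^2),
        # with weights proportional to the density of `other` under
        # each mode.  When `other` sits near one mode, the weight of
        # the opposite mode is ~exp(-2 m^2 / s^2), astronomically small.
        logw = np.array([-(other + m) ** 2, -(other - m) ** 2]) / (2 * s**2)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        mu = np.array([-m, m])[rng.choice(2, p=w)]
        return rng.normal(mu, s)

    x, y = -m, -m          # start in the lower-left mode
    mode_flips = 0
    prev_sign = np.sign(x)
    for _ in range(100_000):
        x = conditional_sample(y)
        y = conditional_sample(x)
        if np.sign(x) != prev_sign:
            mode_flips += 1
            prev_sign = np.sign(x)

    print("mode flips in 100k sweeps:", mode_flips)  # typically 0

With m = 4 and s = 0.5 the conditional weight of the far mode is on the order of exp(-2 m^2 / s^2) = exp(-128), so although the chain converges to the mixture in the infinite-time limit, in any feasible run it samples only the mode it started in.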

answered Sep 26 '12 at 18:23

Alexandre Passos ♦

