I have implemented Griddy Gibbs sampling in R, since I need its ability to handle non-conjugate full conditionals. Because I have a large number of observations, the full conditionals underflow in R, so I use a log transformation. The problem is that with Griddy Gibbs I need to integrate the function at hand from the lower bound to every point on the grid, then use a draw from the Uniform(0,1) distribution to simulate a draw from this probability density. Transferring the likelihood to the log scale avoids the underflow problem, but then the whole sampling-via-Uniform(0,1) mechanism becomes unavailable to me. Or does it? Can I still do it on the log scale? I could not get my head around it.

Regards,
Seref

PS: This is only half of the problem, of course, because if I can do this, then I'll have to find a way of transferring key parameters (mean, variance) of the log-posterior back to the normal scale.
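For concreteness, a minimal sketch of the mechanism I mean, on the natural scale; `f` and the grid are placeholders, not my actual full conditional, and the `cumsum()` over raw densities here is exactly what underflows for me:

```r
## Minimal sketch of the grid-based inverse-CDF draw described above.
## 'f' is a placeholder unnormalized density; the grid is arbitrary.
griddy_draw <- function(f, grid) {
  w   <- f(grid)               # unnormalized density at each grid point
  cdf <- cumsum(w) / sum(w)    # CDF approximated on the grid
  u   <- runif(1)              # Uniform(0,1) draw
  grid[which(cdf >= u)[1]]     # smallest grid point whose CDF reaches u
}

## Sanity check against a standard normal on [-5, 5]:
set.seed(1)
draws <- replicate(1e4, griddy_draw(dnorm, seq(-5, 5, by = 0.01)))
c(mean(draws), var(draws))     # close to 0 and 1
```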
If x is distributed according to Uniform(0,1), then -log x is distributed according to Exponential(1). Also, the parameters of the distributions stay the same when you work in log space; all that happens is that multiplications become additions and the exp()s go away. So the log of the unnormalized density of a Gaussian is -0.5 log|Sigma| - 0.5 (x - mu)' Sigma^{-1} (x - mu).
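Both claims are easy to check numerically; a quick sketch in R, not tied to your model:

```r
## Check the first claim: if U ~ Uniform(0,1), then -log(U) ~ Exponential(1).
set.seed(1)
u <- runif(1e5)
ks.test(-log(u), "pexp", rate = 1)   # large p-value: consistent with Exp(1)

## The Gaussian log-density from above, products turned into sums
## (univariate case, so |Sigma| is just sigma2):
log_dnorm_unnorm <- function(x, mu, sigma2) {
  -0.5 * log(sigma2) - 0.5 * (x - mu)^2 / sigma2
}
```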
Correct me if I'm wrong here, but Uniform(0,1) is used in this mechanism just to get a value that falls somewhere in the CDF of a probability density. With the transformation to log space, the CDF that I'd obtain via integration has no upper bound, so the CDF is no longer between 0 and 1. This is what breaks the mechanism I've described above, is it not?
(Oct 24 '11 at 07:33)
sarikan
The upper bound then becomes 0. Note that all logarithms of numbers smaller than one are negative and that all exponentially-distributed random variables are positive. I'm not sure I understand your sampling method, however.
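If the idea is to invert a CDF tabulated on a grid, the same comparison works without ever leaving log space, since log(u) <= 0 and the log-CDF ends at 0. A rough sketch, with `logf` standing in for whatever log-density is on the grid:

```r
## Rough sketch of the grid draw done entirely in log space; 'logf' is a
## placeholder for the log of an unnormalized full conditional.
griddy_draw_log <- function(logf, grid) {
  lw   <- logf(grid)                          # log unnormalized density
  m    <- max(lw)                             # shift to avoid underflow
  lz   <- m + log(sum(exp(lw - m)))           # log normalizing constant
  lcdf <- m + log(cumsum(exp(lw - m))) - lz   # log CDF; last entry is 0
  lu   <- log(runif(1))                       # log of a Uniform(0,1) draw, <= 0
  grid[which(lcdf >= lu)[1]]                  # invert the CDF on the log scale
}
```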
(Oct 24 '11 at 07:41)
Alexandre Passos ♦
Ah, silly me. You're right, of course. Still, I need some work to get my head around it. My method is described here: http://www.jstor.org/stable/2290225
(Oct 24 '11 at 09:43)
sarikan
The funny thing is, you've actually given the answer to my question in a different thread: http://metaoptimize.com/qa/questions/2920/fair-sampling-from-log-probabilities
I guess I was not clear enough in my question. For anyone else who'd like to do the same, these URLs should help with the details:
https://facwiki.cs.byu.edu/nlp/index.php/Log_Domain_Computations
http://mikelove.wordpress.com/2011/06/06/log-probabilities-trick/
http://blog.smola.org/post/987977550/log-probabilities-semirings-and-floating-point-numbers
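The core of those pages is the trick of adding probabilities without leaving log space; a minimal sketch in R:

```r
## Add two probabilities given only their logs, without exponentiating
## the (possibly underflowing) raw values.
log_add <- function(la, lb) {
  m <- max(la, lb)
  m + log(exp(la - m) + exp(lb - m))
}

log_add(log(0.1), log(0.2))   # equals log(0.3)
log_add(-1000, -1001)         # fine, even though exp(-1000) underflows to 0
```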
(Oct 28 '11 at 06:49)
sarikan