Hi everyone,

I have a question about the bias (or lack thereof) in particle filters, and its relation to the unbiasedness of the data-likelihood estimate. More concretely, it seems to me that particle filters will give biased estimates of Pr(x_t | y_{1:t}), but will not give biased estimates of Pr(y_t | y_{1:t-1}). I hope to understand where I am mistaken.

For simplicity, I'll assume that I'm using the typical Bootstrap particle filter. The general algorithm is

input: particles (w_{t}^{i}, x_{t}^{i}) for i = 1...N, sampled approximately according to Pr(x_t | y_{1:t})
output: particles (w_{t+1}^{i}, x_{t+1}^{i}) sampled approximately according to Pr(x_{t+1} | y_{1:t+1}), and an estimate of Pr(y_{t+1} | y_{1:t})
a) propagate particles forward by sampling x'^{i} ~ Pr(x_{t+1} | x_{t}^{i}) for i = 1...N
b) weight the samples w'^{i} = Pr(y_{t+1} | x'^{i})
c) output (1/N) sum_{i} w'^{i} as an estimate of Pr(y_{t+1} | y_{1:t})
d) resample particles x_{t+1}^{i} ~ Multinomial(w'^{1..N}; x'^{1..N}) and give them all weight w_{t+1}^{i} = 1/N
e) output the particles (w_{t+1}^{i}, x_{t+1}^{i})
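In code, one step of the algorithm above might look as follows. This is a minimal sketch assuming a toy one-dimensional model x_{t+1} = x_t + N(0, sigma_x^2), y_{t+1} = x_{t+1} + N(0, sigma_y^2); the model, the name `bootstrap_step`, and the parameter values are my own illustration, not part of the question:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_step(x, y_next, sigma_x=1.0, sigma_y=1.0):
    """One bootstrap-filter step for the toy model
    x_{t+1} = x_t + N(0, sigma_x^2),  y_{t+1} = x_{t+1} + N(0, sigma_y^2)."""
    n = len(x)
    # (a) propagate each particle through the transition model
    x_prop = x + rng.normal(0.0, sigma_x, size=n)
    # (b) weight by the observation likelihood Pr(y_{t+1} | x'^i)
    w = np.exp(-0.5 * ((y_next - x_prop) / sigma_y) ** 2) / (sigma_y * np.sqrt(2.0 * np.pi))
    # (c) the average weight estimates Pr(y_{t+1} | y_{1:t})
    lik = w.mean()
    # (d) multinomial resampling; all particles return to weight 1/N
    idx = rng.choice(n, size=n, p=w / w.sum())
    # (e) return the resampled particles and the likelihood estimate
    return x_prop[idx], lik

x_next, lik = bootstrap_step(rng.normal(size=1000), y_next=0.3)
```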

My concern lies in resampling -- it seems evident to me that sampling particles according to an approximation of Pr(x_{t+1} | y_{1:t+1}), instead of the true distribution, would introduce bias into the estimate. And if Pr(x_{t} | y_{1:t}) is estimated with bias, then I don't see how the estimate of Pr(y_{t} | y_{1:t-1}) could be unbiased either. If no resampling were performed, we would have unbiased (though extremely noisy) estimates. Am I missing something here?

asked Sep 20 '11 at 01:21

Daniel Duckwoth

edited Sep 20 '11 at 01:23


One Answer:

If no resampling were performed, we would have unbiased (though extremely noisy) estimates. Am I missing something here?

When we choose a resampling procedure, we require that it be unbiased: conditional on the weights, the expected number of copies of particle i must equal N times its normalized weight, so that weighted averages are preserved in expectation. This is the case for multinomial resampling. We also want to get rid of low-weight particles, though, as we'll see below.
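This condition is easy to check numerically for multinomial resampling. The sketch below (my own toy example, with arbitrary weights) averages the resampled counts over many trials and compares them to N times the normalized weights:

```python
import numpy as np

rng = np.random.default_rng(1)

# Unbiasedness of multinomial resampling: conditional on the weights, the
# expected number of copies of particle i equals N * w_i (weights normalized).
w = np.array([0.5, 0.3, 0.15, 0.05])         # normalized weights, N = 4
N = len(w)
trials = 200_000
counts = rng.multinomial(N, w, size=trials)  # draw N offspring per trial
print(counts.mean(axis=0))                   # approaches N * w = [2.0, 1.2, 0.6, 0.2]
```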

See the introduction of http://www.stat.columbia.edu/~liam/teaching/neurostat-spr12/papers/EM/resampling.pdf, equation (3), for the unbiasedness condition.

But why do we resample at all? Consider the "genealogy" of a particle, i.e. trace its ancestry back to the initial particles of the first stage. As the number of steps t increases, it becomes increasingly probable that a single initial particle is the ancestor of all the rest, and that particle isn't necessarily an 'important' one. The result is that one particle makes us overly confident in our estimate.
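This ancestral collapse is easy to see in simulation. The sketch below is a deliberately simplified toy (uniform weights, so resampling is a pure uniform draw with replacement); it tracks each surviving particle's original ancestor and shows how few distinct ancestors remain after many steps:

```python
import numpy as np

rng = np.random.default_rng(2)

# Path degeneracy: repeatedly resample and track each current particle's
# original ancestor. The number of distinct ancestors shrinks over time.
N = 100
ancestors = np.arange(N)           # each particle starts as its own ancestor
for t in range(200):
    idx = rng.choice(N, size=N)    # multinomial resampling (equal weights for simplicity)
    ancestors = ancestors[idx]
print(len(np.unique(ancestors)))   # far fewer than N; often a single ancestor remains
```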

If we resample, however, we reduce the chance of putting all our weight on a particle that had low weight in the first place, so particle genealogies are more likely to pass through high-probability paths. And the unbiasedness condition guarantees that resampling does not introduce bias into our estimate.
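The unbiasedness of the likelihood estimate, with resampling at every step, can also be checked numerically. Below is a minimal sketch on a toy one-dimensional linear-Gaussian model of my own choosing (function names and observation sequence are illustrative): the exact likelihood Pr(y_{1:T}) is computed with a Kalman filter, and the bootstrap filter's product of average weights, averaged over many independent runs, should recover it:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy model: x_0 ~ N(0,1); x_t = x_{t-1} + N(0,1); y_t = x_t + N(0,1)
ys = np.array([0.5, -0.2, 0.1, 0.9, -0.4])   # a fixed observation sequence

def kalman_loglik(ys):
    """Exact log Pr(y_{1:T}) via the Kalman prediction decomposition."""
    m, P = 0.0, 1.0                  # prior on x_0
    ll = 0.0
    for y in ys:
        m_pred, P_pred = m, P + 1.0  # predict through the transition
        S = P_pred + 1.0             # innovation variance
        ll += -0.5 * (np.log(2.0 * np.pi * S) + (y - m_pred) ** 2 / S)
        K = P_pred / S               # Kalman gain, then update
        m = m_pred + K * (y - m_pred)
        P = (1.0 - K) * P_pred
    return ll

def pf_lik(ys, N=200):
    """One bootstrap-filter run; returns its estimate of Pr(y_{1:T})."""
    x = rng.normal(size=N)                       # particles for x_0 ~ N(0,1)
    lik = 1.0
    for y in ys:
        x = x + rng.normal(size=N)               # (a) propagate
        w = np.exp(-0.5 * (y - x) ** 2) / np.sqrt(2.0 * np.pi)  # (b) weight
        lik *= w.mean()                          # (c) running likelihood estimate
        x = x[rng.choice(N, size=N, p=w / w.sum())]             # (d) resample
    return lik

est = np.mean([pf_lik(ys) for _ in range(2000)])
exact = np.exp(kalman_loglik(ys))
print(est, exact)   # the two values agree closely
```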

answered Jun 26 '13 at 20:30

Brendan Shillingford


User submitted content is under Creative Commons: Attribution - Share Alike; Other things copyright (C) 2010, MetaOptimize LLC.