Hi everyone,

I wanted to get an idea of what methods are known for performing inference when you have a large number of stochastic processes that are nearly, but not entirely, independent. For example, cars on a road pay a lot of attention to the behavior of adjacent vehicles but essentially none to cars on the next street over. In a sampling-based algorithm, it would make sense to run a separate, independent sampler for each car while they're far apart, but when they come close together these samplers should somehow interact.
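For concreteness, here is a minimal sketch of that idea, with every name, the dynamics, and the interaction rule invented purely for illustration: one particle set per car, propagated independently, plus a crude joint resampling step that fires only when two cars' estimates come within an assumed interaction radius.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200                # particles per car (assumed)
INTERACT_RADIUS = 5.0  # assumed distance below which samplers couple

def propagate(p):
    """Each car's own independent dynamics: a 1-D random walk here."""
    return p + rng.normal(0.0, 0.5, size=p.shape)

def couple(pa, pb):
    """Crude pairwise interaction: pair particle i of one car with particle i
    of the other, weight each pair by a repulsive 'keep your distance' factor,
    and resample both sets jointly with those weights."""
    d = np.abs(pa - pb)
    w = 1.0 - np.exp(-0.5 * (d / 2.0) ** 2) + 1e-12  # small when overlapping
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)
    return pa[idx], pb[idx]

# Three cars: two adjacent, one on "the next street over".
cars = [rng.normal(m, 1.0, N) for m in (0.0, 3.0, 100.0)]
for _ in range(10):
    cars = [propagate(p) for p in cars]
    for i in range(len(cars)):
        for j in range(i + 1, len(cars)):
            # Couple only when the point estimates are close; the distant
            # car's sampler never interacts with the other two.
            if abs(cars[i].mean() - cars[j].mean()) < INTERACT_RADIUS:
                cars[i], cars[j] = couple(cars[i], cars[j])
```

The index-wise pairing of particles across cars is of course a very rough joint approximation; it's only meant to show where the interaction would plug in.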

Is there any literature on methods for handling these sorts of scenarios? All I could find was Ng, Peshkin & Pfeffer's 2002 UAI paper ("Factored Particles for Scalable Monitoring"), but their approach can't directly handle continuous variable domains. Thanks!

asked Apr 06 '12 at 23:32


Daniel Duckwoth


One Answer:

Wouldn't it make sense to build this independence into the model itself, and then use some generic sampling routine? Your description suggests there should be pairwise factors only between cars that are nearby (you can "implement" this with deterministic gates if you want), plus a singleton factor for each car that codifies the individual stochastic process you describe.
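That structure might look like the following sketch: a Gaussian singleton factor per car, a pairwise factor gated to be constant beyond an assumed radius, and a generic single-site Metropolis sampler over the resulting unnormalized joint. The factors, parameters, and names here are all illustrative, not from any particular model.

```python
import numpy as np

rng = np.random.default_rng(0)
RADIUS = 5.0  # assumed gate: pairwise factor is active only within this distance

def log_singleton(x, mean):
    # Each car's own stochastic process, here just a Gaussian pull toward `mean`.
    return -0.5 * (x - mean) ** 2

def log_pairwise(xi, xj):
    # Deterministic gate: the factor is identically 1 (log 0) when the cars
    # are far apart, and repulsive when they are close.
    d = abs(xi - xj)
    if d > RADIUS:
        return 0.0
    return np.log(1.0 - np.exp(-0.5 * (d / 2.0) ** 2) + 1e-12)

means = [0.0, 3.0, 100.0]  # two nearby cars, one far away

def log_joint(xs):
    lp = sum(log_singleton(x, m) for x, m in zip(xs, means))
    lp += sum(log_pairwise(xs[i], xs[j])
              for i in range(len(xs)) for j in range(i + 1, len(xs)))
    return lp

# Generic single-site Metropolis: exploits nothing but the unnormalized density,
# yet the gated factors make distant cars effectively independent.
xs = list(means)
for _ in range(2000):
    i = rng.integers(len(xs))
    prop = xs.copy()
    prop[i] += rng.normal(0.0, 1.0)
    if np.log(rng.random()) < log_joint(prop) - log_joint(xs):
        xs = prop
```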

Then you can use something like Joey Gonzalez's splash sampler to parallelize it by following the edges in this graph.
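The splash sampler proper grows tree-shaped blocks of the graph for parallel blocked Gibbs; a simpler relative of the same edge-following idea is chromatic scheduling, where you color the interaction graph so that adjacent cars get different colors and then sample all cars of one color in parallel. A toy sketch (the graph and the greedy coloring routine are hypothetical):

```python
def greedy_color(adj):
    """Greedily color a graph given as {vertex: set_of_neighbors};
    vertices sharing no edge may receive the same color and can then
    be sampled simultaneously."""
    colors = {}
    for v in sorted(adj):
        used = {colors[u] for u in adj[v] if u in colors}
        c = 0
        while c in used:
            c += 1
        colors[v] = c
    return colors

# Cars 0-1 and 1-2 interact; car 3 is far from everyone and gets its
# own independent sampler "for free".
adj = {0: {1}, 1: {0, 2}, 2: {1}, 3: set()}
colors = greedy_color(adj)
```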

answered Apr 07 '12 at 09:05


Alexandre Passos ♦

You're absolutely right; the model provides the independence, but it's up to the algorithm to make use of it. I hadn't heard of the Splash Sampler before, and it looks like a good direction. I wonder if a particle-filter equivalent can be formulated. Thanks!

(Apr 07 '12 at 18:09) Daniel Duckwoth

powered by OSQA

User submitted content is under Creative Commons: Attribution - Share Alike; Other things copyright (C) 2010, MetaOptimize LLC.