FIXED-LAG SEQUENTIAL MONTE CARLO (WedPmSS1)
Author(s):
Doucet Arnaud (Cambridge University, United Kingdom)
Senecal Stephane (The Institute of Statistical Mathematics, Japan)
Abstract: Sequential Monte Carlo methods, also known as particle methods, are an efficient class of simulation techniques for approximating sequences of complex probability distributions. These probability distributions are approximated by a large number of random samples, called particles, which are propagated over time using a combination of importance sampling and resampling steps. The efficiency of these algorithms depends heavily on the importance distribution used, and even when the optimal importance distribution is chosen, the algorithm can be inefficient. Indeed, current standard sampling strategies extend the paths of particles by one time step and weight them consistently, but they do not modify the past locations of the paths. Consequently, if the discrepancy between two successive probability distributions is large, this strategy can be highly inefficient. In this paper, we propose an extended importance sampling technique that allows us to modify the past of the paths and weight them consistently without having to perform any local Monte Carlo integration. This approach reduces the depletion of samples. An application to an optimal filtering problem for a toy nonlinear state space model illustrates this methodology.
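
To make the one-step-extension strategy criticized above concrete, the following is a minimal sketch of a standard bootstrap particle filter (not the paper's fixed-lag method) applied to a classic toy nonlinear state-space model. The model, noise variances, particle count, and helper names are illustrative assumptions, not taken from the paper; the sketch only shows how paths are extended one step, reweighted, and resampled, with past locations never revisited.

```python
# Illustrative sketch only: a standard bootstrap particle filter on a classic
# benchmark nonlinear state-space model. This is NOT the fixed-lag method of
# the paper; the model and parameters below are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

T, N = 50, 1000                          # time steps, number of particles
sigma_v, sigma_w = np.sqrt(10.0), 1.0    # state and observation noise std devs

def f(x, t):
    # Nonlinear state transition of the toy benchmark model
    return 0.5 * x + 25.0 * x / (1.0 + x**2) + 8.0 * np.cos(1.2 * t)

def h(x):
    # Nonlinear observation function
    return x**2 / 20.0

# Simulate a synthetic state trajectory and observations
x_true = np.zeros(T)
y = np.zeros(T)
x_true[0] = rng.normal(0.0, 1.0)
y[0] = h(x_true[0]) + rng.normal(0.0, sigma_w)
for t in range(1, T):
    x_true[t] = f(x_true[t - 1], t) + rng.normal(0.0, sigma_v)
    y[t] = h(x_true[t]) + rng.normal(0.0, sigma_w)

# Bootstrap particle filter: extend each path by one step, reweight, resample.
particles = rng.normal(0.0, 1.0, size=N)
estimates = np.zeros(T)
for t in range(T):
    if t > 0:
        # Propagate particles one step using the prior as importance distribution
        particles = f(particles, t) + rng.normal(0.0, sigma_v, size=N)
    # Weight consistently using the observation likelihood
    log_w = -0.5 * ((y[t] - h(particles)) / sigma_w) ** 2
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    estimates[t] = np.sum(w * particles)
    # Multinomial resampling; past locations of the paths are never modified,
    # which is the source of sample depletion the fixed-lag approach targets.
    particles = rng.choice(particles, size=N, p=w)

print("RMSE of filtered mean vs. true state:",
      np.sqrt(np.mean((estimates - x_true) ** 2)))
```

In this baseline, once a particle's past has been fixed by resampling it is never revisited; the extended importance sampling technique described in the abstract instead allows those past locations to be adjusted and reweighted consistently.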