In this article we consider sequential inference for partially observed deterministic systems. Examples include inference on the expected position of a dynamical system with a random initial position, and Bayesian static-parameter inference for unobserved partial differential equations (PDEs), both associated with sequentially observed real data. Such statistical models arise in a wide variety of real applications, including weather prediction. In many practical scenarios one must discretize the system, but even under such a discretization it is not possible to compute the expected value (integral) required for inference. These quantities are then approximated by Monte Carlo methods, and the cost of achieving a given level of error in this context can be substantially reduced by multilevel Monte Carlo (MLMC). MLMC relies upon exact sampling from the model of interest, which is not always possible. We devise a sequential Monte Carlo (SMC) method, which does not require exact sampling, to leverage the MLMC approach. We prove that, for some models with n data points, achieving a mean square error (MSE) of O(ɛ²) (for some 0 < ɛ < 1) costs O(n²ɛ⁻²) with our MLSMC method, versus O(n²ɛ⁻³) for an SMC method that approximates only the most precise discretization. This is illustrated on two numerical examples.
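The cost reduction claimed above comes from the MLMC telescoping identity: the expectation at the finest discretization is written as a coarse-level expectation plus a sum of level-by-level corrections, with the fine and coarse solves at each correction coupled through shared randomness so the correction variance decays with level. A minimal sketch of this idea, using a hypothetical toy model (Euler discretization of dx/dt = -x with a random initial position, not a model from the paper) is:

```python
import random

def euler(x0, level, T=1.0):
    """Euler discretization of dx/dt = -x with 2**(level+1) steps on [0, T].

    This toy deterministic system with random initial position x0 stands in
    for the (much harder) models treated in the paper.
    """
    n = 2 ** (level + 1)
    h = T / n
    x = x0
    for _ in range(n):
        x += h * (-x)
    return x

def mlmc_estimate(max_level, samples_per_level, seed=0):
    """Telescoping-sum MLMC estimator of E[x_T] with x0 ~ Uniform(0, 1).

    Level 0 estimates the coarse expectation; each level l > 0 estimates
    the correction E[x^l - x^(l-1)], using the SAME initial condition for
    the fine and coarse solves -- the coupling that makes the correction
    variance decay, so fewer samples are needed at expensive fine levels.
    """
    rng = random.Random(seed)
    total = 0.0
    for l in range(max_level + 1):
        n = samples_per_level[l]
        s = 0.0
        for _ in range(n):
            x0 = rng.random()  # shared randomness across the level pair
            fine = euler(x0, l)
            coarse = euler(x0, l - 1) if l > 0 else 0.0
            s += fine - coarse
        total += s / n
    return total

# Decreasing sample counts at finer (costlier) levels; the true value is
# E[x0] * exp(-1) = 0.5 / e, approximately 0.184.
est = mlmc_estimate(4, [1000, 500, 250, 125, 60])
```

The key point mirrored from MLMC is the sample allocation: most samples are drawn at the cheap coarse level, while the fine levels only estimate small corrections. The paper's MLSMC method replaces the exact sampling step (here, `rng.random()`) with SMC when exact sampling from the model is not possible.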
Original language: English (US)
Journal: International Journal for Uncertainty Quantification
State: Published - Jan 1 2019