Sequential Monte Carlo Methods For Nonlinear Discrete Time Filtering

Sequential Monte Carlo Methods for Nonlinear Discrete-Time Filtering

Author: Marcelo G. S. Bruno
language: en
Publisher: Morgan & Claypool Publishers
Release Date: 2013-01-01
In these notes, we introduce particle filtering as a recursive importance sampling method that approximates the minimum-mean-square-error (MMSE) estimate of a sequence of hidden state vectors in scenarios where the joint probability distribution of the states and the observations is non-Gaussian and, therefore, closed-form analytical expressions for the MMSE estimate are generally unavailable. We begin the notes with a review of Bayesian approaches to static (i.e., time-invariant) parameter estimation. In the sequel, we describe the solution to the problem of sequential state estimation in linear, Gaussian dynamic models, which corresponds to the well-known Kalman (or Kalman-Bucy) filter. Finally, we move to the general nonlinear, non-Gaussian stochastic filtering problem and present particle filtering as a sequential Monte Carlo approach to solve that problem in a statistically optimal way. We review several techniques to improve the performance of particle filters, including importance function optimization, particle resampling, Markov Chain Monte Carlo move steps, auxiliary particle filtering, and regularized particle filtering. We also discuss Rao-Blackwellized particle filtering as a technique that is particularly well-suited for many relevant applications such as fault detection and inertial navigation. Finally, we conclude the notes with a discussion on the emerging topic of distributed particle filtering using multiple processors located at remote nodes in a sensor network. Throughout the notes, we often assume a more general framework than in most introductory textbooks by allowing either the observation model or the hidden state dynamic model to include unknown parameters. In a fully Bayesian fashion, we treat those unknown parameters also as random variables. Using suitable dynamic conjugate priors, that approach can then be applied to perform joint state and parameter estimation.
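As a concrete illustration of the linear, Gaussian case mentioned in the blurb, the Kalman filter computes the MMSE estimate in closed form. The sketch below is not from the book; it is a minimal scalar example with illustrative parameters (a, c, q, r) for the model x_t = a·x_{t-1} + w_t, y_t = c·x_t + v_t:

```python
import numpy as np

def kalman_filter(ys, a, c, q, r, m0, p0):
    """Scalar Kalman filter for x_t = a*x_{t-1} + w_t, y_t = c*x_t + v_t,
    with w_t ~ N(0, q) and v_t ~ N(0, r). Returns filtered means/variances."""
    m, p = m0, p0
    means, variances = [], []
    for y in ys:
        # Predict: propagate the previous posterior through the state model.
        m_pred = a * m
        p_pred = a * a * p + q
        # Update: correct the prediction with the new observation.
        s = c * c * p_pred + r           # innovation variance
        k = p_pred * c / s               # Kalman gain
        m = m_pred + k * (y - c * m_pred)
        p = (1.0 - k * c) * p_pred
        means.append(m)
        variances.append(p)
    return np.array(means), np.array(variances)

# Simulate a short trajectory from the model and filter it.
rng = np.random.default_rng(0)
a, c, q, r = 0.9, 1.0, 0.1, 0.5      # illustrative parameters
x, ys = 0.0, []
for _ in range(50):
    x = a * x + rng.normal(scale=np.sqrt(q))
    ys.append(c * x + rng.normal(scale=np.sqrt(r)))
means, variances = kalman_filter(ys, a, c, q, r, m0=0.0, p0=1.0)
```

Note how the posterior variance recursion does not depend on the data; for a time-invariant model it converges to the steady-state Riccati solution.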
Table of Contents: Introduction / Bayesian Estimation of Static Vectors / The Stochastic Filtering Problem / Sequential Monte Carlo Methods / Sampling/Importance Resampling (SIR) Filter / Importance Function Selection / Markov Chain Monte Carlo Move Step / Rao-Blackwellized Particle Filters / Auxiliary Particle Filter / Regularized Particle Filters / Cooperative Filtering with Multiple Observers / Application Examples / Summary
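To make the SIR filter named in the table of contents concrete, here is a minimal bootstrap-filter sketch (my own illustration, not code from the book): the state transition serves as the importance function, weights come from the Gaussian likelihood, and multinomial resampling is applied at every step. The model parameters are assumed for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
a, c, q, r = 0.9, 1.0, 0.1, 0.5      # illustrative model parameters

# Simulate a short trajectory from the scalar linear-Gaussian model.
x, xs, ys = 0.0, [], []
for _ in range(50):
    x = a * x + rng.normal(scale=np.sqrt(q))
    xs.append(x)
    ys.append(c * x + rng.normal(scale=np.sqrt(r)))

def sir_filter(ys, n=500):
    """Bootstrap SIR particle filter: propagate particles through the state
    dynamics, weight by the likelihood, estimate, then resample."""
    particles = rng.normal(0.0, 1.0, size=n)    # draw from the initial prior
    estimates = []
    for y in ys:
        # Importance sampling: the transition density is the proposal.
        particles = a * particles + rng.normal(scale=np.sqrt(q), size=n)
        # Weight each particle by the Gaussian observation likelihood.
        w = np.exp(-0.5 * (y - c * particles) ** 2 / r)
        w /= w.sum()
        # The weighted posterior mean approximates the MMSE estimate.
        estimates.append(np.sum(w * particles))
        # Multinomial resampling to combat weight degeneracy.
        particles = particles[rng.choice(n, size=n, p=w)]
    return np.array(estimates)

est = sir_filter(ys)
```

Because the model here is linear and Gaussian, the particle estimates should closely track the exact Kalman solution; the sketch's value is that the same loop applies unchanged to nonlinear, non-Gaussian dynamics.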
Sequential Monte Carlo Methods for Nonlinear Discrete-Time Filtering

Author: Marcelo G. S. Bruno
language: en
Publisher: Springer Nature
Release Date: 2022-06-01
Sequential Monte Carlo Methods in Practice

Author: Arnaud Doucet
language: en
Publisher: Springer Science & Business Media
Release Date: 2013-03-09
Monte Carlo methods are revolutionising the on-line analysis of data in fields as diverse as financial modelling, target tracking and computer vision. These methods, appearing under the names of bootstrap filters, condensation, optimal Monte Carlo filters, particle filters and survival of the fittest, have made it possible to solve numerically many complex, non-standard problems that were previously intractable. This book presents the first comprehensive treatment of these techniques, including convergence results and applications to tracking, guidance, automated target recognition, aircraft navigation, robot navigation, econometrics, financial modelling, neural networks, optimal control, optimal filtering, communications, reinforcement learning, signal enhancement, model averaging and selection, computer vision, semiconductor design, population biology, dynamic Bayesian networks, and time series analysis. This will be of great value to students, researchers and practitioners who have some basic knowledge of probability.
Arnaud Doucet received the Ph.D. degree from the University of Paris-XI Orsay in 1997. From 1998 to 2000, he conducted research at the Signal Processing Group of Cambridge University, UK. He is currently an assistant professor at the Department of Electrical Engineering of Melbourne University, Australia. His research interests include Bayesian statistics, dynamic models and Monte Carlo methods.
Nando de Freitas obtained a Ph.D. degree in information engineering from Cambridge University in 1999. He is presently a research associate with the artificial intelligence group of the University of California at Berkeley. His main research interests are in Bayesian statistics and the application of on-line and batch Monte Carlo methods to machine learning.