Efficient particle filtering for stochastic Korteweg–de Vries equations

2017 ◽  
Vol 17 (02) ◽  
pp. 1750008
Author(s):  
Feng Bao ◽  
Yanzhao Cao ◽  
Xiaoying Han ◽  
Jinglai Li

We propose an efficient algorithm to perform nonlinear data assimilation for Korteweg–de Vries solitons. In particular, we develop a reduced particle filtering method to reduce the dimension of the problem. The method decomposes a solitonic pulse into a clean soliton and small radiative noise, and instead of inferring the complete pulse profile, we infer only the two soliton parameters with a particle filter. Numerical examples demonstrate that the proposed method provides rather accurate results while being much more computationally affordable than a standard particle filter.
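The reduced-dimension idea can be sketched with a toy bootstrap particle filter over just two parameters. Everything below (the peak-height observation operator, the noise levels, the random-walk proposal) is an illustrative assumption, not the authors' actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reduced state: soliton amplitude A and position x0,
# rather than the full discretised pulse profile.
def observe(state):
    # Toy observation operator: the peak height of A * sech^2(k (x - x0))
    # is A itself (sech(0) = 1), so observing the peak observes A.
    A, x0 = state
    return A

def particle_filter_step(particles, weights, obs, obs_std, proc_std):
    # Predict: random-walk proposal on the two soliton parameters.
    particles = particles + rng.normal(0.0, proc_std, particles.shape)
    # Update: reweight by a Gaussian observation likelihood.
    innov = obs - np.array([observe(p) for p in particles])
    weights = weights * np.exp(-0.5 * (innov / obs_std) ** 2)
    weights /= weights.sum()
    # Resample (multinomial) to fight weight degeneracy.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

N = 500
particles = np.column_stack([rng.uniform(0.5, 2.0, N), rng.uniform(-1, 1, N)])
weights = np.full(N, 1.0 / N)
for obs in [1.2, 1.2, 1.2]:  # toy observations of the soliton peak
    particles, weights = particle_filter_step(particles, weights, obs, 0.1, 0.02)
print(particles[:, 0].mean())  # posterior mean amplitude, close to 1.2
```

Filtering 2 parameters instead of a full discretised profile is what makes the ensemble size affordable.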

2018 ◽  
Vol 25 (4) ◽  
pp. 765-807 ◽  
Author(s):  
Alban Farchi ◽  
Marc Bocquet

Abstract. Particle filtering is a generic weighted ensemble data assimilation method based on sequential importance sampling, suited for nonlinear and non-Gaussian filtering problems. Unless the number of ensemble members scales exponentially with the problem size, particle filter (PF) algorithms experience weight degeneracy. This phenomenon is a manifestation of the curse of dimensionality that prevents the use of PF methods for high-dimensional data assimilation. The use of local analyses to counteract the curse of dimensionality was suggested early in the development of PF algorithms. However, implementing localisation in the PF is a challenge, because there is no simple and yet consistent way of gluing together locally updated particles across domains. In this article, we review the ideas related to localisation and the PF in the geosciences. We introduce a generic and theoretical classification of local particle filter (LPF) algorithms, with an emphasis on the advantages and drawbacks of each category. Alongside the classification, we suggest practical solutions to the difficulties of local particle filtering, which lead to new implementations and improvements in the design of LPF algorithms. The LPF algorithms are systematically tested and compared using twin experiments with the one-dimensional Lorenz 40-variables model and with a two-dimensional barotropic vorticity model. The results illustrate the advantages of using the optimal transport theory to design the local analysis. With reasonable ensemble sizes, the best LPF algorithms yield data assimilation scores comparable to those of typical ensemble Kalman filter algorithms, even for a mildly nonlinear system.
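A minimal sketch of what a local analysis means in this setting, assuming a one-dimensional grid, an identity observation operator, and a hard-cutoff localisation: weights are computed per grid point from nearby observations only. (The difficulty the article addresses, gluing the locally resampled particles into globally coherent members, is deliberately left unsolved here.)

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy local analysis: each state variable gets its own weight vector,
# computed only from observations within a localisation radius.
def local_weights(particles, obs, obs_std, radius):
    n_part, n_grid = particles.shape
    w = np.empty((n_grid, n_part))
    for j in range(n_grid):
        lo, hi = max(0, j - radius), min(n_grid, j + radius + 1)
        innov = obs[lo:hi] - particles[:, lo:hi]  # local innovations
        logw = -0.5 * np.sum((innov / obs_std) ** 2, axis=1)
        w[j] = np.exp(logw - logw.max())          # stabilised exponentiation
        w[j] /= w[j].sum()
    return w  # one normalised weight vector per grid point

particles = rng.normal(size=(30, 8))  # 30 members, 8 grid points
obs = rng.normal(size=8)
w = local_weights(particles, obs, obs_std=1.0, radius=1)
print(w.shape)  # (8, 30): an independent analysis per grid point
```

Because the localised weights avoid the global product of likelihood factors, they do not degenerate the way global weights do; the price is that resampling each grid point independently breaks spatial coherence, which is what the reviewed LPF algorithms must repair.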


2013 ◽  
Vol 9 (1) ◽  
pp. 43-74
Author(s):  
S. Dubinkina ◽  
H. Goosse

Abstract. In an idealized framework, we assess reconstructions of the climate state of the Southern Hemisphere during the past 150 yr using the climate model of intermediate complexity LOVECLIM and three data-assimilation methods: a nudging, a particle filter with sequential importance resampling, and an extremely efficient particle filter. The methods constrain the model by pseudo-observations of surface air temperature anomalies obtained from a twin experiment using the same model but different initial conditions. The network of pseudo-observations is chosen to be either dense (pseudo-observations given at every grid cell of the model) or sparse (pseudo-observations given at the same locations as the HADCRUT3 dataset of instrumental surface temperature records). All three data-assimilation methods provide good estimates of surface air temperature and of sea ice concentration, with the extremely efficient particle filter having the best performance. When reconstructing variables that are not directly linked to the pseudo-observations of surface air temperature, such as atmospheric circulation and sea surface salinity, the performance of the particle filters is weaker but still satisfactory for many applications. Sea surface salinity reconstructed by the nudging, however, exhibits patterns opposite to the pseudo-observations, owing to a spurious impact of the nudging on the ocean mixing.


2011 ◽  
Vol 15 (10) ◽  
pp. 3237-3251 ◽  
Author(s):  
S. J. Noh ◽  
Y. Tachikawa ◽  
M. Shiiba ◽  
S. Kim

Abstract. Data assimilation techniques have received growing attention due to their capability to improve prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", constitute a Bayesian learning process capable of handling non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach that accounts for the different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate the model response until the uncertainty of each hydrologic process has propagated. Regularization with an additional move step based on Markov chain Monte Carlo (MCMC) methods is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, water and energy transfer processes (WEP), is implemented for sequential data assimilation through the updating of state variables. The lagged regularized particle filter (LRPF) and the sequential importance resampling (SIR) particle filter are applied to hindcasting of streamflow at the Katsura catchment, Japan. The control state variables for filtering are soil moisture content and overland flow, and streamflow measurements are used for data assimilation. LRPF shows consistent forecasts regardless of the process noise assumption, whereas SIR has different values of optimal process noise and shows sensitive variation of its confidence intervals depending on the process noise. The improvement of LRPF forecasts over SIR is particularly evident for rapidly varying high flows, owing to the preservation of sample diversity by the kernel even when particle impoverishment takes place.
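The degeneracy check, resampling, and regularization steps that SIR-type filters combine can be sketched as follows; the effective-sample-size threshold and the kernel bandwidth are illustrative choices, not the values used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def effective_sample_size(weights):
    # N_eff = 1 / sum(w_i^2); small values signal weight degeneracy.
    return 1.0 / np.sum(weights ** 2)

def systematic_resample(weights):
    # Systematic resampling: lower variance than multinomial resampling.
    n = len(weights)
    positions = (rng.uniform() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(weights), positions)

def regularize(particles, bandwidth):
    # Kernel jitter after resampling to preserve sample diversity,
    # in the spirit of the regularized particle filter.
    return particles + rng.normal(0.0, bandwidth, particles.shape)

weights = np.array([0.7, 0.1, 0.1, 0.05, 0.05])
particles = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
if effective_sample_size(weights) < 0.5 * len(weights):  # here N_eff ~ 1.9
    idx = systematic_resample(weights)
    particles = regularize(particles[idx], bandwidth=0.01)
print(len(particles))  # ensemble size is preserved: 5
```

The jitter step is what keeps duplicated particles from collapsing onto identical values after resampling, which is the sample-diversity property the LRPF relies on for the high-flow periods.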


2020 ◽  
Vol 8 (3) ◽  
pp. 1215-1235
Author(s):  
Linjie Wen ◽  
Jiangqi Wu ◽  
Linjun Lu ◽  
Jinglai Li

2013 ◽  
Vol 9 (3) ◽  
pp. 1141-1152 ◽  
Author(s):  
S. Dubinkina ◽  
H. Goosse

Abstract. Using the climate model of intermediate complexity LOVECLIM in an idealised framework, we assess three data-assimilation methods for reconstructing the climate state: a nudging, a particle filter with sequential importance resampling, and a nudging proposal particle filter. The test case corresponds to the climate of the high latitudes of the Southern Hemisphere during the past 150 yr. The data-assimilation methods constrain the model by pseudo-observations of surface air temperature anomalies obtained from the same model but with different initial conditions. All three data-assimilation methods provide good estimates of surface air temperature and of sea ice concentration, with the nudging proposal particle filter obtaining the highest correlations with the pseudo-observations. When reconstructing variables that are not directly linked to the pseudo-observations, such as atmospheric circulation and sea surface salinity, the particle filters have equivalent performance and their correlations are smaller than for surface air temperature reconstructions, but still satisfactory for many applications. The nudging, on the contrary, obtains sea surface salinity patterns that are opposite to the pseudo-observations, which is due to a spurious impact of the nudging on vertical exchanges in the ocean.


2016 ◽  
Vol 144 (3) ◽  
pp. 861-875 ◽  
Author(s):  
Laura Slivinski ◽  
Chris Snyder

Abstract. Particle filtering methods for data assimilation may suffer from the "curse of dimensionality," where the required ensemble size grows rapidly as the dimension increases. It would, therefore, be useful to know a priori whether a particle filter is feasible to implement in a given system. Previous work provides an asymptotic relation between the necessary ensemble size and an exponential function of a statistic that depends on observation-space quantities and that is related to the system dimension when the number of observations is large; for linear, Gaussian systems, the statistic can be computed from the eigenvalues of an appropriately normalized covariance matrix. Tests with a low-dimensional system show that these asymptotic results remain useful when the system is nonlinear, with either the standard or the optimal proposal implementation of the particle filter. This study explores approximations to the covariance matrices that facilitate computation in high-dimensional systems, as well as different methods to estimate the accumulated system noise covariance for the optimal proposal. Since the statistic may be approximated using an ensemble from a simpler data assimilation scheme, such as the ensemble Kalman filter, the asymptotic relations allow an estimate of the ensemble size required for a particle filter before its implementation. Finally, the improved performance of particle filters with the optimal proposal, relative to those using the standard proposal, is demonstrated in the same low-dimensional system.
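As a rough illustration of the idea, assuming the statistic behaves like the variance of the ensemble log-likelihood and taking the exponential scaling at face value (a simplified reading for illustration, not the paper's exact formula), one can compare a low- and a higher-dimensional case:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative sketch: estimate the spread of the log-likelihood over a
# prior ensemble, then read off a required ensemble size from an
# exponential of that statistic. The identity observation operator and
# the exp(tau2 / 2) scaling are assumptions made for this toy example.
def log_likelihood(member, obs, obs_std):
    innov = obs - member
    return -0.5 * np.sum((innov / obs_std) ** 2)

def required_ensemble_size(ensemble, obs, obs_std):
    logw = np.array([log_likelihood(m, obs, obs_std) for m in ensemble])
    tau2 = logw.var()          # the dimension-dependent statistic
    return np.exp(tau2 / 2.0)  # asymptotic exponential scaling

dim = 20
ensemble = rng.normal(size=(100, dim))  # stand-in for an EnKF ensemble
obs = rng.normal(size=dim)
n_small = required_ensemble_size(ensemble[:, :2], obs[:2], 1.0)
n_large = required_ensemble_size(ensemble, obs, 1.0)
print(n_small < n_large)  # the required size grows quickly with dimension
```

The practical point of the abstract is exactly this workflow: the statistic is cheap to estimate from an existing EnKF ensemble, so the feasibility of a particle filter can be judged before paying for one.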


2018 ◽  
Author(s):  
Alban Farchi ◽  
Marc Bocquet

Abstract. Particle filtering is a generic weighted ensemble data assimilation method based on sequential importance sampling, suited for nonlinear and non-Gaussian filtering problems. Unless the number of ensemble members scales exponentially with the problem size, particle filter (PF) algorithms suffer from weight degeneracy. This phenomenon is a consequence of the curse of dimensionality that prevents one from using PF methods for high-dimensional data assimilation. The use of local analyses to counteract the curse of dimensionality was suggested early on. However, implementing localisation in the PF is a challenge because there is no simple and yet consistent way of gluing locally updated particles together across domains. In this article, we review the ideas related to localisation and the PF in the geosciences. We introduce a generic and theoretical classification of local particle filter (LPF) algorithms, with an emphasis on the advantages and drawbacks of each category. Alongside the classification, we suggest practical solutions to the difficulties of local particle filtering, which lead to new implementations and improvements in the design of LPF algorithms. The LPF algorithms are systematically tested and compared using twin experiments with the one-dimensional Lorenz 40-variables model and with a two-dimensional barotropic vorticity model. The results illustrate the advantages of using optimal transport theory to design the local analysis. With reasonable ensemble sizes, the best LPF algorithms yield data assimilation scores comparable to those of typical ensemble Kalman filter algorithms.


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1090
Author(s):  
Wenxu Wang ◽  
Damián Marelli ◽  
Minyue Fu

A popular approach to the indoor dynamic localization problem based on WiFi measurements is particle filtering. However, a drawback of this approach is that a very large number of particles is needed to achieve accurate results in real environments, because in this particular application classical particle filtering wastes many particles unnecessarily. To remedy this, we propose a novel particle filtering method which we call the maximum likelihood particle filter (MLPF). The essential idea is to combine the particle prediction and update steps into a single one in which all particles are used efficiently. This drastically reduces the number of particles, leading to numerically feasible algorithms with high accuracy. We provide experimental results, using real data, that confirm our claim.
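The flavour of merging prediction and update can be shown in a scalar linear-Gaussian toy, where the combined density is available in closed form and particles can be drawn from it directly instead of being predicted first and thinned by reweighting. This is a hedged sketch of the general idea, not the authors' MLPF:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy combined step: prediction density N(m, var_pred) and likelihood
# N(y, r) merge in closed form (scalar, linear-Gaussian assumptions),
# so every drawn particle already reflects the new observation.
def combined_step(particles, y, q, r):
    m = particles.mean()             # predicted mean (toy dynamics: identity)
    var_pred = particles.var() + q   # predicted variance plus process noise
    k = var_pred / (var_pred + r)    # Kalman-like gain
    post_mean = m + k * (y - m)
    post_var = (1.0 - k) * var_pred
    return rng.normal(post_mean, np.sqrt(post_var), len(particles))

particles = rng.normal(0.0, 1.0, 200)
for y in [2.0, 2.0, 2.0]:  # repeated toy observations near 2.0
    particles = combined_step(particles, y, q=0.1, r=0.5)
print(particles.mean())  # ensemble mean pulled toward the observations
```

Because no particle ends up in a negligible-weight region, a small ensemble suffices here, which is the efficiency argument the abstract makes for real WiFi environments.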

