A local particle filter for high-dimensional geophysical systems

2016 ◽  
Vol 23 (6) ◽  
pp. 391-405 ◽  
Author(s):  
Stephen G. Penny ◽  
Takemasa Miyoshi

Abstract. A local particle filter (LPF) is introduced that outperforms traditional ensemble Kalman filters in highly nonlinear/non-Gaussian scenarios, both in accuracy and computational cost. The standard sampling importance resampling (SIR) particle filter is augmented with an observation-space localization approach, for which an independent analysis is computed locally at each grid point. The deterministic resampling approach of Kitagawa is adapted for application locally and combined with interpolation of the analysis weights to smooth the transition between neighboring points. Gaussian noise is applied with magnitude equal to the local analysis spread to prevent particle degeneracy while maintaining the estimate of the growing dynamical instabilities. The approach is validated against the local ensemble transform Kalman filter (LETKF) using the 40-variable Lorenz-96 (L96) model. The results show that (1) the accuracy of LPF surpasses LETKF as the forecast length increases (thus increasing the degree of nonlinearity), (2) the cost of LPF is significantly lower than LETKF as the ensemble size increases, and (3) LPF prevents filter divergence experienced by LETKF in cases with non-Gaussian observation error distributions.
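The local analysis step described above can be sketched in a few lines (a hypothetical minimal illustration, assuming a scalar state per grid point and a single pre-localized observation; `local_analysis` and `systematic_resample` are illustrative names, not the authors' code):

```python
import numpy as np

def systematic_resample(weights, rng):
    """Kitagawa-style deterministic (systematic) resampling.
    Returns indices of the particles to keep."""
    n = len(weights)
    positions = (rng.uniform() + np.arange(n)) / n
    return np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)

def local_analysis(particles, obs, obs_err, rng):
    """SIR update at a single grid point.

    particles : (n,) prior ensemble values at this grid point
    obs       : local observation (assumed already selected by the
                observation-space localization radius)
    """
    # importance weights from the local observation likelihood
    logw = -0.5 * ((particles - obs) / obs_err) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()

    # deterministic resampling applied locally
    analysis = particles[systematic_resample(w, rng)]

    # Gaussian noise with magnitude equal to the local analysis
    # spread, to prevent particle degeneracy
    spread = analysis.std(ddof=1)
    return analysis + rng.normal(0.0, spread, size=analysis.shape)

rng = np.random.default_rng(0)
prior = rng.normal(1.0, 1.0, size=100)
post = local_analysis(prior, obs=0.0, obs_err=0.5, rng=rng)
```

In the full LPF the analysis weights would additionally be interpolated between neighboring grid points to smooth the transitions, which this single-point sketch omits.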


2019 ◽  
Vol 148 (1) ◽  
pp. 3-20 ◽  
Author(s):  
Takuya Kawabata ◽  
Genta Ueno

Abstract Non-Gaussian probability density functions (PDFs) in convection initiation (CI) and development were investigated using a particle filter with a storm-scale numerical prediction model and an adaptive observation error estimator (NHM-RPF). An observing system simulation experiment (OSSE) was conducted with a 90-min assimilation period and 1000 particles at a 2-km grid spacing. Pseudosurface observations of potential temperature (PT), winds, water vapor (QV), and pseudoradar observations of rainwater (QR) in the lower troposphere were created in a nature run that simulated a well-developed cumulonimbus. The results of the OSSE (PF) show a significant improvement in comparison to ensemble simulations without any observations. The Gaussianity of the PDFs for PF in the CI area was evaluated using the Bayesian information criterion to compare goodness-of-fit of Gaussian, two-Gaussian mixture, and histogram models. The PDFs are strongly non-Gaussian when NHM-RPF produces diverse particles over the CI period. The non-Gaussian PDF of the updraft is followed by the upper-bounded PDF of the relative humidity, which produces non-Gaussian PDFs of QV and PT. The PDFs of the cloud water and QR are strongly non-Gaussian throughout the experimental period. We conclude that the non-Gaussianity of the CI originated from the non-Gaussianity of the updraft. In addition, we show that the adaptive observation error estimator significantly contributes to the stability of PF and the robustness to many observations.
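The BIC-based Gaussianity check can be illustrated as follows (a hedged sketch comparing only the single-Gaussian and histogram models, omitting the two-Gaussian mixture for brevity; all names are ours, not the NHM-RPF code):

```python
import numpy as np

def bic_gaussian(x):
    """BIC of a single-Gaussian fit (2 parameters: mean, variance)."""
    n = len(x)
    var = x.var()
    loglik = -0.5 * n * (np.log(2 * np.pi * var) + 1.0)  # at the MLE
    return 2 * np.log(n) - 2 * loglik

def bic_histogram(x, bins=10):
    """BIC of a histogram density model (bins - 1 free parameters)."""
    n = len(x)
    counts, edges = np.histogram(x, bins=bins)
    widths = np.diff(edges)
    nz = counts > 0
    loglik = np.sum(counts[nz] * np.log(counts[nz] / (n * widths[nz])))
    return (bins - 1) * np.log(n) - 2 * loglik

rng = np.random.default_rng(1)
gauss = rng.normal(size=2000)
bimodal = np.concatenate([rng.normal(-2, 0.5, 1000), rng.normal(2, 0.5, 1000)])
# lower BIC = better fit: the Gaussian model wins on Gaussian data,
# the histogram model wins on the bimodal (non-Gaussian) sample
```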


2018 ◽  
Vol 25 (4) ◽  
pp. 765-807 ◽  
Author(s):  
Alban Farchi ◽  
Marc Bocquet

Abstract. Particle filtering is a generic weighted ensemble data assimilation method based on sequential importance sampling, suited for nonlinear and non-Gaussian filtering problems. Unless the number of ensemble members scales exponentially with the problem size, particle filter (PF) algorithms experience weight degeneracy. This phenomenon is a manifestation of the curse of dimensionality that prevents the use of PF methods for high-dimensional data assimilation. The use of local analyses to counteract the curse of dimensionality was suggested early in the development of PF algorithms. However, implementing localisation in the PF is a challenge, because there is no simple and yet consistent way of gluing together locally updated particles across domains. In this article, we review the ideas related to localisation and the PF in the geosciences. We introduce a generic and theoretical classification of local particle filter (LPF) algorithms, with an emphasis on the advantages and drawbacks of each category. Alongside the classification, we suggest practical solutions to the difficulties of local particle filtering, which lead to new implementations and improvements in the design of LPF algorithms. The LPF algorithms are systematically tested and compared using twin experiments with the one-dimensional Lorenz model with 40 variables and with a two-dimensional barotropic vorticity model. The results illustrate the advantages of using the optimal transport theory to design the local analysis. With reasonable ensemble sizes, the best LPF algorithms yield data assimilation scores comparable to those of typical ensemble Kalman filter algorithms, even for a mildly nonlinear system.
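In one dimension, the optimal transport map between a weighted prior ensemble and an equally weighted posterior ensemble is simply the monotone rearrangement, which can be sketched as follows (an illustrative stand-in for the OT-based local analyses discussed in the paper, not the authors' implementation):

```python
import numpy as np

def ot_resample_1d(particles, weights):
    """Map a weighted prior ensemble to an equally weighted posterior
    ensemble through the sorted (monotone) rearrangement, which is the
    optimal transport map in one dimension. Unlike stochastic
    resampling, the map is deterministic and produces no duplicates."""
    order = np.argsort(particles)
    x, w = particles[order], weights[order]
    cdf = np.cumsum(w)
    targets = (np.arange(len(x)) + 0.5) / len(x)
    # weighted quantile function evaluated at equally spaced levels
    return np.interp(targets, cdf, x)

prior = np.array([3.0, 0.0, 1.0, 2.0])
w = np.array([0.1, 0.7, 0.1, 0.1])      # heavy weight on particle 0.0
posterior = ot_resample_1d(prior, w)    # monotone, pulled toward 0.0
```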


2019 ◽  
Vol 147 (1) ◽  
pp. 345-362 ◽  
Author(s):  
Roland Potthast ◽  
Anne Walter ◽  
Andreas Rhodin

Particle filters are well known in statistics. They have a long tradition in the framework of ensemble data assimilation (EDA) as well as Markov chain Monte Carlo (MCMC) methods. A key challenge today is to employ such methods in a high-dimensional environment, since naïve application of the classical particle filter usually leads to filter divergence or filter collapse in the very high dimensions of many practical assimilation problems (known as the curse of dimensionality). The goal of this work is to develop a localized adaptive particle filter (LAPF), which follows closely the idea of the classical MCMC or bootstrap-type particle filter, but overcomes the problems of collapse and divergence by means of localization in the spirit of the local ensemble transform Kalman filter (LETKF) and of an adaptive Gaussian resampling or rejuvenation scheme in ensemble space. The particle filter has been implemented in the data assimilation system for the global forecast model ICON at Deutscher Wetterdienst (DWD). We carry out simulations over a period of 1 month with a global horizontal resolution of 52 km and 90 layers. With four variables analyzed per grid point, this leads to 6.6 × 10^6 degrees of freedom. The LAPF can be run stably and shows reasonable performance. We compare its scores to the operational setup of the ICON LETKF.
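The adaptivity idea, rejuvenation in ensemble space with a spread factor driven by weight degeneracy, can be sketched as follows (a hypothetical simplification; the function names and the linear ESS-based schedule are our assumptions, not the DWD implementation):

```python
import numpy as np

def rejuvenate(ensemble, rho, rng):
    """Gaussian rejuvenation in ensemble space: perturb each member by
    a random linear combination of the ensemble anomalies, scaled by
    the adaptive factor rho, so perturbations stay in ensemble space."""
    n_state, n_ens = ensemble.shape
    anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
    coeffs = rng.normal(0.0, rho / np.sqrt(n_ens - 1), size=(n_ens, n_ens))
    return ensemble + anomalies @ coeffs

def adaptive_rho(weights, rho_min=0.1, rho_max=1.0):
    """Choose the spread factor from the effective ensemble size:
    the more degenerate the weights, the stronger the rejuvenation."""
    n = len(weights)
    ess = 1.0 / np.sum(weights ** 2)
    return rho_min + (rho_max - rho_min) * (1.0 - ess / n)
```

With uniform weights the effective ensemble size equals the ensemble size, so the rejuvenation stays at its minimum; with fully collapsed weights it approaches its maximum.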


2015 ◽  
Vol 144 (1) ◽  
pp. 59-76 ◽  
Author(s):  
Jonathan Poterjoy

Abstract This paper presents a new data assimilation approach based on the particle filter (PF) that has potential for nonlinear/non-Gaussian applications in geoscience. Particle filters provide a Monte Carlo approximation of a system’s probability density, while making no assumptions regarding the underlying error distribution. The proposed method is similar to the PF in that particles—also referred to as ensemble members—are weighted based on the likelihood of observations in order to approximate posterior probabilities of the system state. The new approach, denoted the local PF, extends the particle weights into vector quantities to reduce the influence of distant observations on the weight calculations via a localization function. While the number of particles required for standard PFs scales exponentially with the dimension of the system, the local PF provides accurate results using relatively few particles. In sensitivity experiments performed with a 40-variable dynamical system, the local PF requires only five particles to prevent filter divergence for both dense and sparse observation networks. Comparisons of the local PF and ensemble Kalman filters (EnKFs) reveal advantages of the new method in situations resembling geophysical data assimilation applications. In particular, the new filter demonstrates substantial benefits over EnKFs when observation networks consist of densely spaced measurements that relate nonlinearly to the model state—analogous to remotely sensed data used frequently in weather analyses.
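The vector-weight idea can be sketched by tapering each observation's log-likelihood with a localization function before accumulating the weights, so every grid point gets its own weight per particle (a simplified illustration with a triangular taper standing in for e.g. Gaspari-Cohn; this is not Poterjoy's exact update):

```python
import numpy as np

def taper(dist, radius):
    """Simple compactly supported localization function
    (a stand-in for the Gaspari-Cohn polynomial)."""
    return np.maximum(0.0, 1.0 - dist / radius)

def local_weights(ensemble_obs, obs, obs_err, obs_loc, grid, radius):
    """Vector of weights per particle: one weight for every grid
    point, with distant observations damped by the taper.

    ensemble_obs : (n_particles, n_obs) model equivalents H(x_i)
    obs          : (n_obs,) observations, at positions obs_loc
    grid         : (n_grid,) grid-point positions
    """
    loglik = -0.5 * ((ensemble_obs - obs) / obs_err) ** 2         # (n_p, n_obs)
    rho = taper(np.abs(grid[:, None] - obs_loc[None, :]), radius)  # (n_g, n_obs)
    logw = loglik @ rho.T                                          # (n_p, n_g)
    logw -= logw.max(axis=0)
    w = np.exp(logw)
    return w / w.sum(axis=0)        # each column (grid point) sums to 1
```

At a grid point far outside the localization radius, every observation is fully damped and the weights stay uniform, so the weight calculation is unaffected by distant data.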


Author(s):  
Tung T. Vu ◽  
Ha Hoang Kha

In this research work, we investigate precoder designs to maximize the energy efficiency (EE) of secure multiple-input multiple-output (MIMO) systems in the presence of an eavesdropper. In general, the secure energy efficiency maximization (SEEM) problem is highly nonlinear and nonconvex and hard to solve directly. To overcome this difficulty, we employ a branch-and-reduce-and-bound (BRB) approach to obtain the globally optimal solution. Since the BRB algorithm suffers from a high computational cost, its globally optimal solution mainly serves as a benchmark for the performance evaluation of suboptimal algorithms. Additionally, we develop a low-complexity approach using the well-known zero-forcing (ZF) technique to cancel the wiretapped signal, making the design problem more amenable. Using the ZF-based method, we transform the SEEM problem into a concave-convex fractional one, which can be solved by combining the Dinkelbach algorithm with a bisection search. Simulation results show that the ZF-based method converges quickly and attains a suboptimal EE performance close to the optimal EE performance of the BRB method. The ZF-based scheme also shows its advantages in terms of energy efficiency in comparison with the conventional secrecy-rate-maximization precoder design.
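The Dinkelbach step mentioned above, solving a sequence of parametric subproblems max_x f(x) − λ·g(x) until the optimum reaches zero, can be sketched on a toy scalar ratio (the grid-search inner solver and the toy "rate over power" functions are ours, standing in for the convex subproblems in the paper):

```python
import numpy as np

def dinkelbach(f, g, xs, tol=1e-9, max_iter=100):
    """Dinkelbach's algorithm for max_x f(x)/g(x) with g > 0.
    Iterates lam <- f(x*)/g(x*) where x* solves max f(x) - lam*g(x);
    the inner maximization is brute force over the grid xs here."""
    lam = 0.0
    for _ in range(max_iter):
        vals = f(xs) - lam * g(xs)
        x = xs[np.argmax(vals)]
        if vals.max() < tol:        # F(lam) = 0 at the optimal ratio
            return x, lam
        lam = f(x) / g(x)
    return x, lam

# toy ratio: log(1 + x) / (1 + 0.5 * x), maximized near x ~ 2.59
xs = np.linspace(0.0, 20.0, 200001)
x_opt, ee = dinkelbach(lambda x: np.log1p(x), lambda x: 1.0 + 0.5 * x, xs)
```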


Author(s):  
Wei Zhang ◽  
Saad Ahmed ◽  
Jonathan Hong ◽  
Zoubeida Ounaies ◽  
Mary Frecker

Different types of active materials have been used to actuate origami-inspired self-folding structures. To model the highly nonlinear deformation and material responses, as well as the coupled field equations and boundary conditions of such structures, high-fidelity models such as finite element (FE) models are needed but are usually computationally expensive, which makes optimization intractable. In this paper, a computationally efficient two-stage optimization framework is developed as a systematic method for the multi-objective design of such multifield self-folding structures, where the deformations are concentrated in crease-like areas, active and passive materials are assumed to behave linearly, and low- and high-fidelity models of the structures can be developed. In Stage 1, low-fidelity models are used to determine the topology of the structure. At the end of Stage 1, a distance measure [Formula: see text] is applied as the metric to determine the best design, which then serves as the baseline design in Stage 2. In Stage 2, designs are further optimized from the baseline design with greatly reduced computing time compared to a full FEA-based topology optimization. The design framework is first described in a general formulation. To demonstrate its efficacy, the framework is implemented in two case studies, namely a three-finger soft gripper actuated using a PVDF-based terpolymer and a 3D multifield example actuated using both the terpolymer and a magneto-active elastomer (MAE), where the key steps are elaborated in detail, including the variable filter, the metrics to select the best design, the determination of design domains, and the material conversion methods from low- to high-fidelity models. Analytical models and rigid-body dynamic models are developed as the low-fidelity models for the terpolymer- and MAE-based actuations, respectively, and the FE model of the MAE-based actuation is generalized from previous work. Additional generalizable techniques to further reduce the computational cost are elaborated. As a result, designs with better overall performance than the baseline design were achieved at the end of Stage 2, with computing times of 15 days for the gripper and 9 days for the multifield example, compared to over 3 and 2 months, respectively, for full FEA-based optimizations. Tradeoffs between the competing design objectives were achieved. In both case studies, the efficacy and computational efficiency of the two-stage optimization framework are successfully demonstrated.


2013 ◽  
Vol 683 ◽  
pp. 824-827
Author(s):  
Tian Ding Chen ◽  
Chao Lu ◽  
Jian Hu

With the development of science and technology, target tracking has been applied in many areas of daily life, such as missile guidance, tank localization, plot monitoring systems, and robot field operations. Because of the complexity of real environments, the particle filter, which can handle nonlinear and non-Gaussian systems, is widely used. This paper uses resampling to reduce the particle degeneracy that appeared in our tests. Meanwhile, we compare the particle filter with the Kalman filter to assess their accuracy. The experimental results show that the particle filter is better suited to complex scenes, and is therefore more practical and feasible for target tracking.
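A bootstrap particle filter with resampling triggered by the effective sample size can be sketched on a standard nonlinear toy model (an illustration of the resampling technique, not the tracking system used in the paper; the model and parameters are the usual textbook choices, assumed here):

```python
import numpy as np

def bootstrap_pf(obs, n_particles=500, obs_err=1.0, seed=0):
    """Bootstrap particle filter for the standard 1-D nonlinear toy
    model x_t = 0.5 x + 25 x/(1+x^2) + noise, y_t = x^2/20 + noise,
    resampling whenever the effective sample size drops too low."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 2.0, n_particles)
    w = np.full(n_particles, 1.0 / n_particles)
    estimates = []
    for y in obs:
        # propagate through the nonlinear dynamics
        x = 0.5 * x + 25.0 * x / (1.0 + x**2) \
            + rng.normal(0.0, np.sqrt(10.0), n_particles)
        # reweight by the observation likelihood (log space avoids underflow)
        logw = np.log(w) - 0.5 * ((y - x**2 / 20.0) / obs_err) ** 2
        logw -= logw.max()
        w = np.exp(logw)
        w /= w.sum()
        estimates.append(np.sum(w * x))
        # resample when the effective sample size collapses
        if 1.0 / np.sum(w**2) < n_particles / 2:
            idx = rng.choice(n_particles, n_particles, p=w)
            x = x[idx]
            w = np.full(n_particles, 1.0 / n_particles)
    return np.array(estimates)

# synthetic observations generated from the same model
rng = np.random.default_rng(42)
xt, obs = 0.0, []
for _ in range(50):
    xt = 0.5 * xt + 25.0 * xt / (1.0 + xt**2) + rng.normal(0.0, np.sqrt(10.0))
    obs.append(xt**2 / 20.0 + rng.normal(0.0, 1.0))
estimates = bootstrap_pf(obs)
```

Without the resampling branch, the weights quickly concentrate on a handful of particles and the estimate degrades, which is the degeneracy the paper addresses.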


2021 ◽  
Author(s):  
Marie Turčičová ◽  
Jan Mandel ◽  
Kryštof Eben

A widely popular group of data assimilation methods in meteorological and geophysical sciences is formed by filters based on a Monte Carlo approximation of the traditional Kalman filter, e.g. the ensemble Kalman filter (EnKF), the ensemble square-root filter, and others. Due to the computational cost, the ensemble size is usually small compared to the dimension of the state vector. The traditional EnKF implicitly uses the sample covariance, which is a poor estimate of the background covariance matrix: singular and contaminated by spurious correlations.

We focus on modelling the background covariance matrix by means of a linear model for its inverse. This is particularly useful for Gauss-Markov random fields (GMRFs), where the inverse covariance matrix has a banded structure. The parameters of the model are estimated by the score matching method, which provides estimators in closed form that are cheap to compute. The resulting estimate is a key component of the proposed ensemble filtering algorithms. Under the assumption that the state vector is a GMRF in every time step, the score matching filter with Gaussian resampling (SMF-GR) gives in every time step a consistent (in the large-ensemble limit) estimator of the mean and covariance matrix of the forecast and analysis distributions. Further, we propose a filtering method called the score matching ensemble filter (SMEF), based on a regularization of the EnKF. This filter performs well even for non-Gaussian systems with nonlinear dynamics. The performance of both filters is illustrated on a simple linear convection model and on Lorenz-96.
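For a zero-mean Gaussian with precision modelled linearly as Q(theta) = sum_i theta_i * B_i, the score matching objective is quadratic in theta, so the estimator solves a small linear system in closed form. A minimal sketch (our own toy reconstruction of the closed-form idea on a tridiagonal GMRF, not the authors' code):

```python
import numpy as np

def score_matching_gmrf(samples, basis):
    """Closed-form score matching estimate of theta in
    Q(theta) = sum_i theta_i * B_i for a zero-mean Gaussian.
    Minimizing the Hyvarinen objective reduces to A theta = b with
    A_ij = mean_n[ x_n' sym(B_i B_j) x_n ] and b_i = tr(B_i)."""
    k = len(basis)
    A = np.zeros((k, k))
    b = np.array([np.trace(B) for B in basis])
    for i in range(k):
        for j in range(k):
            S = 0.5 * (basis[i] @ basis[j] + basis[j] @ basis[i])
            A[i, j] = np.mean(np.einsum('nd,de,ne->n', samples, S, samples))
    return np.linalg.solve(A, b)

# tridiagonal (banded) precision: Q = 2*I + 0.5*(off-diagonals)
d = 5
B1 = np.eye(d)
B2 = np.diag(np.ones(d - 1), 1) + np.diag(np.ones(d - 1), -1)
theta_true = np.array([2.0, 0.5])
Q = theta_true[0] * B1 + theta_true[1] * B2

rng = np.random.default_rng(2)
samples = rng.multivariate_normal(np.zeros(d), np.linalg.inv(Q), size=20000)
theta_hat = score_matching_gmrf(samples, [B1, B2])
```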

