A local particle filter for high dimensional geophysical systems


2016 ◽  
Vol 23 (6) ◽  
pp. 391-405 ◽  
Author(s):  
Stephen G. Penny ◽  
Takemasa Miyoshi

Abstract. A local particle filter (LPF) is introduced that outperforms traditional ensemble Kalman filters in highly nonlinear/non-Gaussian scenarios, both in accuracy and computational cost. The standard sampling importance resampling (SIR) particle filter is augmented with an observation-space localization approach, for which an independent analysis is computed locally at each grid point. The deterministic resampling approach of Kitagawa is adapted for local application and combined with interpolation of the analysis weights to smooth the transition between neighboring points. Gaussian noise is applied with magnitude equal to the local analysis spread to prevent particle degeneracy while maintaining the estimate of the growing dynamical instabilities. The approach is validated against the local ensemble transform Kalman filter (LETKF) using the 40-variable Lorenz-96 (L96) model. The results show that (1) the accuracy of the LPF surpasses that of the LETKF as the forecast length increases (thus increasing the degree of nonlinearity), (2) the cost of the LPF is significantly lower than that of the LETKF as the ensemble size increases, and (3) the LPF prevents the filter divergence experienced by the LETKF in cases with non-Gaussian observation error distributions.
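To make the update concrete, here is a minimal sketch of a local SIR analysis step of the kind described above, assuming a one-dimensional periodic state (as in Lorenz-96), direct observation of every grid point with Gaussian error, and a hard localization cutoff. All names are illustrative and the weight-interpolation step between neighboring points is omitted; this is not the authors' implementation.

```python
import numpy as np

def local_pf_update(particles, obs, obs_err_std, loc_radius, jitter=1.0, rng=None):
    """particles: (Np, Nx) ensemble; obs: (Nx,) direct observations of each grid point."""
    rng = np.random.default_rng() if rng is None else rng
    Np, Nx = particles.shape
    analysis = np.empty_like(particles)
    for i in range(Nx):
        # hard cutoff: only observations within loc_radius of point i are used
        d = np.abs(np.arange(Nx) - i)
        local = np.minimum(d, Nx - d) <= loc_radius
        # local log-weights of each particle given the nearby observations
        innov = obs[local][None, :] - particles[:, local]
        logw = -0.5 * np.sum((innov / obs_err_std) ** 2, axis=1)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        # deterministic (Kitagawa-style) resampling with evenly spaced positions
        positions = (np.arange(Np) + 0.5) / Np
        resampled = particles[np.searchsorted(np.cumsum(w), positions), i]
        # Gaussian jitter scaled to the local analysis spread to prevent degeneracy
        analysis[:, i] = resampled + jitter * resampled.std() * rng.standard_normal(Np)
    return analysis
```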


2015 ◽  
Vol 144 (1) ◽  
pp. 59-76 ◽  
Author(s):  
Jonathan Poterjoy

Abstract This paper presents a new data assimilation approach based on the particle filter (PF) that has potential for nonlinear/non-Gaussian applications in geoscience. Particle filters provide a Monte Carlo approximation of a system’s probability density, while making no assumptions regarding the underlying error distribution. The proposed method is similar to the PF in that particles—also referred to as ensemble members—are weighted based on the likelihood of observations in order to approximate posterior probabilities of the system state. The new approach, denoted the local PF, extends the particle weights into vector quantities to reduce the influence of distant observations on the weight calculations via a localization function. While the number of particles required for standard PFs scales exponentially with the dimension of the system, the local PF provides accurate results using relatively few particles. In sensitivity experiments performed with a 40-variable dynamical system, the local PF requires only five particles to prevent filter divergence for both dense and sparse observation networks. Comparisons of the local PF and ensemble Kalman filters (EnKFs) reveal advantages of the new method in situations resembling geophysical data assimilation applications. In particular, the new filter demonstrates substantial benefits over EnKFs when observation networks consist of densely spaced measurements that relate nonlinearly to the model state—analogous to remotely sensed data used frequently in weather analyses.
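The core of the vector-weight idea can be sketched as follows, under simplifying assumptions: a single scalar observation processed serially, Gaussian observation error, and a precomputed localization coefficient per state variable. The names are hypothetical, and the merging/adjustment step of the full algorithm is omitted.

```python
import numpy as np

def vector_weights(particles, y, obs_idx, obs_err_std, rho):
    """particles: (Np, Nx); y: scalar observation of state variable obs_idx;
    rho: (Nx,) localization coefficients in [0, 1] for this observation."""
    Np, Nx = particles.shape
    # scalar likelihood of each particle for this single observation
    lik = np.exp(-0.5 * ((y - particles[:, obs_idx]) / obs_err_std) ** 2)
    lik /= lik.sum()
    # vector weights: near the observation (rho -> 1) they approach the PF weights,
    # far away (rho -> 0) they relax to uniform so distant variables are untouched
    w = rho[None, :] * lik[:, None] + (1.0 - rho[None, :]) / Np
    return w / w.sum(axis=0, keepdims=True)
```

Resampling can then act per variable using the corresponding column of weights, which is exactly where the difficulty of stitching locally updated particles together, discussed in the localization literature, arises.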


2019 ◽  
Vol 148 (1) ◽  
pp. 3-20 ◽  
Author(s):  
Takuya Kawabata ◽  
Genta Ueno

Abstract Non-Gaussian probability density functions (PDFs) in convection initiation (CI) and development were investigated using a particle filter with a storm-scale numerical prediction model and an adaptive observation error estimator (NHM-RPF). An observing system simulation experiment (OSSE) was conducted with a 90-min assimilation period and 1000 particles at a 2-km grid spacing. Pseudo-surface observations of potential temperature (PT), winds, and water vapor (QV), and pseudo-radar observations of rainwater (QR) in the lower troposphere were created from a nature run that simulated a well-developed cumulonimbus. The results of the particle filter OSSE (PF) show a significant improvement over ensemble simulations without any observations. The Gaussianity of the PDFs for PF in the CI area was evaluated using the Bayesian information criterion to compare the goodness of fit of Gaussian, two-Gaussian mixture, and histogram models. The PDFs are strongly non-Gaussian when NHM-RPF produces diverse particles over the CI period. The non-Gaussian PDF of the updraft is followed by the upper-bounded PDF of the relative humidity, which in turn produces non-Gaussian PDFs of QV and PT. The PDFs of the cloud water and QR are strongly non-Gaussian throughout the experimental period. We conclude that the non-Gaussianity of the CI originated from the non-Gaussianity of the updraft. In addition, we show that the adaptive observation error estimator contributes significantly to the stability of the PF and to its robustness when assimilating many observations.
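The Gaussianity diagnostic can be illustrated with a short sketch: fit a single Gaussian, a two-component Gaussian mixture, and a histogram model to a one-dimensional sample of particle values and compare their BIC scores (BIC = k ln n - 2 ln L). The bin count and fitting settings below are assumptions, not the paper's exact configuration.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def bic_scores(sample, n_bins=20):
    x = np.asarray(sample, dtype=float)
    n = x.size
    # (1) single Gaussian: 2 parameters (mean, std)
    loglik_gauss = norm.logpdf(x, loc=x.mean(), scale=x.std()).sum()
    bic_gauss = 2 * np.log(n) - 2 * loglik_gauss
    # (2) two-Gaussian mixture: sklearn reports its BIC directly (5 parameters)
    gmm = GaussianMixture(n_components=2, n_init=3).fit(x[:, None])
    bic_gmm = gmm.bic(x[:, None])
    # (3) histogram model: piecewise-constant density, (n_bins - 1) free parameters
    counts, edges = np.histogram(x, bins=n_bins)
    widths = np.diff(edges)
    p = counts / n
    nz = counts > 0
    loglik_hist = np.sum(counts[nz] * np.log(p[nz] / widths[nz]))
    bic_hist = (n_bins - 1) * np.log(n) - 2 * loglik_hist
    return {"gaussian": bic_gauss, "mixture": bic_gmm, "histogram": bic_hist}
```

The model with the lowest BIC wins; a clear preference for the mixture or histogram model over the single Gaussian is the signature of non-Gaussianity.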


2018 ◽  
Vol 25 (4) ◽  
pp. 765-807 ◽  
Author(s):  
Alban Farchi ◽  
Marc Bocquet

Abstract. Particle filtering is a generic weighted ensemble data assimilation method based on sequential importance sampling, suited for nonlinear and non-Gaussian filtering problems. Unless the number of ensemble members scales exponentially with the problem size, particle filter (PF) algorithms experience weight degeneracy. This phenomenon is a manifestation of the curse of dimensionality that prevents the use of PF methods for high-dimensional data assimilation. The use of local analyses to counteract the curse of dimensionality was suggested early in the development of PF algorithms. However, implementing localisation in the PF is a challenge, because there is no simple and yet consistent way of gluing together locally updated particles across domains. In this article, we review the ideas related to localisation and the PF in the geosciences. We introduce a generic and theoretical classification of local particle filter (LPF) algorithms, with an emphasis on the advantages and drawbacks of each category. Alongside the classification, we suggest practical solutions to the difficulties of local particle filtering, which lead to new implementations and improvements in the design of LPF algorithms. The LPF algorithms are systematically tested and compared using twin experiments with the one-dimensional 40-variable Lorenz model and with a two-dimensional barotropic vorticity model. The results illustrate the advantages of using the optimal transport theory to design the local analysis. With reasonable ensemble sizes, the best LPF algorithms yield data assimilation scores comparable to those of typical ensemble Kalman filter algorithms, even for a mildly nonlinear system.
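As a rough illustration of the optimal-transport idea, the sketch below turns posterior weights at a single grid point into an equal-weight analysis ensemble while moving particles as little as possible, approximating the transport problem by an assignment between prior and resampled particles. The reviewed LPF algorithms solve richer formulations and couple neighbouring local domains; names here are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ot_local_analysis(x, w):
    """x: (Np,) prior particle values at one grid point; w: (Np,) posterior weights."""
    Np = x.size
    # deterministic resampling: copies of particles in proportion to their weights
    positions = (np.arange(Np) + 0.5) / Np
    resampled = x[np.searchsorted(np.cumsum(w), positions)]
    # assignment minimising total squared displacement from prior to analysis, so
    # each prior member is replaced by the nearest available resampled value
    cost = (x[:, None] - resampled[None, :]) ** 2
    rows, cols = linear_sum_assignment(cost)
    analysis = np.empty_like(x)
    analysis[rows] = resampled[cols]
    return analysis
```

Because each prior member moves to a nearby analysis value, neighbouring grid points that see similar weights produce similar rearrangements, which smooths the transition across local domains.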


2014 ◽  
Vol 142 (4) ◽  
pp. 1631-1654 ◽  
Author(s):  
Derek J. Posselt ◽  
Daniel Hodyss ◽  
Craig H. Bishop

Abstract If forecast or observation error distributions are non-Gaussian, the true posterior mean and covariance depend on the distribution of observation errors and the observed values. The posterior distribution of analysis errors obtained from ensemble Kalman filters and smoothers is independent of observed values. Hence, the error in ensemble Kalman smoother (EnKS) state estimates is closely linked to the sensitivity of the true posterior to observed values. Here a Markov chain Monte Carlo (MCMC) algorithm is used to document the dependence of the errors in EnKS-based estimates of cloud microphysical parameters on observed values. It is shown that EnKS analysis distributions are grossly inaccurate for nonnegative microphysical parameters when parameter values are close to zero. Furthermore, numerical analysis is presented that shows that, by design, the posterior distributions given by EnKS and even nonlinear extensions of these smoothers approximate the average of all possible posterior analysis distributions associated with all possible observations given the prior. Multiple runs of the MCMC are made to approximate this distribution. This empirically derived average of Bayesian posterior analysis errors is shown to be qualitatively similar to the EnKS posterior. In this way, it is demonstrated that, in the presence of nonlinearity, EnKS algorithms do not estimate the true posterior error distribution given the specific values of the observations. Instead, they produce an error distribution that is consistent with an average of the true posterior variance, weighted by the probability of obtaining each possible observation. This seemingly subtle distinction gives rise to fundamental differences between the approximate EnKS posterior and the true Bayesian posterior distribution.
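The paper's point can be reproduced in miniature with a random-walk Metropolis sampler on a made-up scalar problem: a nonnegative parameter observed through a nonlinear operator. The true posterior sampled by MCMC changes shape with the specific observed value (strongly so near zero), which a Gaussian EnKS-like approximation cannot capture. Everything here, including the prior, operator, and parameter values, is an assumption for illustration, not the authors' cloud-microphysics setup.

```python
import numpy as np

def log_posterior(theta, y_obs, obs_std, prior_mean=0.5, prior_std=0.5):
    if theta < 0.0:                       # nonnegative microphysical-style parameter
        return -np.inf
    log_prior = -0.5 * ((theta - prior_mean) / prior_std) ** 2
    log_like = -0.5 * ((y_obs - theta ** 2) / obs_std) ** 2   # nonlinear H(x) = x^2
    return log_prior + log_like

def metropolis(y_obs, obs_std, n_iter=50_000, step=0.1, seed=0):
    rng = np.random.default_rng(seed)
    theta, chain = 0.5, []
    lp = log_posterior(theta, y_obs, obs_std)
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal()
        lp_prop = log_posterior(prop, y_obs, obs_std)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
            theta, lp = prop, lp_prop
        chain.append(theta)
    return np.array(chain)

# the posterior shape depends strongly on the observed value near zero
near_zero = metropolis(y_obs=0.01, obs_std=0.05)
far_field = metropolis(y_obs=0.50, obs_std=0.05)
print(near_zero.mean(), near_zero.std(), far_field.mean(), far_field.std())
```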


2010 ◽  
Vol 138 (1) ◽  
pp. 282-290 ◽  
Author(s):  
William F. Campbell ◽  
Craig H. Bishop ◽  
Daniel Hodyss

Abstract A widely used observation space covariance localization method is shown to adversely affect satellite radiance assimilation in ensemble Kalman filters (EnKFs) when compared to model space covariance localization. The two principal problems are that distance and location are not well defined for integrated measurements, and that neighboring satellite channels typically have broad, overlapping weighting functions, which produce true, nonzero correlations that localization in radiance space can incorrectly eliminate. The limitations of the method are illustrated in a 1D conceptual model, consisting of three vertical levels and a two-channel satellite instrument. A more realistic 1D model is subsequently tested, using the 30 vertical levels from the Navy Operational Global Atmospheric Prediction System (NOGAPS), the Advanced Microwave Sounding Unit A (AMSU-A) weighting functions for channels 6–11, and the observation error variance and forecast error covariance from the NRL Atmospheric Variational Data Assimilation System (NAVDAS). Analyses from EnKFs using radiance space localization are compared with analyses from raw EnKFs, EnKFs using model space localization, and the optimal analyses using the NAVDAS forecast error covariance as a proxy for the true forecast error covariance. As measured by mean analysis error variance reduction, radiance space localization is inferior to model space localization for every ensemble size and meaningful observation error variance tested. Furthermore, given as many satellite channels as vertical levels, radiance space localization cannot recover the true temperature state with perfect observations, whereas model space localization can.
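The two localization choices can be contrasted in a compact sketch with assumed shapes: X holds ensemble perturbations (n_levels x n_members), H holds broad satellite weighting functions (n_channels x n_levels), and R is the channel observation-error covariance. The Gaussian tapers in level and channel index are placeholders; the paper uses NOGAPS levels, AMSU-A weighting functions, and NAVDAS error statistics.

```python
import numpy as np

def kalman_gains(X, H, R, level_scale=3.0, chan_scale=1.0):
    n_lev, n_mem = X.shape
    B = X @ X.T / (n_mem - 1)                        # raw ensemble covariance
    lev = np.arange(n_lev)
    rho_model = np.exp(-0.5 * ((lev[:, None] - lev[None, :]) / level_scale) ** 2)
    # model-space localization: taper B in model space, then map through H exactly
    B_loc = rho_model * B
    K_model = B_loc @ H.T @ np.linalg.inv(H @ B_loc @ H.T + R)
    # radiance-space localization: taper H B H^T by a "channel distance", which is
    # ill-defined for integrated measurements and can delete true correlations
    chan = np.arange(H.shape[0])
    rho_obs = np.exp(-0.5 * ((chan[:, None] - chan[None, :]) / chan_scale) ** 2)
    K_obs = (B @ H.T) @ np.linalg.inv(rho_obs * (H @ B @ H.T) + R)
    return K_model, K_obs
```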



Author(s):  
Tung T. Vu ◽  
Ha Hoang Kha

In this research work, we investigate precoder designs that maximize the energy efficiency (EE) of secure multiple-input multiple-output (MIMO) systems in the presence of an eavesdropper. In general, the secure energy efficiency maximization (SEEM) problem is highly nonlinear and nonconvex, and hard to solve directly. To overcome this difficulty, we employ a branch-and-reduce-and-bound (BRB) approach to obtain the globally optimal solution. Since the BRB algorithm suffers from high computational cost, its globally optimal solution mainly serves as a benchmark for evaluating the performance of suboptimal algorithms. We also develop a low-complexity approach that uses the well-known zero-forcing (ZF) technique to cancel the wiretapped signal, making the design problem more amenable. Using the ZF-based method, we transform the SEEM problem into a concave-convex fractional one, which can be solved by combining the Dinkelbach algorithm with a bisection search. Simulation results show that the ZF-based method converges quickly and attains a suboptimal EE performance close to the optimal EE of the BRB method. The ZF-based scheme also shows advantages in energy efficiency over the conventional secrecy rate maximization precoder design.
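The Dinkelbach iteration at the heart of the ZF-based method is easy to sketch on a toy stand-in for the secure-EE objective: maximize f(x)/g(x) with f concave (a rate-like term) and g convex and positive (a power-like term). The inner maximization here is a bounded one-dimensional solve; in the paper it is a convex precoder design handled with a bisection search. The functions and bounds below are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

f = lambda x: np.log2(1.0 + 4.0 * x)      # "secrecy rate" surrogate (concave)
g = lambda x: 1.0 + 2.0 * x               # total power (affine, hence convex)

def dinkelbach(lo=0.0, hi=10.0, tol=1e-8, max_iter=50):
    lam, x = 0.0, lo
    for _ in range(max_iter):
        # inner problem: maximize f(x) - lam * g(x) over the feasible power range
        res = minimize_scalar(lambda x: -(f(x) - lam * g(x)),
                              bounds=(lo, hi), method="bounded")
        x = res.x
        gap = f(x) - lam * g(x)            # converges to 0 at the optimal ratio
        lam = f(x) / g(x)                  # update the ratio estimate
        if abs(gap) < tol:
            break
    return x, lam                          # optimal power and energy efficiency

x_opt, ee_opt = dinkelbach()
print(f"optimal x = {x_opt:.4f}, EE = {ee_opt:.4f}")
```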


Author(s):  
Wei Zhang ◽  
Saad Ahmed ◽  
Jonathan Hong ◽  
Zoubeida Ounaies ◽  
Mary Frecker

Different types of active materials have been used to actuate origami-inspired self-folding structures. To model the highly nonlinear deformation and material responses, as well as the coupled field equations and boundary conditions of such structures, high-fidelity models such as finite element (FE) models are needed but are usually computationally expensive, which makes optimization intractable. In this paper, a computationally efficient two-stage optimization framework is developed as a systematic method for the multi-objective design of such multifield self-folding structures, where the deformations are concentrated in crease-like areas, active and passive materials are assumed to behave linearly, and low- and high-fidelity models of the structures can be developed. In Stage 1, low-fidelity models are used to determine the topology of the structure. At the end of Stage 1, a distance measure [Formula: see text] is applied as the metric to determine the best design, which then serves as the baseline design in Stage 2. In Stage 2, designs are further optimized from the baseline design with greatly reduced computing time compared to a full FEA-based topology optimization. The design framework is first described in a general formulation. To demonstrate its efficacy, the framework is implemented in two case studies: a three-finger soft gripper actuated by a PVDF-based terpolymer, and a 3D multifield example actuated by both the terpolymer and a magneto-active elastomer (MAE). Key steps are elaborated in detail, including the variable filter, the metrics used to select the best design, the determination of design domains, and the material conversion methods from low- to high-fidelity models. Analytical models and rigid-body dynamic models are developed as the low-fidelity models for the terpolymer- and MAE-based actuations, respectively, and the FE model of the MAE-based actuation is generalized from previous work. Additional generalizable techniques to further reduce the computational cost are also described. As a result, designs with better overall performance than the baseline were achieved at the end of Stage 2, with computing times of 15 days for the gripper and 9 days for the multifield example, compared to more than 3 and 2 months, respectively, for full FEA-based optimizations. Tradeoffs between the competing design objectives were achieved. In both case studies, the efficacy and computational efficiency of the two-stage optimization framework are successfully demonstrated.
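The overall workflow can be summarized in a schematic skeleton. All names are hypothetical: the evaluators stand in for the authors' low- and high-fidelity models, and the distance metric is a simple Euclidean distance to an ideal objective point, a placeholder for the paper's unspecified measure.

```python
import numpy as np

def distance_to_ideal(objectives, ideal):
    """Smaller is better: distance of an objective vector to the ideal point."""
    return float(np.linalg.norm(np.asarray(objectives) - np.asarray(ideal)))

def two_stage(candidates, low_fi, high_fi, refine, ideal, n_refine=20):
    # Stage 1: cheap screening of the whole design space with the low-fidelity model
    scored = [(distance_to_ideal(low_fi(c), ideal), c) for c in candidates]
    baseline = min(scored, key=lambda t: t[0])[1]
    # Stage 2: expensive high-fidelity refinement around the baseline only
    best, best_d = baseline, distance_to_ideal(high_fi(baseline), ideal)
    for _ in range(n_refine):
        trial = refine(best)               # perturb the current best design
        d = distance_to_ideal(high_fi(trial), ideal)
        if d < best_d:
            best, best_d = trial, d
    return best, best_d
```

The computational saving comes from restricting the expensive high-fidelity evaluations to a small neighborhood of the Stage 1 baseline rather than the full design space.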


2013 ◽  
Vol 683 ◽  
pp. 824-827
Author(s):  
Tian Ding Chen ◽  
Chao Lu ◽  
Jian Hu

With the development of science and technology, target tracking has been applied to many areas of daily life, such as missile navigation, tank localization, plot monitoring systems, and robot field operations. The particle filter, which can handle nonlinear and non-Gaussian systems, is widely used because of the complexity of real environments. This paper uses resampling to reduce the particle degeneracy observed in our tests, and compares the particle filter with the Kalman filter in terms of accuracy. The experimental results show that the particle filter is better suited to complex scenes, making it more practical and feasible for target tracking.
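A standard systematic-resampling routine of the kind the paragraph refers to is shown below: when the effective sample size collapses, particles are redrawn in proportion to their weights and the weights reset to uniform. The implementation is generic, not taken from the paper.

```python
import numpy as np

def effective_sample_size(weights):
    """Degeneracy diagnostic: Neff = 1 / sum(w^2), small when few particles matter."""
    return 1.0 / np.sum(np.asarray(weights) ** 2)

def systematic_resample(particles, weights, rng=None):
    """particles: (N, ...) array; weights: (N,) normalized importance weights."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(weights)
    # one random offset, then evenly spaced positions: low-variance resampling
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[idx], np.full(n, 1.0 / n)
```

A common rule of thumb is to resample only when effective_sample_size(weights) drops below some fraction of N (say N/2), which limits the extra Monte Carlo noise that resampling introduces.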

