On the Selection of Localization Radius in Ensemble Filtering for Multiscale Quasigeostrophic Dynamics

2018 ◽  
Vol 146 (2) ◽  
pp. 543-560 ◽  
Author(s):  
Yue Ying ◽  
Fuqing Zhang ◽  
Jeffrey L. Anderson

Covariance localization remedies sampling errors due to limited ensemble size in ensemble data assimilation. Previous studies suggest that the optimal localization radius depends on ensemble size, observation density and accuracy, as well as the correlation length scale determined by model dynamics. A comprehensive localization theory for multiscale dynamical systems with varying observation density remains an active area of research. Using a two-layer quasigeostrophic (QG) model, this study systematically evaluates the sensitivity of the best Gaspari–Cohn localization radius to changes in model resolution, ensemble size, and observing networks. Numerical experiment results show that the best localization radius is smaller for smaller-scale components of a QG flow, indicating its scale dependency. The best localization radius is rather insensitive to changes in model resolution, as long as the key dynamical processes are reasonably well represented by the low-resolution model with inflation methods that account for representation errors. As ensemble size decreases, the best localization radius shifts to smaller values. However, for nonlocal correlations between an observation and state variables that peak at a certain distance, decreasing localization radii further within this distance does not reduce analysis errors. Increasing the density of an observing network has two effects that both reduce the best localization radius. First, the reduced observation error spectral variance further constrains prior ensembles at large scales. Less large-scale contribution results in a shorter overall correlation length, which favors a smaller localization radius. Second, a denser network provides more independent pieces of information, thus a smaller localization radius still allows the same number of observations to constrain each state variable.
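The Gaspari–Cohn localization discussed in this abstract can be illustrated with a short sketch: the fifth-order compactly supported taper is applied to a sample covariance via a Schur (element-wise) product. This is a minimal illustration only; the grid size, ensemble size, and radius below are arbitrary choices, not the paper's experimental setup.

```python
import numpy as np

def gaspari_cohn(dist, c):
    """Gaspari-Cohn fifth-order compactly supported correlation function.
    c is the localization half-width; the taper reaches zero at distance 2c."""
    r = np.abs(dist) / c
    taper = np.zeros_like(r, dtype=float)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r <= 2.0)
    r1, r2 = r[m1], r[m2]
    taper[m1] = (-0.25 * r1**5 + 0.5 * r1**4 + 0.625 * r1**3
                 - (5.0 / 3.0) * r1**2 + 1.0)
    taper[m2] = ((1.0 / 12.0) * r2**5 - 0.5 * r2**4 + 0.625 * r2**3
                 + (5.0 / 3.0) * r2**2 - 5.0 * r2 + 4.0 - (2.0 / 3.0) / r2)
    return taper

# Localize a noisy sample covariance with a Schur product:
n = 8
rng = np.random.default_rng(0)
ens = rng.standard_normal((20, n))                 # 20 members, 8 grid points
P = np.cov(ens, rowvar=False)                      # raw sample covariance
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
P_loc = P * gaspari_cohn(dist, c=2.0)              # tapered covariance
```

Shrinking `c` suppresses spurious long-range sample correlations more aggressively, which is why the best radius shrinks with ensemble size, as the abstract reports.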

2021 ◽  
Vol 9 (10) ◽  
pp. 1054
Author(s):  
Ang Su ◽  
Liang Zhang ◽  
Xuefeng Zhang ◽  
Shaoqing Zhang ◽  
Zhao Liu ◽  
...  

Due to model error and the sampling error of a finite ensemble, the background ensemble spread becomes too small and the error covariance is underestimated during filtering for data assimilation. Because of computational resource constraints, it is difficult to use a large ensemble size to reduce sampling errors in high-dimensional real atmospheric and ocean models. Here, based on Bayesian theory, we explore a new spatially and temporally varying adaptive covariance inflation algorithm. To improve the statistical representation of a finite background ensemble, the prior probability of inflation obeys the inverse chi-square distribution and the likelihood function obeys the t distribution, which are used to obtain prior or posterior covariance inflation schemes. Different ensemble sizes are used to compare the assimilation quality with other inflation schemes within both the perfect and biased model frameworks. With two simple coupled models, we examined the performance of the new scheme. The results show that the new inflation scheme performed better than existing schemes in some cases, with more stability and fewer assimilation errors, especially when a small ensemble size was used in the biased model. Due to better computing performance and a relaxed demand for computational resources, the new scheme has potential applications in more comprehensive models for prediction initialization and reanalysis. In short, the new inflation scheme performs well for a small ensemble size, and it may be more suitable for large-scale models.
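The baseline that adaptive schemes like this one generalize is plain multiplicative inflation: perturbations about the ensemble mean are scaled so the sample covariance grows by a factor lambda. The sketch below shows only that baseline, not the Bayesian spatially/temporally varying scheme of the abstract; the ensemble and factor are arbitrary.

```python
import numpy as np

def inflate(ens, lam):
    """Multiplicative covariance inflation: scale perturbations about the
    ensemble mean by sqrt(lam), so the sample covariance scales by lam."""
    mean = ens.mean(axis=0)
    return mean + np.sqrt(lam) * (ens - mean)

rng = np.random.default_rng(1)
ens = rng.standard_normal((10, 4))     # 10 members, 4 state variables
infl = inflate(ens, lam=1.5)           # covariance grows by 50%
```

An adaptive scheme would replace the fixed `lam` with a value estimated at each grid point and assimilation cycle from the observation-minus-background statistics.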


2014 ◽  
Vol 142 (12) ◽  
pp. 4499-4518 ◽  
Author(s):  
Yicun Zhen ◽  
Fuqing Zhang

Abstract This study proposes a variational approach to adaptively determine the optimum radius of influence for ensemble covariance localization when uncorrelated observations are assimilated sequentially. The covariance localization is commonly used by various ensemble Kalman filters to limit the impact of covariance sampling errors when the ensemble size is small relative to the dimension of the state. The proposed approach is based on the premise of finding an optimum localization radius that minimizes the distance between the Kalman update using the localized sampling covariance versus using the true covariance, when the sequential ensemble Kalman square root filter method is used. The authors first examine the effectiveness of the proposed method for the cases when the true covariance is known or can be approximated by a sufficiently large ensemble size. Not surprisingly, it is found that the smaller the true covariance distance or the smaller the ensemble, the smaller the localization radius that is needed. The authors further generalize the method to the more usual scenario that the true covariance is unknown but can be represented or estimated probabilistically based on the ensemble sampling covariance. The mathematical formula for this probabilistic and adaptive approach with the use of the Jeffreys prior is derived. Promising results and limitations of this new method are discussed through experiments using the Lorenz-96 system.
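The first set of experiments described above, where the true covariance is assumed known, can be caricatured by a brute-force search: pick the taper length whose localized sample covariance lies closest to the reference covariance. This sketch substitutes a simple Gaussian taper and a Frobenius-norm distance for the paper's variational criterion; all sizes and candidate radii are arbitrary assumptions.

```python
import numpy as np

def gauss_taper(dist, L):
    """Gaussian localization taper with length scale L (stand-in for
    Gaspari-Cohn, used here only for brevity)."""
    return np.exp(-0.5 * (dist / L) ** 2)

def best_radius(P_sample, P_ref, dist, radii):
    """Return the taper length whose localized covariance is closest
    (Frobenius norm) to the reference covariance."""
    errs = [np.linalg.norm(P_sample * gauss_taper(dist, L) - P_ref)
            for L in radii]
    return radii[int(np.argmin(errs))]

rng = np.random.default_rng(2)
n = 12
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
C_true = np.exp(-dist / 3.0)                        # "true" covariance
small = rng.multivariate_normal(np.zeros(n), C_true, size=15)
P_small = np.cov(small, rowvar=False)               # noisy 15-member estimate
L_opt = best_radius(P_small, C_true, dist, radii=[1.0, 2.0, 4.0, 8.0])
```

The paper's contribution is replacing this brute-force comparison against a known truth with a closed-form probabilistic criterion (via a Jeffreys prior) that needs only the ensemble itself.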


Water ◽  
2019 ◽  
Vol 11 (7) ◽  
pp. 1520
Author(s):  
Zheng Jiang ◽  
Quanzhong Huang ◽  
Gendong Li ◽  
Guangyong Li

The parameters of water movement and solute transport models are essential for the accurate simulation of soil moisture and salinity, particularly for layered soils in field conditions. Parameter estimation can be achieved using the inverse modeling method. However, this type of method cannot fully consider the uncertainties of measurements, boundary conditions, and parameters, resulting in inaccurate estimations of parameters and predictions of state variables. The ensemble Kalman filter (EnKF) is well-suited to data assimilation and parameter prediction in situations with large numbers of variables and uncertainties. Thus, in this study, the EnKF was used to estimate the parameters of water movement and solute transport in layered, variably saturated soils. Our results indicate that, when used in conjunction with the HYDRUS-1D software (University of California Riverside, Riverside, CA, USA), the EnKF effectively estimates parameters and predicts state variables for layered, variably saturated soils. Assimilation factors such as the initial perturbation and ensemble size significantly affected the simulated results. An ensemble size in the range of 50–100 is proposed when applying the EnKF to the highly nonlinear hydrological models of the present study. Although the simulation results for moisture did not exhibit substantial improvement with the assimilation, the simulation of salinity was significantly improved through the assimilation of the salinity and the related solute transport parameters. Reducing the uncertainties in measured data can improve the goodness-of-fit in the application of the EnKF method. Sparse field observation data also benefited from the accurate measurement of state variables in the case of EnKF assimilation. However, the application of the EnKF algorithm to layered, variably saturated soils with hydrological models requires further study, because it is a challenging and highly nonlinear problem.
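Joint state-parameter estimation with the EnKF is usually done by state augmentation: the parameter is appended to the state vector, and the observation of the state updates the parameter through their sample cross-covariance. The toy sketch below shows one stochastic-EnKF analysis step on a two-component augmented vector; it is a generic illustration, not the HYDRUS-1D coupling of the study, and all numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
Ne = 100                                  # ensemble size
# augmented ensemble: column 0 = observed state, column 1 = model parameter
ens = np.column_stack([rng.normal(0.0, 1.0, Ne),
                       rng.normal(2.0, 0.5, Ne)])
ens[:, 0] += 0.8 * (ens[:, 1] - 2.0)      # state depends on the parameter

y_obs, r_obs = 1.0, 0.1**2                # observation of column 0, error var
H = np.array([1.0, 0.0])                  # observe the state component only

A = ens - ens.mean(axis=0)                # ensemble perturbations
P = A.T @ A / (Ne - 1)                    # augmented sample covariance
K = P @ H / (H @ P @ H + r_obs)           # Kalman gain (2-vector)
perturbed = y_obs + rng.normal(0.0, 0.1, Ne)   # perturbed observations
ens_a = ens + np.outer(perturbed - ens[:, 0], K)
# the unobserved parameter (column 1) is corrected via its cross-covariance
```

Because the gain's second component is nonzero only through the state-parameter correlation, a misspecified parameter is pulled toward values consistent with the observed moisture or salinity.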


2007 ◽  
Vol 64 (11) ◽  
pp. 3766-3784 ◽  
Author(s):  
Philippe Lopez

Abstract This paper first reviews the current status, issues, and limitations of the parameterizations of atmospheric large-scale and convective moist processes that are used in numerical weather prediction and climate general circulation models. Both large-scale (resolved) and convective (subgrid scale) moist processes are dealt with. Then, the general question of the inclusion of diabatic processes in variational data assimilation systems is addressed. The focus is put on linearity and resolution issues, the specification of model and observation error statistics, the formulation of the control vector, and the problems specific to the assimilation of observations directly affected by clouds and precipitation.


2021 ◽  
Author(s):  
Jan Chylik ◽  
Dmitry Chechin ◽  
Regis Dupuy ◽  
Birte S. Kulla ◽  
Christof Lüpkes ◽  
...  

Abstract. Late springtime Arctic mixed-phase convective clouds over open water in the Fram Strait, as observed during the recent ACLOUD field campaign, are simulated at turbulence-resolving resolutions. The main research objective is to gain more insight into the coupling of these cloud layers to the surface, and into the role played by interactions between aerosol, hydrometeors and turbulence in this process. A composite case is constructed based on data collected by two research aircraft on 18 June 2017. The boundary conditions and large-scale forcings are based on weather model analyses, yielding a simulation that freely equilibrates towards the observed thermodynamic state. The results are evaluated against a variety of independent aircraft measurements. The observed cloud macro- and microphysical structure is well reproduced, consisting of a stratiform mixed-phase cloud layer fed by surface-driven convective transport that is predominantly in the liquid phase. Comparison to noseboom turbulence measurements suggests that the simulated cloud-surface coupling is realistic. A joint-pdf analysis of relevant state variables is conducted, suggesting that locations where the mixed-phase cloud layer is strongly coupled to the surface by convective updrafts act as hot spots for invigorated interactions between turbulence, clouds and aerosol. A mixing-line analysis reveals that the turbulent mixing is similar to warm convective cloud regimes, but is accompanied by hydrometeor transitions that are unique to mixed-phase cloud systems. Distinct fingerprints in the joint-pdf diagrams also explain i) the typical ring-like shape of ice mass in the outflow cloud deck, ii) its slightly elevated buoyancy, and iii) an associated local minimum in CCN.


2021 ◽  
Author(s):  
Dino Zivojevic ◽  
Muhamed Delalic ◽  
Darijo Raca ◽  
Dejan Vukobratovic ◽  
Mirsad Cosovic

The purpose of a state estimation (SE) algorithm is to estimate the values of the state variables considering the available set of measurements. Centralised SE becomes impractical for large-scale systems, particularly if the measurements are spatially distributed across wide geographical areas. Dividing the large-scale system into clusters (i.e., subsystems) and distributing the computation across clusters resolves the constraints of the centralised approach. In such scenarios, distributed SE methods bring numerous advantages over centralised ones. In this paper, we propose a novel distributed approach to solve the linear SE model by combining local solutions, obtained by applying weighted least-squares (WLS) to the given subsystems, with the Gaussian belief propagation (GBP) algorithm. The proposed algorithm is based on a factor graph and operates without a central coordinator; subsystems exchange only "beliefs", thus preserving the privacy of the measurement data and state variables. Further, we propose an approach to speed up evaluation of the local solution upon arrival of new information at the subsystem. Finally, the proposed algorithm reaches the accuracy of the centralised WLS solution in a few iterations, and outperforms the vanilla GBP algorithm with respect to its convergence properties.
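The local building block each subsystem solves is the standard linear WLS estimate, x = (HᵀWH)⁻¹HᵀWz, with W the inverse of the measurement error variances. The sketch below shows only that local step on a made-up two-variable, three-measurement system; the GBP message exchange between subsystems is omitted.

```python
import numpy as np

def wls_estimate(H, z, var):
    """Weighted least-squares state estimate x = (H'WH)^-1 H'Wz,
    where W = diag(1/var) weights each measurement by its precision."""
    W = np.diag(1.0 / var)
    G = H.T @ W @ H                       # gain (information) matrix
    return np.linalg.solve(G, H.T @ W @ z)

# two state variables observed through three linear measurements
H = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0, -1.0]])
x_true = np.array([1.0, 0.5])
z = H @ x_true                            # noise-free measurements
x_hat = wls_estimate(H, z, var=np.array([0.01, 0.01, 0.04]))
```

In the distributed setting, each cluster runs this solve on its own rows of `H` and then iteratively exchanges Gaussian beliefs (means and variances of shared boundary variables) with neighbouring clusters until the estimates agree.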


2018 ◽  
Vol 146 (1) ◽  
pp. 175-198 ◽  
Author(s):  
Rong Kong ◽  
Ming Xue ◽  
Chengsi Liu

Abstract A hybrid ensemble–3DVar (En3DVar) system is developed and compared with 3DVar, EnKF, “deterministic forecast” EnKF (DfEnKF), and pure En3DVar for assimilating radar data through perfect-model observing system simulation experiments (OSSEs). DfEnKF uses a deterministic forecast as the background and is therefore parallel to pure En3DVar. Different results are found between DfEnKF and pure En3DVar: 1) the serial versus global nature and 2) the variational minimization versus direct filter updating nature of the two algorithms are identified as the main causes for the differences. For 3DVar (EnKF/DfEnKF and En3DVar), optimal decorrelation scales (localization radii) for static (ensemble) background error covariances are obtained and used in hybrid En3DVar. The sensitivity of hybrid En3DVar to covariance weights and ensemble size is examined. On average, when ensemble size is 20 or larger, a 5%–10% static covariance gives the best results, while for smaller ensembles, more static covariance is beneficial. Using an ensemble size of 40, EnKF and DfEnKF perform similarly, and both are better than pure and hybrid En3DVar overall. Using 5% static error covariance, hybrid En3DVar outperforms pure En3DVar for most state variables but underperforms for hydrometeor variables, and the improvement (degradation) is most notable for water vapor mixing ratio qυ (snow mixing ratio qs). Overall, EnKF/DfEnKF performs the best, 3DVar performs the worst, and static covariance only helps slightly via hybrid En3DVar.
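The hybrid background error covariance at the heart of En3DVar is a weighted blend of a static climatological covariance and the flow-dependent ensemble covariance. The sketch below shows that blend with the 5% static weight the abstract finds best for larger ensembles; the matrices themselves are synthetic stand-ins, and in practice the combination is applied implicitly through an extended control vector rather than by forming B explicitly.

```python
import numpy as np

def hybrid_cov(B_static, P_ens, w_static):
    """Convex combination used in hybrid En3DVar:
    B = w * B_static + (1 - w) * P_ens."""
    return w_static * B_static + (1.0 - w_static) * P_ens

n = 6
rng = np.random.default_rng(4)
d = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
B_static = np.exp(-d / 2.0)               # smooth climatological covariance
ens = rng.standard_normal((20, n))        # 20-member ensemble
P_ens = np.cov(ens, rowvar=False)         # flow-dependent sample covariance
B = hybrid_cov(B_static, P_ens, w_static=0.05)   # 5% static weight
```

With `w_static=1.0` this reduces to 3DVar's static covariance and with `w_static=0.0` to pure En3DVar, which matches the spectrum of systems compared in the study.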


2016 ◽  
Vol 19 (02) ◽  
pp. 239-252 ◽  
Author(s):  
Morteza Haghighat Sefat ◽  
Khafiz M. Muradov ◽  
Ahmed H. Elsheikh ◽  
David R. Davies

Summary The popularity of intelligent wells (I-wells), which provide layer-by-layer monitoring and control capability of production and injection, is growing. However, the number of available techniques for optimal control of I-wells is limited (Sarma et al. 2006; Alghareeb et al. 2009; Almeida et al. 2010; Grebenkin and Davies 2012). Currently, most of the I-wells that are equipped with interval control valves (ICVs) are operated to enhance the current production and to resolve problems associated with breakthrough of the unfavorable phase. This reactive strategy is unlikely to deliver the long-term optimum production. On the other hand, the proactive-control strategy for I-wells, with its ambition to provide optimum control over the well's entire production life, has the potential to maximize cumulative oil production. This strategy, however, results in a high-dimensional, nonlinear, and constrained optimization problem. This study provides guidelines on selecting a suitable proactive optimization approach, by use of state-of-the-art stochastic gradient-approximation algorithms. A suitable optimization approach increases the practicality of proactive optimization for real field models under uncertain operational and subsurface conditions. We evaluate the simultaneous-perturbation stochastic approximation (SPSA) method (Spall 1992) and the ensemble-based optimization (EnOpt) method (Chen et al. 2009). In addition, we present a new derivation of EnOpt by use of the concept of directional derivatives. The numerical results show that both the SPSA and EnOpt methods can provide a fast solution to a large-scale, multiple-I-well proactive optimization problem. A criterion for tuning the algorithms is proposed, and the performance of both methods is compared for several test cases. The methodology used for estimating the gradient is shown to affect the application area of each algorithm.
SPSA provides a rough estimate of the gradient and performs better in search environments characterized by several local optima, especially with a large ensemble size. EnOpt was found to provide a smoother estimate of the gradient, resulting in an algorithm that is more robust to the choice of tuning parameters and performs better with a small ensemble size. Moreover, the final optimum operation obtained by EnOpt is smoother. Finally, the obtained criteria are used to perform proactive optimization of ICVs in a real field.
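SPSA's appeal is that one gradient estimate costs only two objective evaluations regardless of the number of control variables: the controls are all perturbed at once along a random Rademacher direction. The sketch below shows a single two-sided SPSA gradient estimate on a toy quadratic objective; the step sizes and objective are arbitrary assumptions, not the well-control problem of the paper.

```python
import numpy as np

def spsa_gradient(f, x, c=0.1, rng=None):
    """One simultaneous-perturbation gradient estimate (Spall 1992):
    two evaluations of f regardless of the dimension of x."""
    if rng is None:
        rng = np.random.default_rng()
    delta = rng.choice([-1.0, 1.0], size=x.size)   # Rademacher perturbation
    g = (f(x + c * delta) - f(x - c * delta)) / (2.0 * c)
    return g / delta        # elementwise; delta_i is +/-1, so 1/delta = delta

# toy quadratic objective; its true gradient at x is 2x
f = lambda x: float(np.sum(x ** 2))
x = np.array([1.0, -2.0, 0.5])
g_hat = spsa_gradient(f, x, c=0.1, rng=np.random.default_rng(5))
```

A single estimate is noisy (only its expectation matches the true gradient), which is the "rough estimate" behaviour the abstract describes; averaging several estimates, or EnOpt's ensemble-covariance smoothing, trades evaluations for a smoother search direction.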


2019 ◽  
Vol 147 (4) ◽  
pp. 1107-1126 ◽  
Author(s):  
Jonathan Poterjoy ◽  
Louis Wicker ◽  
Mark Buehner

Abstract A series of papers published recently by the first author introduce a nonlinear filter that operates effectively as a data assimilation method for large-scale geophysical applications. The method uses sequential Monte Carlo techniques adopted by particle filters, which make no parametric assumptions for the underlying prior and posterior error distributions. The filter also treats the underlying dynamical system as a set of loosely coupled systems to effectively localize the effect observations have on posterior state estimates. This property greatly reduces the number of particles—or ensemble members—required for its implementation. For these reasons, the method is called the local particle filter. The current manuscript summarizes algorithmic advances made to the local particle filter following recent tests performed over a hierarchy of dynamical systems. The revised filter uses modified vector weight calculations and probability mapping techniques from earlier studies, and new strategies for improving filter stability in situations where state variables are observed infrequently with very accurate measurements. Numerical experiments performed on low-dimensional data assimilation problems provide evidence that supports the theoretical benefits of the new improvements. As a proof of concept, the revised particle filter is also tested on a high-dimensional application from a real-time weather forecasting system at the NOAA/National Severe Storms Laboratory (NSSL). The proposed changes have large implications for researchers applying the local particle filter for real applications, such as data assimilation in numerical weather prediction models.
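The weight-and-resample cycle that the local particle filter builds on is the standard bootstrap particle filter update: weight each particle by the observation likelihood, then resample to concentrate particles in high-probability regions. The sketch below shows one such scalar update with systematic resampling; it is the generic global filter, without the localization and probability-mapping machinery that distinguishes the local particle filter, and all numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
Np = 500                                   # number of particles
particles = rng.normal(0.0, 2.0, Np)       # samples from the prior
y_obs, r = 1.5, 0.5**2                     # observation and its error variance

# importance weights from the Gaussian observation likelihood
logw = -0.5 * (y_obs - particles) ** 2 / r
w = np.exp(logw - logw.max())              # subtract max for stability
w /= w.sum()

# systematic resampling: duplicate high-weight particles
u = (rng.uniform() + np.arange(Np)) / Np
idx = np.minimum(np.searchsorted(np.cumsum(w), u), Np - 1)
posterior = particles[idx]
```

With very accurate observations (`r` small relative to the prior spread), almost all weight collapses onto a few particles, which is exactly the infrequent-but-accurate-observation regime the revised filter's new stability strategies target.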

