Use of Power Transform Mixing Ratios as Hydrometeor Control Variables for Direct Assimilation of Radar Reflectivity in GSI En3DVar and Tests with Five Convective Storm Cases

Author(s):  
Lianglyu Chen ◽  
Chengsi Liu ◽  
Ming Xue ◽  
Gang Zhao ◽  
Rong Kong ◽  
...  

Abstract When directly assimilating radar data within a variational framework using hydrometeor mixing ratios (q) as control variables (CVq), the gradient of the cost function becomes extremely large when the background mixing ratio is close to zero. This significantly slows the convergence of the minimization and makes the assimilation of radial velocity and other observations ineffective because the reflectivity observation term dominates the cost function gradient. Using logarithmic hydrometeor mixing ratios as control variables (CVlogq) can alleviate the problem, but the high nonlinearity of the logarithmic transformation can introduce spurious analysis increments into the mixing ratios. In this study, a power transform of hydrometeors is proposed to form new control variables (CVpq), where the nonlinearity of the transformation can be adjusted by tuning the exponent or power parameter p. The performance of assimilating radar data using CVpq is compared with that of CVq and CVlogq for the analyses and forecasts of five convective storm cases from the spring of 2017. Results show that CVpq with p = 0.4 (CVpq0.4) gives the best reflectivity forecasts in terms of root-mean-square error and equitable threat score. Furthermore, CVpq0.4 converges faster in the cost function minimization than CVq and produces fewer spurious analysis increments than CVlogq. Compared to CVq and CVlogq, CVpq0.4 has better skill in 0-3 h composite reflectivity forecasts, and the updraft helicity tracks for the 16 May 2017 Texas and Oklahoma tornado outbreak case are more consistent with observations when using CVpq0.4.
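
As a rough illustration of why the choice of control variable matters (a minimal sketch assuming a simple rain-only Ze ~ q**1.75 relation, not the GSI operator or the paper's exact settings), the snippet below compares the chain-rule factor that the reflectivity term contributes to the cost function gradient for CVq, CVlogq, and CVpq:

```python
import numpy as np

B_EXP = 1.75   # assumed exponent in the rain reflectivity relation Ze ~ q**B_EXP

def grad_factor(q, transform, p=0.4, q_floor=1e-8):
    """d(dBZ)/d(control): the chain-rule factor that enters the gradient of the
    reflectivity term of the cost function for each choice of control variable."""
    q = np.maximum(q, q_floor)
    ddbz_dq = 10.0 * B_EXP / (np.log(10.0) * q)      # d(dBZ)/dq is proportional to 1/q
    if transform == "q":      dq_dc = np.ones_like(q)       # control c = q
    elif transform == "logq": dq_dc = q                     # control c = ln(q)
    elif transform == "pq":   dq_dc = q ** (1.0 - p) / p    # control c = q**p
    else: raise ValueError(transform)
    return ddbz_dq * dq_dc

q = np.logspace(-8, -3, 6)     # mixing ratios from near zero up to ~1 g/kg
for name in ("q", "logq", "pq"):
    print(f"{name:5s}", np.round(grad_factor(q, name), 2))
# As q -> 0 the CVq factor blows up like 1/q, the CVlogq factor stays bounded,
# and the CVpq factor grows only like q**(-p), much more gently for p = 0.4.
```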

2020 ◽  
Vol 148 (4) ◽  
pp. 1483-1502 ◽  
Author(s):  
Chengsi Liu ◽  
Ming Xue ◽  
Rong Kong

Abstract Radar reflectivity (Z) data are either directly assimilated using 3DVar, 4DVar, or the ensemble Kalman filter, or indirectly assimilated using, for example, a cloud analysis that preretrieves hydrometeors from Z. When directly assimilating radar data variationally, issues related to the highly nonlinear Z operator arise that can cause nonconvergence and bad analyses. To alleviate these issues, treatments are proposed in this study and their performance is examined via observing system simulation experiments. They include the following: 1) When using hydrometeor mixing ratios as control variables (CVq), small background Z can cause an extremely large cost function gradient. Lower limits are imposed on the mixing ratios (qLim treatment) or on the equivalent reflectivity (ZeLim treatment) in the Z observation operator. ZeLim is found to work better than qLim in terms of analysis accuracy and convergence speed. 2) With CVq, the assimilation of radial velocity (Vr) is ineffective when Vr is assimilated together with Z data because of the much smaller cost function gradient associated with Vr. A procedure (VrPass) that assimilates Vr data in a separate pass is found to be very helpful. 3) Using logarithmic hydrometeor mixing ratios as control variables (CVlogq) can also avoid an extremely large cost function gradient and has much faster convergence. However, spurious analysis increments can be created when transforming the analysis increments back to mixing ratios. A background smoothing and a lower limit are applied to the background mixing ratios and are shown to be effective. Using CVlogq with the associated treatments produces a better reflectivity analysis that is much closer to the observations without resorting to multiple analysis passes, and the cost function minimization also converges faster. CVlogq is therefore recommended for variational radar data assimilation.
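
The qLim and ZeLim treatments can be pictured with a simplified rain-only operator (a hedged sketch with assumed coefficients and limit values, not the study's configuration): qLim floors the mixing ratio before computing equivalent reflectivity, while ZeLim floors the equivalent reflectivity itself, which also bounds the operator's gradient where reflectivity is clipped.

```python
import numpy as np

A, B = 3.63e9, 1.75                          # assumed rain relation Ze = A * (rho*q)**B

def dbz_qlim(q, rho=1.0, q_min=1e-6):
    """qLim: impose a lower limit on the mixing ratio (kg/kg) before computing Ze."""
    ze = A * (rho * np.maximum(q, q_min)) ** B
    return 10.0 * np.log10(ze)

def dbz_zelim(q, rho=1.0, ze_min=1.0):
    """ZeLim: impose a lower limit on the equivalent reflectivity (mm^6 m^-3)."""
    ze = np.maximum(A * (rho * q) ** B, ze_min)
    return 10.0 * np.log10(ze)

def ddbz_dq_zelim(q, rho=1.0, ze_min=1.0):
    """Gradient with ZeLim: zero where Ze is clipped, bounded elsewhere."""
    ze = A * (rho * q) ** B
    grad = 10.0 * B / (np.log(10.0) * np.maximum(q, 1e-12))
    return np.where(ze > ze_min, grad, 0.0)

q = np.array([0.0, 1e-8, 1e-6, 1e-4, 1e-3])  # mixing ratios including (near-)zero values
print("qLim  dBZ :", np.round(dbz_qlim(q), 1))
print("ZeLim dBZ :", np.round(dbz_zelim(q), 1))
print("ZeLim grad:", np.round(ddbz_dq_zelim(q), 1))
```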


Atmosphere ◽  
2019 ◽  
Vol 10 (7) ◽  
pp. 415 ◽  
Author(s):  
Dongmei Xu ◽  
Feifei Shen ◽  
Jinzhong Min

The variational data assimilation (DA) method seeks the optimal analysis by minimizing a cost function with respect to control variables (CVs). In this study, the CVs are extended to include variables related to hydrometeor mixing ratios in addition to the widely used set of CVs (momentum fields, surface pressure, temperature, and pseudo-relative humidity). The impact of the extra hydrometeor CVs on the assimilation of radar radial velocity (Vr) and reflectivity (RF) is investigated for the analysis and prediction of Typhoon Chanthu (2010). It is found that the background error statistics of the extended CVs derived with the National Meteorological Center (NMC) method are reliable. The track forecast is improved significantly by including hydrometeor mixing ratios as CVs when assimilating radar Vr and RF. The DA experiments using the hydrometeor CVs show much improved intensity analyses and forecasts, and the precipitation forecast skill is also improved to some extent. The positive impact is significant with a direct RF assimilation scheme when Vr and RF data are assimilated together. When an indirect RF assimilation scheme is applied, fitting more hydrometeors in the cost function tends to cause a slight degradation of other variables such as wind and temperature.
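
For readers unfamiliar with the NMC method mentioned above, the toy sketch below (synthetic fields and an assumed tuning factor of 0.5; real applications use pairs of, e.g., 24-h and 12-h forecasts of the hydrometeor fields) shows the basic computation: error statistics are estimated from differences between forecasts of different lead times that are valid at the same time.

```python
import numpy as np

rng = np.random.default_rng(0)
n_times, nlev, nlat, nlon = 30, 5, 20, 20
truth_fields = rng.standard_normal((n_times, nlev, nlat, nlon))   # stand-in "atmospheres"

def forecast(t, lead_hours, err_scale=0.02):
    """Toy forecast: the 'true' field plus a lead-time-dependent error."""
    noise = rng.standard_normal((nlev, nlat, nlon))
    return truth_fields[t] + err_scale * lead_hours * noise

# NMC method: difference forecasts of two lead times valid at the same time,
# then take a (tuned) fraction of their standard deviation as the background
# error standard deviation; the 0.5 factor here is an assumed tuning value.
diffs = np.stack([forecast(t, 24) - forecast(t, 12) for t in range(n_times)])
sigma_b = 0.5 * diffs.std(axis=0)
print("domain-mean background error std estimate:", sigma_b.mean())
```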


2014 ◽  
Vol 142 (11) ◽  
pp. 3998-4016 ◽  
Author(s):  
Dominik Jacques ◽  
Isztar Zawadzki

Abstract In radar data assimilation, statistically optimal analyses are sought by minimizing a cost function in which the variance and covariance of background and observation errors are correctly represented. Radar observations are particular in that they are often available at a spatial resolution comparable to that of background estimates. Because of computational constraints and lack of information, it is impossible to perfectly represent the correlation of errors. In this study, the authors characterize the impact of such misrepresentations in an idealized framework where the spatial correlations of background and observation errors are each described by a homogeneous and isotropic exponential decay. Analyses obtained with perfect representation of correlations are compared to others obtained by neglecting correlations altogether. These two sets of analyses are examined from a theoretical and an experimental perspective. The authors show that if the spatial correlations of background and observation errors are similar, then neglecting the correlation of errors has a small impact on the quality of analyses. They suggest that the sampling noise, related to the precision with which analysis errors may be estimated, could be used as a criterion for determining when the correlations of errors may be omitted. Neglecting correlations altogether also yields better analyses than representing correlations for only one term in the cost function or through the use of data thinning. These results suggest that the computational costs of data assimilation could be reduced by neglecting the correlations of errors in areas where dense radar observations are available.
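
The "perfect correlations versus neglected correlations" comparison can be reproduced in a tiny 1D analogue (an idealized sketch with assumed grid size, error variances, and correlation lengths): both error covariances follow a homogeneous, isotropic exponential decay, observations sit on the model grid, and the analysis is computed once with the full covariances and once with their diagonals only.

```python
import numpy as np

n, dx = 100, 1.0
x = np.arange(n) * dx                        # 1D analysis grid

def exp_corr(length):
    """Homogeneous, isotropic exponential-decay correlation matrix."""
    return np.exp(-np.abs(x[:, None] - x[None, :]) / length)

sig_b, sig_o = 1.0, 1.0                      # assumed error standard deviations
B = sig_b**2 * exp_corr(length=5.0)          # background error covariance
R = sig_o**2 * exp_corr(length=4.0)          # observation error covariance

rng = np.random.default_rng(0)
truth = np.sin(2 * np.pi * x / 25.0)
xb = truth + rng.multivariate_normal(np.zeros(n), B)   # background
y = truth + rng.multivariate_normal(np.zeros(n), R)    # obs on the model grid (H = I)

def analysis(Bm, Rm):
    """Optimal analysis for the given covariance representations."""
    K = Bm @ np.linalg.inv(Bm + Rm)
    return xb + K @ (y - xb)

xa_full = analysis(B, R)                                       # correlations represented
xa_diag = analysis(np.diag(np.diag(B)), np.diag(np.diag(R)))   # correlations neglected
rmse = lambda z: np.sqrt(np.mean((z - truth) ** 2))
print("rmse with correlations:", rmse(xa_full), " without:", rmse(xa_diag))
# With the similar correlation lengths chosen here, the two analyses come out
# close, mirroring the authors' conclusion for similar background/obs correlations.
```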


Author(s):  
E. S. Noussair

Abstract Existence of piecewise optimal control is proved when the cost function includes one or both of (a) a cost of sudden switching (discontinuity) of control variables, and (b) a cost associated with the maximum rate of variation of the control over segments of the path for which the control is continuous.
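
An illustrative form of such a cost functional (a sketch only; the paper's exact functional and assumptions are not reproduced here) combines a running cost, a charge c_i for each switching discontinuity, and a charge g_i on the maximum rate of variation over each segment on which the control is continuous:

```latex
J(u) = \int_{t_0}^{t_f} f_0\bigl(t, x(t), u(t)\bigr)\,dt
     + \sum_{i=1}^{N} c_i\bigl(u(t_i^-),\, u(t_i^+)\bigr)
     + \sum_{i=1}^{N} g_i\Bigl(\operatorname*{ess\,sup}_{t \in (t_{i-1},\, t_i)} \bigl\lVert \dot u(t) \bigr\rVert\Bigr)
```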


2021 ◽  
Vol 11 (2) ◽  
pp. 850
Author(s):  
Dokkyun Yi ◽  
Sangmin Ji ◽  
Jieun Park

Artificial intelligence (AI) is achieved by optimizing a cost function constructed from learning data. Changing the parameters of the cost function is the AI learning process (AI learning for short). If AI learning is performed well, the value of the cost function reaches the global minimum. For learning to be complete, the parameters should stop changing once the cost function reaches its global minimum. One useful optimization method is the momentum method; however, the momentum method has difficulty stopping the parameter updates when the value of the cost function reaches the global minimum (the non-stop problem). The proposed method is based on the momentum method. To solve the non-stop problem of the momentum method, the value of the cost function is incorporated into the update rule. Therefore, as learning proceeds, the mechanism in the method reduces the amount of change in the parameters according to the value of the cost function. We verify the method through a proof of convergence and numerical experiments against existing methods to ensure that learning works well.
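
The non-stop problem and the proposed remedy can be illustrated on a toy quadratic cost (the damping factor below is an illustrative choice, not necessarily the authors' exact update rule): plain momentum carries a velocity term that keeps changing the parameter even near the minimum, whereas scaling the step by a function of the cost value drives the step toward zero as the cost approaches its global minimum of zero.

```python
def cost(w):
    return 0.5 * w * w            # toy cost with its global minimum 0 at w = 0

def grad(w):
    return w

def run(damp_by_cost, steps=300, lr=0.3, beta=0.9, w0=5.0):
    w, v = w0, 0.0
    for _ in range(steps):
        v = beta * v + lr * grad(w)                        # momentum accumulation
        factor = cost(w) / (1.0 + cost(w)) if damp_by_cost else 1.0
        w -= factor * v                                    # damped (or plain) step
    return w, cost(w)

print("plain momentum:", run(False))
print("cost-damped   :", run(True))
```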


2020 ◽  
Vol 18 (02) ◽  
pp. 2050006 ◽  
Author(s):  
Alexsandro Oliveira Alexandrino ◽  
Carla Negri Lintzmayer ◽  
Zanoni Dias

One of the main problems in Computational Biology is to find the evolutionary distance among species. In most approaches, such distance only involves rearrangements, which are mutations that alter large pieces of the species' genome. When we represent genomes as permutations, the problem of transforming one genome into another is equivalent to the problem of Sorting Permutations by Rearrangement Operations. The traditional approach is to consider that any rearrangement has the same probability of happening, and so the goal is to find a minimum-length sequence of operations which sorts the permutation. However, studies have shown that some rearrangements are more likely to happen than others, so a weighted approach is more realistic. In a weighted approach, the goal is to find a sequence which sorts the permutation such that the cost of that sequence is minimum. This work introduces a new type of cost function, which is related to the amount of fragmentation caused by a rearrangement. We present results on lower and upper bounds for the fragmentation-weighted problems and on the relation between the unweighted and the fragmentation-weighted approaches. Our main results are 2-approximation algorithms for five versions of this problem involving reversals and transpositions. We also give bounds for the diameters concerning these problems and provide an improved approximation factor for simple permutations considering transpositions.
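
To make the weighted setting concrete (a sketch with a hypothetical breakpoint-based weight; the paper's actual fragmentation cost is not reproduced here), the snippet below searches exhaustively for a minimum-cost sequence of reversals sorting a small permutation, where each operation is charged according to the chosen weight function.

```python
import heapq
from itertools import combinations

def reversal(perm, i, j):
    """Return perm with the segment perm[i..j] reversed (0-indexed, inclusive)."""
    return perm[:i] + tuple(reversed(perm[i:j + 1])) + perm[j + 1:]

def breakpoints(perm):
    """Adjacent pairs that are not consecutive values in the extended permutation."""
    ext = (0,) + perm + (len(perm) + 1,)
    return sum(1 for a, b in zip(ext, ext[1:]) if abs(a - b) != 1)

def min_cost_sort(perm, op_cost=lambda old, new: 1 + breakpoints(new)):
    """Dijkstra over permutation states; op_cost is the (illustrative) weight."""
    start, goal = tuple(perm), tuple(sorted(perm))
    frontier, seen = [(0, start)], {start: 0}
    while frontier:
        cost, cur = heapq.heappop(frontier)
        if cur == goal:
            return cost
        for i, j in combinations(range(len(cur)), 2):
            nxt = reversal(cur, i, j)
            c = cost + op_cost(cur, nxt)
            if c < seen.get(nxt, float("inf")):
                seen[nxt] = c
                heapq.heappush(frontier, (c, nxt))
    return None

print(min_cost_sort([3, 1, 5, 2, 4]))   # minimum total cost under this toy weight
```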


2005 ◽  
Vol 133 (6) ◽  
pp. 1710-1726 ◽  
Author(s):  
Milija Zupanski

Abstract A new ensemble-based data assimilation method, named the maximum likelihood ensemble filter (MLEF), is presented. The analysis solution maximizes the likelihood of the posterior probability distribution, obtained by minimization of a cost function that depends on a general nonlinear observation operator. The MLEF belongs to the class of deterministic ensemble filters, since no perturbed observations are employed. As in variational and ensemble data assimilation methods, the cost function is derived using a Gaussian probability density function framework. Like other ensemble data assimilation algorithms, the MLEF produces an estimate of the analysis uncertainty (e.g., analysis error covariance). In addition to the common use of ensembles in calculation of the forecast error covariance, the ensembles in MLEF are exploited to efficiently calculate the Hessian preconditioning and the gradient of the cost function. Because of the superior Hessian preconditioning, two to three iterative minimization steps are sufficient. The MLEF method is well suited for use with highly nonlinear observation operators, at a small additional computational cost for the minimization. The consistent treatment of nonlinear observation operators through optimization is an advantage of the MLEF over other ensemble data assimilation algorithms. The cost of MLEF is comparable to the cost of existing ensemble Kalman filter algorithms. The method is directly applicable to most complex forecast models and observation operators. In this paper, the MLEF method is applied to data assimilation with the one-dimensional Korteweg–de Vries–Burgers equation. The tested observation operator is quadratic, in order to make the assimilation problem more challenging. The results illustrate the stability of the MLEF performance, as well as the benefit of the cost function minimization. The improvement is noted in terms of the rms error, as well as the analysis error covariance. The statistics of innovation vectors (observation minus forecast) also indicate a stable performance of the MLEF algorithm. Additional experiments suggest the amplified benefit of targeted observations in ensemble data assimilation.
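
A toy version of the ensemble-subspace minimization at the heart of MLEF (assumed sizes, error values, and a plain Gauss-Newton iteration rather than the paper's exact scheme) looks like the following: the increment is sought as a combination of ensemble perturbations, the ensemble supplies the Hessian preconditioning, and the observation operator is quadratic, as in the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_ens = 10, 5
truth = np.linspace(0.5, 1.5, n)
xb = truth + 0.2 * rng.standard_normal(n)              # background state
Pf_sqrt = 0.2 * rng.standard_normal((n, n_ens))        # ensemble perturbation matrix
r = 0.1                                                # assumed obs error std dev
y = truth**2 + r * rng.standard_normal(n)              # observations with H(x) = x**2

def H(x):
    return x**2                                        # quadratic observation operator

def newton_step(w):
    """One preconditioned step on J(w) = 0.5*w'w + 0.5*|(y - H(x))/r|^2."""
    x = xb + Pf_sqrt @ w
    d = (y - H(x)) / r
    Z = (2.0 * x / r)[:, None] * Pf_sqrt               # R^-1/2 H'(x) Pf_sqrt
    grad = w - Z.T @ d
    hess = np.eye(n_ens) + Z.T @ Z                     # ensemble-space Hessian
    return w - np.linalg.solve(hess, grad)

w = np.zeros(n_ens)
for _ in range(3):                                     # a few steps suffice here
    w = newton_step(w)
xa = xb + Pf_sqrt @ w
print("background rmse:", np.sqrt(np.mean((xb - truth) ** 2)))
print("analysis rmse  :", np.sqrt(np.mean((xa - truth) ** 2)))
```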


2000 ◽  
Vol 25 (2) ◽  
pp. 209-227 ◽  
Author(s):  
Keith R. McLaren ◽  
Peter D. Rossitter ◽  
Alan A. Powell

2021 ◽  
pp. 107754632110324
Author(s):  
Berk Altıner ◽  
Bilal Erol ◽  
Akın Delibaşı

Adaptive optics systems are powerful tools implemented to mitigate the effects of wavefront aberrations. In this article, the optimal actuator placement problem is addressed to improve the disturbance attenuation capability of adaptive optics systems, since actuator placement is directly related to system performance. For this purpose, the linear-quadratic cost function is chosen so that optimized actuator layouts can be specialized according to the type of wavefront aberration. The placement is then posed as a convex optimization problem, and the cost function is formulated for the disturbance attenuation case. The effectiveness of the presented method is demonstrated by simulation results.
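
A much-simplified stand-in for the placement problem (hypothetical Gaussian influence functions and a static quadratic residual-energy cost instead of the article's closed-loop linear-quadratic formulation and convex program) is sketched below: candidate actuator layouts are scored by how well they can cancel a given wavefront aberration.

```python
import numpy as np
from itertools import combinations

n_pts = 50                                   # wavefront sample points (1D cut)
s = np.linspace(-1.0, 1.0, n_pts)
aberration = 0.8 * s**2 - 0.3 * s**3         # assumed disturbance to attenuate
candidates = np.linspace(-0.9, 0.9, 9)       # candidate actuator positions

def influence(center, width=0.3):
    """Hypothetical Gaussian influence function of one actuator."""
    return np.exp(-((s - center) / width) ** 2)

def layout_cost(positions):
    """Residual energy after least-squares actuator commands (quadratic cost)."""
    A = np.column_stack([influence(c) for c in positions])
    u, *_ = np.linalg.lstsq(A, aberration, rcond=None)
    resid = aberration - A @ u
    return resid @ resid

best = min(combinations(candidates, 4), key=layout_cost)
print("best 4-actuator layout:", np.round(best, 2), "cost:", layout_cost(best))
```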


2014 ◽  
Vol 665 ◽  
pp. 643-646
Author(s):  
Ying Liu ◽  
Yan Ye ◽  
Chun Guang Li

A metalearning algorithm learns the base learning algorithm, with the aim of improving the performance of the learning system. The incremental delta-bar-delta (IDBD) algorithm is such a metalearning algorithm. On the other hand, sparse algorithms are gaining popularity because of their good performance and wide applications. In this paper, we propose a sparse IDBD algorithm that takes the sparsity of the system into account. A norm penalty is incorporated into the cost function of the standard IDBD, which is equivalent to adding a zero attractor to the iterations and thus can speed up convergence if the system of interest is indeed sparse. Simulations demonstrate that the proposed algorithm is superior to competing algorithms in sparse system identification.
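
A rough sketch of the idea (the exact norm penalty used in the paper is not reproduced here; the l1-style zero attractor, the constants, and the step-size clipping below are assumptions) is given next: IDBD adapts one step size per weight through a meta step size, and the added attractor term pulls small weights toward zero, which helps when the unknown system is sparse.

```python
import numpy as np

def sparse_idbd(x_seq, d_seq, n_taps, theta=0.005, beta0=np.log(0.01), rho=1e-4):
    """Identify an FIR system from input x_seq and desired output d_seq."""
    w = np.zeros(n_taps)
    beta = np.full(n_taps, beta0)          # log of the per-weight step sizes
    h = np.zeros(n_taps)                   # per-weight traces for the meta update
    for k in range(n_taps - 1, len(x_seq)):
        x = x_seq[k - n_taps + 1:k + 1][::-1]                     # current input vector
        delta = d_seq[k] - w @ x                                  # prediction error
        beta = np.clip(beta + theta * delta * x * h, -12.0, -3.0) # meta update (clipped as a safeguard)
        alpha = np.exp(beta)
        w += alpha * delta * x - rho * np.sign(w)                 # LMS step + zero attractor
        h = h * np.maximum(0.0, 1.0 - alpha * x * x) + alpha * delta * x
    return w

rng = np.random.default_rng(0)
true_w = np.zeros(32)
true_w[[3, 12, 25]] = [1.0, -0.5, 0.3]                            # sparse unknown system
x = rng.standard_normal(5000)
d = np.convolve(x, true_w)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w_hat = sparse_idbd(x, d, n_taps=32)
print("indices of the three largest estimated taps:", np.sort(np.argsort(np.abs(w_hat))[-3:]))
```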

