The Application of an Evolutionary Algorithm to the Optimization of a Mesoscale Meteorological Model

2009 ◽  
Vol 48 (2) ◽  
pp. 317-329 ◽  
Author(s):  
Lance O’Steen ◽  
David Werth

Abstract It is shown that a simple evolutionary algorithm can optimize a set of mesoscale atmospheric model parameters with respect to agreement between the mesoscale simulation and a limited set of synthetic observations. This is illustrated using the Regional Atmospheric Modeling System (RAMS). A set of 23 RAMS parameters is optimized by minimizing a cost function based on the root-mean-square (rms) error between the RAMS simulation and synthetic data (observations derived from a separate RAMS simulation). It is found that the optimization can be done with relatively modest computer resources; therefore, operational implementation is possible. The overall number of simulations needed to obtain a specific reduction of the cost function is found to depend strongly on the procedure used to perturb the “child” parameters relative to their “parents” within the evolutionary algorithm. In addition, the choice of meteorological variables that are included in the rms error and their relative weighting are also found to be important factors in the optimization.
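A minimal sketch of the kind of evolutionary search described above, assuming a simple elitist (1+λ) scheme in which children are Gaussian perturbations of the best parameter vector and fitness is a weighted RMS error. The weighted_rms helper and the toy quadratic target are stand-ins for an actual RAMS run, which is far too costly to reproduce here.

```python
import numpy as np

def weighted_rms(sim, obs, weights):
    """Weighted RMS error over several meteorological fields (dicts of arrays)."""
    return np.sqrt(sum(w * np.mean((sim[k] - obs[k]) ** 2)
                       for k, w in weights.items()))

def evolve(cost, parent, sigma, n_children=10, n_generations=50, seed=None):
    """Elitist (1+lambda) search: children perturb the current best parameters."""
    rng = np.random.default_rng(seed)
    best, best_cost = parent, cost(parent)
    for _ in range(n_generations):
        children = best + sigma * rng.standard_normal((n_children, best.size))
        costs = np.array([cost(child) for child in children])
        i = int(costs.argmin())
        if costs[i] < best_cost:          # keep the best of parent and children
            best, best_cost = children[i], costs[i]
    return best, best_cost

# Toy usage: recover a hypothetical 23-element parameter vector from a quadratic
# cost; in the study, evaluating cost(p) requires a full RAMS simulation.
target = np.linspace(-1.0, 1.0, 23)
best, final_cost = evolve(lambda p: np.sqrt(np.mean((p - target) ** 2)),
                          parent=np.zeros(23), sigma=0.1, seed=0)
```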

2012 ◽  
Vol 2012 ◽  
pp. 1-16 ◽  
Author(s):  
Francisco José Lopes de Lima ◽  
Enilson Palmeira Cavalcanti ◽  
Enio Pereira de Souza ◽  
Emerson Mariano da Silva

This work aims to describe the wind power density at five sites in the State of Paraíba and to assess the ability of the mesoscale atmospheric model BRAMS (Brazilian developments on the Regional Atmospheric Modeling System) to describe the wind intensity at São Gonçalo, Monteiro, Patos, Campina Grande, and João Pessoa. The observational data are wind speed and direction at 10 m height, provided by the National Institute of Meteorology (INMET). We used the numerical model BRAMS in simulations for two different months, running the model for the rainy months of March and April. It is concluded that the BRAMS model is able to satisfactorily reproduce the monthly cycle of the wind regime considered, as well as the prevailing direction. However, the model tends to underestimate the wind speed.
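A brief sketch of the bias and RMSE statistics implied by such a comparison of simulated and observed 10 m wind speed; a negative bias corresponds to the underestimation noted above. The hourly series below are synthetic assumptions, not INMET or BRAMS data.

```python
import numpy as np

def wind_speed_stats(model, obs):
    """Bias and RMSE of simulated vs. observed 10 m wind speed (m/s)."""
    model, obs = np.asarray(model), np.asarray(obs)
    bias = float(np.mean(model - obs))      # negative bias -> model underestimates
    rmse = float(np.sqrt(np.mean((model - obs) ** 2)))
    return bias, rmse

# Hypothetical hourly series for one station and one month.
rng = np.random.default_rng(0)
obs = 4.0 + rng.gamma(2.0, 1.0, size=720)
model = 0.85 * obs + rng.normal(0.0, 0.5, size=720)   # mimics an underestimating model
print(wind_speed_stats(model, obs))
```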


Geophysics ◽  
2011 ◽  
Vol 76 (3) ◽  
pp. F203-F214 ◽  
Author(s):  
A. Abubakar ◽  
M. Li ◽  
G. Pan ◽  
J. Liu ◽  
T. M. Habashy

We have developed an inversion algorithm for jointly inverting controlled-source electromagnetic (CSEM) data and magnetotelluric (MT) data. It is well known that CSEM and MT data provide complementary information about the subsurface resistivity distribution; hence, it is useful to derive earth resistivity models that simultaneously and consistently fit both data sets. Because we are dealing with a large-scale computational problem, one usually uses an iterative technique in which a predefined cost function is optimized. One of the issues of this simultaneous joint inversion approach is how to assign the relative weights on the CSEM and MT data in constructing the cost function. We propose a multiplicative cost function instead of the traditional additive one. This function does not require an a priori choice of the relative weights between these two data sets. It will adaptively put CSEM and MT data on equal footing in the inversion process. The inversion is accomplished with a regularized Gauss-Newton minimization scheme where the model parameters are forced to lie within their upper and lower bounds by a nonlinear transformation procedure. We use a line search scheme to enforce a reduction of the cost function at each iteration. We tested our joint inversion approach on synthetic and field data.
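A hedged sketch contrasting an additive cost, which needs user-chosen relative weights, with a multiplicative cost built from normalized data misfits, in the spirit of the approach described above; this is not the authors' exact formulation, and the misfit normalization is an assumption.

```python
import numpy as np

def normalized_misfit(d_pred, d_obs, sigma):
    """Data misfit normalized by uncertainty and by the number of data."""
    r = (d_pred - d_obs) / sigma
    return np.sum(np.abs(r) ** 2) / r.size

def additive_cost(csem, mt, w_csem, w_mt):
    # Requires an a priori choice of the relative weights.
    return w_csem * normalized_misfit(*csem) + w_mt * normalized_misfit(*mt)

def multiplicative_cost(csem, mt):
    # No explicit weights: each normalized misfit scales the other, so the
    # CSEM and MT data are put on an equal footing during the inversion.
    return normalized_misfit(*csem) * normalized_misfit(*mt)

# Illustrative usage with synthetic residuals (tuples of predicted, observed, sigma).
rng = np.random.default_rng(0)
csem = (rng.normal(0.0, 1.0, 200), np.zeros(200), 0.5 * np.ones(200))
mt = (rng.normal(0.0, 1.0, 50), np.zeros(50), 0.5 * np.ones(50))
print(additive_cost(csem, mt, w_csem=0.5, w_mt=0.5), multiplicative_cost(csem, mt))
```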


2008 ◽  
Vol 136 (12) ◽  
pp. 4653-4667 ◽  
Author(s):  
Jahrul M. Alam ◽  
John C. Lin

Abstract An improved treatment of advection is essential for atmospheric transport and chemistry models. Eulerian treatments are generally plagued with instabilities, unrealistic negative constituent values, diffusion, and dispersion errors. A higher-order Eulerian model improves one error at significant cost but magnifies another error. The cost of semi-Lagrangian models is too high for many applications. Furthermore, traditional trajectory “Lagrangian” models do not solve both the dynamical and tracer equations simultaneously in the Lagrangian frame. A fully Lagrangian numerical model is, therefore, presented for calculating atmospheric flows. The model employs a Lagrangian mesh of particles to approximate the nonlinear advection processes for all dependent variables simultaneously. Verification results for simulating sea-breeze circulations in a dry atmosphere are presented. Comparison with Defant’s analytical solution for the sea-breeze system enabled quantitative assessment of the model’s convergence and stability. An average of 20 particles in each cell of an 11 × 20 staggered grid system are required to predict the two-dimensional sea-breeze circulation, which accounts for a total of about 4400 particles in the Lagrangian mesh. Comparison with Eulerian and semi-Lagrangian models shows that the proposed fully Lagrangian model is more accurate for the sea-breeze circulation problem. Furthermore, the Lagrangian model is about 20 times as fast as the semi-Lagrangian model and about 2 times as fast as the Eulerian model. These results point toward the value of constructing an atmospheric model based on the fully Lagrangian approach.
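A toy sketch of the fully Lagrangian idea under a prescribed solid-body rotation: the dependent variables ride on the particles, so advection reduces to moving particle positions and no Eulerian advection term is discretized. The particle count loosely mirrors the roughly 4400 particles quoted above; everything else is illustrative.

```python
import numpy as np

def advect_particles(x, q, velocity, dt, n_steps):
    """Move particles with the flow; carried quantities q need no advection term."""
    for _ in range(n_steps):
        # Midpoint (RK2) update of the particle positions.
        k1 = velocity(x)
        x = x + dt * velocity(x + 0.5 * dt * k1)
    return x, q   # q is unchanged by pure advection in the Lagrangian frame

# Roughly 20 particles per cell of an 11 x 20 grid -> about 4400 particles.
rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, size=(4400, 2))
q = np.exp(-10.0 * np.sum(x ** 2, axis=1))              # a passive tracer
vel = lambda p: np.stack([-p[:, 1], p[:, 0]], axis=1)   # solid-body rotation
x_new, q_new = advect_particles(x, q, vel, dt=0.01, n_steps=100)
```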


2015 ◽  
Vol 15 (17) ◽  
pp. 10019-10031 ◽  
Author(s):  
S. Lim ◽  
S. K. Park ◽  
M. Zupanski

Abstract. Ozone (O3) plays an important role in chemical reactions and is usually incorporated in chemical data assimilation (DA). In tropical cyclones (TCs), O3 usually shows a lower concentration inside the eyewall and an elevated concentration around the eye, impacting meteorological as well as chemical variables. To identify the impact of O3 observations on TC structure, including meteorological and chemical information, we developed a coupled meteorology–chemistry DA system by employing the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) and an ensemble-based DA algorithm – the maximum likelihood ensemble filter (MLEF). For a TC case that occurred over East Asia, Typhoon Nabi (2005), our results indicate that the ensemble forecast is reasonable, accompanied by larger background state uncertainty over the TC and also over eastern China. Similarly, the assimilation of O3 observations impacts meteorological and chemical variables near the TC and over eastern China. The strongest impact on air quality in the lower troposphere was over China, likely due to pollution advection. In the vicinity of the TC, however, the strongest adjustment of the chemical variables occurred at higher levels. The impact on meteorological variables was similar both over China and near the TC. The analysis results are verified using several measures that include the cost function, root mean square (RMS) error with respect to observations, and degrees of freedom for signal (DFS). All measures indicate a positive impact of DA on the analysis – the cost function and RMS error decreased by 16.9 % and 8.87 %, respectively. In particular, the DFS indicates a strong positive impact of observations in the TC area, with a weaker maximum over northeastern China.
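For illustration only, a generic stochastic ensemble Kalman update on a toy state, showing how an ensemble provides background uncertainty and how assimilation reduces the RMS error with respect to observations. This is a stand-in sketch, not the MLEF algorithm or the coupled WRF-Chem system used in the study.

```python
import numpy as np

def enkf_update(Xf, y, H, R, rng):
    """Stochastic ensemble Kalman update (a generic stand-in, not MLEF itself)."""
    n, N = Xf.shape
    Xp = (Xf - Xf.mean(axis=1, keepdims=True)) / np.sqrt(N - 1)  # background perturbations
    HXp = H @ Xp
    K = (Xp @ HXp.T) @ np.linalg.inv(HXp @ HXp.T + R)            # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return Xf + K @ (Y - H @ Xf)

rng = np.random.default_rng(2)
truth = np.sin(np.linspace(0, 2 * np.pi, 40))
Xf = truth[:, None] + 0.5 * rng.standard_normal((40, 20))        # 20-member ensemble
H = np.eye(40)[::4]                                              # observe every 4th point
R = 0.1 * np.eye(H.shape[0])
y = H @ truth + rng.multivariate_normal(np.zeros(H.shape[0]), R)
Xa = enkf_update(Xf, y, H, R, rng)
rms = lambda X: np.sqrt(np.mean((H @ X.mean(axis=1) - y) ** 2))
print(f"RMS vs. observations: {rms(Xf):.3f} -> {rms(Xa):.3f}")
```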


Author(s):  
LAURENT OUDRE

This paper presents a method for adapting the cost function in the Monge–Kantorovich Problem (MKP) to a classification task. More specifically, we introduce a criterion that allows us to learn a cost function which tends to produce large distance values for elements belonging to different classes and small distance values for elements belonging to the same class. Under some additional constraints (one of them being the well-known Monge condition), we show that the optimization of this criterion can be written as a linear programming problem. Experimental results on synthetic data show that the resulting optimal cost function provides good retrieval performance in the presence of two types of perturbations commonly found in histograms. When compared to a set of commonly used cost functions, our optimal cost function performs as well as the best cost function in the set, which shows that it can adapt well to the task. Promising results are also obtained on real data for two-class image retrieval based on grayscale intensity histograms.
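A compact sketch of the flavor of this approach, assuming 1-D histograms, a simple difference-of-sums criterion (small within-class distances, large between-class distances), and the fact that, under the Monge condition, the optimal coupling is the monotone (northwest-corner) plan, so each transport distance is linear in the cost matrix and the learning problem becomes a linear program. The criterion, bounds, and data below are simplified assumptions, not the paper's exact formulation.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import linprog

def northwest_corner(p, q):
    """Monotone coupling between equal-mass histograms; it is the optimal
    transport plan for any cost matrix satisfying the Monge condition."""
    p, q = p.astype(float).copy(), q.astype(float).copy()
    plan = np.zeros((p.size, q.size))
    i = j = 0
    while i < p.size and j < q.size:
        t = min(p[i], q[j])
        plan[i, j] = t
        p[i] -= t
        q[j] -= t
        if p[i] <= 1e-12:
            i += 1
        else:
            j += 1
    return plan

# Two classes of synthetic 1-D histograms (peaks at different bins).
n, rng = 8, np.random.default_rng(3)
def make_hist(center):
    h = np.exp(-0.5 * (np.arange(n) - center) ** 2) + 0.05 * rng.random(n)
    return h / h.sum()
hists = [make_hist(2) for _ in range(5)] + [make_hist(6) for _ in range(5)]
labels = [0] * 5 + [1] * 5

# The plan does not depend on the (Monge) cost, so each pairwise distance is
# linear in the cost matrix: "small within-class, large between-class" is an LP.
w = np.zeros(n * n)
for a, b in combinations(range(len(hists)), 2):
    plan = northwest_corner(hists[a], hists[b]).ravel()
    w += plan if labels[a] == labels[b] else -plan   # minimize within, maximize between

monge_rows = []   # Monge condition: c[i,j] + c[i+1,j+1] <= c[i,j+1] + c[i+1,j]
for i in range(n - 1):
    for j in range(n - 1):
        row = np.zeros(n * n)
        row[i * n + j] = row[(i + 1) * n + (j + 1)] = 1.0
        row[i * n + (j + 1)] = row[(i + 1) * n + j] = -1.0
        monge_rows.append(row)

res = linprog(w, A_ub=np.array(monge_rows), b_ub=np.zeros(len(monge_rows)),
              bounds=(0.0, 1.0), method="highs")
learned_cost = res.x.reshape(n, n)
```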


Author(s):  
Byamakesh Nayak ◽  
Sangeeta Sahu

This article estimates the unknown dc motor parameters by adapting an adaptive model to a reference model created from experimental data on the armature current and speed response of a separately excited dc motor. The field-flux dynamics, which are usually ignored, are included to model the dynamics of the motor. The block diagram including the flux dynamics and model parameters is considered as the adaptive model. The integral time square error between the experimental data and the corresponding adaptive-model data is taken as the cost function. The whale optimization algorithm is used to minimize the cost function. Additionally, to improve the performance of the optimization algorithm and the accuracy of the result, the experimental data are divided into three intervals, which form three inequality constraints. A fixed penalty value is added to the cost function when these constraints are violated. The effectiveness of the estimation with two different methods is validated by the convergence curve.
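A hedged sketch of the cost construction: a dc motor response is simulated (with constant flux for brevity, unlike the article, which includes the field-flux dynamics), the integral time square error against stand-in "experimental" data is formed, and a fixed penalty is added for each of three interval constraints that is violated. SciPy's differential evolution is used here purely as a stand-in for the whale optimization algorithm; all parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

def motor_response(params, t, Va=220.0, TL=5.0):
    """Armature current and speed of a dc motor (constant flux assumed here)."""
    Ra, La, J, B, K = params
    def rhs(_, x):
        i, w = x
        return [(Va - Ra * i - K * w) / La, (K * i - B * w - TL) / J]
    return solve_ivp(rhs, (t[0], t[-1]), [0.0, 0.0], t_eval=t, rtol=1e-6).y

t = np.linspace(0.0, 2.0, 400)
true_params = np.array([1.2, 0.05, 0.08, 0.02, 1.1])   # Ra, La, J, B, K
exp_data = motor_response(true_params, t)              # stand-in "experimental" data

def itse_cost(params, penalty=1.0e3, tol=50.0):
    """Integral time square error plus a fixed penalty per violated interval."""
    err = np.sum((motor_response(params, t) - exp_data) ** 2, axis=0)
    cost = np.sum(t * err) * (t[1] - t[0])              # time-weighted squared error
    for segment in np.array_split(np.arange(t.size), 3):
        if np.sum(err[segment]) > tol:                  # three inequality constraints
            cost += penalty
    return cost

# Differential evolution stands in here for the whale optimization algorithm.
bounds = [(0.5, 2.0), (0.01, 0.1), (0.01, 0.2), (0.005, 0.05), (0.5, 2.0)]
result = differential_evolution(itse_cost, bounds, maxiter=20, popsize=10, seed=4)
```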


2008 ◽  
Vol 9 (3) ◽  
pp. 507-520 ◽  
Author(s):  
Daniel E. Comarazamy ◽  
Jorge E. González

Abstract The ability of a mesoscale atmospheric model to reproduce the spatial distribution of the precipitation of the Caribbean island of Puerto Rico during an early rainfall season month (April) is evaluated in this paper, taking the month of April 1998 as the primary test case, and analyzed in detail with subsequent simulations for April 1993. The monthly accumulated rainfall was simulated using the Regional Atmospheric Modeling System (RAMS), and the results were validated with precipitation data from 15 cooperative stations located throughout the island. The monthlong numerical simulation for April 1998 replicated the observed precipitation pattern, including the general spatial distribution, and daily and monthly totals, to varying degrees of accuracy. At specific locations, errors ranged from 2% in the rainy mountains to 82% in the San Juan metropolitan area, with a general tendency of the model to produce lower precipitation values throughout the simulation domain. An error analysis proved that the accuracy of the simulation is independent of elevation. The station data showed two dominant precipitation events during the month of April 1998: one on 2 April and the other on 16 April. The model was able to replicate the precipitation observed during the first precipitation event with less precision than for the second event. This might be attributed to the model’s inability to capture the large-scale forcing that produced the recorded amounts of rainfall observed during the second precipitation event. The results for total accumulated precipitation for April 1993 were very similar to the results for the April 1998 simulation, for both the spatial distribution and total values of rainfall.
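A small sketch of the kind of station-wise validation described above: absolute percent error of monthly accumulated rainfall at each station, plus a correlation check of error against elevation. The station totals and elevations below are hypothetical, not the Puerto Rico cooperative-station data.

```python
import numpy as np

def station_errors(simulated, observed):
    """Absolute percent error of monthly accumulated rainfall at each station."""
    simulated, observed = np.asarray(simulated), np.asarray(observed)
    return 100.0 * np.abs(simulated - observed) / observed

# Hypothetical monthly totals (mm) at a few stations and station elevations (m).
obs = np.array([310.0, 185.0, 96.0, 140.0, 72.0])
sim = np.array([304.0, 150.0, 60.0, 118.0, 31.0])
elev = np.array([850.0, 420.0, 15.0, 300.0, 10.0])
err = station_errors(sim, obs)
# Independence of accuracy from elevation can be checked with a correlation.
r = np.corrcoef(err, elev)[0, 1]
print(err.round(1), round(r, 2))
```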


Geophysics ◽  
1992 ◽  
Vol 57 (11) ◽  
pp. 1428-1434 ◽  
Author(s):  
K. J. Ellefsen ◽  
M. N. Toksöz ◽  
K. M. Tubman ◽  
C. H. Cheng

We have developed a method that estimates a shear modulus of a transversely isotropic formation using the tube wave generated during acoustic logging. (The symmetry axis of the anisotropy is assumed to parallel the borehole.) The inversion, which is implemented in the frequency-wavenumber domain, is based upon a cost function that has three terms: a measure of the misfit between the observed and predicted wavenumbers of the tube wave, a measure of the misfit between the current estimate of the modulus and its most likely value, and penalty functions that constrain the estimate to physically acceptable values. The largest contribution to the value of the cost function ordinarily comes from the first term, indicating that the estimate of the modulus depends mostly on the data. Because the cost function has only one minimum, it can be found using standard optimization methods. The minimum is well defined, indicating that the estimate of the modulus is well resolved. Estimates of the modulus from synthetic data are almost always within 1 percent of their correct value. Estimates from field data that were collected in a formation with a high clay content are typical of transversely isotropic rocks.
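A simplified sketch of a three-term cost of the kind described: a wavenumber misfit, a prior term pulling the modulus toward its most likely value, and penalty terms keeping it within physically acceptable bounds. The low-frequency tube-wave relation used as the forward model is a textbook isotropic approximation standing in for the authors' full borehole model, and all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def tube_wavenumber(freq, mu, rho_f=1000.0, vf=1500.0):
    """Low-frequency isotropic tube-wave relation used as a stand-in forward model."""
    v_tube = vf / np.sqrt(1.0 + rho_f * vf ** 2 / mu)   # mu: formation shear modulus (Pa)
    return 2.0 * np.pi * freq / v_tube

freq = np.linspace(1.0e3, 5.0e3, 20)                    # Hz
mu_true, mu_prior = 8.0e9, 6.0e9                        # Pa
rng = np.random.default_rng(5)
k_obs = tube_wavenumber(freq, mu_true) * (1.0 + 0.01 * rng.standard_normal(freq.size))

def cost(mu, sigma_k=0.05, sigma_mu=3.0e9, lo=1.0e9, hi=3.0e10):
    misfit = np.sum(((k_obs - tube_wavenumber(freq, mu)) / sigma_k) ** 2)   # data term
    prior = ((mu - mu_prior) / sigma_mu) ** 2                               # prior term
    penalty = 1.0e6 * (max(0.0, lo - mu) ** 2 + max(0.0, mu - hi) ** 2)     # bounds
    return misfit + prior + penalty

estimate = minimize_scalar(cost, bounds=(1.0e9, 3.0e10), method="bounded").x
```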


2020 ◽  
Vol 221 (1) ◽  
pp. 617-639 ◽  
Author(s):  
L Colli ◽  
H-P Bunge ◽  
J Oeser

SUMMARY The adjoint method is a powerful technique to compute sensitivities (Fréchet derivatives) with respect to model parameters, allowing one to solve inverse problems where analytical solutions are not available or where the cost of solving the associated forward problem many times is prohibitive. In geodynamics it has been applied to the restoration problem of mantle convection—that is, to reconstruct past mantle flow states with dynamic models by finding optimal flow histories relative to the current model state—so that poorly known mantle flow parameters can be tested against observations gleaned from the geological record. By enabling us to construct time-dependent Earth models the adjoint method has the potential to link observations from seismology, geology, mineral physics and palaeomagnetism in a dynamically consistent way, greatly enhancing our understanding of the solid Earth system. Synthetic experiments demonstrate for the ideal case of no model error and no data error that the adjoint method restores mantle flow over timescales on the order of a transit time (≈100 Myr). But in reality unavoidable limitations enter the inverse problem in the form of poorly known model parameters and uncertain state estimations, which may result in systematic errors of the reconstructed flow history. Here we use high-resolution, 3-D spherical mantle circulation models to perform a systematic study of synthetic adjoint inversions, where we deliberately insert a mismatch between the model used to generate the synthetic data and the model used for carrying out the inversion. By considering mismatches in rheology, final state and history of surface velocities, we find that mismatched model parameters do not inhibit misfit reduction: the adjoint method still produces a flow history that fits the estimated final state. However, the recovered initial state can be a poor approximation of the true initial state, where reconstructed and true flow histories diverge exponentially back in time and where, for the more divergent cases, the reconstructed initial state includes physically implausible structures, especially in and near the thermal boundary layers. Consequently, a complete reduction of the cost function may not be desirable when the goal is a best fit to the initial condition. When the estimated final state is a noisy low-pass version of the true final state, choosing an appropriate misfit function can reduce the generation of artefacts in the initial state. While none of the model mismatches considered in this study, taken singly, results in a complete failure of the recovered flow history, additional work is needed to assess their combined effects.
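A toy illustration of the adjoint machinery on a 1-D periodic diffusion problem rather than a mantle circulation model: a forward run from a guessed initial state, an adjoint run that propagates the final-state misfit back to a gradient with respect to the initial condition, and a steepest-descent update. Because the forward operator here is symmetric, the adjoint run reuses the same stencil; the ill-conditioning of heavily damped scales loosely mirrors the back-in-time divergence discussed above.

```python
import numpy as np

n, nu, dt, nsteps = 64, 0.05, 0.1, 200
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)

def step(u):
    """One explicit diffusion step on a periodic grid (the forward operator A)."""
    return u + nu * dt * (np.roll(u, 1) - 2.0 * u + np.roll(u, -1))

def forward(u0):
    u = u0.copy()
    for _ in range(nsteps):
        u = step(u)
    return u

u0_true = np.sin(x) + 0.5 * np.sin(3.0 * x)
final_obs = forward(u0_true)            # the "estimated final state" (noise-free here)

u0 = np.zeros(n)                        # first guess for the initial condition
for _ in range(50):
    residual = forward(u0) - final_obs  # final-state misfit
    lam = residual.copy()               # adjoint run: A is symmetric, so applying the
    for _ in range(nsteps):             # same stencil nsteps times maps the misfit to
        lam = step(lam)                 # the gradient with respect to u0
    u0 -= lam                           # steepest-descent update
# The low wavenumbers present here are recovered well; heavily damped scales
# would be ill-posed, analogous to the artefacts discussed in the abstract.
recovery_error = np.max(np.abs(u0 - u0_true))
```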


Author(s):  
GUOLI ZHANG ◽  
GENGYIN LI ◽  
HONG XIE ◽  
JIANWEI MA

In this paper we propose a new economic load dispatch model that considers cost function coefficients with uncertainties together with ramp-rate constraints. The uncertain parameters are represented by fuzzy numbers, and the model is called the fuzzy dynamic economic load dispatch (FDELD) model. A novel hybrid evolutionary algorithm combined with a fuzzy number ranking method is proposed to solve the FDELD problem. The hybrid evolutionary algorithm combines an evolutionary algorithm, which has strong global search ability, with a quasi-simplex technique, which has better local search capability. The fuzzy number ranking method is used to compare fuzzy cost function values when optimizing the fuzzy cost function. In addition, this paper gives a novel method for handling the constraints directly, so that it is not necessary to construct a penalty function, as is commonly done to handle constraints. The experimental study shows that the FDELD model is practical and that the proposed algorithm and techniques are very efficient at solving the FDELD problem.
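A minimal sketch of two ingredients mentioned above, under assumed numbers: quadratic cost coefficients represented as triangular fuzzy numbers and ranked by their centroid, and ramp-rate constraints handled directly by clipping rather than through a penalty function. The ranking rule, the two-unit system and all coefficients are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Triangular fuzzy numbers (low, mode, high) for each unit's quadratic cost
# coefficients a, b, c in  cost = a*P^2 + b*P + c  (illustrative values).
a = np.array([[0.0040, 0.0050, 0.0060], [0.0070, 0.0080, 0.0090]])
b = np.array([[1.8, 2.0, 2.2], [1.4, 1.5, 1.6]])
c = np.array([[90.0, 100.0, 110.0], [110.0, 120.0, 130.0]])

def fuzzy_cost(P):
    """Total fuzzy cost of a dispatch P (one power level per unit), as a triple."""
    P = np.asarray(P)[:, None]
    return np.sum(a * P ** 2 + b * P + c, axis=0)   # componentwise, valid for P >= 0

def centroid(tri):
    """Centroid of a triangular fuzzy number, used here to rank fuzzy costs."""
    return float(np.mean(tri))

def ramp_limit(P_new, P_prev, ramp=30.0, pmin=50.0, pmax=200.0):
    """Handle ramp-rate constraints directly by clipping (no penalty function)."""
    return np.clip(P_new, np.maximum(pmin, P_prev - ramp),
                   np.minimum(pmax, P_prev + ramp))

# Compare two candidate dispatches for the next hour given the previous dispatch.
P_prev = np.array([120.0, 150.0])
cand1 = ramp_limit(np.array([100.0, 180.0]), P_prev)
cand2 = ramp_limit(np.array([140.0, 130.0]), P_prev)
best = min((cand1, cand2), key=lambda P: centroid(fuzzy_cost(P)))
```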

