The Application of a Genetic Algorithm to the Optimization of a Mesoscale Model for Emergency Response

Abstract Besides solving the equations of momentum, heat, and moisture transport on the model grid, mesoscale weather models must account for subgrid-scale processes that affect the resolved model variables. These are simulated with model parameterizations, which often rely on values preset by the user. Such ‘free’ model parameters, along with others set to initialize the model, are often poorly constrained, requiring that a user select each from a range of plausible values. Finding the values to optimize any forecasting tool can be accomplished with a search algorithm, and one such process – the genetic algorithm (GA) – has become especially popular. As applied to modeling, GAs represent a Darwinian process – an ensemble of simulations is run with a different set of parameter values for each member, and the members subsequently judged to be most accurate are selected as ‘parents’ who pass their parameters onto a new generation. At the Department of Energy’s Savannah River Site in South Carolina, we are applying a GA to the Regional Atmospheric Modeling System (RAMS) mesoscale weather model, which supplies input to a model to simulate the dispersion of an airborne contaminant as part of the site’s emergency response preparations. An ensemble of forecasts is run each day, weather data are used to ‘score’ the individual members of the ensemble, and the parameters from the best members are used for the next day’s forecasts. As meteorological conditions change, the parameters change as well, maintaining a model configuration that is best adapted to atmospheric conditions.
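The Darwinian selection loop described above can be sketched in a few lines. A minimal illustration, assuming a population of parameter dictionaries and precomputed accuracy scores; the parameter names and scoring are hypothetical stand-ins, not actual RAMS settings:

```python
import random

def next_generation(population, scores, n_parents=2, mutation_sd=0.05):
    """One GA step: keep the best-scoring members as 'parents', then
    refill the population with mutated crossovers of those parents."""
    ranked = sorted(zip(scores, population),
                    key=lambda pair: pair[0], reverse=True)
    parents = [member for _, member in ranked[:n_parents]]
    children = list(parents)  # elitism: parents survive unchanged
    while len(children) < len(population):
        a, b = random.sample(parents, 2)
        # Uniform crossover plus Gaussian mutation on each free parameter.
        child = {key: random.choice((a[key], b[key]))
                      + random.gauss(0.0, mutation_sd)
                 for key in a}
        children.append(child)
    return children

# Hypothetical 'free parameters' for six ensemble members.
pop = [{"soil_moisture": random.uniform(0.1, 0.4),
        "roughness_length": random.uniform(0.01, 0.10)} for _ in range(6)]
scores = [random.random() for _ in pop]  # stand-in for skill vs. weather data
new_pop = next_generation(pop, scores)
```

In the operational setting described above, the scores would come from comparing each member's forecast with observed weather data, and the new population would configure the next day's ensemble.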

2019 ◽  
Vol 58 (3) ◽  
pp. 511-525
Author(s):  
David Werth ◽  
Grace Maze ◽  
Robert Buckley ◽  
Steven Chiswell

Abstract Airborne tracer simulations are typically performed using a dispersion model driven by a high-resolution meteorological model. Besides solving the dynamic equations of momentum, heat, and moisture on the resolved model grid, mesoscale models must account for subgrid-scale fluxes and other unresolved processes. These are estimated through parameterization schemes of eddy diffusion, convection, and surface interactions, and they make use of prescribed parameters set by the user. Such “free” model parameters are often poorly constrained, and a range of plausible values exists for each. Evolutionary programming (EP) is a process to improve the selection of the parameters. A population of simulations is first run with a different set of parameter values for each member, and the member judged most accurate is selected as the “parent” of a new “generation.” After a number of iterations, the simulations should approach a configuration that is best adapted to the atmospheric conditions. We apply the EP process to simulate the first release of the 1994 European Tracer Experiment (ETEX) project, which comprised two experiments in which a tracer was released in western France and sampled by an observing network. The EP process is used to improve a simulation of the RAMS mesoscale weather model, with weather data collected during ETEX being used to “score” the individual members according to how well each simulation matches the observations. The meteorological simulations from before and after application of the EP process are each used to force a dispersion model to create a simulation of the ETEX release, and substantial improvement is observed when these are validated against sampled tracer concentrations.
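The single-parent scheme described here resembles a (1+λ) evolution strategy: the best member survives and all others are mutated copies of it. A toy sketch under that assumption; the parameter names and score are illustrative, not those used with RAMS:

```python
import random

def ep_generation(population, score, sigma=0.05):
    """One EP iteration: the best-scoring member becomes the sole
    parent; the next generation is that parent plus mutated copies."""
    parent = max(population, key=score)
    offspring = [{k: v + random.gauss(0.0, sigma) for k, v in parent.items()}
                 for _ in range(len(population) - 1)]
    return [parent] + offspring

# Toy score: closeness to a 'true' best-adapted configuration.
target = {"eddy_diffusion": 0.3, "convection": 0.7}
def score(member):
    return -sum((member[k] - target[k]) ** 2 for k in target)

pop = [{"eddy_diffusion": random.random(), "convection": random.random()}
       for _ in range(8)]
initial_best = score(max(pop, key=score))
for _ in range(20):  # the population should approach the target
    pop = ep_generation(pop, score)
```

Because the parent is carried over unchanged, the best score is non-decreasing across iterations, mirroring the abstract's claim that the simulations approach a best-adapted configuration.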


Author(s):  
Amandeep Kaur Sohal ◽  
Ajay Kumar Sharma ◽  
Neetu Sood

Background: Information gathering is a typical and important task in agriculture monitoring and military surveillance. In these applications, minimizing energy consumption and maximizing network lifetime are of prime importance for green computing. Because wireless sensor networks comprise a large number of sensors with limited battery power, deployed at remote geographical locations to monitor physical events, it is imperative to minimize energy consumption while maintaining network coverage. WSNs enable accurate monitoring of remote environments by collecting data intelligently from the individual sensors. Objective: Motivated by the green computing aspect of wireless sensor networks, the paper presents an Energy-efficient Weight-based Coverage Enhancing protocol using Genetic Algorithm (WCEGA). WCEGA is designed to monitor remote areas continuously for a longer time with the least power consumption. Method: The cluster-based algorithm consists of two phases: cluster formation and data transmission. In cluster formation, the selection of cluster heads and cluster members is based on energy- and coverage-efficiency parameters: residual energy, overlapping degree, node density, and neighbor degree. Data transmission between cluster heads (CHs) and the sink is based on a well-known evolutionary search algorithm, the genetic algorithm. Conclusion: The results of WCEGA are compared with other established protocols and show significant improvements in full coverage and lifetime of approximately 40% and 45%, respectively.
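Cluster-head selection from the four governing parameters named above can be illustrated with a simple weighted score. The weights and node values below are hypothetical, not taken from the WCEGA paper:

```python
def cluster_head_score(node, weights=(0.4, 0.2, 0.2, 0.2)):
    """Combine the four governing parameters into one selection weight.
    Overlapping degree is penalized (redundant coverage wastes energy)."""
    w_e, w_o, w_d, w_n = weights
    return (w_e * node["residual_energy"]
            - w_o * node["overlap_degree"]
            + w_d * node["node_density"]
            + w_n * node["neighbor_degree"])

def select_cluster_heads(nodes, n_heads):
    """Pick the n_heads highest-scoring nodes as cluster heads."""
    return sorted(nodes, key=cluster_head_score, reverse=True)[:n_heads]

# Toy three-node network (all quantities normalized to [0, 1]).
nodes = [
    {"id": 1, "residual_energy": 0.9, "overlap_degree": 0.1,
     "node_density": 0.5, "neighbor_degree": 0.6},
    {"id": 2, "residual_energy": 0.3, "overlap_degree": 0.8,
     "node_density": 0.4, "neighbor_degree": 0.2},
    {"id": 3, "residual_energy": 0.7, "overlap_degree": 0.2,
     "node_density": 0.6, "neighbor_degree": 0.5},
]
heads = select_cluster_heads(nodes, 2)
```

A node with high residual energy and low coverage overlap wins, which matches the protocol's stated goal of extending lifetime without sacrificing coverage.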


2021 ◽  
Vol 2021 (7) ◽  
Author(s):  
K. Nowak ◽  
A.F. Żarnecki

Abstract One of the important goals at future e+e− colliders is to measure the top-quark mass and width in a scan of the pair-production threshold. However, the shape of the pair-production cross section at the threshold also depends on other model parameters, such as the top Yukawa coupling, and the measurement is subject to many systematic uncertainties. Presented in this work is a study of the top-quark mass determination from the threshold scan at CLIC. The most general approach is used, with all relevant model parameters and selected systematic uncertainties included in the fit procedure. Expected constraints from other measurements are also taken into account. It is demonstrated that the top-quark mass can be extracted with a precision of the order of 30 to 40 MeV, including the considered systematic uncertainties, already for 100 fb−1 of data collected at the threshold. Additional improvement is possible if the running scenario is optimised. With an optimisation procedure based on the genetic algorithm, the statistical uncertainty of the mass measurement can be reduced by about 20%. The influence of the collider luminosity spectra on the expected precision of the measurement is also studied.


Transport ◽  
2009 ◽  
Vol 24 (2) ◽  
pp. 135-142 ◽  
Author(s):  
Ali Payıdar Akgüngör ◽  
Erdem Doğan

This study proposes an Artificial Neural Network (ANN) model and a Genetic Algorithm (GA) model to estimate the number of accidents (A), fatalities (F) and injuries (I) in Ankara, Turkey, using data obtained between 1986 and 2005. For model development, the number of vehicles (N), fatalities, injuries, accidents and population (P) were selected as model parameters. In the ANN model, the sigmoid and linear functions were used as activation functions with the feed-forward back-propagation algorithm. In the GA approach, two genetic algorithm models were developed, one with a linear and one with an exponential mathematical form. The results of the GA model showed that the exponential form was suitable for estimating the number of accidents and fatalities, while the linear form was the most appropriate for predicting the number of injuries. The best-fit model, with the lowest mean absolute error (MAE) between the observed and estimated values, is selected for future estimations. The comparison of the model results indicated that the performance of the ANN model was better than that of the GA model. To investigate the performance of the ANN model for future estimations, a fifteen-year period from 2006 to 2020 with two possible scenarios was employed. In the first scenario, the annual average growth rates of the population and the number of vehicles are assumed to be 2.0% and 7.5%, respectively. In the second scenario, the average number of vehicles per capita is assumed to reach 0.60, representing approximately a two-and-a-half-fold increase in fifteen years. The results obtained from both scenarios reveal the suitability of the current methods for road safety applications.
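The model-selection step, choosing between the linear and exponential forms by lowest MAE, can be sketched as follows; all coefficients and data are invented for illustration, not the fitted Ankara values:

```python
def mae(observed, estimated):
    """Mean absolute error used to rank candidate model forms."""
    return sum(abs(o - e) for o, e in zip(observed, estimated)) / len(observed)

# Hypothetical model forms in N (vehicles) and P (population);
# coefficients are illustrative placeholders.
def linear_model(N, P, a=2.0, b=1.5, c=100.0):
    return a * N + b * P + c

def exponential_model(N, P, a=0.5, b=0.8, c=0.3):
    return a * (N ** b) * (P ** c)

# Toy observations of accident counts for three years.
observed = [900.0, 1100.0, 1300.0]
inputs = [(200.0, 300.0), (250.0, 380.0), (310.0, 420.0)]
candidates = {
    "linear": [linear_model(N, P) for N, P in inputs],
    "exponential": [exponential_model(N, P) for N, P in inputs],
}
best_form = min(candidates, key=lambda k: mae(observed, candidates[k]))
```

In the study, the coefficients of each form would themselves be found by the GA, and the form with the lowest MAE is retained for future estimation.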


2016 ◽  
Vol 2016 ◽  
pp. 1-12 ◽  
Author(s):  
Xing Zhao ◽  
Zhao-yan Feng ◽  
Yan Li ◽  
Antoine Bernard

Evacuation may sometimes be the best choice as an emergency response. To enable an efficient evacuation, a network optimization model is constructed that integrates lane-based reversal design and routing with intersection crossing-conflict elimination. The proposed bilevel model minimizes the total time to leave the evacuation zone. At the upper level, a tabu search algorithm is applied to find an optimal lane-reversal plan. At the lower level, a simulated annealing algorithm generates lane-based route plans, with intersection crossing conflicts eliminated, of two types: “a single arc for an intersection approach” and “multiple arcs for an intersection approach.” An experiment on a nine-intersection evacuation zone illustrates the validity of the model and the algorithm. A field case with the network topology of Jianye District, around the Nanjing Olympics Sports Center, is studied to show the applicability of the algorithm.
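A generic sketch of the simulated annealing loop used at the lower level; the cost function and neighborhood below are toy placeholders for the lane-based routing objective, not the paper's formulation:

```python
import math
import random

def simulated_annealing(cost, neighbor, start, t0=1.0, cooling=0.95, steps=200):
    """Generic SA loop: always accept improvements, accept worse moves
    with probability exp(-delta/T), and cool T geometrically."""
    current, best = start, start
    t = t0
    for _ in range(steps):
        candidate = neighbor(current)
        delta = cost(candidate) - cost(current)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = candidate
        if cost(current) < cost(best):
            best = current
        t *= cooling
    return best

# Toy stand-in for evacuation time: minimize |x - 7| over integers.
best = simulated_annealing(
    cost=lambda x: abs(x - 7),
    neighbor=lambda x: x + random.choice((-1, 1)),
    start=0,
)
```

In the paper's setting, the cost would be the total evacuation time implied by a candidate route plan and the neighbor function a small perturbation of that plan.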


Author(s):  
Roger C. von Doenhoff ◽  
Robert J. Streifel ◽  
Robert J. Marks

Abstract A model of the friction characteristics of carbon brakes is proposed to aid in understanding the causes of brake vibration. The model parameters are determined by a genetic algorithm in an attempt to identify differences in friction properties between brake applications during which vibration occurs and those during which there is no vibration. The model computes the brake torque as a function of wheel speed, brake pressure, and the carbon surface temperature. The surface temperature is computed using a five-node temperature model. The genetic algorithm chooses the model parameters to minimize the error between the model output and the torque measured during a dynamometer test. The basics of genetic algorithms and the results of the model parameter identification process are presented.


2020 ◽  
Vol 54 (2) ◽  
pp. 597-614
Author(s):  
Shanoli Samui Pal ◽  
Samarjit Kar

In this paper, the fuzzified Choquet integral with a fuzzy-valued integrand is used with respect to separate measures (fuzzy measure, signed fuzzy measure, and intuitionistic fuzzy measure) to develop regression models for forecasting. The fuzzified Choquet integral is used to build a regression model for forecasting time series with multiple predictor attributes. Linear-regression-based forecasting models suffer from low accuracy and cannot approximate the non-linearity in a time series, whereas the Choquet integral can be used as a general non-linear regression model with respect to non-classical measures. In the Choquet-integral-based regression model, parameters are optimized using a real-coded genetic algorithm (GA). In these forecasting models, fuzzified integrands denote the contribution of an individual attribute or a group of attributes to the prediction. Here, the more general fuzzified Choquet integral is used for non-linear time series forecasting. Three different real stock exchange data sets are used to evaluate the forecasting models. It is observed that the accuracy of the prediction models depends strongly on the non-linearity of the time series.
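For reference, the classical (non-fuzzified) discrete Choquet integral on which the regression model builds can be computed as below; the two-attribute fuzzy measure is illustrative:

```python
def choquet_integral(values, measure):
    """Discrete Choquet integral of attribute values with respect to a
    set function 'measure' mapping frozensets of attributes to [0, 1]."""
    # Sort attributes by ascending value; integrate over level sets.
    items = sorted(values.items(), key=lambda kv: kv[1])
    total, prev = 0.0, 0.0
    for i, (attr, v) in enumerate(items):
        level_set = frozenset(a for a, _ in items[i:])
        total += (v - prev) * measure[level_set]
        prev = v
    return total

# Illustrative non-additive measure: the pair is worth more than the
# sum of its singletons (positive interaction between attributes).
measure = {
    frozenset(): 0.0,
    frozenset({"volume"}): 0.3,
    frozenset({"price"}): 0.4,
    frozenset({"volume", "price"}): 1.0,
}
y = choquet_integral({"price": 0.6, "volume": 0.2}, measure)
```

The non-additive measure lets a group of attributes contribute more than the sum of its members, which is how the Choquet integral captures interactions that a linear regression cannot.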


2014 ◽  
Vol 14 (14) ◽  
pp. 7341-7365 ◽  
Author(s):  
A. Cirisan ◽  
B. P. Luo ◽  
I. Engel ◽  
F. G. Wienhold ◽  
M. Sprenger ◽  
...  

Abstract. Observations of high supersaturations with respect to ice inside cirrus clouds with high ice water content (> 0.01 g kg−1) and high crystal number densities (> 1 cm−3) are challenging our understanding of cloud microphysics and of climate feedback processes in the upper troposphere. However, single measurements of a cloudy air mass provide only a snapshot from which the persistence of ice supersaturation cannot be judged. We introduce here the "cirrus match technique" to obtain information about the evolution of clouds and their saturation ratio. The aim of these coordinated balloon soundings is to analyze the same air mass twice. To this end the standard radiosonde equipment is complemented by a frost point hygrometer, "SnowWhite", and a particle backscatter detector, "COBALD" (Compact Optical Backscatter AerosoL Detector). Extensive trajectory calculations based on regional weather model COSMO (Consortium for Small-Scale Modeling) forecasts are performed for flight planning, and COSMO analyses are used as a basis for comprehensive microphysical box modeling (with grid scale of 2 and 7 km, respectively). Here we present the results of matching a cirrus cloud to within 2–15 km, realized on 8 June 2010 over Payerne, Switzerland, and a location 120 km downstream close to Zurich. A thick cirrus cloud was detected over both measurement sites. We show that in order to quantitatively reproduce the measured particle backscatter ratios, the small-scale temperature fluctuations not resolved by COSMO must be superimposed on the trajectories. The stochastic nature of the fluctuations is captured by ensemble calculations. Possibilities for further improvements in the agreement with the measured backscatter data are investigated by assuming a very slow mass accommodation of water on ice, the presence of heterogeneous ice nuclei, or a wide span of (spheroidal) particle shapes. 
However, the resulting improvements from these microphysical refinements are moderate and comparable in magnitude with changes caused by assuming different regimes of temperature fluctuations for clear-sky or cloudy-sky conditions, highlighting the importance of proper treatment of subscale fluctuations. The model yields good agreement with the measured backscatter over both sites and reproduces the measured saturation ratios with respect to ice over Payerne. Conversely, the 30% in-cloud supersaturation measured in a massive 4 km thick cloud layer over Zurich cannot be reproduced, irrespective of the choice of meteorological or microphysical model parameters. The measured supersaturation can only be explained by either resorting to an unknown physical process, which prevents the ice particles from consuming the excess humidity, or – much more likely – by a measurement error, such as a contamination of the sensor housing of the SnowWhite hygrometer by a precipitation drop from a mixed-phase cloud just below the cirrus layer or from some very slight rain in the boundary layer. This uncertainty calls for in-flight checks or calibrations of hygrometers under the special humidity conditions in the upper troposphere.


2019 ◽  
Author(s):  
Kee Huong Lai ◽  
Woon Jeng Siow ◽  
Ahmad Aniq bin Mohd Nooramin Kaw ◽  
Pauline Ong ◽  
Zarita Zainuddin
