Latin Hypercube Sampling
Recently Published Documents


TOTAL DOCUMENTS

590
(FIVE YEARS 194)

H-INDEX

37
(FIVE YEARS 6)

Author(s):  
Jaeyoo Choi ◽  
Yohan Cha ◽  
Jihoon Kong ◽  
Neil Vaz ◽  
Jaeseung Lee ◽  
...  

Abstract This study applies comprehensive surrogate-based optimization techniques to optimize the performance of polymer electrolyte membrane fuel cells (PEMFCs). Parametric cases covering four variables are defined using Latin hypercube sampling. Training and test data are generated using a multidimensional, two-phase PEMFC simulation model. Response surface approximation, radial basis neural network, and kriging surrogates are employed to construct objective functions for the PEMFC performance. Their accuracies are tested and compared using root mean square error and adjusted R-squared. Surrogates linked with optimization algorithms, i.e., genetic algorithm and particle swarm optimization, are used to determine the optimal design points. A comparative study of these surrogates reveals that the kriging model outperforms the other models in terms of prediction capability. Furthermore, PEMFC model simulations at the optimal design points demonstrate that performance improvements of around 56–69 mV at 2.0 A/cm2 are achieved with the optimal design compared to typical PEMFC design conditions.
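
The workflow this abstract describes (LHS design, expensive simulation runs, surrogate fitting, accuracy check on held-out data) can be sketched in a few lines. This is a minimal illustration only: a toy analytic function stands in for the PEMFC simulator, and the sample sizes and kernel settings are arbitrary assumptions, not the paper's:

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.metrics import mean_squared_error

# Hypothetical cheap stand-in for the expensive PEMFC simulator (4 inputs).
def simulator(x):
    return np.sin(3 * x[:, 0]) + x[:, 1] ** 2 + 0.5 * x[:, 2] * x[:, 3]

# Latin hypercube designs over the 4-dimensional unit cube.
sampler = qmc.LatinHypercube(d=4, seed=0)
X_train = sampler.random(n=40)
X_test = sampler.random(n=20)
y_train, y_test = simulator(X_train), simulator(X_test)

# Kriging surrogate = Gaussian process regression with an RBF kernel.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-8)
gp.fit(X_train, y_train)

# Accuracy check on held-out test points, as in the abstract.
rmse = mean_squared_error(y_test, gp.predict(X_test)) ** 0.5
print(f"test RMSE: {rmse:.4f}")
```

In a real study the kriging, response-surface, and RBF-network surrogates would each be fit to the same design and compared on this test RMSE before being handed to the optimizer.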


2022 ◽  
Author(s):  
Yves Tinda Mangongo ◽  
Joseph-Désiré Kyemba Bukweli ◽  
Justin Dupar Busili Kampempe ◽  
Rostin Matendo Mabela ◽  
Justin Manango Wazute Munganga

Abstract In this paper we present a more realistic mathematical model for the transmission dynamics of malaria by extending the classical SEIRS scheme and the model of Hai-Feng Huo and Guang-Ming Qiu [21] with a compartment for ignorant infected humans. We analyze the global asymptotic stability of the model using the basic reproduction number R_0 and prove that when R_0 ≤ 1, the disease-free equilibrium is globally asymptotically stable; that is, malaria dies out in the population. When R_0 > 1, there exists a unique endemic equilibrium which is globally asymptotically stable. A global sensitivity analysis was carried out through the partial rank correlation coefficient, using samples generated by the Latin hypercube sampling method; it shows that the most influential parameters in the spread of malaria are the proportion θ of infectious humans who recover and the recovery rate γ of infectious humans. In order to eradicate malaria, the number of ignorant infected humans must be reduced by testing people and treating them. Numerical simulations show that malaria can also be controlled or eradicated by increasing the recovery rate γ of infectious humans, decreasing the number of ignorant infected humans, and decreasing the average number n of mosquito bites.
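
The LHS-plus-PRCC sensitivity analysis mentioned here is a standard recipe that can be sketched directly. The toy output below is a made-up monotone function of θ, γ, and n, not the paper's R_0 expression; only the mechanics (rank-transform, partial out the other parameters, correlate residuals) are the point:

```python
import numpy as np
from scipy.stats import qmc, rankdata, pearsonr

# Hypothetical stand-in for the model output as a function of (theta, gamma, n).
def model(p):
    theta, gamma, n = p[:, 0], p[:, 1], p[:, 2]
    return n / (gamma * (1 + theta))

# LHS over [0.1, 1.1]^3 (shifted away from zero to avoid division issues).
X = qmc.LatinHypercube(d=3, seed=1).random(500) + 0.1
y = model(X)

def prcc(X, y, j):
    """Partial rank correlation coefficient of parameter j with output y."""
    R = np.apply_along_axis(rankdata, 0, X)   # rank-transform each column
    ry = rankdata(y)
    others = np.column_stack([np.ones(len(y)), np.delete(R, j, axis=1)])
    # residuals after removing the linear effect of the other ranked parameters
    res_x = R[:, j] - others @ np.linalg.lstsq(others, R[:, j], rcond=None)[0]
    res_y = ry - others @ np.linalg.lstsq(others, ry, rcond=None)[0]
    return pearsonr(res_x, res_y)[0]

for j, name in enumerate(["theta", "gamma", "n"]):
    print(name, round(prcc(X, y, j), 3))
```

Parameters with PRCC magnitudes near 1 dominate the spread of the output, which is how the abstract singles out θ and γ.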


2021 ◽  
Author(s):  
Catalina Vich ◽  
Matthew Clapp ◽  
Timothy Verstynen ◽  
Jonathan Rubin

During action selection, mammals exhibit a high degree of flexibility in adapting their decisions in response to environmental changes. Although the cortico-basal ganglia-thalamic (CBGT) network is implicated in this adaptation, it features a synaptic architecture comprising multiple feed-forward, reciprocal, and feedback pathways, complicating efforts to elucidate the roles of specific CBGT populations in the process of evidence accumulation during decision-making. In this paper we apply a strategic sampling approach, based on Latin hypercube sampling, to explore how CBGT network properties, including subpopulation firing rates and synaptic weights, map to parameters of a normative drift diffusion model (DDM) representing algorithmic aspects of information accumulation during decision-making. Through the application of canonical correlation analysis, we find that this relationship can be characterized in terms of three low-dimensional control ensembles impacting specific qualities of the emergent decision policy: responsiveness (associated with overall activity in corticothalamic and direct pathways), pliancy (associated largely with overall activity in components of the indirect pathway of the basal ganglia), and choice (associated with differences in direct and indirect pathways across action channels). These analyses provide key mechanistic predictions about the roles of specific CBGT network elements in shifting different aspects of decision policies.


2021 ◽  
Author(s):  
Jianpeng Sun ◽  
Guanjun Lv ◽  
Wenfeng Huang ◽  
Rong Wang ◽  
Xiaogang Ma

Abstract To further improve the prediction accuracy of typhoon simulation methods for extreme wind speeds in typhoon-prone areas, an improved typhoon simulation method is proposed by introducing the Latin hypercube sampling method into the traditional typhoon simulation method. In this paper, the improved method is first introduced in detail. Then, it is applied to the prediction of extreme wind speeds under various return periods in Hong Kong. To validate the method, two analyses are carried out: a correlation analysis among key typhoon parameters and a prediction of extreme wind speeds under various return periods. The results show that the correlation coefficients among key typhoon parameters are maintained satisfactorily with the improved method. Compared with the traditional typhoon simulation method, extreme wind speeds under various return periods obtained with the improved method are much closer to the results obtained from historical typhoon wind data.


2021 ◽  
Author(s):  
Huiyizhe Zhao ◽  
Zhenchuan Niu ◽  
Weijian Zhou ◽  
Sen Wang ◽  
Xue Feng ◽  
...  

Abstract. In this study, we investigated the characteristics of and changes in the sources of carbonaceous aerosols in northern Chinese cities after the implementation of the Action Plan for Air Pollution Prevention and Control in 2013. We collected PM2.5 samples from three representative inland cities, viz. Beijing (BJ), Xi’an (XA), and Linfen (LF), from January 2018 to April 2019. Elemental carbon (EC), organic carbon (OC), levoglucosan, stable carbon, and radiocarbon were measured in PM2.5 to quantify the sources of carbonaceous aerosol, employing Latin hypercube sampling. The best estimate of source apportionment showed that emissions from liquid fossil fuels contributed 33.6 ± 12.9 %, 26.6 ± 16.4 %, and 24.6 ± 13.4 % of the total carbon (TC) in BJ, XA, and LF, whereas coal combustion contributed 11.2 ± 9.1 %, 19.2 ± 12.3 %, and 39.2 ± 20.5 %, respectively. Non-fossil sources accounted for 55 ± 11 %, 54 ± 10 %, and 36 ± 14 % of the TC in BJ, XA, and LF, respectively. In XA, 48.34 ± 32.01 % of the non-fossil contribution was attributed to biomass burning. The largest contributors to OC in LF and XA were fossil sources (65.4 ± 14.9 % and 44.9 ± 9.5 %, respectively), whereas in BJ they were non-fossil sources (56.1 ± 16.7 %). The main contributors to EC were fossil sources, accounting for 92.9 ± 6.13 %, 69.9 ± 20.9 %, and 90.8 ± 9.9 % of the total EC in BJ, XA, and LF, respectively. The decline (6–17 %) in fossil source contributions in BJ and XA since the implementation of the Action Plan indicates the effectiveness of air quality management. We suggest that measures targeted to each city should be strengthened in the future.
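
In studies of this kind, LHS is typically used to propagate measurement uncertainty through an isotope mass balance. The sketch below uses a common radiocarbon balance (fossil carbon carries no 14C, so the non-fossil fraction is the sample 14C level divided by the non-fossil end-member); all numerical values are illustrative assumptions, not the paper's data:

```python
import numpy as np
from scipy.stats import qmc, norm

# LHS in the unit square, transformed to the two uncertain inputs.
sampler = qmc.LatinHypercube(d=2, seed=3)
u = sampler.random(10_000)

# Hypothetical measured value and end-member, each with a Gaussian error.
f14c_sample = norm(loc=0.60, scale=0.02).ppf(u[:, 0])
f14c_nonfossil = norm(loc=1.08, scale=0.05).ppf(u[:, 1])

# Mass balance: non-fossil fraction of total carbon.
f_nf = np.clip(f14c_sample / f14c_nonfossil, 0, 1)
print(f"non-fossil fraction: {f_nf.mean():.3f} +/- {f_nf.std():.3f}")
```

The reported "best estimate ± uncertainty" figures in such papers are exactly this kind of mean and spread over the sampled solutions, extended to more tracers (levoglucosan, δ13C) and more source terms.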


2021 ◽  
Author(s):  
Mokhles Mezghani ◽  
Mustafa AlIbrahim ◽  
Majdi Baddourah

Abstract Reservoir simulation is a key tool for predicting the dynamic behavior of a reservoir and optimizing its development. Fine-scale, CPU-demanding simulation grids are necessary to improve the accuracy of simulation results. We propose a hybrid modeling approach that minimizes the use of the full physics model by dynamically building and updating an artificial intelligence (AI) based model, which can quickly mimic the full physics (FP) model. The proposed methodology starts by running the FP model; an associated AI model is systematically updated using the newly performed FP runs. Once the mismatch between the two models falls below a predefined cutoff, the FP model is switched off and only the AI model is used. The FP model is switched on again at the end of the exercise, either to confirm the AI model decision and stop the study, or to reject the decision (high mismatch between the FP and AI models) and upgrade the AI model. The proposed workflow was applied to a synthetic reservoir model with the objective of matching the average reservoir pressure. For this study, to better account for reservoir heterogeneity, a fine-scale simulation grid (approximately 50 million cells) was necessary for accurate reservoir simulation results. A reservoir simulation using the FP model and 1024 CPUs requires approximately 14 hours. During this history matching exercise, six parameters were selected for the optimization loop. Therefore, a Latin hypercube sampling (LHS) design of seven FP runs is used to initiate the hybrid approach and build the first AI model. During history matching, only the AI model is used. At the convergence of the optimization loop, a final FP run is performed, either to confirm convergence for the FP model or to reiterate the same approach starting from an LHS design around the converged solution. Each subsequent AI model is updated using all the FP simulations performed in the study.
This approach achieves the history match with very acceptable match quality, but with far less computational resource and CPU time. CPU-intensive, multimillion-cell simulation models are commonly utilized in reservoir development, and completing a reservoir study in an acceptable timeframe is a real challenge in such situations. New concepts and techniques are needed to complete such studies successfully. The hybrid approach we propose shows very promising results for handling this challenge.
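
The switching logic described in this abstract (initialize the AI model from a small LHS design of FP runs, trust it while the mismatch stays below a cutoff, fold any failed check back into the training set) can be sketched as follows. A cheap analytic function stands in for the FP simulator, and the cutoff and design sizes are assumptions:

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor

# Hypothetical cheap stand-in for the full-physics (FP) simulator.
def fp_model(x):
    return np.sin(5 * x[:, 0]) * np.cos(3 * x[:, 1])

CUTOFF = 0.05  # predefined mismatch threshold for trusting the AI model

# 1. Initialize the AI (surrogate) model from a small LHS design of FP runs;
#    the abstract uses seven FP runs for six parameters, so seven points here.
X = qmc.LatinHypercube(d=2, seed=4).random(7)
y = fp_model(X)
ai_model = GaussianProcessRegressor().fit(X, y)

# 2. During optimization the AI model answers queries; periodically an FP run
#    verifies it, and a high mismatch triggers an AI-model upgrade.
x_new = qmc.LatinHypercube(d=2, seed=5).random(1)
ai_pred = ai_model.predict(x_new)[0]
fp_true = fp_model(x_new)[0]

if abs(ai_pred - fp_true) > CUTOFF:
    # Mismatch too high: add the new FP run and refit the AI model.
    X, y = np.vstack([X, x_new]), np.append(y, fp_true)
    ai_model = GaussianProcessRegressor().fit(X, y)

print(f"AI prediction {ai_pred:.3f}, FP value {fp_true:.3f}")
```

The economics of the approach come from the asymmetry: in the paper each FP run costs ~14 hours on 1024 CPUs, while the surrogate query above is effectively free.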


2021 ◽  
Vol 932 ◽  
Author(s):  
Yiqing Li ◽  
Wenshi Cui ◽  
Qing Jia ◽  
Qiliang Li ◽  
Zhigang Yang ◽  
...  

We address a challenge of active flow control: the optimization of many actuation parameters while guaranteeing fast convergence and avoiding suboptimal local minima. This challenge is addressed by a new optimizer, called the explorative gradient method (EGM). EGM alternately performs one exploitive downhill simplex step and one explorative Latin hypercube sampling iteration. Thus, the convergence rate of a gradient-based method is guaranteed while, at the same time, better minima are explored. For an analytical multi-modal test function, EGM is shown to significantly outperform the downhill simplex method, the random-restart simplex, Latin hypercube sampling, Monte Carlo sampling and the genetic algorithm. EGM is applied to minimize the net drag power of the two-dimensional fluidic pinball benchmark with three cylinder rotations as actuation parameters. The net drag power is reduced by 29 % employing direct numerical simulations at a Reynolds number of $100$ based on the cylinder diameter. This optimal actuation leads to 52 % drag reduction, employing Coanda forcing for boat tailing and partial stabilization of vortex shedding. The price is an actuation energy corresponding to 23 % of the unforced parasitic drag power. EGM is also used to minimize the drag of the $35^\circ$ slanted Ahmed body employing distributed steady blowing with 10 inputs. A 17 % drag reduction is achieved using Reynolds-averaged Navier–Stokes simulations at the Reynolds number $Re_H=1.9 \times 10^5$ based on the height of the Ahmed body. The wake is controlled with seven local jet-slot actuators at all trailing edges. Symmetric operation corresponds to five independent actuator groups at the top, middle, bottom, top sides and bottom sides. Each slot actuator produces a uniform jet with the velocity and angle as free parameters, yielding 10 actuation parameters as free inputs.
The optimal actuation emulates boat tailing by inward-directed blowing with velocities comparable to the oncoming velocity. We expect that EGM will be employed as an efficient optimizer in many future active flow control plants, as an alternative or augmentation to pure gradient search or explorative methods.
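
The alternation at the heart of EGM (one exploitive downhill-simplex step, one explorative LHS batch, keep the best point found by either) can be sketched on a cheap multi-modal function. This is a schematic reading of the abstract, not the authors' implementation; the test function, batch size, and iteration counts are arbitrary:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import qmc

# A multi-modal test function (not the paper's flow-control objective).
def f(x):
    return np.sum(x**2) + np.sin(8 * x[0]) + np.cos(8 * x[1])

bounds_lo, bounds_hi = -2.0, 2.0
x_best = np.array([1.5, 1.5])
f_best = f(x_best)

for it in range(10):
    # Exploitive step: a short downhill-simplex (Nelder-Mead) refinement.
    res = minimize(f, x_best, method="Nelder-Mead",
                   options={"maxiter": 20, "xatol": 1e-6, "fatol": 1e-9})
    if res.fun < f_best:
        x_best, f_best = res.x, res.fun

    # Explorative step: one LHS batch over the whole domain.
    cand = qmc.scale(qmc.LatinHypercube(d=2, seed=it).random(16),
                     bounds_lo, bounds_hi)
    vals = np.apply_along_axis(f, 1, cand)
    if vals.min() < f_best:
        x_best, f_best = cand[vals.argmin()], vals.min()

print(f"best value found: {f_best:.3f}")
```

The simplex step supplies the gradient-like convergence rate, while the LHS step keeps probing distant basins, which is how EGM escapes local minima that would trap a pure simplex search.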


2021 ◽  
Vol 31 (1) ◽  
pp. 70-94
Author(s):  
Jeffrey O. Agushaka ◽  
Absalom E. Ezugwu

Abstract The arithmetic optimization algorithm (AOA) is one of the recently proposed population-based metaheuristic algorithms. Its algorithmic design concept is based on the distributive behavior of the arithmetic operators multiplication (M), division (D), subtraction (S), and addition (A). As AOA is a new metaheuristic algorithm, a performance evaluation is significant to the global optimization research community and specifically to nature-inspired metaheuristic enthusiasts. This article aims to evaluate the influence of the algorithm's control parameters, namely population size and number of iterations, on the performance of the newly proposed AOA. In addition, we investigated and validated the influence of different initialization schemes available in the literature on the performance of the AOA. Experiments were conducted under three scenarios: first, a large population size with a low number of iterations; second, a high number of iterations with a small population size; and finally, similar population size and number of iterations. The numerical results showed that AOA is sensitive to the population size and requires a large population for optimal performance. Afterward, we initialized AOA with six initialization schemes and tested their performances on the classical functions and the functions defined in the CEC 2020 suite. The results were presented and their implications discussed. They show that the performance of AOA can be influenced by initializing the solutions with schemes other than the default random numbers. The Beta distribution outperformed the random number distribution in all cases for both the classical and CEC 2020 functions.
The uniform distribution, Rayleigh distribution, Latin hypercube sampling, and Sobol low-discrepancy sequence perform relatively competitively with the random number distribution. On the basis of our experimental results, we recommend a solution size of 6,000, 100 iterations, and initializing the solutions with the Beta distribution for AOA to perform optimally in the scenarios considered in our experiments.
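
The initialization schemes being compared here differ only in how the starting population is drawn before the metaheuristic runs. The sketch below generates the same population shape under four of them (random, Beta, LHS, Sobol) and scores each on a sphere function; the bounds, population size, and Beta shape parameters are illustrative assumptions, and the AOA itself is not implemented:

```python
import numpy as np
from scipy.stats import qmc, beta

rng = np.random.default_rng(6)
pop, dim = 32, 5          # power-of-2 population keeps Sobol balanced
lo, hi = -100.0, 100.0    # hypothetical search bounds

def scale(u):
    """Map samples from the unit cube to the search bounds."""
    return lo + (hi - lo) * u

# Four initialization schemes producing the same population shape.
init = {
    "random": scale(rng.random((pop, dim))),
    "beta": scale(beta(a=2, b=5).rvs((pop, dim), random_state=7)),
    "lhs": scale(qmc.LatinHypercube(d=dim, seed=8).random(pop)),
    "sobol": scale(qmc.Sobol(d=dim, seed=9).random(pop)),
}

# Best starting fitness on a sphere benchmark, per scheme.
for name, X in init.items():
    print(name, round(np.sum(X**2, axis=1).min(), 1))
```

Swapping the first line of a metaheuristic's population setup among such generators is all these comparisons require, which is why initialization studies like this one are cheap to run across whole benchmark suites.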


Aerospace ◽  
2021 ◽  
Vol 8 (12) ◽  
pp. 376
Author(s):  
Nasim Fallahi

In the current research, variable angle tow composites are used to improve the buckling and free vibration behavior of a structure. A one-dimensional (1D) Carrera Unified Formulation (CUF) is employed to determine the buckling loads and natural frequencies of Variable Angle Tow (VAT) square plates, taking advantage of the layerwise theory (LW). Subsequently, the Genetic Algorithm (GA) optimization method is applied to maximize the first critical buckling load and first natural frequency using a definition of linear fiber orientation angles. To show the power of the genetic algorithm for the VAT structure, a surrogate model built with the Response Surface (RS) method was used to demonstrate the convergence of the GA approach. The results showed the cost reduction achieved for optimized VAT performance through GA optimization in combination with the 1D CUF procedure. Additionally, a Latin hypercube sampling (LHS) method with RS was used for buckling analysis. The capability of LHS confirmed that it could be employed in the next stages of research along with GA.


2021 ◽  
pp. 1-21
Author(s):  
M. Kowsari ◽  
L. A. James ◽  
R. D. Haynes

Summary Water-alternating gas (WAG) injection as a tertiary recovery method is applied to oil reservoirs at a later stage of reservoir life, with more or less success depending on the field and operation. Uncertainty in WAG optimization has been shown to depend on several factors, including reservoir characterization, WAG timing, and its operation. In this paper, we comprehensively explore WAG optimization in the context of WAG operating parameters and hysteresis, the first paper to explore both simultaneously. WAG operating parameters have been analyzed and optimized at both the core and field scale, with a general conclusion that the timing, miscibility, WAG ratio, cycle time, and number of cycles play varying roles in WAG optimization. Reservoir characterization has considered well configuration, oil type, rock properties, and hysteresis in relative permeability. Owing to the cyclic nature of WAG and the dependency of relative permeability on the saturation history, relative permeability hysteresis modeling plays a key role in WAG performance, whereby different hysteresis models predict different results, as shown in the literature. In this paper, we consider the effect of the choice of hysteresis model and of the WAG operating parameters on WAG optimization. First, a sensitivity analysis is performed to evaluate the effect of hysteresis models (no hysteresis, Carlson, and Killough) on a large number of WAG development scenarios sampled by the Latin hypercube sampling method. Next, optimizations were conducted to compare and analyze the optimum recovery factor and corresponding optimal WAG operating parameters for various combinations of hysteresis models. The results of the study indicate that excluding hysteresis modeling from simulations would likely lead to a higher predicted produced volume of the nonwetting phases, that is, oil and gas, and a lower predicted produced volume of the wetting phase, that is, water.
Also, the optimal recovery factor as well as the optimal WAG operating parameters can be significantly affected by the choice of the hysteresis models.

