Bayesian Optimization Approaches for Identifying the Best Genotype from a Candidate Population

Author(s):  
Shin-Fu Tsai ◽  
Chih-Chien Shen ◽  
Chen-Tuo Liao

Abstract Bayesian optimization is incorporated into genomic prediction to identify the best genotype from a candidate population. Several expected improvement (EI) criteria are proposed for the Bayesian optimization. The iterative search process of the optimization consists of two main steps. First, a genomic BLUP (GBLUP) prediction model is constructed using the phenotype and genotype data of a training set. Second, an EI criterion, estimated from the resulting GBLUP model, is employed to select the individuals to be phenotyped and added to the current training set; the GBLUP model is then updated, and the process repeats until the sequentially observed EI values fall below a stopping tolerance. Three real datasets are analyzed to illustrate the proposed approach. Furthermore, a detailed simulation study is conducted to compare the performance of the EI criteria. The simulation results show that one augmented version, derived from the distribution of predicted genotypic values, is able to identify the best genotype from a large candidate population with an economical training set, and it can therefore be recommended for practical use. Supplementary materials accompanying this paper appear on-line.
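The two-step loop described in the abstract can be sketched as follows. Since the paper's exact EI variants and GBLUP machinery are not reproduced here, a Gaussian process with a linear kernel over marker genotypes stands in for GBLUP (the two models are closely related), and the standard EI formula, batch size, and tolerance below are illustrative assumptions.

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import DotProduct, WhiteKernel

def expected_improvement(mu, sigma, y_best):
    # Standard EI for maximization; the paper compares several EI variants.
    sigma = np.maximum(sigma, 1e-12)
    z = (mu - y_best) / sigma
    return (mu - y_best) * norm.cdf(z) + sigma * norm.pdf(z)

def sequential_search(X_train, y_train, X_cand, phenotype, batch=5, tol=1e-3, max_iter=50):
    for _ in range(max_iter):
        # Step 1: fit the prediction model on the current training set
        # (linear kernel over markers as a GBLUP stand-in).
        gp = GaussianProcessRegressor(DotProduct() + WhiteKernel(), normalize_y=True)
        gp.fit(X_train, y_train)
        # Step 2: score the remaining candidates and check the stopping rule.
        mu, sigma = gp.predict(X_cand, return_std=True)
        ei = expected_improvement(mu, sigma, y_train.max())
        if ei.max() < tol:
            break
        # Phenotype the top-EI individuals and move them into the training set.
        pick = np.argsort(ei)[-batch:]
        X_train = np.vstack([X_train, X_cand[pick]])
        y_train = np.concatenate([y_train, phenotype(X_cand[pick])])
        X_cand = np.delete(X_cand, pick, axis=0)
    return X_train[np.argmax(y_train)]  # best genotype observed so far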

2012 ◽  
Vol 2309 (1) ◽  
pp. 114-126 ◽  
Author(s):  
Dhafer Marzougui ◽  
Cing-Dao (Steve) Kan ◽  
Kenneth S. Opiela

The National Crash Analysis Center (NCAC) at the George Washington University simulated the crash of a 2,270-kg Chevrolet Silverado pickup truck into a standard 32-in. New Jersey shape concrete barrier under the requirements of Test 3–11 of the Manual for Assessing Safety Hardware (MASH). The new, detailed finite element (FE) model for the Chevrolet Silverado was used as the surrogate for the MASH 2270P test vehicle. An FE model of the New Jersey barrier was drawn from the array of NCAC hardware models. The primary objective of this analysis was to simulate the crash test conducted to evaluate how this commonly used, NCHRP 350–approved device would perform under the more rigorous MASH crashworthiness criteria. A secondary objective was to use newly developed verification and validation (V&V) procedures to compare the results of the detailed simulation with the results of crash tests undertaken as part of another project. The crash simulation was successfully executed with the detailed Silverado FE model and NCAC models of the New Jersey concrete barrier. Traditional comparisons of the simulation results and the data derived from the crash test suggested that the modeling provided viable results. Further comparisons employing the V&V procedures provided a structured assessment across multiple factors reflected in the phenomena importance ranking table. Statistical measures of the accuracy of the test in comparison with simulation results provided a more robust validation than previous approaches. These comparisons further confirmed that the model was able to replicate impacts with a 2270P vehicle, as required by MASH.
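The V&V procedures themselves are not reproduced here, but comparison metrics of the Sprague-Geers type are commonly used in roadside-hardware validation to score a simulated time history against a test curve. A minimal sketch, assuming both signals are sampled on a common time base:

import numpy as np

def sprague_geers(test, sim):
    # Return (magnitude, phase, combined) Sprague-Geers metrics in percent.
    test, sim = np.asarray(test, float), np.asarray(sim, float)
    ss_t, ss_s, cross = np.sum(test**2), np.sum(sim**2), np.sum(test * sim)
    magnitude = np.sqrt(ss_s / ss_t) - 1.0
    phase = np.arccos(np.clip(cross / np.sqrt(ss_t * ss_s), -1.0, 1.0)) / np.pi
    combined = np.sqrt(magnitude**2 + phase**2)
    return 100 * magnitude, 100 * phase, 100 * combined

Acceptance criteria in such procedures typically require the magnitude and phase components to stay below a fixed percentage threshold.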


Author(s):  
Arunabha Batabyal ◽  
Sugrim Sagar ◽  
Jian Zhang ◽  
Tejesh Dube ◽  
Xuehui Yang ◽  
...  

Abstract A persistent problem in the selective laser sintering process is maintaining the quality of additively manufactured parts, which can be attributed to various sources of uncertainty. In this work, a two-particle phase-field microstructure model has been analyzed. The two input parameters treated as sources of uncertainty were the surface diffusivity and the inter-particle distance. The response quantity of interest (QOI) was the size of the neck region that develops between the two particles. Two cases, with equal-sized and unequal-sized particles, were studied. It was observed that the neck size increased with increasing surface diffusivity and decreased with increasing inter-particle distance, irrespective of particle size. Sensitivity analysis found that the inter-particle distance has more influence on the variation in neck size than the surface diffusivity does. The machine learning algorithm Gaussian process regression was used to create a surrogate model of the QOI, and the Bayesian optimization method was used to find optimal values of the input parameters. For equal-sized particles, optimization using Probability of Improvement gave optimal values of surface diffusivity and inter-particle distance of 23.8268 and 40.0001, respectively, while Expected Improvement as the acquisition function gave 23.9874 and 40.7428. For unequal-sized particles, the optimal design values from Probability of Improvement were 23.9700 and 33.3005, and those from Expected Improvement were 23.9893 and 33.9627. The optimization results from the two acquisition functions seemed to be in good agreement.
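A minimal sketch of the surrogate-plus-acquisition step described above, using scikit-learn's Gaussian process regression. The parameter ranges, the toy objective standing in for the phase-field simulations, and the direction of optimization are all assumptions for illustration; the toy objective merely mimics the reported trend (neck size rising with diffusivity, falling with distance).

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
X = rng.uniform([10, 20], [25, 45], size=(30, 2))     # (diffusivity, distance) samples
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.1, 30)  # stand-in for simulated neck size

gp = GaussianProcessRegressor(ConstantKernel() * RBF([5.0, 5.0]), normalize_y=True).fit(X, y)

def acquisitions(X_query, y_best):
    # Score candidate inputs with both acquisition functions compared in the paper.
    mu, sigma = gp.predict(X_query, return_std=True)
    z = (mu - y_best) / np.maximum(sigma, 1e-12)
    pi = norm.cdf(z)                                         # Probability of Improvement
    ei = (mu - y_best) * norm.cdf(z) + sigma * norm.pdf(z)   # Expected Improvement
    return pi, ei

grid = np.stack(np.meshgrid(np.linspace(10, 25, 50), np.linspace(20, 45, 50)), -1).reshape(-1, 2)
pi, ei = acquisitions(grid, y.max())
print("PI optimum:", grid[pi.argmax()], "EI optimum:", grid[ei.argmax()])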


2021 ◽  
Author(s):  
Yanhua Tian

A power-law degree distribution, the small-world property, and bad spectral expansion are three of the most important properties of on-line social networks (OSNs). We sampled YouTube and Wikipedia to investigate OSNs. Our simulation and computational results support the conclusion that OSNs follow a power-law degree distribution, have the small-world property, and exhibit bad spectral expansion. We calculated the diameters and spectral gaps of the OSN samples and compared these to graphs generated by the GEO-P model. Our simulation results support the Logarithmic Dimension Hypothesis, which conjectures that the dimension of an OSN is m = [log N]. We introduced six GEO-P-type models, ran simulations of them, and compared the simulated graphs with real OSN data. Our simulation results suggest that, except for the GEO-P (GnpDeg) model, all our models generate graphs with power-law degree distributions, the small-world property, and bad spectral expansion.
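The three measurements discussed above can be sketched for any sampled graph loaded into networkx. The generator below is only a stand-in for an OSN sample; the snowball-sampled YouTube/Wikipedia data and the GEO-P generators themselves are not reproduced.

import networkx as nx
import numpy as np

G = nx.barabasi_albert_graph(2000, 3)   # illustrative stand-in for an OSN sample

# 1. Power-law degree distribution: slope of the log-log degree frequencies.
degrees = np.array([d for _, d in G.degree()])
values, counts = np.unique(degrees, return_counts=True)
slope, _ = np.polyfit(np.log(values), np.log(counts / counts.sum()), 1)
print("fitted power-law exponent:", -slope)

# 2. Small-world property: diameter small relative to the logarithm of n.
print("diameter:", nx.diameter(G), "log2 n:", np.log2(G.number_of_nodes()))

# 3. Spectral expansion: gap between the two largest adjacency eigenvalues;
#    a small gap relative to the leading eigenvalue indicates bad expansion.
eigs = np.sort(np.linalg.eigvalsh(nx.to_numpy_array(G)))
print("spectral gap:", eigs[-1] - eigs[-2])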


1971 ◽  
Vol 4 (9) ◽  
pp. T151-T157 ◽  
Author(s):  
P D Roberts

The paper describes a digital simulation study of the application of a non-linear controller to the regulation of a single-stage neutralisation process. In the controller, the proportional gain increases with the amplitude of the controller error signal. The performance of the non-linear controller is compared with that of a conventional linear controller and with the performance obtained by employing a linear controller with a linearisation network designed to compensate for the non-linear characteristic of the neutralisation curve. Although the performance of the non-linear controller is inferior to that obtained with a perfect linearisation network, it is still considerably superior to that of a conventional linear controller operating at a symmetrical point on the neutralisation curve. In contrast to the linearisation network technique, the non-linear controller contains only one extra parameter and can be readily tuned on-line without prior knowledge of the neutralisation curve. Hence, it can be considered an attractive alternative for the control of neutralisation processes.
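The gain law is simple to sketch. The form below, where the proportional gain grows linearly with the error amplitude, is an assumption for illustration; the neutralisation process model and the paper's exact controller structure are not reproduced.

def nonlinear_pi(error, integral, dt, Kp=1.0, k=0.5, Ti=10.0):
    # Proportional gain rises with the amplitude of the error signal: low gain
    # near the set point, where the neutralisation curve is steepest, and high
    # gain far from it, where the curve is flat. k is the single extra parameter.
    gain = Kp * (1.0 + k * abs(error))
    integral += error * dt
    u = gain * error + (Kp / Ti) * integral
    return u, integral

# Example: the effective proportional gain doubles when |error| reaches 1/k.
u, i = nonlinear_pi(error=2.0, integral=0.0, dt=1.0)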


2014 ◽  
Author(s):  
Dan Zheng

[ACCESS RESTRICTED TO THE UNIVERSITY OF MISSOURI AT AUTHOR'S REQUEST.] Capture-recapture models have been widely used to estimate the size of a target wildlife population. There are three major sources of variation that can affect capture probabilities: time (capture probabilities vary with time or trapping occasion), behavioral response (capture probabilities vary due to a trap response of animals to the first capture), and heterogeneity (capture probabilities vary by individual animal). There are eight models covering the possible combinations of these factors: M0, Mt, Mb, Mh, Mtb, Mth, Mbh, and Mtbh. A capture-recapture model (the Mb model) was used to represent the behavioral response effect. An objective Bayesian analysis for the population size was developed and compared with common maximum likelihood estimates (MLEs). Simulation results demonstrate the advantages of the objective Bayesian approach over MLEs. Two real examples involving deer mice are presented, and an R package (OBMbpkg) was built for application.

Companion diagnostics (CDx) for personalized medicine are commonly applied in the in vitro diagnostic (IVD) industry and in clinical trials for specific diseases or treatments with biomarkers (e.g., molecular targets). A Bayesian method with a Gibbs sampler was used to estimate the potential bias caused by an imperfect CDx under the targeted design, where only patients with a positive diagnosis are enrolled in the clinical trial. A simulation study was conducted to evaluate the performance of the Bayesian method and to compare it with the EM algorithm. A Bayesian model selection method with a G-prior was used to test treatment effects of targeted drugs for patients with biomarkers under the targeted design, and a simulation study was conducted to evaluate its performance against the original method and the EM method when the sample size is small. Finally, a biomarker-stratified design was studied, in which the patients enrolled in clinical trials are divided into two groups (those with a positive or a negative diagnosis). Both the EM algorithm and the Bayesian method were used to estimate the potential bias caused by an imperfect CDx. Simulation results demonstrate the advantages of the Bayesian method over the original method and the EM method.
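As a concrete illustration of the Mb model described above (capture probability p before first capture, recapture probability c afterwards), the following sketch computes the maximum likelihood estimate of the population size N with the nuisance probabilities profiled out in closed form. The capture counts are invented for illustration; the objective Bayesian analysis and the OBMbpkg package are not reproduced.

import numpy as np
from scipy.special import gammaln

u = np.array([30, 18, 12, 8, 5])   # animals first caught on each occasion
m = np.array([0, 6, 10, 14, 16])   # recaptures on each occasion
M = np.concatenate(([0], np.cumsum(u)))[:-1]   # marked animals before each occasion
Mtot = u.sum()                                 # total distinct animals caught

def profile_loglik(N):
    # Log-likelihood of Mb at N; the binomial coefficients in u telescope to
    # N! / (N - Mtot)!, and p, c maximize the likelihood in closed form.
    if N < Mtot:
        return -np.inf
    exposure = np.sum(N - M)           # unmarked animal-occasions at risk
    p_hat = Mtot / exposure
    c_hat = m.sum() / M.sum()
    ll = gammaln(N + 1) - gammaln(N - Mtot + 1)
    ll += Mtot * np.log(p_hat) + (exposure - Mtot) * np.log(1 - p_hat)
    ll += m.sum() * np.log(c_hat) + (M.sum() - m.sum()) * np.log(1 - c_hat)
    return ll

Ns = np.arange(Mtot, 500)
print("MLE of N:", Ns[np.argmax([profile_loglik(N) for N in Ns])])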


Author(s):  
Gomaa Zaki El-Far

This paper presents a robust instrument fault detection (IFD) scheme based on a modified immune-mechanism-based evolutionary algorithm (MIMEA) that determines the optimal control actions on-line, detects faults quickly in the control process, and reconfigures the controller structure. To ensure the capability of the proposed MIMEA, repeated cycles of crossover, mutation, and clonal selection are executed within the sampling time. This increases the ability of the proposed algorithm to reach the globally optimal performance and to optimize the controller parameters within a few generations. A fault diagnosis logic system is created based on the proposed algorithm, nonlinear decision functions, and their derivatives with respect to time. Threshold limits are imposed to improve the system dynamics and the sensitivity of the IFD scheme to faults. The proposed algorithm is able to reconfigure the control law safely in all situations. The resulting false alarm rates are also clearly indicated. To illustrate the performance of the proposed MIMEA, it is applied successfully to tune and optimize the controller parameters of a nonlinear nuclear power reactor such that robust behavior is obtained. Simulation results show the effectiveness of the proposed MIMEA-based IFD scheme in detecting and isolating dynamic system faults.
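The clonal-selection loop at the core of such an algorithm can be sketched generically as follows, here tuning a two-parameter controller against a scalar cost. The reactor model, the fault decision functions, and the exact MIMEA operators are not reproduced; the cost function, population sizes, and mutation scales are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

def cost(theta):
    # Stand-in for the closed-loop performance index evaluated by simulation.
    return np.sum((theta - np.array([2.0, 0.5])) ** 2)

pop = rng.uniform(0, 5, size=(20, 2))           # initial antibody population
for generation in range(50):
    fitness = np.array([cost(ab) for ab in pop])
    elite = pop[np.argsort(fitness)[:5]]        # clonal selection of the best antibodies
    clones = np.repeat(elite, 4, axis=0)
    # Hypermutation: better-ranked antibodies are perturbed less.
    scales = np.repeat(0.05 * (1 + np.arange(5)), 4)[:, None]
    clones += rng.normal(0, 1, clones.shape) * scales
    # Crossover: mix coordinates between random clone pairs.
    partners = clones[rng.permutation(len(clones))]
    mask = rng.random(clones.shape) < 0.5
    clones = np.where(mask, clones, partners)
    pop = np.vstack([elite, clones])            # elites survive as memory cells

print("tuned parameters:", pop[np.argmin([cost(ab) for ab in pop])])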


Author(s):  
Veronica Gil-Costa ◽  
Romina Soledad Molina ◽  
Ricardo Petrino ◽  
Carlos Federico Sosa Paez ◽  
A. Marcela Printista ◽  
...  

Typical applications involving image retrieval processes demand a great amount of computation. The visual content of the images is extracted and represented by means of multidimensional descriptor vectors. The image retrieval process consists of two tasks: (1) database generation and indexing, and (2) the search process. The first task involves the construction of descriptor vectors; an index is then built over the database to speed up the search process. The second requires computing a descriptor vector for the query image and searching the index for the most similar stored vectors. In this context, it is relevant to devise new algorithms and parallel platforms that can reduce execution times. In particular, this work focuses on FPGA-based SoC platforms, presenting and evaluating a two-stage system where the index is constructed off-line and the similarity search is executed on-line. Results show that the FPGA is 73% faster than a 2 Quad CPU at computing the descriptor vector of an image when using the Color Layout Descriptor of MPEG-7.
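The two-stage structure described above can be sketched as follows. Random low-dimensional vectors and a brute-force Euclidean scan stand in for the MPEG-7 Color Layout Descriptors, the index structure, and the FPGA kernels, so every name and size here is an illustrative assumption.

import numpy as np

rng = np.random.default_rng(0)

# --- Stage 1 (off-line): build the descriptor database / index -----------
database = rng.random((10_000, 12)).astype(np.float32)   # CLD-like descriptor vectors

# --- Stage 2 (on-line): query descriptor, then k-NN similarity search ----
def search(query, k=10):
    # Return indices of the k most similar descriptors (smallest L2 distance).
    d2 = np.sum((database - query) ** 2, axis=1)
    return np.argpartition(d2, k)[:k]

query = rng.random(12).astype(np.float32)    # descriptor of the query image
print("top matches:", search(query))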

