expected improvement
Recently Published Documents


TOTAL DOCUMENTS

284
(FIVE YEARS 47)

H-INDEX

24
(FIVE YEARS 1)

2022 ◽  
Vol 12 (1) ◽  
Author(s):  
Cong Chen ◽  
Jiaxin Liu ◽  
Pingfei Xu

Abstract: One of the key issues affecting the optimization performance of the efficient global optimization (EGO) algorithm is the choice of infill sampling criterion. This paper therefore compares common efficient parallel infill sampling criteria. In addition, the pseudo-point approach of the pseudo-expected improvement (EI) criterion is extended to the minimizing-the-prediction (MP) criterion and the probability of improvement (PI) criterion, which mitigates the MP criterion's tendency to become trapped in local optima. An adaptive distance function is proposed to avoid the clustering of update points and to improve the global search ability of the infill sampling criteria. Seven test problems were used to evaluate these criteria and verify the effectiveness of the methods. The results show that the pseudo-point method is also applicable to the PI and MP criteria, and that the DMP and PEI criteria are the most efficient and robust. Because practical engineering optimization problems show the effects of these methods more directly, the criteria were also applied to the inverse design of the RAE2822 airfoil; there, the criteria incorporating MP showed the highest optimization efficiency.
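For reference, the EI and PI criteria discussed in this abstract have standard closed forms, and the pseudo-point idea damps a criterion near points already selected in the current batch. A minimal Python sketch (the Gaussian form and length scale of the influence function are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_min):
    """Classic EI for minimization, given the surrogate mean/std at a point."""
    sigma = np.maximum(sigma, 1e-12)  # guard against zero predicted variance
    z = (f_min - mu) / sigma
    return (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def probability_of_improvement(mu, sigma, f_min):
    """PI: probability that the prediction improves on the current best."""
    sigma = np.maximum(sigma, 1e-12)
    return norm.cdf((f_min - mu) / sigma)

def influence(x, batch, length_scale=1.0):
    """Pseudo-point influence: goes to 0 at points already chosen in the
    current batch, damping the criterion there (Gaussian correlation form)."""
    d2 = np.sum((np.atleast_2d(batch) - np.asarray(x)) ** 2, axis=1)
    return float(np.prod(1.0 - np.exp(-d2 / length_scale ** 2)))
```

Multiplying EI, PI, or MP by `influence` at each candidate yields the corresponding pseudo criterion for selecting the next batch point without re-fitting the surrogate.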



Universe ◽  
2021 ◽  
Vol 7 (12) ◽  
pp. 506
Author(s):  
Matteo Martinelli ◽  
Santiago Casas

In this review, we outline the tests of gravity that are expected to be achieved at cosmological scales in the upcoming decades. We focus mainly on constraints on phenomenologically parameterized deviations from general relativity, which allow gravity to be tested in a model-independent way, but we also review some of the expected constraints obtained with more physically motivated approaches. After reviewing the state of the art for such constraints, we outline the expected improvement that future cosmological surveys will achieve, focusing mainly on future large-scale structure and cosmic microwave background surveys but also looking into novel probes of the nature of gravity. We also highlight the necessity of overcoming accuracy issues in our theoretical predictions, issues that become relevant due to the expected sensitivity of future experiments.



2021 ◽  
Author(s):  
Harshil Saradva ◽  
Siddharth Jain ◽  
Christna Golaco ◽  
Armando Guillen ◽  
Kapil Kumar Thakur

Abstract: Sharjah National Oil Corporation (SNOC) operates four onshore fields, the largest of which has been in production since the 1980s. The majority of wells in this field have a complex network of multilaterals drilled using an underbalanced coiled tubing technique for production enhancement in the early 2000s. The scope of this project was to maximize late-life productivity from these wells by modelling their dynamic flow behaviour in a simulator and putting that theory to the test by recompleting the wells. A comprehensive multilateral wellbore flow study was undertaken using a dynamic multiphase flow simulator to predict the expected improvement in well deliverability of these mature wells, each having 4-6 laterals (Saradva et al. 2019). The well laterals have openhole fishbone completions, with one parent lateral and numerous sub-laterals reaching further into the reservoir; each lateral, 500-2000 ft long, was drilled to maximize intersection with fractures. The simulation was further complicated by complex geology, compositional modelling, condensate banking and liquid loading, with the reservoir pressure at less than 10% of original. The theory that increasing wellbore diameter by removing the tubing reduces frictional pressure loss was put to the test on two pilot wells in the 2020-21 workover campaign. The results obtained from the simulator and the actual production increment in the wells aligned within 10% accuracy. A production gain of 20-30% was observed on both wells, and the results feed a dynamic simulation predicting well performance over their remaining life. Given the uncertainties in the current PVT, lateral contribution and fluid production ratios, a broad-range sensitivity study was performed to ensure wide applicability of the study.
This instils confidence in the multiphase transient simulator for subsurface modelling, and the workflow will now be used to expand the applicability to other well candidates at field level. This creates the opportunity to maximize production and net revenue from these gas wells by reducing the impact of liquid loading. This paper discusses the detailed comparison of actual well behaviour with the simulation outcomes, which run counter to the conventional gas well development practice of utilizing velocity strings to reduce liquid loading. Two key outcomes are observed. The first is that liquid loading in multilaterals was successfully modelled in a dynamic multiphase transient simulator instead of a typical nodal analysis package, validated by a field pilot. The second is an alternative to the conventional approach of using smaller tubing sizes to alleviate liquid loading in gas wells: the high velocity achieved through wellhead compression can deliver higher productivity than a velocity string in low-pressure, late-life gas condensate wells.



2021 ◽  
Author(s):  
◽  
Mashall Aryan

The solution to many science and engineering problems involves identifying the minimum or maximum of an unknown continuous function whose evaluation incurs non-negligible costs in resources such as money, time, human attention or computational processing. In such cases, the choice of new points to evaluate is critical. A successful approach has been to choose these points by considering a distribution over plausible surfaces, conditioned on all previous points and their evaluations. In this sequential two-step strategy, known as Bayesian Optimization, a prior is first defined over possible functions and updated to a posterior in the light of available observations. Then, using this posterior, namely the surrogate model, an infill criterion is formed and used to find the next location to sample from. By far the most common prior distribution and infill criterion are the Gaussian Process and Expected Improvement, respectively. The popularity of Gaussian Processes in Bayesian optimization is partially due to their ability to represent the posterior in closed form. Nevertheless, the Gaussian Process suffers from several shortcomings that directly affect its performance: inference scales poorly with the amount of data, numerical stability degrades with the number of data points, and strong assumptions about the observation model are required, which might not be consistent with reality. These drawbacks encourage us to seek better alternatives. This thesis studies the application of Neural Networks to enhance Bayesian Optimization, and proposes several Bayesian optimization methods that use neural networks either as surrogates or in the infill criterion.
This thesis introduces a novel Bayesian Optimization method in which Bayesian Neural Networks (BNNs) are used as the surrogate, reducing the computational complexity of surrogate inference from cubic in the number of observations (for a GP) to linear. Different variations of BNNs are put into practice and inferred using Monte Carlo sampling. The results show that Monte Carlo BNN surrogates can perform better than, or at least comparably to, Gaussian Process-based Bayesian optimization methods on a set of benchmark problems. This work also develops a fast Bayesian Optimization method with an efficient surrogate-building process: the algorithm uses Bayesian Random-Vector Functional Link Networks as the surrogate. In this family of models, inference is performed on only a small subset of the model parameters, while the rest are drawn randomly from a prior. The proposed methods are tested on a set of benchmark continuous functions and hyperparameter optimization problems, and the results show they are competitive with state-of-the-art Bayesian Optimization methods. The study further proposes a novel neural network-based infill criterion, in which locations to sample from are found by minimizing the joint conditional likelihood of the new point and the parameters of a neural network. The results show that in Bayesian Optimization methods with BNN surrogates, this new infill criterion outperforms expected improvement. Finally, this thesis presents order-preserving generative models and uses them in a variational Bayesian context to infer Implicit Variational Bayesian Neural Network (IVBNN) surrogates for a new Bayesian Optimization method. This inference mechanism is more efficient and scalable than Monte Carlo sampling, and the results show that IVBNN can outperform Monte Carlo BNN in Bayesian optimization of machine learning model hyperparameters.
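The GP-plus-EI loop that this thesis takes as its baseline can be sketched in a few lines. This is a generic one-dimensional illustration using scikit-learn, not the thesis's own implementation:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def bayes_opt(f, bounds, n_init=5, n_iter=20, seed=0):
    """Minimal 1-D Bayesian optimization: GP surrogate + EI infill."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_init, 1))
    y = np.array([f(v[0]) for v in X])
    # small alpha keeps the Cholesky factorization stable near duplicates
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                                  normalize_y=True, alpha=1e-6)
    for _ in range(n_iter):
        gp.fit(X, y)
        cand = np.linspace(lo, hi, 1000).reshape(-1, 1)
        mu, sd = gp.predict(cand, return_std=True)
        sd = np.maximum(sd, 1e-12)
        z = (y.min() - mu) / sd
        ei = (y.min() - mu) * norm.cdf(z) + sd * norm.pdf(z)  # Expected Improvement
        x_next = cand[np.argmax(ei)]          # infill: maximize EI
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next[0]))
    return X[np.argmin(y), 0], y.min()

# minimize a cheap 1-D test function as a stand-in for an expensive evaluation
x_best, y_best = bayes_opt(lambda x: (x - 2.0) ** 2 + np.sin(5 * x), (0.0, 4.0))
```

The cubic cost the thesis refers to comes from the `gp.fit` step, which factorizes an n-by-n kernel matrix at every iteration; the BNN surrogates replace exactly that step.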





2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Eunjue Yi ◽  
Kwanghyoung Lee ◽  
Younggi Jung ◽  
Jae Ho Chung ◽  
Han Sung Kim ◽  
...  

Abstract: Vacuum bell therapy has become an acceptable alternative for pectus excavatum patients who want to improve their appearance while avoiding surgical correction. The aim of this study was to assess the pre-treatment characteristics of patients with pectus excavatum and to establish characteristics that can help identify ideal candidates for vacuum bell therapy. Expected improvements in thoracic indices were evaluated using pre-treatment chest computed tomography, performed before and after applying a vacuum bell device. Treatment results after one year of application were evaluated using changes in the Haller index before and after treatment. The patients were categorized into two groups according to the post-treatment change in Haller index calculated from chest radiographs: those with a change of less than 0.5 (Group 1) and those with a change of 0.5 or greater (Group 2). The pre-treatment Haller index was significantly lower in Group 1 than in Group 2 (3.1 ± 0.46 vs. 4.2 ± 1.14, respectively, p < 0.001). The expected improvement in Haller index in Group 2 was significantly higher than that in Group 1 (3.3 ± 0.60 vs. 2.8 ± 0.54, respectively, p = 0.001). The cut-off value of the expected improvement in Haller index was 0.46, with a sensitivity of 75.8% and a specificity of 83.3%. Patients who demonstrated pliability with a vacuum bell were identified as suitable candidates.
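Cut-off statistics of the kind reported here are straightforward to compute for any threshold. An illustrative sketch (the variable meanings mirror the study's definitions, but the helper and all data are invented for illustration):

```python
import numpy as np

def sens_spec(scores, labels, cutoff):
    """Sensitivity/specificity of the rule `score >= cutoff` -> predicted Group 2.
    `scores` stands in for the expected Haller-index improvement and `labels`
    for actual membership in Group 2 (post-treatment change >= 0.5)."""
    pred = np.asarray(scores) >= cutoff
    truth = np.asarray(labels).astype(bool)
    sensitivity = (pred & truth).sum() / truth.sum()    # true-positive rate
    specificity = (~pred & ~truth).sum() / (~truth).sum()  # true-negative rate
    return sensitivity, specificity
```

Sweeping `cutoff` over the observed scores and picking the value that best balances the two rates is the usual way a threshold such as 0.46 is selected.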





2021 ◽  
pp. 1-15
Author(s):  
Seyed Saeed Ahmadisoleymani ◽  
Samy Missoum

Abstract: Finite element-based crashworthiness optimization is extensively used to improve the safety of motor vehicles. However, the responses of crash simulations are characterized by a high level of numerical noise, which can hamper the blind use of surrogate-based design optimization methods. It is therefore essential to account for the noise-induced uncertainty when performing optimization. For this purpose, a surrogate referred to as Non-Deterministic Kriging (NDK) can be used: it models the noise as a non-stationary stochastic process added to a traditional deterministic kriging surrogate. Based on the NDK surrogate, this study proposes an optimization algorithm tailored to account for both epistemic uncertainty, due to the lack of data, and irreducible aleatory uncertainty, due to the simulation noise. Both variances are included within an extension of the well-known expected improvement infill criterion, referred to as the Modified Augmented Expected Improvement (MAEI). Because the proposed optimization scheme requires an estimate of the aleatory variance, it is approximated through regression kriging, which is iteratively refined. The proposed algorithm is tested on a set of analytical functions and applied to the optimization of an Occupant Restraint System (ORS) during a crash.
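The abstract does not give the MAEI formula, but the augmented-EI family it extends typically multiplies classic EI by a factor that discounts candidates whose epistemic variance is small relative to the aleatory noise. A generic sketch of that idea (Huang-style augmented EI, not the authors' exact MAEI):

```python
import numpy as np
from scipy.stats import norm

def augmented_ei(mu, s, y_best, noise_var):
    """Generic augmented EI for noisy objectives: classic EI times a factor
    that shrinks where the epistemic std `s` is small compared with the
    aleatory noise std, so sampling there would mostly re-measure noise."""
    s = np.maximum(s, 1e-12)
    z = (y_best - mu) / s
    ei = (y_best - mu) * norm.cdf(z) + s * norm.pdf(z)
    return ei * (1.0 - np.sqrt(noise_var) / np.sqrt(s**2 + noise_var))
```

With `noise_var = 0` the factor is 1 and classic EI is recovered; as the noise estimate grows, the criterion increasingly favours regions where the surrogate itself is uncertain, which is the behaviour the NDK/MAEI scheme exploits.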



2021 ◽  
Author(s):  
Arsela Prelaj ◽  
Mattia Boeri ◽  
Alessandro Robuschi ◽  
Roberto Ferrara ◽  
Claudia Proto ◽  
...  

Abstract: Introduction: In advanced Non-Small Cell Lung Cancer (NSCLC), Programmed Death Ligand 1 (PD-L1) remains the only biomarker used to select patients for immunotherapy (IO), with many limitations. Given the complex dynamics of the immune system, it is improbable that a single biomarker could profile prediction with high accuracy. A promising way to cope with this complexity is provided by Artificial Intelligence (AI) and Machine Learning (ML), techniques able to analyse and interpret large multifactorial datasets. The present study aims to use AI tools to improve response and efficacy prediction in NSCLC patients treated with IO. Methods: Real-world data (clinical data, PD-L1, histology, molecular data, lab tests) and the blood microRNA signature classifier (MSC), which includes 24 different microRNAs, were used. Patients were divided into responders (R), who obtained a complete or partial response or stable disease as best response, and non-responders (NR), who experienced progressive or hyperprogressive disease or died before the first radiologic evaluation. The same data were also used to determine whether each patient's overall survival was likely to be shorter or longer than 24 months from the start of IO. A literature review and a forward feature selection technique were used to extract a specific subset of the patients' data. To develop the final predictive model, different ML methods were tested: Feedforward Neural Network (FFNN), Logistic Regression (LR), K-Nearest Neighbours (K-NN), Support Vector Machine (SVM), and Random Forest (RF). Results: 200 patients were included; 164 of them (those with PD-L1 data available) were considered in the model, of whom 73 (44.5%) were R and 91 (55.5%) NR.
Overall, the best model was the LR and included five features: two clinical features, the ECOG performance status and the IO line of therapy; one tissue feature, PD-L1 tumour expression; and two blood features, the MSC test and the neutrophil-to-lymphocyte ratio (NLR). The model predicting R/NR achieved accuracy ACC = 0.756, F1 score F1 = 0.722, and area under the ROC curve AUC = 0.82, whereas PD-L1 alone gave ACC = 0.655. The accuracies of the ML model when excluding features were as follows: without the PD-L1 value, ACC = 0.726; without MSC, ACC = 0.750; and without both PD-L1 and MSC (i.e., considering only clinical features), ACC = 0.707. At data cut-off (Nov 2020), median overall survival (mOS) for R was 38.5 months (95% CI 23.9-53.1) vs 3.8 months (95% CI 2.8-4.7) for NR, with p < 0.001. LR was also the best-performing model in predicting patients with long survival (24-month OS), achieving ACC = 0.839, F1 = 0.908, and AUC = 0.87. Conclusions: The results suggest that the integration of multifactorial data through ML techniques is a useful tool to improve the personalized selection of NSCLC patients as candidates for IO; compared with PD-L1 alone, the expected improvement was around 10%. In particular, the model shows that the higher the ECOG, NLR value, IO line, and MSC test risk level, the lower the response, and the higher the PD-L1, the higher the response. Considering the difference in survival between the R and NR groups, these results suggest that the model can also be used to indirectly predict survival; moreover, a second model was able to predict long-surviving patients with good accuracy.
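The study's modelling pipeline (logistic regression over five selected features, evaluated with ACC/F1/AUC) can be illustrated on synthetic data. Everything below, including the feature encodings, coefficients, and generated labels, is invented for illustration and is not the study's data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
# synthetic stand-ins for the five selected features (encodings are invented)
X = np.column_stack([
    rng.integers(0, 3, n),        # ECOG performance status (0-2)
    rng.integers(1, 4, n),        # IO line of therapy (1-3)
    rng.uniform(0, 100, n),       # PD-L1 tumour expression (%)
    rng.integers(0, 3, n),        # MSC test risk level (0-2)
    rng.uniform(1, 10, n),        # neutrophil-to-lymphocyte ratio (NLR)
])
# response depends on features in the directions the study reports:
# higher ECOG / IO line / MSC / NLR -> lower response; higher PD-L1 -> higher
logit = (-0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.03 * X[:, 2]
         - 0.6 * X[:, 3] - 0.2 * X[:, 4] + 2.0)
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = clf.predict(X_te)
acc = accuracy_score(y_te, pred)
f1 = f1_score(y_te, pred)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```

On real data the held-out split would be replaced by the study's validation scheme; the point of the sketch is only the shape of the pipeline, not the reported numbers.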



2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Rahul Rao ◽  
Jennifer Carpena-Núñez ◽  
Pavel Nikolaev ◽  
Michael A. Susner ◽  
Kristofer G. Reyes ◽  
...  

Abstract: The diameters of single-walled carbon nanotubes (SWCNTs) are directly related to their electronic properties, making diameter control highly desirable for a number of applications. Here we utilized a machine learning planner based on the Expected Improvement decision policy that mapped regions where growth was feasible vs. not feasible and further optimized synthesis conditions to selectively grow SWCNTs within a narrow diameter range. We maximized growth in two ranges corresponding to Raman radial breathing mode frequencies around 265 and 225 cm⁻¹ (SWCNT diameters around 0.92 and 1.06 nm, respectively), and our planner found optimal synthesis conditions within a hundred experiments. Extensive post-growth characterization showed high selectivity in the optimized growth experiments compared with the unoptimized ones. Remarkably, our planner revealed significantly different synthesis conditions for maximizing the two diameter ranges despite their relative closeness. Our study shows the promise of machine learning-driven diameter optimization and paves the way towards chirality-controlled SWCNT growth.
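The abstract does not detail how the feasibility map was combined with the EI policy; one common heuristic is to weight EI by a model's predicted probability of feasibility at each candidate condition. An illustrative sketch under that assumption (not the authors' planner):

```python
import numpy as np
from scipy.stats import norm

def constrained_ei(mu, sigma, y_best, p_feasible):
    """EI (maximization form) weighted by the predicted probability that
    growth is feasible at each candidate synthesis condition."""
    sigma = np.maximum(sigma, 1e-12)
    z = (mu - y_best) / sigma
    ei = (mu - y_best) * norm.cdf(z) + sigma * norm.pdf(z)
    return ei * p_feasible

# score three hypothetical candidate conditions and pick the next experiment
mu = np.array([0.2, -0.1, 0.0])          # surrogate mean of the growth objective
sd = np.array([0.3, 0.5, 0.1])           # surrogate std at each candidate
scores = constrained_ei(mu, sd, y_best=0.1,
                        p_feasible=np.array([0.9, 0.2, 0.95]))
next_idx = int(np.argmax(scores))        # condition to run next
```

The weighting steers the planner away from conditions that look promising but are unlikely to yield growth at all, which matches the feasible/infeasible mapping the abstract describes.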


