An Advanced and Robust Ensemble Surrogate Model: Extended Adaptive Hybrid Functions

2018 ◽  
Vol 140 (4) ◽  
Author(s):  
Xueguan Song ◽  
Liye Lv ◽  
Jieling Li ◽  
Wei Sun ◽  
Jie Zhang

Hybrid or ensemble surrogate models developed in recent years have shown better accuracy than individual surrogate models. However, it is still challenging for hybrid surrogate models to consistently meet the accuracy, robustness, and efficiency requirements of many specific problems. In this paper, an advanced hybrid surrogate model, named extended adaptive hybrid functions (E-AHF), is developed, which consists of two major components. The first part automatically filters out the poorly performing individual models and retains the appropriate ones based on the leave-one-out (LOO) cross-validation (CV) error. The second part calculates adaptive weight factors for each individual surrogate model based on the baseline model and the estimated mean square error of a Gaussian process prediction. A large set of numerical experiments consisting of 40 test problems ranging from one to 16 dimensions is used to verify the accuracy and robustness of the proposed model. The results show that both the accuracy and the robustness of E-AHF are remarkably improved compared with the individual surrogate models and multiple benchmark hybrid surrogate models. The computational time of E-AHF is also considerably reduced compared with other hybrid models.
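As a rough illustration of the two-stage idea in this abstract (not the authors' code), the sketch below filters candidate surrogates by LOO cross-validation error and then forms simple inverse-error weights; the toy candidates, the 1.5x filtering threshold, and the weighting rule are assumptions.

```python
import numpy as np

def loo_errors(models, X, y):
    """Leave-one-out RMSE for each candidate surrogate (fit_predict functions)."""
    errs = []
    for fit_predict in models:
        sq = []
        for i in range(len(X)):
            mask = np.arange(len(X)) != i
            pred = fit_predict(X[mask], y[mask], X[i:i + 1])
            sq.append((pred[0] - y[i]) ** 2)
        errs.append(np.sqrt(np.mean(sq)))
    return np.array(errs)

def adaptive_weights(errs, threshold=1.5):
    """Stage 1: drop models whose LOO error exceeds threshold x best (assumed rule).
    Stage 2: inverse-error weights for the survivors (simplified)."""
    keep = errs <= threshold * errs.min()
    w = np.where(keep, 1.0 / errs, 0.0)
    return w / w.sum()

# Toy candidate surrogates: linear and quadratic least-squares fits.
def linear(Xtr, ytr, Xte):
    A = np.c_[Xtr, np.ones(len(Xtr))]
    c, *_ = np.linalg.lstsq(A, ytr, rcond=None)
    return np.c_[Xte, np.ones(len(Xte))] @ c

def quadratic(Xtr, ytr, Xte):
    A = np.c_[Xtr ** 2, Xtr, np.ones(len(Xtr))]
    c, *_ = np.linalg.lstsq(A, ytr, rcond=None)
    return np.c_[Xte ** 2, Xte, np.ones(len(Xte))] @ c

X = np.linspace(0, 1, 8)[:, None]
y = np.sin(3 * X[:, 0])               # hump-shaped test response
errs = loo_errors([linear, quadratic], X, y)
w = adaptive_weights(errs)            # quadratic earns the larger weight here
```

The ensemble prediction would then be the weight-vector dot product of the individual predictions.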

2020 ◽  
Author(s):  
Marcelo Damasceno ◽  
Hélio Ribeiro Neto ◽  
Tatiane Costa ◽  
Aldemir Cavalini Júnior ◽  
Ludimar Aguiar ◽  
...  

Abstract Fluid-structure interaction modeling tools based on computational fluid dynamics (CFD) produce valuable results that can be used in the design of submerged structures. However, the computational cost of the simulations associated with the design of submerged offshore structures is high, and there are no high-performance platforms devoted to the analysis and optimization of these structures using CFD techniques. In this context, this work presents a computational tool dedicated to the construction of Kriging surrogate models that represent the time-domain force responses of submerged risers. The force responses obtained from high-cost computational simulations are used as outputs for training and validating the surrogate models. Different excitations are applied to the riser to evaluate the representativeness of the obtained Kriging surrogate model, and a similar investigation is performed by changing the number of samples and the total time used for training. The present methodology can be used to perform the dynamic analysis of different submerged structures at low computational cost: instead of solving the equations of motion associated with the fluid-structure system, a Kriging surrogate model is used. A significant reduction in computational time is expected, which allows different analyses and optimization procedures to be carried out quickly and efficiently in the design of this type of structure.
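A minimal sketch of this train-and-validate loop, assuming scikit-learn's Gaussian-process regressor as the Kriging implementation; the toy "CFD" force response below is an assumption for illustration and is far smoother than a real riser response.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

def cfd_force(t, amp):
    # stand-in for the high-cost fluid-structure simulation
    return amp * np.sin(2 * np.pi * t / 3) * np.exp(-0.3 * t)

# training samples: (time, excitation amplitude) -> force
Xtr = rng.uniform([0.0, 0.5], [3.0, 2.0], size=(60, 2))
ytr = cfd_force(Xtr[:, 0], Xtr[:, 1])

gp = GaussianProcessRegressor(ConstantKernel() * RBF([1.0, 1.0]),
                              normalize_y=True, n_restarts_optimizer=3)
gp.fit(Xtr, ytr)

# validation against excitations the surrogate has not seen
Xte = rng.uniform([0.0, 0.5], [3.0, 2.0], size=(20, 2))
pred, std = gp.predict(Xte, return_std=True)
rmse = np.sqrt(np.mean((pred - cfd_force(Xte[:, 0], Xte[:, 1])) ** 2))
```

Once trained, calls to `gp.predict` replace the expensive solver in downstream analyses.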


2014 ◽  
Vol 136 (3) ◽  
Author(s):  
Jie Zhang ◽  
Souma Chowdhury ◽  
Ali Mehmani ◽  
Achille Messac

This paper investigates the characterization of the uncertainty in the prediction of surrogate models. In the practice of engineering, where predictive models are pervasively used, knowledge of the level of modeling error in any region of the design space is uniquely helpful for design exploration and model improvement. The lack of methods that can explore the spatial variation of surrogate error levels across a wide variety of surrogates (i.e., model-independent methods) leaves an important gap in our ability to perform design domain exploration. We develop a novel framework, called domain segmentation based on uncertainty in the surrogate (DSUS), to segregate the design domain based on the level of local errors. The errors in the surrogate estimation are classified into physically meaningful classes based on the user's understanding of the system and/or the accuracy requirements for the concerned system analysis. The leave-one-out cross-validation technique is used to quantify the local errors. A support vector machine (SVM) is implemented to determine the boundaries between error classes and to classify any new design point into the pertinent error class. We also investigate the effectiveness of the leave-one-out cross-validation technique in providing a local error measure, through comparison with actual local errors. The utility of the DSUS framework is illustrated using two different surrogate modeling methods: (i) the Kriging method and (ii) the adaptive hybrid functions (AHF). The DSUS framework is applied to a series of standard test problems and engineering problems. In these case studies, the DSUS framework is observed to provide reasonable accuracy in classifying the design space based on error levels. More than 90% of the test points are accurately classified into the appropriate error classes.
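The classification step can be sketched as follows, assuming scikit-learn's SVC; the synthetic error field, the class thresholds, and the RBF-kernel settings are all assumptions standing in for real LOO errors and user-defined classes.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(120, 2))   # design points in a 2D domain

# Pretend these are LOO cross-validation errors of some surrogate:
# larger near the domain corners (purely illustrative).
loo_err = 0.05 + 0.4 * np.maximum(np.abs(X[:, 0]), np.abs(X[:, 1])) ** 4

# Bin errors into user-defined classes: 0 = low, 1 = medium, 2 = high.
classes = np.digitize(loo_err, [0.1, 0.25])

# SVM learns the boundaries between error classes in the design space.
svm = SVC(kernel="rbf", C=10.0).fit(X, classes)
acc = svm.score(X, classes)                 # sanity check on training data
center_class = svm.predict([[0.0, 0.0]])[0] # classify a new design point
```

Any new design point is then assigned an error level by `svm.predict`, which is the segmentation the framework uses.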


2017 ◽  
Vol 34 (2) ◽  
pp. 499-547 ◽  
Author(s):  
Eduardo Krempser ◽  
Heder S. Bernardino ◽  
Helio J.C. Barbosa ◽  
Afonso C.C. Lemonge

Purpose The purpose of this paper is to propose and analyze the use of local surrogate models to improve differential evolution's (DE) overall performance on computationally expensive problems. Design/methodology/approach DE is a popular metaheuristic for solving optimization problems, with several variants available in the literature. Here, offspring are generated by means of different variants, and only the best one, according to the surrogate model, is evaluated by the simulator. The problem of weight minimization of truss structures is used to assess DE's performance when different metamodels are used. The surrogate-assisted DE techniques proposed here are also compared to common DE variants. Six different structural optimization problems are studied, involving continuous as well as discrete sizing design variables. Findings The use of a local, similarity-based surrogate model improves the relative performance of DE for most test problems, especially when using r-nearest neighbors with r = 0.001 and a DE parameter F = 0.7. Research limitations/implications The proposed methods have no particular limitations and can be applied to solve constrained optimization problems in general, and structural ones in particular. Practical implications The proposed techniques can be used to solve real-world problems in engineering, and their performance is examined using structural engineering problems. Originality/value The main contributions of this work are to introduce and evaluate additional local surrogate models; to evaluate the effect of the value of DE's parameter F (which scales the differences between components of candidate solutions) on each surrogate model; and to perform a more complete set of experiments covering continuous as well as discrete design variables.
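The pre-screening idea (offspring proposed by several DE variants, ranked by a similarity-based surrogate, with only the winner evaluated exactly) can be sketched as below; the sphere objective, the 1-nearest-neighbor surrogate, and the population settings are illustrative assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(2)

def expensive(x):
    # stand-in for the costly structural simulation (sphere function)
    return float(np.sum(x ** 2))

def nn_surrogate(archive_X, archive_f, x):
    """1-nearest-neighbor prediction from previously evaluated points."""
    d = np.linalg.norm(archive_X - x, axis=1)
    return archive_f[np.argmin(d)]

dim, F, CR = 5, 0.7, 0.9
pop = rng.uniform(-5, 5, size=(20, dim))
fit = np.array([expensive(x) for x in pop])
f0 = fit.min()

for _ in range(100):
    i = rng.integers(len(pop))
    a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
    best = pop[np.argmin(fit)]
    # two candidate mutation variants: DE/rand/1 and DE/best/1
    trials = [a + F * (b - c), best + F * (a - b)]
    # binomial crossover with the parent
    trials = [np.where(rng.random(dim) < CR, t, pop[i]) for t in trials]
    # surrogate pre-screening: only the most promising trial is evaluated
    preds = [nn_surrogate(pop, fit, t) for t in trials]
    t = trials[int(np.argmin(preds))]
    f = expensive(t)
    if f < fit[i]:
        pop[i], fit[i] = t, f

best_f = fit.min()
```

The key saving is that each iteration costs one exact evaluation regardless of how many variants propose offspring.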


Author(s):  
J. Magelin Mary ◽  
Chitra K. ◽  
Y. Arockia Suganthi

Image processing, in general, involves applying signal processing to an input image, for example to isolate the individual color planes of the image, and it plays an important role in image analysis and computer vision. This paper compares the efficiency of two approaches to finding breast cancer in medical image processing. The fundamental goal is to apply image mining to medical image handling using a grouping rule generated by a genetic algorithm. The extracted border pixels are treated as the population strings for a genetic algorithm (GA) and for Ant Colony Optimization (ACO), which search for the optimum value among the border pixels. We also compare the cost of ACO and GA and attempt to determine which one gives the better solution for identifying an affected area in a medical image, based on computational time.


Author(s):  
Soumya Ranjan Nayak ◽  
S Sivakumar ◽  
Akash Kumar Bhoi ◽  
Gyoo-Soo Chae ◽  
Pradeep Kumar Mallick

Graphical processing units (GPUs) have gained popularity among researchers in the field of decision making and knowledge discovery systems. However, most earlier studies suffer from limitations in GPU memory utilization, computational time, and accuracy. The main contribution of this paper is a novel algorithm, the Mixed Mode Database Miner (MMDBM) classifier, implemented with multithreading concepts on a large number of attributes. The proposed method uses the quicksort algorithm in GPU parallel computing to overcome these limitations, and applies a dynamic rule generation approach for constructing the decision tree from the predicted rules. The implementation results are compared with both SLIQ and MMDBM using Java and GPU, with the acceleration ratio computed on the BP dataset. The primary objective of this work is to improve performance with less processing time. The results are also analyzed using various numbers of threads in GPU mining on eight datasets from the UCI Machine Learning repository. The proposed MMDBM algorithm has been validated on these eight datasets, with accuracies of 91.3% on diabetes, 89.1% on breast cancer, 96.6% on iris, 89.9% on labor, 95.4% on vote, 89.5% on credit card, 78.7% on supermarket, and 78.7% on BP, while also requiring less computational time on the given datasets. The outcome of this work will help the research community develop more effective multithreaded GPU solutions for handling large sets of data in minimal processing time. Therefore, this can be considered a more reliable and precise method for GPU computing.


Author(s):  
Kevin Cremanns ◽  
Dirk Roos ◽  
Simon Hecker ◽  
Peter Dumstorff ◽  
Henning Almstedt ◽  
...  

The demand for energy is increasingly covered by renewable energy sources. As a consequence, conventional power plants need to respond to power fluctuations in the grid much more frequently than in the past. Additionally, steam turbine components are expected to deal with high loads due to this new kind of energy management. Changes in steam temperature caused by rapid load changes or fast starts lead to high levels of thermal stress in the turbine components. Therefore, today's energy market requires highly efficient power plants which can be operated under flexible conditions. In order to meet current and future market requirements, turbine components are optimized with respect to multi-dimensional target functions. The development of steam turbine components is a complex process involving different engineering disciplines and time-consuming calculations. Currently, optimization is used most frequently for subtasks within the individual disciplines. A holistic approach requires highly efficient calculation methods which are able to deal with high-dimensional and multidisciplinary systems. One approach to this problem is the use of surrogate models based on mathematical methods, e.g., polynomial regression or the more sophisticated Kriging. With proper training, these methods can deliver results which are nearly as accurate as the full model calculations themselves, in a fraction of the time. Surrogate models have to face different requirements: the underlying outputs can be, for example, highly nonlinear, noisy, or discontinuous. In addition, the surrogate models need to be constructed from a large number of variables, of which often only a few parameters are important. In order to achieve good prognosis quality, only the most important parameters should be used to create the surrogate models; unimportant parameters do not improve the prognosis quality but add noise to the approximation result. 
Another challenge is to achieve good results with as little design information as possible. This is important because in practice the necessary information is usually only obtained through very time-consuming simulations. This paper presents an efficient optimization procedure using a self-developed hybrid surrogate model consisting of moving least squares and anisotropic Kriging. With its maximized prognosis quality, it is capable of handling the challenges mentioned above, which enables time-efficient optimization. Additionally, a preceding sensitivity analysis identifies the most important parameters with respect to the objectives, leading to fast convergence of the optimization and a more accurate surrogate model. The method is demonstrated on the optimization of a labyrinth shaft seal used in steam turbines. Within the optimization, the opposing objectives of minimizing the leakage mass flow and minimizing the total enthalpy increase due to friction are considered.
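The moving-least-squares half of such a hybrid can be sketched as follows: each query point gets its own weighted linear fit, with Gaussian weights whose per-dimension bandwidths mimic the anisotropic idea. The test function, bandwidths, and linear basis are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def mls_predict(Xtr, ytr, xq, bandwidths):
    """Moving least squares: locally weighted linear fit around query xq."""
    w = np.exp(-np.sum(((Xtr - xq) / bandwidths) ** 2, axis=1))
    A = np.c_[np.ones(len(Xtr)), Xtr]          # linear basis [1, x1, x2]
    W = np.diag(w)
    coef = np.linalg.solve(A.T @ W @ A, A.T @ W @ ytr)
    return coef @ np.r_[1.0, xq]

rng = np.random.default_rng(6)
Xtr = rng.uniform(-1, 1, size=(80, 2))
ytr = np.sin(3 * Xtr[:, 0]) + 0.2 * Xtr[:, 1]  # assumed smooth response

xq = np.array([0.3, -0.4])
# anisotropic bandwidths: tighter along the more nonlinear direction
pred = mls_predict(Xtr, ytr, xq, bandwidths=np.array([0.2, 0.6]))
true = np.sin(0.9) + 0.2 * (-0.4)
```

In the paper's hybrid, such a local trend would be combined with anisotropic Kriging rather than used alone.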


Author(s):  
Mahyar Asadi ◽  
Ghazi Alsoruji

Weld sequence optimization, i.e., determining the best (and worst) welding sequence for a set of workpieces, is a very common problem in welding design. The solution of such a combinatorial problem is limited by available resources. Although fast simulation models exist that support sequencing design, the process still takes a long time because of the many possible combinations, e.g., millions in a welded structure involving 10 passes. It is not feasible to choose the optimal sequence by evaluating all possible combinations. This paper therefore employs surrogate modeling, which partially explores the design space and constructs an approximation model from the solutions of the expensive simulation model for some combinations, mimicking the behavior of the simulation model as closely as possible at a much lower computational time and cost. This surrogate model can then approximate the behavior of the remaining combinations and find the best (and worst) sequence in terms of distortion. The technique is developed and tested on a simple panel structure with four weld passes, but it can essentially be generalized to many weld passes. A comparison between the results of the surrogate model and a full transient FEM analysis of all possible combinations shows the accuracy of the algorithm.
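For four passes the combinatorial setting is small enough to show end to end; the sketch below fits a surrogate on half of the 24 possible orders and ranks them all. The distortion model standing in for the transient FEM run, and the random-forest surrogate choice, are assumptions for illustration.

```python
import numpy as np
from itertools import permutations
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
seqs = np.array(list(permutations(range(4))))    # all 24 possible weld orders

def fem_distortion(seq):
    # stand-in for the expensive transient FEM analysis; assumed:
    # distortion grows when nearby passes are welded consecutively
    return sum(1.0 / (abs(seq[k] - seq[k + 1]) + 0.5) for k in range(3))

truth = np.array([fem_distortion(s) for s in seqs])

# "expensive" evaluations for only half the design space
train = rng.choice(len(seqs), size=12, replace=False)
model = RandomForestRegressor(200, random_state=0)
model.fit(seqs[train], truth[train])

pred = model.predict(seqs)                # cheap ranking of every sequence
best_pred = seqs[np.argmin(pred)]         # surrogate's recommended order
```

With 10 passes the same pattern applies, except the full enumeration of `truth` becomes infeasible and only `pred` is available for ranking.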


2015 ◽  
Vol 27 (6) ◽  
pp. 1186-1222 ◽  
Author(s):  
Bryan P. Tripp

Because different parts of the brain have rich interconnections, it is not possible to model small parts realistically in isolation. However, it is also impractical to simulate large neural systems in detail. This article outlines a new approach to multiscale modeling of neural systems that involves constructing efficient surrogate models of populations. Given a population of neuron models with correlated activity and with specific, nonrandom connections, a surrogate model is constructed in order to approximate the aggregate outputs of the population. The surrogate model requires less computation than the neural model, but it has a clear and specific relationship with the neural model. For example, approximate spike rasters for specific neurons can be derived from a simulation of the surrogate model. This article deals specifically with neural engineering framework (NEF) circuits of leaky integrate-and-fire point neurons. Weighted sums of spikes are modeled by interpolating over latent variables in the population activity, and linear filters operate on Gaussian random variables to approximate spike-related fluctuations. It is found that the surrogate models can often closely approximate network behavior with orders-of-magnitude reduction in computational demands, although there are certain systematic differences between the spiking and surrogate models. Since individual spikes are not modeled, some simulations can be performed with much longer step sizes (e.g., 20 ms). Possible extensions to non-NEF networks and to more complex neuron models are discussed.
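The fluctuation part of this idea can be sketched minimally: spike-related variability is replaced by Gaussian noise passed through a linear (here first-order low-pass) filter and added to the latent rate signal. The time step, filter constant, noise gain, and constant drive are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
dt, tau = 0.001, 0.005        # 1 ms step, 5 ms synaptic-style filter (assumed)
n_steps = 5000
rate_signal = 1.0             # constant latent drive, purely illustrative

# Gaussian noise through a first-order low-pass filter approximates
# the spike-related fluctuations without simulating individual spikes.
noise = rng.normal(0.0, 1.0, n_steps)
filtered = np.empty(n_steps)
state = 0.0
alpha = dt / tau
for k in range(n_steps):
    state += alpha * (noise[k] - state)
    filtered[k] = state

surrogate_output = rate_signal + 0.1 * filtered
```

Because no spikes are generated, the step size is limited only by the filter dynamics, which is what permits the much longer steps mentioned in the abstract.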


Author(s):  
Shivali Parkhedkar ◽  
Shaveri Vairagade ◽  
Vishakha Sakharkar ◽  
Bharti Khurpe ◽  
Arpita Pikalmunde ◽  
...  

In our proposed work we take up the challenge of recognizing handwritten words. The handwritten document is scanned using a scanner, and the image of the scanned document is processed by the program. Each character in the word is isolated, and each isolated character is subjected to feature extraction using Gabor features. The extracted features are passed through a KNN classifier, and finally we obtain the recognized word. Character recognition is a process by which a computer recognizes handwritten characters and turns them into a format a user can understand. Computer-based pattern recognition is a method that involves many sub-processes. In today's environment, character recognition has gained much attention within the field of pattern recognition. Handwritten character recognition is useful in cheque processing in banks, form processing systems, and many other applications. Character recognition is one of the popular and challenging areas of research, and in the future it may help create a paperless environment. The novelty of this approach is to achieve better accuracy and reduced computational time for the recognition of handwritten characters. The proposed method extracts the geometric features of the character contour. These features are based on the basic line types that form the character skeleton, and the system produces a feature vector as its output. The feature vectors generated from a training set were then used to train a pattern recognition engine based on neural networks so that the system could be benchmarked. The proposed algorithm extracts the different line types that form a specific character and also considers the point features of the same. The feature extraction technique was tested using a neural network trained with the feature vectors obtained from the proposed method.
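The isolate-extract-classify pipeline can be sketched in a few lines; as a simplification, raw pixel intensities stand in for the Gabor features and scikit-learn's digits set stands in for the scanned handwriting, so this is an assumed setup, not the paper's data or features.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Isolated character images -> feature vectors -> KNN classifier.
X, y = load_digits(return_X_y=True)          # 8x8 character images, flattened
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(Xtr, ytr)
acc = knn.score(Xte, yte)                    # held-out recognition accuracy
```

Replacing the raw-pixel features with a Gabor filter bank response, as the paper does, keeps the classifier side of this sketch unchanged.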


Energies ◽  
2021 ◽  
Vol 14 (24) ◽  
pp. 8456
Author(s):  
Icaro Figueiredo Vilasboas ◽  
Victor Gabriel Sousa Fagundes dos Santos ◽  
Armando Sá Ribeiro Júnior ◽  
Julio Augusto Mendes da Silva

Global optimization of industrial plant configurations that use organic Rankine cycles (ORCs) to recover heat is becoming attractive. This kind of optimization requires both structural and parametric decisions to be made; the number of variables is usually high, and some of them generate disruptive responses. Surrogate models can be developed to replace the main components of the complex models, reducing the computational requirements. This paper aims to create, evaluate, and compare surrogates built to replace a complex thermodynamic-economic code used to estimate the specific cost (US$/kWe) and efficiency of optimized ORCs. The ORCs are optimized under different heat source conditions with respect to their operational state, configuration, working fluid, and thermal fluid, aiming at minimal specific cost. Costs of 1449.05, 1045.24, and 638.80 US$/kWe and energy efficiencies of 11.1%, 10.9%, and 10.4% were found for 100, 1000, and 50,000 kWt of heat transfer rate at an average temperature of 345 °C. The R-squared values varied from 0.96 to 0.99, while the share of results with error lower than 5% varied from 75% to 88%, depending on the surrogate model (random forest or polynomial regression) and output (specific cost or efficiency). The computational time was reduced by more than 99.9% for all surrogates.
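The two-surrogate comparison can be sketched as below, scoring each model by R-squared and by the share of predictions within 5% error, as in the abstract; the synthetic cost curve standing in for the thermodynamic-economic code, and all its coefficients, are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(4)
# inputs: log10 of heat-transfer rate [kWt] and source temperature [deg C]
X = np.c_[rng.uniform(2.0, 4.7, 400), rng.uniform(250.0, 450.0, 400)]
# assumed response: specific cost falls with plant scale, mild temperature
# effect, plus noise (stands in for the complex code's output)
cost = (4000.0 / 10 ** (0.25 * X[:, 0])
        + 2.0 * (400.0 - X[:, 1])
        + rng.normal(0.0, 20.0, 400))

Xtr, Xte, ytr, yte = train_test_split(X, cost, test_size=0.3, random_state=0)

surrogates = {
    "random forest": RandomForestRegressor(300, random_state=0),
    "polynomial": make_pipeline(PolynomialFeatures(3), LinearRegression()),
}
scores, within5 = {}, {}
for name, m in surrogates.items():
    pred = m.fit(Xtr, ytr).predict(Xte)
    scores[name] = m.score(Xte, yte)                              # R-squared
    within5[name] = np.mean(np.abs(pred - yte) / np.abs(yte) < 0.05)
```

Either fitted surrogate can then stand in for the expensive code inside the plant-level global optimization loop.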

