A Novel Active Optimization Approach for Rapid and Efficient Design Space Exploration Using Ensemble Machine Learning

Author(s):  
Opeoluwa Owoyele ◽  
Pinaki Pal

Abstract In this work, a novel design optimization technique based on active learning, which involves dynamic exploration and exploitation of the design space of interest using an ensemble of machine learning algorithms, is presented. In this approach, a hybrid methodology is employed, incorporating an explorative weak learner (regularized basis function model) that fits high-level information about the response surface and an exploitative strong learner (based on a committee machine) that fits finer details around promising regions identified by the weak learner. For each design iteration, an aristocratic approach is used to select a set of nominees, where points that meet a threshold merit value as predicted by the weak learner are selected for expensive function evaluation. In addition to these points, the global optimum as predicted by the strong learner is also evaluated to enable rapid convergence to the actual global optimum once the most promising region has been identified by the optimizer. This methodology is first tested by applying it to the optimization of a two-dimensional multi-modal surface. The performance of the new active learning approach is compared with traditional global optimization methods, namely the micro-genetic algorithm (μGA) and particle swarm optimization (PSO). It is demonstrated that the new optimizer is able to reach the global optimum much faster, with significantly fewer function evaluations. Subsequently, the new optimizer is also applied to a complex internal combustion (IC) engine combustion optimization case with nine control parameters related to fuel injection, initial thermodynamic conditions, and in-cylinder flow. It is again found that the new approach significantly lowers the number of function evaluations needed to reach the optimum design configuration (by up to 80%) compared to particle swarm and genetic algorithm-based optimization techniques.

2020 ◽  
Vol 143 (3) ◽  
Author(s):  
Opeoluwa Owoyele ◽  
Pinaki Pal

Abstract In this work, a novel design optimization technique based on active learning, which involves dynamic exploration and exploitation of the design space of interest using an ensemble of machine learning algorithms, is presented. In this approach, a hybrid methodology is employed, incorporating an explorative weak learner (regularized basis function model) that fits high-level information about the response surface and an exploitative strong learner (based on a committee machine) that fits finer details around promising regions identified by the weak learner. For each design iteration, an aristocratic approach is used to select a set of nominees, where points that meet a threshold merit value as predicted by the weak learner are selected for evaluation. In addition to these points, the global optimum as predicted by the strong learner is also evaluated to enable rapid convergence to the actual global optimum once the most promising region has been identified by the optimizer. This methodology is first tested by applying it to the optimization of a two-dimensional multi-modal surface and, subsequently, to a complex internal combustion (IC) engine combustion optimization case with nine control parameters related to fuel injection, initial thermodynamic conditions, and in-cylinder flow. It is found that the new approach significantly lowers the number of function evaluations needed to reach the optimum design configuration (by up to 80%) when compared to conventional techniques such as particle swarm optimization and genetic algorithms.
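Below is a minimal sketch of the active-learning loop described in the abstract, assuming a hypothetical expensive objective (`evaluate_design`), a kernel ridge model as the explorative weak learner, and a two-member regressor committee as the strong learner; these model choices, the merit threshold, and all settings are illustrative stand-ins rather than the authors' exact implementation.

```python
# Hedged sketch of the active-learning optimizer: the weak learner nominates
# points above a merit threshold, the strong learner's predicted optimum is
# also evaluated, and both feed the next iteration's training set.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor

def evaluate_design(x):
    # Hypothetical stand-in for the expensive simulation-based objective.
    return np.sin(3 * x[0]) * np.cos(2 * x[1]) + 0.1 * np.sum(x ** 2)

rng = np.random.default_rng(0)
bounds = np.array([[-2.0, 2.0], [-2.0, 2.0]])
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(10, 2))     # initial designs
y = np.array([evaluate_design(x) for x in X])

for it in range(20):
    # Explorative weak learner: regularized radial-basis-function model.
    weak = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.5).fit(X, y)
    # Exploitative strong learner: a small committee of regressors.
    committee = [RandomForestRegressor(n_estimators=200, random_state=it).fit(X, y),
                 GradientBoostingRegressor(random_state=it).fit(X, y)]

    cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(2000, 2))
    merit = weak.predict(cand)                              # lower = more promising
    nominees = cand[merit <= np.quantile(merit, 0.05)][:5]  # threshold-based picks

    strong_pred = np.mean([m.predict(cand) for m in committee], axis=0)
    best_guess = cand[np.argmin(strong_pred)]               # strong learner's optimum

    new_X = np.vstack([nominees, best_guess])
    new_y = np.array([evaluate_design(x) for x in new_X])
    X, y = np.vstack([X, new_X]), np.concatenate([y, new_y])

print("best design:", X[np.argmin(y)], "objective:", y.min())
```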


Author(s):  
Conner Sharpe ◽  
Clinton Morris ◽  
Benjamin Goldsberry ◽  
Carolyn Conner Seepersad ◽  
Michael R. Haberman

Modern design problems present both opportunities and challenges, including multifunctionality, high dimensionality, highly nonlinear multimodal responses, and multiple levels or scales. These factors are particularly important in materials design problems and make it difficult for traditional optimization algorithms to search the space effectively, and designer intuition is often insufficient in problems of this complexity. Efficient machine learning algorithms can map complex design spaces to help designers quickly identify promising regions of the design space. In particular, Bayesian network classifiers (BNCs) have been demonstrated as effective tools for top-down design of complex multilevel problems. The most common instantiations of BNCs assume that all design variables are independent. This assumption reduces computational cost, but can limit accuracy especially in engineering problems with interacting factors. The ability to learn representative network structures from data could provide accurate maps of the design space with limited computational expense. Population-based stochastic optimization techniques such as genetic algorithms (GAs) are ideal for optimizing networks because they accommodate discrete, combinatorial, and multimodal problems. Our approach utilizes GAs to identify optimal networks based on limited training sets so that future test points can be classified as accurately and efficiently as possible. This method is first tested on a common machine learning data set, and then demonstrated on a sample design problem of a composite material subjected to a planar sound wave.
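A rough sketch of the structure-search idea, under stated assumptions: a genetic algorithm evolves bit strings that switch candidate class-to-feature edges on or off, and each candidate structure is scored here by a crude mutual-information proxy in place of a real BNC score (such as cross-validated classification accuracy or a BIC-type criterion); the synthetic data and scoring function are hypothetical.

```python
# Hedged sketch: GA search over candidate network structures for a Bayesian
# network classifier. Each bit enables one candidate class->feature edge.
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(1)
X = rng.integers(0, 3, size=(300, 8))            # discretized design variables
y = (X[:, 0] + X[:, 3] > 2).astype(int)          # synthetic class labels
n_edges = X.shape[1]

def score_structure(bits):
    # Crude stand-in for a structure score: reward informative edges,
    # penalize dense structures. Replace with CV accuracy or BIC in practice.
    info = sum(mutual_info_score(X[:, i], y) for i in range(n_edges) if bits[i])
    return info - 0.02 * bits.sum()

pop = rng.integers(0, 2, size=(30, n_edges))
for gen in range(40):
    fitness = np.array([score_structure(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[-15:]]     # truncation selection
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(0, len(parents), 2)]
        cut = rng.integers(1, n_edges)
        child = np.concatenate([a[:cut], b[cut:]])        # one-point crossover
        flip = rng.random(n_edges) < 0.05                 # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.array(children)

best = pop[np.argmax([score_structure(ind) for ind in pop])]
print("selected edges:", np.flatnonzero(best))
```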


Sensors ◽  
2020 ◽  
Vol 20 (15) ◽  
pp. 4332
Author(s):  
Daniel Jancarczyk ◽  
Marcin Bernaś ◽  
Tomasz Boczar

The paper proposes a method for automatically detecting the parameters of a distribution transformer (model, type, and power) from a distance, based on its low-frequency noise spectra. The spectra are registered by sensors and processed by a method based on evolutionary algorithms and machine learning. As input data, the method uses the frequency spectra of sound pressure levels generated by transformers operating in a real environment. The model also uses the background characteristic to take into consideration the changing working conditions of the transformers. The method searches for frequency intervals and their resolution using both a classic genetic algorithm and particle swarm optimization. The interval selection was verified using five state-of-the-art machine learning algorithms. The research was conducted on 16 different distribution transformers. As a result, a method was proposed that allows the detection of a specific transformer model, its type, and its power with an accuracy greater than 84%, 99%, and 87%, respectively. The proposed optimization process using the genetic algorithm increased the accuracy by up to 5% while significantly reducing the input data set (by 80% to 98%). The machine learning algorithms that proved most efficient for this task were identified.
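A minimal sketch of the interval-selection step, assuming synthetic spectra and an SVM classifier scored by cross-validation; the genome encoding (three start/width pairs in frequency-bin units), the classifier choice, and the GA settings are illustrative assumptions rather than the paper's exact configuration.

```python
# Hedged sketch: a GA picks frequency intervals from noise spectra and a
# classifier is scored by cross-validation on the reduced input.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_bins = 200
spectra = rng.normal(size=(160, n_bins))          # synthetic sound-pressure spectra
labels = rng.integers(0, 4, size=160)             # transformer model/type id
spectra[labels == 1, 40:60] += 1.5                # class-dependent frequency bands
spectra[labels == 2, 120:140] += 1.5

def genome_to_mask(genome):
    # Genome holds three (start, width) pairs in frequency-bin units.
    mask = np.zeros(n_bins, dtype=bool)
    for start, width in genome.reshape(-1, 2):
        start = int(start) % n_bins
        mask[start:start + max(int(width), 1)] = True
    return mask

def fitness(genome):
    mask = genome_to_mask(genome)
    acc = cross_val_score(SVC(), spectra[:, mask], labels, cv=3).mean()
    return acc - 0.001 * mask.sum()               # favor compact intervals

pop = rng.integers(0, 40, size=(20, 6))
for gen in range(25):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(0, len(parents), 2)]
        child = np.where(rng.random(6) < 0.5, a, b)               # uniform crossover
        child = child + rng.integers(-5, 6, size=6) * (rng.random(6) < 0.2)
        children.append(np.clip(child, 0, n_bins - 1))
    pop = np.array(children)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected intervals (start, width):", best.reshape(-1, 2))
```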


F1000Research ◽  
2013 ◽  
Vol 2 ◽  
pp. 139
Author(s):  
Maxinder S Kanwal ◽  
Avinash S Ramesh ◽  
Lauren A Huang

The recent development of large databases, especially those in genetics and proteomics, is pushing the development of novel computational algorithms that implement rapid and accurate search strategies. One successful approach has been to use artificial intelligence methods, including pattern recognition (e.g., neural networks) and optimization techniques (e.g., genetic algorithms). The focus of this paper is on optimizing the design of genetic algorithms by using an adaptive mutation rate that is derived from comparing the fitness values of successive generations. We propose a novel pseudoderivative-based mutation rate operator designed to allow a genetic algorithm to escape local optima and successfully continue to the global optimum. Once proven successful, this algorithm can be implemented to solve real problems in neurology and bioinformatics. As a first step towards this goal, we tested our algorithm on two 3-dimensional surfaces with multiple local optima but only one global optimum, as well as on the N-queens problem, an applied problem in which the function that maps the curve is implicit. For all tests, the adaptive mutation rate allowed the genetic algorithm to find the globally optimal solution, performing significantly better than other search methods, including genetic algorithms that implement fixed mutation rates.
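A brief sketch of a pseudoderivative-driven adaptive mutation rate, assuming a simple two-dimensional multimodal test surface: the mutation probability is raised when the best fitness stagnates between generations and relaxed while it is still improving. The update rule and constants are illustrative, not the operator proposed in the paper.

```python
# Hedged sketch: adapt the mutation rate from the change in best fitness
# across successive generations (a "pseudoderivative" of the search).
import numpy as np

def surface(x, y):
    # Multimodal test function with a single global optimum near (1, 1).
    return np.sin(3 * x) * np.cos(3 * y) + 2 * np.exp(-(x - 1) ** 2 - (y - 1) ** 2)

rng = np.random.default_rng(3)
pop = rng.uniform(-3, 3, size=(40, 2))
mut_rate, prev_best = 0.1, -np.inf

for gen in range(100):
    fit = surface(pop[:, 0], pop[:, 1])
    best = fit.max()
    # Pseudoderivative: change in best fitness over one generation.
    dfit = best - prev_best if np.isfinite(prev_best) else 1.0
    # Stagnation -> raise mutation rate to escape local optima; progress -> lower it.
    mut_rate = np.clip(mut_rate * (0.7 if dfit > 1e-3 else 1.3), 0.01, 0.8)
    prev_best = best

    parents = pop[np.argsort(fit)[-20:]]                  # keep the fitter half
    idx = rng.integers(0, len(parents), size=(40, 2))
    alpha = rng.random((40, 1))
    children = alpha * parents[idx[:, 0]] + (1 - alpha) * parents[idx[:, 1]]
    mutate = rng.random((40, 2)) < mut_rate
    children = children + mutate * rng.normal(0, 0.5, size=(40, 2))
    pop = np.clip(children, -3, 3)

print("best point found:", pop[np.argmax(surface(pop[:, 0], pop[:, 1]))])
```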


2012 ◽  
Vol 498 ◽  
pp. 115-125 ◽  
Author(s):  
H. Hachimi ◽  
Rachid Ellaia ◽  
A. El Hami

In this paper, we present a new hybrid algorithm that combines a genetic algorithm with particle swarm optimization. This research focuses on a hybrid method combining two heuristic optimization techniques, genetic algorithms (GA) and particle swarm optimization (PSO), for global optimization. Denoted as GA-PSO, this hybrid technique incorporates concepts from GA and PSO and creates individuals in a new generation not only by the crossover and mutation operations found in GA but also by the mechanisms of PSO. The performance of the two algorithms has been evaluated using several experiments.
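A minimal sketch of the GA-PSO idea under stated assumptions: half of each new generation is produced by GA crossover and mutation, the other half by a PSO velocity update toward personal and global bests, tested here on the Rastrigin benchmark; the split, the operators, and the hyperparameters are illustrative only, not the authors' exact scheme.

```python
# Hedged sketch of a GA-PSO hybrid on a standard benchmark (minimization).
import numpy as np

def rastrigin(x):
    return 10 * x.shape[-1] + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x), axis=-1)

rng = np.random.default_rng(4)
n, dim = 30, 5
pos = rng.uniform(-5.12, 5.12, size=(n, dim))
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), rastrigin(pos)

for gen in range(200):
    val = rastrigin(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]

    # GA half: arithmetic crossover of two good parents plus sparse Gaussian mutation.
    parents = pos[np.argsort(val)[: n // 2]]
    idx = rng.integers(0, len(parents), size=(n // 2, 2))
    alpha = rng.random((n // 2, 1))
    ga_children = alpha * parents[idx[:, 0]] + (1 - alpha) * parents[idx[:, 1]]
    ga_children += rng.normal(0, 0.1, ga_children.shape) * (rng.random((n // 2, dim)) < 0.1)

    # PSO half: standard velocity update toward personal and global bests.
    r1, r2 = rng.random((n // 2, dim)), rng.random((n // 2, dim))
    vel[n // 2:] = (0.7 * vel[n // 2:]
                    + 1.5 * r1 * (pbest[n // 2:] - pos[n // 2:])
                    + 1.5 * r2 * (gbest - pos[n // 2:]))
    pso_children = pos[n // 2:] + vel[n // 2:]

    pos = np.clip(np.vstack([ga_children, pso_children]), -5.12, 5.12)

print("best value found:", pbest_val.min())
```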


2018 ◽  
Vol 5 (1) ◽  
pp. 61-70 ◽  
Author(s):  
Akshay Kumar ◽  
H K Rangavittal

The genetic algorithm is one of the advanced optimization techniques frequently used for solving complex problems in research, and there are many parameters that affect the outcome of the GA. In this study, a 25-bar truss with a nonlinear constraint is chosen, with the objective of minimizing the mass and with the discrete member areas as design variables. GA parameters such as the selection function, population size, crossover function, and creation function are varied to find the best combination with the minimum number of function evaluations. It is found that uniform selection gives the best result irrespective of the creation function, population size, or crossover function, but at the cost of a large number of function evaluations; the other selection functions fail to reach the global optimum while requiring fewer function evaluations. When the selection functions are analyzed one at a time, all cases perform better with roulette selection, with Case A (non-integer type, population size 200) being computationally cheaper than Cases B and C (population size 300). With tournament selection, Cases A and B perform better with the smaller population size and Case C with the larger one. Case C performs better with remainder selection at the smaller population size, while Cases A and B perform better with stochastic uniform selection at the larger population size. In every case considered in this study, the function evaluation count increases with the population size.
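A small sketch of this kind of parameter study, assuming a placeholder objective in place of the 25-bar truss model: two standard selection operators (roulette wheel and tournament) drive the same GA over discrete member areas while a counter tracks how many expensive function evaluations each variant spends.

```python
# Hedged sketch: compare selection operators by solution quality and
# function-evaluation count. The objective is a crude stand-in, not the truss model.
import numpy as np

rng = np.random.default_rng(5)
areas = np.array([0.1, 0.5, 1.0, 2.0, 3.0])        # discrete candidate areas
n_members, evals = 25, 0

def mass_proxy(design):
    global evals
    evals += 1
    penalty = 100.0 * max(0.0, 50.0 - design.sum())  # crude constraint proxy
    return design.sum() + penalty                     # minimize mass + penalty

def roulette(pop, fit):
    weights = fit.max() - fit + 1e-9                  # invert fitness (minimization)
    return pop[rng.choice(len(pop), size=len(pop), p=weights / weights.sum())]

def tournament(pop, fit, k=3):
    idx = rng.integers(0, len(pop), size=(len(pop), k))
    return pop[idx[np.arange(len(pop)), np.argmin(fit[idx], axis=1)]]

for select in (roulette, tournament):
    evals, pop = 0, rng.choice(areas, size=(40, n_members))
    for gen in range(60):
        fit = np.array([mass_proxy(ind) for ind in pop])
        parents = select(pop, fit)
        cut = rng.integers(1, n_members, size=40)                 # one-point crossover
        mates = parents[rng.permutation(40)]
        pop = np.where(np.arange(n_members) < cut[:, None], parents, mates)
        mutate = rng.random(pop.shape) < 0.02                     # discrete mutation
        pop = np.where(mutate, rng.choice(areas, size=pop.shape), pop)
    print(select.__name__, "best:", min(mass_proxy(ind) for ind in pop), "evals:", evals)
```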


2021 ◽  
Vol 23 (11) ◽  
pp. 749-758
Author(s):  
Saranya N ◽  
Kavi Priya S
Breast cancer is one of the chronic diseases affecting human beings throughout the world. Early detection of this disease is the most promising way to improve patients' chances of survival. The strategy employed in this paper is to select the best features from various breast cancer datasets using a genetic algorithm, after which a machine learning algorithm is applied to predict the outcomes. Two machine learning algorithms, Support Vector Machines and Decision Trees, are used along with the genetic algorithm. The proposed work is evaluated on five datasets: the Wisconsin Breast Cancer-Diagnosis dataset, the Wisconsin Breast Cancer-Original dataset, the Wisconsin Breast Cancer-Prognosis dataset, the ISPY1 clinical trial dataset, and the Breast Cancer dataset. The results show that SVM-GA achieves a higher accuracy of 98.16% than DT-GA's 97.44%.
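A minimal sketch of GA-based wrapper feature selection paired with an SVM or a decision tree, using scikit-learn's bundled Wisconsin diagnostic dataset for illustration; the 0/1 feature-mask encoding, elitist GA loop, and settings are assumptions, not the authors' exact setup.

```python
# Hedged sketch: GA evolves a feature mask, fitness = cross-validated accuracy.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(6)
n_feat = X.shape[1]

def fitness(mask, model):
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(model, X[:, mask.astype(bool)], y, cv=3).mean()

for model in (SVC(), DecisionTreeClassifier(random_state=0)):
    pop = rng.integers(0, 2, size=(16, n_feat))
    for gen in range(12):
        fit = np.array([fitness(ind, model) for ind in pop])
        elite = pop[np.argmax(fit)].copy()                 # keep the best mask
        parents = pop[np.argsort(fit)[-8:]]
        children = [elite]
        while len(children) < len(pop):
            a, b = parents[rng.integers(0, len(parents), 2)]
            child = np.where(rng.random(n_feat) < 0.5, a, b)   # uniform crossover
            flip = rng.random(n_feat) < 0.05                   # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        pop = np.array(children)
    best = max(fitness(ind, model) for ind in pop)
    print(type(model).__name__, "CV accuracy with GA-selected features:", round(best, 4))
```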


Author(s):  
Prince Nathan S

Abstract: The Travelling Salesman Problem (TSP) is a very popular problem in the world of computer programming. It deals with the optimization of algorithms in an ever-changing scenario, becoming more and more complex as the number of variables increases. The solutions that exist for this problem are optimal only for a small and definite number of cases. One cannot take into consideration the various factors involved when this specific problem is solved for the real world, where things change continuously. There is a need to adapt to these changes and find optimized solutions as the application runs. The ability to adapt to any kind of data, whether static or ever-changing, and to understand and solve it is a quality shown by machine learning algorithms. As advances in machine learning take place, there has been a good amount of research on how to solve NP-hard problems using machine learning. This report is a survey of what types of machine learning algorithms can be used to solve the TSP. Different approaches, such as Ant Colony Optimization and Q-learning, are explored and compared. Ant Colony Optimization uses the concept of ants following pheromone levels, which lets them know where the most food is. It is widely used for the TSP, where the path with the most pheromone is chosen. Q-learning uses the concept of rewarding an agent for taking the right action in the state it is in and compounding those rewards. It is based on exploitation, where the agent keeps learning on its own to maximize its own reward. This can be used for the TSP, where an agent is rewarded for finding a short path and rewarded more if the path chosen is the shortest.
Keywords: LINEAR REGRESSION, LASSO REGRESSION, RIDGE REGRESSION, DECISION TREE REGRESSOR, MACHINE LEARNING, HYPERPARAMETER TUNING, DATA ANALYSIS
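A minimal sketch of the ant colony idea for the TSP under stated assumptions: ants build tours by sampling the next city with probability weighted by pheromone level and inverse distance, and pheromone is then deposited in proportion to tour quality; the random city coordinates and all parameters are illustrative only.

```python
# Hedged sketch of ant colony optimization for a small random TSP instance.
import numpy as np

rng = np.random.default_rng(7)
n_cities = 15
coords = rng.random((n_cities, 2))
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1) + np.eye(n_cities)
pheromone = np.ones((n_cities, n_cities))
best_len, best_tour = np.inf, None

for iteration in range(100):
    tours = []
    for ant in range(20):
        tour = [rng.integers(n_cities)]
        while len(tour) < n_cities:
            current = tour[-1]
            unvisited = [c for c in range(n_cities) if c not in tour]
            # Next-city probability: pheromone level times inverse distance.
            weights = pheromone[current, unvisited] * (1.0 / dist[current, unvisited]) ** 2
            tour.append(rng.choice(unvisited, p=weights / weights.sum()))
        length = sum(dist[tour[i], tour[(i + 1) % n_cities]] for i in range(n_cities))
        tours.append((length, tour))
        if length < best_len:
            best_len, best_tour = length, tour
    pheromone *= 0.9                                   # evaporation
    for length, tour in tours:
        for i in range(n_cities):
            a, b = tour[i], tour[(i + 1) % n_cities]
            pheromone[a, b] += 1.0 / length            # deposit on used edges
            pheromone[b, a] += 1.0 / length

print("best tour length:", round(best_len, 3))
```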

