Finding Plausible Optimal Solutions in Engineering Problems Using an Adaptive Genetic Algorithm

2019 ◽  
Vol 2019 ◽  
pp. 1-9 ◽  
Author(s):  
Muslum Kilinc ◽  
Juan M. Caicedo

In engineering, optimization is commonly used to solve a wide variety of problems. As is widely known, an engineering problem rarely has a unique solution; indeed, the solution to the same problem may differ entirely from one engineer to another. On the other hand, the genetic algorithm, one of the most commonly used engineering optimization methods, leads to only one global optimum. In this study, a modified genetic algorithm named the multi-solution genetic algorithm (MsGA), based on clustering and section approaches, is presented to identify alternative solutions to an engineering problem. MsGA can identify local optima along with the global optimum and can therefore find numerous alternative solutions. The reliability of MsGA was tested using a Gaussian function and a trigonometric function. After testing, MsGA was applied to a truss optimization problem as an example of an engineering optimization problem. The results obtained show that MsGA is successful at finding multiple plausible solutions to an engineering optimization problem.
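The cluster-and-section idea can be illustrated with a toy sketch (not the authors' MsGA implementation): a mutation-only GA whose parent-vs-child replacement preserves niches, followed by a sectioning pass that keeps the best individual per region of the domain, so local optima are reported alongside the global one. The objective, bin count, and all parameters are illustrative assumptions.

```python
import math
import random

# Toy multimodal objective: global peak at x = 2, local peak at x = -2.
def f(x):
    return math.exp(-(x - 2) ** 2) + 0.8 * math.exp(-(x + 2) ** 2)

def ms_ga(pop_size=60, generations=80, bounds=(-5.0, 5.0), n_bins=10):
    lo, hi = bounds
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        # Mutation with parent-vs-child replacement (a simple niching
        # scheme): each individual climbs its own peak, so local optima
        # survive alongside the global one.
        for i in range(pop_size):
            child = min(hi, max(lo, pop[i] + random.gauss(0, 0.3)))
            if f(child) > f(pop[i]):
                pop[i] = child
    # "Section" step: partition the domain, keep the best individual in
    # each section, and report them ranked by fitness.
    width = (hi - lo) / n_bins
    best = {}
    for x in pop:
        k = min(n_bins - 1, int((x - lo) / width))
        if k not in best or f(x) > f(best[k]):
            best[k] = x
    return sorted(best.values(), key=f, reverse=True)

random.seed(1)
solutions = ms_ga()
```

Ranked this way, `solutions[0]` lies near the global optimum while later entries recover the local peak, which a standard GA would discard.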

Author(s):  
Chang-Wook Han ◽  
Hajime Nobuhara
Genetic algorithms (GAs) are well-known and very popular stochastic optimization algorithms. Although the GA is a very powerful method for finding the global optimum, it has some drawbacks, for example, premature convergence to local optima and slow convergence to the global optimum. To enhance the performance of the GA, this paper proposes an adaptive genetic algorithm based on a partitioning method. The partitioning method, which enables a genetic algorithm to find a solution very effectively, adaptively divides the search space into promising sub-spaces to reduce the complexity of the optimization. This partitioning becomes more effective as the complexity of the search space increases. The validity of the proposed method is confirmed by applying it to several benchmark test functions and a traveling salesman problem.
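A minimal sketch of the partitioning idea (an illustrative reconstruction, not the paper's algorithm): sample each sub-space, keep the most promising partitions, and recursively split them, so the search adaptively narrows around the optimum. The objective and all parameters are assumed for the example.

```python
import random

def f(x):
    return (x - 1.234) ** 2  # toy objective to minimize; optimum at x = 1.234

def partition_search(lo=-10.0, hi=10.0, rounds=12, samples=20, keep=2):
    # Adaptively narrow the search space: score each sub-space by its
    # best random sample, keep the most promising partitions, and split
    # them for the next round.
    regions = [(lo, hi)]
    best_x = lo
    for _ in range(rounds):
        scored = []
        for a, b in regions:
            xs = [random.uniform(a, b) for _ in range(samples)]
            x = min(xs, key=f)
            scored.append((f(x), a, b, x))
        scored.sort()
        if scored[0][0] < f(best_x):
            best_x = scored[0][3]
        # Halve each surviving region for the next round.
        regions = []
        for _, a, b, _ in scored[:keep]:
            m = (a + b) / 2
            regions += [(a, m), (m, b)]
    return best_x

random.seed(0)
x_star = partition_search()
```

Keeping more than one region per round (`keep=2`) is the safety margin that makes premature elimination of the optimum's partition unlikely.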


2021 ◽  
Vol 12 (4) ◽  
pp. 98-116
Author(s):  
Noureddine Boukhari ◽  
Fatima Debbat ◽  
Nicolas Monmarché ◽  
Mohamed Slimane

Evolution strategies (ES) are a family of strong stochastic methods for global optimization and have proved more capable of avoiding local optima than many other optimization methods. Many researchers have investigated different versions of the original evolution strategy, with good results on a variety of optimization problems. However, the algorithm's convergence rate to the global optimum remains asymptotic. In order to accelerate convergence, a hybrid approach is proposed that uses the nonlinear simplex method (Nelder-Mead) together with an adaptive scheme to control when the local search is applied, and the authors demonstrate that such a combination yields significantly better convergence. The proposed method has been tested on 15 complex benchmark functions, applied to the bi-objective portfolio optimization problem, and compared with other state-of-the-art techniques. Experimental results show that this hybridization improves performance in terms of solution quality and convergence strength.
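The hybrid scheme can be sketched roughly as follows. This is an illustrative stand-in, not the authors' method: a simple coordinate pattern search substitutes for the Nelder-Mead simplex, and the adaptive control is reduced to applying local search only periodically to the current best parent; all names and parameters are assumptions.

```python
import random

def sphere(x):
    # Toy objective; the paper uses 15 harder benchmark functions.
    return sum(v * v for v in x)

def pattern_search(f, x, step=0.1, iters=60):
    # Coordinate pattern search, standing in here for Nelder-Mead.
    x = list(x)
    for _ in range(iters):
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = list(x)
                y[i] += d
                if f(y) < f(x):
                    x, improved = y, True
        if not improved:
            step *= 0.5  # refine when no move helps
    return x

def hybrid_es(f, dim=3, mu=5, lam=20, gens=40, local_every=10):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(lam)]
    for g in range(1, gens + 1):
        pop.sort(key=f)
        parents = pop[:mu]
        # Adaptive scheme (simplified): run the local search only every
        # few generations, and only on the best parent, to limit the
        # extra function evaluations it costs.
        if g % local_every == 0:
            parents[0] = pattern_search(f, parents[0])
        sigma = 0.5 * 0.98 ** g  # decaying mutation strength
        pop = [[p + random.gauss(0, sigma) for p in random.choice(parents)]
               for _ in range(lam)]
    pop.sort(key=f)
    return pattern_search(f, pop[0])  # final local refinement

random.seed(42)
best = hybrid_es(sphere)
```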


Author(s):  
K. Kamil ◽  
K.H Chong ◽  
H. Hashim ◽  
S.A. Shaaya

Genetic algorithm is a well-known metaheuristic method for solving optimization problems that mimics the natural process of cell reproduction. Its great advantages in solving optimization problems have made the method popular among researchers, who have worked to improve the performance of the simple Genetic Algorithm and apply it in many areas. However, the Genetic Algorithm has a weakness: low diversity, which causes premature convergence, where a potential answer becomes trapped in a local optimum. This paper proposes a method, the Multiple Mitosis Genetic Algorithm, to improve the performance of the simple Genetic Algorithm and promote high diversity of high-quality individuals through three steps: setting a multiplying factor before the crossover process, conducting multiple mitosis crossover, and introducing a mini loop in each generation. Results show that the proportion of high-quality individuals improves to up to 90 percent of the total population in finding the global optimum.


Author(s):  
ZOHEIR EZZIANE

Probabilistic and stochastic algorithms have been used to solve many hard optimization problems, since they can provide solutions where standard algorithms often fail. These algorithms basically search through a space of potential solutions using randomness as a major factor in making decisions. In this research, the knapsack problem (an optimization problem) is solved using a genetic algorithm approach, and comparisons are then made with a greedy method and a heuristic algorithm. The knapsack problem is known to be NP-hard. Genetic algorithms are search procedures based on natural selection and natural genetics. They randomly create an initial population of individuals and then use genetic operators to yield new offspring. In this research, a genetic algorithm is used to solve the 0/1 knapsack problem. Special consideration is given to the penalty function, where constant and self-adaptive penalty functions are adopted.
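A sketch of a GA with a constant penalty function on a toy 0/1 knapsack instance. The instance data, parameters, and helper names are illustrative assumptions, not taken from the paper.

```python
import random

# Toy 0/1 knapsack instance (hypothetical data).
values = [10, 40, 30, 50, 35, 25, 15]
weights = [1, 4, 3, 5, 4, 3, 2]
CAPACITY = 10

def fitness(bits, penalty=100):
    v = sum(b * x for b, x in zip(bits, values))
    w = sum(b * x for b, x in zip(bits, weights))
    # Constant penalty function: infeasible solutions are charged in
    # proportion to how far they exceed the capacity.
    return v - penalty * max(0, w - CAPACITY)

def ga_knapsack(pop_size=40, gens=60, p_mut=0.05):
    n = len(values)
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    champ = max(pop, key=fitness)  # best-ever individual
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            a, b = random.sample(pop, 2)      # tournament selection
            p1 = a if fitness(a) > fitness(b) else b
            c, d = random.sample(pop, 2)
            p2 = c if fitness(c) > fitness(d) else d
            cut = random.randrange(1, n)      # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if random.random() < p_mut else g
                     for g in child]          # bit-flip mutation
            nxt.append(child)
        pop = nxt
        champ = max(pop + [champ], key=fitness)
    return champ

random.seed(7)
best = ga_knapsack()
```

Because the penalty (100 per unit of excess weight) exceeds any value gained by overfilling, the best individual found is feasible in practice; a self-adaptive variant would tighten the penalty over the course of the run instead.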


Author(s):  
Bo-Suk Yang

This chapter describes a hybrid artificial life optimization algorithm (ALRT) based on emergent colonization for solving global function optimization problems. In the ALRT, the emergent colony is the fundamental mechanism for searching for the optimum solution; it forms through the metabolism, movement, and reproduction of artificial organisms, which appear at the optimum locations in the artificial world. In this case, the optimum locations correspond to the optimum solutions of the optimization problem. Hence, the ALRT focuses its search on the locations of emergent colonies and can achieve a more accurate global optimum. Optimization results for different types of test functions are presented to demonstrate that the described approach successfully achieves optimum performance. The algorithm is also applied to test function optimization and to the optimum design of a short journal bearing as a practical application. The optimized results are compared with those of a genetic algorithm and successive quadratic programming to assess its optimizing ability.


Complexity ◽  
2018 ◽  
Vol 2018 ◽  
pp. 1-22 ◽  
Author(s):  
Alberto Pajares ◽  
Xavier Blasco ◽  
Juan M. Herrero ◽  
Gilberto Reynoso-Meza

Traditionally, in a multiobjective optimization problem, the aim is to find the set of optimal solutions, the Pareto front, which provides the decision-maker with a better understanding of the problem and thus leads to a more informed decision. However, multimodal solutions and nearly optimal solutions are ignored, although their consideration may be useful for the decision-maker. In particular, some of these solutions are especially interesting, namely, those with characteristics distinct from the solutions that dominate them (i.e., the solutions that are not dominated in their neighborhood). We call these potentially useful solutions. In this work, a new genetic algorithm called nevMOGA is presented, which provides not only the optimal solutions but also the multimodal and nearly optimal solutions nondominated in their neighborhood. This means that nevMOGA is able to supply additional, potentially useful solutions for the decision-making stage, which is its main advantage. In order to assess its performance, nevMOGA is tested on two benchmarks and compared with two other optimization algorithms (random and exhaustive searches). Finally, as an example of application, nevMOGA is used in an engineering problem to optimally adjust the parameters of two PI controllers that operate a plant.
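The neighborhood-nondominance criterion can be illustrated in a few lines. This is a hedged sketch with made-up data, not nevMOGA itself: a solution is discarded only if some solution within a given decision-space radius dominates it, so dominated optima that are far away in decision space (multimodal or nearly optimal solutions) survive.

```python
def dominates(a, b):
    # Pareto dominance for minimization: a is no worse in every
    # objective and strictly better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def neighborhood_nondominated(xs, objs, radius):
    # Keep solution i unless some solution within `radius` of it in the
    # decision space dominates it in the objective space.
    keep = []
    for i, (xi, fi) in enumerate(zip(xs, objs)):
        if not any(abs(xj - xi) <= radius and dominates(fj, fi)
                   for j, (xj, fj) in enumerate(zip(xs, objs)) if j != i):
            keep.append(i)
    return keep

# Toy data: x is a scalar decision variable, (f1, f2) are minimized.
xs = [0.0, 0.1, 5.0, 5.1]
objs = [(1.0, 4.0), (1.0, 5.0), (1.2, 4.2), (1.3, 4.3)]
kept = neighborhood_nondominated(xs, objs, radius=0.5)
```

Here solution 2 is globally dominated by solution 0, but its dominator lies outside its neighborhood, so it is kept as a potentially useful alternative; solutions 1 and 3 are dominated by immediate neighbors and dropped.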


F1000Research ◽  
2013 ◽  
Vol 2 ◽  
pp. 139
Author(s):  
Maxinder S Kanwal ◽  
Avinash S Ramesh ◽  
Lauren A Huang

The recent development of large databases, especially in genetics and proteomics, is pushing the development of novel computational algorithms that implement rapid and accurate search strategies. One successful approach has been to use artificial intelligence methods, including pattern recognition (e.g., neural networks) and optimization techniques (e.g., genetic algorithms). The focus of this paper is on optimizing the design of genetic algorithms by using an adaptive mutation rate that is derived from comparing the fitness values of successive generations. We propose a novel pseudoderivative-based mutation rate operator designed to allow a genetic algorithm to escape local optima and continue to the global optimum. Once proven successful, this algorithm can be implemented to solve real problems in neurology and bioinformatics. As a first step towards this goal, we tested our algorithm on two 3-dimensional surfaces with multiple local optima but only one global optimum, as well as on the N-queens problem, an applied problem in which the objective function is implicit. In all tests, the adaptive mutation rate allowed the genetic algorithm to find the globally optimal solution, performing significantly better than other search methods, including genetic algorithms with fixed mutation rates.
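The adaptive-rate idea can be sketched as follows (an illustrative reconstruction, not the authors' exact pseudoderivative operator): the change in best fitness between successive generations drives the mutation rate up under stagnation and down under steady progress. The objective and all constants are assumptions.

```python
import math
import random

# Toy multimodal objective (maximize): global optimum at x = 0,
# surrounded by local optima.
def f(x):
    return math.cos(3 * x) - 0.1 * x * x

def adaptive_ga(pop_size=40, gens=120, m_min=0.02, m_max=0.6):
    pop = [random.uniform(-4, 4) for _ in range(pop_size)]
    m = 0.1                                   # current mutation rate
    champ = max(pop, key=f)                   # best-ever individual
    prev_best = f(champ)
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            a, b = random.sample(pop, 2)      # tournament selection
            child = a if f(a) > f(b) else b
            if random.random() < m:           # mutate with probability m
                child += random.gauss(0, 1.0)
            nxt.append(child)
        pop = nxt
        best = max(map(f, pop))
        # Pseudoderivative of the fitness curve: the change in best
        # fitness between successive generations. Stagnation suggests a
        # local optimum, so the rate is raised to escape; steady
        # progress lowers the rate to exploit.
        if best - prev_best < 1e-9:
            m = min(m_max, m * 1.5)
        else:
            m = max(m_min, m * 0.7)
        prev_best = best
        champ = max(pop + [champ], key=f)
    return champ

random.seed(3)
x_best = adaptive_ga()
```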


2011 ◽  
Vol 08 (03) ◽  
pp. 535-544 ◽  
Author(s):  
BOUDJEHEM DJALIL ◽  
BOUDJEHEM BADREDDINE ◽  
BOUKAACHE ABDENOUR

In this paper, we propose a very interesting idea in global optimization that makes it an easier and lower-cost task. The main idea is to reduce the dimension of the optimization problem at hand to a one-dimensional one using variable coding. At this level, the algorithm looks for the global optimum of a one-dimensional cost function. The new algorithm has the ability to avoid local optima, reduces the number of evaluations, and improves the convergence speed. The method is suitable for functions that have many extrema. Our algorithm can determine a narrow region around the global optimum in a very restricted time, based on stochastic tests and an adaptive partition of the search space. Illustrative examples are presented to show the efficiency of the proposed idea. The algorithm was able to locate the global optimum even when the objective function had a large number of optima.


2013 ◽  
Vol 339 ◽  
pp. 784-788
Author(s):  
Lei Wang ◽  
Yu Yun Kang

In order to allocate tasks and optimize resources well in a dynamic manufacturing environment, a model for task allocation is established, and an adaptive genetic algorithm (AGA) is applied to it. A machine-based encoding approach is also adopted. The simulation results confirm the validity of this method; the task allocation and resource optimization problem can therefore be handled efficiently.


2011 ◽  
Vol 105-107 ◽  
pp. 386-391 ◽  
Author(s):  
Jan Szweda ◽  
Zdenek Poruba

This paper discusses a suitable numerical approach to the contact shape optimization problem. The first part of the paper surveys global optimization methods, among which the genetic algorithm is chosen for computer processing and for application to contact problem optimization. A brief description of this method is given, with emphasis on its characteristic features. An experiment performed on a plane structural problem validates the ability of the genetic algorithm to search the region of the global optimum. On the basis of the research described in this work, the genetic algorithm can be recommended for the shape optimization of engineering contact problems in which, for any shape, the contact task solution converges successfully.

