Solving Packing Problems by a Distributed Global Optimization Algorithm

2012 ◽  
Vol 2012 ◽  
pp. 1-13 ◽  
Author(s):  
Nian-Ze Hu ◽  
Han-Lin Li ◽  
Jung-Fa Tsai

Packing optimization problems seek the best way of placing a given set of rectangular boxes within a rectangular container of minimum volume. Current packing optimization methods either find it difficult to obtain an optimal solution or require too many extra 0-1 variables in the solution process. This study develops a novel method to convert the nonlinear objective function in a packing program into an increasing function of a single variable with two fixed parameters. The original packing program then becomes a linear program that promises to obtain a global optimum. Such a linear program is decomposed into several subproblems by specifying various parameter values, which can then be solved simultaneously by a distributed computation algorithm. A reference solution obtained by a genetic algorithm serves as an upper bound on the optimal solution and is used to reduce the search region.
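A minimal sketch of the decomposition-and-pruning pattern described above, not the paper's actual linear-program reformulation: if the container footprint (L, W) is fixed as the "two parameters", the volume L*W*H becomes an increasing function of the single variable H; each fixed-footprint subproblem is solved independently (here by a trivial placeholder heuristic), the subproblems run in parallel, and a hypothetical GA reference volume prunes footprints whose lower bound cannot beat it.

    from concurrent.futures import ProcessPoolExecutor
    from itertools import product

    BOXES = [(4, 3, 2), (2, 2, 2), (5, 1, 1)]      # (l, w, h) of items to pack
    H_MIN = max(h for _, _, h in BOXES)            # no container can be shorter than this
    GA_UPPER_BOUND = 120.0                         # hypothetical GA reference volume

    def solve_subproblem(footprint):
        """Return the container volume for a fixed footprint (L, W).
        A real implementation would solve the linearized packing subproblem;
        here a naive stacking heuristic stands in purely as a placeholder."""
        L, W = footprint
        if any(l > L or w > W for l, w, _ in BOXES):
            return footprint, float("inf")         # footprint cannot hold some item
        height = sum(h for _, _, h in BOXES)       # stack everything: trivially feasible
        return footprint, L * W * height

    def candidate_footprints():
        for L, W in product(range(4, 9), repeat=2):
            if L * W * H_MIN <= GA_UPPER_BOUND:    # prune with the GA upper bound
                yield (L, W)

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:        # subproblems solved in parallel
            results = list(pool.map(solve_subproblem, candidate_footprints()))
        best = min(results, key=lambda r: r[1])
        print("best footprint:", best[0], "volume:", best[1])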

2021 ◽  
Vol 12 (4) ◽  
pp. 98-116
Author(s):  
Noureddine Boukhari ◽  
Fatima Debbat ◽  
Nicolas Monmarché ◽  
Mohamed Slimane

Evolution strategies (ES) are a family of powerful stochastic methods for global optimization and have proved more capable of avoiding local optima than many other optimization methods. Many researchers have investigated different versions of the original evolution strategy with good results on a variety of optimization problems. However, the convergence rate of the algorithm to the global optimum remains only asymptotic. To accelerate convergence, a hybrid approach is proposed that combines ES with the nonlinear simplex method (Nelder-Mead) and an adaptive scheme to control when the local search is applied, and the authors demonstrate that such a combination yields significantly better convergence. The proposed method has been tested on 15 complex benchmark functions, applied to the bi-objective portfolio optimization problem, and compared with other state-of-the-art techniques. Experimental results show that this hybridization improves performance in terms of both solution quality and convergence.
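A minimal sketch of one way such a hybrid can be wired together, assuming a simplified (1, lambda)-ES and a stagnation counter as the adaptive trigger (the paper's exact ES variant and control scheme are not reproduced): the Nelder-Mead refinement is invoked only when the ES has not improved for several generations.

    import numpy as np
    from scipy.optimize import minimize

    def sphere(x):                       # benchmark function used only for illustration
        return float(np.sum(x ** 2))

    def hybrid_es(f, dim=10, lam=20, sigma=0.5, generations=200, stall_limit=10, seed=0):
        rng = np.random.default_rng(seed)
        parent = rng.uniform(-5, 5, dim)
        best_f, stall = f(parent), 0
        for _ in range(generations):
            offspring = parent + sigma * rng.standard_normal((lam, dim))
            values = np.array([f(x) for x in offspring])
            i = int(np.argmin(values))
            if values[i] < best_f:
                parent, best_f, stall = offspring[i], values[i], 0
                sigma *= 1.1             # crude step-size adaptation
            else:
                sigma *= 0.9
                stall += 1
            if stall >= stall_limit:     # adaptive trigger: refine with Nelder-Mead
                res = minimize(f, parent, method="Nelder-Mead",
                               options={"maxiter": 50 * dim, "xatol": 1e-8})
                if res.fun < best_f:
                    parent, best_f = res.x, res.fun
                stall = 0
        return parent, best_f

    if __name__ == "__main__":
        x, fx = hybrid_es(sphere)
        print("best value:", fx)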


2016 ◽  
pp. 450-475
Author(s):  
Dipti Singh ◽  
Kusum Deep

Due to their wide applicability and ease of implementation, genetic algorithms (GAs) are often preferred over other techniques for solving many optimization problems. When a local search (LS) is incorporated into a GA, the resulting method is known as a memetic algorithm (MA). In this chapter, a new variant of single-meme memetic algorithm is proposed to improve the efficiency of GA. Though GAs are efficient at finding the global optimum of nonlinear optimization problems, they usually converge slowly and sometimes suffer from premature convergence. LS algorithms, on the other hand, are fast but poor global searchers. To exploit the good qualities of both techniques, they are combined so that the benefits of both approaches are reaped: the population of individuals evolves using the GA, and LS is then applied to obtain the optimal solution. To validate these claims, the method is tested on five benchmark problems of dimensions 10, 30, and 50, and a comparison between GA and MA is made.
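A minimal sketch of the memetic pattern described above (GA for global search, then a local search on the most promising individual); the operators and the Rastrigin test function below are generic stand-ins, not the chapter's specific single-meme variant.

    import numpy as np

    def rastrigin(x):
        return float(10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

    def local_search(f, x, step=0.1, iters=200, rng=None):
        """Simple random hill climbing used as the 'meme'."""
        rng = rng or np.random.default_rng(0)
        best, fbest = x.copy(), f(x)
        for _ in range(iters):
            cand = best + rng.uniform(-step, step, len(best))
            fc = f(cand)
            if fc < fbest:
                best, fbest = cand, fc
        return best, fbest

    def memetic_ga(f, dim=10, pop_size=50, generations=100, bounds=(-5.12, 5.12)):
        rng = np.random.default_rng(1)
        pop = rng.uniform(*bounds, (pop_size, dim))
        for _ in range(generations):
            fit = np.array([f(x) for x in pop])
            parents = pop[np.argsort(fit)[: pop_size // 2]]            # truncation selection
            children = []
            for _ in range(pop_size - len(parents)):
                a, b = parents[rng.integers(len(parents), size=2)]
                child = np.where(rng.random(dim) < 0.5, a, b)          # uniform crossover
                child = child + rng.normal(0, 0.1, dim) * (rng.random(dim) < 0.1)  # mutation
                children.append(np.clip(child, *bounds))
            pop = np.vstack([parents, children])
        fit = np.array([f(x) for x in pop])
        best = pop[int(np.argmin(fit))]
        return local_search(f, best, rng=rng)                          # refine GA result with LS

    if __name__ == "__main__":
        x, fx = memetic_ga(rastrigin)
        print("memetic GA best:", fx)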


Author(s):  
Arslan Ali Syed ◽  
Irina Gaponova ◽  
Klaus Bogenberger

The majority of transportation problems involve optimizing some sort of cost function. These optimization problems are often NP-hard, and their computation time grows exponentially with the model size. The problem of matching vehicles to passenger requests in ride hailing (RH) contexts typically falls into this category. Metaheuristics are often utilized for such problems with the aim of finding a globally optimal solution. However, such algorithms usually include many parameters that need to be tuned to obtain good performance. Typically, multiple simulations are run on diverse small problems, and the parameter values that perform best on average are chosen for subsequent larger simulations. In contrast to this approach, we propose training a neural network to predict the parameter values that work best for an instance of the given problem. We show that various features, based on the problem instance and shareability-graph statistics, can be used to predict the solution quality of a matching problem in RH services. Consequently, the values corresponding to the best predicted solution can be selected for the actual problem. We study the effectiveness of the above-described approach for the static assignment of vehicles to passengers in RH services. We utilized DriveNow data from Bavarian Motor Works (BMW) to generate passenger requests inside Munich, and for the metaheuristic we used a large neighborhood search (LNS) algorithm combined with a shareability graph.
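A minimal sketch of the parameter-selection idea: train a regressor that maps (instance features, metaheuristic parameters) to solution quality, then, for a new instance, score a grid of candidate parameter sets and keep the one with the best predicted quality. The feature names, the LNS parameters (destroy rate, iteration budget), the regressor, and the synthetic data below are illustrative assumptions, not the paper's actual model or dataset.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # Synthetic training rows: [n_requests, n_vehicles, graph_density, destroy_rate, iterations]
    X = rng.uniform([100, 10, 0.1, 0.1, 100], [1000, 100, 0.9, 0.5, 2000], size=(500, 5))
    # Fake "solution quality" with a maximum at destroy_rate ~ 0.3 and ~1200 iterations
    y = -(X[:, 3] - 0.3) ** 2 - 1e-7 * (X[:, 4] - 1200) ** 2 + 0.001 * X[:, 2]

    model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
    model.fit(X, y)

    # For a new problem instance, score candidate parameter sets and select the best predicted one.
    instance_features = np.array([450, 40, 0.35])
    candidates = [(d, it) for d in np.linspace(0.1, 0.5, 9) for it in (500, 1000, 1500, 2000)]
    scored = [(model.predict([np.concatenate([instance_features, c])])[0], c) for c in candidates]
    best_quality, best_params = max(scored)
    print("chosen (destroy_rate, iterations):", best_params)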


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-18
Author(s):  
Feng Qian ◽  
Mohammad Reza Mahmoudi ◽  
Hamïd Parvïn ◽  
Kim-Hung Pho ◽  
Bui Anh Tuan

Conventional optimization methods are not efficient enough to solve many naturally complicated optimization problems. Thus, metaheuristic algorithms inspired by nature can be utilized as a new kind of problem solver for these types of optimization problems. In this paper, an optimization algorithm is proposed that is capable of estimating the expected quality of different locations and of tuning its exploration-exploitation trade-off to the location of an individual. A novel particle swarm optimization algorithm is presented that implements a conditioning learning behavior, so that the particles are led to perform a natural conditioning behavior on an unconditioned motive. In the problem space, particles are classified into several categories: if a particle lies within a low-diversity category, it tends to move towards its best personal experience, but if the particle's category has high diversity, it tends to move towards the global optimum of that category. The idea of birds' sensitivity to their flying space is also utilized to increase the particles' speed in undesirable spaces so that they leave those spaces as soon as possible; in desirable spaces, the particles' velocity is reduced so that the particles have more time to explore their environment. In the proposed algorithm, the birds' instinctive behavior is implemented to construct an initial population randomly or chaotically. Experiments comparing the proposed algorithm with state-of-the-art methods show that it is one of the most efficient and appropriate algorithms for solving static optimization problems.
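A minimal sketch of the category-based update rule described above, with simplifying assumptions of my own: particles are grouped by ranking their personal-best fitness, a particle in a low-diversity group is pulled toward its own best position while a particle in a high-diversity group is pulled toward the group's best, and velocities are scaled up in poor regions and down in good ones. The grouping rule, thresholds, and constants are illustrative, not the paper's calibrated values.

    import numpy as np

    def sphere(x):
        return float(np.sum(x ** 2))

    def conditioning_pso(f, dim=10, n=40, k=4, iters=200, seed=0):
        rng = np.random.default_rng(seed)
        pos = rng.uniform(-5, 5, (n, dim))
        vel = np.zeros((n, dim))
        pbest = pos.copy()
        pbest_f = np.array([f(x) for x in pos])
        for _ in range(iters):
            # group particles into k categories by ranking their personal-best fitness
            labels = (np.argsort(np.argsort(pbest_f)) * k) // n
            median_f = np.median(pbest_f)
            for c in range(k):
                idx = np.where(labels == c)[0]
                if len(idx) == 0:
                    continue
                diversity = float(np.mean(np.std(pos[idx], axis=0)))
                cbest = pbest[idx[np.argmin(pbest_f[idx])]]        # best of this category
                for i in idx:
                    speed = 1.5 if pbest_f[i] > median_f else 0.7  # faster in undesired regions
                    target = pbest[i] if diversity < 0.5 else cbest
                    vel[i] = 0.7 * vel[i] + speed * rng.random(dim) * (target - pos[i])
                    pos[i] = pos[i] + vel[i]
                    fi = f(pos[i])
                    if fi < pbest_f[i]:
                        pbest[i], pbest_f[i] = pos[i], fi
        j = int(np.argmin(pbest_f))
        return pbest[j], pbest_f[j]

    if __name__ == "__main__":
        x, fx = conditioning_pso(sphere)
        print("best value found:", fx)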


Author(s):  
Adel A. Younis ◽  
George H. Cheng ◽  
G. Gary Wang ◽  
Zuomin Dong

Metamodel-based design optimization (MBDO) algorithms have attracted considerable interest in recent years due to their capability of dealing with complex optimization problems whose objective and constraint functions are computationally expensive and which contain local optima. Conventional unimodal optimization algorithms and stochastic global optimization algorithms either miss the global optimum frequently or require unacceptable computation time. In this work, a generic testbed/platform for evaluating various MBDO algorithms is introduced. The purpose of the platform is to facilitate quantitative comparison of different MBDO algorithms using standard test problems, test procedures, and test outputs, as well as to improve the efficiency of testing and improving new algorithms. The platform consists of a comprehensive test function database that contains about 100 benchmark functions and engineering problems. The testbed accepts any optimization algorithm to be tested and requires only minor modifications to meet the testbed requirements. The testbed is useful for comparing the performance of competing algorithms on the same problems. It allows researchers and practitioners to test and choose the most suitable optimization tool for their specific needs. It also helps to increase confidence in, and the reliability of, newly developed MBDO tools. Many new MBDO algorithms, including Mode Pursuing Sampling (MPS), Pareto Set Pursuing (PSP), and Space Exploration and Unimodal Region Elimination (SEUMRE), were tested in this work to demonstrate the platform's functionality and benefits.
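A minimal sketch, purely an assumption about the general idea rather than the platform's actual API, of how such a testbed might standardize inputs and outputs: every benchmark is registered with its bounds and known optimum, every optimizer is a callable with the same signature, and the runner reports uniform metrics.

    import time
    import numpy as np

    BENCHMARKS = {
        "sphere": {"f": lambda x: float(np.sum(x ** 2)),
                   "bounds": (-5.0, 5.0), "dim": 10, "fstar": 0.0},
        "rosenbrock": {"f": lambda x: float(np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2
                                                   + (1 - x[:-1]) ** 2)),
                       "bounds": (-2.0, 2.0), "dim": 10, "fstar": 0.0},
    }

    def random_search(f, bounds, dim, budget=5000, seed=0):
        """A trivial optimizer obeying the testbed's calling convention."""
        rng = np.random.default_rng(seed)
        samples = rng.uniform(bounds[0], bounds[1], (budget, dim))
        values = np.array([f(x) for x in samples])
        i = int(np.argmin(values))
        return samples[i], float(values[i]), budget

    def run_testbed(optimizer):
        for name, spec in BENCHMARKS.items():
            start = time.perf_counter()
            _, fbest, nfev = optimizer(spec["f"], spec["bounds"], spec["dim"])
            print(f"{name:12s} best={fbest:.4g} gap={fbest - spec['fstar']:.4g} "
                  f"nfev={nfev} time={time.perf_counter() - start:.2f}s")

    if __name__ == "__main__":
        run_testbed(random_search)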


Mathematics ◽  
2020 ◽  
Vol 8 (6) ◽  
pp. 934
Author(s):  
Jorge Gálvez ◽  
Erik Cuevas ◽  
Krishna Gopal Dhal

Evolutionary Computation Methods (ECMs) are stochastic search methods proposed to solve complex optimization problems for which classical optimization methods are not suitable. Most ECMs aim to find the global optimum of a given function. However, from a practical point of view in engineering, finding the global optimum may not always be useful, since it may represent solutions that are not physically, mechanically, or even structurally realizable. Commonly, the evolutionary operators of ECMs are not designed to efficiently register multiple optima in a single run. Under such circumstances, there is a need to incorporate mechanisms that allow ECMs to maintain and register multiple optima at each generation of a single run. On the other hand, the concept of dominance found in animal behavior indicates the level of social interaction between two animals in terms of aggressiveness. Such aggressiveness keeps two or more individuals as distant as possible from one another, with the most dominant individual prevailing as the other withdraws. In this paper, the concept of dominance is computationally abstracted in terms of a data structure called "competitive memory" to incorporate multimodal capabilities into the evolutionary operators of the recently proposed Cluster-Chaotic-Optimization (CCO). Under CCO, the competitive memory is implemented as a memory mechanism to efficiently register and maintain all possible optimal values within a single execution of the algorithm. The performance of the proposed method is numerically compared against several multimodal schemes over a set of benchmark functions. The experimental study suggests that the proposed approach outperforms its competitors in terms of robustness, quality, and precision.
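A minimal sketch of a "competitive memory"-style structure, as an illustration of the dominance idea rather than the CCO algorithm itself: a candidate is registered as a new optimum if it lies far from all stored solutions; if it lies close to an existing one, the two compete and only the better of the pair is kept. The distance threshold and the random-search driver are illustrative assumptions.

    import numpy as np

    class CompetitiveMemory:
        def __init__(self, radius=0.5):
            self.radius = radius
            self.solutions = []            # list of (x, f(x)) pairs

        def register(self, x, fx):
            for i, (xs, fs) in enumerate(self.solutions):
                if np.linalg.norm(x - xs) < self.radius:   # same basin: compete
                    if fx < fs:
                        self.solutions[i] = (x.copy(), fx)
                    return
            self.solutions.append((x.copy(), fx))          # new, distant optimum

    def multimodal(x):                     # two minima near (2, 2) and (-2, -2)
        return float(min(np.linalg.norm(x - 2) ** 2, np.linalg.norm(x + 2) ** 2))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        memory = CompetitiveMemory(radius=1.0)
        for _ in range(2000):
            x = rng.uniform(-5, 5, 2)
            memory.register(x, multimodal(x))
        for xs, fs in sorted(memory.solutions, key=lambda s: s[1])[:4]:
            print("stored optimum near", np.round(xs, 2), "value", round(fs, 3))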


2012 ◽  
Vol 2012 ◽  
pp. 1-7 ◽  
Author(s):  
Alireza Rowhanimanesh ◽  
Sohrab Efati

Evolutionary methods are well-known techniques for solving nonlinear constrained optimization problems. Due to the exploration power of evolution-based optimizers, the population usually converges to a region around the global optimum after several generations. Although this convergence can be exploited to reduce the search space, in most existing optimization methods the search still continues over the original space, and considerable time is wasted searching ineffective regions. This paper proposes a simple and general approach based on search space reduction to improve the exploitation power of existing evolutionary methods without adding any significant computational complexity. After a number of generations, when enough exploration has been performed, the search space is reduced to a small subspace around the best individual, and the search then continues over this reduced space. If the space reduction parameters (red_gen and red_factor) are adjusted properly, the reduced space will include the global optimum. The proposed scheme can help existing evolutionary methods find better near-optimal solutions in a shorter time. To demonstrate the power of the new approach, it is applied to a set of benchmark constrained optimization problems, and the results are compared with previous work in the literature.
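A minimal sketch of the space-reduction scheme, assuming an unconstrained test function and a generic evolutionary loop as a stand-in for the methods the paper wraps: run the search for red_gen generations, shrink the bounds to a box of width red_factor times the original width centred on the best individual, then continue in the reduced space.

    import numpy as np

    def sphere(x):
        return float(np.sum(x ** 2))

    def evolve(f, lower, upper, pop, generations, rng):
        """A tiny truncation-selection evolutionary loop used for illustration."""
        for _ in range(generations):
            fit = np.array([f(x) for x in pop])
            parents = pop[np.argsort(fit)[: len(pop) // 2]]
            noise = rng.normal(0, 0.1 * (upper - lower), parents.shape)
            children = np.clip(parents + noise, lower, upper)
            pop = np.vstack([parents, children])
        fit = np.array([f(x) for x in pop])
        return pop, pop[int(np.argmin(fit))]

    def reduced_space_search(f, dim=10, red_gen=50, red_factor=0.1, total_gen=200, seed=0):
        rng = np.random.default_rng(seed)
        lower, upper = np.full(dim, -5.0), np.full(dim, 5.0)
        pop = rng.uniform(lower, upper, (40, dim))
        pop, best = evolve(f, lower, upper, pop, red_gen, rng)            # exploration phase
        half_width = red_factor * (upper - lower) / 2
        lower, upper = best - half_width, best + half_width               # reduced subspace
        pop = rng.uniform(lower, upper, (40, dim))                        # restart inside it
        _, best = evolve(f, lower, upper, pop, total_gen - red_gen, rng)  # exploitation phase
        return best, f(best)

    if __name__ == "__main__":
        x, fx = reduced_space_search(sphere)
        print("best after space reduction:", fx)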


2015 ◽  
Vol 2015 ◽  
pp. 1-13 ◽  
Author(s):  
Yuehe Zhu ◽  
Hua Wang ◽  
Jin Zhang

Since most spacecraft multiple-impulse trajectory optimization problems are complex multimodal problems with boundary constraints, finding the global optimal solution with traditional differential evolution (DE) algorithms is difficult due to the deception of many local optima and a probable bias towards suboptimal solutions. To overcome this issue and enhance the global searching ability, an improved DE algorithm with combined mutation strategies and boundary-handling schemes is proposed. In the first stage, multiple mutation strategies are utilized, and each strategy creates a mutant vector. In the second stage, multiple boundary-handling schemes are used to simultaneously address the same infeasible trial vector. Two typical spacecraft multiple-impulse trajectory optimization problems are studied and optimized using the proposed DE method. The experimental results demonstrate that the proposed method efficiently overcomes convergence to local optima and obtains the global optimum with higher reliability and a faster convergence rate than several other popular evolutionary methods.
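A minimal sketch of the two-stage idea on a generic test function, not the trajectory problems themselves: each target vector receives mutant vectors from multiple mutation strategies (here DE/rand/1 and DE/best/1), each out-of-bounds trial is repaired with multiple boundary-handling schemes (clipping and reflection), and the best resulting trial competes greedily with the target. The constants are illustrative assumptions.

    import numpy as np

    def sphere(x):
        return float(np.sum(x ** 2))

    def repair_clip(x, lo, hi):
        return np.clip(x, lo, hi)

    def repair_reflect(x, lo, hi):
        y = np.where(x < lo, 2 * lo - x, x)
        y = np.where(y > hi, 2 * hi - y, y)
        return np.clip(y, lo, hi)             # guard against repeated overshoot

    def combined_de(f, dim=10, pop_size=40, gens=300, F=0.6, CR=0.9, lo=-5.0, hi=5.0, seed=0):
        rng = np.random.default_rng(seed)
        pop = rng.uniform(lo, hi, (pop_size, dim))
        fit = np.array([f(x) for x in pop])
        for _ in range(gens):
            best = pop[int(np.argmin(fit))]
            for i in range(pop_size):
                a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
                mutants = [a + F * (b - c),               # DE/rand/1
                           best + F * (a - b)]            # DE/best/1
                trials = []
                for m in mutants:
                    cross = rng.random(dim) < CR
                    cross[rng.integers(dim)] = True       # ensure at least one mutant gene
                    trial = np.where(cross, m, pop[i])
                    for repair in (repair_clip, repair_reflect):
                        trials.append(repair(trial, lo, hi))
                values = [f(t) for t in trials]
                j = int(np.argmin(values))
                if values[j] < fit[i]:                    # greedy selection
                    pop[i], fit[i] = trials[j], values[j]
        k = int(np.argmin(fit))
        return pop[k], fit[k]

    if __name__ == "__main__":
        x, fx = combined_de(sphere)
        print("combined-strategy DE best:", fx)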


Author(s):  
Liqun Wang ◽  
Songqing Shan ◽  
G. Gary Wang

The presence of black-box functions in engineering design, which are usually computation-intensive, demands efficient global optimization methods. This work proposes a new global optimization method for black-box functions. The method is based on a novel mode-pursuing sampling (MPS) technique that systematically generates more sample points in the neighborhood of the function mode while statistically covering the entire search space. Quadratic regression is performed to detect the region containing the global optimum. The sampling and detection process iterates until the global optimum is obtained. Through intensive testing, this method is found to be effective, efficient, robust, and applicable to both continuous and discontinuous functions. It supports simultaneous computation and applies to both unconstrained and constrained optimization problems. Because it does not call any existing global optimization tool, it can also be used as a standalone global optimization method for inexpensive problems. Limitations of the method are also identified and discussed.
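A highly simplified sketch of the workflow described above, not the MPS method as published: draw candidate points, concentrate further samples near the best observed values while keeping some coverage of the whole space, fit a local quadratic regression around the current best points, and use its minimizer as the next guess. The separable quadratic model and all constants are illustrative assumptions.

    import numpy as np

    def f(x):
        return float(np.sum((x - 1.5) ** 2))

    def quadratic_min(X, y):
        """Fit y ~ c + b.x + a.x^2 (separable quadratic) and return its minimizer."""
        dim = X.shape[1]
        A = np.hstack([np.ones((len(X), 1)), X, X ** 2])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        b, a = coef[1:dim + 1], coef[dim + 1:]
        a = np.where(a > 1e-9, a, 1e-9)             # keep the fit convex coordinate-wise
        return -b / (2 * a)

    def mode_pursuing(dim=5, lo=-5.0, hi=5.0, iters=20, n_global=60, n_local=40, seed=0):
        rng = np.random.default_rng(seed)
        X = rng.uniform(lo, hi, (n_global, dim))
        y = np.array([f(x) for x in X])
        for _ in range(iters):
            best = X[int(np.argmin(y))]
            local = np.clip(best + rng.normal(0, 0.5, (n_local, dim)), lo, hi)  # near the mode
            globl = rng.uniform(lo, hi, (n_global // 4, dim))                   # keep coverage
            Xnew = np.vstack([local, globl])
            X = np.vstack([X, Xnew])
            y = np.concatenate([y, [f(x) for x in Xnew]])
            order = np.argsort(y)[:50]                                          # best region
            guess = np.clip(quadratic_min(X[order], y[order]), lo, hi)
            X, y = np.vstack([X, [guess]]), np.append(y, f(guess))
        return X[int(np.argmin(y))], float(np.min(y))

    if __name__ == "__main__":
        x, fx = mode_pursuing()
        print("mode-pursuing best:", fx)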


2019 ◽  
Vol 63 (4) ◽  
pp. 726-737
Author(s):  
Azita Mayeli

In this paper, we introduce a class of nonsmooth nonconvex optimization problems, and we propose a local iterative minimization-majorization (MM) algorithm to find an optimal solution for the optimization problem. The cost functions in our optimization problems are an extension of convex functions with the MC separable penalty previously introduced by Ivan Selesnick. These functions are not convex; therefore, convex optimization methods cannot be applied to prove the existence of an optimal minimum point for these functions. For our purpose, we use convex analysis tools to first construct a class of convex majorizers, which approximate the value of the non-convex cost function locally, and then use the MM algorithm to prove the existence of a local minimum. Convergence of the algorithm is guaranteed when the iterative points $x^{(k)}$ are obtained in a ball centred at $x^{(k-1)}$ with small radius. We prove that the algorithm converges to a stationary point (local minimum) of the cost function when the surrogates are strongly convex.
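A compact statement of the MM iteration sketched above, written with a generic convex majorizer $g(\cdot\,; x^{(k)})$ standing in for the paper's specific MC-penalty surrogates; the cost symbol $F$ and the ball radius $r$ are notational assumptions:

    % majorization conditions at the current iterate x^{(k)}
    g\bigl(x^{(k)}; x^{(k)}\bigr) = F\bigl(x^{(k)}\bigr), \qquad
    g\bigl(x; x^{(k)}\bigr) \ge F(x) \quad \text{for all } x \in B_r\bigl(x^{(k)}\bigr),
    % MM update: minimize the convex surrogate over the small ball around x^{(k)}
    x^{(k+1)} = \operatorname*{arg\,min}_{x \in B_r(x^{(k)})} \, g\bigl(x; x^{(k)}\bigr),
    % monotone descent follows from the two conditions above
    F\bigl(x^{(k+1)}\bigr) \le g\bigl(x^{(k+1)}; x^{(k)}\bigr) \le g\bigl(x^{(k)}; x^{(k)}\bigr) = F\bigl(x^{(k)}\bigr).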

