TCGD: A Time-Constrained Approximate Guided Depth-First Search Algorithm

1997 ◽  
Vol 06 (02) ◽  
pp. 255-271 ◽  
Author(s):  
Benjamin W. Wah ◽  
Lon-Chan Chu

In this paper, we develop TCGD, a problem-independent, time-constrained, approximate guided depth-first search (GDFS) algorithm. The algorithm is designed to achieve the best ascertained approximation degree under a fixed time constraint. We consider only searches with a finite search space and admissible heuristic functions, and we study NP-hard combinatorial optimization problems whose feasible solutions are computable in polynomial time. For the problems studied, we observe that execution time increases exponentially as the approximation degree decreases, although anomalies may occur. The algorithms we study are evaluated by simulations using the symmetric traveling-salesperson problem.
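The abstract does not reproduce the algorithm, so the following minimal sketch only illustrates the general shape of a time-constrained, approximate guided depth-first search with an admissible lower bound. The pruning test, the guidance ordering, and the callback names (expand, lower_bound, is_goal, cost) are assumptions for illustration, not the TCGD algorithm as published.

```python
import time

def approx_guided_dfs(root, expand, lower_bound, is_goal, cost,
                      epsilon=0.1, time_limit=5.0):
    """Depth-first branch-and-bound with an admissible lower bound.

    Prunes any node whose bound cannot improve on the incumbent by more
    than a factor of (1 + epsilon) and stops when the deadline is reached.
    Illustrative sketch only, not the published TCGD algorithm.
    """
    deadline = time.monotonic() + time_limit
    best_cost, best_sol = float("inf"), None
    stack = [root]
    while stack and time.monotonic() < deadline:
        node = stack.pop()
        if lower_bound(node) * (1.0 + epsilon) >= best_cost:
            continue                      # cannot improve enough: prune
        if is_goal(node):
            best_cost, best_sol = cost(node), node
            continue
        # Guided: push children so the most promising (lowest bound) is expanded first.
        stack.extend(sorted(expand(node), key=lower_bound, reverse=True))
    return best_sol, best_cost
```

The (1 + epsilon) factor in the pruning test plays the role of the approximation degree: a larger epsilon prunes more nodes and finishes sooner, but it only guarantees a solution within that factor of the optimum when the search completes before the deadline.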

2011 ◽  
Vol 421 ◽  
pp. 559-563
Author(s):  
Yong Chao Gao ◽  
Li Mei Liu ◽  
Heng Qian ◽  
Ding Wang

The scale and complexity of the search space are important factors in determining the difficulty of an optimization problem, and information about the solution space can guide the search toward optimal solutions. Based on this observation, an algorithm for combinatorial optimization is proposed. The algorithm uses good solutions found by intelligent algorithms to contract the search space and partition it into one or several optimal regions defined by the backbones of the solutions, and optimization of the resulting small-scale problems is then carried out within those regions. No statistical analysis is required before or during the solving process; instead, solution information is used to estimate the landscape of the search space, which improves both solving speed and solution quality. The algorithm opens a new path for solving combinatorial optimization problems, and experimental results confirm its efficiency.
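As a rough illustration of the backbone idea described above (not the authors' procedure), the sketch below fixes the variable assignments shared by a set of good solutions and treats the remaining variables as the contracted, small-scale subproblem; the dictionary representation and function names are assumptions.

```python
def extract_backbone(good_solutions):
    """Return the assignments shared by every good solution.

    Each solution is a dict mapping variable -> value; the backbone is the
    set of (variable, value) pairs common to all of them.  Illustrative
    sketch of the backbone idea only, not the algorithm from the paper.
    """
    first, *rest = good_solutions
    backbone = dict(first)
    for sol in rest:
        backbone = {v: val for v, val in backbone.items() if sol.get(v) == val}
    return backbone

def contracted_variables(variables, backbone):
    """Variables not fixed by the backbone form the small-scale subproblem."""
    return [v for v in variables if v not in backbone]

# Example: three good solutions to a 0/1 problem agree on x1 and x3,
# so only x2 remains to be optimized in the contracted region.
sols = [{"x1": 1, "x2": 0, "x3": 1},
        {"x1": 1, "x2": 1, "x3": 1},
        {"x1": 1, "x2": 0, "x3": 1}]
print(extract_backbone(sols))                                            # {'x1': 1, 'x3': 1}
print(contracted_variables(["x1", "x2", "x3"], extract_backbone(sols)))  # ['x2']
```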


2013 ◽  
Vol 411-414 ◽  
pp. 1904-1910
Author(s):  
Kai Zhong Jiang ◽  
Tian Bo Wang ◽  
Zhong Tuan Zheng ◽  
Yu Zhou

An algorithm based on free search is proposed for combinatorial optimization problems. In this algorithm, a feasible solution is represented as a full permutation of all the elements, so that transforming one solution into another can be interpreted as transforming one permutation into another. The algorithm is then combined with intersection elimination. The resulting discrete free search algorithm greatly improves the convergence rate of the search process and enhances the quality of the results. Experimental results on standard TSP data show that the proposed algorithm outperforms the genetic algorithm by about 2.7%.
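The abstract does not spell out the intersection-elimination step. For Euclidean TSP instances, crossing edges are commonly removed by 2-opt segment reversals, and the sketch below illustrates that reading on a permutation-encoded tour; the function names and the 2-opt interpretation are assumptions, not necessarily the paper's exact procedure.

```python
import math

def dist(a, b):
    """Euclidean distance between two points given as (x, y) tuples."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def eliminate_intersections(tour, points):
    """Reverse tour segments while doing so shortens the tour (2-opt).

    For Euclidean instances this removes crossing edges.  Sketch of one
    common 'intersection elimination' step; the paper's exact procedure
    may differ.
    """
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            # Skip the pair of edges that are adjacent around the tour start.
            for j in range(i + 2, n - (1 if i == 0 else 0)):
                a, b = points[tour[i]], points[tour[i + 1]]
                c, d = points[tour[j]], points[tour[(j + 1) % n]]
                if dist(a, b) + dist(c, d) > dist(a, c) + dist(b, d) + 1e-12:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

# Example: the crossing tour [0, 2, 1, 3] on a unit square is untangled to [0, 2, 3, 1].
pts = [(0, 0), (1, 0), (0, 1), (1, 1)]
print(eliminate_intersections([0, 2, 1, 3], pts))
```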


This study presents a pragmatic outlook on the genetic algorithm. Many biologically inspired algorithms are studied for their ability to evolve toward good solutions, and among them the genetic algorithm is widely accepted because it fits evolutionary computing models well. Genetic algorithms can generate optimal solutions for random as well as deterministic problems. A genetic algorithm is a mathematical approach that imitates the processes observed in natural evolution, and its methodology has been intensively studied in order to harness the power of evolution for optimization problems. It is an adaptive heuristic search algorithm based on the evolutionary ideas of genetics and natural selection: it exploits randomized search to solve optimization problems and takes advantage of historical information to direct the search toward regions of better performance within the search space. The basic techniques of evolutionary algorithms simulate processes in natural systems; they aim to carry an effective population to the next generation and ensure the survival of the fittest, reflecting nature's tendency to let the stronger dominate the weaker. In this study, we present arithmetic views of the behavior and operators of the genetic algorithm that support the evolution of feasible solutions into optimized solutions.
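As a concrete companion to the operators described above, here is a compact, textbook-style genetic algorithm loop (tournament selection, one-point crossover, bit-flip mutation) applied to a toy bit-string problem. It is illustrative only, not the specific formulation proposed in the study.

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=50, generations=100, p_mut=0.01):
    """Textbook GA: tournament selection, one-point crossover, bit-flip mutation.

    Illustrative sketch, not the formulation proposed in the study.
    """
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        def tournament():
            return max(random.sample(pop, 3), key=fitness)        # survival of the fittest
        children = []
        while len(children) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = random.randint(1, n_bits - 1)                   # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (random.random() < p_mut) for b in child]  # bit-flip mutation
            children.append(child)
        pop = children
        best = max(pop + [best], key=fitness)                     # keep the fittest found so far
    return best

# Example: maximize the number of ones in the bit string.
print(genetic_algorithm(sum))
```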


2013 ◽  
Vol 21 (4) ◽  
pp. 625-658 ◽  
Author(s):  
Leticia Hernando ◽  
Alexander Mendiburu ◽  
Jose A. Lozano

The solution of many combinatorial optimization problems is carried out by metaheuristics, which generally make use of local search algorithms. These algorithms use some kind of neighborhood structure over the search space, and their performance strongly depends on the properties that the neighborhood imposes on the search space. One of these properties is the number of local optima. Given an instance of a combinatorial optimization problem and a neighborhood, estimating the number of local optima can help not only to measure the complexity of the instance, but also to choose the most convenient neighborhood to solve it. In this paper we review and evaluate several methods for estimating the number of local optima in combinatorial optimization problems. The methods reviewed come not only from the combinatorial optimization literature, but also from the statistical literature. A thorough evaluation on synthetic as well as real problems is given, and we conclude by providing recommendations of methods for several scenarios.
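As one simple example of the kind of method such a review covers, random-restart local search can be paired with a nonparametric richness estimator; the sketch below applies the Chao1 estimator to the multiset of local optima reached by hill climbing. It is offered purely as an illustration and is not claimed to be one of the specific methods evaluated in the paper.

```python
from collections import Counter

def hill_climb(start, neighbors, f):
    """Greedy descent to a local optimum of f (minimization)."""
    x = start
    while True:
        best = min(neighbors(x), key=f, default=x)
        if f(best) >= f(x):
            return x
        x = best

def estimate_local_optima(random_solution, neighbors, f, restarts=1000):
    """Chao1-style estimate of the number of local optima.

    Runs random-restart hill climbing, counts how often each local optimum
    is reached, and applies S_obs + f1^2 / (2 * f2), where f1 and f2 are the
    numbers of optima seen exactly once and exactly twice.  Illustrative
    sketch; solutions are assumed to be list-like and hashable as tuples.
    """
    counts = Counter(tuple(hill_climb(random_solution(), neighbors, f))
                     for _ in range(restarts))
    s_obs = len(counts)
    f1 = sum(1 for c in counts.values() if c == 1)
    f2 = sum(1 for c in counts.values() if c == 2)
    return s_obs + (f1 * f1 / (2 * f2) if f2 else f1 * (f1 - 1) / 2)
```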


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Shuntaro Okada ◽  
Masayuki Ohzeki ◽  
Shinichiro Taguchi

Quantum annealing is a heuristic algorithm for solving combinatorial optimization problems, and hardware for implementing this algorithm has been developed by D-Wave Systems Inc. The current version of the D-Wave quantum annealer can solve unconstrained binary optimization problems with a limited number of binary variables. However, the cost functions of several practical problems are defined by a large number of integer variables. To solve these problems using the quantum annealer, integer variables are generally binarized with one-hot encoding, and the binarized problem is partitioned into small subproblems. However, the entire search space of the binarized problem is considerably larger than that of the original integer problem and is dominated by infeasible solutions. Therefore, to efficiently solve large optimization problems with one-hot encoding, partitioning methods that extract subproblems with as many feasible solutions as possible are required. In this study, we propose two partitioning methods and demonstrate that they result in improved solutions.
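For reference, the one-hot encoding mentioned above replaces an integer variable taking one of n values with n binary variables plus a quadratic penalty that forces exactly one of them to be 1. The sketch below expands that standard penalty, weight * (sum_i x_i - 1)^2, into QUBO coefficients; it illustrates the encoding itself, not the partitioning methods proposed in the paper.

```python
from itertools import combinations

def one_hot_qubo(n_values, weight):
    """QUBO penalty enforcing that exactly one of n_values binaries is 1.

    Expands weight * (sum_i x_i - 1)^2 using x_i^2 = x_i for binaries:
    linear terms get -weight, pairwise terms get +2*weight, and the
    constant +weight is dropped.  Illustration of the standard one-hot
    penalty only.
    """
    qubo = {(i, i): -weight for i in range(n_values)}
    for i, j in combinations(range(n_values), 2):
        qubo[(i, j)] = 2 * weight
    return qubo

# Encoding one integer variable z in {0, 1, 2, 3} already needs 4 binaries,
# which is why the binarized search space grows so quickly.
print(one_hot_qubo(4, weight=10.0))
```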


2018 ◽  
Vol 7 (4.27) ◽  
pp. 22
Author(s):  
Zulkifli Md Yusof ◽  
Zuwairie Ibrahim ◽  
Asrul Adam ◽  
Kamil Zakwan Mohd Azmi ◽  
Tasiransurini Ab Rahman ◽  
...  

The Simulated Kalman Filter (SKF) is a population-based optimization algorithm that exploits the estimation capability of the Kalman filter to search for a solution in a continuous search space. The SKF algorithm is only capable of solving numerical optimization problems that involve a continuous search space, whereas some problems, such as routing and scheduling, involve binary or discrete search spaces. At present, there are three modifications of the original SKF algorithm for solving combinatorial optimization problems: binary SKF (BSKF), angle-modulated SKF (AMSKF), and distance-evaluated SKF (DESKF). These three combinatorial SKF algorithms use binary encoding to represent the solution to a combinatorial optimization problem. This paper introduces the latest version of distance-evaluated SKF, which uses state encoding, instead of binary encoding, to represent the solution to a combinatorial problem. The proposed algorithm is called the state-encoded distance-evaluated SKF (SEDESKF) algorithm. Since the original SKF algorithm tends to converge prematurely, the distance is handled differently in this study: to control the exploration and exploitation of the SEDESKF algorithm, the distance is normalized. The performance of the SEDESKF algorithm is compared against the existing combinatorial SKF algorithms on a set of Traveling Salesman Problem (TSP) instances.
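The abstract does not give the normalization formula, so the sketch below is only a hypothetical reading of the idea: each agent's distance to the current best estimate is max-scaled into [0, 1] and used as the probability of jumping to a random discrete state (exploration) rather than keeping the current one (exploitation). The function names and the scaling rule are assumptions, not the SEDESKF update as published.

```python
import random

def normalized_distances(estimates, best_estimate):
    """Max-scale each agent's distance to the best estimate into [0, 1].

    Hypothetical illustration: the abstract says SEDESKF normalizes a
    distance to balance exploration and exploitation, but the exact
    formula is not given here, so simple max-scaling is assumed.
    """
    dists = [abs(e - best_estimate) for e in estimates]
    d_max = max(dists) or 1.0
    return [d / d_max for d in dists]

def next_state(current_state, n_states, norm_dist):
    """A large normalized distance favours a random jump (exploration);
    a small one keeps the current discrete state (exploitation)."""
    if random.random() < norm_dist:
        return random.randrange(n_states)   # explore
    return current_state                    # exploit

# Example with hypothetical agent estimates and 10 candidate states.
estimates = [3.2, 4.8, 5.1, 2.9]
for agent, nd in enumerate(normalized_distances(estimates, best_estimate=5.0)):
    print(agent, next_state(current_state=agent, n_states=10, norm_dist=nd))
```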

