On Creating Complementary Pattern Databases

Author(s):  
Santiago Franco ◽  
Álvaro Torralba ◽  
Levi H. S. Lelis ◽  
Mike Barley

A pattern database (PDB) for a planning task is a heuristic function in the form of a lookup table that contains optimal solution costs of a simplified version of the task. In this paper we introduce a method that sequentially creates multiple PDBs which are later combined into a single heuristic function. At a given iteration, our method uses estimates of the A* running time to create a PDB that complements the strengths of the PDBs created in previous iterations. We evaluate our algorithm using explicit and symbolic PDBs. Our results show that the heuristics produced by our approach outperform existing schemes, and that our method creates PDBs that complement the strengths of other existing heuristics, such as a symbolic perimeter heuristic.
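
To make the lookup-table idea concrete, here is a minimal sketch (not the authors' construction): a PDB built by backward breadth-first search in the abstract space, and several PDBs combined by taking the pointwise maximum, which preserves admissibility. The `predecessors` and `abstractions` callables are hypothetical placeholders.

```python
from collections import deque

def build_pdb(abstract_goal, predecessors):
    """Backward breadth-first search from the abstract goal: maps every
    reachable abstract state to its optimal solution cost, assuming
    unit-cost actions."""
    pdb = {abstract_goal: 0}
    queue = deque([abstract_goal])
    while queue:
        state = queue.popleft()
        for pred in predecessors(state):   # states with an action into `state`
            if pred not in pdb:
                pdb[pred] = pdb[state] + 1
                queue.append(pred)
    return pdb

def combined_heuristic(state, pdbs, abstractions):
    """Pointwise maximum over several PDBs; the maximum of admissible
    heuristics is itself admissible."""
    # default 0 if the abstract state is missing (safe, if uninformative)
    return max(pdb.get(alpha(state), 0) for pdb, alpha in zip(pdbs, abstractions))
```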

2007 ◽  
Vol 30 ◽  
pp. 213-247 ◽  
Author(s):  
A. Felner ◽  
R. E. Korf ◽  
R. Meshulam ◽  
R. C. Holte

A pattern database (PDB) is a heuristic function implemented as a lookup table that stores the lengths of optimal solutions for subproblem instances. Standard PDBs have a distinct entry in the table for each subproblem instance. In this paper we investigate compressing PDBs by merging several entries into one, thereby allowing the use of PDBs that exceed available memory in their uncompressed form. We introduce a number of methods for determining which entries to merge and discuss their relative merits. These vary from domain-independent approaches that allow any set of entries in the PDB to be merged, to more intelligent methods that take into account the structure of the problem. The choice of the best compression method is based on domain-dependent attributes. We present experimental results on a number of combinatorial problems, including the four-peg Towers of Hanoi problem, the sliding-tile puzzles, and the Top-Spin puzzle. For the Towers of Hanoi, we show that the search time can be reduced by up to three orders of magnitude by using compressed PDBs compared to uncompressed PDBs of the same size. More modest improvements were observed for the other domains.
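
A minimal sketch of the generic compression idea, assuming an array-based PDB: several consecutive entries are merged into one bucket that stores their minimum, so the compressed table never overestimates and admissibility is preserved.

```python
def compress_pdb(pdb, k):
    """Merge every k consecutive entries of an array-based PDB into one
    bucket holding their minimum; min-merging never overestimates, so
    admissibility is preserved at the cost of some heuristic precision."""
    compressed = [float('inf')] * ((len(pdb) + k - 1) // k)
    for index, cost in enumerate(pdb):
        bucket = index // k
        compressed[bucket] = min(compressed[bucket], cost)
    return compressed

def lookup(compressed, index, k):
    """Heuristic lookup in the compressed table."""
    return compressed[index // k]
```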


2017 ◽  
Vol 8 (4) ◽  
pp. 1-17
Author(s):  
Han Huang ◽  
Hongyue Wu ◽  
Yushan Zhang ◽  
Zhiyong Lin ◽  
Zhifeng Hao

Running-time analysis of ant colony optimization (ACO) is crucial for understanding the power of the algorithm in computation. This paper conducts a running-time analysis of ant system (AS) algorithms, a kind of ACO, for the traveling salesman problem (TSP). The authors model the AS algorithm as an absorbing Markov chain by jointly representing the best-so-far solutions and the pheromone matrix as a discrete stochastic status per iteration. The running time of AS can then be evaluated by the expected first-hitting time (FHT), the least number of iterations needed to attain the global optimal solution on average. The authors derive upper bounds on the expected FHT of two classical AS algorithms (the ant quantity system and the ant-cycle system) for TSP. They further take the regular-polygon TSP (RTSP) as a case study and obtain numerical results by calculating six RTSP instances. The RTSP is a special but real-world TSP in which the constraint of the triangle inequality is stringently imposed. The numerical results comparing the running times of the two AS algorithms verify the theoretical findings.
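
For intuition about expected first-hitting times of an absorbing Markov chain (a toy three-state chain, not the AS model of the paper), the expected number of steps to absorption follows from the fundamental matrix N = (I − Q)^(-1):

```python
import numpy as np

# Transient-to-transient block Q of a toy absorbing Markov chain; the
# remaining probability mass in each row leads to absorbing states.
Q = np.array([[0.6, 0.2, 0.1],
              [0.1, 0.5, 0.2],
              [0.0, 0.3, 0.4]])

N = np.linalg.inv(np.eye(3) - Q)    # fundamental matrix N = (I - Q)^{-1}
expected_fht = N @ np.ones(3)       # expected steps to absorption per start state
print(expected_fht)
```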


2020 ◽  
Vol 45 (4) ◽  
pp. 1371-1392 ◽  
Author(s):  
Klaus Jansen ◽  
Kim-Manuel Klein ◽  
José Verschae

Makespan scheduling on identical machines is one of the most basic and fundamental packing problems studied in the discrete optimization literature. It asks for an assignment of n jobs to a set of m identical machines that minimizes the makespan. The problem is strongly NP-hard, and thus we do not expect a (1 + ε)-approximation algorithm with a running time that depends polynomially on 1/ε. It has recently been shown that a running time subexponential in 1/ε would imply that the Exponential Time Hypothesis (ETH) fails. A long sequence of algorithms has been developed that try to obtain low dependencies on 1/ε, the best of which achieves a quadratic dependency on 1/ε in the exponent. In this paper we obtain an algorithm with an almost-linear dependency on 1/ε in the exponent, which is tight under ETH up to logarithmic factors. Our main technical contribution is a new structural result on the configuration integer linear program (configuration IP). More precisely, we show the existence of a highly symmetric and sparse optimal solution, in which all but a constant number of machines are assigned a configuration with small support. This structure can then be exploited by integer programming techniques and enumeration. We believe that our structural result is of independent interest and should find applications in other settings. We exemplify this by applying our structural results to the minimum makespan problem on related machines and to a larger class of objective functions on parallel machines. For all these cases, we obtain an efficient PTAS whose running time has an almost-linear dependency on 1/ε in the exponent and is polynomial in n.
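
A toy enumeration of the configurations that form the columns of such a configuration IP, assuming job sizes have already been rounded to a constant number of distinct values:

```python
def configurations(sizes, capacity):
    """Enumerate all multisets of (rounded) job sizes fitting within the
    makespan guess `capacity`; each tuple gives a multiplicity per size."""
    sizes = sorted(set(sizes), reverse=True)
    result = []
    def extend(i, remaining, counts):
        if i == len(sizes):
            result.append(tuple(counts))
            return
        for count in range(remaining // sizes[i] + 1):
            extend(i + 1, remaining - count * sizes[i], counts + [count])
    extend(0, capacity, [])
    return result

# With sizes {3, 2} and capacity 7, the tuple (2, 0) means two jobs of
# size 3 and no job of size 2 assigned to one machine.
print(configurations([3, 2], 7))
```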


2014 ◽  
Vol 50 ◽  
pp. 141-187 ◽  
Author(s):  
M. Goldenberg ◽  
A. Felner ◽  
R. Stern ◽  
G. Sharon ◽  
N. Sturtevant ◽  
...  

When solving instances of problem domains that feature a large branching factor, A* may generate a large number of nodes whose cost is greater than the cost of the optimal solution. We designate such nodes as surplus. Generating surplus nodes and adding them to the OPEN list may dominate both the time and memory costs of the search. A recently introduced variant of A* called Partial Expansion A* (PEA*) deals with the memory aspect of this problem. When expanding a node n, PEA* generates all of its children and puts into OPEN only the children with f = f(n). n is then re-inserted into the OPEN list with the f-cost of its best discarded child. This guarantees that surplus nodes are not inserted into OPEN. In this paper, we present a novel variant of A* called Enhanced Partial Expansion A* (EPEA*) that advances the idea of PEA* to address the time aspect. Given a priori domain- and heuristic-specific knowledge, EPEA* generates only the children with f = f(n). Although EPEA* is not always applicable or practical, we study several variants of EPEA* which make it applicable to a large number of domains and heuristics. In particular, the ideas of EPEA* are applicable to IDA* and to the domains where pattern databases are traditionally used. Experimental studies show significant improvements in run-time and memory performance for several standard benchmark applications. We provide several theoretical studies to facilitate an understanding of the new algorithm.
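
A minimal sketch of the PEA* expansion rule, assuming unit edge costs; the bookkeeping is simplified and the names are illustrative, not the paper's implementation:

```python
import heapq, itertools

def pea_star(start, h, successors, is_goal):
    """Sketch of Partial Expansion A* (unit-cost edges assumed): only
    children whose f does not exceed the parent's stored f enter OPEN;
    the parent is re-inserted with the f of its best discarded child."""
    tie = itertools.count()                      # tiebreaker for the heap
    open_list = [(h(start), next(tie), 0, start)]
    best_g = {start: 0}
    while open_list:
        f, _, g, node = heapq.heappop(open_list)
        if is_goal(node):
            return g
        discarded = []
        for child in successors(node):
            child_f = g + 1 + h(child)
            if child_f <= f:                     # surplus children stay out
                if g + 1 < best_g.get(child, float('inf')):
                    best_g[child] = g + 1
                    heapq.heappush(open_list, (child_f, next(tie), g + 1, child))
            else:
                discarded.append(child_f)
        if discarded:                            # parent carries the next f
            heapq.heappush(open_list, (min(discarded), next(tie), g, node))
    return None
```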


Author(s):  
Aref Gholizadeh Manghutay ◽  
Mehdi Salay Naderi ◽  
Seyed Hamid Fathi

Purpose: Heuristic algorithms have been widely used in different types of optimization problems. Their unique features in terms of running time and flexibility have made them superior to deterministic algorithms. To accurately compare different heuristic algorithms in solving optimization problems, the final optimal solution needs to be known. Existing deterministic methods such as Exhaustive Search and Integer Linear Programming can provide the final global optimal solution for small-scale optimization problems. However, as the system grows, the number of calculations and the required memory size increase dramatically, so applying existing deterministic methods is no longer possible for medium- and large-scale systems. The purpose of this paper is to introduce a novel deterministic method with a short running time and a small memory requirement for the optimal placement of micro phasor measurement units (µPMUs) in radial electricity distribution systems, making the system completely observable.

Design/methodology/approach: First, the principle of the method is explained and the observability of the system is analyzed. Then, the algorithm's running time and memory usage when applied to several modified versions of the Institute of Electrical and Electronics Engineers (IEEE) 123-node test feeder are obtained and compared with those of its deterministic counterparts.

Findings: The innovative step-by-step placement of µPMUs yields a unique method. Simulation results show that the proposed method has the distinctive features of a short running time and small memory requirements.

Originality/value: While the mathematical background of the observability study of electricity distribution systems is very well presented in the referenced papers, the proposed step-by-step placement method of µPMUs, which shrinks the unobservable parts of the system in each step, has not been discussed before. The presented paper is directly applicable to typical problems in the field of power systems.
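
Since the paper's step-by-step algorithm is not reproduced in the abstract, the following is only a naive greedy sketch of placement on a radial feeder, under the simplifying (assumed) rule that a device observes its own bus and all adjacent buses:

```python
def greedy_placement(adjacency):
    """Naive greedy sketch (not the authors' algorithm): repeatedly place
    a µPMU at the node that observes the most still-unobserved nodes."""
    unobserved = set(adjacency)
    placed = []
    while unobserved:
        best = max(adjacency,
                   key=lambda n: len(({n} | set(adjacency[n])) & unobserved))
        placed.append(best)
        unobserved -= {best} | set(adjacency[best])
    return placed

# Tiny radial feeder: node 1 feeds 2 and 3; node 3 feeds 4.
feeder = {1: [2, 3], 2: [1], 3: [1, 4], 4: [3]}
print(greedy_placement(feeder))   # [1, 3] for this toy feeder
```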


2014 ◽  
Author(s):  
Αθανάσιος Κουτσώνας

Many combinatorial computational problems are considered in their general form intractable, in the sense that even for modest size problems, providing an exact optimal solution is practically infeasible, as it typically involves the use of algorithms whose running time is exponential in the size of the problem. Often these problems can be modeled by graphs. Then, additional structural properties of a graph, such as surface embeddability, can provide a handle for the design of more efficient algorithms. The theory of Bidimensionality, defined in the context of Parameterized Complexity, builds on the celebrated results of Graph Minor theory and establishes a meta-algorithmic framework for addressing problems in a broad range of graph classes, namely all generalizations of graphs embeddable on some surface. In this doctoral thesis we explore topics of combinatorial nature related to the implementation of the theory of Bidimensionality and to the possibilities of extending its applicability range.


2003 ◽  
Vol 20 ◽  
pp. 291-341 ◽  
Author(s):  
J. Hoffmann

Planning with numeric state variables has been a challenge for many years, and was a part of the 3rd International Planning Competition (IPC-3). Currently one of the most popular and successful algorithmic techniques in STRIPS planning is to guide search by a heuristic function, where the heuristic is based on relaxing the planning task by ignoring the delete lists of the available actions. We present a natural extension of "ignoring delete lists" to numeric state variables, preserving the relevant theoretical properties of the STRIPS relaxation under the condition that the numeric task at hand is "monotonic". We then identify a subset of the numeric IPC-3 competition language, "linear tasks", where monotonicity can be achieved by pre-processing. Based on that, we extend the algorithms used in the heuristic planning system FF to linear tasks. The resulting system, Metric-FF, is, according to the IPC-3 results which we discuss, one of the two currently most efficient numeric planners.
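
The STRIPS core of "ignoring delete lists" can be sketched as a reachability fixpoint; the paper's extension to monotonic numeric variables is not covered by this sketch:

```python
def relaxed_reachable(init, goal, actions):
    """Delete-relaxation fixpoint: apply actions while ignoring their
    delete lists, accumulating facts until nothing new is added.
    `actions` is an iterable of (preconditions, add_effects) frozensets."""
    facts = set(init)
    changed = True
    while changed:
        changed = False
        for pre, add in actions:
            if pre <= facts and not add <= facts:
                facts |= add
                changed = True
    return goal <= facts   # is the goal reachable in the relaxation?

# Toy task: from {a}, action ({a} -> {b}) then ({b} -> {g}) reaches goal {g}.
acts = [(frozenset({'a'}), frozenset({'b'})),
        (frozenset({'b'}), frozenset({'g'}))]
print(relaxed_reachable({'a'}, {'g'}, acts))   # True
```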


Author(s):  
Mehdi Sadeqi ◽  
Howard J. Hamilton

A domain-independent heuristic function created by an abstraction is usually implemented using a Pattern Database (PDB), which is a lookup table of (abstract state, heuristic value) pairs. PDBs containing high quality heuristic values generally require substantial memory space and therefore need to be compressed. In this paper, we introduce Acyclic Random Hypergraph Compression (ARHC), a domain-independent approach to compressing PDBs using acyclic random r-partite r-uniform hypergraphs. The ARHC algorithm, which comes in Base and Extended versions, provides fast lookup and a high compression rate. ARHC-Extended achieves higher quality heuristics than ARHC-Base by decreasing the heuristic information loss at the cost of some decrease in the compression rate. ARHC shows higher performance than level-by-level Bloom filter PDB compression in all experiments conducted so far.
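
ARHC's construction is not given in the abstract; as background, the following sketches the classic MWHC-style scheme over an acyclic random r-partite hypergraph, which is the general idea this family of methods builds on. All names are illustrative.

```python
import random

def build_compressed(pdb, r=3, c=1.3, seed=0):
    """Sketch of an MWHC-style construction over an acyclic random
    r-partite r-uniform hypergraph (background for this family of
    schemes, not the ARHC algorithm itself). Each key hashes to one cell
    per partition; cells are assigned so that the sum of a key's r cells
    modulo `mod` recovers its stored heuristic value."""
    keys = list(pdb)
    block = max(int(c * len(keys) / r) + 1, 2)      # cells per partition
    mod = max(pdb.values()) + 1
    rng = random.Random(seed)
    salts = [rng.random() for _ in range(r)]
    cells = lambda k: [i * block + hash((salts[i], k)) % block for i in range(r)]
    incident = {}
    for k in keys:
        for cell in cells(k):
            incident.setdefault(cell, set()).add(k)
    # Peel: repeatedly detach a key owning a cell no other live key
    # touches; this succeeds exactly when the hypergraph is acyclic.
    order, alive = [], set(keys)
    stack = [cell for cell, ks in incident.items() if len(ks) == 1]
    while stack:
        cell = stack.pop()
        owners = incident[cell] & alive
        if len(owners) != 1:
            continue
        (k,) = owners
        order.append((k, cell))
        alive.discard(k)
        stack.extend(c2 for c2 in cells(k) if len(incident[c2] & alive) == 1)
    assert not alive, "hypergraph not peelable; retry with another seed"
    # Assign in reverse peel order so every key's equation ends up satisfied.
    table = [0] * (r * block)
    for k, free in reversed(order):
        others = sum(table[cell] for cell in cells(k) if cell != free)
        table[free] = (pdb[k] - others) % mod
    return table, salts, block, mod

def lookup(state, table, salts, block, mod):
    """Sum the key's r cells modulo `mod` to recover its value."""
    return sum(table[i * block + hash((salts[i], state)) % block]
               for i in range(len(salts))) % mod

toy = {('s', i): i % 5 for i in range(10)}          # toy abstract-state PDB
for seed in range(100):                             # retry until peelable
    try:
        table, salts, block, mod = build_compressed(toy, seed=seed)
        break
    except AssertionError:
        continue
assert all(lookup(k, table, salts, block, mod) == v for k, v in toy.items())
```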


2020 ◽  
Author(s):  
Daniel S. Zalkind ◽  
Emiliano Dall'Anese ◽ 
Lucy Y. Pao

We develop an automated controller tuning procedure for wind turbines that uses the results of nonlinear, aeroelastic simulations to arrive at an optimal solution. Using a zeroth-order optimization algorithm, simulations of controllers with randomly generated parameters are used to estimate the gradient and converge to an optimal set of those parameters. We use kriging to visualize the design space and estimate the uncertainty, providing a level of confidence in the result. The procedure is applied to three problems in wind turbine control. First, the below-rated torque control is optimized for power capture. Next, the parameters of a proportional-integral blade pitch controller are optimized to minimize structural loads under a constraint on the maximum generator speed; the procedure is tested on rotors from 40 to 400 m in diameter and compared with the results of a grid-search optimization. Finally, we present an algorithm that uses a series of parameter optimizations to tune the lookup table for the minimum pitch setting of the above-rated pitch controller, considering peak loads and power capture. Using experience gained from these applications, we present a generalized design procedure and guidelines for implementing similar automated controller tuning tasks.
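
As an illustration of the class of method described (not the authors' tuner), a simultaneous-perturbation zeroth-order step estimates the gradient from two evaluations with a random ± perturbation; in the paper's setting `objective` would wrap an aeroelastic simulation:

```python
import numpy as np

def spsa_step(objective, theta, delta=0.1, lr=0.05, rng=None):
    """One simultaneous-perturbation (zeroth-order) update: two objective
    evaluations along a random +/- direction yield a gradient estimate."""
    rng = rng or np.random.default_rng()
    direction = rng.choice([-1.0, 1.0], size=theta.shape)
    diff = objective(theta + delta * direction) - objective(theta - delta * direction)
    return theta - lr * (diff / (2 * delta)) * direction

# Toy usage on a quadratic "cost" standing in for a simulation metric.
theta = np.array([2.0, -1.5])
for _ in range(200):
    theta = spsa_step(lambda p: float(np.sum(p ** 2)), theta)
print(theta)   # approaches the optimum at the origin
```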


2015 ◽  
Vol 26 (06) ◽  
pp. 769-801
Author(s):  
Javad Akbari Torkestani

The Steiner connected dominating set problem is a generalization of the well-known connected dominating set problem in which only a specified subset of the vertices must be dominated. Finding the Steiner connected dominating set is an NP-hard problem in graph theory, and a new promising approach for multicast routing in wireless ad hoc networks. In this paper, we propose six learning automata-based approximation algorithms for finding a near-optimal solution to the minimum Steiner connected dominating set problem. For the first proposed algorithm, it is shown that, by a proper choice of the learning rate of the algorithm, the probability of approximating the optimal solution can be made as close to unity as desired. Moreover, we compute the worst-case running time of this algorithm for finding a 1/(1 − ε)-optimal solution, and show that, by a proper choice of the learning rate, a trade-off can be made between the running time of the algorithm and the size of the dominating set. The last proposed algorithm is compared with previous well-known algorithms, and the results show the superiority of the proposed algorithm both in terms of dominating set size and running time.
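
A generic sketch of the linear reward-inaction update that drives this kind of learning automaton (not the paper's Steiner-CDS construction): rewarded actions gain probability mass, penalized actions leave the vector unchanged.

```python
import random

def lri_update(probs, chosen, reward, rate=0.1):
    """Linear reward-inaction update for a single learning automaton:
    on reward, shift probability toward the chosen action; on penalty,
    leave the probability vector unchanged."""
    if reward:
        probs = [p + rate * (1 - p) if i == chosen else p * (1 - rate)
                 for i, p in enumerate(probs)]
    return probs

# Toy environment: action 0 is rewarded 80% of the time, action 1 only 20%.
probs = [0.5, 0.5]
for _ in range(500):
    action = random.choices(range(2), weights=probs)[0]
    reward = random.random() < (0.8 if action == 0 else 0.2)
    probs = lri_update(probs, action, reward)
print(probs)   # converges toward selecting action 0
```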

