On Hash-Based Work Distribution Methods for Parallel Best-First Search

2017
Vol 60
pp. 491-548
Author(s):
Yuu Jinnai
Alex Fukunaga

Parallel best-first search algorithms such as Hash Distributed A* (HDA*) distribute work among the processes using a global hash function. We analyze the search and communication overheads of state-of-the-art hash-based parallel best-first search algorithms, and show that although Zobrist hashing, the standard hash function used by HDA*, achieves good load balance for many domains, it incurs significant communication overhead because almost all generated nodes are transferred to a processor other than that of their parents. We propose Abstract Zobrist hashing, a new work distribution method for parallel search which, instead of computing a hash value from the raw features of a state, applies a feature projection function to generate a set of abstract features; hashing these abstract features yields higher locality and thus lower communication overhead. We show that Abstract Zobrist hashing outperforms previous methods on search domains when using hand-coded, domain-specific feature projection functions. We then propose GRAZHDA*, a graph-partitioning-based approach to automatically generating feature projection functions. GRAZHDA* seeks to approximate the partitioning of the actual search space graph by partitioning the domain transition graph, an abstraction of the state space graph. We show that GRAZHDA* outperforms previous methods on domain-independent planning.
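To make the contrast concrete, here is a minimal sketch of standard Zobrist hashing versus Abstract Zobrist hashing, assuming states are tuples of integer feature values. The bucket-based `project` function stands in for a hand-coded feature projection (GRAZHDA* would derive one automatically), and all table sizes are illustrative.

```python
import random

random.seed(0)
NUM_FEATURES, NUM_VALUES, NUM_ABSTRACT_VALUES = 16, 32, 4

# One random bitstring per (feature, value) pair, as in standard Zobrist hashing.
zobrist_table = [[random.getrandbits(64) for _ in range(NUM_VALUES)]
                 for _ in range(NUM_FEATURES)]

def zobrist_hash(state):
    """Standard Zobrist hash: XOR of one table entry per raw feature value."""
    h = 0
    for i, v in enumerate(state):
        h ^= zobrist_table[i][v]
    return h

def project(value):
    """Hypothetical feature projection: map raw values to coarse buckets."""
    return value // (NUM_VALUES // NUM_ABSTRACT_VALUES)

abstract_table = [[random.getrandbits(64) for _ in range(NUM_ABSTRACT_VALUES)]
                  for _ in range(NUM_FEATURES)]

def abstract_zobrist_hash(state):
    """Hash abstract features instead of raw ones: states whose features fall
    into the same buckets hash identically, so a child often stays on the same
    process as its parent, cutting communication overhead."""
    h = 0
    for i, v in enumerate(state):
        h ^= abstract_table[i][project(v)]
    return h

def owner(state, num_processes=8):
    """Assign a state to the process that owns its abstract hash value."""
    return abstract_zobrist_hash(state) % num_processes
```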

2022
Vol 73
Author(s):
Maximilian Fickert
Jörg Hoffmann

In classical AI planning, heuristic functions typically base their estimates on a relaxation of the input task. Such relaxations can be more or less precise, and many heuristic functions have a refinement procedure that can be iteratively applied until the desired degree of precision is reached. Traditionally, such refinement is performed offline to instantiate the heuristic for the search. A natural idea, however, is to perform the refinement online instead, in situations where the heuristic is not sufficiently accurate. We introduce several online-refinement search algorithms, based on hill-climbing and greedy best-first search. Our hill-climbing algorithms perform a bounded lookahead, proceeding to a state with lower heuristic value than the root state of the lookahead if such a state exists, or refining the heuristic otherwise to remove that local minimum from the search space surface. These algorithms are complete if the refinement procedure satisfies a suitable convergence property. We transfer the idea of bounded lookaheads to greedy best-first search with a lightweight lookahead after each expansion, which serves both to boost search progress and to detect when the heuristic is inaccurate, signaling an opportunity for online refinement. We evaluate our algorithms with the partial delete relaxation heuristic hCFF, which can be refined by treating additional conjunctions of facts as atomic, and whose refinement operation satisfies the convergence property required for completeness. On both the IPC domains and the recently published Autoscale benchmarks, our online-refinement search algorithms significantly beat state-of-the-art satisficing planners, and are competitive even with complex portfolios.
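A sketch of the bounded-lookahead hill-climbing loop under assumed interfaces (`successors`, `heuristic`, `refine`, and `is_goal` are illustrative, not the paper's code): expand up to `bound` states around the current state, move to a strictly better state if one is found, and otherwise invoke refinement to dissolve the local minimum.

```python
import itertools
from heapq import heappush, heappop

def lookahead_hill_climbing(start, successors, heuristic, refine, is_goal, bound=100):
    # Sketch of bounded-lookahead hill-climbing with online refinement.
    # `refine(states)` is assumed to update the heuristic so that the local
    # minimum spanned by `states` disappears; path reconstruction is omitted.
    current = start
    while not is_goal(current):
        h_root = heuristic(current)
        tie = itertools.count()                     # tie-breaker for unorderable states
        frontier = [(h_root, next(tie), current)]
        seen, expanded, improved = {current}, [], None
        while frontier and len(expanded) < bound:   # bounded lookahead
            h, _, s = heappop(frontier)
            if h < h_root:                          # strictly better state found
                improved = s
                break
            expanded.append(s)
            for t in successors(s):
                if t not in seen:
                    seen.add(t)
                    heappush(frontier, (heuristic(t), next(tie), t))
        if improved is None:
            refine(expanded)      # no exit found: refine to remove the local minimum
        else:
            current = improved    # commit to the better state, as in hill-climbing
    return current
```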


Author(s):  
Manuel Heusner
Thomas Keller
Malte Helmert

A classical result in optimal search shows that A* with an admissible and consistent heuristic expands every state whose f-value is below the optimal solution cost and no state whose f-value is above the optimal solution cost. For satisficing search algorithms, a similarly clear understanding is currently lacking. We examine the search behavior of greedy best-first search (GBFS) in order to make progress towards such an understanding. We introduce the concept of high-water mark benches, which separate the search space into areas that are searched by a GBFS algorithm in sequence. High-water mark benches allow us to exactly determine the set of states that are expanded by at least one GBFS tie-breaking strategy and give us a clearer understanding of search progress.
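The high-water mark of a state can be phrased as a simple recursion; the sketch below computes it on a toy acyclic state space (the encoding and the heuristic values are illustrative).

```python
from functools import lru_cache

# Toy acyclic state space: successor lists and heuristic values (illustrative).
successors = {'A': ['B', 'C'], 'B': ['G'], 'C': ['G'], 'G': []}
h = {'A': 3, 'B': 5, 'C': 4, 'G': 0}

@lru_cache(maxsize=None)
def high_water_mark(s):
    # hwm(s) = min over goal paths from s of the max h-value on the path.
    # GBFS cannot make progress past s without expanding some state whose
    # h-value reaches this 'crest'; benches are the areas between crests.
    if not successors[s]:                 # goal state
        return h[s]
    return min(max(h[s], high_water_mark(t)) for t in successors[s])

print(high_water_mark('A'))   # 4: the path via C avoids B's higher crest of 5
```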


2017
Vol 58
pp. 67-121
Author(s):
Masataro Asai
Alex Fukunaga

Best-first search algorithms such as A* need to apply tie-breaking strategies in order to decide which node to expand when multiple search nodes have the same evaluation score. We investigate and improve tie-breaking strategies for cost-optimal search using A*. We first experimentally analyze the performance of common strategies that break ties according to the heuristic value of the nodes, and find that the tie-breaking strategy has a significant impact on search performance when 0-cost operators induce large plateau regions in the search space. Based on this, we develop two new classes of tie-breaking strategies. We first propose a depth diversification strategy that breaks ties according to the distance from the entrance to the plateau, and show that this new strategy significantly outperforms standard strategies on domains with 0-cost actions. Next, we propose a new framework that interprets A* search as a series of satisficing searches within plateaus consisting of nodes with the same f-cost. Based on this framework, we investigate a second new class of tie-breaking strategies: multi-heuristic tie-breaking, which embeds inadmissible, distance-to-go variations of various heuristics within an admissible search. This is shown to further improve performance in combination with the depth metric.
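A sketch of how a tie-breaking strategy plugs into A* as the heap key: ties on f are broken by smaller h and then by greater depth, a simplified stand-in for the depth-diversification idea (the paper's depth metric is measured from the plateau entrance; all interfaces here are illustrative).

```python
import itertools
from heapq import heappush, heappop

def astar(start, successors, heuristic, is_goal):
    # A* whose heap key encodes the tie-breaking strategy.
    # key = (f, h, -depth): prefer deeper nodes inside an f/h plateau.
    # `successors(s)` yields (child, edge_cost) pairs.
    g = {start: 0}
    h0 = heuristic(start)
    tie = itertools.count()
    open_list = [((h0, h0, 0), next(tie), start, 0)]
    while open_list:
        (f, _, neg_depth), _, s, gs = heappop(open_list)
        if gs > g.get(s, float('inf')):
            continue                                 # stale queue entry
        if is_goal(s):
            return gs                                # optimal cost
        for t, c in successors(s):
            gt = gs + c
            if gt < g.get(t, float('inf')):
                g[t] = gt
                ht = heuristic(t)
                heappush(open_list,
                         ((gt + ht, ht, neg_depth - 1), next(tie), t, gt))
    return None
```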


Author(s):  
Tad Hogg

Phase transitions have long been studied empirically in various combinatorial searches and theoretically in simplified models [91, 264, 301, 490]. The analogy with statistical physics [397], explored throughout this volume, shows how the many local choices made during search relate to global properties such as the resulting search cost. These studies have led to a better understanding of typical search behaviors [514] and improved search methods [195, 247, 261, 432, 433]. Among the current research questions in this field are the range of algorithms exhibiting the transition behavior and the algorithm-independent problem properties associated with the difficult instances concentrated near the transition. Towards this end, the present chapter examines quantum computer [123, 126, 158, 486] algorithms for nondeterministic polynomial (NP) combinatorial search problems [191]. As with many conventional methods, they exhibit the easy-hard-easy pattern of computational cost as the degree of constraint in the problems varies. We describe how properties of the search space affect the algorithms and identify an additional structural property, the energy gap, motivated by one quantum algorithm but applicable to a variety of techniques, both quantum and classical. Thus, the study of quantum search algorithms not only extends the range of algorithms exhibiting phase transitions, but also helps identify underlying structural properties. Specifically, the next two sections describe a class of hard search problems and the form of quantum search algorithms proposed to date. The remainder of the chapter presents algorithm behaviors, relevant problem structure, and an approximate asymptotic analysis of their cost scaling. The final section discusses various open issues in designing and evaluating quantum algorithms, and relating their behavior to problem structure.

The k-satisfiability (k-SAT) problem, as discussed earlier in this volume, consists of n Boolean variables and m clauses. A clause is a logical OR of k variables, each of which may be negated. A solution is an assignment, that is, a value for each variable, TRUE or FALSE, satisfying all the clauses. An assignment is said to conflict with any clause it does not satisfy.
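These definitions translate directly into code. The sketch below counts the clauses an assignment conflicts with, using a DIMACS-style literal encoding that is a representational choice, not something from the chapter.

```python
def conflicts(assignment, clauses):
    """Count the clauses conflicting with an assignment. A clause is a tuple
    of nonzero ints: literal i means variable |i| must be True if i > 0 and
    False if i < 0 (DIMACS-style encoding)."""
    def satisfied(clause):
        return any((lit > 0) == assignment[abs(lit)] for lit in clause)
    return sum(not satisfied(c) for c in clauses)

# 3-SAT instance with n = 3 variables and m = 2 clauses.
clauses = [(1, -2, 3), (-1, 2, 3)]
assignment = {1: True, 2: True, 3: False}
print(conflicts(assignment, clauses))   # 0 -> this assignment is a solution
```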


2011
Vol 2 (3)
pp. 27-44
Author(s):
Nashat Mansour
Ghia Sleiman-Haidar

University exam timetabling refers to scheduling exams into predefined days, time periods, and rooms, subject to a set of constraints. Exam timetabling is a computationally intractable optimization problem, which requires heuristic techniques for producing adequate solutions within reasonable execution time. For large numbers of exams and students, sequential algorithms are likely to be time-consuming. This paper presents parallel scatter search meta-heuristic algorithms for producing good sub-optimal exam timetables in a reasonable time. Scatter search is a population-based approach that generates solutions over a number of iterations and aims to combine diversification and search intensification. The authors propose parallel scatter search algorithms based on distributing the population of candidate solutions over a number of processors in a PC cluster environment. The main components of scatter search are computed in parallel, and efficient communication techniques are employed. Empirical results show that the proposed parallel scatter search algorithms yield good speed-up. They also show that the parallel algorithms improve solution quality because, in contrast with the sequential algorithm, they explore larger parts of the search space within reasonable time.
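A sketch of one parallel iteration under assumed placeholder operators (`improve` and `combine` are illustrative numeric stand-ins, not the paper's exam-timetabling operators): the improvement of candidate solutions is distributed over worker processes, after which a reference set is rebuilt and recombined.

```python
import random
from multiprocessing import Pool

def improve(solution):
    # Placeholder local improvement of one candidate (a real exam-timetabling
    # version would repair constraint violations and reduce penalty cost).
    return [x + random.uniform(-0.1, 0.1) for x in solution]

def combine(a, b):
    # Placeholder combination of two reference solutions (midpoint crossover).
    return [(x + y) / 2 for x, y in zip(a, b)]

def scatter_search_step(population, workers=4):
    # One iteration with the population distributed over worker processes,
    # mirroring the idea of computing the main components in parallel.
    with Pool(workers) as pool:
        improved = pool.map(improve, population)    # intensification, in parallel
    ref_set = sorted(improved, key=sum)[:4]          # toy reference set update
    offspring = [combine(a, b) for a in ref_set for b in ref_set if a is not b]
    return ref_set + offspring                       # diversification

if __name__ == '__main__':
    population = [[random.random() for _ in range(5)] for _ in range(16)]
    for _ in range(3):
        population = scatter_search_step(population)
    print(min(sum(s) for s in population))
```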


2014
Vol 24 (4)
pp. 901-916
Author(s):
Zoltán Ádám Mann
Tamás Szép

Backtrack-style exhaustive search algorithms for NP-hard problems tend to have large variance in their runtime. This is because “fortunate” branching decisions can lead to finding a solution quickly, whereas “unfortunate” decisions in another run can lead the algorithm to a region of the search space with no solutions. In the literature, frequent restarting has been suggested as a means to overcome this problem. In this paper, we propose a more sophisticated approach: a best-first search heuristic to quickly move between parts of the search space, always concentrating on the most promising region. We describe how this idea can be efficiently incorporated into a backtrack search algorithm, without sacrificing optimality. Moreover, we demonstrate empirically that, for hard solvable problem instances, the new approach provides significantly higher speed-up than frequent restarting.
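A sketch of the core idea under illustrative interfaces: rather than committing depth-first to one branch, keep every open branching point in a priority queue ordered by a "promise" estimate and always extend the most promising partial solution. (The paper's integration into backtracking, which preserves optimality, involves more machinery than shown here.)

```python
import itertools
from heapq import heappush, heappop

def best_first_backtrack(root, branch, promise, is_solution):
    # Keep all open branching points in a priority queue ordered by a
    # 'promise' estimate, always extending the most promising partial
    # solution and jumping freely between regions of the search tree.
    # `branch(node)` yields the partial solutions one branching step deeper.
    tie = itertools.count()
    frontier = [(-promise(root), next(tie), root)]   # max-heap via negation
    while frontier:
        _, _, node = heappop(frontier)
        if is_solution(node):
            return node
        for child in branch(node):
            heappush(frontier, (-promise(child), next(tie), child))
    return None            # search space exhausted: provably no solution
```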


2015
Vol 23 (1)
pp. 101-129
Author(s):
Antonios Liapis
Georgios N. Yannakakis
Julian Togelius

Novelty search is a recent algorithm geared toward exploring search spaces without regard to objectives. When the presence of constraints divides a search space into feasible space and infeasible space, interesting implications arise regarding how novelty search explores such spaces. This paper elaborates on the problem of constrained novelty search and proposes two novelty search algorithms which search within both the feasible and the infeasible space. Inspired by the FI-2pop genetic algorithm, both algorithms maintain and evolve two separate populations, one with feasible and one with infeasible individuals, while each population can use its own selection method. The proposed algorithms are applied to the problem of generating diverse but playable game levels, which is representative of the larger problem of procedural game content generation. Results show that the two-population constrained novelty search methods can create, under certain conditions, larger and more diverse sets of feasible game levels than current methods of novelty search, whether constrained or unconstrained. However, the best algorithm is contingent on the particularities of the search space and the genetic operators used. Additionally, the proposed offspring boosting enhancement is shown to improve performance in all cases of two-population novelty search.
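A sketch of one generation of a two-population scheme in the spirit of FI-2pop, with illustrative genotypes and selection rules: the paper measures novelty in level space and studies several selection methods, whereas here feasible parents are chosen by novelty and infeasible parents at random, as a placeholder.

```python
import random

def novelty(x, others, k=5):
    # Novelty = mean distance to the k nearest neighbors (squared Euclidean
    # distance on numeric genotypes; the paper measures distance between levels).
    ds = sorted(sum((a - b) ** 2 for a, b in zip(x, o)) for o in others if o is not x)
    return sum(ds[:k]) / max(1, min(k, len(ds)))

def generation(feasible, infeasible, mutate, is_feasible):
    # One generation: each population has its own selection (novelty for
    # feasible individuals; random selection as a placeholder for the
    # infeasible side), and offspring migrate to the population that
    # matches their feasibility.
    pool = feasible + infeasible
    parents = sorted(feasible, key=lambda x: -novelty(x, pool))[:max(1, len(feasible) // 2)]
    if infeasible:
        parents += random.sample(infeasible, max(1, len(infeasible) // 2))
    next_feasible, next_infeasible = [], []
    for p in parents:
        for _ in range(2):                          # two offspring per parent
            child = mutate(p)
            (next_feasible if is_feasible(child) else next_infeasible).append(child)
    return next_feasible, next_infeasible
```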


2014
Vol 2014
pp. 1-11
Author(s):  
Noureddine Bouhmala

The simplicity of the maximum satisfiability problem (MAX-SAT), combined with its applicability in many areas of artificial intelligence and computing science, has made it one of the fundamental optimization problems. This NP-complete problem refers to the task of finding a variable assignment that satisfies the maximum number of clauses (or the maximum sum of weights of satisfied clauses) in a Boolean formula. The Walksat algorithm is considered the main skeleton underlying almost all local search algorithms for MAX-SAT. Most local search algorithms, including Walksat, rely on the 1-flip neighborhood structure. This paper introduces a variable neighborhood Walksat-based algorithm whose neighborhood structure can easily be combined with any local search algorithm. Its effectiveness is compared with existing algorithms that use the 1-flip neighborhood structure, and with solvers such as CCLS and Optimax from the eighth MAX-SAT evaluation.
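A simplified reading of the variable neighborhood idea, not the paper's exact algorithm: run Walksat-style local search, but widen the move from a 1-flip to a k-flip when progress stalls, shrinking back to a 1-flip after an improvement.

```python
import random

def num_unsat(clauses, assign):
    # Clauses are tuples of nonzero ints in DIMACS-style literal encoding.
    return sum(not any((l > 0) == assign[abs(l)] for l in c) for c in clauses)

def vn_walksat(clauses, n_vars, max_flips=100000, noise=0.5, max_k=3):
    # Walksat-style search with a variable neighborhood: noise moves flip up
    # to k variables of a random unsatisfied clause, and k widens when the
    # search stagnates, resetting to 1 after an improvement.
    assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
    best, k = num_unsat(clauses, assign), 1
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any((l > 0) == assign[abs(l)] for l in c)]
        if not unsat:
            return assign                            # satisfying assignment found
        clause = random.choice(unsat)
        variables = [abs(l) for l in clause]
        if random.random() < noise:                  # random k-flip walk move
            flips = random.sample(variables, min(k, len(variables)))
        else:                                        # greedy move: best single flip
            def flipped_cost(v):
                assign[v] = not assign[v]
                cost = num_unsat(clauses, assign)
                assign[v] = not assign[v]
                return cost
            flips = [min(variables, key=flipped_cost)]
        for v in flips:
            assign[v] = not assign[v]
        cost = num_unsat(clauses, assign)
        k = 1 if cost < best else min(k + 1, max_k)  # widen neighborhood on stagnation
        best = min(best, cost)
    return None
```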


Author(s):  
Stasinos Konstantopoulos
Rui Camacho
Nuno A. Fonseca
Vítor Santos Costa

This chapter introduces Inductive Logic Programming (ILP) from the perspective of search algorithms in computer science. It first briefly considers the Version Spaces approach to induction, and then focuses on Inductive Logic Programming: from its formal definition and main techniques and strategies, to the priors used to restrict the search space, and on to optimized sequential, parallel, and stochastic algorithms. The authors hope that this presentation of the theory and applications of Inductive Logic Programming will help the reader understand the theoretical underpinnings of ILP, and also provide a helpful overview of the state of the art in the domain.


2005
Vol 24
pp. 263-303
Author(s):
V. Bayer-Zubek
T. G. Dietterich

This paper studies the problem of learning diagnostic policies from training examples. A diagnostic policy is a complete description of the decision-making actions of a diagnostician (i.e., tests followed by a diagnostic decision) for all possible combinations of test results. An optimal diagnostic policy is one that minimizes the expected total cost, which is the sum of measurement costs and misdiagnosis costs. In most diagnostic settings, there is a tradeoff between these two kinds of costs. This paper formalizes diagnostic decision making as a Markov Decision Process (MDP). The paper introduces a new family of systematic search algorithms based on the AO* algorithm to solve this MDP. To make AO* efficient, the paper describes an admissible heuristic that enables AO* to prune large parts of the search space. The paper also introduces several greedy algorithms, including some improvements over previously published methods. The paper then addresses the question of learning diagnostic policies from examples. When the probabilities of diseases and test results are computed from training data, there is a great danger of overfitting. To reduce overfitting, regularizers are integrated into the search algorithms. Finally, the paper compares the proposed methods on five benchmark diagnostic data sets. The studies show that in most cases the systematic search methods produce better diagnostic policies than the greedy methods. In addition, the studies show that for training sets of realistic size, the systematic search algorithms are practical on today's desktop computers.
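To make the objective concrete, here is a sketch of the expected total cost of a diagnostic policy represented as a tree; the encoding is illustrative, not the paper's data structure, and the numbers are made up.

```python
def expected_total_cost(node):
    # Expected total cost of a diagnostic policy given as a tree: a test node
    # is ('test', measurement_cost, [(p_outcome, subtree), ...]); a leaf is
    # ('diagnose', expected_misdiagnosis_cost).
    if node[0] == 'diagnose':
        return node[1]
    _, cost, branches = node
    return cost + sum(p * expected_total_cost(sub) for p, sub in branches)

# One test costing 1.0; a positive result (p = 0.3) makes diagnosis reliable.
policy = ('test', 1.0, [(0.3, ('diagnose', 0.2)),
                        (0.7, ('diagnose', 1.5))])
print(expected_total_cost(policy))   # 1.0 + 0.3*0.2 + 0.7*1.5 = 2.11
```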

