Parallel best-first search algorithms for planning problems on multi-core processors

Author(s):  
Didier El Baz ◽  
Bilal Fakih ◽  
Romeo Sanchez Nigenda ◽  
Vincent Boyer


2016 ◽
Vol 57 ◽  
pp. 273-306 ◽  
Author(s):  
Christopher Wilt ◽  
Wheeler Ruml

Suboptimal heuristic search algorithms such as weighted A* and greedy best-first search are widely used to solve problems for which guaranteed optimal solutions are too expensive to obtain. These algorithms crucially rely on a heuristic function to guide their search. However, most research on building heuristics addresses optimal solving. In this paper, we illustrate how established wisdom for constructing heuristics for optimal search can fail when considering suboptimal search. We consider the behavior of greedy best-first search in detail and we test several hypotheses for predicting when a heuristic will be effective for it. Our results suggest that a predictive characteristic is a heuristic's goal distance rank correlation (GDRC), a robust measure of whether it orders nodes according to distance to a goal. We demonstrate that GDRC can be used to automatically construct abstraction-based heuristics for greedy best-first search that are more effective than those built by methods oriented toward optimal search. These results reinforce the point that suboptimal search deserves sustained attention and specialized methods of its own.
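To make the GDRC measure concrete, here is a minimal sketch that computes a Kendall-style rank correlation between heuristic values and true goal distances over a sample of states. The sampling scheme and the tau-a variant used here are illustrative assumptions, not the paper's exact construction.

```python
from itertools import combinations

def gdrc(samples):
    """samples: list of (h_value, true_goal_distance) pairs.
    Returns a tau-a rank correlation in [-1, 1]; values near 1 mean
    the heuristic orders states by their true distance to a goal."""
    concordant = discordant = 0
    for (h1, d1), (h2, d2) in combinations(samples, 2):
        s = (h1 - h2) * (d1 - d2)
        if s > 0:
            concordant += 1    # pair ranked consistently with distance
        elif s < 0:
            discordant += 1    # pair ranked against distance
    n = len(samples)
    return (concordant - discordant) / (n * (n - 1) / 2)

print(gdrc([(1, 1), (2, 2), (3, 3), (4, 4)]))  # 1.0: perfect ordering
print(gdrc([(3, 1), (1, 2), (2, 3), (4, 4)]))  # ~0.33: weak ordering
```

A heuristic scoring near 1.0 ranks states by distance-to-go, which is what greedy best-first search actually needs, even if its magnitudes are far from the true costs an optimal search would want.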


2022 ◽  
Vol 73 ◽  
Author(s):  
Maximilian Fickert ◽  
Jörg Hoffmann

In classical AI planning, heuristic functions typically base their estimates on a relaxation of the input task. Such relaxations can be more or less precise, and many heuristic functions have a refinement procedure that can be iteratively applied until the desired degree of precision is reached. Traditionally, such refinement is performed offline to instantiate the heuristic for the search. However, a natural idea is to perform such refinement online instead, in situations where the heuristic is not sufficiently accurate. We introduce several online-refinement search algorithms, based on hill-climbing and greedy best-first search. Our hill-climbing algorithms perform a bounded lookahead, proceeding to a state with lower heuristic value than the root state of the lookahead if such a state exists, or refining the heuristic otherwise to remove such a local minimum from the search space surface. These algorithms are complete if the refinement procedure satisfies a suitable convergence property. We transfer the idea of bounded lookaheads to greedy best-first search with a lightweight lookahead after each expansion, serving both as a method to boost search progress and to detect when the heuristic is inaccurate, identifying an opportunity for online refinement. We evaluate our algorithms with the partial delete relaxation heuristic hCFF, which can be refined by treating additional conjunctions of facts as atomic, and whose refinement operation satisfies the convergence property required for completeness. On both the IPC domains and the recently published Autoscale benchmarks, our online-refinement search algorithms significantly beat state-of-the-art satisficing planners, and are competitive even with complex portfolios.
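The hill-climbing variant described above can be sketched as follows. `succ`, `h`, and `refine` are assumed interfaces, with `refine` standing in for an online refinement step such as adding conjunctions to hCFF; as the abstract notes, completeness hinges on that refinement converging.

```python
def lookahead_hill_climbing(root, succ, h, refine, is_goal, bound=5):
    """Bounded-lookahead hill climbing with online heuristic refinement.
    States must be hashable; succ(s) yields successor states."""
    state = root
    while not is_goal(state):
        # Breadth-bounded lookahead for a state strictly better than `state`.
        frontier, seen, depth = [state], {state}, 0
        improved = None
        while frontier and depth < bound and improved is None:
            next_frontier = []
            for s in frontier:
                for t in succ(s):
                    if t in seen:
                        continue
                    seen.add(t)
                    if h(t) < h(state):
                        improved = t           # escape from the local minimum
                        break
                    next_frontier.append(t)
                if improved is not None:
                    break
            frontier, depth = next_frontier, depth + 1
        if improved is not None:
            state = improved                   # commit to the better state
        else:
            refine(state, seen)                # no escape within the bound:
                                               # refine h to remove the minimum
    return state
```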


Information ◽  
2020 ◽  
Vol 11 (5) ◽  
pp. 264
Author(s):  
Anggina Primanita ◽  
Mohd Nor Akmal Khalid ◽  
Hiroyuki Iida

Variants of best-first search algorithms and their expansions have continuously been introduced to solve challenging problems. The probability-based proof number search (PPNS) is a best-first search algorithm that can be used to solve positions in AND/OR game tree structures. It combines information from explored (based on winning status) and unexplored (through Monte Carlo simulation) nodes of a game tree using an indicator called the probability-based proof number (PPN). In this study, PPNS is employed to solve randomly generated positions in Connect Four and Othello, and the results are compared with two well-known best-first search algorithms (proof number search (PNS) and Monte Carlo proof number search). Adopting a simple improvement parameter in PPNS reduces the number of nodes that need to be explored by up to 57%. Moreover, further observation showed the varying importance of information from explored and unexplored nodes: PPNS relies critically on the combination of such information in the earlier stages of the Othello game. Discussion and insights from these findings are provided, and potential future work is briefly described.
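A minimal sketch of how a probability-based proof number might combine the two information sources the abstract mentions: proven and disproven leaves contribute certainty, unexplored leaves contribute a Monte Carlo win-rate estimate, and values propagate through the AND/OR tree. The propagation rules below (max at OR nodes, product at AND nodes) and the node interface are illustrative assumptions, not the paper's exact definitions.

```python
import math

def ppn(node, simulate, n_sims=20):
    """node: AND/OR tree node with .proven, .disproven, .children, .is_or.
    simulate(node) -> 1 on a simulated win, 0 otherwise."""
    if node.proven:
        return 1.0                                # explored: certain win
    if node.disproven:
        return 0.0                                # explored: certain loss
    if not node.children:                         # unexplored frontier node
        wins = sum(simulate(node) for _ in range(n_sims))
        return wins / n_sims                      # Monte Carlo estimate
    child_values = [ppn(c, simulate, n_sims) for c in node.children]
    if node.is_or:
        return max(child_values)                  # OR: one good child suffices
    return math.prod(child_values)                # AND: all children needed
```

A best-first step would then descend into the child with the highest PPN at each OR node and expand its most promising frontier node.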


2000 ◽  
Vol 15 (1) ◽  
pp. 101-117 ◽  
Author(s):  
HENRY KAUTZ ◽  
JOACHIM P. WALSER

This paper describes ILP-PLAN, a framework for solving AI planning problems represented as integer linear programs. ILP-PLAN extends the planning as satisfiability framework to handle plans with resources, action costs, and complex objective functions. We show that challenging planning problems can be effectively solved using both traditional branch-and-bound integer programming solvers and efficient new integer local search algorithms. ILP-PLAN can find better quality solutions for a set of hard benchmark logistics planning problems than had been found by any earlier system.
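The "planning as integer programming" idea can be sketched at toy scale: binary variables select actions, linear constraints encode goal achievement and a resource limit, and the objective minimizes total action cost. This uses the PuLP modeling library; the actions, costs, and resource figures are invented for illustration, and a real ILP-PLAN encoding uses per-time-step variables and frame constraints.

```python
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, value

actions = {"drive": (4, 3), "fly": (10, 1)}   # name: (cost, fuel used)
x = {a: LpVariable(a, cat=LpBinary) for a in actions}

prob = LpProblem("ilp_plan_toy", LpMinimize)
prob += lpSum(actions[a][0] * x[a] for a in actions)        # total action cost
prob += lpSum(x[a] for a in actions) >= 1, "reach_goal"     # goal achieved
prob += lpSum(actions[a][1] * x[a] for a in actions) <= 3, "fuel_limit"

prob.solve()
plan = [a for a in actions if value(x[a]) > 0.5]
print(plan)  # ['drive']: the cheapest action within the fuel budget
```

Framing the problem this way is what lets complex objective functions and resource constraints ride along for free: they are just more linear terms and rows in the program.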


Author(s):  
I.Parvin Begum ◽  
I.Shahina Begam

Nowadays, many artificial intelligence search algorithms play an important role in solving the shortest-path-finding problem. This paper presents a detailed study of heuristic search and blind search techniques. It focuses in particular on blind search strategies such as Breadth-First Search, Depth-First Search, and Uniform-Cost Search, and on informed search strategies such as A* and Best-First Search. The paper covers the effectiveness of each search procedure, its merits and demerits, and where these algorithms are applicable; finally, a comparison of the search techniques based on complexity, optimality, and completeness is presented in tabular form.
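A compact way to see how these strategies relate: each is best-first search with a different priority function (breadth-first is the constant-priority case with FIFO tie-breaking, depth-first the LIFO analogue). A minimal sketch with an invented toy graph and heuristic:

```python
import heapq
from itertools import count

def best_first(start, goal, succ, priority):
    """Generic best-first search; `priority(g, state)` orders the frontier."""
    tie = count()                               # FIFO tie-breaking
    frontier = [(priority(0, start), next(tie), 0, start, [start])]
    seen = set()
    while frontier:
        _, _, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path
        if state in seen:
            continue
        seen.add(state)
        for nxt, cost in succ(state):
            g2 = g + cost
            heapq.heappush(frontier,
                           (priority(g2, nxt), next(tie), g2, nxt, path + [nxt]))
    return None

graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)], "G": []}
h = {"S": 4, "A": 5, "B": 1, "G": 0}

print(best_first("S", "G", graph.__getitem__, lambda g, s: g))         # Uniform-Cost
print(best_first("S", "G", graph.__getitem__, lambda g, s: h[s]))      # Greedy Best-First
print(best_first("S", "G", graph.__getitem__, lambda g, s: g + h[s]))  # A*
```

Uniform-Cost and A* are optimal here by construction; Greedy Best-First happens to find the same path on this graph but carries no such guarantee in general.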


2018 ◽  
Vol 62 ◽  
pp. 233-268 ◽  
Author(s):  
Matthew Hatem ◽  
Ethan Burns ◽  
Wheeler Ruml

Classic best-first heuristic search algorithms, like A*, record every unique state they encounter in RAM, making them infeasible for solving large problems. In this paper, we demonstrate how best-first search can be scaled to solve much larger problems by exploiting disk storage and parallel processing and, in some cases, slightly relaxing the strict best-first node expansion order. Some previous disk-based search algorithms abandon best-first search order in an attempt to increase efficiency. We present two case studies showing that A*, when augmented with Delayed Duplicate Detection, can actually be more efficient than these non-best-first search orders. First, we present a straightforward external variant of A*, called PEDAL, that slightly relaxes best-first order in order to be I/O efficient in both theory and practice, even on problems featuring real-valued node costs. Because it is easy to parallelize, PEDAL can be faster than in-memory IDA* even on domains with few duplicate states, such as the sliding-tile puzzle. Second, we present a variant of PEDAL, called PE2A*, that uses partial expansion to handle problems that have large branching factors. When tested on the problem of Multiple Sequence Alignment, PE2A* is the first algorithm capable of solving the entire Reference Set 1 of the standard BAliBASE benchmark using a biologically accurate cost function. This work shows that classic best-first algorithms like A* can be applied to large real-world problems. We also provide a detailed implementation guide with source code both for generic parallel disk-based best-first search and for Multiple Sequence Alignment with a biologically accurate cost function. Given its effectiveness as a general-purpose problem-solving method, we hope that this makes parallel and disk-based search accessible to a wider audience.
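The delayed duplicate detection idea at the heart of this line of work can be sketched as follows: generated nodes stream to disk files bucketed by state hash, and duplicates are reconciled only when a bucket is read back, so RAM never has to hold the full set of visited states at once. The file layout and record format below are invented for illustration, and Python's built-in `hash` stands in for the stable hash a real implementation would need.

```python
import os
import pickle

class DiskBuckets:
    def __init__(self, directory, n_buckets=64):
        self.dir, self.n = directory, n_buckets
        os.makedirs(directory, exist_ok=True)

    def append(self, state, g):
        """Cheap sequential write; no duplicate check at generation time."""
        path = os.path.join(self.dir, f"bucket_{hash(state) % self.n}.rec")
        with open(path, "ab") as f:
            pickle.dump((state, g), f)

    def read_deduplicated(self, bucket_id):
        """Load one bucket and keep the lowest g per state: duplicate
        detection is delayed until this point."""
        path = os.path.join(self.dir, f"bucket_{bucket_id}.rec")
        best = {}
        if os.path.exists(path):
            with open(path, "rb") as f:
                try:
                    while True:
                        state, g = pickle.load(f)
                        if state not in best or g < best[state]:
                            best[state] = g
                except EOFError:
                    pass
        return best
```

Because appends are sequential and buckets are processed independently, this pattern is both I/O friendly and easy to parallelize, which is what lets PEDAL scale past RAM while staying close to best-first order.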


Author(s):  
Manuel Heusner ◽  
Thomas Keller ◽  
Malte Helmert

A classical result in optimal search shows that A* with an admissible and consistent heuristic expands every state whose f-value is below the optimal solution cost and no state whose f-value is above the optimal solution cost. For satisficing search algorithms, a similarly clear understanding is currently lacking. We examine the search behavior of greedy best-first search (GBFS) in order to make progress towards such an understanding. We introduce the concept of high-water mark benches, which separate the search space into areas that are searched by a GBFS algorithm in sequence. High-water mark benches allow us to exactly determine the set of states that are expanded by at least one GBFS tie-breaking strategy and give us a clearer understanding of search progress.
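The bench intuition can be made concrete by instrumenting GBFS to log each time the minimum h-value among expanded states drops; the stretches between such events correspond to the areas searched in sequence. This is only an illustration of the idea, not the paper's formal high-water mark construction.

```python
import heapq
from itertools import count

def gbfs_with_benches(start, succ, h, is_goal):
    tie = count()
    frontier, seen = [(h(start), next(tie), start)], {start}
    high_water = float("inf")
    while frontier:
        hval, _, state = heapq.heappop(frontier)
        if hval < high_water:                  # progress: a new bench reached
            high_water = hval
            print(f"new bench: min expanded h = {hval}")
        if is_goal(state):
            return state
        for nxt in succ(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (h(nxt), next(tie), nxt))
    return None
```

Between two log lines, every expansion has h at or above the current high-water mark; which of those states get expanded at all is exactly where tie-breaking strategies differ.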


Author(s):  
Gaojian Fan ◽  
Martin Müller ◽  
Robert Holte

In many planning applications, actions can have highly diverse costs. Recent studies focus on the effects of diverse action costs on search algorithms, but not on their effects on domain-independent heuristics. In this paper, we demonstrate there are negative impacts of action cost diversity on merge-and-shrink (M&S), a successful abstraction method for producing high-quality heuristics for planning problems. We propose a new cost partitioning method for M&S to address the negative effects of diverse action costs. We investigate non-unit cost IPC domains, especially those for which diverse action costs have severe negative effects on the quality of the M&S heuristic. Our experiments demonstrate that in these domains, an additive set of M&S heuristics using the new cost partitioning method produces much more informative and effective heuristics than creating a single M&S heuristic which directly encodes diverse costs.
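For intuition about the additivity requirement, here is a minimal sketch of uniform cost partitioning, the simplest scheme with the property that makes a set of heuristics admissible to sum: each action's cost is split across the component abstractions so that the per-component costs sum to the original cost. The paper's actual partitioning method for M&S is more sophisticated, and the action names and costs below are invented.

```python
def uniform_cost_partition(action_costs, n_components):
    """Return one cost function per component abstraction; summing the
    component costs of any action recovers its original cost."""
    return [
        {a: c / n_components for a, c in action_costs.items()}
        for _ in range(n_components)
    ]

costs = {"pickup": 1.0, "drive": 50.0, "drop": 1.0}
parts = uniform_cost_partition(costs, 2)
# Each component heuristic is computed under its own cost function, and
# h1(s) + h2(s) remains a lower bound on the true cost under `costs`.
print(parts[0]["drive"], parts[1]["drive"])  # 25.0 25.0
```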


2017 ◽  
Vol 60 ◽  
pp. 491-548 ◽  
Author(s):  
Yuu Jinnai ◽  
Alex Fukunaga

Parallel best-first search algorithms such as Hash Distributed A* (HDA*) distribute work among the processes using a global hash function. We analyze the search and communication overheads of state-of-the-art hash-based parallel best-first search algorithms, and show that although Zobrist hashing, the standard hash function used by HDA*, achieves good load balance for many domains, it incurs significant communication overhead since almost all generated nodes are transferred to a different processor than their parents. We propose Abstract Zobrist hashing, a new work distribution method for parallel search which, instead of computing a hash value based on the raw features of a state, uses a feature projection function to generate a set of abstract features, yielding higher locality and thus reduced communication overhead. We show that Abstract Zobrist hashing outperforms previous methods on search domains using hand-coded, domain-specific feature projection functions. We then propose GRAZHDA*, a graph-partitioning-based approach to automatically generating feature projection functions. GRAZHDA* seeks to approximate the partitioning of the actual search space graph by partitioning the domain transition graph, an abstraction of the state space graph. We show that GRAZHDA* outperforms previous methods on domain-independent planning.
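A minimal sketch of the difference between the two hash functions: plain Zobrist hashing XORs one random table entry per state feature, while Abstract Zobrist hashing first projects each feature through an abstraction, so that neighboring states tend to hash, and thus be assigned, to the same process. The feature encoding and projection below are invented for illustration.

```python
import random

random.seed(0)
TABLE = {}  # lazily filled random 64-bit value per feature

def zobrist(state, project=lambda var, val: (var, val)):
    """state: dict mapping variables to values; identity projection
    gives plain Zobrist hashing."""
    hval = 0
    for var, val in state.items():
        feature = project(var, val)
        if feature not in TABLE:
            TABLE[feature] = random.getrandbits(64)
        hval ^= TABLE[feature]
    return hval

# Abstract Zobrist: project tile positions onto coarse regions, so moving
# a tile within its region leaves the hash (and owning process) unchanged.
coarse = lambda var, val: (var, val // 4)
s1 = {"tile1": 5, "tile2": 9}
s2 = {"tile1": 6, "tile2": 9}                       # tile1 moved within its region
print(zobrist(s1) == zobrist(s2))                   # False: plain hashes differ
print(zobrist(s1, coarse) == zobrist(s2, coarse))   # True: same abstract hash
```

In HDA*, a node's owning process is derived from its hash (e.g. `zobrist(state) % n_processes`), so higher hash locality translates directly into fewer inter-process node transfers.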

