The FF Planning System: Fast Plan Generation Through Heuristic Search

2001, Vol. 14, pp. 253-302
Author(s): J. Hoffmann, B. Nebel

We describe and evaluate the algorithmic techniques that are used in the FF planning system. Like the HSP system, FF relies on forward state space search, using a heuristic that estimates goal distances by ignoring delete lists. Unlike HSP's heuristic, our method does not assume facts to be independent. We introduce a novel search strategy that combines hill-climbing with systematic search, and we show how other powerful heuristic information can be extracted and used to prune the search space. FF was the most successful automatic planner at the recent AIPS-2000 planning competition. We review the results of the competition, give data for other benchmark domains, and investigate the reasons for the runtime performance of FF compared to HSP.
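To make the delete-relaxation idea concrete, the following is a minimal, illustrative Python sketch of a heuristic that ignores delete effects and counts the actions of a greedily extracted relaxed plan. It is not the FF implementation; the STRIPS encoding, the backchaining extraction, and the toy logistics task are simplifications chosen for brevity.

```python
# Minimal sketch of a delete-relaxation heuristic in the spirit of FF.
# Illustration only, not the FF implementation; actions are modelled as
# (name, preconditions, add effects) and delete lists are ignored.
from collections import deque

def relaxed_plan_length(state, goal, actions):
    """Estimate goal distance by counting actions in a greedily extracted
    relaxed plan (delete effects ignored). Returns None if the goal is
    unreachable even in the relaxation."""
    reached = set(state)          # facts reachable in the relaxed problem
    supporter = {}                # fact -> (action, preconditions) that first achieved it
    progress = True
    while progress and not goal <= reached:
        progress = False
        for name, pre, add in actions:
            if pre <= reached:
                for fact in add - reached:
                    reached.add(fact)
                    supporter[fact] = (name, pre)
                    progress = True
    if not goal <= reached:
        return None
    # Backchain from the goal, counting each supporting action once.
    plan, open_facts = set(), deque(goal - set(state))
    while open_facts:
        fact = open_facts.popleft()
        name, pre = supporter[fact]
        if name not in plan:
            plan.add(name)
            open_facts.extend(pre - set(state))
    return len(plan)

# Toy example: move a package from A to B with a truck already at A.
actions = [
    ("load",   {"pkg-at-A", "truck-at-A"},     {"pkg-in-truck"}),
    ("drive",  {"truck-at-A"},                 {"truck-at-B"}),
    ("unload", {"pkg-in-truck", "truck-at-B"}, {"pkg-at-B"}),
]
print(relaxed_plan_length({"pkg-at-A", "truck-at-A"}, {"pkg-at-B"}, actions))  # -> 3
```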

2011, Vol. 135-136, pp. 573-577
Author(s): Rui Shi Liang, Min Huang

Planning as heuristic search has attracted increasing interest over the years, and intense research has focused on deriving fast and accurate heuristics for domain-independent planning. This paper reports on an extensive survey and analysis of research work related to heuristic derivation techniques for state space search. The survey reveals that heuristic techniques have been applied extensively in many efficient planners and yield impressive performance. We extend the analysis to suggest promising avenues for future research in heuristic derivation and heuristic search techniques.


Sadhana, 1996, Vol. 21 (3), pp. 263-290
Author(s): Pallab Dasgupta, P. P. Chakrabarti, S. C. Desarkar

2002, Vol. 11 (02), pp. 267-282
Author(s): Agapito Ledezma, Ricardo Aler, Daniel Borrajo

Nowadays, there is no doubt that machine learning techniques can be successfully applied to data mining tasks. Currently, the combination of several classifiers is one of the most active fields within inductive machine learning. Examples of such techniques are boosting, bagging and stacking. Of these three techniques, stacking is perhaps the least used. One of the main reasons is the difficulty of defining and parameterizing its components: selecting which combination of base classifiers to use, and which classifier to use as the meta-classifier. One could use simple search methods for that purpose (e.g. hill climbing), or more complex ones (e.g. genetic algorithms). But before search is attempted, it is important to know the properties of the search space itself. In this paper we exhaustively study the space of Stacking systems that can be built using four base learning systems: C4.5, IB1, Naive Bayes, and PART. We have also used Multiple Linear Response (MLR) as the meta-classifier. The properties of this state space obtained in this paper will be useful for designing new Stacking-based algorithms and tools.
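As a rough illustration of the kind of Stacking configurations explored, the sketch below enumerates combinations of base classifiers and scores each stacked ensemble by cross-validation. It assumes scikit-learn stand-ins rather than the Weka learners used in the paper: a decision tree in place of C4.5, 1-nearest-neighbour in place of IB1, Gaussian Naive Bayes, and logistic regression in place of MLR; PART has no direct analogue and is omitted.

```python
# Sketch of exhaustively scoring stacked ensembles over combinations of base
# classifiers. Learners are scikit-learn stand-ins (assumed analogues), not
# the Weka classifiers used in the original study.
from itertools import combinations
from sklearn.datasets import load_iris
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
base = {
    "c4.5-like": DecisionTreeClassifier(random_state=0),
    "ib1-like": KNeighborsClassifier(n_neighbors=1),
    "naive-bayes": GaussianNB(),
}

# Score every non-empty combination of base classifiers, mirroring the
# paper's exhaustive exploration of the Stacking space.
for k in range(1, len(base) + 1):
    for combo in combinations(base.items(), k):
        clf = StackingClassifier(estimators=list(combo),
                                 final_estimator=LogisticRegression(max_iter=1000))
        score = cross_val_score(clf, X, y, cv=5).mean()
        print([name for name, _ in combo], round(score, 3))
```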


2020, Vol. 68, pp. 691-752
Author(s): Enrico Scala, Patrik Haslum, Sylvie Thiébaux, Miquel Ramirez

This paper studies novel subgoaling relaxations for automated planning with propositional and numeric state variables. Subgoaling relaxations address one source of complexity of the planning problem: the requirement to satisfy conditions simultaneously. The core idea is to relax this requirement by recursively decomposing conditions into atomic subgoals that are considered in isolation. Such relaxations are typically used for pruning, or as the basis for computing admissible or inadmissible heuristic estimates to guide optimal or satisficing heuristic search planners. In the last decade or so, the subgoaling principle has underpinned the design of an abundance of relaxation-based heuristics whose formulations have greatly extended the reach of classical planning. This paper extends subgoaling relaxations to support numeric state variables and numeric conditions. We provide both theoretical and practical results, with the aim of reaching a good trade-off between accuracy and computation costs within a heuristic state-space search planner. Our experimental results validate the theoretical assumptions, and indicate that subgoaling substantially improves on the state of the art in optimal and satisficing numeric planning via forward state-space search.
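The subgoaling principle itself can be illustrated with a small sketch: each atomic condition, propositional or numeric, is estimated in isolation and the per-subgoal estimates are aggregated. The code below is a toy simplification, not the heuristics defined in the paper; it assumes numeric conditions of the form "variable >= bound" and actions with constant additive numeric effects.

```python
# Toy illustration of the subgoaling principle for mixed propositional/numeric
# conditions. Each atomic subgoal is estimated independently; this simplifies
# the relaxations developed in the paper.
import math

def atomic_estimate(subgoal, state, actions):
    kind = subgoal[0]
    if kind == "prop":                      # ("prop", fact)
        _, fact = subgoal
        return 0 if fact in state["facts"] else 1
    _, var, bound = subgoal                 # ("geq", var, bound)
    gap = bound - state["nums"][var]
    if gap <= 0:
        return 0
    # Best-case constant increase of this variable by any single action.
    best = max((eff for _, effs in actions for v, eff in effs
                if v == var and eff > 0), default=None)
    return math.inf if best is None else math.ceil(gap / best)

def subgoaling_h(goal, state, actions, aggregate=max):
    """Relax joint satisfaction: estimate each atomic condition in isolation,
    then aggregate (max for an admissible flavour, sum for a more informed,
    inadmissible one)."""
    return aggregate(atomic_estimate(g, state, actions) for g in goal)

state = {"facts": {"at-depot"}, "nums": {"fuel": 2.0}}
actions = [("refuel", [("fuel", 5.0)]), ("walk", [])]
goal = [("prop", "at-depot"), ("geq", "fuel", 12.0)]
print(subgoaling_h(goal, state, actions, aggregate=max))   # -> 2
print(subgoaling_h(goal, state, actions, aggregate=sum))   # -> 2
```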


1994, Vol. 2 (3), pp. 249-278
Author(s): Keith E. Mathias, L. Darrell Whitley

Delta coding is an iterative genetic search strategy that dynamically changes the representation of the search space in an attempt to exploit different problem representations. Delta coding sustains search by reinitializing the population at each iteration of search. This helps to avoid the asymptotic performance typically observed in genetic search as the population becomes more homogeneous. Here, the optimization ability of delta coding is empirically compared against CHC, ESGA, GENITOR, and random mutation hill-climbing (RMHC) on a suite of well-known test functions with and without Gray coding. Issues concerning the effects of Gray coding on these test functions are addressed.
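A highly simplified sketch of the delta-coding outer loop is given below: the best solution so far becomes a pivot, the search is re-initialised, and new candidates are encoded as deltas applied to the pivot. For brevity the inner optimiser is random mutation hill-climbing (one of the algorithms the paper compares against) rather than a full genetic algorithm, and the objective function and parameters are invented for illustration.

```python
# Simplified sketch of the delta-coding idea: re-initialise the search each
# iteration and encode candidates as deltas relative to the best-so-far pivot.
# Toy objective and parameters; not the experimental setup of the paper.
import random

def sphere(x):                       # toy minimisation objective
    return sum(v * v for v in x)

def rmhc(start, evaluate, steps=2000, step_size=0.5):
    """Random mutation hill-climbing: mutate one coordinate, keep improvements."""
    best, best_f = list(start), evaluate(start)
    for _ in range(steps):
        cand = list(best)
        i = random.randrange(len(cand))
        cand[i] += random.uniform(-step_size, step_size)
        f = evaluate(cand)
        if f < best_f:
            best, best_f = cand, f
    return best, best_f

def delta_coding(dim=5, iterations=5):
    pivot = [random.uniform(-5, 5) for _ in range(dim)]
    for it in range(iterations):
        # Search over deltas relative to the current pivot, restarting each time.
        delta, f = rmhc([0.0] * dim,
                        lambda d: sphere([p + v for p, v in zip(pivot, d)]))
        pivot = [p + v for p, v in zip(pivot, delta)]
        print(f"iteration {it}: f = {f:.4f}")
    return pivot

random.seed(0)
delta_coding()
```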


1998, Vol. 9, pp. 139-165
Author(s): D. J. Cook, R. C. Varnell

Many of the artificial intelligence techniques developed to date rely on heuristic search through large spaces. Unfortunately, the size of these spaces and the corresponding computational effort reduce the applicability of otherwise novel and effective algorithms. A number of parallel and distributed approaches to search have considerably improved the performance of the search process. Our goal is to develop an architecture that automatically selects parallel search strategies for optimal performance on a variety of search problems. In this paper we describe one such architecture realized in the Eureka system, which combines the benefits of many different approaches to parallel heuristic search. Through empirical and theoretical analyses we observe that features of the problem space directly affect the choice of optimal parallel search strategy. We then employ machine learning techniques to select the optimal parallel search strategy for a given problem space. When a new search task is input to the system, Eureka uses features describing the search space and the chosen architecture to automatically select the appropriate search strategy. Eureka has been tested on a MIMD parallel processor, a distributed network of workstations, and a single workstation using multithreading. Results generated from Fifteen Puzzle problems, robot arm motion problems, artificial search spaces, and planning problems indicate that Eureka outperforms any single strategy applied exclusively to all problem instances and is able to greatly reduce the search time for these applications.
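The strategy-selection idea can be sketched as a standard supervised-learning step: problem-space features are mapped to a strategy label by a classifier. The feature names, strategy labels, and training data below are invented for illustration and are not Eureka's actual features, models, or strategies.

```python
# Sketch of learning to choose a parallel search strategy from problem
# features. All feature names, labels, and data are hypothetical.
from sklearn.tree import DecisionTreeClassifier

# features: [branching_factor, heuristic_error, goal_depth_estimate]
X = [
    [2.1, 0.10, 12], [2.3, 0.15, 14],   # narrow trees, accurate heuristic
    [8.5, 0.40, 20], [9.0, 0.35, 25],   # wide trees, noisy heuristic
    [4.0, 0.60, 40], [4.5, 0.55, 35],   # deep trees, very noisy heuristic
]
y = ["distribute-by-subtree", "distribute-by-subtree",
     "distribute-by-window", "distribute-by-window",
     "parallel-window-search", "parallel-window-search"]

model = DecisionTreeClassifier(random_state=0).fit(X, y)
new_problem = [[7.8, 0.38, 22]]
print(model.predict(new_problem))     # pick a strategy for an unseen problem
```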


Author(s): Deyi Xue

Abstract: A number of intelligent scheduling models for product distribution have been introduced in this research. The databases, representing scheduling requirements and results, are described using an object-oriented modeling approach. The optimal schedule for product distribution considering relevant constraints is identified using two optimization approaches: state space search and a genetic algorithm. State space search is employed when the search space is not large. The genetic algorithm, on the other hand, provides a robust mechanism for identifying the globally optimal schedule when the search space is large. Different heuristic functions, considering traveling time and distance, have been developed to evaluate the schedules generated during optimization. In addition to single-vehicle scheduling, scheduling with a number of vehicles has also been studied.
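The "small search space" branch can be illustrated by exhaustively enumerating single-vehicle delivery orders and scoring each by total travelling distance. The locations and distances below are invented for illustration, and the genetic-algorithm branch used for large instances is omitted.

```python
# Sketch of exhaustive state-space search over single-vehicle delivery orders,
# scored by total travelling distance. Toy data only; the GA branch for large
# instances is not shown.
from itertools import permutations
from math import dist

depot = (0.0, 0.0)
customers = {"c1": (2.0, 1.0), "c2": (5.0, 4.0), "c3": (1.0, 6.0)}

def route_length(order):
    """Total travel distance: depot -> customers in the given order -> depot."""
    points = [depot] + [customers[c] for c in order] + [depot]
    return sum(dist(a, b) for a, b in zip(points, points[1:]))

best = min(permutations(customers), key=route_length)
print(best, round(route_length(best), 2))
```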


2019, Vol. 65, pp. 343-392
Author(s): Daniel Gnad, Jörg Hoffmann, Martin Wehrle

Analyzing reachability in large discrete transition systems is an important sub-problem in several areas of AI, and of CS in general. State space search is a basic method for conducting such an analysis. A wealth of techniques have been proposed to reduce the search space without affecting the existence of (optimal) solution paths. In particular, strong stubborn set (SSS) pruning is a prominent such method, analyzing action dependencies to prune commutative parts of the search space. We herein show how to apply this idea to star-topology decoupled state space search, a recent search reformulation method invented in the context of classical AI planning. Star-topology decoupled state space search, short decoupled search, addresses planning tasks where a single center component interacts with several leaf components. The search exploits a form of conditional independence arising in this setting: given a fixed path p of transitions by the center, the possible leaf moves compliant with p are independent across the leaves. Decoupled search thus searches over center paths only, maintaining the compliant paths for each leaf separately. This avoids the enumeration of combined states across leaves. Just like standard search, decoupled search is adversely affected by commutative parts of its search space. The adaptation of strong stubborn set pruning is challenging due to the more complex structure of the search space, and the resulting ways in which action dependencies may affect the search. We spell out how to address this challenge, designing optimality-preserving decoupled strong stubborn set (DSSS) pruning methods. We introduce a design for star topologies in full generality, as well as simpler design variants for the practically relevant fork and inverted fork special cases. We show that there are cases where DSSS pruning is exponentially more effective than both decoupled search and SSS pruning, exhibiting true synergy where the whole is more than the sum of its parts. Empirically, DSSS pruning reliably inherits the best of its components, and sometimes outperforms both.
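For context, the sketch below shows the standard (non-decoupled) strong stubborn set construction on a STRIPS-like task: the achievers of an unsatisfied goal fact form a disjunctive action landmark, and the set is closed under interference for applicable actions and necessary enabling sets for inapplicable ones. The decoupled variant developed in the paper is substantially more involved; the toy task here only demonstrates that commutative successors are pruned.

```python
# Sketch of standard strong stubborn set (SSS) pruning for forward search on
# STRIPS-like tasks. Simplified construction and a made-up toy task, not the
# decoupled (DSSS) method developed in the paper.
def strong_stubborn_set(state, goal, actions):
    """Return indices of a stubborn set for `state`.
    actions[i] = (name, preconditions, add effects, delete effects), all frozensets."""
    def achievers(fact):
        return {i for i, (_, _, add, _) in enumerate(actions) if fact in add}

    def interfere(i, j):
        _, p1, a1, d1 = actions[i]
        _, p2, a2, d2 = actions[j]
        return bool(d1 & (p2 | a2)) or bool(d2 & (p1 | a1))

    unsat_goal = next(iter(goal - state))
    stubborn = achievers(unsat_goal)          # a disjunctive action landmark
    changed = True
    while changed:
        changed = False
        for i in list(stubborn):
            _, pre, _, _ = actions[i]
            if pre <= state:                  # applicable: add all interfering actions
                new = {j for j in range(len(actions)) if j != i and interfere(i, j)}
            else:                             # inapplicable: add a necessary enabling set
                missing = next(iter(pre - state))
                new = achievers(missing)
            if not new <= stubborn:
                stubborn |= new
                changed = True
    return stubborn

def pruned_successors(state, goal, actions):
    """Expand only the applicable actions inside the stubborn set."""
    sss = strong_stubborn_set(state, goal, actions)
    return [i for i in sss if actions[i][1] <= state]

# Two commutative, non-interfering picks: only one is expanded in this state.
actions = [
    ("pick1", frozenset({"p1-at-A"}), frozenset({"holding1"}), frozenset({"p1-at-A"})),
    ("pick2", frozenset({"p2-at-A"}), frozenset({"holding2"}), frozenset({"p2-at-A"})),
]
state = frozenset({"p1-at-A", "p2-at-A"})
goal = frozenset({"holding1", "holding2"})
print(pruned_successors(state, goal, actions))   # one of the two picks is pruned
```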

