VHPOP: Versatile Heuristic Partial Order Planner

2003 ◽  
Vol 20 ◽  
pp. 405-430 ◽  
Author(s):  
H. L.S. Younes ◽  
R. G. Simmons

VHPOP is a partial order causal link (POCL) planner loosely based on UCPOP. It draws from the experience gained in the early to mid-1990s on flaw selection strategies for POCL planning, and combines this with more recent developments in the field of domain-independent planning, such as distance-based heuristics and reachability analysis. We present an adaptation of the additive heuristic for plan space planning, and modify it to account for possible reuse of existing actions in a plan. We also propose a large set of novel flaw selection strategies, and show how these can help us solve more problems than was previously possible with POCL planners. VHPOP also supports planning with durative actions by incorporating standard techniques for temporal constraint reasoning. We demonstrate that the same heuristic techniques used to boost the performance of classical POCL planning can be effective in domains with durative actions as well. The result is a versatile heuristic POCL planner competitive with established CSP-based and heuristic state space planners.
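Since the abstract describes adapting the additive heuristic to plan-space search, a minimal sketch may help fix the idea: per-literal costs are computed once by a fixed point over the ground actions and then summed over a partial plan's open conditions. The STRIPS-style `Action` class, the fixed-point loop, and the tiny example are illustrative assumptions, not VHPOP's implementation (which additionally accounts for the reuse of actions already in the plan).

```python
# A minimal sketch of an additive-heuristic estimate for open conditions in a
# POCL plan; literals are plain strings and actions are STRIPS-style sets.
class Action:
    def __init__(self, name, preconditions, effects):
        self.name = name
        self.preconditions = frozenset(preconditions)
        self.effects = frozenset(effects)

def additive_costs(initial_state, actions):
    """Fixed point: cost 0 for initially true literals, otherwise
    1 + sum of the costs of the cheapest achiever's preconditions."""
    INF = float("inf")
    cost = {lit: 0 for lit in initial_state}
    changed = True
    while changed:
        changed = False
        for a in actions:
            pre_cost = sum(cost.get(p, INF) for p in a.preconditions)
            if pre_cost == INF:
                continue
            for e in a.effects:
                if 1 + pre_cost < cost.get(e, INF):
                    cost[e] = 1 + pre_cost
                    changed = True
    return cost

def h_add(open_conditions, cost):
    """Rank a partial plan by summing the costs of its open conditions."""
    return sum(cost.get(q, float("inf")) for q in open_conditions)

# Tiny example: one action achieving the single open condition.
acts = [Action("load", {"at-truck"}, {"loaded"})]
costs = additive_costs({"at-truck"}, acts)
print(h_add({"loaded"}, costs))  # -> 1
```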

1997 ◽  
Vol 6 ◽  
pp. 223-262 ◽  
Author(s):  
M. E. Pollack ◽  
D. Joslin ◽  
M. Paolucci

Several recent studies have compared the relative efficiency of alternative flaw selection strategies for partial-order causal link (POCL) planning. We review this literature, and present new experimental results that generalize the earlier work and explain some of the discrepancies in it. In particular, we describe the Least-Cost Flaw Repair (LCFR) strategy developed and analyzed by Joslin and Pollack (1994), and compare it with other strategies, including Gerevini and Schubert's (1996) ZLIFO strategy. LCFR and ZLIFO make very different, and apparently conflicting claims about the most effective way to reduce search-space size in POCL planning. We resolve this conflict, arguing that much of the benefit that Gerevini and Schubert ascribe to the LIFO component of their ZLIFO strategy is better attributed to other causes. We show that for many problems, a strategy that combines least-cost flaw selection with the delay of separable threats will be effective in reducing search-space size, and will do so without excessive computational overhead. Although such a strategy thus provides a good default, we also show that certain domain characteristics may reduce its effectiveness.
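A minimal sketch of the least-cost idea discussed above, assuming a simplified plan representation: each pending flaw is scored by how many refinements would repair it, and the cheapest flaw is selected. The `plan` dictionary, the `count_repairs` helper, and the flaw labels are hypothetical, not the data structures of the planners studied.

```python
# LCFR-style selection: among the pending flaws, pick the one whose repair
# adds the fewest children to the search space.
def count_repairs(flaw, plan):
    """Hypothetical helper: number of refinements resolving `flaw` in `plan`
    (e.g. candidate achievers for an open condition, or promotion/demotion/
    separation options for a threat)."""
    return len(plan["repairs"][flaw])

def select_flaw_lcfr(plan):
    flaws = plan["flaws"]
    return min(flaws, key=lambda f: count_repairs(f, plan)) if flaws else None

# Tiny example: the threat with two repairs is chosen over the open condition
# that has three candidate achievers.
plan = {
    "flaws": ["open:at(pkg,depot)", "threat:clobbers-link-3"],
    "repairs": {
        "open:at(pkg,depot)": ["unload", "drive-and-unload", "init-link"],
        "threat:clobbers-link-3": ["promote", "demote"],
    },
}
print(select_flaw_lcfr(plan))  # -> 'threat:clobbers-link-3'
```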


2010 ◽  
Vol 39 ◽  
pp. 217-268 ◽  
Author(s):  
M. O. Riedl ◽  
R. M. Young

Narrative, and in particular storytelling, is an important part of the human experience. Consequently, computational systems that can reason about narrative can be more effective communicators, entertainers, educators, and trainers. One of the central challenges in computational narrative reasoning is narrative generation, the automated creation of meaningful event sequences. There are many factors -- logical and aesthetic -- that contribute to the success of a narrative artifact. Central to this success is its understandability. We argue that the following two attributes of narratives are universal: (a) the logical causal progression of plot, and (b) character believability. Character believability is the perception by the audience that the actions performed by characters do not negatively impact the audience's suspension of disbelief. Specifically, characters must be perceived by the audience to be intentional agents. In this article, we explore the use of refinement search as a technique for solving the narrative generation problem -- to find a sound and believable sequence of character actions that transforms an initial world state into a world state in which goal propositions hold. We describe a novel refinement search planning algorithm -- the Intent-based Partial Order Causal Link (IPOCL) planner -- that, in addition to creating causally sound plot progression, reasons about character intentionality by identifying possible character goals that explain their actions and creating plan structures that explain why those characters commit to their goals. We present the results of an empirical evaluation that demonstrates that narrative plans generated by the IPOCL algorithm support audience comprehension of character intentions better than plans generated by conventional partial-order planners.
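A heavily simplified sketch of the refinement-search loop the article builds on is given below: partial plans are expanded by repairing flaws, where the flaw set is assumed to include intention flaws (character actions not yet covered by a frame of commitment) alongside open conditions and threats. All names and structures are illustrative, not the IPOCL implementation.

```python
# Best-first refinement search over partial plans; a plan is a solution only
# when every flaw, including every intention flaw, has been repaired.
import heapq

def refinement_search(initial_plan, flaws_of, refinements_of, heuristic):
    frontier = [(heuristic(initial_plan), 0, initial_plan)]
    tie = 1
    while frontier:
        _, _, plan = heapq.heappop(frontier)
        flaws = flaws_of(plan)      # open conditions, threats, intention flaws
        if not flaws:
            return plan             # causally sound and fully motivated plan
        flaw = flaws[0]             # plug in any flaw-selection strategy here
        for child in refinements_of(plan, flaw):
            heapq.heappush(frontier, (heuristic(child), tie, child))
            tie += 1
    return None                     # search space exhausted without a solution
```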


2019 ◽  
Vol 214 ◽  
pp. 02037
Author(s):  
Marko Petrič ◽  
Markus Frank ◽  
Frank Gaede ◽  
André Sailer

For a successful experiment, it is of utmost importance to provide a consistent detector description. This is also the main motivation behind DD4hep, which addresses detector description in a broad sense, including the geometry and the materials used in the device, and additional parameters describing, e.g., the detection techniques, constants required for alignment and calibration, the description of the readout structures, and conditions data. An integral part of DD4hep is DDG4, a powerful tool that converts arbitrary DD4hep detector geometries to Geant4 and provides access to all Geant4 action stages. It is equipped with a comprehensive suite of plugins that includes handling of different I/O formats, Monte Carlo truth linking, and a large set of segmentation and sensitive-detector classes, allowing the simulation of a wide variety of detector technologies. In the following, recent developments in DD4hep/DDG4, such as the addition of a ROOT-based persistency mechanism for the detector description and the development of framework support for DDG4, are highlighted. Through the latter mechanism, an experiment's data-processing framework can interface its essential tools to all DDG4 actions, which allows for simple integration of DD4hep into existing experiment frameworks.
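For orientation, a sketch of a DDG4 Python steering script in the style of the DD4hep examples is shown below; the compact file name, the particle-gun parameters, and the exact method signatures are assumptions that may differ between DD4hep versions, and a real setup would also configure output actions and sensitive detectors.

```python
# A hedged sketch of steering DDG4 from Python: load a DD4hep compact
# description, convert it for Geant4, and run a minimal simulation.
import DDG4

kernel = DDG4.Kernel()
kernel.loadGeometry("file:MyDetector_compact.xml")   # placeholder compact file

geant4 = DDG4.Geant4(kernel)                         # bridges DD4hep geometry to Geant4
geant4.setupGun("Gun", particle="e-", energy=10e3,   # energy assumed in Geant4 MeV units
                multiplicity=1)
geant4.setupPhysics("FTFP_BERT")                     # standard Geant4 physics list

kernel.configure()
kernel.initialize()
kernel.run()
kernel.terminate()
```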


2011 ◽  
Vol 61 ◽  
pp. 79-83 ◽  
Author(s):  
Salim Bennoud ◽  
Zergoug Mourad

All aircraft, whatever their type, are regularly inspected. These controls are mainly visual and external; other controls, such as "major inspections" or "general revisions", are more extensive and require dismantling certain parts of the aircraft. Some parts of the aircraft nevertheless remain inaccessible and are therefore more difficult to inspect (compressor, combustion chamber, and turbine). The means of detection must ensure control of all parts, either during initial construction or during operation. Non-destructive testing (NDT) gathers the most widespread methods for detecting defects in a part or assessing the integrity of a structure. The aim of this work is to present the different NDT techniques and to explore their limits, taking into account the difficulties presented by the hot section of a turbojet, in order to propose one or more effective, non-subjective, and less expensive means for detecting and controlling cracks in the hot section of a turbojet. To achieve this goal, we followed these steps: acquire the technical, scientific, and practical foundations of magnetic, electrical, and electromagnetic fields related to industrial applications, primarily electromagnetic NDT techniques; apply a scientific approach that integrates this fundamental knowledge in a synthetic and pragmatic manner, so as to master the implementation of NDT techniques and establish a synthesis comparing the use of the different methods; and review recent developments concerning the standard techniques and their foreseeable evolution (eddy currents, ultrasonic guided waves, etc.), as well as the possibility of applying new techniques.
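Since the work centres on electromagnetic NDT of a turbojet's hot section, one textbook relation worth keeping in mind is the eddy-current skin depth, which bounds the depth at which such techniques can detect cracks; this is standard background, not a result of the paper.

```latex
% Standard penetration (skin) depth of eddy currents in a conductor:
% f is the excitation frequency, \mu the magnetic permeability, and
% \sigma the electrical conductivity of the inspected part.
\delta = \frac{1}{\sqrt{\pi f \mu \sigma}}
```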


2001 ◽  
Vol 15 ◽  
pp. 115-161 ◽  
Author(s):  
I. Refanidis ◽  
I. Vlahavas

This paper presents GRT, a domain-independent heuristic planning system for STRIPS worlds. GRT solves problems in two phases. In the pre-processing phase, it estimates the distance between each fact and the goals of the problem, working in a backward direction. Then, in the search phase, these estimates are used to further estimate the distance between each intermediate state and the goals, thus guiding the search process in a forward direction on a best-first basis. The paper presents the benefits of adopting opposite directions for the pre-processing and the search phases, discusses some difficulties that arise in the pre-processing phase, and introduces techniques to cope with them. Moreover, it presents several methods for improving the efficiency of the heuristic, by enriching the representation and by reducing the size of the problem. Finally, a method for overcoming local optimal states, based on domain axioms, is proposed. With this method, difficult problems are decomposed into easier sub-problems that have to be solved sequentially. The performance results from various domains, including those of the recent planning competitions, show that GRT is among the fastest planners.
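A heavily simplified sketch of the two-phase scheme described above: a backward fixed point estimates each fact's distance to the goals, and a forward best-first search is scored with those estimates. The aggregation used here (the minimum distance over a state's facts) is a crude stand-in for GRT's far more informed combination of fact distances and related facts, and all data structures are illustrative.

```python
# Phase 1: backward estimates of fact-to-goal distances.
# Phase 2: forward best-first search guided by those precomputed distances.
import heapq

def goal_distances(goals, actions):
    """dist[f]: estimated steps from a state containing f to the goals,
    obtained by regressing through the actions until a fixed point."""
    INF = float("inf")
    dist = {g: 0 for g in goals}
    changed = True
    while changed:
        changed = False
        for a in actions:
            eff = min((dist.get(e, INF) for e in a["effects"]), default=INF)
            if eff == INF:
                continue
            for p in a["preconditions"]:
                if 1 + eff < dist.get(p, INF):
                    dist[p] = 1 + eff
                    changed = True
    return dist

def forward_best_first(init, goals, actions, dist):
    h = lambda s: min((dist.get(f, float("inf")) for f in s), default=float("inf"))
    frontier, seen, tie = [(h(frozenset(init)), 0, frozenset(init), [])], set(), 1
    while frontier:
        _, _, state, plan = heapq.heappop(frontier)
        if set(goals) <= state:
            return plan
        if state in seen:
            continue
        seen.add(state)
        for a in actions:
            if set(a["preconditions"]) <= state:
                nxt = frozenset(state | set(a["effects"]))
                heapq.heappush(frontier, (h(nxt), tie, nxt, plan + [a["name"]]))
                tie += 1
    return None
```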


Author(s):  
Gregor Hendel

Large Neighborhood Search (LNS) heuristics are among the most powerful but also most expensive heuristics for mixed integer programs (MIP). Ideally, a solver adaptively concentrates its limited computational budget by learning which LNS heuristics work best for the MIP problem at hand. To this end, this work introduces Adaptive Large Neighborhood Search (ALNS) for MIP, a primal heuristic that acts as a framework for eight popular LNS heuristics such as Local Branching and Relaxation Induced Neighborhood Search (RINS). We distinguish the available LNS heuristics by their individual search spaces, which we call auxiliary problems. The decision of which auxiliary problem should be executed is guided by selection strategies for the multi-armed bandit problem, a related optimization problem in which suitable actions have to be chosen to maximize a reward function. In this paper, we propose an LNS-specific reward function to learn to distinguish between the available auxiliary problems based on successful calls and failures. A second, algorithmic enhancement is a generic variable fixing prioritization, which ALNS employs to adjust the subproblem complexity as needed. This is particularly useful for those auxiliary problems that do not fix variables by themselves. The proposed primal heuristic has been implemented within the MIP solver SCIP. An extensive computational study is conducted to compare different LNS strategies within our ALNS framework on a large set of publicly available MIP instances from the MIPLIB and Coral benchmark sets. The results of this simulation are used to calibrate the parameters of the bandit selection strategies. A second computational experiment shows the computational benefits of the proposed ALNS framework within the MIP solver SCIP.
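A minimal sketch of bandit-guided selection among auxiliary problems, using UCB1 as the selection strategy. The three heuristic labels and the reward shaping below (full reward for an improving run, partial reward for a feasible but non-improving run, none for a failure) are illustrative stand-ins for the paper's LNS-specific reward function, not SCIP's implementation.

```python
# UCB1 over a fixed set of LNS auxiliary problems: balance exploiting the
# historically most rewarding heuristic against exploring the others.
import math, random

class UCB1Selector:
    def __init__(self, arms, exploration=1.0):
        self.arms = list(arms)
        self.c = exploration
        self.n = {a: 0 for a in self.arms}          # times each arm was played
        self.reward_sum = {a: 0.0 for a in self.arms}
        self.t = 0

    def select(self):
        self.t += 1
        for a in self.arms:                         # play every arm once first
            if self.n[a] == 0:
                return a
        return max(self.arms, key=lambda a: self.reward_sum[a] / self.n[a]
                   + self.c * math.sqrt(2 * math.log(self.t) / self.n[a]))

    def update(self, arm, reward):
        self.n[arm] += 1
        self.reward_sum[arm] += reward

def run_auxiliary_problem(name):
    """Placeholder for actually solving the sub-MIP; returns an outcome label."""
    return random.choice(["improved", "feasible", "failed"])

REWARD = {"improved": 1.0, "feasible": 0.3, "failed": 0.0}

selector = UCB1Selector(["local-branching", "RINS", "crossover"])
for _ in range(50):
    arm = selector.select()
    selector.update(arm, REWARD[run_auxiliary_problem(arm)])
```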


Author(s):  
Ayeley P. Tchangani

Decision analysis, the mechanism by which a final decision is reached in terms of choice (choosing an alternative or a subset of alternatives from a large set of alternatives), ranking (ordering the alternatives of a set from worst to best), classification (assigning alternatives to known classes or categories), or sorting (clustering alternatives to form homogeneous classes or categories), is certainly the most pervasive human activity. Some decisions are made routinely and do not need sophisticated algorithms to support the decision analysis process, whereas other decisions require more or less complex processes to reach a final decision. Methods and models developed to solve decision analysis problems are in constant evolution, moving from the mechanistic models of operations research to more sophisticated, soft-computing-oriented models that attempt to integrate human attitudes (emotion, affect, fear, egoism, altruism, selfishness, etc.). This complex, soft-computing, near-human mechanism of problem solving is made possible by the overwhelming computational power and data storage capacity of modern computers. The purpose of this chapter is to present new and recent developments in decision analysis that attempt to integrate human judgment through the notion of bipolarity.
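As a purely illustrative sketch of the bipolarity notion, one can evaluate each alternative on two separate scales, a supporting (selectability) degree and an opposing (rejectability) degree, and retain the alternatives whose support outweighs their drawbacks; the criteria, weights, and decision rule below are assumptions for illustration, not the chapter's formal model.

```python
# Bipolar evaluation: aggregate weighted supporting and rejecting criteria
# separately, then keep alternatives whose support clears a caution threshold.
def bipolar_scores(alternative, supporting, rejecting):
    """Each entry is a (weight, value-in-[0,1]) pair."""
    sel = sum(w * v for w, v in supporting[alternative])
    rej = sum(w * v for w, v in rejecting[alternative])
    return sel, rej

def choose(alternatives, supporting, rejecting, caution=1.0):
    kept = []
    for a in alternatives:
        sel, rej = bipolar_scores(a, supporting, rejecting)
        if sel >= caution * rej:            # keep alternatives worth their drawbacks
            kept.append((a, sel, rej))
    return sorted(kept, key=lambda t: t[1] - t[2], reverse=True)

# Hypothetical siting decision with two alternatives.
supporting = {"site-A": [(0.6, 0.9), (0.4, 0.5)], "site-B": [(0.6, 0.4), (0.4, 0.8)]}
rejecting  = {"site-A": [(1.0, 0.7)], "site-B": [(1.0, 0.2)]}
print(choose(["site-A", "site-B"], supporting, rejecting))
```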


2021 ◽  
Vol 40 (5) ◽  
pp. 1-14
Author(s):  
Michael Mara ◽  
Felix Heide ◽  
Michael Zollhöfer ◽  
Matthias Nießner ◽  
Pat Hanrahan

Large-scale optimization problems at the core of many graphics, vision, and imaging applications are often implemented by hand in tedious and error-prone processes in order to achieve high performance (in particular on GPUs), despite recent developments in libraries and DSLs. At the same time, these hand-crafted solver implementations reveal that the key for high performance is a problem-specific schedule that enables efficient usage of the underlying hardware. In this work, we incorporate this insight into Thallo, a domain-specific language for large-scale non-linear least squares optimization problems. We observe various code reorganizations performed by implementers of high-performance solvers in the literature, and then define a set of basic operations that span these scheduling choices, thereby defining a large scheduling space. Users can either specify code transformations in a scheduling language or use an autoscheduler. Thallo takes as input a compact, shader-like representation of an energy function and a (potentially auto-generated) schedule, translating the combination into high-performance GPU solvers. Since Thallo can generate solvers from a large scheduling space, it can handle a large set of large-scale non-linear and non-smooth problems with various degrees of non-locality and compute-to-memory ratios, including diverse applications such as bundle adjustment, face blendshape fitting, and spatially-varying Poisson deconvolution, as seen in Figure 1. Abstracting schedules from the optimization, we outperform state-of-the-art GPU-based optimization DSLs by an average of 16× across all applications introduced in this work, and even some published hand-written GPU solvers by 30%+.
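To make the problem class concrete, a generic damped Gauss-Newton solver for a nonlinear least-squares energy is sketched below in NumPy; it illustrates only the underlying mathematics and says nothing about Thallo's shader-like energy language, its scheduling operators, or its GPU code generation.

```python
# Minimize E(x) = sum_i r_i(x)^2 with damped Gauss-Newton steps.
import numpy as np

def gauss_newton(residuals, jacobian, x0, iterations=20):
    x = np.asarray(x0, dtype=float)
    for _ in range(iterations):
        r = residuals(x)
        J = jacobian(x)
        # Solve the normal equations (J^T J) dx = -J^T r with a small damping term.
        JTJ = J.T @ J + 1e-8 * np.eye(x.size)
        dx = np.linalg.solve(JTJ, -J.T @ r)
        x = x + dx
    return x

# Example: fit y = a * exp(b * t) to exact samples generated with a=2, b=1.5.
t = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(1.5 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)], axis=1)
print(gauss_newton(res, jac, [1.0, 1.0]))   # expect approximately [2.0, 1.5]
```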


Author(s):  
Francesco Percassi ◽  
Alfonso E. Gerevini ◽  
Enrico Scala ◽  
Ivan Serina ◽  
Mauro Vallati
