On the Completeness of Pruning Techniques for Planning with Conditional Effects

2013 ◽  
Vol 2013 ◽  
pp. 1-6
Author(s):  
Dunbo Cai ◽  
Sheng Xu ◽  
Tongzhou Zhao ◽  
Yanduo Zhang

Pruning techniques and heuristics are two keys to heuristic search-based planning. The helpful actions pruning (HAP) strategy and relaxed-plan-based heuristics are two representatives among those methods and are still popular in state-of-the-art planners. Here, we present new analyses of the properties of HAP. Specifically, we show new reasons for which HAP can cause incompleteness of a search procedure. We prove that, in general, HAP is incomplete for planning with conditional effects if factored expansions of actions are used. To preserve completeness, we propose a pruning strategy based on relevance analysis and confrontation, and we show that both relevance analysis and confrontation are necessary. We call it the confrontation and goal relevant actions pruning (CGRAP) strategy. However, CGRAP is computationally hard to compute exactly, so we suggest practical approximations from the literature.
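To make the HAP idea concrete, the following is a minimal Python sketch of greedy best-first search with helpful-actions pruning. The planner interface (relaxed_plan, successors, heuristic) is hypothetical and purely illustrative, not the authors' implementation; it shows where the pruning happens and why discarding non-helpful actions can lose completeness.

```python
# Illustrative sketch of helpful-actions pruning (HAP) in greedy
# best-first search. The planner interface (relaxed_plan, successors,
# heuristic) is hypothetical, not the authors' implementation.
import heapq
import itertools

def gbfs_with_hap(initial_state, is_goal, successors, relaxed_plan, heuristic):
    """Expand only 'helpful' actions: applicable actions that appear in
    the first layer of the relaxed plan for the current state. This is
    the pruning that the paper shows can be incomplete under factored
    expansions of conditional effects."""
    counter = itertools.count()  # tie-breaker for the heap
    frontier = [(heuristic(initial_state), next(counter), initial_state)]
    seen = {initial_state}
    while frontier:
        _, _, state = heapq.heappop(frontier)
        if is_goal(state):
            return state
        helpful = set(relaxed_plan(state).first_layer_actions())
        for action, succ in successors(state):
            if action not in helpful:  # HAP: prune non-helpful actions
                continue
            if succ not in seen:
                seen.add(succ)
                heapq.heappush(frontier, (heuristic(succ), next(counter), succ))
    return None  # pruning may exhaust the frontier: a source of incompleteness
```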

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Mostafa El Habib Daho ◽  
Nesma Settouti ◽  
Mohammed El Amine Bechar ◽  
Amina Boublenza ◽  
Mohammed Amine Chikh

Purpose: Ensemble methods have been widely used in the field of pattern recognition due to the difficulty of finding a single classifier that performs well on a wide variety of problems. Despite the effectiveness of these techniques, studies have shown that ensemble methods generate a large number of hypotheses that, in most cases, contain redundant classifiers. Several works in the state of the art attempt to reduce the set of hypotheses without affecting performance.
Design/methodology/approach: In this work, the authors propose a pruning method that takes into consideration the correlation between classifiers and classes, and of each classifier with the rest of the set. The authors used the random forest algorithm as a tree-based ensemble classifier, and the pruning was performed by a technique inspired by the CFS (correlation-based feature selection) algorithm.
Findings: The proposed method, CES (Correlation-based Ensemble Selection), was evaluated on ten datasets from the UCI machine learning repository, and its performance was compared to six ensemble pruning techniques. The results showed that the proposed pruning method selects a small ensemble in a smaller amount of time while improving classification rates compared to the state-of-the-art methods.
Originality/value: CES is a new ordering-based method that uses the CFS algorithm. CES selects, in a short time, a small sub-ensemble that outperforms results obtained from the whole forest and the other state-of-the-art techniques used in this study.
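As an illustration of CFS-style selection applied to ensemble members, here is a hedged Python sketch that greedily orders classifiers by the standard CFS merit computed over their predictions. The agreement-based correlation measure and the stopping rule are assumptions for the sketch, not the authors' exact procedure.

```python
# Hedged sketch of correlation-based ensemble selection in the spirit
# of CES/CFS. The merit is the standard CFS merit, applied here to
# classifier predictions instead of features; the correlation measure
# (prediction agreement) and stopping rule are illustrative assumptions.
import numpy as np

def cfs_merit(selected, cls_target_corr, cls_cls_corr):
    k = len(selected)
    r_ct = np.mean([cls_target_corr[i] for i in selected])
    if k == 1:
        return r_ct
    r_cc = np.mean([cls_cls_corr[i, j]
                    for a, i in enumerate(selected)
                    for j in selected[a + 1:]])
    return (k * r_ct) / np.sqrt(k + k * (k - 1) * r_cc)

def select_ensemble(predictions, labels):
    """predictions: (n_classifiers, n_samples) array of predicted labels;
    labels: (n_samples,) array of true labels."""
    n = predictions.shape[0]
    # Correlation with the target, here measured as simple agreement;
    # Pearson correlation on indicator vectors would also work.
    agree_target = (predictions == labels).mean(axis=1)
    agree_pair = np.array([[(predictions[i] == predictions[j]).mean()
                            for j in range(n)] for i in range(n)])
    selected, remaining, best_merit = [], list(range(n)), -np.inf
    while remaining:
        scored = [(cfs_merit(selected + [c], agree_target, agree_pair), c)
                  for c in remaining]
        merit, best = max(scored)
        if merit <= best_merit:  # stop when the merit no longer improves
            break
        best_merit = merit
        selected.append(best)
        remaining.remove(best)
    return selected
```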


Author(s):  
Daniel Höller ◽  
Pascal Bercher ◽  
Gregor Behnke ◽  
Susanne Biundo

Planning is the task of finding a sequence of actions that achieves the goal(s) of an agent. It is solved based on a model describing the environment and how to change it. There are several approaches to solving planning tasks; two of the most popular are classical planning and hierarchical planning. Solvers are often based on heuristic search, but techniques in classical planning are more sophisticated, especially regarding domain-independent heuristics. However, due to the different problem classes, it is difficult to use them in hierarchical planning. In this paper, we describe how to use arbitrary classical heuristics in hierarchical planning and show that the resulting system outperforms the state of the art in hierarchical planning.


2016 ◽  
Vol 57 ◽  
pp. 229-271 ◽  
Author(s):  
Marcel Steinmetz ◽  
Jörg Hoffmann ◽  
Olivier Buffet

Unavoidable dead-ends are common in many probabilistic planning problems, e.g. when actions may fail or when operating under resource constraints. An important objective in such settings is MaxProb: determining the maximal probability with which the goal can be reached, and a policy achieving that probability. Yet algorithms for MaxProb probabilistic planning are severely underexplored, to the extent that there is scant evidence of what the empirical state of the art actually is. We close this gap with a comprehensive empirical analysis. We design and explore a large space of heuristic search algorithms, systematizing known algorithms and contributing several new algorithm variants. We consider MaxProb, as well as weaker objectives that we baptize AtLeastProb (requiring to achieve a given goal probability threshold) and ApproxProb (requiring to compute the maximum goal probability up to a given accuracy). We explore both the general case, where there may be 0-reward cycles, and the practically relevant special case of acyclic planning, such as planning with a limited action-cost budget. We design suitable termination criteria, search algorithm variants, dead-end pruning methods using classical planning heuristics, and node selection strategies. We design a benchmark suite comprising more than 1000 instances adapted from the IPPC, resource-constrained planning, and simulated penetration testing. Our evaluation clarifies the state of the art, characterizes the behavior of a wide range of heuristic search algorithms, and demonstrates significant benefits of our new algorithm variants.
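For reference, MaxProb has a simple fixed-point characterization: V*(s) = 1 for goal states, and otherwise V*(s) = max_a Σ_{s'} P(s'|s,a) V*(s'). The baseline value-iteration sketch below, over a hypothetical explicit-MDP encoding, converges to this fixed point from below; the epsilon parameter corresponds to the accuracy notion of ApproxProb. It is an illustration of the objective, not one of the paper's heuristic search algorithms.

```python
# Minimal MaxProb value-iteration sketch on an explicit MDP; the
# transition format is a hypothetical encoding for illustration.
# V*(s) is the maximal probability of reaching a goal from s.
def maxprob_value_iteration(states, goals, transitions, epsilon=1e-6):
    """transitions[s] -> list of actions, each a list of (prob, succ).
    Starting from V = 0 converges from below to the least fixed point,
    which is the maximal goal probability (also with 0-reward cycles)."""
    V = {s: (1.0 if s in goals else 0.0) for s in states}
    while True:
        delta = 0.0
        for s in states:
            if s in goals or not transitions.get(s):
                continue  # goals and dead-ends keep their value (1 or 0)
            new_v = max(sum(p * V[t] for p, t in action)
                        for action in transitions[s])
            delta = max(delta, abs(new_v - V[s]))
            V[s] = new_v
        if delta < epsilon:
            return V
```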


2020 ◽  
Vol 62 (2) ◽  
pp. 99-115
Author(s):  
Janek Bevendorff ◽  
Tobias Wenzel ◽  
Martin Potthast ◽  
Matthias Hagen ◽  
Benno Stein

Authorship verification is the task of determining whether two texts were written by the same author, based on a writing style analysis. Author obfuscation is the adversarial task of preventing a successful verification by altering a text's style so that it no longer resembles that of its original author. This paper introduces new algorithms for both tasks and reports on a comprehensive evaluation to ascertain the ability of the state of the art in authorship verification to withstand obfuscation. After introducing a new generalization of the well-known unmasking algorithm for short texts, thus completing our collection of state-of-the-art verification algorithms, we introduce an approach that (1) models writing style difference as the Jensen-Shannon distance between the character n-gram distributions of texts, and (2) manipulates an author's writing style in a sophisticated manner using heuristic search. For obfuscation, we explore the huge space of textual variants in order to find a paraphrased version of the to-be-obfuscated text that has a sufficiently high Jensen-Shannon distance at minimal cost in terms of text quality loss. We analyze, quantify, and illustrate the rationale of this approach, define paraphrasing operators, derive text length-invariant thresholds for termination, and develop an effective obfuscation framework. Our authorship obfuscation approach defeats the presented state-of-the-art verification approaches while keeping text changes at a minimum. As a final contribution, we discuss and experimentally evaluate a reverse obfuscation attack against our obfuscation approach, as well as possible remedies.
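The style-difference measure in step (1) can be stated precisely: the Jensen-Shannon distance between the character n-gram distributions of two texts. Below is a self-contained Python sketch; the n-gram order is an illustrative assumption.

```python
# Self-contained sketch of the style-difference measure described
# above: Jensen-Shannon distance between character n-gram
# distributions. The n-gram order n=3 is an illustrative assumption.
import math
from collections import Counter

def char_ngrams(text, n=3):
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def jensen_shannon_distance(text_a, text_b, n=3):
    ca, cb = char_ngrams(text_a, n), char_ngrams(text_b, n)
    za, zb = sum(ca.values()), sum(cb.values())
    support = set(ca) | set(cb)
    P = {g: ca[g] / za for g in support}  # Counter returns 0 for missing grams
    Q = {g: cb[g] / zb for g in support}
    M = {g: 0.5 * (P[g] + Q[g]) for g in support}

    def kl(p, m):  # Kullback-Leibler divergence D(p || m), in bits
        return sum(p[g] * math.log2(p[g] / m[g]) for g in support if p[g] > 0)

    jsd = 0.5 * kl(P, M) + 0.5 * kl(Q, M)  # JS divergence, bounded by 1
    return math.sqrt(jsd)                  # JS distance is its square root
```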


2021 ◽  
pp. 1-14
Author(s):  
Heng Wang ◽  
Xiang Ye ◽  
Yong Li

Model pruning aims to reduce the parameter count of deep neural networks while retaining their performance. Existing strategies often treat all layers equally, with all layers simply sharing the same pruning rate. However, our experiments show that the degree of redundancy differs from layer to layer. Based on this observation, this work proposes a pruning strategy that depends on the layer-wise redundancy degree. Firstly, we define the redundancy degree of each layer through the norm and similarity redundancy of its filters. Then a novel layer-wise strategy, Redundancy-dependent Filter Pruning (RedFiP), is proposed, which prunes a different proportion of filters at each layer according to the defined redundancy degree. Since the redundancy analysis and experimental results of RedFiP show that deeper layers need fewer filters, a phase-wise strategy, Phased Filter Pruning (PFP), is proposed, which divides the layers into three phases, with layers in each phase sharing the same pruning rate. The phase-wise PFP allows the layer-wise RedFiP to be easily implemented in existing deep neural network structures. Experimental results show that when 40% of total parameters are pruned, RedFiP outperforms the state-of-the-art strategy FPGM-Mixed by 1.83% on CIFAR-100, and even slightly outperforms the non-pruned model by 0.11% on CIFAR-10. On ImageNet-1k, RedFiP (30%) and PFP (30%) outperform FPGM-Mixed (30%) by 1.3% and 0.8%, respectively, with ResNet-18.
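As a rough illustration of a layer-wise redundancy degree built from filter norms and filter similarity, here is a numpy sketch; the thresholds, the equal weighting of the two terms, and the rate-redistribution rule are assumptions for illustration, not the paper's exact definitions.

```python
# Hedged numpy sketch of a layer-wise redundancy degree combining
# filter-norm and filter-similarity redundancy, in the spirit of the
# RedFiP description above; thresholds and weighting are illustrative.
import numpy as np

def layer_redundancy(filters, norm_tau=0.1, sim_tau=0.9):
    """filters: (num_filters, k*k*channels) flattened conv filters."""
    norms = np.linalg.norm(filters, axis=1)
    # Norm redundancy: share of filters whose norm is small relative
    # to the layer's largest filter norm.
    norm_red = np.mean(norms < norm_tau * norms.max())
    # Similarity redundancy: share of filter pairs with high cosine similarity.
    unit = filters / (norms[:, None] + 1e-12)
    cos = unit @ unit.T
    iu = np.triu_indices(len(filters), k=1)  # upper-triangle pair indices
    sim_red = np.mean(cos[iu] > sim_tau)
    return 0.5 * (norm_red + sim_red)  # higher score -> prune more filters

def layer_pruning_rates(layer_filters, base_rate=0.4):
    """Redistribute a global pruning rate across layers by redundancy."""
    scores = np.array([layer_redundancy(f) for f in layer_filters])
    return np.clip(base_rate * scores / scores.mean(), 0.0, 0.9)  # cap rates
```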


2017 ◽  
Author(s):  
André G. Pereira ◽  
Luciana S. Buriol ◽  
Marcus Ritt

Moving-blocks problems are extremely hard to solve and are a representative abstraction of many applications. Despite their importance, the known computational complexity results are limited to a few versions of these problems. In addition, there are no effective methods to solve them optimally. We address both of these issues. This thesis proves the PSPACE-completeness of many versions of moving-blocks problems. Moreover, we propose new methods to solve these problems optimally, based on heuristic search with admissible heuristic functions and tie-breaking strategies. Our methods advance the state of the art, open new lines of research, and improve the results of applications.
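As a generic illustration of this kind of optimal search, here is a Python sketch of A* with an admissible heuristic and a common tie-breaking rule (among equal f-values, prefer the node with larger g). The state interface is hypothetical, and the tie-breaking rule shown is a standard choice, not necessarily the thesis's.

```python
# Generic A* sketch with an admissible heuristic h and tie-breaking on
# larger g among equal f-values; the state interface is hypothetical.
import heapq
import itertools

def astar(start, is_goal, successors, h):
    counter = itertools.count()
    # Priority: (f, -g, tie); among equal f, deeper nodes come first.
    frontier = [(h(start), 0, next(counter), start)]
    best_g = {start: 0}
    parent = {start: None}
    while frontier:
        f, neg_g, _, state = heapq.heappop(frontier)
        g = -neg_g
        if g > best_g.get(state, float("inf")):
            continue  # stale queue entry
        if is_goal(state):
            path = []
            while state is not None:  # reconstruct the optimal plan
                path.append(state)
                state = parent[state]
            return path[::-1]
        for succ, cost in successors(state):
            ng = g + cost
            if ng < best_g.get(succ, float("inf")):
                best_g[succ] = ng
                parent[succ] = state
                heapq.heappush(frontier, (ng + h(succ), -ng, next(counter), succ))
    return None  # no plan exists
```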


2010 ◽  
Vol 39 ◽  
pp. 51-126 ◽  
Author(s):  
M. Katz ◽  
C. Domshlak

State-space search with explicit abstraction heuristics is at the state of the art of cost-optimal planning. These heuristics are inherently limited, nonetheless, because the size of the abstract space must be bounded by some, possibly very large, constant. Targeting this shortcoming, we introduce the notion of (additive) implicit abstractions, in which the planning task is abstracted by instances of tractable fragments of optimal planning. We then introduce a concrete setting of this framework, called fork-decomposition, that is based on two novel fragments of tractable cost-optimal planning. The induced admissible heuristics are then studied formally and empirically. This study testifies to the accuracy of the fork-decomposition heuristics, yet our empirical evaluation also stresses the tradeoff between their accuracy and the runtime complexity of computing them. Indeed, some of the power of the explicit abstraction heuristics comes from precomputing the heuristic function offline and then determining h(s) for each evaluated state s by a very fast lookup in a "database." By contrast, while fork-decomposition heuristics can be calculated in polynomial time, computing them is far from fast. To address this problem, we show that the time-per-node complexity bottleneck of the fork-decomposition heuristics can be successfully overcome. We demonstrate that an equivalent of the explicit abstraction notion of a "database" exists for the fork-decomposition abstractions as well, despite their exponential-size abstract spaces. We then verify empirically that heuristic search with the "databased" fork-decomposition heuristics competes favorably with the state of the art of cost-optimal planning.
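The "database" idea is easy to sketch for explicit abstractions: compute optimal goal distances in the abstract space offline with a backward uniform-cost search, then answer h(s) by mapping s to its abstract image and looking up the stored distance. The abstraction mapping alpha below is hypothetical; the sketch illustrates explicit pattern-database-style heuristics, not the fork-decomposition construction itself.

```python
# Sketch of a precomputed abstraction-heuristic "database": backward
# Dijkstra over the abstract space offline, O(1) lookups online.
# The abstraction mapping alpha is a hypothetical interface.
import heapq
import itertools

def build_database(abstract_goals, abstract_predecessors):
    """Returns a dict mapping each abstract state to its optimal
    goal distance, via backward uniform-cost search."""
    counter = itertools.count()
    dist = {}
    frontier = [(0, next(counter), s) for s in abstract_goals]
    while frontier:
        d, _, s = heapq.heappop(frontier)
        if s in dist:
            continue  # already settled with a shorter distance
        dist[s] = d
        for pred, cost in abstract_predecessors(s):
            if pred not in dist:
                heapq.heappush(frontier, (d + cost, next(counter), pred))
    return dist

def make_heuristic(database, alpha):
    """alpha maps a concrete state to its abstract image; abstract
    goal distances are admissible estimates for the concrete task."""
    def h(state):
        return database.get(alpha(state), float("inf"))  # inf = dead-end
    return h
```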


Author(s):  
T. A. Welton

Various authors have emphasized the spatial information resident in an electron micrograph taken with adequately coherent radiation. In view of the completion of at least one such instrument, this opportunity is taken to summarize the state of the art of processing such micrographs. We use the usual symbols for the aberration coefficients, and supplement these with ℓ and δ for the transverse coherence length and the fractional energy spread, respectively. We also assume a weak, biologically interesting sample, with principal interest lying in the molecular skeleton remaining after obvious hydrogen loss and other radiation damage has occurred.


2003 ◽  
Vol 48 (6) ◽  
pp. 826-829 ◽  
Author(s):  
Eric Amsel

1968 ◽  
Vol 13 (9) ◽  
pp. 479-480
Author(s):  
Lewis Petrinovich
