Solving Delete Free Planning with Relaxed Decision Diagram Based Heuristics

2020 ◽  
Vol 67 ◽  
pp. 607-651
Author(s):  
Margarita Paz Castro ◽  
Chiara Piacentini ◽  
Andre Augusto Cire ◽  
J. Christopher Beck

We investigate the use of relaxed decision diagrams (DDs) for computing admissible heuristics for the cost-optimal delete-free planning (DFP) problem. Our main contributions are the introduction of two novel DD encodings for a DFP task: a multivalued decision diagram that includes the sequencing aspect of the problem and a binary decision diagram representation of its sequential relaxation. We present construction algorithms for each DD that leverage these different perspectives of the DFP task and provide theoretical and empirical analyses of the associated heuristics. We further show that relaxed DDs can be used beyond heuristic computation to extract delete-free plans, find action landmarks, and identify redundant actions. Our empirical analysis shows that while DD-based heuristics trail the state of the art, even small relaxed DDs are competitive with the linear programming heuristic for the DFP task, thus revealing novel ways of designing admissible heuristics.
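
As a rough illustration of the underlying idea (not the paper's MDD or BDD constructions), the sketch below builds a width-limited relaxed decision diagram over sets of achieved facts for a toy delete-free task. Merging nodes by taking the union of their fact sets can only enable more action applications, so the cheapest cost of reaching a goal node in the relaxed diagram remains an admissible lower bound. The names, merge rule, and layering scheme are illustrative assumptions.

```python
# Toy width-limited relaxed DD for a delete-free (no delete effects) task.
# Names, the merge-by-union rule, and the layering are illustrative assumptions.

def relaxed_dd_heuristic(initial_facts, goal, actions, max_width=4):
    """actions: list of (name, pre, add, cost) with pre/add as frozensets.
    Returns an admissible lower bound on the optimal delete-free plan cost."""
    goal = frozenset(goal)
    layer = {frozenset(initial_facts): 0}      # node (fact set) -> cheapest cost
    best = 0 if goal <= frozenset(initial_facts) else float('inf')
    for _ in range(len(actions)):              # an optimal delete-free plan uses each action at most once
        nxt = dict(layer)                      # implicit no-op arcs carry nodes forward
        for node, g in layer.items():
            for _name, pre, add, cost in actions:
                if pre <= node and not add <= node:
                    child = node | add
                    nxt[child] = min(nxt.get(child, float('inf')), g + cost)
        if len(nxt) > max_width:               # relax: merge surplus nodes by union
            ordered = sorted(nxt.items(), key=lambda kv: kv[1])
            keep, surplus = ordered[:max_width - 1], ordered[max_width - 1:]
            merged = frozenset().union(*(n for n, _ in surplus))
            nxt = dict(keep)
            nxt[merged] = min(nxt.get(merged, float('inf')),
                              min(c for _, c in surplus))
        for node, g in nxt.items():
            if goal <= node:
                best = min(best, g)
        if nxt == layer:                       # no new or cheaper nodes: stop early
            break
        layer = nxt
    return best

acts = [("load", frozenset({"at-depot"}), frozenset({"loaded"}), 2),
        ("drive", frozenset({"loaded"}), frozenset({"delivered"}), 3)]
print(relaxed_dd_heuristic({"at-depot"}, {"delivered"}, acts))   # 5 (exact here; no merging needed)
```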

Author(s):  
Christian Schwarzl ◽  
Edgar Weippl

This paper systematically describes attempts to forge fingerprints in order to fool biometric systems and reviews the relevant publications on forging fingerprints to fool sensors. The research finds that many of the related works fall short in this respect and that past successes could not be repeated. First, the basics of biometrics are explained in order to define the meaning of the term security in this special context. Next, the state of the art of biometric systems is presented, followed by the topic of the security of fingerprint scanners. For this, a series of more than 30,000 experiments was conducted to fool scanners. The authors were able to reproduce and keep records of every single step in the tests and to show which methods lead to the desired results. Most studies on this topic omit a number of the steps involved in producing a fake finger and fooling a fingerprint scanner, which means that some of them cannot be replicated. In addition, the authors' own ideas and slight variations of existing experimental set-ups are presented.


1996 ◽  
Vol 3 (29) ◽  
Author(s):  
Lars Arge

Ordered Binary-Decision Diagrams (OBDDs) are the state-of-the-art data structure for Boolean function manipulation, and several software packages for OBDD manipulation exist. OBDDs have been used successfully to solve problems in, e.g., digital-systems design, verification and testing, mathematical logic, concurrent system design, and artificial intelligence. The OBDDs used in many of these applications quickly grow larger than the available main memory, and it becomes essential to consider the problem of minimizing the Input/Output (I/O) communication. In this paper we analyze why existing OBDD manipulation algorithms perform poorly in an I/O environment and develop new I/O-efficient algorithms.
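
For readers unfamiliar with the data structure, here is a minimal in-memory sketch of a reduced OBDD with a unique table and a memoized apply (conjunction). It is exactly this kind of pointer-chasing recursion that behaves poorly once the node tables outgrow main memory; the code is illustrative and not taken from the paper.

```python
# Minimal in-memory reduced OBDD with a unique table and a memoized AND.
# Illustrative only: variables are integers, and a smaller integer means
# the variable appears earlier in the ordering.

TRUE, FALSE = 1, 0                  # terminal node ids
nodes = {}                          # id -> (var, lo, hi)
unique = {}                         # (var, lo, hi) -> id, enforces sharing
_next_id = 2

def mk(var, lo, hi):
    """Return the node for 'if var then hi else lo', applying reduction rules."""
    global _next_id
    if lo == hi:                    # redundant test: skip the node
        return lo
    key = (var, lo, hi)
    if key not in unique:
        unique[key] = _next_id
        nodes[_next_id] = key
        _next_id += 1
    return unique[key]

def var_node(var):
    return mk(var, FALSE, TRUE)

def bdd_and(u, v, memo=None):
    """Conjunction by Shannon expansion on the earliest top variable."""
    memo = {} if memo is None else memo
    if u == FALSE or v == FALSE:
        return FALSE
    if u == TRUE:
        return v
    if v == TRUE:
        return u
    if (u, v) in memo:
        return memo[(u, v)]
    xu, lou, hiu = nodes[u]
    xv, lov, hiv = nodes[v]
    x = min(xu, xv)
    u0, u1 = (lou, hiu) if xu == x else (u, u)
    v0, v1 = (lov, hiv) if xv == x else (v, v)
    result = mk(x, bdd_and(u0, v0, memo), bdd_and(u1, v1, memo))
    memo[(u, v)] = result
    return result

f = bdd_and(var_node(1), var_node(2))   # OBDD for x1 AND x2: two internal nodes
```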


Author(s):  
Lee A. Barnett ◽  
Armin Biere

State-of-the-art refutation systems for SAT are largely based on the derivation of clauses meeting some redundancy criteria, ensuring their addition to a formula does not alter its satisfiability. However, there are strong propositional reasoning techniques whose inferences are not easily expressed in such systems. This paper extends the redundancy framework beyond clauses to characterize redundancy for Boolean constraints in general. We show this characterization can be instantiated to develop efficiently checkable refutation systems using redundancy properties for Binary Decision Diagrams (BDDs). Using a form of reverse unit propagation over conjunctions of BDDs, these systems capture, for instance, Gaussian elimination reasoning over XOR constraints encoded in a formula, without the need for clausal translations or extension variables. Notably, these systems generalize those based on the strong Propagation Redundancy (PR) property, without an increase in complexity.
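
As a point of reference, the sketch below shows the clausal reverse-unit-propagation (RUP) check that such systems generalize: a clause is redundant if unit propagation on the formula plus the clause's negation yields a conflict. The paper lifts this style of check from clauses to conjunctions of BDDs; the code is an illustrative clausal baseline only.

```python
# Minimal clausal reverse-unit-propagation (RUP) check: clause C is redundant
# w.r.t. formula F if unit propagation on F plus the negation of C derives a
# conflict. Literals are non-zero integers; a negative integer is a negation.

def unit_propagate(clauses, assignment):
    """Propagate unit clauses; return None on conflict, else the assignment."""
    assignment = dict(assignment)                  # literal -> truth value
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            satisfied, unassigned = False, []
            for lit in clause:
                if assignment.get(lit):            # literal already true
                    satisfied = True
                    break
                if not assignment.get(-lit):       # literal not yet falsified
                    unassigned.append(lit)
            if satisfied:
                continue
            if not unassigned:                     # all literals false
                return None
            if len(unassigned) == 1:               # unit clause forces a value
                assignment[unassigned[0]] = True
                assignment[-unassigned[0]] = False
                changed = True
    return assignment

def is_rup(clause, clauses):
    """True iff `clause` is a RUP consequence of `clauses`."""
    negated = {lit: False for lit in clause}
    negated.update({-lit: True for lit in clause})
    return unit_propagate(clauses, negated) is None

F = [[1, 2], [-1, 2], [1, -2], [-1, -2]]           # unsatisfiable 2-CNF
assert is_rup([2], F)                              # assuming -2 propagates to a conflict
```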


Author(s):  
Dr. Diwakar Ramanuj Tripathi

Abstract: DevOps aims to shorten project schedules, boost productivity, and manage quick development-deployment cycles without compromising business or quality. It necessitates good sprint management. Continuous Testing detects integration issues considerably earlier in the development process. It reduces the cost of defect resolution and frees up the tester's time for exploratory testing and value-added activities. Continuous testing allows for more frequent, shorter, and more efficient releases. It ties people, technology, and processes together. Continuous Planning, particularly effort estimation, is closely linked to Continuous Testing. This paper examines the state of the art of parametric estimation for continuous planning in DevOps, as well as the associated difficulties and best practices. Keywords: Project, Testing, Continuous, Planning
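
As a toy illustration of what a parametric estimate looks like in this setting, the sketch below uses a COCOMO-style power-law model; the coefficients, cost drivers, and function names are illustrative assumptions and do not come from the paper.

```python
# Toy COCOMO-style parametric estimate for a planning increment.
# Illustrative assumption: effort = a * size^b * product(cost-driver multipliers).

def parametric_effort(ksloc, cost_drivers=(), a=2.94, b=1.10):
    """Return estimated effort in person-months for `ksloc` thousand lines of code."""
    eaf = 1.0
    for multiplier in cost_drivers:   # e.g. required reliability, team experience
        eaf *= multiplier
    return a * (ksloc ** b) * eaf

# A 5 KSLOC increment with one slightly unfavourable cost driver (1.15):
print(round(parametric_effort(5.0, [1.15]), 1))  # ~19.9 person-months
```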


Author(s):  
Masaaki Nishino ◽  
Norihito Yasuda ◽  
Kengo Nakamura

Exact cover is the problem of finding a subfamily F of a given family of sets S with universe D such that F forms a partition of D. Knuth's Algorithm DLX is a state-of-the-art method for solving exact cover problems. Since DLX's running time depends on the cardinality of the input S, it can be slow if S is large. Our proposal improves DLX by exploiting a novel data structure, DanceDD, which extends the zero-suppressed binary decision diagram (ZDD) with links that enable efficient modification of the structure. With DanceDD, we can represent S compactly and, using link operations, perform the search in time linear in the size of the structure. The experimental results show that our method is an order of magnitude faster when the problem is highly compressible.
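
For context, the sketch below is a compact dictionary-based rendering of Knuth's Algorithm X, the backtracking search that DLX (and, per the paper, DanceDD) carries out with constant-time link manipulation; the data layout here is illustrative rather than the paper's.

```python
# Compact dictionary-based version of Knuth's Algorithm X for exact cover.
# X maps each element of the universe to the names of the sets containing it;
# S maps each set name to its elements.

def solve(X, S, partial=None):
    partial = [] if partial is None else partial
    if not X:                                     # every element covered exactly once
        yield list(partial)
        return
    elem = min(X, key=lambda e: len(X[e]))        # element with fewest candidate sets
    for name in list(X[elem]):
        partial.append(name)
        removed = cover(X, S, name)
        yield from solve(X, S, partial)
        uncover(X, S, removed)
        partial.pop()

def cover(X, S, name):
    """Remove the chosen set's elements and every set that clashes with it."""
    removed = []
    for e in S[name]:
        for other in X[e]:
            for e2 in S[other]:
                if e2 != e:
                    X[e2].remove(other)
        removed.append((e, X.pop(e)))
    return removed

def uncover(X, S, removed):
    for e, column in reversed(removed):
        X[e] = column
        for other in column:
            for e2 in S[other]:
                if e2 != e:
                    X[e2].add(other)

S = {"A": [1, 4, 7], "B": [1, 4], "C": [4, 5, 7], "D": [3, 5, 6],
     "E": [2, 3, 6, 7], "F": [2, 7]}
X = {e: {n for n in S if e in S[n]} for elems in S.values() for e in elems}
print(next(solve(X, S)))                          # ['B', 'D', 'F']
```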


2022 ◽  
Vol 19 (1) ◽  
pp. 1-26
Author(s):  
Mengya Lei ◽  
Fan Li ◽  
Fang Wang ◽  
Dan Feng ◽  
Xiaomin Zou ◽  
...  

Data security is an indispensable part of non-volatile memory (NVM) systems. However, implementing data security efficiently on NVM is challenging, since we have to guarantee the consistency of user data and the related security metadata. Existing consistency schemes ignore the recoverability of the SGX-style integrity tree (SIT) and the access correlation between metadata blocks, thereby generating unnecessary NVM write traffic. In this article, we propose SecNVM, an efficient and write-friendly metadata crash-consistency scheme for secure NVM. SecNVM utilizes the observation that, for a lazily updated SIT, the tree nodes lost after a crash can be recovered from the corresponding child nodes in NVM. It reduces the SIT persistency overhead through a restrained write-back metadata cache and exploits the SIT inter-layer dependency for recovery. Next, leveraging the strong access correlation between the counter and the DMAC, SecNVM improves the efficiency of security metadata access through a novel collaborative counter-DMAC scheme. In addition, it adopts a lightweight address tracker to reduce the cost of address tracking for fast recovery. Experiments show that, compared to state-of-the-art schemes, SecNVM improves performance, substantially reduces write traffic, and achieves an acceptable recovery time.
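
The recoverability observation can be illustrated with a toy two-level counter tree in which each write bumps both a persisted leaf counter and a cached per-child counter in the parent, so a lost parent entry can be rebuilt as the sum of the corresponding child's persisted counters. This is a deliberately simplified model for illustration, not SecNVM's actual SIT layout or recovery procedure.

```python
# Toy model of the inter-layer dependency that makes a lazily persisted
# counter tree recoverable. Assumption (not SecNVM's exact scheme): the parent's
# per-child entry always equals the sum of that child's persisted counters.

class ToyCounterTree:
    def __init__(self, children, counters_per_child):
        # "NVM": leaf counter nodes are persisted on every write.
        self.leaves = [[0] * counters_per_child for _ in range(children)]
        # "Cache": the parent node is updated lazily and may be lost on a crash.
        self.parent = [0] * children

    def write_block(self, child, slot):
        self.leaves[child][slot] += 1      # persisted immediately
        self.parent[child] += 1            # cached, written back lazily

    def crash_and_recover(self):
        self.parent = None                 # cached parent lost in the crash
        # Rebuild the lost parent entries from the children still in NVM.
        self.parent = [sum(leaf) for leaf in self.leaves]
        return self.parent

tree = ToyCounterTree(children=2, counters_per_child=4)
for _ in range(3):
    tree.write_block(0, 1)
tree.write_block(1, 2)
assert tree.crash_and_recover() == [3, 1]
```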


2020 ◽  
Vol 34 (04) ◽  
pp. 4585-4593
Author(s):  
Alexander Levine ◽  
Soheil Feizi

Recently, techniques have been developed to provably guarantee the robustness of a classifier to adversarial perturbations of bounded L1 and L2 magnitudes by using randomized smoothing: the robust classification is a consensus of base classifications on randomly noised samples, where the noise is additive. In this paper, we extend this technique to the L0 threat model. We propose an efficient and certifiably robust defense against sparse adversarial attacks by randomly ablating input features, rather than using additive noise. Experimentally, on MNIST, we can certify the classifications of over 50% of images to be robust to any distortion of at most 8 pixels. This is comparable to the observed empirical robustness of unprotected classifiers on MNIST to modern L0 attacks, demonstrating the tightness of the proposed robustness certificate. We also evaluate our certificate on ImageNet and CIFAR-10. Our certificates represent an improvement on those provided in a concurrent work (Lee et al. 2019), which uses random noise rather than ablation (median certificates of 8 pixels versus 4 pixels on MNIST; 16 pixels versus 1 pixel on ImageNet). Additionally, we empirically demonstrate that our classifier is highly robust to modern sparse adversarial attacks on MNIST. Our classifications are robust, in median, to adversarial perturbations of up to 31 pixels, compared to the 22 pixels reported by the state-of-the-art defense, at the cost of a slight decrease (around 2.3%) in classification accuracy. Code and supplementary material are available at https://github.com/alevine0/randomizedAblation/.
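
The smoothing step itself is simple to sketch: keep k randomly chosen pixels, null out the rest, and take the consensus of the base classifier over many such ablations. The function names, the choice of k, the zero null-encoding, and the sample count below are illustrative assumptions; the computation of the certified radius from the vote counts is not shown.

```python
import numpy as np

# Sketch of classification by random ablation: keep k random pixels, null out
# the rest, and take the consensus of the base classifier over many ablated
# copies. `base_classifier` is assumed to map an image to an integer label.

def ablate(image, k, rng):
    """Return a copy of `image` with all but k randomly kept pixels nulled (to 0 here)."""
    flat = image.reshape(-1, image.shape[-1]) if image.ndim == 3 else image.reshape(-1, 1)
    kept = rng.choice(flat.shape[0], size=k, replace=False)
    out = np.zeros_like(flat)
    out[kept] = flat[kept]
    return out.reshape(image.shape)

def smoothed_predict(base_classifier, image, num_classes, k=20, n_samples=1000, seed=0):
    """Consensus prediction and per-class vote counts over random ablations."""
    rng = np.random.default_rng(seed)
    votes = np.zeros(num_classes, dtype=int)
    for _ in range(n_samples):
        votes[base_classifier(ablate(image, k, rng))] += 1
    return int(votes.argmax()), votes
```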


Author(s):  
Tung Chou ◽  
Matthias J. Kannwischer ◽  
Bo-Yin Yang

We present the first Cortex-M4 implementation of the NIST PQC signature finalist Rainbow. We target the Giant Gecko EFM32GG11B, which comes with 512 kB of RAM and can easily accommodate the keys of Rainbow I. We present fast constant-time bitsliced F16 multiplication, allowing multiplication of 32 field elements in 32 clock cycles. Additionally, we introduce a new way of computing the public map P in the verification procedure, allowing vastly faster signature verification. Both the signing and verification procedures of our implementation are by far the fastest among the NIST PQC signature finalists. Signing of Rainbow I classic requires roughly 957 000 clock cycles, which is 4× faster than the state-of-the-art Dilithium2 implementation and 45× faster than Falcon-512. Verification needs about 239 000 cycles, which is 5× and 2× faster, respectively. The cost of signing can be further decreased by 20% when the secret key is stored in a bitsliced representation.
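
The bitsliced idea can be sketched in a few lines of Python (the paper's implementation is Cortex-M4 code): each of the four bit-planes of 32 field elements is packed into one 32-bit word, so a lane-wise GF(16) multiplication becomes a fixed sequence of AND/XOR word operations plus a reduction, assumed here to be modulo x^4 + x + 1.

```python
import random

# Bitsliced GF(16) multiplication sketch. Each of a0..a3 / b0..b3 is a 32-bit
# word holding bit-plane i of 32 field elements, so one call multiplies all
# 32 elements lane-wise. Assumption: GF(16) is represented modulo x^4 + x + 1.

MASK = 0xFFFFFFFF

def gf16_bitsliced_mul(a, b):
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    # Schoolbook carry-less product: c0..c6 are bit-planes of a(x) * b(x).
    c0 = a0 & b0
    c1 = (a0 & b1) ^ (a1 & b0)
    c2 = (a0 & b2) ^ (a1 & b1) ^ (a2 & b0)
    c3 = (a0 & b3) ^ (a1 & b2) ^ (a2 & b1) ^ (a3 & b0)
    c4 = (a1 & b3) ^ (a2 & b2) ^ (a3 & b1)
    c5 = (a2 & b3) ^ (a3 & b2)
    c6 = a3 & b3
    # Reduce using x^4 = x + 1, x^5 = x^2 + x, x^6 = x^3 + x^2.
    return ((c0 ^ c4) & MASK, (c1 ^ c4 ^ c5) & MASK,
            (c2 ^ c5 ^ c6) & MASK, (c3 ^ c6) & MASK)

def slice32(values):
    """Pack 32 GF(16) elements (ints 0..15) into four 32-bit bit-planes."""
    planes = [0, 0, 0, 0]
    for lane, v in enumerate(values):
        for i in range(4):
            planes[i] |= ((v >> i) & 1) << lane
    return tuple(planes)

def unslice32(planes):
    return [sum(((planes[i] >> lane) & 1) << i for i in range(4)) for lane in range(32)]

def gf16_mul_scalar(x, y):
    """Reference scalar multiply in GF(2)[x]/(x^4 + x + 1)."""
    r = 0
    for _ in range(4):
        if y & 1:
            r ^= x
        y >>= 1
        x <<= 1
        if x & 0x10:
            x ^= 0x13
    return r

# Lane-wise check of the bitsliced routine against the scalar reference.
xs = [random.randrange(16) for _ in range(32)]
ys = [random.randrange(16) for _ in range(32)]
assert unslice32(gf16_bitsliced_mul(slice32(xs), slice32(ys))) == \
       [gf16_mul_scalar(x, y) for x, y in zip(xs, ys)]
```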

