Robustness Certificates for Sparse Adversarial Attacks by Randomized Ablation

2020 ◽  
Vol 34 (04) ◽  
pp. 4585-4593
Author(s):  
Alexander Levine ◽  
Soheil Feizi

Recently, techniques have been developed to provably guarantee the robustness of a classifier to adversarial perturbations of bounded L1 and L2 magnitudes by using randomized smoothing: the robust classification is a consensus of base classifications on randomly noised samples, where the noise is additive. In this paper, we extend this technique to the L0 threat model. We propose an efficient and certifiably robust defense against sparse adversarial attacks that randomly ablates input features, rather than adding noise. Experimentally, on MNIST, we can certify the classifications of over 50% of images to be robust to any distortion of at most 8 pixels. This is comparable to the observed empirical robustness of unprotected classifiers on MNIST against modern L0 attacks, demonstrating the tightness of the proposed robustness certificate. We also evaluate our certificate on ImageNet and CIFAR-10. Our certificates improve on those of a concurrent work (Lee et al. 2019), which uses random noise rather than ablation (median certificates of 8 pixels versus 4 pixels on MNIST; 16 pixels versus 1 pixel on ImageNet). Additionally, we empirically demonstrate that our classifier is highly robust to modern sparse adversarial attacks on MNIST. Our classifications are robust, in median, to adversarial perturbations of up to 31 pixels, compared to 22 pixels reported for the state-of-the-art defense, at the cost of a slight decrease (around 2.3%) in classification accuracy. Code and supplementary material are available at https://github.com/alevine0/randomizedAblation/.
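The ablation-and-consensus scheme described above can be sketched as follows. The `base_classifier` callable (which here receives both the ablated image and a mask distinguishing ablated pixels from genuinely black ones) and the retention count `k` are illustrative assumptions, not the paper's exact interface:

```python
import numpy as np

def ablate(image, k, rng):
    """Keep k randomly chosen pixels; zero out the rest. A boolean mask is
    returned alongside so 'ablated' remains distinguishable from 'black'."""
    flat = image.reshape(-1)
    keep = rng.choice(flat.size, size=k, replace=False)
    mask = np.zeros(flat.size, dtype=bool)
    mask[keep] = True
    out = np.where(mask, flat, 0.0)
    return out.reshape(image.shape), mask.reshape(image.shape)

def smoothed_classify(image, base_classifier, k=45, n_samples=100, seed=0):
    """Consensus of base classifications on randomly ablated copies: the
    majority-vote label is the smoothed classifier's output."""
    rng = np.random.default_rng(seed)
    votes = {}
    for _ in range(n_samples):
        ablated, mask = ablate(image, k, rng)
        label = base_classifier(ablated, mask)
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get), votes
```

The certificate then follows from the fact that a perturbation of r pixels can only change a base classification when one of those r pixels survives the ablation, which happens with a probability that shrinks with k.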

2020 ◽  
Vol 67 ◽  
pp. 607-651
Author(s):  
Margarita Paz Castro ◽  
Chiara Piacentini ◽  
Andre Augusto Cire ◽  
J. Christopher Beck

We investigate the use of relaxed decision diagrams (DDs) for computing admissible heuristics for the cost-optimal delete-free planning (DFP) problem. Our main contributions are the introduction of two novel DD encodings for a DFP task: a multivalued decision diagram that includes the sequencing aspect of the problem and a binary decision diagram representation of its sequential relaxation. We present construction algorithms for each DD that leverage these different perspectives of the DFP task and provide theoretical and empirical analyses of the associated heuristics. We further show that relaxed DDs can be used beyond heuristic computation to extract delete-free plans, find action landmarks, and identify redundant actions. Our empirical analysis shows that while DD-based heuristics trail the state of the art, even small relaxed DDs are competitive with the linear programming heuristic for the DFP task, thus revealing novel ways of designing admissible heuristics.
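As a rough illustration of how a width-limited relaxed DD yields an admissible heuristic for a delete-free task, the sketch below builds layers of fact-sets and, whenever a layer exceeds the width limit, merges states by taking the union of their facts. Merging only relaxes the problem, so the cheapest goal-reaching path remains a lower bound. This deliberately simplified encoding is not the paper's MDD or BDD construction:

```python
def relaxed_dd_lower_bound(init, goal, actions, width=4, max_layers=50):
    """Lower bound on the cost of a delete-free task.
    actions: list of (preconditions, add-effects, cost) with set-valued
    preconditions/effects. States are sets of achieved facts; each layer
    applies one action, and layers wider than `width` are merged."""
    layer = {frozenset(init): 0}
    best = float('inf')
    for _ in range(max_layers):
        for s, c in layer.items():
            if goal <= s:
                best = min(best, c)
        nxt = {}
        for s, c in layer.items():
            if c >= best:
                continue  # cannot yield a cheaper goal path
            for pre, add, cost in actions:
                if pre <= s and not add <= s:  # applicable and productive
                    ns, nc = s | add, c + cost
                    if nc < nxt.get(ns, float('inf')):
                        nxt[ns] = nc
        if not nxt:
            break
        if len(nxt) > width:
            # Merge the most expensive states into one relaxed node:
            # union of facts, minimum of costs (preserves admissibility).
            items = sorted(nxt.items(), key=lambda kv: kv[1])
            keep, rest = items[:width - 1], items[width - 1:]
            merged = frozenset().union(*(s for s, _ in rest))
            mcost = min(c for _, c in rest)
            nxt = dict(keep)
            nxt[merged] = min(nxt.get(merged, float('inf')), mcost)
        layer = nxt
    return best
```

With a large width the diagram is exact; shrinking the width trades heuristic accuracy for diagram size, which is the central dial in DD-based heuristics.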


Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 542
Author(s):  
Muhammad Mateen ◽  
Tauqeer Safdar Malik ◽  
Shaukat Hayat ◽  
Musab Hameed ◽  
Song Sun ◽  
...  

In diabetic retinopathy (DR), the early signs that may lead toward complete vision loss are microaneurysms (MAs). MAs are nearly circular in shape, darkish in color, and tiny in size, which means they may be missed in manual analysis by ophthalmologists. Accurate early detection of microaneurysms is therefore helpful for treating DR before irreversible blindness occurs. In the proposed method, early detection of MAs is performed using a hybrid feature embedding of two pre-trained CNN models, VGG-19 and Inception-v3. The performance of the proposed approach was evaluated using the publicly available datasets "E-Ophtha" and "DIARETDB1", achieving 96% and 94% classification accuracy, respectively. Furthermore, the developed approach outperformed state-of-the-art approaches in terms of sensitivity and specificity for microaneurysm detection.
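A minimal sketch of the hybrid-embedding step: features from two backbones are concatenated into one vector for the downstream classifier. The extractor callables stand in for frozen VGG-19 and Inception-v3 feature layers, and the per-branch L2 normalization is an assumed design choice, not necessarily the paper's:

```python
import numpy as np

def hybrid_embedding(image, extract_a, extract_b):
    """Concatenate deep features from two pre-trained backbones.
    extract_a / extract_b: callables mapping an image to a 1-D feature
    vector (e.g. VGG-19 fc features and Inception-v3 pooled features).
    Each branch is L2-normalized so neither dominates by scale."""
    f1 = extract_a(image)
    f1 = f1 / np.linalg.norm(f1)
    f2 = extract_b(image)
    f2 = f2 / np.linalg.norm(f2)
    return np.concatenate([f1, f2])
```

The resulting embedding would then feed a small classification head trained to separate MA from non-MA patches.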


Author(s):  
Dr. Diwakar Ramanuj Tripathi

Abstract: DevOps aims to shorten project schedules, boost productivity, and manage rapid development-deployment cycles without compromising business goals or quality. It necessitates good sprint management. Continuous Testing detects integration issues considerably earlier in the development process; it reduces the cost of defect resolution and frees up the tester's time for exploratory testing and other value-added activities. Continuous Testing allows for more frequent, shorter, and more efficient releases, and it ties people, technology, and processes together. Continuous Planning, particularly effort estimation, is closely linked to Continuous Testing. This paper examines the state of the art in parametric estimation for continuous planning in DevOps, as well as the associated difficulties and best practices. Keywords: Project, Testing, Continuous, Planning


Author(s):  
Chihuang Liu ◽  
Joseph JaJa

Adversarial training has been successfully applied to build robust models, at a certain cost: as the robustness of a model increases, its standard classification accuracy declines. This phenomenon is suggested to be an inherent trade-off. We propose a model that employs feature prioritization through a nonlinear attention module and L2 feature regularization to improve both adversarial robustness and standard accuracy relative to adversarial training. The attention module encourages the model to rely heavily on robust features by assigning larger weights to them while suppressing non-robust features. The regularizer encourages the model to extract similar features for natural and adversarial images, effectively ignoring the added perturbation. In addition to evaluating the robustness of our model, we provide justification for the attention module and propose a novel experimental strategy that quantitatively demonstrates that our model is almost ideally aligned with salient data characteristics. Additional experimental results illustrate the power of our model relative to state-of-the-art methods.
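The feature-regularization idea can be illustrated as a training loss that combines the usual cross-entropy on the adversarial example with an L2 penalty between natural and adversarial feature vectors. The weight `lam` and the exact form of the penalty are assumptions for illustration, not the paper's published hyperparameters:

```python
import numpy as np

def cross_entropy(logits, label):
    """Numerically stable softmax cross-entropy for one example."""
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def robust_loss(feat_nat, feat_adv, logits_adv, label, lam=0.01):
    """Adversarial-training loss plus an L2 term pulling the features of
    the adversarial image toward those of the natural image, so the
    network learns to ignore the added perturbation."""
    ce = cross_entropy(logits_adv, label)
    reg = np.sum((feat_nat - feat_adv) ** 2)
    return ce + lam * reg
```

When the two feature vectors coincide, the penalty vanishes and the loss reduces to plain adversarial training.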


2022 ◽  
Vol 19 (1) ◽  
pp. 1-26
Author(s):  
Mengya Lei ◽  
Fan Li ◽  
Fang Wang ◽  
Dan Feng ◽  
Xiaomin Zou ◽  
...  

Data security is an indispensable part of non-volatile memory (NVM) systems. However, implementing data security efficiently on NVM is challenging, since we have to guarantee the consistency of user data and the related security metadata. Existing consistency schemes ignore the recoverability of the SGX style integrity tree (SIT) and the access correlation between metadata blocks, thereby generating unnecessary NVM write traffic. In this article, we propose SecNVM, an efficient and write-friendly metadata crash consistency scheme for secure NVM. SecNVM utilizes the observation that for a lazily updated SIT, the lost tree nodes after a crash can be recovered from the corresponding child nodes in NVM. It reduces the SIT persistency overhead through a restrained write-back metadata cache and exploits the SIT inter-layer dependency for recovery. Next, leveraging the strong access correlation between the counter and DMAC, SecNVM improves the efficiency of security metadata access through a novel collaborative counter-DMAC scheme. In addition, it adopts a lightweight address tracker to reduce the cost of address tracking for fast recovery. Experiments show that, compared to state-of-the-art schemes, SecNVM improves performance, significantly decreases write traffic, and achieves an acceptable recovery time.
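The recovery observation — that a lazily persisted integrity-tree node can be rebuilt from its children in NVM — can be illustrated with a simplified hash tree. The real SIT uses counters and MACs rather than SHA-256, so this is a structural stand-in only:

```python
import hashlib

def node_digest(children):
    """Parent node derived deterministically from its children, so any
    parent lost in a crash is recomputable as long as the children are
    persistent."""
    h = hashlib.sha256()
    for c in children:
        h.update(c)
    return h.digest()

def recover_tree(leaves, arity=2):
    """Rebuild all upper tree layers bottom-up from persisted leaves:
    the recovery path SecNVM exploits to avoid eagerly persisting
    interior nodes. Returns the layers from leaves up to the root."""
    layers = [list(leaves)]
    while len(layers[-1]) > 1:
        prev = layers[-1]
        layers.append([node_digest(prev[i:i + arity])
                       for i in range(0, len(prev), arity)])
    return layers
```

Because interior nodes are derivable, only the leaf-level metadata must be made crash-consistent eagerly, which is what cuts the write traffic.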


Author(s):  
Tung Chou ◽  
Matthias J. Kannwischer ◽  
Bo-Yin Yang

We present the first Cortex-M4 implementation of the NIST PQC signature finalist Rainbow. We target the Giant Gecko EFM32GG11B, which comes with 512 kB of RAM and can easily accommodate the keys of RainbowI. We present fast constant-time bitsliced F16 multiplication allowing multiplication of 32 field elements in 32 clock cycles. Additionally, we introduce a new way of computing the public map P in the verification procedure, allowing vastly faster signature verification. Both the signing and verification procedures of our implementation are by far the fastest among the NIST PQC signature finalists. Signing of RainbowI-classic requires roughly 957 000 clock cycles, which is 4× faster than the state-of-the-art Dilithium2 implementation and 45× faster than Falcon-512. Verification needs about 239 000 cycles, which is 5× and 2× faster, respectively. The cost of signing can be further decreased by 20% by storing the secret key in a bitsliced representation.
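Bitsliced F16 multiplication can be sketched in portable code: 32 field elements are stored as four 32-bit coefficient planes (bit i of plane j holds the x^j coefficient of lane i), so one elementwise GF(16) multiply becomes a fixed sequence of ANDs and XORs across all 32 lanes at once. The reduction polynomial x^4 + x + 1 is an assumption for illustration; the principle is identical for any GF(16) modulus:

```python
def slice32(vals):
    """Pack 32 field elements (0..15) into four 32-bit bit-planes."""
    planes = [0, 0, 0, 0]
    for i, v in enumerate(vals):
        for j in range(4):
            if (v >> j) & 1:
                planes[j] |= 1 << i
    return tuple(planes)

def unslice32(planes):
    """Inverse of slice32: recover the 32 field elements."""
    return [sum(((planes[j] >> i) & 1) << j for j in range(4))
            for i in range(32)]

def f16_mul_bitsliced(a, b):
    """Multiply 32 GF(16) elements in parallel over GF(2)[x]/(x^4+x+1).
    Schoolbook polynomial product (AND = coefficient product,
    XOR = GF(2) addition), then reduction via x^4 = x + 1."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    c0 = a0 & b0
    c1 = (a0 & b1) ^ (a1 & b0)
    c2 = (a0 & b2) ^ (a1 & b1) ^ (a2 & b0)
    c3 = (a0 & b3) ^ (a1 & b2) ^ (a2 & b1) ^ (a3 & b0)
    c4 = (a1 & b3) ^ (a2 & b2) ^ (a3 & b1)  # degree-4 term
    c5 = (a2 & b3) ^ (a3 & b2)              # degree-5 term
    c6 = a3 & b3                            # degree-6 term
    # Fold high terms back down: x^4 = x+1, x^5 = x^2+x, x^6 = x^3+x^2.
    return (c0 ^ c4, c1 ^ c4 ^ c5, c2 ^ c5 ^ c6, c3 ^ c6)
```

On the Cortex-M4 each line maps to one logical instruction on 32-bit registers, which is how the implementation reaches one multiplied element per cycle on average.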


Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3590 ◽  
Author(s):  
Kyoungtaek Choi ◽  
Jae Kyu Suhr ◽  
Ho Gi Jung

In order to overcome the limitations of GNSS/INS and to keep the cost affordable for mass-produced vehicles, a precise localization system is being developed that fuses vehicle positions estimated from low-cost GNSS/INS with low-cost perception sensors. For vehicle position estimation, a perception sensor detects a road facility and uses it as a landmark. For this localization system, this paper proposes a method to detect a road sign as a landmark using a monocular camera, whose cost is relatively low compared to other perception sensors. Since the inner patterns and aspect ratios of road signs vary, the proposed method takes a part-based approach that detects corners and combines them to detect a road sign. While the recall, precision, and processing time of a state-of-the-art detector based on a convolutional neural network are 99.63%, 98.16%, and 4802 ms, respectively, those of the proposed method are 97.48%, 98.78%, and 66.7 ms. The detection performance of the proposed method is as good as that of the state-of-the-art detector, and its processing time is drastically reduced, making it applicable to an embedded system.
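The combination step of such a part-based detector can be illustrated as follows: given candidate points from four hypothetical per-corner detectors, quadruples whose implied top/bottom widths and left/right heights agree within a tolerance are kept as sign candidates. The geometry test is an assumed simplification of the paper's combination stage:

```python
def combine_corners(tl, tr, bl, br, tol=0.2):
    """Combine independently detected corner candidates into road-sign
    candidates. tl/tr/bl/br: lists of (x, y) points for top-left,
    top-right, bottom-left, bottom-right corner detections."""
    signs = []
    for p1 in tl:
        for p2 in tr:
            if p2[0] <= p1[0]:          # top-right must be right of top-left
                continue
            for p3 in bl:
                if p3[1] <= p1[1]:      # bottom-left must be below top-left
                    continue
                for p4 in br:
                    w_top, w_bot = p2[0] - p1[0], p4[0] - p3[0]
                    h_left, h_right = p3[1] - p1[1], p4[1] - p2[1]
                    if w_bot <= 0 or h_right <= 0:
                        continue
                    # Accept when opposite sides agree within tolerance.
                    if (abs(w_top - w_bot) <= tol * w_top
                            and abs(h_left - h_right) <= tol * h_left):
                        signs.append((p1, p2, p3, p4))
    return signs
```

Because each corner detector is cheap and the combination test is purely geometric, the whole pipeline avoids the cost of a full CNN detector, consistent with the reported 66.7 ms runtime.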


Author(s):  
J.P. Sprong ◽  
X. Jiang ◽  
H. Polinder

Historical records show that the cost of operating and supporting an aircraft may exceed the initial purchase price by as much as ten times. Maintenance, repair, and overhaul activities represent around 10-15% of an airline's annual operational costs. Therefore, optimization of maintenance operations to minimize cost is extremely important for airlines in order to stay competitive. Prognostics, a process to predict the remaining useful life of systems and/or components suffering from aging or degradation, has been recognized as one of the revolutionary disciplines that can improve the efficiency of aircraft operations and optimize aircraft maintenance. This study focuses on literature that has used prognostics to optimize aircraft maintenance and identifies research gaps for further optimization of aircraft maintenance in commercial aviation. In this paper, the origin and development of prognostics is first introduced. Thereafter, the state of the art in aircraft maintenance is reviewed. Next, the applicability of prognostics to optimize aircraft maintenance is explained and reviewed, and potential challenges and opportunities are explored. Finally, the state of the art of prognostics in aircraft maintenance is discussed and research gaps are identified from the perspective of deploying prognostics to optimize aircraft maintenance.

