A Synergetic Attack against Neural Network Classifiers combining Backdoor and Adversarial Examples

Author(s):  
Guanxiong Liu ◽  
Issa Khalil ◽  
Abdallah Khreishah ◽  
NhatHai Phan


2021 ◽  
Vol 2078 (1) ◽  
pp. 012050
Author(s):  
Duo Li ◽  
Chaoqun Dong ◽  
Qianchao Liu

Abstract Neural networks have made remarkable achievements in the field of image classification, but they are threatened by adversarial examples when deployed, which puts the robustness of neural network classifiers at risk. Programs or software built on neural network image classifiers therefore need to undergo rigorous robustness testing before use and release in order to reduce losses and security risks. To test the robustness of neural network image classifiers comprehensively and to standardize the test process, a variety of robustness test sets are constructed along two dimensions, generated content and interference intensity, and a robustness testing framework suitable for neural network classifiers is proposed. The feasibility and effectiveness of the test framework and method are verified by testing LeNet-5 and a model hardened by adversarial training.
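As a rough illustration of the kind of testing the abstract describes, the sketch below builds test sets at several interference intensities and reports a classifier's accuracy on each. It is not the authors' framework; the choice of FGSM and Gaussian perturbations, the intensity values, and all helper names are assumptions.

```python
# Minimal sketch (assumed, not the paper's framework): evaluate a classifier
# on test sets of increasing interference intensity.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps):
    """One-step FGSM perturbation with L-infinity budget eps (hypothetical helper)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

def gaussian_perturb(x, sigma):
    """Additive Gaussian noise with standard deviation sigma."""
    return (x + sigma * torch.randn_like(x)).clamp(0.0, 1.0)

@torch.no_grad()
def accuracy(model, x, y):
    return (model(x).argmax(dim=1) == y).float().mean().item()

def robustness_report(model, x, y, intensities=(0.05, 0.1, 0.2, 0.3)):
    """Accuracy on the clean set and on perturbed sets of increasing intensity."""
    model.eval()
    report = {"clean": accuracy(model, x, y)}
    for eps in intensities:
        report[f"fgsm_{eps}"] = accuracy(model, fgsm_perturb(model, x, y, eps), y)
        report[f"noise_{eps}"] = accuracy(model, gaussian_perturb(x, eps), y)
    return report
```

A LeNet-5-style model trained on MNIST, for example, could be passed to robustness_report together with a batch of test images and labels to obtain a per-intensity accuracy table.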


2021 ◽  
Vol 2021 (4) ◽  
Author(s):  
Jack Y. Araz ◽  
Michael Spannowsky

Abstract Ensemble learning is a technique in which multiple component learners are combined through a protocol. We propose an Ensemble Neural Network (ENN) that uses the combined latent-feature space of multiple neural network classifiers to improve the representation of the network hypothesis. We apply this approach to construct an ENN from Convolutional and Recurrent Neural Networks to discriminate top-quark jets from QCD jets. Such an ENN provides the flexibility to improve the classification beyond simple prediction-combining methods by linking different sources of error correlations, hence improving the representation between data and hypothesis. In combination with Bayesian techniques, we show that it can reduce epistemic uncertainties and the entropy of the hypothesis by simultaneously exploiting various kinematic correlations of the system, which also makes the network less susceptible to limited training sample sizes.
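A minimal sketch of the general idea, concatenating the latent features of a CNN branch and an RNN branch before a joint classification head, is shown below. The layer sizes, input shapes (a jet image and a sequence of constituent four-vectors), and all names are assumptions rather than the authors' architecture.

```python
# Minimal sketch (assumed architecture): ensemble network over a combined
# latent-feature space of a CNN branch and an RNN branch.
import torch
import torch.nn as nn

class EnsembleNet(nn.Module):
    def __init__(self, seq_features=4, latent=32):
        super().__init__()
        # CNN branch over a 1-channel jet image (hypothetical 40x40 grid).
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, latent), nn.ReLU(),
        )
        # RNN branch over a sequence of constituent four-vectors.
        self.rnn = nn.GRU(seq_features, latent, batch_first=True)
        # Joint head acting on the combined latent-feature space.
        self.head = nn.Sequential(nn.Linear(2 * latent, 64), nn.ReLU(),
                                  nn.Linear(64, 2))

    def forward(self, image, sequence):
        z_cnn = self.cnn(image)              # (batch, latent)
        _, h = self.rnn(sequence)            # h: (1, batch, latent)
        z = torch.cat([z_cnn, h.squeeze(0)], dim=1)
        return self.head(z)                  # top-quark vs QCD logits

# Usage: logits = EnsembleNet()(torch.rand(8, 1, 40, 40), torch.rand(8, 30, 4))
```

Combining latent features rather than averaging predictions is what lets the joint head exploit correlations between the two branches' errors, which is the distinction the abstract draws against simple prediction-combining methods.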


Electronics ◽  
2020 ◽  
Vol 10 (1) ◽  
pp. 52
Author(s):  
Richard Evan Sutanto ◽  
Sukho Lee

Several recent studies have shown that artificial intelligence (AI) systems can malfunction due to intentionally manipulated data arriving through normal channels. Such manipulated data are called adversarial examples. Adversarial examples can pose a major threat to an AI-led society when an attacker uses them as a means to attack an AI system, which is called an adversarial attack. Therefore, major IT companies such as Google are now studying ways to build AI systems that are robust against adversarial attacks by developing effective defense methods. However, one reason it is difficult to establish an effective defense system is that it is hard to know in advance what kind of adversarial attack method the opponent is using. Therefore, in this paper, we propose a method to detect adversarial noise without knowledge of the kind of adversarial noise used by the attacker. To this end, we propose a blurring network that is trained only with normal images and use it as the initial condition of the Deep Image Prior (DIP) network. This is in contrast to other neural-network-based detection methods, which require many adversarial noisy images to train the neural network. Experimental results indicate the validity of the proposed method.
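A minimal sketch of one way such a detector could be wired together is shown below. It is not the authors' exact pipeline (which uses the blurring network as the initial condition of a DIP optimization); the network sizes, the prediction-flip criterion, and all names are assumptions.

```python
# Minimal sketch (assumed, simplified stand-in for the paper's method): a small
# "blurring network" trained only on clean images; an input is flagged as
# adversarial when the classifier's prediction flips after blurring.
import torch
import torch.nn as nn

class BlurNet(nn.Module):
    """Shallow convolutional blurring/denoising network trained on normal images only."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def flag_adversarial(classifier, blur_net, x):
    """Boolean mask per sample: True where the prediction changes after blurring."""
    classifier.eval(); blur_net.eval()
    pred_raw = classifier(x).argmax(dim=1)
    pred_blur = classifier(blur_net(x).clamp(0.0, 1.0)).argmax(dim=1)
    return pred_raw != pred_blur
```

In the paper's setting, the trained blurring network additionally serves as the initial condition for the DIP reconstruction of each test image; the prediction-flip test above is only an illustrative simplification.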


Author(s):  
Dat Duong ◽  
Rebekah L. Waikel ◽  
Ping Hu ◽  
Cedrik Tekendo-Ngongang ◽  
Benjamin D. Solomon

BMC Genomics ◽  
2016 ◽  
Vol 17 (1) ◽  
Author(s):  
Juan Manuel González-Camacho ◽  
José Crossa ◽  
Paulino Pérez-Rodríguez ◽  
Leonardo Ornella ◽  
Daniel Gianola
