Adversarial example generation with adaptive gradient search for single and ensemble deep neural network

2020 ◽  
Vol 528 ◽  
pp. 147-167
Author(s):  
Yatie Xiao ◽  
Chi-Man Pun ◽  
Bo Liu

Author(s):  
Chunlong Fan ◽  
Cailong Li ◽  
Jici Zhang ◽  
Yiping Teng ◽  
Jianzhong Qiao

Neural network technology has achieved good results in many tasks, such as image classification. However, adding carefully designed, imperceptible perturbations to certain inputs produces adversarial examples that change the network's output relative to the original inputs. For image classification problems, we derive low-dimensional attack-perturbation solutions for multidimensional linear classifiers and extend them to multidimensional nonlinear neural networks. Based on this, a new adversarial example generation algorithm is designed that modifies a specified number of pixels. The algorithm adopts a greedy iterative strategy, gradually determining the importance and attack range of pixel points. Finally, experiments demonstrate that the adversarial examples generated by the algorithm are of good quality, and the effects of key parameters in the algorithm are also analyzed.
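The abstract sketches a greedy, gradient-guided procedure that selects a limited number of pixels and perturbs them iteratively. The following PyTorch sketch illustrates that general idea under stated assumptions; the function and parameter names (greedy_pixel_attack, n_pixels, step) are illustrative and are not taken from the paper.

```python
# Minimal sketch of a greedy, gradient-guided pixel attack (assumed names).
import torch
import torch.nn.functional as F

def greedy_pixel_attack(model, x, label, n_pixels=10, n_iters=20, step=0.2):
    """Iteratively perturb the pixels whose gradients suggest the
    largest influence on the classification loss."""
    x_adv = x.clone().detach()
    for _ in range(n_iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), label)
        grad, = torch.autograd.grad(loss, x_adv)

        # Rank pixel positions by total gradient magnitude across channels.
        saliency = grad.abs().sum(dim=1).flatten(1)           # (batch, H*W)
        top_idx = saliency.topk(n_pixels, dim=1).indices      # most influential pixels

        # Mask that is 1 only at the selected pixel positions.
        mask = torch.zeros_like(saliency).scatter_(1, top_idx, 1.0)
        mask = mask.view(x.size(0), 1, x.size(2), x.size(3))

        # Move only the selected pixels in the direction that increases the loss.
        x_adv = (x_adv + step * mask * grad.sign()).clamp(0, 1).detach()
    return x_adv
```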


2021 ◽  
Vol 21 (2) ◽  
pp. 57-66
Author(s):  
Hyun Kwon ◽  
Joonhyeok Yoon ◽  
Junseob Kim ◽  
Sangjun Park ◽  
...  

2018 ◽  
Vol E101.D (10) ◽  
pp. 2485-2500 ◽  
Author(s):  
Hyun KWON ◽  
Yongchul KIM ◽  
Ki-Woong PARK ◽  
Hyunsoo YOON ◽  
Daeseon CHOI

IEEE Access ◽  
2018 ◽  
Vol 6 ◽  
pp. 46084-46096 ◽  
Author(s):  
Hyun Kwon ◽  
Yongchul Kim ◽  
Ki-Woong PARK ◽  
Hyunsoo Yoon ◽  
Daeseon Choi

Author(s):  
Felix Specht ◽  
Jens Otto

Condition monitoring systems based on deep neural networks are used for system failure detection in cyber-physical production systems. However, deep neural networks are vulnerable to attacks with adversarial examples: manipulated inputs, e.g. sensor signals, that can mislead a deep neural network into misclassification. A consequence of such an attack may be the manipulation of the physical production process of a cyber-physical production system without being recognized by the condition monitoring system. This poses a serious threat to production systems and employees. This work introduces an approach named CyberProtect to prevent misclassification caused by adversarial example attacks. The approach generates adversarial examples and uses them to retrain a deep neural network, which results in a hardened variant of the network. Empirical results show that the hardened deep neural network sustains a significantly better classification rate under attack with adversarial examples (82% compared to 20%).
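The hardening step described here is a form of adversarial retraining: generate adversarial examples and mix them into the training data. The sketch below illustrates that general loop in PyTorch under stated assumptions; FGSM is used only as a stand-in perturbation generator, and the names (fgsm, adversarial_retraining_step, eps) are assumptions rather than the CyberProtect implementation.

```python
# Hedged sketch of adversarial retraining: train on clean plus perturbed inputs.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.05):
    """One-step gradient-sign perturbation of the inputs (stand-in attack)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

def adversarial_retraining_step(model, optimizer, x, y, eps=0.05):
    """One training step on a batch that combines clean and adversarial inputs."""
    x_adv = fgsm(model, x, y, eps)
    inputs = torch.cat([x, x_adv])
    targets = torch.cat([y, y])

    optimizer.zero_grad()
    loss = F.cross_entropy(model(inputs), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```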


Author(s):  
David T. Wang ◽  
Brady Williamson ◽  
Thomas Eluvathingal ◽  
Bruce Mahoney ◽  
Jennifer Scheler

Author(s):  
P.L. Nikolaev

This article deals with a method for binary classification of images containing small text. The classification is based on the fact that the text can have two orientations: it can be positioned horizontally and read from left to right, or it can be rotated 180 degrees, in which case the image must be rotated before the text can be read. This type of text can be found on the covers of a variety of books, so when recognizing covers it is necessary first to determine the direction of the text before recognizing it directly. The article proposes a deep neural network for determining the text orientation in the context of book cover recognition. The results of training and testing a convolutional neural network on synthetic data, as well as examples of the network's operation on real data, are presented.
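As a rough illustration of such a network, the PyTorch sketch below defines a small two-class CNN that decides whether text is upright or rotated 180 degrees; the input size, layer widths, and the name TextOrientationNet are assumptions, not the architecture described in the article.

```python
# Illustrative two-class CNN for text orientation (upright vs. rotated 180°).
import torch
import torch.nn as nn

class TextOrientationNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)      # two orientation classes

    def forward(self, x):                                 # x: (batch, 1, 64, 64)
        return self.classifier(self.features(x).flatten(1))

# Usage example: logits = TextOrientationNet()(torch.rand(8, 1, 64, 64))
```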

