Friend-Safe Adversarial Examples in an Evasion Attack on a Deep Neural Network

Author(s):  
Hyun Kwon ◽  
Hyunsoo Yoon ◽  
Daeseon Choi
IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 109156-109167

Author(s):  
Yan Chu ◽  
Xiao Yue ◽  
Quan Wang ◽  
Zhengkui Wang

Author(s):  
S. A. Sakulin ◽  
A. N. Alfimtsev ◽  
D. A. Loktev ◽  
A. O. Kovalenko ◽  
V. V. Devyatkov

Recently, human recognition systems based on deep machine learning, in particular deep neural networks, have become widespread. Research into protection against recognition by such systems has therefore become relevant. This article proposes a method for designing a specially selected type of camouflage, applied to clothing, that protects a person both from recognition by a human observer and from a deep-neural-network recognition system. The camouflage is constructed on the basis of adversarial examples generated by a deep neural network. The article describes experiments on protecting a person from recognition by the Faster R-CNN (Region-based Convolutional Neural Network) Inception V2 and Faster R-CNN ResNet101 systems. The implementation of the camouflage is considered at two levels: the macro level, which assesses the combination of the camouflage and the background, and the micro level, which analyzes the relationship between the properties of individual regions of the camouflage and those of adjacent regions, with constraints on their continuity, smoothness, closure, and asymmetry. The dependence of the camouflage characteristics on the conditions of observation of the object and the environment is also considered: the transparency of the atmosphere, the pixel intensity of the sky at the horizon and of the background, the contrast between the background and the camouflaged object, and the distance to the object. As an example of a possible attack, a black-box attack is considered, in which generated adversarial examples are tested in advance against a target recognition system without knowledge of its internal structure. The results of these experiments show the high efficiency of the proposed method in the virtual world, where every pixel of the image fed to the system's input can be accessed. In the real world the results are less impressive, which can be explained by color distortion when printing on fabric and by the limited spatial resolution of the print.
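The black-box setting described above can be illustrated with a short Python sketch (an assumption of this summary, not code from the paper): a candidate camouflage texture is pasted onto the clothing region of a person image, and a Faster R-CNN detector is queried only through its outputs, with a gradient-free random search keeping any texture change that lowers the "person" confidence. torchvision's Faster R-CNN ResNet-50 FPN stands in for the Inception V2 and ResNet101 detectors used in the experiments; the patch location, query budget, and update rule are hypothetical.

import torch
import torchvision

# Stand-in detector (the paper uses Faster R-CNN Inception V2 / ResNet101).
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
PERSON_LABEL = 1  # COCO class id for "person" in torchvision detection models


def person_score(image: torch.Tensor) -> float:
    """Black-box query: highest 'person' confidence returned by the detector."""
    with torch.no_grad():
        out = detector([image])[0]
    scores = out["scores"][out["labels"] == PERSON_LABEL]
    return scores.max().item() if scores.numel() else 0.0


def random_search_camouflage(image, box, steps=200, eps=0.1):
    """Gradient-free search for a camouflage texture inside `box` (the clothing
    region, assumed known) that suppresses person detection."""
    x0, y0, x1, y1 = box
    patch = image[:, y0:y1, x0:x1].clone()   # start from the original texture
    best, best_score = image.clone(), person_score(image)
    for _ in range(steps):
        candidate_patch = (patch + eps * torch.randn_like(patch)).clamp(0, 1)
        candidate = image.clone()
        candidate[:, y0:y1, x0:x1] = candidate_patch
        score = person_score(candidate)
        if score < best_score:               # keep changes that lower the confidence
            best, best_score, patch = candidate, score, candidate_patch
    return best, best_score

In the virtual-world setting the attacker can modify every pixel directly, which is why such a search converges well; the printed-fabric case adds color and resolution distortions that this sketch does not model.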


Author(s):  
Felix Specht ◽  
Jens Otto

Condition monitoring systems based on deep neural networks are used for system failure detection in cyber-physical production systems. However, deep neural networks are vulnerable to attacks with adversarial examples: manipulated inputs, e.g. sensor signals, that are able to mislead a deep neural network into misclassification. A consequence of such an attack may be the manipulation of the physical production process of a cyber-physical production system without being recognized by the condition monitoring system. This poses a serious threat to production systems and employees. This work introduces an approach named CyberProtect to prevent misclassification caused by adversarial example attacks. The approach generates adversarial examples and uses them to retrain the deep neural network, which results in a hardened variant of the network. Empirical results show that the hardened deep neural network sustains a significantly better classification rate while under attack with adversarial examples (82% compared to 20%).
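A minimal Python sketch of the retraining idea, under stated assumptions: adversarial examples are generated from the training data and mixed into the loss so the retrained (hardened) network keeps classifying them correctly. FGSM is used here purely as an illustrative generator, since the abstract does not state which attack CyberProtect employs; `model`, `loader`, and the equal clean/adversarial loss weighting are hypothetical.

import torch
import torch.nn.functional as F


def fgsm(model, x, y, eps=0.05):
    """Fast gradient sign method: one-step perturbation that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()


def harden(model, loader, epochs=10, lr=1e-3, eps=0.05):
    """Retrain on a mix of clean and adversarial batches (adversarial training)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            x_adv = fgsm(model, x, y, eps)
            opt.zero_grad()
            loss = 0.5 * F.cross_entropy(model(x), y) \
                 + 0.5 * F.cross_entropy(model(x_adv), y)
            loss.backward()
            opt.step()
    return model

The hardened model reported in the paper keeps an 82% classification rate under attack, versus 20% for the unhardened network; the sketch above only illustrates the general adversarial-retraining mechanism, not the paper's exact procedure.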


Author(s):  
David T. Wang ◽  
Brady Williamson ◽  
Thomas Eluvathingal ◽  
Bruce Mahoney ◽  
Jennifer Scheler
