Assessing the Threat of Adversarial Examples on Deep Neural Networks for Remote Sensing Scene Classification: Attacks and Defenses

Author(s): Yonghao Xu, Bo Du, Liangpei Zhang

Symmetry, 2021, Vol. 13 (3), p. 428
Author(s): Hyun Kwon, Jun Lee

This paper presents research on pattern recognition and visualization in computer science. Although deep neural networks perform well in image and voice recognition, pattern analysis, and intrusion detection, they remain vulnerable to adversarial examples: inputs crafted by adding a small amount of noise to the original data so that a deep neural network misclassifies them, even though humans still perceive them as normal. In this paper, a diversity adversarial training method that is robust against adversarial attacks is demonstrated. In this approach, the target model becomes more robust to unknown adversarial examples because it is trained on a variety of adversarial samples. In the experiments, TensorFlow was used as the deep learning framework, and MNIST and Fashion-MNIST were used as the datasets. The results show that the diversity training method lowered the attack success rate by an average of 27.2% and 24.3% for the various adversarial examples, while maintaining accuracies of 98.7% and 91.5% on the original MNIST and Fashion-MNIST data, respectively.
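To make "training on a variety of adversarial samples" concrete, the sketch below uses TensorFlow 2 (the framework named in the abstract) to mix each clean MNIST batch with FGSM adversarial examples crafted at several perturbation strengths. This is a minimal sketch: the network architecture, the epsilon values, and the choice of FGSM as the sole attack are illustrative assumptions, not the authors' published configuration.

```python
import tensorflow as tf

# Load MNIST and scale pixels to [0, 1].
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0
y_train = y_train.astype("int64")

# A small CNN classifier (illustrative; not the paper's architecture).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam()

def fgsm(x, y, eps):
    # FGSM: perturb the input along the sign of the loss gradient,
    # x_adv = clip(x + eps * sign(grad_x L(f(x), y)), 0, 1).
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn(y, model(x, training=False))
    grad = tape.gradient(loss, x)
    return tf.clip_by_value(x + eps * tf.sign(grad), 0.0, 1.0)

@tf.function
def train_step(x, y):
    # Diversity stand-in: augment each clean batch with adversarial
    # examples crafted at several strengths (assumed epsilon values).
    x_mix = tf.concat([x] + [fgsm(x, y, e) for e in (0.1, 0.2, 0.3)], axis=0)
    y_mix = tf.concat([y, y, y, y], axis=0)
    with tf.GradientTape() as tape:
        loss = loss_fn(y_mix, model(x_mix, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

# Train on the mixed clean/adversarial batches.
ds = tf.data.Dataset.from_tensor_slices((x_train, y_train)).shuffle(10000).batch(128)
for epoch in range(5):
    for xb, yb in ds:
        train_step(xb, yb)
```

Since the paper targets robustness to unknown attacks, the fixed list of FGSM strengths here would presumably be replaced by a mix of different attack algorithms; swapping other crafting functions into the list inside `train_step` is one way to broaden the diversity.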


