An adversarial attack detection method in deep neural networks based on re-attacking approach

Author(s):  
Morteza Ali Ahmadi ◽  
Rouhollah Dianat ◽  
Hossein Amirkhani

Symmetry ◽  
2021 ◽  
Vol 13 (3) ◽  
pp. 428
Author(s):  
Hyun Kwon ◽  
Jun Lee

This paper presents research on visualization and pattern recognition in computer science. Although deep neural networks perform well at image and voice recognition, pattern analysis, and intrusion detection, they are vulnerable to adversarial examples: inputs to which a small amount of noise has been added so that a deep neural network misclassifies them even though they still appear normal to humans. In this paper, a diversity adversarial training method that is robust against adversarial attacks is demonstrated. In this approach, the target model becomes more robust to unknown adversarial examples because it is trained on a variety of adversarial samples. In the experiments, TensorFlow was employed as the deep learning framework, and MNIST and Fashion-MNIST were used as the datasets. The results show that the diversity training method lowered the attack success rate by an average of 27.2% and 24.3% for various adversarial examples, while maintaining accuracy rates of 98.7% and 91.5% on the original MNIST and Fashion-MNIST data, respectively.
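
As a rough illustration of the adversarial-training loop the abstract describes, below is a minimal TensorFlow sketch. FGSM serves here as a single stand-in attack; the paper's diversity method trains on a variety of adversarial examples, and the epsilon and loss choices here are assumptions rather than the authors' settings.

```python
# Minimal adversarial-training sketch (illustrative; not the paper's exact
# diversity method, which uses multiple attack types rather than FGSM alone).
import tensorflow as tf

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

def fgsm(model, x, y, eps=0.1):
    """Fast Gradient Sign Method: step along the sign of the loss gradient."""
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn(y, model(x, training=False))
    grad = tape.gradient(loss, x)
    return tf.clip_by_value(x + eps * tf.sign(grad), 0.0, 1.0)

@tf.function
def train_step(model, optimizer, x, y):
    x_adv = fgsm(model, x, y)                 # craft an adversarial batch
    x_mix = tf.concat([x, x_adv], axis=0)     # mix clean + adversarial data
    y_mix = tf.concat([y, y], axis=0)
    with tf.GradientTape() as tape:
        loss = loss_fn(y_mix, model(x_mix, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```

Drawing adversarial batches from several such attacks, rather than FGSM alone, is what the abstract refers to as diversity training.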


2021 ◽  
Author(s):  
Hoang-Quoc Nguyen-Son ◽  
Tran Phuong Thao ◽  
Seira Hidano ◽  
Vanessa Bracamonte ◽  
Shinsaku Kiyomoto ◽  
...  

2020 ◽  
Vol 34 (07) ◽  
pp. 10901-10908 ◽  
Author(s):  
Abdullah Hamdi ◽  
Matthias Mueller ◽  
Bernard Ghanem

One major factor impeding more widespread adoption of deep neural networks (DNNs) is their lack of robustness, which is essential for safety-critical applications such as autonomous driving. This has motivated much recent work on adversarial attacks for DNNs, which mostly focus on pixel-level perturbations void of semantic meaning. In contrast, we present a general framework for adversarial attacks on trained agents, which covers semantic perturbations to the environment of the agent performing the task as well as pixel-level attacks. To do this, we re-frame the adversarial attack problem as learning a distribution of parameters that always fools the agent. In the semantic case, our proposed adversary (denoted as BBGAN) is trained to sample parameters that describe the environment with which the black-box agent interacts, such that the agent performs its dedicated task poorly in this environment. We apply BBGAN on three different tasks, primarily targeting aspects of autonomous navigation: object detection, self-driving, and autonomous UAV racing. On these tasks, BBGAN can generate failure cases that consistently fool a trained agent.
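
The central idea, learning a generative distribution over environment parameters under which a black-box agent fails, can be sketched as follows. This is a simplified illustration rather than the authors' BBGAN: `agent_score`, the parameter dimensions, and the probe-then-fit strategy are all assumptions.

```python
# Simplified sketch of learning a distribution of agent-fooling parameters.
# agent_score(p) is an assumed black-box returning task performance for an
# environment parameterized by vector p; BBGAN's actual architecture differs.
import numpy as np
import tensorflow as tf

P_DIM, Z_DIM = 8, 16  # assumed environment-parameter / latent dimensions

def collect_fooling_params(agent_score, n=1000, thresh=0.2):
    """Randomly probe the black-box agent; keep parameters where it fails."""
    params = np.random.uniform(-1, 1, size=(n, P_DIM)).astype("float32")
    scores = np.array([agent_score(p) for p in params])
    return params[scores < thresh]            # low score = agent fooled

# Generator maps latent noise to environment parameters; the discriminator
# separates collected fooling parameters from generated ones.
G = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(Z_DIM,)),
    tf.keras.layers.Dense(P_DIM, activation="tanh"),
])
D = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(P_DIM,)),
    tf.keras.layers.Dense(1),                 # logit: real vs. generated
])

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt, d_opt = tf.keras.optimizers.Adam(1e-4), tf.keras.optimizers.Adam(1e-4)

def gan_step(real_params):
    z = tf.random.normal((real_params.shape[0], Z_DIM))
    with tf.GradientTape() as gt, tf.GradientTape() as dt:
        fake = G(z, training=True)
        d_real, d_fake = D(real_params, training=True), D(fake, training=True)
        d_loss = (bce(tf.ones_like(d_real), d_real) +
                  bce(tf.zeros_like(d_fake), d_fake))
        g_loss = bce(tf.ones_like(d_fake), d_fake)  # fool the discriminator
    g_opt.apply_gradients(zip(gt.gradient(g_loss, G.trainable_variables),
                              G.trainable_variables))
    d_opt.apply_gradients(zip(dt.gradient(d_loss, D.trainable_variables),
                              D.trainable_variables))
```

After training, sampling z and passing it through G yields environment parameters that consistently degrade the agent, matching the abstract's description of sampling fooling environments.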


SinkrOn ◽  
2020 ◽  
Vol 4 (2) ◽  
pp. 163
Author(s):  
Amir Mahmud Husein ◽  
Christopher Christopher ◽  
Andy Gracia ◽  
Rio Brandlee ◽  
Muhammad Haris Hasibuan

Vehicle detection and classification aim to extract information about particular types of vehicles from images or videos, and they are important components of smart transportation systems. However, the widely varying sizes of vehicles make this a challenging problem that has attracted many researchers. In this paper, we compare two one-stage detection methods, YOLOv3 and MobileNet-SSD, for vehicle detection on a highway video dataset recorded specifically for this study using two mobile devices capturing highway activity in Medan City, yielding 42 videos. Both methods are evaluated using Mean Average Precision (mAP): YOLOv3 achieves better accuracy, 81.9% versus 67.9% for MobileNet-SSD, but its detection output video files are larger. MobileNet-SSD runs faster and produces smaller output videos, but has difficulty detecting small objects.
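
Both detectors are ranked by Mean Average Precision; a hedged sketch of the per-class Average Precision computation at IoU 0.5 (a common convention, not necessarily the authors' exact evaluation pipeline) is shown below. mAP is then the mean of these AP values over all vehicle classes.

```python
# Single-image, single-class Average Precision at IoU 0.5 (simplified; a full
# mAP evaluation aggregates detections over the whole dataset and all classes).
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / (union + 1e-9)

def average_precision(detections, gt_boxes, iou_thresh=0.5):
    """detections: list of (confidence, box); gt_boxes: list of boxes."""
    detections = sorted(detections, key=lambda d: -d[0])  # high confidence first
    matched, tps, fps = set(), [], []
    for _, box in detections:
        j = max(range(len(gt_boxes)),
                key=lambda k: iou(box, gt_boxes[k]), default=-1)
        if j >= 0 and j not in matched and iou(box, gt_boxes[j]) >= iou_thresh:
            matched.add(j); tps.append(1); fps.append(0)   # true positive
        else:
            tps.append(0); fps.append(1)                   # false positive
    tp, fp = np.cumsum(tps), np.cumsum(fps)
    recall = tp / max(len(gt_boxes), 1)
    precision = tp / np.maximum(tp + fp, 1e-9)
    return float(np.trapz(precision, recall))  # area under the PR curve
```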

