Transferable Adversarial Attacks for Image and Video Object Detection

Author(s):  
Xingxing Wei ◽  
Siyuan Liang ◽  
Ning Chen ◽  
Xiaochun Cao

Identifying adversarial examples is beneficial for understanding deep networks and developing robust models. However, existing attack methods for image object detection have two limitations: weak transferability, meaning the generated adversarial examples often have a low success rate against other kinds of detection methods, and high computation cost, since they need much time to process video data, where many frames must be polluted. To address these issues, we present a generative method to obtain adversarial images and videos, thereby significantly reducing the processing time. To enhance transferability, we manipulate the feature maps extracted by a feature network, which usually constitutes the basis of object detectors. Our method is based on the Generative Adversarial Network (GAN) framework, where we combine a high-level class loss and a low-level feature loss to jointly train the adversarial example generator. Experimental results on the PASCAL VOC and ImageNet VID datasets show that our method efficiently generates image and video adversarial examples and, more importantly, that these adversarial examples have better transferability, and are therefore able to simultaneously attack two representative kinds of object detection models: proposal-based models like Faster-RCNN and regression-based models like SSD.
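A minimal sketch of the joint generator objective described above, combining a high-level class loss with a low-level feature loss. The weighted-sum form, the balancing weight `lam`, and the specific loss shapes (squared class scores, negative L2 feature distance) are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def joint_adversarial_loss(cls_scores, feat_clean, feat_adv, lam=0.1):
    """Hedged sketch of a generator objective for transferable attacks:
    suppress the detector's foreground class scores (high-level) while
    pushing the attacked feature map away from the clean one (low-level).
    `lam` is a hypothetical balancing weight."""
    # High-level: drive per-region object-class scores toward zero.
    class_loss = np.mean(cls_scores ** 2)
    # Low-level: maximize clean-vs-adversarial feature distance,
    # expressed as a negative L2 term so the total is minimized.
    feature_loss = -np.mean((feat_adv - feat_clean) ** 2)
    return class_loss + lam * feature_loss
```

Attacking the shared feature network rather than a detector's head is what lets one perturbation transfer across proposal-based and regression-based detectors.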

2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Fangchao Yu ◽  
Li Wang ◽  
Xianjin Fang ◽  
Youwen Zhang

Deep neural network approaches have made remarkable progress in many machine learning tasks. However, the latest research indicates that they are vulnerable to adversarial perturbations. An adversary can easily mislead the network models by adding well-designed perturbations to the input. The cause of the adversarial examples is unclear. Therefore, it is challenging to build a defense mechanism. In this paper, we propose an image-to-image translation model to defend against adversarial examples. The proposed model is based on a conditional generative adversarial network, which consists of a generator and a discriminator. The generator is used to eliminate adversarial perturbations in the input. The discriminator is used to distinguish generated data from original clean data to improve the training process. In other words, our approach can map the adversarial images to the clean images, which are then fed to the target deep learning model. The defense mechanism is independent of the target model, and the structure of the framework is universal. A series of experiments conducted on MNIST and CIFAR10 show that the proposed method can defend against multiple types of attacks while maintaining good performance.
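A toy sketch of the conditional-GAN purification objective described above: a generator maps an adversarial image back toward its clean counterpart while a discriminator separates clean from generated images. The loss forms (non-saturating GAN loss plus an L1 reconstruction term) and the weight `lam` are assumptions, not the paper's exact formulation:

```python
import numpy as np

def purifier_losses(x_adv, x_clean, g, d, lam=10.0):
    """Sketch of a cGAN defense: g removes adversarial perturbations,
    d scores images as clean (near 1) or generated (near 0)."""
    x_gen = g(x_adv)
    # Generator: fool the discriminator and stay close to the clean image.
    g_loss = -np.log(d(x_gen) + 1e-12) + lam * np.mean(np.abs(x_gen - x_clean))
    # Discriminator: separate real clean images from purified ones.
    d_loss = -np.log(d(x_clean) + 1e-12) - np.log(1.0 - d(x_gen) + 1e-12)
    return g_loss, d_loss
```

Because purification happens before the target classifier sees the input, the defense stays independent of the target model, as the abstract notes.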


2020 ◽  
Vol 39 (5) ◽  
pp. 7085-7095
Author(s):  
Shuqi Liu ◽  
Mingwen Shao ◽  
Xinping Liu

In recent years, deep neural networks have made significant progress in image classification, object detection and face recognition. However, they still suffer from misclassification when facing adversarial examples. To address this security issue and improve the robustness of neural networks, we propose a novel defense network based on a generative adversarial network (GAN). The distributions of clean and adversarial examples are matched to solve the aforementioned problem. This guides the network to remove invisible noise accurately and to restore an adversarial example to a clean example, achieving the effect of defense. In addition, in order to maintain the classification accuracy of clean examples and improve the fidelity of the neural network, we also feed clean examples into the proposed network for denoising. Our method can effectively remove the noise of adversarial examples, so that the denoised adversarial examples can be correctly classified. In this paper, extensive experiments are conducted on five benchmark datasets, namely MNIST, Fashion-MNIST, CIFAR10, CIFAR100 and ImageNet. Moreover, six mainstream attack methods are adopted to test the robustness of our defense method, including FGSM, PGD, MIM, JSMA, CW and DeepFool. Results show that our method has strong defensive capabilities against the tested attack methods, which confirms the effectiveness of the proposed approach.


2021 ◽  
Vol 11 (8) ◽  
pp. 3547
Author(s):  
Xiaoyu Hou ◽  
Kunlin Zhang ◽  
Jihui Xu ◽  
Wei Huang ◽  
Xinmiao Yu ◽  
...  

With the advent of drones, new potential applications have emerged for the unconstrained analysis of images and videos from aerial-view cameras. Despite the tremendous success of generic object detection methods developed using ground-based photos, a considerable performance drop is observed when these same methods are directly applied to images captured by Unmanned Aerial Vehicles (UAVs). Usually, most of the work goes into improving the performance of the detector in aspects such as loss design, training sample selection, feature enhancement, and so forth. This paper proposes a detection framework based on an anchor-free detector with several modules, including a sample balance strategies module and a super-resolved generated feature module, to improve performance. The sample balance strategies module optimizes the imbalance among training samples, especially the imbalance between positive and negative, and easy and hard samples. Because small objects in drone imagery have high-frequency, noisy representations, the detection task is extraordinarily challenging; nevertheless, our method achieves better results than comparable algorithms. We also propose a super-resolved GAN (Generative Adversarial Network) module with center-ness weights to effectively enhance the local feature map. Finally, we demonstrate the effectiveness of the proposed modules by achieving state-of-the-art performance on the VisDrone2020 benchmark.
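The center-ness weight mentioned above is a standard quantity in anchor-free detection (introduced by FCOS): it down-weights locations far from an object's center based on the regression distances to the box sides. How exactly the paper applies it inside the super-resolved GAN module is assumed here; the formula itself is the standard one:

```python
import numpy as np

def centerness(l, t, r, b):
    """FCOS-style center-ness target for a location inside a box, given
    distances to the left/top/right/bottom box edges. Equals 1 at the
    exact center and decays toward 0 near the edges."""
    return np.sqrt((np.minimum(l, r) / np.maximum(l, r)) *
                   (np.minimum(t, b) / np.maximum(t, b)))
```

Weighting the enhanced feature map by center-ness plausibly focuses the super-resolution capacity on object centers, where small-object evidence is strongest.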


Electronics ◽  
2020 ◽  
Vol 10 (1) ◽  
pp. 52
Author(s):  
Richard Evan Sutanto ◽  
Sukho Lee

Several recent studies have shown that artificial intelligence (AI) systems can malfunction due to intentionally manipulated data coming through normal channels. Such manipulated data are called adversarial examples. Adversarial examples can pose a major threat to an AI-led society when an attacker uses them as a means to attack an AI system, which is called an adversarial attack. Therefore, major IT companies such as Google are now studying ways to build AI systems that are robust against adversarial attacks by developing effective defense methods. However, one reason it is difficult to establish an effective defense system is that it is hard to know in advance what kind of adversarial attack method the opponent is using. Therefore, in this paper, we propose a method to detect adversarial noise without knowledge of the kind of adversarial noise used by the attacker. To this end, we propose a blurring network that is trained only with normal images and use it as an initial condition of the Deep Image Prior (DIP) network. This is in contrast to other neural-network-based detection methods, which require many adversarial noisy images to train the neural network. Experimental results indicate the validity of the proposed method.
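One plausible reading of the detection scheme above, sketched minimally: the blurring/DIP network reconstructs natural image content but not the adversarial perturbation, so a disagreement between the classifier's decision on the input and on its reconstruction flags the input as adversarial. This decision rule, and the `classify`/`reconstruct` callables, are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def is_adversarial(classify, reconstruct, x):
    """Flag x as adversarial (1) when the classifier disagrees on the
    raw input vs. its noise-suppressing reconstruction, else clean (0)."""
    return int(np.argmax(classify(x)) != np.argmax(classify(reconstruct(x))))
```

The appeal of such a rule is exactly what the abstract stresses: it needs no adversarial examples at training time, only a reconstructor fit to normal images.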


2020 ◽  
Vol 34 (05) ◽  
pp. 8830-8837
Author(s):  
Xin Sheng ◽  
Linli Xu ◽  
Junliang Guo ◽  
Jingchang Liu ◽  
Ruoyu Zhao ◽  
...  

We propose a novel introspective model for variational neural machine translation (IntroVNMT) in this paper, inspired by the recent successful application of introspective variational autoencoder (IntroVAE) in high quality image synthesis. Different from the vanilla variational NMT model, IntroVNMT is capable of improving itself introspectively by evaluating the quality of the generated target sentences according to the high-level latent variables of the real and generated target sentences. As a consequence of introspective training, the proposed model is able to discriminate between the generated and real sentences of the target language via the latent variables generated by the encoder of the model. In this way, IntroVNMT is able to generate more realistic target sentences in practice. In the meantime, IntroVNMT inherits the advantages of the variational autoencoders (VAEs), and the model training process is more stable than the generative adversarial network (GAN) based models. Experimental results on different translation tasks demonstrate that the proposed model can achieve significant improvements over the vanilla variational NMT model.
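The introspective game described above is played over the latent variables' divergence from the prior. A sketch of the IntroVAE-style encoder objective that IntroVNMT is said to build on, where the encoder keeps the KL term small for real target sentences while pushing the KL of generated sentences above a margin; the margin form and symbol `m` follow IntroVAE and are assumed to carry over:

```python
import numpy as np

def kl_std_normal(mu, logvar):
    """KL(q(z|x) || N(0, I)) per sample for a diagonal Gaussian posterior."""
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar, axis=-1)

def introspective_encoder_loss(kl_real, kl_gen, m=5.0):
    """Encoder acts as discriminator in latent space: low KL energy for
    real target sentences, KL above margin m for generated ones."""
    return np.mean(kl_real) + np.mean(np.maximum(0.0, m - kl_gen))
```

Because both players share the VAE's reconstruction backbone, training stays closer to standard VAE optimization than to a GAN minimax, matching the stability claim in the abstract.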


2020 ◽  
Vol 79 (35-36) ◽  
pp. 25403-25425 ◽  
Author(s):  
Zhengyi Liu ◽  
Jiting Tang ◽  
Qian Xiang ◽  
Peng Zhao

Object detection in videos has been gaining more attention recently, as it is related to video analytics and facilitates image understanding. Video object detection methods can be divided into traditional and deep-learning-based methods. Trajectory classification, low-rank sparse matrix decomposition, background subtraction and object tracking are considered traditional object detection methods, as their primary focus is informative feature collection, region selection and classification. Deep learning methods are more popular nowadays as they facilitate high-level features and problem solving in object detection algorithms. We discuss various object detection methods and their challenges in this paper.


2020 ◽  
Vol 2020 ◽  
pp. 1-17
Author(s):  
Yirui Wu ◽  
Dabao Wei ◽  
Jun Feng

With the development of fifth-generation networks and artificial intelligence technologies, new threats and challenges have emerged for wireless communication systems, especially in cybersecurity. In this paper, we offer a review of attack detection methods that leverage the strength of deep learning techniques. Specifically, we first summarize fundamental problems of network security and attack detection and introduce several successful related applications using deep learning structures. Building on a categorization of deep learning methods, we pay special attention to attack detection methods built on different kinds of architectures, such as autoencoders, generative adversarial networks, recurrent neural networks, and convolutional neural networks. Afterwards, we present some benchmark datasets with descriptions and compare the performance of representative approaches to show the current working state of attack detection methods with deep learning structures. Finally, we summarize the paper and discuss some ways to improve the performance of attack detection using deep learning structures.

