An Efficient Adversarial Example Generation Algorithm Based on an Accelerated Gradient Iterative Fast Gradient

2021 ◽ pp. 103612
Author(s):  
Jiabao Liu ◽  
Qixiang Zhang ◽  
Kanghua Mo ◽  
Xiaoyu Xiang ◽  
Jin Li ◽  
...

Author(s):
Chunlong Fan ◽  
Cailong Li ◽  
Jici Zhang ◽  
Yiping Teng ◽  
Jianzhong Qiao

Neural network technology has achieved good results on many tasks, such as image classification. However, adding carefully designed, human-imperceptible perturbations to an input example can yield an adversarial example that changes the model's output. For image classification, we derive low-dimensional attack-perturbation solutions for multidimensional linear classifiers and extend them to multidimensional nonlinear neural networks. On this basis, we design a new adversarial example generation algorithm that modifies only a specified number of pixels. The algorithm adopts a greedy iterative strategy, progressively determining the importance and attack range of individual pixels. Experiments demonstrate that the adversarial examples generated by the algorithm are of good quality, and the effects of the algorithm's key parameters are also analyzed.
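As a rough illustration of the greedy iterative strategy described above, the PyTorch sketch below perturbs one pixel per iteration, ranking pixels by input-gradient magnitude. The importance heuristic, the fixed per-pixel step `eps`, and all names are assumptions for illustration, not the authors' exact algorithm.

```python
import torch
import torch.nn.functional as F

def greedy_pixel_attack(model, x, y, num_pixels=10, eps=0.5):
    """Illustrative sketch: greedily perturb `num_pixels` pixels.

    x: image tensor of shape (C, H, W), values in [0, 1]
    y: true label (scalar LongTensor)
    The gradient-magnitude importance score is an assumed heuristic.
    """
    x_adv = x.clone().detach()
    chosen = torch.zeros(x.shape[1:], dtype=torch.bool)  # H x W mask

    for _ in range(num_pixels):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv.unsqueeze(0)), y.unsqueeze(0))
        grad, = torch.autograd.grad(loss, x_adv)

        # Pixel importance = gradient magnitude summed over channels,
        # excluding pixels that were already attacked.
        importance = grad.abs().sum(dim=0).masked_fill(chosen, -1.0)
        i, j = divmod(importance.argmax().item(), x.shape[2])
        chosen[i, j] = True

        # Push the chosen pixel eps along the loss-increasing direction.
        with torch.no_grad():
            x_adv[:, i, j] += eps * grad[:, i, j].sign()
            x_adv.clamp_(0.0, 1.0)
        x_adv = x_adv.detach()

    return x_adv
```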


2021 ◽ pp. 1-12
Author(s):  
Bo Yang ◽  
Kaiyong Xu ◽  
Hengjun Wang ◽  
Hengwei Zhang

Deep neural networks (DNNs) are vulnerable to adversarial examples, which are crafted by adding small, human-imperceptible perturbations to original images yet cause the model to output incorrect predictions. Before DNNs are deployed, adversarial attacks are therefore an important means of evaluating and selecting robust models for safety-critical applications. Under the challenging black-box setting, however, the attack success rate, i.e., the transferability of adversarial examples, still needs to be improved. Building on image augmentation methods, this paper finds that randomly transforming image brightness can mitigate overfitting during adversarial example generation and thereby improve transferability. In light of this phenomenon, the paper proposes an adversarial example generation method that can be integrated with Fast Gradient Sign Method (FGSM)-based attacks to build a more robust gradient-based attack and generate adversarial examples with better transferability. Extensive experiments on the ImageNet dataset demonstrate the method's effectiveness: on both normally and adversarially trained networks, it achieves a higher black-box attack success rate than other augmentation-based attacks. It is hoped that this method can help evaluate and improve the robustness of models.
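A minimal sketch of how a random brightness transformation could be folded into an iterative FGSM attack is shown below: at each step the gradient is computed on a brightness-scaled copy of the current adversarial image, which acts like data augmentation and can reduce overfitting to the white-box model. The uniform scaling range, step schedule, and all names are assumptions for illustration, not the paper's exact method or hyperparameters.

```python
import torch
import torch.nn.functional as F

def brightness_ifgsm(model, x, y, eps=16/255, steps=10, low=0.5, high=1.5):
    """Illustrative sketch: iterative FGSM with random brightness.

    x: batch of images, shape (N, C, H, W), values in [0, 1]
    y: true labels, shape (N,)
    (low, high) is an assumed range for the brightness factor.
    """
    alpha = eps / steps
    x_adv = x.clone().detach()

    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Random brightness: scale by a factor drawn uniformly per step.
        scale = torch.empty(1, device=x.device).uniform_(low, high)
        x_t = (x_adv * scale).clamp(0.0, 1.0)
        loss = F.cross_entropy(model(x_t), y)
        grad, = torch.autograd.grad(loss, x_adv)

        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            # Project back into the eps-ball around x and the valid range.
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
            x_adv = x_adv.clamp(0.0, 1.0).detach()

    return x_adv
```

Because the brightness scaling only reshapes the gradient estimate, the same loop structure could be combined with momentum or other FGSM-family variants, which is how the abstract describes the method being integrated.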

