Fast Gradient-based Algorithm for a Quadratic Envelope Relaxation of the $\ell_{0}$ Gradient Regularization

Author(s): Eduar A. Vasquez-Ortiz, Paul Rodriguez

1994 · Vol 13 (4) · pp. 687-701
Author(s): E.U. Mumcuoglu, R. Leahy, S.R. Cherry, Zhenyu Zhou

2018 · Vol 54 (4) · pp. 2086-2090
Author(s): Qiang Li, Lei Huang, Wei Liu, Weize Sun, Peichang Zhang

2021 · Vol 2021 · pp. 1-9
Author(s): Heng Yin, Hengwei Zhang, Jindong Wang, Ruiyu Dou

Convolutional neural networks have surpassed human performance on image recognition tasks, but they remain vulnerable to adversarial examples. Because such examples are crafted by adding imperceptible noise to normal images, their existence poses a potential security threat to deep learning systems. Sophisticated adversarial examples with strong attack performance can also serve as a tool for evaluating the robustness of a model. However, the success rate of adversarial attacks in black-box settings still leaves room for improvement. This study therefore combines a modified Adam gradient descent algorithm with the iterative gradient-based attack method. The resulting Adam iterative fast gradient method is used to improve the transferability of adversarial examples. Extensive experiments on ImageNet showed that the proposed method achieves a higher attack success rate than existing iterative methods. By extending the method, we achieved a state-of-the-art attack success rate of 95.0% on defense models.
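
As a rough illustration of the idea described in this abstract, the sketch below applies Adam-style first- and second-moment accumulation to an iterative sign-based attack in PyTorch. It is a minimal sketch, not the authors' implementation; the function name `adam_ifgm`, the hyperparameters (`eps`, `steps`, `beta1`, `beta2`, `delta`), and the final sign step are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def adam_ifgm(model, x, y, eps=16/255, steps=10, beta1=0.9, beta2=0.999, delta=1e-8):
    """Iterative sign-based attack with Adam-style moment accumulation (sketch)."""
    alpha = eps / steps                      # per-step perturbation budget
    x_adv = x.clone().detach()
    m = torch.zeros_like(x)                  # first-moment estimate of the input gradient
    v = torch.zeros_like(x)                  # second-moment estimate of the input gradient
    for t in range(1, steps + 1):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Adam-style bias-corrected moments of the input gradient
        m = beta1 * m + (1 - beta1) * grad
        v = beta2 * v + (1 - beta2) * grad * grad
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)
        step = m_hat / (v_hat.sqrt() + delta)
        # take a sign step, then project back into the L_inf ball around x
        x_adv = x_adv.detach() + alpha * step.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv
```

Accumulating moments across iterations, as in momentum-based attacks, is what the abstract credits with stabilizing update directions and improving transferability; the exact normalization used in the paper may differ from this sketch.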


2021 · pp. 1-12
Author(s): Bo Yang, Kaiyong Xu, Hengjun Wang, Hengwei Zhang

Deep neural networks (DNNs) are vulnerable to adversarial examples, which are crafted by adding small, human-imperceptible perturbations to the original images yet cause the model to output incorrect predictions. Before DNNs are deployed, adversarial attacks are thus an important means of evaluating and selecting robust models for safety-critical applications. However, under the challenging black-box setting, the attack success rate, i.e., the transferability of adversarial examples, still needs to be improved. Building on image augmentation methods, this paper found that randomly transforming image brightness can eliminate overfitting in the generation of adversarial examples and improve their transferability. In light of this phenomenon, this paper proposes an adversarial example generation method that can be integrated with Fast Gradient Sign Method (FGSM)-related methods to build a more robust gradient-based attack and to generate adversarial examples with better transferability. Extensive experiments on the ImageNet dataset demonstrate the effectiveness of this method. Whether on normally or adversarially trained networks, our method achieves a higher black-box attack success rate than other attack methods based on data augmentation. We hope this method can help evaluate and improve the robustness of models.
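
The sketch below illustrates the augmentation idea described in this abstract: randomly rescaling image brightness before each gradient computation inside an I-FGSM loop, again in PyTorch. It is not the authors' code; the function name `brightness_ifgsm` and the brightness range (`low`, `high`) are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def brightness_ifgsm(model, x, y, eps=16/255, steps=10, low=0.5, high=1.5):
    """I-FGSM loop with a random brightness rescaling applied before each
    gradient computation (sketch of the augmentation idea)."""
    alpha = eps / steps
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        # per-image brightness factor, used only when computing the gradient
        factor = torch.empty(x.size(0), 1, 1, 1, device=x.device).uniform_(low, high)
        x_bright = (x_adv * factor).clamp(0, 1)
        loss = F.cross_entropy(model(x_bright), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # standard sign update and projection onto the L_inf ball around x
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv
```

In this sketch the gradient is taken with respect to the unscaled `x_adv`, so the brightness transform only diversifies the inputs seen by the source model at each step, which is the mechanism the abstract credits with reducing overfitting to that model.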

