Projected Gradient Descent: Recently Published Documents

TOTAL DOCUMENTS: 48 (five years: 31)
H-INDEX: 7 (five years: 4)

2022 · Vol 2022 (1)
Author(s): Jing Lin, Laurent L. Njilla, Kaiqi Xiong

Abstract: Deep neural networks (DNNs) are widely used to handle many difficult tasks, such as image classification and malware detection, and achieve outstanding performance. However, recent studies on adversarial examples, samples whose maliciously crafted perturbations are imperceptible to human eyes yet mislead machine learning models, show that such models are vulnerable to security attacks. Although various adversarial retraining techniques have been developed in the past few years, none of them is scalable. In this paper, we propose a new iterative adversarial retraining approach to robustify the model and reduce the effectiveness of adversarial inputs on DNN models. The proposed method retrains the model with both Gaussian noise augmentation and adversarial generation techniques for better generalization. Furthermore, an ensemble model is used during the testing phase to increase the robust test accuracy. The results of our extensive experiments demonstrate that the proposed approach increases the robustness of the DNN model against various adversarial attacks, specifically the fast gradient sign method (FGSM), Carlini and Wagner (C&W), Projected Gradient Descent (PGD), and DeepFool attacks. To be precise, the robust classifier obtained by our approach maintains an average accuracy of 99% on the standard test set. Moreover, we empirically evaluate the runtime of two of the most effective adversarial attacks, the C&W attack and the Basic Iterative Method (BIM), and find that the C&W attack can exploit the GPU for faster adversarial example generation than BIM can. For this reason, we further develop a parallel implementation of the proposed approach, which makes it scalable to large datasets and complex models.
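
A minimal PyTorch sketch of the retraining loop described above: each batch is augmented with an adversarially perturbed view and a Gaussian-noisy view, and test-time predictions are averaged over an ensemble. FGSM is used here as the adversarial generator, and the epsilon, noise level, and helper names are illustrative assumptions, not the authors' exact settings.

    import torch
    import torch.nn.functional as F

    def fgsm(model, x, y, eps):
        """One-step FGSM examples for a batch (inputs assumed in [0, 1])."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        grad = torch.autograd.grad(loss, x)[0]
        return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

    def retrain_round(model, optimizer, loader, eps=8 / 255, sigma=0.1):
        """One pass over clean, adversarial, and Gaussian-noise-augmented views."""
        model.train()
        for x, y in loader:
            x_adv = fgsm(model, x, y, eps)                           # adversarial view
            x_noisy = (x + sigma * torch.randn_like(x)).clamp(0, 1)  # noisy view
            optimizer.zero_grad()
            F.cross_entropy(model(torch.cat([x, x_adv, x_noisy])),
                            torch.cat([y, y, y])).backward()
            optimizer.step()

    def ensemble_predict(models, x):
        """Test-time ensemble: average the softmax outputs of retrained models."""
        with torch.no_grad():
            probs = torch.stack([F.softmax(m(x), dim=1) for m in models])
        return probs.mean(dim=0).argmax(dim=1)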


Author(s): Trinh Quang Kien

In recent years, with the explosion of research in artificial intelligence, deep learning models based on convolutional neural networks (CNNs) have become one of the most promising architectures for practical applications thanks to their good achievable accuracy. However, CNNs, characterized by their convolutional layers, often have a large number of parameters and a heavy computational workload, leading to high energy consumption for training and inference. The binarized neural network (BNN) model has recently been proposed to overcome this drawback. BNNs use binary representations for inputs and weights, which inherently reduces memory requirements and simplifies computation while maintaining acceptable accuracy. BNNs are therefore well suited to the practical realization of Edge-AI applications on resource- and energy-constrained devices such as embedded or mobile devices. Since CNNs and BNNs are both composed of linear transformation layers, both can be fooled by adversarial attack patterns. This topic has been actively studied recently, but mostly for CNNs. In this work, we examine the impact of adversarial attacks on BNNs and propose a solution to improve the accuracy of BNNs against this type of attack. Specifically, we use an Enhanced Fast Adversarial Training (EFAT) method, which makes the BNN more robust against major adversarial attack models with a very short training time. In experiments with the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) attacks on a BNN trained on the MNIST dataset, our approach increased accuracy from 31.34% and 0.18% to 96.96% and 85.08%, respectively.
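
For context, a hedged sketch of the family EFAT belongs to, single-step "fast" adversarial training (FGSM with a random start, in the style of Wong et al.); the abstract does not disclose EFAT's exact modifications, so the step sizes below are generic MNIST-scale assumptions.

    import torch
    import torch.nn.functional as F

    def fast_adv_train_step(model, optimizer, x, y, eps=0.3, alpha=0.375):
        """One single-step adversarial training update (inputs in [0, 1])."""
        # Random start inside the eps-ball, then one FGSM step on delta.
        delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
        loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
        grad = torch.autograd.grad(loss, delta)[0]
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
        # Update the network on the perturbed batch.
        optimizer.zero_grad()
        F.cross_entropy(model((x + delta).clamp(0, 1)), y).backward()
        optimizer.step()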


2021 · Vol 18 (1) · pp. 1-8
Author(s): Ansam Kadhim, Salah Al-Darraji

Face recognition is the technology that verifies or recognizes faces from images, videos, or real-time streams. It can be used in security or employee-attendance systems. Face recognition systems may encounter attacks that reduce their ability to recognize faces correctly: noisy images mixed with the original ones confuse the results. Various attacks exploit this weakness, such as the Fast Gradient Sign Method (FGSM), DeepFool, and Projected Gradient Descent (PGD). This paper proposes a method to protect a face recognition system against these attacks by distorting images with the different attacks and then training the recognition model, a convolutional neural network (CNN), on both the original and the distorted images. Diverse experiments were conducted using combinations of original and distorted images to test the effectiveness of the system. The system achieved an accuracy of 93% under the FGSM attack, 97% under DeepFool, and 95% under PGD.
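
The augmentation idea reduces to training on a union of clean and attacked copies of the data. A small sketch of that dataset construction is below; `attack_fns` stands for any attack generators (FGSM, DeepFool, PGD, e.g. the implementations sketched elsewhere on this page), and the whole-tensor attack call is a simplification for illustration.

    import torch
    from torch.utils.data import ConcatDataset, TensorDataset

    def build_mixed_dataset(model, dataset, attack_fns):
        """Original face images plus one attacked copy of the set per attack."""
        xs = torch.stack([x for x, _ in dataset])
        ys = torch.tensor([y for _, y in dataset])
        parts = [TensorDataset(xs, ys)]          # clean images keep their labels
        for attack in attack_fns:
            parts.append(TensorDataset(attack(model, xs, ys), ys))
        return ConcatDataset(parts)              # train the CNN on this union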


Symmetry · 2021 · Vol 13 (10) · pp. 1824
Author(s): Claudiu Popescu, Lacrimioara Grama, Corneliu Rusu

The paper describes a convex optimization formulation of the extractive text summarization problem and a simple, scalable algorithm to solve it. The optimization program is constructed as a convex relaxation of an intuitive but computationally hard integer programming problem. The objective function is highly symmetric, being invariant under unitary transformations of the text representations. Another key idea is to replace the constraint on the number of sentences in the summary with a convex surrogate. To solve the program, we designed a specific projected gradient descent algorithm and analyzed its performance in terms of execution time and approximation quality. Using the DUC 2005 and Cornell Newsroom Summarization datasets, we showed empirically that the algorithm provides competitive results for single-document summarization and multi-document query-based summarization. On the Cornell Newsroom Summarization Dataset, it ranked second among the unsupervised methods tested. For the more challenging task of multi-document query-based summarization, the method was tested on the DUC 2005 dataset. Our algorithm surpassed the other reported methods on the ROUGE-SU4 metric and was within 0.01 of the top-performing algorithms on the ROUGE-1 and ROUGE-2 metrics.
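
A NumPy sketch of the projected-gradient idea: optimize fractional sentence-selection weights w over the convex set {0 <= w <= 1, sum(w) <= k}, which relaxes the hard cardinality constraint. The linear relevance objective below is a stand-in, not the paper's formulation; the projection and the descent loop are the point.

    import numpy as np

    def project_capped_simplex(v, k):
        """Euclidean projection onto {w : 0 <= w <= 1, sum(w) <= k}."""
        w = np.clip(v, 0.0, 1.0)
        if w.sum() <= k:
            return w
        lo, hi = 0.0, float(v.max())             # bisect on the shift theta
        for _ in range(50):
            theta = (lo + hi) / 2
            if np.clip(v - theta, 0.0, 1.0).sum() > k:
                lo = theta
            else:
                hi = theta
        return np.clip(v - hi, 0.0, 1.0)

    def summarize(S, k, steps=200, lr=0.1):
        """S: (n_sentences, d) sentence embeddings; select about k sentences."""
        doc = S.mean(axis=0)                     # document centroid
        w = np.full(S.shape[0], k / S.shape[0])  # feasible starting point
        for _ in range(steps):
            grad = -(S @ doc)                    # gradient of -w^T (S doc)
            w = project_capped_simplex(w - lr * grad, k)
        return np.argsort(-w)[:k]                # highest-weight sentences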


2021
Author(s): Reza Amini Gougeh

Abstract: Given the global crisis caused by the coronavirus infection (COVID-19), novel approaches that enable quick and accurate diagnosis are essential. Deep neural networks (DNNs) have shown outstanding capabilities in classifying various data types, including medical images, and can be used to build practical automatic diagnosis systems, helping the healthcare system reduce patients' waiting times. However, despite their acceptable accuracy and low false-negative rates in medical image classification, DNNs are vulnerable to adversarial attacks: such inputs can lead a model to misclassify. This paper investigates the effect of these attacks on five commonly used neural networks: ResNet-18, ResNet-50, Wide ResNet-16-8 (WRN-16-8), VGG-19, and Inception v3. Four adversarial attacks were used: the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), Carlini and Wagner (C&W), and the Spatial Transformation attack (ST). Average accuracy on the test images was 96.7% and decreased to 41.1%, 25.5%, 50.1%, and 56.3% under FGSM, PGD, C&W, and ST, respectively. The results indicate that ResNet-50 and WRN-16-8 were generally less affected by the attacks, so applying defense methods to these two models can further improve their performance against adversarial perturbations.
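
The benchmark itself is a simple loop: measure accuracy with and without an attack applied before scoring. A sketch follows, assuming attack callables like those on this page; the torchvision constructors are real, but the data loader and the attack are placeholders (torchvision has no WRN-16-8, so that model would come from elsewhere).

    import torch
    import torchvision.models as tvm

    def accuracy(model, loader, attack=None):
        """Top-1 accuracy, optionally after perturbing each batch."""
        model.eval()
        correct = total = 0
        for x, y in loader:
            if attack is not None:
                x = attack(model, x, y)          # perturb before scoring
            with torch.no_grad():
                correct += (model(x).argmax(1) == y).sum().item()
            total += y.numel()
        return correct / total

    # models = {"resnet18": tvm.resnet18(), "resnet50": tvm.resnet50(),
    #           "vgg19": tvm.vgg19(), "inception_v3": tvm.inception_v3()}
    # for name, m in models.items():
    #     print(name, accuracy(m, test_loader),
    #           accuracy(m, test_loader, attack=pgd_attack))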


2021 · Vol 2021 · pp. 1-13
Author(s): Yan Jiang, Guisheng Yin, Ye Yuan, Qingan Da

Deep learning technology (deeper, better-optimized network structures) and remote sensing imaging (increasingly multisource, multicategory remote sensing data) have both developed rapidly. Although deep convolutional neural networks (CNNs) have achieved state-of-the-art performance on remote sensing image (RSI) scene classification, adversarial attacks pose a potential security threat to CNN-based RSI scene classification. Adversarial samples can be generated by adding a small perturbation to the original images; feeding them to a CNN-based classifier causes it to misclassify with high confidence. To achieve a higher attack success rate against CNN-based scene classification, we introduce the projected gradient descent method to generate adversarial remote sensing images. We then select several mainstream CNN-based classifiers as the attacked models to demonstrate the effectiveness of our method. The experimental results show that the proposed method dramatically reduces classification accuracy under both untargeted and targeted attacks. Furthermore, we evaluate the quality of the generated adversarial images through visual and quantitative comparisons. The results show that our method generates imperceptible adversarial samples and has a stronger attack ability for RSI scene classification.
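
The core generator referred to here is standard L-infinity PGD. A minimal PyTorch sketch covering both the untargeted and the targeted variant is below; the epsilon, step size, and iteration count are illustrative, not the paper's settings.

    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=20, target=None):
        """L-inf PGD; if `target` is given, descend toward that class instead."""
        x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
        for _ in range(steps):
            x_adv.requires_grad_(True)
            logits = model(x_adv)
            if target is None:
                loss = F.cross_entropy(logits, y)       # raise the true-class loss
                step = alpha * torch.autograd.grad(loss, x_adv)[0].sign()
            else:
                loss = F.cross_entropy(logits, target)  # lower the target-class loss
                step = -alpha * torch.autograd.grad(loss, x_adv)[0].sign()
            x_adv = x_adv.detach() + step
            # Project back onto the eps-ball around x, then onto valid pixels.
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
        return x_adv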


2021
Author(s): Tiangang Li

Deep learning algorithms have achieved great success in the field of computer vision, but several studies have pointed out that deep learning models are vulnerable to adversarial-example attacks and can be led to false decisions. This challenges the further development of deep learning and urges researchers to pay more attention to the relationship between adversarial-example attacks and deep learning security. This work focuses on adversarial examples and optimizes their generation from the viewpoint of adversarial robustness, treating the perturbation added to an adversarial example as the optimization parameter. From the viewpoint of gradient optimization, we propose the RWR-NM-PGD attack algorithm, based on a random warm restart mechanism and improved Nesterov momentum. The algorithm introduces improved Nesterov momentum, exploiting its ability to accelerate convergence and improve the gradient update direction, to speed up the generation of adversarial examples. In addition, the random warm restart mechanism is used for optimization, and the projected gradient descent algorithm limits the range of the generated perturbation in each warm restart, yielding a better attack effect. Experiments on two public datasets show that the proposed algorithm improves the success rate of attacks on deep learning models without extra time cost. Compared with the benchmark attack methods, it achieves a better attack success rate on both normally trained and defense models. Our method has an average attack success rate of 46.3077%, which is 27.19% higher than I-FGSM and 9.27% higher than PGD. Attack results on 13 defense models show that the proposed algorithm is superior to the benchmark algorithms in attack universality and transferability.
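
A hedged sketch of the two ingredients the abstract names, a Nesterov-momentum gradient step inside PGD and random warm restarts that keep the best perturbation found so far. It illustrates the idea on NCHW image batches; it is not the authors' exact RWR-NM-PGD algorithm, and all hyperparameters are assumptions.

    import torch
    import torch.nn.functional as F

    def nm_pgd(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10, mu=1.0, restarts=3):
        """PGD with Nesterov momentum, restarted from random points in the eps-ball."""
        best = x.clone()
        best_loss = torch.full((x.size(0),), -float("inf"), device=x.device)
        for _ in range(restarts):
            delta = torch.empty_like(x).uniform_(-eps, eps)   # random warm restart
            g = torch.zeros_like(x)                           # momentum buffer
            for _ in range(steps):
                # Evaluate the gradient at the Nesterov look-ahead point.
                look = (x + delta + alpha * mu * g).clamp(0, 1).requires_grad_(True)
                loss = F.cross_entropy(model(look), y)
                grad = torch.autograd.grad(loss, look)[0]
                g = mu * g + grad / (grad.abs().mean(dim=(1, 2, 3), keepdim=True) + 1e-12)
                delta = (delta + alpha * g.sign()).clamp(-eps, eps)
            with torch.no_grad():                             # keep the best per example
                final = (x + delta).clamp(0, 1)
                loss_i = F.cross_entropy(model(final), y, reduction="none")
                improved = loss_i > best_loss
                best[improved] = final[improved]
                best_loss[improved] = loss_i[improved]
        return best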

