fast gradient
Recently Published Documents


TOTAL DOCUMENTS: 215 (five years: 44)

H-INDEX: 37 (five years: 1)

2021
Vol 4 (1)
Author(s): Raheel Siddiqi

An accurate and robust fruit image classifier can have a variety of real-life and industrial applications, including automated pricing, intelligent sorting, and information extraction. This paper demonstrates how adversarial training can enhance the robustness of fruit image classifiers. In the past, research in deep-learning-based fruit image classification has focused solely on attaining the highest possible accuracy of the model used in the classification process. However, even the highest-accuracy models remain susceptible to adversarial attacks, which poses serious problems for such systems in practice. Since a robust fruit classifier can only be developed with the aid of a fruit image dataset consisting of fruit images photographed in realistic settings (rather than images taken in controlled laboratory settings), a new dataset of over three thousand fruit images belonging to seven fruit classes is presented. Each image is carefully selected so that its classification poses a significant challenge for the proposed classifiers. Three Convolutional Neural Network (CNN)-based classifiers are proposed: (1) IndusNet, (2) fine-tuned VGG16, and (3) fine-tuned MobileNet. Fine-tuned VGG16 produced the best test set accuracy of 94.82%, compared with 92.32% and 94.28% for the other two models, respectively. Fine-tuned MobileNet proved to be the most efficient model, with a test time of 9 ms/step compared with 28 ms/step and 29 ms/step for the other two models. The empirical evidence presented demonstrates that adversarial training enables fruit image classifiers to resist attacks crafted through the Fast Gradient Sign Method (FGSM), while simultaneously improving the classifiers’ robustness against other noise forms, including ‘Gaussian’, ‘salt and pepper’, and ‘speckle’ noise. For example, when the amplitude of the FGSM perturbations was kept at 0.1, adversarial training improved the fine-tuned VGG16’s performance on adversarial images by around 18% (i.e., from 76.6% to 94.82%), while simultaneously improving the classifier’s performance on fruit images corrupted with ‘salt and pepper’ noise by around 8% (i.e., from 69.82% to 77.85%). Other reported results follow the same pattern and demonstrate the effectiveness of adversarial training as a means of enhancing the robustness of fruit image classifiers.
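The abstract reports results only; as a point of reference, a minimal sketch of FGSM-based adversarial training in TensorFlow might look like the following. The library choice, function names, optimizer handling, and the equal clean/adversarial mix are assumptions for illustration; only the epsilon of 0.1 comes from the abstract.

```python
import tensorflow as tf

def fgsm_perturb(model, images, labels, epsilon=0.1):
    """Craft FGSM adversarial examples: x_adv = x + epsilon * sign(grad_x loss)."""
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
    with tf.GradientTape() as tape:
        tape.watch(images)
        loss = loss_fn(labels, model(images, training=False))
    gradients = tape.gradient(loss, images)
    adversarial = images + epsilon * tf.sign(gradients)
    return tf.clip_by_value(adversarial, 0.0, 1.0)  # keep pixels in the valid range

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.1):
    """One training step on a mix of clean and FGSM-perturbed fruit images
    (the equal mix is an illustrative assumption, not the paper's recipe)."""
    adv_images = fgsm_perturb(model, images, labels, epsilon)
    mixed_images = tf.concat([images, adv_images], axis=0)
    mixed_labels = tf.concat([labels, labels], axis=0)
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
    with tf.GradientTape() as tape:
        loss = loss_fn(mixed_labels, model(mixed_images, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```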


2021
Vol 15
Author(s): Pengfei Xie, Shuhao Shi, Shuai Yang, Kai Qiao, Ningning Liang, ...

Deep neural networks (DNNs) have been shown to be vulnerable to adversarial examples. Black-box transfer attacks pose a massive threat to AI applications because they require no access to the target models. At present, the most effective black-box attack methods mainly rely on data enhancement, such as input transformation. Previous data enhancement frameworks only work with input transformations that preserve accuracy or loss; they fail for transformations that do not meet these conditions, such as transformations that lose information. To solve this problem, we propose a new noise data enhancement framework (NDEF), which transforms only the adversarial perturbation and thus avoids these issues. In addition, we introduce random erasing under this framework to prevent over-fitting of the adversarial examples. Experimental results show that the black-box attack success rate of our method, the Random Erasing Iterative Fast Gradient Sign Method (REI-FGSM), is 4.2% higher than that of DI-FGSM across six models on average and 6.6% higher across three defense models. REI-FGSM can also be combined with other methods to achieve excellent performance: the attack performance of SI-FGSM improves by 22.9% on average when combined with REI-FGSM. Moreover, our combined version with DI-TI-MI-FGSM, i.e., DI-TI-MI-REI-FGSM, achieves an average attack success rate of 97.0% against three ensemble adversarial training models, which exceeds current iterative gradient-based attack methods. We also introduce Gaussian blur to demonstrate the compatibility of our framework.
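The abstract does not spell out the algorithm; a rough sketch of the central idea, applying random erasing to the accumulated perturbation rather than to the input image, could look like the following. The erasing fraction, step size, iteration count, and helper names are illustrative assumptions, not the authors' settings.

```python
import tensorflow as tf

def erase_patch(perturbation, erase_frac=0.2):
    """Zero out a random rectangular region of the perturbation (the input image
    itself is left untouched, so no information is lost from the input)."""
    _, h, w, _ = perturbation.shape
    eh, ew = int(h * erase_frac), int(w * erase_frac)
    top = tf.random.uniform([], 0, h - eh, dtype=tf.int32)
    left = tf.random.uniform([], 0, w - ew, dtype=tf.int32)
    rows, cols = tf.range(h), tf.range(w)
    in_patch = (rows[:, None] >= top) & (rows[:, None] < top + eh) \
             & (cols[None, :] >= left) & (cols[None, :] < left + ew)
    mask = 1.0 - tf.cast(in_patch, perturbation.dtype)
    return perturbation * mask[None, :, :, None]

def rei_fgsm(model, images, labels, epsilon=16/255, steps=10):
    """Iterative FGSM in which the perturbation is randomly erased before each
    gradient step, to reduce over-fitting to the surrogate model."""
    alpha = epsilon / steps
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
    perturbation = tf.zeros_like(images)
    for _ in range(steps):
        adv = tf.clip_by_value(images + erase_patch(perturbation), 0.0, 1.0)
        with tf.GradientTape() as tape:
            tape.watch(adv)
            loss = loss_fn(labels, model(adv, training=False))
        grad = tape.gradient(loss, adv)
        perturbation = tf.clip_by_value(perturbation + alpha * tf.sign(grad),
                                        -epsilon, epsilon)
    return tf.clip_by_value(images + perturbation, 0.0, 1.0)
```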


2021
pp. 1-12
Author(s): Bo Yang, Kaiyong Xu, Hengjun Wang, Hengwei Zhang

Deep neural networks (DNNs) are vulnerable to adversarial examples, which are crafted by adding small, human-imperceptible perturbations to the original images yet cause the model to output inaccurate predictions. Before DNNs are deployed, adversarial attacks can thus be an important method for evaluating and selecting robust models in safety-critical applications. However, under the challenging black-box setting, the attack success rate, i.e., the transferability of adversarial examples, still needs to be improved. Drawing on image augmentation methods, this paper finds that random transformation of image brightness can eliminate overfitting in the generation of adversarial examples and improve their transferability. In light of this phenomenon, this paper proposes an adversarial example generation method that can be integrated with Fast Gradient Sign Method (FGSM)-related methods to build a more robust gradient-based attack and to generate adversarial examples with better transferability. Extensive experiments on the ImageNet dataset demonstrate the effectiveness of the method. Whether on normally or adversarially trained networks, our method achieves a higher black-box attack success rate than other attack methods based on data augmentation. It is hoped that this method can help evaluate and improve the robustness of models.
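A minimal sketch of how a random brightness transformation might be folded into an iterative FGSM-style attack is given below; the brightness range, step size, and number of iterations are assumptions for illustration, as the abstract does not specify them.

```python
import tensorflow as tf

def brightness_ifgsm(model, images, labels, epsilon=16/255, steps=10, max_delta=0.2):
    """Iterative FGSM that computes each gradient on a brightness-shifted copy of
    the current adversarial image, to reduce overfitting to the source model."""
    alpha = epsilon / steps
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
    adv = tf.identity(images)
    for _ in range(steps):
        # Random brightness shift is used only for the gradient computation.
        shifted = tf.image.random_brightness(adv, max_delta=max_delta)
        with tf.GradientTape() as tape:
            tape.watch(shifted)
            loss = loss_fn(labels, model(shifted, training=False))
        grad = tape.gradient(loss, shifted)
        adv = adv + alpha * tf.sign(grad)
        adv = tf.clip_by_value(adv, images - epsilon, images + epsilon)
        adv = tf.clip_by_value(adv, 0.0, 1.0)
    return adv
```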


2021
Vol 2021 (1)
Author(s): Pavlo Haleta, Dmytro Likhomanov, Oleksandra Sokol

Recently, adversarial attacks have drawn the community’s attention as an effective tool to degrade the accuracy of neural networks. However, their actual use in the real world is limited. The main reason is that real-world machine learning systems, such as content filters or face detectors, often consist of multiple neural networks, each performing an individual task. To attack such a system, an adversarial example has to pass through many distinct networks at once, which is the major challenge addressed by this paper. In this paper, we investigate multitask adversarial attacks as a threat to real-world machine learning solutions. We provide a novel black-box adversarial attack that significantly outperforms the current state-of-the-art methods, such as the Fast Gradient Sign Method (FGSM) and the Basic Iterative Method (BIM, also known as Iterative FGSM), in the multitask setting.
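The abstract does not describe the proposed black-box attack itself, but the multitask difficulty it highlights can be illustrated with a simple white-box baseline in the spirit of FGSM that perturbs against several task networks at once. The equal loss weighting, epsilon, and names below are assumptions, not the paper's method.

```python
import tensorflow as tf

def multitask_fgsm(models_and_losses, images, targets, epsilon=8/255):
    """Single-step FGSM against several task networks at once: sum the per-task
    losses and perturb along the sign of the combined gradient."""
    with tf.GradientTape() as tape:
        tape.watch(images)
        total_loss = 0.0
        for (model, loss_fn), target in zip(models_and_losses, targets):
            total_loss += loss_fn(target, model(images, training=False))
    grad = tape.gradient(total_loss, images)
    adv = images + epsilon * tf.sign(grad)
    return tf.clip_by_value(adv, 0.0, 1.0)

# Usage sketch (hypothetical models): attack a classifier and a face scorer together.
# models_and_losses = [(classifier, tf.keras.losses.SparseCategoricalCrossentropy()),
#                      (face_scorer, tf.keras.losses.BinaryCrossentropy())]
# adv = multitask_fgsm(models_and_losses, images, [class_labels, face_labels])
```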


2021
Vol 2021
pp. 1-8
Author(s): Hyun Kwon

Deep neural networks perform well for image recognition, speech recognition, and pattern analysis. This type of neural network has also been used in the medical field, where it has displayed good performance in predicting or classifying patient diagnoses. An example is the U-Net model, which has demonstrated good performance in data segmentation, an important technology in the field of medical imaging. However, deep neural networks are vulnerable to adversarial examples. Adversarial examples are samples created by adding a small amount of noise to an original data sample in such a way that they appear to be normal data to human perception but are incorrectly classified by the classification model. Adversarial examples pose a significant threat in the medical field, as they can cause models to misidentify or misclassify patient diagnoses. In this paper, I propose an advanced adversarial training method to defend against such adversarial examples. An advantage of the proposed method is that it creates a wide variety of adversarial examples for use in training, generated by the fast gradient sign method (FGSM) for a range of epsilon values. A U-Net model trained on these diverse adversarial examples will be more robust to unknown adversarial examples. Experiments were conducted on the ISBI 2012 dataset, with TensorFlow as the machine learning library. According to the experimental results, the proposed method builds a model that demonstrates segmentation robustness against adversarial examples by reducing the pixel error between the original labels and the adversarial examples to an average of 1.45.
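A minimal sketch, assuming a Keras U-Net with sigmoid outputs, of how FGSM adversarial examples for a range of epsilon values could be pooled for such training; the specific epsilon values and the binary cross-entropy loss are illustrative assumptions.

```python
import tensorflow as tf

def multi_epsilon_fgsm_pool(unet, images, masks, epsilons=(0.01, 0.05, 0.1, 0.2)):
    """Build a pool of FGSM adversarial examples at several epsilon values so the
    segmentation model is trained on perturbations of varying strength."""
    loss_fn = tf.keras.losses.BinaryCrossentropy()
    with tf.GradientTape() as tape:
        tape.watch(images)
        loss = loss_fn(masks, unet(images, training=False))
    grad_sign = tf.sign(tape.gradient(loss, images))
    adv_images = tf.concat(
        [tf.clip_by_value(images + eps * grad_sign, 0.0, 1.0) for eps in epsilons],
        axis=0)
    adv_masks = tf.tile(masks, [len(epsilons), 1, 1, 1])  # labels stay unchanged
    return adv_images, adv_masks
```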


Author(s): Eaman T. Karim, Miao He, Ahmed Salhoumi, Leonid V. Zhigilei, Peter K. Galenko

The results of molecular dynamics (MD) simulations of the crystallization process in one-component materials and solid solution alloys reveal a complex temperature dependence of the velocity of the crystal–liquid interface, featuring an increase up to a maximum at 10–30% undercooling below the equilibrium melting temperature, followed by a gradual decrease of the velocity at deeper levels of undercooling. At the qualitative level, such non-monotonic behaviour of the crystallization front velocity is consistent with the diffusion-controlled crystallization process described by the Wilson–Frenkel model, where the almost linear increase of the interface velocity in the vicinity of the melting temperature is defined by the growth of the thermodynamic driving force for the phase transformation, while the decrease in atomic mobility with further increase of the undercooling drives the velocity through the maximum and into a gradual decrease at lower temperatures. At the quantitative level, however, the diffusional model fails to describe the results of MD simulations over the whole range of temperatures with a single set of parameters for some of the model materials. The limited ability of the existing theoretical models to adequately describe the MD results is illustrated in the present work for two materials, chromium and silicon. It is also demonstrated that the MD results can be well described by the solution following from the hodograph equation, previously derived from the kinetic phase-field model (kinetic PFM) in the sharp-interface limit. The ability of the hodograph equation to describe the predictions of MD simulations over the whole range of temperatures is related to the introduction of slow (phase field) and fast (gradient flow) variables into the original kinetic PFM from which the hodograph equation is obtained. The slow phase-field variable is responsible for the description of data at small undercoolings, and the fast gradient-flow variable accounts for local non-equilibrium effects at high undercoolings. The introduction of these two types of variables makes the solution of the hodograph equation sufficiently flexible for a reliable description of all nonlinearities of the kinetic curves predicted in MD simulations of Cr and Si. This article is part of the theme issue ‘Transport phenomena in complex systems (part 1)’.
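For reference, the diffusion-controlled (Wilson–Frenkel) growth law referred to above is commonly written in the following form; the exact parameterization used in the MD study may differ.

```latex
% Wilson–Frenkel diffusion-limited growth velocity: the bracketed driving-force
% term grows with undercooling while the Arrhenius mobility factor decays,
% producing the maximum in V(T) below the melting temperature.
V(T) = V_0 \exp\!\left(-\frac{E_D}{k_B T}\right)
       \left[\, 1 - \exp\!\left(-\frac{\Delta G(T)}{k_B T}\right) \right]
```

Here V_0 is a kinetic prefactor, E_D an activation energy for diffusion, k_B the Boltzmann constant, and ΔG(T) the free-energy difference between liquid and crystal, which vanishes at the equilibrium melting temperature.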


Sensors
2021
Vol 21 (14)
pp. 4772
Author(s): Richard N. M. Rudd-Orthner, Lyudmila Mihaylova

A repeatable and deterministic non-random weight initialization method for the convolutional layers of neural networks is examined with the Fast Gradient Sign Method (FGSM). The FGSM approach is used as a technique to measure the effect of the initialization under controlled distortions in transferred learning, while varying the numerical similarity of the datasets. The focus is on convolutional layers, with earlier learning induced through the use of striped forms for image classification. This provided higher accuracy in the first epoch, with improvements of between 3 and 5% in a well-known benchmark model and of about 10% in a color image dataset (MTARSI2) using a dissimilar model architecture. The proposed method is robust compared with limit-based optimization approaches such as Glorot/Xavier and He initialization. Arguably the approach forms a new category of weight initialization methods, as a number-sequence substitution for random numbers without any tether to the dataset. When examined under the FGSM approach with transferred learning at higher distortions (numerically dissimilar datasets), the proposed method is less compromised against the original cross-validation dataset, retaining ~31% accuracy instead of ~9%. This indicates higher retention of the original fitting in transferred learning.
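The abstract describes the initializer only at a high level ("a number sequence substitution of random numbers"); purely as an illustration of that category, and not the authors' actual scheme, a deterministic, repeatable convolutional-kernel initializer might be sketched as follows. The golden-ratio sequence and Glorot-style scaling are assumptions.

```python
import numpy as np
import tensorflow as tf

def deterministic_conv_kernel(shape, dtype=tf.float32):
    """Repeatable, non-random kernel initializer: substitutes a deterministic
    number sequence for random draws, scaled to the Glorot-uniform range.
    Illustrative only; not the scheme proposed in the paper."""
    shape = tuple(int(d) for d in shape)            # (kh, kw, in_ch, out_ch)
    fan_in = shape[0] * shape[1] * shape[2]
    fan_out = shape[0] * shape[1] * shape[3]
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    n = int(np.prod(shape))
    # Repeatable sequence in [0, 1): fractional parts of multiples of the golden ratio.
    seq = np.modf(np.arange(1, n + 1) * 0.6180339887498949)[0]
    values = (2.0 * seq - 1.0) * limit              # map to [-limit, limit]
    return tf.constant(values.reshape(shape), dtype=dtype)

# Usage: plug in as a fixed, repeatable kernel initializer for a Conv2D layer.
layer = tf.keras.layers.Conv2D(
    32, 3, kernel_initializer=lambda shape, dtype=None: deterministic_conv_kernel(shape))
```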

