Sign Method: Recently Published Documents

TOTAL DOCUMENTS: 38 (five years: 27)
H-INDEX: 4 (five years: 1)

2022, pp. 155005942110708
Author(s): Ayse Nur Ozdag Acarli, Ayse Deniz Elmali, Nermin Gorkem Sirin, Betul Baykan, Nerses Bebek

Introduction. Although ictal blinking is significantly more frequent in generalized epilepsy, it has been reported as a rare but useful lateralizing sign in focal seizures when it is not associated with facial clonic twitching. This study aimed to raise awareness of eye blinking as a semiological lateralizing sign. Method. Our database covering an 11-year period was reviewed retrospectively to identify patients who had ictal blinking associated with focal seizures. Results. Among 632 patients, 14 (2.2%), who had 3 to 13 (7 ± 3) seizures during video-EEG monitoring, were included. Twenty-five percent of all 92 seizures displayed ictal blinking, and each patient had one to five seizures with ictal blinking. Ictal blinking was unilateral in 17%, asymmetrical in 22%, and symmetrical in 61%. The blinking appeared with a mean latency of 6.3 s (range 0-39) after clinical seizure onset and localized most often to fronto-temporal regions, followed by frontal or occipital regions. Blinking was ipsilateral to the side of ictal scalp EEG lateralization in 83% (5/6) of the patients with unilateral/asymmetrical blinking. Remarkably, the exact lateralization and localization of ictal activity could not be determined via EEG in most of the patients with symmetrical blinking. Conclusions. Unilateral/asymmetrical blinking is one of the early components of a seizure and appears to be a useful lateralizing sign, often associated with fronto-temporal seizure onset. Symmetrical blinking, on the other hand, did not appear valuable for lateralization or localization of focal seizures. Future studies using invasive recordings and periocular electrodes are needed to assess the value of blinking for lateralization and localization.


2021, Vol 4 (1)
Author(s): Raheel Siddiqi

Abstract: An accurate and robust fruit image classifier can have a variety of real-life and industrial applications, including automated pricing, intelligent sorting, and information extraction. This paper demonstrates how adversarial training can enhance the robustness of fruit image classifiers. In the past, research in deep-learning-based fruit image classification has focused solely on attaining the highest possible accuracy of the model used in the classification process. However, even the highest-accuracy models remain susceptible to adversarial attacks, which pose serious problems for such systems in practice. As a robust fruit classifier can only be developed with the aid of a fruit image dataset consisting of fruit images photographed in realistic settings (rather than images taken in controlled laboratory settings), a new dataset of over three thousand fruit images belonging to seven fruit classes is presented. Each image is carefully selected so that its classification poses a significant challenge for the proposed classifiers. Three Convolutional Neural Network (CNN)-based classifiers are suggested: 1) IndusNet, 2) fine-tuned VGG16, and 3) fine-tuned MobileNet. Fine-tuned VGG16 produced the best test set accuracy of 94.82%, compared to the 92.32% and 94.28% produced by the other two models, respectively. Fine-tuned MobileNet proved to be the most efficient model, with a test time of 9 ms/step compared to test times of 28 ms/step and 29 ms/step for the other two models. The empirical evidence presented demonstrates that adversarial training enables fruit image classifiers to resist attacks crafted through the Fast Gradient Sign Method (FGSM), while simultaneously improving classifiers' robustness against other noise forms, including 'Gaussian', 'salt and pepper', and 'speckle' noise. For example, when the amplitude of the perturbations generated through FGSM was kept at 0.1, adversarial training improved the fine-tuned VGG16's performance on adversarial images by around 18% (i.e., from 76.6% to 94.82%), while simultaneously improving the classifier's performance on fruit images corrupted with 'salt and pepper' noise by around 8% (i.e., from 69.82% to 77.85%). Other reported results follow this pattern and demonstrate the effectiveness of adversarial training as a means of enhancing the robustness of fruit image classifiers.
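
As a concrete illustration of the adversarial training described in this abstract, the sketch below generates FGSM examples and trains a Keras classifier on a mix of clean and perturbed images. It is a minimal sketch, not the paper's exact implementation: the model, the sparse-label loss, and the epsilon of 0.1 (taken from the abstract) are assumptions.

```python
# Minimal sketch of FGSM-based adversarial training for an image classifier,
# assuming a Keras model with softmax outputs (e.g., a fine-tuned VGG16).
import tensorflow as tf

def fgsm_perturb(model, images, labels, epsilon=0.1):
    """Generate FGSM adversarial examples: x_adv = x + eps * sign(grad_x L)."""
    images = tf.convert_to_tensor(images)
    with tf.GradientTape() as tape:
        tape.watch(images)
        preds = model(images, training=False)
        loss = tf.keras.losses.sparse_categorical_crossentropy(labels, preds)
    grad = tape.gradient(loss, images)
    adv = images + epsilon * tf.sign(grad)
    return tf.clip_by_value(adv, 0.0, 1.0)  # keep pixels in a valid range

def adversarial_train_step(model, optimizer, images, labels, epsilon=0.1):
    """One training step on a 50/50 mix of clean and FGSM-perturbed images."""
    adv_images = fgsm_perturb(model, images, labels, epsilon)
    mixed_x = tf.concat([images, adv_images], axis=0)
    mixed_y = tf.concat([labels, labels], axis=0)
    with tf.GradientTape() as tape:
        preds = model(mixed_x, training=True)
        loss = tf.reduce_mean(
            tf.keras.losses.sparse_categorical_crossentropy(mixed_y, preds))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```

Mixing clean and perturbed batches in each step is one common way to trade off clean accuracy against robustness; the paper may use a different mixing schedule.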


2021, Vol 15
Author(s): Pengfei Xie, Shuhao Shi, Shuai Yang, Kai Qiao, Ningning Liang, ...

Deep neural networks (DNNs) have been proven vulnerable to adversarial examples. Black-box transfer attacks pose a massive threat to AI applications without requiring access to the target models. At present, the most effective black-box attack methods mainly adopt data enhancement techniques such as input transformation. Previous data enhancement frameworks only work with input transformations that preserve accuracy or loss; they fail for transformations that do not meet these conditions, such as transformations that lose information. To solve this problem, we propose a new noise data enhancement framework (NDEF), which transforms only the adversarial perturbation and thereby avoids these issues effectively. In addition, we introduce random erasing under this framework to prevent over-fitting of adversarial examples. Experimental results show that the black-box attack success rate of our method, the Random Erasing Iterative Fast Gradient Sign Method (REI-FGSM), is on average 4.2% higher than that of DI-FGSM across six models and 6.6% higher across three defense models. REI-FGSM can be combined with other methods to achieve excellent performance. The attack performance of SI-FGSM improves by 22.9% on average when combined with REI-FGSM. Moreover, our combination with DI-TI-MI-FGSM, i.e., DI-TI-MI-REI-FGSM, achieves an average attack success rate of 97.0% against three ensemble adversarially trained models, exceeding current gradient-based iterative attack methods. We also introduce Gaussian blur to demonstrate the compatibility of our framework.
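
The abstract does not give implementation details, but one plausible reading of "random erasing applied to the adversarial perturbation" inside an iterative FGSM loop is sketched below in TensorFlow. The erasing box size, step size, and iteration count are illustrative assumptions, not the authors' settings.

```python
# Illustrative sketch: at each iteration, a random rectangle of the current
# perturbation is zeroed out before the gradient step, so the attack does not
# over-fit any single perturbation pattern.
import tensorflow as tf

def random_erase(delta, min_frac=0.1, max_frac=0.3):
    """Zero out a random rectangle of the perturbation tensor (N, H, W, C)."""
    _, h, w, _ = delta.shape
    eh = tf.random.uniform([], int(h * min_frac), int(h * max_frac), tf.int32)
    ew = tf.random.uniform([], int(w * min_frac), int(w * max_frac), tf.int32)
    top = tf.random.uniform([], 0, h - eh, tf.int32)
    left = tf.random.uniform([], 0, w - ew, tf.int32)
    rows = tf.logical_and(tf.range(h) >= top, tf.range(h) < top + eh)
    cols = tf.logical_and(tf.range(w) >= left, tf.range(w) < left + ew)
    box = tf.cast(tf.logical_and(rows[:, None], cols[None, :]), delta.dtype)
    return delta * (1.0 - box[None, :, :, None])

def rei_fgsm(model, x, y, eps=16/255, steps=10):
    """Iterative FGSM where only the perturbation is randomly erased each step."""
    alpha = eps / steps
    delta = tf.zeros_like(x)
    for _ in range(steps):
        x_in = x + random_erase(delta)
        with tf.GradientTape() as tape:
            tape.watch(x_in)
            loss = tf.keras.losses.sparse_categorical_crossentropy(y, model(x_in))
        grad = tape.gradient(loss, x_in)
        delta = tf.clip_by_value(delta + alpha * tf.sign(grad), -eps, eps)
    return tf.clip_by_value(x + delta, 0.0, 1.0)
```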


2021, Vol 18 (1), pp. 1-8
Author(s): Ansam Kadhim, Salah Al-Darraji

Face recognition is the technology that verifies or recognizes faces from images, videos, or real-time streams. It can be used in security or employee attendance systems. Face recognition systems may encounter attacks that reduce their ability to recognize faces properly; for example, noisy images mixed with the original ones can confuse the results. Various attacks exploit this weakness, such as the Fast Gradient Sign Method (FGSM), DeepFool, and Projected Gradient Descent (PGD). This paper proposes a method to protect face recognition systems against these attacks by distorting images with the different attacks and then training the recognition deep network model, specifically a Convolutional Neural Network (CNN), on both the original and the distorted images. Diverse experiments were conducted using combinations of original and distorted images to test the effectiveness of the system. The system achieved an accuracy of 93% under the FGSM attack, 97% under DeepFool, and 95% under PGD.
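
A hedged sketch of the defense pattern described above: attack the training images (PGD is shown here as one of the attacks the abstract lists) and train the CNN on the pooled clean and distorted images. The attack parameters and helper names are assumptions for illustration, not the paper's configuration.

```python
# Sketch: generate distorted training images with PGD and feed a mixed stream
# of clean and attacked examples to the recognition network.
import tensorflow as tf

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Projected Gradient Descent inside an L-infinity ball of radius eps."""
    x_adv = x + tf.random.uniform(tf.shape(x), -eps, eps)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            tape.watch(x_adv)
            loss = tf.keras.losses.sparse_categorical_crossentropy(y, model(x_adv))
        grad = tape.gradient(loss, x_adv)
        x_adv = x_adv + alpha * tf.sign(grad)
        x_adv = tf.clip_by_value(x_adv, x - eps, x + eps)  # project back to the ball
        x_adv = tf.clip_by_value(x_adv, 0.0, 1.0)
    return x_adv

def build_mixed_dataset(model, dataset, attacks):
    """Yield (image, label) batches containing clean and attacked copies."""
    for x, y in dataset:
        yield x, y
        for attack in attacks:
            yield attack(model, x, y), y

# Usage (hypothetical names): train the face-recognition CNN on the mixed stream,
# e.g. model.fit(build_mixed_dataset(model, train_batches, [pgd_attack]), ...)
```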


2021, pp. 1-12
Author(s): Bo Yang, Kaiyong Xu, Hengjun Wang, Hengwei Zhang

Deep neural networks (DNNs) are vulnerable to adversarial examples, which are crafted by adding small, human-imperceptible perturbations to the original images but cause the model to output inaccurate predictions. Before DNNs are deployed, adversarial attacks can thus be an important method to evaluate and select robust models in safety-critical applications. However, under the challenging black-box setting, the attack success rate, i.e., the transferability of adversarial examples, still needs to be improved. Building on image augmentation methods, this paper finds that random transformation of image brightness can eliminate overfitting in the generation of adversarial examples and improve their transferability. In light of this phenomenon, this paper proposes an adversarial example generation method that can be integrated with Fast Gradient Sign Method (FGSM)-related methods to build a more robust gradient-based attack and to generate adversarial examples with better transferability. Extensive experiments on the ImageNet dataset demonstrate the effectiveness of this method. Whether on normally or adversarially trained networks, our method achieves a higher success rate for black-box attacks than other attack methods based on data augmentation. It is hoped that this method can help evaluate and improve the robustness of models.
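
The brightness-transformation idea can be pictured with the following TensorFlow sketch, which randomly shifts image brightness before each gradient step of a momentum-based iterative FGSM attack. The brightness range, step size, and momentum factor are illustrative assumptions rather than the paper's reported settings.

```python
# Sketch: random brightness shift applied before each gradient step so the
# perturbation does not over-fit the source model (better transferability).
import tensorflow as tf

def brightness_iterative_fgsm(model, x, y, eps=16/255, steps=10,
                              max_delta=0.2, momentum=1.0):
    alpha = eps / steps
    g = tf.zeros_like(x)            # accumulated (momentum) gradient
    x_adv = tf.identity(x)
    for _ in range(steps):
        x_t = tf.image.random_brightness(x_adv, max_delta)  # random brightness shift
        with tf.GradientTape() as tape:
            tape.watch(x_t)
            loss = tf.keras.losses.sparse_categorical_crossentropy(y, model(x_t))
        grad = tape.gradient(loss, x_t)
        # momentum accumulation on normalized gradients (simplified MI-FGSM style)
        g = momentum * g + grad / (tf.reduce_mean(tf.abs(grad)) + 1e-12)
        x_adv = x_adv + alpha * tf.sign(g)
        x_adv = tf.clip_by_value(x_adv, x - eps, x + eps)
        x_adv = tf.clip_by_value(x_adv, 0.0, 1.0)
    return x_adv
```

Because the brightness shift is additive, the gradient with respect to the transformed image equals the gradient with respect to the adversarial image itself, so the update remains a valid FGSM-style step.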


2021, Vol 2021, pp. 1-8
Author(s): Hyun Kwon

Deep neural networks perform well in image recognition, speech recognition, and pattern analysis. This type of neural network has also been used in the medical field, where it has displayed good performance in predicting or classifying patient diagnoses. An example is the U-Net model, which has demonstrated good performance in data segmentation, an important technology in medical imaging. However, deep neural networks are vulnerable to adversarial examples. Adversarial examples are samples created by adding a small amount of noise to an original data sample in such a way that they appear normal to human perception but are incorrectly classified by the classification model. Adversarial examples pose a significant threat in the medical field, as they can cause models to misidentify or misclassify patient diagnoses. In this paper, I propose an advanced adversarial training method to defend against such adversarial examples. An advantage of the proposed method is that it creates a wide variety of adversarial examples for use in training, generated by the fast gradient sign method (FGSM) over a range of epsilon values. A U-Net model trained on these diverse adversarial examples will be more robust to unknown adversarial examples. Experiments were conducted using the ISBI 2012 dataset, with TensorFlow as the machine learning library. According to the experimental results, the proposed method builds a model that demonstrates segmentation robustness against adversarial examples by reducing the pixel error between the original labels and the adversarial examples to an average of 1.45.
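
A rough sketch of how FGSM examples over a range of epsilon values could be folded into segmentation training, as the abstract describes. The epsilon list, the per-pixel binary cross-entropy loss, and the `unet` model name are assumptions, not the paper's exact setup.

```python
# Sketch: FGSM on a segmentation loss, repeated for several epsilon values to
# build a diverse adversarial training stream for a U-Net-style model.
import tensorflow as tf

def fgsm_segmentation(unet, images, masks, epsilon):
    """FGSM computed on a per-pixel binary cross-entropy segmentation loss."""
    images = tf.convert_to_tensor(images)
    with tf.GradientTape() as tape:
        tape.watch(images)
        preds = unet(images, training=False)
        loss = tf.keras.losses.binary_crossentropy(masks, preds)
    grad = tape.gradient(loss, images)
    return tf.clip_by_value(images + epsilon * tf.sign(grad), 0.0, 1.0)

def adversarial_training_set(unet, dataset, epsilons=(0.01, 0.05, 0.1, 0.2)):
    """Yield clean batches plus one FGSM batch per epsilon in the range."""
    for x, y in dataset:
        yield x, y
        for eps in epsilons:
            yield fgsm_segmentation(unet, x, y, eps), y
```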


2021
Author(s): Reza Amini Gougeh

Abstract: Considering the global crisis of coronavirus infection (COVID-19), novel approaches that enable quick and accurate diagnosis are essential. Deep Neural Networks (DNNs) have shown outstanding capabilities in classifying various data types, including medical images, and can be used to build practical automatic diagnosis systems. DNNs can therefore help the healthcare system reduce patients' waiting times. However, despite the acceptable accuracy and low false-negative rates of DNNs in medical image classification, they are vulnerable to adversarial attacks; such inputs can lead the model to misclassify. This paper investigated the effect of these attacks on five commonly used neural networks: ResNet-18, ResNet-50, Wide ResNet-16-8 (WRN-16-8), VGG-19, and Inception v3. Four adversarial attacks, namely the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), Carlini and Wagner (C&W), and the Spatial Transformations Attack (ST), were used in this investigation. Average accuracy on test images was 96.7% and decreased to 41.1%, 25.5%, 50.1%, and 56.3% under FGSM, PGD, C&W, and ST, respectively. The results indicate that ResNet-50 and WRN-16-8 were generally less affected by the attacks; therefore, using defense methods with these two models can enhance their performance when encountering adversarial perturbations.
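
The evaluation protocol implied here, measuring each pretrained network's accuracy on clean test images and again under an attack, can be sketched as follows. The FGSM epsilon and the model/data names in the usage comment are placeholders, not the study's configuration.

```python
# Sketch: compare clean vs. adversarial top-1 accuracy for a set of classifiers.
import tensorflow as tf

def accuracy(model, dataset, attack=None):
    """Top-1 accuracy; if `attack` is given, images are perturbed first."""
    metric = tf.keras.metrics.SparseCategoricalAccuracy()
    for x, y in dataset:
        if attack is not None:
            x = attack(model, x, y)
        metric.update_state(y, model(x, training=False))
    return metric.result().numpy()

def fgsm(model, x, y, eps=0.03):
    """Single-step FGSM perturbation for the evaluation above."""
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = tf.keras.losses.sparse_categorical_crossentropy(y, model(x))
    grad = tape.gradient(loss, x)
    return tf.clip_by_value(x + eps * tf.sign(grad), 0.0, 1.0)

# Hypothetical usage: report clean vs. attacked accuracy per candidate network.
# for name, net in {"ResNet-50": resnet50, "VGG-19": vgg19}.items():
#     print(name, accuracy(net, test_batches), accuracy(net, test_batches, fgsm))
```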


Sensors, 2021, Vol 21 (14), pp. 4772
Author(s): Richard N. M. Rudd-Orthner, Lyudmila Mihaylova

A repeatable and deterministic non-random weight initialization method for the convolutional layers of neural networks is examined with the Fast Gradient Sign Method (FGSM). The FGSM approach is used as a technique to measure the initialization effect under controlled distortions in transferred learning while varying the numerical similarity of the datasets. The focus is on convolutional layers with earlier learning induced through the use of striped forms for image classification, which provided higher accuracy in the first epoch, with improvements of between 3% and 5% in a well-known benchmark model and of roughly 10% in a color image dataset (MTARSI2) using a dissimilar model architecture. The proposed method is robust in comparison with limit optimization approaches such as Glorot/Xavier and He initialization. Arguably, the approach belongs to a new category of weight initialization methods, substituting a number sequence for random numbers without being tethered to the dataset. When examined under the FGSM approach with transferred learning, the proposed method, when used with higher distortions (numerically dissimilar datasets), is less compromised against the original cross-validation dataset, at roughly 31% accuracy instead of roughly 9%. This indicates higher retention of the original fitting in transferred learning.
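
The abstract does not specify the number sequence used, so the sketch below only illustrates the general idea of a repeatable, non-random Keras weight initializer. The Van der Corput sequence and the scale factor are stand-in assumptions, not the authors' scheme.

```python
# Illustrative deterministic initializer: weights come from a fixed, repeatable
# sequence instead of random draws, so every run starts from identical weights.
import numpy as np
import tensorflow as tf

class DeterministicInitializer(tf.keras.initializers.Initializer):
    """Fills weights from a repeatable number sequence (no randomness)."""

    def __init__(self, scale=0.05):
        self.scale = scale

    def __call__(self, shape, dtype=None):
        dims = [int(d) for d in shape]
        n = int(np.prod(dims))
        # Van der Corput base-2 sequence mapped to [-scale, scale]; any other
        # repeatable sequence could be substituted here.
        seq = np.array([self._vdc(i + 1) for i in range(n)])
        values = (2.0 * seq - 1.0) * self.scale
        return tf.constant(values.reshape(dims), dtype=dtype or tf.float32)

    @staticmethod
    def _vdc(n, base=2):
        v, denom = 0.0, 1.0
        while n:
            n, rem = divmod(n, base)
            denom *= base
            v += rem / denom
        return v

# Usage: tf.keras.layers.Conv2D(32, 3, kernel_initializer=DeterministicInitializer())
```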

