Universal adversarial attacks on deep neural networks for medical image classification


2021, Vol 21 (1)
Author(s): Hokuto Hirano, Akinori Minagi, Kazuhiro Takemoto

Abstract Background Deep neural networks (DNNs) are widely investigated in medical image classification to achieve automated support for clinical diagnosis. It is necessary to evaluate the robustness of medical DNN tasks against adversarial attacks, as high-stakes decisions will be made based on the diagnosis. Several previous studies have considered simple adversarial attacks. However, the vulnerability of DNNs to more realistic and higher-risk attacks, such as the universal adversarial perturbation (UAP), a single perturbation that can induce DNN failure across most classification tasks, has not yet been evaluated. Methods We focus on three representative DNN-based medical image classification tasks (i.e., skin cancer, referable diabetic retinopathy, and pneumonia classification) and investigate the vulnerability of DNNs with seven representative model architectures to UAPs. Results We demonstrate that DNNs are vulnerable both to nontargeted UAPs, which cause a task failure in which an input is assigned an incorrect class, and to targeted UAPs, which cause the DNN to classify an input into a specific class. The almost imperceptible UAPs achieved >80% success rates for nontargeted and targeted attacks. The vulnerability to UAPs barely depended on the model architecture. Moreover, we discovered that adversarial retraining, which is known to be an effective adversarial defense, increased the robustness of DNNs against UAPs in only limited cases. Conclusion Contrary to previous assumptions, the results indicate that DNN-based clinical diagnosis is easier to deceive via adversarial attacks than expected. Adversaries can cause failed diagnoses at lower cost (e.g., without needing to consider the data distribution); moreover, targeted attacks allow them to control the diagnostic outcome. The effects of adversarial defenses may be limited. Our findings emphasize that more careful consideration is required when developing DNNs for medical imaging and their practical applications.
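For context, the nontargeted and targeted success rates reported above can be measured with a short evaluation loop. Below is a minimal PyTorch sketch, assuming a trained `model`, a `loader` over test images, and a precomputed perturbation tensor `uap` (all hypothetical names). One common convention, assumed here, counts a nontargeted attack as successful when the prediction changes, and a targeted attack as successful when the perturbed prediction equals the attacker's chosen class.

```python
import torch

@torch.no_grad()
def attack_success_rates(model, loader, uap, target=None, device="cuda"):
    """Non-targeted rate: fraction of inputs whose prediction changes when
    the UAP is added. Targeted rate: fraction classified as `target`."""
    model.to(device).eval()
    changed, hit, n = 0, 0, 0
    for x, _ in loader:
        x = x.to(device)
        clean = model(x).argmax(dim=1)
        adv = model((x + uap).clamp(0, 1)).argmax(dim=1)  # keep pixels in [0, 1]
        changed += (adv != clean).sum().item()
        if target is not None:
            hit += (adv == target).sum().item()
        n += x.size(0)
    return changed / n, (hit / n if target is not None else None)
```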



2021
Author(s): Akinori Minagi, Hokuto Hirano, Kazuhiro Takemoto

Abstract Transfer learning from natural images is widely used in deep neural networks (DNNs) for medical image classification to achieve computer-aided clinical diagnosis. Although the adversarial vulnerability of DNNs hinders practical applications owing to the high stakes of diagnosis, adversarial attacks are expected to be limited because the training data, which are often required for such attacks, are generally unavailable for reasons of security and privacy preservation. Nevertheless, we hypothesized that adversarial attacks are also possible using natural images, because pre-trained models do not change significantly after fine-tuning. We focused on three representative DNN-based medical image classification tasks (i.e., skin cancer, referable diabetic retinopathy, and pneumonia classification) and investigated whether medical DNN models built with transfer learning are vulnerable to universal adversarial perturbations (UAPs) generated from natural images. UAPs from natural images were effective for both non-targeted and targeted attacks. Their performance was significantly higher than that of random controls, although slightly lower than that of UAPs generated from the training images. Vulnerability to UAPs from natural images was observed across different natural image datasets and across different model architectures. The use of transfer learning thus opens a security hole that decreases the reliability and safety of computer-based disease diagnosis. Training the model from random initialization (without transfer learning) reduced the performance of UAPs from natural images; however, it did not completely remove the vulnerability to UAPs. Vulnerability to UAPs generated from natural images may become a notable security threat.
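To make the attacked setting concrete, here is a minimal sketch of the transfer-learning pipeline such attacks exploit: an ImageNet-pretrained backbone whose classification head is replaced and fine-tuned on a small medical dataset. Because most of the pretrained feature extractor survives fine-tuning largely intact, perturbations crafted against natural-image features can remain effective. The two-class head, `train_loader`, and hyperparameters are placeholder assumptions, not the paper's exact configuration (uses the torchvision ≥ 0.13 weights API).

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained backbone with a new two-class head
# (e.g., pneumonia vs. normal): a standard transfer-learning setup.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def finetune(model, train_loader, epochs=3, device="cuda"):
    """Fine-tune all layers on medical images; early-layer features
    typically change little, which is what lets natural-image UAPs transfer."""
    model.to(device).train()
    for _ in range(epochs):
        for x, y in train_loader:  # train_loader: medical-image DataLoader (placeholder)
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
    return model
```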


Algorithms, 2020, Vol 13 (11), pp. 268
Author(s): Hokuto Hirano, Kazuhiro Takemoto

Deep neural networks (DNNs) are vulnerable to adversarial attacks. In particular, a single perturbation known as the universal adversarial perturbation (UAP) can foil most classification tasks conducted by DNNs. Thus, different methods for generating UAPs are required to fully evaluate the vulnerability of DNNs. A realistic evaluation would be with cases that consider targeted attacks; wherein the generated UAP causes the DNN to classify an input into a specific class. However, the development of UAPs for targeted attacks has largely fallen behind that of UAPs for non-targeted attacks. Therefore, we propose a simple iterative method to generate UAPs for targeted attacks. Our method combines the simple iterative method for generating non-targeted UAPs and the fast gradient sign method for generating a targeted adversarial perturbation for an input. We applied the proposed method to state-of-the-art DNN models for image classification and proved the existence of almost imperceptible UAPs for targeted attacks; further, we demonstrated that such UAPs can be easily generated.
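The combination described above, an iterative universal-perturbation scheme driven by per-input FGSM steps toward the target class, might be sketched as follows. This is a simplified, batched approximation rather than the paper's exact algorithm (which updates per image and only on inputs not yet classified as the target); `model`, `loader`, the input size, and the hyperparameters are assumptions.

```python
import torch
import torch.nn.functional as F

def targeted_uap(model, loader, target, xi=10/255, step=1/255, epochs=5, device="cuda"):
    """Accumulate FGSM steps toward `target` into one universal perturbation v,
    projecting v back onto an L-infinity ball of radius xi after each update."""
    model.to(device).eval()
    v = torch.zeros(1, 3, 224, 224, device=device)  # assumes 224x224 RGB inputs
    for _ in range(epochs):
        for x, _ in loader:
            x = x.to(device)
            x_adv = (x + v).clamp(0, 1).requires_grad_(True)
            tgt = torch.full((x.size(0),), target, dtype=torch.long, device=device)
            loss = F.cross_entropy(model(x_adv), tgt)
            grad = torch.autograd.grad(loss, x_adv)[0]
            # FGSM step that decreases the loss toward the target class,
            # averaged over the batch so v stays input-agnostic
            v = (v - step * grad.mean(0, keepdim=True).sign()).clamp(-xi, xi).detach()
    return v
```

A non-targeted variant follows the same loop but ascends the loss on the model's current predictions instead of descending it toward a fixed target class.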



Author(s): Priyanka Jindal, Dharmender Kumar

Medical imaging has been utilized in various forms in clinical applications for better diagnosis and treatment of diseases. These imaging technologies help in recognizing the body's ailing regions with ease. In addition, they cause no pain to the patient, as the interior of the body can be seen without opening much of it. Nowadays, various image processing techniques, such as segmentation, registration, classification, restoration, and contrast enhancement, exist to enhance image quality. Among these techniques, classification plays an important role in computer-aided diagnosis, enabling easy analysis and interpretation of images. Image classification not only identifies diseases with high accuracy but also determines which part of the body is affected by the disease. The use of neural network classifiers in medical imaging applications has opened new opportunities for researchers, spurring them to excel in this domain. Moreover, accuracy in clinical practice and the development of more sophisticated equipment are necessary in the medical field for more accurate and quicker decisions. With this in mind, researchers have focused on adding intelligence to classification methods using meta-heuristic techniques. This paper provides a brief survey of the role of artificial neural networks in medical image classification, the various types of meta-heuristic algorithms applied for optimization, and their hybridizations. A comparative analysis showing the effect of applying these algorithms on classification parameters such as accuracy, sensitivity, and specificity is also provided. From the comparison, it can be observed that these methods significantly improve these parameters, aiding the diagnosis and treatment of a number of diseases at an early stage.
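As a reference point for the comparison the survey describes, the three parameters can be computed directly from a binary confusion matrix; a minimal sketch (with hypothetical label arrays) follows.

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall on positives), and specificity
    (recall on negatives) for a binary classifier, from raw 0/1 labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn) if tp + fn else float("nan"),
        "specificity": tn / (tn + fp) if tn + fp else float("nan"),
    }

# Example: binary_metrics([1, 0, 1, 1], [1, 0, 0, 1])
# -> accuracy 0.75, sensitivity 0.67, specificity 1.0
```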

