Deep learning model improves radiologists' performance in detection and classification of breast lesions

2021 · Vol. 33 (6) · pp. 682-693
Author(s): Yingshi Sun, Yuhong Qu, Dong Wang, Yi Li, ...
Author(s): Yong-Yeon Jo, Joon-myoung Kwon, Ki-Hyun Jeon, Yong-Hyeon Cho, Jae-Hyun Shin, ...

2021 · Vol. 2021 · pp. 1-11
Author(s): Dapeng Lang, Deyun Chen, Ran Shi, Yongjun He

Deep learning has been widely applied to image classification and recognition and has achieved strong practical results. In recent years, however, numerous studies have shown that the accuracy of classification models drops sharply when only subtle changes are made to the original inputs, enabling attacks on deep learning models. The main attack methods are as follows: perturbing pixels of the input in ways invisible to the human eye so that the model produces a wrong classification, and attaching an adversarial patch to the detection target to mislead the model into misclassifying it. These methods, however, involve considerable randomness and are of limited use in practical applications. Unlike previous work that perturbs traffic signs, our paper proposes a method that can successfully hide or misclassify vehicles in complex contexts. The method takes complex real-world scenarios into account and can perturb pictures taken with a camera or mobile phone so that a deep learning-based detector either fails to detect the vehicle or misclassifies it. To improve robustness, the position and size of the adversarial patch are adjusted for different detection models by introducing an attachment mechanism. Tests on different detectors show that a patch generated against a single target detection algorithm can also attack other detectors and transfers well. The experiments in this paper show that the proposed algorithm significantly lowers detector accuracy: under real-world conditions such as distance, lighting, viewing angle, and resolution, misclassification of the target is achieved by reducing the target's confidence score and exploiting the background, which greatly perturbs the detection results of the target detector. On the COCO 2017 dataset, the success rate of this algorithm reaches 88.7%.
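The abstract is not accompanied by code; the snippet below is only a minimal sketch of the confidence-suppression idea it describes, assuming a PyTorch setup in which a user-supplied score_fn returns a differentiable scalar summarizing the detector's confidence for the vehicle class. The names apply_patch, train_patch, and score_fn are illustrative assumptions, not the authors' implementation, which additionally adapts the patch's position and size per detector via the attachment mechanism.

```python
import torch

def apply_patch(images, patch, x, y):
    """Paste a square adversarial patch onto a batch of images at pixel (x, y)."""
    patched = images.clone()
    ph, pw = patch.shape[-2:]
    patched[:, :, y:y + ph, x:x + pw] = patch  # gradients flow back to the patch
    return patched

def train_patch(score_fn, images, patch_size=64, steps=300, lr=0.03, x=20, y=20):
    """Optimize a patch that suppresses the detector's confidence on the target.

    score_fn(images) is assumed to return a differentiable scalar aggregating the
    detector's objectness/class scores for the vehicle class (an assumption here).
    """
    patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        patched = apply_patch(images, patch.clamp(0, 1), x, y)
        loss = score_fn(patched)      # lower confidence -> vehicle hidden or misclassified
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return patch.detach().clamp(0, 1)
```

In practice, score_fn could wrap an off-the-shelf detector and sum the scores it assigns to the vehicle class, and the fixed (x, y) location used here would be replaced by a mechanism that tracks the target's bounding box across viewpoints, distances, and lighting conditions, as the paper describes.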


2021 · Vol. 32 · pp. S926-S927
Author(s): G. Toyokawa, Y. Yamada, N. Haratake, Y. Shiraishi, T. Takenaka, ...
