Integrate domain knowledge in training multi-task cascade deep learning model for benign–malignant thyroid nodule classification on ultrasound images

2021 ◽  
Vol 98 ◽  
pp. 104064
Author(s):  
Wenkai Yang ◽  
Yunyun Dong ◽  
Qianqian Du ◽  
Yan Qiang ◽  
Kun Wu ◽  
...  
2020 ◽  
Vol 185 ◽  
pp. 03021
Author(s):  
Meng Zhou ◽  
Rui Wang ◽  
Peng Fu ◽  
Yang Bai ◽  
Ligang Cui

As the most common malignancy of the endocrine system, thyroid cancer is usually diagnosed by discriminating malignant nodules from benign ones on ultrasonography, whose interpretation depends primarily on the subjective judgement of the radiologist. In this study, we propose a novel cascade deep learning model to provide automatic, objective diagnosis during ultrasound examination and assist radiologists in recognizing benign and malignant thyroid nodules. First, a simplified U-net is employed to automatically segment the region of interest (ROI) of the thyroid nodule in each frame of the ultrasound image. Then, to alleviate the limitation that medical training data are relatively small in size, an improved Conditional Variational Auto-Encoder (CVAE) that learns the probability distribution of the ROI images is trained to generate new images for data augmentation. Finally, ResNet-50 is trained with both the original and the generated ROI images. As a consequence, the deep learning model formed by cascading the trained U-net and the trained ResNet-50 achieves malignant thyroid nodule recognition with an accuracy of 87.4%, a sensitivity of 92%, and a specificity of 86.8%.
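As a rough illustration of the cascade described above, the sketch below wires a toy segmentation network to a ResNet-50 classifier in PyTorch. The simplified U-net stand-in, the ROI-cropping helper, and all tensor shapes are assumptions for demonstration only; the CVAE augmentation stage and the authors' actual architectures are not reproduced here.

```python
# Illustrative sketch of a segmentation -> classification cascade for
# thyroid-nodule ultrasound frames. NOT the authors' code; the network
# choices, helper functions, and shapes are assumptions for demonstration.
import torch
import torch.nn as nn
from torchvision import models


class TinyUNet(nn.Module):
    """Very small stand-in for the simplified U-net used for ROI segmentation."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.MaxPool2d(2))
        self.dec = nn.Sequential(nn.Upsample(scale_factor=2),
                                 nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, x):
        return torch.sigmoid(self.dec(self.enc(x)))   # per-pixel nodule probability


def crop_roi(frame, mask, threshold=0.5):
    """Hypothetical helper: crop the bounding box of the predicted nodule mask."""
    ys, xs = torch.where(mask[0, 0] > threshold)
    if len(ys) == 0:                                   # no nodule found: keep full frame
        return frame
    return frame[:, :, int(ys.min()):int(ys.max()) + 1,
                 int(xs.min()):int(xs.max()) + 1]


# Classification stage: ResNet-50 adapted to 2 classes (benign / malignant).
classifier = models.resnet50(weights=None)             # or a pretrained checkpoint
classifier.fc = nn.Linear(classifier.fc.in_features, 2)

segmenter = TinyUNet()
frame = torch.rand(1, 1, 256, 256)                     # one grayscale ultrasound frame
roi = crop_roi(frame, segmenter(frame))
roi = nn.functional.interpolate(roi, size=(224, 224)).repeat(1, 3, 1, 1)
logits = classifier(roi)                               # benign vs. malignant scores
```

In the full method the classifier would be trained on both real and CVAE-generated ROI crops; the snippet only shows the inference-time cascade.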


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Yubo Liu ◽  
Liangzhen Cheng

To investigate the clinical characteristics of patients with scapular fracture, a deep learning model was applied to patients' ultrasound images to locate the anesthesia point during scapular fracture surgery performed under regional nerve block. 100 patients with scapular fracture admitted to the hospital for emergency treatment were recruited. Patients in the algorithm group received ultrasound-guided regional nerve block puncture, while patients in the control group underwent anesthesia positioning based on traditional body surface anatomy. The scapular ultrasound images of the contrast group were used by the deep learning model to identify and analyze the anesthesia puncture sites. The scapular anatomy ultrasound images of these patients were extracted, a convolutional neural network model was trained and tested on them, and the model's performance was evaluated. It was found that the use of deep learning greatly improved image accuracy. In the algorithm group (artificial intelligence ultrasound positioning), the time from the puncture needle touching the skin to completion of the injection averaged 7.5 ± 2.07 minutes, whereas the operation time of the control group (anatomical positioning) averaged 10.2 ± 2.62 minutes, a significant difference between the two groups (p < 0.05). The method adopted in the contrast group had high positioning accuracy and a good anesthesia effect, and the patients had fewer postoperative complications (all P < 0.005). The deep learning model can effectively improve the accuracy of ultrasound images and assist the measurement and treatment of future clinical cases of scapular fracture. While improving medical efficiency, it can also accurately identify patient fractures, giving it great potential for improving the effect of surgical anesthesia.
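The abstract does not describe the network architecture, so the following is only a speculative sketch of one way a convolutional neural network could regress an anesthesia puncture point from a scapular ultrasound image; the model, loss, and data shapes are all illustrative assumptions, not the study's method.

```python
# Hypothetical sketch: regress a normalized (x, y) puncture point from a
# scapular ultrasound image. Architecture and training loop are illustrative
# assumptions only.
import torch
import torch.nn as nn


class PunctureSiteNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)        # normalized (x, y) coordinates

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x).flatten(1)))


model = PunctureSiteNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

# Dummy batch: 8 grayscale images with annotated puncture points in [0, 1].
images, targets = torch.rand(8, 1, 128, 128), torch.rand(8, 2)
for _ in range(3):                           # a few illustrative training steps
    optimizer.zero_grad()
    loss = criterion(model(images), targets)
    loss.backward()
    optimizer.step()
```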


2021 ◽  
Vol 11 ◽  
Author(s):  
Xianyu Zhang ◽  
Hui Li ◽  
Chaoyun Wang ◽  
Wen Cheng ◽  
Yuntao Zhu ◽  
...  

Background: Breast ultrasound is the first choice for breast tumor diagnosis in China, but the Breast Imaging Reporting and Data System (BI-RADS) categorization routinely used in the clinic often leads to unnecessary biopsy, and radiologists cannot predict the molecular subtypes that carry important pathological information to guide clinical treatment. Materials and Methods: This retrospective study collected breast ultrasound images from two hospitals and, after strict selection, formed training, test, and external test sets comprising 2,822, 707, and 210 ultrasound images, respectively. An optimized deep learning model (DLM) was constructed with the training set, and its performance was verified on both the test set and the external test set. Diagnostic results were compared with the BI-RADS categorization determined by radiologists. Breast cancers were divided into molecular subtypes according to hormone receptor (HR) and human epidermal growth factor receptor 2 (HER2) expression, and the DLM's ability to predict molecular subtypes was assessed in the test set. Results: In the test set, with pathological results as the gold standard, the accuracy, sensitivity, and specificity of the BI-RADS categorization were 85.6%, 98.7%, and 63.1%, respectively, whereas the DLM achieved 89.7%, 91.3%, and 86.9% on the same set. The area under the curve (AUC) was 0.96 for the test set and 0.90 for the external test set. In BI-RADS 4a patients, the diagnostic accuracy of the DLM was 92.86%; approximately 70.76% of these cases were judged to be benign tumors, theoretically reducing unnecessary biopsy by 67.86%, although the false negative rate was 10.4%. The DLM also showed good prediction of breast cancer molecular subtypes, with AUCs of 0.864, 0.811, and 0.837 for the triple-negative, HER2 (+), and HR (+) subtype predictions, respectively. Conclusion: This study showed that the DLM was highly accurate in recognizing breast tumors on ultrasound images and can greatly reduce unnecessary biopsy, especially in patients with BI-RADS 4a lesions. In addition, its predictive ability for molecular subtypes was satisfactory, which has specific clinical application value.
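As a minimal illustration of how the reported test-set metrics can be derived from model outputs against a pathology gold standard, the snippet below computes accuracy, sensitivity, specificity, and AUC with scikit-learn on synthetic stand-in data (not the study's data or model).

```python
# Illustrative computation of accuracy, sensitivity, specificity, and AUC
# against a pathology-confirmed gold standard. Synthetic data for demonstration.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)                  # 1 = malignant, 0 = benign
scores = np.clip(y_true * 0.6 + rng.normal(0.3, 0.25, size=500), 0, 1)
y_pred = (scores >= 0.5).astype(int)                   # threshold the model scores

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)                           # recall on malignant cases
specificity = tn / (tn + fp)
auc = roc_auc_score(y_true, scores)
print(f"acc={accuracy:.3f} sens={sensitivity:.3f} "
      f"spec={specificity:.3f} auc={auc:.3f}")
```

The per-subtype AUCs reported in the study would be obtained the same way, one binary comparison per molecular subtype.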


Electronics ◽  
2021 ◽  
Vol 10 (7) ◽  
pp. 850
Author(s):  
Pablo Zinemanas ◽  
Martín Rocamora ◽  
Marius Miron ◽  
Frederic Font ◽  
Xavier Serra

Deep learning models have improved cutting-edge technologies in many research areas, but their black-box structure makes it difficult to understand their inner workings and the rationale behind their predictions. This may lead to unintended effects, such as susceptibility to adversarial attacks or the reinforcement of biases. Despite the increasing interest in developing deep learning models that explain their decisions, research in the audio domain is still lacking. To reduce this gap, we propose a novel interpretable deep learning model for automatic sound classification, which explains its predictions based on the similarity of the input to a set of learned prototypes in a latent space. We leverage domain knowledge by designing a frequency-dependent similarity measure and by considering different time-frequency resolutions in the feature space. The proposed model achieves results comparable to those of state-of-the-art methods in three different sound classification tasks involving speech, music, and environmental audio. In addition, we present two automatic methods for pruning the proposed model that exploit its interpretability. Our system is open source and is accompanied by a web application for manual editing of the model, which allows for a human-in-the-loop debugging approach.
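A simplified sketch of the prototype idea follows: an input's latent representation is compared to learned prototypes with a frequency-weighted distance, and the resulting similarities both drive the classification and serve as the explanation. The shapes and the particular weighting are assumptions, not the paper's exact similarity measure.

```python
# Sketch of prototype-based classification in a latent time-frequency space.
# The frequency-dependent weighting is a simplified stand-in for the paper's
# similarity measure, not its exact formulation.
import torch
import torch.nn as nn


class PrototypeClassifier(nn.Module):
    def __init__(self, n_prototypes=10, n_classes=5, freq_bins=32, time_frames=16):
        super().__init__()
        # Learned prototypes live in the same latent space as the encoder output.
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, freq_bins, time_frames))
        # One learned weight per frequency bin (frequency-dependent similarity).
        self.freq_weights = nn.Parameter(torch.ones(freq_bins))
        self.out = nn.Linear(n_prototypes, n_classes)

    def forward(self, z):
        # z: (batch, freq_bins, time_frames) latent representation of the audio.
        diff = z.unsqueeze(1) - self.prototypes.unsqueeze(0)          # (B, P, F, T)
        d2 = (diff ** 2 * self.freq_weights.view(1, 1, -1, 1)).mean(dim=(2, 3))
        similarity = torch.exp(-d2)           # high when close to a prototype
        return self.out(similarity), similarity


model = PrototypeClassifier()
z = torch.randn(4, 32, 16)                    # latent features for 4 audio clips
logits, sims = model(z)
# `sims` can be inspected to explain a prediction by its nearest prototypes.
```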


2020 ◽  
Vol 1693 ◽  
pp. 012160
Author(s):  
Jiahao Xie ◽  
Lehang Guo ◽  
Chongke Zhao ◽  
Xiaolong Li ◽  
Ye Luo ◽  
...  
