Application of Deep Convolutional Neural Networks for Discriminating Benign, Borderline, and Malignant Serous Ovarian Tumors From Ultrasound Images

2021 ◽  
Vol 11 ◽  
Author(s):  
Huiquan Wang ◽  
Chunli Liu ◽  
Zhe Zhao ◽  
Chao Zhang ◽  
Xin Wang ◽  
...  

Objective: This study aimed to evaluate the performance of deep convolutional neural networks (DCNNs) in discriminating between benign, borderline, and malignant serous ovarian tumors (SOTs) on ultrasound (US) images.

Materials and Methods: This retrospective study included 279 pathology-confirmed SOT US images from 265 patients collected between March 2013 and December 2016. Two- and three-class classification tasks based on US images were proposed to classify benign, borderline, and malignant SOTs using a DCNN. The two-class classification task was divided into two subtasks: benign vs. borderline & malignant (task A) and borderline vs. malignant (task B). Five DCNN architectures, namely VGG16, GoogLeNet, ResNet34, MobileNet, and DenseNet, were trained, and model performance was tested before and after transfer learning. Model performance was analyzed using accuracy, sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC).

Results: The ResNet34 model achieved the best overall performance, including after transfer learning. When classifying benign and non-benign tumors, the AUC was 0.96, the sensitivity 0.91, and the specificity 0.91. When distinguishing malignant from borderline tumors, the AUC was 0.91, the sensitivity 0.98, and the specificity 0.74. The model had an overall accuracy of 0.75 for directly classifying the three categories of benign, borderline, and malignant SOTs, and a sensitivity of 0.89 for malignant tumors, exceeding the senior ultrasonographer's overall diagnostic accuracy of 0.67 and sensitivity of 0.75 for malignant tumors.

Conclusion: DCNN analysis of US images can provide complementary clinical diagnostic information and is thus a promising technique for effective differentiation of benign, borderline, and malignant SOTs.
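The metrics reported here (and throughout the studies below) are sensitivity, specificity, and AUC. As an illustrative aside, not part of the paper, they can be computed from predicted labels and scores in plain Python, with AUC in its Mann-Whitney rank formulation:

```python
def sensitivity_specificity(y_true, y_pred):
    # Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP),
    # with 1 = positive (e.g. malignant) and 0 = negative (benign).
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn), tn / (tn + fp)

def auc(y_true, scores):
    # AUC as the probability that a randomly chosen positive case
    # receives a higher score than a randomly chosen negative case
    # (ties count as half a win).
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This rank formulation is equivalent to integrating the ROC curve, which is how libraries such as scikit-learn compute it.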

2020 ◽  
Vol 22 (4) ◽  
pp. 415
Author(s):  
Qi Wei ◽  
Shu-E Zeng ◽  
Li-Ping Wang ◽  
Yu-Jing Yan ◽  
Ting Wang ◽  
...  

Aims: To compare the diagnostic value of S-Detect (a computer-aided diagnosis system using deep learning) for differentiating thyroid nodules among radiologists with different levels of experience, and to assess whether S-Detect can improve radiologists' diagnostic performance.

Materials and Methods: Between February 2018 and October 2019, 204 thyroid nodules in 181 patients were included. An experienced radiologist performed ultrasound on the thyroid nodules and obtained the S-Detect result. Four radiologists with different levels of experience in thyroid ultrasound (Radiologists 1, 2, 3, and 4, with 1, 4, 9, and 20 years, respectively) analyzed the conventional ultrasound images of each thyroid nodule and made a diagnosis of "benign" or "malignant" based on the TI-RADS category. After referring to the S-Detect results, they re-evaluated their diagnoses. The diagnostic performance of the radiologists was analyzed before and after referring to the S-Detect results.

Results: The accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of S-Detect were 77.0%, 91.3%, 65.2%, 68.3%, and 90.1%, respectively. In comparison with the less experienced radiologists (Radiologists 1 and 2), S-Detect had a higher area under the receiver operating characteristic curve (AUC), accuracy, and specificity (p < 0.05). In comparison with the most experienced radiologist, its diagnostic accuracy and AUC were lower (p < 0.05). For the less experienced radiologists, diagnostic accuracy, specificity, and AUC improved significantly when combined with S-Detect (p < 0.05), but not for the experienced radiologists (Radiologists 3 and 4) (p > 0.05).

Conclusions: S-Detect may become an additional diagnostic method for thyroid nodules and improve the diagnostic performance of less experienced radiologists.


Author(s):  
Hwaseong Ryu ◽  
Seung Yeon Shin ◽  
Jae Young Lee ◽  
Kyoung Mu Lee ◽  
Hyo-jin Kang ◽  
...  

Abstract

Objectives: To develop a convolutional neural network system that jointly segments and classifies a hepatic lesion selected by user clicks in ultrasound images.

Methods: In total, 4309 anonymized ultrasound images of 3873 patients with hepatic cyst (n = 1214), hemangioma (n = 1220), metastasis (n = 1001), or hepatocellular carcinoma (HCC) (n = 874) were collected and annotated. The images were divided into 3909 training and 400 test images. Our network is composed of one shared encoder and two inference branches used for segmentation and classification, and takes as input the concatenation of an input image and two Euclidean distance maps of the foreground and background clicks provided by a user. Segmentation performance was evaluated with the Jaccard index (JI), and classification performance with accuracy, sensitivity, specificity, and the area under the receiver operating characteristic curve (AUROC).

Results: We achieved performance improvements by jointly conducting segmentation and classification. The segmentation-only system reached a mean JI of 68.5%; the classification-only system reached 79.8% accuracy across the four lesion types. The proposed joint system reached a mean JI of 68.5% and a classification accuracy of 82.2%. For classifying benign vs. malignant hepatic lesions, the joint system's optimal sensitivity, specificity, and AUROC were 95.0%, 86.0%, and 0.970, respectively; for classifying the four lesion types, they were 86.7%, 89.7%, and 0.947.

Conclusions: The proposed joint system performed favorably compared with the segmentation-only and classification-only systems.

Key Points
• The joint segmentation and classification system using deep learning accurately segmented and classified hepatic lesions selected by user clicks in US examinations.
• The joint segmentation and classification system for hepatic lesions in US images exhibited higher performance than segmentation-only and classification-only systems.
• The joint segmentation and classification system could assist radiologists with minimal experience in US imaging by characterizing hepatic lesions.
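The click-encoding idea described in this abstract (the image concatenated with Euclidean distance maps of foreground and background clicks) can be sketched as follows. This is a minimal brute-force illustration under the stated idea, not the authors' implementation, which would use a fast distance transform:

```python
import math

def click_distance_map(height, width, clicks):
    """Per-pixel Euclidean distance to the nearest user click.

    `clicks` is a list of (row, col) positions. One such map is built
    for foreground clicks and one for background clicks, and both are
    concatenated with the image as extra input channels.
    """
    return [[min(math.hypot(r - cr, c - cc) for cr, cc in clicks)
             for c in range(width)]
            for r in range(height)]
```

For an H×W grayscale image this yields a 3-channel network input: the pixel intensities plus the two distance maps, so the encoder can condition its prediction on where the user clicked.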


2003 ◽  
Vol 58 (4) ◽  
pp. 185-192 ◽  
Author(s):  
Eduardo Cardoso Blanco ◽  
Ayrton Roberto Pastore ◽  
Angela Maggio da Fonseca ◽  
Filomena Marino Carvalho ◽  
Jesus Paula Carvalho ◽  
...  

OBJECTIVE: To differentiate benign from malignant ovarian tumors before surgery using color and pulsed Doppler sonography, and to compare results obtained before and after the use of a contrast medium, thereby verifying whether contrast improves diagnostic sensitivity.

METHODS: Sixty-two women (mean age 49.9 years) with ovarian tumors were studied, 45 with benign and 17 with malignant tumors. All women underwent a transvaginal color Doppler ultrasonographic exam. Arterial vascular flow was studied in all tumor areas, and its impedance was evaluated using the resistance index.

RESULTS: Localization of vessels within the tumor revealed detectable internal vascular flow in a greater proportion of malignant tumors (64%) than benign tumors (22%), though with considerable overlap between the groups. Contrast identified a greater number of vessels, confirmed across all tumors, but did not improve the ability of Doppler to differentiate tumors. Malignant tumors presented lower resistance index values than benign ones, with or without contrast. The resistance index cutoff that best maximized Doppler sensitivity and specificity was 0.55. At this cutoff, sensitivity increased from 47% to 82% after contrast use, while specificity remained statistically unchanged.

CONCLUSION: Although the injection of a microbubble agent improved the method's sensitivity for detecting tumor vascularization, a positive vascularization finding was not clinically useful for differentiating benign from malignant ovarian tumors.
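For context, the resistance index used in this study is the standard Doppler ratio of peak-systolic to end-diastolic velocity. The formula below is the textbook definition, not quoted from this paper's text:

```python
def resistance_index(psv, edv):
    """Doppler resistance index: RI = (PSV - EDV) / PSV,
    where PSV and EDV are the peak-systolic and end-diastolic
    velocities in the same units (e.g. cm/s). Low-resistance flow
    (RI below the study's 0.55 cutoff) favored malignancy here.
    """
    return (psv - edv) / psv
```

For example, a vessel with PSV 40 cm/s and EDV 18 cm/s has RI = 0.55, exactly at the study's operating cutoff.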


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Hui-Zhao Wu ◽  
Li-Feng Yan ◽  
Xiao-Qing Liu ◽  
Yi-Zhou Yu ◽  
Zuo-Jun Geng ◽  
...  

Abstract

This study proposes the Feature Ambiguity Mitigate Operator (FAMO) model to mitigate feature ambiguity in bone fracture detection on radiographs of various body parts. A total of 9040 radiographic studies were extracted, comprising 1651 hand, 1302 wrist, 406 elbow, 696 shoulder, 1580 pelvic, 948 knee, 1180 ankle, and 1277 foot images. Instance segmentation was annotated by radiologists. ResNeXt-101+FPN was employed as the baseline network structure, with the FAMO model for processing. The proposed FAMO model and other ablative models were evaluated on a test set comprising 20% of all radiographs, balanced across body parts. At the per-fracture level, an average precision (AP) analysis was performed; at the per-image and per-case levels, sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC) were analyzed. At the per-fracture level, the baseline (controlled) experiment achieved an AP of 76.8% (95% CI: 76.1%, 77.4%), and the main experiment using FAMO as a preprocessor improved the AP to 77.4% (95% CI: 76.6%, 78.2%). At the per-image level, sensitivity, specificity, and AUC were 61.9% (95% CI: 58.7%, 65.0%), 91.5% (95% CI: 89.5%, 93.3%), and 74.9% (95% CI: 74.1%, 75.7%), respectively, for the controlled experiment, and 64.5% (95% CI: 61.3%, 67.5%), 92.9% (95% CI: 91.0%, 94.5%), and 77.5% (95% CI: 76.5%, 78.5%), respectively, with FAMO. At the per-case level, sensitivity, specificity, and AUC were 74.9% (95% CI: 70.6%, 78.7%), 91.7% (95% CI: 88.8%, 93.9%), and 85.7% (95% CI: 84.8%, 86.5%), respectively, for the controlled experiment, and 77.5% (95% CI: 73.3%, 81.1%), 93.4% (95% CI: 90.7%, 95.4%), and 86.5% (95% CI: 85.6%, 87.4%), respectively, with FAMO. In conclusion, FAMO is an effective preprocessor for bone fracture detection, enhancing model performance by mitigating feature ambiguity in the network.


Life ◽  
2021 ◽  
Vol 11 (3) ◽  
pp. 200
Author(s):  
Yu-Hsuan Li ◽  
Wayne Huey-Herng Sheu ◽  
Chien-Chih Chou ◽  
Chun-Hsien Lin ◽  
Yuan-Shao Cheng ◽  
...  

Deep-learning-based software has been developed to assist physicians with diagnosis, but its clinical application is still under investigation. We integrated deep-learning-based software for diabetic retinopathy (DR) grading into the clinical workflow of an endocrinology department, where endocrinologists grade retinal images, and evaluated the influence of its implementation. A total of 1432 images from 716 patients and 1400 images from 700 patients were collected before and after implementation, respectively. Using grading by ophthalmologists as the reference standard, the sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) for detecting referable DR (RDR) were 0.91 (0.87–0.96), 0.90 (0.87–0.92), and 0.90 (0.87–0.93) at the image level, and 0.91 (0.81–0.97), 0.84 (0.80–0.87), and 0.87 (0.83–0.91) at the patient level. The monthly RDR rate dropped from 55.1% to 43.0% after implementation. The monthly percentage of grading finished within the allotted time increased from 66.8% to 77.6%. There was a wide range of agreement between the software and endocrinologists after implementation (kappa values of 0.17–0.65). In conclusion, we observed the clinical influence of deep-learning-based software on graders without retinal subspecialty training. However, validation on images from local datasets is recommended before clinical implementation.
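The agreement statistic quoted above is Cohen's kappa, i.e. observed agreement corrected for the agreement expected by chance. A minimal sketch for two binary raters (illustrative, not the study's code):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two binary raters (e.g. software vs.
    endocrinologist RDR calls, 1 = referable, 0 = not).
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement from each rater's marginal rates.
    """
    n = len(rater_a)
    p_o = sum(1 for a, b in zip(rater_a, rater_b) if a == b) / n
    p_a = sum(rater_a) / n
    p_b = sum(rater_b) / n
    p_e = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (p_o - p_e) / (1 - p_e)
```

Kappa is 1 for perfect agreement and 0 for purely chance-level agreement, which is why the reported 0.17–0.65 range indicates only slight-to-substantial agreement between software and endocrinologists.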


Author(s):  
Nidhi Verma ◽  
Vandana Tiwari ◽  
S. P. Sharma ◽  
Preeti Singh ◽  
Monika Rathi ◽  
...  

Background: Ovarian tumors and tumor-like lesions of the ovary frequently form pelvic masses and are associated with hormonal manifestations. Clinically or surgically they can mimic malignancy, yet pathologically they may be benign tumors or tumor-like lesions.

Methods: The aim of the present study was to perform clinico-histopathological correlation of ovarian tumors and tumor-like lesions of the ovary, and to evaluate the role of serum CA125 and HE4 and the risk of ovarian malignancy algorithm (ROMA) in differentiating benign from malignant ovarian tumors. 233 cases of ovarian tumors and tumor-like lesions were studied. Tumors were classified according to the WHO classification. Clinical and histological findings were compiled on a proforma and subjected to analysis.

Results: Of the 233 cases, 41.2% were ovarian tumors and 58.8% were tumor-like lesions of the ovary. Among the tumor-like lesions, the follicular cyst was the commonest; among the ovarian tumors, the benign serous surface epithelial tumor was the commonest. In patients with ovarian tumors, blood samples were collected before and after treatment for analysis of CA125, HE4, and ROMA.

Conclusions: Serum values of CA125 and HE4, as well as ROMA, were markedly elevated in women with malignant epithelial tumors compared with women with benign lesions. All parameters (HE4, CA125, and ROMA) also showed significant differences before and after surgery. Hence, measuring serum HE4 and CA125 along with ROMA calculation may provide higher accuracy for detecting malignant epithelial ovarian tumors.
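ROMA combines HE4 and CA125 through menopause-specific logistic predictive indices. The sketch below uses the coefficients published by Moore et al. (2009); they are quoted from the wider literature, not from this abstract, so treat them as an assumption:

```python
import math

def roma(he4, ca125, premenopausal):
    """Risk of Ovarian Malignancy Algorithm (ROMA), percent risk.

    he4 in pmol/L, ca125 in U/mL. Predictive index (PI) coefficients
    per Moore et al. (2009); ROMA = 100 * exp(PI) / (1 + exp(PI)).
    """
    if premenopausal:
        pi = -12.0 + 2.38 * math.log(he4) + 0.0626 * math.log(ca125)
    else:
        pi = -8.09 + 1.04 * math.log(he4) + 0.732 * math.log(ca125)
    return 100.0 * math.exp(pi) / (1.0 + math.exp(pi))
```

The output is a percentage risk; in clinical use it is compared against menopause-specific referral cutoffs, which vary by assay platform.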


2018 ◽  
Author(s):  
Alexander Rakhlin ◽  
Alexey Shvets ◽  
Vladimir Iglovikov ◽  
Alexandr A. Kalinin

Abstract

Breast cancer is one of the main causes of cancer death worldwide. Early diagnosis significantly increases the chances of correct treatment and survival, but the process is tedious and often leads to disagreement between pathologists. Computer-aided diagnosis systems have shown potential for improving diagnostic accuracy. In this work, we develop a computational approach based on deep convolutional neural networks for breast cancer histology image classification. The hematoxylin-and-eosin-stained breast histology microscopy image dataset was provided as part of the ICIAR 2018 Grand Challenge on Breast Cancer Histology Images. Our approach utilizes several deep neural network architectures and a gradient boosted trees classifier. For the 4-class classification task, we report 87.2% accuracy. For the 2-class task of detecting carcinomas, we report 93.8% accuracy, an AUC of 97.3%, and sensitivity/specificity of 96.5%/88.0% at the high-sensitivity operating point. To our knowledge, this approach outperforms other common methods in automated histopathological image classification. The source code for our approach is publicly available at https://github.com/alexander-rakhlin/ICIAR2018


2021 ◽  
Author(s):  
Kaiwen Wu ◽  
Bo Xu ◽  
Ying Wu

Abstract

Manual interpretation of breast ultrasound images is a heavy workload for radiologists and prone to misdiagnosis, while traditional machine learning and deep learning methods require huge datasets and long training times. To address these problems, this paper proposed a deep transfer learning method, comparing ResNet18 and ResNet50 models pre-trained on the ImageNet dataset against the same ResNet18 and ResNet50 models without pre-training. The dataset consists of 131 breast ultrasound images (109 benign and 22 malignant), all collected, labeled, and provided by the UDIAT Diagnostic Centre. The experimental results showed that the pre-trained ResNet18 model had the best classification performance on breast ultrasound images, achieving an accuracy of 93.9%, an F1 score of 0.94, and an area under the receiver operating characteristic curve (AUC) of 0.944. Compared with ordinary deep learning models, its classification performance was greatly improved, demonstrating the significant advantage of deep transfer learning in classifying small samples of medical images.


2022 ◽  
Author(s):  
James Devasia ◽  
Hridyanand Goswami ◽  
Subitha Lakshminarayanan ◽  
Manju Rajaram ◽  
Subathra Adithan ◽  
...  

Abstract

Chest X-ray-based diagnosis of active tuberculosis (TB) is one of the oldest ubiquitous tests in medical practice, and artificial intelligence (AI)-based automated detection of abnormality in chest radiography is crucial in the radiology workflow. Most deep convolutional neural networks (DCNNs) for diagnosing TB are built by transfer learning from natural images and use the same dataset to evaluate model performance and diagnostic accuracy. However, dataset shift, a known issue in AI predictive models, remains unexplored in this setting. In this work, we fine-tuned, validated, and tested two benchmark architectures using the transfer learning methodology and measured diagnostic accuracy on cross-population datasets. We achieved a remarkable classification accuracy of 100% and an area under the receiver operating characteristic curve (AUC) of 1.000 [1.000 – 1.000] (with a sensitivity of 0.985 [0.971 – 1.000] and a specificity of 0.986 [0.971 – 1.000]) on the intramural test set, but a significant drop on the extramural test sets. Accuracy on the various extramural test sets varied from 50% to 70%, with AUCs ranging from 0.527 to 0.865 (sensitivity and specificity fluctuating between 0.394 – 0.995 and 0.443 – 0.864, respectively). The diagnostic performance on the intramural test set shows that a DCNN can accurately classify active TB and normal chest radiographs; however, the external test sets show that DCNNs trained on a dataset from a specific population are less likely to generalize well.

