Classification of Breast Masses on Ultrasound Shear Wave Elastography using Convolutional Neural Networks

2020 ◽  
Vol 42 (4-5) ◽  
pp. 213-220 ◽  
Author(s):  
Tomoyuki Fujioka ◽  
Leona Katsuta ◽  
Kazunori Kubota ◽  
Mio Mori ◽  
Yuka Kikuchi ◽  
...  

We aimed to use deep learning with convolutional neural networks (CNNs) to discriminate images of benign and malignant breast masses on ultrasound shear wave elastography (SWE). We retrospectively gathered 158 images of benign masses and 146 images of malignant masses as SWE training data. A deep learning model was constructed using several CNN architectures (Xception, InceptionV3, InceptionResNetV2, DenseNet121, DenseNet169, and NASNetMobile) with 50, 100, and 200 epochs. We analyzed SWE images of 38 benign masses and 35 malignant masses as test data. Two radiologists interpreted these test data through a consensus reading using a 5-point visual color assessment (SWEc) and the mean elasticity value in kPa (SWEe). Sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were calculated. The best CNN model (DenseNet169 with 100 epochs), SWEc, and SWEe had sensitivities of 0.857, 0.829, and 0.914 and specificities of 0.789, 0.737, and 0.763, respectively. The CNNs exhibited a mean AUC of 0.870 (range 0.844–0.898), and SWEc and SWEe had AUCs of 0.821 and 0.855, respectively. The CNNs had equal or better diagnostic performance compared with the radiologist readings; DenseNet169 with 100 epochs, Xception with 50 epochs, and Xception with 100 epochs performed significantly better than SWEc (P = 0.018–0.037). Deep learning with CNNs exhibited equal or higher AUCs than radiologists when discriminating benign from malignant breast masses on ultrasound SWE.
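The reported sensitivity and specificity pairs follow from simple confusion-matrix counts over the 35 malignant and 38 benign test masses; a minimal sketch (the counts of 30 true positives and 30 true negatives are inferred from the reported rates, not stated in the abstract):

```python
# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
def sensitivity(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

# DenseNet169 (100 epochs) on the 35 malignant / 38 benign test masses:
# 30/35 and 30/38 reproduce the reported 0.857 and 0.789 (assumed counts).
print(round(sensitivity(30, 5), 3))   # → 0.857
print(round(specificity(30, 8), 3))   # → 0.789
```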

2020 ◽  
pp. 028418512096142
Author(s):  
Yasemin Altıntas ◽  
Mehmet Bayrak ◽  
Ömer Alabaz ◽  
Medih Celiktas

Background Ultrasound (US) elastography has become a routine instrument in ultrasonographic diagnosis that measures the consistency and stiffness of tissues. Purpose To distinguish benign and malignant breast masses using a single US system by comparing the diagnostic parameters of three breast elastography techniques added simultaneously to B-mode ultrasonography. Material and Methods A total of 163 breast lesions in 159 consecutive women who underwent US-guided core needle biopsy were included in this prospective study. Before the biopsy, the lesions were examined with B-mode ultrasonography and strain (SE), shear wave (SWE), and point shear wave (STQ) elastography. The strain ratio was computed and the Tsukuba score determined. The mean elasticity values using SWE and STQ were computed and converted to Young's modulus E (kPa). Results All SE, SWE, and STQ parameters showed similar diagnostic performance. The SE score, SE ratio, SWEmean, SWEmax, STQmean, and STQmax yielded higher specificity than B-mode US alone to differentiate benign and malignant masses. The sensitivity of B-mode US, SWE, and STQ was slightly higher than that of the SE score and SE ratio. The SE score, SE ratio, SWEmean, SWEmax, STQmean, and STQmax had significantly higher positive predictive value and diagnostic accuracy than B-mode US alone. The area under the curve for each of these elastography methods in differentiating benign and malignant breast lesions was 0.93, 0.93, 0.98, 0.97, 0.98, and 0.96, respectively; P < 0.001 for all measurements. Conclusion SE (ratio and score), SWE, and STQ each had higher diagnostic performance than B-mode US alone in distinguishing between malignant and benign breast masses.
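The conversion from measured shear-wave speed to Young's modulus E that SWE systems report is conventionally E = 3ρc², assuming incompressible, homogeneous soft tissue with density ρ ≈ 1000 kg/m³; a minimal sketch of that conversion (the example speeds are illustrative, not from the study):

```python
TISSUE_DENSITY = 1000.0  # kg/m^3, standard soft-tissue assumption

def youngs_modulus_kpa(shear_wave_speed_ms):
    """E = 3 * rho * c^2, returned in kPa (incompressible-medium assumption)."""
    return 3.0 * TISSUE_DENSITY * shear_wave_speed_ms ** 2 / 1000.0

print(youngs_modulus_kpa(2.0))  # → 12.0 kPa (slow wave: softer tissue)
print(youngs_modulus_kpa(5.0))  # → 75.0 kPa (fast wave: stiffer tissue)
```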


2019 ◽  
Vol 41 (04) ◽  
pp. 390-396 ◽  
Author(s):  
Ji Hyun Youk ◽  
Jin Young Kwak ◽  
Eunjung Lee ◽  
Eun Ju Son ◽  
Jeong-Ah Kim

Abstract Purpose To identify and compare diagnostic performance of radiomic features between grayscale ultrasound (US) and shear-wave elastography (SWE) in breast masses. Materials and Methods We retrospectively collected 328 pathologically confirmed breast masses in 296 women who underwent grayscale US and SWE before biopsy or surgery. A representative SWE image of the mass displayed with a grayscale image in split-screen mode was selected. An ROI was delineated around the mass boundary on the grayscale image and copied and pasted to the SWE image by a dedicated breast radiologist for lesion segmentation. A total of 730 candidate radiomic features including first-order statistics and textural and wavelet features were extracted from each image. LASSO regression was used for data dimension reduction and feature selection. Univariate and multivariate logistic regression was performed to identify independent radiomic features, differentiating between benign and malignant masses with calculation of the AUC. Results Of 328 breast masses, 205 (62.5 %) were benign and 123 (37.5 %) were malignant. Following radiomic feature selection, 22 features from grayscale and 6 features from SWE remained. On univariate analysis, all 6 SWE radiomic features (P < 0.0001) and 21 of 22 grayscale radiomic features (P < 0.03) were significantly different between benign and malignant masses. After multivariate analysis, three grayscale radiomic features and two SWE radiomic features were independently associated with malignant breast masses. The AUC was 0.929 for grayscale US and 0.992 for SWE (P < 0.001). Conclusion US radiomic features may have the potential to improve diagnostic performance for breast masses, but further investigation of independent and larger datasets is needed.
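LASSO performs the dimension reduction described above by shrinking weak coefficients exactly to zero. A minimal sketch of the underlying soft-thresholding operator (the coefficient values are hypothetical; a real radiomics pipeline would use a penalized-regression library):

```python
def soft_threshold(z, lam):
    """S(z, lam) = sign(z) * max(|z| - lam, 0): the LASSO shrinkage operator."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

# With an orthonormal design, each LASSO coefficient is simply the
# soft-thresholded least-squares coefficient: weak features drop to zero,
# which is how 730 candidate radiomic features can shrink to a handful.
ols_coefs = [2.3, -0.4, 0.1, -1.8, 0.05]   # hypothetical feature coefficients
lasso_coefs = [round(soft_threshold(b, 0.5), 3) for b in ols_coefs]
print(lasso_coefs)  # → [1.8, 0.0, 0.0, -1.3, 0.0]
```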


2017 ◽  
Vol 7 (1) ◽  
Author(s):  
Jie Tian ◽  
Qianqi Liu ◽  
Xi Wang ◽  
Ping Xing ◽  
Zhuowen Yang ◽  
...  

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Xue Zheng ◽  
Fei Li ◽  
Zhi-Dong Xuan ◽  
Yu Wang ◽  
Lei Zhang

Abstract Background To explore the value of quantitative shear wave elastography (SWE) plus the Breast Imaging Reporting and Data System (BI-RADS) in the identification of solid breast masses. Methods A total of 108 patients with 120 solid breast masses admitted to our hospital from January 2019 to January 2020 were enrolled in this study. The pathological examination served as the gold standard for definitive diagnosis. Both SWE and BI-RADS grading were performed. Results Of the 120 solid breast masses in 108 patients, 75 benign and 45 malignant masses were pathologically confirmed. The size, shape, margin, internal echo, microcalcification, lateral acoustic shadow, and posterior acoustic enhancement of benign and malignant masses were significantly different (all P < 0.05). The Emean, Emax, SD, and Eratio of benign and malignant masses were significantly different (all P < 0.05). The Emin was similar between benign and malignant masses (P > 0.05). The percentage of Adler grade II-III of the benign masses was lower than that of the malignant masses (P < 0.05). BI-RADS plus SWE yielded higher diagnostic specificity and positive predictive value than either BI-RADS or SWE alone, and BI-RADS plus SWE yielded the highest diagnostic accuracy among the three methods (all P < 0.05). Conclusion SWE plus routine ultrasonography BI-RADS has higher value in differentiating benign from malignant breast masses than color Doppler ultrasonography or SWE alone and merits wider adoption in clinical practice.
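The specificity and positive-predictive-value gain from combining BI-RADS with SWE is what one expects when a case must be positive on both tests. A minimal sketch with hypothetical per-test characteristics and an independent-errors assumption (the study's actual combination rule and per-test rates are not given in the abstract), using the 45/120 malignant prevalence:

```python
def ppv(sens, spec, prevalence):
    """Positive predictive value via Bayes' rule."""
    tp = sens * prevalence
    fp = (1 - spec) * (1 - prevalence)
    return tp / (tp + fp)

# Hypothetical single-test characteristics; prevalence from 45/120 masses.
sens_birads, spec_birads = 0.90, 0.75
sens_swe, spec_swe = 0.88, 0.80
prev = 45 / 120

# "Positive only if both tests positive" (assumed independent errors):
sens_both = sens_birads * sens_swe                    # sensitivity drops
spec_both = 1 - (1 - spec_birads) * (1 - spec_swe)    # specificity rises

print(round(ppv(sens_birads, spec_birads, prev), 3))  # → 0.684
print(round(ppv(sens_both, spec_both, prev), 3))      # → 0.905
```

The sketch illustrates the trade-off the abstract reports: the combined rule sacrifices some sensitivity but markedly raises specificity and PPV.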


2020 ◽  
Vol 24 (1) ◽  
Author(s):  
Nichanametla Sravani ◽  
Ananthakrishnan Ramesh ◽  
Sathasivam Sureshkumar ◽  
Chellappa Vijayakumar ◽  
K.M. Abdulbasith ◽  
...  

Geophysics ◽  
2021 ◽  
pp. 1-45
Author(s):  
Runhai Feng ◽  
Dario Grana ◽  
Niels Balling

Segmentation of faults based on seismic images is an important step in reservoir characterization. With the recent developments of deep-learning methods and the availability of massive computing power, automatic interpretation of seismic faults has become possible. The likelihood of occurrence for a fault can be quantified using a sigmoid function. Our goal is to quantify the fault model uncertainty that is generally not captured by deep-learning tools. We propose to use the dropout approach, a regularization technique to prevent overfitting and co-adaptation in hidden units, to approximate Bayesian inference and obtain principled uncertainty estimates over functions. In particular, the variance of the learned model is decomposed into aleatoric and epistemic parts. The proposed method is applied to a real dataset from the Netherlands F3 block with two different dropout ratios in convolutional neural networks. The aleatoric uncertainty is irreducible since it relates to the stochastic dependency within the input observations. As the number of Monte-Carlo realizations increases, the epistemic uncertainty asymptotically converges and the model standard deviation decreases, because the variability of model parameters is better simulated or explained with a larger sample size. This analysis quantifies how confidently fault predictions can be used in regions of low uncertainty, and it suggests where more training data are needed to reduce the uncertainty in low-confidence regions.
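The aleatoric/epistemic split can be illustrated without a network: for a Bernoulli fault label with probability p sampled over Monte-Carlo dropout passes, the predictive variance decomposes as Var[y] = E[p(1−p)] (aleatoric) + Var[p] (epistemic). A minimal simulation (the probabilities are synthetic stand-ins for dropout-enabled forward passes, not outputs of the authors' model):

```python
import random
import statistics

random.seed(0)

# T stochastic forward passes of a dropout-enabled fault classifier on one
# pixel, simulated as probabilities jittered around 0.7 (hypothetical; in
# practice each p would come from a forward pass with dropout left on).
T = 1000
probs = [min(max(random.gauss(0.7, 0.1), 0.0), 1.0) for _ in range(T)]
p_bar = statistics.fmean(probs)

# Predictive variance of the Bernoulli fault label decomposes as
#   Var[y] = E[p(1 - p)]  (aleatoric)  +  Var[p]  (epistemic).
aleatoric = statistics.fmean(p * (1 - p) for p in probs)
epistemic = statistics.pvariance(probs, p_bar)
total = p_bar * (1 - p_bar)  # equals the sum of the two parts

print(round(aleatoric, 4), round(epistemic, 4), round(total, 4))
```

Averaging over more realizations tightens the estimate of Var[p], matching the abstract's observation that epistemic uncertainty converges as the Monte-Carlo sample grows.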


2020 ◽  
Vol 12 (5) ◽  
pp. 765 ◽  
Author(s):  
Calimanut-Ionut Cira ◽  
Ramon Alcarria ◽  
Miguel-Ángel Manso-Callejo ◽  
Francisco Serradilla

Remote sensing imagery combined with deep learning strategies is often regarded as an ideal solution for interpreting scenes and monitoring infrastructures with remarkable performance levels. In addition, the road network plays an important part in transportation, and currently one of the main related challenges is detecting and monitoring changes in order to update the existing cartography. This task is challenging due to the nature of the object (continuous and often with no clearly defined borders) and the nature of remotely sensed images (noise, obstructions). In this paper, we propose a novel framework based on convolutional neural networks (CNNs) to classify secondary roads in high-resolution aerial orthoimages divided into tiles of 256 × 256 pixels. We evaluate the framework's performance on unseen test data and compare the results with those obtained by other popular CNNs trained from scratch.
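Splitting an orthoimage into fixed 256 × 256 tiles is a straightforward grid computation; a minimal sketch (the padding-at-edges convention is an assumption, since the abstract does not specify how partial tiles are handled):

```python
import math

def tile_grid(width, height, tile=256):
    """Columns and rows of tiles needed to cover the image (edges padded)."""
    return math.ceil(width / tile), math.ceil(height / tile)

def tile_origins(width, height, tile=256):
    """Top-left pixel coordinates of every tile, row-major order."""
    cols, rows = tile_grid(width, height, tile)
    return [(c * tile, r * tile) for r in range(rows) for c in range(cols)]

print(tile_grid(1000, 600))          # → (4, 3): 12 tiles, edges padded
print(len(tile_origins(1000, 600)))  # → 12
```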


2020 ◽  
Vol 12 (7) ◽  
pp. 1092
Author(s):  
David Browne ◽  
Michael Giering ◽  
Steven Prestwich

Scene classification is an important aspect of image/video understanding and segmentation. However, remote-sensing scene classification is a challenging image recognition task, partly due to the limited training data, which causes deep-learning Convolutional Neural Networks (CNNs) to overfit. Another difficulty is that images often have very different scales and orientation (viewing angle). Yet another is that the resulting networks may be very large, again making them prone to overfitting and unsuitable for deployment on memory- and energy-limited devices. We propose an efficient deep-learning approach to tackle these problems. We use transfer learning to compensate for the lack of data, and data augmentation to tackle varying scale and orientation. To reduce network size, we use a novel unsupervised learning approach based on k-means clustering, applied to all parts of the network: most network reduction methods use computationally expensive supervised learning methods, and apply only to the convolutional or fully connected layers, but not both. In experiments, we set new standards in classification accuracy on four remote-sensing and two scene-recognition image datasets.
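The paper's exact clustering scheme is not given in the abstract, but k-means network reduction generally means weight sharing: cluster the scalar weights of a layer and replace each weight by its cluster centroid, so only k distinct values (plus an index table) must be stored. A generic 1-D k-means sketch:

```python
def kmeans_1d(values, k, iters=50):
    """Lloyd's algorithm on scalars: cluster weights into k shared centroids."""
    lo, hi = min(values), max(values)
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]  # linear init
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[idx].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

def quantize(values, centroids):
    """Replace each weight by its nearest shared centroid (weight sharing)."""
    return [min(centroids, key=lambda c: abs(v - c)) for v in values]

# Hypothetical layer weights: three natural groups collapse to three values.
weights = [-0.52, -0.48, -0.50, 0.01, 0.02, 0.49, 0.51, 0.53]
cents = kmeans_1d(weights, k=3)
shared = quantize(weights, cents)
print(sorted(round(v, 3) for v in set(shared)))  # → [-0.5, 0.015, 0.51]
```

Applying this to both convolutional and fully connected layers, as the abstract describes, shrinks the whole network rather than a single layer type.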


2019 ◽  
Author(s):  
Yang Cao ◽  
Scott Montgomery ◽  
Johan Ottosson ◽  
Erik Näslund ◽  
Erik Stenberg

BACKGROUND Obesity is one of today's most visible public health problems worldwide. Although modern bariatric surgery is ostensibly considered safe, serious complications and mortality still occur in some patients. OBJECTIVE This study aimed to explore whether serious postoperative complications of bariatric surgery recorded in a national quality registry can be predicted preoperatively using deep learning methods. METHODS Patients who were registered in the Scandinavian Obesity Surgery Registry (SOReg) between 2010 and 2015 were included in this study. The patients who underwent a bariatric procedure between 2010 and 2014 were used as training data, and those who underwent a bariatric procedure in 2015 were used as test data. Postoperative complications were graded according to the Clavien-Dindo classification, and complications requiring intervention under general anesthesia or resulting in organ failure or death were considered serious. Three supervised deep learning neural networks were applied and compared in our study: multilayer perceptron (MLP), convolutional neural network (CNN), and recurrent neural network (RNN). The synthetic minority oversampling technique (SMOTE) was used to artificially augment the patients with serious complications. The performances of the neural networks were evaluated using accuracy, sensitivity, specificity, Matthews correlation coefficient, and area under the receiver operating characteristic curve. RESULTS In total, 37,811 and 6250 patients were used as the training data and test data, with serious complication incidence rates of 3.2% (1220/37,811) and 3.0% (188/6250), respectively. When trained using the SMOTE data, the MLP appeared to perform well, with an area under the curve (AUC) of 0.84 (95% CI 0.83-0.85). However, its performance was low for the test data, with an AUC of 0.54 (95% CI 0.53-0.55). The performance of the CNN was similar to that of the MLP: it generated AUCs of 0.79 (95% CI 0.78-0.80) and 0.57 (95% CI 0.59-0.61) for the SMOTE data and test data, respectively. Compared with the MLP and CNN, the RNN showed worse performance, with AUCs of 0.65 (95% CI 0.64-0.66) and 0.55 (95% CI 0.53-0.57) for the SMOTE data and test data, respectively. CONCLUSIONS The MLP and CNN showed some, but limited, ability to predict serious postoperative complications after bariatric surgery in the Scandinavian Obesity Surgery Registry data. However, the overfitting issue is still apparent and needs to be overcome by incorporating intra- and perioperative information.
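SMOTE augments the minority class by interpolating between a minority sample and one of its minority-class nearest neighbours. A minimal nearest-neighbour sketch (the 2-D feature vectors are hypothetical; production code would use an established implementation such as imbalanced-learn):

```python
import random

def smote_1nn(minority, n_new, seed=42):
    """Minimal SMOTE sketch: each synthetic sample lies on the segment
    between a random minority point and its nearest minority neighbour."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        nn = min((m for m in minority if m is not x),
                 key=lambda m: sum((a - b) ** 2 for a, b in zip(x, m)))
        u = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + u * (b - a) for a, b in zip(x, nn)))
    return synthetic

# Hypothetical feature vectors of patients with serious complications:
minority = [(1.0, 2.0), (1.2, 2.1), (0.9, 1.8)]
new_points = smote_1nn(minority, n_new=5)
print(len(new_points))  # → 5
```

Because the synthetic points stay inside the convex hull of the minority data, a model can fit them well (as the high SMOTE-data AUCs show) while still generalizing poorly to the untouched test year.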

