Deep learning applied to breast imaging classification and segmentation with human expert intervention

Author(s):  
Rory Wilding ◽  
Vivek M. Sheraton ◽  
Lysabella Soto ◽  
Niketa Chotai ◽  
Ern Yu Tan


BMJ Open ◽
2020 ◽  
Vol 10 (6) ◽  
pp. e035757
Author(s):  
Chenyang Zhao ◽  
Mengsu Xiao ◽  
He Liu ◽  
Ming Wang ◽  
Hongyan Wang ◽  
...  

Objective: To explore the potential value for residents-in-training of S-Detect, a computer-assisted diagnosis system based on a deep learning (DL) algorithm.

Methods: The study was designed as a cross-sectional study. Routine breast ultrasound examinations were conducted by an experienced radiologist. The ultrasound images of the lesions were retrospectively assessed by five residents-in-training according to the Breast Imaging Reporting and Data System (BI-RADS) lexicon, and a dichotomic classification of each lesion was provided by S-Detect. The diagnostic performances of S-Detect and the five residents were measured and compared using the pathological results as the gold standard. Category 4a lesions assessed by the residents were downgraded to possibly benign when classified as such by S-Detect, and the diagnostic performance of the integrated results was compared with the residents' original results.

Participants: A total of 195 focal breast lesions were consecutively enrolled, including 82 malignant and 113 benign lesions.

Results: S-Detect showed higher specificity (77.88%) and area under the curve (AUC) (0.82) than the residents (specificity: 19.47%–48.67%; AUC: 0.62–0.74). A total of 24, 31, 38, 32 and 42 lesions identified as BI-RADS 4a by residents 1, 2, 3, 4 and 5, respectively, were downgraded to possibly benign by S-Detect; of these, 24, 28, 35, 30 and 40 lesions, respectively, proved pathologically benign. After the residents' results for category 4a lesions were combined with those of the software, the specificity and AUC of the five residents improved significantly (specificity: 46.02%–76.11%; AUC: 0.71–0.85; p<0.001). The intraclass correlation coefficient among the five residents also increased after integration (from 0.480 to 0.643).

Conclusions: With the help of the DL software, the residents' specificity, overall diagnostic performance and interobserver agreement improved markedly. The software can serve as an adjunctive tool for residents-in-training, downgrading category 4a lesions to possibly benign and reducing unnecessary biopsies.
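The integration step described above reduces to a simple decision rule. The following minimal Python sketch illustrates it; the function name and the "possibly benign" label are illustrative assumptions, not S-Detect's actual interface.

```python
# A minimal sketch of the integration rule described above: a resident's
# BI-RADS 4a call is downgraded to "possibly benign" (BI-RADS 3) whenever
# the DL classifier's dichotomic output is "possibly benign". Function and
# label names are hypothetical, not S-Detect's actual API.

def integrate_assessment(resident_birads: str, dl_label: str) -> str:
    """Combine a resident's BI-RADS category with a dichotomic DL output."""
    if resident_birads == "4a" and dl_label == "possibly benign":
        return "3"  # downgrade: follow up rather than biopsy
    return resident_birads  # all other categories are left unchanged

# Example: a 4a lesion classified as possibly benign by the software
assert integrate_assessment("4a", "possibly benign") == "3"
assert integrate_assessment("4b", "possibly benign") == "4b"
```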


Sensors ◽  
2019 ◽  
Vol 19 (18) ◽  
pp. 4050 ◽  
Author(s):  
Vahab Khoshdel ◽  
Ahmed Ashraf ◽  
Joe LoVetri

We present a deep learning method used in conjunction with dual-modal microwave-ultrasound imaging to produce tomographic reconstructions of the complex-valued permittivity of numerical breast phantoms. We also assess tumor segmentation performance using the reconstructed permittivity as a feature. The contrast source inversion (CSI) technique is used to create the complex-permittivity images of the breast, with ultrasound-derived tissue regions utilized as prior information. However, imaging artifacts make the detection of tumors difficult. To overcome this issue, we train a convolutional neural network (CNN) that takes the dual-modal CSI reconstruction as input and attempts to produce the true image of the complex tissue permittivity. The neural network consists of successive convolutional and downsampling layers, followed by successive deconvolutional and upsampling layers, based on the U-Net architecture. The training input-output pairs consist of CSI’s dual-modal reconstructions and the true numerical phantom images from which the microwave scattered field was synthetically generated. The reconstructed permittivity images produced by the CNN show that the network is not only able to remove the artifacts typical of CSI reconstructions, but can also improve the detectability of tumors. The performance of the CNN is assessed using four-fold cross-validation on our dataset, which shows improvement over CSI in terms of both reconstruction error and tumor segmentation performance.
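The encoder-decoder structure the authors describe maps an artifact-laden CSI reconstruction to a cleaned permittivity image. Below is a minimal PyTorch sketch of such a U-Net-style image-to-image network; the depth, channel widths, and the two-channel (real and imaginary permittivity) input/output are assumptions for illustration, not the paper's exact configuration.

```python
# A minimal U-Net-style sketch: convolution/downsampling stages followed by
# deconvolution/upsampling stages with skip connections, mapping a CSI
# reconstruction to a refined permittivity image. All sizes are assumptions.

import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    def __init__(self, c_in=2, c_out=2):  # real + imaginary permittivity
        super().__init__()
        self.enc1, self.enc2 = conv_block(c_in, 32), conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, c_out, 1)

    def forward(self, x):
        e1 = self.enc1(x)                   # full resolution
        e2 = self.enc2(self.pool(e1))       # 1/2 resolution
        b = self.bottleneck(self.pool(e2))  # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)                # refined permittivity image

# Input: CSI dual-modal reconstruction; target: true numerical phantom.
y = MiniUNet()(torch.randn(1, 2, 64, 64))
print(y.shape)  # torch.Size([1, 2, 64, 64])
```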


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Asma Baccouche ◽  
Begonya Garcia-Zapirain ◽  
Cristian Castillo Olea ◽  
Adel S. Elmaghraby

In breast cancer analysis, radiologists inspect mammograms to detect suspicious breast lesions and identify mass tumors. Artificial intelligence techniques offer automatic systems for breast mass segmentation to assist radiologists in their diagnosis. With the rapid development of deep learning and its application to medical imaging challenges, UNet and its variants are among the state-of-the-art models for medical image segmentation, having shown promising performance on mammography. In this paper, we propose an architecture, called Connected-UNets, which connects two UNets using additional modified skip connections. We integrate Atrous Spatial Pyramid Pooling (ASPP) in the two standard UNets to emphasize the contextual information within the encoder–decoder network architecture. We also apply the proposed architecture to the Attention UNet (AUNet) and the Residual UNet (ResUNet). We evaluated the proposed architectures on two publicly available datasets, the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) and INbreast, and additionally on a private dataset. Experiments were also conducted with additional synthetic data generated by a cycle-consistent Generative Adversarial Network (CycleGAN) model between two unpaired datasets to augment and enhance the images. Qualitative and quantitative results show that the proposed architecture can achieve better automatic mass segmentation, with Dice scores of 89.52%, 95.28%, and 95.88% and Intersection over Union (IoU) scores of 80.02%, 91.03%, and 92.27% on CBIS-DDSM, INbreast, and the private dataset, respectively.
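The ASPP block the authors integrate into each UNet captures multi-scale context through parallel dilated convolutions. Here is a minimal PyTorch sketch of such a block; the dilation rates and channel counts are assumptions, not the paper's settings.

```python
# A minimal Atrous Spatial Pyramid Pooling (ASPP) sketch: parallel dilated
# convolutions at several rates, fused by a 1x1 convolution. Rates and
# channel widths here are illustrative assumptions.

import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, c_in, c_out, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(c_in, c_out, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(c_out),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        # Fuse the parallel multi-scale responses back to c_out channels.
        self.project = nn.Conv2d(c_out * len(rates), c_out, 1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

# In a Connected-UNets-style design, the first UNet's output feeds a second
# UNet through modified skip connections; an ASPP block like this one can
# sit at each encoder-decoder bottleneck.
feats = ASPP(64, 64)(torch.randn(1, 64, 32, 32))
print(feats.shape)  # torch.Size([1, 64, 32, 32])
```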


Diagnostics ◽  
2021 ◽  
Vol 12 (1) ◽  
pp. 66
Author(s):  
Yung-Hsien Hsieh ◽  
Fang-Rong Hsu ◽  
Seng-Tong Dai ◽  
Hsin-Ya Huang ◽  
Dar-Ren Chen ◽  
...  

In this study, we applied semantic segmentation using a fully convolutional deep learning network to identify characteristics of the Breast Imaging Reporting and Data System (BI-RADS) lexicon from breast ultrasound images to facilitate clinical classification of malignant tumors. Among 378 images (204 benign and 174 malignant) from 189 patients (102 with benign breast tumors and 87 with malignant tumors), we identified seven malignant characteristics related to the BI-RADS lexicon in breast ultrasound. The mean accuracy and mean IU of the semantic segmentation were 32.82% and 28.88%, respectively. The weighted intersection over union was 85.35%, and the area under the curve was 89.47%, outperforming comparable semantic segmentation networks, SegNet and U-Net, on the same dataset. Our results suggest that a deep learning network used in combination with the BI-RADS lexicon can be an important supplemental tool when using ultrasound to diagnose breast malignancy.
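The per-class mean IU and the pixel-frequency-weighted IoU reported above are standard segmentation metrics. The following NumPy sketch shows one common way to compute them; it is a generic illustration of these metrics, not the authors' evaluation code, and the class count is arbitrary.

```python
# A minimal sketch of per-class intersection-over-union (IU) and
# frequency-weighted IoU for multi-class semantic segmentation masks.

import numpy as np

def iou_scores(pred: np.ndarray, target: np.ndarray, n_classes: int):
    """Return per-class IoU and the pixel-frequency-weighted IoU."""
    ious, freqs = [], []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        ious.append(inter / union if union else np.nan)  # NaN if class absent
        freqs.append((target == c).sum())
    ious = np.array(ious, dtype=float)
    weights = np.array(freqs, dtype=float) / max(sum(freqs), 1)
    valid = ~np.isnan(ious)
    weighted = float((weights[valid] * ious[valid]).sum())
    return ious, weighted

# Toy example with three classes on random 64x64 label maps.
pred = np.random.randint(0, 3, (64, 64))
target = np.random.randint(0, 3, (64, 64))
per_class, weighted = iou_scores(pred, target, n_classes=3)
print(per_class, weighted)
```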


2019 ◽  
Author(s):  
Tomomichi Iizuka ◽  
Makoto Fukasawa ◽  
Masashi Kameyama

Differentiating dementia with Lewy bodies (DLB) from Alzheimer’s disease (AD) using brain perfusion single photon emission tomography is important but challenging because the two conditions share common features. The cingulate island sign (CIS) is the most recently identified feature specific to DLB for differential diagnosis. The present study aimed to examine the usefulness of deep learning-based imaging classification for the diagnosis of DLB and AD. We also investigated whether the deep convolutional neural network (CNN) focused on the CIS during differentiation. Brain perfusion single photon emission tomography images were acquired from 80 patients with DLB, 80 patients with AD and 80 individuals with normal cognition (NL). The CNN was trained on brain surface perfusion images. Gradient-weighted class activation mapping (Grad-CAM) was applied to the CNN to visualize the features on which the trained network focused. Binary classifications between DLB and NL, DLB and AD, and AD and NL were 94.69%, 87.81% and 94.38% accurate, respectively. The CIS ratios closely correlated with the softmax output scores for DLB-AD discrimination (DLB/AD scores). Grad-CAM highlighted the CIS during DLB discrimination, and visualization of the learning process by guided Grad-CAM revealed that the CNN focused increasingly on the CIS as training progressed. The DLB/AD score was significantly associated with three core features of DLB. Deep learning-based imaging classification was useful not only for objective and accurate differentiation of DLB from AD but also for predicting clinical features of DLB. The CIS was identified as a specific feature during DLB classification. The visualization of specific features and of the learning process could have important implications for the potential of deep learning to discover new imaging features.
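Grad-CAM, as used above, weights a convolutional layer's activation maps by the spatially averaged gradients of a class score. The PyTorch sketch below shows the mechanics on a stand-in classifier; the model, target layer, and class index are illustrative assumptions, not the paper's network.

```python
# A minimal Grad-CAM sketch: hooks capture a target layer's activations and
# gradients, and the map is the ReLU of the gradient-weighted activation sum.

import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(  # stand-in 3-way classifier (e.g., DLB / AD / NL)
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(8), nn.Flatten(), nn.Linear(8 * 8 * 8, 3),
)
target_layer = model[0]

acts, grads = {}, {}
target_layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

x = torch.randn(1, 1, 64, 64)  # stand-in brain surface perfusion image
score = model(x)[0, 0]         # logit for the class of interest (index assumed)
score.backward()

# Channel weights = spatially averaged gradients; map = ReLU(weighted sum),
# upsampled to the input resolution for overlay.
weights = grads["g"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * acts["a"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
print(cam.shape)  # torch.Size([1, 1, 64, 64])
```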

