Deep Learning Based Lesion Detection For Mammograms

Author(s):  
Zhenjie Cao ◽  
Zhicheng Yang ◽  
Xinya Liu ◽  
Yanbo Zhang ◽  
Shibin Wu ◽  
...  
Author(s):  
Xiao Luo PhD ◽  
Min Xu ◽  
Guoxue Tang ◽  
Yi Wang PhD ◽  
Na Wang ◽  
...  

Objectives: The aim of this study was to investigate the detection efficacy of deep learning (DL) for automatic breast ultrasound (ABUS) and the factors affecting its efficacy. Methods: Women who underwent ABUS and handheld ultrasound from May 2016 to June 2017 (N = 397) were enrolled and divided into training (n = 163 patients with breast cancer and 33 with benign lesions), test (n = 57) and control (n = 144) groups. A convolutional neural network was optimised to detect lesions in ABUS. Sensitivity and false positives (FPs) were evaluated and compared for different breast tissue compositions, lesion sizes, morphologies and echo patterns. Results: In the training set, with 688 lesion regions (LRs), the network achieved sensitivities of 93.8%, 97.2% and 100% based on volume, lesion and patient, respectively, with 1.9 FPs per volume. In the test group, with 247 LRs, the sensitivities were 92.7%, 94.5% and 96.5%, respectively, with 2.4 FPs per volume. The control group, with 900 volumes, showed 0.24 FPs per volume. The sensitivity was 98% for lesions > 1 cm³ but 87% for those ≤ 1 cm³ (p < 0.05). Similar sensitivities and FPs were observed across breast tissue compositions (homogeneous, 97.5%, 2.1; heterogeneous, 93.6%, 2.1), lesion morphologies (mass, 96.3%, 2.1; non-mass, 95.8%, 2.0) and echo patterns (homogeneous, 96.1%, 2.1; heterogeneous, 96.8%, 2.1). Conclusions: DL had high detection sensitivity with low FPs but was affected by lesion size. Advances in knowledge: DL is technically feasible for the automatic detection of lesions in ABUS.
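The volume-based sensitivity and FPs-per-volume figures reported above can be computed from per-volume detection counts. A minimal sketch, assuming a caller-supplied matching rule between predicted and annotated lesion regions (the study's actual matching criterion is not stated in the abstract):

```python
# Sketch of volume-level sensitivity and false positives per volume,
# as used to evaluate an ABUS lesion detector. The matching rule
# (match_fn) is an illustrative assumption, not taken from the paper.

def evaluate_volume(pred_regions, gt_regions, match_fn):
    """Count true and false positives in one ABUS volume.

    pred_regions : predicted lesion regions
    gt_regions   : annotated lesion regions (LRs)
    match_fn     : returns True when a prediction hits a ground-truth LR
    """
    matched = set()
    fp = 0
    for p in pred_regions:
        hit = None
        for i, g in enumerate(gt_regions):
            if i not in matched and match_fn(p, g):
                hit = i
                break
        if hit is None:
            fp += 1          # prediction with no unmatched ground truth
        else:
            matched.add(hit)  # greedy one-to-one match
    return len(matched), fp

def summarize(volumes):
    """volumes: list of (tp, fp, n_gt) tuples, one per volume."""
    tp = sum(v[0] for v in volumes)
    fp = sum(v[1] for v in volumes)
    n_gt = sum(v[2] for v in volumes)
    sensitivity = tp / n_gt if n_gt else 0.0
    fps_per_volume = fp / len(volumes)
    return sensitivity, fps_per_volume
```

Lesion- and patient-based sensitivities follow the same pattern with counts pooled per lesion or per patient instead of per volume.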


2021 ◽  
Author(s):  
Loay Hassan ◽  
Mohamed Abdel-Nasser ◽  
Adel Saleh ◽  
Domenec Puig

Digital breast tomosynthesis (DBT) is a powerful breast cancer screening technology. DBT can improve the ability of radiologists to detect breast cancer, especially in dense breasts, where it outperforms mammography. Although many automated methods have been proposed to detect breast lesions in mammographic images, very few have been proposed for DBT, owing to the lack of sufficient annotated DBT images for training object detectors. In this paper, we present fully automated deep-learning breast lesion detection methods. Specifically, we study the effectiveness of two data augmentation techniques (channel replication and channel-concatenation) with five state-of-the-art deep learning detection models. Our preliminary results on a challenging publicly available DBT dataset show that the channel-concatenation data augmentation technique can significantly improve breast lesion detection results for deep learning-based breast lesion detectors.
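The two augmentation techniques can be sketched as follows; the neighbour-slice selection is an assumption, since the abstract does not specify it. The common motivation is that detectors pretrained on RGB images expect three input channels: channel replication copies a single DBT slice into all three channels, while channel-concatenation stacks a slice with adjacent slices so the detector sees local depth context.

```python
import numpy as np

def channel_replication(slice_2d):
    # Repeat one grayscale DBT slice across three channels -> (H, W, 3).
    return np.stack([slice_2d] * 3, axis=-1)

def channel_concatenation(volume, k):
    # Stack slice k with its immediate neighbours as a 3-channel image.
    # volume has shape (depth, H, W); edge slices reuse the boundary slice.
    lo = max(k - 1, 0)
    hi = min(k + 1, volume.shape[0] - 1)
    return np.stack([volume[lo], volume[k], volume[hi]], axis=-1)
```

Either output can then be fed to an off-the-shelf RGB object detector without changing its input layer.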


Author(s):  
Baris Turkbey ◽  
Masoom A. Haider

Prostate cancer (PCa) is the most common cancer type in males in the Western world. MRI has an established role in the diagnosis of PCa through guiding biopsies. Because of the complex, multistep nature of the MRI-guided PCa diagnosis pathway, diagnostic performance varies widely. Developing artificial intelligence (AI) models using machine learning, particularly deep learning, has an expanding role in radiology. For prostate MRI specifically, several AI approaches have been described in the literature for prostate segmentation, lesion detection and classification, with the aim of improving diagnostic performance and interobserver agreement. In this review article, we summarize radiology applications of AI in prostate MRI.


2021 ◽  
Vol 94 (1121) ◽  
pp. 20201329
Author(s):  
Yoshifumi Noda ◽  
Tetsuro Kaga ◽  
Nobuyuki Kawai ◽  
Toshiharu Miyoshi ◽  
Hiroshi Kawada ◽  
...  

Objectives: To evaluate the image quality and lesion detection capabilities of low-dose (LD) portal venous phase whole-body computed tomography (CT) using deep learning image reconstruction (DLIR). Methods: The study cohort comprised 59 consecutive patients (mean age, 67.2 years) who underwent whole-body LD CT and, within one year, a prior standard-dose (SD) CT reconstructed with hybrid iterative reconstruction (SD-IR) for surveillance of malignancy. The LD CT images were reconstructed with hybrid iterative reconstruction at 40% (LD-IR) and with DLIR (LD-DLIR). Radiologists independently evaluated image quality (5-point scale) and lesion detection. Attenuation values in Hounsfield units (HU) of the liver, pancreas, spleen, abdominal aorta, and portal vein, as well as the background noise and signal-to-noise ratio (SNR) of the liver, pancreas, and spleen, were calculated. Qualitative and quantitative parameters were compared between the SD-IR, LD-IR, and LD-DLIR images. The CT dose-index volume (CTDIvol) and dose-length product (DLP) were compared between the SD and LD scans. Results: The image quality and lesion detection rate of LD-DLIR were comparable to those of SD-IR. Image quality was significantly better with SD-IR than with LD-IR (p < 0.017). The attenuation values of all anatomical structures were comparable between SD-IR and LD-DLIR (p = 0.28–0.96). However, background noise was significantly lower with LD-DLIR (p < 0.001), resulting in improved SNRs (p < 0.001) compared with the SD-IR and LD-IR images. The mean CTDIvol and DLP were significantly lower for the LD scans (2.9 mGy and 216.2 mGy•cm) than for the SD scans (13.5 mGy and 1011.6 mGy•cm) (p < 0.0001). Conclusion: LD CT images reconstructed with DLIR enable a radiation dose reduction of >75% while maintaining image quality and lesion detection rate, with superior SNR compared with SD-IR.
Advances in knowledge: A deep learning image reconstruction algorithm enables around an 80% reduction in radiation dose while maintaining image quality and lesion detection compared with standard-dose whole-body CT.
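The quantitative measurements above (attenuation in HU, background noise, SNR) follow a standard ROI-based definition. A minimal sketch, assuming ROIs have already been extracted as pixel-value arrays and that SNR is defined as mean organ attenuation divided by the background noise SD (the study's exact ROI placement is not described in the abstract):

```python
import numpy as np

def roi_metrics(organ_roi_hu, background_roi_hu):
    """Return (attenuation, background_noise, snr) from ROI pixel values.

    attenuation      : mean HU inside the organ ROI (liver, pancreas, ...)
    background_noise : SD of HU in a homogeneous background ROI
    snr              : organ attenuation divided by background noise
    """
    attenuation = float(np.mean(organ_roi_hu))
    noise = float(np.std(background_roi_hu, ddof=1))  # sample SD
    return attenuation, noise, attenuation / noise
```

Lower background noise at the same attenuation, as reported for LD-DLIR, directly raises the SNR under this definition.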


2020 ◽  
Author(s):  
AmirAbbas Davari ◽  
Thorsten Seehaus ◽  
Matthias Braun ◽  
Andreas Maier

Glaciers and ice sheets currently contribute about two-thirds of the observed global sea level rise of about 3.2 mm a⁻¹. Many of these glaciated regions (Antarctica, sub-Antarctic islands, Greenland, the Russian and Canadian Arctic, Alaska, Patagonia) have ocean-calving ice fronts. Many glaciers in these regions already show considerable ice mass loss, with an observed acceleration in the last decade [1]. Most of this mass loss is caused by the dynamic adjustment of glaciers, with considerable glacier retreat and elevation change being the major observables. The continuous and precise extraction of glacier calving fronts is hence of paramount importance for monitoring rapid glacier changes. Detecting and monitoring ice shelves and glacier fronts in optical and Synthetic Aperture Radar (SAR) satellite images requires well-identified spectral and physical properties of glacier characteristics.

Earth Observation (EO) is producing massive amounts of data that are currently processed either by expensive and slow manual digitization or with simple, unreliable methods such as heuristically derived rule-based systems. As mentioned earlier, due to the variable occurrence of sea ice and icebergs and the similarity of fronts to crevasses, exact mapping of the glacier front position poses considerable difficulties for existing algorithms. Deep learning techniques have been successfully applied to many tasks in image analysis [2]. Recently, Zhang et al. [3] adopted the state-of-the-art deep learning-based image segmentation method U-net [4] on TerraSAR-X images for glacier front segmentation. The main motivation for using the SAR modality instead of optical aerial imagery is the capability of SAR waves to penetrate cloud cover, enabling acquisition all year round.

We intend to bridge the gap towards fully automatic, end-to-end deep learning-based glacier front detection using time-series SAR imagery. U-net has performed extremely well in image segmentation, particularly in the medical image processing community [5]. However, it is a large, complex model and is rather slow to train. The Fully Convolutional Network (FCN) [6] can be considered an architecturally less complex variant of U-net, with faster training and inference times. In this work, we investigate the suitability of the FCN for glacier front segmentation and compare its performance with U-net. Our preliminary results on segmenting glaciers yield a Dice coefficient of 92.96% for the FCN and 93.20% for U-net, which indicates the suitability of the FCN for this task and its comparable performance to U-net.

References:

[1] Vaughan et al. "Observations: cryosphere." Climate Change 2013 (2013): 317-382.

[2] LeCun et al. "Deep learning." Nature 521, no. 7553 (2015): 436.

[3] Zhang et al. "Automatically delineating the calving front of Jakobshavn Isbræ from multitemporal TerraSAR-X images: a deep learning approach." The Cryosphere 13, no. 6 (2019): 1729-1741.

[4] Ronneberger et al. "U-net: Convolutional networks for biomedical image segmentation." MICCAI 2015.

[5] Vesal et al. "A multi-task framework for skin lesion detection and segmentation." In OR 2.0 Context-Aware Operating Theaters, Computer Assisted Robotic Endoscopy, Clinical Image-Based Procedures, and Skin Image Analysis, 2018.

[6] Long et al. "Fully convolutional networks for semantic segmentation." CVPR 2015.
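The Dice coefficient used above to compare the FCN and U-net segmentations is twice the intersection of the predicted and reference masks divided by the sum of their sizes. A minimal sketch on binary masks (the smoothing term `eps` is a common stabilizing assumption, not from the abstract):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    # Dice = 2|A ∩ B| / (|A| + |B|) on binary segmentation masks.
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

A value of 1.0 means the predicted glacier mask exactly matches the reference; the reported 92.96% and 93.20% indicate near-identical overlap for both architectures.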


2019 ◽  
Vol 37 (15_suppl) ◽  
pp. e16605-e16605
Author(s):  
Choongheon Yoon ◽  
Jasper Van ◽  
Michelle Bardis ◽  
Param Bhatter ◽  
Alexander Ushinsky ◽  
...  

Background: Prostate cancer is the most commonly diagnosed male cancer in the U.S. Multiparametric magnetic resonance imaging (mpMRI) is increasingly used for both prostate cancer evaluation and biopsy guidance. The PI-RADS v2 scoring paradigm was developed to stratify prostate lesions on MRI and to predict lesion grade. Prostate organ and lesion segmentation is an essential step in pre-biopsy surgical planning. Deep learning convolutional neural networks (CNNs) are becoming an increasingly common machine learning method for image recognition. In this study, we develop a comprehensive deep learning pipeline of 3D/2D CNNs based on the U-Net architecture for automatic localization and segmentation of the prostate, detection of prostate lesions, and PI-RADS v2 lesion scoring on mpMRI. Methods: This IRB-approved retrospective review included a total of 303 prostate nodules from 217 patients who had a prostate mpMRI between September 2014 and December 2016 and an MR-guided transrectal biopsy. For each T2-weighted image, a board-certified abdominal radiologist manually segmented the prostate and each prostate lesion. The T2-weighted and ADC series were co-registered, and each lesion was assigned an overall PI-RADS score, a T2-weighted PI-RADS score, and an ADC PI-RADS score. After a U-Net neural network segmented the prostate organ, a mask regional convolutional neural network (mask R-CNN) was applied. The mask R-CNN is composed of three neural networks: a feature pyramid network, a region proposal network, and a head network. The mask R-CNN detected each prostate lesion, segmented it, and estimated its PI-RADS score; rather than performing a discrete classification, it was implemented to regress along the dimensions of the PI-RADS criteria. The mask R-CNN performance was assessed with AUC, the Sørensen–Dice coefficient, and Cohen's kappa for PI-RADS scoring agreement. Results: The AUC for prostate nodule detection was 0.79. By varying detection thresholds, sensitivity/PPV were 0.94/0.54 and 0.60/0.87 at either end of the spectrum.
For detected nodules, the segmentation Sørensen–Dice coefficient was 0.76 (0.72–0.80). Weighted Cohen's kappa for PI-RADS scoring agreement was 0.63, 0.71, and 0.51 for the composite, T2-weighted, and ADC scores, respectively. Conclusions: These results demonstrate the feasibility of implementing a comprehensive 3D/2D CNN-based deep learning pipeline for evaluation of prostate mpMRI. This method is highly accurate for organ segmentation. The results for lesion detection and categorization are modest; however, the PI-RADS v2 scoring accuracy is comparable to previously published human interobserver agreement.
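The weighted Cohen's kappa used for PI-RADS scoring agreement penalizes near-miss disagreements between ordinal scores less than distant ones. A minimal sketch, assuming linear distance weights by default (the abstract does not state the exact weighting scheme used):

```python
import numpy as np

def weighted_kappa(rater_a, rater_b, n_cat, weights="linear"):
    """Weighted Cohen's kappa for two raters' ordinal scores in 0..n_cat-1."""
    a = np.asarray(rater_a)
    b = np.asarray(rater_b)
    # Observed agreement matrix, normalized to proportions.
    observed = np.zeros((n_cat, n_cat))
    for i, j in zip(a, b):
        observed[i, j] += 1
    observed /= observed.sum()
    # Expected matrix under independence of the two raters' marginals.
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    # Disagreement weights: 0 on the diagonal, growing with score distance.
    idx = np.arange(n_cat)
    dist = np.abs(idx[:, None] - idx[None, :]) / (n_cat - 1)
    w = dist if weights == "linear" else dist ** 2
    return 1.0 - (w * observed).sum() / (w * expected).sum()
```

Perfect agreement yields 1.0, chance-level agreement yields 0.0, so the reported 0.63/0.71/0.51 indicate moderate-to-substantial agreement with the radiologist scores.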


Author(s):  
Bhavani Sambaturu ◽  
Bhargav Srinivasan ◽  
Sahana Muraleedhara Prabhu ◽  
Kumar Thirunellai Rajamani ◽  
Thennarasu Palanisamy ◽  
...  
