Prostate Cancer Delineation in MRI Images Based on Deep Learning: Quantitative Comparison and Promising Perspective

2021
Author(s): Eddardaa Ben Loussaief, Mohamed Abdel-Nasser, Domènec Puig

Prostate cancer is the most common malignant tumor in men. Magnetic resonance imaging (MRI) plays a crucial role in the detection, diagnosis, and treatment of prostate cancer. Computer-aided diagnosis (CAD) systems can help doctors analyze MRI images and detect prostate cancer earlier. One of the key stages of prostate cancer CAD systems is the automatic delineation of the prostate. Deep learning has recently demonstrated promising segmentation results with medical images. The purpose of this paper is to compare state-of-the-art deep learning-based approaches for prostate delineation in MRI images and to discuss their limitations and strengths. In addition, we introduce a promising perspective for prostate tumor classification in MRI images: the best-performing segmentation model is used to delineate prostate tumors in MRI images, and the segmented images are then used to extract radiomics features that discriminate between benign and malignant prostate tumors.
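The second stage of the perspective described above, extracting features from the segmented region to feed a benign-versus-malignant discriminator, can be sketched as follows. This is an illustrative sketch only; the function name and the small set of first-order statistics are our assumptions, not the paper's feature set.

```python
def first_order_features(pixels):
    """Basic first-order radiomic statistics over the intensities of a
    segmented region (a flat list of pixel values inside the mask)."""
    n = len(pixels)
    mean = sum(pixels) / n
    variance = sum((p - mean) ** 2 for p in pixels) / n
    return {
        "mean": mean,
        "variance": variance,
        "min": min(pixels),
        "max": max(pixels),
    }

# Usage (toy intensities standing in for masked MRI voxels):
features = first_order_features([1, 2, 3, 4])
```

In a full pipeline, a vector of such features per lesion would be passed to a conventional classifier trained on labeled benign/malignant cases.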

PLoS ONE
2010
Vol 5 (5)
pp. e10747
Author(s): Christina Hägglöf, Peter Hammarsten, Andreas Josefsson, Pär Stattin, Janna Paulsson, ...

Sensors
2020
Vol 20 (18)
pp. 5411
Author(s): Luca Brunese, Francesco Mercaldo, Alfonso Reginelli, Antonella Santone

Prostate cancer is classified into different stages, and each stage is associated with a different Gleason score. Labeling a diagnosed prostate cancer is a task usually performed by radiologists. In this paper, we propose a deep architecture, based on several convolutional layers, that automatically assigns the Gleason score to the Magnetic Resonance Imaging (MRI) under analysis. We exploit a set of 71 radiomic features belonging to five categories: First Order, Shape, Gray Level Co-occurrence Matrix, Gray Level Run Length Matrix, and Gray Level Size Zone Matrix. The radiomic features are gathered directly from segmented MRIs using two freely available research datasets obtained from different institutions. The results, in terms of accuracy, are promising, ranging between 0.96 and 0.98 for Gleason score prediction.
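To make the Gray Level Co-occurrence Matrix (GLCM) category concrete, here is a minimal sketch of a horizontal-neighbour GLCM and one texture feature (contrast) computed from it. The implementation is our own illustration, not the paper's code; production pipelines typically use a radiomics library with many more offsets and features.

```python
def glcm(image, levels):
    """Gray-level co-occurrence matrix for horizontal neighbours
    (offset (0, 1)), symmetrised and normalised to probabilities.
    `image` is a 2D list of integer gray levels in range(levels)."""
    m = [[0] * levels for _ in range(levels)]
    for row in image:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1
            m[b][a] += 1  # symmetric counting of each neighbour pair
    total = sum(sum(r) for r in m)
    return [[c / total for c in r] for r in m]

def glcm_contrast(p):
    """GLCM contrast: sum over (i, j) of p(i, j) * (i - j)^2."""
    n = len(p)
    return sum(p[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))

# Usage on a tiny 2-level image:
p = glcm([[0, 0, 1], [1, 1, 0]], levels=2)
contrast = glcm_contrast(p)
```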


2019
Vol 37 (15_suppl)
pp. e16600-e16600
Author(s): Jasper Van, Choongheon Yoon, Justin Glavis-Bloom, Michelle Bardis, Alexander Ushinsky, ...

e16600 Background: Prostate cancer is the most common cancer in men in the United States, with over 200,000 new cases diagnosed in 2018. Multiparametric MRI of the prostate (mpMRI) has emerged as a valuable adjunct for the detection and characterization of prostate cancer as well as for guidance of prostate biopsy. As mpMRI progresses towards widespread clinical use, major challenges have been identified, arising from the need to increase the accuracy of mpMRI localization of prostate lesions, improve lesion categorization, and decrease the time and technical complexity of mpMRI evaluation by radiologists or urologists. Deep learning convolutional neural networks (CNNs) for image recognition are becoming a more common method of machine learning and show promise in the evaluation of complex medical imaging. In this study we describe a deep learning approach for automatic localization and segmentation of the prostate on clinically acquired mpMRIs. Methods: This IRB-approved retrospective review included patients who had a prostate MRI between September 2014 and August 2018 and an MR-guided transrectal biopsy. For each mpMRI, the prostate was manually segmented on the T2-weighted sequence by a board-certified abdominal radiologist. A hybrid 3D/2D CNN based on the U-Net architecture was developed and trained on these manually segmented images to perform automated organ segmentation. After training, the CNN was used to produce prostate segmentations autonomously on clinical mpMRIs. Accuracy of the CNN was assessed by the Sørensen–Dice coefficient and the Pearson correlation coefficient, with five-fold cross-validation.
Results: The CNN was successfully trained, and five-fold cross-validation was performed on 411 prostate mpMRIs. The Sørensen–Dice coefficient from the five-fold cross-validation was 0.87, and the Pearson correlation coefficient for segmented volume was 0.99. Conclusions: These results demonstrate that a CNN can be developed and trained to automatically localize and volumetrically segment the prostate on clinical mpMRI with high accuracy. This study supports the potential of an automated deep learning CNN for organ segmentation to replace manual clinical segmentation. Future studies will look towards prostate lesion localization and categorization on mpMRI.
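The two accuracy metrics reported above, the Sørensen–Dice coefficient on binary masks and the Pearson correlation on segmented volumes, are standard and can be sketched directly. These definitions are textbook formulas, not the study's own code.

```python
def dice(a, b):
    """Sørensen–Dice coefficient between two binary masks,
    given as flat lists of 0/1 values: 2|A ∩ B| / (|A| + |B|)."""
    inter = sum(x and y for x, y in zip(a, b))
    return 2 * inter / (sum(a) + sum(b))

def pearson(x, y):
    """Pearson correlation coefficient between paired measurements,
    e.g. manually vs. automatically segmented prostate volumes."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```

A Dice of 0.87 with a volume correlation of 0.99 indicates strong overlap per voxel and near-perfect agreement on total organ volume.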


Diagnostics
2021
Vol 11 (11)
pp. 1964
Author(s): Reza Kalantar, Gigin Lin, Jessica M. Winfield, Christina Messiou, Susan Lalondrelle, ...

The recent rise of deep learning (DL) and its promising capabilities in capturing non-explicit detail from large datasets have attracted substantial research attention in the field of medical image processing. DL provides grounds for technological development of computer-aided diagnosis and segmentation in radiology and radiation oncology. Amongst the anatomical locations where recent auto-segmentation algorithms have been employed, the pelvis remains one of the most challenging due to large intra- and inter-patient soft-tissue variability. This review provides a comprehensive, non-systematic and clinically oriented overview of 74 DL-based segmentation studies, published between January 2016 and December 2020, for bladder, prostate, cervical and rectal cancers on computed tomography (CT) and magnetic resonance imaging (MRI), highlighting the key findings, challenges and limitations.


Diagnostics
2020
Vol 10 (11)
pp. 951
Author(s): David J. Winkel, Christian Wetterauer, Marc Oliver Matthias, Bin Lou, Bibo Shi, ...

Background: Opportunistic prostate cancer (PCa) screening is a controversial topic. Magnetic resonance imaging (MRI) has proven to detect prostate cancer with high sensitivity and specificity, leading to the idea of performing image-guided PCa screening; Methods: We evaluated a prospectively enrolled cohort of 49 healthy men participating in a dedicated image-guided PCa screening trial employing a biparametric MRI (bpMRI) protocol consisting of T2-weighted (T2w) and diffusion-weighted imaging (DWI) sequences. Datasets were analyzed both by human readers and by a fully automated artificial intelligence (AI) software using deep learning (DL). Agreement between the algorithm and the reports (serving as the ground truth) was compared on a per-case and per-lesion level using metrics of diagnostic accuracy and κ statistics; Results: The DL method yielded 87% sensitivity (33/38) and 50% specificity (5/10) with a κ of 0.42. 12/28 (43%) Prostate Imaging Reporting and Data System (PI-RADS) 3, 16/22 (73%) PI-RADS 4, and 5/5 (100%) PI-RADS 5 lesions were detected compared to the ground truth. Targeted biopsy revealed PCa in six participants, all correctly diagnosed by both the human readers and the AI. Conclusions: The results of our study show that, in our AI-assisted, image-guided prostate cancer screening, the software solution was able to identify highly suspicious lesions and has the potential to effectively guide the targeted-biopsy workflow.
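The agreement metrics quoted above follow standard definitions, sketched below. The sensitivity and specificity counts (33/38, 5/10) are taken from the abstract; the Cohen's κ function is a generic illustration of how κ is computed from a 2×2 agreement table. The study's κ of 0.42 was presumably derived from its full per-case table, which the abstract does not give, so we make no attempt to reproduce that value here.

```python
def sensitivity(tp, fn):
    """True-positive rate: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: TN / (TN + FP)."""
    return tn / (tn + fp)

def cohens_kappa(tp, fp, fn, tn):
    """Cohen's kappa for a 2x2 agreement table: chance-corrected
    agreement (po - pe) / (1 - pe)."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n  # observed agreement
    # expected agreement by chance, from the marginal totals
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    return (po - pe) / (1 - pe)

# The abstract's per-case counts:
sens = sensitivity(33, 5)   # ≈ 0.87
spec = specificity(5, 5)    # = 0.50
```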


2021
Author(s): Kemal Üreten, Yüksel Maraş, Semra Duran, Kevser Gök

Abstract Objectives The aim of this study is to develop a computer-aided diagnosis method to assist physicians in evaluating sacroiliac radiographs. Methods Convolutional neural networks, a deep learning method, were used in this retrospective study. Transfer learning was implemented with pre-trained VGG-16, ResNet-101 and Inception-v3 networks. Normal pelvic radiographs (n = 290) and pelvic radiographs with sacroiliitis (n = 295) were used for the training of the networks. Results The training results were evaluated with the criteria of accuracy, sensitivity, specificity and precision calculated from the confusion matrix, and the AUC (area under the curve) calculated from the ROC (receiver operating characteristic) curve. The pre-trained VGG-16 model achieved accuracy, sensitivity, specificity, precision and AUC values of 89.9%, 90.9%, 88.9%, 88.9% and 0.96 on test images, respectively. These results were 84.3%, 91.9%, 78.8%, 75.6% and 0.92 with pre-trained ResNet-101, and 82.0%, 79.6%, 85.0%, 86.7% and 0.90 with pre-trained Inception-v3, respectively. Conclusions Successful results were obtained with all three models in this study, where transfer learning was applied with pre-trained VGG-16, ResNet-101 and Inception-v3 networks. This method can assist clinicians in the diagnosis of sacroiliitis, provide them with a second objective interpretation, and also reduce the need for advanced imaging methods such as magnetic resonance imaging (MRI).
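The four confusion-matrix criteria used to evaluate the networks above have standard definitions, sketched here for a binary normal-versus-sacroiliitis decision. The function and the toy counts are our own illustration, not the study's evaluation code.

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, sensitivity, specificity and precision from a
    2x2 confusion matrix (positive class = sacroiliitis)."""
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "sensitivity": tp / (tp + fn),   # recall for the positive class
        "specificity": tn / (tn + fp),
        "precision": tp / (tp + fp),
    }

# Usage with hypothetical test-set counts:
m = classification_metrics(tp=8, fp=1, fn=2, tn=9)
```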

