A Two‐Stage Deep Learning Model for Fully Automated Pancreas Segmentation on Computed Tomography: Comparison with Intra‐Reader and Inter‐Reader Reliability at Full and Reduced Radiation Dose on an External Dataset

2021
Author(s): Ananya Panda, Panagiotis Korfiatis, Garima Suman, Sushil K. Garg, Eric C. Polley, ...

2020 ◽ Vol 196 ◽ pp. 105711
Author(s): Mizuho Nishio, Sho Koyasu, Shunjiro Noguchi, Takao Kiguchi, Kanako Nakatsu, ...

2021
Author(s): Yunan Wu, Junchi Liu, Gregory M White, Jie Deng

Abstract: Liver MR images often suffer degraded quality from ghosting or blurring artifacts caused by patient respiratory or bulk motion. In this study, we developed a two-stage deep learning model to reduce motion artifacts on dynamic contrast-enhanced (DCE) liver MRIs. The stage-I network used a deep residual network with a densely connected multi-resolution block (DRN-DCMB) to remove the majority of motion artifacts. The stage-II network applied a perceptual loss to preserve image structural features, updating the parameters of the stage-I network via backpropagation. The stage-I network was trained on small image patches simulated with five types of motion, i.e., rotational, sinusoidal, random, elastic deformation, and through-plane, to mimic actual liver motion patterns. The stage-II network was trained on full-size images with the same motion types. The motion reduction model was tested on simulated motion images and on images with real motion artifacts. The resulting images after two-stage processing showed substantially reduced motion artifacts while preserving anatomic details without image blurriness. This model outperformed existing motion artifact reduction methods on liver DCE-MRI.
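The motion simulation described above can be sketched in k-space: an in-plane translation of the object multiplies each phase-encode line by a linear phase, so a sinusoidal (respiratory-like) displacement can be imposed line by line during a simulated acquisition. A minimal numpy sketch of the sinusoidal case only, assuming a 2-D magnitude image; the function name and parameters are illustrative, not the authors' code:

```python
import numpy as np

def simulate_sinusoidal_motion(image, amplitude=4.0, cycles=3.0):
    """Corrupt a 2-D magnitude image with a sinusoidal in-plane translation.

    Each k-space row (phase-encode line) is acquired at a different time, so
    a translation by s pixels along y at that time multiplies the row with
    frequency ky by exp(-2*pi*i * ky * s).
    """
    ny = image.shape[0]
    k = np.fft.fftshift(np.fft.fft2(image))
    ky = np.fft.fftshift(np.fft.fftfreq(ny))            # cycles/pixel per row
    # Sinusoidal displacement (in pixels) as a function of acquisition time,
    # here identified with the phase-encode line index.
    shifts = amplitude * np.sin(2 * np.pi * cycles * np.arange(ny) / ny)
    phase = np.exp(-2j * np.pi * ky[:, None] * shifts[:, None])
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k * phase)))
```

With `amplitude=0` the image is recovered unchanged; nonzero amplitudes produce the characteristic ghosting along the phase-encode direction that such a model would be trained to remove.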


IEEE Access ◽ 2019 ◽ Vol 7 ◽ pp. 30373-30385
Author(s): Farrukh Aslam Khan, Abdu Gumaei, Abdelouahid Derhab, Amir Hussain

2019 ◽ Vol 53 (3) ◽ pp. 1800986
Author(s): Shuo Wang, Jingyun Shi, Zhaoxiang Ye, Di Dong, Dongdong Yu, ...

Epidermal growth factor receptor (EGFR) genotyping is critical for treatment decisions such as the use of tyrosine kinase inhibitors in lung adenocarcinoma. Conventional identification of EGFR genotype requires biopsy and sequence testing, which is invasive and may suffer from the difficulty of accessing tissue samples. Here, we propose a deep learning model to predict EGFR mutation status in lung adenocarcinoma using non-invasive computed tomography (CT). We retrospectively collected data from 844 lung adenocarcinoma patients with pre-operative CT images, EGFR mutation status, and clinical information from two hospitals. An end-to-end deep learning model was proposed to predict EGFR mutation status from CT scans. Trained on 14,926 CT images, the deep learning model achieved encouraging predictive performance in both the primary cohort (n=603; AUC 0.85, 95% CI 0.83–0.88) and the independent validation cohort (n=241; AUC 0.81, 95% CI 0.79–0.83), a significant improvement over previous studies using hand-crafted CT features or clinical characteristics (p&lt;0.001). The deep learning score differed significantly between EGFR-mutant and EGFR-wild-type tumours (p&lt;0.001). Since CT is routinely used in lung cancer diagnosis, the deep learning model provides a non-invasive and easy-to-use method for predicting EGFR mutation status.
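The AUC-with-confidence-interval reporting used above can be reproduced in form (not in value) with a rank-based AUC estimate and a percentile bootstrap over the test cohort. A minimal numpy sketch, assuming binary labels and continuous model scores; `auc` and `bootstrap_ci` are illustrative names, not the study's code:

```python
import numpy as np

def auc(y_true, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    # Fraction of (positive, negative) pairs ranked correctly; ties count 0.5.
    diff = pos[:, None] - neg[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / (len(pos) * len(neg))

def bootstrap_ci(y_true, scores, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the AUC."""
    rng = np.random.default_rng(seed)
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    n, stats = len(y_true), []
    while len(stats) < n_boot:
        idx = rng.integers(0, n, n)
        if y_true[idx].min() == y_true[idx].max():   # resample must keep both classes
            continue
        stats.append(auc(y_true[idx], scores[idx]))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi
```

Resampling patients (rather than images) is the usual choice when, as here, many CT slices come from the same patient, so that the interval reflects patient-level variability.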


2020 ◽ pp. 000313482095377
Author(s): Michael D. Watson, William B. Lyman, Michael J. Passeri, Keith J. Murphy, John P. Sarantou, ...

Background: Society consensus guidelines are commonly used to guide management of pancreatic cystic neoplasms (PCNs). However, downsides of these guidelines include unnecessary surgery and missed malignancy. The aim of this study was to use computed tomography (CT)-based deep learning to predict malignancy of PCNs. Materials and Methods: Patients with PCNs who underwent resection were retrospectively reviewed. Axial images of the mucinous cystic neoplasms were collected and, based on final pathology, assigned a binary outcome of advanced neoplasia or benign. Advanced neoplasia was defined as adenocarcinoma or intraductal papillary mucinous neoplasm with high-grade dysplasia. A convolutional neural network (CNN) deep learning model was trained on 66% of the images, and the trained model was tested on the remaining 33%. Predictions from the deep learning model were compared with the Fukuoka guidelines. Results: Twenty-seven patients met the inclusion criteria, with 18 used for training and 9 for model testing. The trained deep learning model correctly predicted 3 of 3 malignant lesions and 5 of 6 benign lesions. The Fukuoka guidelines correctly classified 2 of 3 malignant lesions as high risk and 4 of 6 benign lesions as worrisome. Following the deep learning model's predictions would have avoided 1 missed malignancy and 1 unnecessary operation. Discussion: In this pilot study, a deep learning model correctly classified 8 of 9 PCNs and performed better than consensus guidelines. Deep learning can be used to predict malignancy of PCNs; however, further model improvements are necessary before clinical use.
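The test-set counts in the abstract (3/3 malignant and 5/6 benign correct for the CNN, versus 2/3 and 4/6 for the guidelines) translate directly into sensitivity, specificity, and accuracy. A small numpy sketch; which specific lesions were misclassified is hypothetical, only the totals come from the abstract:

```python
import numpy as np

def diagnostic_metrics(y_true, y_pred):
    """Sensitivity, specificity, and accuracy for a binary test
    (1 = advanced neoplasia, 0 = benign)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp), (tp + tn) / len(y_true)

# Test set from the abstract: 3 malignant (1) and 6 benign (0) lesions.
y_true   = [1, 1, 1, 0, 0, 0, 0, 0, 0]
cnn_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1]  # 3/3 malignant, 5/6 benign correct
fuk_pred = [1, 1, 0, 0, 0, 0, 0, 1, 1]  # 2/3 malignant, 4/6 benign correct

cnn_sens, cnn_spec, cnn_acc = diagnostic_metrics(y_true, cnn_pred)
fuk_sens, fuk_spec, fuk_acc = diagnostic_metrics(y_true, fuk_pred)
```

This gives the CNN an accuracy of 8/9 against 6/9 for the guidelines, matching the abstract's comparison, though with only 9 test lesions these estimates carry wide uncertainty.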

