Convolutional neural network for discriminating nasopharyngeal carcinoma and benign hyperplasia on MRI

Author(s):  
Lun M. Wong ◽  
Ann D. King ◽  
Qi Yong H. Ai ◽  
W. K. Jacky Lam ◽  
Darren M. C. Poon ◽  
...  
2018 ◽  
Vol 2018 ◽  
pp. 1-7 ◽  
Author(s):  
Qiaoliang Li ◽  
Yuzhen Xu ◽  
Zhewei Chen ◽  
Dexiang Liu ◽  
Shi-Ting Feng ◽  
...  

Objectives. To evaluate a deep learning architecture, based on the convolutional neural network (CNN) technique, for automatic tumor segmentation of magnetic resonance imaging (MRI) in nasopharyngeal carcinoma (NPC). Materials and Methods. In this prospective study, 87 MRI scans containing tumor regions were acquired from newly diagnosed NPC patients and augmented to >60,000 images. The proposed CNN is composed of two phases: feature representation and score-map reconstruction. A stepwise scheme was designed to train the network. To evaluate performance, we used case-by-case leave-one-out cross-validation (LOOCV). The ground-truth tumor contours were obtained by the consensus of two experienced radiologists. Results. The mean dice similarity coefficient, percent match, and corresponding ratio with our method were 0.89±0.05, 0.90±0.04, and 0.84±0.06, respectively, all better than the values reported in similar studies. Conclusions. We established a deep learning-based segmentation method for NPC in contrast-enhanced MRI. Further clinical trials with dedicated algorithms are warranted.
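The overlap metrics reported above can be sketched on binary masks. This is an illustrative implementation (the function names `dice_coefficient` and `percent_match` are ours, not from the paper); the dice similarity coefficient is twice the intersection over the sum of the two areas, and percent match is the fraction of the ground truth covered by the prediction.

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2 * |pred ∩ truth| / (|pred| + |truth|)."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

def percent_match(pred, truth):
    """Fraction of the ground-truth region covered by the prediction."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    return np.logical_and(pred, truth).sum() / truth.sum()
```

In a LOOCV setup, these would be computed per held-out case and then averaged, yielding the mean ± SD figures quoted above.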


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Deli Wang ◽  
Zheng Gong ◽  
Yanfen Zhang ◽  
Shouxi Wang

The aim of this study was to explore the value of a convolutional neural network- (CNN-) based magnetic resonance imaging (MRI) intelligent segmentation model in identifying nasopharyngeal carcinoma (NPC) lesions. The multisequence cross convolutional (MSCC) method was used in the complex convolutional network algorithm to establish the two-dimensional (2D) ResUNet intelligent segmentation model for MRI images of NPC lesions, and a multisequence multidimensional fusion segmentation model (MSCC-MDF) was further established. In 45 patients with NPC, the Dice coefficient, Hausdorff distance (HD), and percentage of area difference (PAD) were calculated to evaluate the segmentation of MRI lesions. The results showed that the 2D-ResUNet model processed by MSCC had the largest Dice coefficient for segmenting NPC tumor lesions, 0.792 ± 0.045, along with the smallest HD and PAD, 5.94 ± 0.41 mm and 15.96 ± 1.232%, respectively. When batch size = 5, the convergence curve was relatively gentle and the convergence speed was best. The MSCC-MDF model achieved the largest Dice coefficient for segmenting NPC tumor lesions, 0.896 ± 0.09, with the smallest HD and PAD, 5.07 ± 0.54 mm and 14.41 ± 1.33%, respectively; its Dice coefficient was significantly higher, and its HD and PAD significantly lower, than those of the other algorithms (P < 0.05). In summary, the MSCC-MDF model significantly improved the segmentation of MRI lesions in NPC patients, providing a reference for the diagnosis of NPC.
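The two distance/area metrics used here can be sketched as follows. This is a minimal illustration (function names are ours): HD is the symmetric Hausdorff distance between the boundary point sets of the two contours, and PAD is the absolute area difference relative to the ground-truth area, in percent.

```python
import numpy as np

def hausdorff_distance(pts_a, pts_b):
    """Symmetric Hausdorff distance between two point sets of shape (N, 2) and (M, 2)."""
    a = np.asarray(pts_a, dtype=float)
    b = np.asarray(pts_b, dtype=float)
    # pairwise Euclidean distances between every point in a and every point in b
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # max over each direction of the nearest-neighbour distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def percentage_area_difference(pred, truth):
    """PAD: absolute area (voxel-count) difference relative to ground truth, in percent."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    return 100.0 * abs(int(pred.sum()) - int(truth.sum())) / truth.sum()
```

Lower values of both metrics indicate better agreement, which is why the MSCC-MDF model's smaller HD and PAD accompany its larger Dice coefficient.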


2021 ◽  
Vol 11 ◽  
Author(s):  
Yaoying Liu ◽  
Zhaocai Chen ◽  
Jinyuan Wang ◽  
Xiaoshen Wang ◽  
Baolin Qu ◽  
...  

Purpose. This study focused on predicting 3D dose distributions at high precision for nasopharyngeal carcinoma (NPC) patients treated with Tomotherapy, based on the patient-specific gap between organs at risk (OARs) and planning target volumes (PTVs). Methods. A convolutional neural network (CNN) was trained with CT images and contour masks as input and dose distributions as output. The CNN is based on the "3D Dense-U-Net", which combines the U-Net and the Dense-Net. To evaluate the model, we retrospectively used 124 NPC patients treated with Tomotherapy; 96 and 28 patients were randomly split for model training and testing, respectively. We compared CNN models with different training matrix shapes and dimensions: 128 × 128 × 48 (Model I), 128 × 128 × 16 (Model II), and a 2D Dense U-Net (Model III). The performance of these models was quantitatively evaluated using clinically relevant metrics and statistical analysis. Results. We found that a larger height of the training patch yields a better model. Errors were calculated by comparing the predicted dose with the ground truth. The mean deviations from the mean and maximum doses of PTVs and OARs were 2.42% and 2.93%. The error for the maximum dose of the right optic nerve was 4.87 ± 6.88% in Model I, compared with 7.9 ± 6.8% in Model II (p = 0.08) and 13.85 ± 10.97% in Model III (p < 0.01); Model I performed best. The gamma passing rate of PTV60 for the 3%/3 mm criterion was 83.6 ± 5.2% in Model I, compared with 75.9 ± 5.5% in Model II (p < 0.001) and 77.2 ± 7.3% in Model III (p < 0.01); Model I again gave the best outcome. The prediction error of D95 for PTV60 was 0.64 ± 0.68% in Model I, compared with 2.04 ± 1.38% in Model II (p < 0.01) and 1.05 ± 0.96% in Model III (p = 0.01); Model I was also the best. Conclusions. Training dose prediction models by exploiting deep-learning techniques together with clinical logic concepts is worthwhile. Increasing the height (Y direction) of the training patch size can improve dose prediction accuracy for small OARs and the whole body. Our dose prediction network provides clinically acceptable results and a training strategy for a dose prediction model, and should help build automatic Tomotherapy planning.
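The D95 metric used to compare the models can be sketched simply: D95 is the minimum dose received by the best-covered 95% of a structure's voxels, i.e. the 5th percentile of the in-structure dose distribution, and its prediction error is expressed relative to the ground-truth value. This is an illustrative sketch (the function names are ours, not the paper's):

```python
import numpy as np

def d95(dose_voxels):
    """D95: dose received by at least 95% of the structure's voxels,
    i.e. the 5th percentile of the voxel-dose distribution."""
    return np.percentile(np.asarray(dose_voxels, dtype=float), 5)

def relative_dose_error(pred_voxels, true_voxels, metric=d95):
    """Percentage error of a dose metric (default D95) between predicted
    and ground-truth dose distributions over the same structure."""
    ref = metric(true_voxels)
    return 100.0 * abs(metric(pred_voxels) - ref) / ref
```

The same pattern applies to the mean- and maximum-dose deviations quoted above, by swapping `metric` for `np.mean` or `np.max`.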


2020 ◽  
Author(s):  
S Kashin ◽  
D Zavyalov ◽  
A Rusakov ◽  
V Khryashchev ◽  
A Lebedev

2020 ◽  
Vol 2020 (10) ◽  
pp. 181-1-181-7
Author(s):  
Takahiro Kudo ◽  
Takanori Fujisawa ◽  
Takuro Yamaguchi ◽  
Masaaki Ikehara

Image deconvolution has recently become an important issue. It has two kinds of approaches: non-blind and blind. Non-blind deconvolution is a classic image-deblurring problem that assumes the point spread function (PSF) is known and spatially invariant. Recently, convolutional neural networks (CNNs) have been used for non-blind deconvolution. Though CNNs can deal with complex changes in unknown images, some conventional CNN-based methods can only handle small PSFs and do not consider the large PSFs encountered in the real world. In this paper we propose a CNN-based non-blind deconvolution framework that can remove large-scale ringing in a deblurred image. Our method has three key points. First, our network architecture preserves both large and small features in the image. Second, the training dataset is created to preserve detail. Third, we extend the images to minimize the effects of large ringing at the image borders. In our experiments, we used three kinds of large PSFs and observed high-precision results from our method, both quantitatively and qualitatively.
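The third key point, extending the image borders before deconvolution, can be illustrated with a classical frequency-domain Wiener filter rather than the paper's CNN. This is a minimal sketch under that substitution (the function name and the regularization constant `k` are our own choices): edge-replication padding reduces the wrap-around discontinuities that cause ringing at the borders, and the result is cropped back to the original size.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, pad, k=1e-3):
    """Classical Wiener deconvolution with border extension:
    pad by edge replication, deconvolve in the frequency domain, crop back."""
    ext = np.pad(blurred, pad, mode="edge")
    # embed the PSF in a full-size array and center it at the origin
    psf_full = np.zeros_like(ext)
    h, w = psf.shape
    psf_full[:h, :w] = psf
    psf_full = np.roll(psf_full, (-(h // 2), -(w // 2)), axis=(0, 1))
    H = np.fft.fft2(psf_full)
    G = np.fft.fft2(ext)
    # Wiener filter: conj(H) * G / (|H|^2 + k), with k damping noise amplification
    F = np.conj(H) * G / (np.abs(H) ** 2 + k)
    restored = np.real(np.fft.ifft2(F))
    return restored[pad:-pad, pad:-pad]
```

A larger `pad` (on the order of the PSF support) pushes the residual boundary artifacts outside the cropped region, which is the same intuition the paper applies to large PSFs.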

