Automatic Diagnosis of Rectal Cancer Based on CT Images by Deep Learning Method

Author(s):  
Hongyi Wang ◽  
Hongyu Wang ◽  
Limei Song ◽  
Qinhua Guo
Energies ◽  
2021 ◽  
Vol 14 (15) ◽  
pp. 4595
Author(s):  
Parisa Asadi ◽  
Lauren E. Beckingham

X-ray CT imaging provides a 3D view of a sample and is a powerful tool for investigating the internal features of porous rock. Reliable phase segmentation of these images is essential but, as with other digital rock imaging techniques, manual segmentation is time-consuming, labor-intensive, and subjective. Combining 3D X-ray CT imaging with machine learning methods that can simultaneously consider several extracted features in addition to color attenuation is a promising and powerful approach for reliable phase segmentation. Machine learning-based phase segmentation of X-ray CT images enables faster data collection and interpretation than traditional methods. This study investigates the performance of several filtering techniques combined with three machine learning methods and a deep learning method to assess their potential for reliable feature extraction and pixel-level phase segmentation of X-ray CT images. Features were first extracted from the images using well-known filters and from the second convolutional layer of the pre-trained VGG16 architecture. Then, K-means clustering, Random Forest, and feed-forward artificial neural network methods, as well as a modified U-Net model, were applied to the extracted input features. The models’ performances were then compared to determine the influence of the machine learning method and input features on reliable phase segmentation. The results showed that considering more feature dimensions is promising: all classification algorithms achieved high accuracy, ranging from 0.87 to 0.94. Feature-based Random Forest demonstrated the best performance among the machine learning models, with an accuracy of 0.88 for Mancos and 0.94 for Marcellus. The U-Net model with a linear combination of focal and Dice loss also performed well, with accuracies of 0.91 and 0.93 for Mancos and Marcellus, respectively.
In general, considering more features provided promising and reliable segmentation results that are valuable for analyzing the composition of dense samples, such as shales, which are significant unconventional reservoirs in oil recovery.
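The combined focal and Dice loss used for the U-Net above can be sketched in plain Python over per-pixel foreground probabilities. The weighting factor `alpha` and focusing parameter `gamma` below are illustrative assumptions, not values reported in the study.

```python
import math

def dice_loss(probs, targets, eps=1e-6):
    # Soft Dice loss: 1 - 2|P∩T| / (|P| + |T|), computed on probabilities
    inter = sum(p * t for p, t in zip(probs, targets))
    total = sum(probs) + sum(targets)
    return 1.0 - (2.0 * inter + eps) / (total + eps)

def focal_loss(probs, targets, gamma=2.0, eps=1e-7):
    # Mean focal loss: down-weights well-classified pixels via (1 - p_t)^gamma
    losses = []
    for p, t in zip(probs, targets):
        p_t = p if t == 1 else 1.0 - p          # probability of the true class
        p_t = min(max(p_t, eps), 1.0 - eps)     # clamp to avoid log(0)
        losses.append(-((1.0 - p_t) ** gamma) * math.log(p_t))
    return sum(losses) / len(losses)

def combined_loss(probs, targets, alpha=0.5):
    # Linear combination of the two terms, as described in the abstract
    return alpha * focal_loss(probs, targets) + (1 - alpha) * dice_loss(probs, targets)
```

In practice both terms would be computed on GPU tensors inside the training loop; this scalar version only illustrates the arithmetic.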


Author(s):  
Kotaro MAYUMI ◽  
Takayuki MATSUNO ◽  
Tetsushi KAMEGAWA ◽  
Takao HIRAKI ◽  
Yuichiro TODA ◽  
...  

2020 ◽  
Vol 2020 ◽  
pp. 1-9
Author(s):  
Yafen Li ◽  
Wen Li ◽  
Jing Xiong ◽  
Jun Xia ◽  
Yaoqin Xie

Cross-modality medical image synthesis between magnetic resonance (MR) images and computed tomography (CT) images has attracted increasing attention in many medical imaging areas. Many deep learning methods have been used to generate pseudo-MR/CT images from counterpart-modality images. In this study, we used U-Net and Cycle-Consistent Adversarial Networks (CycleGAN), representative networks of supervised and unsupervised deep learning methods, respectively, to transform MR/CT images to their counterpart modality. Experimental results show that the synthetic images predicted by the proposed U-Net method achieved a lower mean absolute error (MAE) and higher structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) in both directions of CT/MR synthesis, especially in synthetic CT image generation. Although the synthetic images produced by the U-Net method contain less contrast information than those produced by the CycleGAN method, their pixel-value profiles are closer to those of the ground-truth images. This work demonstrated that the supervised deep learning method outperforms the unsupervised one in accuracy for MR/CT synthesis tasks.
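The image-similarity metrics reported above (MAE and PSNR) are straightforward to compute; a minimal sketch over flattened pixel lists, assuming an 8-bit intensity range for the PSNR peak value:

```python
import math

def mae(pred, ref):
    # Mean absolute error between a synthetic image and its reference
    return sum(abs(p - r) for p, r in zip(pred, ref)) / len(ref)

def psnr(pred, ref, max_val=255.0):
    # Peak signal-to-noise ratio in dB; higher means closer to the reference
    mse = sum((p - r) ** 2 for p, r in zip(pred, ref)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

For CT synthesis, `max_val` would instead reflect the HU range of the scans being compared.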


2019 ◽  
Vol 32 (4) ◽  
pp. 638-643 ◽  
Author(s):  
Seokmin Han ◽  
Sung Il Hwang ◽  
Hak Jong Lee

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Weijun Chen ◽  
Cheng Wang ◽  
Wenming Zhan ◽  
Yongshi Jia ◽  
Fangfang Ruan ◽  
...  

Radiotherapy requires the target area and the organs at risk (OARs) to be contoured on the patient's CT images. When delineating the OARs of the chest and abdomen, the physician must contour each CT slice, and delineating these large and varied structures is time-consuming and laborious. This study aims to evaluate the results of two automatic contouring software packages for OAR delineation on CT images of lung cancer and rectal cancer patients. CT images of 15 patients with rectal cancer and 15 patients with lung cancer were selected separately, and the organs at risk were manually contoured by experienced physicians as reference structures. The same datasets were then automatically contoured with AiContour (version 3.1.8.0, manufactured by Linking MED, Beijing, China) and Raystation (version 4.7.5.4, manufactured by Raysearch, Stockholm, Sweden), respectively: deep learning auto-segmentation was performed with AiContour and atlas-based auto-segmentation with Raystation. The overlap index (OI), Dice similarity coefficient (DSC), and volume difference (Dv) were evaluated on the auto-contours, and independent-sample t-tests were applied to the results. Deep learning auto-segmentation achieved better OI and DSC than the atlas-based method, with a statistically significant difference, while there was no significant difference in Dv between the two software packages. With deep learning auto-segmentation, the auto-contouring results for most organs in the chest and abdomen are good and, with slight modification, can meet the clinical requirements for planning. With the atlas-based method, the results for most OARs are not as good, and only the contours of some organs can be used clinically after modification.
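The overlap metrics used in the evaluation above can be sketched for flattened binary voxel masks. Note that the overlap index has several conventions in the literature; the intersection-over-reference definition below is an assumption, not necessarily the one used in the study.

```python
def dsc(auto_mask, ref_mask):
    # Dice similarity coefficient: 2|A∩B| / (|A| + |B|)
    inter = sum(a and b for a, b in zip(auto_mask, ref_mask))
    return 2.0 * inter / (sum(auto_mask) + sum(ref_mask))

def overlap_index(auto_mask, ref_mask):
    # Overlap index, here taken as |A∩B| / |B|: the fraction of the
    # reference volume covered by the auto-contour (assumed convention)
    inter = sum(a and b for a, b in zip(auto_mask, ref_mask))
    return inter / sum(ref_mask)

def volume_difference(auto_mask, ref_mask):
    # Relative volume difference (A - B) / B; signed, so it shows
    # over- vs under-segmentation
    return (sum(auto_mask) - sum(ref_mask)) / sum(ref_mask)
```

For example, an auto-contour covering two of three reference voxels plus no extras gives DSC 0.8 and OI 2/3.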


2021 ◽  
Vol 11 ◽  
Author(s):  
Jun Zhao ◽  
Zhi Chen ◽  
Jiazhou Wang ◽  
Fan Xia ◽  
Jiayuan Peng ◽  
...  

Due to image quality limitations, online megavoltage cone-beam CT (MV CBCT), which represents the real online patient anatomy, cannot be used to perform adaptive radiotherapy (ART). In this study, we used a deep learning method, the cycle-consistent adversarial network (CycleGAN), to improve the MV CBCT image quality and Hounsfield unit (HU) accuracy for rectal cancer patients so that the generated synthetic CT (sCT) is eligible for ART. Forty rectal cancer patients treated with intensity-modulated radiotherapy (IMRT) were involved in this study. The CT and MV CBCT images of 30 patients were used for model training, and the images of the remaining 10 patients were used for evaluation. The image quality, auto-segmentation capability, and dose calculation capability (using the auto-planning technique) of the generated sCT were evaluated. The mean absolute error (MAE) was reduced from 135.84 ± 41.59 HU for the CT-CBCT comparison to 52.99 ± 12.09 HU for the CT-sCT comparison. The structural similarity (SSIM) index for the CT-sCT comparison was 0.81 ± 0.03, a great improvement over the 0.44 ± 0.07 for the CT-CBCT comparison. The auto-segmentation model performance on sCT for the femoral heads was accurate and required almost no manual modification. For the CTV and bladder, although the auto-contours needed modification, the Dice similarity coefficient (DSC) indices were high, at 0.93 and 0.94, respectively. For dose evaluation, the sCT-based plan had a much smaller dose deviation from the CT-based plan than the CBCT-based plan did. The proposed method solves a key problem for realizing rectal cancer ART based on MV CBCT: the generated sCT enables ART based on the actual patient anatomy at the treatment position.
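The SSIM index reported above is normally averaged over local sliding windows; a single-window (global) simplification, assuming an 8-bit dynamic range and the standard stabilizing constants, conveys the formula:

```python
def ssim_global(x, y, max_val=255.0):
    # Single-window SSIM over two flattened images. The published metric
    # averages this quantity over local windows, so this is a simplification.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n                       # means
    vx = sum((v - mx) ** 2 for v in x) / n                # variances
    vy = sum((v - my) ** 2 for v in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    c1 = (0.01 * max_val) ** 2                            # standard constants
    c2 = (0.03 * max_val) ** 2
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score 1.0; for HU-valued CT/sCT comparisons, `max_val` would be set to the HU dynamic range of the scans.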


2020 ◽  
Author(s):  
Wei Zhang ◽  
Zixing Huang ◽  
Jian Zhao ◽  
Du He ◽  
Mou Li ◽  
...  
