Feasibility of Synthetic Computed Tomography Images Generated from Magnetic Resonance Imaging Scans Using Various Deep Learning Methods in the Planning of Radiation Therapy for Prostate Cancer

Cancers ◽  
2021 ◽  
Vol 14 (1) ◽  
pp. 40
Author(s):  
Gyu Sang Yoo ◽  
Huan Minh Luu ◽  
Heejung Kim ◽  
Won Park ◽  
Hongryull Pyo ◽  
...  

We aimed to evaluate and compare the quality of synthetic computed tomography (sCT) images generated by various deep-learning methods for volumetric modulated arc therapy (VMAT) planning in prostate cancer. Simulation computed tomography (CT) and T2-weighted simulation magnetic resonance images from 113 patients were used to generate sCTs with three deep-learning approaches: generative adversarial network (GAN), cycle-consistent GAN (CycGAN), and reference-guided CycGAN (RgGAN), a new model that further adjusts the sCTs generated by CycGAN using the available paired images. VMAT plans created on the original simulation CT images were recalculated on the sCTs, and the dosimetric differences were evaluated. For soft tissue, a significant difference in mean Hounsfield units (HUs) between the original CT images and the sCTs was observed only for GAN (p = 0.03). The mean relative dose differences for planning target volumes and organs at risk were within 2% among the sCTs from the three deep-learning approaches. The differences in the dosimetric parameters D98% and D95% from the original CT were lowest for the sCT from RgGAN. In conclusion, HU conservation for soft tissue was poorest for GAN, and the sCT generated by RgGAN tended to show better dosimetric conservation of D98% and D95% than the sCTs from the other methods.
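To make the reference-guided idea concrete, here is a minimal sketch (assuming a PyTorch setting, illustrative loss weights, and registered MR/CT pairs) of how a CycleGAN-style generator objective can be extended with a paired L1 term; it is not the authors' implementation.

```python
# Sketch of a CycleGAN generator loss extended with a reference-guided (paired)
# L1 term; the loss weights and LSGAN formulation are illustrative assumptions.
import torch
import torch.nn.functional as F

def rg_cyclegan_losses(G_mr2ct, G_ct2mr, D_ct, mr, ct, lambda_cyc=10.0, lambda_ref=5.0):
    """mr, ct: registered MR/CT batches of shape (N, 1, H, W)."""
    sct = G_mr2ct(mr)                                   # synthetic CT generated from MR
    pred = D_ct(sct)
    adv = F.mse_loss(pred, torch.ones_like(pred))       # LSGAN-style adversarial term
    cyc = F.l1_loss(G_ct2mr(sct), mr)                   # cycle consistency: MR -> sCT -> MR
    ref = F.l1_loss(sct, ct)                            # reference-guided term using the paired CT
    return adv + lambda_cyc * cyc + lambda_ref * ref
```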

2020 ◽  
Vol 2020 ◽  
pp. 1-9
Author(s):  
Yafen Li ◽  
Wen Li ◽  
Jing Xiong ◽  
Jun Xia ◽  
Yaoqin Xie

Cross-modality medical image synthesis between magnetic resonance (MR) images and computed tomography (CT) images has attracted increasing attention in many medical imaging areas. Many deep learning methods have been used to generate pseudo-MR/CT images from counterpart-modality images. In this study, we used U-Net and Cycle-Consistent Adversarial Networks (CycleGAN), representative supervised and unsupervised deep learning methods respectively, to transform MR/CT images into their counterpart modality. Experimental results show that the synthetic images predicted by the U-Net method achieved a lower mean absolute error (MAE) and a higher structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) in both directions of CT/MR synthesis, especially in synthetic CT image generation. Although the synthetic images produced by U-Net contain less contrast information than those produced by CycleGAN, their pixel-value profiles follow the ground-truth images more closely. This work demonstrates that the supervised deep learning method outperforms the unsupervised one in accuracy for MR/CT synthesis tasks.
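As a rough illustration of the supervised (paired) setting described above, the sketch below shows an MAE-driven U-Net training step and the three evaluation metrics; `unet`, `optimizer`, and the input tensors are placeholders, not the study's code.

```python
# Sketch of supervised MR -> CT synthesis with an L1 (MAE) objective, evaluated
# with MAE / PSNR / SSIM; `unet` and the data are placeholders.
import torch
import torch.nn.functional as F
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def train_step(unet, optimizer, mr, ct):
    optimizer.zero_grad()
    loss = F.l1_loss(unet(mr), ct)      # pixel-wise MAE between prediction and paired CT
    loss.backward()
    optimizer.step()
    return loss.item()

def evaluate(pred, target):             # pred, target: 2-D numpy arrays on a common scale
    data_range = target.max() - target.min()
    mae = abs(pred - target).mean()
    psnr = peak_signal_noise_ratio(target, pred, data_range=data_range)
    ssim = structural_similarity(target, pred, data_range=data_range)
    return mae, psnr, ssim
```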


2017 ◽  
Vol 25 (2) ◽  
pp. 78-83 ◽  
Author(s):  
Olivia A. Ho ◽  
Nikoo Saber ◽  
Derek Stephens ◽  
April Clausen ◽  
James Drake ◽  
...  

Purpose: Single-suture nonsyndromic craniosynostosis is diagnosed using clinical assessment and computed tomography (CT). With increasing awareness of the risks associated with radiation exposure, the use of CT is particularly concerning in patients with craniosynostosis, since they are exposed at a younger age and more frequently than the average child. Three-dimensional (3D) photogrammetry is advantageous: it involves no radiation, is conveniently obtainable in clinic, and does not require general anaesthesia. This study aims to assess how 3D photogrammetry compares to CT in the assessment of craniosynostosis severity, to quantify surgical outcomes, and to analyze the validity of 3D photogrammetry in craniosynostosis. Methods: Computed tomography images and 3D photographs of patients who underwent craniosynostosis surgery were assessed and aligned to best fit. The intervening area between the CT and 3D photogrammetry curves at the supraorbital bar (bandeau) level in axial view was calculated. Statistical analysis was performed using the Student t test; 95% confidence intervals were determined and equivalence margins were applied. Results: In total, 41 pairs of CTs and 3D photographs were analyzed. The 95% confidence interval was 198.16 to 264.18 mm2 and the mean was 231.17 mm2. When comparisons were made in the same bandeau region omitting the temporalis muscle, the 95% confidence interval was 108.94 to 147.38 mm2 and the mean was 128.16 mm2. Although a statistically significant difference between the modalities was found, it can be attributed to the dampening effect of soft tissue. Conclusion: Within certain error margins, 3D photogrammetry is comparable to CT in assessing the severity of single-suture nonsyndromic craniosynostosis; however, a dampening effect is attributable to the soft tissue. Three-dimensional photogrammetry may be more applicable to severe cases of craniosynostosis than to milder deformities. It may also be beneficial for assessing overall appearance and aesthetics, but not for determining underlying bony severity.
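For illustration only, the intervening-area measurement could be computed along the following lines, assuming the two bandeau-level outlines are already registered and available as point lists; the shapely-based approach is an assumption, not the authors' tooling.

```python
# One possible way (not the authors' implementation) to compute the intervening
# area between two registered axial contours, using shapely polygons.
from shapely.geometry import Polygon

def intervening_area(ct_contour, photo_contour):
    """ct_contour, photo_contour: lists of (x, y) points (in mm) forming closed
    outlines at the bandeau level, already aligned to best fit."""
    ct_poly = Polygon(ct_contour)
    photo_poly = Polygon(photo_contour)
    # The symmetric difference is the region covered by one outline but not the other.
    return ct_poly.symmetric_difference(photo_poly).area   # mm^2
```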


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Johannes Leuschner ◽  
Maximilian Schmidt ◽  
Daniel Otero Baguer ◽  
Peter Maass

Deep learning approaches for tomographic image reconstruction have become very effective and have been demonstrated to be competitive in the field. Comparing these approaches is a challenging task, as they rely to a great extent on the data and setup used for training. With the Low-Dose Parallel Beam (LoDoPaB)-CT dataset, we provide a comprehensive, open-access database of computed tomography images and simulated low-photon-count measurements. It is suitable for training and comparing deep learning methods as well as classical reconstruction approaches. The dataset contains over 40,000 scan slices from around 800 patients selected from the LIDC/IDRI database. The data selection and simulation setup are described in detail, and the generating script is publicly accessible. In addition, we provide a Python library for simplified access to the dataset and an online reconstruction challenge. The dataset can also be used for transfer learning as well as sparse- and limited-angle reconstruction scenarios.
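As a rough sketch of how low-photon-count measurements are typically simulated from clean projections, the snippet below applies a Beer-Lambert/Poisson noise model; the photon count shown is an assumed value and not necessarily the one used to build LoDoPaB-CT.

```python
# Sketch of the standard low-photon-count measurement model for this kind of
# simulation; the photon count is an illustrative assumption.
import numpy as np

def simulate_low_dose_sinogram(clean_sinogram, photons_per_pixel=4096, rng=None):
    """clean_sinogram: noise-free line integrals (post-log projections)."""
    rng = np.random.default_rng() if rng is None else rng
    expected_counts = photons_per_pixel * np.exp(-clean_sinogram)   # Beer-Lambert law
    counts = rng.poisson(expected_counts)                           # quantum (photon) noise
    counts = np.maximum(counts, 1)                                  # avoid log(0)
    return -np.log(counts / photons_per_pixel)                      # back to post-log domain
```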


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Frank Li ◽  
Jiwoong Choi ◽  
Chunrui Zou ◽  
John D. Newell ◽  
Alejandro P. Comellas ◽  
...  

Chronic obstructive pulmonary disease (COPD) is a heterogeneous disease, and the traditional variables extracted from computed tomography (CT) images may not be sufficient to describe all the topological features of lung tissues in COPD patients. We employed an unsupervised three-dimensional (3D) convolutional autoencoder (CAE)-feature constructor (FC) deep learning network to learn from CT data and derive tissue pattern-clusters jointly. We then applied exploratory factor analysis (EFA) to discover the unobserved latent traits (factors) among the pattern-clusters. CT images at total lung capacity (TLC) and residual volume (RV) of 541 former smokers and 59 healthy non-smokers from the SubPopulations and Intermediate Outcome Measures in COPD Study (SPIROMICS) cohort were analyzed. TLC and RV images were registered to calculate the Jacobian (determinant) values for all voxels in the TLC images. 3D regions of interest (ROIs) with two data channels, CT intensity and Jacobian value, were randomly extracted from the training images and fed to the 3D CAE-FC model. In total, 80 pattern-clusters and 7 factors were identified. Factor scores computed for individual subjects were able to predict spirometry-measured pulmonary function. Two factors, which correlated with various emphysema subtypes, parametric response mapping (PRM) metrics, airway variants, and airway-tree-to-lung-volume ratio, discriminated patients across all severity stages. Our findings suggest the potential of developing factor-based surrogate markers for new COPD phenotypes.
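For readers who want a concrete picture of the two-channel 3D input, below is a minimal, illustrative 3D convolutional autoencoder sketch in PyTorch; the layer sizes and ROI dimensions are assumptions, not the CAE-FC architecture used in the study.

```python
# Illustrative 3-D convolutional autoencoder with two input channels
# (CT intensity and Jacobian value); sizes are assumptions, not the study's model.
import torch
import torch.nn as nn

class CAE3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(2, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(32, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 2, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):            # x: (N, 2, D, H, W) ROI patches, e.g. 32x32x32 voxels
        z = self.encoder(x)          # latent feature maps; these would feed the downstream
        return self.decoder(z), z    # pattern-clustering and factor-analysis steps
```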


2021 ◽  
pp. 1063293X2110214
Author(s):  
RT Subhalakshmi ◽  
S Appavu alias Balamurugan ◽  
S Sasikala

Recently, the COVID-19 pandemic has spread drastically, while only a limited quantity of rapid testing kits is available. Therefore, automated COVID-19 diagnosis models are essential to identify the presence of the disease from radiological images. Earlier studies have focused on the development of Artificial Intelligence (AI) techniques for COVID-19 diagnosis using X-ray images. This paper aims to develop a Deep Learning Based MultiModal Fusion technique, called DLMMF, for COVID-19 diagnosis and classification from Computed Tomography (CT) images. The proposed DLMMF model involves three main processes, namely Wiener Filtering (WF)-based pre-processing, feature extraction, and classification. The model fuses deep features extracted by the VGG16 and Inception v4 networks. Finally, a Gaussian Naïve Bayes (GNB) classifier is applied to identify and classify the test CT images into distinct class labels. The experimental validation of the DLMMF model was carried out on the open-source COVID-CT dataset, which comprises a total of 760 CT images. The experimental results showed superior performance, with a maximum sensitivity of 96.53%, specificity of 95.81%, accuracy of 96.81%, and F-score of 96.73%.
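A minimal sketch of the deep-feature fusion plus GNB classification idea is shown below; it is not the authors' code. VGG16 is taken from torchvision, and Inception v4 is assumed to be available from the timm model zoo; the pooling and fusion choices are illustrative assumptions.

```python
# Sketch: concatenate deep features from two ImageNet backbones and classify
# them with Gaussian Naive Bayes; backbone/pooling choices are assumptions.
import torch
import timm
from torchvision import models
from sklearn.naive_bayes import GaussianNB

vgg = models.vgg16(weights="IMAGENET1K_V1").features.eval()
incep = timm.create_model("inception_v4", pretrained=True, num_classes=0).eval()

def fused_features(x):                      # x: (N, 3, H, W) pre-processed CT slices
    with torch.no_grad():
        f1 = torch.flatten(torch.nn.functional.adaptive_avg_pool2d(vgg(x), 1), 1)
        f2 = incep(x)                       # timm returns pooled features when num_classes=0
    return torch.cat([f1, f2], dim=1).numpy()

# gnb = GaussianNB().fit(fused_features(train_x), train_y)   # train_x / train_y are placeholders
```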


Author(s):  
K. A. Saneera Hemantha Kulathilake ◽  
Nor Aniza Abdullah ◽  
Aznul Qalid Md Sabri ◽  
Khin Wee Lai

Computed Tomography (CT) is a widely used medical imaging modality in clinical medicine because it produces excellent visualizations of the fine structural details of the human body. In clinical procedures, it is desirable to acquire CT scans with a minimal X-ray flux to prevent patients from being exposed to high radiation doses. However, these Low-Dose CT (LDCT) scanning protocols compromise the signal-to-noise ratio of the CT images because of noise and artifacts over the image space. Thus, various restoration methods have been published over the past three decades to produce high-quality CT images from LDCT images. More recently, in contrast to conventional LDCT restoration methods, Deep Learning (DL)-based LDCT restoration approaches have become common because they are data-driven, high-performance, and fast to execute. This study therefore aims to elaborate on the role of DL techniques in LDCT restoration and to critically review the applications of DL-based approaches for LDCT restoration. To this end, different aspects of DL-based LDCT restoration applications were analyzed, including DL architectures, performance gains, functional requirements, and the diversity of objective functions. The outcome of the study highlights the existing limitations and future directions for DL-based LDCT restoration. To the best of our knowledge, there have been no previous reviews that specifically address this topic.
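Since the review stresses the diversity of objective functions, the snippet below sketches one common composite loss for LDCT restoration (pixel-wise MSE plus a VGG-based perceptual term); the feature layer, channel handling, and 0.01 weight are illustrative assumptions, not recommendations from the review.

```python
# Illustrative composite objective for LDCT restoration: pixel-wise MSE plus a
# VGG-feature (perceptual) term; weighting and layer choice are assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

vgg_feats = models.vgg16(weights="IMAGENET1K_V1").features[:16].eval()
for p in vgg_feats.parameters():
    p.requires_grad_(False)

def restoration_loss(denoised, clean):
    """denoised, clean: (N, 1, H, W) CT slices scaled to [0, 1]."""
    pixel = F.mse_loss(denoised, clean)
    # VGG expects 3 channels; repeat the single CT channel for the perceptual term.
    perceptual = F.mse_loss(vgg_feats(denoised.repeat(1, 3, 1, 1)),
                            vgg_feats(clean.repeat(1, 3, 1, 1)))
    return pixel + 0.01 * perceptual
```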


2020 ◽  
Vol 41 (Supplement_2) ◽  
Author(s):  
A Chandrashekar ◽  
N Shivakumar ◽  
P Lapolla ◽  
A Handa ◽  
V Grau ◽  
...  

Introduction: Contrast-enhanced computerised tomographic (CT) angiograms are widely used in cardiovascular imaging to obtain a non-invasive view of arterial structures. In aortic aneurysmal disease (AAA), CT angiograms are required prior to surgical intervention to differentiate between blood and the intra-luminal thrombus, which is present in 95% of cases. However, contrast agents are associated with complications at the injection site as well as renal toxicity leading to contrast-induced nephropathy (CIN) and renal failure. Purpose: We hypothesised that the raw data acquired from a non-contrast CT contain sufficient information to differentiate blood from other soft tissue components. We therefore used deep learning methods to capture the subtle differences between the various soft tissue components in order to simulate contrast-enhanced CT images without the need for contrast agents. Methods: Twenty-six AAA patients with paired non-contrast and contrast-enhanced CT images were randomly selected from an ethically approved ongoing study (Ethics Ref 13/SC/0250) and used for model training and evaluation (13/13). Non-contrast axial slices within the aneurysmal region from 10 patients (n=100) were sampled for the underlying Hounsfield unit (HU) distribution at the lumen, intra-luminal thrombus, and interface locations, identified from their paired contrast axial slices. Subsequently, paired axial slices within the training cohort were augmented in a ratio of 10:1 to produce a total of 23,551 2-D images. We trained a 2-D Cycle Generative Adversarial Network (cycleGAN) for this non-contrast-to-contrast transformation task. Model output was assessed by comparison to the contrast image, which serves as a gold standard, using image similarity metrics (e.g., the SSIM index). Results: Sampling HUs within the non-contrast CT scan across multiple axial slices (Figure 1A) revealed significant differences between the blood flow lumen (yellow), blood/thrombus interface (red), and thrombus (blue) regions (p<0.001 for all comparisons). This highlighted the intrinsic differences between the regions and established the foundation for the subsequent deep learning methods. The Non-Contrast-to-Contrast (NC2C)-cycleGAN was trained with a learning rate of 0.0002 for 200 epochs on 256 x 256 images centred around the aorta. Figure 1B depicts "contrast-enhanced" images generated from non-contrast CT images across the aortic length from the testing cohort. This preliminary model is able to differentiate between the lumen and intra-luminal thrombus of aneurysmal sections with reasonable resemblance to the ground truth. Conclusion: This study describes, for the first time, the ability to differentiate between visually incoherent soft tissue regions in non-contrast CT images using deep learning methods. Ultimately, refinement of this methodology may negate the use of intravenous contrast and prevent related complications. Figure: CTA Generation from Non-Contrast CTs. Funding Acknowledgement: Type of funding source: Foundation. Main funding source(s): Clarendon.
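A minimal, hypothetical training skeleton matching the reported hyperparameters (learning rate 0.0002, 200 epochs, 256 x 256 crops) is sketched below; the networks, data loader, and loss weights are placeholders and assumptions, not the NC2C-cycleGAN code.

```python
# Skeleton of an unpaired cycleGAN training loop for non-contrast -> contrast
# translation; network definitions and data loading are placeholders.
import torch
import torch.nn.functional as F

def train_nc2c(G_nc2c, G_c2nc, D_c, D_nc, loader, epochs=200, lr=2e-4):
    g_opt = torch.optim.Adam(list(G_nc2c.parameters()) + list(G_c2nc.parameters()),
                             lr=lr, betas=(0.5, 0.999))
    d_opt = torch.optim.Adam(list(D_c.parameters()) + list(D_nc.parameters()),
                             lr=lr, betas=(0.5, 0.999))
    for epoch in range(epochs):
        for nc, c in loader:                     # unpaired non-contrast / contrast batches
            fake_c, fake_nc = G_nc2c(nc), G_c2nc(c)
            # Generator step: LSGAN adversarial terms plus cycle-consistency terms.
            g_loss = (torch.mean((D_c(fake_c) - 1) ** 2)
                      + torch.mean((D_nc(fake_nc) - 1) ** 2)
                      + 10 * F.l1_loss(G_c2nc(fake_c), nc)
                      + 10 * F.l1_loss(G_nc2c(fake_nc), c))
            g_opt.zero_grad(); g_loss.backward(); g_opt.step()
            # Discriminator step on real images and detached fakes.
            d_loss = (torch.mean((D_c(c) - 1) ** 2) + torch.mean(D_c(fake_c.detach()) ** 2)
                      + torch.mean((D_nc(nc) - 1) ** 2) + torch.mean(D_nc(fake_nc.detach()) ** 2))
            d_opt.zero_grad(); d_loss.backward(); d_opt.step()
```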

