A review on Deep Learning approaches for low-dose Computed Tomography restoration

Author(s):  
K. A. Saneera Hemantha Kulathilake ◽  
Nor Aniza Abdullah ◽  
Aznul Qalid Md Sabri ◽  
Khin Wee Lai

Abstract: Computed Tomography (CT) is a widely used medical imaging modality in clinical medicine because it produces excellent visualizations of fine structural details of the human body. In clinical procedures, it is desirable to acquire CT scans with a minimized X-ray flux to prevent patients from being exposed to high radiation. However, these Low-Dose CT (LDCT) scanning protocols compromise the signal-to-noise ratio of the CT images through noise and artifacts over the image space. Thus, various restoration methods have been published over the past three decades to produce high-quality CT images from these LDCT images. More recently, in contrast to conventional LDCT restoration methods, Deep Learning (DL)-based LDCT restoration approaches have become increasingly common owing to their data-driven nature, high performance, and fast execution. Thus, this study aims to elaborate on the role of DL techniques in LDCT restoration and to critically review the applications of DL-based approaches for LDCT restoration. To achieve this aim, different aspects of DL-based LDCT restoration applications were analyzed, including DL architectures, performance gains, functional requirements, and the diversity of objective functions. The outcome of the study highlights the existing limitations and future directions for DL-based LDCT restoration. To the best of our knowledge, no previous review specifically addresses this topic.
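The objective functions surveyed in this review are typically pixel-wise losses between the restored LDCT image and its normal-dose reference. As a minimal sketch (not any specific paper's method), the most common supervised objective, mean squared error, can be written as:

```python
import numpy as np

def mse_objective(restored, ndct):
    """Pixel-wise mean squared error between a restored LDCT image and
    its normal-dose CT (NDCT) reference -- the most common supervised
    objective function in DL-based LDCT restoration."""
    restored = np.asarray(restored, dtype=np.float64)
    ndct = np.asarray(ndct, dtype=np.float64)
    return np.mean((restored - ndct) ** 2)

# Toy 2x2 example: a constant offset of 0.1 gives an MSE of ~0.01.
ndct = np.array([[0.2, 0.4], [0.6, 0.8]])
restored = ndct + 0.1
print(mse_objective(restored, ndct))  # ≈ 0.01
```

In practice this loss is minimized over a training set by gradient descent on the restoration network's parameters; the review also covers perceptual and adversarial alternatives to plain MSE.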

Cancers ◽  
2021 ◽  
Vol 14 (1) ◽  
pp. 40
Author(s):  
Gyu Sang Yoo ◽  
Huan Minh Luu ◽  
Heejung Kim ◽  
Won Park ◽  
Hongryull Pyo ◽  
...  

We aimed to evaluate and compare the quality of synthetic computed tomography (sCT) images generated by various deep-learning methods in volumetric modulated arc therapy (VMAT) planning for prostate cancer. Simulation computed tomography (CT) and T2-weighted simulation magnetic resonance images from 113 patients were used for sCT generation by three deep-learning approaches: generative adversarial network (GAN), cycle-consistent GAN (CycGAN), and reference-guided CycGAN (RgGAN), a new model which further adjusts the sCTs generated by CycGAN using available paired images. VMAT plans on the original simulation CT images were recalculated on the sCTs, and the dosimetric differences were evaluated. For soft tissue, a significant difference in the mean Hounsfield units (HUs) between the original CT images and the sCTs was observed only for GAN (p = 0.03). The mean relative dose differences for planning target volumes or organs at risk were within 2% among the sCTs from the three deep-learning approaches. The differences in the dosimetric parameters D98% and D95% from the original CT were lowest for the sCTs from RgGAN. In conclusion, HU conservation for soft tissue was poorest for GAN, and the sCTs generated by RgGAN tended to show better dosimetric conservation of D98% and D95% than the sCTs from the other methodologies.
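The CycGAN and RgGAN models rely on a cycle-consistency constraint: mapping MR to sCT and back should recover the input MR (and likewise for CT). A minimal numpy sketch of that L1 cycle term, with toy stand-in generators rather than the study's actual networks:

```python
import numpy as np

def cycle_consistency_loss(x_mr, x_ct, g_mr2ct, g_ct2mr):
    """L1 cycle-consistency term used by CycleGAN-style sCT generators."""
    # Forward cycle: MR -> synthetic CT -> reconstructed MR
    mr_rec = g_ct2mr(g_mr2ct(x_mr))
    # Backward cycle: CT -> synthetic MR -> reconstructed CT
    ct_rec = g_mr2ct(g_ct2mr(x_ct))
    return np.mean(np.abs(mr_rec - x_mr)) + np.mean(np.abs(ct_rec - x_ct))

# Toy check with identity "generators": the cycle loss is exactly zero.
mr = np.random.rand(4, 4)
ct = np.random.rand(4, 4)
identity = lambda x: x
print(cycle_consistency_loss(mr, ct, identity, identity))  # → 0.0
```

In the full model this term is added to the adversarial losses of both generators; RgGAN additionally exploits the available paired images for further adjustment.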


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Johannes Leuschner ◽  
Maximilian Schmidt ◽  
Daniel Otero Baguer ◽  
Peter Maass

Abstract: Deep learning approaches for tomographic image reconstruction have become very effective and have been demonstrated to be competitive in the field. Comparing these approaches is a challenging task, as they rely to a great extent on the data and setup used for training. With the Low-Dose Parallel Beam (LoDoPaB)-CT dataset, we provide a comprehensive, open-access database of computed tomography images and simulated low photon count measurements. It is suitable for training and comparing deep learning methods as well as classical reconstruction approaches. The dataset contains over 40,000 scan slices from around 800 patients selected from the LIDC/IDRI database. The data selection and simulation setup are described in detail, and the generating script is publicly accessible. In addition, we provide a Python library for simplified access to the dataset and an online reconstruction challenge. Furthermore, the dataset can also be used for transfer learning as well as for sparse and limited-angle reconstruction scenarios.
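The "simulated low photon count measurements" follow the standard low-dose CT noise model: attenuate an incident photon count through Beer-Lambert, sample Poisson counts, and take the negative log. A generic sketch of that simulation (the incident count `n0=4096` here is an illustrative assumption, not necessarily the dataset's exact configuration):

```python
import numpy as np

def simulate_low_dose_sinogram(sinogram, n0=4096, rng=None):
    """Simulate low photon-count CT measurements from clean line
    integrals: attenuate an incident photon count n0 via Beer-Lambert,
    draw Poisson-distributed counts, and return the post-log sinogram."""
    rng = np.random.default_rng(rng)
    counts = rng.poisson(n0 * np.exp(-np.asarray(sinogram, dtype=np.float64)))
    counts = np.maximum(counts, 1)  # clip zero counts before the log
    return -np.log(counts / n0)

# A flat sinogram of 1.0: the noisy version fluctuates around 1.0.
clean = np.ones((100, 100))
noisy = simulate_low_dose_sinogram(clean, rng=0)
print(noisy.shape, round(noisy.mean(), 2))
```

Lower `n0` means fewer detected photons and therefore noisier post-log data, which is exactly the regime the dataset's reconstruction challenge targets.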


2021 ◽  
Author(s):  
Hoon Ko ◽  
Jimi Huh ◽  
Kyung Won Kim ◽  
Heewon Chung ◽  
Yousun Ko ◽  
...  

BACKGROUND Detection and quantification of intraabdominal free fluid (i.e., ascites) on computed tomography (CT) are essential for identifying emergent or urgent conditions in patients. In an emergency department, automatic detection and quantification of ascites would be beneficial. OBJECTIVE We aimed to develop an artificial intelligence (AI) algorithm for the simultaneous automatic detection and quantification of ascites using a single deep learning model (DLM). METHODS 2D deep learning models (DLMs) based on a deep residual U-Net, U-Net, bi-directional U-Net, and recurrent residual U-Net were developed to segment areas of ascites on abdominopelvic CT. Based on the segmentation results, the DLMs detected ascites by classifying CT images into ascites and non-ascites images. The AI algorithms were trained using 6,337 CT images from 160 subjects (80 with ascites and 80 without) and tested using 1,635 CT images from 40 subjects (20 with ascites and 20 without). The performance of the AI algorithms was evaluated in terms of the diagnostic accuracy of ascites detection and the segmentation accuracy of ascites areas. Among these DLMs, the best-performing one was selected as the proposed AI algorithm. RESULTS The segmentation accuracy was highest for the deep residual U-Net, with a mean intersection over union (mIoU) value of 0.87, followed by U-Net, bi-directional U-Net, and recurrent residual U-Net (mIoU values of 0.80, 0.77, and 0.67, respectively). The detection accuracy was highest for the deep residual U-Net (0.96), followed by U-Net, bi-directional U-Net, and recurrent residual U-Net (0.90, 0.88, and 0.82, respectively). The deep residual U-Net also achieved high sensitivity (0.96) and high specificity (0.96). CONCLUSIONS We propose the deep residual U-Net-based AI algorithm for automatic detection and quantification of ascites on abdominopelvic CT scans, which provides excellent performance.
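The study's evaluation metric (IoU) and its detection-from-segmentation step can both be sketched in a few lines of numpy. The `min_pixels` threshold below is a hypothetical parameter for illustration, not a value taken from the paper:

```python
import numpy as np

def iou(pred, truth):
    """Intersection over union between two binary segmentation masks."""
    pred, truth = np.asarray(pred).astype(bool), np.asarray(truth).astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: perfect agreement
    return np.logical_and(pred, truth).sum() / union

def detect_from_segmentation(pred_mask, min_pixels=1):
    """Detection step described in the abstract: label an image as
    'ascites' when the segmentation output contains fluid pixels."""
    return np.asarray(pred_mask).sum() >= min_pixels

# Toy masks: intersection 1 pixel, union 2 pixels -> IoU 0.5.
pred = np.array([[1, 1], [0, 0]])
truth = np.array([[1, 0], [0, 0]])
print(iou(pred, truth))  # → 0.5
```

Averaging `iou` over all test slices gives the mIoU figures reported above, and volume quantification follows from the predicted mask area times the voxel size.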


2019 ◽  
Vol 25 (6) ◽  
pp. 954-961 ◽  
Author(s):  
Diego Ardila ◽  
Atilla P. Kiraly ◽  
Sujeeth Bharadwaj ◽  
Bokyung Choi ◽  
Joshua J. Reicher ◽  
...  

2021 ◽  
Vol 94 (1117) ◽  
pp. 20200677
Author(s):  
Andrea Steuwe ◽  
Marie Weber ◽  
Oliver Thomas Bethge ◽  
Christin Rademacher ◽  
Matthias Boschheidgen ◽  
...  

Objectives: Modern reconstruction and post-processing software aims at reducing image noise in CT images, potentially allowing for a reduction of the employed radiation exposure. This study assessed the influence of a novel deep-learning based software on subjective and objective image quality compared to two traditional methods [filtered back-projection (FBP) and iterative reconstruction (IR)]. Methods: In this institutional review board-approved retrospective study, abdominal low-dose CT images of 27 patients (mean age 38 ± 12 years, volumetric CT dose index 2.9 ± 1.8 mGy) were reconstructed with IR and FBP and, furthermore, post-processed using the novel software. For the three reconstructions, qualitative and quantitative image quality was evaluated by means of CT numbers, noise, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) in six different ROIs. Additionally, the reconstructions were compared using SNR, peak SNR, root mean square error, and mean absolute error to assess structural differences. Results: On average, CT numbers varied within 1 Hounsfield unit (HU) across the three assessed methods in the assessed ROIs. In soft tissue, image noise with the novel software was up to 42% lower than with FBP and up to 27% lower than with IR. Consequently, SNR and CNR were highest with the novel software. Subjective image quality was equal for IR and the novel software, and both were rated higher than FBP images. Conclusion: The assessed software reduces image noise while maintaining image information, even in comparison to IR, allowing for a potential dose reduction of approximately 20% in abdominal CT imaging. Advances in knowledge: The assessed software reduces image noise by up to 27% compared to IR and 48% compared to FBP while maintaining the image information. The reduced image noise allows for a potential dose reduction of approximately 20% in abdominal imaging.
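The ROI-based metrics used here have standard definitions: noise is the HU standard deviation within a ROI, SNR is mean over standard deviation, and CNR relates the contrast between a tissue ROI and a background ROI to the background noise. A minimal sketch of these definitions (toy HU values, not the study's data):

```python
import numpy as np

def roi_noise(roi):
    """Image noise: standard deviation of HU values within the ROI."""
    return np.std(roi)

def roi_snr(roi):
    """Signal-to-noise ratio: mean HU divided by the HU standard deviation."""
    return np.mean(roi) / np.std(roi)

def roi_cnr(roi, background):
    """Contrast-to-noise ratio: absolute mean HU difference between a
    tissue ROI and a background ROI, divided by the background noise."""
    return abs(np.mean(roi) - np.mean(background)) / np.std(background)

# Toy ROIs: mean 100 HU with std 10 -> SNR 10; background mean 50 HU -> CNR 5.
roi = np.array([90.0, 110.0, 90.0, 110.0])
background = np.array([40.0, 60.0, 40.0, 60.0])
print(roi_snr(roi), roi_cnr(roi, background))  # → 10.0 5.0
```

Lower noise at unchanged CT numbers, as reported for the deep-learning software, directly raises both SNR and CNR under these definitions.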


Healthcare ◽  
2021 ◽  
Vol 9 (11) ◽  
pp. 1579
Author(s):  
Wansuk Choi ◽  
Seoyoon Heo

The purpose of this study was to classify ULTT (upper limb tension test) videos through transfer learning with pre-trained deep learning models and to compare the performance of the models. We conducted transfer learning by combining a pre-trained convolutional neural network (CNN) model into a Python-based deep learning pipeline. Videos were sourced from YouTube, and 103,116 frames converted from the video clips were analyzed. In the modeling implementation, importing the required modules, performing the necessary data preprocessing for training, defining the model, compiling, model creation, and model fitting were applied in sequence. The compared models were Xception, InceptionV3, DenseNet201, NASNetMobile, DenseNet121, VGG16, VGG19, and ResNet101, and fine-tuning was performed. They were trained in a high-performance computing environment, and validation accuracy and validation loss were measured as comparative indicators of performance. Relatively low validation loss and high validation accuracy were obtained for the Xception, InceptionV3, and DenseNet201 models, which performed excellently compared with the other models. On the other hand, VGG16, VGG19, and ResNet101 yielded relatively high validation loss and low validation accuracy compared with the other models. There was a narrow range of difference between the validation accuracy and validation loss of the Xception, InceptionV3, and DenseNet201 models. This study suggests that training applied with transfer learning can classify ULTT videos and that there is a difference in performance between models.
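The core idea of transfer learning with a frozen backbone, as used in this study, is that only a small classification head is trained while the pre-trained feature extractor stays fixed. The numpy sketch below illustrates that idea in miniature: a frozen random projection stands in for a CNN backbone such as Xception (a hypothetical stand-in, not the study's actual pipeline), and only a logistic-regression head is updated on toy data:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-trained" feature extractor: a frozen random ReLU projection,
# standing in for a CNN backbone whose weights are not updated.
W_frozen = rng.normal(size=(8, 4))

def features(x):
    return np.maximum(x @ W_frozen, 0.0)  # frozen ReLU features

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Trainable classification head (the only part updated during training).
w = np.zeros(4)
b = 0.0

# Toy binary "frame classification" data.
X = rng.normal(size=(64, 8))
y = (X[:, 0] > 0).astype(float)

losses = []
for _ in range(200):
    f = features(X)               # backbone forward pass (no gradient step)
    p = sigmoid(f @ w + b)        # head forward pass
    losses.append(-np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9)))
    grad = p - y                  # gradient of the cross-entropy loss
    w -= 0.1 * f.T @ grad / len(y)  # update head weights only
    b -= 0.1 * grad.mean()

print(losses[0] > losses[-1])  # training loss decreases; backbone unchanged
```

Fine-tuning, which the study also performed, would additionally unfreeze some backbone layers and update them with a small learning rate.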

