A residual dense network assisted sparse view reconstruction for breast computed tomography

2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Zhiyang Fu ◽  
Hsin Wu Tseng ◽  
Srinivasan Vedantham ◽  
Andrew Karellas ◽  
Ali Bilgin

To develop and investigate a deep learning approach that uses sparse-view acquisition in dedicated breast computed tomography for radiation dose reduction, we propose a framework that combines 3D sparse-view cone-beam acquisition with a multi-slice residual dense network (MS-RDN) reconstruction. Projection datasets (300 views, full scan) from 34 women were reconstructed using the FDK algorithm and served as reference. Sparse-view (100 views, full scan) projection data were reconstructed using the FDK algorithm. The proposed MS-RDN uses the sparse-view and reference FDK reconstructions as input and label, respectively. Evaluated against the fully sampled FDK reference, the MS-RDN yields superior performance, both quantitatively and visually, compared with conventional compressed sensing methods and state-of-the-art deep learning based methods. The proposed deep learning driven framework can potentially enable low-dose breast CT imaging.
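
The pairing of a sparse-view FDK reconstruction (network input) with the corresponding full-view FDK reconstruction (training label) can be illustrated with a short sketch. The snippet below, assuming PyTorch, shows a generic residual dense block and a toy multi-slice training pair; the layer counts, channel widths, and three-slice input are illustrative assumptions, not the authors' exact MS-RDN architecture.

```python
# Minimal sketch of a residual dense block and a sparse-view -> full-view
# training pair, assuming PyTorch. All sizes are illustrative placeholders.
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    def __init__(self, channels=64, growth=32, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels + i * growth, growth, 3, padding=1),
                nn.ReLU(inplace=True)))
        # 1x1 conv fuses all densely connected features back to `channels`
        self.fuse = nn.Conv2d(channels + n_layers * growth, channels, 1)

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        # local residual connection: fused dense features added to the block input
        return x + self.fuse(torch.cat(feats, dim=1))

if __name__ == "__main__":
    # Hypothetical multi-slice input: 3 adjacent sparse-view FDK slices as channels,
    # trained against the central full-view FDK slice (the reference label).
    sparse_fdk = torch.randn(1, 3, 256, 256)   # 100-view FDK reconstruction (input)
    full_fdk = torch.randn(1, 1, 256, 256)     # 300-view FDK reconstruction (label)
    net = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1),
                        ResidualDenseBlock(),
                        nn.Conv2d(64, 1, 3, padding=1))
    loss = nn.MSELoss()(net(sparse_fdk), full_fdk)
    loss.backward()
    print(float(loss))
```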

2021 ◽  
pp. 1141-1150
Author(s):  
Felipe Soares Torres ◽  
Shazia Akbar ◽  
Srinivas Raman ◽  
Kazuhiro Yasufuku ◽  
Carola Schmidt ◽  
...  

PURPOSE Clinical TNM staging is a key prognostic factor for patients with lung cancer and is used to inform treatment and monitoring. Computed tomography (CT) plays a central role in defining the stage of disease. Deep learning applied to pretreatment CTs may offer additional, individualized prognostic information to facilitate more precise mortality risk prediction and stratification. METHODS We developed a fully automated imaging-based prognostication technique (IPRO) using deep learning to predict 1-year, 2-year, and 5-year mortality from pretreatment CTs of patients with stage I-IV lung cancer. Using six publicly available data sets from The Cancer Imaging Archive, we performed a retrospective five-fold cross-validation using pretreatment CTs of 1,689 patients, of whom 1,110 were diagnosed with non–small-cell lung cancer and had available TNM staging information. We compared the association of IPRO and TNM staging with patients' survival status and assessed an Ensemble risk score that combines IPRO and TNM staging. Finally, we evaluated IPRO's ability to stratify patients within TNM stages using hazard ratios (HRs) and Kaplan-Meier curves. RESULTS IPRO showed similar prognostic power (concordance index [C-index] 1-year: 0.72, 2-year: 0.70, 5-year: 0.68) compared with that of TNM staging (C-index 1-year: 0.71, 2-year: 0.71, 5-year: 0.70) in predicting 1-year, 2-year, and 5-year mortality. The Ensemble risk score yielded superior performance across all time points (C-index 1-year: 0.77, 2-year: 0.77, 5-year: 0.76). IPRO stratified patients within TNM stages, discriminating between highest- and lowest-risk quintiles in stages I (HR: 8.60), II (HR: 5.03), III (HR: 3.18), and IV (HR: 1.91). CONCLUSION Deep learning applied to pretreatment CT combined with TNM staging enhances prognostication and risk stratification in patients with lung cancer.
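
To make the evaluation concrete, the sketch below computes a pairwise concordance index (C-index) for an imaging risk score, a TNM-style stage, and a simple ensemble of the two on toy survival data. The averaging used for the ensemble and all numbers are illustrative assumptions; the paper's actual Ensemble risk score construction is not reproduced here.

```python
# Minimal sketch: combine an imaging risk score with TNM stage into an ensemble
# risk and score each with a pairwise concordance index. Toy data throughout.
import numpy as np

def concordance_index(times, events, risks):
    """Fraction of comparable patient pairs where the higher-risk patient dies first."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # a pair is comparable if patient i had an observed event before time j
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy cohort: follow-up time (months), death observed, per-patient scores
times  = np.array([6, 14, 30, 30, 48, 60])
events = np.array([1, 1, 1, 0, 1, 0])
ipro   = np.array([0.9, 0.7, 0.5, 0.4, 0.3, 0.1])   # deep-learning risk in [0, 1]
tnm    = np.array([4, 3, 3, 2, 2, 1])                # clinical stage I-IV

# Hypothetical ensemble: average the two risks after rescaling stage to [0, 1]
ensemble = 0.5 * ipro + 0.5 * (tnm - 1) / 3.0

for name, risk in [("IPRO", ipro), ("TNM", tnm), ("Ensemble", ensemble)]:
    print(name, round(concordance_index(times, events, risk), 3))
```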


Sensors ◽  
2021 ◽  
Vol 21 (24) ◽  
pp. 8164
Author(s):  
Linlin Zhu ◽  
Yu Han ◽  
Xiaoqi Xi ◽  
Lei Li ◽  
Bin Yan

In computed tomography (CT) images, metal artifacts corrupt object structures. In principle, eliminating metal artifacts in the sinogram domain corrects the projection deviation and yields more faithful reconstructed images. However, contemporary methods that use deep networks to complete metal-damaged sinogram data suffer from discontinuities at the boundaries of the metal traces, which lead to secondary artifacts. This study modifies the traditional U-net and adds two sinogram feature losses on the projection images, namely the continuity and consistency of the projection data at each angle, improving the accuracy of the completed sinogram data. Masking the metal traces also ensures the stability and reliability of the unaffected data during metal artifact reduction. The projection and reconstruction results and various evaluation metrics show that the proposed method can accurately repair the missing data and reduce metal artifacts in reconstructed CT images.
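
A loss of this form can be sketched as follows, assuming PyTorch: an L1 completion term restricted to the metal trace, a continuity penalty on finite differences along the detector axis at each projection angle, and a consistency penalty on the per-angle projection sums. The specific feature definitions, weights, and mask handling are assumptions for illustration, not the exact losses used in the paper.

```python
# Minimal sketch of a sinogram completion loss with continuity and consistency
# penalties; weights and feature definitions are illustrative assumptions.
import torch

def sinogram_completion_loss(pred, target, metal_mask, w_cont=0.1, w_cons=0.1):
    # pred/target: (B, 1, n_angles, n_detectors); metal_mask: 1 inside the metal trace
    # keep unaffected data fixed: only the masked (damaged) region comes from the network
    completed = metal_mask * pred + (1 - metal_mask) * target

    # data term on the damaged region only
    l_data = torch.abs((completed - target) * metal_mask).mean()

    # continuity: finite differences along the detector direction at each angle
    d_pred = completed[..., 1:] - completed[..., :-1]
    d_true = target[..., 1:] - target[..., :-1]
    l_cont = torch.abs(d_pred - d_true).mean()

    # consistency: total attenuation integrated over detectors, per angle
    s_pred = completed.sum(dim=-1)
    s_true = target.sum(dim=-1)
    l_cons = torch.abs(s_pred - s_true).mean()

    return l_data + w_cont * l_cont + w_cons * l_cons

if __name__ == "__main__":
    sino = torch.rand(2, 1, 360, 512)
    mask = (torch.rand_like(sino) > 0.95).float()   # toy metal trace
    pred = torch.rand_like(sino, requires_grad=True)
    loss = sinogram_completion_loss(pred, sino, mask)
    loss.backward()
    print(float(loss))
```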


2019 ◽  
Vol 26 (4) ◽  
pp. 1343-1353 ◽  
Author(s):  
Renata Longo ◽  
Fulvia Arfelli ◽  
Deborah Bonazza ◽  
Ubaldo Bottigli ◽  
Luca Brombal ◽  
...  

Breast computed tomography (BCT) is an emerging application of X-ray tomography in radiological practice. A few clinical prototypes are under evaluation in hospitals, and new systems are under development aiming at improving spatial and contrast resolution and reducing delivered dose. At the same time, synchrotron-radiation phase-contrast mammography has been demonstrated to offer substantial advantages when compared with conventional mammography. At Elettra, the Italian synchrotron radiation facility, a clinical program of phase-contrast BCT based on the free-space propagation approach is under development. In this paper, full-volume breast samples imaged with a beam energy of 32 keV delivering a mean glandular dose of 5 mGy are presented. The whole acquisition setup mimics a clinical study in order to evaluate its feasibility in terms of acquisition time and image quality. Acquisitions are performed using a high-resolution CdTe photon-counting detector, and the projection data are processed via a phase-retrieval algorithm. Tomographic reconstructions are compared with conventional mammographic images acquired prior to surgery and with histologic examinations. Results indicate that BCT with a monochromatic beam and free-space propagation phase-contrast imaging provides relevant three-dimensional insight into breast morphology at clinically acceptable doses and scan times.
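
For free-space propagation data, a single-distance Paganin-type phase-retrieval filter is a common choice for processing the projections before reconstruction; the sketch below shows that filter in NumPy. The delta/beta ratio, pixel size, and propagation distance are placeholder values, and the Elettra pipeline's actual phase-retrieval implementation and parameters may differ.

```python
# Minimal sketch of single-distance (Paganin-type) phase retrieval applied to a
# propagation-based projection; all acquisition parameters below are placeholders.
import numpy as np

def paganin_phase_retrieval(projection, pixel_size, distance, wavelength, delta_beta):
    """Low-pass filter a flat-fielded projection (Paganin 2002 form)."""
    ny, nx = projection.shape
    fy = np.fft.fftfreq(ny, d=pixel_size)   # spatial frequencies, cycles per metre
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fx2, fy2 = np.meshgrid(fx**2, fy**2)
    # denominator of the Paganin filter in Fourier space
    denom = 1.0 + np.pi * wavelength * distance * delta_beta * (fx2 + fy2)
    filtered = np.real(np.fft.ifft2(np.fft.fft2(projection) / denom))
    # retrieved projected thickness is proportional to -log of the filtered intensity
    return -np.log(np.clip(filtered, 1e-8, None))

if __name__ == "__main__":
    proj = 0.5 + 0.4 * np.random.rand(256, 256)      # toy flat-fielded projection
    wavelength = 1.2398e-6 / 32e3                    # ~32 keV photon wavelength in metres
    thickness = paganin_phase_retrieval(proj, pixel_size=100e-6, distance=1.6,
                                        wavelength=wavelength, delta_beta=2000)
    print(thickness.shape, thickness.mean())
```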


2014 ◽  
Vol 20 (4) ◽  
pp. 364-374 ◽  
Author(s):  
Posy Seifert ◽  
David Conover ◽  
Yan Zhang ◽  
Renee Morgan ◽  
Andrea Arieno ◽  
...  

2021 ◽  
pp. 1063293X2110214
Author(s):  
RT Subhalakshmi ◽  
S Appavu alias Balamurugan ◽  
S Sasikala

Recently, the COVID-19 pandemic has escalated drastically while only a limited quantity of rapid testing kits is available. Automated COVID-19 diagnosis models are therefore essential to identify the presence of the disease from radiological images. Earlier studies have focused on the development of Artificial Intelligence (AI) techniques for COVID-19 diagnosis using X-ray images. This paper aims to develop a Deep Learning Based MultiModal Fusion technique, called DLMMF, for COVID-19 diagnosis and classification from Computed Tomography (CT) images. The proposed DLMMF model operates in three main stages, namely Wiener Filtering (WF) based pre-processing, feature extraction, and classification. The model fuses deep features extracted by the VGG16 and Inception v4 models. Finally, a Gaussian Naïve Bayes (GNB) classifier identifies and classifies the test CT images into distinct class labels. The experimental validation of the DLMMF model uses the open-source COVID-CT dataset, which comprises a total of 760 CT images. The experimental outcome demonstrates superior performance, with a maximum sensitivity of 96.53%, specificity of 95.81%, accuracy of 96.81% and F-score of 96.73%.
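
The fusion-and-classification stage can be sketched with scikit-learn as below: Wiener filtering of a CT slice, concatenation of two deep feature vectors, and a Gaussian Naïve Bayes classifier. Random vectors stand in for the VGG16 and Inception v4 embeddings, and all array sizes and the toy labels are assumptions for illustration.

```python
# Minimal sketch of the DLMMF-style pipeline stages with placeholder features.
import numpy as np
from scipy.signal import wiener
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# 1) Pre-processing: Wiener filtering of a toy CT slice
ct_slice = rng.random((256, 256))
denoised = wiener(ct_slice, mysize=5)   # in practice, applied to every input image

# 2) Feature extraction: placeholders for the two CNN embeddings
n_samples = 40
vgg_feats = rng.random((n_samples, 512))        # stand-in for VGG16 features
inc_feats = rng.random((n_samples, 1536))       # stand-in for Inception v4 features
fused = np.concatenate([vgg_feats, inc_feats], axis=1)   # multimodal feature fusion

# 3) Classification: toy COVID-19 vs. non-COVID labels
labels = rng.integers(0, 2, size=n_samples)
clf = GaussianNB().fit(fused[:30], labels[:30])
print("toy accuracy:", clf.score(fused[30:], labels[30:]))
```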

