CT Reconstruction
Recently Published Documents


TOTAL DOCUMENTS: 1015 (FIVE YEARS: 277)
H-INDEX: 39 (FIVE YEARS: 8)

2022 · Vol 8 (1) · pp. 12
Author(s): Jürgen Hofmann, Alexander Flisch, Robert Zboray

This article describes the implementation of an efficient and fast in-house computed tomography (CT) reconstruction framework. The implementation principles of this cone-beam CT reconstruction tool chain are described here. The article mainly covers the core part of CT reconstruction, the filtered backprojection, and its speed-up on GPU hardware. Methods and implementations of tools for artifact reduction, such as ring artifacts and beam hardening, as well as algorithms for determining the center of rotation and correcting a tilted rotation axis, are presented. The framework allows the reconstruction of CT images of arbitrary data size. Strategies for data splitting and GPU kernel optimization techniques applied to the backprojection process are illustrated by a few examples.
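The data-splitting idea can be illustrated with a small sketch. The snippet below is a minimal illustration, not the framework's code: a NumPy voxel-driven backprojector stands in for the GPU kernel, and the volume is processed in z-chunks so that arbitrarily large data sets fit into limited (GPU) memory.

```python
# Illustrative sketch only (not the framework's implementation): a NumPy backprojector
# stands in for the GPU kernel; the volume is split into z-chunks for limited memory.
import numpy as np

def backproject_slice(filtered_sino, angles, n_xy):
    """Voxel-driven parallel-beam backprojection of one filtered sinogram slice."""
    n_det = filtered_sino.shape[-1]
    xs = np.arange(n_xy) - n_xy / 2 + 0.5
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    recon = np.zeros((n_xy, n_xy), dtype=np.float32)
    for a, theta in enumerate(angles):
        # detector coordinate hit by each voxel for this view (linear interpolation)
        t = np.clip(X * np.cos(theta) + Y * np.sin(theta) + n_det / 2, 0, n_det - 1.001)
        i0 = t.astype(int)
        w = t - i0
        recon += (1 - w) * filtered_sino[a, i0] + w * filtered_sino[a, i0 + 1]
    return recon * np.pi / len(angles)

def reconstruct_in_chunks(filtered_sinos, angles, n_xy, chunk_size=32):
    """Split the work along z; on real hardware each chunk would be one GPU launch."""
    n_z = filtered_sinos.shape[0]
    volume = np.empty((n_z, n_xy, n_xy), dtype=np.float32)
    for z0 in range(0, n_z, chunk_size):
        for z in range(z0, min(z0 + chunk_size, n_z)):
            volume[z] = backproject_slice(filtered_sinos[z], angles, n_xy)
    return volume
```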


Photonics · 2022 · Vol 9 (1) · pp. 35
Author(s): Xuru Li, Xueqin Sun, Yanbo Zhang, Jinxiao Pan, Ping Chen

Spectral computed tomography (CT) can divide collected photons into multiple energy channels and gain multi-channel projections synchronously by using photon-counting detectors. However, the reconstructed images usually contain severe noise due to the limited number of photons in each energy channel. Tensor dictionary learning (TDL)-based methods have achieved better performance, but they usually lose image edge information and details, especially for under-sampled datasets. To address this problem, this paper proposes a method termed TDL with an enhanced sparsity constraint for spectral CT reconstruction. The proposed algorithm inherits the superiority of TDL by exploring the correlation of spectral CT images. Moreover, the method designs a regularization using the L0-norm of the image gradient to constrain both the images and the difference between the images and a prior image in each energy channel, further improving the ability to preserve edge information and subtle image details. The split-Bregman algorithm is applied to solve the proposed objective minimization model. Several numerical simulations and realistic preclinical mouse datasets are studied to assess the effectiveness of the proposed algorithm. The results demonstrate that the proposed method improves the quality of spectral CT images in terms of noise elimination, edge preservation, and image detail recovery compared with several existing state-of-the-art methods.
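As a rough illustration of the enhanced sparsity constraint, the sketch below counts the non-zero gradient entries of each energy channel and of its difference to a prior image. The function names and the hard threshold eps are assumptions for illustration only, not the paper's exact split-Bregman formulation.

```python
# Conceptual sketch (assumptions, not the paper's exact model): the enhanced sparsity
# constraint penalizes the L0 "norm" of the image gradient of each channel and of its
# difference to a prior image.
import numpy as np

def grad2d(img):
    gx = np.diff(img, axis=0, append=img[-1:, :])
    gy = np.diff(img, axis=1, append=img[:, -1:])
    return gx, gy

def l0_gradient_penalty(channels, prior, weight=1.0, eps=1e-3):
    """channels, prior: arrays of shape (n_channels, H, W); eps is an assumed tolerance."""
    penalty = 0.0
    for c in range(channels.shape[0]):
        for img in (channels[c], channels[c] - prior[c]):
            gx, gy = grad2d(img)
            penalty += np.count_nonzero(np.hypot(gx, gy) > eps)
    return weight * penalty
```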


2022
Author(s): Yun Chen, Yao Lu, Xiangyuan Ma, Yuesheng Xu

Abstract: The goal of this study is to develop a new computed tomography (CT) image reconstruction method, aiming at improving the quality of the reconstructed images of existing methods while reducing computational costs. Existing CT reconstruction is modeled by pixel-based piecewise constant approximations of the integral equation that describes the CT projection data acquisition process. Using these approximations imposes a bottleneck model error and results in a discrete system of large size. We propose to develop a content-adaptive unstructured grid (CAUG) based regularized CT reconstruction method to address these issues. Specifically, we design a CAUG of the image domain to sparsely represent the underlying image, and introduce a CAUG-based piecewise linear approximation of the integral equation by employing a collocation method. We further apply a regularization defined on the CAUG to the resulting ill-posed linear system, which may lead to a sparse linear representation of the underlying solution. The regularized CT reconstruction is formulated as a convex optimization problem, whose objective function consists of a weighted least-squares fidelity term, a regularization term and a constraint term. Here, the corresponding weight matrix is derived from the simultaneous algebraic reconstruction technique (SART). We then develop a SART-type preconditioned fixed-point proximity algorithm to solve the optimization problem. Convergence analysis is provided for the resulting iterative algorithm. Numerical experiments demonstrate that the proposed method outperforms several existing methods in terms of both suppressing noise and reducing computational costs. These methods include SART without regularization and with quadratic regularization on the CAUG, the traditional total variation (TV) regularized reconstruction method, and the TV-superiorized conjugate gradient method on the pixel grid.
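A minimal sketch of a SART-type preconditioned iteration, under assumptions: A is a generic sparse system matrix rather than the paper's CAUG-based collocation matrix, and the proximity step of the regularizer is replaced by a simple non-negativity projection.

```python
# Minimal sketch under assumptions: SART weighting from row/column sums of A; the
# exact proximity operator and the content-adaptive unstructured grid are not reproduced.
import numpy as np
import scipy.sparse as sp

def sart_weights(A):
    row_sums = np.asarray(A.sum(axis=1)).ravel()
    col_sums = np.asarray(A.sum(axis=0)).ravel()
    D = sp.diags(1.0 / np.maximum(row_sums, 1e-12))   # weights the fidelity term
    V = sp.diags(1.0 / np.maximum(col_sums, 1e-12))   # acts as a preconditioner
    return D, V

def sart_type_iteration(A, y, x, D, V, n_iter=20):
    """x <- x + V A^T D (y - A x), followed by a placeholder proximity step."""
    for _ in range(n_iter):
        x = x + V @ (A.T @ (D @ (y - A @ x)))
        x = np.maximum(x, 0.0)   # stand-in for the prox of the regularizer
    return x
```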


2021 · Vol 12 (1) · pp. 404
Author(s): Dominik F. Bauer, Constantin Ulrich, Tom Russ, Alena-Kathrin Golla, Lothar R. Schad, ...

Metal artifacts are common in CT-guided interventions due to the presence of metallic instruments. These artifacts often obscure clinically relevant structures, which can complicate the intervention. In this work, we present a deep learning-based CT reconstruction network, called iCTU-Net, for the reduction of metal artifacts. The network emulates the filtering and back-projection steps of the classical filtered back projection (FBP). A U-Net is used as post-processing to refine the back-projected image. The reconstruction is trained end-to-end, i.e., the inputs of the iCTU-Net are sinograms and the outputs are reconstructed images. The network does not require a predefined back-projection operator or the exact X-ray beam geometry. Supervised training is performed on simulated interventional data of the abdomen. For projection data exhibiting severe artifacts, the iCTU-Net achieved reconstructions with SSIM = 0.970±0.009 and PSNR = 40.7±1.6. The best reference method, an image-based post-processing network, only achieved SSIM = 0.944±0.024 and PSNR = 39.8±1.9. Since the whole reconstruction process is learned, the network was able to fully utilize the raw data, which benefited the removal of metal artifacts. The proposed method was the only studied method that could eliminate the metal streak artifacts.
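The end-to-end idea can be sketched with toy sizes and placeholder layers; this is not the published iCTU-Net architecture, only an assumed arrangement of a learnable 1-D filtering of each sinogram row, a learnable mapping from sinogram space to image space in place of a fixed backprojector, and a small convolutional stage standing in for the U-Net.

```python
# Rough sketch with assumed toy sizes; not the published iCTU-Net.
import torch
import torch.nn as nn

class LearnedFBP(nn.Module):
    def __init__(self, n_angles=60, n_det=64, img_size=64):
        super().__init__()
        self.filter = nn.Conv1d(1, 1, kernel_size=63, padding=31, bias=False)
        self.backproject = nn.Linear(n_angles * n_det, img_size * img_size, bias=False)
        self.refine = nn.Sequential(                      # stand-in for the U-Net
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
        self.img_size = img_size

    def forward(self, sino):                              # sino: (B, n_angles, n_det)
        b, a, d = sino.shape
        filtered = self.filter(sino.reshape(b * a, 1, d)).reshape(b, a, d)
        img = self.backproject(filtered.reshape(b, -1))
        img = img.reshape(b, 1, self.img_size, self.img_size)
        return img + self.refine(img)                     # learned reconstruction + refinement
```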


Author(s): Rob Heylen, Aditi Thanki, Dries Verhees, Domenico Iuso, Jan De Beenhouwer, ...

Abstract: X-ray computed tomography (X-CT) plays an important role in non-destructive quality inspection and process evaluation in metal additive manufacturing, as several types of defects, such as keyhole and lack-of-fusion pores, can be observed in these 3D images as local changes in material density. Segmentation of these defects often relies on threshold methods applied to the reconstructed attenuation values of the 3D image voxels. However, the segmentation accuracy is affected by unavoidable X-CT reconstruction features such as partial volume effects, voxel noise and imaging artefacts. These effects create false positives, difficulties in threshold value selection, and unclear or jagged defect edges. In this paper, we present a new X-CT defect segmentation method based on preprocessing the X-CT image with a 3D total variation denoising method. By comparing the changes in the histogram, the threshold can be selected much more reliably, and the resulting segmentation is of much higher quality. We derive the optimal algorithm parameter settings and demonstrate robustness to deviating settings. The technique is presented on simulated data sets, compared between low- and high-quality X-CT scans, and evaluated with optical microscopy after destructive tests.
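The general workflow can be sketched in a few lines; the specific TV algorithm (Chambolle) and the Otsu threshold below are assumptions chosen for illustration, not necessarily the paper's choices.

```python
# Sketch of the general workflow (parameter values and threshold rule are assumptions):
# denoise the reconstructed X-CT volume with 3-D total variation before thresholding,
# so that defect segmentation suffers less from voxel noise.
import numpy as np
from skimage.restoration import denoise_tv_chambolle
from skimage.filters import threshold_otsu

def segment_defects(volume, tv_weight=0.1):
    """volume: 3-D array of reconstructed attenuation values."""
    smoothed = denoise_tv_chambolle(volume, weight=tv_weight)
    thr = threshold_otsu(smoothed)          # threshold chosen from the (cleaner) histogram
    return smoothed < thr                   # defects appear as low-density voxels
```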


Tomography · 2021 · Vol 7 (4) · pp. 932-949
Author(s): Chang Sun, Yitong Liu, Hongwen Yang

Sparse-view CT reconstruction is a fundamental task in computed tomography that aims to overcome undesired artifacts and recover the details of textural structure in degraded CT images. Recently, many deep learning-based networks have achieved desirable performance compared with iterative reconstruction algorithms. However, the performance of these methods may deteriorate severely when the degradation strength of the test image is not consistent with that of the training dataset. In addition, these methods do not pay enough attention to the characteristics of different degradation levels, so simply extending the training dataset with images at multiple degradation levels is also not effective. Although training a separate model for each degradation level can mitigate this problem, it requires extensive parameter storage. Accordingly, in this paper, we focus on sparse-view CT reconstruction for multiple degradation levels. We propose a single degradation-aware deep learning framework that predicts clear CT images by understanding the disparity of degradation in both the frequency domain and the image domain. This dual-domain procedure can perform particular operations at different degradation levels in frequency component recovery and spatial detail reconstruction. The peak signal-to-noise ratio (PSNR), structural similarity (SSIM) and visual results demonstrate that our method outperformed classical deep learning-based reconstruction methods in terms of effectiveness and scalability.
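A schematic sketch of the degradation-aware conditioning, with assumed module sizes: a single branch is modulated by a degradation-level code, and the full dual-domain framework would apply such branches in both the frequency domain and the image domain rather than training one model per level.

```python
# Schematic sketch only (sizes and conditioning mechanism are assumptions).
import torch
import torch.nn as nn

class DegradationAwareBranch(nn.Module):
    def __init__(self, channels=32, n_levels=4):
        super().__init__()
        self.embed = nn.Embedding(n_levels, channels)     # degradation-level code
        self.body = nn.Sequential(
            nn.Conv2d(1 + channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x, level):                          # x: (B, 1, H, W); level: (B,)
        code = self.embed(level)[:, :, None, None].expand(-1, -1, *x.shape[2:])
        return x + self.body(torch.cat([x, code], dim=1))  # level-conditioned residual
```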


Author(s): Genwei Ma, Xing Zhao, Yining Zhu, Huitao Zhang

Abstract: To solve the problem of learning-based computed tomography (CT) reconstruction, several reconstruction networks have been proposed. However, applying neural networks to tomographic reconstruction remains challenging due to their unacceptable memory requirements. In this study, we present a novel lightweight block reconstruction network (LBRN), which transforms the reconstruction operator into a deep neural network by unrolling the filtered back-projection (FBP) method. Specifically, the proposed network contains two main modules, which correspond, respectively, to the filtering and back-projection steps of the FBP method. The first module of the LBRN decouples the relationship of the Radon transform between the reconstructed image and the projection data. Therefore, the following module, the block back-projection module, can use a block reconstruction strategy. Because each image block is connected only to part of the filtered projection data, the network structure is greatly simplified and the number of parameters of the whole network is dramatically reduced. Moreover, the approach is trained end-to-end, works directly on raw projection data, and does not depend on any initial images. Five reconstruction experiments are conducted to evaluate the performance of the proposed LBRN: full-angle CT, low-dose CT, region-of-interest (ROI) reconstruction, metal artifact reduction, and a real-data experiment. The results show that the LBRN can be effectively introduced into the reconstruction process and has outstanding advantages across these different reconstruction problems.
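The block back-projection idea can be sketched as follows, assuming a simple parallel-beam geometry for illustration: each image block is connected only to the detector bins its footprint projects onto, which is what keeps the per-block sub-network small.

```python
# Sketch of the block-reconstruction idea (geometry is an assumption, not the LBRN code).
import numpy as np

def detector_range_for_block(cx, cy, radius, angles, n_det):
    """Parallel-beam footprint: detector interval hit by a block, per view angle."""
    centers = cx * np.cos(angles) + cy * np.sin(angles) + n_det / 2
    lo = np.clip(np.floor(centers - radius).astype(int), 0, n_det - 1)
    hi = np.clip(np.ceil(centers + radius).astype(int), 0, n_det - 1)
    return lo, hi   # only sinogram columns in [lo, hi] feed this block's sub-network
```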


Tomography · 2021 · Vol 7 (4) · pp. 877-892
Author(s): Jin H. Yoon, Shawn H. Sun, Manjun Xiao, Hao Yang, Lin Lu, ...

Achieving high feature reproducibility while preserving biological information is one of the main challenges for the generalizability of current radiomics studies. Non-clinical imaging variables, such as reconstruction kernels, have been shown to significantly impact radiomics features. In this study, we retrain an open-source convolutional neural network (CNN) to harmonize computed tomography (CT) images reconstructed with various kernels, in order to improve feature reproducibility and radiomic model performance, using epidermal growth factor receptor (EGFR) mutation prediction in lung cancer as a paradigm. In the training phase, the CNN was retrained and tested on 32 lung cancer patients’ CT images between two different groups of reconstruction kernels (smooth and sharp). In the validation phase, the retrained CNN was validated on an external cohort of 223 lung cancer patients’ CT images acquired using different CT scanners and kernels. The results showed that the retrained CNN could be successfully applied to external datasets with different CT scanner parameters, and that harmonization of reconstruction kernels from sharp to smooth could significantly improve the performance of the radiomics model in predicting EGFR mutation status in lung cancer. In conclusion, the CNN-based method showed great potential for improving feature reproducibility and generalizability by harmonizing medical images with heterogeneous reconstruction kernels.
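A very rough sketch of such a harmonization setup, with a placeholder network and loss (the study retrains an existing open-source CNN whose details are not reproduced here): training pairs are slices of the same scan reconstructed with a sharp and a smooth kernel.

```python
# Placeholder sketch under assumptions; architecture and loss are not the study's.
import torch
import torch.nn as nn

harmonizer = nn.Sequential(                 # stand-in for the open-source CNN
    nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(harmonizer.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

def train_step(sharp_batch, smooth_batch):
    """sharp/smooth: (B, 1, H, W) slices of the same scan, different kernels."""
    optimizer.zero_grad()
    loss = loss_fn(harmonizer(sharp_batch), smooth_batch)
    loss.backward()
    optimizer.step()
    return loss.item()
```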


2021 · pp. 1-19
Author(s): Wei Wang, Xiang-Gen Xia, Chuanjiang He, Zemin Ren, Jian Lu

In this paper, we present an arc-based fan-beam computed tomography (CT) reconstruction algorithm obtained by applying Katsevich’s helical CT image reconstruction formula to 2D fan-beam CT scanning data. Specifically, we propose a new weighting function to deal with the redundant data. Our weighting function ϖ(x, λ) is an average of two characteristic functions, where each characteristic function indicates whether the projection data at the scanning angle contribute to the intensity of the pixel x. In fact, for every pixel x, our method uses the projection data of two scanning-angle intervals to reconstruct its intensity, where one interval contains the starting angle and the other contains the end angle. Each interval corresponds to one characteristic function. By extending the fan-beam algorithm to the circle cone-beam geometry, we also obtain a new circle cone-beam CT reconstruction algorithm. To verify the effectiveness of our method, simulated experiments are performed for 2D fan-beam geometry with straight-line detectors and 3D circle cone-beam geometry with flat-panel detectors, where the simulated sinograms are generated by the open-source software “ASTRA toolbox.” We compare our method with other existing algorithms. Our experimental results show that our new method yields the lowest root-mean-square error (RMSE) and the highest structural similarity (SSIM) for both the reconstructed 2D fan-beam and 3D circle cone-beam CT images.
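The redundancy weighting can be sketched directly from this description; the interval endpoints are left as inputs below because the paper's derivation of the intervals is not reproduced here.

```python
# Illustrative sketch only: the weight for pixel x at source angle lam is the average
# of two characteristic functions, one for an angular interval containing the start
# angle and one for an interval containing the end angle (interval endpoints assumed given).
def redundancy_weight(lam, interval_start, interval_end):
    """Each interval is a (lo, hi) tuple in radians; returns the weight in {0, 0.5, 1}."""
    chi1 = 1.0 if interval_start[0] <= lam <= interval_start[1] else 0.0
    chi2 = 1.0 if interval_end[0] <= lam <= interval_end[1] else 0.0
    return 0.5 * (chi1 + chi2)
```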

