reconstruction algorithms
Recently Published Documents

TOTAL DOCUMENTS: 1515 (five years: 426)
H-INDEX: 59 (five years: 7)

2022 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Srinimalan Balakrishnan Selvakumaran ◽  
Daniel Mark Hall

Purpose
The purpose of this paper is to investigate the feasibility of an end-to-end, simplified and automated reconstruction pipeline for digital building assets using the design science research approach. Current methods to create digital assets by capturing the state of existing buildings can provide high accuracy but are time-consuming, expensive and difficult.

Design/methodology/approach
Using design science research, this research identifies the need for a crowdsourced and cloud-based approach to reconstruct digital building assets. The research then develops and tests a fully functional smartphone application prototype. The proposed end-to-end smartphone workflow begins with data capture and ends with user applications.

Findings
The resulting implementation can achieve a realistic three-dimensional (3D) model characterized by different typologies, minimal trade-off in accuracy and low processing costs. By crowdsourcing the images, the proposed approach can reduce costs for asset reconstruction by an estimated 93% compared to manual modeling and 80% compared to locally processed reconstruction algorithms.

Practical implications
The resulting implementation achieves “good enough” reconstruction of as-is 3D models with minimal trade-offs in accuracy compared to automated approaches and 15× cost savings compared to a manual approach. Potential facility management use cases include issue and information tracking, 3D mark-up and multi-model configurators.

Originality/value
Through user engagement, development, testing and validation, this work demonstrates the feasibility and impact of a novel crowdsourced and cloud-based approach for the reconstruction of digital building assets.
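As a sanity check on the figures above, the quoted 93% cost reduction and the “15× cost savings” are consistent: a fractional reduction r corresponds to a 1/(1 − r) fold saving. A one-line sketch:

```python
def fold_savings(cost_reduction):
    """Convert a fractional cost reduction into an approximate fold saving.

    A 93% reduction means paying 7% of the original cost,
    i.e. 1 / (1 - 0.93) ~ 14.3x, matching the abstract's "15x" figure.
    """
    return 1.0 / (1.0 - cost_reduction)
```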


2022 ◽  
Vol 9 (1) ◽  
Author(s):  
Roberto Fedrigo ◽  
Dan J. Kadrmas ◽  
Patricia E. Edem ◽  
Lauren Fougner ◽  
Ivan S. Klyuzhin ◽  
...  

Abstract
Background
Positron emission tomography (PET) with prostate-specific membrane antigen (PSMA) tracers has shown superior performance in detecting metastatic prostate cancers. Relative to [18F]fluorodeoxyglucose ([18F]FDG) PET images, PSMA PET images tend to visualize significantly higher-contrast focal lesions. We aim to evaluate segmentation and reconstruction algorithms in this emerging context. Specifically, Bayesian, or maximum a posteriori (MAP), image reconstruction has received significant interest, compared to standard ordered subsets expectation maximization (OSEM) reconstruction, for its potential to reach convergence with minimal noise amplification. However, few phantom studies have evaluated the quantitative accuracy of such reconstructions for the high-contrast, small lesions (sub-10 mm) typically observed in PSMA images. In this study, we cast 3–16 mm spheres using epoxy resin infused with a long half-life positron emitter (sodium-22; 22Na) to simulate prostate cancer metastases. The anthropomorphic Probe-IQ phantom, which features a liver, bladder, lungs, and ureters, was used to model relevant anatomy. Dynamic PET acquisitions were acquired, images were reconstructed with OSEM (varying subsets and iterations) and BSREM (varying β parameters), and the effects on lesion quantitation were evaluated.

Results
The 22Na lesions were scanned against an aqueous solution containing fluorine-18 (18F) as the background. Regions of interest were drawn with MIM Software using a 40% fixed threshold (40% FT) and a gradient segmentation algorithm (MIM’s PET Edge+). Recovery coefficients (RCs) (max, mean, peak, and newly defined “apex”), metabolic tumour volume (MTV), and total tumour uptake (TTU) were calculated for each sphere. SUVpeak and SUVapex had the most consistent RCs across lesion-to-background ratios and reconstruction parameters. The gradient-based segmentation algorithm was more accurate than 40% FT for determining MTV and TTU, particularly for lesions ≤ 6 mm in diameter (R2 = 0.979–0.996 vs. R2 = 0.115–0.527, respectively).

Conclusion
An anthropomorphic phantom was used to evaluate quantitation for PSMA PET imaging of metastatic prostate cancer lesions. BSREM with β = 200–400 and OSEM with 2–5 iterations resulted in the most accurate and robust measurements of SUVmean, MTV, and TTU for imaging conditions in 18F-PSMA PET/CT images. SUVapex, a hybrid metric of SUVmax and SUVpeak, was proposed for robust, accurate, and segmentation-free quantitation of lesions for PSMA PET.
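The uptake metrics named above follow standard PET definitions; a minimal sketch, assuming a lesion sub-volume of SUV values (the paper-specific SUVpeak and SUVapex variants are omitted for brevity):

```python
import numpy as np

def lesion_metrics(volume, voxel_volume_ml=1.0):
    """Standard PET lesion metrics from a 3-D sub-volume of SUV values.

    Uses a 40% fixed threshold (40% FT) of SUVmax to segment the lesion,
    then derives SUVmean, metabolic tumour volume (MTV) and total tumour
    uptake (TTU = SUVmean * MTV) inside the mask.
    """
    suv_max = float(volume.max())
    mask = volume >= 0.4 * suv_max            # 40% fixed-threshold segmentation
    suv_mean = float(volume[mask].mean())
    mtv_ml = float(mask.sum()) * voxel_volume_ml
    ttu = suv_mean * mtv_ml
    return suv_max, suv_mean, mtv_ml, ttu
```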


2022 ◽  
Vol 3 ◽  
Author(s):  
Pierre-Jean Lapray ◽  
Jean-Baptiste Thomas ◽  
Ivar Farup

The visual systems found in nature rely on capturing light under different modalities, in terms of spectral and polarization sensitivities. Numerous imaging techniques are inspired by this variety, among which the most famous is color imaging, inspired by the trichromacy theory of the human visual system. We investigate the spectral and polarimetric properties of biological imaging systems that lead to the best performance on scene imaging through haze, i.e., dehazing. We design a benchmark experiment based on modalities inspired by several visual systems, and adapt state-of-the-art image reconstruction algorithms to those modalities. We show the difference in performance of each studied system and discuss it in light of our methodology and the statistical relevance of our data.


2022 ◽  
Vol 14 (2) ◽  
pp. 333
Author(s):  
Luca Oggioni ◽  
David Sanchez del Rio Kandel ◽  
Giorgio Pariani

In the framework of Earth observation for scientific purposes, we consider a multiband spatial compressive sensing (CS) acquisition system based on pushbroom scanning. We conduct a series of analyses to address the effects of satellite movement on its performance in the context of a future space mission aimed at monitoring the cryosphere. We initially apply state-of-the-art CS techniques to static images and evaluate the reconstruction errors on representative scenes of the Earth. We then extend the reconstruction algorithms to pushframe acquisitions, i.e., static images processed line by line, and pushbroom acquisitions, i.e., moving frames, which account for the payload displacement during acquisition. A parallel analysis of the classical pushbroom acquisition strategy is also performed for comparison. Design guidelines following this analysis are then provided.
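The sparse-recovery step such CS pipelines rely on can be sketched with the classic iterative soft-thresholding algorithm (ISTA) on a generic y = Ax problem. The sensing matrix, sparsity level, and parameters below are illustrative stand-ins, not the paper's multiband pushbroom model:

```python
import numpy as np

def ista(A, y, lam=0.02, n_iter=1000):
    """ISTA for the lasso problem: min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L        # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```

In a compressive acquisition (fewer measurements than unknowns), this recovers a sparse scene line from the compressed samples, which is the per-line setting of the pushframe case.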


Nanophotonics ◽  
2022 ◽  
Vol 0 (0) ◽  
Author(s):  
Min Huang ◽  
Bin Zheng ◽  
Tong Cai ◽  
Xiaofeng Li ◽  
Jian Liu ◽  
...  

Abstract Metasurfaces, combined with artificial intelligence, are now motivating many contemporary research studies to revisit established fields, e.g., direction-of-arrival (DOA) estimation. Conventional DOA estimation techniques typically necessitate bulky beam-scanning equipment for signal acquisition or complicated reconstruction algorithms for data postprocessing, making them ineffective for in-situ detection. In this article, we propose a machine-learning-enabled metasurface for DOA estimation. For a given incident signal, a tunable metasurface is controlled in sequence, generating a series of field intensities at a single receiving probe. The perceived data are subsequently processed by a pretrained random forest model to recover the incident angle. As an illustrative example, we experimentally demonstrate a high-accuracy intelligent DOA estimation approach for a wide range of incident angles and achieve more than 95% accuracy with an error of less than 0.5°. The reported strategy opens a feasible route for intelligent DOA detection in full space and wide band. Moreover, it may inspire traditional applications to incorporate time-saving, equipment-simplified designs.
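The measure-then-classify loop can be sketched as follows. This is a toy stand-in: the cos² intensity responses, the number of metasurface states, and the nearest-template classifier (substituted for the paper's pretrained random forest to keep the sketch dependency-free) are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy forward model: each of n_states metasurface configurations maps the
# incident angle to one intensity reading at the single receiving probe.
n_states = 16
angles = np.linspace(-60.0, 60.0, 121)              # candidate DOAs (degrees)
gain = rng.uniform(2.0, 6.0, size=(n_states, 1))    # per-state angular response
phase = rng.uniform(0.0, 2.0 * np.pi, size=(n_states, 1))
templates = np.cos(gain * np.deg2rad(angles)[None, :] + phase) ** 2

def estimate_doa(measured):
    """Pick the candidate angle whose intensity template best matches."""
    err = np.sum((templates - measured[:, None]) ** 2, axis=0)
    return angles[np.argmin(err)]
```

The random per-state responses play the role of the sequenced metasurface codings: together they make the 16-sample intensity signature nearly unique per angle, so a single fixed probe suffices.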


2022 ◽  
Vol 14 (2) ◽  
pp. 288
Author(s):  
Yangyang Wang ◽  
Zhiming He ◽  
Xu Zhan ◽  
Yuanhua Fu ◽  
Liming Zhou

Three-dimensional (3D) synthetic aperture radar (SAR) imaging provides complete 3D spatial information and has been used in environmental monitoring in recent years. Compared with matched filtering (MF) algorithms, the regularization technique can improve image quality. However, due to the substantial computational cost, the existing observation-matrix-based sparse imaging algorithm is difficult to apply to large-scene and 3D reconstructions. Therefore, in this paper, novel 3D sparse reconstruction algorithms with generalized Lq regularization are proposed. First, we combine majorization–minimization (MM) and L1 regularization (MM-L1) to improve SAR image quality. Next, we combine MM and L1/2 regularization (MM-L1/2) to achieve high-quality 3D images. Then, we present an algorithm that combines MM and L0 regularization (MM-L0) to obtain 3D images. Finally, we present a generalized MM-Lq algorithm (GMM-Lq) for sparse SAR imaging problems with arbitrary q (0 ≤ q ≤ 1). The proposed algorithm can improve the performance of 3D SAR images compared with existing regularization techniques and effectively reduce the amount of calculation needed. Additionally, the reconstructed complex image retains the phase information, which keeps the reconstructed SAR image suitable for interferometry applications. Simulation and experimental results verify the effectiveness of the algorithms.
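The MM-Lq idea can be sketched as reweighted iterative soft-thresholding: at each step the nonconvex |x|^q penalty is majorized by a weighted L1 term at the current iterate. This is an illustrative sketch of the principle (with made-up lam, eps, and step-size choices), not the authors' GMM-Lq algorithm:

```python
import numpy as np

def mm_lq(A, y, lam=0.05, q=0.5, n_iter=100, eps=1e-3):
    """MM sketch for min_x 0.5*||Ax - y||^2 + lam*||x||_q^q, 0 < q <= 1.

    |x_i|^q is majorized by w_i*|x_i| with w_i = q*(|x_i^k| + eps)^(q-1),
    so each iteration is one reweighted soft-thresholding step.
    """
    L = np.linalg.norm(A, 2) ** 2                      # Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        w = q * (np.abs(x) + eps) ** (q - 1.0)         # majorizer weights
        z = x - A.T @ (A @ x - y) / L                  # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)
    return x
```

Setting q = 1 makes the weights constant and recovers plain L1 (lasso) thresholding; for complex SAR data, the same update applies with the sign replaced by the complex phase, which is how phase information can survive the reconstruction.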


2022 ◽  
pp. 1-13
Author(s):  
Lei Shi ◽  
Gangrong Qu ◽  
Yunsong Zhao

BACKGROUND: The ultra-limited-angle image reconstruction problem, with a limited-angle scanning range less than or equal to π/2, is severely ill-posed. Due to the considerably large condition number of the linear system for image reconstruction, it is extremely challenging to generate a valid reconstructed image with traditional iterative reconstruction algorithms. OBJECTIVE: To develop and test a valid ultra-limited-angle CT image reconstruction algorithm. METHODS: We propose a new optimized reconstruction model and a Reweighted Alternating Edge-preserving Diffusion and Smoothing algorithm, in which a reweighted method for improving the condition number is incorporated into the idea of the AEDS image reconstruction algorithm. The AEDS algorithm exploits image sparsity to partially improve the results. In the experiments, different algorithms (the Pre-Landweber algorithm, the AEDS algorithm, and our algorithm) are used to reconstruct the Shepp–Logan phantom from simulated projection data with noise, and a flat object with a large length-to-width ratio from real projection data. PSNR and SSIM are used as quantitative indices to evaluate the quality of the reconstructed images. RESULTS: Experiment results showed that for simulated projection data, our algorithm improves PSNR from 22.46 dB to 39.38 dB and SSIM from 0.71 to 0.96. For real projection data, our algorithm yields the highest PSNR and SSIM, 30.89 dB and 0.88, obtaining a valid reconstructed result. CONCLUSIONS: Our algorithm successfully combines the merits of several image processing and reconstruction algorithms. It significantly outperforms the other two algorithms and is valid for ultra-limited-angle CT image reconstruction.
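PSNR and SSIM are standard quality indices; for reference, a minimal implementation (SSIM is computed here over a single global window for brevity, whereas the usual index averages the statistic over local sliding windows):

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(x, y, data_range=1.0):
    """SSIM with one global window (standard stabilizing constants c1, c2)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = np.mean((x - mx) * (y - my))          # covariance
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```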


Author(s):  
Zlatan Alagic ◽  
Jacqueline Diaz Cardenas ◽  
Kolbeinn Halldorsson ◽  
Vitali Grozman ◽  
Stig Wallgren ◽  
...  

Abstract
Purpose
To compare the image quality between a deep learning–based image reconstruction algorithm (DLIR) and an adaptive statistical iterative reconstruction algorithm (ASiR-V) in noncontrast trauma head CT.

Methods
Head CT scans from 94 consecutive trauma patients were included. Images were reconstructed with ASiR-V 50% and three DLIR strengths: low (DLIR-L), medium (DLIR-M), and high (DLIR-H). Image quality was assessed quantitatively and qualitatively and compared between the reconstruction algorithms. Inter-reader agreement was assessed by weighted kappa.

Results
DLIR-M and DLIR-H demonstrated lower image noise (p < 0.001 for all pairwise comparisons), higher SNR of up to 82.9% (p < 0.001), and higher CNR of up to 53.3% (p < 0.001) compared to ASiR-V. DLIR-H outperformed the other DLIR strengths (p ranging from < 0.001 to 0.016). DLIR-M outperformed DLIR-L (p < 0.001) and ASiR-V (p < 0.001). The distribution of reader scores for DLIR-M and DLIR-H shifted towards higher scores compared to DLIR-L and ASiR-V, with a tendency towards higher scores at higher DLIR strengths. There were fewer non-diagnostic CT series for DLIR-M and DLIR-H compared to ASiR-V and DLIR-L, and no images were graded as non-diagnostic for DLIR-H regarding intracranial hemorrhage. Inter-reader agreement was fair to good between the second most and the least experienced reader, poor to moderate between the most and the least experienced reader, and poor to fair between the most and the second most experienced reader.

Conclusion
Trauma head CT series reconstructed with DLIR demonstrated better image quality than those reconstructed with ASiR-V. In particular, DLIR-M and DLIR-H demonstrated significantly improved image quality and fewer non-diagnostic images. The improvement in qualitative image quality was greater for the second most and the least experienced readers than for the most experienced reader.
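The noise, SNR, and CNR figures above follow the usual ROI-based definitions; the paper does not restate its exact formulas, so the standard ones below are an assumption:

```python
import numpy as np

def ct_quality(roi_tissue, roi_background):
    """ROI-based CT image-quality metrics (attenuation values in HU).

    noise = SD of the background ROI
    SNR   = mean tissue attenuation / noise
    CNR   = (mean tissue - mean background) / noise
    """
    noise = roi_background.std()
    snr = roi_tissue.mean() / noise
    cnr = (roi_tissue.mean() - roi_background.mean()) / noise
    return noise, snr, cnr
```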


Author(s):  
Bhupinder Singh Khural ◽  
Matthias Baer-Beck ◽  
Eric Fournie ◽  
Karl Stierstorfer ◽  
Yixing Huang ◽  
...  

Abstract The problem of data truncation in computed tomography (CT) is caused by missing data when the patient exceeds the scan field of view (SFOV) of a CT scanner. The reconstruction of a truncated scan produces severe truncation artifacts both inside and outside the SFOV. We have employed a deep learning-based approach to extend the field of view and suppress truncation artifacts. Thereby, our aim is to generate a good estimate of the real patient data, not to provide a perfect and diagnostic image even in regions beyond the SFOV of the CT scanner. This estimate could then be used as an input to higher-order reconstruction algorithms [1]. To evaluate the influence of the network structure and layout on the results, three convolutional neural networks (CNNs) were investigated in this paper: a general CNN called ConvNet, an autoencoder, and the U-Net architecture. Additionally, the impact of L1, L2, structural dissimilarity, and perceptual loss functions on the neural networks' learning was assessed. The evaluation of a data set comprising 12 truncated test patients demonstrated that the U-Net in combination with the structural dissimilarity loss showed the best performance in terms of image restoration in regions beyond the SFOV of the CT scanner. Moreover, this network produced the best mean absolute error, L1, L2, and structural dissimilarity evaluation measures on the test set compared to the other applied networks. Therefore, it is possible to achieve truncation artifact removal using deep learning techniques.

