A Shape Reconstruction Method for Two Phase Materials Using Resistance Tomography Data

2005 ◽  
Author(s):  
Manuchehr Soleimani

Electrical resistance tomography (ERT) has great potential for multi-phase flow monitoring. Image reconstruction in ERT is computationally costly, so online monitoring is a difficult task. Linear reconstruction methods are currently used as fast methods, but image reconstruction is a nonlinear inverse problem and linear methods are not sufficient in many cases. The application of a recently proposed non-iterative inversion method for two-phase materials has been studied. The method is based on the monotonicity property of the resistance matrix in ERT and requires modest computational cost. In this paper we explain the application of this inversion method and demonstrate its capabilities and drawbacks using 2D test examples. A major contribution of this paper is to optimize the software program for the inversion (by performing most of the computations offline), so that it can be used for online application.
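
To illustrate the offline/online split that makes the monotonicity approach fast, the sketch below precomputes one resistance matrix per pixel offline and reduces the online step to a small eigenvalue test per pixel. It is a minimal illustration of the monotonicity idea, not the paper's code; the array names, the sign of the test, and the tolerance are assumptions.

```python
import numpy as np

def monotonicity_reconstruction(R_meas, R_pixel, tol=0.0):
    """Illustrative monotonicity test for ERT shape reconstruction.

    R_meas  : (m, m) measured resistance matrix
    R_pixel : (n, m, m) resistance matrices precomputed offline, one per
              pixel, for a background with only pixel k perturbed
    Returns a binary map marking pixels judged to lie inside the
    higher-conductivity phase.
    """
    n = R_pixel.shape[0]
    inside = np.zeros(n, dtype=bool)
    for k in range(n):
        # Monotonicity (illustrative direction): if pixel k belongs to the
        # inclusion, R_pixel[k] - R_meas is approximately positive
        # semi-definite; test via its smallest eigenvalue.
        diff = R_pixel[k] - R_meas
        lam_min = np.linalg.eigvalsh(diff).min()
        inside[k] = lam_min >= -tol
    return inside
```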

2017 ◽  
Vol 2017 ◽  
pp. 1-10
Author(s):  
Hsuan-Ming Huang ◽  
Ing-Tsung Hsiao

Background and Objective. Over the past decade, image quality in low-dose computed tomography has been greatly improved by various compressive sensing- (CS-) based reconstruction methods. However, these methods have some disadvantages, including high computational cost and slow convergence rate. Many different speed-up techniques for CS-based reconstruction algorithms have been developed. The purpose of this paper is to propose a fast reconstruction framework that combines a CS-based reconstruction algorithm with several speed-up techniques. Methods. First, total difference minimization (TDM) was implemented using soft-threshold filtering (STF). Second, we combined TDM-STF with the ordered subsets transmission (OSTR) algorithm to accelerate convergence. To further speed up the convergence of the proposed method, we applied the power factor and the fast iterative shrinkage thresholding algorithm to OSTR and TDM-STF, respectively. Results. Results obtained from simulation and phantom studies showed that many speed-up techniques can be combined to greatly improve the convergence speed of a CS-based reconstruction algorithm. More importantly, the increase in computation time (≤10%) was minor compared to the acceleration provided by the proposed method. Conclusions. In this paper, we have presented a CS-based reconstruction framework that combines several acceleration techniques. Both simulation and phantom studies provide evidence that the proposed method has the potential to satisfy the requirement of fast image reconstruction in practical CT.
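
As an illustration of the soft-thresholding and FISTA-style acceleration mentioned above, the sketch below shows a generic accelerated proximal step; it is a simplified, stand-alone example rather than the TDM-STF/OSTR implementation, and the function names and parameters are assumptions.

```python
import numpy as np

def soft_threshold(x, lam):
    """Elementwise soft-thresholding, the core of STF-type updates."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def fista_step(x, x_prev, grad_fn, step, lam, t):
    """One FISTA-accelerated proximal step (generic sketch).

    x, x_prev : current and previous iterates
    grad_fn   : callable returning the data-fidelity gradient
    step      : gradient step size
    lam       : regularization weight
    t         : current FISTA momentum parameter
    """
    # Updated momentum parameter and extrapolation point
    t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    y = x + ((t - 1.0) / t_new) * (x - x_prev)
    # Gradient step on the data-fidelity term, then soft-threshold
    x_new = soft_threshold(y - step * grad_fn(y), step * lam)
    return x_new, t_new
```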


2015 ◽  
Vol 8 (4) ◽  
pp. 1259-1273 ◽  
Author(s):  
J. Ray ◽  
J. Lee ◽  
V. Yadav ◽  
S. Lefantzi ◽  
A. M. Michalak ◽  
...  

Abstract. Atmospheric inversions are frequently used to estimate fluxes of atmospheric greenhouse gases (e.g., biospheric CO2 flux fields) at Earth's surface. These inversions typically assume that flux departures from a prior model are spatially smoothly varying, which are then modeled using a multi-variate Gaussian. When the field being estimated is spatially rough, multi-variate Gaussian models are difficult to construct and a wavelet-based field model may be more suitable. Unfortunately, such models are very high-dimensional and are most conveniently used when the estimation method can simultaneously perform data-driven model simplification (removal of model parameters that cannot be reliably estimated) and fitting. Such sparse reconstruction methods are typically not used in atmospheric inversions. In this work, we devise a sparse reconstruction method, and illustrate it in an idealized atmospheric inversion problem for the estimation of fossil fuel CO2 (ffCO2) emissions in the lower 48 states of the USA. Our new method is based on stagewise orthogonal matching pursuit (StOMP), a method used to reconstruct compressively sensed images. Our adaptations bestow three properties on the sparse reconstruction procedure that are useful in atmospheric inversions. We have modified StOMP to incorporate prior information on the emission field being estimated and to enforce non-negativity on the estimated field. Finally, though based on wavelets, our method allows for the estimation of fields in non-rectangular geometries, e.g., emission fields inside geographical and political boundaries. Our idealized inversions use a recently developed multi-resolution (i.e., wavelet-based) random field model developed for ffCO2 emissions and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field to an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also reduces the overall computational cost by a factor of 2. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.
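
For readers unfamiliar with StOMP, the sketch below shows the general stagewise selection idea with a non-negativity constraint imposed through non-negative least squares. It is a simplified stand-in (non-negativity is applied directly to the coefficients here, not to a wavelet-synthesized field), not the authors' modified algorithm, and all names and thresholds are assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def stomp_nonneg(A, y, n_stages=10, thresh_factor=2.0):
    """Illustrative StOMP-style sparse reconstruction with a
    non-negativity constraint on the recovered coefficients.

    A : (m, n) sensing/footprint matrix (e.g. a transport operator
        composed with a wavelet synthesis)
    y : (m,) observations
    """
    m, n = A.shape
    x = np.zeros(n)
    active = np.zeros(n, dtype=bool)
    residual = y.copy()
    for _ in range(n_stages):
        # Matched-filter correlations of the residual
        corr = A.T @ residual
        # Stage-adaptive threshold proportional to the residual level
        sigma = np.linalg.norm(residual) / np.sqrt(m)
        active |= np.abs(corr) > thresh_factor * sigma
        if not active.any():
            break
        # Non-negative least squares restricted to the active columns
        x_active, _ = nnls(A[:, active], y)
        x = np.zeros(n)
        x[active] = x_active
        residual = y - A @ x
    return x
```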


Author(s):  
Quanhu Zhang ◽  
Weihua Hui ◽  
Feng Li ◽  
Zhongmao Gu ◽  
Shaojun Qian

Tomographic Gamma Scanning (TGS) is one of the most advanced non-destructive assay (NDA) methods. For measuring heterogeneously distributed media of medium and high density, however, three main problems remain: experimental calibration of the TGS detection efficiency is difficult and complicated because of the large voxels; the "point-to-point" model and the average model cannot accurately handle high-density samples in transmission image reconstruction; and the computational cost of the correction factors in the emission image is very large. Calibrating the detection efficiency with the Monte Carlo method greatly shortens the calibration cycle. A new Monte Carlo statistical iteration method for TGS transmission image reconstruction, based on MC calculation and numerical analysis, is presented, which makes it possible to measure high-density samples. A division method and a pre-calculation method are used in reconstructing the TGS emission image, which save a great deal of computation time and provide a fast reconstruction algorithm for the emission image. The above methods were applied to a TGS experimental device: the relative errors between experiment and MC calibration were less than 5%; the relative errors between the reconstructed values and the reference values in the transmission image were less than 4%; and when the corrected experimental results were compared to the standard values, the relative deviation was found to be 7%. It took no more than one hour to complete the reconstruction of the TGS emission image for a sample model with 3×3×3 voxels using a 2.0 GHz computer.
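
The following sketch illustrates the general idea of precomputing correction factors once and reusing them inside an MLEM-type emission reconstruction. It is only a generic example of a pre-calculation strategy under a simplified geometric model, not the paper's division or pre-calculation method, and every name and array shape is an assumption.

```python
import numpy as np

def precompute_correction(path_lengths, mu):
    """Precompute (offline) attenuation-correction factors for each
    (view, emitting voxel) pair so the emission reconstruction only
    has to look them up.

    path_lengths : (n_views, n_voxels, n_voxels) chord lengths through
                   every voxel k of the ray from voxel j to the detector
                   in view i (simplified geometry)
    mu           : (n_voxels,) attenuation coefficients from the
                   transmission reconstruction
    """
    # exp(-sum_k mu_k * l_ijk) for every view i and emitting voxel j
    return np.exp(-np.einsum('ijk,k->ij', path_lengths, mu))

def mlem_emission(counts, system, correction, n_iter=50):
    """Simple MLEM emission reconstruction using the precomputed
    correction factors (illustrative sketch).

    counts     : (n_views,) measured emission counts
    system     : (n_views, n_voxels) geometric system matrix
    correction : output of precompute_correction
    """
    H = system * correction          # attenuation-corrected system matrix
    x = np.ones(H.shape[1])
    sens = H.sum(axis=0) + 1e-12
    for _ in range(n_iter):
        proj = H @ x + 1e-12
        x *= (H.T @ (counts / proj)) / sens
    return x
```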


Author(s):  
Candida Mwisomba ◽  
Abdi T. Abdalla ◽  
Idrissa Amour ◽  
Florian Mkemwa ◽  
Baraka Maiseli

Abstract Compressed sensing allows recovery of image signals using a portion of the data – a technique that has drastically revolutionized the field of through-the-wall radar imaging (TWRI). This technique can be accomplished through nonlinear methods, including convex programming and greedy iterative algorithms. However, such (nonlinear) methods increase the computational cost at the sensing and reconstruction stages, thus limiting the application of TWRI in delicate practical tasks (e.g. military operations and rescue missions) that demand fast response times. Motivated by this limitation, the current work introduces the use of a numerical optimization algorithm, called Limited Memory Broyden–Fletcher–Goldfarb–Shanno (LBFGS), to the TWRI framework to lower image reconstruction time. LBFGS, a well-known Quasi-Newton algorithm, has traditionally been applied to solve large-scale optimization problems. Despite its potential applications, this algorithm has not been extensively applied in TWRI. Therefore, guided by LBFGS and using the Euclidean norm, we employed the regularized least-squares method to solve the cost function of the TWRI problem. Simulation results show that our method reduces the computational time by 87% relative to the classical method, even with an increased number of targets or large data volumes. Moreover, the results show that the proposed method remains robust when applied in noisy environments.
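
A minimal sketch of solving a Tikhonov-regularized least-squares cost with L-BFGS via SciPy is given below; the cost function, names, and regularizer are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
from scipy.optimize import minimize

def lbfgs_reconstruct(A, y, lam=1e-2):
    """Regularized least-squares image recovery solved with L-BFGS
    (illustrative sketch).

    A   : (m, n) measurement/dictionary matrix for the TWRI scene
    y   : (m,) measured radar returns
    lam : regularization weight
    """
    def cost_and_grad(x):
        r = A @ x - y
        # 0.5*||Ax - y||^2 + 0.5*lam*||x||^2 and its gradient
        f = 0.5 * r @ r + 0.5 * lam * x @ x
        g = A.T @ r + lam * x
        return f, g

    x0 = np.zeros(A.shape[1])
    res = minimize(cost_and_grad, x0, jac=True, method='L-BFGS-B')
    return res.x
```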


Author(s):  
Osama A. Omer

An important part of any computed tomography (CT) system is the reconstruction method, which transforms the measured data into images. Reconstruction methods for CT can be either analytical or iterative. The analytical methods can be exact, by exact projector inversion, or non-exact, based on back projection (BP). The BP methods are attractive because of their simplicity and low computational cost, but they produce suboptimal images with respect to artifacts, resolution, and noise. This paper deals with improving the image quality of BP by using a super-resolution technique. Super-resolution can be beneficial in improving the image quality of many medical imaging systems without the need for significant hardware alteration. In this paper, we propose to reconstruct a high-resolution image from the measured signals in sinogram space, instead of reconstructing low-resolution images and then post-processing them to obtain a higher-resolution image.
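
The sketch below shows plain back projection of a parallel-beam sinogram and one simple way to work in sinogram space, by upsampling the projections along the detector axis before back projection. It is an illustrative toy example, not the proposed method, and the interpolation choices are assumptions.

```python
import numpy as np
from scipy.ndimage import rotate, zoom

def backproject(sinogram, thetas):
    """Plain (unfiltered) back projection of a parallel-beam sinogram.

    sinogram : (n_angles, n_detectors) projections
    thetas   : projection angles in degrees
    """
    n_det = sinogram.shape[1]
    recon = np.zeros((n_det, n_det))
    for proj, theta in zip(sinogram, thetas):
        # Smear each 1-D projection across the image, rotate it back
        # to its acquisition angle, and accumulate.
        smear = np.tile(proj, (n_det, 1))
        recon += rotate(smear, theta, reshape=False, order=1)
    return recon * np.pi / (2 * len(thetas))

def super_resolution_bp(sinogram, thetas, factor=2):
    """Illustration of the sinogram-space idea: upsample the sinogram
    along the detector axis before back projection, so the image is
    reconstructed directly on a finer grid."""
    sino_hr = zoom(sinogram, (1, factor), order=3)
    return backproject(sino_hr, thetas)
```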


2014 ◽  
Vol 2014 ◽  
pp. 1-13 ◽  
Author(s):  
Chengzhi Deng ◽  
Shengqian Wang ◽  
Wei Tian ◽  
Zhaoming Wu ◽  
Saifeng Hu

Recent developments in compressive sensing (CS) show that it is possible to accurately reconstruct the magnetic resonance (MR) image from undersampled k-space data by solving nonsmooth convex optimization problems, which therefore significantly reduces the scanning time. In this paper, we propose a new MR image reconstruction method based on a compound regularization model associated with the nonlocal total variation (NLTV) and the wavelet approximate sparsity. Nonlocal total variation can restore periodic textures and local geometric information better than total variation. The wavelet approximate sparsity achieves more accurate sparse reconstruction than fixed wavelet l0 and l1 norms. Furthermore, a variable splitting and augmented Lagrangian algorithm is presented to solve the proposed minimization problem. Experimental results on MR image reconstruction demonstrate that the proposed method outperforms many existing MR image reconstruction methods both in quantitative and in visual quality assessment.
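
To make the splitting idea concrete, the simplified sketch below alternates a k-space data-consistency step with wavelet soft-thresholding. The nonlocal TV term and the augmented Lagrangian machinery of the paper are omitted, and the names, the wavelet, and the parameters are assumptions.

```python
import numpy as np
import pywt

def wavelet_shrink(img, lam, wavelet='db4', level=3):
    """Proximal (soft-threshold) step for the wavelet-sparsity term."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    new = [coeffs[0]]
    for detail in coeffs[1:]:
        new.append(tuple(pywt.threshold(d, lam, mode='soft') for d in detail))
    return pywt.waverec2(new, wavelet)

def splitting_recon(y, mask, lam_w=0.01, n_iter=50):
    """Very simplified CS-MRI reconstruction by alternating steps.
    Assumes dyadic (power-of-two) image dimensions so wavelet
    round-trips preserve the array shape.

    y    : (n, n) measured k-space (zeros at non-sampled points)
    mask : (n, n) boolean sampling mask
    """
    x = np.abs(np.fft.ifft2(y))
    for _ in range(n_iter):
        # Data consistency: keep the measured k-space values
        k = np.fft.fft2(x)
        k[mask] = y[mask]
        x = np.abs(np.fft.ifft2(k))
        # Regularization: wavelet soft-thresholding
        x = wavelet_shrink(x, lam_w)
    return x
```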


2020 ◽  
Author(s):  
Evangelos Raptis ◽  
Laura Parkes ◽  
Jose Anton-Rodriguez ◽  
Stephen Carter ◽  
Karl Herholz ◽  
...  

Abstract Purpose: The combination of positron emission tomography (PET) with magnetic resonance imaging (MRI) may enable novel research in the field of dementia. MR data is commonly used in the analysis of PET data for dementia due to its anatomical information and good soft tissue contrast. PET image reconstruction is currently performed independently of MRI data, and the images typically suffer from low resolution, poor signal-to-noise ratio and count-dependent bias, due to random error in the acquired data and the ill-conditioned reconstruction process. The aim of this research is to investigate the benefit of using anatomical information from MR data within PET image reconstruction, applied to dementia research. Methods: Real PET and MRI patient data from 5 FDG scans of healthy elderly volunteers were used to create realistic ground-truth images of the distribution of matter and activity for these individuals. These ground-truth images underwent a Monte Carlo simulation using SimSET in order to generate simulated raw data for the High Resolution Research Tomograph (HRRT) PET scanner. The simulations were validated by comparing the reconstructed images to real HRRT data, focusing on image resolution. A comparison of partial volume correction (PVC) of PET data applied within image reconstruction with the conventional approach of applying it post-reconstruction was conducted at typical count levels, in order to evaluate the hypothesis that applying PVC within image reconstruction would be beneficial. Results: Little improvement in the recovered activity values was seen when using Lucy-Richardson deconvolution, either post-reconstruction or within the image reconstruction. Similarly, the use of RM modelling showed little benefit. Differences were observed when using Rousset PVC, with larger differences observed when interleaved with reconstruction. Generally, the use of Rousset PVC within reconstruction resulted in a decrease in bias (average error) for large cortical regions, but an increase in bias was observed for small regions, and there were apparent region-specific and patient-specific variations in the observed bias. Conclusions: The benefit of applying PVC as a reconstruction-based method was shown to be minimal. A region-specific bias was observed for most of the reconstruction methods, whether applied within or post image reconstruction. Further work is needed to evaluate the benefit of applying PVC methods for high resolution scanners.
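
For reference, a generic 2D Richardson-Lucy (Lucy-Richardson) deconvolution, of the kind used for post-reconstruction partial volume correction, can be written as follows. This is a hedged sketch under assumed 2D array shapes, not the study's pipeline; real PET data would be 3D and use a measured point-spread function.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy_pvc(image, psf, n_iter=10):
    """Illustrative 2D Richardson-Lucy deconvolution.

    image : reconstructed PET image (2D array)
    psf   : scanner point-spread function (2D array)
    """
    psf_flip = psf[::-1, ::-1]                       # mirrored PSF
    estimate = np.full_like(image, image.mean())
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode='same')
        ratio = image / np.maximum(blurred, 1e-12)
        estimate *= fftconvolve(ratio, psf_flip, mode='same')
    return estimate
```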


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Marcelo V. W. Zibetti ◽  
Gabor T. Herman ◽  
Ravinder R. Regatte

Abstract. In this study, a fast data-driven optimization approach, named bias-accelerated subset selection (BASS), is proposed for learning efficacious sampling patterns (SPs) with the purpose of reducing scan time in large-dimensional parallel MRI. BASS is applicable when Cartesian fully-sampled k-space measurements of specific anatomy are available for training and the reconstruction method for undersampled measurements is specified; such information is used to define the efficacy of any SP for recovering the values at the non-sampled k-space points. BASS produces a sequence of SPs with the aim of finding one of a specified size with (near) optimal efficacy. BASS was tested with five reconstruction methods for parallel MRI based on low-rankness and sparsity that allow a free choice of the SP. Three datasets were used for testing, two of high-resolution brain images (T2-weighted and T1ρ-weighted images, respectively) and another of knee images for quantitative mapping of the cartilage. The proposed approach has low computational cost and fast convergence; in the tested cases it obtained SPs up to 50 times faster than the currently best greedy approach. Reconstruction quality increased by up to 45% over that provided by variable density and Poisson disk SPs, for the same scan time. Optionally, the scan time can be nearly halved without loss of reconstruction quality. Quantitative MRI and prospective accelerated MRI results show improvements. Compared with greedy approaches, BASS rapidly learns effective SPs for various reconstruction methods, using larger SPs and larger datasets, enabling better selection of sampling-reconstruction pairs for specific MRI problems.
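
The notion of SP efficacy described above can be illustrated by retrospectively undersampling fully-sampled training k-space with a candidate pattern and scoring the reconstruction error. The sketch below shows only this scoring step (not the BASS update rule itself), with assumed array shapes and a root-sum-of-squares reference.

```python
import numpy as np

def sp_efficacy(kspace_full, sp_mask, reconstruct):
    """Score a candidate sampling pattern (SP) on training data.

    kspace_full : (nc, ny, nx) fully-sampled multi-coil k-space
    sp_mask     : (ny, nx) boolean Cartesian sampling pattern
    reconstruct : callable mapping (undersampled k-space, mask) -> image
    Returns the normalized RMS error of the reconstruction (lower is
    better, i.e. a more efficacious SP).
    """
    # Root-sum-of-squares reference image from the full data
    coil_imgs = np.fft.ifft2(kspace_full, axes=(-2, -1))
    ref_img = np.sqrt((np.abs(coil_imgs) ** 2).sum(axis=0))
    # Retrospective undersampling with the candidate pattern
    ksp_u = kspace_full * sp_mask
    rec_img = reconstruct(ksp_u, sp_mask)
    return np.linalg.norm(rec_img - ref_img) / np.linalg.norm(ref_img)
```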

