Researches on the Key Technologies for Tomographic Gamma Scanning

Author(s):  
Quanhu Zhang ◽  
Weihua Hui ◽  
Feng Li ◽  
Zhongmao Gu ◽  
Shaojun Qian

The tomographic gamma scanning (TGS) method is one of the most advanced non-destructive assay (NDA) techniques. For heterogeneous media of medium and high density, however, three main problems remain: experimental calibration of the TGS detection efficiency is difficult and laborious because of the large voxels; neither the point-to-point model nor the average model can reconstruct high-density samples accurately in the transmission image; and the computational cost of the correction factor in the emission image is very large. Calibrating the detection efficiency with the Monte Carlo (MC) method greatly shortens the calibration cycle. A new Monte Carlo statistical-iteration method for TGS transmission image reconstruction, based on MC calculation and numerical analysis, is presented, making the measurement of high-density samples feasible. A division method and a pre-calculation method are used in reconstructing the TGS emission image, saving a great deal of computation time and providing a fast emission-image reconstruction algorithm. Applying these methods to a TGS experimental device, the relative errors between experimental and MC calibration were less than 5%; the relative errors between reconstructed and reference values in the transmission image were less than 4%; and the relative deviation between the corrected experimental results and the standard values was 7%. Reconstructing the TGS emission image of a sample model with 3×3×3 voxels took no more than one hour on a 2.0 GHz computer.
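
The abstract does not spell out the transmission model, but the relation underlying TGS transmission reconstruction is the Beer-Lambert law along each ray, which reduces recovery of voxel attenuation coefficients to a linear system in log space. A minimal numerical sketch (the path-length matrix and coefficients below are illustrative stand-ins, not a real TGS geometry):

```python
import numpy as np

# Beer-Lambert transmission through voxels: I = I0 * exp(-A @ mu),
# where A[i, j] is the path length of ray i through voxel j.
# Recovering mu from measured intensities is then a linear system
# in log space: A @ mu = ln(I0 / I).

rng = np.random.default_rng(0)
n_rays, n_voxels = 12, 4
A = rng.uniform(0.5, 2.0, size=(n_rays, n_voxels))  # path lengths (cm)
mu_true = np.array([0.10, 0.25, 0.05, 0.40])        # attenuation (1/cm)

I0 = 1e6
I = I0 * np.exp(-A @ mu_true)                       # noiseless measurements

# Least-squares estimate of the voxel attenuation coefficients.
mu_est, *_ = np.linalg.lstsq(A, np.log(I0 / I), rcond=None)
print(np.allclose(mu_est, mu_true, atol=1e-6))      # True
```

The paper's statistical-iteration scheme replaces this idealized linear solve with MC-simulated ray responses, but the log-space relation above is the quantity being iterated toward.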

2014 ◽  
Vol 12 (3) ◽  
pp. 031702-31706 ◽  
Author(s):  
Mengyu Jia ◽  
Shanshan Cui ◽  
Xueying Chen ◽  
Ming Liu ◽  
Xiaoqing Zhou ◽  
...  

2009 ◽  
Vol 66 (10) ◽  
pp. 3131-3146 ◽  
Author(s):  
Robert Pincus ◽  
K. Franklin Evans

Abstract This paper examines the tradeoffs between computational cost and accuracy for two new state-of-the-art codes for computing three-dimensional radiative transfer: a community Monte Carlo model and a parallel implementation of the Spherical Harmonics Discrete Ordinate Method (SHDOM). Both codes are described and algorithmic choices are elaborated. Two prototype problems are considered: a domain filled with stratocumulus clouds and another containing scattered shallow cumulus, absorbing aerosols, and molecular scatterers. Calculations are performed for a range of resolutions and the relationships between accuracy and computational cost, measured by memory use and time to solution, are compared. Monte Carlo accuracy depends primarily on the number of trajectories used in the integration. Monte Carlo estimates of intensity are computationally expensive and may be subject to large sampling noise from highly peaked phase functions. This noise can be decreased using a range of variance reduction techniques, but these techniques can compromise the excellent agreement between the true error and estimates obtained from unbiased calculations. SHDOM accuracy is controlled by both spatial and angular resolution; different output fields are sensitive to different aspects of this resolution, so the optimum accuracy parameters depend on which quantities are desired as well as on the characteristics of the problem being solved. The accuracy of SHDOM must be assessed through convergence tests and all results from unconverged solutions may be biased. SHDOM is more efficient (i.e., has lower error for a given computational cost) than Monte Carlo when computing pixel-by-pixel upwelling fluxes in the cumulus scene, whereas Monte Carlo is more efficient in computing flux divergence and downwelling flux in the stratocumulus scene, especially at higher accuracies. The two models are comparable for downwelling flux and flux divergence in cumulus and upwelling flux in stratocumulus. 
SHDOM is substantially more efficient when computing pixel-by-pixel intensity in multiple directions; the models are comparable when computing domain-average intensities. In some cases memory use, rather than computation time, may limit the resolution of SHDOM calculations.
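
The 1/√N behavior of Monte Carlo sampling error noted above, and the noise introduced by highly peaked phase functions, can be illustrated with a toy estimator: sampling a Henyey-Greenstein phase function (a standard forward-peaked scattering model, assumed here purely for illustration) whose mean cosine is exactly the asymmetry parameter g:

```python
import numpy as np

# Monte Carlo accuracy "depends primarily on the number of trajectories":
# the standard error of an MC estimate scales as 1/sqrt(N). Toy check
# with a known answer: the mean of cos(theta) under a Henyey-Greenstein
# phase function equals the asymmetry parameter g.

def hg_sample(g, n, rng):
    """Inverse-CDF sampling of cos(theta) from Henyey-Greenstein."""
    u = rng.random(n)
    return (1 + g**2 - ((1 - g**2) / (1 - g + 2 * g * u)) ** 2) / (2 * g)

rng = np.random.default_rng(1)
g = 0.85  # strongly forward-peaked, as in cloud droplets

est_small = hg_sample(g, 1_000, rng).mean()
est_large = hg_sample(g, 100_000, rng).mean()

# Generous tolerances (several standard errors) so both checks pass.
print(abs(est_small - g) < 0.05, abs(est_large - g) < 0.01)
```

With 100× more samples the standard error shrinks by about 10×; intensity estimates, which weight individual trajectories by the peaked phase function, suffer correspondingly larger variance, which is what the variance-reduction techniques discussed above target.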


2007 ◽  
Vol 07 (01) ◽  
pp. 87-104 ◽  
Author(s):  
YI FAN ◽  
HONGBING LU ◽  
CHONGYANG HAO ◽  
ZHENGRONG LIANG ◽  
ZHIMING ZHOU

Conventionally, the inverse problem of gated cardiac SPECT is solved by reconstructing the images frame by frame, ignoring the inter-frame correlation along the time dimension. To compensate for non-uniform attenuation in quantitative cardiac imaging, iterative image reconstruction has been the method of choice, since it can utilize an a priori constraint on the inter-frame correlation for a penalized maximum-likelihood (ML) solution. However, iterative image reconstruction in the 4D space involves intensive computation. In this paper, an efficient method for 4D gated SPECT reconstruction is developed based on the Karhunen-Loève (KL) transform and Novikov's inverse formula. The temporal KL transform is first applied to the data sequence to de-correlate the inter-frame correlation, and the 3D principal components in the KL domain are then reconstructed frame by frame using Novikov's inverse formula with non-uniform attenuation compensation. Finally, an inverse KL transform is performed to obtain quantitatively reconstructed 4D images in the original space. With the proposed method, 4D reconstruction can be achieved at a reasonable computational cost. The results from computer simulations are very encouraging compared to conventional frame-by-frame filtered back-projection and iterative ordered-subsets ML reconstructions. By discarding high-order KL components for further noise reduction, the computation time can be reduced further.
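
The temporal KL step described above can be sketched in a few lines: eigendecompose the T×T inter-frame covariance, transform the gated sequence into uncorrelated components, process them independently, and invert. Random data stand in for the gated projections here, and Novikov's inverse formula is not reproduced:

```python
import numpy as np

# Temporal Karhunen-Loeve de-correlation (minimal sketch): gated frames
# are stacked as rows, the T x T temporal covariance is eigendecomposed,
# and its eigenvectors map the sequence into uncorrelated principal
# components that can then be reconstructed frame by frame.

rng = np.random.default_rng(2)
T, npix = 8, 64                      # gates, pixels per flattened frame
frames = rng.random((T, npix))       # stand-in for gated projection data

X = frames - frames.mean(axis=0)     # remove the temporal mean
C = X @ X.T / npix                   # T x T inter-frame covariance
evals, V = np.linalg.eigh(C)         # KL basis (columns of V)

kl = V.T @ X                         # forward KL transform
back = V @ kl + frames.mean(axis=0)  # inverse KL transform
print(np.allclose(back, frames))     # True: the transform is lossless

# Dropping the low-eigenvalue (noisy) components of `kl` before the
# inverse transform is the noise-reduction step mentioned above.
```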


Author(s):  
Candida Mwisomba ◽  
Abdi T. Abdalla ◽  
Idrissa Amour ◽  
Florian Mkemwa ◽  
Baraka Maiseli

Abstract Compressed sensing allows recovery of image signals using only a portion of the data, a technique that has drastically revolutionized the field of through-the-wall radar imaging (TWRI). It can be accomplished through nonlinear methods, including convex programming and greedy iterative algorithms. However, such nonlinear methods increase the computational cost at the sensing and reconstruction stages, limiting the application of TWRI in delicate practical tasks (e.g. military operations and rescue missions) that demand fast response times. Motivated by this limitation, the current work introduces a numerical optimization algorithm, Limited-Memory Broyden–Fletcher–Goldfarb–Shanno (LBFGS), into the TWRI framework to lower image reconstruction time. LBFGS, a well-known quasi-Newton algorithm, has traditionally been applied to solve large-scale optimization problems. Despite its potential, the algorithm has not been extensively applied in TWRI. Therefore, guided by LBFGS and using the Euclidean norm, we employed the regularized least-squares method to solve the cost function of the TWRI problem. Simulation results show that our method reduces the computational time by 87% relative to the classical method, even with an increased number of targets or a large data volume. Moreover, the results show that the proposed method remains robust in noisy environments.
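
A cost function of the kind described above, a Euclidean-norm regularized least-squares objective, can be minimized with an off-the-shelf L-BFGS implementation. A minimal sketch with a random stand-in for the TWRI measurement operator (the matrix, targets, and regularization weight below are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import minimize

# Regularized least squares, f(x) = ||A x - y||^2 + lam * ||x||^2,
# minimized with L-BFGS. A, y, and lam are toy stand-ins, not a real
# through-the-wall radar dictionary.

rng = np.random.default_rng(3)
m, n = 40, 20
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[3, 7, 11]] = 1.0                 # sparse scene: three "targets"
y = A @ x_true                           # noiseless measurements
lam = 1e-3

def cost(x):
    r = A @ x - y
    return r @ r + lam * (x @ x)

def grad(x):
    # Analytic gradient keeps L-BFGS fast (no finite differencing).
    return 2 * A.T @ (A @ x - y) + 2 * lam * x

res = minimize(cost, np.zeros(n), jac=grad, method="L-BFGS-B")
print(res.success, np.linalg.norm(res.x - x_true) < 0.1)
```

The limited-memory update is what keeps the per-iteration cost low on large scenes: only a handful of recent gradient pairs approximate the Hessian, rather than the full n×n matrix a classical Newton step would need.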


Author(s):  
Hajime Nobuhara ◽  
Yasufumi Takama ◽  
Kaoru Hirota

A fast iterative method for solving various types of fuzzy relational equations is proposed. The method is derived by eliminating a redundant comparison process in the conventional iterative solving method (Pedrycz, 1983). The proposed method is applied to image reconstruction, and it is confirmed that the computation time decreases to 1/39 to 1/45 of that of the conventional method at a compression rate of 0.0625. Furthermore, in order to make any initial solution converge to a reconstructed image of good quality, a new cost function is proposed. At a compression rate of 0.0625, it is confirmed that the root-mean-square error of the proposed method decreases to 24.00% and 86.03% of those of the conventional iterative method and a non-iterative image reconstruction method (Nobuhara, 2001), respectively.
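
The iterative scheme itself (Pedrycz, 1983) is not reproduced in the abstract; as a minimal sketch of the setting, the max-min fuzzy relational equation R∘x = b has a classical greatest solution given by the Gödel implication, which can be verified directly (the numbers below are an illustrative example, not data from the paper):

```python
import numpy as np

# Max-min fuzzy relational equation R o x = b. When a solution exists,
# the greatest one is x_j = min_i (R_ij -> b_i), with the Goedel
# implication a -> b equal to 1 if a <= b and to b otherwise.

def maxmin(R, x):
    """Max-min composition (R o x)_i = max_j min(R_ij, x_j)."""
    return np.max(np.minimum(R, x[None, :]), axis=1)

def greatest_solution(R, b):
    impl = np.where(R <= b[:, None], 1.0, b[:, None])  # Goedel implication
    return impl.min(axis=0)

R = np.array([[0.9, 0.4],
              [0.3, 0.8]])
b = np.array([0.6, 0.8])
x_hat = greatest_solution(R, b)
print(x_hat, np.allclose(maxmin(R, x_hat), b))  # [0.6 1. ] True
```

In the image-reconstruction application, b plays the role of the compressed image and x the reconstructed one; the paper's contribution is making the iterative search for such solutions faster.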


2005 ◽  
Author(s):  
Manuchehr Soleimani

Electrical resistance tomography (ERT) has great potential for multi-phase flow monitoring. Image reconstruction in ERT is computationally costly, however, which makes online monitoring difficult. Linear reconstruction methods are currently used for speed, but image reconstruction is a nonlinear inverse problem and linear methods are not sufficient in many cases. The application of a recently proposed non-iterative inversion method for two-phase materials has been studied. The method is based on the monotonicity property of the resistance matrix in ERT and requires modest computational cost. In this paper we explain the application of this inversion method and demonstrate its capabilities and drawbacks using 2D test examples. A major contribution of this paper is optimizing the inversion software (by performing most of the computations offline) so that it can be used for online applications.
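
The "linear reconstruction methods" mentioned above amount to a single regularized, linearized inversion step: given a sensitivity (Jacobian) matrix J mapping conductivity changes to boundary-voltage changes, solve (JᵀJ + αI)δσ = Jᵀδv. A sketch with a random stand-in for the ERT Jacobian (not a real forward model):

```python
import numpy as np

# One-shot linearized ERT reconstruction: Tikhonov-regularized
# normal equations applied to a voltage-change vector dv. J is a
# random stand-in for the sensitivity matrix of a real ERT system.

rng = np.random.default_rng(4)
n_meas, n_pix = 50, 30
J = rng.standard_normal((n_meas, n_pix))

dsigma_true = np.zeros(n_pix)
dsigma_true[5] = 1.0                   # a single conductive inclusion
dv = J @ dsigma_true                   # noiseless boundary data

alpha = 1e-6                           # regularization weight
dsigma = np.linalg.solve(J.T @ J + alpha * np.eye(n_pix), J.T @ dv)
print(int(np.argmax(np.abs(dsigma))) == 5)   # inclusion located: True
```

Because J, its factorization, and α can all be prepared in advance, only the final solve depends on the measured dv, which is the same offline/online split the paper exploits for its monotonicity-based method.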


Author(s):  
Wendy K. Caldwell ◽  
Abigail Hunter ◽  
Catherine S. Plesko ◽  
Stephen Wirkus

Verification and validation (V&V) are necessary processes to ensure accuracy of the computational methods used to solve problems key to vast numbers of applications and industries. Simulations are essential for addressing impact cratering problems, because these problems often exceed experimental capabilities. Here, we show that the free Lagrange (FLAG) hydrocode, developed at Los Alamos National Laboratory (Los Alamos, NM), can be used for impact cratering simulations by verifying FLAG against two analytical models of aluminum-on-aluminum impacts at different impact velocities and validating FLAG against a glass-into-water laboratory impact experiment. Our verification results show good agreement with the theoretical maximum pressures, with relative errors as low in magnitude as 1.00%. Our validation results demonstrate FLAG's ability to model various stages of impact cratering, with crater radius relative errors as low as 3.48% and crater depth relative errors as low as 0.79%. Our mesh resolution study shows that FLAG converges at resolutions low enough to reduce the required computation time from about 28 h to about 25 min. We anticipate that FLAG can be used to model larger impact cratering problems with increased accuracy and decreased computational cost on current systems relative to other hydrocodes tested by Pierazzo et al. (2008, “Validation of Numerical Codes for Impact and Explosion Cratering: Impacts on Strengthless and Metal Targets,” MAPS, 43(12), pp. 1917–1938).

