Deep learning based classification of dynamic processes in time-resolved X-ray tomographic microscopy

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Minna Bührer ◽  
Hong Xu ◽  
Allard A. Hendriksen ◽  
Felix N. Büchi ◽  
Jens Eller ◽  
...  

Abstract
Time-resolved X-ray tomographic microscopy is an invaluable technique for investigating dynamic processes in 3D over extended time periods. Because of the limited signal-to-noise ratio caused by short exposure times and sparse angular sampling, obtaining quantitative information through post-processing remains challenging and requires intensive manual labor. This severely limits the accessible experimental parameter space and thus prevents fully exploiting the capabilities of dedicated time-resolved X-ray tomographic stations. Although automatic approaches, often exploiting iterative reconstruction methods, are currently being developed, the required computational costs typically remain high. Here, we propose a highly efficient reconstruction and classification pipeline (SIRT-FBP-MS-D-DIFF) that combines an algebraic filter approximation with machine learning to significantly reduce the computational time. The dynamic features are reconstructed by standard filtered back-projection with an algebraic filter that approximates iterative reconstruction quality in a computationally efficient manner. The raw reconstructions are then post-processed with a trained convolutional neural network that extracts the dynamic features from the low signal-to-noise-ratio reconstructions fully automatically. The capabilities of the proposed pipeline are demonstrated on three different dynamic fuel cell datasets, one used for training and two for testing without network retraining. The proposed approach enables automatic processing of several hundred datasets in a single day on a single GPU node readily available at most institutions, thus extending the possibilities of future dynamic X-ray tomographic investigations.
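The core of such a pipeline is a standard filtered back-projection in which the ramp filter is replaced by a precomputed algebraic filter. A minimal sketch of the per-projection filtering step in the Fourier domain, using numpy and the standard ramp filter as a stand-in for the algebraic one (the function name and interface are illustrative, not the authors' code):

```python
import numpy as np

def filter_projection(projection, custom_filter=None):
    """Filter a 1-D projection in the Fourier domain.

    SIRT-FBP replaces the standard ramp filter below with a
    precomputed algebraic filter that mimics SIRT quality; the
    ramp filter here is only a stand-in for illustration.
    """
    n = projection.shape[-1]
    filt = np.abs(np.fft.rfftfreq(n)) if custom_filter is None else custom_filter
    return np.fft.irfft(np.fft.rfft(projection) * filt, n=n)

# A constant projection contains only a zero-frequency component,
# which the ramp filter suppresses entirely.
filtered = filter_projection(np.ones(64))
```

Swapping `custom_filter` for a filter fitted to a few iterative reconstructions is what lets plain FBP approach iterative quality at FBP cost.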

2020 ◽  
Vol 27 (5) ◽  
pp. 1326-1338
Author(s):  
Federica Marone ◽  
Jakob Vogel ◽  
Marco Stampanoni

Modern detectors used at synchrotron tomographic microscopy beamlines typically have sensors with more than 4–5 megapixels and are capable of acquiring 100–1000 frames per second at full frame. As a consequence, a data rate of a few TB per day can easily be exceeded, reaching peaks of a few tens of TB per day for time-resolved tomographic experiments. These data need to be post-processed, analysed, stored and possibly transferred, imposing a significant burden on the IT infrastructure. Compression of tomographic data, as routinely done for diffraction experiments, is therefore highly desirable. This study considers a set of representative datasets and investigates the effect of lossy compression of the original X-ray projections on the final tomographic reconstructions. It demonstrates that a compression factor of at least three to four does not generally impact the reconstruction quality. Compression with this factor could therefore be applied transparently to the user community, for instance prior to data archiving. Higher factors (six to eight) can be achieved for tomographic volumes with a high signal-to-noise ratio, as is the case for phase-retrieved datasets. Although a relationship between the dataset signal-to-noise ratio and a safe compression factor exists, it is not a simple one: even when additional dataset characteristics such as image entropy and high-frequency content variation are considered, automatically optimizing the compression factor for each individual dataset beyond the conservative factor of three to four is not straightforward.
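To make the trade-off concrete, the sketch below degrades a synthetic projection frame with uniform quantization, a deliberately simple stand-in for an actual beamline codec, and quantifies the damage as a peak signal-to-noise ratio; all names and numbers are illustrative, not from the study:

```python
import numpy as np

def quantize(data, levels):
    """Uniform lossy quantization to a given number of grey levels."""
    lo, hi = data.min(), data.max()
    step = (hi - lo) / (levels - 1)
    return np.round((data - lo) / step) * step + lo

def psnr(original, degraded):
    """Peak signal-to-noise ratio in dB, with peak = dynamic range."""
    mse = np.mean((original - degraded) ** 2)
    peak = original.max() - original.min()
    return np.inf if mse == 0.0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
projection = rng.normal(1000.0, 50.0, size=(128, 128))  # synthetic frame
degraded = quantize(projection, 256)  # ~8-bit requantization
```

As the study stresses, a safe factor must ultimately be validated on the reconstructed volume, not on per-projection metrics alone.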


2021 ◽  
pp. 197140092110087
Author(s):  
Andrea De Vito ◽  
Cesare Maino ◽  
Sophie Lombardi ◽  
Maria Ragusi ◽  
Cammillo Talei Franzesi ◽  
...  

Background and purpose To evaluate the added value of a model-based iterative reconstruction algorithm in the assessment of acute traumatic brain lesions on emergency non-enhanced computed tomography, in comparison with a standard hybrid iterative reconstruction approach. Materials and methods We retrospectively evaluated a total of 350 patients who underwent 256-row non-enhanced computed tomography at the emergency department for brain trauma. Images were reconstructed with both the hybrid and the model-based iterative algorithm. Two radiologists, blinded to clinical data, recorded the presence, nature, number, and location of acute findings. Subjective image quality was rated on a 4-point scale. Objective image quality was determined by computing the signal-to-noise ratio and contrast-to-noise ratio. The agreement between the two readers was evaluated using k-statistics. Results Model-based iterative reconstruction yielded a higher detection rate of acute trauma-related lesions than hybrid iterative reconstruction (extradural haematomas 116 vs. 68, subdural haemorrhages 162 vs. 98, subarachnoid haemorrhages 118 vs. 78, parenchymal haemorrhages 94 vs. 64, contusive lesions 36 vs. 28, diffuse axonal injuries 75 vs. 31; all P<0.001). Inter-observer agreement was moderate to excellent for all injury types (extradural haematomas k=0.79, subdural haemorrhages k=0.82, subarachnoid haemorrhages k=0.91, parenchymal haemorrhages k=0.98, contusive lesions k=0.88, diffuse axonal injuries k=0.70). Quantitatively, the mean standard deviation of the thalamus on model-based iterative reconstruction images was lower than on hybrid iterative ones (2.12 ± 0.92 vs. 3.52 ± 1.10; P=0.030), while the contrast-to-noise ratio and signal-to-noise ratio were significantly higher (contrast-to-noise ratio 3.06 ± 0.55 vs. 1.55 ± 0.68, signal-to-noise ratio 14.51 ± 1.78 vs. 8.62 ± 1.88; P<0.0001).
Median subjective image quality scores for model-based iterative reconstruction were also significantly higher (P=0.003). Conclusion Model-based iterative reconstruction, offering higher image quality at a thinner slice, allowed the identification of a higher number of acute traumatic lesions than hybrid iterative reconstruction, with a significant reduction of noise.
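The objective metrics reported above follow the usual region-of-interest definitions; a minimal sketch on synthetic attenuation values (the ROIs and numbers below are made up for illustration, not the study's data):

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of a homogeneous region of interest."""
    return roi.mean() / roi.std()

def cnr(lesion_roi, background_roi):
    """Contrast-to-noise ratio between a lesion and its background."""
    return abs(lesion_roi.mean() - background_roi.mean()) / background_roi.std()

# Synthetic HU-like values: a low-noise (MBIR-like) thalamus ROI
# and a hyperdense lesion ROI with the same noise level.
rng = np.random.default_rng(1)
thalamus = rng.normal(30.0, 2.0, size=1000)
lesion = rng.normal(60.0, 2.0, size=1000)
```

Lower ROI standard deviation under model-based reconstruction raises both metrics, which is what the reported 2.12 vs. 3.52 thalamic noise figures translate into.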


2022 ◽  
Vol 93 (1) ◽  
pp. 015006
Author(s):  
Xiaolong Zhao ◽  
Ming Ye ◽  
Zhi Cao ◽  
Danyang Huang ◽  
Tingting Fan ◽  
...  

2011 ◽  
Vol 110 (10) ◽  
pp. 109902 ◽  
Author(s):  
Michael Chabior ◽  
Tilman Donath ◽  
Christian David ◽  
Manfred Schuster ◽  
Christian Schroer ◽  
...  

2004 ◽  
Vol 78 (6) ◽  
pp. 915-919 ◽  
Author(s):  
N. Kalivas ◽  
L. Costaridou ◽  
I. Kandarakis ◽  
D. Cavouras ◽  
C.D. Nomicos ◽  
...  

Author(s):  
Timur Gureyev ◽  
David M. Paganin ◽  
Alex Kozlov ◽  
Harry Quiney

2005 ◽  
Vol 77 (20) ◽  
pp. 6563-6570 ◽  
Author(s):  
Zeng Ping Chen ◽  
Julian Morris ◽  
Elaine Martin ◽  
Robert B. Hammond ◽  
Xiaojun Lai ◽  
...  

2016 ◽  
Vol 22 (3) ◽  
pp. 536-543 ◽  
Author(s):  
Jong Seok Jeong ◽  
K. Andre Mkhoyan

Abstract
Acquiring an atomic-resolution compositional map of crystalline specimens has become routine practice, thus opening possibilities for extracting subatomic information from such maps. A key challenge for achieving subatomic precision is the improvement of signal-to-noise ratio (SNR) of compositional maps. Here, we report a simple and reliable solution for achieving high-SNR energy-dispersive X-ray (EDX) spectroscopy spectrum images for individual atomic columns. The method is based on standard cross-correlation aided by averaging of single-column EDX maps with modifications in the reference image. It produces EDX maps with minimal specimen drift, beam drift, and scan distortions. Step-by-step procedures to determine a self-consistent reference map with a discussion on the reliability, stability, and limitations of the method are presented here.
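The basic align-then-average idea can be sketched with an FFT-based cross-correlation on synthetic maps; this toy version handles integer-pixel shifts with periodic boundaries only, whereas the published method also refines the reference map and corrects drift and scan distortions:

```python
import numpy as np

def align_and_average(maps, reference):
    """Register each noisy map to the reference by FFT cross-correlation
    (integer-pixel shifts, periodic boundaries) and average the results."""
    ref_ft = np.conj(np.fft.fft2(reference))
    aligned = []
    for m in maps:
        corr = np.fft.ifft2(np.fft.fft2(m) * ref_ft).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        aligned.append(np.roll(m, (-dy, -dx), axis=(0, 1)))
    return np.mean(aligned, axis=0)

# Synthetic test: one bright atomic column, randomly shifted noisy copies.
rng = np.random.default_rng(2)
ref = np.zeros((32, 32))
ref[16, 16] = 1.0
maps = [np.roll(ref, rng.integers(-3, 4, size=2), axis=(0, 1))
        + rng.normal(0.0, 0.05, ref.shape) for _ in range(10)]
avg = align_and_average(maps, ref)
```

Averaging N aligned maps improves the SNR by roughly √N, which is what makes per-column EDX counts usable for sub-atomic analysis.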

