Attenuation correction in SPECT without attenuation map

Author(s): Krzysztof Kacperski
2014, Vol 1 (Suppl 1), pp. A77
Author(s): Mehdi Shandiz, Mohammad Arabi, Pardis Ghafarian, Mehrdad Karam, Hamidreza Rad, ...
2021, Vol 21 (1), pp. e4
Author(s): Ramiro Germán Rodríguez Colmeiro, Claudio Verrastro, Daniel Minsky, Thomas Grosges

The correction of attenuation effects in Positron Emission Tomography (PET) imaging is fundamental to obtaining a correct radiotracer distribution. However, direct measurement of this attenuation map is not error-free and normally results in additional ionizing radiation dose to the patient. Here, we explore the task of whole-body attenuation map generation using 3D deep neural networks. We analyze the advantages that adversarial network training can provide to such models. The networks are trained to learn the mapping from non-attenuation-corrected [18F]-fluorodeoxyglucose PET images to a synthetic Computed Tomography (sCT) image and also to label the tissue of each input voxel. The sCT image is then further refined using an adversarial training scheme to recover higher-frequency details and lost structures using context information. This work is trained and tested on publicly available datasets containing PET images from different scanners with different radiotracer administration and reconstruction modalities. The network is trained with 108 samples and validated on 10 samples. The sCT generation was tested on 133 samples from 8 distinct datasets. The resulting mean absolute errors are 90±20 HU and 103±18 HU, and the peak signal-to-noise ratios are 19.3±1.7 dB and 18.6±1.5 dB, for the base model and the adversarial model respectively. The attenuation correction is tested by means of attenuation sinograms, obtaining a line-of-response attenuation mean error lower than 1% with a standard deviation lower than 8%. The proposed deep learning topologies are capable of generating whole-body attenuation maps from uncorrected PET image data. Moreover, the accuracy of both methods holds in the presence of data from multiple sources and modalities, and both are trained on publicly available datasets. Finally, while the adversarial layer enhances the visual appearance of the produced samples, the 3D U-Net achieves higher metric performance.
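
The pipeline described in the abstract, a 3D U-Net-style generator that produces an sCT image and a tissue-label map from a non-attenuation-corrected PET volume, refined with an adversarial discriminator, could be sketched roughly as follows. This is a minimal illustrative PyTorch sketch, not the authors' implementation: the layer widths, loss weights, class names (UNet3D, PatchDiscriminator3D) and the toy training step are all assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    # two 3x3x3 convolutions with instance norm and ReLU
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.InstanceNorm3d(out_ch), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, 3, padding=1), nn.InstanceNorm3d(out_ch), nn.ReLU(inplace=True),
    )

class UNet3D(nn.Module):
    """Maps a NAC PET volume to a synthetic CT (HU) and a voxel tissue-label map."""
    def __init__(self, n_tissue_classes=4):
        super().__init__()
        self.enc1, self.enc2 = conv_block(1, 16), conv_block(16, 32)
        self.bottleneck = conv_block(32, 64)
        self.up2, self.dec2 = nn.ConvTranspose3d(64, 32, 2, stride=2), conv_block(64, 32)
        self.up1, self.dec1 = nn.ConvTranspose3d(32, 16, 2, stride=2), conv_block(32, 16)
        self.sct_head = nn.Conv3d(16, 1, 1)                   # synthetic CT output
        self.label_head = nn.Conv3d(16, n_tissue_classes, 1)  # tissue label logits

    def forward(self, pet):
        e1 = self.enc1(pet)
        e2 = self.enc2(F.max_pool3d(e1, 2))
        b = self.bottleneck(F.max_pool3d(e2, 2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.sct_head(d1), self.label_head(d1)

class PatchDiscriminator3D(nn.Module):
    """Judges (PET, CT) patch pairs as real or synthetic for the adversarial refinement."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(32, 1, 4, padding=1),  # per-patch real/fake logits
        )

    def forward(self, pet, ct):
        return self.net(torch.cat([pet, ct], dim=1))

# One toy adversarial training step on random tensors (weights and data are placeholders).
gen, disc = UNet3D(), PatchDiscriminator3D()
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

pet = torch.randn(1, 1, 64, 64, 64)            # NAC PET patch
ct = torch.randn(1, 1, 64, 64, 64)             # reference CT patch (HU)
labels = torch.randint(0, 4, (1, 64, 64, 64))  # tissue labels

sct, logits = gen(pet)
d_fake = disc(pet, sct)
loss_g = (F.l1_loss(sct, ct) + F.cross_entropy(logits, labels)
          + 0.01 * bce(d_fake, torch.ones_like(d_fake)))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()

d_real, d_fake = disc(pet, ct), disc(pet, sct.detach())
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()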
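
The reported evaluation, mean absolute error in HU, peak signal-to-noise ratio, and a line-of-response attenuation check derived from attenuation sinograms, could be approximated with helpers like the ones below. The bilinear HU-to-mu conversion constants at 511 keV and the single-axis projection used in place of a full scanner-geometry sinogram are illustrative assumptions, not the paper's exact procedure.

import numpy as np

def mae_hu(sct, ct):
    # mean absolute error between synthetic and reference CT, in Hounsfield units
    return np.mean(np.abs(sct - ct))

def psnr_db(sct, ct, data_range=None):
    # peak signal-to-noise ratio in dB, using the reference CT's dynamic range
    if data_range is None:
        data_range = ct.max() - ct.min()
    rmse = np.sqrt(np.mean((sct - ct) ** 2))
    return 20.0 * np.log10(data_range / rmse)

def hu_to_mu_511kev(hu):
    # piecewise-linear HU -> linear attenuation coefficient (cm^-1) at 511 keV
    # (approximate water/bone constants; the exact conversion is an assumption here)
    mu_water, mu_bone = 0.096, 0.172
    mu = np.where(hu <= 0,
                  mu_water * (1.0 + hu / 1000.0),
                  mu_water + hu * (mu_bone - mu_water) / 1000.0)
    return np.clip(mu, 0.0, None)

def lor_attenuation_error(mu_sct, mu_ct, voxel_cm, axis=0):
    # relative error of LOR attenuation factors exp(-sum(mu * dl)) along one axis,
    # a crude stand-in for comparing full attenuation sinograms
    a_sct = np.exp(-mu_sct.sum(axis=axis) * voxel_cm)
    a_ct = np.exp(-mu_ct.sum(axis=axis) * voxel_cm)
    rel = (a_sct - a_ct) / np.clip(a_ct, 1e-6, None)
    return rel.mean(), rel.std()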


2011, Vol 52 (7), pp. 1142-1149
Author(s): I. B. Malone, R. E. Ansorge, G. B. Williams, P. J. Nestor, T. A. Carpenter, ...
