attenuation map
Recently Published Documents

TOTAL DOCUMENTS: 103 (FIVE YEARS: 15)
H-INDEX: 14 (FIVE YEARS: 3)
2021, Vol. 21 (1), pp. e4
Author(s): Ramiro Germán Rodríguez Colmeiro, Claudio Verrastro, Daniel Minsky, Thomas Grosges

The correction of attenuation effects in Positron Emission Tomography (PET) imaging is fundamental to obtain a correct radiotracer distribution. However, direct measurement of this attenuation map is not error-free and normally results in additional ionizing radiation dose to the patient. Here, we explore the task of whole-body attenuation map generation using 3D deep neural networks. We analyze the advantages that adversarial network training can provide to such models. The networks are trained to learn the mapping from non-attenuation-corrected [18F]-fluorodeoxyglucose PET images to a synthetic Computerized Tomography (sCT) image and also to label the input voxel tissue. The sCT image is then further refined using an adversarial training scheme to recover higher-frequency details and lost structures using context information. This work is trained and tested on publicly available datasets, containing several PET images from different scanners with different radiotracer administration and reconstruction modalities. The network is trained with 108 samples and validated on 10 samples. The sCT generation was tested on 133 samples from 8 distinct datasets. The resulting mean absolute errors of the networks are 90±20 HU and 103±18 HU, with peak signal-to-noise ratios of 19.3±1.7 dB and 18.6±1.5 dB, for the base model and the adversarial model respectively. The attenuation correction is tested by means of attenuation sinograms, obtaining a line-of-response attenuation mean error lower than 1% with a standard deviation lower than 8%. The proposed deep learning topologies are capable of generating whole-body attenuation maps from uncorrected PET image data. Moreover, the accuracy of both methods holds in the presence of data from multiple sources and modalities, and both are trained on publicly available datasets. Finally, while the adversarial layer enhances the visual appearance of the produced samples, the 3D U-Net achieves higher metric performance.
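A minimal PyTorch sketch of the kind of model the abstract describes: a small 3D U-Net that maps a non-attenuation-corrected PET volume to a synthetic CT while also labeling voxel tissue, plus a patch discriminator and a single generator update for the adversarial refinement. Layer counts, channel widths, the number of tissue classes, and the loss weighting are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv3d(cin, cout, kernel_size=3, padding=1),
        nn.InstanceNorm3d(cout),
        nn.LeakyReLU(0.2, inplace=True),
    )

class UNet3D(nn.Module):
    """Two-level 3D U-Net with two heads: sCT regression and tissue labels."""
    def __init__(self, n_tissue_classes=4):
        super().__init__()
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.bott = conv_block(32, 64)
        self.up2 = nn.ConvTranspose3d(64, 32, kernel_size=2, stride=2)
        self.dec2 = conv_block(64, 32)
        self.up1 = nn.ConvTranspose3d(32, 16, kernel_size=2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.head_ct = nn.Conv3d(16, 1, kernel_size=1)                  # sCT in HU
        self.head_seg = nn.Conv3d(16, n_tissue_classes, kernel_size=1)  # tissue labels

    def forward(self, x):  # x: NAC PET volume, shape (N, 1, D, H, W), D/H/W multiples of 4
        e1 = self.enc1(x)
        e2 = self.enc2(F.max_pool3d(e1, 2))
        b = self.bott(F.max_pool3d(e2, 2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head_ct(d1), self.head_seg(d1)

class PatchDiscriminator3D(nn.Module):
    """Patch-wise real/fake logits, conditioned on the (PET, CT) pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(2, 16),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, pet, ct):
        return self.net(torch.cat([pet, ct], dim=1))

def generator_step(gen, disc, pet, ct_ref, seg_ref, opt_g, lam_adv=0.01):
    """One generator update: L1 to the reference CT, cross-entropy on the
    tissue labels, and an adversarial term from the patch discriminator."""
    opt_g.zero_grad()
    sct, seg_logits = gen(pet)
    fake_logits = disc(pet, sct)
    loss = (F.l1_loss(sct, ct_ref)
            + F.cross_entropy(seg_logits, seg_ref)
            + lam_adv * F.binary_cross_entropy_with_logits(
                  fake_logits, torch.ones_like(fake_logits)))
    loss.backward()
    opt_g.step()
    return loss.item()
```

In this reading, the L1 term keeps the sCT close to the reference CT, while the conditional adversarial term is what pushes the generator toward the higher-frequency detail the abstract attributes to the refinement stage.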


2020
Author(s): Ramiro Rodriguez Colmeiro, Claudio Verrastro, Daniel Minsky, Thomas Grosges

Abstract
Background: The correction of attenuation effects in Positron Emission Tomography (PET) imaging is fundamental to obtain a correct radiotracer distribution. However, direct measurement of this attenuation map is not error-free and normally results in additional ionizing radiation dose to the patient. Here, we propose to obtain the whole-body attenuation map using a 3D U-Net generative adversarial network. The network is trained to learn the mapping from non-attenuation-corrected 18F-fluorodeoxyglucose PET images to a synthetic Computerized Tomography (sCT) image and also to label the input voxel tissue. The sCT image is further refined using an adversarial training scheme to recover higher-frequency details and lost structures using context information. This work is trained and tested on publicly available datasets, containing several PET images from different scanners with different radiotracer administration and reconstruction modalities. The network is trained with 108 samples and validated on 10 samples.
Results: The sCT generation was tested on 133 samples from 8 distinct datasets. The resulting mean absolute error of the network is 103±18 HU, with a peak signal-to-noise ratio of 18.6±1.5 dB. The generated images show good correlation with the unknown structural information.
Conclusions: The proposed deep learning topology is capable of generating whole-body attenuation maps from uncorrected PET image data. Moreover, the method's accuracy holds in the presence of data from multiple sources and modalities, and it is trained on publicly available datasets.
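For reference, a short sketch of the two image metrics quoted in both abstracts: mean absolute error in Hounsfield units and peak signal-to-noise ratio in dB. The HU dynamic range used as the PSNR peak value is an assumption, since the abstracts do not state which range was used.

```python
import numpy as np

def mae_hu(sct, ct_ref):
    """Mean absolute error between synthetic and reference CT volumes, both in HU."""
    return float(np.mean(np.abs(sct - ct_ref)))

def psnr_db(sct, ct_ref, data_range=3000.0):
    """PSNR in dB; data_range is the assumed span of valid HU values."""
    mse = float(np.mean((sct - ct_ref) ** 2))
    return 20.0 * np.log10(data_range / np.sqrt(mse))

# Example on random volumes (placeholders for real sCT / reference CT arrays).
rng = np.random.default_rng(0)
ct_ref = rng.uniform(-1000.0, 2000.0, size=(64, 64, 64))
sct = ct_ref + rng.normal(0.0, 100.0, size=ct_ref.shape)
print(f"MAE = {mae_hu(sct, ct_ref):.1f} HU, PSNR = {psnr_db(sct, ct_ref):.1f} dB")
```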

