PET Image Reconstruction Using a Cascading Back-Projection Neural Network

2020 ◽ Vol 14 (6) ◽ pp. 1100-1111 ◽ Author(s): Qiyang Zhang, Juan Gao, Yongshuai Ge, Na Zhang, Yongfeng Yang, ...
2019 ◽ Vol 38 (3) ◽ pp. 675-685 ◽ Author(s): Kuang Gong, Jiahui Guan, Kyungsang Kim, Xuezhu Zhang, Jaewon Yang, ...

Author(s): Kuang Gong, Dufan Wu, Quanzheng Li, Kyungsang Kim, Jaewon Yang, ...

2015 ◽ Vol 8 (3) ◽ pp. 161 ◽ Author(s): Samuel Gideon

This research was conducted as a learning alternative for the study of CT (computed tomography) imaging using image reconstruction techniques, namely matrix inversion, back projection, and filtered back projection. CT imaging can produce images of objects that do not overlap, so objects are more easily distinguishable even when the contrast is relatively low. The image produced in CT imaging is a reconstruction of the original object. Matlab allows us to create and write imaging algorithms easily, is easy to understand, and offers other applied and engaging imaging features. In this study, an example cross-sectional image reconstruction was performed on prostate tumors. With these methods, medical practitioners (such as oncology clinicians, radiographers, and medical physicists) can simulate the reconstruction of CT images in a way that closely resembles actual CT visualization techniques.
Keywords: computed tomography (CT), image reconstruction, Matlab
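As an illustration of the back-projection and filtered back-projection steps this abstract names, the minimal sketch below reconstructs the Shepp-Logan phantom from its sinogram using scikit-image in Python. It is a hedged stand-in for the paper's Matlab workflow, not the authors' code; the number of projection angles and the ramp filter are illustrative assumptions.

```python
# Minimal sketch (not the paper's Matlab code): simple vs. filtered back projection
# using scikit-image. Projection angles and filter choice are illustrative assumptions.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

# Ground-truth object: the standard Shepp-Logan head phantom, downsampled for speed.
image = resize(shepp_logan_phantom(), (256, 256), anti_aliasing=True)

# Forward projection: simulate the CT acquisition as a sinogram over 180 angles.
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(image, theta=theta)

# Simple (unfiltered) back projection: each projection is smeared back across the
# image plane, which yields a blurry reconstruction.
bp = iradon(sinogram, theta=theta, filter_name=None)

# Filtered back projection: a ramp filter applied to each projection before
# back-projecting restores edges and recovers the object much more faithfully.
fbp = iradon(sinogram, theta=theta, filter_name="ramp")

# Compare the reconstruction error of the two approaches.
print("unfiltered BP RMSE:", np.sqrt(np.mean((bp - image) ** 2)))
print("filtered  BP RMSE:", np.sqrt(np.mean((fbp - image) ** 2)))
```

The filtered reconstruction should show a markedly lower error, which is the practical point the abstract makes about filtered back projection versus plain back projection.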


2021 ◽  
Vol 10 (1) ◽  
Author(s):  
Luzhe Huang ◽  
Hanlong Chen ◽  
Yilin Luo ◽  
Yair Rivenson ◽  
Aydogan Ozcan

Abstract
Volumetric imaging of samples using fluorescence microscopy plays an important role in various fields including the physical, medical and life sciences. Here we report a deep learning-based volumetric image inference framework that uses 2D images sparsely captured by a standard wide-field fluorescence microscope at arbitrary axial positions within the sample volume. Through a recurrent convolutional neural network, which we term Recurrent-MZ, 2D fluorescence information from a few axial planes within the sample is explicitly incorporated to digitally reconstruct the sample volume over an extended depth-of-field. Using experiments on C. elegans and nanobead samples, Recurrent-MZ is demonstrated to significantly increase the depth-of-field of a 63×/1.4NA objective lens, while also providing a 30-fold reduction in the number of axial scans required to image the same sample volume. We further illustrate the generalization of this recurrent network for 3D imaging by showing its resilience to varying imaging conditions, including, e.g., different sequences of input images covering various axial permutations, and unknown axial positioning errors. We also demonstrate wide-field to confocal cross-modality image transformations using the Recurrent-MZ framework, performing 3D image reconstruction of a sample from a few wide-field 2D fluorescence images as input and matching confocal microscopy images of the same sample volume. Recurrent-MZ demonstrates the first application of recurrent neural networks in microscopic image reconstruction and provides a flexible and rapid volumetric imaging framework, overcoming the limitations of current 3D scanning microscopy tools.
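The published Recurrent-MZ architecture is not reproduced here; the PyTorch sketch below only illustrates, under assumed layer sizes, how a convolutional GRU-style cell can fold a handful of 2D wide-field planes (each tagged with its axial position) into a hidden state that a decoder expands into an extended output stack. All module names, channel counts, and the number of output planes are illustrative assumptions, not the paper's design.

```python
# Illustrative sketch only (assumed architecture, NOT the published Recurrent-MZ network):
# a convolutional GRU cell consumes a short, arbitrarily ordered sequence of 2D planes,
# and a decoder expands the final hidden state into a multi-plane output volume.
import torch
import torch.nn as nn


class ConvGRUCell(nn.Module):
    """Convolutional GRU cell: gates computed with 3x3 convolutions instead of matmuls."""

    def __init__(self, in_ch: int, hid_ch: int):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, 3, padding=1)  # update/reset gates
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, 3, padding=1)       # candidate state
        self.hid_ch = hid_ch

    def forward(self, x, h):
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde


class RecurrentVolumeNet(nn.Module):
    """Toy recurrent volumetric reconstructor: a few 2D planes in, a stack of planes out."""

    def __init__(self, hid_ch: int = 32, out_planes: int = 16):
        super().__init__()
        # Each input plane carries 2 channels: the image and its (broadcast) axial position.
        self.cell = ConvGRUCell(in_ch=2, hid_ch=hid_ch)
        self.decoder = nn.Conv2d(hid_ch, out_planes, 3, padding=1)

    def forward(self, planes, z_positions):
        # planes: (batch, n_planes, H, W); z_positions: (batch, n_planes), normalized depth.
        b, n, hgt, wid = planes.shape
        h = planes.new_zeros(b, self.cell.hid_ch, hgt, wid)
        for i in range(n):  # sequentially accumulate the sparse axial planes
            z_map = z_positions[:, i].view(b, 1, 1, 1).expand(b, 1, hgt, wid)
            x = torch.cat([planes[:, i : i + 1], z_map], dim=1)
            h = self.cell(x, h)
        return self.decoder(h)  # (batch, out_planes, H, W): the reconstructed stack


# Usage example with random data: 3 input planes expand to a 16-plane output stack.
net = RecurrentVolumeNet()
planes = torch.rand(1, 3, 64, 64)
z_pos = torch.tensor([[0.1, 0.5, 0.9]])
volume = net(planes, z_pos)
print(volume.shape)  # torch.Size([1, 16, 64, 64])
```

Feeding the axial position alongside each plane is one simple way to let a recurrent model accept inputs in arbitrary order, which is the property the abstract highlights when it mentions resilience to different input sequences and axial permutations.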

