A Deep Recurrent Neural Network with Gated Momentum Unit for CT Image Reconstruction

2021 ◽  
Author(s):  
Masaki Ikuta

Many algorithms and methods have been proposed for Computed Tomography (CT) image reconstruction, particularly with the recent surge of interest in machine learning and deep learning methods. The majority of recently proposed methods are, however, limited to image-domain processing, where deep learning is used to learn the mapping from a noisy image data set to a true image data set. While deep learning-based methods can produce higher-quality images than conventional model-based post-processing algorithms, these methods have limitations. Deep learning-based methods applied in the image domain are not sufficient to compensate for information lost during forward and backward projection in CT image reconstruction, especially in the presence of high noise. In this paper, we propose a new Recurrent Neural Network (RNN) architecture for CT image reconstruction. We propose the Gated Momentum Unit (GMU), which extends the Gated Recurrent Unit (GRU) but is specifically designed for image-processing inverse problems. This new RNN cell performs iterative optimization with accelerated convergence. The GMU has a few gates to regulate information flow; the gates decide to keep important long-term information and discard insignificant short-term detail. In addition, the GMU has a likelihood term and a prior term analogous to Iterative Reconstruction (IR). The likelihood term helps ensure that estimated images are consistent with the observation data, while the prior term keeps the likelihood term from overfitting each individual observation. We conducted a synthetic image study along with a real CT image study to demonstrate that the proposed method achieves the highest level of Peak Signal to Noise Ratio (PSNR) and Structural Similarity (SSIM). We also showed that this algorithm converges faster than other well-known methods.
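The abstract does not give the GMU update equations, but the idea it describes (a GRU-style gated cell that combines a data-fidelity gradient with a learned prior) can be illustrated with a minimal, speculative PyTorch sketch. All names here (GatedMomentumCell, forward_op, adjoint_op) are hypothetical and the gating scheme is assumed, not taken from the paper.

```python
# Speculative sketch of a GRU-style cell with a likelihood (data-fidelity)
# step and a learned prior step, in the spirit of the GMU described above.
import torch
import torch.nn as nn

class GatedMomentumCell(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Gates act on the concatenation of the current estimate and the momentum state.
        self.update_gate = nn.Conv2d(2 * channels, channels, 3, padding=1)
        self.reset_gate = nn.Conv2d(2 * channels, channels, 3, padding=1)
        # Learned prior (regularizer) applied to the gated estimate.
        self.prior = nn.Conv2d(channels, channels, 3, padding=1)
        self.step = nn.Parameter(torch.tensor(0.1))  # learnable step size

    def forward(self, x, h, y, forward_op, adjoint_op):
        # Likelihood gradient A^T (A x - y) for a linear forward model A.
        grad = adjoint_op(forward_op(x) - y)
        z = torch.sigmoid(self.update_gate(torch.cat([x, h], dim=1)))  # keep long-term information
        r = torch.sigmoid(self.reset_gate(torch.cat([x, h], dim=1)))   # discard short-term detail
        h_new = z * h + (1 - z) * grad                                  # gated momentum on the gradient
        x_new = x - self.step * (h_new + self.prior(r * x))             # descent with learned prior
        return x_new, h_new
```

In use, such a cell would be unrolled for a fixed number of iterations, with the CT forward projection and backprojection supplied as forward_op and adjoint_op and the sinogram as y.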


2019 ◽  
Vol 1 (6) ◽  
pp. 269-276 ◽  
Author(s):  
Hongming Shan ◽  
Atul Padole ◽  
Fatemeh Homayounieh ◽  
Uwe Kruger ◽  
Ruhani Doda Khera ◽  
...  

2020 ◽  
Vol 10 (11) ◽  
pp. 2707-2713
Author(s):  
Zheng Sun ◽  
Xiangyang Yan

Intravascular photoacoustic tomography (IVPAT) is a newly developed imaging modality for the interventional diagnosis and treatment of coronary artery disease. Incomplete acoustic measurement caused by limited-view scanning of the detector in the vascular lumen results in under-sampling artifacts and distortion in images reconstructed with standard methods. This paper presents a deep learning-based method for limited-view IVPAT image reconstruction. A convolutional neural network (CNN) is constructed and trained on a computer-simulated image data set. The trained CNN is then used to refine the cross-sectional vessel images recovered from the incomplete photoacoustic measurements by the standard time-reversal (TR) algorithm, yielding images of improved quality. Numerical results indicate that the method effectively reduces the image distortion and artifacts caused by limited-view detection. Furthermore, it is superior to the compressed sensing (CS) method in recovering the unmeasured information of the imaging target, with structural similarity around 10% higher than CS reconstruction.
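The pipeline described above (train a CNN on pairs of limited-view TR reconstructions and reference images, then apply it to refine new TR reconstructions) can be sketched as follows. The network architecture is assumed for illustration, not taken from the paper.

```python
# Hedged sketch of a post-processing CNN for limited-view TR reconstructions.
# RefineCNN and train_step are illustrative names; the paper's architecture
# and training details are not reproduced here.
import torch
import torch.nn as nn

class RefineCNN(nn.Module):
    def __init__(self, channels: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, tr_image):
        # Residual learning: the CNN predicts the artifact component of the TR image.
        return tr_image - self.net(tr_image)

def train_step(model, optimizer, tr_batch, reference_batch):
    """One supervised step on (TR reconstruction, reference image) pairs."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(tr_batch), reference_batch)
    loss.backward()
    optimizer.step()
    return loss.item()
```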


1999 ◽  
Vol 103 (2) ◽  
pp. 295-302 ◽  
Author(s):  
Fath El Alem F. Ali ◽  
Zensho Nakao ◽  
Yen-Wei Chen

Universe ◽  
2021 ◽  
Vol 7 (7) ◽  
pp. 211
Author(s):  
Xingzhu Wang ◽  
Jiyu Wei ◽  
Yang Liu ◽  
Jinhao Li ◽  
Zhen Zhang ◽  
...  

Recently, astronomy has witnessed great advancements in detectors and telescopes. Imaging data collected by these instruments are organized into very large data sets that form data-oriented astronomy. The imaging data contain many radio galaxies (RGs) that are of interest to astronomers. However, because astronomical databases in the information age are extremely large, a manual search for these galaxies is impractical. Therefore, the ability to detect specific types of galaxies largely depends on computer algorithms, and applying machine learning algorithms to large astronomical data sets allows galaxies to be detected from photometric images more effectively. Astronomers are motivated to develop tools that automatically analyze massive imaging data, including automatic morphological detection of specified radio sources. Galaxy Zoo projects have generated great interest in visually classifying galaxy samples using CNNs, and Banfield studied radio morphologies and host galaxies derived from visual inspection in the Radio Galaxy Zoo project. However, there are relatively many studies on galaxy classification and comparatively few on galaxy detection. We develop a galaxy detection model that localizes and classifies Fanaroff–Riley class I (FR I) and Fanaroff–Riley class II (FR II) galaxies. The field of object detection has also developed rapidly since the convolutional neural network was proposed; You Only Look Once: Unified, Real-Time Object Detection (YOLO) is a neural-network-based detection model proposed by Redmon et al. We made several improvements to the detection of dense galaxies based on the original YOLOv5, mainly the following. (1) We use the Varifocal loss, which weights positive and negative samples asymmetrically and highlights the main positive samples during training. (2) Our model adds an attention mechanism over the convolution kernels so that the feature extraction network can adjust the size of its receptive field dynamically in deep convolutional neural networks; in this way, the model adapts well to galaxies of different sizes in an image. (3) We use empirical practices suited to small-target detection, such as image segmentation and reducing the stride of the convolutional layers. Beyond these three contributions, this work also combines different data sources, i.e., radio images and optical images, for better classification performance and more accurate localization. We used optical image data from SDSS, radio image data from FIRST, and label data from FR I and FR II catalogs to create a data set of FR Is and FR IIs. We then used this data set to train our improved YOLOv5 model and realize the automatic classification and detection of FR Is and FR IIs. Experimental results show that our improved method achieves better performance: the mAP@0.5 of our model reaches 82.3%, and the locations (RA and Dec) of the galaxies are identified more accurately. Our model has astronomical significance; for example, it can help astronomers find FR I and FR II galaxies to build a larger-scale galaxy catalog, and the detection method can also be extended to other types of RGs.
Thus, astronomers can locate a specific type of galaxy in considerably less time with minimal human intervention, or combine the detections with other observational data (spectra and redshifts) to explore further properties of the galaxies.
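Of the improvements listed above, the Varifocal loss is the most self-contained, and its asymmetric weighting of positive and negative samples can be sketched directly. The formulation below follows the published Varifocal loss; how it is wired into YOLOv5's training loop is assumed, and the function name is illustrative.

```python
# Minimal sketch of the Varifocal loss: positives are weighted by their
# IoU-aware target score, negatives are down-weighted by p^gamma so that
# easy negatives contribute little to the loss.
import torch
import torch.nn.functional as F

def varifocal_loss(pred_logits, target_score, alpha: float = 0.75, gamma: float = 2.0):
    """pred_logits: raw classification logits; target_score: IoU-aware score,
    > 0 for positive samples and 0 for negative samples."""
    pred = pred_logits.sigmoid()
    positive = (target_score > 0).float()
    weight = positive * target_score + (1 - positive) * alpha * pred.pow(gamma)
    bce = F.binary_cross_entropy_with_logits(pred_logits, target_score, reduction="none")
    return (weight * bce).sum()
```

During training, target_score would typically be the IoU between a predicted box and its assigned ground-truth box for positive anchors and zero for negatives.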


Medicine ◽  
2021 ◽  
Vol 100 (19) ◽  
pp. e25814
Author(s):  
Ji Eun Lee ◽  
Seo-Youn Choi ◽  
Jeong Ah Hwang ◽  
Sanghyeok Lim ◽  
Min Hee Lee ◽  
...  
