Motion artifact reduction in abdominal MRIs using generative adversarial networks with perceptual similarity loss

2021 ◽  
Author(s):  
Yunan Wu ◽  
Xijun Wang ◽  
Aggelos K. Katsaggelos

Diagnostics ◽  
2021 ◽  
Vol 11 (9) ◽  
pp. 1629
Author(s):  
Tsutomu Gomi ◽  
Rina Sakai ◽  
Hidetake Hara ◽  
Yusuke Watanabe ◽  
Shinya Mizukami

In this study, a novel combination of hybrid generative adversarial networks (GANs), comprising a cycle-consistent GAN, pix2pix, and a mask pyramid network (MPN), termed CGpM-metal artifact reduction (MAR), was developed using projection data to reduce metal artifacts and the radiation dose during digital tomosynthesis. The CGpM-MAR algorithm was compared with conventional filtered back projection (FBP) without MAR, FBP with MAR, and a convolutional neural network MAR. The MAR rates were compared using the artifact index (AI) and a Gumbel distribution analysis of the largest variation, using a prosthesis phantom at various radiation doses. The novel CGpM-MAR yielded adequately effective overall performance in terms of AI. The resulting images were good independently of the type of metal used in the prosthesis phantom (p < 0.05), with good artifact removal at a 55% radiation-dose reduction. Furthermore, CGpM-MAR yielded the minimum in the model with the largest variation at the 55% radiation-dose reduction. Regarding the AI and Gumbel distribution analysis, the novel CGpM-MAR yielded superior MAR compared with the conventional reconstruction algorithms with and without MAR at the 55% radiation-dose reduction and presented features most similar to the reference FBP. CGpM-MAR is a promising method for metal artifact and radiation-dose reduction in clinical practice.
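The abstract does not give its exact formula for the artifact index (AI), but a common definition in CT and tomosynthesis artifact studies compares the standard deviation of an artifact-affected region of interest against a reference background region. The sketch below uses that definition and synthetic data; the ROI shapes and the streak model are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def artifact_index(artifact_roi, background_roi):
    """Artifact index as sqrt(SD_artifact^2 - SD_background^2).

    This is one common definition, used here for illustration only;
    the paper may define AI differently. Negative differences are
    clamped to zero before the square root.
    """
    sd_a = float(np.std(artifact_roi))
    sd_b = float(np.std(background_roi))
    return float(np.sqrt(max(sd_a ** 2 - sd_b ** 2, 0.0)))

# Synthetic example: a uniform-noise background versus the same
# background corrupted by a horizontal streak pattern.
rng = np.random.default_rng(0)
background = rng.normal(100.0, 5.0, size=(32, 32))
streaks = 20.0 * np.sin(np.linspace(0, 8 * np.pi, 32))[None, :]
artifact = background + streaks

print(artifact_index(artifact, background))   # larger than 0
print(artifact_index(background, background)) # identical ROIs give 0.0
```

A lower AI after metal artifact reduction indicates less residual streaking relative to the background noise level.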


2021 ◽  
Vol 93 ◽  
pp. 101990
Author(s):  
Zihao Wang ◽  
Clair Vandersteen ◽  
Thomas Demarcy ◽  
Dan Gnansia ◽  
Charles Raffaelli ◽  
...  

2020 ◽  
Vol 1 ◽  
pp. 6
Author(s):  
Alexander Hepburn ◽  
Valero Laparra ◽  
Ryan McConville ◽  
Raul Santos-Rodriguez

In recent years, there has been growing interest in image generation through deep learning. While evaluation of the generated images usually involves visual inspection, the inclusion of human perception as a factor in the training process is often overlooked. In this paper, we propose an alternative perceptual regulariser for image-to-image translation using conditional generative adversarial networks (cGANs). To do so automatically (avoiding visual inspection), we use the Normalised Laplacian Pyramid Distance (NLPD) to measure the perceptual similarity between the generated image and the original image. The NLPD is based on the principle of normalising the value of coefficients with respect to a local estimate of mean energy at different scales, and it has already been successfully tested in different experiments involving human perception. We compare this regulariser with the originally proposed L1 distance and find that when using NLPD the generated images contain more realistic values for both local and global contrast.
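The NLPD described above can be sketched in a few lines: build a Laplacian pyramid for each image, divide each band by a local estimate of coefficient amplitude, and average the per-scale RMSEs. This is a simplified, numpy-only sketch: the box blur stands in for the published filters, and the pyramid depth, blur width, and normalisation constant are placeholder values, not the ones from the NLPD paper.

```python
import numpy as np

def _blur(img, k=3):
    """Separable box blur with edge padding (stand-in for a Gaussian)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    ker = np.ones(k) / k
    p = np.apply_along_axis(lambda r: np.convolve(r, ker, mode="valid"), 1, p)
    p = np.apply_along_axis(lambda c: np.convolve(c, ker, mode="valid"), 0, p)
    return p

def _laplacian_pyramid(img, levels=3):
    """Blur/downsample pyramid; each band is the detail lost at that scale."""
    pyr, cur = [], img.astype(float)
    for _ in range(levels - 1):
        low = _blur(cur)
        pyr.append(cur - low)
        cur = low[::2, ::2]
    pyr.append(cur)
    return pyr

def nlpd(x, y, levels=3, eps=0.1):
    """Sketch of the Normalised Laplacian Pyramid Distance.

    Each band is divided by a local amplitude estimate (blurred absolute
    coefficients plus a constant), then per-scale RMSEs are averaged.
    Parameter values here are illustrative, not the published ones.
    """
    dists = []
    for bx, by in zip(_laplacian_pyramid(x, levels),
                      _laplacian_pyramid(y, levels)):
        nx = bx / (_blur(np.abs(bx)) + eps)
        ny = by / (_blur(np.abs(by)) + eps)
        dists.append(np.sqrt(np.mean((nx - ny) ** 2)))
    return float(np.mean(dists))

rng = np.random.default_rng(0)
x, y = rng.random((32, 32)), rng.random((32, 32))
print(nlpd(x, x), nlpd(x, y))  # identical images give 0.0
```

In the cGAN setting, `nlpd(generated, target)` would replace the L1 term in the generator objective, penalising mismatches in locally normalised contrast rather than raw pixel values.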


Information ◽  
2019 ◽  
Vol 10 (2) ◽  
pp. 69 ◽  
Author(s):  
Xinhua Liu ◽  
Yao Zou ◽  
Chengjuan Xie ◽  
Hailan Kuang ◽  
Xiaolin Ma

The use of computers to simulate facial aging or rejuvenation has long been a hot research topic in the field of computer vision, and this technology can be applied in many fields, such as customs security, public places, and business entertainment. With the rapid increase in computing speeds, complex neural network algorithms can be implemented in an acceptable amount of time. In this paper, an optimized face-aging method based on a Deep Convolutional Generative Adversarial Network (DCGAN) is proposed. In this method, an original face image is first mapped to a personal latent vector by an encoder, and then the personal latent vector is combined with the age condition vector and the gender condition vector through a connector. The output of the connector is the input of the generator. A stable and photo-realistic facial image is then generated by maintaining personalized facial features while changing the age condition. With regard to the objective function, the single adversarial loss of the Generative Adversarial Network (GAN) is replaced by a perceptual similarity loss function, which is the weighted sum of adversarial loss, feature space loss, pixel space loss, and age loss. The experimental results show that the proposed method can synthesize an aging face with rich texture and visual realism and outperforms similar work.
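The four-term objective described above can be sketched as plain functions over arrays. The loss weights, the feature-map source (e.g. a pretrained encoder), and the use of an L1 pixel term with cross-entropy age and adversarial terms are assumptions for illustration; the abstract names the four components but not their exact forms or weights.

```python
import numpy as np

def pixel_loss(gen, target):
    """L1 pixel-space loss between generated and target images."""
    return float(np.mean(np.abs(gen - target)))

def feature_loss(f_gen, f_target):
    """L2 distance between feature maps (assumed from a fixed encoder)."""
    return float(np.mean((f_gen - f_target) ** 2))

def adversarial_loss(d_fake, eps=1e-8):
    """Non-saturating generator loss: -log D(G(z))."""
    return float(-np.mean(np.log(d_fake + eps)))

def age_loss(age_probs, true_age_idx, eps=1e-8):
    """Cross-entropy on the age classifier's predicted distribution."""
    return float(-np.log(age_probs[true_age_idx] + eps))

def perceptual_similarity_loss(gen, target, f_gen, f_target,
                               d_fake, age_probs, age_idx,
                               w=(1.0, 0.5, 10.0, 1.0)):
    """Weighted sum of the four terms from the abstract.

    The weights are placeholders, not the paper's values.
    """
    terms = (adversarial_loss(d_fake),
             feature_loss(f_gen, f_target),
             pixel_loss(gen, target),
             age_loss(age_probs, age_idx))
    return float(sum(wi * ti for wi, ti in zip(w, terms)))
```

With perfect reconstruction and a fooled discriminator, the pixel, feature, and adversarial terms all approach zero, so training pressure shifts to the age classifier, which is what drives the aging transformation.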

