Super Resolution MRI Using Generative Adversarial Networks

Author(s):  
Gautam .

This paper proposes a new framework for MRI image enhancement that generates a high-resolution (HR) MRI image from a low-resolution (LR) image obtained from an older MRI machine. For this we use Generative Adversarial Networks, which have proven effective in image recovery tasks. We simultaneously train two models: a generative model that captures the data distribution of the LR MRI images, and a discriminative model that estimates the probability that a sample came from the training data rather than from the generator. To train the generator, we maximize the probability of the discriminator making a mistake when judging a generated (fake) image. The discriminator uses a least-squares adversarial loss to stabilize training, while the generator's loss combines a least-squares adversarial term with a content term based on mean squared error and image gradients to improve the quality of the generated MRI images.
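The loss design described above lends itself to a short illustration. Below is a minimal PyTorch sketch of a least-squares adversarial loss for the discriminator and a generator loss combining a least-squares adversarial term with an MSE and image-gradient content term; the weighting factors (lambda_adv, lambda_grad), tensor layout, and function names are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn.functional as F

def discriminator_loss(d_real, d_fake):
    # Least-squares GAN loss: push scores on real HR images toward 1
    # and scores on generated images toward 0.
    return 0.5 * (torch.mean((d_real - 1.0) ** 2) + torch.mean(d_fake ** 2))

def image_gradients(x):
    # Finite-difference gradients along height and width for (N, C, H, W) tensors.
    dy = x[:, :, 1:, :] - x[:, :, :-1, :]
    dx = x[:, :, :, 1:] - x[:, :, :, :-1]
    return dx, dy

def generator_loss(d_fake, sr, hr, lambda_adv=1e-3, lambda_grad=1e-1):
    adv = 0.5 * torch.mean((d_fake - 1.0) ** 2)               # least-squares adversarial term
    mse = F.mse_loss(sr, hr)                                  # pixel-wise content term
    sr_dx, sr_dy = image_gradients(sr)
    hr_dx, hr_dy = image_gradients(hr)
    grad = F.l1_loss(sr_dx, hr_dx) + F.l1_loss(sr_dy, hr_dy)  # image-gradient content term
    return mse + lambda_grad * grad + lambda_adv * adv
```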

Author(s):  
Khaled ELKarazle, Valliappan Raman, Patrick Then

Age estimation models can be employed in many applications, including soft biometrics, content access control, targeted advertising, and many more. However, because some facial images are taken in unconstrained conditions, their quality degrades, which results in the loss of several essential ageing features. This study investigates how introducing a new layer of data processing based on a super-resolution generative adversarial network (SRGAN) model can influence the accuracy of age estimation by enhancing the quality of both the training and testing samples. Additionally, we introduce a novel convolutional neural network (CNN) classifier to distinguish between several age classes. We train one of our classifiers on a reconstructed version of the original dataset and compare its performance with an identical classifier trained on the original version of the same dataset. Our findings reveal that the classifier trained on the reconstructed dataset produces better classification accuracy, opening the door for more research into building data-centric machine learning systems.
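As a rough illustration of the extra processing layer this study describes, the sketch below passes each face image through a pre-trained super-resolution generator before it reaches an age classifier. The sr_generator argument, the simple AgeCNN architecture, and the number of age classes are hypothetical stand-ins, not the authors' models.

```python
import torch
import torch.nn as nn

class AgeCNN(nn.Module):
    # Small illustrative CNN classifier for a handful of age classes.
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def enhance_then_classify(images, sr_generator, classifier):
    # Reconstruct (enhance) the samples with the SR model first,
    # then predict the age class from the enhanced images.
    with torch.no_grad():
        enhanced = sr_generator(images)
    return classifier(enhanced)
```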


2020, Vol. 34 (04), pp. 3121-3129
Author(s):  
Shady Abu Hussein, Tom Tirer, Raja Giryes

In recent years, there has been a significant improvement in the quality of samples produced by (deep) generative models such as variational auto-encoders and generative adversarial networks. However, the representation capabilities of these methods still do not capture the full distribution of complex classes of images, such as human faces. This deficiency has been clearly observed in previous works that use pre-trained generative models to solve imaging inverse problems. In this paper, we propose to mitigate the limited representation capabilities of generators by making them image-adaptive and enforcing compliance of the restoration with the observations via back-projections. We empirically demonstrate the advantages of our proposed approach for image super-resolution and compressed sensing.
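The two ingredients named in this abstract, image-adaptive generators and back-projection onto the observations, can be sketched as follows. This is a minimal illustration assuming a linear degradation model y = A(x): the callables A and A_pinv (a pseudo-inverse of A), the latent dimension, step count, and learning rate are all assumptions for illustration, not the authors' implementation.

```python
import torch

def image_adaptive_restore(generator, A, A_pinv, y, latent_dim=128, steps=500, lr=1e-2):
    # Make the pre-trained generator image-adaptive: optimize the latent code
    # (and the generator weights) so that the generated image fits the observation y.
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z] + list(generator.parameters()), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x = generator(z)
        loss = torch.sum((A(x) - y) ** 2)  # data-fidelity term
        loss.backward()
        opt.step()
    x = generator(z).detach()
    # Back-projection: correct the restoration so that A(x_hat) agrees with y.
    return x + A_pinv(y - A(x))
```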


Author(s):  
Mingfeng Jiang, Minghao Zhi, Liying Wei, Xiaocheng Yang, Jucheng Zhang, ...

2021, Vol. 13 (9), pp. 1713
Author(s):  
Songwei Gu, Rui Zhang, Hongxia Luo, Mengyao Li, Huamei Feng, ...

Deep learning is an important research method in the remote sensing field. However, samples of remote sensing images are relatively scarce in practice, and labeled samples are even scarcer. Many neural networks, represented by Generative Adversarial Networks (GANs), can learn from real samples to generate pseudosamples, unlike traditional methods, which often require more time and manpower to obtain samples. However, the generated pseudosamples often lack realism and cannot be reliably used as the basis for various analyses and applications in the field of remote sensing. To address these problems, a pseudolabeled sample generation method is proposed in this work and applied to scene classification of remote sensing images. The improved SinGAN, an unconditional generative model that can be learned from a single natural image and augmented with an attention mechanism, can effectively generate enough pseudolabeled samples from a single remote sensing scene image. Pseudosamples generated by the improved SinGAN model are more realistic and require relatively little training time, and the extracted features are easily recognized by the classification network. Compared with the original network, the improved SinGAN can better identify subjects in images with complex ground scenes, which solves the problem of geographic errors in the generated pseudosamples. This study incorporated the generated pseudosamples into the training data for the classification experiment. The results show that the SinGAN model with the integrated attention mechanism better guarantees feature extraction from the training data. Thus, the quality of the generated samples is improved, and the classification accuracy and stability of the classification network are also enhanced.
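To make the "SinGAN with an attention mechanism" idea concrete, the sketch below adds a simple channel (squeeze-and-excitation style) attention module to a SinGAN-style convolutional block. The choice of channel attention, the layer sizes, and the block structure are assumptions for illustration; the abstract does not specify the paper's exact attention design.

```python
import torch.nn as nn

class ChannelAttention(nn.Module):
    # Squeeze-and-excitation style attention: reweight feature channels
    # so that subject-relevant features are emphasized.
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)

class AttentiveConvBlock(nn.Module):
    # A SinGAN-style conv block with attention appended after the activation.
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.LeakyReLU(0.2),
            ChannelAttention(out_ch),
        )

    def forward(self, x):
        return self.body(x)
```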


Author(s):  
Huilin Zhou, Huimin Zheng, Qiegen Liu, Jian Liu, Yuhao Wang

Electromagnetic inverse-scattering problems (ISPs) are concerned with determining the properties of an unknown object from measured scattered fields. ISPs are often highly nonlinear, which makes them very difficult to solve. In addition, the images reconstructed by different optimization methods are often distorted, which leads to inaccurate reconstruction results. To alleviate these issues, we propose a new linear-model solution, LM-GAN, inspired by generative adversarial networks (GANs). Two sub-networks are trained alternately in the adversarial framework: a linear deep iterative network serves as the generative network and captures the spatial distribution of the data, while a discriminative network estimates the probability that a sample came from the training data. Numerical results validate that LM-GAN has admirable fidelity and accuracy when reconstructing complex scatterers.
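The alternating training of the two sub-networks can be sketched as follows. This is a generic adversarial training step, assuming a generator that maps measured scattered fields to a reconstructed scatterer profile and a discriminator that scores profiles; the loss choices, weighting, and optimizer setup are illustrative assumptions rather than the paper's exact LM-GAN formulation.

```python
import torch
import torch.nn.functional as F

def train_step(generator, discriminator, g_opt, d_opt, scattered_fields, true_profile):
    # 1) Discriminator update: real profiles should score 1, generated profiles 0.
    fake_profile = generator(scattered_fields).detach()
    d_real = discriminator(true_profile)
    d_fake = discriminator(fake_profile)
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Generator update: fool the discriminator and match the true profile.
    fake_profile = generator(scattered_fields)
    d_out = discriminator(fake_profile)
    g_adv = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
    g_loss = F.mse_loss(fake_profile, true_profile) + 1e-3 * g_adv
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```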


2022, Vol. 8
Author(s):  
Runnan He, Shiqi Xu, Yashu Liu, Qince Li, Yang Liu, ...

Medical imaging provides a powerful tool for medical diagnosis. In computer-aided diagnosis and treatment of liver cancer based on medical imaging, accurate segmentation of the liver region from abdominal CT images is an important step. However, due to defects in the liver tissue and limitations of the CT imaging process, the gray level of the liver region in CT images is heterogeneous and the boundary between the liver and adjacent tissues and organs is blurred, which makes liver segmentation an extremely difficult task. In this study, aiming to solve the problem of low segmentation accuracy of the original 3D U-Net network, an improved network based on the three-dimensional (3D) U-Net is proposed. Moreover, to address the problem of insufficient training data caused by the difficulty of acquiring labeled 3D data, the improved 3D U-Net network is embedded into the framework of generative adversarial networks (GAN), which establishes a semi-supervised 3D liver segmentation optimization algorithm. Finally, considering the poor quality of 3D abdominal fake images generated from random noise, a deep convolutional neural network (DCNN) based on a feature restoration method is designed to generate more realistic fake images. Testing the proposed algorithm on the LiTS-2017 and KiTS19 datasets shows that the proposed semi-supervised 3D liver segmentation method can greatly improve liver segmentation performance, achieving a Dice score of 0.9424 and outperforming other methods.
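For reference, the Dice score used to report the 0.9424 result above can be computed as in the short sketch below. The tensor layout and the 0.5 binarization threshold are assumptions for illustration.

```python
import torch

def dice_score(pred, target, eps=1e-6):
    # pred, target: 3D segmentation masks, e.g. shape (D, H, W) or (N, D, H, W),
    # with values in [0, 1]; both are binarized before computing the overlap.
    pred = (pred > 0.5).float()
    target = (target > 0.5).float()
    intersection = (pred * target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```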

