Image super-resolution reconstruction based on generative adversarial network model with feedback and attention mechanisms

Author(s):  
Yongqiang Wang ◽  
Xue Li ◽  
Fangzhe Nan ◽  
Feng Liu ◽  
Hua Li ◽  
...


2021 ◽  
Vol 12 (6) ◽  
pp. 1-20
Author(s):  
Fayaz Ali Dharejo ◽  
Farah Deeba ◽  
Yuanchun Zhou ◽  
Bhagwan Das ◽  
Munsif Ali Jatoi ◽  
...  

Single image super-resolution (SISR) produces a high-resolution image with fine spatial detail from a remotely sensed image with low spatial resolution. Recently, deep learning and generative adversarial networks (GANs) have made breakthroughs on this challenging task. However, the generated images still suffer from undesirable artifacts such as missing texture-feature representation and high-frequency information. We propose a frequency-domain, spatio-temporal remote sensing SISR technique that reconstructs the HR image with GANs operating on separate frequency bands (TWIST-GAN). The method combines wavelet transform (WT) characteristics with a transferred generative adversarial network: the LR image is split into frequency bands by the WT, the transferred GAN predicts the high-frequency components through the proposed architecture, and the inverse wavelet transform then produces the super-resolved image. The model is first trained on the external DIV2K dataset and validated on the UC Merced Landsat remote sensing dataset and Set14, with each image sized 256 × 256. The transferred GANs are then applied to spatio-temporal remote sensing images to reduce computation cost and improve texture information. The results are compared qualitatively and quantitatively with current state-of-the-art approaches. In addition, by eliminating batch normalization layers, the simplified version saves about 43% of GPU memory during training and executes faster.
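
For intuition, the following is a minimal Python sketch of the frequency-band split and inverse reconstruction described above, using a single-level 2-D Haar DWT from PyWavelets. The wavelet choice ('haar'), the single decomposition level, and the function names are illustrative assumptions rather than the paper's actual configuration, and the GAN that predicts the high-frequency bands is omitted.

import numpy as np
import pywt

def split_frequency_bands(lr_image):
    # Decompose a grayscale LR image into the low-frequency approximation
    # (LL) and three high-frequency detail bands (LH, HL, HH).
    # 'haar' is an assumed wavelet; the abstract does not specify one.
    ll, (lh, hl, hh) = pywt.dwt2(lr_image, 'haar')
    return ll, lh, hl, hh

def reconstruct_from_bands(ll, lh, hl, hh):
    # Inverse DWT. In the described pipeline, the transferred GAN would
    # first replace lh, hl, hh with its predicted high-frequency components.
    return pywt.idwt2((ll, (lh, hl, hh)), 'haar')

# Usage on a dummy 256 x 256 image, the size the abstract reports.
lr = np.random.rand(256, 256).astype(np.float32)
ll, lh, hl, hh = split_frequency_bands(lr)
sr = reconstruct_from_bands(ll, lh, hl, hh)
assert sr.shape == lr.shape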


2020 ◽  
Vol 10 (1) ◽  
pp. 375 ◽  
Author(s):  
Zetao Jiang ◽  
Yongsong Huang ◽  
Lirui Hu

The super-resolution generative adversarial network (SRGAN) is a seminal work capable of generating realistic textures during single image super-resolution. However, the hallucinated details are often accompanied by unpleasant artifacts. To further enhance visual quality, we propose a deep learning method for single image super-resolution (SR) that directly learns an end-to-end mapping between low- and high-resolution images. The method is based on a depthwise separable convolution super-resolution generative adversarial network (DSCSRGAN). A new depthwise separable convolution dense block (DSC Dense Block) was designed for the generator network, improving its ability to represent and extract image features while greatly reducing the parameter count. For the discriminator network, the batch normalization (BN) layer was discarded, which reduced artifacts. A frequency energy similarity loss function was designed to constrain the generator to produce better super-resolution images. Experiments on several datasets showed that the peak signal-to-noise ratio (PSNR) improved by more than 3 dB, the structural similarity index (SSIM) increased by 16%, and the total number of parameters fell to 42.8% of the original model's. Combining objective metrics with subjective visual evaluation, the algorithm generates richer image detail, clearer texture, and lower complexity.
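
As a rough illustration, the sketch below implements a depthwise separable convolution with dense (concatenative) connections in PyTorch, the pattern the abstract calls a DSC Dense Block. The channel width, growth rate, layer count, activation, and residual fusion are assumptions for illustration; the paper's exact block layout is not given in this abstract.

import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    # Depthwise 3x3 conv followed by a pointwise 1x1 conv: roughly
    # 1/out_ch + 1/9 of the parameters of a standard 3x3 convolution.
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                   padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.act = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x):
        return self.act(self.pointwise(self.depthwise(x)))

class DSCDenseBlock(nn.Module):
    # Each layer sees the concatenation of all previous feature maps
    # (dense connectivity) and contributes `growth` new channels; a 1x1
    # fusion restores the input width so a residual add is possible.
    def __init__(self, channels=64, growth=32, layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            DepthwiseSeparableConv(channels + i * growth, growth)
            for i in range(layers))
        self.fuse = nn.Conv2d(channels + layers * growth, channels,
                              kernel_size=1)

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return x + self.fuse(torch.cat(feats, dim=1))

# Usage: one block applied to a 64-channel feature map.
block = DSCDenseBlock()
y = block(torch.randn(1, 64, 32, 32))
assert y.shape == (1, 64, 32, 32)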

