Structural similarity based single image super-resolution with nonlocal regularization

Optik, 2014, Vol. 125 (15), pp. 4005-4008
Author(s): Chengzhi Deng, Wei Tian, Shengqian Wang, Huasheng Zhu, Wei Rao, ...

2020, Vol. 10 (1), pp. 375
Author(s): Zetao Jiang, Yongsong Huang, Lirui Hu

The super-resolution generative adversarial network (SRGAN) is a seminal work capable of generating realistic textures in single image super-resolution. However, the hallucinated details are often accompanied by unpleasant artifacts. To further enhance visual quality, we propose a deep learning method for single image super-resolution (SR) that directly learns an end-to-end mapping between low-resolution and high-resolution images. The method is based on a depthwise separable convolution super-resolution generative adversarial network (DSCSRGAN). A new depthwise separable convolution dense block (DSC Dense Block) was designed for the generator network, which improves the ability to represent and extract image features while greatly reducing the total number of parameters. In the discriminator network, the batch normalization (BN) layers were discarded, which reduces artifacts. A frequency energy similarity loss function was designed to constrain the generator network to produce better super-resolution images. Experiments on several datasets showed that the peak signal-to-noise ratio (PSNR) was improved by more than 3 dB, the structural similarity index (SSIM) was increased by 16%, and the total number of parameters was reduced to 42.8% of the original model. Combining objective indicators with subjective visual evaluation, the algorithm was shown to generate richer image details and clearer textures at lower complexity.
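As a rough illustration of the block described above, the following PyTorch-style sketch shows a depthwise separable convolution used inside a densely connected block; the channel widths, growth rate, number of layers, and the residual fusion are illustrative assumptions, not the authors' exact DSC Dense Block.

import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    # 3x3 depthwise convolution followed by a 1x1 pointwise convolution,
    # which needs far fewer parameters than a standard 3x3 convolution.
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)
        self.act = nn.PReLU()

    def forward(self, x):
        return self.act(self.pointwise(self.depthwise(x)))

class DSCDenseBlockSketch(nn.Module):
    # Dense-style block: each layer receives the concatenation of the block
    # input and all previous layer outputs (hypothetical configuration).
    def __init__(self, channels=64, growth=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            DepthwiseSeparableConv(channels + i * growth, growth)
            for i in range(num_layers)
        )
        self.fuse = nn.Conv2d(channels + num_layers * growth, channels, 1)

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return x + self.fuse(torch.cat(feats, dim=1))  # local residual connection

block = DSCDenseBlockSketch()
print(block(torch.randn(1, 64, 24, 24)).shape)  # torch.Size([1, 64, 24, 24])

Compared with a standard 3x3 convolution, the depthwise-plus-pointwise factorization reduces the per-layer parameter count from roughly in_ch*out_ch*9 to in_ch*9 + in_ch*out_ch, which is where parameter savings of this kind come from.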


Computers, 2019, Vol. 8 (2), pp. 41
Author(s): Vahid Anari, Farbod Razzazi, Rasoul Amirfattahi

Inspired by sparse analysis signal representation theory, this study proposes a novel single-image super-resolution method termed “sparse analysis-based super resolution” (SASR). The mapping between low-resolution (LR) and high-resolution (HR) images is learned with a coupled sparse analysis operator learning method and then used to reconstruct HR images. We further show that the proposed method selects the more informative HR and LR training patches based on image texture complexity, so that the HR and LR operators are trained more efficiently. The coupled HR and LR operators are then used for HR image reconstruction at low computational cost. Experimental results on quantitative criteria, namely peak signal-to-noise ratio (PSNR), root mean square error (RMSE), structural similarity index (SSIM), and elapsed time, on human observation as a qualitative measure, and on computational complexity verify the improvements offered by the proposed SASR algorithm.
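For context, the analysis sparse model behind SASR assumes that an image is nearly cosparse after applying a learned analysis operator, in contrast to the synthesis model that represents it as a sparse combination of dictionary atoms. A generic reconstruction objective of this kind, written in LaTeX, is sketched below; the operators and weights are illustrative and not necessarily the authors' exact formulation.

% Generic analysis-sparsity reconstruction (illustrative). y is the observed
% LR image, B a blur operator, S a downsampling operator, \Omega_h the learned
% HR analysis operator, and \lambda a regularization weight.
\hat{x} \;=\; \arg\min_{x}\; \tfrac{1}{2}\,\lVert y - S B x \rVert_2^2 \;+\; \lambda\,\lVert \Omega_h\, x \rVert_1

In a coupled scheme, a companion operator \Omega_l learned on LR patches would relate the sparsity patterns of corresponding LR and HR patches, consistent with the coupled operator learning described above.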


Author(s): Qiang Yu, Feiqiang Liu, Long Xiao, Zitao Liu, Xiaomin Yang

Deep-learning (DL)-based methods are of growing importance in the field of single image super-resolution (SISR). However, the practical deployment of these models remains difficult because of their heavy computation and storage requirements. The powerful feature maps of the hidden layers in convolutional neural networks (CNNs) help the model learn useful information, yet there is redundancy among these feature maps that can be further exploited. To address these issues, this paper proposes a lightweight efficient feature generating network (EFGN) for SISR built from efficient feature generating blocks (EFGBs). Specifically, the EFGB applies plain operations to the original features to produce additional feature maps with only a slight increase in parameters. With the help of these extra feature maps, the network can extract more useful information from low-resolution (LR) images to reconstruct the desired high-resolution (HR) images. Experiments on benchmark datasets demonstrate that the proposed EFGN outperforms other deep-learning-based methods in most cases while having relatively lower model complexity. Additionally, running-time measurements indicate the feasibility of real-time monitoring.
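The idea of generating additional feature maps through cheap operations on existing ones can be sketched as follows in PyTorch; using a depthwise convolution as the "plain operation", as well as all channel sizes, is an illustrative assumption rather than the authors' exact EFGB design.

import torch
import torch.nn as nn

class EfficientFeatureGenSketch(nn.Module):
    # Half of the output feature maps come from a standard convolution; the
    # other half are generated from those primary maps by a cheap depthwise
    # convolution, so the extra maps cost only a few additional parameters.
    def __init__(self, in_ch=64, out_ch=64):
        super().__init__()
        primary = out_ch // 2
        self.primary_conv = nn.Conv2d(in_ch, primary, 3, padding=1)
        self.cheap_conv = nn.Conv2d(primary, out_ch - primary, 3,
                                    padding=1, groups=primary)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        primary = self.act(self.primary_conv(x))
        extra = self.act(self.cheap_conv(primary))   # cheap "plain" operation
        return torch.cat([primary, extra], dim=1)    # reuse redundancy across maps

block = EfficientFeatureGenSketch()
print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])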


Author(s): Vishal Chudasama, Kishor Upla, Kiran Raja, Raghavendra Ramachandra, Christoph Busch

IEEE Access, 2021, pp. 1-1
Author(s): Kai Shao, Qinglan Fan, Yunfeng Zhang, Fangxun Bao, Caiming Zhang

2021, Vol. 213, pp. 106663
Author(s): Yujie Dun, Zongyang Da, Shuai Yang, Yao Xue, Xueming Qian
