Efficient Image Super-Resolution via Self-Calibrated Feature Fuse

Sensors ◽ 2022 ◽ Vol 22 (1) ◽ pp. 329
Author(s): Congming Tan, Shuli Cheng, Liejun Wang

Recently, many deep-learning-based super-resolution (SR) feedforward networks have been proposed, and they produce convincing reconstruction results. However, because of their large computational and parameter budgets, SR technology is severely limited on devices with restricted computing power. To trade off network performance against network size, in this paper we propose an efficient image super-resolution network via Self-Calibrated Feature Fuse, named SCFFN, built from self-calibrated feature fuse blocks (SCFFB). Specifically, to recover as much of the image's high-frequency detail as possible, the SCFFB performs self-transformation and self-fusion of features. In addition, to accelerate network training while reducing computational complexity, we employ an attention mechanism to design the reconstruction part of the network, called U-SCA. Compared with the commonly used transposed convolution, it greatly reduces the computational burden of the network without degrading the reconstruction quality. We have conducted thorough quantitative and qualitative experiments on public datasets, and the results show that the network achieves performance comparable to other networks while requiring fewer parameters and less computation.
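The abstract does not spell out the SCFFB or U-SCA layouts, so the following is only a minimal PyTorch sketch of the two ideas it names: a block that splits features, transforms one branch, calibrates it with the other, and fuses the result; and a pixel-shuffle upsampler gated by channel attention as a stand-in for transposed convolution. All class names and layer choices here are hypothetical.

```python
# Hypothetical sketch of a self-calibration-style fuse block and an
# attention-gated pixel-shuffle upsampler; not the SCFFN authors' exact design.
import torch
import torch.nn as nn

class SelfCalibratedFuseBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        half = channels // 2
        self.transform = nn.Sequential(           # self-transformation branch
            nn.Conv2d(half, half, 3, padding=1), nn.ReLU(inplace=True))
        self.calibrate = nn.Sequential(           # produces per-pixel gates
            nn.Conv2d(half, half, 3, padding=1), nn.Sigmoid())
        self.fuse = nn.Conv2d(channels, channels, 1)   # self-fusion of both branches

    def forward(self, x):
        a, b = torch.chunk(x, 2, dim=1)
        a = self.transform(a) * self.calibrate(b)       # calibrate one branch with the other
        return x + self.fuse(torch.cat([a, b], dim=1))  # residual fusion

class AttentionUpsampler(nn.Module):
    """Pixel-shuffle upsampling gated by channel attention (stand-in for U-SCA)."""
    def __init__(self, channels: int, scale: int):
        super().__init__()
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.up = nn.Sequential(
            nn.Conv2d(channels, 3 * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale))

    def forward(self, x):
        return self.up(x * self.attn(x))

if __name__ == "__main__":
    feats = torch.randn(1, 64, 32, 32)
    y = AttentionUpsampler(64, 4)(SelfCalibratedFuseBlock(64)(feats))
    print(y.shape)  # torch.Size([1, 3, 128, 128])
```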

2021
Author(s): Taiping Mo, Dehong Chen

Abstract: The Invertible Rescaling Net (IRN) models image downscaling and upscaling as a unified task to alleviate the ill-posed nature of the super-resolution problem. However, the model's latent variables embed high-frequency information only weakly, which limits the quality of the reconstructed image. To strengthen the embedding of high-frequency information and further reduce model complexity, the latent variables and the feature extraction of IRN's key components are improved. An attention mechanism and dilated convolutions are used to improve the feature extraction block, reducing its parameter count and allocating more attention to image details. A wavelet-domain high-frequency sub-band interpolation method is used to refine the latent variables, preserving image edges and enhancing the embedding of high-frequency information. Experimental results show that, compared with the original IRN, the improved model has lower complexity and better performance.
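The exact IRN block layout and the wavelet sub-band interpolation step are not given in the abstract, so the sketch below only illustrates the feature-extraction side of the modification: dilated convolutions to enlarge the receptive field combined with channel attention to emphasize detail, under assumed channel sizes.

```python
# Illustrative feature-extraction block mixing dilated convolutions with
# channel attention; layer layout is an assumption, not the paper's module.
import torch
import torch.nn as nn

class DilatedAttentionBlock(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        # dilated convolutions grow the receptive field without extra parameters
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, dilation=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=2, dilation=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=4, dilation=4))
        # channel attention re-weights feature maps toward detailed regions
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 8, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // 8, channels, 1), nn.Sigmoid())

    def forward(self, x):
        y = self.body(x)
        return x + y * self.attention(y)   # residual connection keeps training stable

feat = torch.randn(2, 64, 48, 48)
print(DilatedAttentionBlock()(feat).shape)  # torch.Size([2, 64, 48, 48])
```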


2019 ◽ Vol 9 (22) ◽ pp. 4874
Author(s): Xiaofeng Du, Yifan He

Super-resolution (SR) technology is essential for improving image quality in magnetic resonance imaging (MRI). The main challenge of MRI SR is to reconstruct high-frequency details from a low-resolution (LR) image. To address this challenge, we develop a gradient-guided convolutional neural network that improves the reconstruction accuracy of high-frequency image details from the LR input. A gradient prior is fully exploited to supply information about high-frequency details during the super-resolution process, leading to a more accurately reconstructed image. Experimental results of image super-resolution on public MRI databases demonstrate that the gradient-guided convolutional neural network outperforms published state-of-the-art approaches.
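One common way to realize a gradient prior, sketched below under assumed settings, is to extract horizontal and vertical gradient maps with Sobel filters and feed them alongside the LR image as extra guidance channels; the paper's actual fusion scheme may differ, and the module names here are hypothetical.

```python
# Minimal PyTorch sketch of gradient-guided SR: Sobel gradient maps are
# concatenated with the single-channel LR input before a small SR network.
import torch
import torch.nn as nn
import torch.nn.functional as F

SOBEL_X = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
SOBEL_Y = SOBEL_X.transpose(2, 3)

def gradient_prior(img: torch.Tensor) -> torch.Tensor:
    """img: (N, 1, H, W) MRI slice -> (N, 2, H, W) horizontal/vertical gradients."""
    gx = F.conv2d(img, SOBEL_X, padding=1)
    gy = F.conv2d(img, SOBEL_Y, padding=1)
    return torch.cat([gx, gy], dim=1)

class GradientGuidedSR(nn.Module):
    def __init__(self, scale: int = 2):
        super().__init__()
        self.net = nn.Sequential(                       # image + gradient channels in
            nn.Conv2d(1 + 2, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale))                     # -> (N, 1, scale*H, scale*W)

    def forward(self, lr):
        return self.net(torch.cat([lr, gradient_prior(lr)], dim=1))

lr = torch.rand(1, 1, 60, 60)
print(GradientGuidedSR(scale=2)(lr).shape)  # torch.Size([1, 1, 120, 120])
```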


Mathematics ◽ 2022 ◽ Vol 10 (2) ◽ pp. 275
Author(s): Jun-Seok Yun, Seok-Bong Yoo

Among the various developments in computer vision, single image super-resolution is one of the most essential tasks. However, compared with integer-magnification super-resolution models, research on arbitrary magnification has been relatively overlooked, even though single image super-resolution at arbitrary magnification is important for tasks such as object recognition and satellite image magnification. In this study, we propose a model that performs arbitrary magnification while retaining the advantages of integer magnification. The proposed model extends the integer-magnified image to the target magnification in the discrete cosine transform (DCT) spectral domain. Broadening the DCT spectrum, however, leaves high-frequency components missing. To solve this problem, we propose a high-frequency attention network for arbitrary magnification that restores the missing high-frequency information. In addition, only the high-frequency components are extracted from the image, using a mask generated by a hyperparameter in the DCT domain, so the high-frequency components that have a substantial impact on image quality are recovered by this procedure. The proposed framework achieves the performance of integer magnification and correctly retrieves the high-frequency components lost at arbitrary magnifications. We experimentally validated our model's superiority over state-of-the-art models.
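The core mechanism of resizing in the DCT spectral domain can be shown with a short NumPy/SciPy sketch: zero-padding the 2-D DCT spectrum to the target size yields an arbitrary (non-integer) magnification whose padded high-frequency region is empty, which is exactly what the proposed attention network is meant to restore. The threshold-style frequency mask below is an assumed stand-in for the paper's hyperparameter-generated mask.

```python
# Illustrative DCT-domain rescaling and a simple high-frequency mask (assumptions).
import numpy as np
from scipy.fft import dctn, idctn

def dct_rescale(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    h, w = img.shape
    spec = dctn(img, norm="ortho")
    padded = np.zeros((out_h, out_w), dtype=spec.dtype)
    padded[:min(h, out_h), :min(w, out_w)] = spec[:min(h, out_h), :min(w, out_w)]
    # scale so the mean intensity is preserved under the orthonormal DCT
    return idctn(padded, norm="ortho") * np.sqrt(out_h * out_w / (h * w))

def high_frequency_mask(h: int, w: int, cutoff: float = 0.25) -> np.ndarray:
    """1 where the normalized row+column frequency index exceeds the cutoff, else 0."""
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    return ((yy / h + xx / w) > cutoff).astype(np.float32)

img = np.random.rand(48, 48)
up = dct_rescale(img, 67, 67)                               # ~1.4x, non-integer scale
hf = dctn(up, norm="ortho") * high_frequency_mask(67, 67)   # band the network must restore
print(up.shape, float(np.abs(hf).mean()))
```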


Micromachines ◽ 2021 ◽ Vol 13 (1) ◽ pp. 54
Author(s): Min Zhang, Huibin Wang, Zhen Zhang, Zhe Chen, Jie Shen

Recently, with the development of convolutional neural networks, single-image super-resolution (SISR) has achieved steadily better performance. However, the practical application of image super-resolution is limited by large parameter counts and heavy computation. In this work, we present a lightweight multi-scale asymmetric attention network (MAAN), which consists of a coarse-grained feature block (CFB), fine-grained feature blocks (FFBs), and a reconstruction block (RB). MAAN adopts multiple paths to facilitate information flow and strike a better balance between performance and parameters. Specifically, the FFB applies a multi-scale attention residual block (MARB) to capture richer features by exploiting pixel-to-pixel correlations. The asymmetric multi-weights attention blocks (AMABs) in the MARB are designed to produce the attention maps that improve SISR efficiency and readiness. Extensive experimental results show that our method achieves performance comparable to current advanced lightweight SISR methods while using fewer parameters.
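The abstract names two ingredients without detailing them, so the sketch below is a hypothetical PyTorch rendering of their spirit: a residual block with parallel receptive fields for multi-scale features, and an attention branch built from factorized (asymmetric) 1x5 and 5x1 convolutions. It is not the actual MARB/AMAB layout.

```python
# Hypothetical multi-scale residual block with an asymmetric attention branch.
import torch
import torch.nn as nn

class AsymmetricAttention(nn.Module):
    """Spatial attention from a factorized 1x5 + 5x1 convolution pair."""
    def __init__(self, channels: int):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(channels, channels, (1, 5), padding=(0, 2)),
            nn.Conv2d(channels, channels, (5, 1), padding=(2, 0)),
            nn.Sigmoid())

    def forward(self, x):
        return x * self.attn(x)

class MultiScaleAttentionResidualBlock(nn.Module):
    def __init__(self, channels: int = 48):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels // 2, 3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels // 2, 5, padding=2)
        self.attn = AsymmetricAttention(channels)
        self.merge = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        multi = torch.cat([self.branch3(x), self.branch5(x)], dim=1)  # multi-scale features
        return x + self.merge(self.attn(multi))                       # attentive residual

x = torch.randn(1, 48, 40, 40)
print(MultiScaleAttentionResidualBlock()(x).shape)  # torch.Size([1, 48, 40, 40])
```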


2019 ◽ Vol 29 (08) ◽ pp. 2050121
Author(s): Huaijuan Zang, Leilei Zhu, Zhenglong Ding, Xinke Li, Shu Zhan

Recently, deep convolutional neural networks (CNNs) have achieved great success in single image super-resolution (SISR); in particular, dense skip connections and residual learning structures promote better performance. However, most existing deep CNN-based networks rely on interpolating the upsampled original image or applying transposed convolution in the reconstruction stage, and thus do not fully exploit the hierarchical features of the network for the final reconstruction. In this paper, we present a novel cascaded Dense-UNet (CDU) structure that takes full advantage of all hierarchical features for SISR. In each Dense-UNet block (DUB), many short, dense skip pathways facilitate the flow of information and integrate different receptive fields. A series of DUBs are concatenated to acquire high-resolution features and capture complementary contextual information, with the upsampling operators placed inside the DUBs. Furthermore, residual learning is introduced to fuse shallow features from the low-resolution (LR) image with deep features from the cascaded DUBs, further boosting the super-resolution (SR) reconstruction results. The proposed method is evaluated quantitatively and qualitatively on four benchmark datasets; our network achieves performance comparable to state-of-the-art super-resolution approaches and produces pleasing visual results.
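A compact PyTorch sketch of the Dense-UNet idea is given below: a densely connected group of convolutions, a contracting/expanding path, and a skip across the whole block, with two such blocks cascaded. Layer counts and channel widths are illustrative assumptions, not the paper's CDU configuration.

```python
# Sketch of a Dense-UNet-style block (dense skips + U-shaped down/up path),
# cascaded twice; hypothetical sizes, not the authors' exact design.
import torch
import torch.nn as nn

class DenseUNetBlock(nn.Module):
    def __init__(self, channels: int = 32, growth: int = 16, layers: int = 3):
        super().__init__()
        self.dense = nn.ModuleList([
            nn.Sequential(nn.Conv2d(channels + i * growth, growth, 3, padding=1),
                          nn.ReLU(inplace=True))
            for i in range(layers)])
        self.down = nn.Conv2d(channels + layers * growth, channels, 3, stride=2, padding=1)
        self.up = nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):
        feats = [x]
        for layer in self.dense:                          # dense skip pathways
            feats.append(layer(torch.cat(feats, dim=1)))
        encoded = self.down(torch.cat(feats, dim=1))      # contracting path
        decoded = self.up(encoded)                        # expanding path
        return self.fuse(torch.cat([decoded, x], dim=1))  # U-shaped skip across the block

blocks = nn.Sequential(DenseUNetBlock(), DenseUNetBlock())  # cascaded DUBs
x = torch.randn(1, 32, 40, 40)
print(blocks(x).shape)  # torch.Size([1, 32, 40, 40])
```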


2021 ◽ Vol 13 (19) ◽ pp. 3848
Author(s): Yuntao Wang, Lin Zhao, Liman Liu, Huaifei Hu, Wenbing Tao

Designing more lightweight image super-resolution (SR) algorithms is extremely important for devices with low computing power or for portable devices. Recently, most SR methods have achieved outstanding performance only at the expense of computational cost and memory storage, or vice versa. To address this problem, we introduce a lightweight U-shaped residual network (URNet) for fast and accurate image SR. Specifically, we propose a more effective feature distillation pyramid residual group (FDPRG) to extract features from low-resolution images. The FDPRG effectively reuses the learned features through dense shortcuts and captures multi-scale information with a cascaded feature pyramid block. Based on the U-shaped structure, we use a step-by-step fusion strategy to improve the fusion of features from different blocks; this differs from general SR methods, which fuse the features of all basic blocks with a single Concat operation. Moreover, a lightweight asymmetric residual non-local block is proposed to model global context information and further improve SR performance. Finally, a high-frequency loss function is designed to alleviate the smoothing of image details caused by pixel-wise losses. The proposed modules and high-frequency loss function can also be easily plugged into multiple mature architectures to improve their SR performance. Extensive experiments on multiple natural image datasets and remote sensing image datasets show that URNet achieves a better trade-off between image SR performance and model complexity than other state-of-the-art SR methods.
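A high-frequency loss of the kind described above can be sketched in a few lines: take the high-pass residual of both the SR output and the ground truth (image minus a local-average blur) and penalize their L1 difference alongside the usual pixel-wise loss. The filter choice and weighting below are assumptions, not URNet's exact formulation.

```python
# Minimal sketch of a high-frequency loss added to a pixel-wise L1 loss.
import torch
import torch.nn.functional as F

def high_frequency_loss(sr: torch.Tensor, hr: torch.Tensor, kernel: int = 5) -> torch.Tensor:
    """sr, hr: (N, C, H, W). Penalizes differences in the detail (high-pass) band."""
    def high_pass(img):
        blur = F.avg_pool2d(img, kernel, stride=1, padding=kernel // 2)
        return img - blur                      # residual = details removed by the blur
    return F.l1_loss(high_pass(sr), high_pass(hr))

def total_loss(sr, hr, weight: float = 0.1):
    # pixel-wise L1 plus the high-frequency term that counters over-smoothing
    return F.l1_loss(sr, hr) + weight * high_frequency_loss(sr, hr)

sr, hr = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
print(total_loss(sr, hr).item())
```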

