Single Image Super-Resolution Based on Global Dense Feature Fusion Convolutional Network

Sensors, 2019, Vol. 19 (2), pp. 316
Author(s): Wang Xu, Renwen Chen, Bin Huang, Xiang Zhang, Chuan Liu

Deep neural networks (DNNs) have recently been widely adopted for single image super-resolution (SISR) with great success. As a network goes deeper, its intermediate features become hierarchical. However, most DNN-based SISR methods do not make full use of these hierarchical features: the features cannot be read directly by subsequent layers, so earlier hierarchical information has little influence on later layer outputs and performance suffers. To address this issue, a novel global dense feature fusion convolutional network (DFFNet) is proposed, which takes full advantage of global intermediate features. In particular, a feature fusion block (FFblock) is introduced as the basic module. Each block can directly read the raw global features of all previous blocks and then learn the spatial and channel correlations between features in a holistic way, yielding a continuous global information memory mechanism. Experiments on the benchmark tests show that the proposed DFFNet achieves favorable performance against state-of-the-art methods.
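The continuous global memory mechanism described above can be sketched in plain NumPy: each block reads the raw concatenation of all previous block outputs and mixes them with a 1x1 convolution (here a per-pixel matrix multiply). All function names, shapes, and the block count are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def conv1x1(x, w):
    # x: (H, W, C_in), w: (C_in, C_out) -- a 1x1 convolution is just
    # per-pixel channel mixing, i.e. a matrix multiply over the last axis
    return x @ w

def dffnet_sketch(x, num_blocks=3, growth=8, rng=None):
    # x: (H, W, C) input feature map; each sketched "FFblock" reads the
    # raw concatenation of ALL previous outputs (global dense fusion)
    rng = rng or np.random.default_rng(0)
    features = [x]
    for _ in range(num_blocks):
        fused = np.concatenate(features, axis=-1)           # global read
        w = rng.standard_normal((fused.shape[-1], growth)) * 0.1
        out = np.maximum(conv1x1(fused, w), 0.0)            # fuse + ReLU
        features.append(out)                                # expose to later blocks
    return np.concatenate(features, axis=-1)

y = dffnet_sketch(np.ones((4, 4, 8)))
print(y.shape)  # channels grow by `growth` per block: 8 + 3*8 -> (4, 4, 32)
```

Because every block's output stays in the running `features` list, no hierarchical information is discarded before the final fusion, which is the point of the global memory mechanism.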

2020, Vol. 12 (10), pp. 1660
Author(s): Qiang Li, Qi Wang, Xuelong Li

Deep learning-based hyperspectral image super-resolution (SR) methods have achieved great success recently. However, previous works suffer from two main problems. One is the use of full three-dimensional convolutions, which inflates the parameter count of the network. The other is that spatial information in the hyperspectral image is not sufficiently mined while the spectral information is being extracted. To address these issues, in this paper we propose a mixed convolutional network (MCNet) for hyperspectral image super-resolution. We design a novel mixed convolutional module (MCM) that extracts potential features with 2D/3D convolutions instead of a single type of convolution, enabling the network to better mine the spatial features of the hyperspectral image. To exploit the effective features from the 2D units, we design a local feature fusion that adaptively fuses all the hierarchical features in the 2D units. In the 3D unit, we employ spatially and spectrally separable 3D convolutions to extract spatial and spectral information, which reduces otherwise unaffordable memory usage and training time. Extensive evaluations and comparisons on three benchmark datasets demonstrate that the proposed approach achieves superior performance over existing state-of-the-art methods.
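The parameter saving from factorizing a full k x k x k 3D convolution into a spatial 1 x k x k convolution followed by a spectral k x 1 x 1 convolution can be checked with simple arithmetic. The channel counts and kernel size below are illustrative assumptions, not values from the paper.

```python
def conv3d_params(c_in, c_out, k):
    # full 3-D convolution: one k*k*k kernel per (in, out) channel pair
    return c_in * c_out * k ** 3

def separable3d_params(c_in, c_out, k):
    # spatial 1 x k x k conv followed by spectral k x 1 x 1 conv
    spatial = c_in * c_out * k * k    # mixes spatial neighborhoods
    spectral = c_out * c_out * k      # mixes adjacent spectral bands
    return spatial + spectral

full = conv3d_params(64, 64, 3)       # 64*64*27 = 110592 weights
sep = separable3d_params(64, 64, 3)   # 36864 + 12288 = 49152 weights
print(full, sep, full / sep)          # separable uses 2.25x fewer here
```

The ratio grows with the kernel size k, which is why the separable form also helps with memory usage and training time, as the abstract notes.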


2019, Vol. 26 (4), pp. 538-542
Author(s): Wenming Yang, Wei Wang, Xuechen Zhang, Shuifa Sun, Qingmin Liao

Author(s): Yanchun Li, Jianglian Cao, Zhetao Li, Sangyoon Oh, Nobuyoshi Komuro

Single image super-resolution attempts to reconstruct a high-resolution (HR) image from its corresponding low-resolution (LR) image, and has been a research hotspot in computer vision and image processing for decades. To improve the accuracy of super-resolved images, many works adopt very deep networks to model the mapping from LR to HR, at the cost of heavy memory and computation consumption. In this article, we design a lightweight dense connection distillation network by combining feature fusion units with dense connection distillation blocks (DCDB) that comprise selective cascading and dense distillation components. Dense connections are used both between and within the distillation blocks, providing rich information for image reconstruction by fusing shallow and deep features. In each DCDB, the dense distillation module concatenates the remaining feature maps of all previous layers to extract useful information; the selected features are then weighted by the proposed layer contrast-aware channel attention mechanism; finally, the cascade module aggregates the features. The distillation mechanism helps reduce training parameters and improve training efficiency, and the layer contrast-aware channel attention further improves the performance of the model. Qualitative and quantitative experimental results on several benchmark datasets show that the proposed method achieves a better tradeoff between accuracy and efficiency.
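The idea behind contrast-aware channel attention can be illustrated with a minimal NumPy sketch: where plain channel attention pools each channel with its mean only, a contrast-aware variant also adds the per-channel standard deviation, so high-contrast channels receive larger gating weights. The statistic and gating below are a simplified assumption, not the exact mechanism from the article.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def contrast_aware_attention(x):
    # x: (H, W, C). Summarize each channel by mean + standard deviation
    # (the "contrast" term), then gate each channel by a sigmoid weight.
    mean = x.mean(axis=(0, 1))
    std = x.std(axis=(0, 1))
    weights = sigmoid(mean + std)     # one weight in (0, 1) per channel
    return x * weights                # rescale each channel

# two toy channels: a flat zero channel and a flat one channel
x = np.stack([np.zeros((4, 4)), np.ones((4, 4))], axis=-1)
out = contrast_aware_attention(x)
print(out.shape)  # (4, 4, 2); channel weights differ per channel statistic
```

Because the weights come from per-channel statistics rather than learned parameters, this sketch is parameter-free; a real implementation would typically pass the pooled statistic through a small learned bottleneck before the sigmoid.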

