Hyperspectral Image Super-Resolution under the Guidance of Deep Gradient Information

2021, Vol 13 (12), pp. 2382
Author(s): Minghua Zhao, Jiawei Ning, Jing Hu, Tingting Li

Hyperspectral image (HSI) super-resolution has gained great attention in remote sensing due to its effectiveness in enhancing the spatial information of the HSI while preserving its high spectral discriminative ability, without modifying the imaging hardware. In this paper, we propose a novel HSI super-resolution method based on a gradient-guided residual dense network (G-RDN), in which the spatial gradient is exploited to guide the super-resolution process. Specifically, the super-resolving process comprises three modules. First, the spatial mapping between the low-resolution HSI and the desired high-resolution HSI is learned via a residual dense network, which fully exploits the hierarchical features learned from all the convolutional layers. Second, the gradient detail is extracted via a residual network (ResNet) and is further utilized to guide the super-resolution process. Finally, an empirical weight is set between the fully obtained global hierarchical features and the gradient details. Experimental results and data analysis on three benchmark datasets with different scaling factors demonstrate that the proposed G-RDN achieves favorable performance.
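For illustration, the sketch below shows in PyTorch how a gradient branch might guide a residual-dense spatial branch. The block depths, the first-order-difference gradient operator, and the weight `alpha` are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualDenseBlock(nn.Module):
    def __init__(self, channels=64, growth=32, layers=4):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(channels + i * growth, growth, 3, padding=1)
            for i in range(layers))
        self.fuse = nn.Conv2d(channels + layers * growth, channels, 1)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(F.relu(conv(torch.cat(feats, dim=1))))
        return x + self.fuse(torch.cat(feats, dim=1))  # local residual learning

def spatial_gradient(x):
    # First-order differences as a simple stand-in for the gradient detail map.
    gx = F.pad(x[..., :, 1:] - x[..., :, :-1], (0, 1, 0, 0))
    gy = F.pad(x[..., 1:, :] - x[..., :-1, :], (0, 0, 0, 1))
    return gx + gy

class GRDNSketch(nn.Module):
    def __init__(self, bands=31, channels=64, scale=4, alpha=0.2):
        super().__init__()
        self.alpha = alpha  # empirical weight between the two branches
        self.head = nn.Conv2d(bands, channels, 3, padding=1)
        self.rdn = nn.Sequential(*[ResidualDenseBlock(channels) for _ in range(3)])
        self.grad_head = nn.Conv2d(bands, channels, 3, padding=1)
        self.grad_res = nn.Sequential(*[
            nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())
            for _ in range(3)])
        self.up = nn.Sequential(
            nn.Conv2d(channels, bands * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale))

    def forward(self, lr_hsi):
        spatial = self.rdn(self.head(lr_hsi))                           # hierarchical features
        grad = self.grad_res(self.grad_head(spatial_gradient(lr_hsi)))  # gradient details
        fused = spatial + self.alpha * grad                             # gradient-guided fusion
        return self.up(fused)

sr = GRDNSketch()(torch.randn(1, 31, 16, 16))  # -> (1, 31, 64, 64)
```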

2020, Vol 12 (10), pp. 1660
Author(s): Qiang Li, Qi Wang, Xuelong Li

Deep learning-based hyperspectral image super-resolution (SR) methods have achieved great success recently. However, previous works suffer from two main problems. One is the reliance on standard three-dimensional convolution, which increases the number of network parameters. The other is that the spatial information of the hyperspectral image is not sufficiently mined while the spectral information is being extracted. To address these issues, in this paper we propose a mixed convolutional network (MCNet) for hyperspectral image super-resolution. We design a novel mixed convolutional module (MCM) that extracts the potential features with both 2D and 3D convolutions instead of a single type of convolution, enabling the network to mine more spatial features of the hyperspectral image. To exploit the effective features from the 2D units, we design a local feature fusion that adaptively aggregates all the hierarchical features in the 2D units. In the 3D unit, we employ spatially and spectrally separable 3D convolution to extract spatial and spectral information, which reduces otherwise unaffordable memory usage and training time. Extensive evaluations and comparisons on three benchmark datasets demonstrate that the proposed approach achieves superior performance in comparison to existing state-of-the-art methods.
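A rough PyTorch sketch of mixing a 2D unit with a spatially/spectrally separable 3D unit on a hyperspectral cube is given below; treating the HSI as a multi-channel 3D volume and the chosen channel counts are assumptions for illustration only, not MCNet's exact layout.

```python
import torch
import torch.nn as nn

class SeparableConv3d(nn.Module):
    """Spatial (1x3x3) followed by spectral (3x1x1) 3D convolution."""
    def __init__(self, channels):
        super().__init__()
        self.spatial = nn.Conv3d(channels, channels, (1, 3, 3), padding=(0, 1, 1))
        self.spectral = nn.Conv3d(channels, channels, (3, 1, 1), padding=(1, 0, 0))

    def forward(self, x):
        return self.spectral(torch.relu(self.spatial(x)))

class MixedConvModule(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.unit3d = SeparableConv3d(channels)
        # The 2D unit works band-wise after folding the spectral axis into
        # the batch dimension; a 1x1 conv fuses its outputs (local feature fusion).
        self.unit2d = nn.Conv2d(channels, channels, 3, padding=1)
        self.fuse2d = nn.Conv2d(channels * 2, channels, 1)

    def forward(self, x):                       # x: (B, C, bands, H, W)
        b, c, d, h, w = x.shape
        y3d = self.unit3d(x)
        x2d = x.permute(0, 2, 1, 3, 4).reshape(b * d, c, h, w)
        y2d = torch.relu(self.unit2d(x2d))
        y2d = self.fuse2d(torch.cat([x2d, y2d], dim=1))
        y2d = y2d.reshape(b, d, c, h, w).permute(0, 2, 1, 3, 4)
        return x + y3d + y2d                    # residual mix of the 2D and 3D paths

out = MixedConvModule()(torch.randn(1, 16, 31, 8, 8))   # shape preserved
```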


2019, Vol 11 (23), pp. 2859
Author(s): Jiaojiao Li, Ruxing Cui, Bo Li, Rui Song, Yunsong Li, ...

Hyperspectral image (HSI) super-resolution (SR) is of great application value and has attracted broad attention. The hyperspectral single image super-resolution (HSISR) task is correspondingly difficult because no auxiliary high-resolution image is available. To tackle this challenging task, and in contrast to existing learning-based HSISR algorithms, in this paper we propose a novel framework, a 1D–2D attentional convolutional neural network, which employs a separation strategy to extract spatial and spectral information and then fuse them gradually. More specifically, our network consists of two streams: a spatial one and a spectral one. The spectral stream is mainly composed of 1D convolutions to encode small changes in the spectrum, while 2D convolutions, cooperating with an attention mechanism, are used in the spatial pathway to encode spatial information. Furthermore, a novel hierarchical side-connection strategy is proposed to effectively fuse spectral and spatial information. Compared with a typical 3D convolutional neural network (CNN), the 1D–2D CNN is easier to train and has fewer parameters. More importantly, the proposed framework not only offers an effective solution to the HSISR problem, but also shows potential for hyperspectral pansharpening. Experiments on widely used benchmarks for single-image SR and hyperspectral pansharpening demonstrate that the proposed method outperforms other state-of-the-art methods in both visual quality and quantitative measurements.
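The sketch below illustrates the two-stream idea in PyTorch: a 1D convolution over each pixel's spectrum and a 2D convolution with channel attention over the spatial axes, joined by a side connection. The dimensions and the particular attention form are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.gate(x)

class SpatialSpectralBlock(nn.Module):
    def __init__(self, bands=31, feats=64):
        super().__init__()
        self.spectral1d = nn.Conv1d(1, 1, kernel_size=3, padding=1)  # per-pixel spectrum
        self.spatial2d = nn.Conv2d(bands, feats, 3, padding=1)
        self.attn = ChannelAttention(feats)
        self.side = nn.Conv2d(feats, bands, 1)                       # hierarchical side connection

    def forward(self, x):                       # x: (B, bands, H, W)
        b, c, h, w = x.shape
        # Spectral stream: treat each pixel's spectrum as a 1D signal.
        spec = x.permute(0, 2, 3, 1).reshape(b * h * w, 1, c)
        spec = self.spectral1d(spec).reshape(b, h, w, c).permute(0, 3, 1, 2)
        # Spatial stream: 2D convolution plus channel attention.
        spat = self.attn(torch.relu(self.spatial2d(x)))
        return x + spec + self.side(spat)       # fuse the two streams gradually

y = SpatialSpectralBlock()(torch.randn(1, 31, 8, 8))
```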


2019, Vol 29 (08), pp. 2050121
Author(s): Huaijuan Zang, Leilei Zhu, Zhenglong Ding, Xinke Li, Shu Zhan

Recently, deep convolutional neural networks (CNNs) have achieved great success in single image super-resolution (SISR). In particular, dense skip connections and residual learning structures promote better performance. However, most existing deep CNN-based networks rely on interpolation of the upsampled original image, or on transposed convolution in the reconstruction stage, and thus do not fully employ the hierarchical features of the network for the final reconstruction. In this paper, we present a novel cascaded Dense-UNet (CDU) structure that takes full advantage of all hierarchical features for SISR. In each Dense-UNet block (DUB), many short, dense skip pathways facilitate the flow of information and integrate different receptive fields. A series of DUBs are concatenated to acquire high-resolution features and capture complementary contextual information, and the upsampling operators are placed inside the DUBs. Furthermore, residual learning is introduced to our network, fusing shallow features from the low-resolution (LR) image with deep features from the cascaded DUBs to further boost super-resolution (SR) reconstruction results. The proposed method is evaluated quantitatively and qualitatively on four benchmark datasets; our network achieves performance comparable to state-of-the-art super-resolution approaches and produces visually pleasing results.
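A minimal PyTorch sketch of one Dense-UNet block and a short cascade with a global residual is shown below. Depths and widths are placeholders, and the final upsampling is simplified to a tail PixelShuffle rather than the in-block upsampling described above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseUNetBlock(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.down = nn.Conv2d(channels, channels, 3, stride=2, padding=1)
        self.mid1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.mid2 = nn.Conv2d(channels * 2, channels, 3, padding=1)  # dense skip
        self.up = nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1)
        self.fuse = nn.Conv2d(channels * 2, channels, 1)             # skip from block input

    def forward(self, x):
        d = F.relu(self.down(x))
        m1 = F.relu(self.mid1(d))
        m2 = F.relu(self.mid2(torch.cat([d, m1], dim=1)))   # short dense pathway
        u = F.relu(self.up(m2))
        return self.fuse(torch.cat([x, u], dim=1))          # integrate receptive fields

class CascadedDenseUNet(nn.Module):
    def __init__(self, channels=32, n_blocks=3, scale=2):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.blocks = nn.ModuleList(DenseUNetBlock(channels) for _ in range(n_blocks))
        self.tail = nn.Sequential(
            nn.Conv2d(channels, 3 * scale * scale, 3, padding=1), nn.PixelShuffle(scale))
        self.scale = scale

    def forward(self, lr):
        shallow = self.head(lr)
        deep = shallow
        for blk in self.blocks:
            deep = blk(deep)                                 # cascaded DUBs
        sr = self.tail(shallow + deep)                       # fuse shallow and deep features
        return sr + F.interpolate(lr, scale_factor=self.scale, mode="bilinear",
                                  align_corners=False)       # global residual from the LR image

sr = CascadedDenseUNet()(torch.randn(1, 3, 24, 24))          # -> (1, 3, 48, 48)
```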


Author(s): Qiang Yu, Feiqiang Liu, Long Xiao, Zitao Liu, Xiaomin Yang

Deep-learning (DL)-based methods are of growing importance in the field of single image super-resolution (SISR). The practical application of these DL-based models remains a problem, however, because of their heavy computation and storage requirements. The powerful feature maps of the hidden layers in a convolutional neural network (CNN) help the model learn useful information, but there is redundancy among feature maps that can be further exploited. To address these issues, this paper proposes a lightweight efficient feature generating network (EFGN) for SISR, built from efficient feature generating blocks (EFGBs). Specifically, an EFGB applies plain operations to the original features to produce additional feature maps with only a slight increase in parameters. With the help of these extra feature maps, the network can extract more useful information from the low-resolution (LR) image to reconstruct the desired high-resolution (HR) image. Experiments conducted on the benchmark datasets demonstrate that the proposed EFGN outperforms other deep-learning-based methods in most cases while having relatively low model complexity. Additionally, running-time measurements indicate its feasibility for real-time monitoring.
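The following sketch shows one way an efficient feature generating block could produce extra feature maps from cheap operations, here assumed to be depthwise convolutions in the spirit of ghost-style modules; the ratio and channel sizes are illustrative, not the EFGB's exact definition.

```python
import torch
import torch.nn as nn

class EfficientFeatureGeneratingBlock(nn.Module):
    def __init__(self, in_ch=64, out_ch=64, ratio=2):
        super().__init__()
        primary = out_ch // ratio
        # Primary features come from an ordinary convolution.
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, primary, 3, padding=1), nn.ReLU())
        # Cheap depthwise convolutions generate extra maps from the primary ones.
        self.cheap = nn.Sequential(
            nn.Conv2d(primary, out_ch - primary, 3, padding=1, groups=primary), nn.ReLU())

    def forward(self, x):
        p = self.primary(x)
        return torch.cat([p, self.cheap(p)], dim=1)   # more maps, few extra parameters

blk = EfficientFeatureGeneratingBlock()
print(sum(t.numel() for t in blk.parameters()))       # parameter count stays modest
out = blk(torch.randn(1, 64, 32, 32))                 # -> (1, 64, 32, 32)
```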


Author(s): A. Valli Bhasha, B. D. Venkatramana Reddy

Image super-resolution methods based on deep learning with Convolutional Neural Networks (CNNs) have been producing admirable advancements. The proposed image super-resolution model involves two main analyses: (i) analysis using an Adaptive Discrete Wavelet Transform (ADWT) with a Deep CNN and (ii) analysis using Non-negative Structured Sparse Representation (NSSR). NSSR is used to recover the high-resolution (HR) images from the low-resolution (LR) images. The experimental evaluation involves two phases: training and testing. In the training phase, the residual images of the dataset are used to train the optimized Deep CNN. In the testing phase, the super-resolution image is generated from the HR wavelet subbands (HRSB) and the residual images. As the main novelty, the filter coefficients of the DWT are optimized by a hybrid Firefly-based Spotted Hyena Optimization (FF-SHO) to develop the ADWT. Finally, a performance evaluation on two benchmark hyperspectral image datasets confirms the effectiveness of the proposed model over existing algorithms.
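As a loose illustration of wavelet-domain reconstruction in a testing phase, the sketch below treats the LR image as the approximation subband of the desired HR image and adds predicted detail subbands before the inverse DWT (using PyWavelets). The `predict_detail_subbands` function is a hypothetical stand-in for the optimized Deep CNN, the FF-SHO filter optimization is not reproduced, and this is a common wavelet-SR formulation rather than the authors' exact pipeline.

```python
import numpy as np
import pywt

def predict_detail_subbands(lr_image):
    # Hypothetical stand-in for the trained Deep CNN: it should return the
    # predicted LH, HL and HH detail subbands (here simply zeros).
    zeros = np.zeros_like(lr_image)
    return zeros, zeros, zeros

def reconstruct_sr(lr_image):
    # Treat the LR image as the approximation (LL) subband of the desired HR
    # image, attach the predicted detail subbands, and invert the transform.
    lh, hl, hh = predict_detail_subbands(lr_image)
    return pywt.idwt2((lr_image, (lh, hl, hh)), "haar")   # ~2x spatial size

sr = reconstruct_sr(np.random.rand(64, 64))               # -> (128, 128)
```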


Author(s): Feiqiang Liu, Qiang Yu, Lihui Chen, Gwanggil Jeon, Marcelo Keese Albertini, ...

Electronics, 2021, Vol 10 (11), pp. 1234
Author(s): Lei Zha, Yu Yang, Zicheng Lai, Ziwei Zhang, Juan Wen

In recent years, neural networks for single image super-resolution (SISR) have applied increasingly deep network structures to extract additional image details, which brings difficulties in model training. To deal with these deep-model training problems, researchers utilize dense skip connections to promote the model's feature representation ability by reusing deep features with different receptive fields. Benefiting from the dense connection block, SRDenseNet has achieved excellent performance in SISR. Although the densely connected structure provides rich information, it also introduces redundant and useless information. To tackle this problem, in this paper we propose a Lightweight Dense Connected Approach with Attention for Single Image Super-Resolution (LDCASR), which employs an attention mechanism to extract useful information along the channel dimension. In particular, we propose the recursive dense group (RDG), consisting of Dense Attention Blocks (DABs), which obtains more significant representations by extracting deep features with the aid of both dense connections and the attention module, making the whole network focus on learning more advanced feature information. Additionally, we introduce group convolution in the DABs, which reduces the number of parameters to 0.6 M. Extensive experiments on benchmark datasets demonstrate the superiority of our proposed method over five chosen SISR methods.
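A hedged PyTorch sketch of a dense attention block with group convolution and channel attention, wrapped in a recursive dense group, is given below; the group count, growth rate, and widths are illustrative assumptions rather than the LDCASR configuration.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.gate(x)    # reweight channels, suppressing redundant ones

class DenseAttentionBlock(nn.Module):
    def __init__(self, channels=64, growth=32, groups=4):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, growth, 3, padding=1, groups=groups)
        self.conv2 = nn.Conv2d(channels + growth, growth, 3, padding=1, groups=groups)
        self.attn = ChannelAttention(channels + 2 * growth)
        self.fuse = nn.Conv2d(channels + 2 * growth, channels, 1)

    def forward(self, x):
        f1 = torch.relu(self.conv1(x))
        f2 = torch.relu(self.conv2(torch.cat([x, f1], dim=1)))
        dense = torch.cat([x, f1, f2], dim=1)             # dense connection
        return x + self.fuse(self.attn(dense))            # attention, then fusion

class RecursiveDenseGroup(nn.Module):
    def __init__(self, channels=64, n_blocks=3):
        super().__init__()
        self.blocks = nn.Sequential(*[DenseAttentionBlock(channels) for _ in range(n_blocks)])

    def forward(self, x):
        return x + self.blocks(x)                         # group-level residual

y = RecursiveDenseGroup()(torch.randn(1, 64, 16, 16))
```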

