A spatial constraint and deep learning based hyperspectral image super-resolution method

Author(s):  
Jing Hu ◽  
Yunsong Li ◽  
Xi Zhao ◽  
Weiying Xie

IEEE Access ◽
2019 ◽  
Vol 7 ◽  
pp. 12319-12327 ◽  
Author(s):  
Shengxiang Zhang ◽  
Gaobo Liang ◽  
Shuwan Pan ◽  
Lixin Zheng

Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2348
Author(s):  
Zhe Liu ◽  
Yinqiang Zheng ◽  
Xian-Hua Han

Hyperspectral image (HSI) super-resolution (SR) is a challenging task due to its ill-posed nature, and it has attracted extensive attention from the research community. Previous methods concentrated on leveraging various hand-crafted image priors of a latent high-resolution hyperspectral (HR-HS) image to regularize the degradation model of the observed low-resolution hyperspectral (LR-HS) and high-resolution RGB (HR-RGB) images, and they exploited different optimization strategies to search for a plausible solution, which usually leads to limited reconstruction performance. Recently, deep-learning-based methods have evolved to automatically learn the abundant image priors of a latent HR-HS image, and they have made great progress in HSI super-resolution. However, current deep-learning methods face difficulties in designing ever more complicated and deeper neural network architectures to boost performance, and they require large-scale training triplets, such as the LR-HS, HR-RGB, and corresponding HR-HS images, for network training; this requirement significantly limits their applicability to real scenarios. In this work, a deep unsupervised fusion-learning framework is proposed for generating a latent HR-HS image from only the observed LR-HS and HR-RGB images, without preparing any other training triplets. Based on the fact that a convolutional neural network architecture is capable of capturing a large number of low-level statistics (priors) of images, the automatic learning of the underlying priors of spatial structures and spectral attributes in a latent HR-HS image is promoted using only its corresponding degraded observations. Specifically, the parameter space of a generative neural network is investigated to learn the required HR-HS image by minimizing the reconstruction errors of the observations according to the mathematical relations between the data. Moreover, special convolutional layers for approximating the degradation operations between the observations and the latent HR-HS image are designed to construct an end-to-end unsupervised learning framework for HSI super-resolution. Experiments on two benchmark HS datasets, CAVE and Harvard, demonstrate that the proposed method produces very promising results even under a large upscaling factor, outperforms other unsupervised state-of-the-art methods by a large margin, and thereby manifests its superiority and efficiency.
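Although the abstract gives no implementation details, the core unsupervised fusion idea can be sketched as follows: a generator network predicts the latent HR-HS cube, and degradation layers map that prediction back to the observed LR-HS and HR-RGB images, so the only training signal is the reconstruction error of the two observations. The sketch below is an illustration of that idea in PyTorch, not the authors' code; the channel counts, kernel sizes, and L1 loss are assumptions.

```python
# Minimal sketch of an unsupervised fusion framework (assumptions, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Predicts the latent HR-HS cube (B x hs_ch x H x W) from an input tensor."""
    def __init__(self, in_ch=32, hs_ch=31, feat=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, hs_ch, 3, padding=1),
        )
    def forward(self, z):
        return self.body(z)

class SpatialDegradation(nn.Module):
    """Approximates blurring + downsampling (HR-HS -> LR-HS) with a depthwise conv."""
    def __init__(self, hs_ch=31, scale=8, ksize=9):
        super().__init__()
        self.blur = nn.Conv2d(hs_ch, hs_ch, ksize, padding=ksize // 2,
                              groups=hs_ch, bias=False)
        self.scale = scale
    def forward(self, x):
        return self.blur(x)[:, :, ::self.scale, ::self.scale]

class SpectralDegradation(nn.Module):
    """Approximates the camera spectral response (HR-HS -> HR-RGB) with a 1x1 conv."""
    def __init__(self, hs_ch=31):
        super().__init__()
        self.srf = nn.Conv2d(hs_ch, 3, 1, bias=False)
    def forward(self, x):
        return self.srf(x)

def unsupervised_step(gen, spat, spec, z, lr_hs, hr_rgb, optimizer):
    """One optimization step driven only by the two observed images."""
    optimizer.zero_grad()
    hr_hs_hat = gen(z)
    loss = F.l1_loss(spat(hr_hs_hat), lr_hs) + F.l1_loss(spec(hr_hs_hat), hr_rgb)
    loss.backward()
    optimizer.step()
    return hr_hs_hat.detach(), loss.item()
```

Because no HR-HS ground truth enters the loss, the same loop can be run per scene at test time, which is what makes the framework training-triplet free.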


Author(s):  
Qiang Yu ◽  
Feiqiang Liu ◽  
Long Xiao ◽  
Zitao Liu ◽  
Xiaomin Yang

Deep-learning (DL)-based methods are of growing importance in the field of single image super-resolution (SISR). However, the practical application of these DL-based models remains a problem because they demand heavy computation and huge storage resources. The powerful feature maps of the hidden layers in convolutional neural networks (CNNs) help the model learn useful information, yet there is redundancy among these feature maps that can be further exploited. To address these issues, this paper proposes a lightweight efficient feature generating network (EFGN) for SISR, built from the efficient feature generating block (EFGB). Specifically, the EFGB applies plain operations to the original features to produce additional feature maps with only a slight increase in parameters. With the help of these extra feature maps, the network can extract more useful information from low-resolution (LR) images to reconstruct the desired high-resolution (HR) images. Experiments on benchmark datasets demonstrate that the proposed EFGN outperforms other deep-learning-based methods in most cases while possessing relatively lower model complexity. Additionally, running-time measurements indicate the feasibility of real-time monitoring.
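The abstract does not specify the internals of the EFGB. One plausible reading of "plain operations on the original features to produce more feature maps" is a ghost-module-style block: a standard convolution produces a set of base maps, and cheap depthwise operations derive the remaining maps from them. The PyTorch sketch below illustrates that reading under these assumptions; it is not the published EFGN implementation.

```python
# Hypothetical EFGB-style block: cheap operations generate extra feature maps.
import torch
import torch.nn as nn

class EfficientFeatureGeneratingBlock(nn.Module):
    def __init__(self, in_ch, out_ch, cheap_ratio=2):
        super().__init__()
        base_ch = out_ch // cheap_ratio      # maps produced by the standard conv
        extra_ch = out_ch - base_ch          # maps derived by cheap operations
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, base_ch, 3, padding=1), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(
            nn.Conv2d(base_ch, extra_ch, 3, padding=1, groups=base_ch),  # depthwise
            nn.ReLU(inplace=True))
    def forward(self, x):
        base = self.primary(x)
        extra = self.cheap(base)
        return torch.cat([base, extra], dim=1)   # out_ch feature maps in total

# Usage: y = EfficientFeatureGeneratingBlock(64, 64)(torch.randn(1, 64, 48, 48))
```

The parameter cost of the depthwise branch is a small fraction of a full convolution, which matches the abstract's claim of more feature maps with only slightly more parameters.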


Author(s):  
A. Valli Bhasha ◽  
B. D. Venkatramana Reddy

Image super-resolution methods based on deep learning with convolutional neural networks (CNNs) have produced admirable advancements. The proposed image super-resolution model involves two main analyses: (i) analysis using an Adaptive Discrete Wavelet Transform (ADWT) with a deep CNN and (ii) analysis using Non-negative Structured Sparse Representation (NSSR). The NSSR technique is used to recover high-resolution (HR) images from low-resolution (LR) images. The experimental evaluation involves two phases: training and testing. In the training phase, the residual images of the dataset are used to train the optimized deep CNN. In the testing phase, the super-resolution image is generated from the HR wavelet subbands (HRSB) and the residual images. As the main novelty, the filter coefficients of the DWT are optimized by a hybrid Firefly-based Spotted Hyena Optimization (FF-SHO) to develop the ADWT. Finally, a performance evaluation on two benchmark hyperspectral image datasets confirms the effectiveness of the proposed model over existing algorithms.
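The testing-phase flow described above (decompose with optimized wavelet filters, correct the subbands with CNN-predicted residuals, inverse transform) can be sketched as follows. This is an illustration only: the `predict_residuals` callable stands in for the trained deep CNN, a simple fitness function stands in for the FF-SHO metaheuristic, the synthesis filters are assumed to be the mirrored analysis filters, and `lr_up` is assumed to be the LR image interpolated to HR size.

```python
# Hypothetical sketch of the ADWT + residual-correction pipeline (PyWavelets).
import numpy as np
import pywt

def make_adwt(dec_lo, dec_hi):
    """Builds a custom wavelet from candidate analysis filter coefficients."""
    rec_lo, rec_hi = dec_lo[::-1], dec_hi[::-1]   # assumed mirror synthesis bank
    return pywt.Wavelet('adwt', filter_bank=[dec_lo, dec_hi, rec_lo, rec_hi])

def reconstruct_sr(lr_up, residual_subbands, wav):
    """Decompose the interpolated LR image, add predicted residual subbands,
    then inverse-transform to obtain the super-resolved image."""
    ll, (lh, hl, hh) = pywt.dwt2(lr_up, wav)
    r_ll, (r_lh, r_hl, r_hh) = residual_subbands
    corrected = (ll + r_ll, (lh + r_lh, hl + r_hl, hh + r_hh))
    return pywt.idwt2(corrected, wav)

def filter_fitness(dec_lo, dec_hi, lr_up, hr_image, predict_residuals):
    """Fitness a metaheuristic (FF-SHO in the paper) could minimize when tuning
    the filter coefficients: reconstruction error on a training pair."""
    wav = make_adwt(dec_lo, dec_hi)
    residuals = predict_residuals(pywt.dwt2(lr_up, wav))   # stand-in for the CNN
    sr = reconstruct_sr(lr_up, residuals, wav)
    h = min(sr.shape[0], hr_image.shape[0])
    w = min(sr.shape[1], hr_image.shape[1])
    return float(np.mean((sr[:h, :w] - hr_image[:h, :w]) ** 2))
```

In this reading, FF-SHO would repeatedly propose candidate `dec_lo`/`dec_hi` coefficient vectors and keep the ones with the lowest fitness, yielding the "adaptive" wavelet used at test time.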

