A Multi-Scale Wavelet 3D-CNN for Hyperspectral Image Super-Resolution

2019, Vol 11 (13), pp. 1557
Author(s): Yang, Zhao, Chan, Xiao

Super-resolution (SR) is significant for hyperspectral image (HSI) applications. In single-frame HSI SR, reconstructing detailed image structures in the high-resolution (HR) HSI is challenging, since no auxiliary image (e.g., an HR multispectral image) provides structural information. Wavelets can capture image structures in different orientations, and emphasizing the prediction of high-frequency wavelet sub-bands helps recover the detailed structures in HSI SR. In this study, we propose a multi-scale wavelet 3D convolutional neural network (MW-3D-CNN) for HSI SR, which predicts the wavelet coefficients of the HR HSI rather than reconstructing the HR HSI directly. To exploit the correlation in the spectral and spatial domains, the MW-3D-CNN is built with 3D convolutional layers. It consists of an embedding subnet and a predicting subnet: the embedding subnet extracts deep spatial-spectral features from the low-resolution (LR) HSI and represents the LR HSI as a set of feature cubes, which are then fed to the predicting subnet. The predicting subnet has multiple output branches, each corresponding to one wavelet sub-band and predicting its coefficients for the HR HSI. The HR HSI is obtained by applying the inverse wavelet transform to the predicted wavelet coefficients. In the training stage, we propose to train the MW-3D-CNN with an L1-norm loss, which is more suitable than the conventional L2-norm loss for penalizing errors in the different wavelet sub-bands. Experiments on both simulated and real spaceborne HSI demonstrate that the proposed algorithm is competitive with other state-of-the-art HSI SR methods.
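The wavelet-domain formulation can be illustrated with a single-level 2D Haar transform applied band-by-band: the network would predict the four sub-bands, and the HR band is recovered by the inverse transform. The following NumPy sketch (the `haar_dwt2`/`haar_idwt2` names are ours, not from the paper; real pipelines typically use a wavelet library such as PyWavelets) shows the transform pair and the sub-band L1 loss idea:

```python
import numpy as np

def haar_dwt2(band):
    """Single-level 2D Haar transform of one spectral band (even dims)."""
    a = band[0::2, 0::2]; b = band[0::2, 1::2]
    c = band[1::2, 0::2]; d = band[1::2, 1::2]
    ll = (a + b + c + d) / 2   # low-frequency approximation
    lh = (a - b + c - d) / 2   # column-difference detail
    hl = (a + b - c - d) / 2   # row-difference detail
    hh = (a - b - c + d) / 2   # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse transform: rebuilds the band from the four sub-bands."""
    h, w = ll.shape
    band = np.empty((2 * h, 2 * w))
    band[0::2, 0::2] = (ll + lh + hl + hh) / 2
    band[0::2, 1::2] = (ll - lh + hl - hh) / 2
    band[1::2, 0::2] = (ll + lh - hl - hh) / 2
    band[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return band

def subband_l1_loss(pred_subbands, true_subbands):
    """Mean-absolute-error summed over sub-bands, as in the L1 training loss."""
    return sum(np.abs(p - t).mean() for p, t in zip(pred_subbands, true_subbands))
```

The transform pair satisfies perfect reconstruction, so predicting the four sub-bands is equivalent in information to predicting the HR band itself.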

2021, Vol 13 (22), pp. 4621
Author(s): Dongxu Liu, Guangliang Han, Peixun Liu, Hang Yang, Xinglong Sun, ...

Multifarious hyperspectral image (HSI) classification methods based on convolutional neural networks (CNNs) have been proposed and achieve promising classification performance. However, HSI classification still suffers from various challenges, including abundant redundant information, insufficient spectral-spatial representation, and irregular class distribution. To address these issues, we propose a novel 2D-3D CNN with spectral-spatial multi-scale feature fusion for HSI classification, which consists of two feature-extraction streams, a feature-fusion module, and a classification scheme. First, we employ two distinct backbone modules for feature representation: a spectral and a spatial feature-extraction stream. The former utilizes a hierarchical feature-extraction module to capture multi-scale spectral features, while the latter extracts multi-stage spatial features through a multi-level fusion structure. With these network units, the category-attribute information of the HSI can be fully exploited. Then, to output more complete and robust information for classification, a multi-scale spectral-spatial-semantic feature-fusion module is presented, based on a decomposition-reconstruction structure. Finally, we design a classification scheme that further lifts the classification accuracy. Experimental results on three public datasets demonstrate that the proposed method outperforms state-of-the-art methods.
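The two-stream idea can be sketched in plain NumPy: multi-scale spectral features from smoothing the spectrum at several window sizes, spatial features from the mean spectrum of a pixel neighborhood, and fusion by concatenation. All function names and scale choices below are illustrative stand-ins; the paper's streams are learned 2D/3D convolutions, not fixed filters:

```python
import numpy as np

def multiscale_spectral(spectrum, scales=(1, 2, 4)):
    """Smooth the spectrum at several window sizes and stack the results."""
    feats = []
    for s in scales:
        kernel = np.ones(s) / s                      # box filter of width s
        feats.append(np.convolve(spectrum, kernel, mode="same"))
    return np.concatenate(feats)

def spatial_feature(cube, row, col, radius=1):
    """Mean spectrum over a (2*radius+1)^2 neighborhood of a pixel."""
    r0, r1 = max(0, row - radius), row + radius + 1
    c0, c1 = max(0, col - radius), col + radius + 1
    patch = cube[r0:r1, c0:c1]
    return patch.reshape(-1, cube.shape[-1]).mean(axis=0)

def fused_feature(cube, row, col):
    """Concatenate the spectral multi-scale and spatial neighborhood features."""
    return np.concatenate([multiscale_spectral(cube[row, col]),
                           spatial_feature(cube, row, col)])
```

For a cube with B bands and three scales, the fused vector has 3B + B entries; a classifier head would then act on this joint representation.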


IEEE Access, 2020, Vol 8, pp. 86367-86379
Author(s): Liguo Wang, Tianyi Bi, Yao Shi

2019, Vol 10 (1), pp. 237
Author(s): Fei Ma, Feixia Yang, Ziliang Ping, Wenqin Wang

The limitations of hyperspectral sensors usually lead to coarse spatial resolution in acquired images. A well-known fusion method, coupled non-negative matrix factorization (CNMF), often amounts to an ill-posed inverse problem with poor anti-noise performance. Moreover, from the perspective of matrix decomposition, matrixing the remotely sensed cubic data loses the data's structural information, which degrades the reconstructed images. Besides three-dimensional tensor-based fusion methods, Craig's minimum-volume criterion from hyperspectral unmixing can also be utilized to restore the structural information for hyperspectral image super-resolution. To address these difficulties simultaneously, this article incorporates joint spatial-spectral smoothing, a minimum-volume simplex constraint, and spatial sparsity regularization into the original CNMF, redefining a bi-convex problem. After convexification of the regularizers, alternating optimization decouples the regularized problem into two convex subproblems, which are then reformulated by separately vectorizing the variables via vector-matrix operators. The alternating direction method of multipliers (ADMM) is employed to split the variables and yield closed-form solutions. In addition, to relieve the bottleneck of high computational burden, especially when the problem size is large, complexity reduction is conducted to simplify the solutions with constructed matrices and tensor operators. Experimental results illustrate that the proposed algorithm outperforms state-of-the-art fusion methods, verifying the validity of the new fusion approach.
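The unregularized core of CNMF is a plain non-negative matrix factorization, which can be sketched with the classical multiplicative updates for the Frobenius objective ||V - WH||_F^2 (a minimal sketch of the building block only; the article's regularized, ADMM-based solver is considerably more involved):

```python
import numpy as np

def nmf(V, rank, iters=500, seed=0):
    """Factor V >= 0 into W @ H via Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 0.1   # e.g. endmember signatures
    H = rng.random((rank, n)) + 0.1   # e.g. abundance maps (matrixed)
    eps = 1e-12                       # avoid division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # fix W, update H
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # fix H, update W
    return W, H
```

The updates preserve non-negativity by construction and never increase the Frobenius error, which is the same alternating structure the article exploits when it decouples its regularized problem into two convex subproblems.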


Entropy, 2021, Vol 23 (8), pp. 956
Author(s): Hao Li, Yuanshu Zhang, Yong Ma, Xiaoguang Mei, Shan Zeng, ...

Representation-based algorithms have attracted great interest in hyperspectral image (HSI) classification. l1-minimization-based sparse representation (SR) selects only a few atoms and cannot fully reflect within-class information, while l2-minimization-based collaborative representation (CR) uses all of the atoms, leading to mixed-class information. Considering these problems, we propose the pairwise elastic net representation-based classification (PENRC) method. PENRC combines the l1-norm and l2-norm penalties and introduces a new penalty term built on a similarity matrix between dictionary atoms. This similarity matrix enables automatic group selection of highly correlated data, yielding more robust weight coefficients and better classification performance. To reduce computation cost and further improve classification accuracy, we use part of the atoms as a local adaptive dictionary rather than all of the training atoms. Furthermore, we incorporate the neighborhood information of each pixel and propose a joint pairwise elastic net representation-based classification (J-PENRC) method. Experimental results on chosen hyperspectral datasets confirm that our proposed algorithms outperform the other state-of-the-art algorithms.
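The representation-then-residual pipeline can be sketched as follows: code a test pixel over the dictionary with an elastic-net penalty, then assign the class whose atoms reconstruct the pixel with the smallest residual. This sketch uses a plain elastic net solved by ISTA and omits the pairwise similarity-matrix penalty that distinguishes PENRC; all names and parameter values are illustrative:

```python
import numpy as np

def elastic_net_code(D, y, lam1=0.01, lam2=0.01, iters=500):
    """ISTA for min_a 0.5||y - D a||^2 + lam1*||a||_1 + 0.5*lam2*||a||^2."""
    a = np.zeros(D.shape[1])
    step = 1.0 / (np.linalg.norm(D, 2) ** 2 + lam2)   # 1/Lipschitz constant
    for _ in range(iters):
        grad = D.T @ (D @ a - y) + lam2 * a           # smooth part gradient
        a = a - step * grad
        a = np.sign(a) * np.maximum(np.abs(a) - step * lam1, 0.0)  # soft-threshold
    return a

def classify_by_residual(D, labels, y, **kw):
    """Assign the class whose atoms best reconstruct y from its coefficients."""
    a = elastic_net_code(D, y, **kw)
    labels = np.asarray(labels)
    residuals = {}
    for c in np.unique(labels):
        mask = labels == c
        residuals[c] = np.linalg.norm(y - D[:, mask] @ a[mask])
    return min(residuals, key=residuals.get)
```

The l2 term keeps correlated atoms active together (the grouping effect the similarity matrix strengthens in PENRC), while the l1 term keeps the code sparse across classes.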


2019, Vol 11 (10), pp. 1173
Author(s): Xiaolin Han, Jing Yu, Jiqiang Luo, Weidong Sun

Fusion of a high-spatial-resolution hyperspectral (HHS) image from low-spatial-resolution hyperspectral (LHS) and high-spatial-resolution multispectral (HMS) images is usually formulated as spatial super-resolution of the LHS image with the help of the HMS image, which may lose detailed structural information. Facing this problem, the fusion of HMS and LHS images is instead formulated as a nonlinear spectral mapping from the HMS to the HHS image with the help of the LHS image, and a novel cluster-based fusion method using multi-branch BP neural networks (named CF-BPNNs) is proposed, to ensure a more reasonable spectral mapping for each cluster. In the training stage, considering that spectra are more similar within each cluster than between clusters, and so are the corresponding spectral mappings, unsupervised clustering is used to divide the spectra of the down-sampled HMS image (denoted LMS) into several clusters according to spectral correlation. Then, the spectrum pairs from the clustered LMS image and the corresponding LHS image are used to train multi-branch BP neural networks (BPNNs), establishing the nonlinear spectral mapping for each cluster. In the fusion stage, supervised clustering groups the spectra of the HMS image into the clusters determined during training, and the final HHS image is reconstructed from the clustered HMS image using the trained multi-branch BPNNs accordingly. Comparison with related state-of-the-art methods demonstrates that the proposed method achieves better fusion quality in both the spatial and spectral domains.
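The cluster-then-map structure of the training and fusion stages can be sketched with k-means plus a per-cluster least-squares mapping standing in for the per-cluster BPNN branches (the paper trains a BP neural network per cluster; the linear maps and all names below are our simplification):

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means on spectra (rows of X); stands in for the unsupervised clustering."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

def fit_cluster_maps(X, Y, labels, k):
    """One least-squares spectral mapping X -> Y per cluster (BPNN stand-in)."""
    return [np.linalg.lstsq(X[labels == j], Y[labels == j], rcond=None)[0]
            for j in range(k)]

def predict(X, centers, maps):
    """Fusion stage: assign each spectrum to its nearest center, apply that map."""
    dists = ((X[:, None, :] - centers[None]) ** 2).sum(axis=-1)
    labels = dists.argmin(axis=1)
    return np.vstack([x @ maps[j] for x, j in zip(X, labels)]), labels
```

Because each cluster gets its own mapping, a target that is only piecewise linear across clusters can still be fit exactly, which is the motivation for clustering before mapping.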

