Lightweight Multilevel Feature Fusion Network for Hyperspectral Image Classification

2021 ◽  
Vol 14 (1) ◽  
pp. 79
Author(s):  
Miaomiao Liang ◽  
Huai Wang ◽  
Xiangchun Yu ◽  
Zhe Meng ◽  
Jianbing Yi ◽  
...  

Hyperspectral images (HSIs), acquired as 3D data sets, contain spectral and spatial information that is important for ground-object recognition. A 3D convolutional neural network (3DCNN) could therefore be more suitable than a 2D one for extracting multiscale neighborhood information in the spectral and spatial domains simultaneously, if it were not restrained by its massive parameter count and computational cost. In this paper, we propose a novel lightweight multilevel feature fusion network (LMFN) that achieves satisfactory HSI classification with fewer parameters and a lower computational burden. The LMFN decouples spectral-spatial feature extraction into two modules: point-wise 3D convolution to learn correlations between adjacent bands with no spatial perception, and depth-wise convolution to obtain local texture features while leaving the spectral receptive field unchanged. A target-guided fusion mechanism (TFM) is then introduced to achieve multilevel spectral-spatial feature fusion between the two modules. More specifically, multiscale spectral features are endowed with spatial long-range dependency, quantified by a similarity measurement guided by the central target pixel. The results obtained from shallow to deep layers are then added, in order, to the corresponding spatial modules. The TFM block can enhance adjacent spectral correlation and focus on pixels that actively boost the target classification accuracy, while performing multiscale feature fusion. Experimental results across three benchmark HSI data sets indicate that the proposed LMFN is competitive in both classification accuracy and lightweight network design. More importantly, compared to state-of-the-art methods, the LMFN exhibits better robustness and generalization.
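The parameter savings from decoupling a full 3D convolution into a point-wise spectral factor and a depth-wise spatial factor can be sketched by simple counting. The channel counts and kernel sizes below are illustrative assumptions, not the paper's actual configuration:

```python
# Hypothetical sketch (not the authors' code): compare the parameter count of a
# full 3D convolution against an LMFN-style decomposition into point-wise 3D
# convolution (spectral only) plus depth-wise convolution (spatial only).
def full_3d_params(c_in, c_out, k_spec, k_spat):
    # one k_spec x k_spat x k_spat kernel per (input, output) channel pair
    return c_in * c_out * k_spec * k_spat * k_spat

def decoupled_params(c_in, c_out, k_spec, k_spat):
    # point-wise spectral conv: kernels of size k_spec x 1 x 1
    spectral = c_in * c_out * k_spec
    # depth-wise spatial conv: one k_spat x k_spat kernel per output channel
    spatial = c_out * k_spat * k_spat
    return spectral + spatial

full = full_3d_params(64, 64, 7, 3)      # 258048 parameters
light = decoupled_params(64, 64, 7, 3)   # 28672 + 576 = 29248 parameters
ratio = full / light                     # roughly 8.8x fewer parameters
```

Under these assumed sizes the decomposition cuts parameters by almost an order of magnitude, which is the kind of saving that makes the "lightweight" label plausible.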

2020 ◽  
Vol 12 (10) ◽  
pp. 1660 ◽  
Author(s):  
Qiang Li ◽  
Qi Wang ◽  
Xuelong Li

Deep learning-based hyperspectral image super-resolution (SR) methods have achieved great success recently. However, previous works suffer from two main problems. One is the reliance on standard three-dimensional convolution, which inflates the number of network parameters. The other is insufficient attention to mining the spatial information of the hyperspectral image while its spectral information is extracted. To address these issues, in this paper we propose a mixed convolutional network (MCNet) for hyperspectral image super-resolution. We design a novel mixed convolutional module (MCM) that extracts latent features using both 2D and 3D convolution rather than a single type, which enables the network to better mine the spatial features of the hyperspectral image. To exploit the effective features from the 2D units, we design a local feature fusion that adaptively aggregates all the hierarchical features in the 2D units. In the 3D unit, we employ spatially and spectrally separable 3D convolution to extract spatial and spectral information, which reduces otherwise unaffordable memory usage and training time. Extensive evaluations and comparisons on three benchmark datasets demonstrate that the proposed approach achieves superior performance compared to existing state-of-the-art methods.
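The separability idea can be illustrated numerically: when a 3D kernel factors into a 2D spatial kernel and a 1D spectral kernel, applying the two factors in sequence reproduces the full 3D convolution with far fewer parameters. The shapes below are toy assumptions, not MCNet's actual layer configuration:

```python
import numpy as np

# Illustrative sketch: a rank-1 separable 3D kernel applied in two passes
# matches the full 3D convolution (valid padding, single channel).
rng = np.random.default_rng(0)
cube = rng.normal(size=(9, 9, 16))    # height x width x bands
k_spat = rng.normal(size=(3, 3))      # 3x3x1 spatial factor
k_spec = rng.normal(size=(3,))        # 1x1x3 spectral factor
k_full = k_spat[:, :, None] * k_spec[None, None, :]   # full 3x3x3 kernel

def conv3d_valid(x, k):
    kh, kw, kb = k.shape
    H, W, B = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1, B - kb + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for b in range(out.shape[2]):
                out[i, j, b] = np.sum(x[i:i+kh, j:j+kw, b:b+kb] * k)
    return out

full = conv3d_valid(cube, k_full)
step1 = conv3d_valid(cube, k_spat[:, :, None])      # spatial pass
sep = conv3d_valid(step1, k_spec[None, None, :])    # spectral pass
# identical outputs: 3*3 + 3 = 12 parameters instead of 3*3*3 = 27
```

The exactness holds only for rank-1 kernels; in practice the network learns the two factors directly, trading some expressiveness for the parameter and memory savings the abstract describes.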


2021 ◽  
Vol 13 (18) ◽  
pp. 3592
Author(s):  
Yifei Zhao ◽  
Fengqin Yan

Hyperspectral image (HSI) classification is one of the major problems in the field of remote sensing. In particular, graph-based HSI classification is a promising topic that has received increasing attention in recent years. However, graphs with pixels as nodes become very large, increasing the computational burden, and satisfactory classification results are often unattainable when spatial information is ignored during graph construction. To address these issues, this study proposes an efficient and effective semi-supervised spectral-spatial HSI classification method based on a sparse superpixel graph (SSG). In the constructed sparse superpixel graph, each vertex represents a superpixel instead of a pixel, which greatly reduces the size of the graph. Meanwhile, both spectral information and spatial structure are considered through the superpixels, local spatial connections and global spectral connections. To verify the effectiveness of the proposed method, three real hyperspectral images, Indian Pines, Pavia University and Salinas, are chosen to test its performance. Experimental results show that the proposed method performs well on all three benchmarks. Compared with several competitive superpixel-based HSI classification approaches, it offers both high classification accuracy (>97.85%) and rapid execution (<10 s), which clearly favors its application in practice.
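The scale of the saving from superpixel nodes is easy to quantify: a dense adjacency structure grows quadratically in node count. The superpixel count below is an assumed segmentation scale, not a number from the paper:

```python
# Toy illustration: pixel-level vs superpixel-level graph size for an
# Indian Pines-sized scene (145 x 145 pixels).
h, w = 145, 145
n_pixels = h * w                 # 21025 nodes in a pixel-level graph
n_superpixels = 800              # assumed segmentation scale (illustrative)

pixel_entries = n_pixels ** 2    # dense adjacency entries, pixel graph
sp_entries = n_superpixels ** 2  # dense adjacency entries, superpixel graph
reduction = pixel_entries / sp_entries   # several hundred times smaller
```

Sparsifying the superpixel graph (keeping only local spatial and strongest spectral connections, as the abstract describes) shrinks the working set further still.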


2019 ◽  
Vol 9 (22) ◽  
pp. 4890 ◽  
Author(s):  
Zong-Yue Wang ◽  
Qi-Ming Xia ◽  
Jing-Wen Yan ◽  
Shu-Qi Xuan ◽  
Jin-He Su ◽  
...  

Hyperspectral imaging (HSI) contains abundant spectral as well as spatial information, providing a strong basis for classification in the field of remote sensing. In this paper, to make full use of HSI information, we combine spectral and spatial information into a two-dimensional image in a particular order by extracting a data cube and unfolding it. Prior to this combining step, principal component analysis (PCA) is used to reduce the dimensionality of the HSI and thus the computational cost. The classifier used in the experiments is a convolutional neural network (CNN). Instead of the traditionally fixed-size kernels, we use multi-scale kernels in the first convolutional layer so that the receptive field adapts to multiple scales. To attain higher classification accuracy with deeper layers, residual blocks are also added to the network. Extensive experiments on the Pavia University and Salinas datasets demonstrate that the proposed method significantly improves HSI classification accuracy.
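The PCA-then-unfold preprocessing can be sketched in a few lines of numpy. The cube size, component count and patch size are illustrative assumptions; the paper's exact unfolding order is not reproduced here:

```python
import numpy as np

# Hedged sketch: PCA over the spectral dimension to cut the band count,
# then unfolding a neighborhood data cube into a single 2D image.
rng = np.random.default_rng(1)
cube = rng.normal(size=(20, 20, 103))    # H x W x bands (Pavia-like band count)

X = cube.reshape(-1, cube.shape[-1])     # pixels x bands
Xc = X - X.mean(axis=0)                  # center before PCA
# PCA via SVD: principal axes are the right singular vectors
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
n_components = 30
reduced = (Xc @ Vt[:n_components].T).reshape(20, 20, n_components)

# unfold a 5x5 data cube around a pixel into one 2D image: 25 rows x 30 columns
patch = reduced[5:10, 5:10, :]
unfolded = patch.reshape(-1, n_components)
```

The resulting 2D image carries one row per neighbor pixel and one column per principal component, so a single 2D CNN sees spectral and spatial structure simultaneously.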


2020 ◽  
Vol 12 (9) ◽  
pp. 1395
Author(s):  
Linlin Chen ◽  
Zhihui Wei ◽  
Yang Xu

Hyperspectral image (HSI) classification accuracy has been greatly improved by deep learning. Current research mainly focuses on building deeper networks to improve accuracy. However, such networks tend to be complex and have many parameters, which makes them difficult to train and prone to overfitting. We therefore present a lightweight deep convolutional neural network (CNN) model called S2FEF-CNN. In this model, three S2FEF blocks perform joint spectral-spatial feature extraction. Each S2FEF block uses 1D spectral convolution to extract spectral features and 2D spatial convolution to extract spatial features, and then fuses the two by multiplication. Instead of fully connected layers, two pooling layers follow the three blocks for dimension reduction, which further reduces the number of trainable parameters. We compared our method with several state-of-the-art deep-network-based HSI classification methods on three commonly used hyperspectral datasets. The results show that our network achieves comparable classification accuracy with significantly fewer parameters than the above deep networks, which reflects its potential advantages in HSI classification.
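The multiplicative fusion at the heart of an S2FEF block can be sketched with plain numpy. Shapes, kernels and the cropping to a common size are illustrative assumptions, not the released implementation:

```python
import numpy as np

# Minimal sketch: 1D convolution along the spectral axis, 2D convolution over
# the spatial axes (applied band-wise), fused by element-wise multiplication.
rng = np.random.default_rng(2)
cube = rng.normal(size=(8, 8, 16))       # H x W x bands

# 1D spectral convolution (valid padding along the band axis)
k1 = rng.normal(size=3)
spec = np.zeros((8, 8, 14))
for b in range(14):
    spec[:, :, b] = sum(cube[:, :, b + t] * k1[t] for t in range(3))

# 2D spatial convolution, same kernel shared across all bands
k2 = rng.normal(size=(3, 3))
spat = np.zeros((6, 6, 16))
for i in range(6):
    for j in range(6):
        spat[i, j, :] = np.tensordot(k2, cube[i:i+3, j:j+3, :],
                                     axes=([0, 1], [0, 1]))

# crop both branches to a common shape and fuse by multiplication
fused = spec[1:7, 1:7, :14] * spat[:, :, :14]
```

Because multiplication acts as a gating between the two branches, spatially flat but spectrally distinctive pixels (and vice versa) still produce a strong joint response, without any parameters spent on a fusion layer.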


2020 ◽  
Vol 12 (1) ◽  
pp. 125 ◽  
Author(s):  
Mu ◽  
Guo ◽  
Liu

Extracting spatial and spectral features through deep neural networks has become an effective means of classifying hyperspectral images. However, most networks rarely consider the extraction of multi-scale spatial features and cannot fully integrate spatial and spectral features. To solve these problems, this paper proposes a multi-scale and multi-level spectral-spatial feature fusion network (MSSN) for hyperspectral image classification. The network takes the original 3D cube as input and requires no feature engineering. By feeding neighborhood blocks of different scales into the network, the MSSN effectively extracts spectral-spatial features at multiple scales. The proposed 3D-2D alternating residual block combines the spectral features extracted by a three-dimensional convolutional neural network (3D-CNN) with the spatial features extracted by a two-dimensional convolutional neural network (2D-CNN). It thereby fuses not only spectral and spatial features but also high-level and low-level features. Experimental results on four hyperspectral datasets show that this method is superior to several state-of-the-art hyperspectral image classification methods.
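The multi-scale input construction is straightforward array slicing: cut neighborhood blocks of several sizes around each labeled pixel. The cube size and the three scales below are assumed for illustration:

```python
import numpy as np

# Illustrative sketch: neighborhood blocks at several spatial scales around
# one labeled pixel, to be fed to the network's parallel branches.
rng = np.random.default_rng(3)
hsi = rng.normal(size=(145, 145, 200))   # Indian Pines-like cube
row, col = 72, 72                        # target pixel (away from borders)

def neighborhood_block(cube, r, c, size):
    half = size // 2
    return cube[r - half:r + half + 1, c - half:c + half + 1, :]

blocks = [neighborhood_block(hsi, row, col, s) for s in (7, 11, 15)]
shapes = [b.shape for b in blocks]       # three scales, full spectral depth
```

Pixels near the image border need padding (e.g., mirror padding) before such extraction; that handling is omitted here for brevity.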


2021 ◽  
Vol 13 (9) ◽  
pp. 1732
Author(s):  
Hadis Madani ◽  
Kenneth McIsaac

Pixel-wise classification of hyperspectral images (HSIs) from remote sensing data is a common approach for extracting information about scenes. In recent years, approaches based on deep learning techniques have gained wide applicability. An HSI dataset can be viewed either as a collection of images, each captured at a different wavelength, or as a collection of spectra, each associated with a specific point (pixel). Classification accuracy improves when the spectral and spatial information are combined in the input vector, allowing simultaneous classification according to spectral type as well as geometric relationships. In this study, we propose a novel spatial feature vector that improves pixel-wise classification accuracy. The proposed feature vector is based on the distance transform of the pixels with respect to the dominant edges in the input HSI. In other words, we allow the location of pixels within geometric subdivisions of the dataset to modify the contribution of each pixel to the spatial feature vector. Moreover, we use extended multi-attribute profile (EMAP) features to add further geometric features to the proposed spatial feature vector. We performed experiments with three hyperspectral datasets. In addition to the Salinas and University of Pavia datasets, which are commonly used in HSI research, we include samples from our Surrey BC dataset. The proposed method compares favorably to traditional algorithms as well as to some recently published deep learning-based algorithms.
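The distance-transform feature can be sketched with the classic two-pass chamfer algorithm, which assigns each pixel its city-block distance to the nearest edge pixel. The edge map below is a toy; the paper's edge detector and distance metric may differ:

```python
import numpy as np

# Simplified sketch: exact city-block distance transform via two raster passes.
edges = np.zeros((6, 6), dtype=bool)
edges[2, 1:5] = True                     # a toy horizontal edge segment

INF = 10**6
dist = np.where(edges, 0, INF).astype(np.int64)
H, W = dist.shape
# forward pass (top-left to bottom-right)
for i in range(H):
    for j in range(W):
        if i > 0:
            dist[i, j] = min(dist[i, j], dist[i - 1, j] + 1)
        if j > 0:
            dist[i, j] = min(dist[i, j], dist[i, j - 1] + 1)
# backward pass (bottom-right to top-left)
for i in range(H - 1, -1, -1):
    for j in range(W - 1, -1, -1):
        if i < H - 1:
            dist[i, j] = min(dist[i, j], dist[i + 1, j] + 1)
        if j < W - 1:
            dist[i, j] = min(dist[i, j], dist[i, j + 1] + 1)
```

The resulting per-pixel distance encodes where a pixel sits inside its edge-bounded region, which is exactly the kind of geometric context the proposed feature vector adds to the spectral input.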


Author(s):  
Weiwei Yang ◽  
Haifeng Song

Recent research has shown that the integration of spatial information is a powerful tool for improving the classification accuracy of hyperspectral images (HSIs). However, partitioning an HSI into homogeneous regions remains a challenging task. This paper proposes a novel spectral-spatial classification method built around the support vector machine (SVM). The model consists of a spectral-spatial feature extraction channel (SSC) and an SVM classifier: the SSC extracts the spectral-spatial features of the HSI, and the SVM classifies them. The model can thus automatically extract and classify HSI features. Experiments conducted on the benchmark Indian Pines HSI dataset show that the proposed method yields more accurate classification results than state-of-the-art techniques.
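The SVM stage of such a pipeline can be sketched in numpy with hinge-loss subgradient descent. Synthetic two-class features stand in for the SSC output here; the SSC itself, and the paper's multi-class setup, are not reproduced:

```python
import numpy as np

# Toy sketch of the classifier stage only: a linear SVM trained by
# subgradient descent on the regularized hinge loss.
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(loc=-1.5, size=(50, 10)),   # class -1 features
               rng.normal(loc=+1.5, size=(50, 10))])  # class +1 features
y = np.array([-1] * 50 + [+1] * 50)

w, b = np.zeros(10), 0.0
lr, lam = 0.01, 0.001
for _ in range(300):
    viol = y * (X @ w + b) < 1           # samples violating the margin
    grad_w = lam * w - (y[viol, None] * X[viol]).sum(axis=0) / len(y)
    grad_b = -y[viol].sum() / len(y)
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = (np.sign(X @ w + b) == y).mean()
```

For a multi-class HSI problem the same machinery is applied one-vs-rest or one-vs-one, typically through an off-the-shelf SVM library rather than hand-rolled descent.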


2021 ◽  
Vol 13 (12) ◽  
pp. 2268
Author(s):  
Hang Gong ◽  
Qiuxia Li ◽  
Chunlai Li ◽  
Haishan Dai ◽  
Zhiping He ◽  
...  

Hyperspectral images are widely used for classification due to their rich spectral information alongside spatial information. To handle the high dimensionality and high nonlinearity of hyperspectral images, deep learning methods based on convolutional neural networks (CNNs) are widely used in hyperspectral classification applications. However, most CNN structures are stacked vertically and use convolutional kernels or pooling layers of a single size, which cannot fully mine the multiscale information in hyperspectral images. When such networks face the practical challenge of a limited labeled hyperspectral dataset, the "small sample problem", their classification accuracy and generalization ability suffer. In this paper, to tackle the small sample problem, we apply the semantic segmentation paradigm to pixel-level hyperspectral classification, exploiting the comparability of the two tasks. A lightweight, multiscale squeeze-and-excitation pyramid pooling network (MSPN) is proposed. It consists of a multiscale 3D CNN module, a squeeze-and-excitation module, and a pyramid pooling module with 2D CNNs. Such a hybrid 2D-3D-CNN MSPN framework can learn and fuse deeper hierarchical spatial-spectral features with fewer training samples. The proposed MSPN was tested on three publicly available hyperspectral classification datasets: Indian Pines, Salinas, and Pavia University. Using 5%, 0.5%, and 0.5% of the samples of the three datasets for training, the MSPN achieved classification accuracies of 96.09%, 97%, and 96.56%, respectively. In addition, we evaluated the more recent, higher-spatial-resolution WHU-Hi-LongKou dataset as a further challenge. Using only 0.1% of the samples for training, we achieved a 97.31% classification accuracy, far superior to state-of-the-art hyperspectral classification methods.
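The squeeze-and-excitation module is a small, self-contained mechanism: global average pooling compresses each channel to a scalar, two fully connected layers produce per-channel gates, and the feature map is rescaled channel-wise. The reduction ratio and random weights below are assumptions; MSPN's exact configuration is not shown:

```python
import numpy as np

# Minimal numpy sketch of a squeeze-and-excitation block.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(4)
feat = rng.normal(size=(16, 8, 8))       # channels x H x W feature map
C, r = 16, 4                             # assumed reduction ratio r
W1 = rng.normal(size=(C // r, C)) * 0.1  # squeeze FC weights
W2 = rng.normal(size=(C, C // r)) * 0.1  # excitation FC weights

squeeze = feat.mean(axis=(1, 2))                         # global average pool
excite = sigmoid(W2 @ np.maximum(W1 @ squeeze, 0.0))     # ReLU then sigmoid gate
recalibrated = feat * excite[:, None, None]              # channel-wise rescale
```

The gate vector lies in (0, 1) per channel, so the block can only attenuate channels, letting the network emphasize informative spectral-spatial responses at negligible parameter cost.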


2013 ◽  
Vol 333-335 ◽  
pp. 822-827 ◽  
Author(s):  
Jun Chul Chun ◽  
Wong Gi Kim

It is known that the wavelet transform provides very useful feature values for analyzing various types of images. This paper presents a novel approach to content-based textile image retrieval that uses composite feature vectors combining low-level color features from the spatial domain with second-order statistical features from wavelet-transformed sub-band coefficients. Although the color histogram by itself is an efficient and widely used signature for CBIR, it cannot carry local spatial information about pixels and produces inaccurate retrieval results, especially on large image data sets. In this paper, we extract texture features such as contrast, homogeneity, ASM (angular second moment) and entropy from the sub-band images produced by the wavelet decomposition, and combine these feature vectors with the color histogram to retrieve textile images. The experimental results show that the proposed approach efficiently retrieves the desired images from a large textile image database.
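The four second-order statistics are conventionally computed from a gray-level co-occurrence matrix (GLCM). The sketch below uses a toy 4-level image and a single right-neighbor offset; the paper computes these over wavelet sub-bands and, typically, several offsets:

```python
import numpy as np

# Hedged sketch: GLCM for offset (0, 1), then contrast, homogeneity,
# ASM and entropy as defined for second-order texture statistics.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
levels = 4
glcm = np.zeros((levels, levels))
for i in range(img.shape[0]):            # count right-neighbor pairs
    for j in range(img.shape[1] - 1):
        glcm[img[i, j], img[i, j + 1]] += 1
p = glcm / glcm.sum()                    # normalize to joint probabilities

idx_i, idx_j = np.indices(p.shape)
contrast = np.sum(p * (idx_i - idx_j) ** 2)
homogeneity = np.sum(p / (1.0 + np.abs(idx_i - idx_j)))
asm = np.sum(p ** 2)                     # angular second moment
entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
```

Contrast rises with strong local gray-level transitions, homogeneity and ASM reward uniform textures, and entropy measures the randomness of the co-occurrence distribution, so together they summarize texture complementarily to a color histogram.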

