Hyperspectral Image Classification Using Feature Relations Map Learning

2020, Vol 12 (18), pp. 2956
Author(s): Peng Dou, Chao Zeng

Recently, deep learning has been reported to be an effective method for improving hyperspectral image classification, and convolutional neural networks (CNNs) in particular are gaining more and more attention in this field. CNNs provide automatic approaches that can learn more abstract features of hyperspectral images from the spectral, spatial, or spectral-spatial domains. However, CNN applications focus on learning features directly from image data, while the intrinsic relations between the original features, which may provide additional information for classification, are not fully considered. In order to make full use of the relations between hyperspectral features and to explore more objective features for improving classification accuracy, we propose feature relations map learning (FRML) in this paper. FRML automatically enhances the separability of different objects in an image using a segmented feature relations map (SFRM) that reflects the relations between spectral features through a normalized difference index (NDI), and it then learns new features from the SFRM using a CNN-based feature extractor. Finally, a classifier is designed to perform classification based on these features. Experimental results on four popular hyperspectral datasets indicate that the proposed method obtains more representative and objective features and improves classification accuracy, outperforming the comparison methods.
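
A minimal sketch (not the authors' released code) of the NDI idea behind the feature relations map: every pair of spectral bands of a pixel is mapped to a normalized difference index, producing a 2-D "relations map" that a CNN-based feature extractor could consume. The segmentation of the map into an SFRM is omitted here.

```python
import numpy as np

def ndi_relations_map(spectrum, eps=1e-8):
    """Return a (B, B) map with entry (i, j) = (b_i - b_j) / (b_i + b_j)."""
    b = np.asarray(spectrum, dtype=np.float64)
    num = b[:, None] - b[None, :]        # pairwise band differences
    den = b[:, None] + b[None, :] + eps  # pairwise band sums (eps avoids /0)
    return num / den

# Example: a 200-band pixel becomes a 200 x 200 relations "image".
pixel = np.random.rand(200)
print(ndi_relations_map(pixel).shape)  # (200, 200)
```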

2021, Vol 11 (1)
Author(s): Shiqi Huang, Ying Lu, Wenqing Wang, Ke Sun

Abstract: Traditional hyperspectral image classification methods that rely on a single-scale feature cannot effectively distinguish object boundaries, which leads to low classification accuracy. To address this, this paper introduces the idea of guided filtering into hyperspectral image classification and proposes a multi-scale guided feature extraction and classification (MGFEC) algorithm for hyperspectral images. Firstly, principal component analysis is used to reduce the dimensionality of the hyperspectral image data. Then, a guided filtering algorithm with filtering windows of different sizes is used to extract the multi-scale spatial structure of the hyperspectral image while retaining edge details. Finally, the extracted multi-scale features are input into a support vector machine classifier. Several practical hyperspectral image datasets were used in the experiments, and the method was compared with other spectral feature extraction algorithms. The experimental results show that the multi-scale features extracted by the proposed MGFEC algorithm are more discriminative than features based on spectral information alone, which improves the final classification accuracy. This shows that the proposed method is not only effective but also suitable for processing different hyperspectral image data.
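
A rough sketch of the MGFEC pipeline under my own simplifying assumptions: PCA reduces the cube, each retained component is smoothed with a self-guided guided filter at several window radii, and the stacked multi-scale features go to an SVM. The choice of guide image, radii, and eps is illustrative, not the paper's exact configuration.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def guided_filter(I, p, r, eps=1e-4):
    """Gray-scale guided filter (He et al.) with a box window of radius r."""
    size = 2 * r + 1
    mean_I, mean_p = uniform_filter(I, size), uniform_filter(p, size)
    var_I = uniform_filter(I * I, size) - mean_I * mean_I
    cov_Ip = uniform_filter(I * p, size) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * I + uniform_filter(b, size)

def mgfec_features(cube, n_pcs=5, radii=(2, 4, 8)):
    """cube: (H, W, B) hyperspectral image -> (H*W, n_pcs*len(radii)) features."""
    H, W, B = cube.shape
    pcs = PCA(n_components=n_pcs).fit_transform(cube.reshape(-1, B)).reshape(H, W, n_pcs)
    feats = [guided_filter(pcs[..., i], pcs[..., i], r)
             for r in radii for i in range(n_pcs)]
    return np.stack(feats, axis=-1).reshape(H * W, -1)

# Usage (hypothetical indices): train an SVM on the labeled pixels.
# X = mgfec_features(cube); clf = SVC(kernel='rbf').fit(X[train_idx], y_train)
```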


2021, Vol 13 (3), pp. 335
Author(s): Yuhao Qing, Wenyi Liu

In recent years, deep learning algorithms have attained good results in hyperspectral image classification. Spurred by these findings, and to further improve classification accuracy, we propose a multi-scale residual convolutional neural network fused with an efficient channel attention network (MRA-NET) for hyperspectral image classification. The proposed technique comprises a multi-stage architecture: first, the spectral dimension of the hyperspectral image is reduced to a low-dimensional tensor using principal component analysis (PCA). The resulting low-dimensional image is then input to the proposed MRA-NET deep network, which exploits the advantages of its core components, i.e., the multi-scale residual structure and the channel attention mechanism. We evaluate the performance of MRA-NET on three publicly available hyperspectral datasets and demonstrate overall classification accuracies of 99.82%, 99.81%, and 99.37%, respectively, which are higher than the corresponding accuracies of current networks such as the 3D convolutional neural network (CNN), the three-dimensional residual convolution structure (RES-3D-CNN), and the spectral-spatial residual network (SSRN).
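
A sketch of a generic efficient channel attention (ECA) block of the kind the abstract describes, written in PyTorch. The kernel size and the placement of the block inside the multi-scale residual branches of MRA-NET are assumptions here, not taken from the paper.

```python
import torch
import torch.nn as nn

class ECABlock(nn.Module):
    def __init__(self, k_size=3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                     # global average pooling
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size,
                              padding=k_size // 2, bias=False)  # local cross-channel interaction
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                      # x: (N, C, H, W)
        w = self.pool(x)                       # (N, C, 1, 1)
        w = w.squeeze(-1).transpose(-1, -2)    # (N, 1, C)
        w = self.conv(w)                       # 1-D conv over the channel axis
        w = w.transpose(-1, -2).unsqueeze(-1)  # (N, C, 1, 1)
        return x * self.sigmoid(w)             # channel-wise re-weighting

# Example: re-weight the channels of a feature map from a residual branch.
feat = torch.randn(2, 64, 11, 11)
print(ECABlock(k_size=3)(feat).shape)  # torch.Size([2, 64, 11, 11])
```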


2021, Vol 13 (21), pp. 4472
Author(s): Tianyu Zhang, Cuiping Shi, Diling Liao, Liguo Wang

Convolutional neural networks (CNNs) have been widely used in hyperspectral image classification in recent years. Training CNNs relies on a large amount of labeled sample data, yet the number of labeled samples in hyperspectral data is relatively small. Moreover, for hyperspectral images, fully extracting spectral and spatial feature information is the key to achieving high classification performance. To address these issues, a deep spectral-spatial inverted residuals network (DSSIRNet) is proposed. In this network, a data-block random erasing strategy is introduced to alleviate the problem of limited labeled samples through data augmentation of small spatial blocks. In addition, a deep inverted residuals (DIR) module for spectral-spatial feature extraction is proposed, which retains the effective features of each layer while avoiding network degradation. Furthermore, a global 3D attention module is proposed, which can finely extract spectral and spatial global context information while keeping the number of input and output feature maps the same. Experiments are carried out on four commonly used hyperspectral datasets. Extensive experimental results show that, compared with several state-of-the-art classification methods, the proposed method provides higher classification accuracy for hyperspectral images.
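
A small sketch of the kind of data-block random erasing the abstract mentions: a random sub-block of a training patch is blanked out across all bands so that limited labeled patches yield more varied training samples. The block-size range and fill value are my assumptions, not the paper's settings.

```python
import numpy as np

def random_block_erase(patch, min_frac=0.1, max_frac=0.4, fill=0.0, rng=None):
    """patch: (H, W, B) spatial block; returns a copy with one erased sub-block."""
    rng = rng or np.random.default_rng()
    H, W, _ = patch.shape
    eh = max(1, int(H * rng.uniform(min_frac, max_frac)))  # erased block height
    ew = max(1, int(W * rng.uniform(min_frac, max_frac)))  # erased block width
    r0 = rng.integers(0, H - eh + 1)                       # top-left corner
    c0 = rng.integers(0, W - ew + 1)
    out = patch.copy()
    out[r0:r0 + eh, c0:c0 + ew, :] = fill                  # erase across all bands
    return out

# Example: augment a 9 x 9 patch with 200 spectral bands.
aug = random_block_erase(np.random.rand(9, 9, 200))
```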


Fractals, 2019, Vol 27 (05), pp. 1950079
Author(s): Junying Su, Yingkui Li, Qingwu Hu

To maximize the advantages of both spectral and spatial information, we introduce a new spectral-spatial jointed hyperspectral image classification approach based on fractal dimension (FD) analysis of the spectral response curve (SRC) in the spectral domain and extended morphological processing in the spatial domain. This approach first calculates an FD image from the whole SRC of the hyperspectral image, then decomposes the SRC into segments and derives an FD image from each segment. These segment-based FD images are composited into a multidimensional FD image set in the spectral domain. Then, extended morphological profiles (EMPs) are derived from the image set through morphological opening and closing operations in the spatial domain. Finally, all EMPs and FD features are combined into one feature vector for probabilistic support vector machine (SVM) classification. The approach was demonstrated on three hyperspectral images covering the university campus and downtown areas of Pavia, Italy, and the Washington DC Mall area in the USA. We assessed its potential and performance by comparing it with a PCA-based method for hyperspectral image classification. Our results indicate that the classification accuracy of the proposed method is much higher than that of classification methods based on the spectral or spatial domain alone, and similar to or slightly higher than that of the PCA-based spectral-spatial jointed classification method. The proposed FD approach also provides a new self-similarity measure of land classes in the spectral domain, a unique property for representing the self-similarity of SRCs in hyperspectral imagery.
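
An illustrative way to attach a fractal dimension to a pixel's spectral response curve. The paper's own FD estimator is not specified in the abstract, so this sketch uses Higuchi's method for 1-D signals as a stand-in; k_max is an assumption.

```python
import numpy as np

def higuchi_fd(curve, k_max=8):
    """Estimate the fractal dimension of a 1-D curve (e.g., an SRC)."""
    x = np.asarray(curve, dtype=np.float64)
    N = x.size
    log_inv_k, log_L = [], []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):                       # k interleaved sub-curves
            idx = np.arange(m, N, k)
            if idx.size < 2:
                continue
            n = idx.size - 1
            # normalized length of the sub-curve at scale k
            lengths.append(np.abs(np.diff(x[idx])).sum() * (N - 1) / (n * k) / k)
        log_inv_k.append(np.log(1.0 / k))
        log_L.append(np.log(np.mean(lengths)))
    slope, _ = np.polyfit(log_inv_k, log_L, 1)   # log L(k) ~ FD * log(1/k)
    return slope

# Example: FD of one pixel's 200-band SRC (segments of the SRC can be treated the same way).
print(higuchi_fd(np.random.rand(200)))
```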


Author(s): B. Saichandana, K. Srinivas, R. KiranKumar

Hyperspectral remote sensors collect image data for a large number of narrow, adjacent spectral bands. Every pixel in a hyperspectral image contains a continuous spectrum that can be used to classify objects with great detail and precision. This paper presents a hyperspectral image classification mechanism based on a genetic algorithm, with empirical mode decomposition and image fusion used in the preprocessing stage. A 2-D empirical mode decomposition method is used to remove noisy components from each band of the hyperspectral data. After filtering, image fusion is performed on the hyperspectral bands to selectively merge the maximum possible features from the source images into a single image. This fused image is then classified using the genetic algorithm. Different cluster validity indices, such as the K-means index (KMI), the Davies-Bouldin index (DBI), and the Xie-Beni index (XBI), are used as objective functions. This method increases the classification accuracy of hyperspectral images.
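
A hedged sketch of how one of the listed indices could serve as a genetic-algorithm fitness function: each chromosome encodes a per-pixel class assignment of the fused image, and a lower Davies-Bouldin index yields a higher fitness. The encoding, mutation rate, and fitness scaling below are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import davies_bouldin_score

def fitness(pixels, labels):
    """pixels: (N, B) fused-image features; labels: (N,) candidate class assignment."""
    # lower DBI means more compact, better-separated clusters, hence higher fitness
    return 1.0 / (1.0 + davies_bouldin_score(pixels, labels))

def mutate(labels, n_classes, rate=0.01, rng=None):
    """Randomly re-assign a small fraction of pixels to a new class."""
    rng = rng or np.random.default_rng(0)
    out = labels.copy()
    flip = rng.random(out.size) < rate
    out[flip] = rng.integers(0, n_classes, flip.sum())
    return out

# One GA step would keep a mutated chromosome only if its fitness improves, e.g.:
# child = mutate(parent, n_classes); keep child if fitness(X, child) > fitness(X, parent)
```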


2020, Vol 12 (4), pp. 664
Author(s): Binge Cui, Jiandi Cui, Yan Lu, Nannan Guo, Maoguo Gong

Hyperspectral image classification methods may not achieve good performance when only a limited number of training samples are provided. However, labeling sufficient samples of hyperspectral images to obtain adequate training data is expensive and difficult. In this paper, we propose a novel sample pseudo-labeling method based on sparse representation (SRSPL) for hyperspectral image classification, in which sparse representation is used to select the purest samples to extend the training set. The proposed method consists of three steps. First, intrinsic image decomposition is used to obtain the reflectance components of the hyperspectral image. Second, hyperspectral pixels are sparsely represented using an overcomplete dictionary composed of all training samples. Finally, information entropy is defined for the vectorized sparse representation, and pixels with low information entropy are selected as pseudo-labeled samples to augment the training set. The quality of the generated pseudo-labeled samples is evaluated by classification accuracy, i.e., overall accuracy, average accuracy, and the Kappa coefficient. Experimental results on four real hyperspectral data sets demonstrate excellent classification performance with the newly added pseudo-labeled samples, which indicates that the generated samples are of high confidence.
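
A rough sketch of the pseudo-label selection step under assumptions of mine: each candidate pixel is sparsely coded over a dictionary whose columns are the training spectra (orthogonal matching pursuit stands in for the paper's sparse solver), the class-wise coefficient energy is turned into a distribution, and only pixels whose entropy falls below a threshold are pseudo-labeled. The entropy threshold and sparsity level are illustrative.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def class_energy(coef, train_labels, n_classes):
    """Sum of absolute sparse coefficients falling on each class's dictionary atoms."""
    return np.array([np.abs(coef[train_labels == c]).sum() for c in range(n_classes)])

def select_pseudo_labels(D, train_labels, pixels, n_classes,
                         n_nonzero=10, entropy_thresh=0.3):
    """D: (B, n_train) dictionary of training spectra; pixels: (N, B) candidates."""
    selected = []
    for i, y in enumerate(pixels):
        coef = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero).fit(D, y).coef_
        e = class_energy(coef, train_labels, n_classes)
        p = np.clip(e / (e.sum() + 1e-12), 1e-12, 1.0)
        entropy = -(p * np.log(p)).sum()
        if entropy < entropy_thresh:             # low entropy -> "pure" sample
            selected.append((i, int(e.argmax())))
    return selected  # (pixel index, pseudo label) pairs to add to the training set
```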


2020, Vol 12 (1), pp. 159
Author(s): Yue Wu, Guifeng Mu, Can Qin, Qiguang Miao, Wenping Ma, ...

Because there are many unlabeled samples in hyperspectral images and the cost of manual labeling is high, this paper adopts a semi-supervised learning method to make full use of the many unlabeled samples. In addition, hyperspectral images contain rich spectral information, and convolutional neural networks have a strong capacity for representation learning. This paper proposes a novel semi-supervised hyperspectral image classification framework that utilizes self-training to gradually assign highly confident pseudo labels to unlabeled samples by clustering, and employs spatial constraints to regulate the self-training process. The spatial constraints exploit spatial consistency within the image to correct and re-assign mistakenly classified pseudo labels. Through self-training, the number of high-confidence sample points gradually increases, and these points are added to the corresponding semantic classes, which gradually strengthens the semantic constraints. At the same time, the increase in high-confidence pseudo labels also contributes to regional consistency within hyperspectral images, which highlights the role of the spatial constraints and improves the efficiency of hyperspectral image classification (HSIC). Extensive HSIC experiments demonstrate the effectiveness, robustness, and high accuracy of our approach.
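
A simplified sketch of a self-training loop with a spatial constraint: a probabilistic classifier (logistic regression here, as a stand-in for the paper's clustering-based assignment) pseudo-labels its most confident pixels each round, and a 3x3 majority vote over the label map corrects spatially inconsistent pseudo labels. Thresholds and the number of rounds are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def spatial_majority(label_map, ignore=-1):
    """Replace each labeled pixel by the majority label in its 3x3 neighborhood."""
    H, W = label_map.shape
    out = label_map.copy()
    for r in range(H):
        for c in range(W):
            if label_map[r, c] == ignore:
                continue
            win = label_map[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            vals = win[win != ignore]
            out[r, c] = np.bincount(vals).argmax()
    return out

def self_train(features, labels, shape, rounds=3, conf=0.9):
    """features: (H*W, d); labels: (H*W,) with -1 for unlabeled pixels."""
    labels = labels.copy()
    for _ in range(rounds):
        known = labels != -1
        clf = LogisticRegression(max_iter=1000).fit(features[known], labels[known])
        proba = clf.predict_proba(features)
        pred = clf.classes_[proba.argmax(axis=1)]
        confident = (proba.max(axis=1) > conf) & ~known
        labels[confident] = pred[confident]                    # new pseudo labels
        labels = spatial_majority(labels.reshape(shape)).reshape(-1)
    return labels
```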


2021, Vol 13 (18), pp. 3592
Author(s): Yifei Zhao, Fengqin Yan

Hyperspectral image (HSI) classification is one of the major problems in the field of remote sensing. In particular, graph-based HSI classification is a promising topic and has received increasing attention in recent years. However, graphs with pixels as nodes become very large, which increases the computational burden. Moreover, satisfactory classification results are often not obtained unless spatial information is considered when constructing the graph. To address these issues, this study proposes an efficient and effective semi-supervised spectral-spatial HSI classification method based on a sparse superpixel graph (SSG). In the constructed sparse superpixel graph, each vertex represents a superpixel instead of a pixel, which greatly reduces the size of the graph. Meanwhile, both spectral information and spatial structure are considered through the use of superpixels, local spatial connections, and global spectral connections. To verify the effectiveness of the proposed method, three real hyperspectral images, Indian Pines, Pavia University, and Salinas, were chosen to test its performance. Experimental results show that the proposed method performs well on the three benchmarks. Compared with several competitive superpixel-based HSI classification approaches, it offers high classification accuracy (>97.85%) and rapid implementation (<10 s), which clearly favors its application in practice.
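
A condensed sketch of the superpixel-graph idea under my own assumptions (scikit-image >= 0.19 for the `channel_axis` argument, and scikit-learn's LabelSpreading as a stand-in for the paper's graph learner): superpixels become graph nodes with mean spectra as features, labeled superpixels seed the graph, labels are propagated over a kNN graph, and the result is mapped back to pixels.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.decomposition import PCA
from sklearn.semi_supervised import LabelSpreading

def superpixel_graph_classify(cube, labels, n_segments=500):
    """cube: (H, W, B); labels: (H, W) integer map with -1 for unlabeled pixels."""
    H, W, B = cube.shape
    flat = cube.reshape(-1, B)
    pcs = PCA(n_components=3).fit_transform(flat).reshape(H, W, 3)
    pcs = (pcs - pcs.min()) / (pcs.max() - pcs.min() + 1e-12)   # scale to [0, 1]
    seg = slic(pcs, n_segments=n_segments, compactness=10,
               channel_axis=-1, start_label=0)
    n_sp = seg.max() + 1
    # node features: mean spectrum of each superpixel
    X = np.array([flat[(seg == s).reshape(-1)].mean(axis=0) for s in range(n_sp)])
    # node labels: majority of the labeled pixels inside, else -1 (unlabeled)
    y = np.full(n_sp, -1)
    for s in range(n_sp):
        inside = labels[(seg == s) & (labels != -1)]
        if inside.size:
            y[s] = np.bincount(inside).argmax()
    model = LabelSpreading(kernel='knn', n_neighbors=10).fit(X, y)
    return model.transduction_[seg]                             # per-pixel label map
```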


2019, Vol 11 (7), pp. 833
Author(s): Jianshang Liao, Liguo Wang

In recent decades, spatial information extracted from hyperspectral images by various methods has become a research hotspot for enhancing classification performance. This work proposes a new classification method based on the fusion of two kinds of spatial information, which are then classified by a large margin distribution machine (LDM). First, spatial texture information is extracted from the top principal components of the hyperspectral image by a curvature filter (CF). Second, spatial correlation information of the hyperspectral image is extracted using a domain transform recursive filter (DTRF). Last, the spatial texture and correlation information are fused and classified with the LDM. The experimental results of hyperspectral image classification demonstrate that the proposed curvature filter and domain transform recursive filter with LDM (CFDTRF-LDM) method is superior to other classification methods.
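
Only a schematic of the fusion-and-classification stage: the curvature filter, the domain transform recursive filter, and the LDM are not reproduced here. Two Gaussian smoothings at different scales stand in for the texture and correlation features, and an RBF SVM stands in for the LDM classifier; all parameters and names are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def fuse_and_classify(cube, train_idx, train_labels, n_pcs=10):
    """cube: (H, W, B); train_idx: flat indices of labeled pixels."""
    H, W, B = cube.shape
    pcs = PCA(n_components=n_pcs).fit_transform(cube.reshape(-1, B)).reshape(H, W, n_pcs)
    # stand-ins for the CF texture features and the DTRF correlation features
    texture = np.stack([gaussian_filter(pcs[..., i], sigma=1.0) for i in range(n_pcs)], -1)
    context = np.stack([gaussian_filter(pcs[..., i], sigma=3.0) for i in range(n_pcs)], -1)
    fused = np.concatenate([texture, context], axis=-1).reshape(H * W, -1)  # feature fusion
    clf = SVC(kernel='rbf').fit(fused[train_idx], train_labels)             # LDM stand-in
    return clf.predict(fused).reshape(H, W)
```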

