Modified LDE for Dimensionality Reduction of Hyperspectral Image

Author(s):  
Lei He ◽  
Hongwei Yang ◽  
Lina Zhao


Author(s):  
V. H. Ayma ◽  
V. A. Ayma ◽  
J. Gutierrez

Abstract. Nowadays, the increasing amount of information provided by hyperspectral sensors requires optimal solutions to ease the subsequent analysis of the produced data. A common issue in this regard relates to the representation of hyperspectral data for classification tasks. Existing approaches address the data representation problem by performing a dimensionality reduction over the original data. However, mining complementary features that reduce the redundancy across the multiple levels of hyperspectral images remains challenging. Thus, exploiting the representation power of neural network-based techniques becomes an attractive alternative. In this work, we propose a novel dimensionality reduction implementation for hyperspectral imaging based on autoencoders, ensuring orthogonality among features to reduce the redundancy in hyperspectral data. The experiments conducted on the Pavia University, Kennedy Space Center, and Botswana hyperspectral datasets evidence the representation power of our approach, leading to better classification performance compared with traditional hyperspectral dimensionality reduction algorithms.
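A minimal sketch of this idea, not the authors' implementation: an autoencoder over per-pixel spectra whose loss adds a soft penalty pushing the latent features toward orthogonality (a decorrelated Gram matrix). The layer sizes, band count (103, as in Pavia University), latent size, and weight lam_ortho are illustrative assumptions.

```python
import torch
import torch.nn as nn

class OrthoAutoencoder(nn.Module):
    def __init__(self, n_bands=103, n_latent=20):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_bands, 64), nn.ReLU(),
                                     nn.Linear(64, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 64), nn.ReLU(),
                                     nn.Linear(64, n_bands))

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

def loss_fn(x, x_hat, z, lam_ortho=1e-3):
    # Reconstruction term plus a penalty that pushes the latent Gram matrix
    # toward the identity, i.e. decorrelated (near-orthogonal) features.
    recon = nn.functional.mse_loss(x_hat, x)
    z_norm = nn.functional.normalize(z, dim=0)       # unit-norm feature columns
    gram = z_norm.t() @ z_norm                        # (n_latent, n_latent)
    eye = torch.eye(gram.shape[0], device=z.device)
    return recon + lam_ortho * torch.norm(gram - eye, p="fro") ** 2

# Usage sketch: x is a (n_pixels, n_bands) tensor of placeholder spectra
model = OrthoAutoencoder(n_bands=103, n_latent=20)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(1024, 103)
for _ in range(10):
    z, x_hat = model(x)
    loss = loss_fn(x, x_hat, z)
    opt.zero_grad()
    loss.backward()
    opt.step()
```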


Author(s):  
R. Kiran Kumar ◽  
B. Saichandana ◽  
K. Srinivas

This paper presents genetic algorithm-based band selection and classification of hyperspectral image data sets. Hyperspectral remote sensors collect image data for a large number of narrow, adjacent spectral bands. Every pixel in a hyperspectral image contains a continuous spectrum that can be used to classify objects with great detail and precision. In this paper, filtering based on the 2-D empirical mode decomposition method is first used to remove noisy components from each band of the hyperspectral data. After filtering, band selection is performed using a genetic algorithm in order to remove bands that convey less information. This dimensionality reduction lowers the storage space, computational load, communication bandwidth, and other requirements imposed on the unsupervised classification algorithms. Next, image fusion is performed on the selected hyperspectral bands to merge the maximum possible features from the selected images into a single image. This fused image is classified using a genetic algorithm, with indices such as the K-means Index (KMI) and the Jm measure used as objective functions. This method improves the classification accuracy and performance of hyperspectral images compared with classification without dimensionality reduction.
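As a hedged illustration of genetic-algorithm band selection (not the paper's exact formulation), the sketch below evolves binary chromosomes that flag which bands are kept and scores them with a simple cluster-compactness proxy standing in for the KMI/Jm objectives; all names and parameters are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def fitness(chromosome, pixels, n_clusters=5):
    # Proxy objective: compact K-means clusters using only the selected bands.
    bands = np.flatnonzero(chromosome)
    if bands.size == 0:
        return -np.inf
    km = KMeans(n_clusters=n_clusters, n_init=3, random_state=0).fit(pixels[:, bands])
    return -km.inertia_ / bands.size

def ga_band_selection(pixels, n_bands, pop_size=20, n_gen=10, p_mut=0.02):
    pop = rng.integers(0, 2, size=(pop_size, n_bands))   # binary chromosomes
    for _ in range(n_gen):
        scores = np.array([fitness(ind, pixels) for ind in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[: pop_size // 2]]             # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_bands)                 # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_bands) < p_mut             # bit-flip mutation
            child[flip] ^= 1
            children.append(child)
        pop = np.vstack([parents, children])
    scores = np.array([fitness(ind, pixels) for ind in pop])
    return np.flatnonzero(pop[np.argmax(scores)])

# Usage sketch: pixels is a (n_pixels, n_bands) reshaped hyperspectral cube
pixels = rng.random((500, 50))                             # placeholder data
selected = ga_band_selection(pixels, n_bands=50)
print("selected bands:", selected)
```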


2020 ◽  
Vol 16 (11) ◽  
pp. 155014772096846
Author(s):  
Jing Liu ◽  
Yulong Qiao

Spectral dimensionality reduction is a crucial step for hyperspectral image classification in practical applications. Dimensionality reduction strongly influences classification performance, which is hampered by strongly coupled features and high band correlation. To address these issues, we propose a Mahalanobis distance-based kernel supervised machine learning framework for spectral dimensionality reduction. With Mahalanobis distance matrix-based dimensionality reduction, the coupling between features is removed and the scale effect is eliminated in the low-dimensional feature space, which benefits image classification. The experimental results show that, compared with other methods, the proposed algorithm achieves the best accuracy and efficiency. Mahalanobis distance-based multiple kernel learning achieves higher classification accuracy than the Euclidean distance kernel function. Accordingly, the proposed Mahalanobis distance-based kernel supervised machine learning method performs well with respect to spectral dimensionality reduction in hyperspectral remote sensing imaging.
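A minimal sketch of the core idea, under the assumption of an SVM with a precomputed Gram matrix: replace the Euclidean distance inside an RBF kernel with a Mahalanobis distance built from the inverse band covariance. The data names and gamma value are placeholders, not the authors' configuration.

```python
import numpy as np
from sklearn.svm import SVC

def mahalanobis_rbf(A, B, VI, gamma=0.1):
    # Squared Mahalanobis distances between rows of A and rows of B:
    # d2(a, b) = (a - b)^T VI (a - b), with VI the inverse covariance matrix.
    diff = A[:, None, :] - B[None, :, :]
    d2 = np.einsum("ijk,kl,ijl->ij", diff, VI, diff)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X_train, y_train = rng.random((200, 30)), rng.integers(0, 3, 200)   # placeholders
X_test = rng.random((50, 30))

VI = np.linalg.pinv(np.cov(X_train, rowvar=False))   # inverse covariance of the bands
K_train = mahalanobis_rbf(X_train, X_train, VI)
K_test = mahalanobis_rbf(X_test, X_train, VI)

clf = SVC(kernel="precomputed").fit(K_train, y_train)
pred = clf.predict(K_test)
```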


2019 ◽  
Vol 11 (10) ◽  
pp. 1219 ◽  
Author(s):  
Lan Zhang ◽  
Hongjun Su ◽  
Jingwei Shen

Dimensionality reduction (DR) is an important preprocessing step in hyperspectral image applications. In this paper, a superpixelwise kernel principal component analysis (SuperKPCA) method for DR is proposed that performs kernel principal component analysis (KPCA) on each homogeneous region, so as to fully utilize KPCA's ability to acquire nonlinear features. Moreover, for the proposed method, the differences in the DR results obtained from different fundamental images (the first principal components obtained by principal component analysis (PCA), KPCA, and minimum noise fraction (MNF)) are compared. Extensive experiments show that when 5, 10, 20, and 30 samples from each class are selected for the Indian Pines, Pavia University, and Salinas datasets: (1) when the most suitable fundamental image is selected, the classification accuracy obtained by SuperKPCA can be increased by 0.06%–0.74%, 3.88%–4.37%, and 0.39%–4.85%, respectively, compared with SuperPCA, which performs PCA on each homogeneous region; (2) the DR results obtained from different first principal components are different and complementary. By fusing the multiscale classification results obtained from different first principal components, the classification accuracy can be increased by 0.54%–2.68%, 0.12%–1.10%, and 0.01%–0.08%, respectively, compared with the method based only on the most suitable fundamental image.
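The sketch below approximates the SuperKPCA pipeline under stated assumptions, not the authors' code: SLIC superpixels are computed on a fundamental image (the first principal component here), and KPCA is then run independently on the pixels of each superpixel. The cube shape, segment count, and kernel parameters are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA, KernelPCA
from skimage.segmentation import slic

rng = np.random.default_rng(0)
cube = rng.random((60, 60, 50))                       # placeholder HSI cube (H, W, bands)
H, W, B = cube.shape
pixels = cube.reshape(-1, B)

# Fundamental image: first principal component of the spectra, scaled to [0, 1]
pc1 = PCA(n_components=1).fit_transform(pixels).reshape(H, W)
pc1 = (pc1 - pc1.min()) / (pc1.max() - pc1.min() + 1e-12)
segments = slic(pc1, n_segments=100, compactness=0.1, channel_axis=None)

n_comp = 10
reduced = np.zeros((H * W, n_comp))
labels = segments.reshape(-1)
for s in np.unique(labels):
    idx = np.flatnonzero(labels == s)
    k = min(n_comp, idx.size)                          # small regions may have few pixels
    kpca = KernelPCA(n_components=k, kernel="rbf", gamma=1.0)
    reduced[idx, :k] = kpca.fit_transform(pixels[idx])

reduced_cube = reduced.reshape(H, W, n_comp)           # per-superpixel nonlinear features
```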

