Dimensionality Reduction using Bi-Dimensional Empirical Mode Decomposition Method for Hyperspectral Image Segmentation

2019 ◽  
Vol 8 (4) ◽  
pp. 11300-11304

This paper presents dimensionality reduction of a hyperspectral dataset using bi-dimensional empirical mode decomposition (BEMD). The reduction method is used within a process for segmentation of hyperspectral data. Hyperspectral data contain multiple narrow bands conveying both spectral and spatial information of a scene. Analysis of such data is carried out in three sequential stages: dimensionality reduction, fusion, and segmentation. The method presented in this paper focuses mainly on the dimensionality reduction step using BEMD; fusion is carried out using a hierarchical fusion method, and segmentation is carried out using clustering algorithms. The dimensionality reduction removes the less informative bands in the data set, decreasing the storage and processing load in the subsequent analysis steps. Qualitative and quantitative analysis shows that the proposed method selects the most informative bands, yielding a high-quality segmented image with fuzzy c-means (FCM) clustering.
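As a rough illustration of the final segmentation stage, the following is a minimal fuzzy c-means (FCM) sketch in NumPy; it is not the authors' implementation, and the cluster count, fuzzifier m, and the single-band usage example are illustrative assumptions.

```python
import numpy as np

def fcm(pixels, n_clusters=4, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Fuzzy c-means on an (n_pixels, n_features) array; returns memberships and centers."""
    rng = np.random.default_rng(seed)
    u = rng.random((pixels.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)                      # memberships sum to 1 per pixel
    for _ in range(max_iter):
        um = u ** m
        centers = (um.T @ pixels) / um.sum(axis=0)[:, None]
        dist = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # standard FCM membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        u_new = 1.0 / ((dist[:, :, None] / dist[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
        if np.abs(u_new - u).max() < tol:
            return u_new, centers
        u = u_new
    return u, centers

# Hypothetical usage on a fused 2-D image obtained after BEMD reduction and hierarchical fusion:
# u, _ = fcm(fused.reshape(-1, 1)); labels = u.argmax(axis=1).reshape(fused.shape)
```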

Author(s):  
R. Kiran Kumar ◽  
B. Saichandana ◽  
K. Srinivas

This paper presents genetic algorithm based band selection and classification on a hyperspectral image data set. Hyperspectral remote sensors collect image data in a large number of narrow, adjacent spectral bands. Every pixel in a hyperspectral image carries a continuous spectrum that is used to classify objects with great detail and precision. In this paper, filtering based on the 2-D empirical mode decomposition method is first used to remove noisy components in each band of the hyperspectral data. After filtering, band selection is performed using a genetic algorithm in order to remove bands that convey little information. This dimensionality reduction lowers the storage, computational, and communication-bandwidth requirements imposed on the unsupervised classification algorithms. Next, image fusion is performed on the selected hyperspectral bands to selectively merge the maximum possible features from the selected images into a single image. This fused image is classified using a genetic algorithm, with indices such as the K-means Index (KMI) and the Jm measure used as objective functions. The method increases classification accuracy and performance compared with processing the hyperspectral image without dimensionality reduction.
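A toy sketch of the genetic-algorithm band-selection idea is shown below, assuming a binary chromosome over bands. The fitness function here is a cheap per-band variance surrogate standing in for the paper's clustering-based objectives (KMI, Jm); the population size, mutation rate, and selection scheme are illustrative, not the authors' settings.

```python
import numpy as np

def ga_band_selection(cube, n_select=30, pop_size=20, generations=40, seed=0):
    """Toy GA: each chromosome is a binary mask over bands; higher fitness = better subset."""
    rng = np.random.default_rng(seed)
    h, w, n_bands = cube.shape
    X = cube.reshape(-1, n_bands)

    def fitness(mask):
        if mask.sum() == 0:
            return -np.inf
        sub = X[:, mask.astype(bool)]
        return sub.var(axis=0).sum() / mask.sum()          # surrogate for a clustering index

    pop = (rng.random((pop_size, n_bands)) < n_select / n_bands).astype(int)
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]   # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_bands)
            child = np.concatenate([a[:cut], b[cut:]])             # one-point crossover
            flip = rng.random(n_bands) < 0.01                      # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        pop = np.vstack([parents, children])
    best = pop[np.argmax([fitness(ind) for ind in pop])]
    return np.flatnonzero(best)                                    # indices of selected bands
```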


Author(s):  
B. Saichandana ◽  
K. Srinivas ◽  
R. KiranKumar

Hyperspectral remote sensors collect image data in a large number of narrow, adjacent spectral bands. Every pixel in a hyperspectral image carries a continuous spectrum that is used to classify objects with great detail and precision. This paper presents a hyperspectral image classification mechanism using a genetic algorithm, with empirical mode decomposition and image fusion used in the preprocessing stage. The 2-D empirical mode decomposition method is used to remove noisy components in each band of the hyperspectral data. After filtering, image fusion is performed on the hyperspectral bands to selectively merge the maximum possible features from the source images into a single image. This fused image is classified using a genetic algorithm. Different indices, such as the K-means Index (KMI), the Davies-Bouldin Index (DBI), and the Xie-Beni Index (XBI), are used as objective functions. This method increases the classification accuracy of the hyperspectral image.
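Of the objective functions named above, the Davies-Bouldin Index has a compact standard definition; a minimal NumPy version is sketched below (lower values indicate better-separated clusters). This is a generic implementation, not the paper's code; scikit-learn's davies_bouldin_score can be used as a cross-check.

```python
import numpy as np

def davies_bouldin(X, labels):
    """Davies-Bouldin index for X of shape (n_samples, n_features); lower is better."""
    clusters = np.unique(labels)
    centroids = np.array([X[labels == c].mean(axis=0) for c in clusters])
    # S_i: average distance of each cluster's points to its centroid
    scatter = np.array([np.linalg.norm(X[labels == c] - centroids[i], axis=1).mean()
                        for i, c in enumerate(clusters)])
    k = len(clusters)
    db = 0.0
    for i in range(k):
        ratios = [(scatter[i] + scatter[j]) / np.linalg.norm(centroids[i] - centroids[j])
                  for j in range(k) if j != i]
        db += max(ratios)                                   # worst-case similarity per cluster
    return db / k
```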


2020 ◽  
Vol 12 (23) ◽  
pp. 4007
Author(s):  
Kasra Rafiezadeh Shahi ◽  
Pedram Ghamisi ◽  
Behnood Rasti ◽  
Robert Jackisch ◽  
Paul Scheunders ◽  
...  

The increasing amount of information acquired by imaging sensors in Earth Sciences results in the availability of a multitude of complementary data (e.g., spectral, spatial, elevation) for monitoring the Earth's surface. Many studies have investigated the use of multi-sensor data sets to improve the performance of supervised learning-based approaches on various tasks (i.e., classification and regression), while unsupervised learning-based approaches have received less attention. In this paper, we propose a new approach to fuse multiple data sets from imaging sensors using a multi-sensor sparse-based clustering algorithm (Multi-SSC). A technique for the extraction of spatial features (i.e., morphological profiles (MPs) and invariant attribute profiles (IAPs)) is applied to high spatial-resolution data to derive spatial and contextual information. This information is then fused with spectrally rich data such as multi- or hyperspectral data. In order to fuse multi-sensor data sets, a hierarchical sparse subspace clustering approach is employed. More specifically, a lasso-based binary algorithm is used to fuse the spectral and spatial information prior to automatic clustering. The proposed framework ensures that the generated clustering map is smooth and preserves the spatial structures of the scene. In order to evaluate the generalization capability of the proposed approach, we investigate its performance not only on diverse scenes but also on different sensors and data types. The first two data sets are geological data sets consisting of hyperspectral and RGB data. The third data set is the well-known benchmark Trento data set, which includes hyperspectral and LiDAR data. Experimental results indicate that this novel multi-sensor clustering algorithm provides a more accurate clustering map than state-of-the-art sparse subspace-based clustering algorithms.
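As an illustration of the spatial-feature step, the sketch below builds a simplified morphological profile (MP) on a single high-resolution band with scikit-image, stacking openings and closings by reconstruction at a few scales. The footprint radii are arbitrary and the invariant attribute profiles (IAPs) are omitted; this is a simplification, not the authors' feature extractor.

```python
import numpy as np
from skimage.morphology import disk, erosion, dilation, reconstruction

def morphological_profile(band, radii=(1, 3, 5)):
    """Stack openings/closings by reconstruction at several scales (a simplified MP)."""
    feats = [band]
    for r in radii:
        se = disk(r)
        # opening by reconstruction: erode, then reconstruct by dilation under the original
        opened = reconstruction(erosion(band, se), band, method='dilation')
        # closing by reconstruction: dilate, then reconstruct by erosion above the original
        closed = reconstruction(dilation(band, se), band, method='erosion')
        feats.extend([opened, closed])
    return np.stack(feats, axis=-1)   # (H, W, 1 + 2*len(radii)) spatial feature cube
```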


Author(s):  
N. Jamshidpour ◽  
S. Homayouni ◽  
A. Safari

Hyperspectral image classification has been one of the most popular research areas in the remote sensing community over the past decades. However, there are still some problems that require specific attention. For example, the lack of enough labeled samples and the high dimensionality of the data are the two most important issues that dramatically degrade the performance of supervised classification. The main idea of semi-supervised learning is to overcome these issues through the contribution of unlabeled samples, which are available in enormous amounts. In this paper, we propose a graph-based semi-supervised classification method that uses both spectral and spatial information for hyperspectral image classification. More specifically, two graphs are designed and constructed to exploit the relationships among pixels in the spectral and spatial spaces, respectively. The Laplacians of both graphs are then merged to form a weighted joint graph. The experiments were carried out on two different benchmark hyperspectral data sets. The proposed method performed significantly better than well-known supervised classification methods such as SVM. The assessments comprised both accuracy and homogeneity analyses of the produced classification maps. The proposed spectral-spatial SSL method considerably increased the classification accuracy when the labeled training data set was scarce: with only five labeled samples per class, the performance improved by 5.92% and 10.76% over spatial graph-based SSL for the AVIRIS Indian Pines and Pavia University data sets, respectively.
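A minimal sketch of the joint spectral-spatial graph idea is given below, assuming k-NN graphs built from pixel spectra and pixel coordinates, a convex combination of their normalized Laplacians, and a simple regularized label-propagation solve. The weighting alpha, regularizer gamma, and the propagation formulation are illustrative assumptions rather than the paper's exact graph construction.

```python
import numpy as np
from scipy.sparse import identity
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import spsolve
from sklearn.neighbors import kneighbors_graph

def joint_graph_ssl(X_spec, coords, y, labeled_idx, n_classes, k=10, alpha=0.5, gamma=1.0):
    """Merge spectral and spatial k-NN graph Laplacians, then propagate the few labels."""
    W1 = kneighbors_graph(X_spec, k, mode='connectivity', include_self=False)
    W2 = kneighbors_graph(coords, k, mode='connectivity', include_self=False)
    W1, W2 = 0.5 * (W1 + W1.T), 0.5 * (W2 + W2.T)            # symmetrize adjacency matrices
    L = alpha * laplacian(W1, normed=True) + (1 - alpha) * laplacian(W2, normed=True)

    n = X_spec.shape[0]
    Y = np.zeros((n, n_classes))
    Y[labeled_idx, y[labeled_idx]] = 1.0                     # one-hot seeds for labeled pixels
    A = (identity(n, format='csr') + gamma * L).tocsc()
    F = np.column_stack([spsolve(A, Y[:, c]) for c in range(n_classes)])
    return F.argmax(axis=1)                                  # predicted class per pixel
```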


2021 ◽  
Vol 13 (2) ◽  
pp. 268
Author(s):  
Xiaochen Lv ◽  
Wenhong Wang ◽  
Hongfu Liu

Hyperspectral unmixing is an important technique for analyzing remote sensing images; it aims to obtain a collection of endmembers and their corresponding abundances. In recent years, non-negative matrix factorization (NMF) has received extensive attention due to its good adaptability to data with different degrees of mixing. The majority of existing NMF-based unmixing methods are developed by incorporating additional constraints into the standard NMF based on the spectral and spatial information of hyperspectral images. However, they neglect to exploit the imbalanced nature of the pixels in the data, which may cause pixels mixed with imbalanced endmembers to be ignored; as a result, the imbalanced endmembers generally cannot be estimated accurately owing to the statistical properties of NMF. To exploit the information of imbalanced samples in hyperspectral data during the unmixing procedure, this paper proposes a cluster-wise weighted NMF (CW-NMF) method for the unmixing of hyperspectral images with imbalanced data. Specifically, based on the result of clustering conducted on the hyperspectral image, we construct a weight matrix and introduce it into the standard NMF model. The proposed weight matrix provides an appropriate weight for the reconstruction error between each original pixel and the reconstructed pixel in the unmixing procedure. In this way, the adverse effect of imbalanced samples on the statistical accuracy of NMF is expected to be reduced by assigning larger weights to pixels containing imbalanced endmembers and smaller weights to pixels mixed by majority endmembers. In addition, we extend the proposed CW-NMF by introducing abundance sparsity constraints and graph-based regularization, respectively. Experimental results on both synthetic and real hyperspectral data demonstrate the effectiveness of the proposed methods in comparison with several state-of-the-art methods.
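To make the weighting idea concrete, below is a minimal sketch of weighted NMF multiplicative updates for unmixing, minimizing ||W ⊙ (V - EA)||_F^2 with a per-pixel weight. The inverse-cluster-size weighting shown in the usage comment is one plausible choice suggested by the abstract, not the paper's exact weight matrix, and the abundance constraints (sum-to-one, sparsity, graph regularization) discussed above are omitted here.

```python
import numpy as np

def weighted_nmf_unmix(V, weights, n_end, n_iter=300, eps=1e-9, seed=0):
    """Weighted NMF updates for V ≈ E A; V: (bands, pixels), weights: (pixels,)."""
    rng = np.random.default_rng(seed)
    bands, pixels = V.shape
    E = rng.random((bands, n_end))                 # endmember signatures
    A = rng.random((n_end, pixels))                # abundance fractions
    W = np.broadcast_to(weights, (bands, pixels))  # per-pixel weight repeated over bands
    for _ in range(n_iter):
        WV, WEA = W * V, W * (E @ A)
        E *= (WV @ A.T) / (WEA @ A.T + eps)        # multiplicative update for endmembers
        WEA = W * (E @ A)
        A *= (E.T @ WV) / (E.T @ WEA + eps)        # multiplicative update for abundances
    return E, A

# Hypothetical cluster-derived weights: pixels in small (imbalanced) clusters weigh more
# weights = 1.0 / cluster_sizes[cluster_labels]
```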

