Dimensionality Reduction Based on PARAFAC Model

2019 ◽  
Vol 63 (6) ◽  
pp. 60501-1-60501-11
Author(s):  
Ronghua Yan ◽  
Jinye Peng ◽  
Dongmei Ma

Abstract In hyperspectral image analysis, dimensionality reduction is a preprocessing step for hyperspectral image (HSI) classification. Principal component analysis (PCA) reduces the spectral dimension but does not exploit the spatial information of an HSI. To address this, tensor decompositions such as parallel factor analysis (PARAFAC) have been successfully applied to joint noise reduction in the spatial and spectral dimensions of hyperspectral images. However, PARAFAC does not reduce the spectral dimension. To improve on this, two new methods are proposed in this article that combine PCA and PARAFAC, reducing the spectral dimension while suppressing noise in both the spatial and spectral dimensions. The experimental results indicate that the new methods improve classification compared with the PARAFAC method.
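As an illustration of how the two steps could be chained, the sketch below applies PCA along the spectral axis and then a rank-constrained PARAFAC (CP) decomposition to the reduced cube for joint spatial-spectral denoising. The ordering, the component count, and the CP rank are illustrative assumptions, not the paper's settings, and only one of the two proposed combinations is shown.

```python
import numpy as np
import tensorly as tl
from sklearn.decomposition import PCA
from tensorly.decomposition import parafac

def pca_parafac_reduce(hsi, n_components=30, cp_rank=50):
    """hsi: (rows, cols, bands) hyperspectral cube (hypothetical input).
    One possible ordering: spectral PCA first, then PARAFAC denoising."""
    rows, cols, bands = hsi.shape
    # Spectral dimensionality reduction: PCA on the unfolded pixel matrix.
    pixels = hsi.reshape(-1, bands)
    reduced = PCA(n_components=n_components).fit_transform(pixels)
    cube = reduced.reshape(rows, cols, n_components)
    # Joint spatial-spectral denoising via a rank-constrained CP (PARAFAC) model.
    cp = parafac(tl.tensor(cube), rank=cp_rank, init="random", tol=1e-6)
    return tl.cp_to_tensor(cp)
```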

2020 ◽  
Vol 12 (11) ◽  
pp. 1698 ◽  
Author(s):  
Alina L. Machidon ◽  
Fabio Del Frate ◽  
Matteo Picchiani ◽  
Octavian M. Machidon ◽  
Petre L. Ogrutan

Principal Component Analysis (PCA) is a method based on statistics and linear algebra, used in hyperspectral satellite imagery for the data dimensionality reduction required to speed up and improve the performance of subsequent hyperspectral image processing algorithms. This paper introduces gaPCA, an alternative algorithm that computes the principal components through a geometrically constructed approximation of standard PCA, and presents its application to remote sensing hyperspectral images. Because it focuses on maximizing the range of the data rather than its variance, gaPCA has the potential to yield better land classification results by preserving more of the information related to the smaller objects of the scene (or to rare spectral objects) than standard PCA. The paper validates gaPCA on four distinct datasets and performs comparative evaluations and metrics against standard PCA. A comparative land classification benchmark of gaPCA and standard PCA using statistical tools is also described. The results show that gaPCA is an effective dimensionality reduction tool, with performance similar to, and in several cases higher than, standard PCA on specific image classification tasks. gaPCA proved more suitable for hyperspectral images with small structures or objects that need to be detected, or where predominantly spectral or spectrally similar classes are present.
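The abstract describes gaPCA only as a geometric construction that maximizes the range of the data rather than its variance. The sketch below implements one plausible reading of that idea: each component is the direction between the two most distant samples, followed by deflation. The authors' exact construction may differ; the function name and parameters are illustrative.

```python
import numpy as np
from scipy.spatial.distance import cdist

def ga_pca(X, n_components=3):
    """X: (n_samples, n_features). Range-based geometric approximation of PCA:
    each component is the direction between the two most distant samples of the
    (deflated) data. One plausible reading of the geometric construction only."""
    mean = X.mean(axis=0)
    Xc = X - mean
    components = []
    for _ in range(n_components):
        # Pair of samples with the largest separation defines the "range" direction.
        # (Quadratic in the number of samples; subsample pixels for a full HSI.)
        d = cdist(Xc, Xc)
        i, j = np.unravel_index(np.argmax(d), d.shape)
        v = Xc[i] - Xc[j]
        v /= np.linalg.norm(v)
        components.append(v)
        # Deflate: remove this direction before searching for the next one.
        Xc = Xc - np.outer(Xc @ v, v)
    W = np.vstack(components)          # (n_components, n_features)
    return (X - mean) @ W.T, W
```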


2019 ◽  
Vol 11 (10) ◽  
pp. 1219 ◽  
Author(s):  
Lan Zhang ◽  
Hongjun Su ◽  
Jingwei Shen

Dimensionality reduction (DR) is an important preprocessing step in hyperspectral image applications. In this paper, a superpixelwise kernel principal component analysis (SuperKPCA) method for DR is proposed that performs kernel principal component analysis (KPCA) on each homogeneous region, fully utilizing KPCA's ability to acquire nonlinear features. Moreover, for the proposed method, the differences in the DR results obtained from different fundamental images (the first principal components obtained by principal component analysis (PCA), KPCA, and minimum noise fraction (MNF)) are compared. Extensive experiments show that when 5, 10, 20, and 30 samples from each class are selected, for the Indian Pines, Pavia University, and Salinas datasets: (1) when the most suitable fundamental image is selected, the classification accuracy obtained by SuperKPCA can be increased by 0.06%–0.74%, 3.88%–4.37%, and 0.39%–4.85%, respectively, when compared with SuperPCA, which performs PCA on each homogeneous region; (2) the DR results obtained from different first principal components are different and complementary. By fusing the multiscale classification results obtained from different first principal components, the classification accuracy can be increased by 0.54%–2.68%, 0.12%–1.10%, and 0.01%–0.08%, respectively, when compared with the method based only on the most suitable fundamental image.
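A minimal sketch of the superpixelwise idea, assuming SLIC as the superpixel generator, the PCA first principal component as the fundamental image, and an RBF kernel; the paper's segmentation method and kernel settings may differ.

```python
import numpy as np
from sklearn.decomposition import PCA, KernelPCA
from skimage.segmentation import slic

def super_kpca(hsi, n_segments=100, n_components=10):
    """hsi: (rows, cols, bands). Segment a fundamental image into homogeneous
    regions, then run KPCA inside each region (SLIC is a stand-in here)."""
    rows, cols, bands = hsi.shape
    pixels = hsi.reshape(-1, bands)
    # Fundamental image: first principal component of the whole cube (PCA variant).
    pc1 = PCA(n_components=1).fit_transform(pixels).reshape(rows, cols)
    pc1 = (pc1 - pc1.min()) / (np.ptp(pc1) + 1e-12)
    labels = slic(pc1, n_segments=n_segments, compactness=0.1, channel_axis=None)
    out = np.zeros((rows * cols, n_components))
    for lab in np.unique(labels):
        mask = (labels == lab).reshape(-1)
        region = pixels[mask]
        k = min(n_components, region.shape[0])
        # Nonlinear features of the homogeneous region via RBF kernel PCA.
        out[mask, :k] = KernelPCA(n_components=k, kernel="rbf").fit_transform(region)
    return out.reshape(rows, cols, n_components)
```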


Author(s):  
S. Lyu ◽  
J. Mao ◽  
M. Hou

Abstract. Due to the influence of natural and human factors, the linear features in murals are partially blurred, which poses great challenges to the digital preservation and virtual restoration of cultural heritage. Taking advantage of the non-invasive measurement and the rich image and spectral information offered by hyperspectral technology, we propose a linear feature enhancement method that combines semi-supervised superpixel segmentation with block dimensionality reduction. The main research work includes: (1) The true color composite image was segmented to obtain label data by using the local spatial information of the superpixel image and the global feature information extracted by fuzzy c-means (FCM) clustering. (2) According to the label data, the preprocessed hyperspectral data were divided into homogeneous regions, whose dimensionality was reduced by principal component analysis (PCA) and kernel principal component analysis (KPCA). (3) The principal component images with the largest gradient after dimensionality reduction were selected and normalized. The optimal principal component images normalized by the block PCA and block KPCA dimensionality reduction algorithms are superimposed to produce the linear feature enhancement images of the murals. Hyperspectral images of some murals in Qutan Temple, Qinghai Province, China were used to verify the method. The results show that combining the superpixel FCM image segmentation algorithm with the dimensionality reduction algorithms makes full use of the spatial and spectral information of different pattern areas in the hyperspectral image and highlights the linear information in hyperspectral images of faded murals.
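A compact sketch of steps (2) and (3), assuming the region labels from step (1) are already available; the per-region component counts, the gradient criterion, and the normalization are one plausible reading of the description above, not the authors' exact settings.

```python
import numpy as np
from sklearn.decomposition import PCA, KernelPCA

def enhance_linear_features(hsi, labels, n_components=3):
    """hsi: (rows, cols, bands); labels: (rows, cols) homogeneous-region map
    (assumed to come from the superpixel + FCM segmentation of step (1)).
    Block PCA/KPCA per region, pick the component image with the largest mean
    gradient, normalize, and superimpose the two results."""
    rows, cols, bands = hsi.shape
    pix = hsi.reshape(-1, bands)
    flat = labels.reshape(-1)

    def block_reduce(reducer_cls, **kw):
        out = np.zeros((rows * cols, n_components))
        for lab in np.unique(flat):
            m = flat == lab
            k = min(n_components, int(m.sum()), bands)
            out[m, :k] = reducer_cls(n_components=k, **kw).fit_transform(pix[m])
        return out.reshape(rows, cols, n_components)

    def best_gradient_image(cube):
        # Select the component image with the largest mean gradient magnitude.
        grads = [np.abs(np.gradient(cube[..., i])).mean() for i in range(cube.shape[-1])]
        img = cube[..., int(np.argmax(grads))]
        return (img - img.min()) / (np.ptp(img) + 1e-12)  # normalize to [0, 1]

    pca_img = best_gradient_image(block_reduce(PCA))
    kpca_img = best_gradient_image(block_reduce(KernelPCA, kernel="rbf"))
    return pca_img + kpca_img  # superimposed linear-feature enhancement image
```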


Sensors ◽  
2019 ◽  
Vol 19 (3) ◽  
pp. 479 ◽  
Author(s):  
Baokai Zu ◽  
Kewen Xia ◽  
Tiejun Li ◽  
Ziping He ◽  
Yafang Li ◽  
...  

Hyperspectral Images (HSIs) contain enriched information due to the presence of many bands and have attracted attention over the past few decades. However, the explosive growth in HSIs' scale and dimensionality causes the "curse of dimensionality" and the "Hughes phenomenon". Dimensionality reduction has become an important means of overcoming the curse of dimensionality. In hyperspectral images, labeled samples are difficult to collect because they require substantial labor and material resources. Semi-supervised dimensionality reduction is therefore very important for mining high-dimensional data, given the scarcity of costly labeled samples. Supervised dimensionality reduction methods are mostly extended to semi-supervised ones through graphs, which are powerful tools for characterizing data relationships and exploring manifolds. To take advantage of the spatial information of the data, we put forward a novel graph construction method for semi-supervised learning, called SLIC Superpixel-based ℓ2,1-norm Robust Principal Component Analysis (SURPCA2,1), which integrates the superpixel segmentation method Simple Linear Iterative Clustering (SLIC) into low-rank decomposition. First, the SLIC algorithm is adopted to obtain the spatially homogeneous regions of the HSI. Then, the ℓ2,1-norm RPCA is applied in each superpixel area, which captures the global information of homogeneous regions and preserves the spectral subspace segmentation of HSIs very well. In this way, the spatial and spectral information of the hyperspectral image is explored simultaneously by combining superpixel segmentation with RPCA. Finally, a semi-supervised dimensionality reduction framework based on the SURPCA2,1 graph is used for the feature extraction task. Extensive experiments on multiple HSIs showed that the proposed spectral-spatial SURPCA2,1 is consistently comparable to the other graphs evaluated when few labeled samples are available.
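The sketch below shows one way such a graph could be assembled: SLIC superpixels, a minimal ADMM solver for the ℓ2,1-norm RPCA inside each superpixel, and Gaussian affinities on the recovered low-rank spectra. The solver, its parameters, and the affinity weighting are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np
from skimage.segmentation import slic

def l21_rpca(X, lam=0.1, mu=1.0, n_iter=100):
    """Minimal ADMM sketch of l2,1-norm RPCA, X ≈ L + E, with nuclear-norm-regularized
    L and column-sparse E; the authors' solver and stopping criteria may differ."""
    L, E, Y = np.zeros_like(X), np.zeros_like(X), np.zeros_like(X)
    for _ in range(n_iter):
        # Singular-value thresholding step for the low-rank component L.
        U, s, Vt = np.linalg.svd(X - E + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # Column-wise soft thresholding: proximal operator of the l2,1 norm.
        R = X - L + Y / mu
        norms = np.linalg.norm(R, axis=0, keepdims=True)
        E = R * np.maximum(1.0 - (lam / mu) / (norms + 1e-12), 0.0)
        Y += mu * (X - L - E)
    return L, E

def surpca_graph(hsi, n_segments=200, lam=0.1, sigma=1.0):
    """Hypothetical graph construction: SLIC superpixels, l2,1-RPCA in each region,
    Gaussian affinities on the recovered low-rank columns (the weighting is an
    illustrative choice, not necessarily the paper's)."""
    rows, cols, bands = hsi.shape
    labels = slic(hsi, n_segments=n_segments, compactness=10, channel_axis=-1).reshape(-1)
    pix = hsi.reshape(-1, bands).astype(float)
    n = rows * cols
    W = np.zeros((n, n))  # dense for clarity; a sparse block matrix scales better
    for lab in np.unique(labels):
        idx = np.where(labels == lab)[0]
        L, _ = l21_rpca(pix[idx].T, lam=lam)   # bands x pixels-in-superpixel
        F = L.T                                # recovered low-rank spectra per pixel
        d2 = ((F[:, None, :] - F[None, :, :]) ** 2).sum(-1)
        W[np.ix_(idx, idx)] = np.exp(-d2 / (2 * sigma ** 2))
    return W
```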


2019 ◽  
Vol 11 (7) ◽  
pp. 833 ◽  
Author(s):  
Jianshang Liao ◽  
Liguo Wang

In recent decades, obtaining the spatial information of hyperspectral images by various methods has become a research hotspot for enhancing the performance of hyperspectral image classification. This work proposes a new classification method based on the fusion of two kinds of spatial information, which are then classified by a large margin distribution machine (LDM). First, spatial texture information is extracted from the top principal components of the hyperspectral image by a curvature filter (CF). Second, spatial correlation information of the hyperspectral image is obtained using a domain transform recursive filter (DTRF). Last, the spatial texture and correlation information are fused and classified with the LDM. The experimental results on hyperspectral image classification demonstrate that the proposed curvature filter and domain transform recursive filter with LDM (CFDTRF-LDM) method is superior to other classification methods.
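Neither the curvature filter nor the LDM classifier has a widely available reference implementation, so the sketch below only illustrates the recursive-filtering step: a simplified domain transform recursive filter that smooths a single-band image (for example, a principal component) while preserving the edges of a single-channel guide. The parameter values and iteration schedule approximate the published filter rather than reproduce the paper's configuration.

```python
import numpy as np

def _rf_1d(J, a_d):
    """Edge-aware recursive pass along the last axis, left-to-right then right-to-left."""
    for x in range(1, J.shape[-1]):
        J[..., x] += a_d[..., x] * (J[..., x - 1] - J[..., x])
    for x in range(J.shape[-1] - 2, -1, -1):
        J[..., x] += a_d[..., x + 1] * (J[..., x + 1] - J[..., x])
    return J

def dtrf(img, guide, sigma_s=60.0, sigma_r=0.4, n_iter=3):
    """img, guide: 2D arrays of the same shape. Simplified domain transform
    recursive filter; sigma values are illustrative assumptions."""
    out = img.astype(float).copy()
    # Domain-transform derivatives along rows and columns of the guide image.
    dx = 1 + (sigma_s / sigma_r) * np.abs(np.diff(guide, axis=1, prepend=guide[:, :1]))
    dy = 1 + (sigma_s / sigma_r) * np.abs(np.diff(guide, axis=0, prepend=guide[:1, :]))
    for it in range(n_iter):
        # Shrinking spatial sigma per iteration, as in the original filter's schedule.
        sigma_h = sigma_s * np.sqrt(3) * 2 ** (n_iter - it - 1) / np.sqrt(4 ** n_iter - 1)
        a = np.exp(-np.sqrt(2) / sigma_h)
        out = _rf_1d(out, a ** dx)            # horizontal pass
        out = _rf_1d(out.T, (a ** dy).T).T    # vertical pass
    return out
```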


2021 ◽  
Vol 13 (14) ◽  
pp. 2752
Author(s):  
Na Li ◽  
Deyun Zhou ◽  
Jiao Shi ◽  
Tao Wu ◽  
Maoguo Gong

Dimensionality reduction (DR) plays an important role in hyperspectral image (HSI) classification. Unsupervised DR (uDR) is more practical because class labels for HSIs are difficult to obtain and scarce. However, many existing uDR algorithms lack a comprehensive exploration of spectral-locational-spatial (SLS) information, which is of great significance for uDR in view of the complex intrinsic structure of HSIs. To address this issue, two uDR methods called SLS structure preserving projection (SLSSPP) and SLS reconstruction preserving embedding (SLSRPE) are proposed. Firstly, to facilitate the extraction of SLS information, a weighted spectral-locational (wSL) datum is generated to break the locality of spatial information extraction. Then, a new SLS distance (SLSD) that excavates the SLS relationships among samples is designed to select effective SLS neighbors. In SLSSPP, a new uDR model is proposed that includes an SLS adjacency graph based on SLSD and a cluster centroid adjacency graph based on the wSL data, which compresses intraclass samples and approximately separates interclass samples in an unsupervised manner. Meanwhile, in SLSRPE, to preserve the SLS relationship among target pixels and their nearest neighbors, a new SLS reconstruction weight is defined to obtain a more discriminative projection. Experimental results on the Indian Pines, Pavia University, and Salinas datasets demonstrate that, with KNN and SVM classifiers under different classification conditions, the classification accuracies of SLSSPP and SLSRPE are approximately 4.88%, 4.15%, 2.51% and 2.30%, 5.31%, 2.41% higher, respectively, than those of state-of-the-art DR algorithms.
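A small sketch of the data preparation the abstract describes: building a weighted spectral-locational (wSL) datum and selecting neighbors in the combined space. The weighting and the plain Euclidean distance used here are simple stand-ins for the paper's wSL and SLSD definitions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def wsl_data(hsi, beta=0.1):
    """Weighted spectral-locational (wSL) datum: spectra concatenated with scaled
    pixel coordinates. The weighting scheme is an illustrative stand-in."""
    rows, cols, bands = hsi.shape
    r, c = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    spec = hsi.reshape(-1, bands).astype(float)
    spec /= (np.linalg.norm(spec, axis=1, keepdims=True) + 1e-12)
    loc = np.stack([r, c], axis=-1).reshape(-1, 2) / max(rows, cols)
    return np.hstack([spec, beta * loc])

def sls_neighbors(hsi, k=10, beta=0.1):
    """Select neighbors in the combined spectral-locational space; Euclidean
    distance on the wSL datum approximates the SLS distance described above."""
    X = wsl_data(hsi, beta=beta)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)
    return idx[:, 1:]   # drop each pixel's self-match
```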

