Probabilistic Classification of Hyperspectral Images by Learning Nonlinear Dimensionality Reduction Mapping

Author(s):  
X. Wang ◽  
Suresh Kumar ◽  
Fabio Ramos ◽  
Tobias Kaupp ◽  
Ben Upcroft ◽  
...  
2019 ◽  
Vol 11 (2) ◽  
pp. 136 ◽  
Author(s):  
Yuliang Wang ◽  
Huiyi Su ◽  
Mingshi Li

Hyperspectral images (HSIs) provide unique capabilities for urban impervious surface (UIS) extraction. This paper proposes a multi-feature extraction model (MFEM) for UIS detection from HSIs. The model is based on a nonlinear dimensionality reduction technique, t-distributed stochastic neighbor embedding (t-SNE), and a deep learning method, convolutional deep belief networks (CDBNs). We improved the two methods to create a novel MFEM consisting of improved t-SNE, deep compression CDBNs (d-CDBNs), and a logistic regression classifier. The improved t-SNE method provides dimensionality reduction and spectral feature extraction from the original HSIs, and the d-CDBNs algorithm extracts spatial features and edges from the reduced-dimensional datasets. Finally, the extracted features are combined into multi-features for impervious surface detection using the logistic regression classifier. Compared with commonly used methods, the experimental results demonstrate that the proposed MFEM provides better performance for UIS extraction and detection from HSIs.
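The two-stage pipeline the abstract describes (nonlinear dimensionality reduction for spectral features, then a logistic regression classifier) can be sketched as follows. This is a minimal illustration on synthetic data, using scikit-learn's standard t-SNE as a stand-in for the paper's improved t-SNE and omitting the d-CDBNs spatial-feature stage; the data shapes and class setup are assumptions, not the paper's experiment.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "hyperspectral" pixels: 200 pixels x 50 spectral bands,
# two classes (e.g., impervious vs. pervious surface) with shifted means.
n_pixels, n_bands = 200, 50
labels = rng.integers(0, 2, n_pixels)
X = rng.normal(0.0, 1.0, (n_pixels, n_bands)) + labels[:, None] * 2.0

# Stage 1: nonlinear dimensionality reduction / spectral feature
# extraction (standard t-SNE here, standing in for improved t-SNE).
embedded = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

# Stage 2: logistic regression on the reduced features.
clf = LogisticRegression().fit(embedded, labels)
accuracy = clf.score(embedded, labels)
```

Note that standard t-SNE has no out-of-sample transform, which is one reason a learned reduction mapping (as in the title) is attractive for classifying new pixels.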


1994 ◽  
Vol 05 (04) ◽  
pp. 313-333 ◽  
Author(s):  
MARK DOLSON

Multi-Layer Perceptron (MLP) neural networks have been used extensively for classification tasks. Typically, the MLP network is trained explicitly to produce the correct classification as its output. For speech recognition, however, several investigators have recently experimented with an indirect approach: a unique MLP predictive network is trained for each class of data, and classification is accomplished by determining which predictive network serves as the best model for samples of unknown speech. Results from this approach have been mixed. In this report, we compare the direct and indirect approaches to classification from a more fundamental perspective. We show how recent advances in nonlinear dimensionality reduction can be incorporated into the indirect approach, and we show how the two approaches can be integrated in a novel MLP framework. We further show how these new MLP networks can be usefully viewed as generalizations of Learning Vector Quantization (LVQ) and of subspace methods of pattern recognition. Lastly, we show that applying these ideas to the classification of temporal trajectories can substantially improve performance on simple tasks.
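The indirect approach described above (one predictive network per class, classification by best-fitting model) can be sketched with simple linear autoregressive predictors in place of MLPs. The signals, model order, and class setup below are illustrative assumptions, not the report's experiments.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_series(freq, n=300):
    # Noisy sinusoid as a stand-in for a speech-like temporal trajectory.
    t = np.arange(n)
    return np.sin(freq * t) + 0.05 * rng.normal(size=n)

def ar_fit(x, order=4):
    # Least-squares autoregressive predictor: x[t] ~ w . x[t-order:t].
    X = np.array([x[i:i + order] for i in range(len(x) - order)])
    y = x[order:]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def ar_error(w, x, order=4):
    # Mean squared one-step prediction error of model w on signal x.
    X = np.array([x[i:i + order] for i in range(len(x) - order)])
    y = x[order:]
    return np.mean((X @ w - y) ** 2)

# Train one predictive model per class (the "indirect" approach).
w_a = ar_fit(make_series(0.3))   # class A: low-frequency signal
w_b = ar_fit(make_series(1.1))   # class B: high-frequency signal

# Classify an unknown sample by which class model predicts it best.
unknown = make_series(0.3)
pred = "A" if ar_error(w_a, unknown) < ar_error(w_b, unknown) else "B"
```

Replacing each linear predictor with an MLP trained on its class's data gives the predictive-network scheme the report compares against direct MLP classification.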

