Analysis of linear and nonlinear dimensionality reduction methods for gender classification of face images

2005 · Vol 36 (14) · pp. 931-942 · Author(s): Samarasena Buchala, Neil Davey, Tim M. Gale, Ray J. Frank

2004 · Vol 37 (2) · pp. 325-336 · Author(s): Changshui Zhang, Jun Wang, Nanyuan Zhao, David Zhang

2017 · Vol 18 (1) · Author(s): Jiaoyun Yang, Haipeng Wang, Huitong Ding, Ning An, Gil Alterovitz

1994 · Vol 05 (04) · pp. 313-333 · Author(s): Mark Dolson

Multi-Layer Perceptron (MLP) neural networks have been used extensively for classification tasks. Typically, the MLP network is trained explicitly to produce the correct classification as its output. For speech recognition, however, several investigators have recently experimented with an indirect approach: a unique MLP predictive network is trained for each class of data, and classification is accomplished by determining which predictive network serves as the best model for samples of unknown speech. Results from this approach have been mixed. In this report, we compare the direct and indirect approaches to classification from a more fundamental perspective. We show how recent advances in nonlinear dimensionality reduction can be incorporated into the indirect approach, and we show how the two approaches can be integrated in a novel MLP framework. We further show how these new MLP networks can be usefully viewed as generalizations of Learning Vector Quantization (LVQ) and of subspace methods of pattern recognition. Lastly, we show that applying these ideas to the classification of temporal trajectories can substantially improve performance on simple tasks.
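
To make the indirect approach concrete, the sketch below trains one small predictive network per class and classifies a temporal trajectory according to which class model reconstructs it with the lowest one-step prediction error. This is only an illustrative approximation of the idea described in the abstract, not the paper's implementation; the PredictiveClassifier name, the scikit-learn MLPRegressor models, the hidden-layer size, and the frame-to-frame prediction target are assumptions made for the example.

```python
# Illustrative sketch (not the paper's code): one predictive MLP per class,
# classification by lowest one-step prediction error over a trajectory.
import numpy as np
from sklearn.neural_network import MLPRegressor


class PredictiveClassifier:
    def __init__(self, hidden=16):
        self.hidden = hidden
        self.models = {}  # class label -> predictive network for that class

    def fit(self, sequences, labels):
        # sequences: list of (T, d) arrays of frames; labels: one label per sequence.
        for c in set(labels):
            X, Y = [], []
            for seq, lab in zip(sequences, labels):
                if lab == c:
                    X.append(seq[:-1])  # frame t ...
                    Y.append(seq[1:])   # ... is trained to predict frame t+1
            X, Y = np.vstack(X), np.vstack(Y)
            model = MLPRegressor(hidden_layer_sizes=(self.hidden,), max_iter=2000)
            model.fit(X, Y)
            self.models[c] = model
        return self

    def predict(self, seq):
        # Score the trajectory under every class model; pick the best predictor.
        errors = {c: np.mean((m.predict(seq[:-1]) - seq[1:]) ** 2)
                  for c, m in self.models.items()}
        return min(errors, key=errors.get)
```

In this framing, the decision rule emerges from comparing class-conditional models rather than from a single network trained directly on class labels, which is the contrast the abstract draws between the direct and indirect approaches.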

