Nonlinear Feature Extraction: Deterministic Neural Networks

Author(s): Gustavo Deco, Dragan Obradovic
Author(s): Zhao Lu, Gangbing Song, Leang-san Shieh

The kernel method provides a general framework for representing data whenever elements of the domain interact only through inner products. As a major stride toward nonlinear feature extraction and dimensionality reduction, two important kernel-based feature extraction algorithms have been proposed: kernel principal component analysis and the kernel Fisher discriminant. Both project multivariate data onto a space of lower dimensionality while attempting to preserve as much of the structure of the data as possible. However, both methods suffer from a complete loss of sparsity and from redundancy in the nonlinear feature representation. To mitigate these drawbacks, this article applies the newly developed polynomial kernel higher order neural networks to improve sparsity and thereby obtain a succinct representation for kernel-based nonlinear feature extraction. In particular, the learning algorithm is based on linear programming support vector regression, which outperforms conventional quadratic programming support vector regression in both model sparsity and computational efficiency.
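To make the pipeline concrete, below is a minimal NumPy/SciPy sketch of the two ingredients the abstract combines: kernel PCA under a polynomial kernel, followed by a linear programming support vector regression (an L1-regularized linear program) that re-fits the leading principal component with a sparse kernel expansion. All function names, parameter values, and the toy data are illustrative assumptions, not the article's implementation.

```python
import numpy as np
from scipy.optimize import linprog

def poly_kernel(X, Y, degree=3, coef0=1.0):
    """Polynomial kernel (X @ Y.T + c)^d; a stand-in for the article's
    polynomial kernel higher order neural network feature map."""
    return (X @ Y.T + coef0) ** degree

def kernel_pca(X, degree=3, n_components=2):
    """Kernel PCA: eigendecomposition of the centered kernel matrix."""
    n = X.shape[0]
    K = poly_kernel(X, X, degree)
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    Kc = J @ K @ J                             # center data in feature space
    eigvals, eigvecs = np.linalg.eigh(Kc)      # ascending eigenvalues
    idx = np.argsort(eigvals)[::-1][:n_components]
    # Scale so each eigenvector has unit norm in feature space.
    alphas = eigvecs[:, idx] / np.sqrt(eigvals[idx])
    return Kc @ alphas                         # projections of training data

def lp_svr(K, y, C=10.0, eps=0.01):
    """LP-SVR: minimize ||alpha||_1 + C * sum(xi) subject to
    |K @ alpha + b - y| <= eps + xi, via variable splitting."""
    n = K.shape[0]
    # Variable layout: [alpha+ (n), alpha- (n), b+, b-, xi (n)], all >= 0.
    c = np.concatenate([np.ones(2 * n), [0.0, 0.0], C * np.ones(n)])
    ones = np.ones((n, 1))
    A_up = np.hstack([K, -K, ones, -ones, -np.eye(n)])   # f(x_i) - y_i <= eps + xi_i
    A_lo = np.hstack([-K, K, -ones, ones, -np.eye(n)])   # y_i - f(x_i) <= eps + xi_i
    res = linprog(c, A_ub=np.vstack([A_up, A_lo]),
                  b_ub=np.concatenate([eps + y, eps - y]), method="highs")
    assert res.success, res.message
    alpha = res.x[:n] - res.x[n:2 * n]
    b = res.x[2 * n] - res.x[2 * n + 1]
    return alpha, b

# Toy usage: sparse re-fit of the leading kernel principal component.
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 2))
z = kernel_pca(X, degree=3, n_components=1).ravel()
alpha, b = lp_svr(poly_kernel(X, X, degree=3), z, C=10.0, eps=0.05)
print("nonzero expansion coefficients:", np.sum(np.abs(alpha) > 1e-6), "of", len(alpha))
```

Because the L1 objective drives most coefficients to exactly zero, the fitted expansion typically uses only a small subset of the training points, which is the sparsity gain the abstract attributes to LP-SVR over its quadratic programming counterpart.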


2010, Vol. 41 (10), pp. 29-37
Author(s): Zhixiong Li, Xinping Yan, Chengqing Yuan, Jiangbin Zhao, Zhongxiao Peng
