Target Recognition of SAR Images Based on Linear and Nonlinear Feature Extraction and Classification

2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Haiyan Zhao

A synthetic aperture radar (SAR) target recognition method combining linear and nonlinear feature extraction and classifiers is proposed. Principal component analysis (PCA) and kernel PCA (KPCA), both classical and reliable feature extraction algorithms, are used to extract feature vectors from the original SAR image. In addition, KPCA effectively compensates for PCA's limited ability to describe nonlinear structure. Afterwards, a support vector machine (SVM) and kernel sparse representation-based classification (KSRC) are used to classify the KPCA and PCA feature vectors, respectively. Analogous to the feature extraction stage, KSRC introduces kernel functions to improve the processing and classification of nonlinear data. Through this combination of linear and nonlinear features and classifiers, the internal data structure of SAR images and the correspondence between test and training samples can be better exploited. In the experiments, the performance of the proposed method is evaluated on the MSTAR dataset. The results demonstrate the effectiveness and robustness of the proposed method.
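The PCA/KPCA feature-extraction stage described above can be sketched as follows. This is an illustrative NumPy implementation of generic linear PCA and RBF-kernel KPCA, not the authors' exact pipeline; the kernel choice, `gamma` value, and random stand-in data are assumptions:

```python
import numpy as np

def pca_features(X, k):
    # Linear PCA: project centered data onto the top-k eigenvectors
    # of the sample covariance matrix.
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)
    top = vecs[:, np.argsort(vals)[::-1][:k]]
    return Xc @ top

def kpca_features(X, k, gamma=0.1):
    # Kernel PCA with an RBF kernel: eigendecompose the double-centered
    # Gram matrix instead of the covariance matrix.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                              # center the kernel matrix
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:k]
    # Scale eigenvectors by sqrt(eigenvalue) to obtain the projections.
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 1e-12))

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 16))                   # stand-in for flattened SAR chips
lin = pca_features(X, 5)                        # linear features -> e.g. KSRC
nonlin = kpca_features(X, 5)                    # nonlinear features -> e.g. SVM
```

Both functions return one feature vector per image; the two feature sets then feed the two classifiers as the abstract describes.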


2004 ◽  
Vol 37 (4) ◽  
pp. 801-810 ◽  
Author(s):  
Cheong Hee Park ◽  
Haesun Park


2013 ◽  
Vol 347-350 ◽  
pp. 2390-2394
Author(s):  
Xiao Fang Liu ◽  
Chun Yang

Nonlinear feature extraction with the standard kernel principal component analysis (KPCA) method requires large memory and has high computational complexity on large datasets. A greedy kernel principal component analysis (GKPCA) method is applied to reduce the training data and address the nonlinear feature extraction problem for large training sets in classification. First, a subset that approximates the full training data is selected using the greedy technique of the GKPCA method. Then, the feature extraction model is trained on this subset instead of the full training set. Finally, the FCM algorithm classifies the features extracted by the GKPCA, KPCA, and PCA methods, respectively. The simulation results indicate that both the GKPCA and KPCA methods outperform the PCA method in feature extraction. In addition to retaining the performance of the KPCA method, the GKPCA method reduces computational complexity thanks to the reduced training set used in classification.
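The subset-selection step can be illustrated with a simple greedy rule. The criterion below (farthest-point selection in the RBF feature space) is a hypothetical stand-in; the paper's exact greedy criterion may differ:

```python
import numpy as np

def rbf(a, b, gamma=0.5):
    return np.exp(-gamma * np.sum((a - b) ** 2))

def greedy_subset(X, m, gamma=0.5):
    # Illustrative greedy selection: repeatedly add the point farthest,
    # in feature space, from the points already chosen, so the subset
    # spans the training data. For an RBF kernel,
    # ||phi(x) - phi(y)||^2 = k(x,x) + k(y,y) - 2 k(x,y) = 2 - 2 k(x,y).
    chosen = [0]
    while len(chosen) < m:
        d = np.array([min(2 - 2 * rbf(x, X[j], gamma) for j in chosen)
                      for x in X])
        d[chosen] = -1.0                 # never re-pick a selected point
        chosen.append(int(d.argmax()))
    return np.array(chosen)

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))
idx = greedy_subset(X, 10)               # train KPCA on X[idx] instead of X
```

Training KPCA on the `m`-point subset shrinks the Gram matrix from n x n to m x m, which is the source of the memory and runtime savings the abstract reports.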



Author(s):  
Zhao Lu ◽  
Gangbing Song ◽  
Leang-san Shieh

As a general framework for representing data, the kernel method can be used whenever the interactions between elements of the domain occur only through inner products. As a major stride towards nonlinear feature extraction and dimension reduction, two important kernel-based feature extraction algorithms, kernel principal component analysis and kernel Fisher discriminant analysis, have been proposed. Both create a projection of multivariate data onto a space of lower dimensionality while attempting to preserve as much of the structural nature of the data as possible. However, both methods suffer from a complete loss of sparsity and from redundancy in the nonlinear feature representation. In an attempt to mitigate these drawbacks, this article focuses on applying the newly developed polynomial kernel higher-order neural networks to improve sparsity and thereby obtain a succinct representation for kernel-based nonlinear feature extraction algorithms. In particular, the learning algorithm is based on linear programming support vector regression, which outperforms the conventional quadratic programming support vector regression in model sparsity and computational efficiency.
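The density problem this article targets can be shown with kernel ridge regression, used here only as an illustrative stand-in (it is not the article's method; the data and `gamma`/`lam` values are assumptions): solving the regularized linear system gives a coefficient for every training sample, so the kernel expansion has as many terms as training points.

```python
import numpy as np

# Dense kernel expansion: (K + lam*I) alpha = y assigns a nonzero
# coefficient to essentially every training sample, so model size
# grows linearly with the training set -- the sparsity loss at issue.
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(30, 1))
y = np.sin(3.0 * X[:, 0])
gamma, lam = 2.0, 1e-3
K = np.exp(-gamma * (X - X.T) ** 2)      # RBF Gram matrix, shape (30, 30)
alpha = np.linalg.solve(K + lam * np.eye(30), y)
dense_terms = int(np.sum(np.abs(alpha) > 1e-8))   # typically all 30
```

A sparsity-promoting formulation such as LP support vector regression replaces the L2 penalty with an L1 penalty on `alpha`, which drives most coefficients exactly to zero and yields the succinct representation the article pursues.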





2010 ◽  
Vol 07 (04) ◽  
pp. 347-356
Author(s):  
E. SIVASANKAR ◽  
R. S. RAJESH

In this paper, principal component analysis is used for feature extraction, and a statistical-learning-based support vector machine is designed for functional classification of clinical data. Appendicitis data collected from BHEL Hospital, Trichy, are classified into three classes. Feature extraction transforms the data from the high-dimensional space to a space of fewer dimensions. Classification is performed by constructing an optimal hyperplane that separates the members of a class from the nonmembers. For linearly nonseparable data, kernel functions are used to map the data to a higher-dimensional space, where the optimal hyperplane is found. SVMs based on radial basis function and polynomial kernels are evaluated and their performances compared.
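The kernel trick described above, mapping linearly nonseparable data into a space where a separating hyperplane exists, can be sketched with the two kernels the paper compares. A kernel perceptron is used below as a lightweight stand-in for a full SVM solver (an assumption for brevity; it finds a separating hyperplane in the same implicit feature space, but not the maximum-margin one):

```python
import numpy as np

def rbf_kernel(x, z, gamma=1.0):
    # Radial basis function kernel: k(x, z) = exp(-gamma * ||x - z||^2)
    return np.exp(-gamma * np.sum((x - z) ** 2))

def poly_kernel(x, z, degree=3, c=1.0):
    # Polynomial kernel: k(x, z) = (x . z + c)^degree
    return (np.dot(x, z) + c) ** degree

def kernel_perceptron(X, y, kernel, epochs=20):
    # Learn dual coefficients alpha; the decision function is
    # sign(sum_j alpha_j * y_j * k(x_j, x)), a hyperplane in feature space.
    alpha = np.zeros(len(X))
    for _ in range(epochs):
        for i in range(len(X)):
            s = sum(alpha[j] * y[j] * kernel(X[j], X[i]) for j in range(len(X)))
            if (1.0 if s >= 0 else -1.0) != y[i]:
                alpha[i] += 1.0
    return alpha

# XOR: not linearly separable in the input space, separable via the RBF kernel.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([-1.0, 1.0, 1.0, -1.0])
k = lambda a, b: rbf_kernel(a, b, gamma=2.0)
alpha = kernel_perceptron(X, y, k)
preds = [1.0 if sum(alpha[j] * y[j] * k(X[j], x) for j in range(4)) >= 0 else -1.0
         for x in X]
```

Swapping `rbf_kernel` for `poly_kernel` changes only the implicit feature space, which is exactly the comparison the paper carries out on the clinical data.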



Author(s):  
Mohamed A. Elgammal ◽  
Omar A. Elkhouly ◽  
Heba Elhosary ◽  
Mohamed Elsayed ◽  
Ahmed Nader Mohieldin ◽  
...  


Author(s):  
FENG ZHANG

In this article we propose a polygonal-principal-curve-based nonlinear feature extraction method, which eliminates statistical redundancy without loss of information and provides more robust nonlinear pattern identification for high-dimensional data. Recognizing the limitations of linear statistical methods, this article integrates local principal component analysis (PCA) with a polygonal line algorithm to approximate complicated nonlinear data structures. Experimental results demonstrate that the proposed algorithm reduces the computational complexity of nonlinear feature extraction in multivariate cases.
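The local-PCA building block of such a principal-curve pipeline can be sketched as follows. This is an illustrative simplification (the paper's polygonal line algorithm is more involved): points are ordered along the global first principal component, split into contiguous groups, and a local principal axis is fitted in each group; joining the local centers yields a polygonal approximation of the curve through the data.

```python
import numpy as np

def local_pca_segments(X, n_segments=4):
    # Order points along the global first principal component, partition
    # them into contiguous groups, and fit a local PCA axis per group.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    order = np.argsort(Xc @ Vt[0])              # position along global PC1
    segments = []
    for idx in np.array_split(order, n_segments):
        P = X[idx]
        center = P.mean(axis=0)
        _, _, V = np.linalg.svd(P - center, full_matrices=False)
        segments.append((center, V[0]))         # local center, local axis
    return segments

# Toy data: a noisy arc, which a single linear PCA axis cannot follow.
t = np.linspace(0.0, np.pi, 200)
curve = np.column_stack([t, np.sin(t)])
rng = np.random.default_rng(2)
data = curve + 0.05 * rng.normal(size=curve.shape)
segs = local_pca_segments(data, n_segments=4)
```

Each segment is a line fitted where the curve is locally nearly straight, which is how local PCA sidesteps the limitation of a single global linear projection.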


