Classification of reduction invariants with improved backpropagation

2002, Vol 30 (4), pp. 239-247
Author(s): S. M. Shamsuddin, M. Darus, M. N. Sulaiman

Data reduction is a feature-extraction process that transforms the data space into a feature space of much lower dimension while retaining most of the intrinsic information content of the data. It can be carried out by a number of methods, such as principal component analysis (PCA), factor analysis, and feature clustering. Principal components are extracted from a collection of multivariate cases so as to account for as much of the variation in that collection as possible using as few variables as possible. The backpropagation network, on the other hand, has been used extensively in classification problems such as the XOR problem, share-price prediction, and pattern recognition. This paper proposes an improved error signal for the backpropagation network to classify the reduction invariants produced by principal component analysis, which extracts the bulk of the useful information present in the moment invariants of handwritten digits and leaves the redundant information behind. Higher-order centralised scale invariants are used to extract features of the handwritten digits before PCA, and the reduction invariants are then sent to the improved backpropagation model for classification.
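A minimal sketch of this kind of pipeline, assuming scikit-learn and its bundled handwritten-digits dataset: raw pixel features stand in for the paper's centralised scale invariants, and a standard backpropagation MLP stands in for the proposed improved error signal, which the abstract does not specify.

```python
# PCA data reduction followed by a backpropagation classifier (sketch).
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

X, y = load_digits(return_X_y=True)              # 64-dimensional pixel features
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Reduce the 64-dimensional space to a few principal components, then
# classify the reduced features with a backpropagation network.
model = make_pipeline(
    PCA(n_components=10),                        # data reduction step
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```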

2018
Author(s): Toni Bakhtiar

Kernel principal component analysis (kernel PCA) is a generalization of ordinary PCA that maps the original data into a high-dimensional feature space. The mapping is expected to address nonlinearity among variables and poor separation among classes in the original data space. The key problem in using kernel PCA is estimating the parameter of the kernel function, for which there is as yet no clear guidance, so the selection depends largely on the judgment of the researcher. This study employs the Gaussian kernel function and focuses on the ability of kernel PCA to visualize the separation of classified data. Assessment is based on the misclassification rate obtained by Fisher linear discriminant analysis on the first two principal components. The results suggest that when the kernel parameter is selected within the interval between the closest and the farthest pairwise distances among the objects of the original data, the visualization produced by kernel PCA is better than that of ordinary PCA.
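A minimal sketch of this selection rule, assuming scikit-learn with the Iris data as a stand-in: the Gaussian kernel width sigma is chosen inside the interval between the closest and the farthest pairwise distances, and separation is scored by the misclassification rate of Fisher LDA on the first two kernel principal components.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics.pairwise import euclidean_distances

X, y = load_iris(return_X_y=True)

d = euclidean_distances(X)
d_min = d[d > 0].min()                  # closest pairwise distance
d_max = d.max()                         # farthest pairwise distance
sigma = (d_min + d_max) / 2             # a point inside the suggested interval

# scikit-learn parameterises the RBF kernel as exp(-gamma * ||x - y||^2),
# so gamma = 1 / (2 * sigma^2) for a Gaussian kernel with width sigma.
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=1.0 / (2 * sigma**2))
Z = kpca.fit_transform(X)

# Score class separation in the 2-D projection with Fisher LDA.
lda = LinearDiscriminantAnalysis().fit(Z, y)
print("misclassification rate:", 1 - lda.score(Z, y))
```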


Author(s): Hyeuk Kim

Unsupervised learning in machine learning divides data into several groups such that observations in the same group have similar characteristics while observations in different groups differ. In this paper, we classify data by partitioning around medoids (PAM), which has some advantages over k-means clustering, and apply it to baseball players in the Korea Baseball League. We also apply principal component analysis to the data and plot it using the first two components as axes, which allows us to interpret the meaning of the clusters graphically. The combination of partitioning around medoids and principal component analysis can be applied to other data as well, making it easy to identify the characteristics of each group.
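A minimal sketch of the PAM-plus-PCA approach in Python: the Korea Baseball League data is not available here, so synthetic blobs stand in, and the simple medoid-swap loop below is an illustrative PAM variant rather than the paper's exact implementation.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.metrics import pairwise_distances

def pam(D, k, max_iter=100, seed=0):
    """Cluster around medoids given a precomputed distance matrix D."""
    rng = np.random.default_rng(seed)
    medoids = rng.choice(D.shape[0], size=k, replace=False)
    for _ in range(max_iter):
        labels = np.argmin(D[:, medoids], axis=1)    # assign to nearest medoid
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.flatnonzero(labels == j)
            # the new medoid minimises total distance within its cluster
            new_medoids[j] = members[
                np.argmin(D[np.ix_(members, members)].sum(axis=1))
            ]
        if np.array_equal(new_medoids, medoids):     # converged
            break
        medoids = new_medoids
    return labels, medoids

X, _ = make_blobs(n_samples=150, centers=3, n_features=8, random_state=0)
labels, medoids = pam(pairwise_distances(X), k=3)

# Project onto the first two principal components for a 2-D view of the clusters.
Z = PCA(n_components=2).fit_transform(X)
for j in range(3):
    print(f"cluster {j}: {np.sum(labels == j)} points, "
          f"medoid at PC coordinates {np.round(Z[medoids[j]], 2)}")
```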


2021
Author(s): Ananta Agarwalla, Diya Dileep, P. Jyothsana, Purnima Unnikrishnan, Karthik Thirumala
