Weighted Neighborhood Preserving Ensemble Embedding

Electronics ◽  
2019 ◽  
Vol 8 (2) ◽  
pp. 219 ◽  
Author(s):  
Sumet Mehta ◽  
Bi-Sheng Zhan ◽  
Xiang-Jun Shen

Neighborhood preserving embedding (NPE) is a classical and very promising unsupervised dimensionality reduction (DR) technique based on a linear graph, which preserves the local neighborhood relations of the data points. However, NPE uses the K-nearest-neighbor (KNN) criterion to construct the adjacency graph, which makes it sensitive to the neighborhood size. In this article, we propose a novel DR method called weighted neighborhood preserving ensemble embedding (WNPEE). Unlike NPE, WNPEE constructs an ensemble of adjacency graphs by varying the number of nearest neighbors. With this graph ensemble, WNPEE obtains the low-dimensional projections by pursuing the optimal embedded graph in a joint optimization manner. WNPEE can be applied in many machine learning fields, such as object recognition, data classification, signal processing, text categorization, and various deep learning tasks. Extensive experiments on four face databases, Olivetti Research Laboratory (ORL), Georgia Tech, Carnegie Mellon University Pose and Illumination Images (CMU PIE), and Yale, demonstrate that WNPEE achieves a recognition rate competitive with or better than NPE and other comparative DR methods. Additionally, WNPEE is much less sensitive to the neighborhood-size parameter than the traditional NPE method while preserving more of the local manifold structure of the high-dimensional data.
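A minimal sketch of the graph-ensemble idea described above, assuming NumPy and scikit-learn; the joint optimization that learns the graph weights in WNPEE is not reproduced here, so the combination weights are simply supplied by the caller.

```python
# Minimal sketch (not the authors' implementation): an ensemble of KNN
# adjacency graphs built with varying neighborhood sizes, as WNPEE describes.
import numpy as np
from sklearn.neighbors import kneighbors_graph

def knn_graph_ensemble(X, k_values=(3, 5, 7, 9)):
    """Return one binary adjacency matrix per neighborhood size k."""
    return [kneighbors_graph(X, n_neighbors=k, mode='connectivity').toarray()
            for k in k_values]

def weighted_combined_graph(graphs, weights):
    """Combine the ensemble into a single weighted graph. In WNPEE the
    weights are learned jointly with the projection; here the caller
    provides them."""
    weights = np.asarray(weights, dtype=float) / np.sum(weights)
    return sum(w * G for w, G in zip(weights, graphs))

X = np.random.rand(100, 20)                      # 100 points, 20 dimensions
graphs = knn_graph_ensemble(X)
W = weighted_combined_graph(graphs, [0.25, 0.25, 0.25, 0.25])
```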

2015 ◽  
Vol 13 (2) ◽  
pp. 50-58
Author(s):  
R. Khadim ◽  
R. El Ayachi ◽  
Mohamed Fakir

This paper focuses on the recognition of 3D objects using 2D attributes. To increase the recognition rate, we present a hybridization of three approaches for computing the attributes of a color image, combining Zernike moments, Gist descriptors, and a color descriptor (statistical moments). In the classification phase, three methods are adopted: Neural Network (NN), Support Vector Machine (SVM), and k-nearest neighbor (KNN). The COIL-100 database is used in the experiments.
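As an illustration of the descriptor hybridization described above, the sketch below concatenates three descriptor families into one feature vector. The zernike_fn and gist_fn arguments are hypothetical placeholders for actual Zernike-moment and Gist implementations; only the statistical color moments are written out.

```python
# Illustrative sketch only: fusing several image descriptors into one
# feature vector for classification with KNN (or NN/SVM).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def color_moments(image):
    """Mean and standard deviation per color channel: a simple
    statistical-moments color descriptor."""
    pixels = image.reshape(-1, image.shape[-1]).astype(float)
    return np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)])

def hybrid_descriptor(image, zernike_fn, gist_fn):
    """Concatenate the three descriptor families into one vector.
    zernike_fn and gist_fn are placeholders for real implementations."""
    return np.concatenate([zernike_fn(image), gist_fn(image),
                           color_moments(image)])

# Usage sketch:
# features = [hybrid_descriptor(img, zernike, gist) for img in images]
# knn = KNeighborsClassifier(n_neighbors=5).fit(features, labels)
```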


Author(s):  
Amal A. Moustafa ◽  
Ahmed Elnakib ◽  
Nihal F. F. Areed

This paper presents a methodology for Age-Invariant Face Recognition (AIFR) based on the optimization of deep learning features. The proposed method extracts deep learning features via transfer learning from unprocessed face images. To optimize the extracted features, a Genetic Algorithm (GA) procedure is designed to select the features most relevant to identifying a person from his/her facial images across different ages. For classification, K-Nearest Neighbor (KNN) classifiers with different distance metrics are investigated, i.e., Correlation, Euclidean, Cosine, and Manhattan. Experiments show that a Manhattan-distance KNN classifier achieves the best Rank-1 recognition rates of 86.2% and 96% on the standard FGNET and MORPH datasets, respectively. Compared with state-of-the-art methods, the proposed method needs no preprocessing stages. In addition, the experiments show its advantage over other related methods.
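The metric comparison the paper describes can be sketched with scikit-learn as below; the random arrays stand in for the GA-selected deep features and identity labels, and this is not the authors' code.

```python
# Sketch: comparing 1-NN classifiers under the four distance metrics the
# study investigates. Random data stands in for GA-selected deep features.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X = np.random.rand(200, 64)             # stand-in for selected deep features
y = np.random.randint(0, 10, size=200)  # stand-in identity labels

for metric in ('euclidean', 'manhattan', 'cosine', 'correlation'):
    # Brute-force search supports all four metrics, including correlation.
    knn = KNeighborsClassifier(n_neighbors=1, metric=metric, algorithm='brute')
    score = cross_val_score(knn, X, y, cv=5).mean()
    print(f'{metric:12s} rank-1 accuracy: {score:.3f}')
```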


2007 ◽  
Vol 2 (1) ◽  
pp. 14-22 ◽  
Author(s):  
Wa`el Musa Hadi ◽  
Fadi Thabtah ◽  
Salahideen Mousa ◽  
Samer Al Hawari ◽  
Ghassan Kanaan ◽  
...  

2021 ◽  
Vol 87 (6) ◽  
pp. 445-455
Author(s):  
Yi Ma ◽  
Zezhong Zheng ◽  
Yutang Ma ◽  
Mingcang Zhu ◽  
Ran Huang ◽  
...  

Many manifold learning algorithms conduct an eigenvector analysis on a data-similarity matrix of size N×N, where N is the number of data points, so the memory complexity of the analysis is no less than O(N²). We present in this article an incremental manifold learning approach to handle large hyperspectral data sets for land use identification. In our method, the number of dimensions for the high-dimensional hyperspectral-image data set is obtained from the training data set. A local curvature variation algorithm is used to sample a subset of data points as landmarks, and a manifold skeleton is then identified from the landmarks. Our method is validated on three AVIRIS hyperspectral data sets, outperforming the comparison algorithms with a k-nearest-neighbor classifier and achieving the second-best performance with a support vector machine.
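A hedged sketch of the landmark strategy, not the paper's algorithm: random sampling stands in for the local-curvature-variation criterion and Isomap stands in for the authors' manifold learner, but the memory argument is the same, since the eigen-analysis touches only the landmark subset.

```python
# Landmark sketch: embed only a subset of points, then place the remaining
# points relative to them, avoiding the full N x N similarity matrix.
import numpy as np
from sklearn.manifold import Isomap
from sklearn.neighbors import KNeighborsRegressor

def landmark_embed(X, n_landmarks=500, n_components=10, n_neighbors=10):
    rng = np.random.default_rng(0)
    idx = rng.choice(len(X), size=min(n_landmarks, len(X)), replace=False)
    landmarks = X[idx]
    # Manifold "skeleton": eigen-analysis on the landmark subset only, so
    # memory grows with n_landmarks**2 rather than N**2.
    skeleton = Isomap(n_neighbors=n_neighbors, n_components=n_components)
    Z_landmarks = skeleton.fit_transform(landmarks)
    # Out-of-sample extension: interpolate each remaining point's embedding
    # from its nearest landmarks.
    extender = KNeighborsRegressor(n_neighbors=n_neighbors)
    extender.fit(landmarks, Z_landmarks)
    return extender.predict(X)
```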


Author(s):  
Alia Karim Abdul Hassan ◽  
Bashar Saadoon Mahdi ◽  
Asmaa Abdullah Mohammed

In a writer recognition system, the system performs a "one-to-many" search in a large database of handwriting samples of known authors and returns a list of possible candidates. This paper proposes a method for writer identification from handwritten Arabic words, without segmentation into sub-letters, based on Speeded-Up Robust Features (SURF) extraction and K-nearest-neighbor (KNN) classification, to improve writer identification accuracy. After feature extraction, the features are clustered with the K-means algorithm to standardize their number. Feature extraction and feature clustering together form a Bag of Words (BoW), which converts an arbitrary number of image features into a uniform-length feature vector. The proposed method is evaluated on the IFN/ENIT database, achieving a recognition rate of 96.666%.
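The Bag-of-Words pipeline described above can be sketched as follows, assuming each image has already been reduced to a set of local descriptors (SURF rows in the paper; the detector itself is omitted here).

```python
# Minimal Bag-of-Words sketch: each image yields an arbitrary number of local
# descriptors, which K-means quantizes into a fixed-length histogram for KNN.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

def build_vocabulary(descriptor_sets, n_words=100):
    """Cluster all local descriptors into a visual vocabulary."""
    all_desc = np.vstack(descriptor_sets)
    return KMeans(n_clusters=n_words, n_init=10).fit(all_desc)

def bow_vector(descriptors, vocabulary):
    """Uniform-length histogram of visual-word assignments for one image."""
    words = vocabulary.predict(descriptors)
    return np.bincount(words, minlength=vocabulary.n_clusters)

# Usage sketch:
# vocab = build_vocabulary(train_descriptor_sets)
# X_train = np.array([bow_vector(d, vocab) for d in train_descriptor_sets])
# knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, writer_labels)
```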


2018 ◽  
Author(s):  
Tyler J. Burns ◽  
Garry P. Nolan ◽  
Nikolay Samusik

In high-dimensional single-cell data, changes in functional markers between conditions are typically compared across manual or algorithm-derived partitions based on population-defining markers. Visualizations of these partitions are commonly done on low-dimensional embeddings (e.g., t-SNE), colored by per-partition changes. Here, we provide an analysis and visualization tool that performs these comparisons across overlapping k-nearest-neighbor (KNN) groupings. This allows one to color low-dimensional embeddings by marker changes without the hard boundaries imposed by partitioning. We devised an objective optimization of k based on minimizing the functional-marker KNN imputation error. Proof-of-concept work visualized the exact location of an IL-7-responsive subset in a B-cell developmental trajectory on a t-SNE map, independent of clustering. Per-condition cell frequency analysis revealed that KNN is sensitive to detecting artifacts due to marker shift, and therefore can also be valuable in a quality control pipeline. Overall, we found that KNN groupings lead to useful multiple-condition visualizations and efficiently extract a large amount of information from mass cytometry data. Our software is publicly available through the Bioconductor package Sconify.
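A conceptual sketch, not the Sconify implementation, of choosing k by minimizing the functional-marker KNN imputation error: each cell's functional value is imputed as the mean over its k nearest neighbors in population-marker space, and the k with the lowest imputation error is kept.

```python
# Choose k by minimizing KNN imputation error of a functional marker.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_imputation_error(pop_markers, functional, k):
    """Mean squared error of imputing each cell's functional value from
    its k nearest neighbors in population-marker space."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(pop_markers)
    _, idx = nn.kneighbors(pop_markers)
    neighbors = idx[:, 1:]                       # drop each cell itself
    imputed = functional[neighbors].mean(axis=1)
    return np.mean((imputed - functional) ** 2)

def choose_k(pop_markers, functional, k_grid=(10, 30, 100, 300)):
    errors = {k: knn_imputation_error(pop_markers, functional, k)
              for k in k_grid}
    return min(errors, key=errors.get)
```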


2020 ◽  
Vol 8 (6) ◽  
Author(s):  
Pushpam Sinha ◽  
Ankita Sinha

Entropy-based k-Nearest Neighbor pattern classification (EbkNN) is a variation of the conventional k-nearest-neighbor rule that optimizes the value of k for each test datum based on entropy calculations. The entropy formula used in EbkNN is the one popularly defined in information theory for a set of n different classes attached to a total of m objects (data points), each described by f features. In EbkNN, the value of k chosen for discriminating a given test datum is the one for which the entropy is the least non-zero value; the other rules of conventional kNN are retained. We conclude that EbkNN works best for binary classification: it is computationally prohibitive to use EbkNN to discriminate test data into more than two classes. The biggest advantage of EbkNN over conventional kNN is that a single run of the EbkNN algorithm yields the optimum classification of the test data, whereas the conventional kNN algorithm has to be run separately for each value of k in a selected range, with the optimum k then chosen from among them. We also tested our EbkNN method on the WDBC (Wisconsin Diagnostic Breast Cancer) dataset, which contains 569 instances; we made a random choice of the first 290 instances as the training dataset and the remaining 279 instances as the test dataset. The EbkNN method gave a remarkable result: accuracy close to 100%, better than those obtained by most other researchers who have worked on the WDBC dataset.
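The selection rule as stated above can be sketched directly; this is an illustrative reading of the abstract, not the authors' code. For each test point, the k giving the least non-zero entropy of neighbor labels is kept, and the point is classified by majority vote among those neighbors.

```python
# EbkNN sketch: per-test-point choice of k by least non-zero label entropy.
import numpy as np
from collections import Counter
from sklearn.neighbors import NearestNeighbors

def label_entropy(labels):
    """Shannon entropy (bits) of the class distribution among labels."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def ebknn_predict(X_train, y_train, x_test, k_grid=range(1, 16)):
    nn = NearestNeighbors(n_neighbors=max(k_grid)).fit(X_train)
    _, idx = nn.kneighbors([x_test])
    best_k, best_h = None, np.inf
    for k in k_grid:
        h = label_entropy(y_train[idx[0, :k]])
        if 0.0 < h < best_h:                 # least non-zero entropy
            best_k, best_h = k, h
    # Fall back to the smallest k if every candidate gives zero entropy.
    k = best_k if best_k is not None else min(k_grid)
    votes = Counter(y_train[idx[0, :k]])
    return votes.most_common(1)[0][0]
```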

