Face recognition via collaborative representation based multiple one-dimensional embedding

Author(s):  
Y. Wang ◽  
Yuan Yan Tang ◽  
Luoqing Li ◽  
Jianzhong Wang

This paper presents a novel classifier based on collaborative representation (CR) and multiple one-dimensional (1D) embedding, with applications to face recognition. The use of the multiple 1D embedding (1DME) framework in semi-supervised learning was first proposed by one of the authors, J. Wang, in 2014. The main idea of multiple 1D embedding is the following: given a high-dimensional dataset, we first map it onto several different 1D sequences on the line while preserving the proximity of data points in the original ambient high-dimensional space. In this way, a high-dimensional classification problem reduces to one in a 1D framework, which can be solved efficiently by any classical 1D regularization method, for instance an interpolation scheme. The dissimilarity metric plays an important role in learning a good 1DME of the original dataset. Our other contribution is a collaborative representation based dissimilarity (CRD) metric. Compared with the conventional Euclidean distance based metric, the proposed metric leads to better results. Experimental results on real-world databases verify the efficacy of the proposed method.
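The CRD idea can be sketched with plain ridge-regularized collaborative coding: code a query over the whole training dictionary, then score each sample by how poorly its own coefficient alone reconstructs the query. This is an illustrative sketch under assumed details (an l2 regularizer and per-sample residual scoring), not the authors' exact formulation; `cr_dissimilarity` and `lam` are names invented here.

```python
import numpy as np

def cr_dissimilarity(y, X, lam=0.01):
    """Collaborative-representation-based dissimilarity (illustrative sketch).

    Codes the query y over the dictionary X (columns = training samples)
    with l2-regularized least squares, then scores each sample by the
    residual when y is reconstructed from that sample's term alone.
    """
    d, n = X.shape
    # ridge solution: a = (X^T X + lam I)^{-1} X^T y
    a = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)
    # dissimilarity of y to the i-th sample: residual using only x_i * a_i
    return np.array([np.linalg.norm(y - X[:, i] * a[i]) for i in range(n)])
```

A sample that participates strongly in the collaborative code of `y` gets a small residual, hence a small dissimilarity.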

Author(s):  
Elham Bayatmanesh

Several numerical techniques have been developed and compared for solving the one-dimensional and three-dimensional advection-diffusion equation with constant coefficients. The subject plays a very important role in fluid dynamics as well as in many other fields of science and engineering. In this article we present the n-dimensional case and omit numerical examples.
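As a minimal illustration of the kind of scheme being compared, here is a standard explicit finite-difference step (upwind advection, central diffusion) for the 1D constant-coefficient equation u_t + c u_x = D u_xx with periodic boundaries. This is a textbook sketch, not a method from the article.

```python
import numpy as np

def advect_diffuse_1d(u0, c, D, dx, dt, steps):
    """Explicit step for u_t + c u_x = D u_xx with c >= 0 (sketch only).

    Upwind differencing for advection, central for diffusion, periodic
    boundaries via np.roll. Stability needs c*dt/dx <= 1 and
    2*D*dt/dx**2 <= 1.
    """
    u = u0.copy()
    for _ in range(steps):
        adv = -c * (u - np.roll(u, 1)) / dx                       # upwind advection
        dif = D * (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2  # central diffusion
        u = u + dt * (adv + dif)
    return u
```

With periodic boundaries both difference operators sum to zero, so the scheme conserves the total mass of `u` exactly.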


Author(s):  
Ping Deng ◽  
Qingkai Ma ◽  
Weili Wu

Clustering can be considered the most important unsupervised learning problem. It has been discussed thoroughly by both the statistics and database communities due to its numerous applications in problems such as classification, machine learning, and data mining. A summary of clustering techniques can be found in (Berkhin, 2002). Most well-known clustering algorithms, such as DBSCAN (Ester, Kriegel, Sander, & Xu, 1996) and CURE (Guha, Rastogi, & Shim, 1998), cluster data points based on all dimensions. As the dimensionality grows, these algorithms lose their efficiency and accuracy because of the so-called “curse of dimensionality”. It is shown in (Beyer, Goldstein, Ramakrishnan, & Shaft, 1999) that computing distances based on all dimensions is not meaningful in high-dimensional space, since the distance from a point to its nearest neighbor approaches its distance to the farthest neighbor as dimensionality increases. In fact, natural clusters might exist in subspaces: data points in different clusters may be correlated with respect to different subsets of dimensions. To address this problem, feature selection (Kohavi & Sommerfield, 1995) and dimension reduction (Raymer, Punch, Goodman, Kuhn, & Jain, 2000) have been proposed to find the closely correlated dimensions for all of the data and the clusters in those dimensions. Although both methods reduce the dimensionality of the space before clustering, they do not handle well the case where clusters exist in different subspaces of the full dimensionality. Projected clustering has recently been proposed to deal effectively with high dimensionality. The objective of projected clustering algorithms is to find the clusters together with their relevant dimensions. Instead of projecting the entire dataset onto a single subspace, projected clustering finds a specific projection for each cluster such that similarity is preserved as much as possible.
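The core step of projected clustering, attaching a cluster-specific subset of dimensions to each cluster, can be illustrated with a toy selector that keeps, per cluster, the dimensions of lowest within-cluster variance. This is a generic sketch of the idea, not any specific published algorithm; `relevant_dims` and `k` are names chosen here.

```python
import numpy as np

def relevant_dims(X, labels, k):
    """For each cluster, pick the k dimensions with the lowest
    within-cluster variance -- the dimensions in which that cluster
    is most tightly projected (illustrative sketch only)."""
    dims = {}
    for c in np.unique(labels):
        var = X[labels == c].var(axis=0)   # per-dimension spread in cluster c
        dims[c] = np.argsort(var)[:k]      # k tightest dimensions
    return dims
```

Different clusters can thus receive entirely different relevant subspaces, which is exactly the case that global feature selection cannot handle.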


Author(s):  
Samuel Melton ◽  
Sharad Ramanathan

Abstract

Motivation: Recent technological advances produce a wealth of high-dimensional descriptions of biological processes, yet extracting meaningful insight and mechanistic understanding from these data remains challenging. For example, in developmental biology, the dynamics of differentiation can now be mapped quantitatively using single-cell RNA sequencing, yet it is difficult to infer the molecular regulators of developmental transitions. Here, we show that discovering informative features in the data is crucial for statistical analysis as well as for making experimental predictions.

Results: We identify features based on their ability to discriminate between clusters of the data points. We define a class of problems in which linear separability of clusters is hidden in a low-dimensional space. We propose an unsupervised method to identify the subset of features that define a low-dimensional subspace in which clustering can be conducted. This is achieved by averaging over discriminators trained on an ensemble of proposed cluster configurations. We then apply our method to single-cell RNA-seq data from mouse gastrulation and identify 27 key transcription factors (out of 409 total), 18 of which are known to define cell states through their expression levels. In this inferred subspace, we find clear signatures of known cell types that eluded classification prior to discovery of the correct low-dimensional subspace.

Availability and implementation: https://github.com/smelton/SMD.

Supplementary information: Supplementary data are available at Bioinformatics online.
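The ensemble-averaging idea can be caricatured in a few lines: propose many two-way cluster configurations, train a crude linear discriminator for each, and average the normalized weight magnitudes to score features. The proposal mechanism and discriminator used here (uniformly random splits, mean-difference direction) are simplifications assumed for illustration, not the authors' method.

```python
import numpy as np

def feature_scores(X, n_proposals=200, seed=0):
    """Score features by averaging linear-discriminator weights over an
    ensemble of proposed 2-way cluster configurations (simplified sketch).

    Features that genuinely separate hidden clusters acquire large
    discriminator weights across many proposals; noise features do not.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    scores = np.zeros(d)
    for _ in range(n_proposals):
        labels = rng.integers(0, 2, size=n)            # proposed configuration
        if labels.min() == labels.max():
            continue                                   # degenerate split, skip
        mu0 = X[labels == 0].mean(axis=0)
        mu1 = X[labels == 1].mean(axis=0)
        w = mu1 - mu0                                  # crude LDA-like direction
        scores += np.abs(w) / (np.linalg.norm(w) + 1e-12)
    return scores / n_proposals
```

Ranking features by this score and keeping the top few is the analogue of selecting the informative low-dimensional subspace before clustering.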


Author(s):  
Hao Deng ◽  
Chao Ma ◽  
Lijun Shen ◽  
Chuanwu Yang

In this paper, we present a novel semi-supervised classification method based on sparse representation (SR) and multiple one-dimensional embedding-based adaptive interpolation (M1DEI). The main idea of M1DEI is to embed the data into multiple one-dimensional (1D) manifolds such that connected samples are those at the shortest distances from one another. In this way, the high-dimensional classification problem is transformed into a 1D classification problem. By alternating interpolation and averaging over the multiple 1D manifolds, the labeled sample set is gradually enlarged. Clearly, a proper metric yields a more accurate embedding and thus better classification performance. We develop an SR-based metric, which measures the affinity between samples more accurately than the common Euclidean distance. Experimental results on several databases show the effectiveness of the improvement.
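One round of interpolation along a single 1D embedding can be sketched as nearest-labeled-neighbor label filling over the embedded ordering. This is an illustrative simplification of the adaptive interpolation step; the function name and its -1-for-unlabeled convention are choices made here.

```python
import numpy as np

def interpolate_labels_1d(order, labels):
    """Fill unlabeled samples (-1) with the label of the nearest labeled
    sample along one 1D embedding (one simplified round of the M1DEI idea).

    order  : permutation of sample indices sorted by 1D embedding coordinate
    labels : class labels, with -1 marking unlabeled samples
    """
    seq = labels[order].astype(float)       # labels in embedded order
    idx = np.where(seq >= 0)[0]             # positions of labeled samples
    out = seq.copy()
    for i in range(len(seq)):
        if seq[i] < 0 and len(idx):
            j = idx[np.argmin(np.abs(idx - i))]   # closest labeled position
            out[i] = seq[j]
    inv = np.empty_like(order)
    inv[order] = np.arange(len(order))      # undo the embedding ordering
    return out[inv]
```

In the full method such rounds would be alternated and averaged across several embeddings, with the SR-based metric shaping each ordering.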


2009 ◽  
Vol 2009 ◽  
pp. 1-8 ◽  
Author(s):  
Eimad E. Abusham ◽  
E. K. Wong

A novel method based on local nonlinear mapping is presented in this research. The method is called Locally Linear Discriminate Embedding (LLDE). LLDE preserves the local linear structure of a high-dimensional space and obtains a compact data representation as accurately as possible in the low-dimensional embedding space before recognition. For computational simplicity and fast processing, a Radial Basis Function (RBF) classifier is integrated with LLDE. The RBF classifier is applied to the low-dimensional embedding with reference to the variance of the data. To validate the proposed method, the CMU-PIE database was used, and the experiments conducted in this research revealed the efficiency of the proposed method in face recognition compared to linear and non-linear approaches.
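The final stage, an RBF classifier applied to the low-dimensional embedding, might look like the following toy kernel-score classifier. This is a generic RBF-kernel sketch with an assumed fixed bandwidth `gamma`, not the paper's trained RBF network.

```python
import numpy as np

def rbf_classify(Z_train, y_train, Z_test, gamma=1.0):
    """Classify embedded test points by summed RBF-kernel affinity to each
    class in the low-dimensional embedding Z (illustrative stand-in for
    the RBF stage after LLDE)."""
    # pairwise squared distances between test and train embeddings
    sq = ((Z_test[:, None, :] - Z_train[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-gamma * sq)                     # RBF kernel matrix
    classes = np.unique(y_train)
    # score each test point against each class by summed kernel mass
    scores = np.stack([K[:, y_train == c].sum(axis=1) for c in classes], axis=1)
    return classes[np.argmax(scores, axis=1)]
```

Because distances are computed in the compact embedding rather than the raw pixel space, the kernel evaluation stays cheap, which is the computational point of pairing the classifier with LLDE.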


1981 ◽  
Vol 29 (2) ◽  
pp. 371-391 ◽  
Author(s):  
Jean-Claude Picard ◽  
Maurice Queyranne

Author(s):  
Yiming Zhang ◽  
Nam Ho Kim ◽  
Chanyoung Park ◽  
Raphael T. Haftka

The focus of this paper is the prediction accuracy of multidimensional functions at an inaccessible point. The paper explores the possibility of extrapolating a high-dimensional function using multiple one-dimensional converging lines. The main idea is to select samples along lines that run towards the inaccessible point. Multi-dimensional extrapolation is thus transformed into a series of one-dimensional extrapolations that provide multiple estimates at the inaccessible point. We demonstrate the performance of converging lines by using Kriging to extrapolate a two-dimensional drag coefficient function. Post-processing of the extrapolation results from different lines, based on Bayesian theory, is proposed to combine the multiple predictions. The selection of lines is also discussed. The method of converging lines proves more robust and reliable than a two-dimensional Kriging surrogate for this example.
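The converging-lines procedure can be sketched by sampling each line at parameters t in [0, 1), fitting a 1D model per line, evaluating it at t = 1 (the inaccessible point), and averaging. Here a polynomial fit stands in for the Kriging model used in the paper, and a simple mean stands in for the Bayesian combination; all names and defaults are choices made for illustration.

```python
import numpy as np

def extrapolate_via_lines(f, x_star, anchors, ts=(0.0, 0.25, 0.5, 0.75), deg=2):
    """Estimate f(x_star) without evaluating f there.

    For each anchor a, sample f along the line a + t*(x_star - a) at the
    accessible parameters ts, fit a 1D polynomial in t, and extrapolate to
    t = 1 (which is x_star). The per-line predictions are then averaged.
    """
    preds = []
    for a in anchors:
        t = np.array(ts)
        samples = np.array([f(a + ti * (x_star - a)) for ti in t])
        coef = np.polyfit(t, samples, deg)        # 1D surrogate along the line
        preds.append(np.polyval(coef, 1.0))       # extrapolate to x_star
    return float(np.mean(preds))
```

Each line contributes an independent one-dimensional extrapolation, which is why disagreement among the per-line predictions is a natural uncertainty signal for the combination step.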


Author(s):  
Jesse S. Jin ◽  
Henry C. Wang ◽  
Tom Gedeon

Indexing and retrieving visual information is an important issue in multimedia development. It involves handling high-dimensional vectors. Current tree-based high-dimensional index structures, such as the R-tree, SS+-tree, and TV-tree, share a lower bound similar to that of one-dimensional comparison-based search methods, which is far from practical for multimedia applications. We propose a fast indexing method using surrogate coding. It possesses many good properties, such as preserving similarity ranking and fast retrieval; it also preserves a clustered space and is easy to maintain.
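As a stand-in for surrogate coding (whose exact scheme the abstract does not specify), a sign-of-random-projection code shows the flavor: each high-dimensional vector becomes a short binary code whose Hamming distances roughly preserve angular similarity ranking, so candidate retrieval can run on the compact codes instead of the full vectors.

```python
import numpy as np

def surrogate_codes(X, n_bits=64, seed=0):
    """Compact binary surrogate codes via random-hyperplane signs (an
    LSH-style illustration, not the paper's specific coding scheme).

    Each row of X maps to n_bits bits; vectors with a small angle between
    them tend to receive codes with a small Hamming distance.
    """
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((X.shape[1], n_bits))  # random hyperplanes
    return (X @ planes > 0).astype(np.uint8)

def hamming(a, b):
    """Hamming distance between two binary codes."""
    return int(np.count_nonzero(a != b))
```

Searching then becomes a cheap Hamming scan over the codes, with the full vectors consulted only to re-rank the surviving candidates.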


Author(s):  
Andronikos Paliathanasis

Abstract

We apply Lie theory to determine the infinitesimal generators of the one-parameter point transformations which leave the 3 + 1 Kudryashov–Sinelshchikov equation invariant. We solve the classification problem for the one-dimensional optimal system and derive all the possible independent Lie invariants; that is, we determine all the independent similarity transformations which lead to different reductions. As an application, the results are used to prove the existence of travelling-wave solutions. Furthermore, the method of singularity analysis is applied, and we show that the 3 + 1 Kudryashov–Sinelshchikov equation possesses the Painlevé property, so that its solution can be written as a Laurent expansion.
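As a generic illustration of how a point symmetry produces a similarity reduction (not the paper's specific generators for the Kudryashov–Sinelshchikov equation): a PDE invariant under time and space translations admits the combined generator

```latex
% Generic textbook example, not the generators derived in the paper.
% A PDE invariant under \partial_t and \partial_x admits the generator
\[
  X = \partial_t + c\,\partial_x ,
\]
% whose characteristic system
\[
  \frac{dt}{1} = \frac{dx}{c} = \frac{du}{0}
\]
% yields the similarity variable $\xi = x - ct$ and the travelling-wave
% ansatz $u(t,x) = U(\xi)$, reducing the PDE to an ODE for $U$.
```

The one-dimensional optimal system classifies such generators up to equivalence, so each representative yields a genuinely different reduction of this kind.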

