LOCAL COORDINATES ALIGNMENT (LCA): A NOVEL MANIFOLD LEARNING APPROACH

Author(s):  
TIANHAO ZHANG ◽  
XUELONG LI ◽  
DACHENG TAO ◽  
JIE YANG

Manifold learning has been demonstrated to be an effective way to represent the intrinsic geometric structure of samples. In this paper, a new manifold learning approach, named Local Coordinates Alignment (LCA), is developed based on the alignment technique. LCA first obtains local coordinates as representations of each local neighborhood by preserving the proximity relations on a patch, which is locally Euclidean. These extracted local coordinates are then aligned to yield the global embeddings. To solve the out-of-sample problem, a linearization of LCA (LLCA) is proposed. In addition, to handle the non-Euclidean nature of real-world data when building the locality, kernel techniques are used to represent the similarity of pairwise points on a local patch. Empirical studies on both synthetic data and face image sets show the effectiveness of the developed approaches.
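The local-coordinates-then-alignment pipeline described above can be sketched in a few lines of NumPy. This is a simplified, LTSA-style illustration of the general alignment technique, not the authors' exact LCA formulation: local PCA coordinates are extracted per patch, and the patch-wise alignment matrices are accumulated into a global eigenproblem. The function name `lca_embed` is hypothetical.

```python
import numpy as np

def lca_embed(X, n_neighbors=8, d=2):
    """Sketch of an alignment-style embedding in the spirit of LCA:
    local PCA coordinates per patch, then a global eigen-alignment."""
    n = X.shape[0]
    # k-nearest neighbours by Euclidean distance (self included in patch)
    dist = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    nbrs = np.argsort(dist, axis=1)[:, :n_neighbors]
    B = np.zeros((n, n))
    for i in range(n):
        idx = nbrs[i]
        Xi = X[idx] - X[idx].mean(axis=0)            # centre the patch
        # local coordinates: project onto the patch's top-d principal axes
        _, _, Vt = np.linalg.svd(Xi, full_matrices=False)
        Gi = np.hstack([np.ones((len(idx), 1)) / np.sqrt(len(idx)),
                        Xi @ Vt[:d].T])
        # alignment: accumulate the orthogonal complement of the patch span
        B[np.ix_(idx, idx)] += np.eye(len(idx)) - Gi @ np.linalg.pinv(Gi)
    # smallest nontrivial eigenvectors give the global embedding
    _, vecs = np.linalg.eigh(B)
    return vecs[:, 1:d + 1]
```

The discarded first eigenvector is the constant vector, which carries no coordinate information.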

Author(s):  
ZHENGMING MA ◽  
JING CHEN

In manifold learning, a neighborhood is often called a patch of the manifold, and the corresponding open set is called the local coordinate of the patch. Alignment refers to aligning the local coordinates in the d-dimensional Euclidean space to obtain the global coordinate of the manifold. There are two kinds of alignment methods: global and progressive. Global alignment methods align all the local coordinates of the manifold at once by solving an eigenvalue problem. Progressive alignment methods usually take the local coordinate of one patch as the basic local coordinate and then attach the other local coordinates to it patch by patch until the basic local coordinate evolves into the global coordinate of the manifold. In this paper, a new progressive alignment method is proposed in which, at each stage, only the local coordinates of the two patches with the largest intersection are aligned into a larger local coordinate. It is inspired by the famous Huffman coding, where at each phase the two random events with the smallest probabilities are merged into a random event with a larger probability; the proposed method is therefore a Huffman-like alignment method. Experiments on benchmark data show that the proposed method outperforms both the global alignment methods and the other progressive alignment methods and is more robust to changes in data size. Experiments on real-world data show the feasibility of the proposed method.
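The Huffman-like merge order can be sketched as a greedy loop over index sets. This sketch covers the merge *order* only; in the full method, the two selected patches' coordinates would be aligned (e.g., by a Procrustes fit) before merging. The function name `merge_patches` is hypothetical.

```python
def merge_patches(patches):
    """Huffman-like merge order: at each step, merge the two patches
    (index sets) that share the most sample points."""
    patches = [set(p) for p in patches]
    order = []
    while len(patches) > 1:
        # pick the pair with the largest intersection (greedy, Huffman-like)
        i, j = max(((a, b) for a in range(len(patches))
                    for b in range(a + 1, len(patches))),
                   key=lambda ab: len(patches[ab[0]] & patches[ab[1]]))
        order.append((sorted(patches[i]), sorted(patches[j])))
        merged = patches[i] | patches[j]
        patches = [p for k, p in enumerate(patches) if k not in (i, j)]
        patches.append(merged)
    return patches[0], order
```

As in Huffman coding, each merge replaces two items with one, so n patches yield exactly n - 1 merges.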


2016 ◽  
Vol 2016 ◽  
pp. 1-8
Author(s):  
Juan Meng ◽  
Guyu Hu ◽  
Dong Li ◽  
Yanyan Zhang ◽  
Zhisong Pan

Domain adaptation has received much attention as a major form of transfer learning. One issue that must be considered in domain adaptation is the gap between the source domain and the target domain. To improve the generalization ability of domain adaptation methods, we propose a framework that combines source and target data with a new regularizer that takes generalization bounds into account. This regularization term uses an integral probability metric (IPM) as the distance between the source and target domains and can thus bound the test error of an existing predictor. Since the computation of the IPM involves only the two distributions, this generalization term is independent of the specific classifier. With popular learning models, the empirical risk minimization is expressed as a general convex optimization problem and can therefore be solved effectively with existing tools. Empirical studies on synthetic data for regression and real-world data for classification show the effectiveness of this method.
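One concrete, classifier-independent instance of an IPM is the maximum mean discrepancy (MMD), which can serve as the distribution-distance term in a regularizer of this kind. The sketch below computes the squared empirical MMD between source and target samples with an RBF kernel; the function name and the choice of MMD as the IPM are illustrative assumptions, not necessarily the paper's exact choice.

```python
import numpy as np

def mmd_rbf(Xs, Xt, gamma=1.0):
    """Squared empirical MMD (an instance of an IPM) between source
    samples Xs and target samples Xt, using an RBF kernel."""
    def k(A, B):
        # pairwise squared distances, then the Gaussian kernel
        d2 = ((A[:, None] - B[None, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    # E[k(s,s')] + E[k(t,t')] - 2 E[k(s,t)], estimated by sample means
    return k(Xs, Xs).mean() + k(Xt, Xt).mean() - 2 * k(Xs, Xt).mean()
```

Because the quantity depends only on the two samples, it can be added as a penalty to any convex empirical-risk objective without changing the choice of predictor.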


2021 ◽  
Vol 15 (4) ◽  
pp. 1-46
Author(s):  
Kui Yu ◽  
Lin Liu ◽  
Jiuyong Li

In this article, we aim to develop a unified view of causal and non-causal feature selection methods, filling a gap in research on the relation between the two types of methods. Based on the Bayesian network framework and information theory, we first show that causal and non-causal feature selection methods share the same objective: to find the Markov blanket of a class attribute, the theoretically optimal feature set for classification. We then examine the assumptions made by causal and non-causal feature selection methods when searching for the optimal feature set, and unify these assumptions by mapping them to restrictions on the structure of the Bayesian network model of the studied problem. We further analyze in detail how the structural assumptions lead to the different levels of approximation employed by the methods in their search, which in turn result in approximations, with respect to the optimal feature set, in the feature sets the methods find. With the unified view, we can interpret the output of non-causal methods from a causal perspective and derive error bounds for both types of methods. Finally, we present a practical understanding of the relation between causal and non-causal methods through extensive experiments with synthetic data and various types of real-world data.
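The shared objective above, the Markov blanket, has a simple graphical definition: in a Bayesian network, the Markov blanket of a node is its parents, its children, and its children's other parents (spouses). A minimal sketch, assuming the network is given as a child-to-parents mapping (the function name is hypothetical):

```python
def markov_blanket(parents, target):
    """Markov blanket of `target` in a Bayesian network given as a
    {child: [parents]} mapping: parents + children + spouses."""
    pa = set(parents.get(target, []))
    children = {c for c, ps in parents.items() if target in ps}
    spouses = {p for c in children for p in parents[c]} - {target}
    return pa | children | spouses
```

Conditioned on this set, the target is independent of all remaining variables, which is why it is the theoretically optimal feature set for predicting the class attribute.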


2021 ◽  
Vol 210 ◽  
pp. 106371
Author(s):  
Elisa Moya-Sáez ◽  
Óscar Peña-Nogales ◽  
Rodrigo de Luis-García ◽  
Carlos Alberola-López

2016 ◽  
Vol 2016 ◽  
pp. 1-5
Author(s):  
Chuanlei Zhang ◽  
Shanwen Zhang ◽  
Weidong Fang

Manifold-learning-based dimensionality reduction algorithms have received much attention in plant leaf recognition because they can select a subset of effective and efficient discriminative features from the leaf images. In this paper, a dimensionality reduction method based on local discriminative tangent space alignment (LDTSA) is introduced for plant leaf recognition from leaf images. The proposed method combines part optimization with whole alignment and encapsulates both geometric and discriminative information in each local patch. Experiments on two plant leaf databases, the ICL and Swedish leaf datasets, demonstrate the effectiveness and feasibility of the proposed method.
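The discriminative part of the patch construction can be sketched as follows: each sample's patch mixes its nearest same-class neighbours (geometric information) with its nearest other-class neighbours (discriminative information). This is a simplified illustration of the idea, with hypothetical names and neighbour counts, not the exact LDTSA construction.

```python
import numpy as np

def discriminative_patch(X, labels, i, kw=3, kb=3):
    """For sample i, build a patch from its kw nearest same-class
    neighbours and kb nearest other-class neighbours."""
    d = np.linalg.norm(X - X[i], axis=1)
    order = np.argsort(d)                      # indices by increasing distance
    same = [j for j in order if j != i and labels[j] == labels[i]][:kw]
    diff = [j for j in order if labels[j] != labels[i]][:kb]
    return [i] + same + diff
```

Part optimization would then extract local coordinates on each such patch, and whole alignment would stitch the patches into a single low-dimensional embedding, as in the alignment framework described earlier.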


Author(s):  
Chi Seng Pun ◽  
Lei Wang ◽  
Hoi Ying Wong

Modern-day trading practice resembles a thought experiment in which investors imagine various possibilities for the future stock market and invest accordingly. The generative adversarial network (GAN) is highly relevant to this trading practice in two ways. First, a GAN generates synthetic data with a neural network that is technically indistinguishable from real data, which guarantees the reasonableness of the experiment. Second, a GAN generates multitudes of synthetic scenarios, which implements half of the experiment. In this paper, we present a new GAN architecture and adapt it to the portfolio risk minimization problem by adding a regression network to the GAN (implementing the second half of the experiment). The new architecture is termed GANr. Battling against two distinct networks, a discriminator and a regressor, GANr's generator aims to simulate a stock market that is close to reality while allowing for all possible scenarios. The resulting portfolio resembles a robust portfolio with data-driven ambiguity. Our empirical studies show that the GANr portfolio is more resilient to bleak financial scenarios than the CLSGAN and LASSO portfolios.
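The risk-minimization half of the experiment can be illustrated without the networks: given a matrix of generated return scenarios, one can compute fully invested minimum-variance weights in closed form. This is a hand-rolled stand-in for what GANr's regression network would learn; the function name and the minimum-variance objective are illustrative assumptions.

```python
import numpy as np

def min_variance_weights(scenarios):
    """Fully invested minimum-variance portfolio weights from a
    (num_scenarios x num_assets) matrix of return scenarios."""
    cov = np.cov(scenarios, rowvar=False)      # asset covariance estimate
    inv = np.linalg.pinv(cov)                  # pinv for near-singular covariances
    ones = np.ones(cov.shape[0])
    # closed-form solution of min w'Cw subject to sum(w) = 1
    return inv @ ones / (ones @ inv @ ones)
```

Feeding the optimizer scenarios from a generator trained to cover all plausible markets, rather than one historical sample, is what gives the resulting portfolio its robust, ambiguity-aware character.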

