polynomial approximation
Recently Published Documents


TOTAL DOCUMENTS

1279
(FIVE YEARS 132)

H-INDEX

37
(FIVE YEARS 3)

2021 ◽  
Vol 31 (2) ◽  
pp. 293-313
Author(s):  
Ali Gholami Rudi

For a map that can be rotated, we consider the following problem. There are a number of feature points on the map, each with a geometric object as its label. The goal is to find the largest subset of these labels such that, when the map is rotated and the labels remain vertical, no two labels in the subset intersect. We show that this problem remains NP-hard even if the labels are vertical bars of zero width, and present a polynomial approximation scheme for solving it. We also introduce a new variant of the problem for vertical labels of zero width, in which any label that does not appear in the output must be coalesced with a label that does. Coalescing a subset of the labels means choosing a representative among them and setting its label height to the sum of the individual label heights.
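The coalescing operation defined in the abstract can be sketched directly. This is a minimal illustration, assuming each zero-width label is represented as an `(anchor_x, height)` pair; the function name and data representation are hypothetical, not taken from the paper.

```python
def coalesce(labels, representative_index):
    """Coalesce a subset of zero-width vertical labels: choose one
    representative and set its height to the sum of all heights.

    labels: list of (anchor_x, height) pairs (illustrative encoding)
    representative_index: index of the chosen representative label
    """
    total_height = sum(height for _, height in labels)
    anchor_x, _ = labels[representative_index]
    # The merged label keeps the representative's anchor point but
    # carries the combined height of the whole subset.
    return (anchor_x, total_height)
```

For example, coalescing two labels of heights 2 and 4 at the first label's anchor yields a single label of height 6 at that anchor.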


2021 ◽  
pp. 1-19
Author(s):  
Guo Niu ◽  
Zhengming Ma ◽  
Haoqing Chen ◽  
Xue Su

Manifold learning plays an important role in nonlinear dimensionality reduction, but many manifold learning algorithms cannot offer an explicit mapping for handling out-of-sample (new) data. Recently, many improved algorithms have introduced a fixed function into the objective function of manifold learning in order to learn such a mapping. In manifold learning, however, the relationship between the high-dimensional data and its low-dimensional representation is a local homeomorphism, so these improved algorithms actually change or damage the intrinsic structure of manifold learning and are, strictly speaking, no longer manifold learning. In this paper, a novel polynomial-approximation-based manifold learning method (PAML) is proposed, which learns a polynomial approximation of manifold learning from the dimensionality reduction results of manifold learning and the original high-dimensional data. In particular, we establish a polynomial representation of the high-dimensional data with the Kronecker product and learn an optimal transformation matrix for this representation. This matrix gives an explicit and optimal nonlinear mapping between the high-dimensional data and its low-dimensional representation, and can be used directly to handle new data. Compared with replacing the manifold relationship by a fixed linear or nonlinear relationship, our method learns the optimal polynomial approximation of manifold learning without changing its objective function (i.e., it keeps the intrinsic structure of manifold learning). We conduct experiments on eight data sets against advanced algorithms published in recent years to demonstrate the benefits of our algorithm.
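The core idea of the abstract (a polynomial representation built with the Kronecker product, plus a transformation matrix learned from a precomputed manifold embedding) can be sketched as a least-squares fit. This is a simplified illustration under stated assumptions: a degree-2 polynomial representation, a plain least-squares solve rather than the paper's actual optimization, and function names invented for the example.

```python
import numpy as np

def polynomial_features(X):
    """Degree-2 polynomial representation of each sample:
    [1, x, x (x) x], where (x) is the Kronecker (outer) product of the
    sample with itself. A simplified stand-in for the paper's construction.
    """
    n, d = X.shape
    ones = np.ones((n, 1))
    # Per-sample Kronecker product x (x) x, flattened to length d*d.
    kron = np.einsum('ni,nj->nij', X, X).reshape(n, d * d)
    return np.hstack([ones, X, kron])

def learn_mapping(X_high, Y_low):
    """Learn a transformation matrix W mapping polynomial features of the
    high-dimensional data X_high to a given low-dimensional embedding Y_low
    (e.g. the output of a manifold learning algorithm), by least squares.
    """
    P = polynomial_features(X_high)
    W, *_ = np.linalg.lstsq(P, Y_low, rcond=None)
    return W

def map_new_data(X_new, W):
    """Apply the learned explicit mapping to out-of-sample points."""
    return polynomial_features(X_new) @ W
```

Once `W` is learned, `map_new_data` gives the explicit out-of-sample mapping the abstract refers to, with no need to rerun the manifold learning algorithm on new points.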

