Using Self-Similarity to Incorporate Dimensionality Reduction and Cluster Evolution Tracking

2013 ◽  
Vol 336-338 ◽  
pp. 2242-2247
Author(s):  
Guang Hui Yan ◽  
Yong Chen ◽  
Hong Yun Zhao ◽  
Ya Jin Ren ◽  
Zhi Cheng Ma

Cluster evolution tracking and dimensionality reduction have been studied intensively, but separately, in time-decayed, high-dimensional stream data environments over the past decades. However, interaction between cluster evolution and dimensionality reduction is the most common scenario in time-decayed stream data; dimensionality reduction should therefore interact with cluster operations throughout the endless life cycle of the stream. In this paper, we first investigate the interaction between dimensionality reduction and cluster evolution in high-dimensional, time-decayed stream data. We then integrate an on-line sequential forward fractal dimensionality-reduction technique with a self-adaptive, multi-fractal-based technique for tracking cluster evolution. Performance experiments over a number of real and synthetic data sets illustrate the effectiveness and efficiency of our approach.
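The abstract gives no implementation details, but the fractal machinery it builds on can be illustrated. The sketch below estimates the correlation dimension of a point set, a standard fractal measure of intrinsic dimensionality; all function and parameter names are our own assumptions, not the authors' algorithm.

```python
# A minimal sketch (not the paper's method): estimate the correlation
# dimension D2, the kind of fractal measure that fractal
# dimensionality-reduction techniques are built on.
import numpy as np
from scipy.spatial.distance import pdist

def correlation_dimension(X, radii):
    """Estimate D2 as the slope of log C(r) versus log r."""
    d = pdist(X)                        # all pairwise distances
    # C(r): fraction of point pairs closer than r
    C = np.array([np.mean(d < r) for r in radii])
    mask = C > 0                        # avoid log(0) at tiny radii
    slope, _ = np.polyfit(np.log(radii[mask]), np.log(C[mask]), 1)
    return slope

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))          # 3-D cloud: D2 should be near 3
radii = np.logspace(-1, 0.5, 20)
print(correlation_dimension(X, radii))
```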

2018 ◽  
Vol 30 (12) ◽  
pp. 3281-3308
Author(s):  
Hong Zhu ◽  
Li-Zhi Liao ◽  
Michael K. Ng

We study a multi-instance (MI) learning dimensionality-reduction algorithm through sparsity and orthogonality, which is especially useful for high-dimensional MI data sets. We develop a novel algorithm to handle both sparsity and orthogonality constraints that existing methods do not handle well simultaneously. Our main idea is to formulate an optimization problem where the sparse term appears in the objective function and the orthogonality term is formed as a constraint. The resulting optimization problem can be solved by using approximate augmented Lagrangian iterations as the outer loop and inertial proximal alternating linearized minimization (iPALM) iterations as the inner loop. The main advantage of this method is that both sparsity and orthogonality can be satisfied in the proposed algorithm. We show the global convergence of the proposed iterative algorithm. We also demonstrate that the proposed algorithm can achieve high sparsity and orthogonality requirements, which are very important for dimensionality reduction. Experimental results on both synthetic and real data sets show that the proposed algorithm can obtain learning performance comparable to that of other tested MI learning algorithms.
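A hedged sketch of the structure described, in our own notation (the paper's exact objective f and penalty weights may differ):

```latex
% Sparsity in the objective, orthogonality as a constraint:
\min_{W \in \mathbb{R}^{d \times k}} \; f(W) + \lambda \|W\|_{1}
\quad \text{s.t.} \quad W^{\top} W = I_{k},

% handled by approximate augmented Lagrangian outer iterations
\mathcal{L}_{\rho}(W,\Lambda) = f(W) + \lambda \|W\|_{1}
  + \langle \Lambda,\, W^{\top}W - I_{k} \rangle
  + \tfrac{\rho}{2}\,\| W^{\top}W - I_{k} \|_{F}^{2},

% with iPALM-style proximal steps minimizing \mathcal{L}_{\rho}
% over W in the inner loop.
```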


Author(s):  
Andrew J. Connolly ◽  
Jacob T. VanderPlas ◽  
Alexander Gray ◽  
...  

With the dramatic increase in data available from a new generation of astronomical telescopes and instruments, many analyses must address the complexity as well as the size of the data set. This chapter deals with how we can learn which measurements, properties, or combinations thereof carry the most information within a data set. It describes techniques related to concepts discussed earlier in the contexts of Gaussian distributions, density estimation, and information content. The chapter begins with an exploration of the problems posed by high-dimensional data. It then describes the data sets used in the chapter and introduces perhaps the most important and widely used dimensionality reduction technique, principal component analysis (PCA). The remainder of the chapter discusses several alternative techniques that address some of the weaknesses of PCA.
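For a concrete starting point, a minimal PCA example (using scikit-learn on synthetic data, not the chapter's own code or data sets) looks like this:

```python
# A minimal PCA sketch: project high-dimensional data onto the
# directions of maximal variance and inspect how much variance each keeps.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))           # 500 objects, 50 measurements

pca = PCA(n_components=5)                # keep the 5 strongest components
X_reduced = pca.fit_transform(X)         # shape (500, 5)

# Fraction of total variance carried by each retained component
print(pca.explained_variance_ratio_)
```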


2019 ◽  
Vol 277 ◽  
pp. 01012 ◽  
Author(s):  
Clare E. Matthews ◽  
Paria Yousefi ◽  
Ludmila I. Kuncheva

Many existing methods for video summarisation are not suitable for on-line applications, where computational and memory constraints mean that feature extraction and frame selection must be simple and efficient. Our proposed method uses RGB moments to represent frames, and a control-chart procedure to identify shots from which keyframes are then selected. The new method produces summaries of higher quality than two state-of-the-art on-line video summarisation methods identified as the best among nine such methods in our previous study. The summary quality is measured against an objective ideal for synthetic data sets, and compared to user-generated summaries of real videos.
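A rough sketch of the two ingredients named above, in our own simplified form (not the authors' exact procedure): frames represented by RGB moments, and a control-chart rule flagging frames whose feature change leaves the control limits.

```python
# Simplified illustration: RGB-moment features plus a control-chart test.
import numpy as np

def rgb_moments(frame):
    """First two moments (mean, std) per colour channel -> 6-D feature."""
    return np.concatenate([frame.mean(axis=(0, 1)), frame.std(axis=(0, 1))])

def control_chart_boundaries(features, k=3.0):
    """Flag frame i as a potential shot boundary when its feature distance
    to the previous frame exceeds mean + k * std of past distances."""
    boundaries, dists = [], []
    for i in range(1, len(features)):
        d = np.linalg.norm(features[i] - features[i - 1])
        if len(dists) > 10 and d > np.mean(dists) + k * np.std(dists):
            boundaries.append(i)
        dists.append(d)
    return boundaries

rng = np.random.default_rng(0)
frames = [rng.random((120, 160, 3)) for _ in range(100)]  # stand-in video
feats = [rgb_moments(f) for f in frames]
print(control_chart_boundaries(feats))
```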


2014 ◽  
Vol 2014 ◽  
pp. 1-5 ◽  
Author(s):  
Fuding Xie ◽  
Yutao Fan ◽  
Ming Zhou

Dimensionality reduction is the transformation of high-dimensional data into a meaningful representation of reduced dimensionality. This paper introduces a dimensionality reduction technique based on weighted connections between neighborhoods to improve the K-Isomap method, aiming to preserve the relationships between neighborhoods throughout the reduction process. The validity of the proposal is tested on three typical examples widely employed to evaluate manifold-based algorithms. The experimental results show that the proposed method preserves the local topology of the data set well while transforming it from the high-dimensional space into a low-dimensional one.
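The weighted variant described above is not in standard libraries; the sketch below runs the baseline K-Isomap it improves on, applied to one of the classic manifold test sets (the S-curve).

```python
# Baseline K-Isomap on a classic manifold benchmark (scikit-learn).
from sklearn.datasets import make_s_curve
from sklearn.manifold import Isomap

X, color = make_s_curve(n_samples=1500, random_state=0)

iso = Isomap(n_neighbors=10, n_components=2)  # K = 10 nearest neighbours
X_2d = iso.fit_transform(X)                   # unroll the 3-D S-curve to 2-D
print(X_2d.shape)                             # (1500, 2)
```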


2018 ◽  
Author(s):  
Etienne Becht ◽  
Charles-Antoine Dutertre ◽  
Immanuel W. H. Kwok ◽  
Lai Guan Ng ◽  
Florent Ginhoux ◽  
...  

Uniform Manifold Approximation and Projection (UMAP) is a recently published non-linear dimensionality reduction technique. Another such algorithm, t-SNE, has been the default method for this task in recent years. Herein we comment on the usefulness of UMAP for high-dimensional cytometry and single-cell RNA sequencing, notably highlighting its faster runtime and greater consistency, more meaningful organization of cell clusters, and better preservation of continuums compared with t-SNE.
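A minimal usage sketch comparing the two methods (umap-learn and scikit-learn APIs; the data and parameter values are illustrative, not from the paper):

```python
# Side-by-side embeddings with UMAP and t-SNE on stand-in data.
import numpy as np
import umap                               # pip install umap-learn
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 40))           # stand-in for cytometry features

X_umap = umap.UMAP(n_neighbors=15, min_dist=0.1,
                   random_state=42).fit_transform(X)
X_tsne = TSNE(n_components=2, random_state=42).fit_transform(X)
print(X_umap.shape, X_tsne.shape)         # both (1000, 2)
```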


2015 ◽  
Vol 15 (2) ◽  
pp. 154-172 ◽  
Author(s):  
Danilo B Coimbra ◽  
Rafael M Martins ◽  
Tácito TAT Neves ◽  
Alexandru C Telea ◽  
Fernando V Paulovich

Understanding three-dimensional projections created by dimensionality reduction from high-variate datasets is very challenging. In particular, classical three-dimensional scatterplots used to display such projections do not explicitly show the relations between the projected points, the viewpoint used to visualize the projection, and the original data variables. To explore and explain such relations, we propose a set of interactive visualization techniques. First, we adapt and enhance biplots to show the data variables in the projected three-dimensional space. Next, we use a set of interactive bar chart legends to show variables that are visible from a given viewpoint and also assist users to select an optimal viewpoint to examine a desired set of variables. Finally, we propose an interactive viewpoint legend that provides an overview of the information visible in a given three-dimensional projection from all possible viewpoints. Our techniques are simple to implement and can be applied to any dimensionality reduction technique. We demonstrate our techniques on the exploration of several real-world high-dimensional datasets.
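A minimal two-dimensional biplot sketch of the underlying idea (our simplification; the paper extends this to interactive three-dimensional projections with viewpoint legends): projected points plus arrows showing how each original variable maps into the projection.

```python
# 2-D biplot: PCA scatter plus loading arrows for the original variables.
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

data = load_iris()
pca = PCA(n_components=2)
X_2d = pca.fit_transform(data.data)

fig, ax = plt.subplots()
ax.scatter(X_2d[:, 0], X_2d[:, 1], s=10, alpha=0.5)
for name, (vx, vy) in zip(data.feature_names, pca.components_.T):
    ax.arrow(0, 0, 3 * vx, 3 * vy, head_width=0.08, color="red")
    ax.annotate(name, (3.2 * vx, 3.2 * vy), color="red")
plt.show()
```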


2017 ◽  
Vol 10 (13) ◽  
pp. 355 ◽  
Author(s):  
Reshma Remesh ◽  
Pattabiraman. V

Dimensionality reduction techniques are used to reduce the complexity of analysing high-dimensional data sets. The raw input data may have many dimensions, and analysis can become slow and yield wrong predictions if unnecessary attributes are considered. Using dimensionality reduction techniques, one can reduce the dimensions of the input data towards accurate prediction at lower cost. This paper studies different machine learning approaches used for dimensionality reduction, such as PCA, SVD, LDA, kernel principal component analysis, and artificial neural networks.
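A small sketch of the relationship between two of the surveyed methods: the principal components of centred data are the right singular vectors of its SVD, which is worth knowing when choosing between the two (illustrative code, not from the paper).

```python
# PCA components coincide (up to sign) with the right singular vectors
# of the centred data matrix.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
Xc = X - X.mean(axis=0)                    # centre the data

_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pca = PCA(n_components=3).fit(X)           # PCA centres internally

print(np.allclose(np.abs(Vt[:3]), np.abs(pca.components_)))  # True
```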


2011 ◽  
Vol 58-60 ◽  
pp. 547-550
Author(s):  
Di Wu ◽  
Zhao Zheng

In the real world, high-dimensional data are everywhere, but the natural structure behind them is often characterized by only a few parameters. With the rapid development of computer vision, more and more dimensionality reduction problems arise, which has driven the rapid development of dimensionality reduction algorithms. Linear methods include LPP [1] and NPE [2]; nonlinear methods include LLE [3] and improved variants such as kernel NPE. One particularly simple but effective assumption in face recognition is that samples from the same class lie on a linear subspace, which is why many nonlinear methods perform well only on certain artificial data sets. This paper focuses on NPE and the recently proposed SPP [4], combining the two methods; experiments show that the new method outperforms some classic unsupervised methods.
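NPE and SPP are not available in standard libraries; as a stand-in illustration, the sketch below runs LLE (the nonlinear method cited as [3]), whose neighborhood reconstruction-weight idea NPE linearizes.

```python
# LLE on a classic manifold benchmark, as a stand-in for the NPE family.
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

X, _ = make_swiss_roll(n_samples=1500, random_state=0)

lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2)
X_2d = lle.fit_transform(X)            # embed the 3-D roll in 2-D
print(X_2d.shape)                      # (1500, 2)
```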

