Visualizing big data via a mixture of PARAMAP and Isomap

Author(s):  
Ulas Akkucuk

Dimension reduction strives to represent higher dimensional data by a lower-dimensional structure. A well-known approach by Carroll, called Parametric Mapping or PARAMAP (Shepard & Carroll, 1966), works by iterative minimization of a loss function measuring the smoothness or continuity of the mapping from the lower dimensional representation to the original data. The algorithm was revitalized with essential modifications (Akkucuk & Carroll, 2006). Even with these modifications, it still required a large number of randomly generated starts. In this paper we discuss the use of a variant of the Isomap method (Tenenbaum et al., 2000) to obtain a starting framework to replace the random starts. A core set of landmark points is selected by a special procedure akin to the selection of seeds for the k-means algorithm. This core set of landmark points is used to create a rational start, allowing the PARAMAP algorithm to be run only once while still effectively reaching a global minimum. Since Isomap is faster and less prone to local optimum problems than PARAMAP, and the iterative process of adding new points to the configuration is less time consuming (since only one starting configuration is used), we believe the resulting method is better suited to large data sets and more likely to obtain an acceptable solution in realistic time.
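A minimal sketch of the start-up procedure described above, under the assumption that a k-means++-style seeding rule is an acceptable stand-in for the landmark-selection step: the landmark core set is embedded with Isomap, and that embedding serves as the single rational start for a PARAMAP-style refinement. The `paramap_refine` call in the usage comment is a hypothetical placeholder, since the PARAMAP loss itself is not implemented here.

```python
# Sketch only: landmark seeding + Isomap starting configuration, not the authors' code.
import numpy as np
from sklearn.manifold import Isomap

def kmeanspp_landmarks(X, n_landmarks, seed=None):
    """Pick landmark indices; each new landmark is drawn with probability
    proportional to its squared distance from the landmarks chosen so far."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    idx = [int(rng.integers(n))]
    d2 = np.sum((X - X[idx[0]]) ** 2, axis=1)
    for _ in range(1, n_landmarks):
        new = int(rng.choice(n, p=d2 / d2.sum()))
        idx.append(new)
        d2 = np.minimum(d2, np.sum((X - X[new]) ** 2, axis=1))
    return np.array(idx)

def isomap_start(X, n_landmarks=200, n_components=2, n_neighbors=10, seed=0):
    """Embed the landmark core set with Isomap to obtain one rational starting
    configuration; remaining points would then be added iteratively."""
    landmarks = kmeanspp_landmarks(X, n_landmarks, seed)
    start = Isomap(n_neighbors=n_neighbors,
                   n_components=n_components).fit_transform(X[landmarks])
    return landmarks, start

# Usage (paramap_refine is hypothetical):
# landmarks, Y0 = isomap_start(X)
# Y = paramap_refine(X, landmarks, Y0)
```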

2017, pp. 99
Author(s):
Pamela S. Soltis
Douglas E. Soltis

Technological advances in molecular biology have greatly increased the speed and efficiency of DNA sequencing, making it possible to construct large molecular data sets for phylogeny reconstruction relatively quickly. Despite their potential for improving our understanding of phylogeny, these large data sets also pose many challenges. In this paper, we discuss several of these challenges, including 1) the failure of a search to find the most parsimonious trees (the local optimum) in a reasonable amount of time, 2) the difference between a local optimum and the global optimum, and 3) the existence of multiple classes (islands) of most parsimonious trees. We also discuss possible strategies to improve the likelihood of finding the most parsimonious tree(s) and present two examples from our work on angiosperm phylogeny. We conclude with a discussion of two alternatives to analyses of entire large data sets, the exemplar approach and compartmentalization, and suggest that additional consideration must be given to issues of data analysis for large data sets, whether morphological or molecular.


2020, Vol. 492 (1), pp. 1421-1431
Author(s):
Zhicheng Yang
Ce Yu
Jian Xiao
Bo Zhang

Radio frequency interference (RFI) detection and excision are key steps in the data-processing pipeline of the Five-hundred-meter Aperture Spherical radio Telescope (FAST). Because of its high sensitivity and large data rate, FAST requires more accurate and efficient RFI flagging methods than its counterparts. In recent decades, approaches based upon artificial intelligence (AI), such as codes using convolutional neural networks (CNNs), have been proposed to identify RFI more reliably and efficiently. However, RFI flagging of FAST data with such methods has often proved to be erroneous, with further manual inspection required. In addition, network construction as well as preparation of training data sets for effective RFI flagging has imposed significant additional workloads. Therefore, rapid deployment and adjustment of AI approaches for different observations is impractical with existing algorithms. To overcome these problems, we propose a model called RFI-Net. Given raw data without any preprocessing as input, RFI-Net detects RFI automatically, producing corresponding masks without any alteration of the original data. Experiments with RFI-Net on simulated astronomical data show that our model outperforms existing methods in terms of both precision and recall. Moreover, compared with other models, our method obtains the same relative accuracy with less training data, thus reducing the effort and time required to prepare the training data set. Further, the training process of RFI-Net can be accelerated, with overfitting minimized, compared with other CNN codes. The performance of RFI-Net has also been evaluated on observational data obtained by FAST and the Bleien Observatory. Our results demonstrate the ability of RFI-Net to accurately identify RFI with fine-grained, high-precision masks that require no further modification.
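The abstract describes a CNN that maps raw time-frequency data to a per-pixel RFI mask. The sketch below only illustrates that input/output setup; it is not the published RFI-Net architecture, and the `TinyRFIMasker` name, layer sizes, and training details are assumptions.

```python
# Illustrative sketch: a tiny CNN producing a per-pixel RFI probability mask.
import torch
import torch.nn as nn

class TinyRFIMasker(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),   # one logit per pixel
        )

    def forward(self, x):            # x: (batch, 1, time, freq)
        return self.net(x)           # logits; apply sigmoid for probabilities

model = TinyRFIMasker()
criterion = nn.BCEWithLogitsLoss()                 # mask pixels labelled 0/1
waterfall = torch.randn(4, 1, 128, 256)            # stand-in for raw input
labels = (torch.rand(4, 1, 128, 256) > 0.95).float()
loss = criterion(model(waterfall), labels)
loss.backward()
```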


Web Mining, 2011, pp. 189-207
Author(s):  
Lixin Fu

Currently, data classification is performed either on data stored in relational databases or on data stored in flat files. The problem with these approaches is that, for large data sets, they often need multiple scans of the original data and thus are often infeasible in many applications. In this chapter we propose to deploy classification on top of OLAP (online analytical processing) and data cube systems. First, we compute the statistics over various combinations of the attributes, known as data cubes. The statistics are then used to derive classification models. In this way, we only scan the original data once, which improves the performance of classification significantly. Furthermore, our new classifier provides “free” classification by eliminating the dominating I/O overhead of scanning the massive original data. An architecture that integrates database, data cube, and data mining is given, and three new cube-based classifiers are presented and evaluated.
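The core idea, scanning the relation once to build aggregate counts (a simple cuboid) and then deriving a classifier purely from those aggregates, can be illustrated with a small sketch. The Naive-Bayes-style scoring below is an illustrative choice, not one of the chapter's three cube-based classifiers, and the toy relation is invented.

```python
# Sketch: derive a classifier from pre-aggregated cube counts, scanning raw data once.
import numpy as np
import pandas as pd

raw = pd.DataFrame({            # stand-in for the original relation
    "age": ["young", "young", "old", "old", "old"],
    "income": ["low", "high", "high", "low", "high"],
    "label": ["no", "yes", "yes", "no", "yes"],
})

# One scan of the data: per-attribute-value counts by class.
cube = {col: raw.groupby([col, "label"]).size() for col in ["age", "income"]}
class_counts = raw["label"].value_counts()

def predict(record, alpha=1.0):
    """Score each class using only the cube counts (Laplace-smoothed)."""
    scores = {}
    for c, n_c in class_counts.items():
        logp = np.log(n_c / len(raw))
        for col, val in record.items():
            n_cv = cube[col].get((val, c), 0)
            logp += np.log((n_cv + alpha) / (n_c + alpha * raw[col].nunique()))
        scores[c] = logp
    return max(scores, key=scores.get)

print(predict({"age": "old", "income": "high"}))   # -> "yes"
```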


Author(s):  
MITHUN PRASAD
ARCOT SOWMYA
INGE KOCH

Isolating relevant information and reducing the dimensionality of the original data set are key areas of interest in pattern recognition and machine learning. In this paper, a novel approach to reducing the dimensionality of the feature space by employing independent component analysis (ICA) is introduced. While ICA is primarily a feature extraction technique, it is used here as a feature selection/construction technique in a generic way. The new technique, called feature selection based on independent component analysis (FS_ICA), efficiently builds a reduced set of features without loss in accuracy and also has a fast incremental version. When used as a first step in supervised learning, FS_ICA outperforms comparable methods in efficiency without loss of classification accuracy. For large data sets, as in medical image segmentation of high-resolution computed tomography images, FS_ICA reduces the dimensionality of the data set substantially and results in efficient and accurate classification.
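A rough sketch of how ICA can serve as a feature selection/construction step follows. It is not the FS_ICA algorithm itself: ranking components by excess kurtosis and the `ica_reduce` helper are assumptions made for illustration.

```python
# Sketch: ICA as a feature construction/selection step before supervised learning.
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

def ica_reduce(X, n_keep=10, random_state=0):
    """Extract independent components and keep the most non-Gaussian ones
    (an assumed selection criterion) as the reduced feature set."""
    ica = FastICA(n_components=min(X.shape), random_state=random_state)
    S = ica.fit_transform(X)                         # independent components
    n_keep = min(n_keep, S.shape[1])
    order = np.argsort(-np.abs(kurtosis(S, axis=0)))  # most non-Gaussian first
    return S[:, order[:n_keep]], ica, order[:n_keep]

# Usage: X_red, ica, kept = ica_reduce(X_train); clf.fit(X_red, y_train)
```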


Author(s):  
Malcolm J. Beynon

The essence of data mining is to investigate for pertinent information that may exist in data (often large data sets). The immeasurably large amount of data present in the world, due to the increasing capacity of storage media, magnifies the issue of the presence of missing values (Olinsky et al., 2003; Brown and Kros, 2003). This encyclopaedia article considers the general issue of the presence of missing values when data mining, and demonstrates, through the utilisation of a data mining technique, the effect of managing or not managing their presence. The issue of missing values was first exposited over forty years ago in Afifi and Elashoff (1966). Since then it has continually been the focus of study and explanation (El-Masri and Fox-Wasylyshyn, 2005), covering issues such as the nature of their presence and their management (Allison, 2000). With this in mind, a consistent aspect of the missing value debate is the limited number of general strategies available for their management, the main two being either the simple deletion of cases with missing data or some form of imputation of the missing values (see Elliott and Hawthorne, 2005). Examples of specific investigations of missing data (and data quality) include data warehousing (Ma et al., 2000) and customer relationship management (Berry and Linoff, 2000). An alternative strategy considered here is the retention of the missing values, and their subsequent ‘ignorance’ contribution in any data mining undertaken on the associated original incomplete data set. A consequence of this retention is that full interpretability can be placed on the results found from the original incomplete data set. This strategy can be followed when using the nascent CaRBS technique for object classification (Beynon, 2005a, 2005b). CaRBS analyses are presented here to illustrate that data mining can manage the presence of missing values in a much more effective manner than the more inhibitory traditional strategies. An example data set is considered, with a noticeable level of missing values present in the original data set. A deliberate increase in the number of missing values present in the data set further illustrates the benefit of ‘intelligent’ data mining (in this case using CaRBS).
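The two traditional strategies noted above, case deletion and imputation, can be contrasted on a toy incomplete data set as in the sketch below; the retention strategy used by CaRBS is not reproduced here and is mentioned only for contrast.

```python
# Sketch: the two traditional missing-value strategies on a toy data set.
import pandas as pd

data = pd.DataFrame({
    "x1": [1.0, 2.0, None, 4.0],
    "x2": [None, 0.5, 0.7, 0.9],
    "y":  [0, 1, 1, 0],
})

deleted = data.dropna()                                 # listwise case deletion
imputed = data.fillna(data.mean(numeric_only=True))     # simple mean imputation

print(len(deleted), "cases retained after deletion;",
      imputed.isna().sum().sum(), "missing values after imputation")
```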


2020, Vol. 10 (1)
Author(s):
Camilla Reginatto De Pierri
Ricardo Voyceik
Letícia Graziela Costa Santos de Mattos
Mariane Gonçalves Kulik
Josué Oliveira Camargo
...  

Vectorial and alignment-free approaches to biological sequence representation have been explored in bioinformatics to efficiently handle big data. Even so, most current methods involve sequence comparisons via alignment-based heuristics and fail when applied to the analysis of large data sets. Here, we present “Spaced Words Projection (SWeeP)”, a method for representing biological sequences using relatively small vectors while preserving intersequence comparability. SWeeP uses spaced words, scanning the sequences and generating indices to create a higher-dimensional vector that is later projected onto a smaller, randomly oriented orthonormal basis. We constructed phylogenetic trees for all organisms with mitochondrial and bacterial protein data in the NCBI database. SWeeP quickly built complete and accurate trees for these organisms with low computational cost. We compared SWeeP to other alignment-free methods, and SWeeP was 10 to 100 times quicker than the other techniques. A tool to build SWeeP vectors is available at https://sourceforge.net/projects/spacedwordsprojection/.
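A minimal sketch of the SWeeP idea, not the authors' implementation: spaced-word occurrences are counted to form a high-dimensional vector, which is then projected onto a smaller, randomly oriented orthonormal basis so sequences remain comparable in the reduced space. The mask, output dimension, and example sequence are assumptions.

```python
# Sketch: spaced-word counting followed by a random orthonormal projection.
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"
MASK = "1101"                  # '1' = match position, '0' = don't care (assumed)

def spaced_word_vector(seq, mask=MASK):
    """Count spaced-word occurrences to build a high-dimensional index vector."""
    index = {a: i for i, a in enumerate(AA)}
    vec = np.zeros(len(AA) ** mask.count("1"))
    for i in range(len(seq) - len(mask) + 1):
        word = [seq[i + j] for j, m in enumerate(mask) if m == "1"]
        if all(a in index for a in word):
            code = 0
            for a in word:
                code = code * len(AA) + index[a]
            vec[code] += 1
    return vec

def random_orthonormal_basis(dim_in, dim_out, seed=0):
    """Columns form a randomly oriented orthonormal basis of the target space."""
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((dim_in, dim_out)))
    return q

basis = random_orthonormal_basis(len(AA) ** MASK.count("1"), 50)
v = spaced_word_vector("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ") @ basis
print(v.shape)                 # (50,) compact, comparable representation
```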


2016, Vol. 13 (4), pp. 1-18
Author(s):  
Angel Fernando Kuri-Morales

The exploitation of large databases frequently implies the investment of large and, usually, expensive resources, both in terms of storage and of the processing time required. It is possible to obtain equivalent reduced data sets in which the statistical information of the original data is preserved while dispensing with redundant constituents. The physical embodiment of the relevant features of the database is therefore more economical. The author proposes a method to obtain an optimal transformed representation of the original data which is, in general, considerably more compact than the original without impairing its informational content. To certify the equivalence of the original data set (FD) and the reduced one (RD), the author applies an algorithm which relies on a Genetic Algorithm (GA) and a multivariate regression algorithm (AA). Through the combined application of GA and AA, the equivalent behavior of both FD and RD may be guaranteed with a high degree of statistical certainty.
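A sketch of the certification idea follows: fit the same multivariate regression on the full data set (FD) and on a candidate reduced data set (RD), then compare the models. This is not the author's GA+AA procedure; random sampling stands in for the genetic search, and the data are synthetic.

```python
# Sketch: compare a regression fit on the full data set with one fit on a reduced set.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((10_000, 5))
y = X @ np.array([1.5, -2.0, 0.0, 0.7, 3.0]) + 0.1 * rng.standard_normal(10_000)

full_model = LinearRegression().fit(X, y)                 # FD model

rd_idx = rng.choice(len(X), size=500, replace=False)      # candidate RD
rd_model = LinearRegression().fit(X[rd_idx], y[rd_idx])   # RD model

# Equivalence check: coefficients and predictive behaviour should agree closely.
print(np.max(np.abs(full_model.coef_ - rd_model.coef_)))
print(full_model.score(X, y), rd_model.score(X, y))
```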


Author(s):  
MICHEL BRUYNOOGHE

The clustering of large data sets is of great interest in fields such as pattern recognition, numerical taxonomy, and image or speech processing. The traditional Ascendant Hierarchical Clustering (AHC) algorithm cannot be run on sets of more than a few thousand elements. The reducible neighborhoods clustering algorithm, which is presented in this paper, overcomes the limits of the traditional hierarchical clustering algorithm by generating an exact hierarchy on a large data set. The theoretical justification of this algorithm is the so-called Bruynooghe reducibility principle, which lays down the condition under which the exact hierarchy may be constructed locally, by carrying out aggregations in restricted regions of the representation space. As with the Day and Edelsbrunner algorithm, the maximum theoretical time complexity of the reducible neighborhoods clustering algorithm is O(n^2 log n), regardless of the chosen clustering strategy. But the reducible neighborhoods clustering algorithm uses the original data table, and its practical performance is by far better than that of Day and Edelsbrunner's algorithm, thus allowing the hierarchical clustering of large data sets, i.e. those composed of more than 10 000 objects.
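As a readily available point of comparison (not the reducible neighborhoods algorithm itself), SciPy's `linkage()` uses nearest-neighbor-chain algorithms that likewise rely on a reducibility property of the chosen linkage; unlike the method above, however, it still works from the full condensed distance matrix, which is the main memory cost at this scale.

```python
# Sketch: an exact hierarchy on 10 000 objects with a reducibility-based baseline.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

X = np.random.default_rng(0).standard_normal((10_000, 8))
Z = linkage(X, method="ward")        # nearest-neighbor-chain, exact hierarchy
labels = fcluster(Z, t=20, criterion="maxclust")
print(labels.max(), "clusters cut from the exact hierarchy")
```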


Author(s):  
John A. Hunt

Spectrum-imaging is a useful technique for comparing different processing methods on very large data sets which are identical for each method. This paper is concerned with comparing methods of electron energy-loss spectroscopy (EELS) quantitative analysis on the Al-Li system. The spectrum-image analyzed here was obtained from an Al-10at%Li foil aged to produce δ' precipitates that can span the foil thickness. Two 1024-channel EELS spectra offset in energy by 1 eV were recorded and stored at each pixel in the 80x80 spectrum-image (25 Mbytes). An energy range of 39-89 eV (20 channels/eV) is represented. During processing the spectra are either subtracted to create an artifact-corrected difference spectrum, or the energy offset is numerically removed and the spectra are added to create a normal spectrum. The spectrum-images are processed into 2D floating-point images using methods and software described in [1].
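A small sketch of the per-pixel processing described above, using random stand-in data rather than the original software: the two spectra, offset by 1 eV (20 channels at 20 channels/eV), are either subtracted to form the difference spectrum or realigned and added to form the normal spectrum.

```python
# Sketch: difference vs. realigned-sum processing of the offset spectrum pairs.
import numpy as np

CHANNELS, OFFSET = 1024, 20                  # 1 eV offset at 20 channels/eV
spec_a = np.random.rand(80, 80, CHANNELS)    # stand-in for the spectrum-image
spec_b = np.random.rand(80, 80, CHANNELS)    # same pixels, shifted by 1 eV

difference = spec_a - spec_b                              # artifact-corrected difference
normal = spec_a[..., OFFSET:] + spec_b[..., :-OFFSET]     # remove offset, then add

print(difference.shape, normal.shape)        # (80, 80, 1024) (80, 80, 1004)
```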


Author(s):  
Mark Ellisman
Maryann Martone
Gabriel Soto
Eleizer Masliah
David Hessler
...  

Structurally-oriented biologists examine cells, tissues, organelles and macromolecules in order to gain insight into cellular and molecular physiology by relating structure to function. The understanding of these structures can be greatly enhanced by the use of techniques for the visualization and quantitative analysis of three-dimensional structure. Three projects from current research activities will be presented in order to illustrate both the present capabilities of computer-aided techniques as well as their limitations and future possibilities. The first project concerns the three-dimensional reconstruction of the neuritic plaques found in the brains of patients with Alzheimer's disease. We have developed a software package, "Synu", for the investigation of 3D data sets, which has been used in conjunction with laser confocal light microscopy to study the structure of the neuritic plaque. Tissue sections of autopsy samples from patients with Alzheimer's disease were double-labeled for tau, a cytoskeletal marker for abnormal neurites, and synaptophysin, a marker of presynaptic terminals.

