Efficient Tracking of the Dominant Eigenspace of a Normalized Kernel Matrix

2008 ◽  
Vol 20 (2) ◽  
pp. 523-554 ◽  
Author(s):  
Geert Gins ◽  
Ilse Y. Smets ◽  
Jan F. Van Impe

Various machine learning problems rely on kernel-based methods. The power of these methods resides in the ability to solve highly nonlinear problems by reformulating them in a linear context. The dominant eigenspace of a (normalized) kernel matrix is often required. Unfortunately, the computational requirements of existing kernel methods restrict their applicability to relatively small data sets. This letter therefore focuses on a kernel-based method for large data sets. More specifically, a numerically stable tracking algorithm for the dominant eigenspace of a normalized kernel matrix is proposed, which proceeds by an updating (the addition of a new data point) followed by a downdating (the exclusion of an old data point) of the kernel matrix. Testing the algorithm on some representative case studies reveals that a very good approximation of the dominant eigenspace is obtained, while only a minimal number of operations and a small amount of memory are required per iteration step.
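The updating/downdating algorithm itself is not reproduced in the abstract. The following NumPy sketch (the function names, the RBF kernel choice, and the centering step are illustrative assumptions) only shows the naive sliding-window computation that such a tracker approximates at far lower cost per step.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gaussian (RBF) kernel matrix of the rows of X; an assumed kernel choice."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def dominant_eigenspace(K, k):
    """Top-k eigenpairs of a symmetric matrix, largest eigenvalues first."""
    w, V = np.linalg.eigh(K)                      # ascending order
    return w[::-1][:k], V[:, ::-1][:, :k]

def naive_tracker(stream, window=200, k=5, gamma=1.0):
    """Reference behaviour the tracking algorithm approximates: after each
    update (new point added) and downdate (oldest point removed), obtain the
    dominant eigenspace of the centered (normalized) kernel matrix.  Here it
    is simply recomputed from scratch, which is what the proposed algorithm
    avoids."""
    buf = []
    for x in stream:
        buf.append(np.asarray(x))                 # updating
        if len(buf) > window:
            buf.pop(0)                            # downdating
        X = np.vstack(buf)
        n = len(buf)
        H = np.eye(n) - np.ones((n, n)) / n       # centering, as in kernel PCA
        Kc = H @ rbf_kernel(X, gamma) @ H
        yield dominant_eigenspace(Kc, min(k, n))
```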

2013 ◽  
Vol 2013 ◽  
pp. 1-11
Author(s):  
Dewang Chen ◽  
Long Chen

In order to obtain a decent trade-off between low-cost, low-accuracy Global Positioning System (GPS) receivers and the high-precision digital maps required by modern railways, we combine the concept of constraint K-segment principal curves (CKPCS) with expert knowledge of railways and propose three practical CKPCS generation algorithms with reduced computational complexity, making them more suitable for engineering applications. The three algorithms are named ALLopt, MPMopt, and DCopt: ALLopt exploits global optimization, while MPMopt and DCopt apply local optimization with different initial solutions. We compare the three algorithms in terms of average projection error, stability, and fitness on simple and complex simulated trajectories with noisy data. ALLopt works well only for simple curves and small data sets, whereas the other two algorithms handle complex curves and large data sets better. Moreover, MPMopt runs faster than DCopt, but DCopt performs better on curves with cross points. The three algorithms are also applied to generate GPS digital maps for two railway GPS data sets measured on the Qinghai-Tibet Railway (QTR), with results similar to those obtained on the synthetic data. Because a railway trajectory is relatively simple and straight, we conclude that MPMopt works best when both computation speed and the quality of the generated CKPCS are considered. MPMopt can be used to extract a small set of key points that represent a large amount of GPS data, greatly reducing data storage requirements and increasing positioning speed for real-time digital map applications.
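The optimization algorithms themselves are not detailed in the abstract; the sketch below only illustrates the evaluation metric mentioned above, the average projection error of noisy GPS points onto a K-segment principal curve represented as a polyline. The function names are hypothetical, and the use of squared distance is an assumption.

```python
import numpy as np

def project_to_polyline(p, vertices):
    """Closest point on a piecewise-linear curve (array of vertices) to p,
    returned together with the squared distance."""
    best_q, best_d2 = None, np.inf
    for a, b in zip(vertices[:-1], vertices[1:]):
        ab = b - a
        t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        q = a + t * ab                            # projection onto segment [a, b]
        d2 = float(np.dot(p - q, p - q))
        if d2 < best_d2:
            best_q, best_d2 = q, d2
    return best_q, best_d2

def average_projection_error(points, vertices):
    """Average squared distance of the GPS points from the fitted curve."""
    return float(np.mean([project_to_polyline(p, vertices)[1] for p in points]))
```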


2020 ◽  
pp. 1-11
Author(s):  
Erjia Yan ◽  
Zheng Chen ◽  
Kai Li

Citation sentiment plays an important role in citation analysis and scholarly communication research, but prior citation sentiment studies have used small data sets and relied largely on manual annotation. This paper uses a large data set of PubMed Central (PMC) full-text publications and analyzes citation sentiment in more than 32 million citances within PMC, revealing citation sentiment patterns at the journal and discipline levels. This paper finds a weak relationship between a journal’s citation impact (as measured by CiteScore) and the average sentiment score of citances to its publications. When journals are aggregated into quartiles based on citation impact, we find that journals in higher quartiles are cited more favorably than those in the lower quartiles. Further, social science journals are found to be cited with the highest sentiment, followed by engineering and natural science journals, and then biomedical journals. This result may be attributed to disciplinary discourse patterns in which social science researchers tend to use more subjective terms to describe others’ work than do natural science or biomedical researchers.


Author(s):  
Jeremy Rickli ◽  
Jaime Camelio

Recent advances in process monitoring technology have introduced an influx of exceptionally large data sets containing information on manufacturing process health. The recorded data sets comprise numerous parameters, for which multivariate statistical process control (MSPC) methodologies are required. Current multivariate control charts are well suited to monitoring data sets with a small number of parameters; however, new monitoring devices such as surface scanning cameras can increase the number of parameters by two orders of magnitude. This paper proposes a modified form of the original multivariate Hotelling T2 chart capable of monitoring manufacturing processes with a large number of parameters, together with a fault diagnosis procedure that combines least squares analysis with univariate control charts. A case study on surface scanning of compliant sheet metal components, with comparisons to processes using optical CMMs, is presented to verify the proposed assembly fixture fault diagnosis methodology and the modified Hotelling T2 multivariate control chart.
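The modified chart is not given in the abstract; as a point of reference, the sketch below computes the classical Hotelling T2 statistic that the modification builds on. The ridge term and the function name are illustrative assumptions, not the authors' method.

```python
import numpy as np

def hotelling_t2(X_incontrol, x_new, ridge=1e-6):
    """Classical Hotelling T^2 for a new observation, using the mean and
    covariance of in-control training data.  With hundreds of parameters
    (e.g. from surface scanning) the sample covariance becomes ill-conditioned,
    which is what motivates a modified chart; the ridge term here is only a
    crude illustrative safeguard."""
    mu = X_incontrol.mean(axis=0)
    S = np.cov(X_incontrol, rowvar=False) + ridge * np.eye(X_incontrol.shape[1])
    d = x_new - mu
    return float(d @ np.linalg.solve(S, d))

# An observation is flagged when T^2 exceeds a control limit, e.g. one derived
# from the F-distribution or an empirical percentile of in-control values.
```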


Author(s):  
Kim Wallin

The standard Master Curve (MC) deals only with materials assumed to be homogeneous, but MC analysis methods for inhomogeneous materials have also been developed. The bi-modal and multi-modal analysis methods, in particular, are becoming increasingly standard. Their drawback is that they are generally reliable only with sufficiently large data sets (number of valid tests, r ≥ 15–20). Here, the possibility of using the multi-modal analysis method with smaller data sets is assessed, and a new procedure to conservatively account for possible inhomogeneities is proposed.
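For context only (quoted from the general Master Curve literature, e.g. ASTM E1921, not from this paper), the standard MC describes the median fracture toughness of a homogeneous material as a function of temperature relative to the reference temperature T0:

```latex
K_{Jc(\mathrm{med})} = 30 + 70\,\exp\!\bigl[0.019\,(T - T_0)\bigr] \qquad [\mathrm{MPa\,\sqrt{m}}]
```

The bi-modal and multi-modal extensions allow two, or a distribution of, T0 values within one data set, which is why they require more valid tests to be estimated reliably.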


Author(s):  
Gary Smith ◽  
Jay Cordes

Patterns are inevitable and we should not be surprised by them. Streaks, clusters, and correlations are the norm, not the exception. In a large number of coin flips, there are likely to be coincidental clusters of heads and tails. In nationwide data on cancer, crime, or test scores, there are likely to be flukey clusters. When the data are separated into smaller geographic units like cities, the most extreme results are likely to be found in the smallest cities. In athletic competitions between well-matched teams, the outcome of a small number of games is almost meaningless. Our challenge is to overcome our inherited inclination to think that all patterns are meaningful; for example, thinking that clustering in large data sets or differences among small data sets must be something real that needs to be explained. Often, it is just meaningless happenstance.
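The coin-flip claim is easy to check with a short simulation (a hedged illustration, not taken from the book): in 1,000 fair flips, a streak of roughly ten identical outcomes in a row is typical rather than surprising.

```python
import numpy as np

rng = np.random.default_rng(0)

def longest_run(flips):
    """Length of the longest streak of identical outcomes."""
    best = run = 1
    for prev, cur in zip(flips[:-1], flips[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

# Median longest streak over many simulated sequences of 1,000 fair flips.
runs = [longest_run(rng.integers(0, 2, size=1000)) for _ in range(2000)]
print("typical longest streak:", int(np.median(runs)))
```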


2008 ◽  
Vol 130 (2) ◽  
Author(s):  
Stuart Holdsworth

The European Creep Collaborative Committee (ECCC) approach to creep data assessment has now been established for almost ten years. The methodology covers the analysis of rupture strength and ductility, creep strain, and stress relaxation data for a range of material conditions. This paper reviews the concepts and procedures involved. The original approach was devised to determine data sheets for use by committees responsible for the preparation of national and international design and product standards, and the methods developed for data quality evaluation and data analysis were therefore intentionally rigorous. The focus was clearly on the determination of long-time property values from the largest possible data sets involving a significant number of observations in the mechanism regime for which predictions were required. More recently, the emphasis has changed. There is now an increasing requirement for full property descriptions from very short to very long times, and hence a need for much more flexible model representations than were previously required. There continues to be a requirement for reliable long-time predictions from relatively small data sets comprising relatively short-duration tests, in particular to exploit new alloy developments at the earliest practical opportunity. In such circumstances, it is not feasible to apply the same degree of rigor adopted for large data set assessment. Current developments are reviewed.


1981 ◽  
Vol 35 (1) ◽  
pp. 35-42 ◽  
Author(s):  
J. D. Algeo ◽  
M. B. Denton

A numerical method for evaluating the inverted Abel integral employing cubic spline approximations is described along with a modification of the procedure of Cremers and Birkebak, and an extension of the Barr method. The accuracy of the computations is evaluated at several noise levels and with varying resolution of the input data. The cubic spline method is found to be useful only at very low noise levels, but capable of providing good results with small data sets. The Barr method is computationally the simplest, and is adequate when large data sets are available. For noisy data, the method of Cremers and Birkebak gave the best results.
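For reference, the inversion being evaluated is the standard inverse Abel transform (standard form from the general literature, not reproduced from the paper):

```latex
f(r) = -\frac{1}{\pi} \int_{r}^{R} \frac{F'(y)}{\sqrt{y^{2} - r^{2}}}\, dy ,
```

where F(y) is the measured line-of-sight projection, f(r) the radial distribution, and R the outer radius. The cubic spline, Cremers-Birkebak, and Barr procedures differ in how the derivative F'(y) and the singular integrand are approximated from discrete data, which is largely what determines their different sensitivities to noise.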


Author(s):  
Frank Rehm ◽  
Frank Klawonn ◽  
Rudolf Kruse

Many applications in science and business, such as signal analysis or customer segmentation, deal with large amounts of data that are usually high dimensional in the feature space. As a part of preprocessing and exploratory data analysis, visualization of the data helps to decide which kind of data mining method is likely to lead to good results, or whether outliers or noisy data need to be treated beforehand (Barnett & Lewis, 1994; Hawkins, 1980). Since visual assessment of a feature space with more than three dimensions is not possible, it becomes necessary to find an appropriate visualization scheme for such data sets. Multidimensional scaling (MDS) is a family of methods that seek to present the important structure of the data in a reduced number of dimensions. Because conventional MDS techniques follow a distance-preservation approach, their memory and computation requirements are fairly high and prevent their application to large data sets. In this work we present two methods that visualize high-dimensional data on the plane using a new approach, together with an algorithm that allows our method to be applied to larger data sets. We also present some results on a benchmark data set.
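The two new visualization methods are not described here; for orientation, the sketch below is a standard classical (Torgerson) MDS baseline, whose quadratic memory and cubic eigendecomposition cost is exactly the bottleneck that motivates approaches for larger data sets. Names and details are illustrative, not the chapter's algorithm.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical (Torgerson) MDS: embed n points in `dim` dimensions so that
    pairwise Euclidean distances approximate the given n x n distance matrix D.
    Requires O(n^2) memory and an O(n^3) eigendecomposition, which is what
    prevents conventional MDS from scaling to large data sets."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J              # double-centered squared distances
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]          # largest eigenvalues
    scale = np.sqrt(np.clip(w[idx], 0.0, None))
    return V[:, idx] * scale                 # n x dim planar coordinates
```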


2019 ◽  
Vol 97 (Supplement_3) ◽  
pp. 52-53
Author(s):  
Ignacy Misztal

Early application of genomic selection relied on SNP estimation with phenotypes or de-regressed proofs (DRP). Chips of 50k SNP seemed sufficient. The estimated breeding value was an index with parent average and a deduction to eliminate double counting. Use of SNP selection or weighting increased accuracy with small data sets but provided little or no gain with large data sets. Use of DRP with female information required ad hoc modifications. As BLUP is biased by genomic selection, use of DRP under genomic selection required adjustments. Efforts to include potentially causative SNP derived from sequence analysis showed limited or no gain. Genomic selection was greatly simplified by single-step GBLUP (ssGBLUP) because the procedure automatically creates the index, can use any combination of male and female genotypes, and accounts for preselection. ssGBLUP requires careful scaling for compatibility between pedigree and genomic relationships to avoid biases, especially under strong selection. Large-data computations in ssGBLUP were made tractable by exploiting the limited dimensionality of SNP information due to limited effective population size. With this dimensionality ranging from 4k in chickens to about 15k in Holsteins, the inverse of the genomic relationship matrix (GRM) can be created directly (e.g., by the APY algorithm) at linear cost. Due to its simplicity and accuracy, ssGBLUP is routinely used for genomic selection by major companies in chickens, pigs, and beef cattle. ssGBLUP can also be used to derive SNP effects for indirect prediction and for GWAS, including computation of P-values. An alternative single-step method, ssBR, uses SNP effects instead of the GRM. As BLUP is affected by preselection, there is a need for new validation procedures unaffected by selection, and for parameter estimation that accounts for all the genomic data used in selection. Another issue is the reduction of variances due to the Bulmer effect.
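For reference (a standard ssGBLUP relationship from the general literature, not restated in this abstract), the single step combines pedigree and genomic information through the inverse of a joint relationship matrix H:

```latex
H^{-1} = A^{-1} +
\begin{bmatrix}
0 & 0\\
0 & G^{-1} - A_{22}^{-1}
\end{bmatrix},
```

where A is the pedigree relationship matrix, G the genomic relationship matrix of the genotyped animals, and A22 the pedigree relationships among those same animals. The "careful scaling" mentioned above typically refers to adjusting G so that its average diagonal and off-diagonal elements are compatible with those of A22.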


Author(s):  
D T Pham ◽  
A A Afify

Rule induction as a method for constructing classifiers is particularly attractive in data mining applications, where the comprehensibility of the generated models is very important. Most existing techniques were designed for small data sets and thus are not practical for direct use on very large data sets because of their computational inefficiency. Scaling up rule induction methods to handle such data sets is a formidable challenge. This article presents a new algorithm for rule induction that can efficiently extract accurate and comprehensible models from large and noisy data sets. The algorithm has been tested on several complex data sets, and the results show that it scales up well and is a highly effective learner.

