deterministic annealing
Recently Published Documents

TOTAL DOCUMENTS: 194 (last five years: 12)
H-INDEX: 22 (last five years: 2)

Author(s): Rudolf Frühwirth, Are Strandlie

Abstract: Vertex finding is the search for clusters of tracks that originate at the same point in space. The chapter discusses a variety of methods for finding primary vertices, first in one and then in three dimensions. Details are given on model-based clustering, the EM algorithm and clustering by deterministic annealing in 1D, and greedy clustering, iterated estimators, topological vertex finding, and a vertex finder based on medical imaging in 3D.
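To make the 1D deterministic-annealing clustering mentioned in the abstract concrete, here is a minimal sketch in the standard Rose style: soft Gibbs assignments at a given inverse temperature beta, center updates, and a cooling schedule. The function name, the squared-distance cost, and the schedule parameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of 1D clustering by deterministic annealing.
# All parameter values (beta0, beta_max, cooling rate) are illustrative.
import numpy as np

def da_cluster_1d(x, k=2, beta0=0.1, beta_max=1000.0, rate=1.5, iters=50):
    """Soft-cluster 1D points x into k centers by annealing beta upward."""
    rng = np.random.default_rng(0)
    # Start with near-degenerate centers; they bifurcate as beta increases.
    centers = x.mean() + 1e-3 * rng.standard_normal(k)
    beta = beta0
    while beta < beta_max:
        for _ in range(iters):
            d2 = (x[:, None] - centers[None, :]) ** 2        # squared distances
            # Subtract the row minimum for numerical stability at large beta.
            w = np.exp(-beta * (d2 - d2.min(axis=1, keepdims=True)))
            p = w / w.sum(axis=1, keepdims=True)             # Gibbs assignments
            centers = (p * x[:, None]).sum(axis=0) / p.sum(axis=0)
        beta *= rate                                         # cool (raise beta)
    return centers

# Two well-separated 1D "track clusters" around -3 and +3.
x = np.concatenate([np.random.default_rng(1).normal(-3, 0.3, 100),
                    np.random.default_rng(2).normal(3, 0.3, 100)])
print(np.sort(da_cluster_1d(x, k=2)))
```

At low beta all centers coincide at the global mean; as beta crosses a critical value they split, which is the phase-transition behavior deterministic annealing is known for.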


Author(s): Rudolf Frühwirth, Are Strandlie

Abstract: Track fitting is an application of established statistical estimation procedures with well-known properties. For a long time, estimators based on the least-squares principle were—with some notable exceptions—the principal methods for track fitting. More recently, robust and adaptive methods have found their way into the reconstruction programs. The first section of the chapter presents least-squares regression, the extended Kalman filter, regression with breakpoints, general broken lines and the triplet fit. The following section discusses robust regression by the M-estimator, the deterministic annealing filter, and the Gaussian-sum filter for electron reconstruction. The next section deals with linearized fits of space points to circles and helices. The chapter concludes with a section on track quality and shows how to test the track hypothesis, how to detect outliers, and how to find kinks in a track.
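As a toy illustration of the Kalman-filter track fit discussed in the abstract, the sketch below fits a straight-line "track" (state = position and slope) to noisy hits on successive detector planes. The geometry, noise levels, and absence of process noise are simplifying assumptions for the example, not the chapter's setup.

```python
# Hedged sketch: linear Kalman filter for a straight-line track,
# state = [position, slope], one position measurement per plane.
import numpy as np

def kalman_line_fit(z, y, sigma=0.1):
    """Fit hits y measured at plane coordinates z; return state and covariance."""
    state = np.array([y[0], 0.0])          # seed from the first hit
    C = np.diag([sigma**2, 1.0])           # generous initial covariance
    H = np.array([[1.0, 0.0]])             # we measure position only
    V = np.array([[sigma**2]])             # measurement variance
    for k in range(1, len(z)):
        dz = z[k] - z[k - 1]
        F = np.array([[1.0, dz], [0.0, 1.0]])   # propagate position by slope*dz
        state = F @ state                        # prediction step
        C = F @ C @ F.T                          # (no process noise assumed)
        r = y[k] - H @ state                     # residual
        S = H @ C @ H.T + V                      # residual covariance
        K = C @ H.T @ np.linalg.inv(S)           # Kalman gain
        state = state + (K @ r).ravel()          # filtered state
        C = (np.eye(2) - K @ H) @ C              # filtered covariance
    return state, C

z = np.arange(5, dtype=float)
truth = 0.5 + 0.2 * z                       # true line: intercept 0.5, slope 0.2
rng = np.random.default_rng(0)
state, C = kalman_line_fit(z, truth + 0.1 * rng.standard_normal(5))
print(state)   # final [position, slope] estimate at the last plane
```

For a linear model without process noise this filter is algebraically equivalent to the least-squares fit, which is why the chapter treats the two under one roof; robust variants such as the deterministic annealing filter replace the hard residual update with annealed assignment weights.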


Author(s): Amber Srivastava, Mayank Baranwal, Srinivasa Salapaka

Typically, clustering algorithms provide clustering solutions with a prespecified number of clusters. The lack of a priori knowledge of the true number of underlying clusters in the dataset makes it important to have a metric for comparing clustering solutions with different numbers of clusters. This article quantifies a notion of persistence of clustering solutions that enables such comparisons. The persistence relates to the range of data-resolution scales over which a clustering solution persists; it is quantified in terms of the maximum over the two-norms of all the associated cluster-covariance matrices. Thus we associate a persistence value with each element in a set of clustering solutions with different numbers of clusters. We show that, for datasets where the natural clusters are known a priori, the clustering solutions that identify the natural clusters are the most persistent; in this way, the notion can be used to identify solutions with the true number of clusters. Detailed experiments on a variety of standard and synthetic datasets demonstrate that the proposed persistence-based indicator outperforms existing approaches, such as the gap-statistic method, X-means, G-means, PG-means, dip-means, and the information-theoretic method, in accurately identifying the clustering solutions with the true number of clusters. Interestingly, our method can be explained in terms of the phase-transition phenomenon in the deterministic annealing algorithm, where the number of distinct cluster centers changes (bifurcates) with respect to an annealing parameter.
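The core quantity in the abstract—the maximum over the two-norms of the cluster-covariance matrices—can be sketched directly. The demonstration below is an assumption-laden simplification: it uses fixed labelings on synthetic blobs rather than the authors' full resolution-scale analysis, and only shows that a solution identifying the natural clusters scores a much smaller covariance norm than a coarser solution that merges two of them.

```python
# Hedged sketch of the covariance-norm quantity behind the persistence
# indicator; the full resolution-scale argument is simplified away.
import numpy as np

def max_cluster_cov_norm(X, labels):
    """Maximum two-norm (spectral norm) over per-cluster covariance matrices."""
    norms = []
    for c in np.unique(labels):
        pts = X[labels == c]
        if len(pts) < 2:
            continue                      # covariance undefined for 1 point
        cov = np.atleast_2d(np.cov(pts, rowvar=False))
        norms.append(np.linalg.norm(cov, 2))
    return max(norms)

# Three compact, well-separated Gaussian blobs in 2D.
rng = np.random.default_rng(0)
blobs = [rng.normal(m, 0.2, size=(50, 2)) for m in ([0, 0], [5, 0], [0, 5])]
X = np.vstack(blobs)
natural = np.repeat([0, 1, 2], 50)        # the true 3-cluster solution
merged = np.repeat([0, 0, 1], 50)         # coarser solution: blobs 0 and 1 merged
print(max_cluster_cov_norm(X, natural))   # small: each cluster is compact
print(max_cluster_cov_norm(X, merged))    # large: merged cluster spans two blobs
```

Intuitively, a small maximum covariance norm means every cluster stays compact over a wide range of resolution scales, which is what makes the natural solution persistent.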

