Clustering algorithm using space filling curves for the classification of high energy physics data

2003 ◽  
Vol 119 ◽  
pp. 1024-1026 ◽  
Author(s):  
Mostafa Mjahed

2018 ◽  
Vol 1085 ◽  
pp. 042022 ◽  
Author(s):  
M Andrews ◽  
M Paulini ◽  
S Gleyzer ◽  
B Poczos

2020 ◽  
Vol 3 ◽  
Author(s):  
Marco Rovere ◽  
Ziheng Chen ◽  
Antonio Di Pilato ◽  
Felice Pantaleo ◽  
Chris Seez

One of the challenges of high granularity calorimeters, such as that to be built to cover the endcap region in the CMS Phase-2 Upgrade for HL-LHC, is that the large number of channels causes a surge in the computing load when clustering numerous digitized energy deposits (hits) in the reconstruction stage. In this article, we propose a fast and fully parallelizable density-based clustering algorithm, optimized for high-occupancy scenarios, where the number of clusters is much larger than the average number of hits in a cluster. The algorithm uses a grid spatial index for fast querying of neighbors and its timing scales linearly with the number of hits within the range considered. We also show a comparison of the performance on CPU and GPU implementations, demonstrating the power of algorithmic parallelization in the coming era of heterogeneous computing in high-energy physics.
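A minimal sketch of density-based clustering with a uniform-grid spatial index, in the spirit of the approach described above. All names, the 2D setup, and the linking rule are illustrative assumptions, not the CMS implementation:

```python
import numpy as np
from collections import defaultdict

def grid_cluster(points, radius):
    """Toy density-based clustering with a grid spatial index.

    Each hit's local density is the number of neighbors within `radius`;
    a hit with no denser neighbor in range seeds a cluster, and every
    other hit follows its nearest denser neighbor. Grid cells of size
    `radius` restrict neighbor queries to the 3x3 surrounding cells, so
    the cost grows linearly with the number of hits.
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    # Grid index: cell coordinates -> indices of the points in that cell.
    cells = np.floor(pts / radius).astype(int)
    grid = defaultdict(list)
    for i, c in enumerate(map(tuple, cells)):
        grid[c].append(i)

    def neighbors(i):
        cx, cy = cells[i]
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for j in grid.get((cx + dx, cy + dy), ()):
                    if j != i and np.hypot(*(pts[j] - pts[i])) <= radius:
                        yield j

    density = np.array([1 + sum(1 for _ in neighbors(i)) for i in range(n)])

    def denser(j, i):
        # Strict total order (density, then index) so follower chains
        # always terminate at a seed even when densities tie.
        return (density[j], -j) > (density[i], -i)

    # Link each point to its nearest strictly-denser neighbor in range.
    parent = np.arange(n)
    for i in range(n):
        best, best_d = i, np.inf
        for j in neighbors(i):
            d = np.hypot(*(pts[j] - pts[i]))
            if denser(j, i) and d < best_d:
                best, best_d = j, d
        parent[i] = best

    # Self-parented points are seeds; followers inherit the seed's label.
    labels = -np.ones(n, dtype=int)
    n_clusters = 0
    for i in range(n):
        chain, j = [], i
        while labels[j] < 0 and parent[j] != j:
            chain.append(j)
            j = parent[j]
        if labels[j] < 0:
            labels[j] = n_clusters
            n_clusters += 1
        for k in chain:
            labels[k] = labels[j]
    return labels
```

The grid lookup is what keeps the neighbor query local: instead of scanning all hits, each query touches only the handful of points in adjacent cells, which is what makes the overall timing linear in the hit count.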


1987 ◽  
Vol 62 (3) ◽  
pp. 213-217 ◽  
Author(s):  
E. A. Belogorlov ◽  
G. I. Britvich ◽  
G. I. Krupnyi ◽  
V. N. Lebedev ◽  
V. S. Lukanin ◽  
...  

2007 ◽  
Vol 22 (06) ◽  
pp. 1201-1211 ◽  
Author(s):  
MEILING YU ◽  
LIANSHOU LIU

The possible application of boosted neural networks to particle classification in high energy physics is discussed. A two-dimensional toy model, in which the boundary between signal and background is irregular but non-overlapping, is constructed to show how the boosting technique works with neural networks. It is found that a boosted neural network not only decreases the classification error rate significantly but also increases the efficiency and the signal–background ratio. In addition, a boosted neural network avoids the drawbacks of single-network design. The boosted neural network is also applied to the classification of quark- and gluon-jet samples from Monte Carlo e+e− collisions, where the two samples overlap significantly. The performance of the boosting technique in the two boundary cases, with and without overlap, is discussed.
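A hedged sketch of the boosting technique the abstract describes, using AdaBoost over decision stumps as stand-ins for small neural networks (the toy boundary, names, and parameters below are illustrative, not taken from the paper):

```python
import numpy as np

def stump_fit(X, y, w):
    """Best axis-aligned threshold split under sample weights w; y in {-1, +1}."""
    best = (0, 0.0, 1, np.inf)          # (feature, threshold, polarity, error)
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            for pol in (1, -1):
                pred = pol * np.where(X[:, f] > t, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (f, t, pol, err)
    return best

def adaboost(X, y, rounds=20):
    """AdaBoost: reweight misclassified events each round and refit."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        f, t, pol, err = stump_fit(X, y, w)
        err = max(err, 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)   # weak learner's vote weight
        pred = pol * np.where(X[:, f] > t, 1, -1)
        w *= np.exp(-alpha * y * pred)          # upweight the mistakes
        w /= w.sum()
        ensemble.append((alpha, f, t, pol))
    return ensemble

def predict(ensemble, X):
    score = sum(a * p * np.where(X[:, f] > t, 1, -1)
                for a, f, t, p in ensemble)
    return np.where(score >= 0, 1, -1)
```

As in the abstract's toy model, the signal region below has an irregular but non-overlapping boundary; boosting lets the weighted ensemble carve it out even though each individual stump cannot.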


2021 ◽  
Vol 2021 (3) ◽  
Author(s):  
Konstantin T. Matchev ◽  
Prasanth Shyamsundar

Abstract We provide a prescription called ThickBrick to train optimal machine-learning-based event selectors and categorizers that maximize the statistical significance of a potential signal excess in high energy physics (HEP) experiments, as quantified by any of six different performance measures. For analyses where the signal search is performed in the distribution of some event variables, our prescription ensures that only the information complementary to those event variables is used in event selection and categorization. This eliminates a major misalignment with the physics goals of the analysis (maximizing the significance of an excess) that exists in the training of typical ML-based event selectors and categorizers. In addition, this decorrelation of event selectors from the relevant event variables prevents the background distribution from becoming peaked in the signal region as a result of event selection, thereby ameliorating the challenges imposed on signal searches by systematic uncertainties. Our event selectors (categorizers) use the output of machine-learning-based classifiers as input and apply optimal selection cutoffs (categorization thresholds) that are functions of the event variables being analyzed, as opposed to flat cutoffs (thresholds). These optimal cutoffs and thresholds are learned iteratively, using a novel approach with connections to Lloyd’s k-means clustering algorithm. We provide a public, Python implementation of our prescription, also called ThickBrick, along with usage examples.
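A simplified, hypothetical illustration of the key idea of cutoffs that are functions of the event variable rather than flat. This is not the ThickBrick prescription itself (which learns thresholds iteratively via a k-means-like procedure); the per-bin scan, the regularized significance proxy s/sqrt(b+1), and all names here are assumptions:

```python
import numpy as np

def per_bin_cutoffs(x, score, is_signal, bin_edges, grid=None):
    """For each bin of the event variable x, scan classifier-score cutoffs
    and keep the one maximizing a significance proxy among that bin's events.

    Uses s / sqrt(b + 1) as a simple regularized stand-in for s / sqrt(b),
    so bins whose background is fully rejected are still comparable.
    """
    if grid is None:
        grid = np.linspace(0, 1, 101)
    bins = np.digitize(x, bin_edges) - 1
    cutoffs = []
    for b in range(len(bin_edges) - 1):
        in_bin = bins == b
        best_c, best_z = 0.0, -np.inf
        for c in grid:
            sel = in_bin & (score >= c)
            s = np.sum(sel & is_signal)        # selected signal count
            bkg = np.sum(sel & ~is_signal)     # selected background count
            z = s / np.sqrt(bkg + 1)
            if z > best_z:
                best_c, best_z = c, z
        cutoffs.append(best_c)
    return np.array(cutoffs)
```

The point of the exercise: where the background score distribution differs across bins of the event variable, the optimal cutoff differs too, which a single flat cutoff cannot capture.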


Author(s):  
Kent W. Staley

Much of the discussion of the argument from inductive risk (AIR) centers on scientific research that has relevance to policymaking. To emphasize that inductive risk pervades science, this chapter discusses the AIR in the context of high energy physics: specifically, the discovery of the Higgs boson, a scientific finding that is irrelevant to policy. The applicability of the AIR for the case of the Higgs boson is established through a pragmatic approach to scientific inquiry, emphasizing the centrality of practical decision problems to the production of scientific knowledge. This approach, drawing on debates among pragmatists over the interpretation of statistical inference, eschews the classification of value judgments into epistemic and non-epistemic.

