Gene Expression Programming Approach to Event Selection in High Energy Physics

2006 · Vol 53 (4) · pp. 2221-2227
Author(s): L. Teodorescu

1992 · Vol 25 (4) · pp. 413-421
Author(s): Lalit Gupta, Anand M. Upadhye, Bruce Denby, Salvator R. Amendolia, Giovanni Grieco

2019 · Vol 214 · pp. 06004
Author(s): Andrea Valassi

I discuss the choice of evaluation metrics for binary classifiers in High Energy Physics (HEP) event selection. After reviewing the use of the Area Under the ROC Curve (AUC) in other domains, I point out that it is of limited relevance in this context. I then propose new metrics based on Fisher information, which can be used both for the evaluation and for the training of HEP event selection algorithms in statistically limited measurements of a parameter.
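
A minimal sketch of the contrast between the two kinds of metrics, using toy Gaussian classifier scores and arbitrary expected yields (all assumptions of this example, not taken from the paper): the AUC summarizes the ranking power of the score alone, while the Fisher information on the signal strength in a simple counting measurement, which reduces to s²/(s+b), depends on the chosen selection cut and on the absolute yields.

```python
# Illustrative sketch (not the paper's code): compare the ROC AUC of a
# classifier score with a Fisher-information figure of merit for a simple
# counting measurement, scanning over possible selection cuts.
# The toy score distributions and the yields S_TOT, B_TOT are assumptions.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Toy classifier scores for signal and background events.
sig_scores = rng.normal(loc=1.0, scale=1.0, size=100_000)
bkg_scores = rng.normal(loc=0.0, scale=1.0, size=100_000)

# Global AUC of the score (a cut-independent ranking metric).
y_true = np.concatenate([np.ones_like(sig_scores), np.zeros_like(bkg_scores)])
y_score = np.concatenate([sig_scores, bkg_scores])
print(f"AUC = {roc_auc_score(y_true, y_score):.3f}")

# Expected yields before any cut (arbitrary illustrative numbers).
S_TOT, B_TOT = 100.0, 10_000.0

def fisher_info(cut):
    """Fisher information on the signal strength mu (evaluated at mu = 1)
    for a counting measurement after requiring score > cut.
    For an expected count n = mu*s + b this reduces to s**2 / (s + b)."""
    eff_s = np.mean(sig_scores > cut)   # signal selection efficiency
    eff_b = np.mean(bkg_scores > cut)   # background selection efficiency
    s, b = S_TOT * eff_s, B_TOT * eff_b
    return s**2 / (s + b) if (s + b) > 0 else 0.0

cuts = np.linspace(-2.0, 4.0, 200)
infos = [fisher_info(c) for c in cuts]
best = cuts[int(np.argmax(infos))]
print(f"cut maximizing Fisher information: {best:.2f} (I = {max(infos):.2f})")
```

Unlike the AUC, the cut that maximizes this figure of merit shifts when the absolute signal and background yields change, which is why a ranking metric alone cannot pick the working point of a statistically limited measurement.
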


2021 · Vol 2021 (3)
Author(s): Konstantin T. Matchev, Prasanth Shyamsundar

Abstract: We provide a prescription called ThickBrick to train optimal machine-learning-based event selectors and categorizers that maximize the statistical significance of a potential signal excess in high energy physics (HEP) experiments, as quantified by any of six different performance measures. For analyses where the signal search is performed in the distribution of some event variables, our prescription ensures that only the information complementary to those event variables is used in event selection and categorization. This eliminates a major misalignment with the physics goals of the analysis (maximizing the significance of an excess) that exists in the training of typical ML-based event selectors and categorizers. In addition, this decorrelation of event selectors from the relevant event variables prevents the background distribution from becoming peaked in the signal region as a result of event selection, thereby ameliorating the challenges imposed on signal searches by systematic uncertainties. Our event selectors (categorizers) use the output of machine-learning-based classifiers as input and apply optimal selection cutoffs (categorization thresholds) that are functions of the event variables being analyzed, as opposed to flat cutoffs (thresholds). These optimal cutoffs and thresholds are learned iteratively, using a novel approach with connections to Lloyd's k-means clustering algorithm. We provide a public Python implementation of our prescription, also called ThickBrick, along with usage examples.
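
To make the notion of "selection cutoffs that are functions of the event variables" concrete, here is a minimal illustrative sketch (not the ThickBrick algorithm or its API; the toy event variable m, the classifier output p, and the per-bin s²/(s+b) figure of merit are all assumptions of this example): instead of one flat cut on the classifier output, a separate cutoff is chosen in each bin of the event variable.

```python
# Illustrative sketch only: choose, in each bin of the event variable m,
# the classifier-output threshold that maximizes a per-bin s^2/(s+b)
# figure of merit, instead of applying one flat threshold to all events.
import numpy as np

rng = np.random.default_rng(1)
N = 50_000

# Toy event variable m (e.g. an invariant mass) and classifier output p.
is_sig = rng.random(N) < 0.05                      # assumed 5% signal fraction
m = np.where(is_sig, rng.normal(125, 5, N), rng.exponential(80, N))
p = np.clip(np.where(is_sig,
                     rng.normal(0.7, 0.15, N),
                     rng.normal(0.3, 0.15, N)), 0.0, 1.0)

m_bins = np.linspace(0, 300, 31)     # bins of the event variable
p_grid = np.linspace(0, 1, 51)       # candidate classifier cutoffs

def per_bin_cutoffs(m, p, is_sig):
    """Return, for each m-bin, the classifier cutoff maximizing s**2/(s+b)."""
    cutoffs = np.zeros(len(m_bins) - 1)
    for i, (lo, hi) in enumerate(zip(m_bins[:-1], m_bins[1:])):
        in_bin = (m >= lo) & (m < hi)
        best_fom, best_cut = -1.0, 0.0
        for cut in p_grid:
            sel = in_bin & (p > cut)
            s = np.count_nonzero(sel & is_sig)     # selected signal events
            b = np.count_nonzero(sel & ~is_sig)    # selected background events
            if s + b == 0:
                continue
            fom = s**2 / (s + b)
            if fom > best_fom:
                best_fom, best_cut = fom, cut
        cutoffs[i] = best_cut
    return cutoffs

cutoffs = per_bin_cutoffs(m, p, is_sig)
print("m-dependent cutoffs:", np.round(cutoffs, 2))
```

The actual prescription in the paper learns such cutoffs iteratively, with an approach related to Lloyd's k-means algorithm, and supports several significance-based performance measures; the brute-force per-bin scan above only conveys the shape of the result, namely one threshold per value of the event variable rather than a single flat cut.
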

