The TrackML high-energy physics tracking challenge on Kaggle

2019 ◽  
Vol 214 ◽  
pp. 06037
Author(s):  
Moritz Kiehn ◽  
Sabrina Amrouche ◽  
Paolo Calafiura ◽  
Victor Estrade ◽  
Steven Farrell ◽  
...  

The High-Luminosity LHC (HL-LHC) is expected to reach unprecedented collision intensities, which in turn will greatly increase the complexity of tracking within the event reconstruction. To reach out to computer science specialists, a tracking machine learning challenge (TrackML) was set up on Kaggle by a team of ATLAS, CMS, and LHCb physicists, tracking experts, and computer scientists, building on the experience of the successful Higgs Machine Learning challenge in 2014. A training dataset based on a simulation of a generic HL-LHC experiment tracker has been created, listing for each event the measured 3D points and the list of 3D points associated with each true track. Participants in the challenge must find the tracks in the test dataset, which means building the list of 3D points belonging to each track. The emphasis is on exposing innovative approaches rather than hyper-optimising known ones. A metric reflecting the accuracy of a model at finding the associations that matter most to physics analysis will allow the selection of good candidates to augment or replace existing algorithms.
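To illustrate how such a metric can be set up, the sketch below scores a submission with a simplified double-majority rule: a predicted track is credited only if most of its hits come from a single true particle, and vice versa. The column names follow the public TrackML dataset conventions, but the unweighted scoring here is a simplification of the official metric, which additionally weights hits by their importance to physics analysis.

```python
# Simplified TrackML-style scoring sketch (not the official metric).
# truth maps hit_id -> particle_id; submission maps hit_id -> track_id.
import pandas as pd

def simple_track_score(truth: pd.DataFrame, submission: pd.DataFrame) -> float:
    df = truth.merge(submission, on="hit_id")
    # Count hits shared by each (true particle, predicted track) pair.
    pair_counts = (df.groupby(["particle_id", "track_id"]).size()
                     .rename("n_common").reset_index())
    particle_sizes = df.groupby("particle_id").size().rename("n_true")
    track_sizes = df.groupby("track_id").size().rename("n_pred")
    pair_counts = (pair_counts.join(particle_sizes, on="particle_id")
                              .join(track_sizes, on="track_id"))
    # Double-majority condition: the pair dominates both the true particle
    # and the predicted track.
    matched = pair_counts[(pair_counts.n_common * 2 > pair_counts.n_true)
                          & (pair_counts.n_common * 2 > pair_counts.n_pred)]
    return matched.n_common.sum() / len(df)
```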

2020 ◽  
Vol 245 ◽  
pp. 03002
Author(s):  
Daniel Traynor ◽  
Terry Froy

The Queen Mary University of London WLCG Tier-2 Grid site has been providing GPU resources on the Grid since 2016. GPUs are an important modern tool to assist in data analysis. They have historically been used to accelerate computationally expensive but parallelisable workloads using frameworks such as OpenCL and CUDA. More recently, however, their power in accelerating machine learning, using libraries such as TensorFlow and Caffe, has come to the fore, and the demand for GPU resources has increased. Significant effort is being spent in high energy physics to investigate and use machine learning to enhance the analysis of data. GPUs may also provide part of the solution to the compute challenge of the High-Luminosity LHC. The motivation for providing GPU resources via the Grid is presented. The installation and configuration of the SLURM batch system together with Compute Elements (CREAM and ARC) for use with GPUs is shown. Real-world use cases are presented, and the successes and issues discovered are discussed.
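One practical concern in such a deployment is confirming that the batch system actually exposed a GPU to the job payload. The sketch below is a minimal smoke test of that kind, assuming TensorFlow 2.x is available on the worker node; it is illustrative only and not part of the site's described configuration.

```python
# Minimal GPU smoke test for a Grid/SLURM job payload (illustrative).
# Assumes TensorFlow 2.x is installed on the worker node.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if not gpus:
    raise SystemExit("No GPU visible to TensorFlow; check the batch system's GPU request.")
for gpu in gpus:
    print("Found device:", gpu.name)
```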


2018 ◽  
Vol 68 (1) ◽  
pp. 161-181 ◽  
Author(s):  
Dan Guest ◽  
Kyle Cranmer ◽  
Daniel Whiteson

Machine learning has played an important role in the analysis of high-energy physics data for decades. The emergence of deep learning in 2012 allowed for machine learning tools which could adeptly handle higher-dimensional and more complex problems than previously feasible. This review is aimed at the reader who is familiar with high-energy physics but not machine learning. The connections between machine learning and high-energy physics data analysis are explored, followed by an introduction to the core concepts of neural networks, examples of the key results demonstrating the power of deep learning for analysis of LHC data, and discussion of future prospects and concerns.
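To make the core concept concrete for the reader new to machine learning, the sketch below implements the basic building block such reviews introduce: a feed-forward network alternating affine maps and nonlinearities. The architecture, weights, and feature count are invented for illustration and are not taken from the review.

```python
# A tiny feed-forward network forward pass in numpy: one hidden ReLU
# layer followed by a sigmoid output, i.e. a binary event classifier.
# Shapes and weights are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def forward(x, weights, biases):
    h = relu(x @ weights[0] + biases[0])        # hidden representation
    logit = h @ weights[1] + biases[1]          # affine output layer
    return 1.0 / (1.0 + np.exp(-logit))         # sigmoid -> probability

# Four input features (e.g. kinematic observables), eight hidden units.
W = [rng.normal(size=(4, 8)), rng.normal(size=(8, 1))]
b = [np.zeros(8), np.zeros(1)]
print(forward(rng.normal(size=(3, 4)), W, b))   # scores for 3 events
```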


2020 ◽  
Vol 35 (23) ◽  
pp. 2050131
Author(s):  
Mohd Adli Md Ali ◽  
Nu’man Badrud’din ◽  
Hafidzul Abdullah ◽  
Faiz Kemi

Recently, the concept of weakly supervised learning has gained popularity in the high-energy physics community due to its ability to learn even from a noisy and impure dataset. This method is valuable in the quest to discover elusive beyond-Standard-Model (BSM) particles. Nevertheless, weakly supervised learning still requires a learning sample that truthfully describes the features of the BSM particle to the classification model. Even with various theoretical frameworks such as supersymmetry and quantum black holes, creating a BSM sample is not a trivial task, since the exact features of the particle are unknown. Due to these difficulties, we propose an alternative classifier type called one-class classification (OCC). OCC algorithms require only background or noise samples in their training dataset, which are already abundant in the high-energy physics community. The algorithm flags any sample that does not fit the background features as an abnormality. In this paper, we introduce two new algorithms called EHRA and C-EHRA, which use machine learning regression and clustering to detect anomalies in samples. We tested the algorithms' capability to create distinct anomalous patterns in the presence of BSM samples and also compared their classification output metrics to the Isolation Forest (ISF), a well-known anomaly detection algorithm. Five Monte Carlo supersymmetry datasets with signal-to-noise ratios of 1, 0.1, 0.01, 0.001, and 0.0001 were used to test the EHRA, C-EHRA, and ISF algorithms. In our study, we found that EHRA with an artificial neural network regression has the highest ROC-AUC score, at 0.7882, for the balanced dataset, while C-EHRA has the highest precision-sensitivity score for the majority of the imbalanced datasets. These findings highlight the potential of EHRA, C-EHRA, and other OCC algorithms in the quest to discover BSM particles.
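The OCC workflow described here, training on background only and flagging anything that does not fit, can be illustrated with the Isolation Forest baseline the abstract compares against. The sketch below uses scikit-learn on synthetic Gaussian toy data; it is not the EHRA/C-EHRA algorithms and not the paper's SUSY datasets.

```python
# One-class classification with the Isolation Forest baseline (sketch):
# train on background-only events, then score a mixed test sample.
# Toy Gaussian data stands in for the paper's Monte Carlo samples.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
background = rng.normal(0.0, 1.0, size=(5000, 4))        # training: background only
signal = rng.normal(2.0, 1.0, size=(50, 4))              # rare "BSM-like" events
test = np.vstack([rng.normal(0.0, 1.0, size=(5000, 4)), signal])
labels = np.concatenate([np.zeros(5000), np.ones(50)])   # 1 = anomaly

clf = IsolationForest(random_state=0).fit(background)
# Lower decision_function values mean "more anomalous"; negate for AUC.
scores = -clf.decision_function(test)
print("ROC-AUC:", roc_auc_score(labels, scores))
```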


2021 ◽  
Vol 16 (08) ◽  
pp. P08016
Author(s):  
T.M. Hong ◽  
B.T. Carlson ◽  
B.R. Eubanks ◽  
S.T. Racz ◽  
S.T. Roche ◽  
...  

2021 ◽  
Vol 104 (5) ◽  
Author(s):  
Aishik Ghosh ◽  
Benjamin Nachman ◽  
Daniel Whiteson

2021 ◽  
Vol 81 (2) ◽  
Author(s):  
Laurits Tani ◽  
Diana Rand ◽  
Christian Veelken ◽  
Mario Kadastik

The analysis of vast amounts of data constitutes a major challenge in modern high energy physics experiments. Machine learning (ML) methods, typically trained on simulated data, are often employed to facilitate this task. Several choices need to be made by the user when training the ML algorithm. In addition to deciding which ML algorithm to use and choosing suitable observables as inputs, users typically need to choose among a plethora of algorithm-specific parameters. We refer to parameters that need to be chosen by the user as hyperparameters. These are to be distinguished from parameters that the ML algorithm learns autonomously during the training, without intervention by the user. The choice of hyperparameters is conventionally done manually by the user and often has a significant impact on the performance of the ML algorithm. In this paper, we explore two evolutionary algorithms, particle swarm optimization and the genetic algorithm, for the purpose of choosing optimal hyperparameter values in an autonomous manner. Both of these algorithms are tested on different datasets and compared to alternative methods.
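To show the mechanics of one of these evolutionary approaches, the sketch below implements a bare-bones particle swarm optimization loop over a toy two-hyperparameter objective. The objective function, search bounds, and swarm settings are invented for illustration and are not the paper's configuration.

```python
# Bare-bones particle swarm optimization over a toy "hyperparameter"
# objective (illustrative; not the paper's setup). Each particle is a
# candidate hyperparameter vector, nudged toward its personal best and
# the swarm's global best on every iteration.
import numpy as np

rng = np.random.default_rng(1)

def objective(hp):
    # Stand-in for a cross-validation loss; minimum at (0.1, 100).
    return (hp[0] - 0.1) ** 2 + ((hp[1] - 100.0) / 100.0) ** 2

lo, hi = np.array([0.001, 10.0]), np.array([1.0, 1000.0])  # search bounds
x = rng.uniform(lo, hi, size=(20, 2))                      # 20 particles
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])

for _ in range(100):
    gbest = pbest[pbest_val.argmin()]
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    vals = np.array([objective(p) for p in x])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]

print("best hyperparameters:", pbest[pbest_val.argmin()])
```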


2020 ◽  
pp. 2030024
Author(s):  
Kapil K. Sharma

This paper explores the future prospects of quantum algorithms in high energy physics (HEP). Identifying particles and determining their properties and characteristics is a challenging problem in experimental HEP. The key technique for solving these problems is pattern recognition, an important application of machine learning that is widely used for HEP problems. To perform pattern recognition tasks for track and vertex reconstruction, the particle physics community makes extensive use of statistical machine learning methods. These methods vary with the detector geometry and the magnetic field used in the experiment. In this paper, we present future possibilities for the application of quantum computation and quantum machine learning in HEP, rather than focusing on the deep mathematical structures of the techniques arising in this domain.


2018 ◽  
Vol 1085 ◽  
pp. 022008 ◽  
Author(s):  
Kim Albertsson ◽  
Piero Altoe ◽  
Dustin Anderson ◽  
Michael Andrews ◽  
Juan Pedro Araque Espinosa ◽  
...  

Computer ◽  
1993 ◽  
Vol 26 (6) ◽  
pp. 68-77 ◽  
Author(s):  
F.J. Rinaldo ◽  
M.R. Fausey
