Use of Deep Learning to Improve the Computational Complexity of Reconstruction Algorithms in High Energy Physics

2021, Vol. 11 (23), pp. 11467
Author(s): Núria Valls Canudas, Míriam Calvo Gómez, Elisabet Golobardes Ribé, Xavier Vilasis-Cardona

The optimization of reconstruction algorithms has become a key aspect in the field of experimental particle physics. As technology has allowed measurements of gradually increasing complexity, the amount of data that needs to be interpreted has grown as well. This is the case for the LHCb experiment at CERN, where a major upgrade currently under way will considerably increase the data processing rate. This has created the need for reconstruction techniques that accelerate one of the most time-consuming reconstruction algorithms in LHCb: the electromagnetic calorimeter clustering. Combining deep learning techniques with an understanding of the current reconstruction algorithm, we propose a method that decomposes the reconstruction process into small parts that can be formulated as a cellular automaton. This approach is shown to benefit the generalized learning of small convolutional neural network architectures and also to simplify the training dataset. Final results applied to a complete LHCb simulation reconstruction are compatible with the current reconstruction in terms of efficiency, and execute in nearly constant time, independently of the complexity of the data.
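To make the architectural idea above more concrete, the sketch below shows the kind of small convolutional network that could score a local patch of calorimeter cells. The patch size, layer widths, and the per-patch output are illustrative assumptions, not the authors' published configuration.

import torch
import torch.nn as nn

# Minimal sketch, assuming 5x5 patches of cell energies (single channel) and a
# single "seed / not seed" score for the central cell. Illustrative only.
class SmallClusterCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # local neighbourhood features
            nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 32),
            nn.ReLU(),
            nn.Linear(32, 1),                             # seed score for the central cell
        )

    def forward(self, patch):
        return torch.sigmoid(self.net(patch))

# One forward pass over a batch of 64 energy patches.
scores = SmallClusterCNN()(torch.rand(64, 1, 5, 5))       # shape: (64, 1)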

2021, Vol. 251, pp. 04008
Author(s): Núria Valls Canudas, Xavier Vilasis Cardona, Míriam Calvo Gómez, Elisabet Golobardes Ribé

The optimization of reconstruction algorithms has become a key aspect in LHCb, as the experiment is currently undergoing a major upgrade that will considerably increase the data processing rate. Aiming to accelerate the second most time-consuming reconstruction process of the trigger, we propose an alternative reconstruction algorithm for the Electromagnetic Calorimeter of LHCb. Combining deep learning techniques with an understanding of the current algorithm, our proposal decomposes the reconstruction process into small parts that benefit the generalized learning of small neural network architectures and simplify the training dataset. This approach takes as input the full simulation data of the calorimeter and outputs a list of reconstructed clusters in nearly constant time, without any dependence on the event complexity.
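The cellular-automaton decomposition can be pictured as a purely local rule applied to every cell of the calorimeter grid. The toy rule below is an illustrative assumption, not the published algorithm: local maxima above a threshold become seeds, and each remaining cell adopts the label of its highest-energy neighbour until the labels stabilise. Because every step is local, the cost scales with the number of cells rather than with event complexity.

import numpy as np

def toy_cellular_automaton_clustering(energy, threshold=0.1):
    """Toy local clustering rule on a 2D grid of cell energies. Illustrative only."""
    rows, cols = energy.shape
    labels = -np.ones((rows, cols), dtype=int)
    n_seeds = 0
    # Step 1: local maxima above threshold become cluster seeds.
    for r in range(rows):
        for c in range(cols):
            nbhd = energy[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            if energy[r, c] >= threshold and energy[r, c] >= nbhd.max():
                labels[r, c] = n_seeds
                n_seeds += 1
    # Step 2: iterate the local rule until no label changes.
    changed = True
    while changed:
        changed = False
        for r in range(rows):
            for c in range(cols):
                if energy[r, c] < threshold or labels[r, c] != -1:
                    continue
                r0, c0 = max(r - 1, 0), max(c - 1, 0)
                nbhd = energy[r0:r + 2, c0:c + 2]
                dr, dc = np.unravel_index(np.argmax(nbhd), nbhd.shape)
                if labels[r0 + dr, c0 + dc] != -1:
                    labels[r, c] = labels[r0 + dr, c0 + dc]
                    changed = True
    return labels  # -1 marks cells below threshold or not attached to any seed

print(toy_cellular_automaton_clustering(np.random.rand(8, 8)))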


Author(s): Jan Kieseler

High-energy physics detectors, images, and point clouds share many similarities in terms of object detection. However, while detecting an unknown number of objects in an image is well established in computer vision, even machine-learning-assisted object reconstruction algorithms in particle physics almost exclusively predict properties on an object-by-object basis. Traditional approaches from computer vision either impose implicit constraints on the object size or density, and are therefore not well suited for sparse detector data, or they rely on objects being dense and solid. The object condensation method proposed here is independent of assumptions on object size, sorting, or object density, and further generalises to non-image-like data structures, such as graphs and point clouds, which are more suitable to represent detector signals. The pixels or vertices themselves serve as representations of the entire object, and a combination of learnable local clustering in a latent space and confidence assignment allows one to collect condensates of the predicted object properties with a simple algorithm. As a proof of concept, the object condensation method is applied to a simple object classification problem in images and used to reconstruct multiple particles from detector signals. The latter results are also compared to a classic particle flow approach.
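The "simple algorithm" that collects condensates can be sketched as a greedy pass over the per-vertex outputs. The snippet below assumes each vertex carries learned latent coordinates and a confidence score; the thresholds and the greedy procedure are an illustrative, hedged approximation of the collection step rather than the paper's exact prescription.

import numpy as np

def collect_condensates(coords, beta, t_beta=0.5, t_d=1.0):
    """Greedy collection sketch: repeatedly take the unassigned vertex with the
    highest confidence as a condensation point and attach every unassigned
    vertex within latent distance t_d to it. Thresholds are illustrative."""
    n = len(beta)
    assignment = -np.ones(n, dtype=int)
    next_object = 0
    for i in np.argsort(-beta):              # highest confidence first
        if assignment[i] != -1 or beta[i] < t_beta:
            continue
        assignment[i] = next_object
        dist = np.linalg.norm(coords - coords[i], axis=1)
        assignment[(assignment == -1) & (dist < t_d)] = next_object
        next_object += 1
    return assignment                         # -1 marks noise / unassigned vertices

# Example: 200 vertices embedded in a 2D latent space with random confidences.
print(collect_condensates(np.random.randn(200, 2), np.random.rand(200)))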


2018, Vol. 68 (1), pp. 161-181
Author(s): Dan Guest, Kyle Cranmer, Daniel Whiteson

Machine learning has played an important role in the analysis of high-energy physics data for decades. The emergence of deep learning in 2012 allowed for machine learning tools that could adeptly handle higher-dimensional and more complex problems than previously feasible. This review is aimed at readers who are familiar with high-energy physics but not machine learning. The connections between machine learning and high-energy physics data analysis are explored, followed by an introduction to the core concepts of neural networks, examples of the key results demonstrating the power of deep learning for the analysis of LHC data, and a discussion of future prospects and concerns.
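As a taste of the core neural-network concepts such a review introduces, here is a minimal, self-contained example of a fully connected classifier separating "signal" from "background" events. The four input features, the tiny architecture, and the synthetic labels are illustrative assumptions only, not drawn from the review.

import torch
import torch.nn as nn

# Synthetic stand-in for event-level kinematic features (illustrative only).
features = torch.randn(1000, 4)
labels = (features.sum(dim=1) > 0).float().unsqueeze(1)   # toy "signal" definition

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

# Plain full-batch gradient descent, enough to show the training loop structure.
for epoch in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()

print(f"final training loss: {loss.item():.3f}")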


2021, Vol. 9
Author(s): N. Demaria

The High Luminosity Large Hadron Collider (HL-LHC) at CERN will constitute a new frontier for particle physics after the year 2027. Experiments will undertake a major upgrade in order to meet this challenge: the use of innovative sensors and electronics will play a main role in this. This paper describes recent developments in 65 nm CMOS technology for readout ASIC chips in future High Energy Physics (HEP) experiments. These allow unprecedented performance in terms of speed, noise, power consumption, and granularity of the tracking detectors.


2021, Vol. 251, pp. 03043
Author(s): Fedor Ratnikov, Alexander Rogachev

Simulation is one of the key components in high energy physics. Historically it has relied on Monte Carlo methods, which require a tremendous amount of computational resources. These methods may struggle to meet the expected needs of the High Luminosity Large Hadron Collider, so the experiment is in urgent need of new fast simulation techniques. The application of Generative Adversarial Networks is a promising solution to speed up the simulation while providing the necessary physics performance. In this paper we propose the Self-Attention Generative Adversarial Network as a possible improvement of the network architecture. The approach is demonstrated by generating responses of an LHCb-type electromagnetic calorimeter.
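A self-attention block of the kind used in SAGAN-style generators can be written compactly. The sketch below is a generic illustration; the channel reduction factor, the learnable residual gate, and the 32x32 calorimeter-face example are assumptions, not the authors' architecture.

import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    """Generic SAGAN-style self-attention over spatial positions of a feature map."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, 1)
        self.key = nn.Conv2d(channels, channels // reduction, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))   # learnable residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)     # (b, h*w, c')
        k = self.key(x).flatten(2)                        # (b, c', h*w)
        attn = torch.softmax(q @ k, dim=-1)               # attention over positions
        v = self.value(x).flatten(2)                      # (b, c, h*w)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x

# One pass over a batch of 16-channel feature maps for a 32x32 calorimeter face.
x = torch.randn(4, 16, 32, 32)
print(SelfAttention2d(16)(x).shape)                       # torch.Size([4, 16, 32, 32])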


2019, Vol. 214, pp. 02019
Author(s): V. Daniel Elvira

Detector simulation has become fundamental to the success of modern high-energy physics (HEP) experiments. For example, the Geant4-based simulation applications developed by the ATLAS and CMS experiments played a major role in enabling them to produce physics measurements of unprecedented quality and precision, with a faster turnaround from data taking to journal submission than any previous hadron collider experiment. The material presented here contains highlights of a recent review, published in Ref. [1], on the impact of detector simulation in particle physics collider experiments. It includes examples of applications to detector design and optimization, software development and testing of computing infrastructure, and modeling of physics objects and their kinematics. The cost and economic impact of simulation in the CMS experiment is also presented. A discussion of future detector simulation needs, challenges, and potential solutions to address them is included at the end.


Energies, 2020, Vol. 13 (14), pp. 3569
Author(s): Simone Cammarata, Gabriele Ciarpi, Stefano Faralli, Philippe Velha, Guido Magazzù, ...

Optical links are rapidly becoming pervasive in the readout chains of particle physics detector systems. Silicon photonics (SiPh) stands as an attractive candidate to sustain the radiation levels foreseen in next-generation experiments while guaranteeing, at the same time, multi-Gb/s and energy-efficient data transmission. Integrated electronic drivers are needed to enable the deployment of SiPh modulators in compact on-detector front-end modules. A current-mode logic-based driver harnessing a pseudo-differential output stage is proposed in this work to drive different types of SiPh devices by means of the same circuit topology. The proposed driver, realized in a 65 nm bulk technology and already tested to behave properly up to an 8 MGy total ionizing dose, is hybrid-integrated in this work with a lumped-element Mach–Zehnder modulator (MZM) and a ring modulator (RM), both fabricated in a 130 nm silicon-on-insulator (SOI) process. Bit-error-rate (BER) measurements confirm the applicability of the selected architecture to both differential and single-ended loads. A 5 Gb/s data rate, in line with current high energy physics requirements, is achieved in the RM case, while a packaging-related performance degradation is observed in the MZM-based system, confirming the importance of interconnection modeling.


2016, Vol. 3 (2), pp. 252-256
Author(s): Ling Wang, Mu-ming Poo

On 8 March 2012, Yifang Wang, co-spokesperson of the Daya Bay Experiment and Director of the Institute of High Energy Physics, Chinese Academy of Sciences, announced the discovery of a new type of neutrino oscillation with a surprisingly large mixing angle (θ13), signifying ‘a milestone in neutrino research’. Now his team is heading for a new goal: to determine the neutrino mass hierarchy and to precisely measure oscillation parameters using the Jiangmen Underground Neutrino Observatory, which is due for completion in 2020. Neutrinos are fundamental particles that play important roles in both microscopic particle physics and the macroscopic evolution of the universe. Studies of neutrinos may, for example, answer the question of why our universe consists of much more matter than antimatter. But this is not an easy task. Though they are among the most numerous particles in the universe and zip through our planet and our bodies all the time, these tiny particles are like ghosts, difficult to capture. There are three flavors of neutrinos, known as the electron neutrino (νe), muon neutrino (νμ), and tau neutrino (ντ), and their flavors can change as they travel through space via quantum interference. This phenomenon is known as neutrino oscillation or neutrino mixing. Determining the absolute mass of each type of neutrino and finding out how they mix is very challenging. In a recent interview with NSR in Beijing, Yifang Wang explained how the Daya Bay Experiment on neutrino oscillation not only addressed a frontier problem in particle physics, but also harnessed the talents and existing technology of the Chinese physics community. This achievement, Wang reckons, will not be an exception in Chinese high energy physics, provided that appropriate funding and organization for big science projects can be efficiently realized in the future.
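For readers unfamiliar with the oscillation formalism mentioned above, the standard two-flavour approximation for the reactor antineutrino survival probability (a textbook form, not a result quoted in the interview) can be written in LaTeX as

P(\bar{\nu}_e \to \bar{\nu}_e) \approx 1 - \sin^2(2\theta_{13})\,\sin^2\!\left(\frac{1.27\,\Delta m^2_{31}\,[\mathrm{eV}^2]\;L\,[\mathrm{m}]}{E\,[\mathrm{MeV}]}\right)

so the deficit of electron antineutrinos observed at a baseline L from the reactors is a direct measure of the mixing angle θ13.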


2020, Vol. 35 (18), pp. 2030006
Author(s): Goran Senjanović

I reflect on some of the basic aspects of present-day Beyond the Standard Model particle physics, focusing mostly on the issues of naturalness, in particular on the so-called hierarchy problem. To all of us, physics as natural science emerged with Galileo and Newton, and it led to centuries of unparalleled success in explaining and often predicting new phenomena of nature. I argue here that the long-standing obsession with the hierarchy problem as a guiding principle for the future of our field has had the tragic consequence of diverting high energy physics from its origins as natural philosophy and turning it into a philosophy of naturalness.


2020, Vol. 35 (23), pp. 2050131
Author(s): Mohd Adli Md Ali, Nu’man Badrud’din, Hafidzul Abdullah, Faiz Kemi

Recently, the concept of weakly supervised learning has gained popularity in the high-energy physics community due to its ability to learn even from a noisy and impure dataset. This method is valuable in the quest to discover elusive beyond-Standard-Model (BSM) particles. Nevertheless, weakly supervised learning still requires a learning sample that truthfully describes the features of the BSM particle to the classification model. Even with various theoretical frameworks such as supersymmetry and quantum black holes, creating a BSM sample is not a trivial task, since the exact features of the particle are unknown. Due to these difficulties, we propose an alternative classifier type called one-class classification (OCC). OCC algorithms require only background or noise samples in their training dataset, which are already abundant in the high-energy physics community. The algorithm flags any sample that does not fit the background features as an anomaly. In this paper, we introduce two new algorithms called EHRA and C-EHRA, which use machine learning regression and clustering to detect anomalies in samples. We tested the algorithms’ capability to create distinct anomalous patterns in the presence of BSM samples and also compared their classification output metrics to those of the Isolation Forest (ISF), a well-known anomaly detection algorithm. Five Monte Carlo supersymmetry datasets with signal-to-noise ratios equal to 1, 0.1, 0.01, 0.001, and 0.0001 were used to test the EHRA, C-EHRA, and ISF algorithms. In our study, we found that EHRA with an artificial neural network regression has the highest ROC-AUC score, at 0.7882, for the balanced dataset, while C-EHRA has the highest precision-sensitivity score for the majority of the imbalanced datasets. These findings highlight the potential use of EHRA, C-EHRA, and other OCC algorithms in the quest to discover BSM particles.
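EHRA and C-EHRA are the authors' own algorithms and are not reproduced here; the sketch below instead illustrates the general one-class workflow using the Isolation Forest baseline the paper compares against, with synthetic arrays standing in for the Monte Carlo samples.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic stand-ins: "background" events plus a small admixture of "signal".
background = rng.normal(0.0, 1.0, size=(5000, 6))
signal = rng.normal(3.0, 0.5, size=(50, 6))
test_sample = np.vstack([rng.normal(0.0, 1.0, size=(1000, 6)), signal])

# One-class training: the model sees background only.
clf = IsolationForest(random_state=0).fit(background)

# Lower scores mean a sample looks less like the background; predict() returns -1 for outliers.
scores = clf.score_samples(test_sample)
print("fraction flagged as anomalous:", np.mean(clf.predict(test_sample) == -1))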

