Charged Particle Tracking via Edge-Classifying Interaction Networks

2021 ◽  
Vol 5 (1) ◽  
Author(s):  
Gage DeZoort ◽  
Savannah Thais ◽  
Javier Duarte ◽  
Vesal Razavimaleki ◽  
Markus Atkinson ◽  
...  

Abstract Recent work has demonstrated that geometric deep learning methods such as graph neural networks (GNNs) are well suited to address a variety of reconstruction problems in high-energy particle physics. In particular, particle tracking data are naturally represented as a graph by identifying silicon tracker hits as nodes and particle trajectories as edges; given a set of hypothesized edges, edge-classifying GNNs identify those corresponding to real particle trajectories. In this work, we adapt the physics-motivated interaction network (IN) GNN toward the problem of particle tracking in pileup conditions similar to those expected at the high-luminosity Large Hadron Collider. Assuming idealized hit filtering at various particle momenta thresholds, we demonstrate the IN’s excellent edge-classification accuracy and tracking efficiency through a suite of measurements at each stage of GNN-based tracking: graph construction, edge classification, and track building. The proposed IN architecture is substantially smaller than previously studied GNN tracking architectures; this is particularly promising as a reduction in size is critical for enabling GNN-based tracking in constrained computing environments. Furthermore, the IN may be represented as either a set of explicit matrix operations or a message passing GNN. Efforts are underway to accelerate each representation via heterogeneous computing resources towards both high-level and low-latency triggering applications.
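The "explicit matrix operations" view of an interaction network can be illustrated with a minimal NumPy sketch: a relational model produces per-edge messages from sender/receiver hit features, messages are summed at each receiving hit, an object model updates the hit features, and a second relational model scores each hypothesized edge. All names, sizes, and the single-layer stand-in MLPs below are illustrative assumptions, not the authors' trained architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hitgraph: nodes are tracker hits (e.g. r, phi, z); edges are hypothesized
# track segments given by sender/receiver index arrays.
n_hits, n_edges, d_hidden = 6, 8, 16
X = rng.normal(size=(n_hits, 3))                   # node (hit) features
senders = rng.integers(0, n_hits, size=n_edges)    # source hit of each edge
receivers = rng.integers(0, n_hits, size=n_edges)  # target hit of each edge

def mlp(d_in, d_out):
    """Single-layer stand-in for the IN's relational/object MLPs (untrained)."""
    W, b = 0.1 * rng.normal(size=(d_in, d_out)), np.zeros(d_out)
    return lambda h: np.tanh(h @ W + b)

phi_R1 = mlp(2 * 3, d_hidden)        # relational model: edge -> message
phi_O = mlp(3 + d_hidden, 3)         # object model: node update
phi_R2 = mlp(2 * 3, 1)               # relational model reused as edge classifier

# One IN pass written as explicit gather / matrix / scatter operations.
E = phi_R1(np.concatenate([X[senders], X[receivers]], axis=1))  # per-edge messages
agg = np.zeros((n_hits, d_hidden))
np.add.at(agg, receivers, E)                 # sum incoming messages at each hit
X = phi_O(np.concatenate([X, agg], axis=1))  # updated hit features

# Sigmoid score per hypothesized edge: probability it is a real track segment.
scores = 1.0 / (1.0 + np.exp(-phi_R2(np.concatenate([X[senders], X[receivers]], axis=1))))
print(scores.shape)
```

Thresholding `scores` selects the edges kept for track building; the same pass can equivalently be phrased as message passing over the graph.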

Author(s):  
Stephen Burns Menary ◽  
Darren David Price

Abstract We show that density models describing multiple observables with (i) hard boundaries and (ii) dependence on external parameters may be created using an auto-regressive Gaussian mixture model. The model is designed to capture how observable spectra are deformed by hypothesis variations, and is made more expressive by projecting data onto a configurable latent space. It may be used as a statistical model for scientific discovery in interpreting experimental observations, for example when constraining the parameters of a physical model or tuning simulation parameters according to calibration data. The model may also be sampled for use within a Monte Carlo simulation chain, or used to estimate likelihood ratios for event classification. The method is demonstrated on simulated high-energy particle physics data considering the anomalous electroweak production of a $Z$ boson in association with a dijet system at the Large Hadron Collider, and the accuracy of inference is tested using a realistic toy example. The developed methods are domain agnostic; they may be used within any field to perform simulation or inference where a dataset consisting of many real-valued observables has conditional dependence on external parameters.
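The autoregressive structure described above factorizes the joint density of the observables as a product of one-dimensional Gaussian mixtures, each conditioned on the external parameters and on the observables already generated. The sketch below illustrates only this factorization for sampling: the hand-written `gmm_params` function stands in for the learned parameter network, and the specific forms of the weights and means are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def gmm_params(context):
    """Hypothetical stand-in for the learned network that maps the conditioning
    context (external parameters + previously drawn observables) to the weights,
    means, and widths of a 2-component Gaussian mixture."""
    w0 = 0.5 + 0.3 * np.tanh(context.sum())
    w = np.array([w0, 1.0 - w0])
    mu = np.array([context.sum(), -context.sum()])
    sigma = np.array([0.5, 1.0])
    return w, mu, sigma

def sample_gmm(w, mu, sigma):
    """Draw one value from a 1D Gaussian mixture."""
    k = rng.choice(len(w), p=w)
    return rng.normal(mu[k], sigma[k])

def sample_event(theta):
    """Autoregressive draw of two observables: x2 is conditioned on theta AND x1."""
    x1 = sample_gmm(*gmm_params(np.array([theta])))
    x2 = sample_gmm(*gmm_params(np.array([theta, x1])))
    return x1, x2

# A toy "spectrum" whose shape deforms as the external parameter theta varies.
events = np.array([sample_event(theta=0.3) for _ in range(1000)])
print(events.shape)
```

In the paper's setting, hard boundaries on the observables would additionally be handled by transforming each observable to an unbounded latent space before fitting the mixtures; that step is omitted here.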


Author(s):  
Michael E. Peskin

This is a textbook of elementary particle physics whose goal is to explain the Standard Model of particle interactions. Part I introduces the basic concepts governing high-energy particle physics: elements of relativity and quantum field theory, the quark model of hadrons, methods for detection and measurement of elementary particles, and methods for calculating predictions for observable quantities. Part II builds up our understanding of the strong interaction from the key experiments to the formulation of Quantum Chromodynamics and its application to the description of events at the CERN Large Hadron Collider. Part III builds up our understanding of the weak interaction from the key experiments to the formulation of spontaneously broken gauge theories. It then describes the tests and extensions of this theory, including the precision study of the W and Z bosons, CP violation, neutrino mass, and the Higgs boson.


2002 ◽  
Vol 469 ◽  
pp. 121-160 ◽  
Author(s):  
GREG A. VOTH ◽  
A. LA PORTA ◽  
ALICE M. CRAWFORD ◽  
JIM ALEXANDER ◽  
EBERHARD BODENSCHATZ

We use silicon strip detectors (originally developed for the CLEO III high-energy particle physics experiment) to measure fluid particle trajectories in turbulence with temporal resolution of up to 70 000 frames per second. This high frame rate allows the Kolmogorov time scale of a turbulent water flow to be fully resolved for 140 ≤ Rλ ≤ 970. Particle trajectories exhibiting accelerations up to 16 000 m s⁻² (40 times the r.m.s. value) are routinely observed. The probability density function of the acceleration is found to have Reynolds-number-dependent stretched exponential tails. The moments of the acceleration distribution are calculated. The scaling of the acceleration component variance with the energy dissipation is found to be consistent with the results for low-Reynolds-number direct numerical simulations, and with the K41-based Heisenberg–Yaglom prediction for Rλ ≥ 500. The acceleration flatness is found to increase with Reynolds number, and to exceed 60 at Rλ = 970. The coupling of the acceleration to the large-scale anisotropy is found to be large at low Reynolds number and to decrease as the Reynolds number increases, but to persist at all Reynolds numbers measured. The dependence of the acceleration variance on the size and density of the tracer particles is measured. The autocorrelation function of an acceleration component is measured, and is found to scale with the Kolmogorov time τη.
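The link between stretched-exponential tails and a large flatness can be checked numerically: for a distribution with tails P(a) ∝ exp(−|a|^γ) and γ < 2, the flatness ⟨a⁴⟩/⟨a²⟩² exceeds the Gaussian value of 3 and grows as γ decreases. The exponent below is an arbitrary illustration, not the paper's fitted parametrization.

```python
import numpy as np

rng = np.random.default_rng(2)

# Draw accelerations with stretched-exponential tails, P(|a|) ∝ exp(-|a|**g),
# using the exact transform |a| = G**(1/g) with G ~ Gamma(shape=1/g):
# the change of variables gives p(a) = g * exp(-a**g) for a > 0.
g = 0.8                      # g < 2 means heavier-than-Gaussian tails
n = 200_000
mag = rng.gamma(shape=1.0 / g, scale=1.0, size=n) ** (1.0 / g)
a = mag * rng.choice([-1.0, 1.0], size=n)   # symmetrize the sign

# Flatness (kurtosis) <a^4>/<a^2>^2: exactly 3 for a Gaussian,
# substantially larger for stretched-exponential tails.
flatness = np.mean(a**4) / np.mean(a**2) ** 2
print(flatness)
```

The measured flatness values exceeding 60 at Rλ = 970 correspond to far more intermittent tails than this toy draw produces.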


Author(s):  
E.D. Wolf

Most microelectronics devices and circuits operate faster, consume less power, execute more functions, and cost less per circuit function when the feature sizes internal to the devices and circuits are made smaller. This is part of the stimulus for the Very High-Speed Integrated Circuits (VHSIC) program. There is also a need for smaller, more sensitive sensors in a wide range of disciplines, including electrochemistry, neurophysiology, and ultra-high-pressure solid state research. There is often fundamental new science (and sometimes new technology) to be revealed (and used) when a basic parameter such as size is extended to new dimensions, as is evident at the two extremes of smallness and largeness: high-energy particle physics and cosmology, respectively. However, there is also a very important intermediate domain of size, spanning from the diameter of a small cluster of atoms up to near one micrometer, whose effects on society may be just as profound as those of “big” physics.


Atomic Energy ◽  
1956 ◽  
Vol 1 (4) ◽  
pp. 621-632
Author(s):  
V. A. Biryukov ◽  
B. M. Golovin ◽  
L. I. Lapidus

1977 ◽  
Vol 140 (3) ◽  
pp. 549-552 ◽  
Author(s):  
E.D. Platner ◽  
A. Etkin ◽  
K.J. Foley ◽  
J.H. Goldman ◽  
W.A. Love ◽  
...  

2021 ◽  
Vol 9 ◽  
Author(s):  
N. Demaria

The High Luminosity Large Hadron Collider (HL-LHC) at CERN will constitute a new frontier for particle physics after the year 2027. Experiments will undertake a major upgrade in order to meet this challenge, and the use of innovative sensors and electronics will play a central role. This paper describes recent developments in 65 nm CMOS technology for readout ASIC chips in future High Energy Physics (HEP) experiments. These developments enable unprecedented performance in terms of speed, noise, power consumption, and granularity of the tracking detectors.

