Benchmarking LHC background particle simulation with the CMS triple-GEM detector

2021 ◽  
Vol 16 (12) ◽  
pp. P12026
Author(s):  
M. Abbas ◽  
M. Abbrescia ◽  
H. Abdalla ◽  
A. Abdelalim ◽  
S. AbuZeid ◽  
...  

Abstract In 2018, a system of large-size triple-GEM demonstrator chambers was installed in the CMS experiment at CERN's Large Hadron Collider (LHC). The demonstrator's design mimics that of the final detector, installed for Run 3. A successful Monte Carlo (MC) simulation of the collision-induced background hit rate in this system in proton-proton collisions at 13 TeV is presented. The MC predictions are compared to CMS measurements recorded at an instantaneous luminosity of 1.5 × 10³⁴ cm⁻² s⁻¹. The simulation framework uses a combination of the FLUKA and GEANT4 packages. FLUKA simulates the radiation environment around the GE1/1 chambers. The particle flux simulated by FLUKA covers energy spectra ranging from 10⁻¹¹ to 10⁴ MeV for neutrons, 10⁻³ to 10⁴ MeV for γ's, 10⁻² to 10⁴ MeV for e±, and 10⁻¹ to 10⁴ MeV for charged hadrons. GEANT4 provides an estimate of the detector response (sensitivity) based on an accurate description of the detector geometry, the material composition, and the interaction of particles with the detector layers. The detector hit rate, obtained from the simulation using FLUKA and GEANT4, is estimated as a function of the perpendicular distance from the beam line and agrees with the data within the assigned uncertainties of 13.7–14.5%. This simulation framework can be used to obtain a reliable estimate of the background rates expected at the High-Luminosity LHC.
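
As an illustration of the folding step described above, the following minimal Python sketch convolves a binned particle flux with an energy-dependent hit sensitivity to obtain a hit rate; the binning, flux shape, and sensitivity values are invented placeholders rather than the published GE1/1 inputs, and FLUKA/GEANT4 themselves are run separately.

```python
import numpy as np

# Toy illustration (assumed inputs): fold a FLUKA-style flux spectrum with a
# GEANT4-derived sensitivity curve to estimate a background hit rate.
# The binning, flux shape, and sensitivities are placeholders, not the
# published GE1/1 numbers.

edges = np.logspace(-11, 4, 151)            # MeV, neutron energy bin edges
centers = np.sqrt(edges[:-1] * edges[1:])   # geometric bin centers

# flux[i]: particles / cm^2 / s in energy bin i (placeholder shape)
flux = 1.0e3 * centers**-0.2 / centers.size

# sensitivity[i]: probability that a particle in bin i produces a detector hit
# (placeholder: rises slowly with energy, capped at a few percent)
sensitivity = np.clip(1e-3 * np.log10(centers / centers[0]), 0.0, 0.05)

# Hit rate per unit detector area: sum over energy bins of flux x sensitivity
hit_rate = np.sum(flux * sensitivity)       # hits / cm^2 / s
print(f"Estimated hit rate: {hit_rate:.3e} hits cm^-2 s^-1")
```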

2021 ◽  
Vol 2021 (2) ◽  
Author(s):  
G. Aad ◽  
B. Abbott ◽  
D. C. Abbott ◽  
A. Abed Abud ◽  
...  

Abstract A search for the supersymmetric partners of quarks and gluons (squarks and gluinos) in final states containing jets and missing transverse momentum, but no electrons or muons, is presented. The data used in this search were recorded by the ATLAS experiment in proton-proton collisions at a centre-of-mass energy of √s = 13 TeV during Run 2 of the Large Hadron Collider, corresponding to an integrated luminosity of 139 fb⁻¹. The results are interpreted in the context of various R-parity-conserving models where squarks and gluinos are produced in pairs or in association and a neutralino is the lightest supersymmetric particle. An exclusion limit at the 95% confidence level on the mass of the gluino is set at 2.30 TeV for a simplified model containing only a gluino and the lightest neutralino, assuming the latter is massless. For a simplified model involving the strong production of mass-degenerate first- and second-generation squarks, squark masses below 1.85 TeV are excluded if the lightest neutralino is massless. These limits extend substantially beyond the region of supersymmetric parameter space excluded previously by similar searches with the ATLAS detector.
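
For context on how such an exclusion is formed, the sketch below shows a toy 95% CL test using the CLs prescription for a single counting region; the yields are invented, and the actual ATLAS result relies on profile-likelihood fits over many signal regions with full systematic uncertainties.

```python
import numpy as np

# Toy 95% CL exclusion for a single counting signal region using the CLs
# prescription. All yields are invented placeholders.

def cls_value(n_obs, b, s, n_toys=200_000, seed=1):
    """CLs = p(s+b) / p(b), estimated with Poisson toy experiments."""
    rng = np.random.default_rng(seed)
    p_sb = np.mean(rng.poisson(s + b, n_toys) <= n_obs)  # s+b fluctuates down to the data
    p_b = np.mean(rng.poisson(b, n_toys) <= n_obs)       # b-only fluctuates down to the data
    return p_sb / max(p_b, 1e-12)

n_obs, b = 10, 9.5                         # observed events, expected background
for s in (2, 5, 10, 20):                   # candidate signal yields (e.g. different gluino masses)
    excluded = cls_value(n_obs, b, s) < 0.05
    print(f"signal yield {s:3d}: {'excluded' if excluded else 'allowed'} at 95% CL")
```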


2021 ◽  
Vol 2021 (2) ◽  
Author(s):  
Anna Mullin ◽  
Stuart Nicholls ◽  
Holly Pacey ◽  
Michael Parker ◽  
Martin White ◽  
...  

Abstract We present a novel technique for the analysis of proton-proton collision events from the ATLAS and CMS experiments at the Large Hadron Collider. For a given final state and choice of kinematic variables, we build a graph network in which the individual events appear as weighted nodes, with edges between events defined by their distance in kinematic space. We then show that it is possible to calculate local metrics of the network that serve as event-by-event variables for separating signal and background processes, and we evaluate these for a number of different networks that are derived from different distance metrics. Using supersymmetric electroweakino and stop production as examples, we construct prototype analyses that take account of the fact that the number of simulated Monte Carlo events used in an LHC analysis may differ from the number of events expected in the LHC dataset, allowing an accurate background estimate for a particle search at the LHC to be derived. For the electroweakino example, we show that the use of network variables outperforms both cut-and-count analyses that use the original variables and a boosted decision tree trained on the original variables. The stop example, deliberately chosen to be difficult to exclude due to its kinematic similarity to the top background, demonstrates that network variables are not automatically sensitive to BSM physics. Nevertheless, we identify local network metrics that show promise if their robustness under certain assumptions of node-weighted networks can be confirmed.
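
A minimal sketch of the event-graph construction described above, assuming a toy two-dimensional kinematic space, a Euclidean distance metric, and an arbitrary edge cutoff (none of which are the paper's actual choices):

```python
import numpy as np
import networkx as nx

# Toy version of the event-graph idea: events are nodes, edges connect events
# that are close in a chosen kinematic space, and local network metrics become
# per-event discriminating variables.

rng = np.random.default_rng(0)

# Toy "events" in a 2D kinematic space (standardised so Euclidean distance is meaningful)
signal = rng.normal(loc=[2.0, 2.0], scale=0.7, size=(100, 2))
background = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(300, 2))
events = np.vstack([signal, background])
n_events = len(events)

# Build the graph: connect pairs of events whose kinematic distance is below a cutoff
G = nx.Graph()
G.add_nodes_from(range(n_events))
cutoff = 0.5
for i in range(n_events):
    d = np.linalg.norm(events - events[i], axis=1)
    for j in np.nonzero((d < cutoff) & (np.arange(n_events) > i))[0]:
        G.add_edge(i, int(j), weight=float(d[j]))

# Local metrics evaluated per node serve as event-by-event variables
degree = dict(G.degree())
clustering = nx.clustering(G)
print("mean degree, signal     :", np.mean([degree[i] for i in range(100)]))
print("mean degree, background :", np.mean([degree[i] for i in range(100, n_events)]))
print("mean clustering, signal :", np.mean([clustering[i] for i in range(100)]))
```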


2021 ◽  
Vol 16 (11) ◽  
pp. P11014
Author(s):  
M. Abbas ◽  
M. Abbrescia ◽  
H. Abdalla ◽  
A. Abdelalim ◽  
S. AbuZeid ◽  
...  

Abstract After the Phase-2 high-luminosity upgrade of the Large Hadron Collider (LHC), the collision rate and therefore the background rate will increase significantly, particularly in the high-η region. To improve both the tracking and the triggering of muons, the Compact Muon Solenoid (CMS) Collaboration plans to install triple-GEM (Gas Electron Multiplier) detectors in the CMS muon endcaps. Demonstrator GEM detectors were installed in CMS during 2017 to gain operational experience and to perform a preliminary investigation of detector performance. We present the results of triple-GEM detector performance studies performed in situ during normal CMS and LHC operations in 2018. The cluster-size distribution and the efficiency for reconstructing high-pT muons in proton-proton collisions are presented, as well as a measurement of the rate of environmental background hits in the GEM detectors.
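
As a schematic of how such an efficiency might be tabulated, the following sketch computes a per-chamber muon reconstruction efficiency with a simple binomial uncertainty from invented counts of propagated and matched tracks; the chamber labels and numbers are placeholders, not measured values.

```python
from math import sqrt

# Hypothetical per-chamber counts: muon tracks propagated to each demonstrator
# chamber, and tracks with a matched GEM hit. Labels and numbers are invented.
chambers = {
    "GE1/1-A": (5230, 4987),
    "GE1/1-B": (4870, 4602),
    "GE1/1-C": (5011, 4655),
    "GE1/1-D": (4955, 4731),
}

for name, (n_propagated, n_matched) in chambers.items():
    eff = n_matched / n_propagated
    # simple binomial (normal-approximation) uncertainty on the efficiency
    err = sqrt(eff * (1.0 - eff) / n_propagated)
    print(f"{name}: efficiency = {100 * eff:.1f} +/- {100 * err:.1f} %")
```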


2020 ◽  
Vol 29 (09) ◽  
pp. 2050074
Author(s):  
E. Shokr ◽  
A. H. El-Farrash ◽  
A. De Roeck ◽  
M. A. Mahmoud

Proton–proton (pp) collisions at the Large Hadron Collider (LHC) are simulated in order to study events with a high local density of charged particles produced in narrow pseudorapidity windows of Δη = 0.1, 0.2, and 0.5. The pp collisions are generated at the center-of-mass energies at which the LHC has operated so far, using the PYTHIA and HERWIG event generators. We have also studied the average maximum charged-particle density versus the event multiplicity for all events, using the different pseudorapidity windows. This study prepares for a future search for anomalous high-density multiplicity fluctuations in LHC data by characterizing the expected multi-particle production background.
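
The observable described above can be sketched as follows, using toy events in place of PYTHIA/HERWIG output; the multiplicity model, η range, and scan step are assumptions made purely for illustration.

```python
import numpy as np

# Sketch of the density observable: for each event, slide a narrow
# pseudorapidity window across the charged particles and record the maximum
# local density n_max / delta_eta. Toy events stand in for generator output.

rng = np.random.default_rng(42)

def max_local_density(eta, window=0.1, eta_range=(-2.5, 2.5), step=0.05):
    """Maximum charged-particle count in any eta window, divided by the window width."""
    best, lo = 0, eta_range[0]
    while lo + window <= eta_range[1]:
        best = max(best, np.count_nonzero((eta >= lo) & (eta < lo + window)))
        lo += step
    return best / window

# Toy events: multiplicity from a negative binomial, flat eta distribution
for window in (0.1, 0.2, 0.5):
    densities = []
    for _ in range(200):
        n_ch = rng.negative_binomial(5, 0.1) + 2
        eta = rng.uniform(-2.5, 2.5, size=n_ch)
        densities.append(max_local_density(eta, window=window))
    print(f"window {window}: mean max density = {np.mean(densities):.1f} per unit eta")
```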


2021 ◽  
Vol 5 (1) ◽  
Author(s):  
Georges Aad ◽  
Anne-Sophie Berthold ◽  
Thomas Calvet ◽  
Nemer Chiedde ◽  
Etienne Marie Fortin ◽  
...  

Abstract The ATLAS experiment at the Large Hadron Collider (LHC) is operated at CERN and measures proton–proton collisions at multi-TeV energies with a repetition frequency of 40 MHz. Within the Phase-II upgrade of the LHC, the readout electronics of the liquid-argon (LAr) calorimeters of ATLAS are being prepared for high-luminosity operation, with an expected pileup of up to 200 simultaneous proton–proton interactions. Moreover, the calorimeter signals of up to 25 subsequent collisions overlap, which makes energy reconstruction in the calorimeter more difficult. Real-time processing of digitized pulses sampled at 40 MHz is performed using field-programmable gate arrays (FPGAs). To cope with the signal pileup, new machine learning approaches are explored: convolutional and recurrent neural networks outperform the optimal signal filter currently used, both in the assignment of the reconstructed energy to the correct proton bunch crossing and in energy resolution. The improvements concern in particular energies derived from overlapping pulses. Since the implementation of the neural networks targets an FPGA, the number of parameters and the mathematical operations need to be well controlled. The trained neural-network structures are converted into FPGA firmware using automated implementations in hardware description language and high-level synthesis tools. Very good agreement is observed between the neural-network implementations in FPGA and software-based calculations. The prototype implementations on an Intel Stratix-10 FPGA reach maximum operation frequencies of 344–640 MHz. Applying time-division multiplexing allows the processing of 390–576 calorimeter channels per FPGA for the most resource-efficient networks. Moreover, the latency achieved is about 200 ns. These performance parameters show that a neural-network based energy reconstruction can be considered for the processing of the ATLAS LAr calorimeter signals during the high-luminosity phase of the LHC.
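
As a rough illustration of the approach, and not the actual ATLAS LAr network architectures or training data, a small 1D convolutional model mapping a short window of digitized samples to a deposited energy could look like the following sketch, assuming TensorFlow/Keras and an invented toy pulse model.

```python
import numpy as np
import tensorflow as tf

# Toy pulse data: each "event" is a window of ADC samples at 40 MHz containing
# an idealised pulse scaled by the true energy, plus noise and a possible
# overlapping out-of-time pulse. Shapes and constants are invented.
rng = np.random.default_rng(0)
n_samples, window = 20000, 8
shape = np.array([0.0, 0.3, 1.0, 0.7, 0.45, 0.25, 0.1, 0.0])   # nominal pulse shape

energy = rng.uniform(0.0, 10.0, n_samples)
pileup = rng.uniform(0.0, 5.0, n_samples)
x = (energy[:, None] * shape
     + pileup[:, None] * np.roll(shape, 3)          # overlapping out-of-time pulse
     + rng.normal(0.0, 0.1, (n_samples, window)))
x = x[..., None]                                    # (events, samples, channels)

# Small 1D CNN: a few convolutions over the sample sequence, then a dense
# layer outputting the reconstructed energy of the in-time pulse.
model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(8, 3, activation="relu", input_shape=(window, 1)),
    tf.keras.layers.Conv1D(8, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, energy, epochs=3, batch_size=256, verbose=0)

print("parameters:", model.count_params())          # must stay small for an FPGA target
```

Keeping the parameter count small reflects the FPGA constraint noted above; the real design additionally converts the trained network into HDL/HLS firmware.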


2013 ◽  
Vol 28 (26) ◽  
pp. 1330038 ◽  
Author(s):  
SHABNAM JABEEN

This review summarizes recent results on top quark and Higgs boson measurements from experiments at the Tevatron, a proton–antiproton collider with a center-of-mass energy of 1.96 TeV, and at the Large Hadron Collider, a proton–proton collider operating at multi-TeV center-of-mass energies. These results include the discovery of a Higgs-like boson and measurements of its various properties, as well as measurements in the top quark sector, e.g. the top quark mass, spin, and charge asymmetry, and the production of single top quarks.


2015 ◽  
Vol 30 (34) ◽  
pp. 1530061 ◽  
Author(s):  
Douglas M. Gingrich

The possibility of producing nonperturbative low-scale gravity states in collider experiments was first discussed around 1998. The ATLAS and CMS experiments have searched for nonperturbative low-scale gravity states at the Large Hadron Collider with a proton–proton center-of-mass energy of 8 TeV. These experiments have now seriously confronted the possibility, proposed over 17 years ago, of producing such states. I will summarize the results of the searches, give a personal view of what they mean, and make some predictions for a 13 TeV center-of-mass energy. I will also discuss early ATLAS results at 13 TeV.


2020 ◽  
Vol 35 (08) ◽  
pp. 2030004 ◽  
Author(s):  
Christophe Royon ◽  
Cristian Baldenegro

We present a review of recent theoretical and experimental developments in the field of diffraction, parton saturation, and forward physics. We first discuss our present understanding of the proton structure in terms of quarks and gluons, the degrees of freedom of quantum chromodynamics. We then focus on some of the main results on diffraction from the HERA electron–proton collider at DESY, Germany, the Tevatron proton–antiproton collider at Fermilab, Batavia, US, and the CERN Large Hadron Collider (LHC), a proton–proton and nucleus–nucleus collider located in Geneva, Switzerland. We also present a selection of diffraction and photon-exchange measurements that can be performed at the LHC experiments and at a future Electron Ion Collider (EIC), to be built in the US at Brookhaven National Laboratory, New York.


2020 ◽  
Vol 35 (36) ◽  
pp. 2050302
Author(s):  
Amr Radi

With many applications in high-energy physics, Deep Learning with Deep Neural Networks (DNNs) has become prominent and practical in recent years. In this article, a new technique is presented for modeling the charged-particle multiplicity distribution P(n) of proton-proton (pp) collisions using an efficient DNN model. The charged-particle multiplicity n, the total center-of-mass energy √s, and the pseudorapidity η are used as inputs to the DNN model, and the desired output is P(n). The DNN was trained to build a function that models the relationship between (n, √s, η) and P(n). The DNN model showed a high degree of consistency in matching the data distributions. The model is then used to predict P(n) at values of √s not included in the training set. The predicted P(n) reproduces the experimental data well, and the predicted values show strong agreement with the ATLAS measurements at the Large Hadron Collider (LHC) at 0.9, 7, and 8 TeV.
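
A compact sketch of this kind of model is shown below, with hypothetical layer sizes and a toy stand-in for the training data; the actual analysis trains on measured multiplicity distributions.

```python
import numpy as np
import tensorflow as tf

# Toy training set: inputs (n, sqrt(s), eta window), target P(n).
# P(n) is generated from a simple exponential shape purely to give the
# network something to fit; all numbers are illustrative assumptions.
rng = np.random.default_rng(1)
n = rng.integers(1, 80, 5000)
sqrt_s = rng.choice([0.9, 7.0, 8.0], 5000)           # TeV
eta = rng.choice([0.5, 1.0, 2.5], 5000)
mean = 8.0 * np.log(sqrt_s * 1000.0) * eta / 2.5
p_n = np.exp(-n / mean) / mean                        # toy stand-in for P(n)

X = np.column_stack([n, sqrt_s, eta]).astype("float32")
y = np.log(p_n).astype("float32")                     # fit log P(n) for numerical stability

# Small fully connected DNN mapping (n, sqrt(s), eta) -> log P(n)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="tanh", input_shape=(3,)),
    tf.keras.layers.Dense(32, activation="tanh"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, batch_size=128, verbose=0)

# Interpolate/extrapolate to an energy not in the training grid (illustrative)
query = np.array([[20, 13.0, 2.5]], dtype="float32")
print("predicted P(n=20) at 13 TeV:", float(np.exp(model.predict(query, verbose=0)[0, 0])))
```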

