Precision electromagnetic calorimetry at the highest energy and intensity proton-proton collider: CMS ECAL performance at LHC Run 2 and prospects for high luminosity LHC

Author(s): Andrea Massironi
2021, Vol 5 (1)
Author(s): Georges Aad, Anne-Sophie Berthold, Thomas Calvet, Nemer Chiedde, Etienne Marie Fortin, ...

Abstract: The ATLAS experiment at the Large Hadron Collider (LHC) is operated at CERN and measures proton–proton collisions at multi-TeV energies with a repetition frequency of 40 MHz. As part of the Phase-II upgrade of the LHC, the readout electronics of the liquid-argon (LAr) calorimeters of ATLAS are being prepared for high-luminosity operation, in which a pileup of up to 200 simultaneous proton–proton interactions is expected. Moreover, the calorimeter signals of up to 25 subsequent collisions overlap, which complicates the energy reconstruction. Real-time processing of the digitized pulses, sampled at 40 MHz, is performed using field-programmable gate arrays (FPGAs). To cope with the signal pileup, new machine-learning approaches are explored: convolutional and recurrent neural networks outperform the optimal signal filter currently used, both in the assignment of the reconstructed energy to the correct proton bunch crossing and in energy resolution. The improvements concern in particular energies derived from overlapping pulses. Since the implementation of the neural networks targets an FPGA, the number of parameters and of mathematical operations must be well controlled. The trained neural network structures are converted into FPGA firmware using automated implementations in hardware description language and high-level synthesis tools. Very good agreement between the FPGA and software-based implementations of the neural networks is observed. The prototype implementations on an Intel Stratix-10 FPGA reach maximum operation frequencies of 344–640 MHz. Applying time-division multiplexing allows one FPGA to process 390–576 calorimeter channels for the most resource-efficient networks, with a latency of about 200 ns. These performance parameters show that neural-network-based energy reconstruction can be considered for the processing of the ATLAS LAr calorimeter signals during the high-luminosity phase of the LHC.
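As a rough, hedged illustration of the approach (not the published ATLAS architecture), the Python sketch below defines a toy 1D convolutional network that maps a window of digitized ADC samples to an energy estimate for the central bunch crossing. The class name `CaloEnergyCNN`, the window length, and all layer sizes are assumptions chosen for readability.

```python
# Minimal sketch, assuming a sliding window of pedestal-subtracted ADC
# samples at 40 MHz; NOT the ATLAS firmware or its published architecture.
import torch
import torch.nn as nn

class CaloEnergyCNN(nn.Module):
    def __init__(self, window: int = 24):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 4, kernel_size=3),   # few channels: FPGA-friendly
            nn.ReLU(),
            nn.Conv1d(4, 4, kernel_size=3),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(4 * (window - 4), 1),   # energy of the central bunch crossing
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, window) digitized pulse samples
        return self.net(x)

model = CaloEnergyCNN()
samples = torch.randn(8, 1, 24)   # toy batch of pulse windows
energies = model(samples)         # (8, 1) reconstructed energies
```

Keeping the channel counts and kernel widths this small reflects the constraint stated in the abstract: the number of parameters and mathematical operations must fit the FPGA resource and latency budget before any HDL or high-level-synthesis conversion is attempted.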


2018, Vol 192, pp. 00032
Author(s): Rosamaria Venditti

The High-Luminosity Large Hadron Collider (HL-LHC) is a major upgrade of the LHC, expected to deliver an integrated luminosity of up to 3000 fb⁻¹ over one decade. The very high instantaneous luminosity will lead to about 200 proton-proton collisions per bunch crossing (pileup) superimposed on each event of interest, resulting in extremely challenging experimental conditions. The scientific goals of the HL-LHC physics program include precise measurements of the properties of the recently discovered standard model Higgs boson and searches for physics beyond the standard model (heavy vector bosons, SUSY, dark matter and exotic long-lived signatures, to name a few). In this contribution we present the strategy of the CMS experiment for assessing the feasibility of such searches and for quantifying the gain in sensitivity in the HL-LHC scenario.


2021, Vol 81 (5)
Author(s): Joosep Pata, Javier Duarte, Jean-Roch Vlimant, Maurizio Pierini, Maria Spiropulu

Abstract: In general-purpose particle detectors, the particle-flow algorithm may be used to reconstruct a comprehensive particle-level view of the event by combining information from the calorimeters and the trackers, significantly improving the detector resolution for jets and the missing transverse momentum. In view of the planned high-luminosity upgrade of the CERN Large Hadron Collider (LHC), it is necessary to revisit existing reconstruction algorithms and ensure that both the physics and computational performance are sufficient in an environment with many simultaneous proton–proton interactions (pileup). Machine learning may offer a prospect for computationally efficient event reconstruction that is well suited to heterogeneous computing platforms, while significantly improving the reconstruction quality over rule-based algorithms for granular detectors. We introduce MLPF, a novel, end-to-end trainable, machine-learned particle-flow algorithm based on a parallelizable, computationally efficient, and scalable graph neural network, optimized using a multi-task objective on simulated events. We report the physics and computational performance of the MLPF algorithm on a Monte Carlo dataset of top quark–antiquark pairs produced in proton–proton collisions under conditions similar to those expected for the high-luminosity LHC. The MLPF algorithm improves the physics response with respect to a rule-based benchmark algorithm and demonstrates computationally scalable particle-flow reconstruction in a high-pileup environment.
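To make the idea concrete, here is a deliberately minimal sketch of a particle-flow-style graph network in PyTorch. It is not the MLPF model: the input dimension, the kNN graph built in the learned feature space, and the two per-node output heads are illustrative assumptions that only mirror the multi-task structure (particle classification plus momentum regression) suggested by the abstract.

```python
# Illustrative sketch of a particle-flow-style graph network; NOT the MLPF code.
# Each input element (track or calorimeter cluster) is a node; one round of
# message passing over a kNN graph updates node features.
import torch
import torch.nn as nn

class ToyPFGraphNet(nn.Module):
    def __init__(self, in_dim=7, hidden=32, n_classes=6, k=8):
        super().__init__()
        self.k = k
        self.embed = nn.Linear(in_dim, hidden)
        self.message = nn.Linear(2 * hidden, hidden)
        self.classify = nn.Linear(hidden, n_classes)  # particle type per node
        self.regress = nn.Linear(hidden, 3)           # e.g. pt, eta, phi per node

    def forward(self, x):
        # x: (n_elements, in_dim) detector inputs for one event
        h = torch.relu(self.embed(x))
        d = torch.cdist(h, h)                                    # pairwise distances
        idx = d.topk(self.k + 1, largest=False).indices[:, 1:]   # kNN, skip self
        neigh = h[idx].mean(dim=1)                               # aggregate neighbours
        h = torch.relu(self.message(torch.cat([h, neigh], dim=-1)))
        return self.classify(h), self.regress(h)

net = ToyPFGraphNet()
logits, momenta = net(torch.randn(200, 7))   # toy event with 200 elements
```

A training loss for such a network would combine a per-node classification term with a per-node regression term (e.g. cross-entropy plus Huber), which is the sense in which the objective is multi-task.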


2019, Vol 214, pp. 02044
Author(s): Tadej Novak

The high-luminosity data produced by the LHC lead to many proton-proton interactions per beam crossing in ATLAS, known as pile-up. In order to understand the ATLAS data and extract physics results, it is important to model these effects accurately in the simulation. As the pile-up rate continues to grow towards an eventual 200 interactions per crossing at the HL-LHC, the computing resources required for the simulation grow with it, and the current approach of simulating the pile-up interactions along with the hard scatter for each Monte Carlo production is no longer feasible. The new ATLAS “overlay” approach to pile-up simulation is presented. Here a pre-combined set of minimum-bias interactions, either from simulation or from real data, is created once, and a single event drawn from this set is overlaid on the hard-scatter event being simulated. This leads to significant savings in CPU time. This contribution discusses the technical aspects of the implementation in the ATLAS simulation and production infrastructure and compares the performance, in terms of both computing and physics, to the previous approach.
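The gain of the overlay approach can be seen in a toy Python sketch (not the ATLAS code): the expensive combination of ~200 minimum-bias interactions is done once per library entry, and each hard-scatter event then pays only the cost of adding a single pre-combined event. Events are modelled here as simple channel-to-energy maps, an assumption made purely for illustration.

```python
# Toy sketch of the "overlay" idea; NOT the ATLAS implementation.
import random

def sum_signals(events):
    # Stand-in for detector-level combination: overlapping deposits add up.
    out = {}
    for ev in events:
        for channel, energy in ev.items():
            out[channel] = out.get(channel, 0.0) + energy
    return out

def build_pileup_library(minbias_events, mu, n_library):
    """Pre-combine minimum-bias events once (the expensive step)."""
    return [sum_signals(random.choices(minbias_events, k=mu))
            for _ in range(n_library)]

def overlay(hard_scatter, library):
    """Cheap per-event step: add one pre-combined pile-up event."""
    return sum_signals([hard_scatter, random.choice(library)])

minbias = [{0: 0.1 * i, 1: 0.2} for i in range(1, 50)]   # toy events
lib = build_pileup_library(minbias, mu=200, n_library=10)
event = overlay({0: 5.0, 2: 3.1}, lib)                   # hard scatter + pile-up
```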


2019, Vol 34 (27), pp. 1950157
Author(s): Tessio B. de Melo, Farinaldo S. Queiroz, Yoxara Villamizar

Doubly charged scalars are a common feature of many beyond-the-Standard-Model studies, especially those related to neutrino masses. In this work, we estimate the High-Luminosity LHC (HL-LHC) and High-Energy LHC (HE-LHC) sensitivity to doubly charged scalars, assuming that they decay promptly and exclusively into charged leptons. Our study focuses on a fit to the same-sign dilepton mass spectra and is based on proton–proton collisions at 13 TeV, 14 TeV, and 27 TeV with integrated luminosities of [Formula: see text] fb⁻¹, 3 ab⁻¹, and 15 ab⁻¹, respectively. We find that the HL-LHC may probe doubly charged scalar masses up to 2.3 TeV, whereas the HE-LHC can impressively probe masses up to 3 TeV, constituting a complementary and important probe of doubly charged scalars alongside lepton-flavor-violating decays and lepton–lepton colliders.
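A sensitivity estimate of this general kind can be sketched as a counting experiment with the median expected significance in the Asimov approximation, Z = sqrt(2((s+b)ln(1+s/b) − s)). The Python below is a hedged illustration only; the cross sections, efficiencies, and helper names are placeholders, not numbers or code from the paper.

```python
# Hedged sketch of a counting-experiment sensitivity estimate; all numbers
# are placeholders, NOT values from the paper.
import math

def asimov_significance(s: float, b: float) -> float:
    """Median expected significance for signal s over background b."""
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

def expected_events(sigma_fb: float, lumi_fb: float, efficiency: float) -> float:
    """Event yield from a cross section (fb) and integrated luminosity (fb^-1)."""
    return sigma_fb * lumi_fb * efficiency

# Toy example: a same-sign dilepton selection at 3 ab^-1
s = expected_events(sigma_fb=0.02, lumi_fb=3000.0, efficiency=0.5)
b = expected_events(sigma_fb=0.01, lumi_fb=3000.0, efficiency=0.3)
print(f"s = {s:.1f}, b = {b:.1f}, Z = {asimov_significance(s, b):.2f}")
```

Scanning such an estimate over the scalar mass, with yields falling as the production cross section drops, is what turns a fit to the dilepton spectrum into a mass reach.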


2016, Vol 31 (33), pp. 1644017
Author(s): Feng Su, Jie Gao, Dou Wang, Yiwei Wang, Jingyu Tang, ...

In this paper, we introduce the layout and lattice design of the Circular Electron–Positron Collider (CEPC) partial double ring scheme and the lattice design of the Super Proton–Proton Collider (SPPC). The CEPC baseline design is a single-beam-pipe electron–positron collider, which must adopt a pretzel orbit scheme and is therefore not suitable to serve as a high-luminosity Z factory. A partial double ring scheme instead yields higher luminosity at lower power and is suitable for operation as a high-luminosity Z factory. We discuss the details of the CEPC partial double ring lattice design and present the dynamic aperture study and its optimization. We also present a first version of the SPPC lattice, although much work remains to refine and optimize it.
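For orientation, the luminosity trade-off driving the partial double ring choice can be sketched with the standard head-on formula L = n_b N² f_rev / (4π σ_x σ_y): at fixed synchrotron-radiation power, a scheme that allows more bunches (larger n_b) buys luminosity. The Python below is a generic sketch with placeholder numbers, not CEPC design parameters.

```python
# Minimal sketch of the standard circular-collider luminosity estimate for
# head-on Gaussian beams; numbers are generic placeholders, NOT CEPC values.
import math

def luminosity(n_bunches, n_per_bunch, f_rev_hz, sigma_x_m, sigma_y_m):
    """Head-on luminosity in cm^-2 s^-1 for identical Gaussian bunches."""
    l_m2 = n_bunches * n_per_bunch**2 * f_rev_hz / (4 * math.pi * sigma_x_m * sigma_y_m)
    return l_m2 * 1e-4  # m^-2 s^-1 -> cm^-2 s^-1

# Toy comparison: a partial double ring mainly buys more bunches (n_bunches)
# within the same power budget.
print(f"{luminosity(50, 2e11, 3e3, 7e-5, 1.6e-7):.2e}")
```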

