Assessment of the performance of High-Luminosity LHC operational scenarios: integrated luminosity and effective pile-up density

2019 · Vol 97 (5) · pp. 498-508
Author(s): L. Medina, R. Tomás, G. Arduini, M. Napsuciale

The High-Luminosity Large Hadron Collider (HL-LHC) experiments will operate at unprecedented levels of event pile-up from proton–proton collisions at 14 TeV centre-of-mass energy. In this paper, we study the performance of the baseline and of a series of alternative scenarios in terms of the delivered integrated luminosity and its quality (pile-up density). A new figure of merit is introduced, the effective pile-up density, which reflects the expected detector efficiency in the reconstruction of event vertices for a given operational scenario and acts as a link between the machine and experimental sides. Alternative scenarios have been proposed either to improve the baseline performance or to provide operational schemes in the case of particular limitations. Simulations of the evolution of their optimum fills with the latest set of HL-LHC parameters are performed with β*-levelling, and the results are discussed in terms of both the integrated luminosity and the effective pile-up density. The crab kissing scheme, a scenario proposed for pile-up density control, is re-evaluated from this new perspective with updated beam and optics parameters. Estimates of the impact of crab cavity noise, full crab crossing, and a reduced burn-off cross section on the expected integrated luminosity are also presented.
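
As a rough illustration of the pile-up quantities discussed above, the short Python sketch below estimates the average pile-up per bunch crossing and the peak pile-up line density for a Gaussian luminous region. The cross section, bunch count, and luminous-region size are assumed round numbers, not the parameters used in the paper, and the effective pile-up density itself involves an additional detector-efficiency weighting that is not modelled here.

```python
import math

# Minimal sketch (not the paper's model): average pile-up per crossing and
# peak pile-up line density for a Gaussian luminous region.
# All numbers below are illustrative assumptions, not HL-LHC baseline values.
SIGMA_INEL = 81e-27    # cm^2, assumed inelastic pp cross section at 14 TeV (~81 mb)
N_BUNCH    = 2760      # assumed number of colliding bunch pairs
F_REV      = 11245.0   # LHC revolution frequency, Hz

def pileup_mu(lumi_cm2_s):
    """Average number of inelastic pp interactions per bunch crossing."""
    return lumi_cm2_s * SIGMA_INEL / (N_BUNCH * F_REV)

def peak_line_density(mu, sigma_lum_mm):
    """Peak pile-up line density (events/mm) for a Gaussian luminous region."""
    return mu / (math.sqrt(2.0 * math.pi) * sigma_lum_mm)

mu = pileup_mu(5.0e34)                                   # levelled luminosity, cm^-2 s^-1
print(f"mu ~ {mu:.0f} interactions per crossing")
print(f"peak density ~ {peak_line_density(mu, 45.0):.2f} events/mm (45 mm rms assumed)")
```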

2022 · Vol 17 (01) · pp. P01013
Author(s): Georges Aad, Brad Abbott, Dale Charles Abbott, Adam Abed Abud, Kira Abeling, ...

Abstract The semiconductor tracker (SCT) is one of the tracking systems for charged particles in the ATLAS detector. It consists of 4088 silicon strip sensor modules. During Run 2 (2015–2018) the Large Hadron Collider delivered an integrated luminosity of 156 fb−1 to the ATLAS experiment at a centre-of-mass proton–proton collision energy of 13 TeV. The instantaneous luminosity and pile-up conditions were far in excess of those assumed in the original design of the SCT detector. Thanks to improvements to the data acquisition system, the SCT operated stably throughout Run 2. It was available for 99.9% of the integrated luminosity and achieved a data-quality efficiency of 99.85%. Detailed studies have been made of the leakage current in SCT modules and the evolution of the full depletion voltage, which are used to study the impact of radiation damage to the modules.
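
For orientation only, the sketch below applies the widely used linear parametrisation of radiation-induced leakage current, I = α·Φeq·V, together with a standard temperature scaling. The damage constant, sensor volume, fluence, and operating temperature are illustrative assumptions, not the values or the model used in the SCT analysis.

```python
import math

# Rough leakage-current estimate for an irradiated silicon sensor
# (I = alpha * Phi_eq * V).  Parameter values are illustrative assumptions,
# not the ATLAS SCT measurements.
ALPHA_20C = 4e-17        # A/cm, assumed current-related damage rate after annealing
E_EFF     = 1.21         # eV, effective band gap used for temperature scaling
K_B       = 8.617e-5     # eV/K, Boltzmann constant

def leakage_current(phi_eq_cm2, volume_cm3, temp_c, ref_c=20.0):
    """Leakage current in A at temp_c, scaled from the 20 C damage rate."""
    i_ref = ALPHA_20C * phi_eq_cm2 * volume_cm3
    t, t0 = temp_c + 273.15, ref_c + 273.15
    scale = (t / t0) ** 2 * math.exp(-E_EFF / (2 * K_B) * (1 / t - 1 / t0))
    return i_ref * scale

# Example: ~1e14 1-MeV-neq/cm^2 fluence on ~4.6 cm^3 of silicon, operated at 0 C
print(f"{leakage_current(1e14, 4.6, 0.0) * 1e3:.2f} mA")
```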


2021 · Vol 5 (1)
Author(s): Georges Aad, Anne-Sophie Berthold, Thomas Calvet, Nemer Chiedde, Etienne Marie Fortin, ...

Abstract The ATLAS experiment at the Large Hadron Collider (LHC) is operated at CERN and measures proton–proton collisions at multi-TeV energies with a repetition frequency of 40 MHz. Within the Phase-II upgrade of the LHC, the readout electronics of the liquid-argon (LAr) calorimeters of ATLAS are being prepared for high-luminosity operation, anticipating a pileup of up to 200 simultaneous proton–proton interactions. Moreover, the calorimeter signals of up to 25 subsequent collisions overlap, which makes energy reconstruction by the calorimeter detector more difficult. Real-time processing of digitized pulses sampled at 40 MHz is performed using field-programmable gate arrays (FPGAs). To cope with the signal pileup, new machine-learning approaches are explored: convolutional and recurrent neural networks outperform the optimal signal filter currently used, both in the assignment of the reconstructed energy to the correct proton bunch crossing and in energy resolution. The improvements concern in particular energies derived from overlapping pulses. Since the implementation of the neural networks targets an FPGA, the number of parameters and the mathematical operations need to be well controlled. The trained neural-network structures are converted into FPGA firmware using automated implementations in hardware description language and high-level synthesis tools. Very good agreement between the neural-network implementations in FPGA and software-based calculations is observed. The prototype implementations on an Intel Stratix-10 FPGA reach maximum operation frequencies of 344–640 MHz. Applying time-division multiplexing allows the processing of 390–576 calorimeter channels by one FPGA for the most resource-efficient networks. Moreover, the achieved latency is about 200 ns. These performance parameters show that a neural-network-based energy reconstruction can be considered for the processing of the ATLAS LAr calorimeter signals during the high-luminosity phase of the LHC.
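
As a toy illustration of the kind of processing described above, the sketch below runs a two-layer 1D convolutional network over a stream of digitized samples, producing one energy estimate per window of bunch crossings. The layer sizes are arbitrary and the weights are random placeholders; the real networks are trained, quantized, and implemented in FPGA firmware.

```python
import numpy as np

# Toy sketch of a small 1D convolutional energy reconstruction over digitized
# calorimeter samples, loosely inspired by the approach described above.
# Weights are random placeholders, not trained parameters.
rng = np.random.default_rng(0)

def conv1d(x, kernel, bias):
    """'Valid' 1D convolution: one output per sliding window of samples."""
    k = len(kernel)
    return np.array([x[i:i + k] @ kernel + bias for i in range(len(x) - k + 1)])

def relu(x):
    return np.maximum(x, 0.0)

# Two small layers: a 5-tap filter followed by a 3-tap filter (assumed sizes).
w1, b1 = rng.normal(size=5), 0.0
w2, b2 = rng.normal(size=3), 0.0

adc = rng.normal(loc=1000.0, scale=5.0, size=48)        # 48 samples at 40 MHz (pedestal + noise)
energies = conv1d(relu(conv1d(adc - 1000.0, w1, b1)), w2, b2)
print(energies[:5])                                     # one energy estimate per output window
```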


2018 · Vol 192 · pp. 00032
Author(s): Rosamaria Venditti

The High-Luminosity Large Hadron Collider (HL-LHC) is a major upgrade of the LHC, expected to deliver an integrated luminosity of up to 3000 fb−1 over one decade. The very high instantaneous luminosity will lead to about 200 proton-proton collisions per bunch crossing (pile-up) superimposed on each event of interest, resulting in extremely challenging experimental conditions. The scientific goals of the HL-LHC physics program include precise measurements of the properties of the recently discovered standard model Higgs boson and searches for physics beyond the standard model (heavy vector bosons, SUSY, dark matter and exotic long-lived signatures, to name a few). In this contribution we present the strategy of the CMS experiment to investigate the feasibility of such searches and to quantify the increase in sensitivity in the HL-LHC scenario.


Coatings · 2020 · Vol 10 (4) · pp. 361
Author(s): Carlotta Accettura, David Amorim, Sergey A. Antipov, Adrienn Baris, Alessandro Bertarelli, ...

The High-Luminosity Large Hadron Collider (HL-LHC) project aims at extending the operability of the LHC by another decade and at increasing by more than a factor of ten the integrated luminosity that the LHC will have collected by the end of Run 3. This will require doubling the beam intensity and reducing the transverse beam size compared to the LHC design values. The higher beam brightness poses new challenges for machine safety, due to the large energy of 700 MJ stored in the beams, and for beam stability, mainly due to the collimator contribution to the total LHC beam coupling impedance. A rich research program was therefore started to identify suitable materials and collimator designs that not only fulfil the impedance-reduction requirements but also ensure adequate beam cleaning and robustness against failures. The use of thin molybdenum coatings on a molybdenum–graphite substrate has been identified as the most promising solution to meet both collimation and impedance requirements, and it is now the baseline choice of the HL-LHC project. In this work we present the main results of the coating characterization, in particular addressing the impact of the coating microstructure on the electrical resistivity, measured with different techniques from direct current (DC) to the GHz frequency range.
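
To see why a micrometre-scale coating matters across this frequency range, the sketch below evaluates the classical skin depth δ = √(2ρ/(μ0·ω)) using the textbook room-temperature resistivity of bulk molybdenum; this is an illustrative number, not one of the measured coating resistivities reported in the paper.

```python
import math

# Back-of-the-envelope skin depth for a molybdenum surface, to illustrate why
# a few-micrometre film can dominate the surface impedance at high frequency.
# The resistivity is a bulk textbook value, used here only for illustration.
MU0 = 4e-7 * math.pi          # vacuum permeability, H/m
RHO_MO = 53.4e-9              # ohm*m, bulk molybdenum resistivity at room temperature

def skin_depth(rho_ohm_m, freq_hz):
    """Classical skin depth delta = sqrt(2*rho / (mu0 * omega)), in metres."""
    return math.sqrt(2.0 * rho_ohm_m / (MU0 * 2.0 * math.pi * freq_hz))

for f in (1e6, 100e6, 1e9, 2e9):
    print(f"{f/1e6:7.0f} MHz -> delta = {skin_depth(RHO_MO, f)*1e6:6.2f} um")
```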


2020 · Vol 2020 (8)
Author(s): M.A. Arroyo-Ureña, T.A. Valencia-Pérez, R. Gaitán, J.H. Montes de Oca Y, A. Fernández-Téllez

Abstract We study the flavor-changing decay h → τμ, with τμ = τ−μ+ + τ+μ−, of a Higgs boson at future hadron colliders, namely: a) the High-Luminosity Large Hadron Collider, b) the High-Energy Large Hadron Collider, and c) the Future hadron-hadron Circular Collider. The theoretical framework adopted is the Two-Higgs-Doublet Model type III. The free model parameters involved in the calculation are constrained through Higgs boson data, lepton-flavor-violating processes, and the muon anomalous magnetic dipole moment; they are later used to analyze the branching ratio of the decay h → τμ and to evaluate the gg → h production cross section. We find that at the Large Hadron Collider it is not possible to claim evidence for the decay h → τμ, with a signal significance of about 1.46σ for its final integrated luminosity of 300 fb−1. More promising results arise at the High-Luminosity Large Hadron Collider, where a significance of 4.6σ is predicted for an integrated luminosity of 3 ab−1 and tan β = 8. Meanwhile, at the High-Energy Large Hadron Collider (Future hadron-hadron Circular Collider) a potential discovery could be claimed, with a signal significance of around 5.04σ (5.43σ) for an integrated luminosity of 3 ab−1 and tan β = 8 (5 ab−1 and tan β = 4).
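
As a hedged illustration of how such significances scale with integrated luminosity, the sketch below evaluates the standard Asimov significance for a simple counting experiment. The signal and background cross sections are placeholders chosen only to show the approximate √L behaviour, not the values obtained in the paper.

```python
import math

# Simple counting-experiment significance, used to illustrate how a quoted
# significance scales with integrated luminosity.  Cross sections below are
# placeholders, not results from the paper.
def asimov_significance(s, b):
    """Median discovery significance for s signal and b background events."""
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

def significance_vs_lumi(sigma_sig_fb, sigma_bkg_fb, lumi_fb):
    return asimov_significance(sigma_sig_fb * lumi_fb, sigma_bkg_fb * lumi_fb)

for lumi in (300.0, 3000.0, 5000.0):                    # fb^-1
    z = significance_vs_lumi(0.05, 10.0, lumi)          # assumed effective cross sections, fb
    print(f"L = {lumi:6.0f} fb^-1 -> Z ~ {z:.2f} sigma")
```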


2019 · Vol 214 · pp. 02044
Author(s): Tadej Novak

The high-luminosity data produced by the LHC lead to many proton-proton interactions per beam crossing in ATLAS, known as pile-up. In order to understand the ATLAS data and extract physics results it is important to model these effects accurately in the simulation. As the pile-up rate continues to grow towards an eventual 200 interactions per crossing for the HL-LHC, the demands on the computing resources required for the simulation increase, and the current approach of simulating the pile-up interactions along with the hard scatter for each Monte Carlo production is no longer feasible. The new ATLAS “overlay” approach to pile-up simulation is presented. Here, a pre-combined set of minimum-bias interactions, either from simulation or from real data, is created once, and a single event drawn from this set is overlaid with the hard-scatter event being simulated. This leads to significant savings in CPU time. This contribution discusses the technical aspects of the implementation in the ATLAS simulation and production infrastructure and compares the performance, both in terms of computing and physics, to that of the previous approach.
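
The sketch below captures the overlay idea in a few lines of Python: a pre-combined pile-up event is drawn from a prepared library and its detector signals are summed with those of the hard-scatter event. The data model (a plain channel-to-signal mapping) is invented for illustration and does not reflect the actual ATLAS event data model or production infrastructure.

```python
import random

# Conceptual sketch of the overlay approach described above: a pre-combined
# pile-up event is drawn from a prepared library and merged with the
# hard-scatter event.  Data structures and names are illustrative only.
def simulate_with_overlay(hard_scatter_event, pileup_library):
    """Return combined detector channels for hard scatter + one pile-up event."""
    pileup_event = random.choice(pileup_library)        # drawn from the pre-built set
    combined = dict(pileup_event)                       # channel id -> signal
    for channel, signal in hard_scatter_event.items():
        combined[channel] = combined.get(channel, 0.0) + signal
    return combined

# Toy usage: channels are integers, signals are in arbitrary units.
library = [{1: 0.2, 2: 0.1}, {2: 0.3, 5: 0.4}]          # pre-combined minimum-bias events
hard_scatter = {1: 5.0, 3: 2.5}                         # simulated hard-scatter event
print(simulate_with_overlay(hard_scatter, library))
```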


2020 · Vol 2020 (8)
Author(s): Biplob Bhattacherjee, Swagata Mukherjee, Rhitaja Sengupta, Prabhat Solanki

Abstract Triggering long-lived particles (LLPs) at the first stage of the trigger system is crucial in LLP searches to ensure that they are not missed at the very beginning. The future High-Luminosity runs of the Large Hadron Collider will have an increased number of pile-up events per bunch crossing. There will be major upgrades on the hardware, firmware, and software sides, such as tracking at level-1 (L1). The L1 trigger menu will also be modified to cope with pile-up and maintain sensitivity to physics processes. In our study we found that the usual level-1 triggers, mostly meant for triggering prompt particles, will not be very efficient for LLP searches in the 140 pile-up environment of the HL-LHC, pointing to the need for dedicated L1 triggers for LLPs in the menu. We consider the decay of the LLP into jets and develop dedicated jet triggers using the track information at L1 to select LLP events. We show that these triggers give promising results in identifying LLP events with moderate trigger rates.
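
The toy sketch below illustrates one way a track-based displaced-jet requirement could look at L1: accept events containing a hard jet with little associated prompt-track momentum. The event model and thresholds are invented for illustration and are not the triggers developed in the paper.

```python
from dataclasses import dataclass, field

# Toy track-based displaced-jet selection at L1, in the spirit of the dedicated
# triggers discussed above.  Thresholds and the event model are assumptions.
@dataclass
class Jet:
    pt: float                                            # GeV
    prompt_track_pts: list = field(default_factory=list) # pT of L1 tracks matched to the jet

def passes_displaced_jet_trigger(jets, jet_pt_min=60.0, max_prompt_track_pt_sum=10.0):
    """Accept events with a hard jet carrying little associated prompt-track pT."""
    for jet in jets:
        if jet.pt < jet_pt_min:
            continue
        if sum(jet.prompt_track_pts) < max_prompt_track_pt_sum:
            return True                                  # hard but 'track-less' jet -> LLP candidate
    return False

event = [Jet(pt=85.0, prompt_track_pts=[2.1, 3.0]), Jet(pt=40.0, prompt_track_pts=[15.0])]
print(passes_displaced_jet_trigger(event))               # True: 85 GeV jet with only 5.1 GeV of tracks
```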


2021 · Vol 34 · pp. 18-22
Author(s): A.A. Pankov, I.A. Serenkova, V.A. Bednyakov

The full ATLAS and CMS Run 2 data set at the Large Hadron Collider (LHC), with time-integrated luminosities of 139 fb−1 and 137 fb−1, respectively, in the diboson channel is used to probe benchmark models with extended gauge sectors: the E6-motivated Grand Unification models, the left-right symmetric (LR) model and the sequential standard model. These all predict neutral Z' and charged W' vector bosons, decaying into lepton or electroweak gauge boson pairs. We present constraints on the parameter space of the Z' and W' and compare them to those obtained from previous analyses performed with LHC data collected at 7 and 8 TeV in Run 1, as well as at 13 TeV in Run 2 with a time-integrated luminosity of 36.1 fb−1. We show that proton-proton collision data at √s = 13 TeV collected by the ATLAS and CMS experiments allow the most stringent bounds to date to be set on Z-Z' and W-W' mixing.


2020 · Vol 2020 (11)
Author(s): Claude Duhr, Falko Dulat, Bernhard Mistlberger

Abstract We present the production cross section for a lepton-neutrino pair at the Large Hadron Collider computed at next-to-next-to-next-to-leading order (N3LO) in QCD perturbation theory. We compute the partonic coefficient functions of a virtual W± boson at this order. We then use these analytic functions to study the progression of the perturbative series in different observables. In particular, we investigate the impact of the newly obtained corrections on the inclusive production cross section of W± bosons, as well as on the ratios of the production cross sections for W+, W− and/or a virtual photon. Finally, we present N3LO predictions for the charge asymmetry at the LHC.
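
For reference, the charge asymmetry studied here is A = (σ(W+) − σ(W−)) / (σ(W+) + σ(W−)); the tiny sketch below evaluates it for round placeholder cross sections rather than the N3LO values of the paper.

```python
# Minimal numerical illustration of the W charge asymmetry discussed above,
# A = (sigma(W+) - sigma(W-)) / (sigma(W+) + sigma(W-)).
# The cross sections are round placeholder numbers, not the N3LO results.
def charge_asymmetry(sigma_wp_nb, sigma_wm_nb):
    return (sigma_wp_nb - sigma_wm_nb) / (sigma_wp_nb + sigma_wm_nb)

sigma_wp, sigma_wm = 11.8, 8.7       # nb, assumed inclusive W+ and W- cross sections
print(f"A = {charge_asymmetry(sigma_wp, sigma_wm):.3f}")
```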

