Scholarly journals: CMS trigger performance

2018 ◽  
Vol 182 ◽  
pp. 02037
Author(s):  
Silvio Donato

During its second run of operation (Run 2), started in 2015, the LHC delivers a peak instantaneous luminosity that may reach 2 × 10³⁴ cm⁻² s⁻¹ with an average pileup of about 55, far larger than the design value. Under these conditions, online event selection is a very challenging task. In CMS, it is realized by a two-level trigger system: the Level-1 (L1) Trigger, implemented in custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the offline reconstruction software running on a computer farm. To face this challenge, the L1 trigger has undergone a major upgrade compared to Run 1, in which all electronic boards of the system were replaced, allowing more sophisticated algorithms to be run online. Its last stage, the global trigger, is now able to perform complex selections and to compute high-level quantities, such as invariant masses. Likewise, the algorithms that run in the HLT have been greatly improved; in particular, new approaches to online track reconstruction have led to a drastic reduction of the computing time and to much-improved performance. This document describes the performance of the upgraded trigger system in Run 2.
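The invariant mass the global trigger computes for pairs of trigger objects follows the standard massless approximation from transverse momentum, pseudorapidity and azimuth. A minimal sketch of that formula (not the actual FPGA firmware, which uses lookup tables rather than floating-point math):

```python
import math

def invariant_mass(pt1, eta1, phi1, pt2, eta2, phi2):
    """Invariant mass (GeV) of two massless objects, using
    m^2 = 2 * pt1 * pt2 * (cosh(eta1 - eta2) - cos(phi1 - phi2))."""
    m2 = 2.0 * pt1 * pt2 * (math.cosh(eta1 - eta2) - math.cos(phi1 - phi2))
    return math.sqrt(max(m2, 0.0))

# Two back-to-back 45 GeV objects at eta = 0 give m = 2 * 45 = 90 GeV.
print(invariant_mass(45.0, 0.0, 0.0, 45.0, 0.0, math.pi))  # 90.0
```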

2014 ◽  
Vol 31 ◽  
pp. 1460297 ◽  
Author(s):  
Valentina Gori

The CMS experiment has been designed with a two-level trigger system: the Level-1 Trigger, implemented on custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. A software trigger system requires a trade-off between the complexity of the algorithms that can run on the available computing power, the sustainable output rate, and the selection efficiency. Here we present the performance of the main triggers used during the 2012 data taking, ranging from simple single-object selections to more complex algorithms that combine different objects and apply analysis-level reconstruction and selection. We discuss the optimisation of the triggers and the specific techniques used to cope with the increasing LHC pile-up, reducing its impact on the physics performance.
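One common lever in the rate-versus-efficiency trade-off is prescaling: a trigger with prescale N records only one in N of its accepts. A toy rate-budget sketch, with entirely hypothetical trigger names, rates and prescales:

```python
# Hypothetical trigger menu: (name, raw accept rate in Hz, prescale factor).
menu = [
    ("SingleMu24",   300.0,   1),  # unprescaled signal trigger
    ("SingleMu8",  50000.0, 500),  # heavily prescaled control trigger
    ("DiJet40",     8000.0,  20),
]

def output_rate(menu):
    """Total recorded rate: each trigger keeps 1 in `prescale` accepts."""
    return sum(rate / prescale for _, rate, prescale in menu)

print(output_rate(menu))  # 300 + 100 + 400 = 800.0 Hz
```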


2019 ◽  
Vol 214 ◽  
pp. 01039
Author(s):  
Khalil Bouaouda ◽  
Stefan Schmitt ◽  
Driss Benchekroun

Online selection is an essential step for collecting the most relevant collisions from the very large number of collisions inside the ATLAS detector at the Large Hadron Collider (LHC). The Fast TracKer (FTK) is a hardware-based track finder, built to greatly improve the ATLAS trigger system's capabilities for identifying interesting physics processes through track-based signatures. After each Level-1 trigger accept, the FTK reconstructs all tracks with pT > 1 GeV, so that the high-level trigger system gains access to track information at an early stage. FTK track reconstruction starts with a pattern recognition step: patterns are found with hits in seven out of eight possible detector layers. Disabled detector modules, as often encountered during LHC operation, lead to efficiency losses. To recover efficiency, WildCard (WC) algorithms are implemented in the FTK system. The WC algorithm recovers the inefficiency but also causes high combinatorial background and thus increased data volumes in the FTK system, possibly exceeding hardware limitations. To overcome this, a refined algorithm for selecting patterns is developed and investigated in this article.
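The seven-out-of-eight matching with wildcarded layers can be illustrated schematically. The data layout below (a pattern as a list of per-layer addresses, hits as a set of (layer, address) pairs) is a simplification for illustration, not the actual associative-memory implementation:

```python
def pattern_matches(pattern, hits, wildcard_layers=frozenset(), min_layers=7):
    """A pattern of 8 layer addresses fires if hits are found in at least
    `min_layers` of them; wildcarded layers (disabled modules) always count."""
    matched = sum(
        1 for layer, address in enumerate(pattern)
        if layer in wildcard_layers or (layer, address) in hits
    )
    return matched >= min_layers

# Pattern expecting address 3 on each of 8 layers; hits on layers 0-6 only.
pattern = [3] * 8
hits = {(layer, 3) for layer in range(7)}
print(pattern_matches(pattern, hits))                  # True  (7 of 8)
print(pattern_matches(pattern, hits - {(6, 3)}))       # False (6 of 8)
print(pattern_matches(pattern, hits - {(6, 3)}, {6}))  # True  (wildcard on layer 6)
```

The last call shows both sides of the trade-off described above: the wildcard recovers the pattern despite the missing layer, but because it matches unconditionally it also lets through more spurious combinations.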


2020 ◽  
Vol 245 ◽  
pp. 01031
Author(s):  
Thiago Rafael Fernandez Perez Tomei

The CMS experiment has been designed with a two-level trigger system: the Level-1 Trigger, implemented on custom-designed electronics, and the High Level Trigger, a streamlined version of the CMS offline reconstruction software running on a computer farm. During its second phase, the LHC will reach a luminosity of 7.5 × 10³⁴ cm⁻² s⁻¹ with a pileup of 200 collisions, producing an integrated luminosity greater than 3000 fb⁻¹ over the full experimental run. To fully exploit the higher luminosity, the CMS experiment will introduce a more advanced Level-1 Trigger and increase the full readout rate from 100 kHz to 750 kHz. CMS is designing an efficient data-processing hardware trigger that will include tracking information and high-granularity calorimeter information. The current Level-1 conceptual design is expected to take full advantage of advances in FPGA and link technologies over the coming years, providing a high-performance, low-latency system with large throughput and sophisticated data correlation across diverse sources. The higher luminosity, event complexity and input rate present an unprecedented challenge to the High Level Trigger, which aims to achieve a similar efficiency and rejection factor as today despite the higher pileup and purer preselection. In this presentation we discuss the ongoing studies and prospects for the online reconstruction and selection algorithms for the high-luminosity era.


2019 ◽  
Vol 214 ◽  
pp. 01047
Author(s):  
Andrew Wightman ◽  
Geoffrey Smith ◽  
Kelci Mohrman ◽  
Charles Mueller

One of the major challenges for the Compact Muon Solenoid (CMS) experiment is the task of reducing the event rate from roughly 40 MHz down to a more manageable 1 kHz while keeping as many interesting physics events as possible. This is accomplished through the use of a Level-1 (L1) hardware-based trigger as well as a software-based High-Level Trigger (HLT). Monitoring and understanding the output rates of the L1 and HLT triggers is of key importance for determining the overall performance of the trigger system and is intimately tied to what type of data is being recorded for physics analyses. We present here a collection of tools used by CMS to monitor the L1 and HLT trigger rates. One of these tools is a script, run in the CMS control room, that gives valuable real-time feedback on trigger rates to the shift crew. Another is a plotting library used to observe how trigger rates vary over a range of beam and detector conditions, in particular how the rates of individual triggers scale with event pile-up.
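Pile-up scaling of an individual trigger is commonly summarized by fitting its rate against the average pile-up. A minimal least-squares sketch with made-up measurements (the real plotting library and its data are not shown in the abstract):

```python
# Hypothetical measurements: (average pile-up, trigger rate in Hz).
points = [(20, 105.0), (30, 151.0), (40, 198.0), (50, 247.0)]

def linear_fit(points):
    """Ordinary least-squares fit of rate = a + b * pileup."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope: Hz per unit PU
    a = (sy - b * sx) / n                          # intercept: PU-independent rate
    return a, b

a, b = linear_fit(points)
print(f"rate ≈ {a:.1f} + {b:.2f} * PU")
```

A trigger whose slope grows faster than linearly with pile-up (e.g. from combinatorial backgrounds) would show up as a poor fit here, which is exactly the kind of behaviour such monitoring is meant to catch.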


2019 ◽  
Vol 214 ◽  
pp. 01015 ◽  
Author(s):  
Jean-Marc Andre ◽  
Ulf Behrens ◽  
James Branson ◽  
Philipp Brummer ◽  
Sergio Cittolin ◽  
...  

The data acquisition (DAQ) system of the Compact Muon Solenoid (CMS) experiment at CERN reads out the detector at the Level-1 trigger accept rate of 100 kHz, assembles events with a bandwidth of 200 GB/s, provides these events to the high-level trigger running on a farm of about 30k cores, and records the accepted events. Comprising custom-built and cutting-edge commercial hardware and several thousand instances of software applications, the DAQ system is complex in itself and failures cannot be completely excluded. Moreover, problems in the readout of the detectors, in the first-level trigger system or in the high-level trigger may provoke anomalous behaviour of the DAQ system which sometimes cannot easily be differentiated from a problem in the DAQ system itself. In order to achieve high data-taking efficiency with operators from the entire collaboration and without relying too heavily on the on-call experts, an expert system, the DAQ-Expert, has been developed that can pinpoint the source of most failures and advise the shift crew on how to recover in the quickest way. The DAQ-Expert constantly analyzes monitoring data from the DAQ system and the high-level trigger by making use of logic modules written in Java that encapsulate expert knowledge about potential operational problems. The results of the reasoning are presented to the operator in a web-based dashboard, may trigger sound alerts in the control room, and are archived for post-mortem analysis in a web-based timeline browser. We present the design of the DAQ-Expert and report on the operational experience since 2017, when it was first put into production.
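The real logic modules are written in Java; the Python sketch below only illustrates the idea of a module that encapsulates one failure symptom together with its recovery advice. The class name, field names and threshold are invented for illustration:

```python
# Hedged sketch of a DAQ-Expert-style logic module; all names and
# thresholds here are hypothetical, not the actual CMS module catalogue.
class FedStuckModule:
    name = "FED stuck"
    advice = "Issue a resync; if the FED stays busy, call the subsystem on-call."

    def satisfied(self, snapshot):
        """Fire when no events are flowing and some front-end is backpressured."""
        return snapshot["rate_hz"] == 0 and any(
            fed["backpressure"] > 0.9 for fed in snapshot["feds"]
        )

snapshot = {"rate_hz": 0, "feds": [{"id": 101, "backpressure": 0.97}]}
module = FedStuckModule()
if module.satisfied(snapshot):
    print(f"{module.name}: {module.advice}")
```

An expert system of this shape evaluates many such modules against each monitoring snapshot and surfaces the advice of whichever ones fire.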


2019 ◽  
Vol 214 ◽  
pp. 01021
Author(s):  
Simone Sottocornola

During Run 2 of the Large Hadron Collider (LHC), the instantaneous luminosity exceeded the nominal value of 10³⁴ cm⁻² s⁻¹ with a 25 ns bunch-crossing period, and the number of overlapping proton-proton interactions per bunch crossing increased to a maximum of about 80. These conditions pose a challenge to the trigger systems of the experiments, which have to manage rates while keeping a good efficiency for interesting physics events. This document summarizes the software-based control and monitoring of a hardware-based track reconstruction system for the ATLAS experiment, called the Fast Tracker (FTK), composed of associative memories and FPGAs operating at a rate of 100 kHz and providing high-quality track information within the available latency to the high-level trigger. In particular, we detail the commissioning of the FTK within the ATLAS online software system, presenting the solutions adopted for scaling up the system and ensuring robustness and redundancy. We also describe the solutions to challenges such as controlling the occupancy of the buffers, managing the large and heterogeneous configuration, and providing monitoring information at a sufficient rate.
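Controlling buffer occupancy in such a system typically means mapping fill fractions to monitoring states and asserting back-pressure before a buffer overflows. A schematic sketch with assumed thresholds and board names (not the actual FTK run-control logic):

```python
# Illustrative occupancy classification; thresholds and board names are
# assumptions for the sketch, not FTK configuration values.
WARN, BUSY = 0.60, 0.85

def classify(occupancy):
    """Map a buffer fill fraction in [0, 1] to a monitoring state."""
    if occupancy >= BUSY:
        return "BUSY"     # assert back-pressure upstream to avoid overflow
    if occupancy >= WARN:
        return "WARNING"  # flag for the shifter, keep running
    return "OK"

boards = {"AMB-3": 0.42, "AUX-7": 0.71, "SSB-1": 0.91}
for board, occupancy in boards.items():
    print(board, classify(occupancy))
```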


2021 ◽  
Vol 251 ◽  
pp. 03044
Author(s):  
Ying-Rui Hou ◽  
Suzanne Klaver ◽  
Sneha Malde ◽  
Rosen Matev ◽  
Dmitry Popov ◽  
...  

The LHCb detector at the LHC is currently undergoing a major upgrade to increase its full detector read-out rate to 30 MHz. In addition to the detector hardware modernisation, the new trigger system will be software-only. The code base of the new trigger system must be thoroughly tested for data flow, functionality and physics performance. Currently, the testing procedure is based on a system of nightly builds and continuous integration tests of each new code development. As described in this paper, the continuous integration tests are now extended to test and evaluate high-level quantities related to LHCb's physics programme, such as track reconstruction and particle identification. Before each merge request is accepted, the differences introduced by the change in code are shown and automatically compared using an interactive visualisation tool, allowing easy verification of all relevant quantities. This approach gives extensive control over the physics performance of the new code, resulting in better preparation for data taking with the upgraded LHCb detector in Run 3.
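The core of such a reference-versus-test comparison can be sketched as follows; the quantity names and the tolerance are illustrative, not LHCb's actual counters or thresholds:

```python
# Hypothetical physics-performance counters from a reference build and a
# build containing the merge request under test.
def compare(reference, test, rel_tol=0.005):
    """Return every counter that moved by more than rel_tol relative."""
    regressions = {}
    for key, ref in reference.items():
        new = test[key]
        if abs(new - ref) / ref > rel_tol:
            regressions[key] = (ref, new)
    return regressions

reference = {"track_eff": 0.962, "ghost_rate": 0.045, "kaon_id_eff": 0.912}
test      = {"track_eff": 0.961, "ghost_rate": 0.058, "kaon_id_eff": 0.913}
print(compare(reference, test))  # only ghost_rate moved beyond 0.5%
```

A CI job built around this would attach the flagged differences to the merge request, leaving it to a human reviewer to decide whether the shift is an improvement or a regression.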


2017 ◽  
Vol 898 ◽  
pp. 032030 ◽  
Author(s):  
David Rohr ◽  
Sergey Gorbunov ◽  
Volker Lindenstruth ◽  

2003 ◽  
Vol 18 (31) ◽  
pp. 2149-2168 ◽  
Author(s):  
Thomas Schörner-Sadenius

With their high bunch-crossing and interaction rates and potentially large event sizes, the experiments at the LHC place severe demands on data acquisition and trigger systems. Within the ATLAS experiment, a multi-level trigger system based on hardware and software is employed to cope with the task of event-rate reduction. This review gives an overview of the trigger of the ATLAS experiment, highlighting the design principles and the implementation of the system, and provides references to more detailed information. In addition, first trigger-performance studies and an outlook on the ATLAS event-selection strategy are presented.
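The event-rate reduction across trigger levels is multiplicative: each level keeps only a fraction of what the previous one accepted. With illustrative numbers (a 40 MHz bunch-crossing rate reduced to 100 kHz at the first level and 1 kHz at the software level; the exact rates vary by experiment and run period):

```python
# Illustrative multi-level rate reduction; numbers are examples only.
input_rate = 40e6                                # bunch-crossing rate, Hz
accept_fractions = [100e3 / 40e6, 1e3 / 100e3]   # hardware level, software level

rate = input_rate
for fraction in accept_fractions:
    rate *= fraction                             # each level thins the stream

print(f"recorded rate: {rate:.0f} Hz, overall rejection: {input_rate / rate:.0f}")
```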

