Overview of the high-level trigger electron and photon selection for the ATLAS experiment at the LHC

Author(s):  
A.G. Mello ◽  
A. dos Anjos ◽  
S. Armstrong ◽  
J.T.M. Baines ◽  
C.P. Bee ◽  
...  
2006 ◽  
Vol 53 (5) ◽  
pp. 2839-2843 ◽  

2020 ◽  
Vol 35 (34n35) ◽  
pp. 2044007
Author(s):  
Daniela Maria Köck

Electron and photon triggers are an important part of many physics analyses at the ATLAS experiment in which electron and photon final states are considered. Understanding the performance of the electron and photon triggers at the High-Level Trigger as well as at the Level-1 trigger was crucial for improving and adapting the trigger to the changing run conditions of the Large Hadron Collider in Run 2 (2015–2018).
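As a rough illustration of the two-level selection described above, the following Python sketch applies a coarse Level-1 transverse-energy threshold followed by a tighter High-Level Trigger requirement with isolation and track matching. All thresholds, variable names and the EmCluster structure are hypothetical and are not taken from the ATLAS trigger menu.

```python
# Illustrative sketch of a two-stage electron/photon trigger selection.
# Thresholds and variables are hypothetical, not actual ATLAS menu values.

from dataclasses import dataclass


@dataclass
class EmCluster:
    et: float          # transverse energy in GeV
    isolation: float   # relative calorimeter isolation around the cluster
    has_track: bool    # True if a matching inner-detector track was found


def level1_accept(cluster: EmCluster, et_threshold: float = 22.0) -> bool:
    """Coarse Level-1 decision: a simple ET threshold on the EM cluster."""
    return cluster.et > et_threshold


def hlt_electron_accept(cluster: EmCluster,
                        et_threshold: float = 26.0,
                        iso_max: float = 0.1) -> bool:
    """HLT refinement: tighter ET cut, isolation and a track match."""
    return (cluster.et > et_threshold
            and cluster.isolation < iso_max
            and cluster.has_track)


def trigger_decision(cluster: EmCluster) -> bool:
    """An event is kept only if the candidate passes both trigger levels."""
    return level1_accept(cluster) and hlt_electron_accept(cluster)


if __name__ == "__main__":
    candidate = EmCluster(et=28.5, isolation=0.05, has_track=True)
    print(trigger_decision(candidate))  # True for this example candidate
```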


2010 ◽  
Vol 219 (3) ◽  
pp. 032025
Author(s):  
Andrea Ventura ◽  
the Atlas Collaboration

2019 ◽  
Vol 214 ◽  
pp. 01023
Author(s):  
Nikolina Ilic ◽  
Jos Vermeulen ◽  
Serguei Kolos

During the next major shutdown (2019–2020), the ATLAS experiment at the LHC at CERN will adopt the Front-End Link eXchange (FELIX) system as the interface between the data acquisition, detector control and TTC (Timing, Trigger and Control) systems and new or updated trigger and detector front-end electronics. FELIX will function as a router between custom serial links from front-end ASICs and FPGAs and data collection and processing components connected via a commodity switched network. Links may aggregate many slower links or be a single high-bandwidth link. FELIX will also forward the LHC bunch-crossing clock, fixed-latency trigger accepts and resets received from the TTC system to the front-end electronics. The FELIX system uses commodity server technology in combination with FPGA-based PCIe I/O cards. The FELIX servers will run a software routing platform serving data to network clients. Commodity servers connected to FELIX systems via the same network will run the new Software Readout Driver (SW ROD) infrastructure for event-fragment building and buffering, with support for detector- or trigger-specific data processing, and will serve the data upon request to the ATLAS High-Level Trigger for event building and selection. This proceeding will cover the design and status of FELIX and the SW ROD.
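The fragment-building role of the SW ROD described above can be sketched as follows: fragments arriving from several links are buffered per event identifier until every expected link has reported, after which the complete event can be served on request. The FragmentBuilder class, its methods and the link/event identifiers are hypothetical illustrations of that general pattern, not the actual SW ROD or FELIX software interfaces.

```python
# Schematic sketch of per-event fragment building and buffering.
# Class and method names are hypothetical, not the ATLAS SW ROD API.

from collections import defaultdict


class FragmentBuilder:
    """Buffers fragments per event ID until all expected links have reported."""

    def __init__(self, expected_links: set[int]):
        self.expected_links = expected_links
        self.pending: dict[int, dict[int, bytes]] = defaultdict(dict)
        self.complete: dict[int, dict[int, bytes]] = {}

    def add_fragment(self, event_id: int, link_id: int, payload: bytes) -> None:
        """Store one fragment received from a link."""
        self.pending[event_id][link_id] = payload
        if set(self.pending[event_id]) == self.expected_links:
            # All links delivered their fragment: the event is fully built.
            self.complete[event_id] = self.pending.pop(event_id)

    def serve(self, event_id: int):
        """Serve a built event on request (e.g. to a trigger farm client)."""
        return self.complete.pop(event_id, None)


# Usage example with two hypothetical links
builder = FragmentBuilder(expected_links={0, 1})
builder.add_fragment(event_id=42, link_id=0, payload=b"\x01\x02")
builder.add_fragment(event_id=42, link_id=1, payload=b"\x03\x04")
print(builder.serve(42) is not None)  # True: event 42 was fully built
```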


2019 ◽  
Vol 214 ◽  
pp. 05015 ◽  
Author(s):  
Walter Lampl

The offline software framework of the ATLAS experiment (Athena) consists of many small components of various types, such as Algorithm, Tool or Service. To assemble these components into an executable application for event processing, a dedicated configuration step is necessary. The configuration of a particular job depends on the workflow (simulation, reconstruction, high-level trigger, overlay, calibration, analysis ...) and on the input data (real or simulated data, beam energy, ...). The configuration step is done by executing Python code. The resulting configuration depends on optionally pre-set flags as well as on metadata about the data to be processed. Almost no structure is enforced on the Python configuration code, leaving the full power of Python to the user. While this approach did work, it proved to be error-prone and complicated to use. It also leads to jobs containing more components than they actually need. For LHC Run 3 a more robust system is envisioned. It is still based on Python but enforces a structure and emphasizes modularity. This contribution briefly reports on the configuration system used during LHC Run 1 and Run 2 and details the prototype of an improved system to be used in Run 3 and beyond.
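A modular, flag-driven configuration of the kind envisioned for Run 3 can be sketched, under the assumption of a merge-based accumulator pattern, as follows. All names here (ComponentAccumulatorSketch, tracking_cfg, calorimeter_cfg, the flags dictionary) are illustrative and do not correspond to the actual Athena API.

```python
# Schematic sketch of a modular, merge-based job configuration pattern,
# in the spirit of the Run-3 approach described above. All names are
# illustrative, not the actual Athena/Gaudi configuration classes.


class ComponentAccumulatorSketch:
    """Collects the components one configuration fragment declares."""

    def __init__(self):
        self.components = []  # algorithms, tools, services to schedule

    def add(self, component: str) -> None:
        self.components.append(component)

    def merge(self, other: "ComponentAccumulatorSketch") -> None:
        """Merge another fragment, keeping only explicitly declared
        components so jobs do not carry more than they need."""
        for comp in other.components:
            if comp not in self.components:
                self.components.append(comp)


def tracking_cfg(flags: dict) -> ComponentAccumulatorSketch:
    """One self-contained configuration fragment, driven by flags."""
    acc = ComponentAccumulatorSketch()
    acc.add("TrackFindingAlg")
    if flags.get("isMC", False):
        acc.add("TruthMatchingTool")
    return acc


def calorimeter_cfg(flags: dict) -> ComponentAccumulatorSketch:
    acc = ComponentAccumulatorSketch()
    acc.add("CaloClusterAlg")
    return acc


# Top-level job: merge only the fragments the workflow actually requires.
flags = {"isMC": True}
job = ComponentAccumulatorSketch()
job.merge(tracking_cfg(flags))
job.merge(calorimeter_cfg(flags))
print(job.components)  # ['TrackFindingAlg', 'TruthMatchingTool', 'CaloClusterAlg']
```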

