The muon high level trigger of the ATLAS experiment

2010 ◽ Vol 219 (3) ◽ pp. 032025
Author(s): Andrea Ventura, the ATLAS Collaboration

2019 ◽ Vol 214 ◽ pp. 01023
Author(s): Nikolina Ilic, Jos Vermeulen, Serguei Kolos

During the next major shutdown in 2019-2020, the ATLAS experiment at the LHC at CERN will adopt the Front-End Link eXchange (FELIX) system as the interface between the data acquisition, detector control and TTC (Timing, Trigger and Control) systems and new or updated trigger and detector front-end electronics. FELIX will function as a router between custom serial links from front-end ASICs and FPGAs and data collection and processing components connected via a commodity switched network. Each link may aggregate many slower links or be a single high-bandwidth link. FELIX will also forward the LHC bunch-crossing clock, fixed-latency trigger accepts and resets received from the TTC system to the front-end electronics. The FELIX system uses commodity server technology in combination with FPGA-based PCIe I/O cards. The FELIX servers will run a software routing platform serving data to network clients. Commodity servers connected to FELIX systems via the same network will run the new Software Readout Driver (SW ROD) infrastructure for event fragment building and buffering, with support for detector- or trigger-specific data processing, and will serve the data on request to the ATLAS High-Level Trigger for event building and selection. This contribution covers the design and status of FELIX and the SW ROD.
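The fragment-building role described for the SW ROD can be pictured with a small sketch. The Python toy below is illustrative only; all names (Fragment, EventBuilder, expected_links) are hypothetical, not the actual ATLAS software interfaces. It shows the idea of buffering fragments arriving from several FELIX links until every expected link has reported for a given event.

```python
# Illustrative sketch only: a toy event-fragment builder in the spirit of the
# SW ROD described above. All names are hypothetical, not ATLAS interfaces.
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Fragment:
    event_id: int   # trigger identifier shared by all fragments of one event
    link_id: int    # identifier of the FELIX link the fragment arrived on
    payload: bytes  # detector-specific data, left opaque here


class EventBuilder:
    """Buffers fragments per event until every expected link has reported."""

    def __init__(self, expected_links):
        self.expected_links = set(expected_links)
        self.pending = defaultdict(dict)  # event_id -> {link_id: payload}

    def add(self, frag: Fragment):
        """Insert one fragment; return the complete event if it is now full."""
        event = self.pending[frag.event_id]
        event[frag.link_id] = frag.payload
        if set(event) == self.expected_links:
            return self.pending.pop(frag.event_id)  # complete fragment map
        return None  # still waiting for other links


# Example: two links contribute to each event.
builder = EventBuilder(expected_links=[0, 1])
builder.add(Fragment(event_id=42, link_id=0, payload=b"\x01"))
complete = builder.add(Fragment(event_id=42, link_id=1, payload=b"\x02"))
assert complete is not None and sorted(complete) == [0, 1]
```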


2019 ◽ Vol 214 ◽ pp. 05015
Author(s): Walter Lampl

The offline software framework of the ATLAS experiment (Athena) consists of many small components of various types, such as Algorithm, Tool or Service. To assemble these components into an executable application for event processing, a dedicated configuration step is necessary. The configuration of a particular job depends on the workflow (simulation, reconstruction, high-level trigger, overlay, calibration, analysis, ...) and the input data (real or simulated data, beam energy, ...). The configuration step is done by executing Python code. The resulting configuration depends on optionally pre-set flags as well as metadata about the data to be processed. Almost no structure is enforced on the Python configuration code, leaving the full power of Python to the user. While this approach did work, it also proved to be error-prone and complicated to use. It also leads to jobs containing more components than they actually need. For LHC Run 3 a more robust system is envisioned. It is still based on Python but enforces a structure and emphasizes modularity. This contribution briefly reports on the configuration system used during LHC Run 1 and Run 2 and details the prototype of an improved system to be used in Run 3 and beyond.
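The enforced structure and modularity described for the Run 3 prototype can be illustrated with a short sketch. The names below (ConfigAccumulator, MagneticFieldCfg, TrackingCfg) are illustrative stand-ins, not the actual Athena interfaces; the point is only that each domain declares the components it needs, pulls in its dependencies, and hands the result back to the caller, so a job does not accumulate components it never uses.

```python
# Minimal sketch of a modular configuration pattern, assuming hypothetical
# names; this is not the actual Athena configuration API.


class ConfigAccumulator:
    """Collects the components one configuration function declares it needs."""

    def __init__(self):
        self.components = []

    def add_component(self, name, **properties):
        self.components.append((name, properties))

    def merge(self, other):
        """Absorb the components declared by a sub-configuration."""
        self.components.extend(other.components)


def MagneticFieldCfg(flags):
    acc = ConfigAccumulator()
    acc.add_component("FieldSvc", useRealData=flags["isRealData"])
    return acc


def TrackingCfg(flags):
    # Each domain declares only what it needs and merges in its dependencies.
    acc = ConfigAccumulator()
    acc.merge(MagneticFieldCfg(flags))
    acc.add_component("TrackingAlg", beamEnergyGeV=flags["beamEnergyGeV"])
    return acc


if __name__ == "__main__":
    flags = {"isRealData": True, "beamEnergyGeV": 6500}  # example flag values
    top = TrackingCfg(flags)
    for name, props in top.components:
        print(name, props)
```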


2006 ◽ Vol 53 (5) ◽ pp. 2839-2843
Author(s): A.G. Mello, A. Dos Anjos, S. Armstrong, J.T.M. Baines, C.P. Bee, ...

2008 ◽ Vol 55 (1) ◽ pp. 165-171
Author(s): N. Berger, T. Bold, T. Eifert, G. Fischer, S. George, ...

2019 ◽ Vol 214 ◽ pp. 07021
Author(s): Frank Berghaus, Franco Brasolin, Kevin Casteels, Colson Driemel, Marcus Ebert, ...

The Simulation at Point1 (Sim@P1) project was established in 2013 to take advantage of the Trigger and Data Acquisition High Level Trigger (HLT) farm of the ATLAS experiment at the LHC. The HLT farm is a significant compute resource that is critical to ATLAS during data taking. This large compute resource is used to generate and process simulation data for the experiment when ATLAS is not recording data. The Sim@P1 system uses virtual machines, deployed by OpenStack, in order to isolate the resources from the ATLAS technical and control network. During the upcoming long shutdown starting in 2019 (LS2), the HLT farm, including the Sim@P1 infrastructure, will be upgraded. A previous paper on the project emphasized the need for “simple, reliable, and efficient tools” to quickly switch between data acquisition operation and offline processing. In this contribution we assess various options for updating and simplifying the provisioning tools. Cloudscheduler is a tool for provisioning cloud resources for batch computing that has been managing cloud resources in HEP offline computing since 2012. We present the argument for choosing Cloudscheduler and describe technical details regarding optimal utilization of the Sim@P1 resources.
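As a rough illustration of the switch between data-taking and simulation use of the farm, the following toy loop captures the idea. The functions atlas_is_taking_data(), boot_simulation_vms() and drain_simulation_vms() are hypothetical placeholders, not real OpenStack or cloudscheduler calls.

```python
# Toy sketch of the resource-switching idea behind Sim@P1: run simulation VMs
# on the HLT farm only while ATLAS is not taking data. All functions below are
# hypothetical placeholders, not real OpenStack or cloudscheduler interfaces.
import time


def atlas_is_taking_data() -> bool:
    """Placeholder for a query of the current data-acquisition state."""
    return False


def boot_simulation_vms(count: int) -> None:
    print(f"booting {count} simulation VMs on idle HLT nodes")


def drain_simulation_vms() -> None:
    print("draining simulation VMs to hand the nodes back to the HLT")


def switching_loop(polls: int, poll_seconds: float = 1.0, batch: int = 100) -> None:
    """Poll the DAQ state and switch the farm between HLT and simulation use."""
    simulating = False
    for _ in range(polls):
        if atlas_is_taking_data():
            if simulating:
                drain_simulation_vms()   # give the farm back for data taking
                simulating = False
        elif not simulating:
            boot_simulation_vms(batch)   # use the otherwise idle farm
            simulating = True
        time.sleep(poll_seconds)


switching_loop(polls=3)
```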


2021 ◽ Vol 251 ◽ pp. 04006
Author(s): Alexander Paramonov

The Front-End Link eXchange (FELIX) system is an interface between the trigger and detector electronics and commodity switched networks for the ATLAS experiment at CERN. In preparation for LHC Run 3, starting in 2022, the system is being installed to read out the new electromagnetic calorimeter, calorimeter trigger, and muon components being installed as part of the ongoing ATLAS upgrade programme. The detector and trigger electronic systems are largely custom and fully synchronous with the 40.08 MHz clock of the Large Hadron Collider (LHC). The FELIX system uses FPGAs on server-hosted PCIe boards to pass data between custom data links connected to the detector and trigger electronics and host system memory over a PCIe interface; a dedicated software platform running on these servers then routes the data to network clients, such as the Software Readout Drivers (SW ROD). The SW RODs build event fragments, buffer data, perform detector-specific processing and provide data on request to the ATLAS High-Level Trigger. The FELIX approach takes advantage of modern FPGAs and commodity computing to reduce the system complexity and the effort needed to support data acquisition systems in comparison to previous designs. Future upgrades of the experiment will extend FELIX to read out all other detector components.
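The routing role of the FELIX software platform can be sketched as a simple publish/subscribe dispatcher. The names below (Block, LinkRouter) are hypothetical and only illustrate the idea of forwarding each data block, read from the PCIe card into host memory, to the network clients subscribed to its link; they are not the real FELIX API.

```python
# Illustrative sketch only: routing data blocks from FELIX links to network
# subscribers, in the spirit of the software platform described above.
from collections import defaultdict
from dataclasses import dataclass
from typing import Callable


@dataclass
class Block:
    link_id: int    # identifier of the front-end link the block came from
    payload: bytes  # data read from the PCIe card into host memory


class LinkRouter:
    """Forwards each block to every client subscribed to its link."""

    def __init__(self):
        self.subscribers = defaultdict(list)  # link_id -> [delivery callback]

    def subscribe(self, link_id: int, deliver: Callable[[bytes], None]) -> None:
        self.subscribers[link_id].append(deliver)

    def route(self, block: Block) -> None:
        for deliver in self.subscribers[block.link_id]:
            deliver(block.payload)


# Example: a SW ROD-like client subscribes to link 7.
router = LinkRouter()
router.subscribe(7, lambda payload: print("SW ROD received", payload))
router.route(Block(link_id=7, payload=b"\xca\xfe"))
```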

