OVERVIEW ON ARCHITECTURES AND TECHNOLOGIES FOR DATA ACQUISITION AND HIGH LEVEL TRIGGER SYSTEMS

Author(s):  
John Harvey
2020 ◽  
Vol 245 ◽  
pp. 07044
Author(s):  
Frank Berghaus ◽  
Franco Brasolin ◽  
Alessandro Di Girolamo ◽  
Marcus Ebert ◽  
Colin Roy Leavett-Brown ◽  
...  

The Simulation at Point1 (Sim@P1) project was built in 2013 to take advantage of the ATLAS Trigger and Data Acquisition High Level Trigger (HLT) farm. The HLT farm provides around 100,000 cores, which are critical to ATLAS during data taking. When ATLAS is not recording data, such as during the long shutdowns of the LHC, this large compute resource is used to generate and process simulation data for the experiment. At the beginning of the second long shutdown of the Large Hadron Collider, the HLT farm, including the Sim@P1 infrastructure, was upgraded. Previous papers emphasised the need for simple, reliable, and efficient tools and assessed various options for quickly switching between data acquisition operation and offline processing. In this contribution, we describe the new mechanisms put in place for the opportunistic exploitation of the HLT farm for offline processing and report results from the first months of operation.
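The switching described above can be pictured as a two-state machine: the farm is either serving the trigger or running simulation. The sketch below is purely illustrative; the class, state names, and core count handling are our own, not the actual Sim@P1 tooling.

```python
from enum import Enum

class FarmMode(Enum):
    DATA_TAKING = "data_taking"   # HLT farm serves the ATLAS trigger
    SIMULATION = "simulation"     # farm runs Sim@P1 offline workloads

class HLTFarm:
    """Toy model of opportunistic use of a trigger farm (illustrative only)."""

    def __init__(self, cores: int = 100_000):
        self.cores = cores
        self.mode = FarmMode.DATA_TAKING

    def to_simulation(self) -> None:
        # Outside data taking (e.g. an LHC long shutdown), hand the
        # cores over to simulation production.
        self.mode = FarmMode.SIMULATION

    def to_data_taking(self) -> None:
        # A quick, reliable switch back is the key operational requirement.
        self.mode = FarmMode.DATA_TAKING

farm = HLTFarm()
farm.to_simulation()
print(farm.mode.value)  # simulation
```

The point of the model is only that the transition must be a single, reliable operation in each direction, which is the "simple, reliable, and efficient tools" requirement stated in the abstract.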


2019 ◽  
Vol 214 ◽  
pp. 07021
Author(s):  
Frank Berghaus ◽  
Franco Brasolin ◽  
Kevin Casteels ◽  
Colson Driemel ◽  
Marcus Ebert ◽  
...  

The Simulation at Point1 (Sim@P1) project was established in 2013 to take advantage of the Trigger and Data Acquisition High Level Trigger (HLT) farm of the ATLAS experiment at the LHC. The HLT farm is a significant compute resource, which is critical to ATLAS during data taking. This large compute resource is used to generate and process simulation data for the experiment when ATLAS is not recording data. The Sim@P1 system uses virtual machines, deployed by OpenStack, in order to isolate the resources from the ATLAS technical and control network. During the upcoming long shutdown starting in 2019 (LS2), the HLT farm, including the Sim@P1 infrastructure, will be upgraded. A previous paper on the project emphasized the need for “simple, reliable, and efficient tools” to quickly switch between data acquisition operation and offline processing. In this contribution we assess various options for updating and simplifying the provisioning tools. Cloudscheduler, a tool for provisioning cloud resources for batch computing, has been managing cloud resources in HEP offline computing since 2012. We present the argument for choosing Cloudscheduler and describe technical details regarding optimal utilization of the Sim@P1 resources.
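The core idea behind a provisioner like Cloudscheduler is elastic sizing: boot virtual machines while jobs are queued and retire them when the queue drains, within a quota. A minimal sketch of that sizing decision is below; the function name, the jobs-per-VM ratio, and the quota are illustrative assumptions, and the real tool additionally talks to OpenStack and the batch system.

```python
def plan_vms(queued_jobs: int, jobs_per_vm: int = 8, max_vms: int = 50) -> int:
    """Return the target number of VMs for the current job queue.

    Illustrative only: real provisioning also tracks booting and
    draining VMs; here we just compute the desired farm size.
    """
    # Ceiling division: enough VMs to cover the queue, capped by quota.
    needed = -(-queued_jobs // jobs_per_vm)
    return min(max_vms, needed)

# Grow while the queue is deep, shrink to zero when it drains.
print(plan_vms(queued_jobs=100))  # 13
```

Running the decision in a periodic loop, and diffing the target against the currently running VMs, gives the grow/shrink behaviour that makes opportunistic resources like Sim@P1 practical.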


2015 ◽  
Vol 664 (8) ◽  
pp. 082011 ◽  
Author(s):  
M. Frank ◽  
C. Gaspar ◽  
B. Jost ◽  
N. Neufeld

1989 ◽  
Vol 36 (5) ◽  
pp. 1469-1474 ◽  
Author(s):  
F. Bertolino ◽  
F. Bianchi ◽  
R. Cirio ◽  
M.P. Clara ◽  
D. Crosetto ◽  
...  

2019 ◽  
Vol 214 ◽  
pp. 05010 ◽  
Author(s):  
Giulio Eulisse ◽  
Piotr Konopka ◽  
Mikolaj Krzewicki ◽  
Matthias Richter ◽  
David Rohr ◽  
...  

ALICE is one of the four major LHC experiments at CERN. When the accelerator enters the Run 3 data-taking period, starting in 2021, ALICE expects almost 100 times more central Pb-Pb collisions than now, resulting in a large increase in data throughput. In order to cope with this new challenge, the collaboration had to extensively rethink the whole data processing chain, with a tighter integration between the Online and Offline computing worlds. Such a system, code-named ALICE O2, is being developed in collaboration with the FAIR experiments at GSI. It is based on the ALFA framework, which provides a generalized implementation of the ALICE High Level Trigger approach, designed around distributed software entities coordinating and communicating via message passing. We highlight our efforts to integrate ALFA within the ALICE O2 environment. We analyze the challenges arising from the different running environments for production and development, and derive requirements for a flexible and modular software framework. In particular we present the ALICE O2 Data Processing Layer, which deals with ALICE-specific requirements in terms of the Data Model. The main goal is to reduce the complexity of developing algorithms and managing a distributed system, thereby significantly simplifying the work of the large majority of ALICE users.
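The "distributed software entities communicating via message passing" model described above can be sketched in miniature as a chain of devices connected by queues: a source emits event messages, a processor transforms them, and a sink collects the results. The device names, message layout, and use of threads are our own simplification; ALFA's actual devices are separate processes exchanging messages over transport layers such as ZeroMQ.

```python
import queue
import threading

def source(out_q: queue.Queue, n: int) -> None:
    # Emit n event messages, then an end-of-stream marker.
    for i in range(n):
        out_q.put({"event": i, "payload": i * i})
    out_q.put(None)

def processor(in_q: queue.Queue, out_q: queue.Queue) -> None:
    # Transform each message and forward it (stand-in for reconstruction).
    while (msg := in_q.get()) is not None:
        msg["payload"] *= 2
        out_q.put(msg)
    out_q.put(None)

def run_pipeline(n: int) -> list:
    q1, q2 = queue.Queue(), queue.Queue()
    threads = [
        threading.Thread(target=source, args=(q1, n)),
        threading.Thread(target=processor, args=(q1, q2)),
    ]
    for t in threads:
        t.start()
    results = []
    while (msg := q2.get()) is not None:
        results.append(msg["payload"])
    for t in threads:
        t.join()
    return results

print(run_pipeline(4))  # [0, 2, 8, 18]
```

Because each device only sees its input and output channels, devices can be developed and tested in isolation, which is the complexity reduction the Data Processing Layer aims at for framework users.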


2019 ◽  
Vol 214 ◽  
pp. 01037
Author(s):  
Marco Boretto

The aim of the NA62 experiment is to study the extremely rare kaon decay K⁺ → π⁺νν̄ and to measure its branching ratio with 10% accuracy. To do so, a very high intensity beam from the CERN SPS is used to produce charged kaons whose decay products are detected by many detectors installed along a 60 m decay region. The NA62 Data Acquisition system (DAQ) exploits a multi-level trigger system: following a Level 0 (L0) trigger decision, data at a rate of 1 MHz from about 60 sources are read by a PC farm, a partial event is built and then passed through a series of Level 1 (L1) algorithms to further reduce the trigger rate. Events passing this level are completed with the missing, larger data sources (~400 sources) at a rate of 100 kHz. The DAQ is built around a high-performance Ethernet network interconnecting the detectors to a farm of 30 servers. After an overall description of the system design and the main implementation choices that allowed the required performance and functionality to be reached, this paper describes the overall behaviour of the DAQ in the 2017 data-taking period. It concludes with an outlook on possible improvements and upgrades that may be applied to the system in the future.
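The staged rate reduction described above, where only events accepted at L0 are partially built and offered to the tighter L1 selection, can be sketched as two chained filters. The cuts and event fields below are invented for illustration; they are not NA62's actual trigger algorithms.

```python
# Toy model of a two-level trigger chain (cuts are made up for
# illustration; they are not NA62's real selections).

def l0_accept(event: dict) -> bool:
    # Hardware-like Level 0: crude, fast threshold.
    return event["energy"] > 5.0

def l1_accept(event: dict) -> bool:
    # Software Level 1: tighter selection on partially built events.
    return event["energy"] > 5.0 and event["tracks"] == 1

def run_trigger(events: list) -> list:
    after_l0 = [e for e in events if l0_accept(e)]
    # Only L0-accepted events are read out and partially built,
    # so L1 runs on a much smaller stream.
    return [e for e in after_l0 if l1_accept(e)]

events = [
    {"energy": 2.0, "tracks": 1},   # rejected at L0
    {"energy": 9.0, "tracks": 3},   # passes L0, rejected at L1
    {"energy": 7.5, "tracks": 1},   # survives both levels
]
print(len(run_trigger(events)))  # 1
```

The design point is that each level trades selectivity for cost: the cheap L0 cut keeps the readout rate at 1 MHz, so the more expensive L1 algorithms and the full 100 kHz event building only ever see a fraction of the beam-induced activity.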

