ATLAS Sim@P1 upgrades during long shutdown two

2020 ◽  
Vol 245 ◽  
pp. 07044
Author(s):  
Frank Berghaus ◽  
Franco Brasolin ◽  
Alessandro Di Girolamo ◽  
Marcus Ebert ◽  
Colin Roy Leavett-Brown ◽  
...  

The Simulation at Point1 (Sim@P1) project was built in 2013 to take advantage of the ATLAS Trigger and Data Acquisition High Level Trigger (HLT) farm. The HLT farm provides around 100,000 cores, which are critical to ATLAS during data taking. When ATLAS is not recording data, such as during the long shutdowns of the LHC, this large compute resource is used to generate and process simulation data for the experiment. At the beginning of the second long shutdown of the Large Hadron Collider, the HLT farm, including the Sim@P1 infrastructure, was upgraded. Previous papers emphasised the need for simple, reliable, and efficient tools and assessed various options to quickly switch between data acquisition operation and offline processing. In this contribution, we describe the new mechanisms put in place for the opportunistic exploitation of the HLT farm for offline processing and give the results from the first months of operation.

2019 ◽  
Vol 214 ◽  
pp. 07021
Author(s):  
Frank Berghaus ◽  
Franco Brasolin ◽  
Kevin Casteels ◽  
Colson Driemel ◽  
Marcus Ebert ◽  
...  

The Simulation at Point1 (Sim@P1) project was established in 2013 to take advantage of the Trigger and Data Acquisition High Level Trigger (HLT) farm of the ATLAS experiment at the LHC. The HLT farm is a significant compute resource, which is critical to ATLAS during data taking. This large compute resource is used to generate and process simulation data for the experiment when ATLAS is not recording data. The Sim@P1 system uses virtual machines, deployed by OpenStack, in order to isolate the resources from the ATLAS technical and control network. During the upcoming long shutdown in 2019 (LS2), the HLT farm including the Sim@P1 infrastructure will be upgraded. A previous paper on the project emphasized the need for “simple, reliable, and efficient tools” to quickly switch between data acquisition operation and offline processing. In this contribution we assess various options for updating and simplifying the provisional tools. Cloudscheduler is a tool for provisioning cloud resources for batch computing that has been managing cloud resources in HEP offline computing since 2012. We present the argument for choosing Cloudscheduler, and describe technical details regarding optimal utilization of the Sim@P1 resources.
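
As an illustration of the kind of provisioning such a system performs, the sketch below boots a single batch worker VM through the OpenStack API using the openstacksdk Python client. It is only a minimal, hypothetical example: the cloud profile, image, flavour, network and server names are placeholders and do not correspond to the actual Sim@P1 or Cloudscheduler configuration.

    import openstack  # openstacksdk client

    # Credentials and endpoint are taken from a clouds.yaml profile (placeholder name).
    conn = openstack.connect(cloud="simatp1")

    # Look up the (placeholder) image, flavour and network for a batch worker.
    image = conn.compute.find_image("worker-node-image")
    flavor = conn.compute.find_flavor("m1.xlarge")
    network = conn.network.find_network("batch-net")

    # Boot the virtual machine; in practice a cloud-init payload would also be
    # supplied so the worker joins the batch pool (e.g. HTCondor) on start-up.
    server = conn.compute.create_server(
        name="sim-worker-001",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )

    # Block until the VM reaches ACTIVE; it can then start pulling simulation jobs.
    server = conn.compute.wait_for_server(server)
    print(server.name, server.status)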


2021 ◽  
Vol 6 (1) ◽  
Author(s):  
R. Aaij ◽  
M. Adinolfi ◽  
S. Aiola ◽  
S. Akar ◽  
J. Albrecht ◽  
...  

The Large Hadron Collider beauty (LHCb) experiment at CERN is undergoing an upgrade in preparation for the Run 3 data collection period at the Large Hadron Collider (LHC). As part of this upgrade, the trigger is moving to a full software implementation operating at the LHC bunch crossing rate. We present an evaluation of a CPU-based and a GPU-based implementation of the first stage of the high-level trigger. After a detailed comparison, both options are found to be viable. This document summarizes the performance and implementation details of these options, the outcome of which has led to the choice of the GPU-based implementation as the baseline.


2020 ◽  
Vol 35 (34n35) ◽  
pp. 2044007
Author(s):  
Daniela Maria Köck

Electron and photon triggers are an important part of many physics analyses at the ATLAS experiment, where electron and photon final states are considered. Understanding the performance of electron and photon triggers at the High Level Trigger as well as at the Level-1 trigger was crucial for improving and adapting the trigger during the changing run conditions of the Large Hadron Collider in Run 2 (2015–2018).


2010 ◽  
Vol 25 (10) ◽  
pp. 749-766
Author(s):  
VIVIAN O'DELL

The CMS Trigger and Data Acquisition Systems have been installed and commissioned and are awaiting data at the Large Hadron Collider. In this article, we describe what factors drove the design and architecture of the systems.


2019 ◽  
Vol 214 ◽  
pp. 07017
Author(s):  
Jean-Marc Andre ◽  
Ulf Behrens ◽  
James Branson ◽  
Philipp Brummer ◽  
Olivier Chaze ◽  
...  

The primary goal of the online cluster of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) is to build event data from the detector and to select interesting collisions in the High Level Trigger (HLT) farm for offline storage. With more than 1500 nodes and a capacity of about 850 kHEPSpecInt06, the HLT machines represent a computing capacity similar to that of all the CMS Tier-1 Grid sites combined. Moreover, the cluster is currently connected to the CERN IT datacenter via a dedicated 160 Gbps network link and can therefore access the remote EOS-based storage with high bandwidth. In the last few years, a cloud overlay based on OpenStack has been commissioned to use these resources for the WLCG when they are not needed for data taking. This online cloud facility was designed for parasitic use of the HLT, which must never interfere with its primary function as part of the DAQ system. It also abstracts away the different types of machines and their underlying segmented networks. During the LHC technical stop periods, the HLT cloud is set to its static mode of operation, where it acts like any other grid facility. The online cloud was also extended to make dynamic use of resources during the periods between LHC fills. These periods are a priori unscheduled and of undetermined length, typically lasting several hours and occurring once or more per day. For that, the cloud dynamically follows the LHC beam states and hibernates Virtual Machines (VMs) accordingly. Finally, this work presents the design and implementation of a mechanism to dynamically ramp up VMs when the DAQ load on the HLT decreases towards the end of a fill.
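
The dynamic use of resources between fills can be pictured with a short, purely illustrative sketch: a loop that follows the accelerator state and suspends or resumes the cloud's virtual machines through the OpenStack API. The beam-state helper, cloud profile and polling interval are hypothetical placeholders, not the actual CMS implementation.

    import time
    import openstack

    conn = openstack.connect(cloud="cms-online")  # placeholder cloud profile

    def lhc_in_stable_beams() -> bool:
        """Hypothetical stand-in for whatever service publishes the LHC beam mode."""
        raise NotImplementedError

    while True:
        data_taking = lhc_in_stable_beams()
        for server in conn.compute.servers():
            if data_taking and server.status == "ACTIVE":
                # Free the node for its primary DAQ/HLT role during data taking.
                conn.compute.suspend_server(server)
            elif not data_taking and server.status == "SUSPENDED":
                # Hand the node back to the cloud overlay between fills.
                conn.compute.resume_server(server)
        time.sleep(60)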


2020 ◽  
Vol 245 ◽  
pp. 05004
Author(s):  
Rosen Matev ◽  
Niklas Nolte ◽  
Alex Pearce

For Run 3 of the Large Hadron Collider, the final stage of the LHCb experiment’s high-level trigger must process 100 GB/s of input data. This corresponds to an input rate of 1 MHz, an order of magnitude larger than in Run 2. The trigger is responsible for selecting all physics signals that form part of the experiment’s broad research programme, and as such defines thousands of analysis-specific selections that together comprise tens of thousands of algorithm instances. The configuration of such a system needs to be extremely flexible to handle the large number of different studies it must accommodate. However, it must also be robust and easy to understand, allowing analysts to implement and understand their own selections without introducing errors. A Python-based system for configuring the data and control flow of the Gaudi-based trigger application is presented. It is designed to be user-friendly by using functions for modularity and by removing the indirection layers employed previously in Run 2. Robustness is achieved by reducing global state and instead building the data-flow graph in a functional manner, whilst keeping the full call stack configurable.
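
The functional style described above can be illustrated with a small, hypothetical Python sketch: algorithms are plain values, the data flow is expressed by passing producers directly as inputs to consumers, and each selection is just a function call. The names used here (Algorithm, make_tracks, make_line) are illustrative only and are not the real Gaudi/LHCb configuration API.

    from dataclasses import dataclass, field

    @dataclass
    class Algorithm:
        """A node in the data-flow graph: a name, its input producers, and properties."""
        name: str
        inputs: list = field(default_factory=list)
        properties: dict = field(default_factory=dict)

    def make_raw_data():
        return Algorithm("DecodeRawEvent")

    def make_tracks(min_pt):
        # The data flow is built functionally: the producer is passed in directly,
        # with no global registry or indirection layer in between.
        return Algorithm("TrackReconstruction",
                         inputs=[make_raw_data()],
                         properties={"MinPT": min_pt})

    def make_line(name, min_pt):
        return Algorithm(name, inputs=[make_tracks(min_pt)])

    # Each analysis-specific selection is just another function call, so thousands
    # of selections can be composed while every property stays configurable.
    lines = [make_line("HighPtTrackLine", min_pt=5000.0),
             make_line("LowPtTrackLine", min_pt=500.0)]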


2021 ◽  
Vol 251 ◽  
pp. 04019
Author(s):  
Andrei Kazarov ◽  
Adrian Chitan ◽  
Andrei Kazymov ◽  
Alina Corso-Radu ◽  
Igor Aleksandrov ◽  
...  

The ATLAS experiment at the Large Hadron Collider (LHC) operated very successfully in the years 2008 to 2018, in two periods identified as Run 1 and Run 2. ATLAS achieved an overall data-taking efficiency of 94%, largely constrained by the irreducible dead-time introduced to accommodate the limitations of the detector read-out electronics. Of the 6% dead-time, only about 15% could be attributed to the central trigger and DAQ system, and of that, a negligible fraction was due to the Control and Configuration subsystem. Despite these achievements, and in order to further improve the already excellent efficiency of the whole DAQ system in the coming Run 3, a new campaign of software updates was launched for the second long LHC shutdown (LS2). This paper presents, using a few selected examples, how the work was approached and which new technologies were introduced into the ATLAS Control and Configuration software. Although these are specific to this system, many of the solutions can be considered for and adapted to other distributed DAQ systems.
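
For scale, the figures quoted above combine to give the share of total running time lost to central trigger and DAQ dead-time:

    \text{total dead-time} = 1 - 0.94 = 6\%, \qquad 0.15 \times 6\% \approx 0.9\% \text{ of total running time}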


2018 ◽  
Vol 2 (2) ◽  
pp. 359-373
Author(s):  
A. J. Barr ◽  
A. Haas ◽  
C.W. Kalderon

The engagement of citizen scientists with the HiggsHunters.org citizen science project is investigated through analysis of behaviour, discussion and survey data. More than 38,000 citizen scientists from 179 countries participated, classifying 1.5 million features of interest on about 39,000 distinct images. While most citizen scientists classified only a handful of images, some classified hundreds or even thousands. Analysis of frequently used terms on the dedicated discussion forum demonstrated that a high level of scientific engagement was not uncommon. Evidence was found for an emergent and distinct technical vocabulary developing within the citizen science community. A survey indicates a high level of engagement and an appetite for further citizen science projects related to the Large Hadron Collider.

