A Comparison of CPU and GPU Implementations for the LHCb Experiment Run 3 Trigger

2021 ◽  
Vol 6 (1) ◽  
Author(s):  
R. Aaij ◽  
M. Adinolfi ◽  
S. Aiola ◽  
S. Akar ◽  
J. Albrecht ◽  
...  

Abstract
The Large Hadron Collider beauty (LHCb) experiment at CERN is undergoing an upgrade in preparation for the Run 3 data collection period at the Large Hadron Collider (LHC). As part of this upgrade, the trigger is moving to a full software implementation operating at the LHC bunch crossing rate. We present an evaluation of a CPU-based and a GPU-based implementation of the first stage of the high-level trigger. After a detailed comparison, both options are found to be viable. This document summarizes the performance and implementation details of these options, the outcome of which has led to the choice of the GPU-based implementation as the baseline.

2020 ◽  
Vol 245 ◽  
pp. 07044
Author(s):  
Frank Berghaus ◽  
Franco Brasolin ◽  
Alessandro Di Girolamo ◽  
Marcus Ebert ◽  
Colin Roy Leavett-Brown ◽  
...  

The Simulation at Point1 (Sim@P1) project was built in 2013 to take advantage of the ATLAS Trigger and Data Acquisition High Level Trigger (HLT) farm. The HLT farm provides around 100,000 cores, which are critical to ATLAS during data taking. When ATLAS is not recording data, such as during the long shutdowns of the LHC, this large compute resource is used to generate and process simulation data for the experiment. At the beginning of the second long shutdown of the Large Hadron Collider, the HLT farm, including the Sim@P1 infrastructure, was upgraded. Previous papers emphasised the need for simple, reliable, and efficient tools and assessed various options to quickly switch between data acquisition operation and offline processing. In this contribution, we describe the new mechanisms put in place for the opportunistic exploitation of the HLT farm for offline processing and give the results from the first months of operation.


2019 ◽  
Vol 214 ◽  
pp. 01006
Author(s):  
Jean-Marc André ◽  
Ulf Behrens ◽  
James Branson ◽  
Philipp Brummer ◽  
Sergio Cittolin ◽  
...  

The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) assembles 2 MB events at a rate of 100 kHz. The event builder collects event fragments from about 750 sources and assembles them into complete events, which are then handed to the High-Level Trigger (HLT) processes running on O(1000) computers. The aging event-building hardware will be replaced during the long shutdown 2 of the LHC taking place in 2019/20. The future data networks will be based on 100 Gb/s interconnects using Ethernet and InfiniBand technologies. More powerful computers may make it possible to combine the currently separate functionality of the readout and builder units into a single I/O processor handling 100 Gb/s of input and output traffic simultaneously. It might be beneficial to preprocess data originating from specific detector parts or regions before handing it to generic HLT processors. Therefore, we will investigate how specialized coprocessors, e.g. GPUs, could be integrated into the event builder. We will present the envisioned changes to the event builder compared to today's system. Initial measurements of the performance of the data networks under the event-building traffic pattern will be shown. Implications of a folded network architecture for the event building and corresponding changes to the software implementation will be discussed.
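The throughput figures quoted in the abstract (2 MB events at 100 kHz, about 750 fragment sources, 100 Gb/s links) can be cross-checked with a short back-of-envelope sketch; the script and its variable names are illustrative, not part of the CMS software.

```python
# Aggregate event-builder throughput: 2 MB events at 100 kHz.
event_size_bits = 2e6 * 8          # 2 MB per event, in bits
rate_hz = 100e3                    # Level-1 accept rate: 100 kHz
aggregate_gbps = event_size_bits * rate_hz / 1e9
print(aggregate_gbps)              # 1600.0 Gb/s total

# Average per-source bandwidth across ~750 fragment sources.
per_source_gbps = aggregate_gbps / 750
print(round(per_source_gbps, 2))   # ~2.13 Gb/s per source

# Lower bound on combined readout/builder nodes, if each I/O processor
# sustains 100 Gb/s of input (and the same in output) simultaneously.
min_nodes = aggregate_gbps / 100
print(min_nodes)                   # 16.0
```

The 1.6 Tb/s aggregate makes clear why the folded, 100 Gb/s-per-node architecture mentioned above is attractive: a modest number of I/O processors can in principle carry the full event-building load.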


2020 ◽  
Vol 35 (34n35) ◽  
pp. 2044007
Author(s):  
Daniela Maria Köck

Electron and photon triggers are an important part of many physics analyses at the ATLAS experiment in which electron and photon final states are considered. Understanding the performance of electron and photon triggers at the High Level Trigger as well as the Level-1 trigger was crucial for improving and adapting the trigger during the changing run conditions of the Large Hadron Collider in Run 2 (2015–2018).


2022 ◽  
Vol 17 (01) ◽  
pp. C01046
Author(s):  
P. Kopciewicz ◽  
S. Maccolini ◽  
T. Szumlak

Abstract
The Vertex Locator (VELO) is a silicon tracking detector in the spectrometer of the Large Hadron Collider beauty (LHCb) experiment. LHCb explores and investigates CP violation phenomena in b- and c-hadron decays and is one of the experiments operating at the Large Hadron Collider (LHC) at CERN. After run 1 and run 2 of LHC data taking (2011–2018), the LHCb detectors are being modernized within the LHCb upgrade I program. The upgrade aims to enable the spectrometer to be read out at the full LHC bunch-crossing frequency of 40 MHz, which requires radical changes to the technologies currently used in LHCb. The hardware trigger is removed, and some of the detectors are replaced. The VELO changes its tracking technology: silicon strips are replaced by 55 μm-pitch silicon pixels. The readout chip for the VELO upgrade is the VeloPix ASIC. The number of readout channels increases to over 40 million, and the hottest ASIC is expected to produce an output data rate of 15 Gbit/s. These new conditions challenge both the software and the hardware side of the readout system and demand special attention to detector monitoring. This paper presents the upgraded VELO design and outlines the software aspects of the detector calibration in upgrade I. An overview of the challenges foreseen for upgrade II is given.
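The "over 40 million readout channels" figure can be sanity-checked with a short sketch. The 256 × 256 pixel matrix is the VeloPix design; the layout of 52 modules with 12 ASICs each is an assumption introduced here for illustration, not stated in the abstract.

```python
# Cross-check of the ">40 million readout channels" figure for the
# upgraded VELO. Layout numbers below are assumptions for illustration.
PIXELS_PER_ASIC = 256 * 256   # VeloPix pixel matrix
ASICS_PER_MODULE = 12         # assumed module layout
N_MODULES = 52                # assumed number of modules

total_channels = PIXELS_PER_ASIC * ASICS_PER_MODULE * N_MODULES
print(total_channels)         # 40894464, i.e. just over 40 million
```

Under these assumptions the total lands just above 40 million channels, consistent with the figure quoted in the abstract.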


2018 ◽  
Vol 8 (4) ◽  
pp. 226 ◽  
Author(s):  
Azweed Mohamad ◽  
Radzuwan Ab Rashid ◽  
Kamariah Yunus ◽  
Shireena Basree Abdul Rahman ◽  
Saadiyah Darus ◽  
...  

This paper discusses the speech acts in Facebook Status Updates posted by an apostate of Islam. The Facebook Timeline was observed for a duration of two years (January 2015 to December 2016). More than 4000 postings were made in the data collection period; however, only 648 postings related to apostasy. The data were classified according to the types of speech acts. Expressive speech acts were the most frequent (33%, n=215), followed by directive (27%, n=177), assertive (22%, n=141), and commissive (18%, n=115). Based on the speech acts used, it is discernible that the apostate attempts to engage other Facebook users and persuade them into accepting her ideology while gaining their support. This paper is novel in that it documents the social actions of an apostate, which are scarce in the literature. It is also methodologically innovative in using social media postings as a tool to explore the apostate’s social actions in an online space.


Author(s):  
Edmund M. Ricci ◽  
Ernesto A. Pretto ◽  
Knut Ole Sundnes

We strongly recommend that a ‘scout survey’ of the disaster site be implemented prior to the initiation of the principal study, in order to obtain the types of detailed information required to prepare a research plan and a plan for working with a full research team during the data collection period. The scout survey step requires that one or two researchers go to the disaster site within two or three weeks following the disaster to prepare for a larger team visit, which would initiate work within two to three months post disaster. ‘Scout team’ visits are an essential mechanism for developing and/or revising the data collection instruments, for securing collaboration of local officials who will facilitate the study, for identifying key informants and members of the stakeholder group, and for obtaining background information needed for the sample design. We believe it is essential that the initial data collection be completed as soon as possible after the disaster event ends, in order to minimize memory loss. It is also of great importance that the primary data collection phase be conducted efficiently, within a period of approximately seven to ten days, although additional data may be added subsequent to the primary data collection period as the need for it becomes apparent. It is also likely that a data gathering effort, such as a survey involving large numbers of individuals, may continue after the main data collection team members have returned to their home institutions. The amount of thorough and detailed planning required to achieve the 7–10 day goal virtually mandates a pre-visit by a scout team.


2020 ◽  
Vol 105 (9) ◽  
pp. e8.1-e8
Author(s):  
Yusuf Asif ◽  
Chi Huynh ◽  
Awais Hussain

Aim
The effectiveness of proton pump inhibitors (PPIs) has been demonstrated. Nevertheless, the choice of PPI that should be used is less absolute. Clinical effectiveness, availability of the formulation, co-morbidities, route of administration and lowest acquisition cost are all considerations that should be accounted for when determining the appropriate therapy. Current Trust guidance recommends lansoprazole capsules and oral dispersible tablets as the first-line PPI, unless other indications preclude its use. Other UK hospitals have audited PPI prescribing, and their findings highlight that adherence upon deployment was poor.1 2 This audit aims to assess whether written outpatient prescriptions adhere to the guidelines.

Method
This study was conducted prospectively in the outpatient pharmacy between February and March 2019. The defined data collection period was 5 weeks, which included a 1-week pilot study. Data collection involved reviewing all outpatient prescriptions on which PPIs were prescribed, noting whether the Trust’s guidance on PPI prescribing was being adhered to. Data were collected via a structured pro forma to assess the percentage compliance against three predetermined standards:
Standard 1 - Is the PPI prescribed appropriate?
Standard 2 - Is there a documented indication for the prescribed PPI?
Standard 3 - Is the dose appropriate for the patient?

Results
There were a total of 84 prescriptions received from 13 different specialties. The age range of patients was 1 month to 16 years, with a mean age of 7.66 years (median 7 years). The overall compliance with the Trust’s guidelines for standards 1, 2 and 3 was 76%, 88% and 100% respectively. The infant and toddler age group (28 days – 23 months) showed the least compliance with standard 1, the choice of appropriate PPI (63%). The most common indication was gastro-oesophageal reflux disease. Paediatric Gastroenterology received the greatest number of prescriptions over the data collection period. 12% of prescriptions did not have a documented indication, and the most common PPI prescribed in the outpatient pharmacy was lansoprazole, which accounted for 64 (77%) of the prescriptions.

Conclusion
The findings of this study are consistent with those of other audits conducted in UK hospitals in which compliance with PPI guidelines was explored. Possible factors contributing to the low levels of adherence are problems with implementation, lack of enforcement of the guidelines, patient/guardian preferences and the presence of enteral feeding tubes. Clinicians should monitor their prescribing and, where applicable, switch patients currently on omeprazole suspension to lansoprazole oral dispersible tablets/capsules. This could lead to significant monetary savings for the Trust.

References
1. Derbyshire Joint Area Prescribing Committee. Gastro-oesophageal reflux disease: recognition, diagnosis and management in children and young people. 2015.
2. Pan Mersey Area Prescribing Committee. Pharmacological management of gastro-oesophageal reflux disease (GORD) in children and young people in primary and secondary care. 2016.


2019 ◽  
Vol 214 ◽  
pp. 07017
Author(s):  
Jean-Marc Andre ◽  
Ulf Behrens ◽  
James Branson ◽  
Philipp Brummer ◽  
Olivier Chaze ◽  
...  

The primary goal of the online cluster of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) is to build event data from the detector and to select interesting collisions in the High Level Trigger (HLT) farm for offline storage. With more than 1500 nodes and a capacity of about 850 kHEPSpecInt06, the HLT machines represent computing capacity similar to that of all the CMS Tier-1 Grid sites together. Moreover, the cluster is currently connected to the CERN IT datacenter via a dedicated 160 Gbps network connection and can hence access the remote EOS-based storage with high bandwidth. In the last few years, a cloud overlay based on OpenStack has been commissioned to use these resources for the WLCG when they are not needed for data taking. This online cloud facility was designed for parasitic use of the HLT, which must never interfere with its primary function as part of the DAQ system. It also abstracts away the different types of machines and their underlying segmented networks. During the LHC technical stop periods, the HLT cloud is set to its static mode of operation, where it acts like other grid facilities. The online cloud was also extended to make dynamic use of resources during the periods between LHC fills. These periods are a priori unscheduled and of undetermined length, typically several hours, occurring once or more per day. For that, the cloud dynamically follows the LHC beam states and hibernates Virtual Machines (VMs) accordingly. Finally, this work presents the design and implementation of a mechanism to dynamically ramp up VMs when the DAQ load on the HLT decreases towards the end of a fill.
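The beam-following behaviour described above can be sketched as a small state-to-action mapping. This is a hedged illustration of the idea, not the CMS cloud controller: the `BeamState` values and the `desired_vm_action` helper are invented names, and the real system reacts to many more machine states.

```python
from enum import Enum

class BeamState(Enum):
    """Coarse LHC machine states relevant to the overlay cloud (illustrative)."""
    STABLE_BEAMS = "stable_beams"      # data taking: DAQ owns the HLT farm
    INTERFILL = "interfill"            # gap between fills: cloud may run
    TECHNICAL_STOP = "technical_stop"  # long stop: static, grid-like mode

def desired_vm_action(state: BeamState) -> str:
    """Map a beam state to the action applied to the overlay VMs.

    The overlay must never interfere with data taking, so when stable
    beams are declared the VMs are hibernated; otherwise they resume
    opportunistic grid work.
    """
    if state is BeamState.STABLE_BEAMS:
        return "hibernate"
    return "resume"

# Example: between fills the cloud wakes its VMs back up.
print(desired_vm_action(BeamState.INTERFILL))  # resume
```

Hibernating rather than destroying the VMs matches the short, unscheduled inter-fill windows described above: suspended jobs can continue within seconds once the beam is dumped, instead of paying a full boot-and-dispatch cycle.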


2008 ◽  
Vol 113 (G1) ◽  
pp. n/a-n/a ◽  
Author(s):  
Rafael Rosolem ◽  
William James Shuttleworth ◽  
Luis Gustavo Gonçalves de Gonçalves
