Computing and Software for Big Science
Latest Publications


TOTAL DOCUMENTS: 69 (last five years: 54)

H-INDEX: 10 (last five years: 6)

Published by Springer-Verlag

ISSN: 2510-2044, 2510-2036

2022, Vol 6 (1)
Author(s): Marco Rossi, Sofia Vallecorsa

Abstract: In this work, we investigate different machine learning-based strategies for denoising raw simulation data from the ProtoDUNE experiment. The ProtoDUNE detector is hosted by CERN and aims to test and calibrate the technologies for DUNE, a forthcoming experiment in neutrino physics. The reconstruction workflow consists of converting digital detector signals into physical high-level quantities. We address the first step in reconstruction, namely raw data denoising, leveraging deep learning algorithms. We design two architectures based on graph neural networks, aiming to enhance the receptive field of basic convolutional neural networks. We benchmark this approach against traditional algorithms implemented by the DUNE collaboration. We also test graph neural network hardware accelerator setups to speed up the training and inference processes.
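
As an illustration of the enlarged receptive field the abstract refers to, the sketch below shows a toy graph-convolution layer in which each raw-data node mixes its own features with the mean of its graph neighbours. The layer, tensor shapes, and the random adjacency are purely illustrative assumptions, not the architectures studied by the authors.

```python
import torch
import torch.nn as nn

class GraphDenoiseLayer(nn.Module):
    """Toy message-passing layer: each node mixes its own features with the
    mean of its graph neighbours, widening the receptive field beyond a
    fixed convolutional kernel."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.self_lin = nn.Linear(in_dim, out_dim)
        self.neigh_lin = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, in_dim) raw features, adj: (num_nodes, num_nodes)
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        neigh_mean = adj @ x / deg            # average over connected nodes
        return torch.relu(self.self_lin(x) + self.neigh_lin(neigh_mean))

# Hypothetical usage on a small graph of detector channels
x = torch.randn(128, 8)                       # 128 nodes, 8 input features
adj = (torch.rand(128, 128) < 0.05).float()   # random sparse adjacency
denoised = GraphDenoiseLayer(8, 8)(x, adj)
```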


2021, Vol 6 (1)
Author(s): R. Aaij, M. Adinolfi, S. Aiola, S. Akar, J. Albrecht, ...

Abstract: The Large Hadron Collider beauty (LHCb) experiment at CERN is undergoing an upgrade in preparation for the Run 3 data collection period at the Large Hadron Collider (LHC). As part of this upgrade, the trigger is moving to a full software implementation operating at the LHC bunch crossing rate. We present an evaluation of a CPU-based and a GPU-based implementation of the first stage of the high-level trigger. After a detailed comparison, both options are found to be viable. This document summarizes the performance and implementation details of these options, the outcome of which has led to the choice of the GPU-based implementation as the baseline.


2021, Vol 5 (1)
Author(s): Domenico Giordano, Manfred Alef, Luca Atzori, Jean-Michel Barbet, Olga Datskova, ...

Abstract: The HEPiX Benchmarking Working Group has developed a framework to benchmark the performance of a computational server using the software applications of the High Energy Physics (HEP) community. This framework consists of two main components, named HEP-Workloads and HEPscore. HEP-Workloads is a collection of standalone production applications provided by a number of HEP experiments. HEPscore is designed to run HEP-Workloads and provide an overall measurement that is representative of the computing power of a system. HEPscore is able to measure the performance of systems with different processor architectures and accelerators. The framework is completed by the HEP Benchmark Suite, which simplifies the process of executing HEPscore and other benchmarks such as HEP-SPEC06, SPEC CPU 2017, and DB12. This paper describes the motivation, the design choices, and the results achieved by the HEPiX Benchmarking Working Group. A perspective on future plans is also presented.
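
As a rough illustration of how per-workload results might be folded into a single figure of merit, the sketch below normalizes each workload to a reference machine and combines the ratios with a geometric mean. The aggregation rule, workload names, and throughput numbers are assumptions made for the example, not the published HEPscore definition.

```python
import math

def aggregate_score(workload_scores: dict[str, float],
                    reference_scores: dict[str, float]) -> float:
    """Normalize each workload to a reference machine and combine the
    ratios with a geometric mean, so no single workload dominates."""
    ratios = [workload_scores[w] / reference_scores[w] for w in workload_scores]
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

# Hypothetical throughputs (events/s) for three containerized workloads
measured = {"gen-sim": 4.2, "digi-reco": 1.8, "analysis": 9.5}
reference = {"gen-sim": 3.0, "digi-reco": 1.5, "analysis": 8.0}
print(f"overall score: {aggregate_score(measured, reference):.3f}")
```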


2021, Vol 5 (1)
Author(s): Jamie Heredge, Charles Hill, Lloyd Hollenberg, Martin Sevior

Abstract: Quantum computers have the potential to speed up certain computational tasks. One possibility this opens up within the field of machine learning is the use of quantum techniques that may be inefficient to simulate classically but could provide superior performance in some tasks. Machine learning algorithms are ubiquitous in particle physics, and as quantum machine learning technology advances, there may be a similar adoption of these quantum techniques. In this work, a quantum support vector machine (QSVM) is implemented for signal-background classification. We investigate the effect of different quantum encoding circuits, the process that transforms classical data into a quantum state, on the final classification performance. We show an encoding approach that achieves an average area under the receiver operating characteristic curve (AUC) of 0.848, determined using quantum circuit simulations. For the same dataset, the best classical method tested, a classical support vector machine (SVM) using the radial basis function (RBF) kernel, achieved an AUC of 0.793. Using a reduced version of the dataset, we then ran the algorithm on the IBM Quantum ibmq_casablanca device, achieving an average AUC of 0.703. As error rates improve and quantum computers become more widely available, such techniques could form a new approach to data analysis in high energy physics.
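
The classical baseline quoted above (an RBF-kernel SVM evaluated by AUC) can be reproduced in outline with scikit-learn; the sketch below uses a synthetic stand-in dataset rather than the authors' data, so the hyperparameters and the resulting AUC are illustrative only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for a signal/background dataset
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

scaler = StandardScaler().fit(X_train)
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(scaler.transform(X_train), y_train)

# Use the decision function for a threshold-free AUC measurement
scores = clf.decision_function(scaler.transform(X_test))
print(f"AUC = {roc_auc_score(y_test, scores):.3f}")
```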


2021, Vol 5 (1)
Author(s): Gage DeZoort, Savannah Thais, Javier Duarte, Vesal Razavimaleki, Markus Atkinson, ...

Abstract: Recent work has demonstrated that geometric deep learning methods such as graph neural networks (GNNs) are well suited to address a variety of reconstruction problems in high-energy particle physics. In particular, particle tracking data are naturally represented as a graph by identifying silicon tracker hits as nodes and particle trajectories as edges; given a set of hypothesized edges, edge-classifying GNNs identify those corresponding to real particle trajectories. In this work, we adapt the physics-motivated interaction network (IN) GNN to the problem of particle tracking in pileup conditions similar to those expected at the high-luminosity Large Hadron Collider. Assuming idealized hit filtering at various particle momentum thresholds, we demonstrate the IN’s excellent edge-classification accuracy and tracking efficiency through a suite of measurements at each stage of GNN-based tracking: graph construction, edge classification, and track building. The proposed IN architecture is substantially smaller than previously studied GNN tracking architectures; this is particularly promising, as a reduction in size is critical for enabling GNN-based tracking in constrained computing environments. Furthermore, the IN may be represented as either a set of explicit matrix operations or a message-passing GNN. Efforts are underway to accelerate each representation via heterogeneous computing resources towards both high-level and low-latency triggering applications.
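
A minimal sketch of the edge-classification step described above: a small relational network scores each hypothesized edge from the features of its two endpoint hits. The network size, hit features, and random candidate edges are illustrative assumptions rather than the published IN architecture.

```python
import torch
import torch.nn as nn

class EdgeClassifier(nn.Module):
    """Minimal interaction-network-style relational block: score each
    candidate edge from the features of its two endpoint hits."""

    def __init__(self, node_dim: int, hidden: int = 32):
        super().__init__()
        self.phi_r = nn.Sequential(
            nn.Linear(2 * node_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # x: (num_hits, node_dim), edge_index: (2, num_edges) of hit indices
        src, dst = edge_index
        pair = torch.cat([x[src], x[dst]], dim=1)
        return torch.sigmoid(self.phi_r(pair)).squeeze(-1)  # edge "real" prob

# Hypothetical graph: 1000 hits with (r, phi, z) features, 5000 candidate edges
hits = torch.randn(1000, 3)
edges = torch.randint(0, 1000, (2, 5000))
edge_scores = EdgeClassifier(node_dim=3)(hits, edges)
```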


2021, Vol 5 (1)
Author(s): Andreas J. Peters, Daniel C. van der Ster

Abstract: CephFS is a network filesystem built upon the Reliable Autonomic Distributed Object Store (RADOS). At CERN we have demonstrated its reliability and elasticity while operating several 100-to-1000 TB clusters which provide NFS-like storage to infrastructure applications and services. At the same time, our lab developed EOS to offer high-performance 100 PB-scale storage for the LHC at extremely low cost, while also supporting the complete set of security and functional APIs required by the particle-physics user community. This work evaluates the performance of CephFS on this cost-optimized hardware when it is combined with EOS to supply the missing functionality. To this end, we have set up a proof-of-concept Ceph Octopus cluster on high-density JBOD servers (840 TB each) with 100 GbE networking. The system uses EOS to provide an overlaid namespace and protocol gateways for HTTP(S) and XRootD, and uses CephFS as an erasure-coded object storage backend. The solution also enables operators to aggregate several CephFS instances and adds features such as third-party copy, SciTokens, and high-level user and quota management. Using simple benchmarks we measure the cost/performance tradeoffs of different erasure-coding layouts, as well as the network overheads of these coding schemes. We demonstrate some relevant limitations of the CephFS metadata server and offer improved tunings that are generally applicable. To conclude, we reflect on the advantages and drawbacks of this architecture, such as RADOS-level free-space requirements and double-network penalties, and offer ideas for future improvements.
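
The capacity side of the erasure-coding tradeoff follows directly from the k+m layout: every logical byte occupies (k+m)/k bytes of raw storage and survives up to m device failures. The sketch below works through that arithmetic for a few example layouts; the layouts chosen and the use of the 840 TB building block are for illustration only.

```python
def ec_overhead(k: int, m: int, raw_tb: float) -> dict:
    """Capacity arithmetic for a k+m erasure-coded pool: every logical
    byte is stored as (k+m)/k bytes of raw data."""
    stretch = (k + m) / k
    return {
        "layout": f"{k}+{m}",
        "usable_tb": round(raw_tb / stretch, 1),
        "storage_overhead": f"{(stretch - 1) * 100:.0f}%",
        "tolerated_failures": m,
    }

# Illustrative layouts applied to one 840 TB JBOD-class building block
for k, m in [(4, 2), (8, 3), (16, 4)]:
    print(ec_overhead(k, m, raw_tb=840.0))
```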


2021, Vol 5 (1)
Author(s): Joseph D. Osborn, Anthony D. Frawley, Jin Huang, Sookhyun Lee, Hugo Pereira Da Costa, ...

Abstract: sPHENIX is a high energy nuclear physics experiment under construction at the Relativistic Heavy Ion Collider at Brookhaven National Laboratory (BNL). The primary physics goals of sPHENIX are to study the quark-gluon plasma, as well as the partonic structure of protons and nuclei, by measuring jets, their substructure, and heavy flavor hadrons in p + p, p + Au, and Au + Au collisions. sPHENIX will collect approximately 300 PB of data over three run periods, to be analyzed using the computing resources available at BNL; performing track reconstruction in a timely manner is therefore a challenge due to the high occupancy of heavy ion collision events. The sPHENIX experiment has recently implemented the ACTS (A Common Tracking Software) track reconstruction toolkit, with the goal of reconstructing tracks with high efficiency and within a computational budget of 5 s per minimum-bias event. This paper reports the performance status of ACTS as the default track-fitting tool within sPHENIX, including a discussion of the first implementation of a time projection chamber geometry within ACTS.
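
The Kalman filter at the core of such a track fit alternates a prediction through the track model with a measurement update. The sketch below shows one generic predict/update step with toy matrices; it is not the ACTS implementation, and the straight-line transport model is an assumption made for brevity.

```python
import numpy as np

def kalman_step(x, P, F, Q, H, R, z):
    """One generic Kalman filter iteration: propagate the track state x and
    covariance P with transport model F, then update with measurement z."""
    # Prediction through the track model (plus process noise Q)
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Gain and measurement update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy 1D example: state = (position, slope), measurement = position only
x, P = np.array([0.0, 0.1]), np.eye(2)
F = np.array([[1.0, 1.0], [0.0, 1.0]])       # straight-line transport
H = np.array([[1.0, 0.0]])
x, P = kalman_step(x, P, F, 0.01 * np.eye(2), H, np.array([[0.05]]),
                   np.array([0.12]))
```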


2021, Vol 5 (1)
Author(s): Sudhir Malik, Samuel Meehan, Kilian Lieret, Meirin Oan Evans, Michel H. Villanueva, ...

Abstract: The long-term sustainability of the high-energy physics (HEP) research software ecosystem is essential to the field. With new facilities and upgrades coming online throughout the 2020s, this will only become increasingly important. Meeting the sustainability challenge requires a workforce with a combination of HEP domain knowledge and advanced software skills. The required software skills fall into three broad groups. The first is fundamental and generic software engineering (e.g., Unix, version control, C++, and continuous integration). The second is knowledge of domain-specific HEP packages and practices (e.g., the ROOT data format and analysis framework). The third is more advanced knowledge involving specialized techniques, including parallel programming, machine learning and data science tools, and techniques to maintain software projects at all scales. This paper discusses the collective software training program in HEP led by the HEP Software Foundation (HSF) and the Institute for Research and Innovation in Software in HEP (IRIS-HEP). The program equips participants with an array of software skills that serve as ingredients for the solution of HEP computing challenges. Beyond serving the community by ensuring that members are able to pursue research goals, the program serves individuals by providing intellectual capital and transferable skills important to careers in the realm of software and computing, inside or outside HEP.


2021, Vol 5 (1)
Author(s): Nazar Bartosik, Paolo Andreetto, Laura Buonincontri, Massimo Casarsa, Alessio Gianelle, ...

Abstract: In recent years, a Muon collider has attracted considerable interest in the high-energy physics community, thanks to its ability to achieve clean interaction signatures at multi-TeV collision energies in the most cost-effective way. Estimation of the physics potential of such an experiment must take into account the impact of beam-induced background on the detector performance, which has to be carefully evaluated using full detector simulation. Tracing all the background particles entering the detector region in a single bunch crossing is out of reach for any realistic computing facility due to the unprecedented number of such particles. To make it feasible, a number of optimisations have been applied to the detector simulation workflow. This contribution presents an overview of the main characteristics of the beam-induced background at a Muon collider, the detector technologies considered for the experiment, and how they are taken into account to strongly reduce the number of irrelevant computations performed during the detector simulation. Special attention is dedicated to the optimisation of track reconstruction with the conformal tracking algorithm in this high-occupancy environment, which is the most computationally demanding part of event reconstruction.
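
Conformal tracking relies on the conformal transformation u = x/(x² + y²), v = y/(x² + y²), which maps circular trajectories passing through the interaction point onto straight lines, where pattern recognition is cheaper. The sketch below illustrates the mapping on a toy circular track; the radius and hit positions are arbitrary and unrelated to the Muon collider detector.

```python
import numpy as np

def conformal_map(x: np.ndarray, y: np.ndarray):
    """Conformal transformation used in conformal tracking: circles through
    the origin in (x, y) become straight lines in (u, v)."""
    r2 = x**2 + y**2
    return x / r2, y / r2

# Hit positions on a circle of radius R passing through the origin,
# centred at (R, 0) -- a toy track emerging from the interaction point
R = 50.0
phi = np.linspace(0.1, 2.5, 20)
x = R + R * np.cos(phi)
y = R * np.sin(phi)

u, v = conformal_map(x, y)
# In (u, v) the hits satisfy u = 1/(2R), i.e. they lie on a straight line
print(np.allclose(u, 1.0 / (2 * R)))  # True
```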


2021, Vol 5 (1)
Author(s): Xiaocong Ai, Georgiana Mania, Heather M. Gray, Michael Kuhn, Nicholas Styles

Abstract: Computing centres, including those used to process High-Energy Physics (HEP) data and simulations, are increasingly providing significant fractions of their computing resources through hardware architectures other than x86 CPUs, with GPUs being a common alternative. GPUs can provide excellent computational performance at a good price point for tasks that can be suitably parallelized. Charged particle (track) reconstruction is a computationally expensive component of HEP data reconstruction and thus needs to use available resources efficiently. In this paper, an implementation of Kalman-filter-based track fitting using CUDA and running on GPUs is presented. It utilizes the ACTS (A Common Tracking Software) toolkit, an open-source, experiment-independent toolkit for track reconstruction. The implementation details and parallelization approach are described, along with the specific challenges of such an implementation. Detailed performance benchmarking results are discussed, which show encouraging performance gains over a CPU-based implementation for representative configurations. Finally, a perspective on the challenges and future directions for these studies is outlined. These include more complex and realistic scenarios to be studied, and anticipated developments to software frameworks and standards which may open up possibilities for greater flexibility and improved performance.
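
One way to picture the parallelism a GPU exploits here is to batch the same Kalman-gain computation over many tracks at once. The sketch below does this with vectorized NumPy as a stand-in for the CUDA kernels; the state and measurement dimensions and the one-track-per-thread mapping are illustrative assumptions rather than details of the presented implementation.

```python
import numpy as np

def batched_kalman_gain(P, H, R):
    """Compute the Kalman gain K = P H^T (H P H^T + R)^-1 for a whole batch
    of tracks at once -- the data-parallel pattern a GPU kernel would
    exploit by mapping, e.g., one track per thread."""
    PHt = np.einsum("nij,kj->nik", P, H)          # (N, dim_x, dim_z)
    S = np.einsum("ij,njk->nik", H, PHt) + R      # (N, dim_z, dim_z)
    return PHt @ np.linalg.inv(S)                 # (N, dim_x, dim_z)

# 10,000 toy tracks with a 6-parameter state and 2D measurements
N, dim_x, dim_z = 10_000, 6, 2
P = np.tile(np.eye(dim_x), (N, 1, 1))
H = np.zeros((dim_z, dim_x))
H[0, 0] = H[1, 1] = 1.0
R = 0.01 * np.eye(dim_z)
K = batched_kalman_gain(P, H, R)
print(K.shape)  # (10000, 6, 2)
```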

