Rucio beyond ATLAS: experiences from Belle II, CMS, DUNE, EISCAT3D, LIGO/VIRGO, SKA, XENON

2020 · Vol 245 · pp. 11006
Author(s): Mario Lassnig, Martin Barisits, Paul J. Laycock, Cédric Serfon, Eric W. Vaandering, et al.

For many scientific projects, data management is an increasingly complicated challenge. The number of data-intensive instruments generating unprecedented volumes of data is growing, and their accompanying workflows are becoming more complex. Their storage and computing resources are heterogeneous and are distributed at numerous geographical locations belonging to different administrative domains and organisations. These locations do not necessarily coincide with the places where data is produced, nor with the places where data is stored, analysed by researchers, or archived for safe long-term storage. To fulfil these needs, the data management system Rucio was developed to allow the high-energy physics experiment ATLAS at the LHC to manage its large volumes of data in an efficient and scalable way. But ATLAS is not alone: several diverse scientific projects have started evaluating, adopting, and adapting the Rucio system for their own needs. As the Rucio community has grown, many improvements have been introduced, customisations have been added, and many bugs have been fixed. Additionally, new dataflows have been investigated and operational experiences have been documented. In this article we collect the common successes, pitfalls, and oddities that arose in the evaluation efforts of multiple diverse experiments, and compare them with the ATLAS experience. This includes the high-energy physics experiments Belle II and CMS, the neutrino experiment DUNE, the scattering radar experiment EISCAT3D, the gravitational wave observatories LIGO and VIRGO, the SKA radio telescope, and the dark matter search experiment XENON.
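As a concrete illustration of the declarative model this abstract alludes to, the following is a minimal sketch of how an experiment might drive Rucio from its Python client: files are grouped into a dataset, and a replication rule declares how many copies should exist and where. It assumes a deployed Rucio server and a configured client; the scope, file, dataset, and RSE-expression values are illustrative placeholders, not values from the paper.

```python
# Minimal sketch of declarative data management with the Rucio Python client.
# Assumes a configured client environment (rucio.cfg with credentials);
# all names below are hypothetical.
from rucio.client import Client

client = Client()

# Group files into a dataset so they can be managed as one unit.
client.add_dataset(scope="user.alice", name="raw_run_00042")
client.attach_dids(
    scope="user.alice",
    name="raw_run_00042",
    dids=[{"scope": "user.alice", "name": "raw_00042_000.root"}],
)

# Declare *what* should exist, not *how* to move it: Rucio's rule engine
# creates and then maintains two replicas matching the RSE expression.
client.add_replication_rule(
    dids=[{"scope": "user.alice", "name": "raw_run_00042"}],
    copies=2,
    rse_expression="tier=1&type=TAPE",
    lifetime=None,  # keep the replicas until the rule is removed
)
```

The point of the rule-based interface is that transfers, retries, and replica recovery are the server's responsibility; the experiment only states its intent.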

2019 · Vol 214 · pp. 04020
Author(s): Martin Barisits, Fernando Barreiro, Thomas Beermann, Karan Bhatia, Kaushik De, et al.

Transparent use of commercial cloud resources for scientific experiments is a hard problem. In this article, we describe the first steps of the Data Ocean R&D collaboration between the high-energy physics experiment ATLAS and Google Cloud Platform, which aims to allow seamless use of Google Compute Engine and Google Cloud Storage for physics analysis. We start by describing the three preliminary use cases that were identified at the beginning of the project. The following sections then detail the work done in the data management system Rucio and the workflow management systems PanDA and Harvester to interface Google Cloud Platform with the ATLAS distributed computing environment, and show the results of the integration tests. Afterwards, we describe the setup and results of a full ATLAS user analysis that was executed natively on Google Cloud Platform, and give estimates of projected costs. We close with a summary and an outlook on future work.
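One plausible building block in such an integration is letting distributed jobs read cloud-resident files without shipping Google credentials to every worker node. The sketch below, using the google-cloud-storage Python library, mints a time-limited signed URL for an object; the bucket and object names are invented for illustration and are not taken from the project.

```python
# Hedged sketch: a time-limited signed URL for a Google Cloud Storage object,
# so a worker node can fetch the file over plain HTTPS without holding
# Google credentials. Bucket and object names are hypothetical.
from datetime import timedelta

from google.cloud import storage  # pip install google-cloud-storage

client = storage.Client()  # uses GOOGLE_APPLICATION_CREDENTIALS
blob = client.bucket("atlas-data-ocean-demo").blob("data18/AOD.0001.root")

# V4 signed URL, valid for one hour, GET only.
url = blob.generate_signed_url(
    version="v4",
    expiration=timedelta(hours=1),
    method="GET",
)
print(url)  # any HTTP client in the job wrapper can now download the file
```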


1990
Author(s): A. S. Johnson, M. I. Briedenbach, H. Hissen, P. F. Kunz, D. J. Sherden, et al.

2019
Author(s): Juan Carlos Cabanillas Noris, Ildefonso León Monzón, Mario Iván Martínez Hernández, Solangel Rojas Torres

Author(s): J. Apostolakis, L. M. Bertolotto, C. E. Bruschini, P. Calafiura, F. Gagliardi, et al.

2019 · Vol 214 · pp. 05026
Author(s): Jiaheng Zou, Tao Lin, Weidong Li, Xingtao Huang, Ziyan Deng, et al.

SNiPER is a general-purpose offline software framework for high-energy physics experiments. It provides features that are attractive to neutrino experiments, such as the event buffer: more than one event is available in the buffer according to a customizable time window, which makes event correlation analysis straightforward for users. We also implemented MT-SNiPER to support multithreaded computing based on Intel TBB. In MT-SNiPER, the event loop is split into pieces, and each piece is dispatched to a task. The global buffer, an extension and enhancement of the event buffer, was implemented for MT-SNiPER. The global buffer is visible to all threads and keeps all events being processed in memory. When a task becomes available, a subset of the events is dispatched to it. Because of the time window, the subsets dispatched to different tasks may overlap; however, it is ensured that each event is processed only once. On the task side, each subset of events is managed locally by a normal event buffer, so the global buffer remains transparent to most user algorithms. With the global buffer, multithreaded computing in MT-SNiPER becomes more practical.
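To make the dispatch scheme concrete, here is a brief Python sketch of the idea (SNiPER itself is C++ with Intel TBB): events sit in a shared global buffer, each task receives a contiguous piece plus the neighbouring events inside the time window, so local buffers may overlap between tasks, yet each event is processed exactly once by the task that owns it. All names and the threading machinery here are illustrative, not SNiPER's actual interfaces.

```python
# Hedged sketch of the global-buffer dispatch scheme described above.
# This illustrates the idea only; it is not SNiPER code.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class Event:
    index: int
    time: float  # event timestamp, arbitrary units

def dispatch(events, piece_size, window, process):
    """Split the event loop into pieces and run them as parallel tasks."""
    def run_piece(start):
        owned = events[start : start + piece_size]
        # Local buffer: the owned events plus earlier/later events whose
        # timestamps fall inside the time window (the overlap between tasks).
        t_lo, t_hi = owned[0].time - window, owned[-1].time + window
        local = [e for e in events if t_lo <= e.time <= t_hi]
        for e in owned:  # each event is processed exactly once, here
            process(e, local)

    with ThreadPoolExecutor() as pool:
        # Consume the iterator so exceptions in tasks are surfaced.
        list(pool.map(run_piece, range(0, len(events), piece_size)))

# Example: correlate each event with its neighbours within 2.0 time units.
events = [Event(i, float(i)) for i in range(10)]
dispatch(events, piece_size=4, window=2.0,
         process=lambda e, local: print(e.index, [n.index for n in local]))
```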

