Advancements in data management services for distributed e-infrastructures: the eXtreme-DataCloud project

2019, Vol. 214, pp. 04044
Author(s): Daniele Cesini, Giacinto Donvito, Alessandro Costantini, Fernando Aguilar Gomez, Doina Cristina Duma, ...

The development of data management services capable of coping with very large data resources is a key challenge in enabling future e-infrastructures to address the needs of the next generation of extreme-scale scientific experiments. To face this challenge, the H2020 eXtreme DataCloud (XDC) project was launched in November 2017. Lasting 27 months and combining the expertise of eight large European research organisations, the project aims at developing scalable technologies for federating storage resources and managing data in highly distributed computing environments. The targeted platforms are the current and next-generation e-infrastructures deployed in Europe, such as the European Open Science Cloud (EOSC), the European Grid Infrastructure (EGI), and the Worldwide LHC Computing Grid (WLCG). The project is use-case driven with a multidisciplinary approach, addressing requirements from research communities belonging to a wide range of scientific domains: High Energy Physics, Astronomy, Photon and Life Science, and Medical research. XDC aims at implementing scalable data management services, combining already established data management and orchestration tools, to address the following high-level topics: policy-driven data management based on Quality of Service, data life-cycle management, smart data placement with caching mechanisms to reduce access latency, handling of metadata with no predefined schema, execution of pre-processing applications during ingestion, management and protection of sensitive data in distributed e-infrastructures, and intelligent data placement based on access patterns. This contribution introduces the project, presents the foreseen overall architecture, and describes the developments being carried out to implement the requested functionalities.
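As an illustration of the access-pattern-driven placement idea mentioned above, the following is a minimal Python sketch; the class name, threshold, and tier labels are hypothetical and are not XDC interfaces. It shows a simple rule that promotes frequently accessed datasets to a nearby cache while leaving cold data on an archival tier.

```python
# Hypothetical sketch (not XDC code): promote "hot" datasets to an edge cache
# based on recent access counts, illustrating access-pattern-driven placement.
from collections import Counter
from dataclasses import dataclass, field


@dataclass
class PlacementPolicy:
    hot_threshold: int = 10                      # accesses that mark a dataset as "hot"
    access_log: Counter = field(default_factory=Counter)

    def record_access(self, dataset_id: str) -> None:
        self.access_log[dataset_id] += 1

    def placement_for(self, dataset_id: str) -> str:
        # Hot data is replicated to a cache close to the compute site;
        # cold data stays on the cheaper, higher-latency archival tier.
        if self.access_log[dataset_id] >= self.hot_threshold:
            return "edge-cache"
        return "tape-archive"


policy = PlacementPolicy(hot_threshold=3)
for _ in range(4):
    policy.record_access("run2019/dataset-A")
print(policy.placement_for("run2019/dataset-A"))  # -> edge-cache
print(policy.placement_for("run2019/dataset-B"))  # -> tape-archive
```

In a real deployment such decisions would be taken by the project's orchestration and storage federation tools against actual endpoints, not by string labels; the sketch only illustrates the policy shape.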

2020, Vol. 245, pp. 04010
Author(s): Daniele Cesini, Giacinto Donvito, Alessandro Costantini, Fernando Aguilar Gomez, Doina Cristina Duma, ...

The eXtreme DataCloud (XDC) project aims at developing data management services capable of coping with very large data resources, allowing future e-infrastructures to address the needs of the next generation of extreme-scale scientific experiments. Started in November 2017, XDC combines the expertise of eight large European research organisations. The project aims at developing scalable technologies for federating storage resources and managing data in highly distributed computing environments. The project is use-case driven with a multidisciplinary approach, addressing requirements from research communities belonging to a wide range of scientific domains: Life Science, Biodiversity, Clinical Research, Astrophysics, High Energy Physics and Photon Science, which together are representative of data management needs in Europe and worldwide. The use cases proposed by the different user communities are addressed by integrating different data management services ready to manage an increasing volume of data. Several scalability and performance tests have been defined to show that the XDC services can be harmonized in different contexts and in complex frameworks such as the European Open Science Cloud. The use cases have been used to measure the success of the project and to prove that the developments fulfil the defined needs and satisfy the final users. The present contribution describes the results obtained from the adoption of the XDC solutions and provides a complete overview of the project achievements.


2014, Vol. 25 (02), pp. 1430001
Author(s): Alberto Pace

In recent years, intense usage of computing has been the main strategy of investigations in several scientific research projects. The progress in computing technology has opened unprecedented opportunities for the systematic collection of experimental data, and for analyses that were considered impossible only a few years ago. This paper focuses on the strategies in use: it reviews the various components that are necessary for an effective solution ensuring the storage, long-term preservation, and worldwide distribution of the large quantities of data required by a large scientific research project. The paper also mentions several examples of data management solutions used in High Energy Physics for the CERN Large Hadron Collider (LHC) experiments in Geneva, Switzerland, which generate more than 30,000 terabytes of data every year that need to be preserved, analyzed, and made available to a community of several tens of thousands of scientists worldwide.


2021, Vol. 251, pp. 03055
Author(s): John Blue, Braden Kronheim, Michelle Kuchera, Raghuram Ramanujan

Detector simulation in high energy physics experiments is a key yet computationally expensive step in the event simulation process. There has been much recent interest in using deep generative models as a faster alternative to the full Monte Carlo simulation process in situations in which the utmost accuracy is not necessary. In this work we investigate the use of conditional Wasserstein Generative Adversarial Networks to simulate both hadronization and the detector response to jets. Our model takes the 4-momenta of jets formed from partons post-showering and pre-hadronization as inputs and predicts the 4-momenta of the corresponding reconstructed jet. Our model is trained on fully simulated tt̄ events using the publicly available GEANT-based simulation of the CMS Collaboration. We demonstrate that the model produces accurate conditional reconstructed jet transverse momentum (pT) distributions over a wide range of pT for the input parton jet. Our model takes only a fraction of the time necessary for conventional detector simulation methods, running on a CPU in less than a millisecond per event.
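To make the setup concrete, here is a minimal, self-contained sketch of a conditional Wasserstein GAN with gradient penalty. PyTorch, the layer sizes, the noise dimension, and the penalty weight are assumptions for illustration and are not taken from the paper; the generator maps noise plus the parton-jet 4-momentum (the condition) to a reconstructed-jet 4-momentum, and the critic scores (sample, condition) pairs.

```python
# Minimal conditional WGAN-GP sketch (illustrative hyperparameters, not the paper's).
import torch
import torch.nn as nn


class Generator(nn.Module):
    def __init__(self, noise_dim=8, cond_dim=4, out_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + cond_dim, 64), nn.LeakyReLU(0.2),
            nn.Linear(64, 64), nn.LeakyReLU(0.2),
            nn.Linear(64, out_dim),
        )

    def forward(self, z, cond):
        # Condition the generator by concatenating noise with the parton-jet 4-momentum.
        return self.net(torch.cat([z, cond], dim=1))


class Critic(nn.Module):
    def __init__(self, sample_dim=4, cond_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(sample_dim + cond_dim, 64), nn.LeakyReLU(0.2),
            nn.Linear(64, 64), nn.LeakyReLU(0.2),
            nn.Linear(64, 1),
        )

    def forward(self, x, cond):
        return self.net(torch.cat([x, cond], dim=1))


def gradient_penalty(critic, real, fake, cond):
    # WGAN-GP term: penalize deviations of the critic's gradient norm from 1.
    eps = torch.rand(real.size(0), 1)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    score = critic(interp, cond).sum()
    grad = torch.autograd.grad(score, interp, create_graph=True)[0]
    return ((grad.norm(2, dim=1) - 1) ** 2).mean()


# One illustrative training step on random stand-in data (a real workflow would
# feed matched parton-jet / reconstructed-jet pairs from the simulated samples).
G, D = Generator(), Critic()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.5, 0.9))
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4, betas=(0.5, 0.9))

cond = torch.randn(32, 4)                 # parton-jet 4-momenta (condition)
real = cond + 0.1 * torch.randn(32, 4)    # stand-in "reconstructed" jets
z = torch.randn(32, 8)

fake = G(z, cond)
d_loss = (D(fake.detach(), cond).mean() - D(real, cond).mean()
          + 10.0 * gradient_penalty(D, real, fake.detach(), cond))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

g_loss = -D(G(z, cond), cond).mean()
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The gradient penalty enforces the 1-Lipschitz constraint required by the Wasserstein objective, which is the main difference between this training loop and that of a standard GAN.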


2005, Vol. 20 (14), pp. 3021-3032
Author(s): Ian M. Fisk

In this review, the computing challenges facing the current and next generation of high energy physics experiments will be discussed. High energy physics computing represents an interesting infrastructure challenge as the use of large-scale commodity computing clusters has increased. The causes and ramifications of these infrastructure challenges will be outlined. Increasing requirements, limited physical infrastructure at computing facilities, and limited budgets have driven many experiments to deploy distributed computing solutions to meet their growing computing needs for analysis, reconstruction, and simulation. The current generation of experiments has developed and integrated a number of solutions to facilitate distributed computing. The current work of the running experiments gives an insight into the challenges that will be faced by the next generation of experiments and the infrastructure that will be needed.


2002, Vol. 20 (4), pp. 551-554
Author(s): D. Mueller, L. Grisham, I. Kaganovich, R. L. Watson, V. Horvat, ...

One approach being explored as a route to practical fusion energy uses heavy ion beams focused on an indirect-drive target. Such beams will lose electrons while passing through background gas in the target chamber, and it is therefore necessary to assess the rate at which the charge state of the incident beam evolves on the way to the target. Accelerators designed primarily for nuclear physics or high energy physics experiments utilize ion sources that generate highly stripped ions in order to achieve high energies economically. As a result, accelerators capable of producing heavy ion beams of 10 to 40 MeV/amu with charge state 1 do not currently exist. Hence, the stripping cross sections used to model the performance of heavy ion fusion driver beams have, up to now, been based on theoretical calculations. We have investigated experimentally the stripping of 3.4 MeV/amu Kr7+ and Xe11+ in N2; 10.2 MeV/amu Ar6+ in He, N2, Ar, and Xe; 19 MeV/amu Ar8+ in He, N2, Ar, and Xe; 30 MeV He1+ in He, N2, Ar, and Xe; and 38 MeV/amu N6+ in He, N2, Ar, and Xe. The results of these measurements are compared with the theoretical calculations to assess their applicability over a wide range of parameters.
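For context, the following is a short illustration, in notation of my own choosing rather than the paper's, of how measured stripping cross sections enter a charge-state evolution model for a beam traversing the chamber gas.

```latex
% Hypothetical notation for illustration: f_q(z) is the fraction of beam ions
% in charge state q after path length z through gas of number density n, and
% \sigma_{q \to q'} is the cross section for a charge-changing collision.
\[
  \frac{\mathrm{d} f_q}{\mathrm{d} z}
    = n \left( \sigma_{q-1 \to q}\, f_{q-1} - \sigma_{q \to q+1}\, f_q \right)
\]
% Keeping only the loss term for the injected charge state q_0 gives the
% familiar exponential attenuation of the unstripped fraction:
\[
  f_{q_0}(z) \simeq \exp\!\left( - n\, \sigma_{q_0 \to q_0+1}\, z \right)
\]
```

Measured cross sections such as those reported here replace the theoretical values previously used in these rate equations when modelling driver-beam performance.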

