Accelerating Data Delivery Service with Collaborated Parallel Traffic Scheduling for Distributed Datacenters

Author(s): Xiaoyuan Cao ◽ Kai Luan ◽ Xiang Luo ◽ Qiang Bian ◽ Zhi Li ◽ ...

2020 ◽ Vol 245 ◽ pp. 04043
Author(s): B. Galewsky ◽ R. Gardner ◽ L. Gray ◽ M. Neubauer ◽ J. Pivarski ◽ ...

We will describe a component of the Intelligent Data Delivery Service being developed in collaboration with IRIS-HEP and the LHC experiments. ServiceX is an experiment-agnostic service that enables on-demand data delivery tailored for near-interactive vectorized analysis. This work is motivated by the data-engineering challenges posed by HL-LHC data volumes and the increasing popularity of Python and Spark-based analysis workflows. ServiceX gives analyzers the ability to query events by dataset metadata. It uses containerized transformations to extract just the data required for the analysis. This operation is colocated with the data to avoid transferring unnecessary branches over the WAN. Simple filtering operations are supported to further reduce the amount of data transferred. Transformed events are cached in a columnar datastore to accelerate delivery of subsequent similar requests. ServiceX will learn commonly related columns and automatically include them in the transformation to increase the potential for cache hits by other users. Selected events are streamed to the analysis system using an efficient wire protocol that can be readily consumed by a variety of computational frameworks. This reduces time-to-insight for physics analysis by delegating to ServiceX the complexity of event selection, slimming, reformatting, and streaming.
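The request flow the abstract describes, selecting a dataset, extracting only the needed branches, filtering events, and caching the result for similar requests, can be sketched in a few lines. This is a hypothetical illustration, not the actual ServiceX API: the dataset name, branch names, and the `transform` function are all made up for this sketch.

```python
# Hypothetical sketch of the ServiceX-style flow: slim to the requested
# branches, filter events, and cache columns so similar requests are
# served from the columnar store instead of re-reading the dataset.
events = [
    {"el_pt": 42.0, "el_eta": 0.5, "jet_n": 3},
    {"el_pt": 11.0, "el_eta": 1.9, "jet_n": 1},
    {"el_pt": 57.0, "el_eta": -0.3, "jet_n": 4},
]

cache = {}  # columnar-datastore stand-in: request key -> columns

def transform(dataset, columns, cut):
    """cut = (branch, threshold): keep events with branch value > threshold."""
    key = (dataset, tuple(columns), cut)
    if key not in cache:  # a cache hit skips the transformation entirely
        branch, threshold = cut
        selected = [e for e in events if e[branch] > threshold]
        # columnar layout: one list per branch, not one dict per event
        cache[key] = {c: [e[c] for e in selected] for c in columns}
    return cache[key]

cols = transform("mc16:Zee", ["el_pt"], ("el_pt", 25.0))
print(cols)  # {'el_pt': [42.0, 57.0]}
```

Only the `el_pt` branch crosses the wire here; the unrequested `el_eta` and `jet_n` branches never leave the data site, which is the point of colocating the transformation with the data.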


2020 ◽ Vol 245 ◽ pp. 04015
Author(s): Wen Guan ◽ Tadashi Maeno ◽ Gancho Dimitrov ◽ Brian Paul Bockelman ◽ Torre Wenaus ◽ ...

The ATLAS Event Streaming Service (ESS) at the LHC preprocesses and delivers data for the Event Service (ES), which implements fine-grained ATLAS event processing. The ESS asynchronously delivers only the input events required by ES processing, with the aim of decreasing data traffic over the WAN and improving overall data-processing throughput. A prototype ESS was developed to deliver streaming events to fine-grained ES jobs. Building on it, an intelligent Data Delivery Service (iDDS) is under development to decouple the "cold format" from the processing format of the data, which also opens the opportunity to include the production systems of other HEP experiments. Here we first present the ESS model and its motivations for the iDDS system, and then present the iDDS schema, architecture, and applications.
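The fine-grained delivery idea, shipping only the event ranges a job actually needs rather than whole input files, can be sketched as a generator. The names here (`read_event`, `stream_events`, the file identifier) are hypothetical and are not the ATLAS ESS interface.

```python
# Hypothetical sketch: stream only the requested event ranges of a
# cold-format file instead of transferring the whole file over the WAN.
def read_event(file_id, index):
    # Stand-in for decoding one event from the cold-format store.
    return {"file": file_id, "index": index}

def stream_events(file_id, ranges):
    """Yield events lazily for each (first, last) inclusive range."""
    for first, last in ranges:
        for i in range(first, last + 1):
            yield read_event(file_id, i)

# An ES job that only needs events 10-12 and 40-41 of the file:
delivered = list(stream_events("AOD.001", [(10, 12), (40, 41)]))
print(len(delivered))  # 5 events transferred, not the whole file
```

Because the generator is lazy, delivery can proceed asynchronously while earlier events are already being processed, which is the throughput benefit the abstract describes.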


2013 ◽ Vol 31 (12) ◽ pp. 2632-2645
Author(s): Jiaxin Cao ◽ Chuanxiong Guo ◽ Guohan Lu ◽ Yongqiang Xiong ◽ Yixin Zheng ◽ ...

2021 ◽ Vol 251 ◽ pp. 02007
Author(s): Wen Guan ◽ Tadashi Maeno ◽ Brian Paul Bockelman ◽ Torre Wenaus ◽ Fahui Lin ◽ ...

The intelligent Data Delivery Service (iDDS) has been developed to cope with the large increase in computing and storage resource usage expected in the coming LHC data taking. iDDS intelligently orchestrates workflow and data management systems, decoupling data pre-processing, delivery, and main processing in various workflows. It is an experiment-agnostic service built around a workflow-oriented structure, designed to work with existing and emerging use cases in ATLAS and other experiments. Here we present the motivation for iDDS, its design schema and architecture, its use cases and current status, and plans for the future.
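The decoupling the abstract describes can be sketched as a tiny dependency-ordered workflow: pre-processing, delivery, and main processing are separate work units that an orchestrator runs in dependency order, so each can be swapped out per experiment. The step names and the `run` function are illustrative only, not the iDDS design.

```python
# Hypothetical orchestration sketch: each step is a named work unit with
# dependencies; the orchestrator resolves the order, keeping
# pre-processing, delivery, and main processing decoupled.
workflow = {
    "preprocess": [],              # transform cold format -> processing format
    "deliver":    ["preprocess"],  # stream prepared events to workers
    "process":    ["deliver"],     # main payload consumes delivered events
}

def run(workflow):
    """Return steps in an order that satisfies every dependency."""
    done, order = set(), []
    while len(done) < len(workflow):
        for step, deps in workflow.items():
            if step not in done and all(d in done for d in deps):
                order.append(step)  # a real service would dispatch work here
                done.add(step)
    return order

print(run(workflow))  # ['preprocess', 'deliver', 'process']
```

Because the orchestrator only sees named units and their dependencies, a different experiment's production system can plug in its own pre-processing or delivery step without touching the rest of the workflow, which is the experiment-agnostic property the abstract emphasizes.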

