Mass Storage System Construction Method in a High Performance Computing Center

Author(s): Yunwen Ge, Shaochun Wu

2020, Vol. 245, pp. 01018
Author(s): Jörn Adamczewski-Musch, Thomas Stibor

Since 2018, several FAIR Phase 0 beamtimes have been operated at GSI, Darmstadt. These serve to test challenging new technologies for the upcoming FAIR facility while various physics experiments are performed with the existing GSI accelerators. One of these challenges concerns the performance, reliability, and scalability of the experiment data storage. Raw data collected by the event-building software of large-scale detector data acquisition systems has to be written safely to a mass storage system such as a magnetic tape library. Besides this long-term archive, it is often required to process the data as soon as possible on a high-performance compute farm. The C library LTSM ("Lightweight Tivoli Storage Management") has been developed at the GSI IT department on top of the IBM TSM software. It provides a file API that allows raw listmode data files to be written via TCP/IP sockets directly to an IBM TSM storage server. Moreover, the LTSM library offers Lustre HSM ("Hierarchical Storage Management") capabilities for seamlessly archiving and retrieving data stored on the Lustre file system and the TSM server. In spring 2019, LTSM was employed at the FAIR Phase 0 beamtimes at GSI. For the HADES experiment, LTSM was integrated into the DABC ("Data Acquisition Backbone Core") event-building software. During the four weeks of Ag+Ag at 1.58 AGeV beam, the HADES event builders transferred about 400 TB of data via 8 parallel 10 GbE sockets, both to the TSM archive and to the "GSI Green Cube" HPC farm. For other FAIR Phase 0 experiments using the vintage MBS ("Multi Branch System") event builders, an LTSM gateway application has been developed to connect the legacy RFIO ("Remote File I/O") protocol of these DAQ systems with the new storage interface.
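The core pattern the abstract describes is an event builder streaming raw listmode buffers over a TCP/IP socket to a remote storage server. The sketch below illustrates that general pattern in C; it is not the actual LTSM API, and the host address, port, buffer size, and framing are all assumptions made for illustration.

```c
/* Minimal sketch of the pattern described above: an event builder streaming
 * raw listmode buffers over a TCP/IP socket to a storage server.
 * NOT the actual LTSM API; endpoint and buffer layout are assumptions. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Send exactly len bytes, retrying on short writes. */
static int send_all(int fd, const void *buf, size_t len)
{
    const uint8_t *p = buf;
    while (len > 0) {
        ssize_t n = send(fd, p, len, 0);
        if (n <= 0)
            return -1;                  /* connection lost or error */
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(1500);                      /* assumed port      */
    inet_pton(AF_INET, "192.0.2.10", &addr.sin_addr); /* placeholder server */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    /* Stream fixed-size listmode buffers as the DAQ fills them. */
    uint8_t buffer[64 * 1024];                  /* one event-builder buffer */
    for (int i = 0; i < 16; i++) {
        memset(buffer, i, sizeof(buffer));      /* stand-in for real events */
        if (send_all(fd, buffer, sizeof(buffer)) != 0) {
            fprintf(stderr, "transfer failed at buffer %d\n", i);
            break;
        }
    }

    close(fd);
    return 0;
}
```

In the real setup, the HADES event builders ran eight such parallel 10 GbE connections, with the server end writing both to the TSM tape archive and to the HPC farm.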


2017, Vol. 1 (4), pp. 139-144
Author(s): Periola AA, Ohize H

Mechanisms that reduce capital and operational costs are important for increasing participation in astronomy: capital-constrained organizations should be able to engage in astronomy in a cost-effective manner. Approaches such as telescope conversion and the use of small satellites reduce the cost of astronomy observations. However, astronomy data acquired by converted and small-satellite telescopes requires storage and processing by high performance computing infrastructure, and acquiring such infrastructure is expensive for capital-constrained astronomy organizations. The reduction in cost obtained by using converted and small-satellite telescopes is therefore not matched by a corresponding reduction in high performance computing costs. This paper addresses this challenge and proposes a software defined space data storage system. The system treats space telescopes as primary satellites, and telecommunication and earth observation satellites as secondary satellites. The primary and secondary satellites are grouped into logical clusters. Secondary satellites act as temporary data centers that store the astronomy data that cannot be held on the primary satellites. The paper presents algorithms that identify suitable secondary satellites and that govern the entry of secondary satellites into, and their exit from, dynamic clusters.
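The abstract does not spell out the selection algorithms, but the idea of choosing a secondary satellite as a temporary data center can be sketched as follows. This is a purely illustrative example in C, under the assumption that each candidate advertises its free on-board storage and its remaining contact window with the primary satellite; the structure fields and selection rule are hypothetical, not taken from the paper.

```c
/* Illustrative sketch only: picks a secondary satellite that can absorb the
 * primary's data backlog, preferring the longest remaining contact window.
 * All fields and the selection rule are assumptions for illustration. */
#include <stddef.h>
#include <stdio.h>

struct secondary {
    const char *id;         /* satellite identifier (hypothetical)          */
    double free_storage_gb; /* storage currently available on board         */
    double contact_s;       /* remaining contact window with the primary, s */
};

/* Return the index of the best candidate able to hold backlog_gb,
 * or -1 if no candidate qualifies. */
static int select_secondary(const struct secondary *cands, size_t n,
                            double backlog_gb)
{
    int best = -1;
    for (size_t i = 0; i < n; i++) {
        if (cands[i].free_storage_gb < backlog_gb)
            continue;       /* cannot absorb the primary's backlog */
        if (best < 0 || cands[i].contact_s > cands[best].contact_s)
            best = (int)i;
    }
    return best;
}

int main(void)
{
    struct secondary cands[] = {
        { "telecom-1",  120.0,  600.0 },
        { "earthobs-3",  40.0, 1500.0 },
        { "telecom-2",  200.0,  900.0 },
    };
    int k = select_secondary(cands, 3, 100.0);
    if (k >= 0)
        printf("selected %s as temporary data center\n", cands[k].id);
    else
        printf("no suitable secondary satellite in range\n");
    return 0;
}
```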

