Compiler Optimizations for Non-contiguous Remote Data Movement

Author(s):  
Timo Schneider ◽  
Robert Gerstenberger ◽  
Torsten Hoefler
Electronics ◽  
2021 ◽  
Vol 10 (20) ◽  
pp. 2486
Author(s):  
Se-young Yu

Distributing Big Data for science is pushing the capabilities of networks and computing systems. However, the fundamental concept of copying data from one machine to another has not been challenged in collaborative science. As recent storage systems use modern fabrics to provide faster remote data access with lower overhead, traditional data movement using Data Transfer Nodes (DTNs) must cope with the paradigm shift from a store-and-forward model to streaming data with direct storage access over the network. This study evaluates NVMe-over-TCP (NVMe-TCP) in a long-distance network using different file systems and configurations to characterize remote NVMe file system access performance in MAN and WAN data-movement scenarios. We found that NVMe-TCP is better suited to remote data reads than remote data writes over the network, and that using RAID0 can significantly improve performance in a long-distance network. Additionally, fine-tuning the file system can improve remote write performance in DTNs over a long-distance network.
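The read-versus-write asymmetry the study reports can be probed with a simple sequential-throughput sketch. The script below is a generic illustration, not the authors' benchmark: the path, transfer size, and block size are placeholder parameters, and it assumes the NVMe-TCP namespace is already mounted as a local filesystem.

```python
import os
import time

def sequential_throughput(path, size_mb=64, block_kb=1024):
    """Time a sequential write then a sequential read of a scratch file.

    `path` would point into a filesystem mounted over NVMe-TCP; any
    local path works for a dry run. Returns (write_MBps, read_MBps).
    """
    block = b"\x00" * (block_kb * 1024)
    nblocks = (size_mb * 1024) // block_kb

    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(nblocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force the data out to the device
    write_s = time.perf_counter() - start

    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(block_kb * 1024):
            pass  # note: this read may be served from the page cache;
                  # drop caches or use O_DIRECT for a device-level number
    read_s = time.perf_counter() - start

    os.remove(path)
    return size_mb / write_s, size_mb / read_s
```

Comparing the two numbers on a local mount against an NVMe-TCP mount across a MAN or WAN link is one crude way to reproduce the read/write asymmetry described above.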


2019 ◽  
Vol 214 ◽  
pp. 04028
Author(s):  
Volodimir Begy ◽  
Martin Barisits ◽  
Mario Lassnig ◽  
Erich Schikuta

This work describes the technique of remote data access from computational jobs on the ATLAS data grid. In comparison to traditional data-movement and stage-in approaches, it is well suited for data transfers that are asynchronous with respect to the job execution; hence, it can be used to optimize data access patterns according to various policies. In this study, remote data access is realized with the HTTP and WebDAV protocols and is investigated in the context of intra- and inter-computing-site data transfers. In both cases, the typical scenarios for applying remote data access are identified. The paper also presents an analysis of the parameters influencing data goodput between heterogeneous storage-element/worker-node pairs on the grid.
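As a rough illustration of the HTTP side of such a setup (WebDAV layers on top of HTTP), a job can fetch only the byte range it needs from a remote storage element instead of staging in the whole file. The function below is a minimal standard-library sketch, not the ATLAS grid tooling itself:

```python
from urllib.request import Request, urlopen

def read_remote_range(url, start, end):
    """Fetch bytes start..end (inclusive) of a remote file with an
    HTTP Range request. A server that honours the range replies
    206 Partial Content with just that slice, so the worker node
    moves only the data it actually needs over the network."""
    req = Request(url, headers={"Range": f"bytes={start}-{end}"})
    with urlopen(req) as resp:
        return resp.read()
```

Servers that ignore `Range` return the whole body with status 200, so production code would check `resp.status` before trusting the slice.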


1966 ◽  
Vol 2 (4) ◽  
pp. 135
Author(s):  
A.W. Nicholson ◽  
D.M. Holmes

Author(s):  
Jack Dongarra ◽  
Laura Grigori ◽  
Nicholas J. Higham

A number of features of today's high-performance computers make it challenging to exploit these machines fully for computational science. These include increasing core counts but stagnant clock frequencies; the high cost of data movement; the use of accelerators (GPUs, FPGAs, coprocessors), which makes architectures increasingly heterogeneous; and multiple precisions of floating-point arithmetic, including half precision. Moreover, in addition to maximizing speed and accuracy, minimizing energy consumption is an important criterion. New generations of algorithms are needed to tackle these challenges. We discuss some approaches that can be taken to develop numerical algorithms for high-performance computational science, with a view to exploiting the next generation of supercomputers. This article is part of a discussion meeting issue 'Numerical algorithms for high-performance computational science'.


2021 ◽  
Vol 20 ◽  
pp. 160940692110258
Author(s):  
Constance Iloh

Memes are a prominent feature of global life in the 21st century. The author asserts that memes are significant to current and future qualitative research. In particular, the text establishes memes as: (a) part of everyday communication, expression, and explanation, and thus useful in qualitative research; (b) valuable cultural units and symbols; (c) forms of rapport building and of cultivating relational research; (d) approaches that bolster and sustain remote data collection; and (e) methods that infuse agency, humor, and creativity into the research process. The author then showcases distinctive ways memes can be effectively incorporated into qualitative research pursuits and publications. The article concludes with the necessity of data collection and representation approaches that advance the meaningfulness and cultural relevance of qualitative inquiry.


Systems ◽  
2019 ◽  
Vol 7 (1) ◽  
pp. 6
Author(s):  
Allen D. Parks ◽  
David J. Marchette

The Müller-Wichards model (MW) is an algebraic method that quantitatively estimates the performance of sequential and/or parallel computer applications. Because of category theory's expressive power and mathematical precision, a category-theoretic reformulation of MW, called CMW, is presented in this paper. CMW is numerically equivalent to MW and can be used to estimate the performance of any system that can be represented as numerical sequences of arithmetic, data-movement, and delay processes. The fundamental symmetry group of CMW is introduced, and CMW's category-theoretic formalism is used to identify the associated model invariants. The formalism also yields a natural approach to dividing systems into subsystems in a manner that preserves performance. Closed-form models are developed and studied statistically, and special-case closed-form models are used to quantify abstractly the effect of parallelization on processing time versus loading, as well as to establish a stationary-action principle for system performance.
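As a loose intuition for what such a performance algebra composes, a system can be treated as a sequence of stages, each either serial (e.g. data movement, delay) or divisible across p processors. The toy Amdahl-style model below is a stand-in for that idea, not the MW/CMW formalism itself:

```python
def pipeline_time(stages, p):
    """Estimated time for a sequence of stages run on p processors.
    Each stage is a (work, parallelizable) pair: serial stages
    (data movement, delay) cost their full work, while parallel
    stages divide their work evenly across the p processors."""
    return sum(work / p if parallel else work
               for work, parallel in stages)

# 10 units of serial data movement followed by 90 units of compute:
stages = [(10.0, False), (90.0, True)]
speedup_8 = pipeline_time(stages, 1) / pipeline_time(stages, 8)
```

The serial stage caps the speedup at 10x no matter how large p grows, which is the kind of processing-time-versus-loading trade-off the closed-form models quantify.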

