Accelerating I/O in ESMs using on-demand file systems

Author(s):  
Stefan Versick ◽  
Thomas Fischer ◽  
Ole Kirner ◽  
Tobias Meisel ◽  
Jörg Meyer

Earth System Models (ESMs) have become much more demanding in recent years. Modelled processes have grown more complex, and more and more processes are represented in the models. In addition, model resolutions have increased to improve the accuracy of predictions. This requires faster high-performance computers (HPC) and better I/O performance. One way to improve I/O performance is to use faster file systems. Last year we showed the impact of an ad-hoc file system on the performance of the ESM EMAC. An ad-hoc file system is a private parallel file system that is created on demand for an HPC job using the node-local storage devices, in our case solid-state disks (SSDs). It exists only during the runtime of the job, so output data have to be moved to a permanent file system before the job finishes. Performance improvements come from the use of SSDs in the case of small I/O chunks or a high number of I/O operations per second, and from the fact that the running job has exclusive access to the file system. To get a better overview of the cases in which ESMs benefit from ad-hoc file systems, we repeated our performance tests with further ESMs that use different I/O strategies. In total we have now analyzed EMAC (parallel NetCDF), ICON2.5 (NetCDF with asynchronous I/O), ICON2.6 (NetCDF with the Climate Data Interface (CDI) library) and OpenGeoSys (parallel VTU).
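For illustration, the workflow behind an ad-hoc file system is a write-fast-locally, stage-out-later pattern. Below is a minimal Python sketch under assumed, hypothetical mount points (/mnt/adhoc for the job-private file system, /lustre/project for permanent storage); the actual EMAC setup is not described at this level in the abstract.

```python
import shutil
from pathlib import Path

# Hypothetical mount points: the ad-hoc file system exists only for this job.
ADHOC = Path("/mnt/adhoc/output")     # node-local SSDs, fast, temporary
PERMANENT = Path("/lustre/project")   # parallel file system, survives the job

ADHOC.mkdir(parents=True, exist_ok=True)

def write_timestep(step: int, data: bytes) -> None:
    # During the run, all output lands on the fast ad-hoc file system.
    (ADHOC / f"emac_step_{step:06d}.nc").write_bytes(data)

def stage_out() -> None:
    # Before the job ends, copy results to permanent storage;
    # otherwise they vanish together with the ad-hoc file system.
    for f in sorted(ADHOC.glob("*.nc")):
        shutil.copy2(f, PERMANENT / f.name)

for step in range(3):
    write_timestep(step, b"...model output...")
stage_out()
```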

2020 ◽  
Author(s):  
Stefan Versick ◽  
Ole Kirner ◽  
Jörg Meyer ◽  
Holger Obermaier ◽  
Mehmet Soysal

Earth System Models (ESMs) have become much more demanding in recent years. Modelled processes have grown more complex, and more and more processes are represented in the models. In addition, model resolutions have increased to improve weather and climate forecasts. This requires faster high-performance computers (HPC) and better I/O performance.

Within our Pilot Lab Exascale Earth System Modelling (PL-EESM) we analyze the performance of the ESM EMAC using a standard Lustre file system for output and compare it to the performance using a parallel ad-hoc overlay file system. We will show the impact for two scenarios: one with today's standard amount of output and one with artificial heavy output simulating future ESMs.

An ad-hoc file system is a private parallel file system that is created on demand for an HPC job using the node-local storage devices, in our case solid-state disks (SSDs). It exists only during the runtime of the job, so output data have to be moved to a permanent file system before the job finishes. Quasi in-situ data analysis and post-processing can improve performance, as it may reduce the amount of data that has to be stored, saving disk space and time during the transfer of data to permanent storage. We will show first tests of quasi in-situ post-processing.
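The idea behind quasi in-situ post-processing can be sketched in a few lines: reduce the raw output while it still sits on the ad-hoc file system and transfer only the (smaller) result. A minimal Python/NumPy sketch, again with hypothetical paths and a time mean as a stand-in reduction:

```python
import numpy as np
from pathlib import Path

ADHOC = Path("/mnt/adhoc/output")      # hypothetical job-private file system
PERMANENT = Path("/lustre/project")    # hypothetical permanent storage

# Raw output: one array per timestep, written during the model run.
steps = sorted(ADHOC.glob("temp_step_*.npy"))

# Quasi in-situ reduction: average over time before leaving the node.
mean_field = np.mean([np.load(f) for f in steps], axis=0)

# Only the (much smaller) reduced field is moved to permanent storage.
np.save(PERMANENT / "temp_time_mean.npy", mean_field)
```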


2014 ◽  
Vol 2014 ◽  
pp. 1-11 ◽  
Author(s):  
Trung Kien Vu ◽  
Sungoh Kwon

We propose a mobility-assisted on-demand routing algorithm for mobile ad hoc networks in the presence of location errors. Location awareness enables mobile nodes to predict their mobility and enhances routing performance by estimating link duration and selecting reliable routes. However, measured locations intrinsically include measurement errors. Such errors degrade mobility prediction and have been ignored in previous work. To mitigate their impact on routing, we propose an on-demand routing algorithm that takes location errors into account. To that end, we adopt the Kalman filter to estimate accurate locations and consider route confidence when discovering routes. Via simulations, we compare our algorithm with previous algorithms in various environments. Our proposed mobility prediction is robust to location errors.
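The filtering step can be illustrated with a standard constant-velocity Kalman filter; the matrices and noise levels below are illustrative assumptions, not the paper's parameters. The filtered velocity estimate is what enables link-duration prediction:

```python
import numpy as np

dt = 1.0  # time between location measurements (s); illustrative value

# Constant-velocity model: state = [x, y, vx, vy]
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],       # only position is measured, with error
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01              # process noise (assumed)
R = np.eye(2) * 25.0              # measurement noise, e.g. GPS error (assumed)

x = np.zeros(4)                   # state estimate
P = np.eye(4) * 100.0             # estimate covariance

def kalman_step(z):
    """One predict/update cycle for a noisy position measurement z."""
    global x, P
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x[:2], x[2:]           # filtered position and velocity

pos, vel = kalman_step(np.array([103.0, 51.0]))
```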


2021 ◽  
Vol 17 (3) ◽  
pp. 1-25
Author(s):  
Bohong Zhu ◽  
Youmin Chen ◽  
Qing Wang ◽  
Youyou Lu ◽  
Jiwu Shu

Non-volatile memory and remote direct memory access (RDMA) provide extremely high performance in storage and network hardware. However, existing distributed file systems strictly isolate the file system and network layers, and the heavy layered software designs leave the high-speed hardware under-exploited. In this article, we propose an RDMA-enabled distributed persistent memory file system, Octopus+, which redesigns the file system's internal mechanisms by closely coupling non-volatile memory and RDMA features. For data operations, Octopus+ directly accesses a shared persistent memory pool to reduce memory-copying overhead, and actively fetches and pushes data entirely on the clients to rebalance the load between the server and the network. For metadata operations, Octopus+ introduces self-identified remote procedure calls for immediate notification between the file system and the network, and an efficient distributed transaction mechanism for consistency. Octopus+ also provides a replication feature for better availability. Evaluations on Intel Optane DC Persistent Memory Modules show that Octopus+ achieves nearly the raw bandwidth for large I/Os and orders-of-magnitude better performance than existing distributed file systems.
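The client-active data path can be illustrated conceptually: the server hands out only (offset, length) metadata into a shared pool, and the client fetches the bytes itself. In the sketch below a local memory-mapped file stands in for the persistent memory pool, and a plain read stands in for the one-sided RDMA read; this is a conceptual analogy, not the Octopus+ API:

```python
import mmap, os

POOL_SIZE = 1 << 20  # 1 MiB stand-in for the shared persistent memory pool

# Create the "persistent memory pool" (a plain file here; NVMM + RDMA in reality).
fd = os.open("/tmp/pm_pool.img", os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, POOL_SIZE)
pool = mmap.mmap(fd, POOL_SIZE)

def server_write(offset: int, payload: bytes) -> tuple[int, int]:
    # The server places file data in the pool and hands out only metadata:
    # (offset, length). No data copy through the server on the read path.
    pool[offset:offset + len(payload)] = payload
    return offset, len(payload)

def client_read(offset: int, length: int) -> bytes:
    # The client fetches the data itself ("client-active"); in Octopus+
    # this would be a one-sided RDMA read into client memory.
    return bytes(pool[offset:offset + length])

meta = server_write(4096, b"file block contents")
assert client_read(*meta) == b"file block contents"
```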


Author(s):  
Armando Fandango ◽  
William Rivera

Scientific Big Data gathered at exascale needs to be stored, retrieved and manipulated. The storage stack for scientific Big Data includes a file system at the system level for the physical organization of the data, and a file format and input/output (I/O) system at the application level for its logical organization; both must be of a high-performance variety for exascale. High-performance file systems are designed for concurrent access, high-speed transmission and fault tolerance. High-performance file formats and I/O are designed to give parallel and distributed applications easy and fast access to Big Data. These specialized file formats make it easier to store and access Big Data for scientific visualization and predictive analytics. This chapter provides a brief review of the characteristics of high-performance file systems such as Lustre and GPFS, and of high-performance file formats such as HDF5, NetCDF, MPI-IO, and HDFS.
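As an example of the application-level layer, HDF5's chunked, self-describing layout is what makes selective access to large arrays cheap. A short h5py sketch (the dataset name and shapes are illustrative):

```python
import numpy as np
import h5py

# E.g. a year of monthly global fields on a 1-degree grid (illustrative).
data = np.random.rand(12, 180, 360)

with h5py.File("climate.h5", "w") as f:
    # Chunked, compressed dataset: readers can fetch one month's field
    # without touching the rest of the file.
    dset = f.create_dataset("temperature", data=data,
                            chunks=(1, 180, 360), compression="gzip")
    dset.attrs["units"] = "K"          # self-describing metadata

with h5py.File("climate.h5", "r") as f:
    july = f["temperature"][6]         # reads only the needed chunk
```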


Author(s):  
Mian-Guan Lim ◽  
Sining Wu ◽  
Tomasz Simon ◽  
Md Rashid ◽  
Na Helian

On-demand cloud applications like online email accounts and online virtual disk space are becoming widely available in various forms. In cloud applications, one can see the importance of underlying resources, such as disk space, that are available to the end-user but not easily accessible. In the authors' study, a modern file system developed on Linux is proposed that consumes cloud applications and makes the underlying disk-space resource available to the end-user. The system is implemented as a web service to provide cross-operating-system support. A free online mail account was used to demonstrate the solution, and the IMAP protocol was used to communicate with remote data spaces, so the method can mount any email system that supports IMAP. The authors define infinite storage as the user being able to mount file systems as a single logical drive.
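The storage primitive such a file system needs from the mail provider is essentially "store a named blob" and "fetch it back", which IMAP provides via APPEND and FETCH. A minimal sketch using Python's standard imaplib module, with a hypothetical account and folder name; this illustrates the approach, not the authors' implementation:

```python
import imaplib
from email.message import EmailMessage

# Hypothetical account; any IMAP-capable mail provider would do.
HOST, USER, PASSWORD = "imap.example.com", "user@example.com", "secret"

def store_file(name: str, payload: bytes) -> None:
    """Store a file as an email message in a dedicated IMAP folder."""
    msg = EmailMessage()
    msg["Subject"] = name                       # file name lives in the subject
    msg.add_attachment(payload, maintype="application",
                       subtype="octet-stream", filename=name)
    imap = imaplib.IMAP4_SSL(HOST)
    imap.login(USER, PASSWORD)
    imap.create("FSROOT")                       # folder acts as a directory;
    imap.append("FSROOT", None, None, msg.as_bytes())  # no-op if it exists
    imap.logout()

store_file("notes.txt", b"hello cloud disk")
```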


Author(s):  
Bo Li ◽  
Ziyi Peng ◽  
Peng Hou ◽  
Min He ◽  
Marco Anisetti ◽  
...  

In the Internet of Vehicles (IoV), with the increasing demand for intelligent technologies such as driverless driving, more and more in-vehicle applications are being put into autonomous driving. For computationally intensive tasks, the vehicular self-organizing network uses other high-performance nodes in the driving environment, handing tasks over to these nodes for execution. In this way, the computational load on the cloud is alleviated. However, due to the unreliability of the communication links and the dynamic changes of the vehicular environment, long task completion times may increase the task failure rate. Although a flooding algorithm can improve the task completion rate, its offloading cost is large. To address this problem, we design a partial flooding algorithm, a comprehensive evaluation method based on system reliability for infrastructure-free vehicular computing environments. V2V links are used to select a subset of nodes with better performance for partial flooding offloading, which reduces task completion time, improves system reliability and mitigates the impact of vehicle mobility on offloading. The results show that the proposed offloading strategy not only improves the utilization of computing resources but also improves the offloading performance of the system.
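The selection step of partial flooding can be sketched as scoring neighbors and replicating the task only to the top-k of them. The scoring function below is an illustrative assumption; the abstract does not specify the paper's actual evaluation method:

```python
from dataclasses import dataclass

@dataclass
class Neighbor:
    node_id: str
    cpu_free: float       # available compute capacity, normalized to 0..1
    link_quality: float   # V2V link reliability estimate, 0..1
    contact_time: float   # predicted remaining contact duration (s)

def reliability_score(n: Neighbor, task_runtime: float) -> float:
    # Illustrative scoring: a node is useful only if it stays in range
    # long enough; among those, prefer free CPUs and reliable links.
    if n.contact_time < task_runtime:
        return 0.0
    return n.cpu_free * n.link_quality

def partial_flood(neighbors, task_runtime: float, k: int = 2):
    """Replicate the task to the k best neighbors instead of all of them."""
    ranked = sorted(neighbors,
                    key=lambda n: reliability_score(n, task_runtime),
                    reverse=True)
    return [n for n in ranked[:k] if reliability_score(n, task_runtime) > 0]

targets = partial_flood(
    [Neighbor("a", 0.9, 0.8, 12.0), Neighbor("b", 0.4, 0.9, 30.0),
     Neighbor("c", 0.7, 0.3, 5.0)],
    task_runtime=8.0,
)
```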


2020 ◽  
Vol 35 (1) ◽  
pp. 4-26 ◽  
Author(s):  
André Brinkmann ◽  
Kathryn Mohror ◽  
Weikuan Yu ◽  
Philip Carns ◽  
Toni Cortes ◽  
...  

2018 ◽  
Vol 210 ◽  
pp. 04042
Author(s):  
Ammar Alhaj Ali ◽  
Pavel Varacha ◽  
Said Krayem ◽  
Roman Jasek ◽  
Petr Zacek ◽  
...  

Nowadays, a wide set of systems and applications, especially in high-performance computing, depends on distributed environments to process and analyse huge amounts of data. The amount of data is growing enormously, and providing efficient, scalable and reliable storage solutions has become one of the major issues in scientific computing. The storage solution used by big data systems is the distributed file system (DFS), which builds a hierarchical and unified view of multiple file servers and shares on the network. In this paper we present the Hadoop Distributed File System (HDFS) as the DFS in big data systems, and Event-B as a formal method that can be used for modeling it. Event-B is a mature formal method that has been widely used in industry projects in domains such as automotive, transportation, space, business information and medical devices. We propose the Rodin platform as the modeling tool for Event-B: it integrates modeling and proving, and, being open source, it supports a large number of plug-in tools.
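Although Event-B models are written in Rodin, their shape (state, invariants, guarded events) can be mimicked in a few lines of Python. Below is a hypothetical toy model of HDFS block replication in that style; it is a didactic sketch, not an actual Event-B machine:

```python
# Toy Event-B-style model of HDFS block replication (illustrative).
REPLICATION = 3
blocks: dict[str, set[str]] = {}   # block id -> set of datanodes holding it

def invariant() -> bool:
    # INV: no block is ever stored on more than REPLICATION datanodes
    return all(len(nodes) <= REPLICATION for nodes in blocks.values())

def ev_write(block: str, node: str) -> None:
    """EVENT write: guard = block not yet stored; action = first replica."""
    assert block not in blocks        # guard
    blocks[block] = {node}            # action
    assert invariant()

def ev_replicate(block: str, node: str) -> None:
    """EVENT replicate: guard = below replication factor; action = add copy."""
    assert block in blocks and len(blocks[block]) < REPLICATION  # guard
    blocks[block].add(node)                                      # action
    assert invariant()

ev_write("blk_1", "dn1")
ev_replicate("blk_1", "dn2")
```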


2021 ◽  
Vol 17 (1) ◽  
pp. 1-22
Author(s):  
Wen Cheng ◽  
Chunyan Li ◽  
Lingfang Zeng ◽  
Yingjin Qian ◽  
Xi Li ◽  
...  

In high-performance computing (HPC), data and metadata are stored on special server nodes, and client applications access the servers' data and metadata through a network, which induces network latencies and resource contention. These server nodes are typically equipped with (slow) magnetic disks, while the client nodes store temporary data on fast SSDs or even on non-volatile main memory (NVMM). Therefore, the full potential of parallel file systems can only be reached if fast client-side storage devices are included in the overall storage architecture. In this article, we propose an NVMM-based hierarchical persistent client cache for the Lustre file system (NVMM-LPCC for short). NVMM-LPCC implements two caching modes: a read-and-write mode (RW-NVMM-LPCC for short) and a read-only mode (RO-NVMM-LPCC for short). NVMM-LPCC integrates with the Lustre Hierarchical Storage Management (HSM) solution and the Lustre layout lock mechanism to provide consistent persistent caching services for I/O applications running on client nodes, while maintaining a global unified namespace across the entire Lustre file system. The evaluation results presented in this article show that NVMM-LPCC can increase the average read throughput by up to 35.80 times and the average write throughput by up to 9.83 times compared with the native Lustre system, while providing excellent scalability.
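At its core, the read-only caching mode boils down to a read-through cache on client-local NVMM. The sketch below is heavily simplified and hypothetical (made-up paths, no HSM integration, no layout locks, hence none of NVMM-LPCC's consistency guarantees):

```python
import shutil
from pathlib import Path

LUSTRE = Path("/lustre/fs")          # hypothetical server-side file system
NVMM_CACHE = Path("/mnt/pmem/lpcc")  # hypothetical client-local NVMM cache

NVMM_CACHE.mkdir(parents=True, exist_ok=True)

def read_through(rel_path: str) -> bytes:
    """Serve reads from the client-local cache, filling it on a miss."""
    cached = NVMM_CACHE / rel_path
    if not cached.exists():                       # cache miss:
        cached.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(LUSTRE / rel_path, cached)   # fetch once from Lustre
    return cached.read_bytes()                    # later reads stay local
```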


2021 ◽  
Vol 13 (11) ◽  
pp. 6375
Author(s):  
Cristina Baglivo

This paper addresses the effects of long-term climate change on retrofit actions for a school building located in a Mediterranean climate. Dynamic energy simulations were performed using Termolog EpiX 11, first with conventional climate data and then with future-year climate data exported from the CCWorldWeatherGen computational software. To date, many incentive actions promote school renovations, but are these measures effective in preventing the discomfort that will arise from the overheating generated by climate change? Today, one of the main objectives of retrofit measures is the achievement of ZEB (Zero Energy Building) performance. Achieving this target requires first and foremost a high-performance envelope. This study evaluates the impact of retrofit strategies, applied mostly to the school building envelope, over three different time horizons up to 2080. Thermal performance indices and the indoor operative temperature under free-floating conditions were evaluated. The results highlight that, with a changing climate, it is no longer possible to assume a constant static condition when evaluating retrofit actions; instead, a predictive mathematical model is needed that considers design variability in future years. There is an urgent need to ensure both the safety and comfort of buildings while also anticipating future variations in climate.

