Log-Less Metadata Management on Metadata Server for Parallel File Systems

2014, Vol 2014, pp. 1-8
Author(s): Jianwei Liao, Guoqiang Xiao, Xiaoning Peng

This paper presents a novel metadata management mechanism on the metadata server (MDS) for parallel and distributed file systems. In this technique, the client file system backs up the metadata requests it has sent and the MDS has already handled, so the MDS does not need to log metadata changes to nonvolatile storage to provide a highly available metadata service, and metadata processing performance improves as well. Because the client file system keeps these backed-up requests in its memory, the overhead of maintaining them is much smaller than the overhead the MDS incurs when it uses logging or journaling to achieve high availability. The experimental results show that the proposed mechanism significantly speeds up metadata processing and delivers better I/O throughput than conventional metadata management schemes, i.e., logging or journaling on the MDS. Moreover, complete metadata recovery can be achieved by replaying the backup logs cached by all involved clients when the metadata server crashes or otherwise becomes non-operational.
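
To make the mechanism above concrete, here is a minimal Python sketch (illustrative only; names such as ClientBackupLog and create_file are hypothetical, not the paper's implementation) of a client that caches acknowledged metadata requests in memory and replays them to rebuild MDS state after a crash, so the MDS itself never journals to nonvolatile storage.

import itertools

class ClientBackupLog:
    """In-memory backup of metadata requests already acknowledged by the MDS
    (hypothetical illustration of the log-less scheme, not the paper's code)."""

    def __init__(self):
        self._seq = itertools.count()
        self.entries = {}                      # sequence number -> request

    def record(self, request):
        # Keep the acknowledged request so it can be replayed after an MDS crash.
        seq = next(self._seq)
        self.entries[seq] = request
        return seq

    def prune(self, checkpointed):
        # Requests made durable by an MDS checkpoint no longer need a backup.
        for seq in checkpointed:
            self.entries.pop(seq, None)

    def replay(self, send):
        # Re-send every cached request in order so a restarted MDS can rebuild
        # its metadata state without ever having written its own log.
        for seq in sorted(self.entries):
            send(self.entries[seq])

def create_file(mds_send, backup_log, path, mode=0o644):
    request = {"op": "create", "path": path, "mode": mode}
    mds_send(request)            # MDS applies the change in memory only
    backup_log.record(request)   # the client, not the MDS, keeps the backup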

2021, Vol 17 (3), pp. 1-25
Author(s): Bohong Zhu, Youmin Chen, Qing Wang, Youyou Lu, Jiwu Shu

Non-volatile memory and remote direct memory access (RDMA) provide extremely high performance in storage and network hardware. However, existing distributed file systems strictly isolate the file system and network layers, and their heavily layered software designs leave such high-speed hardware under-exploited. In this article, we propose Octopus+, an RDMA-enabled distributed persistent memory file system that redesigns the file system's internal mechanisms by closely coupling non-volatile memory and RDMA features. For data operations, Octopus+ directly accesses a shared persistent memory pool to reduce memory-copy overhead, and it actively fetches and pushes data at the clients to rebalance load between the server and the network. For metadata operations, Octopus+ introduces self-identified remote procedure calls for immediate notification between the file system and networking layers, and an efficient distributed transaction mechanism for consistency. Octopus+ also supports replication for better availability. Evaluations on Intel Optane DC Persistent Memory Modules show that Octopus+ achieves nearly the raw bandwidth for large I/Os and orders-of-magnitude better performance than existing distributed file systems.
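
As a rough illustration of the client-active data path described above (a toy Python sketch in which a bytearray stands in for the shared persistent memory pool and a plain function call stands in for a one-sided RDMA read; none of these names belong to the Octopus+ API): the metadata server only resolves where the bytes live, and the client fetches them itself, so the server CPU never copies file data.

# Toy stand-ins for the shared persistent memory pool and RDMA verbs, so the
# control flow can run locally; the real system uses NVM plus one-sided RDMA.
SHARED_PM_POOL = bytearray(1 << 20)

def rdma_read(remote_addr, length):
    # Stand-in for a one-sided RDMA READ from the shared pool.
    return bytes(SHARED_PM_POOL[remote_addr:remote_addr + length])

class MetadataServer:
    def __init__(self):
        # path -> (base address in the shared pool, file size); toy layout.
        self.extents = {"/demo.txt": (4096, 11)}

    def lookup_extent(self, path, offset, length):
        base, size = self.extents[path]
        length = min(length, size - offset)
        return {"remote_addr": base + offset, "length": length}

def client_read(server, path, offset, length):
    # Metadata RPC: the server only resolves where the bytes live.
    extent = server.lookup_extent(path, offset, length)
    # Data path: the client pulls the bytes itself ("client-active" I/O),
    # shifting load from the server to the network and client.
    return rdma_read(extent["remote_addr"], extent["length"])

SHARED_PM_POOL[4096:4107] = b"hello world"
print(client_read(MetadataServer(), "/demo.txt", 0, 11))   # b'hello world'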


2017, Vol 2 (3), pp. 161
Author(s): S. Sathya, M. Ranjith Kumar, K. Madheswaran

Key establishment for secure many-to-many communication has become increasingly important. The problem is motivated by the proliferation of large-scale distributed file systems that support parallel access to multiple storage devices. This work presents a variety of authenticated key exchange protocols designed to address this issue. These protocols reduce the workload of the metadata server while supporting forward secrecy and escrow-freeness, requiring only a small increase in computation overhead at the client. Specifically, three authenticated key exchange protocols are proposed for the parallel Network File System (pNFS). The protocols offer three appealing advantages over the existing Kerberos-based protocol. First, a metadata server executing these protocols has a much lower workload than in the Kerberos-based approach. Second, two of the protocols provide forward secrecy: one is partially forward secure (with respect to multiple sessions within a time period), while the other is fully forward secure (with respect to a single session). Third, one protocol not only provides forward secrecy but is also escrow-free.
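
The protocols themselves are not reproduced here; the following toy Python sketch only illustrates the forward-secrecy ingredient they build on: each session uses fresh ephemeral Diffie-Hellman shares that are discarded afterwards, so later compromise of long-term keys does not expose past session keys, and the metadata server never needs to hold the session key (escrow-freeness). The group parameters and transcript format are purely illustrative.

import hashlib
import secrets

# Toy Diffie-Hellman parameters for illustration only; a real deployment would
# use a standardized group (e.g., from RFC 3526) or an elliptic-curve exchange.
P = 2**127 - 1          # a (Mersenne) prime modulus
G = 3

def ephemeral_keypair():
    # A fresh exponent per session; discarding it afterwards is what yields
    # forward secrecy: old session keys cannot be recomputed later.
    secret = secrets.randbelow(P - 3) + 2
    return secret, pow(G, secret, P)

def session_key(own_secret, peer_public, transcript):
    shared = pow(peer_public, own_secret, P)
    # Bind the key to the (assumed authenticated) handshake transcript.
    return hashlib.sha256(shared.to_bytes(16, "big") + transcript).digest()

# Client and storage device each contribute an ephemeral share; the metadata
# server only authenticates the exchange, it does not derive or escrow the key.
a, A = ephemeral_keypair()
b, B = ephemeral_keypair()
transcript = b"client-id|storage-id|" + A.to_bytes(16, "big") + B.to_bytes(16, "big")
assert session_key(a, B, transcript) == session_key(b, A, transcript)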


2019, Vol 30 (9), pp. 1962-1974
Author(s): Yuanning Gao, Xiaofeng Gao, Xiaochun Yang, Jiaxi Liu, Guihai Chen

2018, Vol 210, pp. 04042
Author(s): Ammar Alhaj Ali, Pavel Varacha, Said Krayem, Roman Jasek, Petr Zacek, et al.

Nowadays, a wide range of systems and applications, especially in high-performance computing, depend on distributed environments to process and analyze huge amounts of data. As the amount of data grows enormously, providing efficient, scalable, and reliable storage solutions has become one of the major issues for scientific computing. The storage solution used by big data systems is the distributed file system (DFS), which builds a hierarchical and unified view of multiple file servers and shares on the network. In this paper we consider the Hadoop Distributed File System (HDFS) as the DFS in big data systems and present Event-B as a formal method for modeling it. Event-B is a mature formal method that has been widely used in industrial projects across a number of domains, such as automotive, transportation, space, business information, and medical devices. We also propose using Rodin as the modeling tool for Event-B: the Rodin platform integrates modeling and proving, is open source, and supports a large number of plug-in tools.
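
Event-B models are written in Rodin's own notation, which is not reproduced here; the Python sketch below only approximates the modeling style the paper advocates: an abstract machine as state variables, an invariant, and guarded events, where every event must preserve the invariant (the proof obligation Rodin discharges). The HDFS-flavoured state (blocks, replicas, replication factor) is a hypothetical example.

# A Python approximation (not Event-B/Rodin syntax) of a guarded-event machine.
REPLICATION_FACTOR = 3

state = {
    "datanodes": {"dn1", "dn2", "dn3"},
    "blocks": {},            # block id -> set of datanodes holding a replica
}

def invariant(s):
    # Every replica lives on a known datanode, and no block is over-replicated.
    return all(holders <= s["datanodes"] and len(holders) <= REPLICATION_FACTOR
               for holders in s["blocks"].values())

def add_block(s, block_id):
    # Event ADD_BLOCK; guard: the block does not exist yet.
    assert block_id not in s["blocks"]
    s["blocks"][block_id] = set()

def replicate(s, block_id, datanode):
    # Event REPLICATE; guards: the datanode is live, the block is under-replicated.
    assert datanode in s["datanodes"]
    assert len(s["blocks"][block_id]) < REPLICATION_FACTOR
    s["blocks"][block_id].add(datanode)

assert invariant(state)
add_block(state, "blk_0001")
replicate(state, "blk_0001", "dn1")
assert invariant(state)   # the events preserve the invariant (proof obligation)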


2016, Vol 9 (7), pp. 2293-2300
Author(s): Hisashi Yashiro, Koji Terasaki, Takemasa Miyoshi, Hirofumi Tomita

Abstract. In this paper, we propose the design and implementation of an ensemble data assimilation (DA) framework for weather prediction at high resolution and with a large ensemble size. In designing and deploying this framework, we focus on the data throughput of file input/output (I/O) and multi-node communication. As an instance of the proposed framework, a local ensemble transform Kalman filter (LETKF) was used with the Non-hydrostatic Icosahedral Atmospheric Model (NICAM) to build the DA system. Benchmark tests were performed on the K computer, a massively parallel supercomputer with distributed file systems. The results show an improvement in the total time required for the workflow as well as satisfactory scalability up to 10 K nodes (80 K cores). On high-performance computing systems, where data throughput performance increases more slowly than computational performance, our new framework for ensemble DA systems promises a drastic reduction in total execution time.
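
For reference, here is a minimal NumPy sketch of the LETKF analysis step for a single local region, following the standard ensemble transform formulation (e.g., Hunt et al., 2007); it is illustrative only and independent of the NICAM-LETKF implementation and its I/O framework.

import numpy as np

def letkf_analysis(Xb, Yb, yo, R, inflation=1.0):
    """One LETKF analysis step for a single local region.
    Xb: (n, k) forecast ensemble states; Yb: (m, k) ensemble in observation
    space; yo: (m,) observations; R: (m, m) observation error covariance."""
    n, k = Xb.shape
    xb_mean = Xb.mean(axis=1, keepdims=True)
    Xb_pert = Xb - xb_mean                      # forecast perturbations
    yb_mean = Yb.mean(axis=1, keepdims=True)
    Yb_pert = Yb - yb_mean

    C = Yb_pert.T @ np.linalg.inv(R)            # (k, m)
    Pa = np.linalg.inv((k - 1) / inflation * np.eye(k) + C @ Yb_pert)
    w_mean = Pa @ C @ (yo - yb_mean.ravel())    # mean update weights
    # Symmetric square root gives the analysis perturbation weights.
    vals, vecs = np.linalg.eigh((k - 1) * Pa)
    W_pert = vecs @ np.diag(np.sqrt(vals)) @ vecs.T
    W = W_pert + w_mean[:, None]                # per-member weight columns
    return xb_mean + Xb_pert @ W                # analysis ensemble, (n, k)

# Toy usage: 4 state variables, 8 members, 2 observations of the first two.
rng = np.random.default_rng(0)
Xb = rng.normal(size=(4, 8))
H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0   # toy observation operator
Yb = H @ Xb
Xa = letkf_analysis(Xb, Yb, yo=np.array([0.5, -0.2]), R=0.1 * np.eye(2))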


2020, Vol 31 (2), pp. 374-392
Author(s): Jiang Zhou, Yong Chen, Weiping Wang, Shuibing He, Dan Meng

2014, Vol 602-605, pp. 3282-3284
Author(s): Fa Gui Liu, Xiao Jie Zhang

Distributed file systems such as HDFS face the threat of advanced persistent threats (APTs). Although security mechanisms such as Kerberos and ACLs are implemented in distributed file systems, most of them are not sufficient to counter the threats posed by APTs. Based on observations of APT characteristics, we propose a trusted distributed file system based on HDFS, which provides an additional layer of security against APTs beyond the current security mechanisms.

