Hybrid File System - A Strategy for the Optimization of File System

2013 ◽  
Vol 734-737 ◽  
pp. 3129-3132
Author(s):  
Ya Rong Wang ◽  
Pei Rong Wang ◽  
Rui Liu
Keyword(s):  
System A ◽  

The hybrid file system is designed to optimize the response latency of file system I/Os and to extend the capacity of the local file system to the cloud by taking advantage of the Internet. Our hybrid file system consists of an SSD, an HDD and the Amazon S3 cloud file system. We store small files, the directory tree and the metadata of all files on the SSD, because the SSD responds well to small, random I/Os. The HDD is good at serving large, sequential I/Os, so we use it as a warehouse for big files, which are reached through symbolic links kept on the SSD. We also extend the local file system to the cloud in order to enlarge its capacity. In this paper we describe the design and implementation details of our hybrid file system as well as its test data.
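The abstract does not give the placement policy itself, but the SSD/HDD split with symbolic links can be sketched roughly as follows; the size threshold, mount points and use of the Python standard library are illustrative assumptions, not the authors' implementation (offloading cold files to Amazon S3 would be an additional step not shown here).

```python
import os
import shutil

# Hypothetical threshold and mount points; the paper does not give exact values.
SMALL_FILE_LIMIT = 64 * 1024      # files below this size stay on the SSD
SSD_ROOT = "/mnt/ssd/fs"          # directory tree, metadata and small files
HDD_ROOT = "/mnt/hdd/store"       # large files, reached via symbolic links

def place_file(src_path: str, rel_path: str) -> None:
    """Store a file by size: small files on the SSD, large files on the HDD
    with a symbolic link left in the SSD directory tree."""
    ssd_path = os.path.join(SSD_ROOT, rel_path)
    os.makedirs(os.path.dirname(ssd_path), exist_ok=True)
    if os.path.getsize(src_path) < SMALL_FILE_LIMIT:
        shutil.copy2(src_path, ssd_path)      # small, random I/O -> SSD
    else:
        hdd_path = os.path.join(HDD_ROOT, rel_path)
        os.makedirs(os.path.dirname(hdd_path), exist_ok=True)
        shutil.copy2(src_path, hdd_path)      # big, sequential I/O -> HDD
        os.symlink(hdd_path, ssd_path)        # SSD keeps the directory entry
```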

1966 ◽  
Vol 88 (2) ◽  
pp. 164-168 ◽  
Author(s):  
S. S. Grover

This paper deals with pulsations in pressure and flow in the reciprocating compressor and connected piping system. A model is presented that describes the excitation at the compressor and the propagation of the pulsations in the interconnected piping. It has been adapted to digital computations to predict the pulse magnitudes in reciprocating compressor piping systems and to assess measures for their control. Predicted results have been compared with field test data and with simplified limiting condition results. A discussion of its practical application is included.


Author(s):  
Eric Villasenor ◽  
Timothy Pritchett ◽  
Jagadeesh M. Dyaberi ◽  
Vijay S. Pai ◽  
Mithuna Thottethodi
Keyword(s):  
Big Data ◽  

2013 ◽  
Vol 756-759 ◽  
pp. 4207-4211
Author(s):  
Bo Qu

This paper describes, in technical detail, the design and implementation of an SD card driver and a tiny file system for a multi-process micro-kernel embedded operating system on ARM, including the structure of the device driver, the key techniques used in designing the SD card driver, the architecture of the tiny file system, a brief description of its design, and a demo example. The SD card driver and the tiny file system were implemented with the GNU tool chain by the author of this paper. Practice shows that the system can be used not only for embedded application development but also for teaching related courses.
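The paper does not publish its on-card layout, so the following directory-entry format (name, start block, size, flags) is purely an assumed illustration of what a tiny file system's metadata record might look like, shown here with Python's struct module rather than the author's C code.

```python
import struct

# Hypothetical fixed-size directory entry: 12-byte name, uint32 start block,
# uint32 file size in bytes, uint16 flags (22 bytes total, little-endian).
DIR_ENTRY = struct.Struct("<12sIIH")

def pack_entry(name: str, start_block: int, size: int, flags: int = 0) -> bytes:
    """Serialize one directory entry for writing to an SD card block."""
    raw_name = name.encode("ascii")[:12].ljust(12, b"\0")
    return DIR_ENTRY.pack(raw_name, start_block, size, flags)

def unpack_entry(raw: bytes):
    """Parse a directory entry read back from the card."""
    raw_name, start_block, size, flags = DIR_ENTRY.unpack(raw)
    return raw_name.rstrip(b"\0").decode("ascii"), start_block, size, flags
```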


2019 ◽  
Vol 214 ◽  
pp. 05001 ◽  
Author(s):  
Stefan-Gabriel Chitic ◽  
Ben Couturier ◽  
Marco Clemencic ◽  
Joel Closier

A continuous integration system is crucial to maintain the quality of the 6 million lines of C++ and Python source code of the LHCb software, in order to ensure consistent builds of the software as well as to run the unit and integration tests. The Jenkins automation server is used for this purpose. It builds and tests around 100 configurations and produces on the order of 1500 built artifacts per day, which are installed on the CVMFS file system or potentially on developers' machines. Faced with a large and growing number of configurations built every day, and in order to ease interoperation between the continuous integration system and the developers, we decided to put in place a flexible messaging system. As soon as the built artifacts have been produced, the distributed system allows their deployment based on the priority of the configurations. We describe the architecture of the new system, which is based on the RabbitMQ messaging system (and the pika Python client library) and uses priority queues to start the LHCb software integration tests and to drive the installation of the nightly builds on the CVMFS file system. We also show how the introduction of an event-based system can help with the communication of results to developers.
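As a rough sketch of the mechanism described, RabbitMQ's per-message priorities can be used from pika as shown below; the queue name, message body and priority value are hypothetical, not taken from the LHCb system.

```python
import pika

# Connect to a RabbitMQ broker (host assumed to be localhost for illustration).
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Declare a durable queue that supports per-message priorities from 0 to 10.
channel.queue_declare(queue="nightly-builds", durable=True,
                      arguments={"x-max-priority": 10})

# Publish a "build artifact ready" event; consumers installing builds on CVMFS
# would receive higher-priority configurations first.
channel.basic_publish(
    exchange="",
    routing_key="nightly-builds",
    body='{"slot": "example-slot", "platform": "example-platform"}',
    properties=pika.BasicProperties(priority=5, delivery_mode=2),
)
connection.close()
```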


2015 ◽  
Vol 2 (3) ◽  
pp. 170
Author(s):  
Ade Jamal ◽  
Denny Hermawan ◽  
Muhammad Nugraha

<p class="Default"><em>Abstrak</em> – <strong>T</strong><strong>elah dilakukan penelitian tentang </strong><strong>pengolahan terdistribusi data genbank menggunakan <em>Hadoop Distributed Filesystem </em>(HDFS) dengan tujuan mengetahui efektifitas pengolahan data genbank khususnya pada pencarian sequens dengan data masukan yang berukuran besar.</strong><strong> Penelitian dilakukan di </strong><strong>L</strong><strong>aboratorium </strong><strong>Jaringan Universitas Al Azhar Indonesia dengan menggunakan 6 komputer dan satu <em>server</em> dimana dalam <em>Hadoop</em> menjadi 7 <em>node</em> dengan rincian 1 <em>namenode</em>, 7 <em>datanode</em>, 1 secondary <em>namenode</em>. Dengan eksperimen HDFS menggunakan 1 <em>node</em>, 2 <em>node</em>, 4 <em>node</em>, 6 <em>node</em>, dan 7 <em>node</em> dibandingkan dengan <em>Local Filesystem</em>. Hasil menunjukan proses pencarian sequens data genbank menggunakan 1 – 7 <em>node</em> pada skenario eksperimen pertama dengan <em>output</em> yang menampilkan hasil 3 <em>field</em> <em>(Locus, Definition, </em>dan<em> Authors</em>), skenario eksperimen kedua dengan <em>output</em> yang menampilkan hasil 3 <em>field</em> <em>(Locus, Authors, </em>dan<em> Origin)</em>, dan skenario eksperimen ketiga menggunakan HDFS dan LFS dengan <em>output</em> yang menampilkan seluruh <em>field</em> yang terdapat dalam data genbank (</strong><strong><em>Locus, Definition, Accesion, Version, Keywords, Source, Organism, Reference, Authors, Title, Journal, Pubmed, Comment, Features, </em></strong><strong>dan<em> Origin</em></strong><strong>). Evaluasi menunjukan bahwa proses pencarian sequens data genbank menggunakan HDFS dengan 7 <em>node</em> adalah 4 kali lebih cepat dibandingkan dengan menggunakan 1 <em>node</em>. Sedangkan perbedaan waktu pada penggunaan HDFS dengan 1 <em>node</em> adalah 1.02 kali lebih cepat dibandingkan dengan <em>Local Filesystem</em> dengan 4 <em>core</em> <em>processor</em>.</strong></p><p class="Default"><strong> </strong></p><p><em>Abstract </em><strong>- A research on distributed processing of GenBank data using Hadoop Distributed File System GenBank (HDFS) in order to know the effectiveness of data processing, especially in the search sequences with large input data. Research conducted at the Network Laboratory of the University of Al Azhar Indonesia using 6 computers and a server where the Hadoop to 7 nodes with details 1 namenode, 7 datanode, 1 secondary namenode. With HDFS experiments using 1 node, node 2, node 4, node 6, and 7 nodes compared with the Local Filesystem. The results show the search process of data GenBank sequences using 1-7 nodes in the first experiment scenario with an output that displays the results of 3 fields (Locus, Definition, and Authors), a second experiment scenario with an output that displays the results of 3 fields (Locus, Authors, and Origin) , and the third experiment scenarios using HDFS and LFS with output that displays all the data fields contained in GenBank (Locus, Definition, Accesion, Version, Keywords, Source, Organism, Reference, Authors, Title, Journal, Pubmed, Comment, Features, and Origin). Evaluation shows that the search process of data GenBank sequences using HDFS with 7 nodes is 4 times faster than using one node. 
While the time difference in the use of HDFS with one node is 1:02 times faster than the Local File System with 4 core processor.</strong></p><p><strong><em> </em></strong></p><p><strong><em></em></strong><strong><em>Keywords </em></strong><em>–  genbank, sequens, distributed computing, Hadoop, HDFS</em></p>
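A field-extraction step like the first scenario (Locus, Definition, Authors) could be expressed as a Hadoop Streaming mapper over GenBank flat files stored on HDFS; this is an illustrative sketch, not the authors' code, and the field set is an assumption based on the abstract.

```python
#!/usr/bin/env python3
"""Illustrative Hadoop Streaming mapper: emits LOCUS, DEFINITION and AUTHORS
lines from GenBank flat-file records read from standard input."""
import sys

WANTED = {"LOCUS", "DEFINITION", "AUTHORS"}

for line in sys.stdin:
    entry = line.strip()
    # GenBank keywords are uppercase; sub-keywords such as AUTHORS are
    # indented, so leading whitespace is stripped before splitting.
    keyword, _, value = entry.partition(" ")
    if keyword in WANTED:
        print(f"{keyword}\t{value.strip()}")
```

Such a mapper would typically be run with the Hadoop Streaming jar, pointing -input at the GenBank files on HDFS and -mapper at this script (a reducer can be omitted for pure extraction); the exact job configuration used in the paper is not given.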


2012 ◽  
Vol 433-440 ◽  
pp. 4704-4709
Author(s):  
Yan Shen Chen ◽  
De Zhi Han

To solve the data security issue in intranet massive storage systems, a Multi-Protocol Secure File System (MPSFS for short) is designed. Firstly, the MPSFS supports access by users with different protocols and provides a unified access interface, so it can achieve high performance in data storage and retrieval; secondly, with the help of technologies such as identity authentication, access control and data encryption, the MPSFS can effectively ensure data security in the intranet storage system. Experiments show that the MPSFS provides good security and scalability for intranet massive storage systems and has little effect on network I/O performance.
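A minimal sketch of what such a unified, multi-protocol access interface could look like is given below; the protocol names, handler API, authenticator and ACL objects are assumptions for illustration, not the MPSFS implementation.

```python
from abc import ABC, abstractmethod

class ProtocolHandler(ABC):
    """One handler per access protocol (e.g. NFS, CIFS, FTP), all backed by
    the same storage system."""
    @abstractmethod
    def read(self, user: str, path: str) -> bytes: ...
    @abstractmethod
    def write(self, user: str, path: str, data: bytes) -> None: ...

class UnifiedAccessInterface:
    """Dispatches requests from different protocols to one storage back end
    after identity authentication and access control."""
    def __init__(self, handlers: dict, authenticator, acl):
        self.handlers = handlers           # protocol name -> ProtocolHandler
        self.authenticator = authenticator # verifies user identity
        self.acl = acl                     # checks per-path permissions

    def read(self, protocol: str, user: str, token: str, path: str) -> bytes:
        if not self.authenticator.verify(user, token):
            raise PermissionError("authentication failed")
        if not self.acl.allows(user, path, "read"):
            raise PermissionError("access denied")
        return self.handlers[protocol].read(user, path)
```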


Author(s):  
Kuiyu Chang ◽  
I. Wayan Tresna Perdana ◽  
Bramandia Ramadhana ◽  
Kailash Sethuraman ◽  
Truc Viet Le ◽  
...  
