Design and Evaluation of Bupt-Cloud-Storage System

2012 ◽  
Vol 566 ◽  
pp. 560-567
Author(s):  
Li Feng Zhou ◽  
Wen Bin Yao ◽  
De Yan Jiang ◽  
Cong Wang

Cloud storage, composed of a large number of storage devices and servers, provides large-scale, flexible storage services over the Internet. BCSS (Bupt-Cloud-Storage System), built from inexpensive, unreliable PCs, is designed as a mass storage platform offering highly reliable and available storage services. It also improves data-access performance by supporting multi-user concurrency control. Experimental results verify the efficiency and performance of BCSS's storage services.
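The abstract does not describe BCSS's concurrency mechanism; a minimal sketch of one common building block for multi-user concurrent access, a multi-reader/single-writer lock, might look like this (all names are illustrative, not from the paper):

```python
import threading

class RWLock:
    """Sketch of multi-reader / single-writer concurrency control
    (hypothetical; BCSS's actual mechanism is not described)."""
    def __init__(self):
        self._lock = threading.Lock()
        self._readers_done = threading.Condition(self._lock)
        self._readers = 0

    def acquire_read(self):
        # Any number of readers may hold the lock concurrently.
        with self._lock:
            self._readers += 1

    def release_read(self):
        with self._lock:
            self._readers -= 1
            if self._readers == 0:
                self._readers_done.notify_all()

    def acquire_write(self):
        # A writer waits until all readers have finished, then holds
        # the lock exclusively (wait() releases and reacquires it).
        self._lock.acquire()
        while self._readers > 0:
            self._readers_done.wait()

    def release_write(self):
        self._lock.release()
```

This simple scheme can starve writers under heavy read load; production systems typically add writer-priority queuing.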

2014 ◽  
Vol 1030-1032 ◽  
pp. 1619-1622
Author(s):  
Bing Xin Zhu ◽  
Jing Tao Li

In a large-scale storage system, the various computation, transfer, and storage devices differ physically both in performance and in characteristics such as reliability. Meanwhile, the operational data-access load on storage devices is also non-uniform, varying greatly in space and time. Storing all data on high-performance devices is unrealistic and unwise. The hierarchical storage concept effectively solves this problem: it monitors data-access loads and optimally configures storage resources according to the load and application requirements [1]. Traditional classification policies generally target file data, classifying files by access frequency or a file I/O heat index. Starting from the concept of website user value, and addressing the disadvantages of traditional data-classification strategies, this paper proposes a centralized data-classification strategy based on user value.
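The abstract does not give the paper's user-value formula, but the idea of scoring users and tiering their data accordingly can be sketched as follows (the score weights, thresholds, and tier names are all assumptions for illustration):

```python
def user_value(visits, purchases, recency_days):
    """Hypothetical user-value score combining activity and recency;
    the paper's actual formula is not given in the abstract."""
    return 0.5 * visits + 2.0 * purchases + 1.0 / (1 + recency_days)

def assign_tier(score, hot_threshold=10.0, warm_threshold=3.0):
    """Map a user-value score to a storage tier: higher-value users'
    data lands on faster devices."""
    if score >= hot_threshold:
        return "ssd"   # hot tier
    if score >= warm_threshold:
        return "sas"   # warm tier
    return "sata"      # cold tier
```

For example, a frequent purchaser's data would be placed on SSD, while a long-inactive user's data would sink to the SATA tier.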


2013 ◽  
Vol 380-384 ◽  
pp. 2371-2374
Author(s):  
Lei Zhang ◽  
Li Gu Zhu ◽  
Sai Feng Zeng

In large-scale cluster storage systems, the various computation, transmission, and storage devices exhibit great physical differences, whether in performance or in reliability characteristics. Meanwhile, the actual data-access traffic load on storage devices is also non-uniform, differing greatly in space and time. It is unrealistic and unwise to store all data on high-performance devices. To resolve this problem effectively, we propose a large-scale adaptive tiered storage architecture that effectively monitors the access load and adaptively allocates storage resources according to the application environment. This exploits the full potential of high-performance storage nodes to improve the performance of large-scale clustered storage systems.
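The monitor-then-reallocate loop described above can be sketched as a windowed access counter that promotes hot files and demotes cold ones (an assumed design; thresholds, tier names, and the rebalancing policy are illustrative, not the paper's):

```python
from collections import Counter

class TierManager:
    """Sketch of load-monitored adaptive tiering: count accesses over
    a monitoring window, then migrate files between tiers."""
    def __init__(self, promote_at=100, demote_at=10):
        self.access = Counter()
        self.tier = {}            # file_id -> "fast" or "slow"
        self.promote_at = promote_at
        self.demote_at = demote_at

    def record_access(self, file_id):
        self.access[file_id] += 1

    def rebalance(self):
        """Promote hot files to fast nodes, demote cold ones, then
        reset counters for the next monitoring window."""
        for f, n in self.access.items():
            if n >= self.promote_at:
                self.tier[f] = "fast"
            elif n <= self.demote_at:
                self.tier[f] = "slow"
        self.access.clear()
```

A real system would also bound migration traffic per window so rebalancing does not itself overload the storage network.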


2018 ◽  
Vol 228 ◽  
pp. 01011
Author(s):  
Haifeng Zhong ◽  
Jianying Xiong

Wide-area Internet storage systems based on Distributed Hash Tables (DHTs) use fully distributed data and metadata management to construct extensible, efficient mass storage systems for Internet-based applications. However, such systems operate in highly dynamic environments, where the frequent entry and exit of nodes leads to huge communication costs. This paper therefore proposes a new hierarchical DHT-based metadata routing management mechanism that makes full use of stable nodes to reduce the maintenance overhead of the overlay. Analysis shows that the algorithm can effectively improve efficiency and enhance stability.
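One way to read "making full use of stable nodes" is a two-level overlay in which only stable nodes join the routing ring, so churn among transient nodes never triggers ring maintenance. A consistent-hashing sketch of that idea (the structure and node names are assumptions, not the paper's actual protocol):

```python
import hashlib
from bisect import bisect_right

def _h(key):
    """Hash a string key onto the ring's identifier space."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class StableNodeRing:
    """Sketch of a two-level DHT: only stable nodes are placed on the
    routing ring; transient nodes would attach to them as clients."""
    def __init__(self, stable_nodes):
        self._ring = sorted((_h(n), n) for n in stable_nodes)

    def route(self, metadata_key):
        # Route to the first stable node clockwise from the key's hash.
        hashes = [h for h, _ in self._ring]
        i = bisect_right(hashes, _h(metadata_key)) % len(self._ring)
        return self._ring[i][1]
```

Because transient nodes are off the ring, their arrival and departure costs only a local registration message rather than overlay-wide finger-table repair.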


2013 ◽  
Vol 5 (1) ◽  
pp. 53-69
Author(s):  
Jacques Jorda ◽  
Aurélien Ortiz ◽  
Abdelaziz M’zoughi ◽  
Salam Traboulsi

Grid computing is commonly used for large-scale applications requiring huge computation capabilities. In such distributed architectures, data storage on the distributed storage resources must be handled by a dedicated storage system to ensure the required quality of service. To simplify data placement on nodes and increase application performance, a storage virtualization layer can be used. This layer can be a single parallel filesystem (like GPFS) or a more complex middleware. The latter is preferred, as it allows data placement on the nodes to be tuned to increase both the reliability and the performance of data access. Thus, in such a middleware, a dedicated monitoring system must be used to ensure optimal performance. In this paper, the authors briefly introduce the Visage middleware, a middleware for storage virtualization. They present the most broadly used grid monitoring systems and explain why they are not adequate for virtualized storage monitoring. They then present the architecture of their monitoring system dedicated to storage virtualization, introduce the workload prediction model used to select the best node for data placement, and demonstrate its accuracy in a simple experiment.
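The abstract does not specify the prediction model; a common baseline for this kind of placement decision is an exponentially weighted moving average of observed node load, with new data placed on the node whose predicted load is lowest (the smoothing factor and API are assumptions for illustration):

```python
class LoadPredictor:
    """Sketch of an EWMA load predictor for placement decisions
    (a stand-in; the paper's actual model is not described)."""
    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.predicted = {}   # node -> predicted load

    def observe(self, node, load):
        # Blend the new measurement with the running prediction.
        prev = self.predicted.get(node, load)
        self.predicted[node] = self.alpha * load + (1 - self.alpha) * prev

    def best_node(self):
        """Return the node with the lowest predicted load."""
        return min(self.predicted, key=self.predicted.get)
```

Smoothing matters here: placing data on whichever node was idle in the last sample alone would chase noise, while the EWMA tracks sustained load trends.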


The rapid development of information technology has driven a surprisingly fast increase in data volume. Cloud computing and the Internet of Things (IoT) have recently become among the hottest topics in the information technology industry. Cloud computing offers many advantages, such as scalability, low cost, and large scale, and primary IoT techniques such as Radio-Frequency Identification (RFID) have been applied at large scale. The number of cloud storage users has been growing greatly because cloud storage systems reduce maintenance burdens and require less storage than other methods. Such systems provide a high degree of reliability and availability by introducing redundancy: in replicated systems, objects are copied many times, with each copy residing at a different location in the distributed infrastructure. Data replication, however, poses challenges for both cloud storage users and providers, since providing efficient data storage remains a major challenge. This work analyses different data-replication strategies and points out several issues they affect. In this work, data replication is performed using Cuckoo Search (CS) and Greedy Search. The research aims to reduce replication without adversely affecting the reliability and availability of data.
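The trade-off the abstract describes, fewer replicas without sacrificing availability, can be illustrated with a greedy sketch: keep adding the most reliable remaining node until the combined availability 1 - prod(1 - a_i) meets a target. This is an illustrative stand-in for the paper's Greedy/Cuckoo strategies, not their actual algorithm:

```python
def min_replicas(node_availability, target):
    """Greedily choose the fewest replica nodes whose combined
    availability 1 - prod(1 - a_i) reaches the target (sketch)."""
    fail = 1.0
    chosen = []
    for a in sorted(node_availability, reverse=True):
        chosen.append(a)
        fail *= (1 - a)          # probability that all chosen fail
        if 1 - fail >= target:
            break
    return chosen
```

With nodes of availability 0.95 and 0.90, two replicas already give 1 - 0.05 * 0.10 = 0.995 availability, so a third copy would be wasted space under a 0.99 target.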


Cloud computing is an efficient technology for storing huge volumes of data files securely. However, the content owner cannot control data access by unauthorized clients, nor the storage and usage of the data. Some previous approaches combine data access control with data deduplication for cloud storage systems, but encrypted data is not handled effectively by current industrial deduplication solutions: deduplication is unguarded against brute-force attacks and fails to support data access control. Data deduplication is a widely used, efficient data-confining technique that eliminates multiple copies of redundant data, reducing the space needed to store the data and thus saving bandwidth. To overcome the above problems, an Efficient Content Discovery and Preserving De-duplication (ECDPD) algorithm was proposed that detects the client file range and block range for deduplication when storing data files in the cloud storage system. ECDPD actively supports data access control. Experimental evaluations show that the proposed ECDPD method reduces Data Uploading Time (DUT) by 3.802 milliseconds and Data Downloading Time (DDT) by 3.318 milliseconds compared with existing approaches.
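The block-range deduplication the abstract refers to rests on a simple idea: hash each block and store identical blocks only once, keeping per-file lists of block references. A minimal sketch of that general technique (this illustrates plain block-level dedup, not the ECDPD algorithm itself, and omits its access-control layer):

```python
import hashlib

class BlockStore:
    """Sketch of block-level deduplication: identical blocks are
    stored once and referenced by their content hash."""
    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}          # sha256 hex digest -> block bytes

    def put(self, data):
        """Split data into blocks, store each unique block once,
        and return the file's list of block references."""
        refs = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)  # no-op if present
            refs.append(digest)
        return refs

    def get(self, refs):
        """Reassemble a file from its block references."""
        return b"".join(self.blocks[r] for r in refs)
```

Uploading a file whose blocks are already present adds no new storage, which is also why naive dedup over deterministic ciphertext leaks equality information, the vulnerability the abstract alludes to.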


2012 ◽  
Vol 532-533 ◽  
pp. 677-681
Author(s):  
Li Qun Luo ◽  
Si Jin He

The advent of the cloud is drastically changing High Performance Computing (HPC) application scenarios. Current virtual-machine-based IaaS architectures are not designed for HPC applications. This paper presents a new cloud-oriented storage system that constructs a large-scale memory grid in a distributed environment to support low-latency data access for HPC applications. This Cloud Memory model is built by implementing a private virtual file system (PVFS) on top of a virtual operating system (OS), allowing HPC applications to access data through Cloud Memory in the same fashion as accessing local disks.
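The core idea, exposing a distributed memory grid through ordinary file semantics, can be shown with a toy sketch in which each "node" is a dictionary and paths hash to nodes (the class, its API, and the hashing scheme are purely illustrative, not the paper's PVFS design):

```python
class MemoryGridFS:
    """Toy sketch of a memory-backed file interface over a grid of
    nodes, illustrating file-like access to distributed memory."""
    def __init__(self, nodes):
        # One in-memory store per grid "node".
        self.nodes = [dict() for _ in range(nodes)]

    def _node_for(self, path):
        # Deterministically map each path to one node.
        return self.nodes[hash(path) % len(self.nodes)]

    def write(self, path, data):
        self._node_for(path)[path] = data

    def read(self, path):
        return self._node_for(path)[path]
```

An application writes and reads by path exactly as it would against a local filesystem, while the data actually lives in remote memory.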

