Supporting SLA via Adaptive Mapping and Heterogeneous Storage Devices in Ceph

Electronics ◽  
2021 ◽  
Vol 10 (7) ◽  
pp. 847
Author(s):  
Sopanhapich Chum ◽  
Heekwon Park ◽  
Jongmoo Choi

This paper proposes a new resource management scheme that supports SLA (Service-Level Agreement) in a big data distributed storage system. Basically, it makes use of two mapping modes, isolated mode and shared mode, in an adaptive manner. Specifically, to ensure the different QoS (Quality of Service) requirements of clients, it isolates storage devices so that urgent clients are not interfered with by normal clients. When there is no urgent client, it switches to the shared mode so that normal clients can access all storage devices, thus achieving full performance. To provide this adaptability effectively, it devises two techniques, called logical cluster and normal inclusion. In addition, this paper explores how to exploit heterogeneous storage devices, HDDs (Hard Disk Drives) and SSDs (Solid State Drives), to support SLA. It examines two use cases and observes that separating data and metadata onto different devices has a positive impact on the performance-per-cost ratio. Evaluation results based on a real implementation show that the proposal can satisfy the requirements of diverse clients and provides better performance than a fixed mapping-based scheme.
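
The abstract describes the scheme only at a high level; a minimal Python sketch of the adaptive mode switch might look as follows. The device groups, client classes, and switching trigger are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of adaptive mapping: isolated mode while urgent clients are
# active, shared mode otherwise. Device names and policy are assumed, not Ceph code.

URGENT_DEVICES = {"osd.0", "osd.1"}           # reserved for urgent (high-QoS) clients
NORMAL_DEVICES = {"osd.2", "osd.3", "osd.4"}  # default devices for normal clients


def devices_for(client_class: str, urgent_clients_active: bool) -> set:
    """Return the set of storage devices a client of the given class may use."""
    if urgent_clients_active:
        # Isolated mode: urgent clients get dedicated devices so normal traffic
        # cannot interfere with them.
        return URGENT_DEVICES if client_class == "urgent" else NORMAL_DEVICES
    # Shared mode: no urgent client is active, so normal clients may spread
    # across all devices for full aggregate bandwidth.
    return URGENT_DEVICES | NORMAL_DEVICES


if __name__ == "__main__":
    print(sorted(devices_for("normal", urgent_clients_active=True)))   # isolated mode
    print(sorted(devices_for("normal", urgent_clients_active=False)))  # shared mode
```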

2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Xiangli Chang ◽  
Hailang Cui

With the increasing popularity of Internet-based services and of services hosted on cloud platforms, more powerful back-end storage systems are needed to support them. At present, it is very difficult, if not impossible, to implement a single distributed storage system that satisfies every requirement, so research focuses on designing different distributed storage solutions, each limited to certain characteristics, for different usage scenarios. Economic big data has the basic requirements of high storage efficiency and fast retrieval, and the large number of small files and the diversity of file types make its storage and retrieval particularly challenging. This paper is oriented to the application requirements of cross-modal analysis of economic big data. According to the sources and characteristics of economic big data, the data types are analyzed and the database storage architecture and data storage structure are designed. Taking into account the spatial, temporal, and semantic characteristics of economic big data, the paper proposes a unified coding method based on a multilevel division strategy for spatiotemporal data that combines Geohash and Hilbert curves with spatiotemporal semantic constraints. A prototype system was built on MongoDB, and the performance of the proposed multilevel partition algorithm was verified on this prototype after the data storage management functions were implemented. For distributed storage, the paper considers a specific type of workload and, exploiting its periodicity, divides it into distributed storage windows of fixed duration; at the beginning of each window, storage for the next window is provisioned using a Wiener predictor based on the Wiener filter principle. Experiments and tests verify the proposed distributed storage strategy and show that the Wiener-based scheme can save platform resources and configuration costs while ensuring the Service Level Agreement (SLA).
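
As a rough sketch of how a unified spatiotemporal key of the kind described above could be composed, the snippet below maps a location to a Hilbert-curve index on a fixed grid and prefixes it with a coarse time bucket. The grid resolution, bucket size, and key layout are assumptions; the paper's Geohash component and semantic constraints are not reproduced here.

```python
# Sketch: compose a spatiotemporal storage key from a time bucket and the
# Hilbert-curve index of the location cell. Resolution and key format are
# illustrative assumptions, not the paper's exact encoding.

def hilbert_index(n: int, x: int, y: int) -> int:
    """Map cell (x, y) of an n x n grid (n a power of two) to its Hilbert-curve index."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate/reflect the quadrant so the curve stays continuous.
        if ry == 0:
            if rx == 1:
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s //= 2
    return d


def spatiotemporal_key(lat: float, lon: float, ts: int,
                       n: int = 1 << 16, bucket_s: int = 3600) -> str:
    """Combine a coarse time bucket with the Hilbert index of the location cell."""
    x = int((lon + 180.0) / 360.0 * (n - 1))
    y = int((lat + 90.0) / 180.0 * (n - 1))
    return f"{ts // bucket_s:010d}-{hilbert_index(n, x, y):012d}"


if __name__ == "__main__":
    # Example: a record near Beijing at a given Unix timestamp.
    print(spatiotemporal_key(39.9, 116.4, 1_700_000_000))
```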


2019 ◽  
Vol 11 (6) ◽  
pp. 1523 ◽  
Author(s):  
TaeYoung Kim ◽  
JongBeom Lim

As online learning and e-learning are prevalent and widely used in education, it is important to design an efficient and reliable information system for storing learning data and providing on-demand learning services. In this paper, we design a cloud-based information system architecture for online lifelong education. Since a cloud system is based on virtualization technology, we propose a virtual resource management scheme consisting of virtual machine allocation and monitoring node assignment. With the proposed cloud-based architecture, we can build and operate an e-learning information system for online lifelong education, which requires efficiency, reliability, and persistence. The evaluation results show that our proposed method can handle more e-learning tasks (requests for learning management system (LMS) navigation, text learning contents, text and media learning contents, and video learning contents) while introducing 48× fewer service level agreement (SLA) violations than the existing method.
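
A minimal sketch of a headroom-aware virtual machine allocation policy, in the spirit of the resource management scheme above, is given below. The host model, CPU metric, and placement rule are illustrative assumptions, not the paper's algorithm.

```python
# Sketch of headroom-aware VM placement: put each VM on the host with the most
# spare CPU that can still fit it, rejecting requests that would overload every host.

from dataclasses import dataclass, field


@dataclass
class Host:
    name: str
    cpu_capacity: float          # total CPU units on the host
    cpu_used: float = 0.0
    vms: list[str] = field(default_factory=list)

    @property
    def headroom(self) -> float:
        return self.cpu_capacity - self.cpu_used


def allocate_vm(hosts: list[Host], vm_name: str, cpu_demand: float) -> Host | None:
    """Place the VM on the host with the most CPU headroom that still fits it."""
    candidates = [h for h in hosts if h.headroom >= cpu_demand]
    if not candidates:
        return None                       # would risk an SLA violation; reject or queue
    best = max(candidates, key=lambda h: h.headroom)
    best.cpu_used += cpu_demand
    best.vms.append(vm_name)
    return best


if __name__ == "__main__":
    cluster = [Host("host-a", 16.0), Host("host-b", 16.0, cpu_used=10.0)]
    placed = allocate_vm(cluster, "lms-web-1", cpu_demand=4.0)
    print(placed.name if placed else "rejected")   # host-a has the most headroom
```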


2021 ◽  
Author(s):  
Paul ChanHyung Park

Docker has been widely adopted as a platform for microservices. As the popularity of microservices increases, so does the importance of fine-tuning resource management in the Docker platform. While Docker’s out-of-the-box resource management provides some generic capability, more work is required to improve resource utilization and enforce Service Level Agreements (SLAs) for critical services. In this research, an efficient Docker resource management scheme, called Adaptive SLA Enforcement, is designed and implemented. For comparison, we also study and implement three simpler schemes: 1) Fixed Number of Containers, 2) Dynamic Resource Management without SLA Enforcement, and 3) Strict SLA Enforcement. We found that the Adaptive SLA Enforcement scheme delivers efficient resource management with SLA enforcement, successfully addressing the deficiencies of the other three schemes.
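
The following Python sketch shows the general shape of such an adaptive enforcement loop: grow a service's CPU quota when its latency approaches the SLA target and shrink it when there is slack. The metric source and the quota-applying helper (which might wrap `docker update --cpus`) are hypothetical placeholders, not the scheme's actual implementation.

```python
# Sketch of an adaptive SLA-enforcement control loop. The SLA target, step size,
# and the two helper functions below are assumed placeholders.

SLA_LATENCY_MS = 200.0      # assumed per-service latency target
STEP = 0.5                  # CPUs added or removed per adjustment
MIN_CPUS, MAX_CPUS = 0.5, 8.0


def observed_p95_latency_ms(service: str) -> float:
    """Placeholder: return the service's recent 95th-percentile latency."""
    raise NotImplementedError


def apply_cpu_quota(service: str, cpus: float) -> None:
    """Placeholder: apply the CPU limit, e.g. by wrapping `docker update --cpus`."""
    raise NotImplementedError


def adjust(service: str, current_cpus: float) -> float:
    """One control step: expand if the SLA is at risk, shrink if there is slack."""
    latency = observed_p95_latency_ms(service)
    if latency > SLA_LATENCY_MS and current_cpus < MAX_CPUS:
        current_cpus = min(current_cpus + STEP, MAX_CPUS)     # protect the SLA
    elif latency < 0.5 * SLA_LATENCY_MS and current_cpus > MIN_CPUS:
        current_cpus = max(current_cpus - STEP, MIN_CPUS)     # release unused CPU
    apply_cpu_quota(service, current_cpus)
    return current_cpus
```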


2020 ◽  
Vol 245 ◽  
pp. 04037
Author(s):  
Xiaowei Aaron Chu ◽  
Jeff LeFevre ◽  
Aldrin Montana ◽  
Dana Robinson ◽  
Quincey Koziol ◽  
...  

Access libraries such as ROOT[1] and HDF5[2] allow users to interact with datasets through high-level abstractions, like coordinate systems and the associated slicing operations. Unfortunately, the implementations of access libraries are based on outdated assumptions about storage system interfaces and are generally unable to fully benefit from modern fast storage devices. For example, access libraries often implement buffering and data layouts that assume large, single-threaded sequential access patterns cause less overall latency than small, parallel random accesses: while this is true for spinning media, it is not true for flash media. The situation is getting worse with rapidly evolving storage devices such as non-volatile memory and with ever larger datasets. This project explores distributed dataset mapping infrastructures that can integrate and scale out existing access libraries using Ceph’s extensible object model, avoiding re-implementation or even modification of these access libraries as much as possible. These programmable storage extensions, coupled with our distributed dataset mapping techniques, enable: 1) access library operations to be offloaded to storage system servers, 2) the independent evolution of access libraries and storage systems, and 3) full use of the existing load balancing, elasticity, and failure management of distributed storage systems like Ceph. They also create more opportunities for optimizations local to the storage servers. For example, storage servers might include local key/value stores combined with chunk stores that require different optimizations than a local file system. As storage servers evolve to support new storage devices like non-volatile memory, these server-local optimizations can be implemented while minimizing disruptions to applications. We will report progress on how distributed dataset mapping can be abstracted over particular access libraries, including access libraries for ROOT data, and how we address some of the challenges around data partitioning and composability of access operations.
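
As a sketch of what client-side offload via a Ceph object class could look like, the snippet below asks an OSD-side class to evaluate a slice next to the data instead of shipping the whole object to the client. The object-class name ("dataset"), its "read_slice" method, and the request format are hypothetical; the surrounding calls assume the standard python-rados binding.

```python
# Hypothetical offload of a slice operation to a Ceph object class; only the
# requested slice is returned over the network instead of the whole object.

import json
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("datasets")

# Describe the slice; the OSD-side "dataset" class (hypothetical) evaluates it
# next to the stored object.
request = json.dumps({"dims": [0], "start": [1024], "count": [4096]}).encode()
ret, out = ioctx.execute("experiment-42.h5", "dataset", "read_slice", request)

print(f"received {len(out)} bytes of the requested slice")

ioctx.close()
cluster.shutdown()
```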


Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 8059
Author(s):  
Fengxia Han ◽  
Hao Deng ◽  
Jianfeng Shi ◽  
Hao Jiang

Wireless distributed storage is beneficial for providing reliable content storage and offloading cellular traffic. In this paper, we consider a cellular device-to-device (D2D) underlay-based wireless distributed storage system in which minimum storage regenerating (MSR) coding is combined with a partial downloading scheme. To alleviate the burden on scarce cellular resources and improve spectral efficiency in densely deployed networks, multiple storage devices can simultaneously use the same uplink cellular subchannel under the non-orthogonal multiple access (NOMA) protocol. Our objective is to minimize the total transmission power for content reconstruction while guaranteeing the signal-to-interference-plus-noise ratio (SINR) constraints of cellular users, by jointly optimizing power and subchannel allocation. To tackle this non-convex combinatorial program, we decouple the original problem into two subproblems and propose two low-complexity algorithms to solve them efficiently, followed by a joint optimization implemented by alternately updating the solutions of the two subproblems. The numerical results illustrate that our proposed algorithms approach the performance of an exhaustive search at lower computational complexity, and that the NOMA-enhanced scheme provides more transmission opportunities for neighboring storage devices, thus significantly reducing the total power consumption.
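
A skeleton of the alternating structure described above is sketched below: the power-allocation subproblem is solved with the subchannel assignment fixed, then the assignment subproblem with the powers fixed, iterating until the total power stops improving. The two subproblem solvers are placeholders; the paper's low-complexity algorithms are not reproduced here.

```python
# Skeleton of alternating optimization over power allocation and subchannel
# assignment. Both solvers are placeholders for the paper's subproblem algorithms.

def solve_power_allocation(assignment):
    """Placeholder: minimize total power for a fixed subchannel assignment,
    subject to the SINR constraints; returns (power_vector, total_power)."""
    raise NotImplementedError


def solve_subchannel_assignment(power):
    """Placeholder: reassign uplink subchannels for fixed transmit powers."""
    raise NotImplementedError


def alternating_minimization(initial_assignment, tol=1e-3, max_iters=50):
    assignment = initial_assignment
    prev_total = float("inf")
    for _ in range(max_iters):
        power, total = solve_power_allocation(assignment)
        if prev_total - total < tol:        # converged: no meaningful improvement
            return assignment, power
        assignment = solve_subchannel_assignment(power)
        prev_total = total
    return assignment, power
```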


The advent of social media, smart mobile devices, and the Internet of Things (IoT) has led to the generation of unstructured data at an astronomical rate, thereby creating an ever-increasing demand for object storage. These object storage systems consume a lot of energy, resulting in increased heat dissipation, greater cooling requirements (which in turn consume more energy), higher operational costs, and an excessive carbon footprint. Although there has been some progress in building energy-efficient disk systems, work on energy-efficient object storage systems is still in the nascent stage. In this paper, we propose SEA: an SSD Staged Energy Efficient Object Storage System Architecture, wherein we introduce a staging layer comprising Solid State Drives (SSDs) on top of the existing object storage system consisting primarily of Hard Disk Drives (HDDs). SSDs not only consume less power than HDDs but are also much faster. Leveraging SSDs for staging reduces the number and frequency of requests hitting the object storage system underneath, allowing us to selectively spin down a substantial number of disks without violating any Service Level Agreements driven by Quality of Service requirements, while reducing total disk energy consumption. Given the high-performance characteristics of SSDs, this staging layer also significantly enhances the performance of the object storage system as a whole. As a case study, we have modeled this architecture for OpenStack Swift. Our simulation results using a Dropbox-like workload show that, even after factoring in the additional energy consumed by the SSD staging layer, our model was able to reduce total disk energy consumption by up to 15.235% and improve performance by up to 29.06%.
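
A toy Python sketch of the staging behaviour described above follows: reads are served from the SSD stage when possible, and HDDs idle beyond a threshold are spun down. The cache policy, idle threshold, and the `hdd_store`/`spin_down` hooks are illustrative assumptions, not the SEA model itself.

```python
# Sketch: SSD staging in front of an HDD object tier, with idle-based spin-down.

import time

IDLE_SPINDOWN_S = 600          # assumed idle time before a disk is spun down


class SSDStage:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cache: dict[str, bytes] = {}

    def get(self, key: str):
        return self.cache.get(key)

    def put(self, key: str, value: bytes) -> None:
        if len(self.cache) >= self.capacity:     # simplistic eviction placeholder
            self.cache.pop(next(iter(self.cache)))
        self.cache[key] = value


def read_object(key: str, stage: SSDStage, hdd_store, disk_last_access) -> bytes:
    """Serve from the SSD stage if possible; otherwise fall back to the HDD tier."""
    data = stage.get(key)
    if data is not None:
        return data                               # request never reaches the HDDs
    data = hdd_store.read(key)                    # may require spinning a disk up
    disk_last_access[hdd_store.disk_of(key)] = time.time()
    stage.put(key, data)
    return data


def spin_down_idle_disks(disk_last_access, spin_down) -> None:
    """Spin down disks whose last access is older than the idle threshold."""
    now = time.time()
    for disk, last in disk_last_access.items():
        if now - last > IDLE_SPINDOWN_S:
            spin_down(disk)                       # hook, e.g. an hdparm wrapper
```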


The quantity of data grows day by day, and the capacity of storage media is increasing rapidly to keep pace. Flash memory is used in many of these storage devices, one example being the Solid State Drive (SSD). SSDs are non-volatile data storage devices that store data in NAND or NOR flash memory and provide functionality similar to a traditional hard disk drive (HDD). This paper provides a comparative study of solid-state drives and hard-disk drives. It also presents the implementation of an algorithm that enhances the security of solid-state drives in terms of user authentication, access control, and media recovery, building on the ATA security feature set. The algorithm fulfils the security principles of authentication and data integrity.
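
The two principles the paper names, authentication and data integrity, can be illustrated with standard primitives as in the sketch below; this is illustration only and not the paper's ATA-security-based algorithm.

```python
# Minimal sketch of authentication (salted password hashing) and data integrity
# (per-block HMAC). Standard-library primitives only; not the paper's scheme.

import hashlib
import hmac
import os


def register_user(password: str) -> tuple:
    """Store only a salted hash of the drive-unlock password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest


def authenticate(password: str, salt: bytes, stored: bytes) -> bool:
    """Check a password attempt against the stored salted hash."""
    attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(attempt, stored)


def integrity_tag(key: bytes, block: bytes) -> bytes:
    """HMAC over a data block so tampering is detectable on read-back."""
    return hmac.new(key, block, hashlib.sha256).digest()


if __name__ == "__main__":
    salt, stored = register_user("correct horse")
    print(authenticate("correct horse", salt, stored))   # True
    print(authenticate("wrong guess", salt, stored))     # False
```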


2020 ◽  
Vol 10 (24) ◽  
pp. 9149
Author(s):  
Jaeho Kim ◽  
Jung Kyu Park

The demand for mass storage devices has become an inevitable consequence of the explosive increase in data volume. Three-dimensional (3D) vertical NAND (V-NAND) and quad-level cell (QLC) technologies are rapidly accelerating the capacity growth of flash-memory-based storage systems such as SSDs (Solid State Drives). Massive-capacity SSDs adopt dozens or hundreds of flash memory chips in order to implement large-capacity storage. However, employing such a large number of flash chips increases the error rate of the SSD. RAID-like techniques inside an SSD have been used in a variety of commercial products, along with various studies, in order to protect user data. With the advent of new types of massive storage devices, studying the design of RAID-like protection techniques for such huge-capacity SSDs is important and essential. In this paper, we propose a massive SSD-Aware Parity Logging (mSAPL) scheme that protects against n simultaneous failures in a stripe, where n is the protection strength specified by the user. The proposed technique allows the user to choose the strength of protection for their data. We implemented mSAPL on a trace-based simulator and evaluated it with real-world I/O workload traces. In addition, we quantitatively analyze the error rates of a flash-based SSD for different RAID-like configurations with analytic models. We show that mSAPL outperforms the state-of-the-art RAID-like technique in both performance and reliability.
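
For protection strength n = 1, per-stripe protection reduces to XOR parity; the sketch below illustrates the parity-logging idea of appending parity-update images instead of rewriting stripe parity in place. The chunk layout and log handling are assumptions, not mSAPL itself, and n > 1 would require an erasure code such as Reed-Solomon instead of plain XOR.

```python
# Toy sketch of parity logging for protection strength n = 1. On a small write,
# the parity-update image (old XOR new) is appended to a log instead of rewriting
# the stripe parity in place; the log is folded into the parity later.

def xor_blocks(blocks):
    """Bytewise XOR of equal-sized blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)


parity_log = []   # in a real SSD this would live in fast, persistent log space


def log_parity_update(old_chunk: bytes, new_chunk: bytes) -> None:
    """Record how the stripe parity must eventually change for this chunk update."""
    parity_log.append(xor_blocks([old_chunk, new_chunk]))


def fold_parity_log(old_parity: bytes) -> bytes:
    """Apply all logged parity-update images to the stripe parity in one pass."""
    new_parity = xor_blocks([old_parity] + parity_log)
    parity_log.clear()
    return new_parity
```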

