A Durable Hybrid RAM Disk with a Rapid Resilience for Sustainable IoT Devices

Sensors ◽  
2020 ◽  
Vol 20 (8) ◽  
pp. 2159 ◽  
Author(s):  
Sung Hoon Baek ◽  
Ki-Woong Park

Flash-based storage is considered the de facto storage module for sustainable Internet of Things (IoT) platforms in harsh environments, owing to its relatively fast speed and operational stability compared to disk storage. Although its performance is considerably better than that of mechanical disk-based storage devices, its read and write latency still cannot match that of random-access memory (RAM). RAM could therefore be used as a storage device or system for time-critical IoT applications. Despite these advantages, a RAM-based storage system is of limited use in sustainable IoT devices because RAM is volatile. As a remedy to this problem, this paper presents a durable hybrid RAM disk enhanced with a new read interface. The proposed durable hybrid RAM disk is designed for sustainable IoT devices that require not only high read/write performance but also data durability. It includes two performance-improvement schemes: rapid resilience with fast initialization, and direct byte read (DBR). Rapid resilience with fast initialization shortens the long boot time required to initialize the durable hybrid RAM disk. The new read interface, DBR, enables the durable hybrid RAM disk to bypass the disk cache, which is an overhead in RAM-based storage. DBR performs byte-range I/O, whereas direct I/O requires block-range I/O, so DBR provides the more efficient interface. The presented schemes and device were implemented in the Linux kernel. Experimental evaluations were performed using various benchmarks from the block level to the file level. In workloads where reads and writes were mixed, the durable hybrid RAM disk showed 15 times better performance than a solid-state drive (SSD).
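The contrast between block-range direct I/O and a byte-range read path can be sketched as follows. This is a minimal illustration rather than the authors' implementation: the device path /dev/hramdisk0, the DBR_READ ioctl number, and the dbr_req structure are assumptions made for the example.

```c
/* Hedged sketch: contrasts a conventional O_DIRECT (block-aligned) read with
 * a hypothetical byte-range read ioctl of the kind DBR might expose. The
 * device path, ioctl number, and request struct are illustrative assumptions. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>

#define BLK_SZ 4096                       /* typical logical block size       */

struct dbr_req {                          /* hypothetical request descriptor  */
    unsigned long long offset;            /* byte offset, no alignment needed */
    unsigned long      length;            /* byte count to copy               */
    void              *buf;               /* destination in user space        */
};

#define DBR_READ _IOWR('R', 0x01, struct dbr_req)   /* hypothetical ioctl     */

int main(void)
{
    /* Conventional direct I/O: offset, length, and buffer must all be
     * block-aligned, so reading 100 bytes still transfers a full block. */
    int fd = open("/dev/hramdisk0", O_RDONLY | O_DIRECT);   /* assumed device */
    if (fd < 0) { perror("open"); return 1; }

    void *ablk;
    if (posix_memalign(&ablk, BLK_SZ, BLK_SZ)) return 1;
    if (pread(fd, ablk, BLK_SZ, 0) < 0) perror("pread (block-range)");

    /* Byte-range read: ask the driver for exactly the bytes needed,
     * skipping the page cache and the block-alignment overhead. */
    char small[100];
    struct dbr_req req = { .offset = 123, .length = sizeof(small), .buf = small };
    if (ioctl(fd, DBR_READ, &req) < 0) perror("ioctl (byte-range)");

    free(ablk);
    close(fd);
    return 0;
}
```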

2020 ◽  
Vol 245 ◽  
pp. 04037
Author(s):  
Xiaowei Aaron Chu ◽  
Jeff LeFevre ◽  
Aldrin Montana ◽  
Dana Robinson ◽  
Quincey Koziol ◽  
...  

Access libraries such as ROOT [1] and HDF5 [2] allow users to interact with datasets using high-level abstractions, such as coordinate systems and associated slicing operations. Unfortunately, the implementations of access libraries are based on outdated assumptions about storage system interfaces and are generally unable to fully benefit from modern fast storage devices. For example, access libraries often implement buffering and data layouts that assume that large, single-threaded sequential access patterns cause less overall latency than small parallel random accesses; while this is true for spinning media, it is not true for flash media. The situation is getting worse with rapidly evolving storage devices such as non-volatile memory and ever-larger datasets. This project explores distributed dataset-mapping infrastructures that can integrate and scale out existing access libraries using Ceph’s extensible object model, avoiding re-implementation of, or even modifications to, these access libraries as much as possible. These programmable storage extensions, coupled with our distributed dataset-mapping techniques, enable (1) access library operations to be offloaded to storage system servers, (2) the independent evolution of access libraries and storage systems, and (3) full use of the existing load balancing, elasticity, and failure management of distributed storage systems like Ceph. They also create more opportunities for server-local optimizations. For example, storage servers might include local key/value stores combined with chunk stores that require different optimizations than a local file system. As storage servers evolve to support new storage devices like non-volatile memory, these server-local optimizations can be implemented while minimizing disruptions to applications. We will report progress on the means by which distributed dataset mapping can be abstracted over particular access libraries, including access libraries for ROOT data, and on how we address some of the challenges around data partitioning and the composability of access operations.
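The offloading idea in (1) can be sketched with librados' object-class execution interface, where the client invokes a method that runs on the OSD next to the data. The class name "dataset", method "slice", pool "datasets", object id, and input encoding below are hypothetical placeholders, not the project's actual interface.

```c
/* Hedged sketch of server-side offload via librados object classes: the
 * client asks the OSD to run a (hypothetical) "dataset.slice" method and
 * return only the selected bytes instead of the whole object. */
#include <rados/librados.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    rados_t cluster;
    rados_ioctx_t io;
    char out[4096];

    if (rados_create(&cluster, NULL) < 0) return 1;       /* default client   */
    rados_conf_read_file(cluster, NULL);                  /* default ceph.conf */
    if (rados_connect(cluster) < 0) return 1;
    if (rados_ioctx_create(cluster, "datasets", &io) < 0) return 1; /* assumed pool */

    /* Server-side slicing: the coordinate-range request is evaluated on the
     * storage server, close to the data. */
    const char *in = "rows=0-99;cols=2,5";                /* hypothetical encoding */
    int n = rados_exec(io, "hdf5-object-0001",            /* assumed object id  */
                       "dataset", "slice",                /* hypothetical class */
                       in, strlen(in), out, sizeof(out));
    if (n < 0)
        fprintf(stderr, "exec failed: %d\n", n);
    else
        printf("received %d bytes of sliced data\n", n);

    rados_ioctx_destroy(io);
    rados_shutdown(cluster);
    return 0;
}
```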


2020 ◽  
Vol 10 (3) ◽  
pp. 999
Author(s):  
Hyokyung Bahn ◽  
Kyungwoon Cho

Recently, non-volatile memory (NVM) has advanced as a fast storage medium, and legacy memory subsystems optimized for DRAM (dynamic random access memory) and HDD (hard disk drive) hierarchies need to be revisited. In this article, we explore memory subsystems that use NVM as the underlying storage device and discuss the challenges and implications of such systems. As storage performance approaches DRAM performance, existing memory configurations and I/O (input/output) mechanisms should be reassessed. This article explores the performance of systems with NVM-based storage emulated by a RAM disk under various configurations. Through our measurement study, we make the following findings. (1) We can decrease the main memory size without performance penalties when NVM storage is adopted instead of HDD. (2) For buffer caching to be effective, judicious management techniques such as admission control are necessary. (3) Prefetching is not effective in NVM storage. (4) The effect of synchronous I/O and direct I/O in NVM storage is less significant than in HDD storage. (5) Performance degradation due to contention among multiple threads is less severe in NVM-based storage than in HDD storage. Based on these observations, we discuss a new PC configuration consisting of small memory and fast storage in comparison with a traditional PC consisting of large memory and slow storage. We show that this new memory-storage configuration can be an alternative solution for ever-growing memory demands and the limited density of DRAM memory. We anticipate that our results will provide directions in system software development in the presence of ever-faster storage devices.
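A measurement of the kind behind finding (4) can be sketched by timing the same write on a RAM-disk-backed file under buffered, synchronous, and direct I/O. This is a minimal sketch; the mount point /mnt/ramdisk and the 1 MiB request size are assumptions, not the study's setup.

```c
/* Hedged sketch: time one write on a RAM-disk-backed file opened with
 * buffered, O_SYNC, and O_DIRECT semantics to compare their overheads. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define SZ (4096 * 256)   /* 1 MiB per write, a multiple of the block size */

static double time_write(int flags, const char *label)
{
    int fd = open("/mnt/ramdisk/probe", O_WRONLY | O_CREAT | flags, 0644);
    if (fd < 0) { perror(label); return -1; }

    void *buf;
    if (posix_memalign(&buf, 4096, SZ)) { close(fd); return -1; }
    memset(buf, 0xab, SZ);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    if (write(fd, buf, SZ) != SZ) perror(label);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    free(buf);
    close(fd);
    double us = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_nsec - t0.tv_nsec) / 1e3;
    printf("%-10s %8.1f us\n", label, us);
    return us;
}

int main(void)
{
    time_write(0,        "buffered");  /* goes through the page cache      */
    time_write(O_SYNC,   "O_SYNC");    /* waits for write-back completion  */
    time_write(O_DIRECT, "O_DIRECT");  /* bypasses the page cache entirely */
    return 0;
}
```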


Electronics ◽  
2021 ◽  
Vol 10 (7) ◽  
pp. 847
Author(s):  
Sopanhapich Chum ◽  
Heekwon Park ◽  
Jongmoo Choi

This paper proposes a new resource management scheme that supports SLAs (Service-Level Agreements) in a big data distributed storage system. Basically, it makes use of two mapping modes, an isolated mode and a shared mode, in an adaptive manner. Specifically, to meet the different QoS (Quality of Service) requirements of clients, it isolates storage devices so that urgent clients are not interfered with by normal clients. When there is no urgent client, it switches to the shared mode so that normal clients can access all storage devices, thus achieving full performance. To provide this adaptability effectively, it devises two techniques, called logical cluster and normal inclusion. In addition, this paper explores how to exploit heterogeneous storage devices, HDDs (Hard Disk Drives) and SSDs (Solid State Drives), to support SLAs. It examines two use cases and observes that separating data and metadata onto different devices has a positive impact on the performance-per-cost ratio. Real implementation-based evaluation results show that this proposal can satisfy the requirements of diverse clients and provides better performance than a fixed mapping-based scheme.
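The switch between the two mapping modes can be sketched as a simple device-selection rule: confine normal traffic to a subset of devices while any urgent client is active, and spread it over all devices otherwise. The device count, reserved split, and round-robin mapping below are illustrative assumptions, not the paper's logical-cluster mechanism.

```c
/* Hedged sketch of adaptive isolated/shared mapping: urgent clients always
 * use their reserved devices; normal clients avoid them only while an
 * urgent client is active. */
#include <stdbool.h>
#include <stdio.h>

#define NUM_DEVICES   8
#define URGENT_SPLIT  4   /* devices 0..3 reserved for urgent clients (assumed) */

enum client_class { URGENT, NORMAL };

/* Pick a device for a request, given whether an urgent client is active. */
static int map_device(enum client_class c, unsigned req_id, bool urgent_active)
{
    if (c == URGENT)                       /* urgent traffic stays on the     */
        return req_id % URGENT_SPLIT;      /* reserved devices                */

    if (urgent_active)                     /* isolated mode: keep normal      */
        return URGENT_SPLIT +              /* traffic off the reserved set    */
               req_id % (NUM_DEVICES - URGENT_SPLIT);

    return req_id % NUM_DEVICES;           /* shared mode: use every device   */
}

int main(void)
{
    printf("isolated: normal req 5 -> dev %d\n", map_device(NORMAL, 5, true));
    printf("shared:   normal req 5 -> dev %d\n", map_device(NORMAL, 5, false));
    printf("urgent:   req 5        -> dev %d\n", map_device(URGENT, 5, true));
    return 0;
}
```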


2019 ◽  
Vol 214 ◽  
pp. 04033
Author(s):  
Hervé Rousseau ◽  
Belinda Chan Kwok Cheong ◽  
Cristian Contescu ◽  
Xavier Espinal Curull ◽  
Jan Iven ◽  
...  

The CERN IT Storage group operates multiple distributed storage systems and is responsible for supporting the infrastructure that accommodates all CERN storage requirements, from the physics data generated by LHC and non-LHC experiments to personal user files. EOS is now the key component of the CERN storage strategy. It sustains high incoming throughput for experiment data-taking while running concurrent complex production workloads. This high-performance distributed storage now provides more than 250 PB of raw disk space and is the key component behind the success of CERNBox, the CERN cloud synchronisation service, which allows syncing and sharing files on all major mobile and desktop platforms and provides offline availability for any data stored in the EOS infrastructure. CERNBox has recorded exponential growth in files and data stored over the last couple of years, thanks to its increasing popularity within the CERN user community and to its integration with a multitude of other CERN services (Batch, SWAN, Microsoft Office). In parallel, CASTOR is being simplified and is transitioning from an HSM into an archival system, focusing mainly on the long-term recording of primary detector data and preparing the road to the next-generation tape archival system, CTA. The storage services at CERN also cover the needs of the rest of our community: Ceph as the data back-end for the CERN OpenStack infrastructure, NFS services, and S3 functionality; AFS for legacy home-directory filesystem services and its ongoing phase-out; and CVMFS for software distribution. In this paper we summarise our experience in supporting all our distributed storage systems and the ongoing work to evolve our infrastructure, including testing very dense storage building blocks (nodes with more than 1 PB of raw space) for the challenges ahead.


Sensors ◽  
2018 ◽  
Vol 18 (9) ◽  
pp. 2982 ◽  
Author(s):  
Bongjae Kim ◽  
Hong Min ◽  
Junyoung Heo ◽  
Jinman Jung

Recently, various technologies for utilizing unmanned aerial vehicles have been studied. Drones are a kind of unmanned aerial vehicle. Drone-based mobile surveillance systems can be applied for various purposes such as object recognition or object tracking. In this paper, we propose a mobility-aware dynamic computation offloading scheme that can be used for tracking and recognizing a moving object from a drone. The purpose of the proposed scheme is to reduce the time required for recognizing and tracking a moving target object. Reducing recognition and tracking time is important because it is a highly time-critical job. Our dynamic computation offloading scheme considers both the dwell time of the moving target object and the network failure rate to estimate the response time accurately. Simulation results show that our dynamic computation offloading scheme can efficiently reduce the response time required for tracking the moving target object.
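A decision rule of this general shape can be sketched as follows: estimate the expected offload time from the transfer and remote-compute times, inflate it by the network failure rate, and offload only if the result beats local processing and fits within the target's dwell time. The cost model and all numbers are illustrative assumptions, not the paper's estimator.

```c
/* Hedged sketch of an offloading decision that weighs dwell time and
 * network failure rate against local processing time. */
#include <stdbool.h>
#include <stdio.h>

/* Expected offload time: transfer + remote compute, retried on failure
 * (geometric expectation of the number of attempts). */
static double expected_offload_time(double transfer_s, double remote_s,
                                    double failure_rate)
{
    double once = transfer_s + remote_s;
    return once / (1.0 - failure_rate);
}

static bool should_offload(double local_s, double transfer_s, double remote_s,
                           double failure_rate, double dwell_s)
{
    double offload = expected_offload_time(transfer_s, remote_s, failure_rate);
    /* Offload only if it is faster than local processing and still finishes
     * before the target is expected to leave the field of view. */
    return offload < local_s && offload < dwell_s;
}

int main(void)
{
    /* Illustrative values in seconds; failure rate as a fraction. */
    double local = 0.80, xfer = 0.15, remote = 0.20, fail = 0.10, dwell = 0.60;
    printf("offload? %s\n",
           should_offload(local, xfer, remote, fail, dwell) ? "yes" : "no");
    return 0;
}
```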


2008 ◽  
Vol 2008 ◽  
pp. 1-9 ◽  
Author(s):  
Y. Guillemenet ◽  
L. Torres ◽  
G. Sassatelli ◽  
N. Bruchon

This paper describes the integration of field-induced magnetic switching (FIMS) and thermally assisted switching (TAS) magnetic random access memories in FPGA design. The nonvolatility of the latter is achieved through the use of magnetic tunneling junctions (MTJs) in the MRAM cell. The thermally assisted switching scheme helps reduce power consumption during write operations compared to the writing scheme of the FIMS-MTJ device. Moreover, the nonvolatility of a design based on either an FIMS or a TAS writing scheme should reduce both the power consumption and the configuration time required at each power-up of the circuit, in comparison to classical SRAM-based FPGAs. A real-time reconfigurable (RTR) micro-FPGA using FIMS-MRAM or TAS-MRAM allows dynamic reconfiguration mechanisms while featuring a simple design architecture.


Author(s):  
Imam Riadi ◽  
Rusydi Umar ◽  
Imam Mahfudl Nasrulloh

The rapid development of computer hardware has brought non-volatile storage media such as the solid-state drive (SSD). SSD technology offers faster data access than hard disks and is starting to replace hard disk storage media. Computer technicians often install drive-freezing software because it can reduce maintenance costs caused by errors, viruses, or malware. Such software prevents unwanted changes to the computer system: when the computer is restarted, changes made to the system are not stored on the storage media. This raises the question of what digital forensic investigators should do in such cases. This study reports experimental forensic investigations of SSD storage media in a frozen condition, referred to here as a frozen SSD. A frozen SSD is a drive that is locked so that no changes to the computer system persist. Software used to lock the drive and prevent changes includes Deep Freeze, Shadow Defender, Windows Steady State, and Toolwiz Time Freeze. The forensic research stages follow the NIST method. The comparative analysis shows that, under Deep Freeze, RecoverMyFile recovered 76.38% and Autopsy recovered 75.27%, while under the frozen condition with Shadow Defender, RecoverMyFile recovered 59.72% and Autopsy recovered 74.44%. These results indicate that drive-freezing software can be an obstacle in the digital forensic process.
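The reported percentages can be read as simple recovery rates. The sketch below shows the arithmetic under the assumption that each figure is the share of planted test files a tool recovered from the frozen drive image; the counts used are placeholders, not the study's data.

```c
/* Hedged sketch of the recovery-rate arithmetic, assuming each reported
 * percentage is recovered files divided by planted files. */
#include <stdio.h>

/* Recovery rate in percent: recovered files over planted files. */
static double recovery_rate(int recovered, int planted)
{
    return 100.0 * recovered / planted;
}

int main(void)
{
    /* Illustrative example: 9 of 12 planted files recovered -> 75.00%. */
    printf("recovery rate: %.2f%%\n", recovery_rate(9, 12));
    return 0;
}
```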

