data disk
Recently Published Documents


TOTAL DOCUMENTS

30
(FIVE YEARS 1)

H-INDEX

2
(FIVE YEARS 0)

2021 ◽  
Vol 14 (11) ◽  
pp. 2355-2368
Author(s):  
Tobias Schmidt ◽  
Maximilian Bandle ◽  
Jana Giceva

With today's data deluge, approximate filters are particularly attractive to avoid expensive operations like remote data/disk accesses. Among the many filter variants available, it is non-trivial to find the most suitable one and its optimal configuration for a specific use-case. We provide open-source implementations for the most relevant filters (Bloom, Cuckoo, Morton, and Xor filters) and compare them in four key dimensions: the false-positive rate, space consumption, build, and lookup throughput. We improve upon existing state-of-the-art implementations with a new optimization, radix partitioning, which boosts the build and lookup throughput for large filters by up to 9x and 5x. Our in-depth evaluation first studies the impact of all available optimizations separately before combining them to determine the optimal filter for specific use-cases. While register-blocked Bloom filters offer the highest throughput, the new Xor filters are best suited when optimizing for small filter sizes or low false-positive rates.
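As a concrete illustration of the approximate-membership idea these filters share, here is a minimal Bloom filter sketch in Python. This is not the authors' optimized (e.g., register-blocked or radix-partitioned) implementation — the class name, sizing, and hashing scheme are illustrative assumptions. The key property it demonstrates: a lookup may return a false positive, but never a false negative.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: an m-bit array probed by k derived hash positions."""

    def __init__(self, m_bits, k_hashes):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray((m_bits + 7) // 8)

    def _positions(self, item):
        # Derive k bit positions from one digest via double hashing:
        # pos_i = (h1 + i * h2) mod m
        digest = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(digest[:8], "little")
        h2 = int.from_bytes(digest[8:16], "little")
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def may_contain(self, item):
        # True means "possibly present" (false positives happen);
        # False means "definitely absent" (no false negatives).
        return all((self.bits[p // 8] >> (p % 8)) & 1 for p in self._positions(item))
```

With a generously sized bit array relative to the number of inserted keys, the false-positive rate stays low; shrinking the array trades space for a higher rate, which is exactly the dimension the paper's evaluation measures.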


2013 ◽  
Vol 380-384 ◽  
pp. 3421-3424
Author(s):  
Kai Bu ◽  
Wei Yi ◽  
Hui Xu ◽  
Qi You Xie ◽  
Jian Bin Liu

NAND flash-based SSDs have gained prevalence as storage devices in enterprise and embedded systems because of their high I/O performance, low power consumption, and resistance to vibration. A RAID composed of SSDs can achieve even higher performance. However, it introduces new problems: parity disks suffer premature aging, and data disks age simultaneously. Based on the variation in SSD reliability, this paper designs a new RAID-5 architecture with a dynamic stripe length. It effectively reduces disk space overhead and improves the safety of the RAID.
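The paper's dynamic-stripe scheme is not spelled out in the abstract, but the RAID-5 mechanism it builds on is XOR parity: the parity block is the XOR of the data blocks in a stripe, so any single lost block can be recovered from the survivors. A minimal sketch (block sizes and function names are illustrative assumptions, not the paper's implementation):

```python
from functools import reduce

def parity(blocks):
    """Compute the RAID-5 parity block: byte-wise XOR across all data blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def reconstruct(surviving_blocks, parity_block):
    """Recover one lost data block by XOR-ing the parity with the survivors.

    Works because XOR is its own inverse: lost = parity ^ XOR(survivors).
    """
    return parity(surviving_blocks + [parity_block])
```

A shorter stripe spreads parity over fewer data blocks, raising space overhead but reducing how much data each parity block protects — which is the kind of trade-off a dynamic stripe length can tune as per-disk reliability changes.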


Author(s):  
Doug White ◽  
Alan Rea

Hard disk wipes are a crucial component of computing security. However, more often than not, hard drives are not adequately processed before being disposed of or reused within an environment. When an organization does not follow a standard disk-wipe procedure, it risks exposing sensitive data. Most organizations do not wipe drives because of the intense time and resource commitment of a highly secure seven-pass DoD wipe. However, we posit that our one-pass methodology, verified with a zero checksum, is more than adequate for organizations wishing to protect against the loss of sensitive hard-drive data.


Author(s):  
Martin H. Weik