DaP∀: Deconstruct and Preserve for All: A Procedure for the Preservation of Digital Evidence on Solid State Drives and Traditional Storage Media

Author(s):  
Ian Mitchell ◽  
Josué Ferriera ◽  
Tharmila Anandaraja ◽  
Sukhvinder Hara

Solid state drives (SSDs) have emerged as faster and more reliable storage devices over the last few years. Their intrinsic characteristics make them more efficient than traditional storage media such as hard disk drives (HDDs). Write amplification, however, degrades the performance and lifespan of an SSD. This issue is handled by the Garbage Collection (GC) algorithms that are put in place to supply free blocks for serving the writes made to flash-based SSDs, thereby reducing the need for unnecessary extra writes. The LRU/FIFO, Greedy, Windowed Greedy and D-choices algorithms have been described to lower write amplification for incoming writes of differing natures. The performance of a GC algorithm varies with factors such as pre-defined hot/cold data separation, the hotness of the data, the uniform or non-uniform nature of incoming writes, the GC window size and the number of pages in each block of the flash memory package. Finally, the number of write frontiers used can dictate the separation of hot and cold data and increase the performance of a GC algorithm.
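The Greedy policy mentioned above can be illustrated with a minimal sketch: the victim block is the one with the fewest valid pages, since those pages are the only ones that must be copied out before the block can be erased. The block layout and page counts below are illustrative toy values, not taken from the paper.

```python
PAGES_PER_BLOCK = 4  # illustrative; real flash blocks hold far more pages


def greedy_select_victim(blocks):
    """Greedy GC: pick the block with the fewest valid pages, so the
    fewest pages must be copied to a free block before the erase."""
    return min(blocks, key=lambda b: sum(b))


# Toy block states: 1 = valid page, 0 = invalid page.
blocks = [
    [1, 1, 1, 0],  # 3 valid pages
    [1, 0, 0, 0],  # 1 valid page -> cheapest victim
    [1, 1, 0, 0],  # 2 valid pages
]

victim = greedy_select_victim(blocks)
copies = sum(victim)  # extra GC writes: valid pages relocated = 1
```

Windowed Greedy and D-choices restrict the same selection to a window or a random sample of blocks to cut the search cost; the `min` over `blocks` above would simply run over that smaller candidate set.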


The quantity of data is growing day by day, and the capacity of storage media is increasing rapidly to match. Many storage devices use flash memory; one of these is the solid state drive (SSD). SSDs are non-volatile data storage devices that store data in NAND or NOR flash memory and provide functionality similar to a traditional hard disk drive (HDD). This paper provides a comparative study of solid-state drives against hard-disk drives, together with the implementation of an algorithm to enhance the security of solid-state drives in terms of user authentication, access control and media recovery, based on the ATA security feature set. The algorithm fulfils security principles such as authentication and data integrity.


Electronics ◽  
2021 ◽  
Vol 10 (12) ◽  
pp. 1394
Author(s):  
Cristian Zambelli ◽  
Lorenzo Zuolo ◽  
Antonio Aldarese ◽  
Salvatrice Scommegna ◽  
Rino Micheloni ◽  
...  

3D NAND Flash is the preferred storage medium for dense mass storage applications, including Solid State Drives and multimedia cards. Improving the latency of these systems is a mandatory task to narrow the gap between computing elements, such as CPUs and GPUs, and the storage environment. To this end, relatively time-consuming operations in the storage media, such as data programming and data erasing, need to be prioritized and be potentially suspendable by shorter operations, like data reading, in order to improve the overall system quality of service. However, such benefits are strongly dependent on the storage characteristics and on the timing of the single operations. In this work, we investigate, through an extensive characterization, the impact of suspending the data programming operation in a 3D NAND Flash device. System-level simulations proved that such operations must be carefully characterized before exercising them on Solid State Drives, to understand the performance benefits introduced and to disclose all the potential shortcomings.
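The latency argument can be sketched with a toy model: a read that arrives while a program operation is in flight either waits for the program to finish or, with program suspend, is served after a small suspend overhead. All timing constants below are assumed illustrative values, not measurements from the characterization in the paper.

```python
T_PROG = 600.0    # us, assumed program (write) time
T_READ = 60.0     # us, assumed read time
T_SUSPEND = 20.0  # us, assumed overhead to suspend an in-flight program


def read_latency(read_arrival, prog_start, suspend):
    """Latency seen by a read issued at read_arrival, given a program
    operation that started at prog_start on the same die."""
    prog_end = prog_start + T_PROG
    if not (prog_start <= read_arrival < prog_end):
        return T_READ  # die is idle: read is served immediately
    if suspend:
        # Pause the program, serve the read, then resume the program.
        return T_SUSPEND + T_READ
    # No suspend: wait out the remainder of the program first.
    return (prog_end - read_arrival) + T_READ


# A read arriving 100 us into a 600 us program operation:
base = read_latency(100.0, 0.0, suspend=False)  # waits 500 us, then reads
susp = read_latency(100.0, 0.0, suspend=True)   # pays only the suspend cost
```

The model also shows the caveat the abstract raises: if `T_SUSPEND` grows (or suspends stretch the total program time), the benefit shrinks, which is why the single-operation timings must be characterized before enabling suspension on a drive.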


Electronics ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 486
Author(s):  
Yongjae Chun ◽  
Kyeore Han ◽  
Youpyo Hong

Owing to their advantages over hard disk drives (HDDs), solid-state drives (SSDs) are widely used in many applications, including consumer electronics and data centers. As erase operations are feasible only in block units, the modification or deletion of pages causes the invalidation of those pages within their corresponding blocks. To reclaim these invalid pages, the valid pages in the block are copied to other blocks and the block is initialized, which adversely affects the performance and durability of the SSD. The objective of a multi-stream SSD is to group data by expected lifetime and store each group in a separate area called a stream, minimizing the frequency of wasteful copy-back and initialization operations. In this paper, we propose an algorithm that groups data based on input/output (I/O) type and rewrite frequency; it shows significant improvements over existing multi-stream algorithms, not only in performance but also in effectiveness across most applications.
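The grouping idea can be sketched as a stream-assignment function keyed on I/O type and an observed per-address rewrite count. The stream identifiers, the `"journal"` I/O type and the `hot_threshold` value are hypothetical placeholders, not details from the paper's algorithm.

```python
rewrites = {}  # logical block address -> observed rewrite count


def assign_stream(io_type, rewrite_count, hot_threshold=4):
    """Map a write to a stream by I/O type and rewrite frequency.
    Stream ids and the threshold are illustrative assumptions."""
    if io_type == "journal":
        return 0  # short-lived filesystem metadata: its own stream
    if rewrite_count >= hot_threshold:
        return 1  # hot user data, rewritten often
    return 2      # cold user data, expected to live long


def on_write(lba, io_type):
    """Record a write to lba and return the stream it is placed in."""
    rewrites[lba] = rewrites.get(lba, 0) + 1
    return assign_stream(io_type, rewrites[lba])
```

Because data with similar lifetimes ends up in the same stream, whole blocks tend to invalidate together, so GC finds victims with few valid pages and performs fewer copy-back operations.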


Author(s):  
Hyunchan Park ◽  
Cheol-Ho Hong ◽  
Younghyun Kim ◽  
Seehwan Yoo ◽  
Chuck Yoo
