Penerapan Algoritma Elias Delta Codes Dalam Kompresi File Teks (Application of the Elias Delta Codes Algorithm in Text File Compression)

2020 ◽  
Vol 2 (2) ◽  
pp. 109-114
Author(s):  
Nadia Fariza Rizky ◽  
Surya Darma Nasution ◽  
Fadlina Fadlina

Large file sizes are a problem not only for storage but also for communication between computers: data with a larger size takes longer to transfer than data with a smaller size. Data compression addresses this problem by reducing the size of the data, which saves external storage space and speeds up the transfer of data between storage media. Compression is a method that reduces the bits of the original data to produce a smaller result. In this study, compression is applied using the Elias Delta Codes algorithm, and the resulting method is implemented in an application built with Microsoft Visual Studio 2008.
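The abstract does not reproduce the encoding itself, but the Elias delta code it builds on is standard: a positive integer N is written as the Elias gamma code of its bit length, followed by N's binary digits without the leading 1. Below is a minimal Python sketch; the character-ranking helper (compress_text and its alphabet argument) is an illustrative assumption, not the authors' exact mapping.

    def elias_delta_encode(n: int) -> str:
        """Return the Elias delta code of a positive integer as a bit string."""
        if n < 1:
            raise ValueError("Elias delta codes are defined for positive integers only")
        binary = bin(n)[2:]            # binary representation of n, e.g. 9 -> '1001'
        length = len(binary)           # number of bits in n
        length_bits = bin(length)[2:]
        # Elias gamma code of the length: (len(length_bits) - 1) zeros, then length_bits
        gamma_of_length = "0" * (len(length_bits) - 1) + length_bits
        # Delta code: gamma(length) followed by n's bits without the leading 1
        return gamma_of_length + binary[1:]

    def compress_text(text: str, alphabet: list[str]) -> str:
        """Map each character to its 1-based index in the alphabet and concatenate codes."""
        index = {ch: i + 1 for i, ch in enumerate(alphabet)}
        return "".join(elias_delta_encode(index[ch]) for ch in text)

    if __name__ == "__main__":
        print(elias_delta_encode(9))   # '00100001'
        # Characters ranked by frequency get the smallest indices and shortest codes
        ranked = ["a", "n", "i", "d", " "]
        print(compress_text("nadia", ranked))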

Information ◽  
2021 ◽  
Vol 12 (3) ◽  
pp. 115
Author(s):  
Ahmad Saeed Mohammad ◽  
Dhafer Zaghar ◽  
Walaa Khalaf

With the development of mobile technology, the use of media data has increased dramatically, making data reduction an important research field for preserving valuable information. In this paper, a new scheme called Multi Chimera Transform (MCT) is proposed; it is based on data reduction with high information preservation and aims to improve the reconstructed data by producing three parameters from each 16×16 block of data. MCT is a 2D transform that depends on constructing a codebook of 256 blocks picked from selected images with low similarity between them. The proposed transformation was applied to the solid and soft biometric modalities of the AR database, giving high information preservation with a small resulting file size. The proposed method produced outstanding performance compared with KLT and WT in terms of SSIM and PSNR. The highest SSIM was 0.87 for the proposed MCT scheme on the full images of the AR database, while the existing methods KLT and WT achieved 0.81 and 0.68, respectively. In addition, the highest PSNR was 27.23 dB for the proposed scheme on the warp facial images of the AR database, while KLT and WT achieved 24.70 dB and 21.79 dB, respectively.
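The abstract does not specify what the three parameters per block are; the Python sketch below only illustrates the general shape of such a codebook scheme, describing each 16×16 block by a codebook index plus a least-squares gain and offset. All of these choices are assumptions for illustration, not the MCT definition.

    import numpy as np

    def mct_like_params(block: np.ndarray, codebook: np.ndarray) -> tuple[int, float, float]:
        """
        Describe one 16x16 block with three parameters: the index of the
        best-matching codebook entry plus a least-squares gain and offset.
        The gain/offset model is an illustrative assumption; the MCT paper's
        actual three parameters may be defined differently.
        """
        y = block.reshape(-1).astype(float)
        best_idx, best_gain, best_offset, best_err = -1, 0.0, 0.0, np.inf
        for idx, entry in enumerate(codebook):
            x = entry.reshape(-1).astype(float)
            A = np.stack([x, np.ones_like(x)], axis=1)      # model y ~ gain*x + offset
            (gain, offset), *_ = np.linalg.lstsq(A, y, rcond=None)
            err = float(np.sum((A @ np.array([gain, offset]) - y) ** 2))
            if err < best_err:
                best_idx, best_gain, best_offset, best_err = idx, gain, offset, err
        return best_idx, best_gain, best_offset

    def reconstruct(params, codebook):
        idx, gain, offset = params
        return gain * codebook[idx].astype(float) + offset

    # Toy usage: a random 256-entry codebook of 16x16 blocks
    rng = np.random.default_rng(0)
    codebook = rng.integers(0, 256, size=(256, 16, 16))
    block = rng.integers(0, 256, size=(16, 16))
    print(mct_like_params(block, codebook))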


2021 ◽  
Vol 11 (1-2) ◽  
pp. 161-176
Author(s):  
Michael Hedges

This article presents a reading of ‘Modulation’ (2008) by Richard Powers. Firstly, I consider the short story’s representation of the MP3 music file, specifically its effects on how music is circulated and stored, as well as how it sounds. These changes are the result of different processes of compression. The MP3 format makes use of data compression to reduce the file size of a digital recording significantly. Such a loss of information devises new social and material relations between what remains of the original music, the recording industry from which MP3s emerged and the online markets into which they enter. I argue that ‘Modulation’ is a powerful evocation of a watershed moment in how we consume digital sound: what Jonathan Sterne has termed the rise of the MP3 as ‘cultural artifact’. I contend that the short story, like the MP3, is also a compressed manner of representation. I use narrative theory and short story criticism to substantiate this claim, before positioning ‘Modulation’ alongside Powers’s novels of information. I conclude by suggesting that ‘Modulation’ offers an alternative to representing information through an excess of data. This article reads Powers’s compressed prose as a formal iteration of the data compression the story narrates.


2019 ◽  
Vol 13 (1) ◽  
pp. 36-42
Author(s):  
Jasna Prester ◽  
Mihaela Jurić

The article analyses big data usage in the Croatian manufacturing sector. Big data usage is still low, but it is present. We analysed the influence of six sources of big data on the share of returns generated by new products using a two-step OLS regression analysis. The results are robust and show that some sources have positive and some have negative effects on the share of returns generated by new products. Based on a review of the most recent scholarly papers we define big data and show a clear research gap in linking big data and innovation: only six papers deal with both. In five of these papers the big data comes from social media, and in the remaining paper the data comes from sensors but is used predominantly to reduce cost or support the product. We therefore contribute by closing this research gap of linking big data and innovation.
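The abstract does not spell out the two-step procedure or the survey variables; the sketch below assumes a hierarchical setup in which control variables enter first and the six big data sources are added in the second step. The column names, file name, and controls are hypothetical.

    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical column names; the survey's actual variables are not given in the abstract.
    controls = ["firm_size", "export_share"]
    big_data_sources = ["social_media", "sensors", "transactions",
                        "web_logs", "open_data", "supplier_data"]

    df = pd.read_csv("croatian_manufacturing_survey.csv")   # assumed file
    y = df["share_of_returns_new_products"]

    # Step 1: controls only
    X1 = sm.add_constant(df[controls])
    step1 = sm.OLS(y, X1).fit()

    # Step 2: controls plus the six big data sources
    X2 = sm.add_constant(df[controls + big_data_sources])
    step2 = sm.OLS(y, X2).fit()

    print(step1.summary())
    print(step2.summary())   # coefficient signs show which sources help or hurt
    print("Delta R^2:", step2.rsquared - step1.rsquared)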


2020 ◽  
Vol 5 (1) ◽  
pp. 119
Author(s):  
Erlin Erlin ◽  
Boby Hasbul Fikri ◽  
Susanti Susanti ◽  
Triyani Arita Fitri

File metadata helps users find relevant information, provides digital identification, and supports archiving and preserving stored files so that they are easily found and reused. The large number of data files on storage media often leaves users unaware of file duplication and redundancy, which wastes storage space and affects the speed of a computer when indexing, finding, or backing up data. This study employs the Latent Semantic Analysis method to detect file duplication and analyze the metadata of various file types on storage media. The findings show that the Latent Semantic Analysis method is able to detect duplicate file metadata across various types of storage media, thereby increasing the usability and access speed of the data storage media.
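As a rough illustration of the approach (not the authors' implementation), metadata strings can be compared in a latent semantic space built from TF-IDF vectors reduced with truncated SVD; pairs whose cosine similarity exceeds a threshold are flagged as likely duplicates. The threshold, component count, and toy metadata below are assumptions.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    def find_duplicates(metadata_texts, threshold=0.95, n_components=100):
        """
        Flag near-duplicate files by comparing their metadata in a latent
        semantic space: TF-IDF -> truncated SVD (LSA) -> cosine similarity.
        The 0.95 threshold and 100 components are illustrative assumptions.
        """
        tfidf = TfidfVectorizer().fit_transform(metadata_texts)
        k = max(1, min(n_components, min(tfidf.shape) - 1))
        lsa = TruncatedSVD(n_components=k).fit_transform(tfidf)
        sims = cosine_similarity(lsa)
        pairs = []
        for i in range(len(metadata_texts)):
            for j in range(i + 1, len(metadata_texts)):
                if sims[i, j] >= threshold:
                    pairs.append((i, j, float(sims[i, j])))
        return pairs

    # Toy usage with metadata strings (filename, author, title, ...)
    docs = [
        "report_q1.docx author:erlin title:quarterly sales report 2020",
        "report_q1_copy.docx author:erlin title:quarterly sales report 2020",
        "photo_holiday.jpg camera:canon date:2019-07-01",
    ]
    print(find_duplicates(docs))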


2018 ◽  
Author(s):  
Andysah Putera Utama Siahaan

Compression aims to reduce data before storing it on or moving it to storage media. Huffman and Elias Delta Code are the two algorithms used for the compression process in this research, and both are applied to compress text files. The Huffman algorithm starts by sorting characters based on their frequency, then forms a binary tree, and ends with code formation; the binary tree is formed from the leaves to the root, which is called bottom-up tree construction. The Elias Delta Code method, in contrast, uses a different technique. Text file compression is done by reading the input string in a text file and encoding the string using both algorithms. The compression results show that the Huffman algorithm performs better overall than Elias Delta Code.
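A minimal Python sketch of the Huffman side of the comparison is shown below (the Elias delta encoder appears earlier in this listing); the test string and the 8-bit-per-character baseline are illustrative assumptions, not the paper's dataset or measurement.

    import heapq
    from collections import Counter

    def huffman_codes(text: str) -> dict[str, str]:
        """
        Build a Huffman code table: count character frequencies, then merge the
        two least frequent nodes repeatedly (bottom-up tree construction) and
        read the codes off the resulting tree.
        """
        freq = Counter(text)
        # Heap items: (frequency, tie-breaker, {char: code_so_far})
        heap = [(f, i, {ch: ""}) for i, (ch, f) in enumerate(freq.items())]
        heapq.heapify(heap)
        if len(heap) == 1:                        # degenerate single-symbol input
            return {ch: "0" for ch in heap[0][2]}
        counter = len(heap)
        while len(heap) > 1:
            f1, _, left = heapq.heappop(heap)
            f2, _, right = heapq.heappop(heap)
            merged = {ch: "0" + code for ch, code in left.items()}
            merged.update({ch: "1" + code for ch, code in right.items()})
            heapq.heappush(heap, (f1 + f2, counter, merged))
            counter += 1
        return heap[0][2]

    if __name__ == "__main__":
        text = "compression of text files"
        codes = huffman_codes(text)
        encoded = "".join(codes[ch] for ch in text)
        print(codes)
        print(f"{len(text) * 8} bits -> {len(encoded)} bits")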


Author(s):  
Larry Seiler ◽  
Daqi Lin ◽  
Cem Yuksel

We propose a method to reduce the footprint of compressed data by using modified virtual address translation to permit random access to the data. This extends our prior work on using page translation to perform automatic decompression and deswizzling upon accesses to fixed rate lossy or lossless compressed data. Our compaction method allows a virtual address space the size of the uncompressed data to be used to efficiently access variable-size blocks of compressed data. Compression and decompression take place between the first and second level caches, which allows fast access to uncompressed data in the first level cache and provides data compaction at all other levels of the memory hierarchy. This improves performance and reduces power relative to compressed but uncompacted data. An important property of our method is that compression, decompression, and reallocation are automatically managed by the new hardware without operating system intervention and without storing compression data in the page tables. As a result, although some changes are required in the page manager, it does not need to know the specific compression algorithm and can use a single memory allocation unit size. We tested our method with two sample CPU algorithms. When performing depth buffer occlusion tests, our method reduces the memory footprint by 3.1x. When rendering into textures, our method reduces the footprint by 1.69x before rendering and 1.63x after. In both cases, the power and cycle time are better than for uncompacted compressed data, and significantly better than for accessing uncompressed data.
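The scheme itself is hardware, operating between the first and second level caches; the Python sketch below is only a software analogy of the bookkeeping the abstract describes, with the 4 KB page granularity and zlib compression chosen as assumptions for illustration.

    import zlib

    PAGE_SIZE = 4096  # uncompressed page granularity (assumption for illustration)

    class CompactedStore:
        """
        Software analogy of the idea above: the address space is sized for the
        uncompressed data, but each page is stored as a variable-size compressed
        block, with a translation table mapping page number -> (offset, length).
        The real scheme does this in hardware without OS intervention; this
        sketch only mimics the address-translation bookkeeping.
        """
        def __init__(self, data: bytes):
            self.table = []          # page number -> (offset, length) in self.buffer
            chunks, offset = [], 0
            for start in range(0, len(data), PAGE_SIZE):
                block = zlib.compress(data[start:start + PAGE_SIZE])
                self.table.append((offset, len(block)))
                chunks.append(block)
                offset += len(block)
            self.buffer = b"".join(chunks)

        def read(self, virtual_addr: int, size: int) -> bytes:
            """Random access through the table, decompressing only the page touched."""
            page, within = divmod(virtual_addr, PAGE_SIZE)
            off, length = self.table[page]
            plain = zlib.decompress(self.buffer[off:off + length])
            return plain[within:within + size]

    store = CompactedStore(b"\x00" * 100_000)        # highly compressible data
    print(len(store.buffer), store.read(50_000, 4))  # compacted size, 4-byte read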


2016 ◽  
Vol 15 (8) ◽  
pp. 6991-6998
Author(s):  
Idris Hanafi ◽  
Amal Abdel-Raouf

The increasing amount and size of data being handled by data analytic applications running on Hadoop has created a need for faster data processing. One of the effective methods for handling big data sizes is compression. Data compression not only makes network I/O processing faster, but also provides better utilization of resources. However, this approach defeats one of Hadoop’s main purposes, which is the parallelism of map and reduce tasks. The number of map tasks created is determined by the size of the file, so by compressing a large file, the number of mappers is reduced which in turn decreases parallelism. Consequently, standard Hadoop takes longer times to process. In this paper, we propose the design and implementation of a Parallel Compressed File Decompressor (P-Codec) that improves the performance of Hadoop when processing compressed data. P-Codec includes two modules; the first module decompresses data upon retrieval by a data node during the phase of uploading the data to the Hadoop Distributed File System (HDFS). This process reduces the runtime of a job by removing the burden of decompression during the MapReduce phase. The second P-Codec module is a decompressed map task divider that increases parallelism by dynamically changing the map task split sizes based on the size of the final decompressed block. Our experimental results using five different MapReduce benchmarks show an average improvement of approximately 80% compared to standard Hadoop.
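The abstract describes the split divider only at a high level; the sketch below is a rough, standalone illustration of re-splitting map input by decompressed size, with the 128 MB target split size assumed rather than taken from the paper. P-Codec's real logic lives inside Hadoop's input-format machinery, not in standalone Python.

    TARGET_SPLIT = 128 * 1024 * 1024   # assumed target split size in bytes

    def divide_splits(decompressed_block_sizes: list[int], target: int = TARGET_SPLIT):
        """
        Return (block_index, offset, length) splits covering each decompressed
        block, so a large decompressed block feeds several map tasks instead of one.
        """
        splits = []
        for block_idx, size in enumerate(decompressed_block_sizes):
            offset = 0
            while offset < size:
                length = min(target, size - offset)
                splits.append((block_idx, offset, length))
                offset += length
        return splits

    # A 1 GB decompressed block yields 8 map splits instead of a single mapper
    print(len(divide_splits([1024 * 1024 * 1024])))   # -> 8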


2013 ◽  
Vol 23 (4) ◽  
pp. 462-472
Author(s):  
Kazushige HIROI
