CRUSH: A New Lossless Compression Algorithm

2018 ◽  
Vol 12 (11) ◽  
pp. 406
Author(s):  
Evon Abu-Taieh ◽  
Issam AlHadid

Multimedia is a highly competitive field, and one of the properties in which this competition shows is the speed of download and upload of multimedia elements: text, sound, pictures, and animation. This paper presents the CRUSH algorithm, a lossless compression algorithm that can be used to compress files. The CRUSH method is fast and simple, with time complexity O(n), where n is the number of elements being compressed. Furthermore, the compressed file is independent of the algorithm and of unnecessary data structures. The paper compares CRUSH with other compression algorithms such as Shannon–Fano coding, Huffman coding, Run-Length Encoding (RLE), Arithmetic Coding, Lempel-Ziv-Welch (LZW), the Burrows-Wheeler Transform, the Move-to-Front (MTF) Transform, Haar, wavelet trees, Delta Encoding, Rice & Golomb Coding, Tunstall coding, the DEFLATE algorithm, and Run-Length Golomb-Rice (RLGR).
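
CRUSH itself is not specified in the abstract above, so as a point of reference here is a minimal sketch of Run-Length Encoding (RLE), one of the baselines it is compared against; both directions run in O(n) over the input, the same order claimed for CRUSH.

```python
# Minimal Run-Length Encoding sketch (a baseline from the comparison list above,
# not the CRUSH algorithm itself): each run of identical bytes is stored as a
# (byte value, run length) pair.
def rle_encode(data: bytes) -> list[tuple[int, int]]:
    runs: list[tuple[int, int]] = []
    for b in data:
        if runs and runs[-1][0] == b:
            runs[-1] = (b, runs[-1][1] + 1)   # extend the current run
        else:
            runs.append((b, 1))               # start a new run
    return runs

def rle_decode(runs: list[tuple[int, int]]) -> bytes:
    return b"".join(bytes([b]) * n for b, n in runs)

if __name__ == "__main__":
    sample = b"AAAABBBCCD"
    assert rle_decode(rle_encode(sample)) == sample
    print(rle_encode(sample))
```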


Information ◽  
2020 ◽  
Vol 11 (4) ◽  
pp. 196
Author(s):  
Shmuel T. Klein ◽  
Dana Shapira

It seems reasonable to expect from a good compression method that its output should not be further compressible, because it should behave essentially like random data. We investigate this premise for a variety of known lossless compression techniques, and find that, surprisingly, there is much variability in the randomness, depending on the chosen method. Arithmetic coding seems to produce perfectly random output, whereas that of Huffman or Ziv-Lempel coding still contains many dependencies. In particular, the output of Huffman coding has already been proven to be random under certain conditions, and we present evidence here that arithmetic coding may produce an output that is identical to that of Huffman.
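
The premise can be checked quickly with standard-library compressors; the sketch below (an illustration only, not the authors' randomness tests) compresses data once with zlib and then measures how much a second pass with zlib, bz2, or lzma can still remove.

```python
# Illustrative sketch (not the authors' test suite): check how much further the
# output of one lossless compressor can be shrunk by a second pass.
import bz2
import lzma
import zlib

def second_pass_gain(data: bytes) -> None:
    first = zlib.compress(data, 9)   # DEFLATE-style first pass
    for name, fn in (("zlib", zlib.compress), ("bz2", bz2.compress), ("lzma", lzma.compress)):
        second = fn(first)
        print(f"{name:>4}: first pass {len(first)} B -> second pass {len(second)} B "
              f"({100 * len(second) / len(first):.1f}% of first-pass size)")

if __name__ == "__main__":
    sample = b"the quick brown fox jumps over the lazy dog " * 2000
    second_pass_gain(sample)
```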


2016 ◽  
Vol 2016 ◽  
pp. 1-7 ◽  
Author(s):  
Pamela Vinitha Eric ◽  
Gopakumar Gopalakrishnan ◽  
Muralikrishnan Karunakaran

This paper proposes a seed-based lossless compression algorithm to compress a DNA sequence, using a substitution method similar to the Lempel-Ziv compression scheme. The proposed method exploits the repeat structures that are inherent in DNA sequences by creating an offline dictionary that contains all such repeats along with the details of mismatches. By ensuring that only promising mismatches are allowed, the method achieves a compression ratio that is on par with or better than existing lossless DNA sequence compression algorithms.
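
The abstract does not give the exact seed and mismatch handling, so the following is only a rough illustration of the idea: index the k-mers of a DNA sequence and collect those that repeat, the positions a substitution coder could then encode as references instead of raw bases.

```python
# A minimal sketch of the seed idea (an assumption, not the authors' exact method):
# index every k-mer of a DNA sequence and report positions that repeat, which a
# substitution coder could then encode as (offset, length) references.
from collections import defaultdict

def repeat_seeds(seq: str, k: int = 12) -> dict[str, list[int]]:
    """Map each k-mer that occurs more than once to all of its start positions."""
    index: dict[str, list[int]] = defaultdict(list)
    for i in range(len(seq) - k + 1):
        index[seq[i:i + k]].append(i)
    return {kmer: pos for kmer, pos in index.items() if len(pos) > 1}

if __name__ == "__main__":
    dna = "ACGTACGTGGACGTACGTTTACGTACGT" * 4
    for kmer, positions in list(repeat_seeds(dna).items())[:3]:
        print(kmer, positions)
```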


2013 ◽  
Vol 21 (2) ◽  
pp. 133-143
Author(s):  
Hiroyuki Okazaki ◽  
Yuichi Futa ◽  
Yasunari Shidama

Summary: Huffman coding is one of the most famous entropy encoding methods for lossless data compression [16]. The JPEG and ZIP formats employ variants of Huffman encoding as lossless compression algorithms. Huffman coding is a bijective map from source letters to leaves of the Huffman tree constructed by the algorithm. In this article we formalize an algorithm constructing a binary code tree, the Huffman tree.
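
For reference, the sketch below builds a Huffman code tree in the standard textbook way (it is not the Mizar formalization described in the article): repeatedly merge the two least-frequent nodes and read the codes off the resulting binary tree.

```python
# Textbook Huffman tree construction: merge the two least-frequent subtrees
# until one root remains, then assign 0/1 along the paths to the leaves.
import heapq
from collections import Counter

def huffman_codes(text: str) -> dict[str, str]:
    # Heap entries: (frequency, tie-breaker, subtree); a subtree is either a
    # leaf symbol or a (left, right) pair.
    heap = [(freq, i, sym) for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (left, right)))
        count += 1
    codes: dict[str, str] = {}
    def walk(node, prefix: str = "") -> None:
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix or "0"   # single-symbol alphabet edge case
    walk(heap[0][2])
    return codes

if __name__ == "__main__":
    print(huffman_codes("abracadabra"))
```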


Author(s):  
Kamal Al-Khayyat ◽  
Imad Al-Shaikhli ◽  
Mohamad Al-Hagery

This paper details the examination of a particular case of data compression, where the compression algorithm removes the redundancy that arises when edge-based compression algorithms compress (previously compressed) pixelated images. The newly created redundancy can be removed using another round of compression. This work used JPEG-LS as an example of an edge-based compression algorithm for compressing pixelated images. The output of this process was subjected to another round of compression using a more robust but slower compressor (PAQ8f). The compression ratio of the second compression was, on average, 18%, which is high for random data. The results of the second compression were superior to those of lossy JPEG. For the data set used, lossy JPEG needs to sacrifice 10% on average to come close to the total lossless compression ratios of the two successive compressions. To generalize the results, fast general-purpose compression algorithms (7z, bz2, and Gzip) were used as well.
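
Neither JPEG-LS nor PAQ8f ships with the Python standard library, so the sketch below only mirrors the shape of the second-round experiment using general-purpose compressors (gzip as a stand-in first pass, bz2 as the second); the choice of compressors here is an assumption for illustration.

```python
# Sketch of a "second round" measurement using general-purpose stand-ins
# (gzip for the first compressor, bz2 for the stronger second-round compressor).
import bz2
import gzip
import sys

def second_round_ratio(path: str) -> None:
    first = gzip.compress(open(path, "rb").read())   # stand-in first compression
    second = bz2.compress(first)                      # second-round compression
    saved = 100 * (1 - len(second) / len(first))
    print(f"first round: {len(first)} B, second round: {len(second)} B, saved {saved:.1f}%")

if __name__ == "__main__":
    second_round_ratio(sys.argv[1])
```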


2016 ◽  
Vol 12 (2) ◽  
Author(s):  
Yosia Adi Jaya ◽  
Lukas Chrisantyo ◽  
Willy Sudiarto Raharjo

Data compression can save storage space and accelerate data transfer. Among many compression algorithms, Run-Length Encoding (RLE) is a simple and fast one. RLE can be used to compress many types of data. However, RLE is not very effective for lossless image compression because there are many small differences between neighboring pixels. This research proposes a new lossless compression algorithm called YRL that improves RLE using the idea of Relative Encoding. YRL can treat the values of neighboring pixels as the same value by saving those small differences (relative values) separately. Tests on various standard test images show that YRL has an average compression ratio of 75.805% for 24-bit bitmaps and 82.237% for 8-bit bitmaps, while RLE has an average compression ratio of 100.847% for 24-bit bitmaps and 97.713% for 8-bit bitmaps.
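
The abstract does not define YRL's exact file format, so the sketch below is only an interpretation of the stated idea: pixels whose difference from the previous pixel is small are treated as equal, the small differences go into a separate residual stream, and the now run-heavy base stream is run-length encoded. The threshold and stream layout are assumptions.

```python
# Rough sketch of the relative-encoding idea (names and threshold are assumptions,
# not YRL's actual format): store small neighbor differences separately so that
# runs of "same as previous pixel" become long and RLE-friendly.
def delta_then_rle(pixels: list[int], threshold: int = 2) -> tuple[list[tuple[int, int]], list[int]]:
    """Return (RLE of the base stream, side stream of small residuals)."""
    base, residuals = [], []
    prev = None
    for p in pixels:
        if prev is not None and abs(p - prev) <= threshold:
            base.append(prev)              # treat near-equal neighbors as equal...
            residuals.append(p - prev)     # ...and keep the small difference aside
        else:
            base.append(p)
            residuals.append(0)
            prev = p
    # plain run-length encoding of the base stream; pixel[i] == base[i] + residuals[i]
    runs: list[tuple[int, int]] = []
    for v in base:
        if runs and runs[-1][0] == v:
            runs[-1] = (v, runs[-1][1] + 1)
        else:
            runs.append((v, 1))
    return runs, residuals

if __name__ == "__main__":
    row = [100, 101, 100, 102, 180, 181, 181, 60]
    print(delta_then_rle(row))
```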


2013 ◽  
Vol 64 (1) ◽  
pp. 44-49 ◽  
Author(s):  
Anton Biasizzo ◽  
Franc Novak ◽  
Peter Korošec

Arithmetic coding is a lossless compression algorithm that performs variable-length source coding. It is more flexible and efficient than the well-known Huffman coding. In this paper we present a non-adaptive FPGA implementation of multi-alphabet arithmetic coding with a separated statistical model of the data source. The alphabet of the data source is the 256-symbol ASCII character set and does not include a special end-of-file symbol. No context switching is used in the proposed design, which gives maximal throughput without pipelining. We have synthesized the design for Xilinx FPGA devices and used their built-in hardware resources.
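
The design itself is hardware, but the coding principle it implements can be illustrated in a few lines of software; the floating-point, encode-only sketch below narrows an interval according to a static symbol model (a practical coder, including the FPGA design above, uses scaled integer arithmetic to avoid precision loss on long inputs).

```python
# Floating-point illustration of arithmetic coding with a static model
# (encode-only; suitable for short messages, not a production coder).
from collections import Counter

def arithmetic_encode(message: str) -> float:
    freq = Counter(message)
    total = len(message)
    # cumulative probability interval [lo_sym, hi_sym) for each symbol
    cum, intervals = 0.0, {}
    for sym in sorted(freq):
        p = freq[sym] / total
        intervals[sym] = (cum, cum + p)
        cum += p
    low, high = 0.0, 1.0
    for sym in message:
        lo_sym, hi_sym = intervals[sym]
        span = high - low
        high = low + span * hi_sym
        low = low + span * lo_sym
    return (low + high) / 2    # any number in [low, high) identifies the message

if __name__ == "__main__":
    print(arithmetic_encode("ABRACADABRA"))
```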


2015 ◽  
Vol 10 (2) ◽  
Author(s):  
Christian Puji Nugraha ◽  
R. Gunawan Santosa ◽  
Lukas Chrisantyo A.A.

Data compression is a very important process in a world that relies heavily on digital files, such as texts, images, sounds, and videos. These digital files vary in size and often take up considerable disk storage space. To overcome this problem, many experts have created compression algorithms, both lossy and lossless. This research discusses the testing of four lossless compression algorithms applied to text files: LZ77, Static Huffman, LZ77 combined with Static Huffman, and Deflate. The performance of the four algorithms is compared by measuring the compression ratio. From the test results it can be concluded that the Deflate algorithm is the best of the four, owing to its use of multiple modes, i.e. uncompressed mode, LZ77 combined with Static Huffman mode, and LZ77 combined with Dynamic Huffman Coding mode. The results also show that the Deflate algorithm can compress text files with an average compression ratio of 38.84%.
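
A compression ratio of this kind (assuming it is compressed size as a percentage of original size) can be measured with zlib, which wraps the DEFLATE algorithm; the sketch below shows only the measurement, while the 38.84% figure above is the authors' result for their test files.

```python
# Measure a DEFLATE compression ratio (compressed size / original size) with zlib.
import sys
import zlib

def deflate_ratio(path: str) -> float:
    raw = open(path, "rb").read()
    packed = zlib.compress(raw, 9)    # zlib wraps the DEFLATE algorithm
    return 100 * len(packed) / len(raw)

if __name__ == "__main__":
    print(f"compression ratio: {deflate_ratio(sys.argv[1]):.2f}%")
```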

