DEVELOPMENT AND ANALYSIS OF A COMBINATION OF RUN LENGTH ENCODING AND RELATIVE ENCODING FOR IMAGE COMPRESSION (PENGEMBANGAN DAN ANALISIS KOMBINASI RUN LENGTH ENCODING DAN RELATIVE ENCODING UNTUK KOMPRESI CITRA)

2016 ◽  
Vol 12 (2) ◽  
Author(s):  
Yosia Adi Jaya ◽  
Lukas Chrisantyo ◽  
Willy Sudiarto Raharjo

Data compression can save storage space and accelerate data transfer. Among many compression algorithms, Run Length Encoding (RLE) is simple and fast. RLE can be used to compress many types of data. However, RLE is not very effective for lossless image compression because there are many small differences between neighboring pixels. This research proposes a new lossless compression algorithm called YRL that improves RLE using the idea of Relative Encoding. YRL can treat the values of neighboring pixels as equal by saving those small differences (relative values) separately. Tests on various standard test images show that YRL has an average compression ratio of 75.805% for 24-bit bitmaps and 82.237% for 8-bit bitmaps, while RLE has an average compression ratio of 100.847% for 24-bit bitmaps and 97.713% for 8-bit bitmaps.
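The abstract does not specify YRL's exact bitstream layout, so the following is only a minimal sketch of the underlying idea: run-length code pixels that are "close enough" to a run's base value, and store the small differences in a separate residual stream. The function name and the tolerance threshold are illustrative assumptions, not the authors' format.

```python
# Sketch of combining Relative Encoding with RLE, in the spirit of YRL.
# `threshold` and the run-vs-residual split are assumptions for illustration.

def rle_with_relative_encoding(pixels, threshold=2):
    """Compress a 1-D pixel sequence into (base, run_length) pairs plus
    a side stream of small per-pixel differences (relative values)."""
    if not pixels:
        return [], []
    runs = []        # (base_value, run_length) pairs
    residuals = []   # small differences from each run's base value
    base, length = pixels[0], 1
    residuals.append(0)
    for p in pixels[1:]:
        if abs(p - base) <= threshold:
            length += 1                 # treat the pixel as "the same"
            residuals.append(p - base)  # but keep its tiny difference
        else:
            runs.append((base, length))
            base, length = p, 1
            residuals.append(0)
    runs.append((base, length))
    return runs, residuals


runs, residuals = rle_with_relative_encoding([10, 11, 10, 12, 50, 51, 50])
print(runs)       # [(10, 4), (50, 3)]
print(residuals)  # [0, 1, 0, 2, 0, 1, 0]
```

Plain RLE would see seven runs of length one here; the residual stream is what lets the near-equal neighbors collapse into two runs.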

Author(s):  
Kamal Al-Khayyat ◽  
Imad Al-Shaikhli ◽  
Mohamad Al-Hagery

This paper examines a particular case of data compression in which a second compression pass removes redundancy that appears when edge-based compression algorithms compress (previously compressed) pixelated images. The newly created redundancy can be removed by another round of compression. This work used JPEG-LS as an example of an edge-based algorithm for compressing pixelated images, and its output was compressed again with a more powerful but slower compressor (PAQ8f). The compression ratio of the second pass was, on average, 18%, which is high for seemingly random data. The results of the two successive compressions were superior to lossy JPEG: on the data set used, lossy JPEG had to sacrifice roughly 10% on average to approach the total compression ratio of the two lossless passes. To generalize the results, fast general-purpose compression algorithms (7z, bz2, and gzip) were also used.
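As a rough illustration of the measurement, not the paper's pipeline, the sketch below runs a fast first-pass codec and then checks what a stronger second-pass codec does to its output. Here zlib stands in for the first pass and bz2 for the second; the paper's actual pair was JPEG-LS followed by PAQ8f, which gained about 18% on pixelated images, whereas on generic streams a second pass may gain little or even expand.

```python
import bz2
import zlib

def two_pass_sizes(data: bytes):
    first = zlib.compress(data, 1)   # fast, weak first pass
    second = bz2.compress(first, 9)  # stronger, slower second pass
    return len(data), len(first), len(second)

raw, p1, p2 = two_pass_sizes(b"pixelated-image-row " * 5000)
# A negative "gain" means the second pass expanded the stream.
print(f"raw={raw} first={p1} second={p2} second-pass gain={1 - p2/p1:.1%}")
```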


2014 ◽  
Vol 926-930 ◽  
pp. 1751-1754
Author(s):  
Hong Mei Song ◽  
Hai Wei Mu ◽  
Dong Yan Zhao

A nearly lossless compression algorithm with progressive transmission and decoding is proposed. The image data are grouped by frequency based on the DCT transform; the JPEG-LS core algorithm (texture prediction and Golomb coding) is then applied to each group of data to achieve progressive image transmission and decoding. Experiments on standard test images comparing this algorithm with JPEG-LS show that its compression ratio is very similar to that of JPEG-LS; the algorithm loses a little image information but gains the ability of progressive transmission and decoding.
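A minimal sketch of the frequency-grouping step (the band boundaries and zigzag ordering are assumptions for illustration, not the authors' exact scheme): split each 8x8 block's DCT coefficients into bands by zigzag order, so low-frequency bands can be transmitted and decoded first.

```python
import numpy as np
from scipy.fft import dctn

def zigzag_indices(n=8):
    # Order (row, col) pairs by anti-diagonal, which approximates
    # increasing spatial frequency in an n x n DCT block.
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1], rc[0]))

def frequency_groups(block, band_edges=(1, 6, 15, 28, 64)):
    coeffs = dctn(block.astype(float), norm="ortho")  # 2-D DCT of the block
    order = zigzag_indices(block.shape[0])
    groups, start = [], 0
    for end in band_edges:
        groups.append([coeffs[rc] for rc in order[start:end]])
        start = end
    return groups  # groups[0] = DC term, later groups = higher frequencies

block = np.arange(64).reshape(8, 8)
for i, g in enumerate(frequency_groups(block)):
    print(f"band {i}: {len(g)} coefficients")
```

Each band would then be predicted and Golomb-coded separately, so a decoder can render a coarse image from the early bands and refine it as later bands arrive.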


2018 ◽  
Vol 12 (11) ◽  
pp. 387
Author(s):  
Evon Abu-Taieh ◽  
Issam AlHadid

Multimedia is a highly competitive world; one of the properties on which it competes is the speed of download and upload of multimedia elements: text, sound, pictures, and animation. This paper presents CRUSH, a lossless compression algorithm that can be used to compress files. The CRUSH method is fast and simple, with time complexity O(n), where n is the number of elements being compressed. Furthermore, the compressed file is independent of the algorithm and needs no extra data structures. The paper compares CRUSH with other compression algorithms: Shannon–Fano coding, Huffman coding, Run Length Encoding (RLE), Arithmetic Coding, Lempel-Ziv-Welch (LZW), the Burrows-Wheeler Transform, the Move-to-Front (MTF) Transform, Haar, wavelet trees, Delta Encoding, Rice & Golomb Coding, Tunstall coding, the DEFLATE algorithm, and Run-Length Golomb-Rice (RLGR).
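The abstract does not spell out CRUSH's steps, so as a stand-in here is one of its O(n) baselines, Run Length Encoding: a single left-to-right pass over the input, hence running time linear in the number of elements.

```python
def rle_encode(data: bytes) -> list[tuple[int, int]]:
    """Return (value, run_length) pairs for consecutive equal bytes."""
    runs = []
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1                     # extend the current run
        runs.append((data[i], j - i))  # one pair per run
        i = j
    return runs

print(rle_encode(b"aaabbbbcc"))  # [(97, 3), (98, 4), (99, 2)]
```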


Author(s):  
Rian Syahputra

Text compression is used to reduce the repetition of letters so that the storage space used becomes smaller and transmission becomes faster. Text can be compressed with lossless techniques; lossless compression is a type of compression in which no data is lost during decompression. The Even-Rodeh Code algorithm is a lossless compression algorithm that replaces the original bit codes of the data with Even-Rodeh codewords, producing smaller data sizes.
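A minimal sketch of the Even-Rodeh integer code as usually constructed (the paper's exact bitstream layout may differ): values below 4 get a fixed 3-bit code; larger values are written in binary, prefixed by their recursively encoded lengths and terminated by a 0 bit.

```python
def even_rodeh(n: int) -> str:
    """Encode a non-negative integer as an Even-Rodeh codeword (bit string)."""
    if n < 4:
        return format(n, "03b")  # 0..3 -> three fixed bits
    bits = "0"                   # terminating 0 marker
    while n >= 4:
        rep = format(n, "b")     # binary representation of n
        bits = rep + bits        # prepend it to the code built so far
        n = len(rep)             # then encode that representation's length
    return bits

for n in (0, 3, 4, 8, 100):
    print(n, even_rodeh(n))
# 0 -> 000, 3 -> 011, 4 -> 1000, 8 -> 10010000, 100 -> 11111001000
```

Because every codeword is self-delimiting, frequent symbols mapped to small integers get short codes, which is where the size reduction comes from.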


2020 ◽  
Vol 10 (14) ◽  
pp. 4918
Author(s):  
Shaofei Dai ◽  
Wenbo Liu ◽  
Zhengyi Wang ◽  
Kaiyu Li ◽  
Pengfei Zhu ◽  
...  

This paper reports an efficient lossless compression method for periodic signals based on adaptive dictionary predictive coding. Some previous methods for data compression, such as differential pulse code modulation (DPCM), the discrete cosine transform (DCT), the lifting wavelet transform (LWT) and the KL transform (KLT), lack a transformation suited to making such data less redundant and better compressed. A new predictive coding approach based on an adaptive dictionary is proposed to improve the compression ratio of periodic signals. The main criterion of lossless compression is the compression ratio (CR). To verify the effectiveness of adaptive dictionary predictive coding for periodic signal compression, different transform coding technologies, including DPCM, 2-D DCT, and 2-D LWT, are compared. The results prove that adaptive dictionary predictive coding can effectively improve data compression efficiency compared with traditional transform coding technology.
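A hedged sketch of the general idea (the paper's dictionary structure and update rule are not reproduced here): for a signal with period P, predict each sample from a stored dictionary holding one period, emit the residual, and adapt the dictionary toward the observed data. Near-zero residuals compress well under any entropy coder.

```python
def dictionary_predict(signal, period, rate=0.25):
    """Residuals of a periodic signal against an adapting one-period dictionary.
    `rate` is an illustrative adaptation step, not SALCA's rule."""
    dictionary = list(signal[:period])   # seed with the first period
    residuals = []
    for i, x in enumerate(signal[period:], start=period):
        slot = i % period
        pred = dictionary[slot]          # predict from the stored period
        residuals.append(x - pred)       # small if the signal is periodic
        dictionary[slot] += rate * (x - pred)  # adapt toward the input
    return residuals

sig = [0, 5, 9, 5, 0, 5, 9, 5, 0, 5, 10, 5]  # roughly period-4
print(dictionary_predict(sig, period=4))      # mostly zeros, one small spike
```

For true losslessness the decoder must run the identical adaptation arithmetic, so both sides stay in sync from the residuals alone.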


2003 ◽  
Vol 13 (01) ◽  
pp. 39-45
Author(s):  
AMER AL-NASSIRI

In this paper we consider a theoretical evaluation of a data and text compression algorithm based on the Burrows–Wheeler Transform (BWT) and a General Bidirectional Associative Memory (GBAM). A new lossless data and text compression method, based on the combination of the BWT and GBAM approaches, is presented. The algorithm was tested on many texts in different formats (ASCII and RTF). The compression ratio achieved is fairly good, on average 28–36%. Decompression is fast.
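The GBAM stage is network-specific, but the BWT front end is standard. A minimal (quadratic, purely illustrative) sketch using a sentinel terminator:

```python
def bwt(text: str, sentinel: str = "\x00") -> str:
    """Burrows-Wheeler Transform: last column of the sorted rotations."""
    s = text + sentinel                               # unique end marker
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)      # last column

print(repr(bwt("banana")))  # 'annb\x00aa' -- like symbols cluster together
```

The clustering of similar symbols is what the downstream stage (here GBAM, more commonly MTF plus an entropy coder) exploits; production implementations build a suffix array instead of materializing all rotations.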


2018 ◽  
Vol 12 (11) ◽  
pp. 406
Author(s):  
Evon Abu-Taieh ◽  
Issam AlHadid

Multimedia is a highly competitive world; one of the properties on which it competes is the speed of download and upload of multimedia elements: text, sound, pictures, and animation. This paper presents CRUSH, a lossless compression algorithm that can be used to compress files. The CRUSH method is fast and simple, with time complexity O(n), where n is the number of elements being compressed. Furthermore, the compressed file is independent of the algorithm and needs no extra data structures. The paper compares CRUSH with other compression algorithms: Shannon–Fano coding, Huffman coding, Run Length Encoding (RLE), Arithmetic Coding, Lempel-Ziv-Welch (LZW), the Burrows-Wheeler Transform, the Move-to-Front (MTF) Transform, Haar, wavelet trees, Delta Encoding, Rice & Golomb Coding, Tunstall coding, the DEFLATE algorithm, and Run-Length Golomb-Rice (RLGR).
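Another baseline from the comparison list, sketched here since CRUSH itself is unspecified: the Move-to-Front (MTF) transform, which maps recently seen symbols to small indices so a later entropy coder can exploit the skewed distribution. Over a fixed 256-symbol alphabet it runs in time linear in the input length.

```python
def mtf_encode(data: bytes) -> list[int]:
    """Move-to-Front transform over the byte alphabet."""
    alphabet = list(range(256))        # current symbol ordering
    out = []
    for b in data:
        idx = alphabet.index(b)        # position of the symbol right now
        out.append(idx)
        alphabet.insert(0, alphabet.pop(idx))  # move it to the front
    return out

print(mtf_encode(b"aaab"))  # [97, 0, 0, 98] -- repeats become zeros
```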


Author(s):  
Peter Kroll ◽  
Torsten Radtke ◽  
Volker Zerbe

Data compression, and in particular image compression, plays an important role in today's information age. Images account for over 90% of the data transfer volume on the Internet, and bottlenecks like modems require heavy compression to satisfy users' demands. The subject of data compression can be divided into two categories: lossless compression, for an exact reconstruction of the original data, and lossy compression, for an approximate reconstruction that is as close as possible to the original. In this chapter we address lossless and lossy compression techniques for still images.


2017 ◽  
Vol 34 (11) ◽  
pp. 2499-2508
Author(s):  
Fangjie Yu ◽  
Linhua Li ◽  
Yang Zhao ◽  
Mengmeng Wang ◽  
Guilin Liu ◽  
...  

Unmanned vehicles represent a significant technical improvement for ocean and atmospheric monitoring. With the increasing number of sensors mounted on unmanned mobile platforms, the data volume and its rapid growth pose a new challenge relative to the limited transmission bandwidth. Data compression provides an effective approach. However, installing a lossless compression algorithm in an embedded system, which is limited in computing resources, scale, and energy consumption, is a challenging task. To address this issue, a novel self-adaptive lossless compression algorithm (SALCA), focused on the dynamic characteristics of multidisciplinary ocean and atmospheric observation data, is proposed as an extension of two-model transmission theory. The proposed method uses a second-order linear predictor that changes as the input data vary and achieves better lossless compression performance for dynamic ocean data. More than 200 groups of conductivity–temperature–depth (CTD) profile data from underwater gliders are used as the standard input, and the results show that, compared to two state-of-the-art compression methods, the proposed algorithm performs better in terms of compression ratio and overall power consumption in an embedded system.
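A sketch of the predictor family the abstract names (the fixed coefficients below are illustrative; SALCA adapts its predictor as the input varies): a second-order linear predictor estimates each sample from the previous two and stores only the residual.

```python
def second_order_residuals(samples, a1=2.0, a2=-1.0):
    # a1=2, a2=-1 is the classic linear-extrapolation choice:
    # pred[i] = 2*x[i-1] - x[i-2], exact for locally linear signals.
    residuals = list(samples[:2])        # first two samples pass through
    for i in range(2, len(samples)):
        pred = a1 * samples[i - 1] + a2 * samples[i - 2]
        residuals.append(samples[i] - pred)
    return residuals

# A slowly varying CTD-like temperature trace leaves tiny residuals.
trace = [20.0, 19.8, 19.6, 19.5, 19.4, 19.4]
print(second_order_residuals(trace))
```

The smaller and more concentrated the residuals, the better a downstream entropy coder does, which is how an adaptive predictor translates into a higher compression ratio at low computational cost.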

