ΔRLE

2021 ◽  
Vol 45 (1) ◽  
pp. 329-349
Author(s):  
Branislav Mados ◽  
Zuzana Bilanová ◽  
Ján Hurtuk

Lossless data compression algorithms can exploit statistical redundancy to represent data in fewer bits than the original uncompressed data. Run-Length Encoding (RLE) is one of the simplest lossless compression algorithms, both in terms of understanding its principles and software implementation and in terms of time and space complexity. When this principle is applied to the individual bits of the original uncompressed data without respecting byte boundaries, the approach is referred to as bit-level Run-Length Encoding. The lightweight lossless data compression algorithm proposed in this paper optimizes bit-level RLE data compression: it uses a special encoding of repeating data blocks and, if necessary, combines it with delta data transformation or with representation of the data in its original form, with the aim of increasing compression efficiency compared to the conventional bit-level RLE approach. The advantage of the proposed algorithm lies in its low time and memory consumption, the basic features of RLE, together with an increased compression ratio compared to the classical bit-level RLE approach.
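The two building blocks named above can be illustrated with a minimal Python sketch: collapsing a bit sequence into runs without regard for byte boundaries, and a byte-wise delta transformation that turns slowly varying data into long runs. This is not the paper's ΔRLE encoder (its special block encoding and mode selection are omitted), only the underlying idea:

```python
def bit_runs(bits):
    """Collapse a bit sequence into (bit, run_length) pairs,
    ignoring byte boundaries as in bit-level RLE."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return [(b, n) for b, n in runs]

def delta_transform(data):
    """Byte-wise delta: replace each byte with its difference from the
    previous byte (mod 256).  Slowly varying sensor-like data becomes
    runs of small values, which bit-level RLE compresses better."""
    out = [data[0]]
    for prev, cur in zip(data, data[1:]):
        out.append((cur - prev) % 256)
    return bytes(out)

print(bit_runs([1, 1, 1, 1, 0, 0, 1, 1, 1, 0]))  # [(1, 4), (0, 2), (1, 3), (0, 1)]
```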

2020 ◽  
Vol 20 (02) ◽  
pp. 2050007
Author(s):  
Poorva Girishwaingankar ◽  
Sangeeta Milind Joshi

This paper proposes a compression algorithm using an octonary repetition tree (ORT) based on run-length encoding (RLE). RLE is a lossless data compression method whose major weakness is the duplication problem: the code word or flag used to mark a run can also occur in the data itself. Hence, ORT is offered in place of a flag or code word to overcome this issue. The method gives better performance in terms of compression ratio, i.e. 99.75%. However, ORT performs poorly in terms of compression speed. For that reason, physical next-generation secure computing (PHY-NGSC) is hybridized with ORT to raise the compression speed. It applies an MPI-OpenMP programming paradigm to ORT to improve the compression speed of the encoder: the proposed work achieves multiple levels of parallelism within an image, with MPI providing parallelism across the group-of-pictures level and OpenMP across the slice level. At the same time, the proposed method can compress a wide range of data, such as multimedia, executable files, and documents. Its performance is compared with other methods, such as accordion RLE, context-adaptive variable-length coding (CAVLC), and context-based arithmetic coding (CBAC), through implementation in the MATLAB working platform.


Author(s):  
Gody Mostafa ◽  
Abdelhalim Zekry ◽  
Hatem Zakaria

When transmitting data in digital communication, it is desirable that the number of transmitted bits be as small as possible, so many techniques are used to compress the data. Lempel-Ziv is one of the most commonly used lossless data compression algorithms. In this paper, a Lempel-Ziv algorithm for data compression was implemented through VHDL coding. The work in this paper is devoted to improving the compression rate, space saving, and utilization of the Lempel-Ziv algorithm using a systolic array approach. The developed design is validated with VHDL simulations using Xilinx ISE 14.5 and synthesized on a Virtex-6 FPGA chip. The results show that our design is efficient in providing high compression rates and space-saving percentages as well as improved utilization. Throughput is increased by 50% and the design area is decreased by more than 23%, with a high compression ratio compared to comparable previous designs.
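The Lempel-Ziv family that the paper implements in hardware can be sketched in software as a minimal LZ77 coder emitting (offset, length, next-byte) triples; the systolic-array design parallelizes essentially this window-matching search. This is an illustrative sketch under simple assumptions (brute-force search, fixed window), not the paper's VHDL design:

```python
def lz77_encode(data, window=255):
    """Minimal LZ77: emit (offset, length, next_byte) triples by
    searching the sliding window for the longest match."""
    i, out = 0, []
    while i < len(data):
        best_off, best_len = 0, 0
        for j in range(max(0, i - window), i):
            l = 0
            # matches may overlap the current position, as in LZ77
            while i + l < len(data) - 1 and data[j + l] == data[i + l]:
                l += 1
            if l > best_len:
                best_off, best_len = i - j, l
        out.append((best_off, best_len, data[i + best_len]))
        i += best_len + 1
    return out

def lz77_decode(triples):
    """Reverse the triples; overlapping copies are resolved byte by byte."""
    out = bytearray()
    for off, length, nxt in triples:
        for _ in range(length):
            out.append(out[-off])
        out.append(nxt)
    return bytes(out)

msg = b"abcabcabcx"
assert lz77_decode(lz77_encode(msg)) == msg
```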


2013 ◽  
Vol 21 (2) ◽  
pp. 133-143
Author(s):  
Hiroyuki Okazaki ◽  
Yuichi Futa ◽  
Yasunari Shidama

Summary. Huffman coding is one of the most famous entropy encoding methods for lossless data compression [16]. The JPEG and ZIP formats employ variants of Huffman encoding as lossless compression algorithms. Huffman coding is a bijective map from source letters to the leaves of the Huffman tree constructed by the algorithm. In this article, we formalize an algorithm for constructing a binary code tree, the Huffman tree.
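The construction being formalized is the standard greedy merge of the two lowest-frequency nodes; a short Python sketch of it (the article's formal, machine-checked development is of course not reproduced here) that returns the bijective map from source letters to codewords:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman tree by repeatedly merging the two
    lowest-frequency nodes, then read codes off the leaves."""
    heap = [(freq, i, sym) for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    if len(heap) == 1:               # degenerate one-symbol alphabet
        return {heap[0][2]: "0"}
    nxt = len(heap)                  # unique tie-breaker for the heap
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, nxt, (left, right)))
        nxt += 1
    _, _, root = heap[0]
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):  # internal node: recurse
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                        # leaf: a source letter
            codes[node] = prefix
    walk(root, "")
    return codes
```

Note that equally optimal trees can differ in shape, so individual codewords may vary between implementations; only the code lengths (and hence the compressed size) are canonical.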


2011 ◽  
Vol 403-408 ◽  
pp. 2441-2444
Author(s):  
Hong Zhi Lu ◽  
Xue Jun Ren

Based on the theory of the simple linear regression model, this paper designs a lossless sensor data compression algorithm built on a one-dimensional linear regression model. The algorithm computes the linear fitting values of the differences of the sensor data and the fitting residuals, which are fed to a normal-distribution entropy encoder for compression. Compared with two typical lossless compression algorithms, the proposed algorithm achieved better compression ratios.
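The pre-processing stages described above can be sketched as follows. The entropy-coding stage is omitted, and the exact fitting scheme here (ordinary least squares of the first differences against the sample index) is an assumption for illustration, not necessarily the paper's formulation:

```python
def regression_residuals(samples):
    """Take first differences of the sensor samples, fit a
    one-dimensional linear regression (index -> difference) by least
    squares, and return the fitting residuals, which would then be
    passed to a normal-distribution entropy encoder."""
    diffs = [b - a for a, b in zip(samples, samples[1:])]
    n = len(diffs)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(diffs) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, diffs))
    slope = sxy / sxx if sxx else 0.0
    intercept = mean_y - slope * mean_x
    # small residuals cluster near zero, which entropy coding exploits
    return [y - (slope * x + intercept) for x, y in zip(xs, diffs)]
```

For smoothly varying sensor data the residuals are much smaller than the raw samples, so they need fewer bits after entropy coding.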


1997 ◽  
Vol 07 (03) ◽  
pp. 551-567 ◽  
Author(s):  
Michael F. Barnsley ◽  
Anca Deliu ◽  
Ruifeng Xie

It is shown that the invariant measure of a stationary nonatomic stochastic process yields an iterated function system with probabilities, and an associated dynamical system, that provide the basis for optimal lossless data compression algorithms. The theory is illustrated for the case of finite-order Markov processes: for a zero-order process it produces the arithmetic compression method, while for higher-order processes it yields dynamical systems, constructed from piecewise affine mappings of the interval [0, 1] into itself, that may be used to store information efficiently. The theory leads to a new geometrical approach to the development of compression algorithms.
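The zero-order case can be made concrete with a small sketch: each symbol's probability sub-interval is the image of an affine map on [0, 1), encoding composes these maps along the message, and decoding iterates the associated dynamical system (the inverse maps). This is an exact-arithmetic toy using Python's Fraction, not a practical bit-stream arithmetic coder:

```python
from fractions import Fraction

def encode(msg, probs):
    """Zero-order arithmetic coding viewed as an IFS: symbol s has the
    affine map w_s(x) = p_s * x + c_s taking [0,1) onto its probability
    sub-interval; encoding composes the maps along the message and
    stores one point of the final interval."""
    cum, c = {}, Fraction(0)
    for s, p in probs.items():
        cum[s] = c
        c += p
    low, width = Fraction(0), Fraction(1)
    for s in msg:
        low += width * cum[s]
        width *= probs[s]
    return low + width / 2  # any point inside the final interval works

def decode(x, probs, n):
    """Run the associated dynamical system: find the sub-interval
    containing x, emit its symbol, apply the inverse affine map."""
    out = []
    for _ in range(n):
        c = Fraction(0)
        for s, p in probs.items():
            if c <= x < c + p:
                out.append(s)
                x = (x - c) / p  # stretch the sub-interval back to [0,1)
                break
            c += p
    return "".join(out)

probs = {"a": Fraction(1, 2), "b": Fraction(1, 4), "c": Fraction(1, 4)}
assert decode(encode("abca", probs), probs, 4) == "abca"
```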

