An Efficient Lossless Compression Method for Periodic Signals Based on Adaptive Dictionary Predictive Coding

2020
Vol 10 (14)
pp. 4918
Author(s):
Shaofei Dai
Wenbo Liu
Zhengyi Wang
Kaiyu Li
Pengfei Zhu
...

This paper reports an efficient lossless compression method for periodic signals based on adaptive dictionary predictive coding. Previous data compression methods, such as differential pulse code modulation (DPCM), the discrete cosine transform (DCT), the lifting wavelet transform (LWT), and the Karhunen–Loève transform (KLT), lack a transformation suited to periodic data, leaving redundancy that limits compression. A new predictive coding approach based on an adaptive dictionary is proposed to improve the compression ratio of periodic signals. The main criterion of lossless compression is the compression ratio (CR). To verify the effectiveness of adaptive dictionary predictive coding for periodic signal compression, it is compared against several transform coding technologies, including DPCM, 2-D DCT, and 2-D LWT. The results obtained show that adaptive dictionary predictive coding effectively improves data compression efficiency compared with traditional transform coding technology.
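The abstract does not spell out the adaptation rule, but the core idea of dictionary-based predictive coding for a periodic signal can be sketched as follows. This is a minimal illustration only: the known period, the running-average update, and the `alpha` step size are assumptions, not the paper's method; the small residuals it emits would then be entropy-coded.

```python
import numpy as np

def adp_encode(signal, period, alpha=0.125):
    """Predict each sample from an adaptive one-period dictionary
    and emit the (small) integer residuals for entropy coding."""
    dictionary = np.zeros(period, dtype=np.int64)  # one predicted period
    residuals = np.empty(len(signal), dtype=np.int64)
    for n, x in enumerate(signal):
        phase = n % period
        pred = dictionary[phase]
        residuals[n] = x - pred                    # residual to be coded
        # adapt the dictionary entry toward the observed sample
        dictionary[phase] = pred + round(alpha * (x - pred))
    return residuals

def adp_decode(residuals, period, alpha=0.125):
    """Invert adp_encode by replaying the same prediction/adaptation."""
    dictionary = np.zeros(period, dtype=np.int64)
    signal = np.empty(len(residuals), dtype=np.int64)
    for n, r in enumerate(residuals):
        phase = n % period
        pred = dictionary[phase]
        x = pred + r
        signal[n] = x
        dictionary[phase] = pred + round(alpha * (x - pred))
    return signal
```

Because the decoder replays exactly the prediction and update steps of the encoder, the round trip is lossless; all the compression gain comes from the residuals being far smaller than the raw samples.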

Author(s):  
Hendra Mesra
Handayani Tjandrasa
Chastine Fatichah

In general, compression methods are developed to reduce the redundancy of data. This study takes a different approach: it embeds bits of one datum in image data into another datum using a Reversible Low Contrast Mapping (RLCM) transformation. Besides using RLCM for embedding, the method also applies the properties of RLCM to compress each datum before it is embedded. The algorithm employs a queue and recursive indexing and encodes the data in a cyclic manner. In contrast to raw RLCM, the proposed method is a coding method like Huffman coding. Publicly available image data are used to evaluate the proposed method. For all test images, the proposed method achieves a higher compression ratio than Huffman coding.
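RLCM itself is not defined in this abstract. As a hedged illustration of the kind of reversible integer pixel-pair transform involved, here is the classic reversible contrast mapping forward/inverse pair; the RLCM variant used in the paper may differ in detail.

```python
def rcm_forward(x, y):
    """Forward pair transform: (x, y) -> (2x - y, 2y - x).
    In practice a pair is transformed only when the results
    stay inside the valid pixel range."""
    return 2 * x - y, 2 * y - x

def rcm_inverse(xp, yp):
    """Exact inverse: 3x = 2x' + y' and 3y = x' + 2y',
    so integer division by 3 is exact."""
    return (2 * xp + yp) // 3, (xp + 2 * yp) // 3

# round-trip check on a sample pixel pair
assert rcm_inverse(*rcm_forward(100, 90)) == (100, 90)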


Sensors
2018
Vol 18 (12)
pp. 4273
Author(s):
Jianlin Liu
Fenxiong Chen
Dianhong Wang

Data compression is very important in wireless sensor networks (WSNs) because sensor nodes have limited energy. Data communication accounts for most of a node's energy consumption, so the lifetime of sensor nodes is usually prolonged by reducing data transmission and reception. In this paper, we propose a new Stacked RBM Auto-Encoder (Stacked RBM-AE) model to compress sensing data, composed of an encode layer and a decode layer. In the encode layer the sensing data is compressed; in the decode layer it is reconstructed. The encode and decode layers are composed of four standard Restricted Boltzmann Machines (RBMs). We also provide an energy optimization method that further reduces the energy consumed by model storage and computation by pruning the model's parameters. We test the performance of the model on environmental data collected by the Intel Lab. When the compression ratio of the model is 10, the average Percentage RMS Difference value is 10.04%, and the average temperature reconstruction error is 0.2815 °C. Node communication energy consumption in WSNs can be reduced by 90%. Compared with the traditional methods, the proposed model achieves better compression efficiency and reconstruction accuracy at the same compression ratio. Our experimental results show that the new neural network model not only applies to data compression for WSNs, but also offers high compression efficiency and good transfer learning ability.
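As a rough sketch of one building block of such a model, here is a minimal Bernoulli RBM trained with one-step contrastive divergence in numpy. The paper's stack would chain several such units, each trained on the hidden activations of the previous one; the layer sizes, learning rate, and training loop below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class RBM:
    """Minimal Bernoulli RBM trained with one-step contrastive divergence."""
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0, 0.01, (n_visible, n_hidden))
        self.bv = np.zeros(n_visible)
        self.bh = np.zeros(n_hidden)
        self.lr = lr

    def encode(self, v):                       # visible -> hidden probabilities
        return sigmoid(v @ self.W + self.bh)

    def decode(self, h):                       # hidden -> visible probabilities
        return sigmoid(h @ self.W.T + self.bv)

    def cd1(self, v0):
        """One contrastive-divergence update on a batch of visible vectors."""
        h0 = self.encode(v0)
        h0_s = (rng.random(h0.shape) < h0).astype(float)  # sample hidden units
        v1 = self.decode(h0_s)                            # reconstruction
        h1 = self.encode(v1)
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / n
        self.bv += self.lr * (v0 - v1).mean(axis=0)
        self.bh += self.lr * (h0 - h1).mean(axis=0)

# usage sketch: compress 64-d sensor vectors to a 16-d code
rbm = RBM(64, 16)
batch = rng.random((32, 64))
for _ in range(100):
    rbm.cd1(batch)
code = rbm.encode(batch)   # compressed representation
```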


Author(s):  
Kamal Al-Khayyat
Imad Al-Shaikhli
Mohamad Al-Hagery

This paper examines a particular case of data compression: the redundancy that arises when an edge-based compression algorithm compresses (previously compressed) pixelated images. The newly created redundancy can be removed by a second round of compression. This work used JPEG-LS as the example of an edge-based compression algorithm for compressing pixelated images, and its output was then compressed again with a stronger but slower compressor (PAQ8f). The compression ratio of the second round was 18% on average, which is high for ostensibly random data. The results of the second compression were also superior to lossy JPEG: on the data set used, lossy JPEG had to sacrifice about 10% on average to approach the compression ratios achieved by the two successive lossless compressions. To generalize the results, fast general-purpose compression algorithms (7z, bz2, and Gzip) were tested as well.
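PAQ8f has no standard Python binding, but the second-round effect is easy to probe with fast general-purpose compressors like the ones the paper also used. A minimal sketch (the input file name is hypothetical; any already-compressed file, e.g. a JPEG-LS output, can be substituted):

```python
import bz2
import lzma
import zlib
from pathlib import Path

def second_round_ratios(path):
    """Measure how much an already-compressed file shrinks
    under a second, general-purpose compression pass."""
    data = Path(path).read_bytes()
    for name, fn in (("bz2", bz2.compress),
                     ("lzma", lzma.compress),
                     ("zlib", zlib.compress)):
        out = fn(data)
        saving = 1 - len(out) / len(data)
        print(f"{name}: {len(data)} -> {len(out)} bytes "
              f"({saving:.1%} second-round saving)")

# second_round_ratios("pixelated.jls")  # hypothetical JPEG-LS output file
```

A saving well above zero on such a file indicates that the first-round compressor left structured redundancy behind, which is the phenomenon the paper studies.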


2003
Vol 13 (01)
pp. 39-45
Author(s):  
AMER AL-NASSIRI

In this paper we present a theoretical evaluation of a data and text compression algorithm based on the Burrows–Wheeler Transform (BWT) and a General Bidirectional Associative Memory (GBAM). A new lossless data and text compression method, based on the combination of the BWT and GBAM approaches, is presented. The algorithm was tested on many texts in different formats (ASCII and RTF). The compression ratio achieved is fairly good, on average 28–36%. Decompression is fast.
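The GBAM back end is specific to the paper and is not sketched here, but the BWT front end is standard: it permutes the text so that symbols with similar contexts cluster together, which makes the subsequent coding stage more effective. A naive, illustration-only BWT and its inverse (quadratic-time, fine for small strings):

```python
def bwt(s: str, sentinel: str = "\0") -> str:
    """Naive Burrows-Wheeler transform: sort all rotations of
    s + sentinel and take the last column."""
    s += sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def ibwt(r: str, sentinel: str = "\0") -> str:
    """Invert the BWT by repeatedly prepending and sorting columns."""
    table = [""] * len(r)
    for _ in range(len(r)):
        table = sorted(r[i] + table[i] for i in range(len(r)))
    row = next(t for t in table if t.endswith(sentinel))
    return row[:-1]

text = "banana"
assert ibwt(bwt(text)) == text
```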


2018
Vol 1 (2)
pp. 20-26
Author(s):
Tommy Tommy
Rosyidah Siregar
Amir Mahmud Husein
Mawaddah Harahap
Ferdy Riza

ASCII differentiation is a compression method that exploits the difference values between the bytes of the input characters. Technically, the method can be implemented with a coding dictionary or with a windowing block in place of the dictionary. Previous research shows that the compression ratio of ASCII differentiation is reasonably good, but its performance still needs to be analyzed against the methods in wide use today. This study compares the ASCII differentiation method with other compression methods, specifically LZW; LZW was chosen because many data compression applications use it, making it a suitable benchmark. The comparison of compression ratios shows that ASCII differentiation has an advantage over LZW on small inputs. On large inputs, LZW can exploit the probabilities of recurring character pairs, whereas ASCII differentiation is tied to the difference values within each block of input characters, so LZW achieves a higher compression ratio on large inputs.
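The abstract describes ASCII differentiation only informally; one plausible reading of the core idea is plain byte-difference (delta) coding, sketched below. The dictionary and windowing-block variants discussed in the paper differ in detail.

```python
def delta_encode(data: bytes) -> bytes:
    """Replace each byte with its difference from the previous byte
    (mod 256). Slowly varying inputs become runs of small values
    that a back-end coder compresses well."""
    prev = 0
    out = bytearray()
    for b in data:
        out.append((b - prev) & 0xFF)
        prev = b
    return bytes(out)

def delta_decode(data: bytes) -> bytes:
    prev = 0
    out = bytearray()
    for d in data:
        prev = (prev + d) & 0xFF
        out.append(prev)
    return bytes(out)

assert delta_decode(delta_encode(b"ABBCCD")) == b"ABBCCD"
```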


Geophysics
2013
Vol 78 (5)
pp. V219-V228
Author(s):
Ming Cai
Wenxiao Qiao
Xiaodong Ju
Xiaohua Che
Yuhong Zhao

In well logging, large amounts of data must be sent from downhole to the surface over a very band-limited telemetry system. The limited bandwidth usually prolongs expensive rig time and/or sacrifices borehole information. Data compression techniques can, to some extent, relieve this problem. We derived the adaptive (4, 4) lifting integer-to-integer wavelet transform formula and its inverse from the basic principles of wavelet transforms, and we developed an appropriate bit-recombination mark coding approach suited to the characteristics of wavelet transform coefficients. On this basis, a new lossless compression method for acoustic waveform data, combining the wavelet transform with bit-recombination mark coding, was developed. Compression consists of the wavelet transform, data type conversion, bit-recombination, and mark coding, whereas decompression consists of mark decoding, bit recovery, data type conversion, and the inverse wavelet transform. Compression and decompression programs were implemented according to the proposed method and tested on field and synthetic acoustic logging waveform data, and the compression performance of our method was compared with that of several other lossless compression methods. The test results validated the correctness of the method and demonstrated its advantages. The new method is potentially applicable to acoustic waveform data compression.
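The paper's (4, 4) lifting filter is not reproduced in the abstract; the shorter (2, 2) LeGall integer lifting below shows the same predict/update structure and the exact integer reversibility that makes lifting suitable for lossless compression. The simple mirrored boundary handling and even-length input are assumptions of this sketch.

```python
import numpy as np

def lift_53_forward(x):
    """One level of the (2, 2) LeGall integer lifting transform:
    predict the odd samples from their even neighbours, then
    update the even samples from the resulting details."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    right = np.append(even[1:], even[-1])       # mirror at the boundary
    d = odd - (even + right) // 2               # predict step (details)
    left_d = np.insert(d[:-1], 0, d[0])
    s = even + (left_d + d + 2) // 4            # update step (smooth part)
    return s, d

def lift_53_inverse(s, d):
    """Undo the update, then the predict, in reverse order."""
    left_d = np.insert(d[:-1], 0, d[0])
    even = s - (left_d + d + 2) // 4
    right = np.append(even[1:], even[-1])
    odd = d + (even + right) // 2
    x = np.empty(len(even) + len(d), dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

sig = np.array([3, 7, 4, 9, 2, 6], dtype=np.int64)
s, d = lift_53_forward(sig)
assert np.array_equal(lift_53_inverse(s, d), sig)   # exact round trip
```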


2016
Vol 12 (2)
Author(s):  
Yosia Adi Jaya
Lukas Chrisantyo
Willy Sudiarto Raharjo

Data compression can save storage space and accelerate data transfer. Among the many compression algorithms, Run Length Encoding (RLE) is simple and fast, and it can be used to compress many types of data. However, RLE is not very effective for lossless image compression because of the many small differences between neighboring pixels. This research proposes a new lossless compression algorithm called YRL that improves RLE using the idea of relative encoding. YRL can treat near-equal neighboring pixels as the same value by storing their small differences (relative values) separately. Tests on various standard test images show that YRL achieves an average compression ratio of 75.805% for 24-bit bitmaps and 82.237% for 8-bit bitmaps, while RLE achieves 100.847% for 24-bit bitmaps and 97.713% for 8-bit bitmaps.
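A minimal sketch of the relative-encoding idea as the abstract describes it: runs absorb near-equal neighbours, and the small per-pixel differences go to a separate stream so decoding stays exact. The tolerance value and the two-stream layout are assumptions of this sketch, not YRL's actual format.

```python
def yrl_encode(pixels, tol=2):
    """RLE that also merges near-equal neighbours: each run stores a
    base value, and the small deltas go to a side stream so the
    reconstruction is lossless."""
    runs, deltas = [], []
    i = 0
    while i < len(pixels):
        base, length = pixels[i], 1
        while (i + length < len(pixels)
               and abs(pixels[i + length] - base) <= tol):
            deltas.append(pixels[i + length] - base)
            length += 1
        runs.append((base, length))
        i += length
    return runs, deltas

def yrl_decode(runs, deltas):
    out, k = [], 0
    for base, length in runs:
        out.append(base)
        for _ in range(length - 1):
            out.append(base + deltas[k])
            k += 1
    return out

px = [10, 11, 10, 12, 50, 50, 51]
assert yrl_decode(*yrl_encode(px)) == px
```

Plain RLE would see almost no runs in `px`; merging neighbours within the tolerance is what recovers run structure from slowly varying image rows.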


Author(s):  
Emy Setyaningsih
Agus Harjoko

Compression reduces the size of data while maintaining the quality of the information it contains. This paper presents a survey of research on improvements to various hybrid compression techniques over the last decade. A hybrid compression technique combines the strengths of each group of methods, as is done in the JPEG compression method: lossy and lossless compression are combined to obtain a high compression ratio while maintaining the quality of the reconstructed image. Lossy compression produces a relatively high compression ratio, whereas lossless compression yields high-quality data reconstruction, since the data can later be decompressed to exactly the same form it had before compression. The discussion of ongoing developments and open issues in hybrid compression indicates opportunities for further research to improve the performance of image compression methods.
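As a toy illustration of the lossy-plus-lossless pipeline the survey describes, the sketch below quantises a block (the lossy stage) and then applies zlib (the lossless stage). Real hybrid codecs such as JPEG insert a decorrelating transform (e.g. the DCT) before quantisation; the step size `q` here is an arbitrary choice for illustration.

```python
import zlib
import numpy as np

def hybrid_compress(block, q=8):
    """Toy hybrid pipeline: coarse quantisation (lossy) followed by a
    lossless back-end coder, here zlib."""
    quantised = np.round(np.asarray(block, dtype=float) / q).astype(np.int16)
    return zlib.compress(quantised.tobytes()), quantised.shape

def hybrid_decompress(payload, shape, q=8):
    flat = np.frombuffer(zlib.decompress(payload), dtype=np.int16)
    return flat.reshape(shape) * q     # approximate reconstruction

img = np.arange(64).reshape(8, 8)
payload, shape = hybrid_compress(img)
approx = hybrid_decompress(payload, shape)  # close to img, not identical
```

The quantisation stage discards information (hence the high compression ratio), while the zlib stage is fully reversible, mirroring the lossy/lossless split the survey discusses.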

