Improving Data Compression Ratio by the Use of Optimality of LZW & Adaptive Huffman Algorithm (OLZWH)

2015 ◽  
Vol 4 (1) ◽  
pp. 11-19
Author(s):  
Pooja Jain ◽  
Anurag Jain ◽  
Chetan Agrawal
2017 ◽  
Vol 2017 ◽  
pp. 1-11 ◽  
Author(s):  
Qin Jiancheng ◽  
Lu Yiqin ◽  
Zhong Yu

As the wireless network has limited bandwidth and insecure shared media, data compression and encryption are very useful for the broadcast transmission of big data in the IoT (Internet of Things). However, the traditional techniques of compression and encryption are neither adequate nor efficient. To solve this problem, this paper presents a combined parallel algorithm named the “CZ algorithm,” which can compress and encrypt big data efficiently. The CZ algorithm uses a parallel pipeline, mixes the coding of compression and encryption, and supports a data window of up to 1 TB (or larger). Moreover, the CZ algorithm can encrypt big data as a chaotic cryptosystem without decreasing the compression speed. Meanwhile, a shareware named “ComZip” has been developed based on the CZ algorithm. The experiment results show that ComZip on a 64-bit system achieves a better compression ratio than WinRAR and 7-zip, and it can be faster than 7-zip in big data compression. In addition, ComZip encrypts big data without extra consumption of computing resources.
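Below is a minimal Python sketch of the general idea of combining compression and stream encryption in one tool so that encryption adds almost no extra cost. It is not the authors' CZ algorithm or ComZip: zlib stands in for the LZ-based coder, and a SHA-256 counter-mode keystream stands in for the paper's chaotic cryptosystem; the key, block size, and payload are illustrative assumptions.

```python
# Minimal sketch: compress the data, then XOR-encrypt the compressed bytes
# with a keystream. NOT the authors' CZ algorithm: zlib replaces the LZ coder,
# and a SHA-256 counter-mode keystream replaces the chaotic cryptosystem.
import hashlib
import zlib

BLOCK = 32  # keystream block size (one SHA-256 digest)

def keystream_block(key: bytes, counter: int) -> bytes:
    # Derive one keystream block from the key and a block counter.
    return hashlib.sha256(key + counter.to_bytes(8, "big")).digest()

def compress_encrypt(data: bytes, key: bytes) -> bytes:
    compressor = zlib.compressobj()
    compressed = compressor.compress(data) + compressor.flush()
    out = bytearray()
    for counter in range((len(compressed) + BLOCK - 1) // BLOCK):
        chunk = compressed[counter * BLOCK:(counter + 1) * BLOCK]
        ks = keystream_block(key, counter)
        out.extend(b ^ k for b, k in zip(chunk, ks))  # XOR-encrypt the chunk
    return bytes(out)

def decrypt_decompress(blob: bytes, key: bytes) -> bytes:
    out = bytearray()
    for counter in range((len(blob) + BLOCK - 1) // BLOCK):
        chunk = blob[counter * BLOCK:(counter + 1) * BLOCK]
        ks = keystream_block(key, counter)
        out.extend(b ^ k for b, k in zip(chunk, ks))  # XOR is its own inverse
    return zlib.decompress(bytes(out))

if __name__ == "__main__":
    payload = b"sensor reading " * 1000
    blob = compress_encrypt(payload, b"shared secret")
    assert decrypt_decompress(blob, b"shared secret") == payload
    print(len(payload), "->", len(blob), "bytes")
```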


2014 ◽  
Vol 986-987 ◽  
pp. 1700-1703
Author(s):  
Li Li Hu ◽  
Sheng Suo Niu ◽  
Zhi Rui Liang

With the application of wide area measurement systems, a large amount of experimental PMU data is generated. To analyze, transmit, and apply these data efficiently, this paper studies the characteristics of the PMU data, pre-processes the data with the waveform difference method, and then compresses them with the Huffman algorithm. Comparing the results with and without this pre-processing shows that the compression ratio is further improved.
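The following is a minimal Python sketch of the two-stage idea described above: a waveform-difference (delta) pre-processing pass followed by Huffman coding of the residuals. The sample values and the heap-based Huffman builder are assumptions for illustration; the paper's actual PMU pre-processing may differ.

```python
# Minimal sketch: waveform-difference (delta) pre-processing followed by
# Huffman coding of the residuals. Sample values are illustrative only.
import heapq
from collections import Counter

def delta_encode(samples):
    # Keep the first sample, replace the rest by successive differences;
    # near-periodic signals give small, highly repetitive residuals.
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def huffman_code(symbols):
    # Build a prefix code from symbol frequencies with a min-heap.
    heap = [[freq, i, {sym: ""}]
            for i, (sym, freq) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, [f1 + f2, next_id, merged])
        next_id += 1
    return heap[0][2]

if __name__ == "__main__":
    pmu_samples = [100, 102, 103, 103, 102, 100, 98, 97, 97, 98, 100, 102]
    residuals = delta_encode(pmu_samples)
    code = huffman_code(residuals)
    encoded = "".join(code[r] for r in residuals)
    print(residuals)
    print(len(encoded), "bits after delta + Huffman")
```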


Author(s):  
Ahmad Mohamad Al-Smadi ◽  
Ahmad Al-Smadi ◽  
Roba Mahmoud Ali Aloglah ◽  
Nisrein Abu-darwish ◽  
Ahed Abugabah

The Vernam cipher, known as the one-time pad, is an unbreakable algorithm because it uses a truly random key equal in length to the data to be encoded, and each element of the text is encrypted with an element of the encryption key. In this paper, we propose a novel technique to overcome the obstacles that hinder the use of the Vernam algorithm. First, the Vernam and Advanced Encryption Standard (AES) algorithms are used to encrypt the data as well as to hide the encryption key; second, a password is placed on the file through the use of the AES algorithm, so the level of protection becomes very high. The Huffman algorithm is then used for data compression to reduce the size of the output file. A set of files is encrypted and decrypted using our methodology. The experiments demonstrate the flexibility of our method and show that it succeeds without losing any information.
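Below is a minimal Python sketch of the Vernam (one-time pad) step only: each plaintext byte is XORed with a random key byte, and the key is as long as the data. The AES key-protection and Huffman compression stages described above are omitted here; the sample plaintext is illustrative.

```python
# Minimal sketch of the Vernam / one-time pad step: XOR each plaintext byte
# with a random key byte, where the key has the same length as the data.
import os

def vernam_encrypt(plaintext: bytes):
    key = os.urandom(len(plaintext))  # truly random, same length as the data
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def vernam_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so decryption repeats the same operation.
    return bytes(c ^ k for c, k in zip(ciphertext, key))

if __name__ == "__main__":
    ct, key = vernam_encrypt(b"confidential report")
    assert vernam_decrypt(ct, key) == b"confidential report"
    print(ct.hex())
```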


Author(s):  
Gody Mostafa ◽  
Abdelhalim Zekry ◽  
Hatem Zakaria

When transmitting data in digital communication, it is desirable that the number of transmitted bits be as small as possible, so many techniques are used to compress the data. In this paper, a Lempel-Ziv algorithm for data compression was implemented through VHDL coding; Lempel-Ziv is one of the most commonly used lossless data compression algorithms. The work in this paper is devoted to improving the compression rate, space saving, and utilization of the Lempel-Ziv algorithm using a systolic array approach. The developed design is validated with VHDL simulations using Xilinx ISE 14.5 and synthesized on a Virtex-6 FPGA chip. The results show that our design is efficient in providing high compression rates and space-saving percentages as well as improved utilization. Throughput is increased by 50%, and the design area is decreased by more than 23% with a high compression ratio compared to comparable previous designs.
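The following is a minimal software sketch (in Python, not VHDL) of the LZ77-style longest-match search that a systolic array parallelizes in hardware. The window and look-ahead sizes are illustrative assumptions, not the parameters of the FPGA design described above.

```python
# Minimal sketch of an LZ77-style match search: for each position, find the
# longest match inside a sliding window and emit (distance, length, next byte).
def lz77_encode(data: bytes, window: int = 255, lookahead: int = 15):
    tokens = []
    i = 0
    while i < len(data):
        best_len, best_dist = 0, 0
        start = max(0, i - window)
        # Scan the sliding window for the longest match with the look-ahead.
        for j in range(start, i):
            length = 0
            while (length < lookahead and i + length < len(data)
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_len, best_dist = length, i - j
        next_sym = data[i + best_len] if i + best_len < len(data) else 0
        tokens.append((best_dist, best_len, next_sym))
        i += best_len + 1
    return tokens

if __name__ == "__main__":
    print(lz77_encode(b"abracadabra abracadabra"))
```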


2018 ◽  
Author(s):  
Andysah Putera Utama Siahaan

Compression aims to reduce the size of data before storing it or moving it to storage media. Huffman and Elias Delta Code are the two algorithms used for the compression process in this research, and both are applied to compress text files. The Huffman algorithm starts by sorting characters based on their frequency, builds a binary tree from the leaves to the root (a bottom-up construction), and ends with code formation. In contrast, the Elias Delta Code method uses a different technique. Text file compression is done by reading the input string in a text file and encoding the string using both algorithms. The compression results show that the Huffman algorithm performs better overall than Elias Delta Code.
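For comparison with Huffman coding, here is a minimal Python sketch of the Elias delta code: it encodes a positive integer directly from its bit length, with no frequency table or tree construction. The test values are illustrative only.

```python
# Minimal sketch of Elias gamma and delta codes for positive integers.
def elias_gamma(n: int) -> str:
    b = bin(n)[2:]                    # binary representation of n
    return "0" * (len(b) - 1) + b     # unary length prefix + binary value

def elias_delta(n: int) -> str:
    b = bin(n)[2:]
    # Encode the bit length with a gamma code, then append the remaining bits.
    return elias_gamma(len(b)) + b[1:]

if __name__ == "__main__":
    for n in (1, 2, 5, 17):
        print(n, elias_delta(n))
```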


2020 ◽  
Vol 10 (14) ◽  
pp. 4918
Author(s):  
Shaofei Dai ◽  
Wenbo Liu ◽  
Zhengyi Wang ◽  
Kaiyu Li ◽  
Pengfei Zhu ◽  
...  

This paper reports on an efficient lossless compression method for periodic signals based on adaptive dictionary predictive coding. Some previous methods for data compression, such as differential pulse-code modulation (DPCM), the discrete cosine transform (DCT), the lifting wavelet transform (LWT), and the Karhunen-Loève transform (KLT), lack a suitable transformation to make these data less redundant and better compressed. A new predictive coding approach, based on an adaptive dictionary, is proposed to improve the compression ratio of periodic signals. The main criterion of lossless compression is the compression ratio (CR). To verify the effectiveness of adaptive dictionary predictive coding for periodic signal compression, different transform coding technologies, including DPCM, 2-D DCT, and 2-D LWT, are compared. The results prove that adaptive dictionary predictive coding can effectively improve data compression efficiency compared with traditional transform coding technology.
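Below is a minimal Python sketch of period-wise predictive coding for a periodic signal: each new period is predicted from a stored "dictionary" period and only the residual is kept, together with the compression ratio (CR) criterion. This is an assumption-level illustration, not the paper's adaptive dictionary scheme; the period length and sample values are invented here.

```python
# Minimal sketch: predict each period from a stored dictionary period and keep
# only the residuals; a periodic signal then yields mostly near-zero values.
def predictive_residuals(samples, period):
    dictionary = samples[:period]       # first period acts as the predictor
    residuals = list(dictionary)        # the first period is stored as-is
    for i in range(period, len(samples)):
        predicted = dictionary[i % period]
        residuals.append(samples[i] - predicted)
    return residuals

def compression_ratio(original_bits, compressed_bits):
    # CR = original size / compressed size; larger is better.
    return original_bits / compressed_bits

if __name__ == "__main__":
    signal = [10, 12, 15, 12, 10, 12, 15, 12, 11, 12, 15, 12]  # period = 4
    print(predictive_residuals(signal, period=4))
    print("CR example:", compression_ratio(1000, 250))
```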


2020 ◽  
Vol 2020 ◽  
pp. 1-22
Author(s):  
Qin Jiancheng ◽  
Lu Yiqin ◽  
Zhong Yu

With the advent of IR (Industrial Revolution) 4.0, the spread of sensors in the IoT (Internet of Things) may generate massive data, which will challenge the limited sensor storage and network bandwidth. Hence, the study of big data compression is valuable in the field of sensors. One problem is how to compress long-stream data efficiently with the finite memory of a sensor. To maintain performance, traditional compression techniques have to treat the data streams on a small and inadequate scale, which reduces the compression ratio. To solve this problem, this paper proposes a block-split coding algorithm named the “CZ-Array algorithm” and implements it in the shareware named “ComZip.” CZ-Array can use a relatively small data window to cover a configurable large scale, which benefits the compression ratio. It is fast, with O(N) time complexity, and fits big data compression. The experiment results indicate that ComZip with CZ-Array can obtain a better compression ratio than gzip, lz4, bzip2, and p7zip in multiple-stream data compression, and it also has a competitive speed among these general data compression tools. Besides, CZ-Array is concise and suits the hardware parallel implementation of sensors.
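The following is a minimal Python sketch of the block-split idea: interleaved records from several sensor streams are regrouped into per-stream blocks before compression, so a small compression window still sees long runs of similar data. This is an illustrative assumption about the approach, not the CZ-Array algorithm itself; zlib is used only to show the effect on compressed size.

```python
# Minimal sketch: regroup round-robin interleaved sensor records per stream
# before compressing, so similar records end up adjacent in the input.
import zlib

def block_split(records, num_streams):
    # Assign record i to stream i mod num_streams (round-robin interleaving).
    blocks = [[] for _ in range(num_streams)]
    for i, rec in enumerate(records):
        blocks[i % num_streams].append(rec)
    return blocks

if __name__ == "__main__":
    # Two interleaved "streams": one counts up, one stays nearly constant.
    records = []
    for t in range(1000):
        records.append(b"S1:%06d;" % t)
        records.append(b"S2:000000;")
    interleaved = b"".join(records)
    split = b"".join(b"".join(block) for block in block_split(records, 2))
    print("interleaved:", len(zlib.compress(interleaved)), "bytes")
    print("block-split:", len(zlib.compress(split)), "bytes")
```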

