Microscan Imager Logging Data Compression Using Improved Huffman Algorithm

Author(s):  
Yanhui Zhang ◽  
He Zhang ◽  
Jun Shi ◽  
Xiaozheng Yin


Author(s):  
Ahmad Mohamad Al-Smadi ◽  
Ahmad Al-Smadi ◽  
Roba Mahmoud Ali Aloglah ◽  
Nisrein Abu-darwish ◽  
Ahed Abugabah

The Vernam cipher, known as the one-time pad, is an unbreakable algorithm because it uses a truly random key equal in length to the data to be encoded, and each element of the text is encrypted with an element of the encryption key. In this paper, we propose a novel technique to overcome the obstacles that hinder the use of the Vernam algorithm. First, the Vernam and Advanced Encryption Standard (AES) algorithms are used to encrypt the data as well as to hide the encryption key; second, a password is placed on the file through the use of the AES algorithm, so the level of protection becomes very high. The Huffman algorithm is then used to compress the data and reduce the size of the output file. A set of files is encrypted and decrypted using our methodology. The experiments demonstrate the flexibility of our method and its success without losing any information.
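As an illustration of the Vernam step described above, the following is a minimal Python sketch (not the authors' implementation), assuming a truly random key of the same length as the data; XOR-ing a second time with the same key decrypts.

import os

def vernam(data: bytes, key: bytes) -> bytes:
    # XOR each data byte with the matching key byte; applying the
    # same operation again with the same key restores the original.
    assert len(key) == len(data), "one-time-pad key must match data length"
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"attack at dawn"
key = os.urandom(len(plaintext))   # truly random, used only once
ciphertext = vernam(plaintext, key)
assert vernam(ciphertext, key) == plaintext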


2018 ◽  
Author(s):  
Andysah Putera Utama Siahaan

Compression aims to reduce the size of data before storing it or moving it to storage media. Huffman and Elias Delta Code are the two algorithms used for the compression process in this research; both are applied to compress text files. The two algorithms share the same overall workflow: they start by sorting characters based on their frequency, proceed to binary tree formation, and end with code formation. In the Huffman algorithm, the binary tree is formed from the leaves to the root, which is called bottom-up tree construction; the Elias Delta Code method, in contrast, uses a different technique. Text file compression is performed by reading the input string from a text file and encoding the string with both algorithms. The compression results show that the Huffman algorithm is better overall than Elias Delta Code.
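To illustrate the Elias Delta side of this comparison, here is a minimal Python sketch of one common way to apply Elias delta codes to text, which appears to match the workflow described above (an assumption, not the study's code): characters are ranked by descending frequency, and each character is replaced by the Elias delta codeword of its 1-based rank, so frequent characters receive short codewords.

from collections import Counter

def elias_delta(n: int) -> str:
    # Elias delta code of a positive integer: the Elias gamma code of
    # n's bit length, followed by n's bits without the leading 1.
    binary = bin(n)[2:]
    len_bits = bin(len(binary))[2:]
    gamma = "0" * (len(len_bits) - 1) + len_bits
    return gamma + binary[1:]

def elias_delta_codes(text: str) -> dict:
    # Rank characters by descending frequency; rank 1 (the most
    # frequent character) receives the shortest codeword.
    ranked = [sym for sym, _ in Counter(text).most_common()]
    return {sym: elias_delta(i) for i, sym in enumerate(ranked, start=1)}

codes = elias_delta_codes("abracadabra")   # 'a' -> '1', 'b' -> '0100', ...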


d'CARTESIAN ◽  
2013 ◽  
Vol 2 (2) ◽  
pp. 10 ◽  
Author(s):  
Christine Lamorahan ◽  
Benny Pinontoan ◽  
Nelson Nainggolan

Abstract: Communication systems in the world of information and communication technology are known as data transfer systems. Sometimes the information received loses its authenticity because the size of the data to be transferred exceeds the capacity of the medium used. This problem can be reduced by applying a compression process that shrinks the data to a smaller size. This study considers compression of text data using the Shannon-Fano algorithm and shows how effective that algorithm is at compression compared with the Huffman algorithm. The research shows that text data compression using the Shannon-Fano algorithm is as effective as the Huffman algorithm when every character in the string is repeated, and when the statement is short and only one character in it is repeated; however, the Shannon-Fano algorithm is more effective than the Huffman algorithm when the data contain long statements and the text has a more varied combination of characters in a statement, string, or word. Keywords: Data compression, Huffman algorithm, Shannon-Fano algorithm
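For concreteness, a minimal Python sketch of the top-down Shannon-Fano construction (illustrative only, not the study's implementation): symbols are sorted by descending frequency, the list is split where the two halves' frequency totals are as balanced as possible, a 0 or 1 is appended to each half's prefix, and the procedure recurses.

from collections import Counter

def shannon_fano(freqs: list, prefix: str = "") -> dict:
    # freqs: list of (symbol, count) pairs sorted by count, descending.
    if len(freqs) == 1:
        return {freqs[0][0]: prefix or "0"}
    total = sum(count for _, count in freqs)
    running, split, best = 0, 1, float("inf")
    for i in range(1, len(freqs)):            # try every split point
        running += freqs[i - 1][1]
        imbalance = abs(2 * running - total)  # |left total - right total|
        if imbalance < best:
            best, split = imbalance, i
    codes = shannon_fano(freqs[:split], prefix + "0")        # upper half gets 0
    codes.update(shannon_fano(freqs[split:], prefix + "1"))  # lower half gets 1
    return codes

freqs = Counter("abracadabra").most_common()  # sorted most frequent first
codes = shannon_fano(freqs)                   # e.g. 'a' -> '0', 'b' -> '10', ...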


2011 ◽  
Vol 1 (1) ◽  
pp. 16-22 ◽  
Author(s):  
Satpreet Singh ◽  
Harmandeep Singh

In the information age, sending data from one end to another requires a lot of space as well as time. Data compression is a technique to represent an information source (e.g. a data file, a speech signal, an image, or a video signal) in as few bits as possible. Among the major factors that influence a data compression technique are the procedure used to encode the source data and the space required for the encoded data. Of the many methods used for data compression, Huffman coding is the most widely used. Huffman algorithms come in two variants, static and adaptive. The static Huffman algorithm encodes the data in two passes: the first pass calculates the frequency of each symbol, and the second pass constructs the Huffman tree and encodes the data. The adaptive Huffman algorithm extends the Huffman algorithm by constructing the Huffman tree on the fly, but it takes more space than the static Huffman algorithm. This paper introduces a new data compression algorithm based on Huffman coding. The algorithm not only reduces the number of passes but also requires less storage space than the adaptive Huffman algorithm, comparable to the static one.
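To make the two-pass static scheme concrete, here is a minimal illustrative Python sketch (the paper's own reduced-pass algorithm is not reproduced here): the first pass counts symbol frequencies, the tree is then built bottom-up with a min-heap, and the second pass encodes with the finished code table.

import heapq
from collections import Counter

def build_codes(freqs: Counter) -> dict:
    # Bottom-up Huffman construction: a min-heap repeatedly merges the
    # two least frequent subtrees (assumes at least two distinct symbols).
    heap = [[count, [sym, ""]] for sym, count in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]   # left branch
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]   # right branch
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return {sym: code for sym, code in heap[0][1:]}

def static_huffman(text: str):
    freqs = Counter(text)                       # pass 1: count every symbol
    codes = build_codes(freqs)                  # build the tree once
    encoded = "".join(codes[c] for c in text)   # pass 2: encode with the table
    return codes, encoded

codes, bits = static_huffman("this is an example of static huffman coding")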

