Implementasi Algoritma J-Bit Encoding Pada Kompresi File PDF (Implementation of the J-Bit Encoding Algorithm for PDF File Compression)

2020 · Vol 1 (3) · pp. 278
Author(s): Frida Effelyanti Naibaho

Compression reduces data to a size smaller than the original, typically by replacing repeated characters with shorter patterns. Data to be stored can therefore be compressed first so that it takes up less space. The advantage of PDF files is that the format can store not only text but also images and photos, and data stored in PDF files is not very susceptible to viruses. One PDF compression technique that also handles images well is the J-Bit Encoding method. J-Bit Encoding (JBE) is an algorithm that optimizes its input for other compression algorithms, so a compression algorithm generally performs better when combined with JBE.
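
The abstract does not spell out JBE's mechanics. Below is a minimal sketch, assuming the common formulation of JBE: the input is split into a stream of nonzero bytes (data I) and a bitmap marking which positions were zero (data II), and both streams are then handed to a back-end compressor (zlib here is an arbitrary stand-in).

```python
import zlib

def jbe_encode(data: bytes) -> tuple[bytes, bytes]:
    """Split input into nonzero bytes (data I) and a zero/nonzero bitmap (data II)."""
    nonzero, bitmap = bytearray(), bytearray()
    bit_buf, bit_count = 0, 0
    for byte in data:
        bit_buf <<= 1
        if byte != 0:
            nonzero.append(byte)
            bit_buf |= 1                  # 1 = byte kept in data I, 0 = zero byte
        bit_count += 1
        if bit_count == 8:
            bitmap.append(bit_buf)
            bit_buf, bit_count = 0, 0
    if bit_count:                         # flush a partial final bitmap byte
        bitmap.append(bit_buf << (8 - bit_count))
    return bytes(nonzero), bytes(bitmap)

# JBE is a preprocessor: its combined output still goes through a real compressor.
raw = b"header\x00\x00\x00\x00payload\x00\x00"
data_i, data_ii = jbe_encode(raw)
packed = zlib.compress(len(raw).to_bytes(4, "big") + data_ii + data_i)
```

Decoding reverses the walk: read the bitmap bit by bit, taking the next byte from data I for a 1 and emitting a zero byte for a 0.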

This paper proposes an improved data compression technique based on the existing Lempel-Ziv-Welch (LZW) algorithm. LZW is a dictionary-based compression technique that stores strings from the data as codes and reuses those codes when the strings recur. When the dictionary gets full, every element in it is removed so that the dictionary can be updated with new entries. The conventional method therefore ignores frequently used strings and removes all entries, which makes it ineffective when the data to be compressed are large and contain many frequently occurring strings. This paper presents two new methods that improve on the existing LZW algorithm. When the dictionary gets full, the elements that have not been used are removed, rather than every element as in the existing algorithm. This is achieved by adding a flag to every element of the dictionary; whenever an element is used, its flag is set high. When the dictionary gets full, entries whose flags are high are kept and the others are discarded. In the first method the unused entries are discarded all at once, whereas in the second method they are removed one at a time, which gives nascent dictionary entries more time to prove useful. All three techniques produce similar results on small data sets, because they differ only in how they handle a full dictionary, a state small inputs never reach; the improvements therefore pay off only on relatively large data. On a data set yielding the best-case scenario, the compression ratio of conventional LZW is smaller than that of improved LZW method 1, which in turn is smaller than that of improved LZW method 2.
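
A minimal sketch of method 1's flag-based eviction follows, assuming 12-bit codes and treating an entry as "used" when its code is emitted; the paper's exact bookkeeping may differ. A matching decoder must perform the identical eviction at the identical point to stay synchronized.

```python
MAX_DICT = 4096  # assumed 12-bit code space, for illustration

def lzw_compress_flagged(data: bytes) -> list[int]:
    """LZW that, on a full dictionary, keeps flagged (used) entries
    and discards the rest in one sweep (improved method 1)."""
    dictionary = {bytes([i]): i for i in range(256)}
    used = {code: False for code in range(256)}
    next_code, out, w = 256, [], b""
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc
            continue
        code = dictionary[w]
        out.append(code)
        used[code] = True                 # flag set high: this entry was used
        if next_code >= MAX_DICT:
            # keep single-byte roots and flagged entries; renumber the rest away
            kept = [s for s, c in dictionary.items() if len(s) == 1 or used[c]]
            dictionary = {s: i for i, s in enumerate(kept)}
            used = {c: False for c in dictionary.values()}  # flags reset per sweep
            next_code = len(dictionary)
        dictionary[wc] = next_code
        used[next_code] = False
        next_code += 1
        w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out
```

Method 2 would instead evict a single unused entry per overflow, so newly added strings are not thrown away before they have had a chance to recur.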


2018 · Vol 173 · pp. 03071
Author(s): Wu Wenbin, Yue Wu, Jintao Li

In this paper, we propose a lossless compression algorithm for hyperspectral images built on K-Means clustering and parallel prediction. The K-Means clustering algorithm classifies the hyperspectral image, yielding a number of two-dimensional sub-images, which we compress with an adaptive prediction algorithm based on the absolute ratio. The traditional prediction algorithm operates in a serial processing mode and its processing time is long, so we parallelize the prediction step to meet the need for rapid compression. We compare a variety of hyperspectral image compression algorithms with the proposed method. The experimental results show that the proposed algorithm effectively improves the compression ratio of hyperspectral images while reducing compression time.
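
A minimal sketch of the pipeline follows, assuming pixels are clustered by spectral signature and each cluster is decorrelated independently. The paper's adaptive absolute-ratio predictor is not specified in the abstract, so a simple previous-band predictor stands in, and a thread pool stands in for the paper's parallel scheme.

```python
import numpy as np
from sklearn.cluster import KMeans
from concurrent.futures import ThreadPoolExecutor

def cluster_and_predict(cube: np.ndarray, k: int = 8):
    """cube: (rows, cols, bands) hyperspectral cube. Returns per-pixel
    cluster labels and per-cluster inter-band prediction residuals."""
    h, w, b = cube.shape
    pixels = cube.reshape(-1, b).astype(np.float32)
    labels = KMeans(n_clusters=k, n_init=4).fit_predict(pixels)

    def residuals_for(cid: int):
        pts = pixels[labels == cid]
        res = np.diff(pts, axis=1)        # previous-band predictor (stand-in)
        # keep the first band verbatim so the transform is invertible
        return cid, np.concatenate([pts[:, :1], res], axis=1)

    with ThreadPoolExecutor() as pool:    # clusters processed in parallel
        parts = dict(pool.map(residuals_for, range(k)))
    return labels.reshape(h, w), parts
```

The residuals are losslessly invertible (a cumulative sum over bands recovers the originals) and are what a downstream entropy coder would consume.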


2020 · Vol 32 · pp. 03006
Author(s): D. Suneetha, D. Rathna Kishore, P. Narendra Babu

Data compression in cryptography is an interesting research topic. Compression reduces the amount of data to be transferred as well as the storage space required, which in turn affects bandwidth usage. Further, when a plain text is converted to cipher text, the cipher text becomes longer, which adds considerable storage overhead. It is extremely important to address this storage issue along with the security issues of exponentially growing information. The problem can be resolved by compressing the ciphertext with a suitable compression algorithm. The proposed work uses a palindrome compression technique. The compression ratio of the proposed method is better than that of the standard method for both color and gray-scale images, and experimental results show that it outperforms existing methods for different types of images.


2015 · Vol 731 · pp. 57-61
Author(s): Ju Hua Liu, Yao Hua Yi, Yuan Yuan, Hai Su, Min Jing Miao

Since traditional gamut compression algorithms fail to consider an image's spatial characteristics, a novel gamut compression method based on those characteristics is proposed in this paper. First, the image is compressed by a traditional gamut compression algorithm; then the compensation values of lightness and chroma, obtained with a high-pass filter, are added to the compressed image; finally, gamut clipping is carried out. Experimental results indicate that the proposed method not only preserves color features but also preserves the image's spatial characteristics quite well.
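
A minimal sketch of the three steps on a CIELAB image, assuming a simple chroma-scaling stand-in for the traditional gamut mapping and a Gaussian low-pass to derive the high-pass compensation; the clipping limits are illustrative numeric bounds, not a measured device gamut.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spatial_gamut_compress(lab: np.ndarray, gamut_scale: float = 0.8,
                           sigma: float = 3.0) -> np.ndarray:
    """lab: (H, W, 3) image in CIELAB, channels (L*, a*, b*)."""
    lab = lab.astype(np.float64)

    # step 1: stand-in for a traditional gamut compression
    # (scale chroma toward neutral; real systems map toward a gamut boundary)
    compressed = lab.copy()
    compressed[..., 1:] *= gamut_scale

    # step 2: re-inject high-pass lightness/chroma detail from the original
    lowpass = gaussian_filter(lab, sigma=(sigma, sigma, 0))
    compressed += lab - lowpass

    # step 3: gamut clipping (illustrative limits)
    compressed[..., 0] = np.clip(compressed[..., 0], 0.0, 100.0)
    compressed[..., 1:] = np.clip(compressed[..., 1:], -128.0, 127.0)
    return compressed
```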


2016 · Vol 2016 · pp. 1-7
Author(s): Pamela Vinitha Eric, Gopakumar Gopalakrishnan, Muralikrishnan Karunakaran

This paper proposes a seed-based lossless compression algorithm for DNA sequences that uses a substitution method similar to the Lempel-Ziv compression scheme. The proposed method exploits the repeat structures inherent in DNA sequences by creating an offline dictionary that contains all such repeats along with the details of mismatches. By ensuring that only promising mismatches are allowed, the method achieves a compression ratio on par with or better than existing lossless DNA sequence compression algorithms.
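
The paper's exact dictionary construction is not given in the abstract; below is a minimal greedy seed-and-extend sketch, assuming repeats are found by indexing fixed-length seeds and extending matches while a small mismatch budget (the "promising" mismatches) holds. `seed_len` and `max_mismatch` are illustrative parameters.

```python
def find_repeats(seq: str, seed_len: int = 12, max_mismatch: int = 2):
    """Offline repeat dictionary: entries are (source_pos, target_pos,
    length, mismatches), where mismatches lists (offset, actual_base)."""
    index, repeats = {}, []
    i = 0
    while i <= len(seq) - seed_len:
        seed = seq[i:i + seed_len]
        if seed in index:
            j = index[seed]               # earlier occurrence of this seed
            length, mism = seed_len, []
            # extend the match while the mismatch budget holds
            while i + length < len(seq) and j + length < i:
                if seq[j + length] != seq[i + length]:
                    if len(mism) == max_mismatch:
                        break
                    mism.append((length, seq[i + length]))
                length += 1
            repeats.append((j, i, length, mism))
            i += length                   # the repeat covers this span
        else:
            index[seed] = i
            i += 1
    return repeats
```

A decoder replays each entry by copying `length` bases from `source_pos` and patching in the recorded mismatches, which is what makes the scheme lossless.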


2018 · Vol 16 (05) · pp. 1850018
Author(s): Sanjeev Kumar, Suneeta Agarwal, Ranvijay

Genomic data nowadays play a vital role in a number of fields such as personalized medicine, forensics, drug discovery, sequence alignment, and agriculture. With the advancements in and falling cost of next-generation sequencing (NGS) technology, these data are growing exponentially; NGS data are being generated more rapidly than they can be meaningfully analyzed. Thus, there is much scope for novel data compression algorithms that facilitate data analysis along with data transfer and storage. An innovative compression technique is proposed here to address the problem of transmitting and storing large NGS data. This paper presents a lossless, non-reference-based FastQ file compression approach that segregates the data into three different streams and then applies an appropriate and efficient compression algorithm to each. Experiments show that the proposed approach (WBFQC) outperforms other state-of-the-art approaches for compressing NGS data in terms of compression ratio (CR) and compression and decompression time, and it also supports random access over the compressed genomic data. An open-source FastQ compression tool is also provided ( http://www.algorithm-skg.com/wbfqc/home.html ).
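
A FastQ record is four lines: identifier, sequence, a "+" separator, and quality scores, so the natural three-way split is identifiers, sequences, and qualities. The sketch below assumes that split, with an arbitrary off-the-shelf compressor per stream; WBFQC's actual per-stream codecs are not described in the abstract.

```python
import bz2
import zlib

def compress_fastq(path: str) -> dict[str, bytes]:
    """Split a FastQ file into identifier, sequence, and quality streams,
    then compress each stream separately (illustrative codec choices)."""
    ids, seqs, quals = [], [], []
    with open(path) as fh:
        while True:
            header = fh.readline()
            if not header:
                break
            ids.append(header.rstrip("\n"))
            seqs.append(fh.readline().rstrip("\n"))
            fh.readline()                 # skip the '+' separator line
            quals.append(fh.readline().rstrip("\n"))
    return {
        "ids":   zlib.compress("\n".join(ids).encode(), 9),    # repetitive headers
        "seqs":  bz2.compress("\n".join(seqs).encode(), 9),    # 4-letter alphabet
        "quals": bz2.compress("\n".join(quals).encode(), 9),   # smooth score runs
    }
```

Separating the streams pays off because each has very different statistics, so a compressor tuned (or simply retrained) per stream beats compressing the interleaved file as a whole.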


2016 · Vol 78 (6-4)
Author(s): Muhamad Azlan Daud, Muhammad Rezal Kamel Ariffin, S. Kularajasingam, Che Haziqah Che Hussin, Nurliyana Juhan, ...

A new compression algorithm is proposed to make a modified Baptista symmetric cryptosystem, based on a chaotic dynamical system, applicable in practice. The Baptista symmetric cryptosystem is able to produce various ciphers in response to the same message input, but this modified Baptista-type cryptosystem suffers from message expansion, which goes against the conventional methodology of a symmetric cryptosystem. A new lossless data compression algorithm based on ideas from Huffman coding for data transmission is therefore proposed. The new compression mechanism does not face the problem of mapping elements from a domain much larger than its range: our algorithm circumvents this problem via a pre-defined codeword list. The proposed algorithm has fast encoding and decoding mechanisms and is proven analytically to be a lossless data compression technique.
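
The abstract does not publish the codeword list, so the table below is hypothetical; the sketch only illustrates how a pre-defined (static, Huffman-style) prefix code shared by both parties gives fast, lossless encoding and decoding without building a tree per message.

```python
# Hypothetical pre-defined codeword list: a static prefix code agreed on
# in advance, so no per-message tree construction is needed.
CODEWORDS = {"a": "0", "b": "10", "c": "110", "d": "111"}
DECODE = {v: k for k, v in CODEWORDS.items()}

def encode(msg: str) -> str:
    return "".join(CODEWORDS[ch] for ch in msg)

def decode(bits: str) -> str:
    out, buf = [], ""
    for bit in bits:
        buf += bit
        if buf in DECODE:                 # the prefix property makes this unambiguous
            out.append(DECODE[buf])
            buf = ""
    assert buf == "", "truncated codeword"
    return "".join(out)

assert decode(encode("abcd")) == "abcd"   # lossless round trip
```

Because no codeword is a prefix of another, decoding is a single left-to-right scan, which is what gives the fast decoding mechanism the abstract claims.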


Modern radiology techniques provide crucial medical information for radiologists to diagnose diseases and determine appropriate treatments. Medical image compression therefore has to balance good perceptual quality (i.e., diagnostically lossless results) against a high compression rate. The objective is to find an optimal medical image compression algorithm, and in particular to select one that does not change the characterization behavior of the image.


2018 · Vol 12 (11) · pp. 387
Author(s): Evon Abu-Taieh, Issam AlHadid

Multimedia is a highly competitive world, and one of the properties on which it competes is the speed of downloading and uploading multimedia elements: text, sound, pictures, and animation. This paper presents the CRUSH algorithm, a lossless compression algorithm that can be used to compress files. The CRUSH method is fast and simple, with time complexity O(n), where n is the number of elements being compressed. Furthermore, the compressed file is independent of the algorithm and carries no unnecessary data structures. The paper compares CRUSH with other compression algorithms: Shannon-Fano coding, Huffman coding, Run-Length Encoding (RLE), Arithmetic Coding, Lempel-Ziv-Welch (LZW), the Burrows-Wheeler Transform, the Move-to-Front (MTF) Transform, Haar, wavelet tree, Delta Encoding, Rice & Golomb Coding, Tunstall coding, the DEFLATE algorithm, and Run-Length Golomb-Rice (RLGR).

