A Fast and Efficient Lossless Compression Technique for Greyscale Images

The growth of cloud-based remote healthcare and diagnosis services has led Medical Service Providers (MSPs) to share diagnostic data across diverse environments. These medical data are accessed across diverse platforms, such as mobile and web services, which require large amounts of memory for storage. Compression helps to address storage requirements and enables sharing of medical data over a transmission medium. Loss of data is not acceptable in medical image processing. As a result, this work considers lossless compression for medical images in particular and greyscale images in general. Modified Huffman (MH) encoding is one of the widely used techniques for achieving lossless compression. However, due to the longer bit lengths of its codewords, the existing MH encoding technique is not efficient for medical image processing. Firstly, this work presents Modified Refined Huffman (MRH) for compressing greyscale and binary images using a diagonal scanning method. Secondly, to minimize computing time, a parallel encoding method is used. Experiments are conducted on a wide variety of images, and performance is evaluated in terms of compression ratio, computation time and memory utilization. The proposed MRH achieves significant performance improvement in compression ratio, computation time and memory usage over state-of-the-art techniques such as LZW, CCITT G4, JBIG2 and the Levenberg–Marquardt (LM) Neural Network algorithm. The overall results show the applicability of MRH to different application services.
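
For context, the sketch below shows how a classical Huffman code table can be built from the intensity histogram of a greyscale image. It illustrates plain Huffman coding only, not the MRH variant, the diagonal scanning, or the parallel encoding described above, and the function name is illustrative.

```python
import heapq
from collections import Counter

def huffman_code_table(pixels):
    """Build a prefix-free code table from greyscale pixel intensities (0-255)."""
    freq = Counter(pixels)
    # Each heap entry: (frequency, unique tie-breaker, partial {symbol: code} table).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    if len(heap) == 1:  # degenerate case: only one distinct intensity in the image
        (_, _, table), = heap
        return {sym: "0" for sym in table}
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        # Prepend one bit to every code in each merged subtree.
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

# Example: encode a tiny 3x3 "image" given as a flat list of intensities.
pixels = [12, 12, 12, 200, 200, 37, 37, 37, 37]
table = huffman_code_table(pixels)
bitstream = "".join(table[p] for p in pixels)
print(table, bitstream, sep="\n")
```

Frequent intensities receive shorter codewords, which is the property the MRH work builds on when it reduces codeword bit lengths further.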

2012
Vol 433-440
pp. 6540-6545
Author(s):  
Vineet Khanna
Hari Singh Choudhary
Prakrati Trivedi

This paper presents a new lossless compression technique for natural images. The proposed algorithm switches between the existing Edge Directed Prediction (EDP) algorithm and the Gradient Adaptive Predictor (GAP), with the switching criterion based on an adaptive threshold. EDP has higher computational complexity due to its Least Squares (LS) based parameter estimation, whereas GAP has relatively low computational complexity. Therefore, in order to reduce computational complexity, we combine the EDP and GAP methods into a hybrid scheme. The proposed algorithm is generic and produces the best results across different varieties of images in terms of both compression ratio and computational complexity.
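
For reference, the sketch below is one common formulation of the GAP predictor (as used, for example, in CALIC). The EDP predictor and the paper's adaptive switching threshold are not reproduced, and the border handling here is a simplifying assumption.

```python
def gap_predict(img, i, j):
    """Gradient Adjusted Prediction for pixel (i, j) of a 2-D greyscale array.
    Out-of-bounds neighbours are clamped to the nearest border pixel (an assumption)."""
    def px(r, c):
        r = max(r, 0)
        c = min(max(c, 0), len(img[0]) - 1)
        return img[r][c]

    W, WW = px(i, j - 1), px(i, j - 2)
    N, NN = px(i - 1, j), px(i - 2, j)
    NW, NE, NNE = px(i - 1, j - 1), px(i - 1, j + 1), px(i - 2, j + 1)

    dh = abs(W - WW) + abs(N - NW) + abs(N - NE)    # horizontal gradient estimate
    dv = abs(W - NW) + abs(N - NN) + abs(NE - NNE)  # vertical gradient estimate

    if dv - dh > 80:   # sharp horizontal edge: predict from the west neighbour
        return W
    if dh - dv > 80:   # sharp vertical edge: predict from the north neighbour
        return N
    pred = (W + N) / 2 + (NE - NW) / 4
    if dv - dh > 32:
        pred = (pred + W) / 2
    elif dv - dh > 8:
        pred = (3 * pred + W) / 4
    elif dh - dv > 32:
        pred = (pred + N) / 2
    elif dh - dv > 8:
        pred = (3 * pred + N) / 4
    return pred
```

The prediction residuals, rather than the raw pixels, are what the entropy coder then compresses.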


A massive volume of medical data is generating through advanced medical image modalities. With advancements in telecommunications, Telemedicine, and Teleradiologyy have become the most common and viable methods for effective health care delivery around the globe. For sufficient storage, medical images should be compressed using lossless compression techniques. In this paper, we aim at developing a lossless compression technique to achieve a better compression ratio with reversible data hiding. The proposed work segments foreground and background area in medical images using semantic segmentation with the Hierarchical Neural Architecture Search (HNAS) Network model. After segmenting the medical image, confidential patient data is hidden in the foreground area using the parity check method. Following data hiding, lossless compression of foreground and background is done using Huffman and Lempel-Ziv-Welch methods. The performance of our proposed method has been compared with those obtained from standard lossless compression algorithms and existing reversible data hiding methods. This proposed method achieves better compression ratio and a hundred percent reversible when data extraction.
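
As a rough illustration of parity-based embedding, the sketch below forces each carrier pixel's parity (its least significant bit) to match a secret bit. This is a minimal sketch only: it omits the reversible bookkeeping and the HNAS-based foreground selection of the proposed method, and the function names are illustrative.

```python
def embed_bits_parity(pixels, bits):
    """Embed secret bits by forcing each carrier pixel's parity to match a bit.
    The reversible bookkeeping of the original scheme (recording which pixels
    were modified so the image can be restored exactly) is omitted here."""
    out = list(pixels)
    for k, bit in enumerate(bits):
        if out[k] % 2 != bit:   # parity mismatch: flip the least significant bit
            out[k] ^= 1
    return out

def extract_bits_parity(pixels, n_bits):
    """Recover the embedded bits by reading pixel parities."""
    return [pixels[k] % 2 for k in range(n_bits)]

# Usage: hide the bit pattern 1011 in four foreground pixel values.
stego = embed_bits_parity([154, 73, 90, 201], [1, 0, 1, 1])
assert extract_bits_parity(stego, 4) == [1, 0, 1, 1]
```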


Author(s):  
Restu Maulunida
Achmad Solichin

At present, data access has largely shifted to digital data, and its use has been growing very rapidly. This transformation is driven by the rapidly growing use of the Internet and the massive growth of mobile devices. People tend to store many files and transfer them from one medium to another, and as storage media approach their limits, fewer files can be stored. A compression technique is therefore required to reduce file size. Dictionary coding is one of the lossless compression techniques, and LZW is an algorithm that applies it. In the LZW algorithm, the dictionary is formed using a future-based dictionary and the encoding process uses fixed-length codes. This allows the encoding process to produce output sequences that are still quite long. This study modifies the dictionary formation process and uses variable-length codes to optimize the compression ratio. Based on tests using the data in this study, the average compression ratio for the LZW algorithm is 42.85%, and for our proposed algorithm it is 38.35%. This shows that our proposed modification of dictionary formation has not been able to improve the compression ratio of the LZW algorithm.
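
For reference, the sketch below is the classic LZW encoder with a byte-initialized dictionary and integer output codes. The study's modified dictionary formation and variable-length output coding are not reproduced here.

```python
def lzw_encode(data: bytes):
    """Classic LZW: emit the dictionary index of the longest known prefix,
    then extend the dictionary with that prefix plus the next symbol."""
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc                      # keep growing the current match
        else:
            out.append(dictionary[w])   # emit code for the longest match
            dictionary[wc] = next_code  # add the new phrase to the dictionary
            next_code += 1
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out

print(lzw_encode(b"TOBEORNOTTOBEORTOBEORNOT"))
```

In this baseline each output index is typically written with a fixed code width, which is exactly the aspect the study tries to improve with variable-length codes.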


Author(s):  
Mumtaz Anwar Hussin
Farhana Ahmad Poad
Ariffuddin Joret
...  

Nowadays, technology involving multimedia data is widely used to spread information and support better understanding. An image is a 2D signal containing a large amount of data, especially a high-resolution image. This paper compares the application of lossy and lossless compression to image data. Image compression is necessary to reduce image size for storage or transmission in most applications today. The technique applied in this paper is a hybrid of the Discrete Wavelet Transform (DWT) technique and Huffman coding, which are classified as lossy and lossless compression, respectively. The performance of image compression is evaluated in terms of compression ratio, Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM) and computing time. These evaluations help determine the better technique for a specific type of application. The stand-alone DWT and Huffman techniques are evaluated before applying the hybrid DWT-Huffman technique. After a comprehensive observation, the hybrid technique compresses with a ratio of about 1:17 to 1:27, owing to the filtering applied by the DWT. The MSE value is high, with an average of about 69, which contributes to a low PSNR of about 29 to 30 dB, given the relation between PSNR and MSE. Moreover, the SSIM value is 0.6, about 40% away from the original image, which affects the output image. Despite that, the computing time is fast, at about 3 to 4 seconds, an improvement over the stand-alone Huffman technique. Therefore, in the hybrid scheme the two techniques complement each other's weaknesses as stand-alone techniques.
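
The quality metrics used above can be computed directly. The sketch below is a minimal illustration assuming flat sequences of 8-bit greyscale pixels (peak value 255); it is not tied to the specific DWT-Huffman implementation evaluated in the paper.

```python
import math

def mse(original, reconstructed):
    """Mean squared error between two equal-length flat pixel sequences."""
    return sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)

def psnr(original, reconstructed, peak=255):
    """Peak Signal-to-Noise Ratio in dB for 8-bit images (peak value 255)."""
    err = mse(original, reconstructed)
    return float("inf") if err == 0 else 10 * math.log10(peak ** 2 / err)

def compression_ratio(original_bytes, compressed_bytes):
    """Ratio of original to compressed size; 17.0 corresponds to 1:17."""
    return original_bytes / compressed_bytes

# An MSE of 69 gives PSNR = 10*log10(255**2 / 69), roughly 29.7 dB,
# consistent with the 29-30 dB range reported above.
print(psnr([120] * 4, [128, 112, 125, 115]))
```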


Author(s):  
Emy Setyaningsih
Agus Harjoko

A compression process reduces the size of data while maintaining the quality of the information contained therein. This paper presents a survey of research papers discussing improvements to various hybrid compression techniques during the last decade. A hybrid compression technique combines the best properties of each group of methods, as is done in the JPEG compression method. Such a technique combines lossy and lossless compression to obtain a high compression ratio while maintaining the quality of the reconstructed image. Lossy compression produces a relatively high compression ratio, whereas lossless compression yields high-quality data reconstruction, since the data can later be decompressed to exactly the same result as before compression. Discussion of the current knowledge of and issues in ongoing hybrid compression development indicates the possibility of further research to improve the performance of image compression methods.


Author(s):  
Hikka Sartika
Taronisokhi Zebua

The storage space required by an application is one of the problems on smartphones. It can lead to wasted storage space because not all smartphones have a very large storage capacity. One application with a large file size is the RPUL application, which is widely accessed by students and the general public. This large file size often prevents the application from running effectively on smartphones. One solution to this problem is to compress the application file, so that the storage space needed on the smartphone is much smaller. This study describes the application of the Elias gamma code algorithm, one of the compression technique algorithms, to compress the RPUL application database file, so that the RPUL application can run effectively on a smartphone after installation. Based on trials conducted on a 64-bit text sample, compression based on the Elias gamma code algorithm is able to compress text from a database file by a factor of 2, i.e., with a compression ratio of 50% and a redundancy of 50%.
Keywords: Compression, RPUL, Smartphone, Elias Gamma Code
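
For reference, the Elias gamma code writes floor(log2 n) zero bits followed by the binary representation of n. The sketch below is a minimal encoder/decoder for positive integers and is not tied to the specific symbol-to-integer mapping used for the RPUL database in the study.

```python
def elias_gamma_encode(n: int) -> str:
    """Elias gamma code: floor(log2 n) zero bits, then the binary form of n."""
    if n < 1:
        raise ValueError("Elias gamma is defined for integers >= 1")
    binary = bin(n)[2:]
    return "0" * (len(binary) - 1) + binary

def elias_gamma_decode(bits: str):
    """Decode a concatenation of Elias gamma codewords back to integers."""
    values, i = [], 0
    while i < len(bits):
        zeros = 0
        while bits[i] == "0":   # count the unary length prefix
            zeros += 1
            i += 1
        values.append(int(bits[i:i + zeros + 1], 2))
        i += zeros + 1
    return values

codes = [elias_gamma_encode(n) for n in (1, 2, 9)]   # ['1', '010', '0001001']
assert elias_gamma_decode("".join(codes)) == [1, 2, 9]
```

Small integers get very short codewords, which is why the code suits data dominated by frequent, low-valued symbols.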


Author(s):  
Winda Winda
Taronisokhi Zebua

The size of the data owned by an application strongly influences the amount of memory space needed, and this is especially true of mobile-based applications. One mobile application widely used by students and the public at this time is the Complete Natural Knowledge Summary (Rangkuman Pengetahuan Alam Lengkap, or RPAL) application. After installation, the RPAL application requires a large amount of storage space for its material in mobile memory, which can make the application ineffective (slow). Data compression can be used as a solution to reduce the size of the data and thereby minimize the memory space needed. The Levenstein algorithm is a compression technique that can be used to compress the material stored in the RPAL application database, so that the database size is small. This study describes how to compress the RPAL application database records so as to minimize the memory space needed. Based on tests conducted on 128 characters of data (200 bits), a compressed result of 136 bits (17 characters) was obtained, with a compression ratio of 68% and a redundancy of 32%.
Keywords: compression, Levenstein, application, RPAL, text, database, mobile
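
The reported figures follow directly from the usual definitions of compression ratio and redundancy. The sketch below shows that arithmetic only, independent of the Levenstein coding itself.

```python
def compression_stats(original_bits: int, compressed_bits: int):
    """Compression ratio (compressed/original) and redundancy (space saving)."""
    ratio = compressed_bits / original_bits
    redundancy = 1 - ratio
    return ratio, redundancy

# 200 input bits compressed to 136 bits, as reported above.
ratio, redundancy = compression_stats(200, 136)
print(f"ratio = {ratio:.0%}, redundancy = {redundancy:.0%}")   # ratio = 68%, redundancy = 32%
```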


2018
Vol 8 (9)
pp. 1471
Author(s):  
Seo-Joon Lee
Gyoun-Yon Cho
Fumiaki Ikeno
Tae-Ro Lee

Due to the development of high-throughput DNA sequencing technology, genome-sequencing costs have been significantly reduced, which has led to a number of revolutionary advances in the genetics industry. However, compared to the decrease in the time and cost needed for DNA sequencing, the management of such large volumes of data is still an issue. Therefore, this research proposes Blockchain Applied FASTQ and FASTA Lossless Compression (BAQALC), a lossless compression algorithm that allows efficient transmission and storage of the immense amounts of DNA sequence data being generated by Next Generation Sequencing (NGS). Security and reliability issues also exist in public sequence databases. For evaluation, compression ratios were compared for genetic biomarkers corresponding to the five diseases with the highest mortality rates according to the World Health Organization. The results showed an average compression ratio of approximately 12 across all the genetic datasets used. BAQALC performed especially well for lung cancer genetic markers, with a compression ratio of 17.02. BAQALC performed better not only than widely used compression algorithms, but also than algorithms described in previously published research. The proposed solution is envisioned to provide an efficient and secure transmission and storage platform for next-generation medical informatics based on smart devices, for both researchers and healthcare users.


Author(s):  
T Kavitha
K. Jayasankar

Compression techniques are adopted to solve various big data problems such as storage and transmission. The growth of the cloud computing and smartphone industries has led to the generation of huge volumes of digital data. Digital data can take various forms, such as audio, video, images and documents, and are generally compressed and stored in a cloud storage environment. Efficient storage and retrieval of digital data, achieved by adopting a good compression technique, reduces cost. Compression techniques are divided into lossy and lossless techniques. Here we consider lossless image compression: minimizing the number of bits used for encoding improves coding efficiency and yields high compression. Fixed-length coding cannot guarantee a minimal bit length; to minimize the number of bits, variable-length codes with a prefix-free property are preferred. However, existing compression models induce high computing overhead. To address this issue, this work presents an ideal and efficient modified Huffman technique that improves the compression factor by up to 33.44% for bi-level images and 32.578% for half-tone images. The average computation time for both encoding and decoding shows an improvement of 20.73% for bi-level images and 28.71% for half-tone images. The proposed work achieves an overall 2% increase in coding efficiency, and reduces memory usage by 0.435% for bi-level images and 0.19% for half-tone images. The overall results show that the proposed model can be adopted to support ubiquitous access to digital data.
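
To make the coding-efficiency notion concrete, the sketch below computes coding efficiency as source entropy divided by average codeword length for a toy intensity distribution, comparing a fixed-length code with a prefix-free variable-length code. The code tables are illustrative assumptions and are unrelated to the proposed modified Huffman scheme.

```python
import math
from collections import Counter

def coding_efficiency(symbols, code_table):
    """Coding efficiency = source entropy / average codeword length (bits per symbol)."""
    freq = Counter(symbols)
    total = len(symbols)
    probs = {s: f / total for s, f in freq.items()}
    entropy = -sum(p * math.log2(p) for p in probs.values())
    avg_len = sum(probs[s] * len(code_table[s]) for s in probs)
    return entropy / avg_len

# A skewed four-level source: the variable-length prefix-free table assigns
# shorter codewords to more probable intensities and so comes closer to entropy.
pixels = [0] * 60 + [85] * 20 + [170] * 15 + [255] * 5
fixed    = {0: "00", 85: "01", 170: "10", 255: "11"}       # 2-bit fixed-length code
variable = {0: "0",  85: "10", 170: "110", 255: "111"}     # prefix-free variable-length code
print(coding_efficiency(pixels, fixed), coding_efficiency(pixels, variable))
```

On this toy distribution the variable-length table reaches roughly 96% efficiency versus about 77% for the fixed-length table, which is the kind of gap the proposed technique targets.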

