Comparative Performance Evaluation of Three Image Compression Algorithms

2017 ◽  
Vol 4 (1) ◽  
pp. 113-126
Author(s):  
Jide Julius Popoola ◽  
Michael Elijah Adekanye

The advent of the computer and the internet has brought about massive change to the way images are managed. This revolution has changed image processing and management, and has created a huge space requirement for uploading, downloading, transferring and storing images. To guard against this huge space requirement, images need to be compressed before being stored or transmitted. Several image compression algorithms and techniques have been reported in the literature. In this study, three of these image compression algorithms were implemented using MATLAB code: discrete cosine transform (DCT), discrete wavelet transform (DWT) and set partitioning in hierarchical trees (SPIHT). In order to ascertain which of them is most appropriate for image storage and transmission, comparative performance evaluations were conducted on the three algorithms using five performance indices. The results show that all three algorithms are effective for image compression but differ in efficiency. In addition, the results show that DWT has the highest compression ratio and distortion level, SPIHT the lowest, with DCT falling in between. The results also confirm that the lower the mean square error and the higher the peak signal-to-noise ratio, the lower the distortion level in the compressed image.
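
The inverse MSE/PSNR relationship reported above follows directly from the definition PSNR = 10*log10(MAX^2/MSE): a smaller mean square error necessarily gives a larger peak signal-to-noise ratio. A minimal Python sketch of both indices (assuming 8-bit grayscale images held in NumPy arrays; an illustration, not the paper's MATLAB code):

```python
import numpy as np

def mse(original: np.ndarray, compressed: np.ndarray) -> float:
    """Mean square error between two same-sized images."""
    diff = original.astype(np.float64) - compressed.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original: np.ndarray, compressed: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher PSNR means less distortion."""
    err = mse(original, compressed)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / err)
```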

Author(s):  
S. ARIVAZHAGAN ◽  
D. GNANADURAI ◽  
J. R. ANTONY VANCE ◽  
K. M. SAROJINI ◽  
L. GANESAN

With the fast evolution of multimedia systems, image compression algorithms are much needed to achieve effective transmission and compact storage by removing redundant information from the image data. Wavelet transforms have recently received significant attention due to their suitability for a number of important signal and image compression applications, the lapped nature of the transform, and its computational simplicity, which comes in the form of filter bank implementations. In this paper, the implementation on a DSP processor of image compression algorithms based on the discrete wavelet transform, namely the embedded zerotree wavelet (EZW) coder, the set partitioning in hierarchical trees coder without lists (SPIHT-No List) and the packetizable zerotree wavelet (PZW) coder, is dealt with in detail, and their performance is analyzed in terms of different compression ratios, execution time and different packet losses. PSNR is used as the criterion for measuring reconstructed image quality.
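
All three coders operate on the subband hierarchy produced by a multilevel 2D DWT, whose parent-child structure across scales is what zerotree coding exploits. A short sketch of that decomposition using the PyWavelets library (an illustrative choice; the paper's implementations target a DSP processor):

```python
import numpy as np
import pywt  # PyWavelets, assumed here purely for illustration

# A 3-level 2D DWT splits the image into a coarse approximation (LL3)
# plus detail subbands (LH, HL, HH) at each scale; zerotree coders such
# as EZW, SPIHT and PZW exploit parent-child relations between scales.
image = np.random.rand(256, 256)  # stand-in for a test image
coeffs = pywt.wavedec2(image, wavelet="bior4.4", mode="periodization", level=3)
ll3 = coeffs[0]                   # coarsest approximation, 32x32
lh1, hl1, hh1 = coeffs[-1]        # finest detail subbands, 128x128 each
```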


This research investigates wavelets for the lossy compression of CT (Computerized Tomography) JPEG (Joint Photographic Experts Group) medical images. The EZW (Embedded Zerotree Wavelet) and SPIHT (Set Partitioning In Hierarchical Trees) algorithms are implemented to assess image quality under the DWT (Discrete Wavelet Transform). Quality analysis is based on measured parameters such as CR (Compression Ratio), BPP (Bits Per Pixel), PSNR (Peak Signal-to-Noise Ratio) and MSE (Mean Square Error). A comparison across seven wavelets is made to establish which retains the image best and how the wavelets correlate with one another; the use of seven wavelets is assigned the new term "Sevenlets" in this research work. For medical images it is very important to retain the exact image, minimizing the loss of information on retrieval. The EZW and SPIHT algorithms support wavelet-based compression analysis well and can be used in diagnostic analysis to obtain a better perception of the image.
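
The abstract does not list which seven wavelets make up the "Sevenlets", so the sketch below uses a hypothetical selection. It reports CR, BPP, MSE and PSNR after crudely keeping only the largest DWT coefficients, a rough stand-in for full EZW/SPIHT coding:

```python
import numpy as np
import pywt

# Hypothetical stand-ins for the seven wavelets; the paper's actual
# selection is not given in the abstract.
SEVENLETS = ["haar", "db2", "db4", "sym4", "coif1", "bior2.2", "bior4.4"]

def evaluate(image, wavelet, keep=0.05, level=3, bits_per_coeff=12):
    """Zero all but the largest `keep` fraction of DWT coefficients,
    reconstruct, and report crude codec-style figures of merit."""
    coeffs = pywt.wavedec2(image, wavelet, mode="periodization", level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    arr[np.abs(arr) < np.quantile(np.abs(arr), 1 - keep)] = 0.0
    rec = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"),
                        wavelet, mode="periodization")
    mse = np.mean((image - rec) ** 2)
    nonzero = np.count_nonzero(arr)
    return {
        "CR":   image.size * 8 / (nonzero * bits_per_coeff),  # vs 8-bit original
        "BPP":  nonzero * bits_per_coeff / image.size,
        "MSE":  mse,
        "PSNR": 10 * np.log10(255.0 ** 2 / mse),
    }

image = np.random.randint(0, 256, (256, 256)).astype(np.float64)
for w in SEVENLETS:
    print(w, evaluate(image, w))
```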


2012 ◽  
Vol 155-156 ◽  
pp. 440-444
Author(s):  
He Yan ◽  
Xiu Feng Wang

The JPEG2000 algorithm has been developed on the basis of DWT techniques, which show how results achieved in different areas of information technology can be applied to enhance performance. Wavelets have become a popular technology for information redistribution in high-performance image compression algorithms. Lossy compression algorithms sacrifice perfect image reconstruction in favor of decreased storage requirements and improved compression rates while minimizing the loss of image quality.
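
The lossy trade-off described here can be made concrete with a uniform quantizer on DWT coefficients: a coarser quantization step discards more information, shrinking the code at the cost of reconstruction error. A sketch under that assumption (using PyWavelets; not the JPEG2000 codec itself, which adds entropy coding on top):

```python
import numpy as np
import pywt

def lossy_roundtrip(image, wavelet="db2", level=2, step=8.0):
    """Uniformly quantize DWT coefficients; information is lost at the
    rounding step, so reconstruction is imperfect by design."""
    coeffs = pywt.wavedec2(image, wavelet, mode="periodization", level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    quantized = np.round(arr / step)          # the lossy step
    rec_coeffs = pywt.array_to_coeffs(quantized * step, slices,
                                      output_format="wavedec2")
    rec = pywt.waverec2(rec_coeffs, wavelet, mode="periodization")
    return rec, np.count_nonzero(quantized)   # fewer nonzeros, smaller code

image = np.random.randint(0, 256, (256, 256)).astype(np.float64)
for step in (4.0, 16.0, 64.0):
    rec, nonzeros = lossy_roundtrip(image, step=step)
    print(step, nonzeros, float(np.mean((image - rec) ** 2)))
```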


Author(s):  
Karri Chiranjeevi ◽  
Umaranjan Jena ◽  
Sonali Dash

Linde-Buzo-Gray (LBG) vector quantization (VQ) generates a local codebook after many runs on different sets of training images for image compression, whereas the key goal of VQ is to generate a global codebook. In this paper, we present a comparative performance analysis of different optimization techniques. The Firefly algorithm (FA) and Cuckoo Search (CS) generate near-global codebooks, but FA runs into problems when brighter fireflies are unavailable, and the convergence time of CS is very high. A Hybrid Cuckoo Search (HCS) algorithm was developed and tested on four benchmark functions; it optimizes the LBG codebook with a lower convergence rate by adopting a Lévy flight based on McCulloch's algorithm and variant search parameters. In practice, we observed that the peak signal-to-noise ratio of the Bat algorithm (BA) is better than that of LBG, FA, CS and HCS for codebook sizes between 8 and 256. The convergence time of BA is 2.4452, 2.734 and 1.5126 times faster than HCS, CS and FA, respectively.
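
The local-optimum weakness of plain LBG that these metaheuristics attack is visible in the classical generalized Lloyd iteration itself, sketched below in NumPy (a minimal version; the FA/CS/HCS/BA wrappers from the paper are not reproduced):

```python
import numpy as np

def lbg_codebook(vectors: np.ndarray, size: int, iters: int = 20) -> np.ndarray:
    """Plain LBG: alternate nearest-neighbour assignment and centroid
    update. It converges to a local optimum that depends on the initial
    codewords, which is what global optimizers try to overcome."""
    rng = np.random.default_rng(0)
    codebook = vectors[rng.choice(len(vectors), size, replace=False)].copy()
    for _ in range(iters):
        # assign each training vector to its nearest codeword
        dist = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = dist.argmin(axis=1)
        for k in range(size):
            members = vectors[labels == k]
            if len(members):
                codebook[k] = members.mean(axis=0)  # centroid update
    return codebook

# toy usage: 4x4 image blocks flattened into 16-dimensional training vectors
blocks = np.random.rand(1000, 16)
codebook = lbg_codebook(blocks, size=32)
```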


Author(s):  
Fangfang Li ◽  
Sergey Krivenko ◽  
Vladimir Lukin

Image information technology has become an important perception technology, considering the task of providing lossy image compression with a desired quality using certain encoders. Recent research has shown that a two-step method can perform the compression in a very simple manner, with reduced compression time, while providing a desired visual quality accuracy. However, different encoders have different compression algorithms, which raises the issue of providing the accuracy of the desired quality. This paper considers the application of the two-step method to an encoder based on the discrete wavelet transform (DWT). In the experiment, bits per pixel (BPP) is used as the control parameter to vary and predict the compressed image quality, and three visual quality metrics (PSNR, PSNR-HVS, PSNR-HVS-M) are analyzed. In special cases the two-step method can be modified; the modification applies when the images subject to lossy compression are either too simple or too complex and a linear approximation of the dependences is no longer valid. Experimental data prove that, compared with the single-step method, the two-step compression method reduces the mean square error between desired and provided quality values by an order of magnitude. For PSNR-HVS-M, the error of the two-step method does not exceed 3.6 dB. The experiment has been conducted for Set Partitioning in Hierarchical Trees (SPIHT), a typical DWT-based image encoder, but the proposed method can be expected to apply to other DWT-based image compression techniques. The results show that the application range of the two-step lossy compression method has been expanded: it is not only suitable for encoders based on the discrete cosine transform (DCT) but also works well for DWT-based encoders.
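
The control loop reduces to: predict BPP from an average quality-versus-rate curve, compress once, measure the obtained quality, then correct BPP with a local linear approximation and compress a second time. A schematic sketch, in which `predict_bpp_for_quality`, `compress_at_bpp`, `measure_psnr` and the slope value are hypothetical placeholders for the paper's fitted SPIHT dependences:

```python
def two_step_compress(image, desired_psnr, predict_bpp_for_quality,
                      compress_at_bpp, measure_psnr, slope_db_per_bpp=10.0):
    # Step 1: compress at the BPP predicted from an average curve.
    bpp1 = predict_bpp_for_quality(desired_psnr)
    decoded1 = compress_at_bpp(image, bpp1)
    psnr1 = measure_psnr(image, decoded1)
    # Step 2: correct BPP via a local linear approximation, recompress.
    bpp2 = bpp1 + (desired_psnr - psnr1) / slope_db_per_bpp
    return compress_at_bpp(image, bpp2)
```

For images that are too simple or too complex, it is this linear correction in the second step that the abstract's modification replaces.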


2020 ◽  
Vol 20 (02) ◽  
pp. 2050008
Author(s):  
S. P. Raja

This paper presents a complete analysis of wavelet-based image compression encoding techniques. The techniques covered are embedded zerotree wavelet (EZW), set partitioning in hierarchical trees (SPIHT), wavelet difference reduction (WDR), adaptively scanned wavelet difference reduction (ASWDR), set partitioned embedded block coder (SPECK), compression with reversible embedded wavelet (CREW) and spatial orientation tree wavelet (STW). Experiments are done by varying the level of decomposition, bits per pixel and compression ratio. The evaluation uses parameters such as peak signal-to-noise ratio (PSNR), mean square error (MSE), image quality index (IQI), structural similarity index (SSIM), average difference (AD), normalized cross-correlation (NK), structural content (SC), maximum difference (MD), Laplacian mean squared error (LMSE) and normalized absolute error (NAE).
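
Most of the scalar measures in this battery are one-line expressions over the reference image r and the test image t. A sketch of several of them, written from their standard definitions (8-bit images assumed; IQI, SSIM and LMSE are omitted for brevity):

```python
import numpy as np

def quality_metrics(ref: np.ndarray, test: np.ndarray) -> dict:
    """Full-reference quality measures from their standard definitions."""
    r = ref.astype(np.float64)
    t = test.astype(np.float64)
    mse = np.mean((r - t) ** 2)
    return {
        "MSE":  mse,
        "PSNR": 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf"),
        "AD":   np.mean(r - t),                             # average difference
        "MD":   np.max(np.abs(r - t)),                      # maximum difference
        "NK":   np.sum(r * t) / np.sum(r ** 2),             # normalized cross-correlation
        "SC":   np.sum(r ** 2) / np.sum(t ** 2),            # structural content
        "NAE":  np.sum(np.abs(r - t)) / np.sum(np.abs(r)),  # normalized absolute error
    }
```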


2017 ◽  
Vol 2 (4) ◽  
pp. 11-17
Author(s):  
P. S. Jagadeesh Kumar ◽  
Tracy Lin Huan ◽  
Yang Yung

The rapid and striking evolution of parallel processing routines, coupled with the need to store and distribute huge volumes of digital records, especially still images, has raised a number of challenges for researchers and other stakeholders. These challenges, the high cost of storing and manipulating digital information among others, remain the focus of the research community and have led to the exploration of image compression methods that can accomplish excellent outcomes. One such practice is the parallel processing of a variety of compression techniques, which facilitates splitting an image into components of different frequencies and has the benefit of high compression. This manuscript scrutinizes the computational complexity and quantitative optimization of diverse still image compression tactics and further assesses the performance of parallel processing. The computational efficiency is analyzed and estimated with respect to the Central Processing Unit (CPU) as well as the Graphics Processing Unit (GPU). PSNR (Peak Signal-to-Noise Ratio) is used to estimate image reconstruction quality. Results are obtained and discussed for different still image compression algorithms such as Block Truncation Coding (BTC), Discrete Cosine Transform (DCT), Discrete Wavelet Transform (DWT), Dual-Tree Complex Wavelet Transform (DTCWT), Set Partitioning in Hierarchical Trees (SPIHT) and Embedded Zerotree Wavelet (EZW). The evaluation is conducted in terms of coding efficiency, memory constraints, and image quantity and quality.
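
The CPU side of such an evaluation can be sketched by timing the same tile-wise compression serially and in a process pool; the per-tile codec below is a toy DCT truncation, and the GPU measurements from the manuscript are not reproduced:

```python
import time
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from scipy.fft import dctn, idctn

def compress_tile(tile: np.ndarray, keep: int = 32) -> np.ndarray:
    """Toy codec: keep only the top-left keep x keep low-frequency DCT
    coefficients of the tile, then reconstruct."""
    c = dctn(tile, norm="ortho")
    mask = np.zeros_like(c)
    mask[:keep, :keep] = 1.0
    return idctn(c * mask, norm="ortho")

def tiles(image: np.ndarray, n: int = 8) -> list:
    h = image.shape[0] // n
    return [image[i * h:(i + 1) * h] for i in range(n)]

if __name__ == "__main__":
    image = np.random.rand(2048, 2048)
    t0 = time.perf_counter()
    serial = [compress_tile(t) for t in tiles(image)]
    t1 = time.perf_counter()
    with ProcessPoolExecutor() as pool:  # parallel over CPU cores
        parallel = list(pool.map(compress_tile, tiles(image)))
    t2 = time.perf_counter()
    print(f"serial {t1 - t0:.3f}s, parallel {t2 - t1:.3f}s")
```

The process pool simply demonstrates how per-tile independence enables parallel speedup; the same decomposition is what makes GPU offloading attractive.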


Author(s):  
Amir Athar Khan ◽  
Amanat Ali ◽  
Sanawar Alam ◽  
N. R. Kidwai

This paper concerns image compression with wavelet-based techniques such as set partitioning in hierarchical trees (SPIHT), which yield very good results. The need for image compression has grown continuously over the last decade, and different types of methods are used for it, mainly EZW, SPIHT and others. In this paper we apply the discrete wavelet transform followed by SPIHT, with some improvement in encoding and decoding time and better PSNR with respect to EZW coding.
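
The quantities reported here, encoding and decoding time alongside PSNR, can be gathered with a small timing harness around each stage; only the DWT stage is real below, and `spiht_encode`/`spiht_decode` are hypothetical placeholders for the paper's coder:

```python
import time
import numpy as np
import pywt

def timed(fn, *args, **kwargs):
    """Return (result, elapsed seconds) for a single call."""
    t0 = time.perf_counter()
    out = fn(*args, **kwargs)
    return out, time.perf_counter() - t0

image = np.random.rand(512, 512)
coeffs, t_fwd = timed(pywt.wavedec2, image, "db4", mode="periodization", level=4)
# spiht_encode(coeffs) / spiht_decode(bitstream) would be timed the same way
rec, t_inv = timed(pywt.waverec2, coeffs, "db4", mode="periodization")
print(f"forward DWT {t_fwd * 1e3:.2f} ms, inverse DWT {t_inv * 1e3:.2f} ms")
```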


2018 ◽  
Vol 8 (10) ◽  
pp. 1963
Author(s):  
Jun Sang ◽  
Muhammad Azeem Akbar ◽  
Bin Cai ◽  
Hong Xiang ◽  
...  

Confidentiality and efficient bandwidth utilization require a combination of compression and encryption of digital images. In this paper, a new method for joint image compression and encryption based on set partitioning in hierarchical trees (SPIHT) with an optimized Kd-tree and multiple chaotic maps was proposed. First, lossless compression and encryption of the original images were performed based on the integer wavelet transform (IWT) with SPIHT; the wavelet coefficients undergo diffusion and permutation before being encoded through SPIHT. Second, maximum confusion, diffusion and compression of the SPIHT output were performed via the modified Kd-tree, wavelet tree and Huffman coding. Finally, the compressed output was further encrypted with varying-parameter logistic maps and modified quadratic chaotic maps. The performance of the proposed technique was evaluated through compression ratio (CR), peak signal-to-noise ratio (PSNR), key space and histogram analyses. Moreover, the scheme passes several security tests, such as sensitivity, entropy and differential analysis tests. According to the theoretical analysis and experimental results, the proposed method is more secure and decreases the redundant information of the image more than existing techniques for hybrid compression and encryption.
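
The final encryption stage can be illustrated with a single logistic map driving an XOR keystream over the compressed byte stream. This is a minimal sketch of the idea only; the paper combines several varying-parameter logistic maps with a modified quadratic chaotic map:

```python
import numpy as np

def logistic_keystream(n: int, x0: float = 0.7, r: float = 3.99) -> np.ndarray:
    """Byte keystream from the logistic map x <- r*x*(1-x); the initial
    value x0 and parameter r play the role of the secret key."""
    x = x0
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) & 0xFF
    return out

def xor_crypt(data: bytes, x0: float = 0.7, r: float = 3.99) -> bytes:
    """XOR stream cipher: applying it twice with the same key decrypts."""
    keystream = logistic_keystream(len(data), x0, r)
    return bytes(np.frombuffer(data, dtype=np.uint8) ^ keystream)

ciphertext = xor_crypt(b"SPIHT output would go here")
assert xor_crypt(ciphertext) == b"SPIHT output would go here"
```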


2014 ◽  
Vol 14 (04) ◽  
pp. 1450020
Author(s):  
Ranjan Kumar Senapati ◽  
Prasanth Mankar

In this paper, two simple yet efficient embedded block-based image compression algorithms are presented. These algorithms not only improve the rate-distortion performance of set partitioning in hierarchical trees (SPIHT) and the set partitioned embedded block coder (SPECK) at lower bit rates but also reduce the dynamic memory requirement by 91.1% in comparison to SPIHT. The former objective is achieved by better exploiting the coefficient decay spectrum of wavelet-transformed images, and the latter is realised by an improved listless implementation of the algorithms. The proposed algorithms explicitly perform breadth-first search like SPECK. Extensive simulations conducted on various standard grayscale and color images indicate significant peak signal-to-noise ratio (PSNR) improvement over most state-of-the-art wavelet-based embedded coders, including JPEG2000, at lower rates. The reduced encoding and decoding time, as well as the improved coding efficiency at lower bit rates, make these coders better candidates for multimedia applications.
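
The breadth-first block processing these coders share with SPECK can be sketched as a queue-driven significance pass over quadtree blocks. The sketch below shows a single bit-plane pass and is illustrative only, not the paper's listless coder:

```python
from collections import deque
import numpy as np

def significance_pass(coeffs: np.ndarray, threshold: float) -> list:
    """One breadth-first significance pass: a block emits 0 if wholly
    insignificant against the threshold, otherwise emits 1 and splits
    into four quadrants (assumes a square, power-of-two array)."""
    bits = []
    queue = deque([(0, 0, coeffs.shape[0], coeffs.shape[1])])
    while queue:
        y, x, h, w = queue.popleft()
        significant = np.max(np.abs(coeffs[y:y + h, x:x + w])) >= threshold
        bits.append(int(significant))
        if significant and h > 1 and w > 1:
            hh, hw = h // 2, w // 2
            for dy, dx in ((0, 0), (0, hw), (hh, 0), (hh, hw)):
                queue.append((y + dy, x + dx,
                              h - hh if dy else hh, w - hw if dx else hw))
    return bits
```

In a full coder this pass repeats with the threshold halved each time (bit-plane coding), interleaved with sign and refinement bits; listless variants replace the explicit lists or queue with fixed per-coefficient state markers to cut dynamic memory.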

