Comparison of Effects of Entropy Coding Schemes Cascaded with Set Partitioning in Hierarchical Trees

Author(s):  
Ali Iqbal ◽  
Imran Touqir ◽  
Asim Ashfaque ◽  
Natasha Khan ◽  
Fahim Ashraf

WT (Wavelet Transform) is considered a landmark for image compression because it represents a signal in terms of functions that are localized in both the frequency and time domains. Wavelet sub-band coding exploits the self-similarity of pixels in images and arranges the resulting coefficients in different sub-bands. SPIHT (Set Partitioning in Hierarchical Trees), a simple and fully embedded codec algorithm, is widely used for the compression of wavelet-transformed images. It encodes the transformed coefficients according to their significance relative to a given threshold. Statistical analysis reveals that the output bit-stream of SPIHT contains long runs of zeroes that can be compressed further; therefore, SPIHT is not advocated as the sole means of compression. In this paper, wavelet-transformed images are first compressed using the SPIHT technique and, to attain more compression, the output bit-streams of SPIHT are then fed to entropy encoders, namely Huffman and Arithmetic encoders, for further de-correlation. The two concatenations are compared on factors such as bit-saving capability, PSNR (Peak Signal to Noise Ratio), compression ratio, and elapsed time. The experimental results of these cascadings demonstrate that SPIHT combined with Arithmetic coding yields a better compression ratio than SPIHT cascaded with Huffman coding, whereas SPIHT combined with Huffman coding proves comparatively efficient in terms of elapsed time.
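
To make the cascade concrete, here is a minimal sketch of the Huffman half of the comparison: a zero-dominated bit-stream (a made-up stand-in for real SPIHT output, which is assumed rather than generated here) is grouped into byte symbols and re-encoded, and the bit saving is reported. The stream contents and the 8-bit symbol size are illustrative assumptions, not the paper's data.

```python
import heapq
from collections import Counter

def huffman_code_lengths(freqs):
    """Return {symbol: code length} for a {symbol: count} map."""
    heap = [(w, i, {s: 0}) for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # lone symbol still needs 1 bit
        return {s: 1 for s in heap[0][2]}
    tiebreak = len(heap)                     # keeps tuple comparison total
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        # merging two subtrees pushes every contained symbol one level down
        merged = {s: d + 1 for s, d in {**c1, **c2}.items()}
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

# Hypothetical zero-dominated stream standing in for SPIHT output.
bitstream = "0" * 900 + "10" * 50
symbols = [bitstream[i:i + 8] for i in range(0, len(bitstream), 8)]
freqs = Counter(symbols)
lengths = huffman_code_lengths(freqs)
coded = sum(freqs[s] * lengths[s] for s in freqs)
print(f"raw: {8 * len(symbols)} bits -> Huffman: {coded} bits")
```

The long zero runs collapse into one very short code for the all-zero byte, which is exactly the redundancy the cascaded entropy coder removes.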

2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Paula Andrea Ferreira-Mejía ◽  
Wilson Nicolás Andrés Pérez-Cubillos ◽  
Lilia Edith Aparicio-Pico

In medicine, the information in diagnostic images is vital and indispensable; for this reason they must be processed without margins of error that interfere with their reading and analysis. In general terms, images exhibit inter-pixel redundancy, which makes them occupy a considerable size, from Megabytes (MB) up to Gigabytes (GB); transmitting them over the network becomes difficult in terms of storage and computational cost, so useful lossless compression processes must be applied to reduce bandwidth, improve storage capacity, and increase transmission speed without affecting the quality of the diagnostic image. This article is based on a systematic review that synthesizes and presents the characteristics, advantages, and disadvantages of techniques for extracting regions of interest (ROI) and of hybrid lossless compression algorithms for MRI (Magnetic Resonance Imaging) images; finally, it takes as reference the wavelet transform and the future applications proposed by the researchers of the reviewed articles. Among the techniques used, the following stand out: EWT (Empirical Wavelet Transform), EZW (Embedded Zero Trees of Wavelet), SPIHT (Set Partitioning in Hierarchical Trees), and the derived hybrid algorithm EWISTARS (Exponential Wavelet Iterative Shrinkage-Thresholding Algorithm with Random Shift). The automatic selection and extraction of an ROI is performed by means of morphological operations, such as the opening operation and level segmentation. To evaluate the quality of these techniques, the performance metrics MSE (Mean Square Error), PSNR (Peak Signal to Noise Ratio), and CR (Compression Ratio) are described. The results of this research will be useful for researchers entering the area, helping them broaden their view of medical image processing.
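
As an illustration of the ROI extraction the review surveys, below is a minimal sketch of automatic ROI selection via thresholding and morphological opening using scipy.ndimage; the threshold value, structuring-element size, and the synthetic slice are all illustrative assumptions, not a method from any reviewed paper.

```python
import numpy as np
from scipy import ndimage

def extract_roi_mask(image, threshold=0.2, opening_size=5):
    """Coarse ROI: normalize, threshold, clean with morphological
    opening, then keep the largest connected component."""
    norm = (image - image.min()) / (np.ptp(image) + 1e-12)
    mask = norm > threshold
    structure = np.ones((opening_size, opening_size), dtype=bool)
    mask = ndimage.binary_opening(mask, structure=structure)
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)

# Usage on a synthetic "slice": a bright disc on a dark background.
yy, xx = np.mgrid[:128, :128]
slice_ = (((yy - 64) ** 2 + (xx - 64) ** 2) < 30 ** 2).astype(float)
print("ROI pixels:", int(extract_roi_mask(slice_).sum()))
```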


2018 ◽  
Vol 8 (10) ◽  
pp. 1963 ◽  
Author(s):  
Jun Sang ◽  
Muhammad Azeem Akbar ◽  
Bin Cai ◽  
Hong Xiang ◽  
...  

Confidentiality and efficient bandwidth utilization require a combination of compression and encryption of digital images. In this paper, a new method for joint image compression and encryption based on set partitioning in hierarchical trees (SPIHT) with an optimized Kd-tree and multiple chaotic maps was proposed. First, lossless compression and encryption of the original images were performed based on the integer wavelet transform (IWT) with SPIHT; the wavelet coefficients undergo diffusion and permutation before being encoded through SPIHT. Second, maximum confusion, diffusion, and compression of the SPIHT output were performed via the modified Kd-tree, wavelet tree, and Huffman coding. Finally, the compressed output was further encrypted with varying-parameter logistic maps and modified quadratic chaotic maps. The performance of the proposed technique was evaluated through compression ratio (CR), peak signal-to-noise ratio (PSNR), key-space, and histogram analyses. Moreover, this scheme passes several security tests, such as sensitivity, entropy, and differential-analysis tests. According to the theoretical analysis and experimental results, the proposed method is more secure and removes more of the image's redundant information than existing techniques for hybrid compression and encryption.
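
As a hedged illustration of the chaotic-encryption stage, the sketch below derives a keystream from a logistic map and XORs it with a byte stream. The parameters x0 and r (acting as the key), the burn-in length, and the input bytes are assumptions; the paper's varying-parameter and modified quadratic maps are not reproduced here.

```python
import numpy as np

def logistic_keystream(n_bytes, x0=0.612345, r=3.999999, burn_in=1000):
    """Byte keystream from the logistic map x_{k+1} = r*x_k*(1 - x_k);
    the initial value x0 and parameter r play the role of the key."""
    x = x0
    for _ in range(burn_in):          # discard the transient orbit
        x = r * x * (1.0 - x)
    out = np.empty(n_bytes, dtype=np.uint8)
    for i in range(n_bytes):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256   # map the orbit value to a byte
    return out

# Hypothetical bytes standing in for compressed SPIHT output.
data = np.frombuffer(b"SPIHT output bytes here", dtype=np.uint8)
cipher = data ^ logistic_keystream(data.size)
plain = cipher ^ logistic_keystream(data.size)   # XOR is its own inverse
assert bytes(plain) == bytes(data)
```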


2014 ◽  
Vol 626 ◽  
pp. 87-94
Author(s):  
B. Perumal ◽  
M. Pallikonda Rajasekaran

Medical imaging is important in modern medical care, providing diagnostic information for the clinical management of patients and the planning of treatment. Every year, terabytes of medical image data are produced through advanced imaging modalities such as Positron Emission Tomography (PET), Magnetic Resonance Imaging (MRI), Computed Tomography (CT), and many newer methods of medical imaging. Advances in technology have created the opportunity for radiology systems to use sophisticated compression algorithms to reduce the file size of every image, in an attempt to partly offset the increase in data volume created by new or more complex modalities while preserving the significant diagnostic information. This paper outlines several compression methods, namely the Discrete Cosine Transform (DCT), fractal compression, and Set Partitioning In Hierarchical Trees (SPIHT), applied to various medical images. Experimental results show that the proposed SPIHT approach achieves better Compression Ratio (CR), Bits Per Pixel (BPP), and Peak Signal to Noise Ratio (PSNR) with a lower Mean Square Error (MSE) in comparison with the DCT method.
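
For reference, the four quality measures this comparison relies on can be computed as below; the synthetic image and the hypothetical 6 KB compressed size exist only to exercise the functions.

```python
import numpy as np

def mse(original, reconstructed):
    return float(np.mean((original.astype(np.float64)
                          - reconstructed.astype(np.float64)) ** 2))

def psnr(original, reconstructed, peak=255.0):
    e = mse(original, reconstructed)
    return float("inf") if e == 0 else 10.0 * np.log10(peak ** 2 / e)

def compression_ratio(raw_bytes, compressed_bytes):
    return raw_bytes / compressed_bytes

def bits_per_pixel(compressed_bytes, n_pixels):
    return 8.0 * compressed_bytes / n_pixels

# Usage with a synthetic 8-bit image and a hypothetical 6144-byte code.
img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
rec = np.clip(img.astype(int) + np.random.randint(-3, 4, img.shape), 0, 255)
print(f"PSNR: {psnr(img, rec):.2f} dB, "
      f"CR: {compression_ratio(img.size, 6144):.1f}, "
      f"BPP: {bits_per_pixel(6144, img.size):.3f}")
```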


2014 ◽  
Vol 14 (04) ◽  
pp. 1450020 ◽  
Author(s):  
Ranjan Kumar Senapati ◽  
Prasanth Mankar

In this paper, two simple yet efficient embedded block-based image compression algorithms are presented. These algorithms not only improve the rate-distortion performance of set partitioning in hierarchical trees (SPIHT) and the set partitioned embedded block coder (SPECK) at lower bit rates but also reduce the dynamic memory requirement by 91.1% in comparison to SPIHT. The former objective is achieved by better exploiting the coefficient decay spectrum of wavelet-transformed images, and the latter is realised by an improved listless implementation of the algorithms. The proposed algorithms explicitly perform breadth-first search like SPECK. Extensive simulations conducted on various standard grayscale and color images indicate significant peak signal-to-noise ratio (PSNR) improvement over most state-of-the-art wavelet-based embedded coders, including JPEG2000, at lower rates. The reduction in encoding and decoding time, as well as the improvement in coding efficiency at lower bit rates, make these coders better candidates for multimedia applications.
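
SPIHT and SPECK share the bit-plane significance test that listless variants reimplement with per-coefficient flags instead of dynamic lists. The sketch below shows only that shared core (threshold halving and newly-significant counts) on hypothetical Laplacian-distributed coefficients; the paper's listless bookkeeping itself is not reproduced.

```python
import numpy as np

def significance_passes(coeffs, n_planes=6):
    """Walk the bit-planes of a coefficient array: at each threshold
    T = 2^k, report which coefficients first become significant."""
    max_mag = np.abs(coeffs).max()
    k = int(np.floor(np.log2(max_mag)))       # top bit-plane
    already = np.zeros(coeffs.shape, dtype=bool)
    for _ in range(n_planes):
        T = 2.0 ** k
        newly = (np.abs(coeffs) >= T) & ~already
        already |= newly
        print(f"T = {T:8.2f}: {int(newly.sum()):4d} newly significant")
        k -= 1                                 # halve the threshold

# Hypothetical wavelet coefficients with a decaying spectrum.
rng = np.random.default_rng(0)
significance_passes(rng.laplace(scale=20.0, size=(64, 64)))
```

Because few coefficients exceed the early thresholds, the early passes are cheap and the embedded bit-stream can be truncated at any point, which is what makes these coders rate-scalable.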


Author(s):  
U.R. Padma ◽  
Jayachitra

This paper presents a novel non-blind watermarking algorithm using the dual orthogonal complex contourlet transform, which is preferred for watermarking because its ability to capture directional edges and contours is superior to that of other transforms such as the cosine transform and the wavelet transform. Digital images and video in their raw form require an enormous amount of storage capacity, and such large data sets also contain a lot of redundant information; compression therefore also increases the capacity of the communication channel. Image compression is performed using the SPIHT (Set Partitioning in Hierarchical Trees) algorithm based on the Huffman coding technique. SPIHT is a lossless compression algorithm that reduces file size with no loss in image quality; the final results are compared in terms of bit error rate, PSNR, and MSE.
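
Of the three reported measures, the bit error rate is specific to watermarking; a minimal sketch of its computation on a made-up 1024-bit watermark with a few extraction errors follows.

```python
import numpy as np

def bit_error_rate(embedded_bits, extracted_bits):
    """Fraction of watermark bits recovered incorrectly."""
    embedded = np.asarray(embedded_bits, dtype=np.uint8)
    extracted = np.asarray(extracted_bits, dtype=np.uint8)
    return float(np.mean(embedded != extracted))

# Hypothetical 1024-bit watermark with 8 bits flipped on extraction.
rng = np.random.default_rng(1)
wm = rng.integers(0, 2, 1024)
noisy = wm.copy()
noisy[rng.choice(1024, size=8, replace=False)] ^= 1
print(f"BER = {bit_error_rate(wm, noisy):.4f}")   # 8/1024 ~ 0.0078
```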


This research investigates wavelets in the image processing of CT (Computerized Tomography) JPEG (Joint Photographic Experts Group) medical images for lossy compression. The EZW (Embedded Zerotree Wavelet) and SPIHT (Set Partitioning in Hierarchical Trees) algorithms are implemented to assess image quality under the DWT (Discrete Wavelet Transform). Quality analysis is performed with measured parameters such as CR (Compression Ratio), BPP (Bits Per Pixel), PSNR (Peak Signal to Noise Ratio), and MSE (Mean Square Error). A comparison is made across seven wavelets to judge how well each retains a good image and how they correlate with one another; the seven wavelets used are assigned the new term "Sevenlets" in this research work. For medical images it is very important to retain the exact image while minimizing the loss of information on retrieval. The EZW and SPIHT algorithms give good support to wavelets for compression analysis and can be used in diagnostic analysis to obtain a better perception of the image for corrective measures.
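
A minimal sketch of such a multi-wavelet comparison using PyWavelets is given below: each wavelet decomposes a smooth test image, the coefficients are hard-thresholded (the lossy step), and reconstruction PSNR is reported. The seven wavelet names and the threshold are illustrative stand-ins; the abstract does not list the paper's actual "Sevenlets".

```python
import numpy as np
import pywt

def psnr(a, b, peak=255.0):
    e = np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2)
    return float("inf") if e == 0 else 10 * np.log10(peak ** 2 / e)

# Smooth synthetic test image (stands in for a CT slice).
yy, xx = np.mgrid[:128, :128]
img = 128 + 100 * np.sin(xx / 8.0) * np.cos(yy / 8.0)

# Illustrative stand-ins for the seven compared wavelets.
for name in ["haar", "db2", "db4", "sym4", "coif1", "bior4.4", "rbio4.4"]:
    coeffs = pywt.wavedec2(img, name, level=3)
    arr, slices = pywt.coeffs_to_array(coeffs)
    arr[np.abs(arr) < 10.0] = 0.0            # hard threshold: the lossy step
    rec = pywt.waverec2(
        pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), name)
    kept = np.count_nonzero(arr) / arr.size
    print(f"{name:8s} kept {kept:6.1%} of coeffs, "
          f"PSNR {psnr(img, rec[:128, :128]):6.2f} dB")
```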


2014 ◽  
Vol 984-985 ◽  
pp. 1276-1281
Author(s):  
C. Priya ◽  
T. Kesavamurthy ◽  
M. Uma Priya

Recently, many new algorithms for image compression based on wavelets have been developed. This paper gives a detailed explanation of the SPIHT algorithm combined with the Lempel-Ziv-Welch compression technique for image compression, with a MATLAB implementation. Set Partitioning in Hierarchical Trees (SPIHT) is one of the most efficient algorithms known today; it creates pyramid structures based on a wavelet decomposition of an image. Lempel-Ziv-Welch is a universal lossless data-compression algorithm that guarantees the original information can be exactly reproduced from the compressed data. The proposed method yields a better compression ratio, higher computational speed, and good reconstruction quality of the image. To analyze the proposed lossless methods, the performance metrics Compression Ratio, Mean Square Error, and Peak Signal to Noise Ratio are calculated. Keywords: Lempel-Ziv-Welch (LZW), SPIHT, Wavelet
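
The LZW half of the pipeline is easy to sketch; below is a classic LZW encoder applied to a zero-heavy stream of the kind SPIHT tends to emit. The input bytes are a made-up example, and packing the codes into a bit-stream is omitted.

```python
def lzw_encode(data: bytes):
    """Classic LZW: grow a dictionary of byte strings, emit its codes."""
    table = {bytes([i]): i for i in range(256)}   # seed with single bytes
    next_code = 256
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                     # keep extending the current match
        else:
            out.append(table[w])       # emit the longest known prefix
            table[wc] = next_code      # learn the new string
            next_code += 1
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

# Usage on a zero-heavy stream like a raster-scanned SPIHT output.
stream = bytes(64 * [0]) + b"SPIHT" + bytes(64 * [0])
codes = lzw_encode(stream)
print(f"{len(stream)} input bytes -> {len(codes)} LZW codes")
```

The zero runs shrink quickly because each emitted code teaches the dictionary an ever-longer run of zeros, which is why LZW suits this output.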


2020 ◽  
Vol 55 (1) ◽  
Author(s):  
Nassir H. Salman ◽  
S. Rafea

Image compression is one of the data-compression types applied to digital images in order to reduce their high cost of storage and/or transmission. Image compression algorithms may take advantage of the visual sensitivity and statistical properties of image data to deliver superior results in comparison with generic data-compression schemes used for other digital data. In the first approach, the input image is divided into blocks of 16 x 16, 32 x 32, or 64 x 64 pixels. The blocks are first converted into a string and then encoded using a lossless algorithm known as arithmetic coding: pixel values that occur more frequently are coded in fewer bits than pixel values that occur less frequently, via sub-intervals within the range 0 to 1. Finally, the stream of compressed tables is reassembled for decompression (image restoration). The results showed a compression gain of 10-12% and less time consumption when applying this type of coding to each block rather than to the entire image. To improve the compression ratio, a second approach was used based on the YCbCr colour model. Images were decomposed into four sub-bands (low-low, high-low, low-high, and high-high) using the discrete wavelet transform; the low-low sub-band was then transformed again into low- and high-frequency components via the discrete wavelet transform. Next, these components were quantized using scalar quantization and scanned in a zigzag way. The resulting compression ratio is 15.1 to 27.5 for magnetic resonance imaging, with varying peak signal-to-noise ratio and mean square error; 25 to 43 for X-ray images; 32 to 46 for computed tomography scan images; and 19 to 36 for magnetic resonance imaging brain images. The second approach showed an improved compression scheme compared to the first in terms of compression ratio, peak signal-to-noise ratio, and mean square error.
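
The quantize-then-zigzag step of the second approach can be sketched as follows; the block size, quantization step, and the synthetic sub-band are illustrative assumptions. Zigzag ordering matters because it pushes the many zero coefficients of a quantized sub-band toward the end of the sequence, where they compress well.

```python
import numpy as np

def scalar_quantize(coeffs, step=16.0):
    """Uniform scalar quantizer: round(coefficient / step)."""
    return np.round(coeffs / step).astype(np.int32)

def zigzag_scan(block):
    """Zigzag-order a square block, low frequencies first."""
    n = block.shape[0]
    # Sort by anti-diagonal; alternate traversal direction per diagonal.
    order = sorted(((i, j) for i in range(n) for j in range(n)),
                   key=lambda p: (p[0] + p[1],
                                  p[0] if (p[0] + p[1]) % 2 else p[1]))
    return np.array([block[i, j] for i, j in order])

# Hypothetical 8x8 sub-band whose energy decays away from (0, 0).
rng = np.random.default_rng(3)
decay = np.exp(-0.4 * np.add.outer(np.arange(8), np.arange(8)))
subband = rng.laplace(scale=40.0, size=(8, 8)) * decay
print(zigzag_scan(scalar_quantize(subband)))
```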


Author(s):  
A. Suruliandi ◽  
S. P. Raja

This paper discusses the embedded zerotree wavelet (EZW) and other wavelet-based encoding techniques employed in lossy image compression. The objective of this paper is twofold. First, wavelet-based encoding techniques such as EZW, set partitioning in hierarchical trees (SPIHT), wavelet difference reduction (WDR), adaptively scanned wavelet difference reduction (ASWDR), set partitioned embedded block (SPECK), compression with reversible embedded wavelet (CREW), and space frequency quantization (SFQ) are implemented and their performance is analyzed. Second, wavelet families such as Haar, Daubechies, and biorthogonal wavelets are used to evaluate the performance of the encoding techniques. The performance parameters peak signal-to-noise ratio (PSNR) and mean square error (MSE) are used for evaluation. From the results it is observed that the SPIHT encoding technique provides better results than the other encoding schemes.


2017 ◽  
Vol 13 (10) ◽  
pp. 6552-6557
Author(s):  
E.Wiselin Kiruba ◽  
Ramar K.

The amalgamation of compression and security is indispensable in the field of multimedia applications. A novel approach to enhancing security alongside compression is discussed in this research paper. In the secure arithmetic coder (SAC), security is provided by input and output permutation methods, and compression is done by interval-splitting arithmetic coding. The permutation in SAC is susceptible to attacks; the encryption issues associated with SAC are addressed in this method. The aim of the proposed method is to encrypt the data first with a Table Substitution Box (T-box) and then to compress it with an Interval Splitting Arithmetic Coder (ISAC). The method incorporates a dynamic T-box in order to provide better security. The T-box is constructed from the random output of a Pseudo-Random Number Generator (PRNG), which takes its input from a Secure Hash Algorithm-256 (SHA-256) message digest. The initial scheme is created from a key known to both the encoder and the decoder; further T-boxes are created by using the previous message digest as the key. The existing interval-splitting arithmetic coding of SAC is applied for the compression of text data: interval splitting finds a relative position at which to split the intervals, and this in turn yields compression. The results reveal that replacing permutation with the T-box method provides stronger security than SAC, since the data are not revealed. Security analysis shows that the data remain secure against ciphertext attacks, known-plaintext attacks, and chosen-plaintext attacks, so this approach increases the security of ISAC. Additionally, the compression ratio is compared by feeding the output of the T-box to traditional arithmetic coding; the comparison shows a minor reduction in the compression ratio of ISAC relative to arithmetic coding. However, the security provided by ISAC outweighs this compression-ratio cost.
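
A minimal sketch of the T-box construction as described (a SHA-256 digest seeding a PRNG whose shuffle of the 256 byte values forms the substitution box) might look like the following; the key, the use of Python's stdlib PRNG, and the chaining remark are assumptions, and the interval-splitting coder itself is not reproduced.

```python
import hashlib
import random

def make_tbox(key: bytes):
    """Byte substitution box: SHA-256 of the key seeds a PRNG that
    shuffles the 256 byte values; the inverse box undoes it."""
    digest = hashlib.sha256(key).digest()
    prng = random.Random(int.from_bytes(digest, "big"))
    box = list(range(256))
    prng.shuffle(box)
    inverse = [0] * 256
    for i, v in enumerate(box):
        inverse[v] = i
    return box, inverse

# Usage: substitute, then invert. A fresh T-box could be chained from
# the previous digest, as the scheme suggests.
box, inv = make_tbox(b"shared secret key")
plain = b"compress me later"
cipher = bytes(box[b] for b in plain)
assert bytes(inv[c] for c in cipher) == plain
```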

