TWO-STEP PROVIDING OF DESIRED QUALITY IN LOSSY IMAGE COMPRESSION BY SPIHT

Author(s):  
Fangfang Li ◽  
Sergey Krivenko ◽  
Vladimir Lukin

Image information technology has become an important perception technology. This paper considers the task of providing lossy image compression with a desired quality using certain encoders. Recent research has shown that a two-step method can perform the compression in a very simple manner and with reduced compression time while providing the desired visual quality with good accuracy. However, different encoders employ different compression algorithms, which raises the question of how accurately the desired quality can be provided. This paper considers the application of the two-step method to an encoder based on the discrete wavelet transform (DWT). In the experiments, bits per pixel (BPP) is used as the control parameter to vary and predict the compressed image quality, and three visual quality metrics (PSNR, PSNR-HVS, PSNR-HVS-M) are analyzed. In special cases, the two-step method can be modified, namely when the images subject to lossy compression are either too simple or too complex and the linear approximation of the dependences is no longer valid. Experimental data prove that, compared with the single-step method, the two-step compression method reduces the mean square error of the differences between the desired and provided values by an order of magnitude. For PSNR-HVS-M, the error of the two-step method does not exceed 3.6 dB. The experiments have been conducted for Set Partitioning in Hierarchical Trees (SPIHT), a typical DWT-based image encoder, but the proposed method can be expected to apply to other DWT-based image compression techniques. The results show that the application range of the two-step lossy compression method has been expanded: it is suitable not only for encoders based on the discrete cosine transform (DCT) but also works well for DWT-based encoders.
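The two-step rate control described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' exact procedure: `encode_at_bpp` is a hypothetical stand-in for running SPIHT at a given BPP and measuring the chosen metric, and the corpus-average rate-quality curve is assumed linear with known `slope` and `intercept`.

```python
def two_step_bpp(target_quality, slope, intercept, encode_at_bpp):
    """Choose a BPP that approximately yields the target quality in two steps."""
    # Step 1: invert the average (corpus-wide) linear rate-quality curve.
    bpp1 = (target_quality - intercept) / slope
    quality1 = encode_at_bpp(bpp1)          # compress once, measure the metric
    # Step 2: one linear correction around the measured point.
    return bpp1 + (target_quality - quality1) / slope

# Toy stand-in: an image whose true curve is steeper than the average one.
true_curve = lambda bpp: 12.0 * bpp + 19.0
bpp = two_step_bpp(40.0, slope=10.0, intercept=20.0, encode_at_bpp=true_curve)
```

On this toy curve the single-step guess (BPP = 2.0) misses the 40 dB target by 3 dB, while the corrected BPP lands within 0.6 dB, illustrating the order-of-magnitude error reduction the abstract reports.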

Electronics ◽  
2019 ◽  
Vol 8 (1) ◽  
pp. 88 ◽  
Author(s):  
Merzak Ferroukhi ◽  
Abdeldjalil Ouahabi ◽  
Mokhtar Attari ◽  
Yassine Habchi ◽  
Abdelmalik Taleb-Ahmed

The operations of digitization, transmission and storage of medical data, particularly images, require increasingly effective encoding methods not only in terms of compression ratio and flow of information but also in terms of visual quality. At first, there was DCT (discrete cosine transform), then DWT (discrete wavelet transform) and their associated standards in terms of coding and image compression. Second-generation wavelets now seek to position themselves against the image and video coding methods currently in use. It is in this context that we suggest a method combining bandelets and the SPIHT (set partitioning in hierarchical trees) algorithm. There are two main reasons for our approach: the first lies in the ability of the bandelet transform to capture the geometrical complexity of the image structure. The second is the suitability of the bandelet coefficients for encoding with the SPIHT encoder. Quality measurements indicate that in some cases (for low bit rates) the performance of the proposed coding competes with well-established methods (H.264/MPEG-4 AVC and H.265/HEVC) and opens up new application prospects in the field of medical imaging.


2017 ◽  
Vol 2 (4) ◽  
pp. 11-17
Author(s):  
P. S. Jagadeesh Kumar ◽  
Tracy Lin Huan ◽  
Yang Yung

The rapid evolution of parallel processing, coupled with the need to store and distribute huge volumes of digital records, especially still images, has brought a number of challenges for researchers and other stakeholders. These challenges, which concern the cost and handling of digital information, are a focus of the current research community and have led to the exploration of image compression methods that can achieve excellent results. One such practice is the parallel processing of a variety of compression techniques, which splits an image into components of different frequencies and offers the benefit of high compression. This manuscript examines the computational complexity and quantitative optimization of diverse still image compression techniques and further assesses the performance of parallel processing. The computational efficiency is analyzed and estimated with respect to the central processing unit (CPU) as well as the graphics processing unit (GPU). PSNR (peak signal-to-noise ratio) is used to estimate image reconstruction quality. The results are obtained and discussed for different still image compression algorithms such as Block Truncation Coding (BTC), Discrete Cosine Transform (DCT), Discrete Wavelet Transform (DWT), Dual-Tree Complex Wavelet Transform (DTCWT), Set Partitioning in Hierarchical Trees (SPIHT), and Embedded Zerotree Wavelet (EZW). The evaluation is conducted in terms of coding efficiency, memory constraints, and image quantity and quality.
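PSNR, the quality criterion used throughout this comparison, follows directly from the mean square error; a minimal pure-Python version for 8-bit pixel sequences:

```python
import math

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel sequences."""
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak * peak / mse)
```

For real images the sequences would be the flattened pixel arrays of the original and the decompressed result.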


Author(s):  
Amir Athar Khan ◽  
Amanat Ali ◽  
Sanawar Alam ◽  
N. R. Kidwai

This paper concerns image compression with wavelet-based techniques such as set partitioning in hierarchical trees (SPIHT), which yield very good results. The need for image compression has grown continuously during the last decade, and different types of methods are used for this purpose, mainly EZW, SPIHT and others. In this paper we apply the discrete wavelet transform followed by set partitioning in hierarchical trees (SPIHT), with improvements in encoding and decoding time and better PSNR with respect to EZW coding.
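The DWT stage that precedes SPIHT can be illustrated with one level of the 1-D Haar transform, the simplest wavelet; this is an illustrative sketch, not necessarily the filter the paper uses:

```python
def haar_1d(signal):
    """One level of the orthonormal 1-D Haar transform.

    Returns (approximation, detail); a full DWT recurses on the
    approximation, and SPIHT then encodes the resulting coefficients.
    """
    s = 0.5 ** 0.5  # 1/sqrt(2) keeps the transform orthonormal
    pairs = list(zip(signal[0::2], signal[1::2]))
    approx = [(a + b) * s for a, b in pairs]
    detail = [(a - b) * s for a, b in pairs]
    return approx, detail
```

A 2-D transform applies the same step along rows and then columns; constant regions produce zero detail coefficients, which is what makes the subsequent zerotree coding effective.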


Author(s):  
Merzak Ferroukhi ◽  
Abdeldjalil Ouahabi ◽  
Mokhtar Attari ◽  
Yacine Habchi ◽  
Mohamed Beladgham ◽  
...  

The operations of digitization, transmission and storage of medical data, particularly images, require increasingly effective encoding methods not only in terms of compression ratio and flow of information but also in terms of visual quality. At first, there was DCT (discrete cosine transform), then DWT (discrete wavelet transform) and their associated standards in terms of coding and image compression. After that, second-generation wavelets sought to position themselves against the image and video coding methods currently in use. It is in this context that we suggested a method combining bandelets and the SPIHT (set partitioning in hierarchical trees) algorithm. There are two main reasons for our approach: the first lies in the ability of the bandelet transform to capture the geometrical complexity of the image structure. The second stems from the suitability of the bandelet coefficients for encoding with the SPIHT encoder. Quality measurements show that in some cases (for low bit rates) the performance of the proposed coding competes with well-established methods and opens up new application prospects in the field of medical imaging.


2020 ◽  
pp. 50-58
Author(s):  
Fangfang Li ◽  
Sergey S. Krivenko ◽  
Vladimir V. Lukin

Lossy image compression providing a desired quality is considered. Quality is mainly characterized by the peak signal-to-noise ratio (PSNR), but visual quality metrics are briefly studied as well. Potentially, a two-step approach can be used to carry out compression providing the desired quality in a quite simple way and with a reduced compression time. However, the two-step approach can run into problems for the PSNR metric when the required PSNR is quite small (about 30 dB). These problems mainly concern the accuracy of providing a desired quality at the second step. The paper analyzes the reasons why this happens. For this purpose, a set of nine test images of different complexity is analyzed first. Then, the use of the two-step approach is studied for a wide set of complex-structure texture test images. The corresponding experiments are carried out for several values of the desired PSNR. The obtained results show that the two-step approach has limitations in cases when complex texture images have to be compressed to relatively low values of the desired PSNR. The main reason is that the rate-distortion dependence is nonlinear while a linear approximation is applied at the second step. To get around the aforementioned shortcomings, a simple but efficient solution is proposed based on the performed analysis. It is shown that, due to the proposed modification, the application range of the two-step method of lossy compression becomes considerably wider and covers the PSNR values commonly required in practice. The experiments are performed for AGU, a typical image encoder based on the discrete cosine transform (DCT), but the proposed approach can be expected to apply to other DCT-based image compression techniques.
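One plausible reading of the correction idea (the paper's exact modification is not reproduced here) is to repeat the linear second step when the rate-distortion curve is nonlinear; the `curve` and `slope` below are toy values, and `encode_at_bpp` is a hypothetical stand-in for compressing and measuring PSNR.

```python
def corrected_bpp(target, slope, bpp0, encode_at_bpp, extra_steps=2):
    """Linear correction step, optionally repeated for nonlinear curves.

    extra_steps == 0 reduces to the plain two-step scheme; each extra
    iteration re-measures the quality and corrects the BPP again.
    """
    bpp = bpp0
    quality = encode_at_bpp(bpp)
    for _ in range(1 + extra_steps):
        bpp += (target - quality) / slope
        quality = encode_at_bpp(bpp)
    return bpp, quality

# Toy concave curve standing in for a complex-texture image at low PSNR.
curve = lambda b: 20.0 + 15.0 * b - 2.0 * b * b
bpp, q = corrected_bpp(35.0, slope=10.0, bpp0=1.0, encode_at_bpp=curve)
```

Each repetition costs one more compression run, which is the trade-off against the single correction of the plain two-step method.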


2018 ◽  
Vol 29 (1) ◽  
pp. 1063-1078
Author(s):  
P. Sreenivasulu ◽  
S. Varadarajan

Abstract Nowadays, medical imaging and telemedicine are increasingly being utilized on a huge scale. The expanding interest in storing and sending medical images brings a lack of adequate memory space and transmission bandwidth. To resolve these issues, compression was introduced. The main aim of lossless image compression is to improve accuracy, reduce the bit rate, and improve the compression efficiency for the storage and transmission of medical images while maintaining an acceptable image quality for diagnosis purposes. In this paper, we propose lossless medical image compression using a wavelet transform and an encoding method. Basically, the proposed image compression system comprises three modules: (i) segmentation, (ii) image compression, and (iii) image decompression. First, the input medical image is segmented into region of interest (ROI) and non-ROI using a modified region growing algorithm. Subsequently, the ROI is compressed by the discrete cosine transform and the set partitioning in hierarchical trees encoding method, and the non-ROI is compressed by the discrete wavelet transform and a merging-based Huffman encoding method. Finally, the compressed image, a combination of the compressed ROI and non-ROI, is obtained. Then, in the decompression stage, the original medical image is extracted using the reverse procedure. The experimentation was carried out using different medical images, and the proposed method obtained better results compared to other methods.
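The Huffman stage used for the non-ROI can be illustrated by the classic merge of the two rarest nodes; this minimal sketch computes code lengths only, and `huffman_code_lengths` is a hypothetical helper, not the merging-based variant of the paper:

```python
import heapq
from collections import Counter
from itertools import count

def huffman_code_lengths(symbols):
    """Map each symbol to its Huffman code length."""
    freq = Counter(symbols)
    if len(freq) == 1:                      # degenerate one-symbol input
        return {next(iter(freq)): 1}
    tie = count()                           # tie-breaker: never compare dicts
    heap = [(f, next(tie), {s: 0}) for s, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)      # pop the two rarest subtrees...
        fb, _, b = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**a, **b}.items()}  # ...merge: depths grow by 1
        heapq.heappush(heap, (fa + fb, next(tie), merged))
    return heap[0][2]
```

Rarer symbols receive longer codes, and the resulting lengths always satisfy the Kraft inequality, so a valid prefix code can be built from them.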


Electronics ◽  
2019 ◽  
Vol 8 (12) ◽  
pp. 1559 ◽  
Author(s):  
Fan Zhang ◽  
Zhichao Xu ◽  
Wei Chen ◽  
Zizhe Zhang ◽  
Hao Zhong ◽  
...  

Video surveillance systems play an important role in underground mines. Providing clear surveillance images is the fundamental basis for safe mining and disaster alarming. It is of significance to investigate image compression methods since the underground wireless channels only allow low transmission bandwidth. In this paper, we propose a new image compression method based on residual networks and the discrete wavelet transform (DWT) to solve the image compression problem. The residual networks are used to compose the codec network. Further, we propose a novel loss function named discrete wavelet similarity (DW-SSIM) loss to train the network. Because the information of edges in the image is exposed through DWT coefficients, the proposed network can learn to preserve the edges better. Experiments show that the proposed method has an edge over the methods being compared with regard to the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), particularly at low compression ratios. Tests on noise-contaminated images also demonstrate the noise robustness of the proposed method. Our main contribution is that the proposed method is able to compress images at relatively low compression ratios while still preserving sharp edges, which suits the harsh wireless communication environment in underground mines.
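SSIM, one of the two metrics reported, reduces in its single-window (global) form to a closed formula over means, variances and covariance; this is a simplification of the usual sliding-window SSIM, shown here only to make the metric's structure concrete:

```python
def global_ssim(x, y, peak=255.0):
    """SSIM computed over the whole image as a single window.

    The standard metric averages this formula over local sliding windows;
    the global version keeps the same luminance, contrast and structure
    terms in a few lines.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2  # usual stabilizing constants
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))
```

Identical inputs score exactly 1.0; any structural deviation pulls the score below 1, which is why SSIM rewards the edge preservation the paper targets.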


Author(s):  
S. ARIVAZHAGAN ◽  
D. GNANADURAI ◽  
J. R. ANTONY VANCE ◽  
K. M. SAROJINI ◽  
L. GANESAN

With the fast evolution of multimedia systems, image compression algorithms are much needed to achieve effective transmission and compact storage by removing the redundant information of the image data. Wavelet transforms have received significant attention recently due to their suitability for a number of important signal and image compression applications, the lapped nature of the transform, and their computational simplicity, which comes in the form of filter-bank implementations. In this paper, the implementation in a DSP processor of image compression algorithms based on the discrete wavelet transform, such as the embedded zerotree wavelet (EZW) coder, the set partitioning in hierarchical trees coder without lists (SPIHT — No List), and the packetizable zerotree wavelet (PZW) coder, is dealt with in detail, and their performance is analyzed in terms of different compression ratios, execution times, and different packet losses. PSNR is used as the criterion for measuring reconstructed image quality.
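The zerotree coders compared here (EZW, SPIHT, PZW) all rest on bit-plane significance: a coefficient first becomes significant at the highest threshold 2^n it reaches. A minimal sketch of the resulting sorting passes, with the list bookkeeping and spatial trees of the real coders omitted:

```python
def sorting_passes(coeffs):
    """Group coefficient indices by the bit plane where they become significant.

    The pass for threshold 2**n collects coefficients with |c| in
    [2**n, 2**(n+1)); zerotree coders transmit these passes from the most
    significant bit plane downwards, which is what makes the stream embedded.
    """
    n_max = int(max(abs(c) for c in coeffs)).bit_length() - 1
    passes = []
    for n in range(n_max, -1, -1):
        t = 1 << n
        passes.append([i for i, c in enumerate(coeffs) if t <= abs(c) < 2 * t])
    return passes
```

Truncating the stream after any pass yields a coarser but complete reconstruction, which is why these coders support progressive transmission under packet loss.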


Author(s):  
R. Pandian ◽  
S. LalithaKumari

Notice of Retraction

After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of APTIKOM's Publication Principles. We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper. The presenting author of this paper has the option to appeal this decision by contacting ij.aptikom@gmail.com.

Image data usually contain a considerable quantity of redundant and irrelevant data; an image compression technique overcomes this by reducing the amount of data required to represent the image. In this work, a discrete wavelet transform based image compression algorithm is implemented for decomposing the image. Various encoding schemes such as Embedded Zerotree Wavelet (EZW), Set Partitioning in Hierarchical Trees (SPIHT) and Spatial-orientation Tree Wavelet (STW) are used, their compression performance is evaluated, and the effectiveness of different wavelets with various vanishing moments is analyzed based on the values of PSNR, compression ratio, mean square error and bits per pixel. The optimum compression algorithm is also found based on the results.
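The rate figures evaluated across these comparisons (compression ratio and bits per pixel) follow directly from the payload sizes; a minimal helper, assuming an 8-bit grayscale source:

```python
def compression_metrics(original_bytes, compressed_bytes, width, height):
    """Compression ratio and bits per pixel for a compressed image payload."""
    ratio = original_bytes / compressed_bytes
    bpp = 8.0 * compressed_bytes / (width * height)
    return ratio, bpp
```

For an 8-bit image the two numbers are reciprocally linked (ratio × bpp = 8), so reporting both mainly aids comparison across papers that prefer one convention or the other.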

