Usage analysis of SVD, DWT and JPEG compression methods for image compression

2021 ◽  
Vol 14 (2) ◽  
pp. 99
Author(s):  
Dewa Ayu Indah Cahya Dewi ◽  
I Made Oka Widyantara

Image compression saves bandwidth on telecommunication networks, shortens image transmission time, and reduces the memory needed for image storage, so techniques that reduce image size are needed. Image compression is an image processing technique applied to digital images to reduce the redundancy of the data they contain so that the images can be stored or transmitted efficiently. This research analyzed the results of image compression and measured the error level of the compressed images, comparing JPEG, DWT, and SVD compression on various image types. Compression quality was measured with the MSE and PSNR methods, while the degree of compression was determined with the compression ratio. The average ratio for JPEG compression was 0.08605, a compression rate of 91.39%. The average compression ratio for the DWT method was 0.133090833, a compression rate of 86.69%. The average compression ratio of the SVD method was 0.101938833, a compression rate of 89.80%.
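As a point of reference, the sketch below (not from the paper) shows how these metrics are commonly computed with NumPy; consistent with the figures reported above, the compression ratio is taken as compressed size divided by original size, so the compression rate is one minus that ratio.

```python
import numpy as np

def mse(original, reconstructed):
    """Mean squared error between two equal-sized images."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(original, reconstructed, max_value=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    err = mse(original, reconstructed)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_value ** 2) / err)

def compression_ratio(original_bytes, compressed_bytes):
    """Compressed size divided by original size; the compression rate
    quoted above corresponds to 1 minus this ratio."""
    return compressed_bytes / original_bytes
```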

Algorithms ◽  
2019 ◽  
Vol 12 (12) ◽  
pp. 255 ◽  
Author(s):  
Walaa Khalaf ◽  
Abeer Al Gburi ◽  
Dhafer Zaghar

Image compression is one of the most important fields of image processing, because the rapid development of image acquisition increases image sizes, which in turn require more storage space. JPEG is considered the most widely used algorithm for image compression; however, it falls short for some image types. Hence, new techniques are required to improve the quality of reconstructed images as well as to increase the compression ratio. The work in this paper introduces a scheme to enhance the JPEG algorithm. The proposed scheme is a new method that shrinks and stretches images using a smooth filter. To remove the blurring artifact introduced by shrinking and stretching the image, a hyperbolic tangent (tanh) function is used to enhance the quality of the reconstructed image. The new approach achieves a higher compression ratio for the same image quality, and/or better image quality for the same compression ratio, than ordinary JPEG for images with large size and more complex content. In effect, it is an optimization that enhances the quality (PSNR and SSIM) of the reconstructed image and reduces the size of the compressed image, especially for large images.
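The following is a minimal sketch of the shrink, encode, and stretch idea, assuming Pillow and NumPy; the resize filters, the scale and quality parameters, and the exact tanh enhancement formula are illustrative guesses, not the authors' implementation.

```python
import io
import numpy as np
from PIL import Image

def compress_shrink_stretch(img, scale=0.5, quality=75, gain=2.0):
    """Shrink an image, JPEG-encode it, then stretch back and sharpen with tanh.

    scale, quality, and gain are illustrative parameters, not values
    taken from the paper.
    """
    w, h = img.size
    small = img.resize((int(w * scale), int(h * scale)), Image.LANCZOS)  # smooth shrink

    buf = io.BytesIO()
    small.save(buf, format="JPEG", quality=quality)      # ordinary JPEG on the smaller image
    compressed_bytes = buf.getvalue()

    decoded = Image.open(io.BytesIO(compressed_bytes))
    stretched = decoded.resize((w, h), Image.BICUBIC)    # stretch back to original size (blurry)

    # tanh-based enhancement (assumed form): push pixel deviations from mid-gray
    # outward to counteract the blur introduced by shrinking and stretching.
    x = np.asarray(stretched, dtype=np.float64) / 255.0
    enhanced = 0.5 + 0.5 * np.tanh(gain * (x - 0.5)) / np.tanh(gain * 0.5)
    return compressed_bytes, Image.fromarray((enhanced * 255).clip(0, 255).astype(np.uint8))
```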


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1817
Author(s):  
Jiawen Xue ◽  
Li Yin ◽  
Zehua Lan ◽  
Mingzhu Long ◽  
Guolin Li ◽  
...  

This paper proposes a novel 3D discrete cosine transform (DCT) based image compression method for medical endoscopic applications. Owing to the high correlation among the color components of wireless capsule endoscopy (WCE) images, the original 2D Bayer data pattern is reorganized into a new 3D data pattern, and a 3D DCT is adopted to compress the 3D data for a high compression ratio and high quality. To keep the computational complexity of the 3D DCT low, an optimized 4-point DCT butterfly structure without multiplication operations is proposed. Because of the unique characteristics of the 3D data pattern, the quantization and zigzag scan are refined accordingly. To further improve the visual quality of the decompressed images, a frequency-domain filter is proposed to eliminate blocking artifacts adaptively. Experiments show that the method attains an average compression ratio (CR) of 22.94:1 with a peak signal-to-noise ratio (PSNR) of 40.73 dB, which outperforms state-of-the-art methods.
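To make the "butterfly without multiplication" idea concrete, here is a small sketch of a sign-approximated 4-point DCT built from two butterfly stages of additions and subtractions only, applied separably along the three axes of a 4x4x4 block; it illustrates the structure in general, not the paper's specific optimized transform.

```python
import numpy as np

def dct4_butterfly_approx(x):
    """Multiplication-free 4-point transform using a two-stage butterfly.

    This is the sign-approximated DCT (rows of the exact 4-point DCT matrix
    replaced by their signs); the paper's optimized butterfly may differ.
    """
    x0, x1, x2, x3 = (float(v) for v in x)
    # Stage 1: butterflies on symmetric input pairs.
    a, d = x0 + x3, x0 - x3
    b, c = x1 + x2, x1 - x2
    # Stage 2: only additions and subtractions, no multiplications.
    return np.array([a + b, d + c, a - b, d - c])

def dct3d_separable(block):
    """Apply the 4-point transform along each axis of a 4x4x4 block,
    mimicking how a separable 3D DCT is built from 1D transforms."""
    out = block.astype(np.float64)
    for axis in range(3):
        out = np.apply_along_axis(dct4_butterfly_approx, axis, out)
    return out
```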


Author(s):  
Ilia V. Safonov ◽  
Ilya V. Kurilin ◽  
Michael N. Rychagov ◽  
Ekaterina V. Tolstaya

In the domain of image signal processing, image compression is a significant technique, invented mainly to reduce the redundancy of image data so that image pixels can be transmitted with high quality and resolution. The standard image compression techniques, lossless and lossy, generate highly compressed images with efficient storage and transmission requirements, respectively. Many image compression techniques are available, for example JPEG, DWT-based, and DCT-based compression algorithms, which provide effective results in terms of high compression ratios with clear, high-quality reconstructed images. However, they have considerable computational complexity in terms of processing, encoding, energy consumption, and hardware design. Addressing these challenges, this paper considers the most prominent research papers and discusses FPGA architecture design and the future scope of the state of the art in image compression techniques. The primary aim is to investigate the research challenges in VLSI design for image compression. The core of the study comprises three parts: standard architecture designs, related work, and open research challenges in the domain of image compression.


Symmetry ◽  
2019 ◽  
Vol 11 (2) ◽  
pp. 291 ◽  
Author(s):  
Walaa Khalaf ◽  
Dhafer Zaghar ◽  
Noor Hashim

Image compression is one of the most interesting fields of image processing and is used to reduce image size. 2D curve-fitting is a method that converts the image data (pixel values) into a set of mathematical equations that represent the image. These equations have a fixed form with a few coefficients estimated from the image, which is divided into several blocks. Since the number of coefficients is smaller than the number of pixels in a block, curve-fitting can be used as a tool for image compression. In this paper, a new curve-fitting model with only three coefficients, derived from the symmetric hyperbolic tangent (tanh) function, is proposed. The main disadvantages of previous approaches were the additional errors and degradation of the edges of the reconstructed image, as well as the blocking effect. To overcome these deficiencies, the symmetric hyperbolic tangent function is used instead of the classical 1st- and 2nd-order curve-fitting functions, which are asymmetric, to reformulate the blocks of the image. Owing to the symmetry of the hyperbolic tangent function, this reduces the reconstruction error and improves the fine details and texture of the reconstructed image. The results of this work have been tested and compared with 1st-order curve-fitting and standard image compression (JPEG) methods. The main advantages of the proposed approach are: strengthening the edges of the image, removing the blocking effect, improving the Structural SIMilarity (SSIM) index, and increasing the Peak Signal-to-Noise Ratio (PSNR) by up to 20 dB. Simulation results show that the proposed method significantly improves the objective and subjective quality of the reconstructed image.
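As an illustration of three-coefficient tanh fitting, the sketch below fits a model of the form a*tanh(b*t + c) over a simple 1D parameterisation of an 8x8 block using SciPy; the paper's actual 2D model form and fitting procedure may differ, so this is only a rough sketch under those assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def tanh_model(t, a, b, c):
    """Three-coefficient hyperbolic-tangent model (illustrative form only)."""
    return a * np.tanh(b * t + c)

def fit_block(block):
    """Fit the tanh model to one image block (e.g. 8x8); the three fitted
    coefficients stand in for the block's pixel values."""
    pixels = block.astype(np.float64).ravel() / 255.0
    t = np.linspace(0.0, 1.0, pixels.size)   # 1D parameterisation of the block
    coeffs, _ = curve_fit(tanh_model, t, pixels, p0=(1.0, 1.0, 0.0), maxfev=2000)
    return coeffs

def reconstruct_block(coeffs, shape=(8, 8)):
    """Rebuild an approximate block from its three fitted coefficients."""
    t = np.linspace(0.0, 1.0, shape[0] * shape[1])
    return (tanh_model(t, *coeffs) * 255.0).clip(0, 255).reshape(shape)
```

Storing three coefficients per 8x8 block in place of 64 pixel values is what gives curve-fitting its compression potential.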

