A Fast Image Compression Algorithm Based on Wavelet Transform

Author(s):  
Xiangjun Li ◽  
Shuili Zhang ◽  
Haibo Zhao

As multimedia becomes ubiquitous, the conflict between massive data volumes and limited storage devices continues to intensify, creating demand for more convenient, efficient, and high-quality transmission and storage technology; highly efficient compression and fast image transmission are therefore what researchers and users pursue. This paper further studies wavelet analysis and fractal compression coding, proposes a fast image compression coding method based on the wavelet transform and fractal theory, and provides the theoretical basis and specific operational steps for the algorithm. The method exploits the smoothness of wavelets, the high compression ratio of fractal coding, and the high quality of the reconstructed image. The image is first processed with a wavelet transform. Fractal features are then introduced and the image sub-blocks are classified according to their features, with appropriate features selected for each class. For any sub-block, the best-matched block is then searched only within the corresponding class, which effectively narrows the search, speeds up coding, and establishes an inequality relation between sub-block features and the matching mean square error. In this way the method effectively combines the wavelet transform with fractal theory and further improves the quality of the reconstructed image. Simulation experiments and comparisons objectively analyze the performance of the algorithm and show that the proposed algorithm achieves higher efficiency.
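As an illustration of the class-restricted search described above, the following minimal Python sketch classifies blocks by a simple standard-deviation feature and matches each range block only against domain blocks of the same class. The feature, thresholds, and block sizes are assumptions for illustration, not the authors' exact design.

```python
# Minimal sketch of class-restricted fractal block matching (illustrative only;
# the feature, class boundaries, and the preceding wavelet step are assumptions).
import numpy as np

def classify(block, thresholds=(5.0, 20.0)):
    """Assign a block to a class by its standard deviation (smooth / textured / edge)."""
    s = block.std()
    if s < thresholds[0]:
        return 0          # smooth
    if s < thresholds[1]:
        return 1          # textured
    return 2              # strong edges

def best_match(range_block, domain_blocks):
    """Search for the domain block with the lowest MSE, restricted to one class."""
    cls = classify(range_block)
    candidates = [d for d in domain_blocks if classify(d) == cls]
    if not candidates:                 # fall back to the full pool if the class is empty
        candidates = domain_blocks
    errors = [np.mean((range_block - d) ** 2) for d in candidates]
    return candidates[int(np.argmin(errors))], min(errors)

# Toy usage: an 8x8 range block matched against a small pool of domain blocks.
rng = np.random.default_rng(0)
domains = [rng.normal(size=(8, 8)) for _ in range(32)]
match, mse = best_match(rng.normal(size=(8, 8)), domains)
print("matching MSE:", mse)
```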

2013 ◽  
Vol 380-384 ◽  
pp. 3815-3817
Author(s):  
Yan Yang

This paper presents a new method of scalable image compression coding based on the wavelet transform. The method delimits a region of interest in the original image and applies high-quality encoding to that region and coarse encoding to the rest. The results show that, within limited memory space, the algorithm provides a coarser reconstructed image that satisfies the basic subjective quality requirement while still encoding the region of interest at high quality.
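A hedged sketch of the idea, assuming PyWavelets (pywt) is available: wavelet detail coefficients inside a region-of-interest mask are quantized with a fine step and the rest with a coarse step. The mask handling and step sizes are illustrative, not the paper's scheme.

```python
import numpy as np
import pywt

def resize_mask(mask, shape):
    """Nearest-neighbour resize of a boolean ROI mask to a subband's shape."""
    ys = np.linspace(0, mask.shape[0] - 1, shape[0]).astype(int)
    xs = np.linspace(0, mask.shape[1] - 1, shape[1]).astype(int)
    return mask[np.ix_(ys, xs)]

def roi_compress(image, roi_mask, fine=2.0, coarse=32.0, wavelet="haar", level=2):
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    out = [np.round(coeffs[0] / fine) * fine]           # approximation kept at the fine step
    for detail in coeffs[1:]:
        bands = []
        for band in detail:
            m = resize_mask(roi_mask, band.shape)
            step = np.where(m, fine, coarse)             # fine inside the ROI, coarse outside
            bands.append(np.round(band / step) * step)
        out.append(tuple(bands))
    return pywt.waverec2(out, wavelet)

# Toy usage: a 64x64 image with a 24x24 ROI in the centre.
img = np.random.default_rng(1).normal(size=(64, 64)) * 50
roi = np.zeros((64, 64), dtype=bool)
roi[20:44, 20:44] = True
rec = roi_compress(img, roi)
print("ROI error:", np.abs(rec - img)[roi].mean(),
      "background error:", np.abs(rec - img)[~roi].mean())
```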


Connectivity ◽  
2020 ◽  
Vol 148 (6) ◽  
Author(s):  
Yu. I. Katkov ◽  
◽  
O. S. Zvenigorodsky ◽  
O. V. Zinchenko ◽  
V. V. Onyshchenko ◽  
...  

The article addresses the topical issue of finding new effective compression methods and improving existing widespread ones in order to reduce computational complexity and improve the quality of the reconstructed image, which is important for the adoption of cloud technologies. The problem is stated as follows: to increase the efficiency of cloud storage, it is necessary to determine methods for reducing the information redundancy of digital images by fractal compression of video content and to make recommendations on applying these methods to various practical problems. The necessity of storing high-quality video in the new HDTV formats 2K, 4K and 8K in cloud storage to meet existing user needs is substantiated. It is shown that processing and transmitting high-quality video raises the problem of reducing the redundancy of video data (image compression) while preserving the desired quality of the image restored by the user. In cloud storage this problem historically stems from the contradiction between consumer requirements for image quality and the volumes of video data transmitted over communication channels and processed on data-center servers, and the corresponding need to reduce that redundancy. The solution traditionally lies in the search for effective technologies for compressing and archiving video information. An analysis of video compression methods and digital video compression technology, which reduce the amount of data used to represent the video stream, is performed. Approaches to image compression in cloud storage that preserve, or only slightly reduce, the amount of data while providing the user with the specified quality of the restored image are shown. A classification of lossless and lossy compression methods is provided. Based on the analysis, it is concluded that lossy compression methods are advisable for storing high-quality video in the new HDTV formats 2K, 4K and 8K in cloud storage. The use of video image processing, encoding and compression based on fractal image compression is substantiated, and recommendations for implementing these methods are given.


2018 ◽  
Vol 14 (25) ◽  
pp. 1-11
Author(s):  
Satya Prakash Yadav ◽  
Sachin Yadav

Introduction: Image compression is a prime example of an operation in the medical domain that leads to better understanding and implementation of treatment, especially in radiology. The discrete wavelet transform (DWT) is used for better and faster implementation of this kind of image fusion.
Methodology: To exploit mathematical tools in the medical domain, we use the discrete wavelet transform for image fusion and for extracting features from images.
Results: The expected outcome is a better understanding of image resolutions and the ability to compress or fuse images so that their size decreases without degrading pixel quality.
Conclusions: Implementing the DWT-based approach will help researchers and practitioners in the medical domain achieve better image fusion and data transmission, leading to better treatment procedures; it also reduces the data transfer burden, since file sizes shrink while data loss remains manageable.
Originality: Fusing images can reduce image size, which is useful for reducing bandwidth while transmitting images; the challenge is to maintain the same quality during both transmission and compression.
Limitations: Because this is a new implementation, any mistake in compressing medical images could lead to treatment errors for the patient; image quality must not be degraded by this implementation.
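The following sketch illustrates a common DWT fusion rule, assuming PyWavelets: approximation coefficients are averaged and detail coefficients are chosen by maximum absolute value. It is a generic illustration, not the authors' exact pipeline.

```python
# Illustrative DWT image-fusion sketch: decompose both images, average the
# coarse approximations, keep the stronger detail coefficient, and reconstruct.
import numpy as np
import pywt

def dwt_fuse(img_a, img_b, wavelet="db2", level=2):
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                      # average the coarse approximations
    for da, db in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)   # keep the stronger detail
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)

# Toy usage with two random "modalities" of the same size.
rng = np.random.default_rng(2)
a, b = rng.normal(size=(128, 128)), rng.normal(size=(128, 128))
print("fused shape:", dwt_fuse(a, b).shape)
```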


2016 ◽  
Vol 10 (1) ◽  
pp. 34
Author(s):  
Rodrigo da Rosa Righi ◽  
Vinicius F. Rodrigues ◽  
Cristiano A. Costa ◽  
Roberto Q. Gomes

This paper presents a parallel modeling of a lossy image compression method based on fractal theory and its evaluation on two versions of dual-core processors: with and without simultaneous multithreading (SMT) support. The idea is to observe the speedup on both configurations when changing application parameters and the number of threads at the operating-system level. Our target application is particularly relevant in the Big Data era: huge amounts of data often need to be sent over low- or medium-bandwidth networks and/or saved on devices with limited storage capacity, motivating efficient image compression. In particular, fractal compression is a CPU-bound coding method known for offering high compression ratios at the cost of highly time-consuming computations. The structure of the problem allowed us to exploit data parallelism by implementing an embarrassingly parallel version of the algorithm. Despite its simplicity, our modeling is useful for fully exploiting and evaluating the considered architectures. When comparing performance on both processors, the results showed that the SMT-based one delivered gains of up to 29%. They also emphasized that a larger number of threads does not always translate into a reduction in application time. On average, the results showed a strong time reduction when working with 4 threads on the pure dual-core processor and 8 threads on the SMT-enabled one, followed by a slow growth of execution time as the number of threads increases further, due to both task granularity and thread-management overhead.
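A minimal sketch of the embarrassingly parallel decomposition: range blocks are split among workers that each search the domain pool independently. The search is simplified and the worker counts are illustrative; in CPython, a process pool would be needed for real CPU-bound speedup, so this only shows the data-parallel structure, not the paper's implementation.

```python
# Each range block is encoded independently, so the block list can simply be
# mapped over a worker pool; results come back in order.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def encode_block(args):
    range_block, domain_blocks = args
    errs = [np.mean((range_block - d) ** 2) for d in domain_blocks]
    return int(np.argmin(errs))                  # index of the best-matching domain block

def parallel_encode(range_blocks, domain_blocks, n_workers=4):
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(encode_block, ((r, domain_blocks) for r in range_blocks)))

# Toy usage: 64 range blocks searched against 256 domain blocks with 8 workers.
rng = np.random.default_rng(3)
ranges  = [rng.normal(size=(8, 8)) for _ in range(64)]
domains = [rng.normal(size=(8, 8)) for _ in range(256)]
print(parallel_encode(ranges, domains, n_workers=8)[:5])
```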


2014 ◽  
Vol 886 ◽  
pp. 650-654
Author(s):  
Bo Hao Xu ◽  
Yong Sheng Hao

Progressive image transmission is an image technology widely used in various fields; it not only saves bandwidth but also improves the user experience by meeting users' demands for different image qualities. By following the user's requirements for image quality, a progressive image compression coding flow can be realized to satisfy those demands. This article mainly introduces how to implement progressive image compression by means of JPEG and the Laplacian pyramid coding principle.
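The sketch below shows the Laplacian-pyramid side of progressive transmission: a coarse base image is sent first, and each stored detail level refines it. The simple decimation and nearest-neighbour upsampling used here are assumptions for illustration, not a particular codec's filters.

```python
import numpy as np

def downsample(img):
    return img[::2, ::2]

def upsample(img, shape):
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)   # nearest-neighbour expansion
    return up[:shape[0], :shape[1]]                         # crop in case of odd sizes

def build_pyramid(img, levels=3):
    gaussians = [img]
    for _ in range(levels):
        gaussians.append(downsample(gaussians[-1]))
    # Each Laplacian level stores what the coarser level cannot represent.
    laplacians = [g - upsample(gn, g.shape) for g, gn in zip(gaussians[:-1], gaussians[1:])]
    return gaussians[-1], laplacians                         # coarse base + detail levels

def progressive_decode(base, laplacians):
    """Yield successively refined reconstructions, coarse to fine."""
    img = base
    yield img
    for lap in reversed(laplacians):
        img = upsample(img, lap.shape) + lap
        yield img

# Toy usage: the final stage reproduces the original exactly.
img = np.random.default_rng(4).normal(size=(64, 64))
base, laps = build_pyramid(img, levels=3)
stages = list(progressive_decode(base, laps))
print("final reconstruction error:", np.abs(stages[-1] - img).max())
```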


2012 ◽  
Vol 241-244 ◽  
pp. 418-422
Author(s):  
Dong Mei Wang ◽  
Jing Yi Lu

The EZW and fractal coding algorithms were studied and simulated in this paper, and two drawbacks were identified: the coding time is too long and the quality of the reconstructed image is not ideal. The paper therefore studied the application of the wavelet transform in fractal coding. When an image is processed by the wavelet transform, its wavelet coefficients exhibit two characteristics: first, the energy of the image is strongly concentrated in the low-frequency sub-image; second, there is similarity between same-orientation high-frequency sub-images across scales. The essence of fractal coding is precisely to exploit this self-similarity of the wavelet-transformed image. The paper therefore designed a new image compression scheme based on fractal coding in the wavelet domain. Theoretical analysis and simulation experiments indicated that, to some extent, the method reduces the coding time, reduces the MSE, increases the compression ratio, and improves the PSNR of the reconstructed image.
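To illustrate the cross-scale similarity that wavelet-domain fractal coding exploits, the following sketch predicts blocks of a fine horizontal-detail subband from the coarser subband of the same orientation using a least-squares scale factor. The block size and scaling rule are assumptions, not the paper's scheme.

```python
import numpy as np
import pywt

def predict_fine_from_coarse(fine_band, coarse_band, bs=4):
    """Approximate each bs x bs block of the fine subband by a scaled parent block."""
    pred = np.zeros_like(fine_band)
    H, W = fine_band.shape
    for i in range(0, H - bs + 1, bs):
        for j in range(0, W - bs + 1, bs):
            child  = fine_band[i:i + bs, j:j + bs]
            parent = coarse_band[i // 2:i // 2 + bs // 2, j // 2:j // 2 + bs // 2]
            parent = np.kron(parent, np.ones((2, 2)))                  # expand to child size
            s = (parent * child).sum() / ((parent ** 2).sum() + 1e-12)  # least-squares scale
            pred[i:i + bs, j:j + bs] = s * parent
    return pred

# Toy usage: predict the level-1 horizontal subband from the level-2 one.
img = np.random.default_rng(5).normal(size=(128, 128))
cA2, (cH2, cV2, cD2), (cH1, cV1, cD1) = pywt.wavedec2(img, "haar", level=2)
pred = predict_fine_from_coarse(cH1, cH2)
print("prediction MSE:", np.mean((pred - cH1) ** 2))
```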


2013 ◽  
Vol 860-863 ◽  
pp. 2946-2949
Author(s):  
Yu Li ◽  
Lin He

With the rapid development of the Internet era, and in order to improve the speed of image transmission and storage, this paper presents a new method for applying the Walsh transform to data blocks whose size is not a power of two (2^n). The paper focuses on applying the Walsh transform to color image compression coding. The method is simple and achieves good results in the experimental simulations.
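For background only, here is a standard fast Walsh-Hadamard transform on power-of-two blocks; the paper's extension to blocks whose size is not 2^n is not reproduced here.

```python
import numpy as np

def fwht(x):
    """Unnormalised fast Walsh-Hadamard transform of a 1-D array of length 2^n."""
    x = np.asarray(x, dtype=float).copy()
    h, n = 1, x.size
    while h < n:
        for i in range(0, n, h * 2):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b                  # butterfly: sums
            x[i + h:i + 2 * h] = a - b          # butterfly: differences
        h *= 2
    return x

def fwht2(block):
    """Separable 2-D transform: rows first, then columns."""
    rows = np.apply_along_axis(fwht, 1, block)
    return np.apply_along_axis(fwht, 0, rows)

# Toy usage: the transform is its own inverse up to a factor of the block size (8*8).
block = np.arange(64, dtype=float).reshape(8, 8)
coeffs = fwht2(block)
print(np.allclose(fwht2(coeffs) / 64.0, block))
```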


2011 ◽  
Vol 11 (03) ◽  
pp. 355-375 ◽  
Author(s):  
MOHAMMAD REZA BONYADI ◽  
MOHSEN EBRAHIMI MOGHADDAM

Most image compression methods are based on frequency-domain transforms followed by quantization and rounding to discard some coefficients. The quality of the compressed images clearly depends on how these coefficients are discarded, and finding a good balance between image quality and compression ratio is therefore an important issue. In this paper, a new lossy compression method called linear mapping image compression (LMIC) is proposed to compress images with high quality while satisfying a user-specified compression ratio. The method is based on the discrete cosine transform (DCT) and an adaptive zonal mask. It divides the image into equal-size blocks and determines the structure of the zonal mask for each block independently by considering its gray-level distance (GLD). Experimental results showed that the presented method achieved a higher peak signal-to-noise ratio (PSNR) than some related works at a specified compression ratio; in addition, the results were comparable with JPEG2000.
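A hedged sketch of DCT coding with a zonal mask, using SciPy's DCT: each 8x8 block keeps only a triangular low-frequency zone of coefficients. The adaptive, GLD-driven mask selection of LMIC is not reproduced; a fixed zone size stands in for it.

```python
import numpy as np
from scipy.fft import dctn, idctn

def zonal_mask(bs=8, keep=4):
    """Keep coefficients (u, v) with u + v < keep (a triangular low-frequency zone)."""
    u, v = np.indices((bs, bs))
    return (u + v) < keep

def compress_block(block, mask):
    coeffs = dctn(block, norm="ortho")
    return idctn(coeffs * mask, norm="ortho")    # zero out the masked-out coefficients

# Toy usage: keep 10 of 64 coefficients in an 8x8 block and measure the error.
block = np.random.default_rng(6).normal(size=(8, 8)) * 64
mask = zonal_mask(keep=4)
rec = compress_block(block, mask)
print(f"kept {int(mask.sum())}/64 coefficients, reconstruction MSE: {np.mean((rec - block) ** 2):.2f}")
```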

