Survey of Hybrid Image Compression Techniques

Author(s):  
Emy Setyaningsih ◽  
Agus Harjoko

Compression reduces the size of data while preserving the information it contains. This paper surveys research on improvements to hybrid compression techniques over the last decade. A hybrid compression technique combines the strengths of both families of methods, as is done in the JPEG standard: it pairs lossy and lossless stages to obtain a high compression ratio while maintaining the quality of the reconstructed image. Lossy compression yields a relatively high compression ratio, whereas lossless compression guarantees exact reconstruction, since the decompressed data are identical to the original. The discussion of current knowledge and open issues in hybrid compression indicates opportunities for further research to improve the performance of image compression methods.
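As a concrete illustration of the hybrid idea described above, the sketch below pairs a lossy stage (blockwise DCT with uniform quantization) with a lossless stage (DEFLATE via zlib). The 8 × 8 block size, the quantization step, and the use of SciPy and zlib are illustrative choices, not details taken from the survey.

```python
import zlib
import numpy as np
from scipy.fft import dctn, idctn

def hybrid_compress(image, block=8, qstep=20):
    """Lossy stage: blockwise 2D DCT plus uniform quantization.
    Lossless stage: zlib (DEFLATE) over the quantized coefficients."""
    h, w = image.shape
    coeffs = np.zeros((h, w), dtype=np.int16)
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = image[y:y+block, x:x+block].astype(float)
            coeffs[y:y+block, x:x+block] = np.round(dctn(tile, norm="ortho") / qstep)
    return zlib.compress(coeffs.tobytes(), level=9), (h, w)

def hybrid_decompress(payload, shape, block=8, qstep=20):
    """Invert the lossless stage exactly, then invert the lossy stage approximately."""
    coeffs = np.frombuffer(zlib.decompress(payload), dtype=np.int16).reshape(shape)
    out = np.zeros(shape)
    for y in range(0, shape[0], block):
        for x in range(0, shape[1], block):
            out[y:y+block, x:x+block] = idctn(coeffs[y:y+block, x:x+block] * qstep,
                                              norm="ortho")
    return np.clip(np.round(out), 0, 255).astype(np.uint8)

# Toy usage: report the achieved compression ratio on a random 8-bit image.
img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
blob, shape = hybrid_compress(img)
print("compression ratio:", img.nbytes / len(blob))
```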

2011 ◽  
Vol 11 (03) ◽  
pp. 355-375 ◽  
Author(s):  
MOHAMMAD REZA BONYADI ◽  
MOHSEN EBRAHIMI MOGHADDAM

Most image compression methods are based on frequency-domain transforms followed by quantization and rounding to discard some coefficients. The quality of the compressed image clearly depends on how these coefficients are discarded, and finding a good balance between image quality and compression ratio is a key issue. In this paper, a new lossy compression method called linear mapping image compression (LMIC) is proposed to compress images with high quality while satisfying a user-specified compression ratio. The method is based on the discrete cosine transform (DCT) and an adaptive zonal mask. It divides the image into equal-sized blocks, and the structure of the zonal mask for each block is determined independently from its gray-level distance (GLD). The experimental results showed that the presented method achieved a higher peak signal-to-noise ratio (PSNR) than several related methods at a specified compression ratio. In addition, the results were comparable with JPEG2000.
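The abstract does not define the gray-level distance precisely, so the sketch below only mimics the overall LMIC idea: a blockwise DCT followed by a triangular zonal mask whose size grows with the block's gray-level spread, used here as a stand-in for the paper's GLD. The parameter names and ranges are assumptions.

```python
import numpy as np
from scipy.fft import dctn

def zonal_mask(n, keep):
    """Triangular low-frequency zone: retain coefficients with u + v < keep."""
    u, v = np.indices((n, n))
    return (u + v) < keep

def lmic_like_block(block, min_keep=2, max_keep=8):
    """Keep more DCT coefficients in blocks with a larger gray-level spread.
    The spread (max - min) is only a stand-in for the paper's gray-level
    distance, whose exact definition is not given in the abstract."""
    n = block.shape[0]
    spread = int(block.max()) - int(block.min())          # 0..255 for 8-bit data
    keep = int(round(min_keep + (max_keep - min_keep) * spread / 255.0))
    coeffs = dctn(block.astype(float), norm="ortho")
    return coeffs * zonal_mask(n, keep)                   # zero out-of-zone coefficients

block = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
masked = lmic_like_block(block)
print("nonzero coefficients kept:", np.count_nonzero(masked))
```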


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1817
Author(s):  
Jiawen Xue ◽  
Li Yin ◽  
Zehua Lan ◽  
Mingzhu Long ◽  
Guolin Li ◽  
...  

This paper proposes a novel 3D discrete cosine transform (DCT) based image compression method for medical endoscopic applications. Because of the high correlation among the color components of wireless capsule endoscopy (WCE) images, the original 2D Bayer data pattern is reorganized into a new 3D data pattern, and a 3D DCT is applied to compress the 3D data with a high compression ratio and high quality. To lower the computational complexity of the 3D DCT, an optimized 4-point DCT butterfly structure that requires no multiplications is proposed. The quantization and zigzag scan are adapted to the unique characteristics of the 3D data pattern. To further improve the visual quality of decompressed images, a frequency-domain filter is proposed to eliminate blocking artifacts adaptively. Experiments show that our method attains an average compression ratio (CR) of 22.94:1 with a peak signal-to-noise ratio (PSNR) of 40.73 dB, which outperforms state-of-the-art methods.
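The paper's own butterfly is not reproduced in the abstract. The following sketch shows a generic {0, +1, -1} 4-point DCT approximation that needs only six additions and no multiplications, simply to illustrate how such a low-complexity butterfly can be built; for the 3D case a kernel of this kind would be applied along each axis in turn.

```python
import numpy as np

def approx_dct4(x):
    """Butterfly form of a {0, +1, -1} 4-point DCT approximation: six additions,
    no multiplications. Rows 0 and 2 match the exact (unscaled) DCT; rows 1 and 3
    are coarse approximations of the odd-frequency basis vectors."""
    a, d = x[0] + x[3], x[0] - x[3]   # even/odd split of the outer sample pair
    b, c = x[1] + x[2], x[1] - x[2]   # even/odd split of the inner sample pair
    return np.array([a + b, d, a - b, -c])

print(approx_dct4(np.array([10, 12, 14, 200])))
```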


Author(s):  
Tetti Purnama Sari ◽  
Surya Darma Nasution ◽  
Rivalri Kristianto Hondro

Compressed files require less disk space than uncompressed files, so compression is useful for backing up data in less storage space and for sending information over the Internet faster. MP3 is a widely used audio format because the stored data closely resemble the original recording while the file size remains modest compared with other formats. Besides song files, users also store videos and other files, and they want the highest quality at the minimum size. To free up storage space and keep data sizes small, a compression method is needed that reduces the number of bits required to represent the data. The algorithm used in the compression process is the Levenstein algorithm, a lossless compression technique. By applying the Levenstein algorithm, the authors examine its compression performance on MP3 files, so that large MP3 files are compressed to smaller sizes, transmission is faster, and less storage is required. Keywords: MP3, File, Levenstein
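The abstract does not detail how the Levenstein code is mapped onto MP3 data. As a reference point, here is a small Python sketch of the Levenshtein (Levenstein) universal integer code itself, the lossless code family the paper names; the encode/decode pair is a textbook construction, not the authors' implementation.

```python
def levenshtein_encode(n: int) -> str:
    """Levenshtein universal code of a non-negative integer, as a bit string."""
    if n == 0:
        return "0"
    parts, c = [], 1
    while True:
        bits = bin(n)[3:]            # binary representation minus its leading 1
        parts.insert(0, bits)        # every step writes to the front of the code
        if not bits:
            break
        c += 1
        n = len(bits)
    return "1" * c + "0" + "".join(parts)

def levenshtein_decode(code: str) -> int:
    """Inverse of levenshtein_encode; reads exactly one codeword."""
    c = 0
    while code[c] == "1":
        c += 1
    if c == 0:
        return 0
    pos, n = c + 1, 1
    for _ in range(c - 1):
        bits = code[pos:pos + n]
        pos += n
        n = int("1" + bits, 2)       # re-attach the implicit leading 1
    return n

# The first few codewords, plus a round-trip sanity check.
assert [levenshtein_encode(k) for k in range(5)] == ["0", "10", "1100", "1101", "1110000"]
assert all(levenshtein_decode(levenshtein_encode(k)) == k for k in range(1000))
```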


2016 ◽  
Vol 13 (10) ◽  
pp. 6671-6679
Author(s):  
H Rajasekhar ◽  
B. Prabhakara Rao

In a previous video compression method, videos were segmented using a novel motion estimation algorithm aided by the watershed method, but the resulting compression ratio (CR) was not adequate, and performance in the encoding and decoding stages also needed improvement. Most video compression methods rely on encoding techniques such as JPEG, run-length coding, Huffman coding and LSK encoding, so improving the encoding stage improves the overall compression result. To overcome these drawbacks, we propose a new video compression method built on a well-established encoding technique. In the proposed method, the motion vectors of the input video frames are estimated using the watershed and ARS-ST (Adaptive Rood Search with Spatio-Temporal) algorithms. The vector blocks with high difference values are then encoded with the JPEG-LS encoder. JPEG-LS offers excellent coding and computational efficiency and outperforms JPEG2000 and many other image compression methods; it has relatively low complexity and low storage requirements, and its compression capability is sufficient. To reconstruct the video, the encoded blocks are subsequently decoded by JPEG-LS. The implementation results show the effectiveness of the proposed method in compressing a large number of videos. Its performance is evaluated by comparing the results with existing video compression techniques; the comparison shows that the proposed method achieves a higher compression ratio and PSNR on the test videos than the existing techniques.
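For reference, the sketch below shows a much simplified rood-pattern block-matching search over the sum of absolute differences. It is not the paper's ARS-ST algorithm, which also uses spatio-temporal prediction and watershed segmentation; the block size, arm length and test data are assumptions.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(a.astype(int) - b.astype(int)).sum()

def rood_search(ref, cur, y, x, block=16, arm=4):
    """Simplified rood-pattern block matching: start at the co-located block,
    test the four rood arms around the current best, and shrink the arm when
    no move improves the SAD."""
    h, w = ref.shape
    target = cur[y:y+block, x:x+block]
    best, best_cost = (0, 0), sad(ref[y:y+block, x:x+block], target)
    while arm >= 1:
        base, moved = best, False
        for dy, dx in [(-arm, 0), (arm, 0), (0, -arm), (0, arm)]:
            ny, nx = y + base[0] + dy, x + base[1] + dx
            if 0 <= ny <= h - block and 0 <= nx <= w - block:
                cost = sad(ref[ny:ny+block, nx:nx+block], target)
                if cost < best_cost:
                    best_cost, best, moved = cost, (base[0] + dy, base[1] + dx), True
        if not moved:
            arm //= 2                 # refine with a smaller rood
    return best                       # estimated motion vector (dy, dx)

ref = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(ref, -4, axis=1)        # synthetic horizontal shift of 4 pixels
print(rood_search(ref, cur, 16, 16))  # expected: (0, 4)
```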


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Mahmood Al-khassaweneh ◽  
Omar AlShorman

In the big data era, image compression is of significant importance. Compression of large images is required for everyday tasks, including electronic data communication and internet transactions. Two measures should be considered for any compression algorithm: the compression factor and the quality of the decompressed image. In this paper, we use the Frei-Chen bases technique together with a modified Run Length Encoding (RLE) to compress images. The Frei-Chen bases technique is applied in the first stage, in which the average subspace is evaluated for each 3 × 3 block; the blocks with the highest energy in that subspace are replaced by a single value representing the average of their pixels. Even though the Frei-Chen stage is lossy, it maintains the main characteristics of the image while improving the compression factor, which makes it advantageous to use. In the second stage, RLE is applied to further increase the compression factor without adding any distortion to the decompressed image. Integrating RLE with the Frei-Chen bases technique, as described in the proposed algorithm, ensures high-quality decompressed images and a high compression rate. The results of the proposed algorithm are shown to be comparable in quality and performance with existing methods.
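A minimal sketch of the two stages as outlined above: the fraction of each 3 × 3 block's energy in the Frei-Chen average subspace decides whether the block is replaced by its mean, and a plain run-length coder follows. The 0.99 threshold and the unmodified RLE are illustrative assumptions; the paper's modified RLE is not detailed in the abstract.

```python
import numpy as np

def average_subspace_fraction(block):
    """Fraction of a 3x3 block's energy captured by the Frei-Chen
    'average' basis vector (the normalized all-ones mask)."""
    b = block.astype(float).ravel()
    w9 = np.ones(9) / 3.0                    # unit-norm average mask
    proj = np.dot(b, w9)
    return proj * proj / (np.dot(b, b) + 1e-12)

def smooth_block_map(image, threshold=0.99):
    """Replace 3x3 blocks dominated by the average subspace with their mean
    (the lossy Frei-Chen stage); other blocks are kept untouched."""
    out = image.astype(float)
    h, w = image.shape
    for y in range(0, h - h % 3, 3):
        for x in range(0, w - w % 3, 3):
            tile = image[y:y+3, x:x+3]
            if average_subspace_fraction(tile) >= threshold:
                out[y:y+3, x:x+3] = tile.mean()
    return np.round(out).astype(np.uint8)

def run_length_encode(values):
    """Plain run-length coding of a 1D sequence as [value, run] pairs."""
    pairs = []
    for v in values:
        if pairs and pairs[-1][0] == v:
            pairs[-1][1] += 1
        else:
            pairs.append([v, 1])
    return pairs

# Piecewise-constant test image: the flattened result collapses to a few runs.
img = np.zeros((12, 12), dtype=np.uint8)
img[:, 6:] = 200
flat = smooth_block_map(img).ravel()
print(len(run_length_encode(flat)), "runs for", flat.size, "pixels")
```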


2020 ◽  
Vol 9 (06) ◽  
pp. 25075-25084
Author(s):  
Mr. Moayad Al Falahi ◽  
Dr. Janaki Sivakumar

The main objective of this project is to develop an application that finds the best compression technique for storing Muscat College students' photographs in less space. MATLAB will be used to develop a graphical user interface (GUI) and to implement two image compression techniques, one based on the DCT algorithm and one based on the LBG algorithm. The application shall allow the user to select any student image and test it with both techniques, comparing the results by displaying the compressed image and its histogram to determine which technique is most suitable. It shall also show the image sizes before and after compression, together with the compression ratio and relative data redundancy of the compressed image(s). The main functionality is bulk processing: applying image enhancement and compression to all student photographs so that they are enhanced, compressed and stored in less space.
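The compression ratio and relative data redundancy the application reports are standard quantities, CR = n1/n2 and RD = 1 - 1/CR. A small sketch follows (in Python rather than the project's MATLAB), with made-up file sizes:

```python
def compression_metrics(original_bytes: int, compressed_bytes: int):
    """Compression ratio CR = n1/n2 and relative data redundancy RD = 1 - 1/CR,
    the two figures reported for each compressed image."""
    cr = original_bytes / compressed_bytes
    rd = 1.0 - 1.0 / cr
    return cr, rd

# e.g. a 768 kB bitmap stored in 96 kB after compression
cr, rd = compression_metrics(768_000, 96_000)
print(f"CR = {cr:.1f}:1, RD = {rd:.2%}")   # CR = 8.0:1, RD = 87.50%
```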


Connectivity ◽  
2020 ◽  
Vol 148 (6) ◽  
Author(s):  
Yu. I. Katkov ◽  
◽  
O. S. Zvenigorodsky ◽  
O. V. Zinchenko ◽  
V. V. Onyshchenko ◽  
...  

The article addresses the topical issue of finding new effective compression methods and improving existing, widely used ones, in order to reduce computational complexity and improve the quality of reconstructed images, which is important for the adoption of cloud technologies. The problem is stated as follows: to increase the efficiency of cloud storage, it is necessary to identify methods for reducing the information redundancy of digital images through fractal compression of video content, and to make recommendations on applying these methods to various practical problems. The need to store high-quality video in the new HDTV formats 2K, 4K and 8K in cloud storage, to meet existing user needs, is substantiated. It is shown that processing and transmitting high-quality video raises the problem of reducing the redundancy of the video data (image compression) while preserving the image quality required by the user. In cloud storage this problem arises historically from the contradiction between consumer requirements for image quality and the volumes of video data that must be transmitted over communication channels and processed on data-centre servers, together with the available ways of reducing that redundancy. The solution traditionally lies in the search for effective technologies for compressing and archiving video information. An analysis of video compression methods and digital video compression technology, which reduce the amount of data used to represent the video stream, is performed. Approaches to image compression in cloud storage are shown for conditions where the amount of data is preserved or only slightly reduced while the user is provided with the specified quality of the restored image. A classification of special compression methods, lossless and lossy, is provided. Based on the analysis, it is concluded that lossy compression methods are advisable for storing high-quality video in the new HDTV formats 2K, 4K and 8K in cloud storage. The processing of video images and their encoding and compression on the basis of fractal image compression is substantiated, and recommendations for implementing these methods are given.
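Since the article recommends fractal (PIFS-style) compression, here is a generic grayscale range/domain matching sketch, not the article's specific method: each small range block is approximated by a contractively scaled, downsampled domain block plus an offset. The block sizes and search step are assumptions, and the eight block isometries and the iterative decoder are omitted.

```python
import numpy as np

def downsample2(block):
    """Average 2x2 neighbourhoods so a domain block matches the range size."""
    h, w = block.shape
    return block.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def affine_fit(domain, rng):
    """Least-squares contrast s and brightness o such that s*domain + o ~ rng."""
    d, r = domain.ravel(), rng.ravel()
    var = ((d - d.mean()) ** 2).sum()
    s = 0.0 if var == 0 else float(((d - d.mean()) * (r - r.mean())).sum() / var)
    s = float(np.clip(s, -1.0, 1.0))                 # keep the mapping contractive
    o = float(r.mean() - s * d.mean())
    err = float(((s * d + o - r) ** 2).sum())
    return s, o, err

def fractal_encode(image, r=4, step=8):
    """For every r x r range block, store the index of the best-matching
    (downsampled) 2r x 2r domain block and its affine parameters (s, o)."""
    img = image.astype(float)
    h, w = img.shape
    domains = [downsample2(img[y:y + 2 * r, x:x + 2 * r])
               for y in range(0, h - 2 * r + 1, step)
               for x in range(0, w - 2 * r + 1, step)]
    code = []
    for y in range(0, h, r):
        for x in range(0, w, r):
            rng = img[y:y + r, x:x + r]
            best = None
            for i, dom in enumerate(domains):
                s, o, err = affine_fit(dom, rng)
                if best is None or err < best[0]:
                    best = (err, i, s, o)
            code.append((y, x, best[1], best[2], best[3]))   # (pos, domain, s, o)
    return code

img = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
print(len(fractal_encode(img)), "range-block mappings")      # 64 mappings
```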


Author(s):  
Nadine Wiggins ◽  
Brian Stokes

Objectives
The Tasmanian Data Linkage Unit (TDLU) was established through the University of Tasmania in 2011, with the first dataset imported to its Master Linkage Map (MLM) during 2014. Tasmania, an island state of Australia, has a population of approximately 516,000. From the TDLU's earliest inception, it was deemed important to build a high-quality linkage spine comprising key administrative data representative of significant state health and related datasets, in order to support quality population-level research.

Approach
The TDLU has embraced a model of continual quality and process enhancement as a deliberate strategy to support ongoing business improvement. Initial linkage approaches used 'traditional' methods of reviewing record pairs within an upper and lower confidence range. This approach resulted in false record pairs with high confidence levels being linked (false positives) and true record pairs at lower confidence levels not being linked (false negatives). To improve linkage quality, the TDLU has continually refined and modified its clerical review methodology, with a specialist software module developed to identify specific record attributes within groups that require the group to be manually reviewed and resolved. A range of SQL queries has also been developed to identify incorrect links and further enhance the linkage quality of the MLM.

Results
The linkage quality tools implemented have led to improved clerical review and quality assurance processes, which in turn have increased the overall quality of the linkage spine. The 'targeted' method of clerical review makes false positive records easy to identify, particularly those with high confidence scores such as twins and husband/wife combinations. The review of groups at lower confidence levels has minimised the rate of false negative pairs; however, further refinement of the tools is required to minimise the time spent reviewing these groups. The clerical review software module has equipped staff with the information needed to make informed and timely decisions when reviewing groups of records. Detailed documentation is maintained for each linkage project, providing continual feedback for system and process improvements as the linkage spine increases in size.

Conclusion
The process of clerical review and quality assurance requires a commitment to continual refinement of tools and techniques, resulting in a higher-quality linkage spine and a reduction in the total time and resources required to link datasets.


In the domain of image signal processing, image compression is a significant technique, developed mainly to reduce the redundancy of image data so that image pixels can be transmitted at high quality and resolution. Standard image compression techniques, lossless and lossy, provide high compression ratios with efficient storage and transmission requirements, respectively. Many image compression techniques are available, for example JPEG and DWT- and DCT-based algorithms, which provide effective results in terms of high compression ratio and clear image quality. However, they carry considerable computational complexity in processing, encoding, energy consumption and hardware design. Addressing these challenges, this paper reviews the most prominent research papers and discusses FPGA architecture design and the future scope of the state of the art in image compression. The primary aim is to investigate the research challenges in VLSI design for image compression. The core of the study comprises three parts: standard architecture designs, related work, and open research challenges in the domain of image compression.
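As an example of the kind of hardware-friendly kernel such architectures target, the sketch below computes one level of the 2D Haar DWT using only additions, subtractions and a halving. It is an illustrative reference implementation in Python, not a design taken from any of the surveyed papers.

```python
import numpy as np

def haar2d_level(image):
    """One level of the 2D Haar DWT: only additions, subtractions and a halving,
    the kind of kernel that maps directly onto FPGA adders and shifters.
    Returns the LL, LH, HL and HH sub-bands."""
    x = image.astype(float)
    # transform along rows
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0
    # transform along columns
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

img = np.random.randint(0, 256, (8, 8))
ll, lh, hl, hh = haar2d_level(img)
print(ll.shape)   # (4, 4): a quarter-size approximation ready for the next level
```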

