Stabilization of the compressed data formation rate in hierarchical image compression

2007 ◽  
Vol 17 (1) ◽  
pp. 79-81 ◽  
Author(s):  
M. V. Gashnikov ◽  
N. I. Glumov ◽  
V. V. Sergeyev


Author(s):  
Hitesh H Vandra

Image compression is used to reduce the bandwidth or storage requirements of imaging applications. There are two main types of image compression: lossy and lossless. Lossy image compression removes some of the source information content along with the redundancy, while in lossless image compression the original source data is reconstructed exactly from the compressed data by restoring the removed redundancy; the reconstructed data is an exact replica of the original source data. Many algorithms exist for lossless image compression, such as Huffman coding, Rice coding, run-length encoding, and LZW. LZW is referred to as a substitution or dictionary-based encoding algorithm. The algorithm builds a data dictionary of patterns occurring in an uncompressed data stream. Patterns of data (substrings) are identified in the data stream and matched to entries in the dictionary. If a substring is not present in the dictionary, a code phrase is created based on the data content of the substring and stored in the dictionary; the phrase is then written to the compressed output stream. In this paper we examine the effect of the LZW algorithm on the png, jpg, gif, and bmp image formats.
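
As a concrete illustration of the dictionary-building process described above, the following minimal Python sketch implements the LZW encoding loop (a generic textbook rendition, not the exact implementation evaluated in the paper):

```python
# Minimal LZW encoder: builds the dictionary while scanning the input.
def lzw_encode(data: bytes) -> list[int]:
    # Initialize the dictionary with all single-byte strings.
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    current = b""
    output = []
    for byte in data:
        candidate = current + bytes([byte])
        if candidate in dictionary:
            # Extend the current match while it still exists in the dictionary.
            current = candidate
        else:
            # Emit the code for the longest matched substring and
            # register the new substring as a dictionary entry.
            output.append(dictionary[current])
            dictionary[candidate] = next_code
            next_code += 1
            current = bytes([byte])
    if current:
        output.append(dictionary[current])
    return output

# Repetitive input compresses well: 12 input bytes -> 6 output codes.
print(lzw_encode(b"ABABABABABAB"))  # [65, 66, 256, 258, 257, 260]
```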


Author(s):  
Noritaka Shigei ◽  
Hiromi Miyajima ◽  
Michiharu Maeda ◽  
Lixin Ma ◽  
...  

Multiple-VQ methods generate multiple independent codebooks to compress an image using a neural network algorithm. In image restoration, the methods reconstruct several low-quality images from the multiple codebooks and then combine them into a single high-quality image. However, a naive implementation of these methods increases the compressed data size too much. This paper proposes two techniques to address this problem: “index inference” and “ranking based index coding.” It is shown that index inference is effective for smaller codebook sizes and ranking based index coding for larger ones.
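
For readers unfamiliar with vector quantization, the sketch below shows the basic single-codebook encode/restore step that multiple-VQ builds on; the random codebook here merely stands in for one trained by the paper's neural network algorithm:

```python
import numpy as np

def vq_encode(blocks: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    # For each image block, store only the index of the nearest codeword.
    dists = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return dists.argmin(axis=1)

def vq_decode(indices: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    # Restoration: replace each index by its codeword (a low-quality image).
    return codebook[indices]

rng = np.random.default_rng(0)
blocks = rng.random((1000, 16))       # 1000 blocks of 4x4 pixels, flattened
codebook = rng.random((64, 16))       # 64 codewords -> 6 bits per block index
indices = vq_encode(blocks, codebook)
restored = vq_decode(indices, codebook)
print(indices.shape, restored.shape)  # (1000,) (1000, 16)
```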


2021 ◽  
Author(s):  
Enrico Pomarico ◽  
Cédric Schmidt ◽  
Florian Chays ◽  
David Nguyen ◽  
Arielle Planchette ◽  
...  

The growth of data throughput in optical microscopy has triggered the extensive use of supervised learning (SL) models on compressed datasets for automated analysis. Investigating the effects of image compression on SL predictions is therefore pivotal to assessing their reliability, especially for clinical use. We quantify the statistical distortions induced by compression by comparing predictions on compressed data to the raw predictive uncertainty, numerically estimated from the raw noise statistics measured via sensor calibration. Predictions on cell segmentation parameters are altered by up to 15% and by more than 10 standard deviations after 16-to-8-bit pixel depth reduction and 10:1 JPEG compression. JPEG formats with higher compression ratios show significantly larger distortions. Interestingly, a recent metrologically accurate algorithm, offering up to a 10:1 compression ratio, provides a prediction spread equivalent to that stemming from raw noise. The method described here makes it possible to set a lower bound on the predictive uncertainty of an SL task and can be generalized to determine the statistical distortions originating from a variety of processing pipelines in AI-assisted fields.
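
A minimal sketch of the comparison underlying this approach might look as follows, assuming a model `predict` that returns a scalar segmentation parameter and a calibrated sensor-noise level `noise_sigma` (both names are hypothetical); the raw predictive uncertainty is estimated by Monte Carlo resampling of the noise:

```python
import numpy as np

def distortion_in_sigmas(predict, raw, compressed, noise_sigma, n_mc=100):
    rng = np.random.default_rng(0)
    # Raw predictive uncertainty: spread of predictions under resampled
    # realizations of the calibrated sensor noise alone.
    samples = [predict(raw + rng.normal(0.0, noise_sigma, raw.shape))
               for _ in range(n_mc)]
    raw_mean, raw_std = np.mean(samples), np.std(samples)
    # Shift induced by compression, expressed in raw-noise standard deviations.
    return (predict(compressed) - raw_mean) / raw_std
```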


Author(s):  
I. Manga ◽  
E. J. Garba ◽  
A. S. Ahmadu

Data compression refers to the process of representing data using fewer bits. Data compression can be lossless or lossy, and many schemes have been developed to perform either kind. Lossless data compression allows the original data to be reconstructed exactly from the compressed data, while lossy compression allows only an approximation of the original data to be reconstructed. The data to be compressed can be classified as image, text, audio, or video content. Much research is being carried out in the area of image compression. This paper surveys the literature on data compression and the techniques used to compress images losslessly. In conclusion, the paper reviews schemes that compress an image using a single method or a combination of two or more methods.
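
The lossless round trip described above can be demonstrated in a few lines; here zlib stands in for any lossless scheme, and the assertion confirms that the reconstruction is an exact replica of the source:

```python
import zlib

# Repetitive data, as is typical of image rows, compresses well.
original = b"image rows often repeat: " + b"\x00\x01\x02\x03" * 256
compressed = zlib.compress(original, level=9)
restored = zlib.decompress(compressed)

assert restored == original          # exact replica of the original source
print(len(original), "->", len(compressed))  # compressed size is far smaller
```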


The volume of information communicated over the internet has grown rapidly over the past few years, and image compression is the principal way to reduce the size of an image. JPEG is one of the best-known lossy image compression techniques. In this paper a novel JPEG compression algorithm using fuzzy-morphology techniques is proposed. The efficacy of the proposed algorithm relative to standard JPEG is reported with metrics such as PSNR, MSE, and the number of bits transmitted. The proposed approach reduces the number of encoded bits and consequently the amount of memory needed, and it is best suited to images corrupted with Gaussian, speckle, Poisson, and salt-and-pepper noise. The paper also examines the effect of compression on classification performance: artificial neural network, support vector machine, and KNN classifiers are evaluated on the original image data, standard JPEG-compressed data, and image data compressed with the proposed method.
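
The evaluation metrics named above are standard; a short sketch of how MSE and PSNR would be computed for 8-bit images (generic formulas, not the paper's code) is:

```python
import numpy as np

def mse(original: np.ndarray, reconstructed: np.ndarray) -> float:
    # Mean squared error between the original and reconstructed images.
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original: np.ndarray, reconstructed: np.ndarray,
         peak: float = 255.0) -> float:
    # Peak signal-to-noise ratio in dB; peak = 255 for 8-bit images.
    m = mse(original, reconstructed)
    # Identical images have zero error and thus infinite PSNR.
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```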


Author(s):  
Mudassar Raza ◽  
Ahmed Adnan ◽  
Ahmed Adnan ◽  
Muhammad Sharif ◽  
Syed Waqas Haider ◽  
...  

Space research organizations, hospitals, and military air-surveillance activities, among others, produce huge amounts of data in the form of images, so a large storage space is required to record this information. In hospitals, the data produced during medical examinations take the form of image sequences that are highly correlated; because these images are of great importance, some kind of lossless image compression technique is needed. Moreover, these images often have to be transmitted over a network. Since the availability of storage and bandwidth is limited, a compression technique is required to reduce the number of bits needed to store these images and the time needed to transmit them. Many state-of-the-art lossless image compression algorithms exist for this purpose, such as CALIC, LOCO-I, JPEG-LS, and JPEG2000; however, these algorithms compress only a single file at a time and cannot exploit the correlation among the frames of an MRI or CE sequence. To exploit this correlation, a new algorithm is proposed in this paper. The primary goals of the proposed compression method are to minimize the memory required to store the compressed data and the bandwidth required to transmit it. To achieve these goals, the proposed method combines a single-image compression technique called super spatial structure prediction with inter-frame coding to obtain a greater compression ratio. An efficient compression method requires the elimination of redundant data; therefore, the super spatial structure prediction algorithm is first applied with a fast block-matching approach, and Huffman coding is then applied to reduce the number of bits required to transmit and store each pixel value. In addition, to speed up block matching during motion estimation, the proposed method compares only those blocks that have identical sums and skips the others, reducing the time taken by the block-matching process by eliminating unnecessary overhead, as sketched below. Thus, in the proposed fast lossless compression method for medical image sequences, the two-stage redundant-data elimination process ultimately reduces the memory required for storage and transmission. The method is tested on sequences of MRI and CE images and yields an improved compression rate.
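
The sum-based pruning idea can be sketched as follows: candidate blocks whose pixel sums differ from the target block's sum are skipped before the costly sum-of-absolute-differences comparison (the function and variable names are illustrative, not the paper's code):

```python
import numpy as np

def match_block(target: np.ndarray, candidates: list[np.ndarray]) -> int:
    # Return the index of the best-matching candidate block, or -1 if
    # no candidate shares the target block's pixel sum.
    target_sum = int(target.sum())
    best_idx, best_sad = -1, np.inf
    for i, cand in enumerate(candidates):
        if int(cand.sum()) != target_sum:
            continue  # prune: differing sums mean the blocks cannot be identical
        # Sum of absolute differences, computed only for surviving candidates.
        sad = np.abs(target.astype(np.int64) - cand.astype(np.int64)).sum()
        if sad < best_sad:
            best_idx, best_sad = i, sad
    return best_idx
```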


2006 ◽  
Vol 76 (3) ◽  
pp. 111-116 ◽  
Author(s):  
Hiroshi Matsuzaki ◽  
Misao Miwa

The purpose of this study was to clarify the effects of dietary calcium (Ca) supplementation on bone metabolism in magnesium (Mg)-deficient rats. Male Wistar rats were randomized by weight into three groups and fed a control diet (control group), a Mg-deficient diet (Mg- group), or a Mg-deficient diet containing twice the control Ca concentration (Mg-2Ca group) for 14 days. Trabecular bone volume was significantly lower in the Mg- and Mg-2Ca groups than in the control group, as was trabecular number. Mineralizing bone surface, mineral apposition rate (MAR), and surface-referent bone formation rate (BFR/BS) were significantly lower in the Mg- and Mg-2Ca groups than in the control group. Furthermore, MAR and BFR/BS were significantly lower in the Mg-2Ca group than in the Mg- group. These results suggest that dietary Ca supplementation suppresses bone formation in Mg-deficient rats.


2010 ◽  
Vol 130 (8) ◽  
pp. 1431-1439 ◽  
Author(s):  
Hiroki Matsumoto ◽  
Fumito Kichikawa ◽  
Kazuya Sasazaki ◽  
Junji Maeda ◽  
Yukinori Suzuki
