Entropy Encoding
Recently Published Documents

TOTAL DOCUMENTS: 40 (last five years: 12)
H-INDEX: 6 (last five years: 1)
Author(s): Zainab J. Ahmed, Loay E. George, Raad Ahmed Hadi

Digital audio requires transmitting large amounts of audio information through the most common communication systems, which in turn creates challenges in both storage and archiving. In this paper, an efficient audio compression scheme is proposed. It relies on a combined transform coding scheme consisting of: i) a bi-orthogonal (tap 9/7) wavelet transform to decompose the audio signal into low and multiple high sub-bands; ii) a DCT applied to the produced sub-bands to de-correlate the signal; iii) progressive hierarchical quantization of the combined-transform output followed by traditional run-length encoding (RLE); and iv) LZW coding to generate the output bitstream. Peak signal-to-noise ratio (PSNR) and compression ratio (CR) were used for a comparative analysis of the performance of the whole system. Many audio test samples of various sizes and characteristics were used to evaluate the performance. The simulation results show the efficiency of these combined transforms when LZW is used for data compression. The compression results are encouraging and show a remarkable reduction in audio file size with good fidelity.
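A minimal sketch of the combined transform pipeline described above (wavelet, then DCT, then quantization, RLE and LZW), assuming PyWavelets' "bior4.4" filter bank as the 9/7 biorthogonal wavelet and plain uniform quantization as a stand-in for the paper's progressive hierarchical quantization; the frame length, quantization step and test signal are illustrative choices, not the authors' settings.

```python
import itertools
import numpy as np
import pywt                         # pip install PyWavelets
from scipy.fft import dct

def lzw_encode(data: bytes):
    """Textbook LZW encoder returning a list of integer codes."""
    table = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for b in data:
        wc = w + bytes([b])
        if wc in table:
            w = wc
        else:
            out.append(table[w])
            table[wc] = len(table)
            w = bytes([b])
    if w:
        out.append(table[w])
    return out

audio = np.random.randn(4096)                           # stand-in audio frame
coeffs = pywt.wavedec(audio, "bior4.4", level=3)        # 9/7 biorthogonal wavelet decomposition
transformed = [dct(c, norm="ortho") for c in coeffs]    # DCT to de-correlate each sub-band
step = 0.5                                              # assumed quantization step
quantized = np.concatenate([np.round(c / step) for c in transformed]).astype(int).tolist()
runs = [(v, len(list(g))) for v, g in itertools.groupby(quantized)]   # run-length encoding
codes = lzw_encode(repr(runs).encode())                 # LZW over the serialized runs
print(audio.size, "samples ->", len(runs), "RLE pairs ->", len(codes), "LZW codes")
```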


2021, Vol. 15
Author(s): Mickael Zbili, Sylvain Rama

Calculations of the entropy of a signal or the mutual information between two variables are valuable analytical tools in the field of neuroscience. They can be applied to all types of data, capture non-linear interactions and are model independent. Yet the limited size and number of recordings one can collect in a series of experiments makes their calculation highly prone to sampling bias. Mathematical methods to overcome this so-called “sampling disaster” exist, but they require significant expertise and substantial time and computational costs. As such, there is a need for a simple, unbiased and computationally efficient tool for estimating the level of entropy and mutual information. In this article, we propose that entropy-encoding compression algorithms widely used in text and image compression fulfill these requirements. By simply saving the signal in PNG picture format and measuring the size of the file on the hard drive, we can estimate entropy changes across different conditions. Furthermore, with some simple modifications of the PNG file, we can also estimate the evolution of mutual information between a stimulus and the observed responses across different conditions. We first demonstrate the applicability of this method using white-noise-like signals. Then, while this method can be used in all kinds of experimental conditions, we provide examples of its application to patch-clamp recordings, the detection of place cells and histological data. Although this method does not give an absolute value of entropy or mutual information, it is mathematically correct, and its simplicity and broad applicability make it a powerful tool for their estimation in experiments.
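A minimal sketch of the PNG-size proxy described above, assuming the signal is rescaled to 8 bits and reshaped into a small grayscale image before saving; the scaling, image shape and test signals are illustrative choices, not the authors' exact procedure.

```python
import io
import numpy as np
from PIL import Image   # pip install Pillow

def png_entropy_proxy(signal: np.ndarray) -> float:
    """Return the PNG-compressed size, in bits per sample, as a relative entropy proxy."""
    s = signal.astype(float)
    s = (255 * (s - s.min()) / (s.max() - s.min() + 1e-12)).astype(np.uint8)
    img = Image.fromarray(s.reshape(250, -1), mode="L")   # grayscale image of the samples
    buf = io.BytesIO()
    img.save(buf, format="PNG", optimize=True)
    return 8 * buf.getbuffer().nbytes / s.size

rng = np.random.default_rng(0)
white_noise = rng.normal(size=100_000)                    # high-entropy signal
slow_sine = np.sin(np.linspace(0, 20 * np.pi, 100_000))   # low-entropy signal
print("noise:", round(png_entropy_proxy(white_noise), 2), "bits/sample")
print("sine :", round(png_entropy_proxy(slow_sine), 2), "bits/sample")
```

The absolute values depend on the PNG filter and compression settings, so only the relative change across conditions is meaningful, which is exactly how the article proposes to use the estimate.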


Author(s): Andrea Berdondini

ABSTRACT: This article describes an optimization method concerning entropy encoding applicable to a source of independent and identically distributed random variables. The algorithm can be explained with the following example: let us take a source of i.i.d. random variables X with uniform probability density and cardinality 10. With this source, we generate messages of length 1000 which will be encoded in base 10. We call XG the set containing all messages that can be generated by the source. According to Shannon's first theorem, if the average entropy of X, calculated on the set XG, is H(X)≈0.9980, the average length of the encoded messages will be 1000·H(X) ≈ 998. Now, we increase the message length by one and calculate the average entropy of the 10% of sequences of length 1001 having the least entropy. We call this set XG10. The average entropy of X10, calculated on the set XG10, is H(X10)≈0.9964; consequently, the average length of the encoded messages will be 1001·H(X10) ≈ 997.4. Taking the difference between the average lengths of the encoded sequences belonging to the two sets (XG and XG10) gives 998.0 - 997.4 = 0.6. Therefore, if we use the XG10 set, we reduce the average length of the encoded message by 0.6 digits in base ten. Consequently, the average information per symbol becomes 997.4/1000 = 0.9974, which is less than the average entropy of X, H(X)≈0.9980. We can use the XG10 set in place of XG because we can create a one-to-one (biunivocal) correspondence between all the possible sequences generated by our source and the ten percent of length-1001 sequences with the least entropy. In this article, we will show that this transformation can be performed by applying random variations to the sequences generated by the source.
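A hedged numerical check of the averages quoted above, using a Monte Carlo estimate rather than an exhaustive enumeration: it samples length-1000 and length-1001 sequences from a uniform 10-symbol source, computes their empirical entropy in base 10, and averages the lowest-entropy 10% of the longer sequences. The sample count and seed are arbitrary choices.

```python
import numpy as np

def empirical_entropy_base10(seq: np.ndarray) -> float:
    """Empirical entropy of a symbol sequence, in base-10 digits per symbol."""
    counts = np.bincount(seq, minlength=10)
    p = counts[counts > 0] / seq.size
    return float(-(p * np.log10(p)).sum())

rng = np.random.default_rng(1)
n_trials = 20_000

h_1000 = np.array([empirical_entropy_base10(rng.integers(0, 10, 1000))
                   for _ in range(n_trials)])
h_1001 = np.array([empirical_entropy_base10(rng.integers(0, 10, 1001))
                   for _ in range(n_trials)])
lowest_decile = np.sort(h_1001)[: n_trials // 10]     # 10% lowest-entropy sequences

print("mean H, length 1000:", round(h_1000.mean(), 4))            # compare with H(X)≈0.9980
print("mean H, lowest 10% of length 1001:", round(lowest_decile.mean(), 4))  # compare with H(X10)≈0.9964
```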


2021, Vol. 33 (4), pp. 640-648
Author(s): Hai Huang, Lin Xing, Ning Na, Guoliang Zhang, Shilei Zhao, ...

2020
Author(s): Vinicius Fulber-Garcia, Sérgio Luis Sardi Mergen

Abstract: Prediction-based compression methods, like prediction by partial matching, achieve a remarkable compression ratio, especially for texts written in natural language. However, they are not efficient in terms of speed. Part of the problem concerns the usage of dynamic entropy encoding, which is considerably slower than the static alternatives. In this paper, we propose a prediction-based compression method that decouples the context model from the frequency model. The separation allows static entropy encoding to be used without a significant overhead in the metadata embedded in the compressed data. The result is a reasonably efficient algorithm that is particularly suited for small textual files, as the experiments show. We also show that it is relatively easy to build strategies designed to handle specific cases, like the compression of files whose symbols are only locally frequent.
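The trade-off mentioned above can be illustrated with a rough sketch (not the authors' algorithm): an ideal static coder pays for its frequency table once as metadata, while an ideal adaptive coder pays no metadata but must update its model after every symbol, which is the source of its speed cost. The two-bytes-per-table-entry metadata cost and the Laplace-smoothed adaptive model are assumptions for illustration.

```python
import math
from collections import Counter

def static_model_cost(text: str) -> float:
    """Ideal static entropy coder: entropy of the global frequency table plus table metadata."""
    freq = Counter(text)
    n = len(text)
    data_bits = -sum(c * math.log2(c / n) for c in freq.values())
    metadata_bits = 8 * 2 * len(freq)          # assume ~2 bytes per (symbol, count) entry
    return data_bits + metadata_bits

def dynamic_model_cost(text: str) -> float:
    """Ideal adaptive coder: Laplace-smoothed counts updated after every symbol, no metadata."""
    counts, total, bits = {}, 0, 0.0
    alphabet = len(set(text))                  # assumed known to encoder and decoder
    for ch in text:
        p = (counts.get(ch, 0) + 1) / (total + alphabet)
        bits += -math.log2(p)
        counts[ch] = counts.get(ch, 0) + 1
        total += 1
    return bits

sample = "the quick brown fox jumps over the lazy dog " * 50
print("static :", round(static_model_cost(sample)), "bits (includes frequency-table metadata)")
print("dynamic:", round(dynamic_model_cost(sample)), "bits (model updated on every symbol)")
```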


Author(s): Urvashi Sharma, Meenakshi Sood, Emjee Puthooran, Yugal Kumar

The digitization of the human body, especially for the treatment of diseases, can generate a large volume of data. This medical data has high resolution and bit depth. In the field of medical diagnosis, lossless compression techniques are widely adopted for the efficient archiving and transmission of medical images. This article presents an efficient coding solution based on a predictive coding technique. The proposed technique consists of the Resolution Independent Gradient Edge Predictor16 (RIGED16) and Block Based Arithmetic Encoding (BAAE). The objective of this technique is to find universal threshold values for prediction and to provide an optimum block size for encoding. The validity of the proposed technique is tested on real images as well as standard images. The simulation results of the proposed technique are compared with well-known existing compression techniques. It is revealed that the proposed technique gives a higher coding efficiency than the other techniques.
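As a concrete illustration of gradient-edge prediction, the sketch below implements the classic median edge detector (MED) predictor from JPEG-LS on a synthetic ramp image. It is shown only to convey how edge-aware prediction yields small residuals for the later entropy-coding stage; it is not the paper's RIGED16 predictor, and the stand-in image and bit depth are assumptions.

```python
import numpy as np

def med_predict(img: np.ndarray) -> np.ndarray:
    """Predict each pixel from its left (W), top (N) and top-left (NW) neighbours."""
    img = img.astype(np.int32)
    pred = np.zeros_like(img)
    h, w = img.shape
    for r in range(h):
        for c in range(w):
            W  = img[r, c - 1] if c > 0 else 0
            N  = img[r - 1, c] if r > 0 else 0
            NW = img[r - 1, c - 1] if (r > 0 and c > 0) else 0
            if NW >= max(W, N):          # edge detected: take the smaller neighbour
                pred[r, c] = min(W, N)
            elif NW <= min(W, N):        # edge detected: take the larger neighbour
                pred[r, c] = max(W, N)
            else:                        # smooth region: planar (gradient) prediction
                pred[r, c] = W + N - NW
    return pred

# Smooth synthetic ramp as a stand-in for a high-bit-depth medical image.
x, y = np.meshgrid(np.arange(64), np.arange(64))
image = (8 * x + 4 * y).astype(np.int32)
residuals = image - med_predict(image)
# Residuals cluster near zero, which is what the block-based entropy stage exploits.
print("image std:", round(float(image.std()), 1), " residual std:", round(float(residuals.std()), 1))
```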


2020, Vol. 4 (1), pp. 155-162
Author(s): I Dewa Gede Hardi Rastama, I Made Oka Widyantara, Linawati

Medical imaging is a representation of parts of the human organs. Medical images are traditionally stored on film and therefore require large storage space. Compression is a process that removes redundancy from information without reducing its quality. This study proposes compressing medical images with the Discrete Wavelet Transform (DWT) with an adaptive threshold, combined with entropy coding using Run-Length Encoding (RLE). The study compares several parameters, such as compression ratio, compressed image file size, and PSNR (Peak Signal-to-Noise Ratio), to analyze the quality of the reconstructed image. The results showed that the rate, compression ratio, and PSNR of the Haar and Daubechies wavelets do not differ significantly. Comparing the rate, compression ratio, and PSNR for hard and soft thresholding, the rate of the soft threshold is lower than that of the hard threshold. The optimal outcome of this study is obtained using a soft threshold.
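A minimal sketch of the compression chain the study describes (2-D DWT, thresholding of detail coefficients, then run-length encoding of the sparse result), assuming the Haar wavelet, a fixed soft threshold of 10 and a smooth stand-in image; the study's adaptive threshold selection is not reproduced here.

```python
import itertools
import numpy as np
import pywt   # pip install PyWavelets

def soft_threshold(c: np.ndarray, t: float) -> np.ndarray:
    """Shrink coefficients toward zero by t (soft thresholding)."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

x, y = np.meshgrid(np.arange(128), np.arange(128))
image = 128 + 100 * np.sin(x / 16.0) * np.cos(y / 16.0)     # smooth stand-in image
cA, (cH, cV, cD) = pywt.dwt2(image, "haar")                 # one-level 2-D DWT
t = 10.0                                                    # assumed fixed threshold
details = [np.round(soft_threshold(c, t)) for c in (cH, cV, cD)]
symbols = np.concatenate([np.round(cA).ravel()] + [d.ravel() for d in details]).astype(int).tolist()
# Run-length encode the coefficient stream; thresholding creates long zero runs.
pairs = [(v, len(list(g))) for v, g in itertools.groupby(symbols)]
print("coefficients:", len(symbols), "-> RLE pairs:", len(pairs))
```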


2020, Vol. 15 (1), pp. 91-105
Author(s): Shree Ram Khaitu, Sanjeeb Prasad Panday

Image compression techniques have become a very important subject with the rapid growth of multimedia applications. The main motivations behind image compression are efficient transmission and storage of digital data. Image compression techniques are of two types: lossless and lossy. Lossy compression techniques are applied to natural images because a minor loss of data is acceptable. Entropy encoding is a lossless compression scheme that is independent of the particular features of the media, as it uses its own unique codes and symbols. Huffman coding is an entropy coding approach for efficient transmission of data. This paper presents a fractal image compression method based on fractal features and on searching for the best replacement blocks for the original image. Canonical Huffman coding, which here provides better fractal compression than arithmetic coding, is used in this paper. The results obtained show that the canonical Huffman coding based fractal compression technique increases the speed of compression and achieves better PSNR as well as a better compression ratio than standard Huffman coding.
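A hedged sketch of canonical Huffman coding, the entropy coder named above: code lengths are taken from an ordinary Huffman tree, and codewords are then reassigned in canonical order so the code table can be transmitted as lengths only. It illustrates only the coder, not the paper's fractal block-matching stage; the sample text is arbitrary.

```python
import heapq
from collections import Counter

def huffman_code_lengths(freqs):
    """Return {symbol: code length} from a symbol -> frequency mapping."""
    heap = [(f, i, (s,)) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                              # degenerate single-symbol case
        return {next(iter(freqs)): 1}
    lengths = {s: 0 for s in freqs}
    counter = len(heap)                             # unique tie-breaker for merged nodes
    while len(heap) > 1:
        f1, _, s1 = heapq.heappop(heap)
        f2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:                           # every symbol in the merge gets one bit deeper
            lengths[s] += 1
        heapq.heappush(heap, (f1 + f2, counter, s1 + s2))
        counter += 1
    return lengths

def canonical_codes(lengths):
    """Assign canonical codewords: sort by (length, symbol), then count upward."""
    code, prev_len, table = 0, 0, {}
    for sym, length in sorted(lengths.items(), key=lambda kv: (kv[1], kv[0])):
        code <<= (length - prev_len)
        table[sym] = format(code, "0{}b".format(length))
        code += 1
        prev_len = length
    return table

text = "entropy encoding with canonical huffman coding"
codes = canonical_codes(huffman_code_lengths(Counter(text)))
encoded = "".join(codes[ch] for ch in text)
print(len(encoded), "bits vs", 8 * len(text), "bits uncompressed")
```

Because the decoder can rebuild the same table from the code lengths alone, canonical codes keep the header small and allow fast table-driven decoding, which is the usual reason they are preferred over tree-based Huffman decoding.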

