entropy estimates
Recently Published Documents


TOTAL DOCUMENTS: 87 (FIVE YEARS: 18)

H-INDEX: 17 (FIVE YEARS: 1)

2021 ◽  
Vol 52 (4) ◽  
pp. 31-54
Author(s):  
Andrei Romashchenko ◽  
Alexander Shen ◽  
Marius Zimand

This formula can be informally read as follows: the i-th message m_i brings us log(1/p_i) "bits of information" (whatever this means), and appears with frequency p_i, so H is the expected amount of information provided by one random message (one sample of the random variable). Moreover, we can construct an optimal uniquely decodable code that requires about H (at most H + 1, to be exact) bits per message on average, and it encodes the i-th message by approximately log(1/p_i) bits, following the natural idea to use short codewords for frequent messages. This fits well with the informal reading of the formula given above, and it is tempting to say that the i-th message "contains log(1/p_i) bits of information." Shannon himself succumbed to this temptation [46, p. 399] when he wrote about entropy estimates and considered Basic English and James Joyce's "Finnegans Wake" as two extreme examples of high and low redundancy in English texts. But, strictly speaking, one can speak only of entropies of random variables, not of their individual values, and "Finnegans Wake" is not a random variable, just a specific string. Can we define the amount of information in individual objects?
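For reference, the formula being discussed is the standard Shannon entropy of a random variable whose i-th value occurs with probability p_i (the displayed equation itself did not survive in this excerpt):

```latex
H \;=\; \sum_{i} p_i \log \frac{1}{p_i}
```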


2021 ◽  
Author(s):  
Meghan H. Puglia ◽  
Jacqueline S. Slobin ◽  
Cabell L. Williams

It is increasingly understood that moment-to-moment brain signal variability - traditionally modeled out of analyses as mere "noise" - serves a valuable functional role and captures properties of brain function related to development, cognitive processing, and psychopathology. Multiscale entropy (MSE) - a measure of signal irregularity across temporal scales - is an increasingly popular analytic technique in human neuroscience. MSE provides insight into the time-structure and (non)linearity of fluctuations in neural activity and network dynamics, capturing the brain's moment-to-moment complexity as it operates on multiple time scales. MSE is emerging as a powerful predictor of developmental processes and outcomes. However, differences in EEG preprocessing and MSE computation make it challenging to compare results across studies. Here, we (1) provide an introduction to MSE for developmental researchers, (2) demonstrate the effect of preprocessing procedures on scale-wise entropy estimates, and (3) establish a standardized preprocessing and entropy estimation pipeline that generates scale-wise entropy estimates that are reliable and capable of differentiating developmental stages and cognitive states. This novel pipeline - the Automated Preprocessing Pipe-Line for the Estimation of Scale-wise Entropy from EEG Data (APPLESEED) - is fully automated, customizable, and freely available for download from https://github.com/mhpuglia/APPLESEED. The dataset used herein to develop and validate the pipeline is available for download from https://openneuro.org/datasets/ds003710.
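As a rough illustration of the quantity APPLESEED estimates, here is a minimal multiscale sample entropy computation in Python; the function names, tolerance (0.15 × SD), and template length m = 2 are common defaults assumed for this sketch, not necessarily the pipeline's actual settings.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.15):
    """SampEn: negative log of the conditional probability that sequences
    matching for m points (within tolerance r) also match for m + 1 points."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)
    n = len(x)

    def count_matches(length):
        # Count pairs of length-`length` template vectors within Chebyshev distance r
        templates = np.array([x[i:i + length] for i in range(n - length)])
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count

    count_m, count_m1 = count_matches(m), count_matches(m + 1)
    return -np.log(count_m1 / count_m) if count_m > 0 and count_m1 > 0 else np.nan

def multiscale_entropy(signal, max_scale=20, m=2, r_factor=0.15):
    """Coarse-grain the signal at each scale (non-overlapping averages)
    and compute SampEn of each coarse-grained series."""
    signal = np.asarray(signal, dtype=float)
    mse = []
    for scale in range(1, max_scale + 1):
        n_win = len(signal) // scale
        coarse = signal[:n_win * scale].reshape(n_win, scale).mean(axis=1)
        mse.append(sample_entropy(coarse, m, r_factor))
    return np.array(mse)
```

Scale-wise entropy is then the SampEn value at each coarse-graining scale; because preprocessing choices (filtering, artifact rejection, epoching) shift these values, a standardized pipeline is needed to compare them across studies.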


Author(s):  
Jared Elinger ◽  
Jonathan Rogers

The selection of model structure is an important step in system identification for nonlinear systems in cases where the model form is not known a priori. This process, sometimes called covariate selection or sparsity identification, involves the selection of terms in the dynamic model and is performed prior to parameter estimation. Previous work has shown the applicability of an information-theoretic quantity known as causation entropy in performing sparsity identification. While prior work established the overall feasibility of using causation entropy to eliminate extraneous terms in a model, key questions remained regarding practical implementation. This paper builds on previous work to explore key practical considerations of causation entropy sparsity identification. First, the effect of data size is explored through both analysis and simulation, and general guidance is provided on how much data is necessary to produce accurate causation entropy estimates. Second, the effects of measurement noise and model discretization error are investigated, showing that both degrade the accuracy of the causation entropy estimates, but in opposite ways. These practical effects and trends are illustrated on several example nonlinear systems. Overall, the results show that the causation entropy approach is a practical technique for sparsity identification, particularly in light of the guidelines presented here for data size selection and handling of error sources.
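For readers unfamiliar with the quantity, here is a minimal sketch of a causation-entropy computation under a Gaussian (linear-regression) approximation, where CE(x -> y | Z) = H(y | Z) - H(y | Z, x) reduces to half the log-ratio of residual variances; this reduction, and the pruning rule in the comment, are assumptions of the sketch and may differ from the estimator used in the paper.

```python
import numpy as np

def residual_variance(y, X):
    """Variance of the least-squares residual of y regressed on [1, X]."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.var(y - A @ beta)

def causation_entropy(y, x, Z=None):
    """Gaussian estimate of CE(x -> y | Z): the drop in conditional entropy of
    the target y when candidate term x is added to the already-selected terms Z."""
    Z = np.empty((len(y), 0)) if Z is None else np.atleast_2d(Z).reshape(len(y), -1)
    var_without = residual_variance(y, Z)
    var_with = residual_variance(y, np.column_stack([Z, x]))
    return 0.5 * np.log(var_without / var_with)

# Candidate terms whose causation entropy stays near zero (e.g. below a
# permutation-test threshold) are pruned before parameter estimation.
```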


Entropy ◽  
2020 ◽  
Vol 22 (10) ◽  
pp. 1146
Author(s):  
Dragana Bajić ◽  
Nataša Mišić ◽  
Tamara Škorić ◽  
Nina Japundžić-Žigon ◽  
Miloš Milovanović

The goal of this paper is to investigate the changes in entropy estimates when the amplitude distribution of a time series is equalized using the probability integral transformation. The data we analyzed had known properties: pseudo-random signals with known distributions, mutually coupled using statistical or deterministic methods, including generators of statistically dependent distributions, linear and non-linear transforms, and deterministic chaos. The signal pairs were coupled using a correlation coefficient ranging from zero to one. The dependence of the signal samples is achieved by a moving-average filter and non-linear equations. The applied coupling methods are checked using statistical tests for correlation. The changes in signal regularity are checked by a multifractal spectrum. The probability integral transformation is then applied to cardiovascular time series (systolic blood pressure and pulse interval) acquired from laboratory animals, and the results of the entropy estimations are presented. We derived an expression for the reference value of entropy in the probability integral transformed signals. We also experimentally evaluated the reliability of entropy estimates with respect to the matching probabilities.
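A minimal sketch of the probability integral transformation, assuming a rank-based empirical CDF (the paper may equally use a fitted or known parametric CDF):

```python
import numpy as np

def probability_integral_transform(x):
    """Map each sample to its empirical CDF value, giving an approximately
    uniform amplitude distribution while preserving the temporal ordering."""
    x = np.asarray(x, dtype=float)
    ranks = np.argsort(np.argsort(x)) + 1      # ordinal ranks (ties broken by position)
    return ranks / (len(x) + 1)                # values strictly inside (0, 1)

# Entropy estimates (e.g. sample entropy) are then computed on both the raw
# series and its transformed version and compared against the derived reference value.
```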


Heart rate variability (HRV) is a measure that evaluates cardiac autonomic activity according to the complexity or irregularity of an HRV dataset. At present, among various entropy estimates, the Lyapunov exponent (LE) is not as well described as approximate entropy (ApEn) and sample entropy (SampEn). Therefore, in this study, we investigated the characteristics of the parameters associated with the LE to evaluate whether the LE parameters can replace the frequency-domain parameters for HRV analysis. For the LE analysis in this study, two embedding factors were adjusted: the length, which determines the size of the embedding vectors (time-delay embedding) and varied over a range of 1 to 7, and the interval, which determines the distance between two successive embedding vectors and varied over a range of 1 to 3. A new parameter, the accumulation of the LE (LA), was developed along with the LE to characterize the HRV parameters. The high-frequency (HF) components dominated when the mean value of the LA was largest for interval 2, with 2.89 ms² at the low frequency (LF) and 4.32 ms² at the HF. The root mean square of successive differences (RMSSD) of the LE decreased with increasing length in interval 1, from 2.6056 for length 1 to 0.2666 for length 7, resulting in a low HRV. The results suggest that the Lyapunov exponent methodology could be used to characterize HRV and could replace power spectral estimates, specifically the HF components.
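A minimal sketch of the embedding step described above, with `length` and `interval` used as in the text; the downstream LE/LA computation itself is not reproduced here.

```python
import numpy as np

def delay_embed(rr_intervals, length=2, interval=1):
    """Build time-delay embedding vectors from an RR-interval series:
    `length` is the number of elements per vector (embedding dimension),
    `interval` is the lag between successive elements."""
    x = np.asarray(rr_intervals, dtype=float)
    n_vectors = len(x) - (length - 1) * interval
    return np.array([x[i:i + length * interval:interval] for i in range(n_vectors)])

# The LE (and its accumulation, LA) would then be estimated from how quickly
# initially close embedding vectors diverge, swept over length 1-7 and interval 1-3.
```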


Soft Matter ◽  
2020 ◽  
Vol 16 (15) ◽  
pp. 3740-3745
Author(s):  
E. F. Walraven ◽  
F. A. M. Leermakers

How to obtain the entropy? (1) Monte Carlo simulation; (2) store snapshots in a file; (3) apply data compression; and (4) get entropy from the compressibility.
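A toy version of steps (2)-(4) in Python, assuming zlib as the compressor and a stand-in array of random snapshots in place of the Monte Carlo output:

```python
import zlib
import numpy as np

# (1) stand-in for the Monte Carlo output: 1000 snapshots of a 64-site binary system
rng = np.random.default_rng(0)
snapshots = rng.integers(0, 2, size=(1000, 64), dtype=np.uint8)

# (2) "store snapshots in a file": serialize them to a byte string
raw = np.packbits(snapshots).tobytes()

# (3) "apply data compression"
compressed = zlib.compress(raw, level=9)

# (4) "get entropy from the compressibility": compressed bits per snapshot
bits_per_snapshot = 8 * len(compressed) / len(snapshots)
print(f"~{bits_per_snapshot:.1f} bits per snapshot (upper bound; 64 for i.i.d. fair bits)")
```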


2019 ◽  
Vol 18 (2) ◽  
pp. 1-20
Author(s):  
Saeed Darijani ◽  
Hojatollah Zakerzade ◽  
Hamzeh Torabi ◽  
...  
