entropy estimate
Recently Published Documents


TOTAL DOCUMENTS: 34 (five years: 5)
H-INDEX: 8 (five years: 1)

2021 · Vol 12 (2) · pp. 353-376
Author(s): Kingstone Nyakurukwa

The purpose of this paper is to determine whether there was information flow between the stock markets of Zimbabwe and South Africa during the time the Zimbabwean economy was dollarized. The author used econophysics-based Shannonian and Rényian transfer entropy estimates to establish the flow of information between the markets in tranquil periods as well as at the tails of the return distributions. The only significant Shannonian transfer entropy estimate was from the Johannesburg Stock Exchange (JSE) resources index to the Zimbabwe Stock Exchange (ZSE) mining index. The findings show that the only significant tail dependence was between the JSE All Share Index (JALSH) and ZSE Mining on the one hand, and between JSE Resources and ZSE Mining on the other. However, the magnitudes of the effective transfer entropy values are relatively low, indicating weak linkages between the Zimbabwe Stock Exchange and the Johannesburg Stock Exchange. The lack of significant information flow between the exchanges of the two countries offers fund managers opportunities for portfolio diversification. From a government point of view, it is imperative that the tempo of economic and political reform be accelerated so that integration between the markets can be fast-tracked. Integrated markets will benefit Zimbabwe, as this will reduce the cost of equity and accelerate economic growth.
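
Transfer entropy itself is straightforward to estimate once returns are coarse-grained. The sketch below is our own minimal Python, not the author's estimator; the three-bin quantile discretization, the lag of one, and the function names are illustrative assumptions. It computes lag-1 Shannonian transfer entropy from X to Y via joint entropies:

```python
import numpy as np

def shannon_transfer_entropy(x, y, bins=3):
    """Sketch of lag-1 Shannon transfer entropy TE_{X->Y} (in bits),
    after coarse-graining each return series into quantile bins."""
    def discretize(z):
        edges = np.quantile(z, np.linspace(0, 1, bins + 1)[1:-1])
        return np.digitize(z, edges)          # symbols 0 .. bins-1

    xs, ys = discretize(np.asarray(x)), discretize(np.asarray(y))
    y_next, y_now, x_now = ys[1:], ys[:-1], xs[:-1]

    def H(*vars):
        """Joint Shannon entropy of discrete variables, in bits."""
        codes = np.ravel_multi_index(vars, [bins] * len(vars))
        p = np.bincount(codes).astype(float)
        p = p[p > 0] / p.sum()
        return -np.sum(p * np.log2(p))

    # TE = H(Y_t | Y_{t-1}) - H(Y_t | Y_{t-1}, X_{t-1})
    return H(y_next, y_now) - H(y_now) + H(y_now, x_now) - H(y_next, y_now, x_now)
```

The "effective" transfer entropy reported in the paper is conventionally obtained by subtracting the average estimate over randomly shuffled surrogates of the source series, which corrects for small-sample bias.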


Entropy · 2021 · Vol 23 (6) · pp. 740
Author(s): Hoshin V. Gupta · Mohammad Reza Ehsani · Tirthankar Roy · Maria A. Sans-Fuentes · Uwe Ehret · ...

We develop a simple Quantile Spacing (QS) method for accurate probabilistic estimation of one-dimensional entropy from equiprobable random samples, and compare it with the popular Bin-Counting (BC) and Kernel Density (KD) methods. In contrast to BC, which uses equal-width bins with varying probability mass, the QS method uses estimates of the quantiles that divide the support of the data-generating probability density function (pdf) into equal-probability-mass intervals. Whereas BC and KD each require optimal tuning of a hyper-parameter whose value varies with sample size and the shape of the pdf, QS requires only specification of the number of quantiles to be used. Results indicate, for the class of distributions tested, that the optimal number of quantiles is a fixed fraction of the sample size (empirically determined to be ~0.25–0.35), and that this value is relatively insensitive to distributional form or sample size. This provides a clear advantage over BC and KD, since hyper-parameter tuning is not required. Further, unlike KD, there is no need to select an appropriate kernel type, so QS is applicable to pdfs of arbitrary shape, including those with discontinuous slope and/or magnitude. Bootstrapping is used to approximate the sampling variability distribution of the resulting entropy estimate, and is shown to accurately reflect the true uncertainty. For the four distributional forms studied (Gaussian, Log-Normal, Exponential and Bimodal Gaussian Mixture), the expected estimation bias is less than 1% and the uncertainty is low even for samples of as few as 100 data points; in contrast, the small-sample bias can be as large as -10% for KD and as large as -50% for BC. We speculate that estimating quantile locations, rather than bin probabilities, makes more efficient use of the information in the data to approximate the underlying shape of an unknown data-generating pdf.
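
As a concrete illustration of the QS idea, the following minimal sketch (our own, not the authors' released code) estimates entropy from the spacings of equal-probability-mass quantiles; the function name and the default fraction alpha = 0.3 (within the ~0.25–0.35 range reported above) are assumptions:

```python
import numpy as np

def quantile_spacing_entropy(sample, alpha=0.3):
    """Minimal sketch of a Quantile Spacing (QS) entropy estimate.

    Splits the support into M ~ alpha * N equal-probability-mass
    intervals; within each interval the density is approximated as
    (1/M) / spacing, so H = -sum p log p ~ mean(log(M * spacing)),
    reported in nats.
    """
    x = np.sort(np.asarray(sample, dtype=float))
    m = max(2, int(alpha * len(x)))                  # number of intervals
    q = np.quantile(x, np.linspace(0.0, 1.0, m + 1)) # equal-mass edges
    spacing = np.diff(q)
    spacing = spacing[spacing > 0]                   # guard against tied quantiles
    return np.mean(np.log(len(spacing) * spacing))
```

The sampling variability of the estimate can then be approximated exactly as the abstract describes: resample the data with replacement (bootstrap), re-run the estimator, and read off the spread of the resulting values.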


Entropy · 2020 · Vol 22 (12) · pp. 1396
Author(s): Chandan Karmakar · Radhagayathri Udhayakumar · Marimuthu Palaniswami

Entropy profiling is a recently introduced approach that reduces the parametric dependence of traditional Kolmogorov-Sinai (KS) entropy measurement algorithms. The choice of the threshold parameter r for vector distances in traditional entropy computations is crucial in determining how accurately these methods retrieve signal irregularity information. In addition to making parametric choices completely data-driven, entropy profiling generates a complete profile of entropy information, rather than the single entropy estimate produced by traditional algorithms. The benefits of using “profiling” instead of “estimation” are: (a) precursory methods such as approximate entropy and sample entropy, which were previously limited in handling short-term signals (fewer than 1000 samples), are now capable of doing so; (b) the entropy measure can capture complexity information from short- and long-term signals without multi-scaling; and (c) the new approach facilitates enhanced information retrieval from short-term HRV signals. The concept of entropy profiling has equipped traditional algorithms to overcome existing limitations and has broadened their applicability in the field of short-term signal analysis. In this work, we review KS-entropy methods and their limitations in the context of short-term heart rate variability analysis, and elucidate the benefits of using entropy profiling as an alternative.
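
To make the contrast with single-estimate methods concrete, here is a minimal sketch (not the authors' implementation) that turns standard sample entropy into a profile by evaluating it at every distinct vector distance found in the data, so that no threshold r has to be chosen in advance:

```python
import numpy as np

def _pairwise_chebyshev(emb):
    """Max-norm distances over all pairs of embedded vectors."""
    d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)
    return d[np.triu_indices(len(emb), k=1)]

def sample_entropy_profile(x, m=2):
    """Sketch of an entropy profile: SampEn(m, r) evaluated at every
    candidate threshold r taken from the data itself, rather than at a
    single pre-chosen r. Returns a list of (r, SampEn) pairs."""
    x = np.asarray(x, dtype=float)
    emb_m = np.lib.stride_tricks.sliding_window_view(x, m)[:-1]   # N-m templates
    emb_m1 = np.lib.stride_tricks.sliding_window_view(x, m + 1)   # N-m templates
    dm = _pairwise_chebyshev(emb_m)
    dm1 = _pairwise_chebyshev(emb_m1)
    profile = []
    for r in np.unique(dm1):             # data-driven thresholds
        a = np.count_nonzero(dm1 <= r)   # template matches, length m+1
        b = np.count_nonzero(dm <= r)    # template matches, length m
        if a > 0 and b > 0:
            profile.append((r, -np.log(a / b)))
    return profile
```

The entropy-profiling papers then derive a single data-driven summary from this profile instead of reporting SampEn at one arbitrary r; the exact summary statistic differs by method.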


Entropy · 2019 · Vol 21 (12) · pp. 1201
Author(s): Geng Ren · Shuntaro Takahashi · Kumiko Tanaka-Ishii

The entropy rate h of a natural language quantifies the complexity underlying the language. While recent studies have used computational approaches to estimate this rate, their results rely fundamentally on the performance of the language model used for prediction. In 1951, by contrast, Shannon conducted a cognitive experiment to estimate the rate without the use of any such artifact. Shannon’s experiment, however, used only one subject, bringing into question the statistical validity of his value of h = 1.3 bits per character for the entropy rate of English. In this study, we conducted Shannon’s experiment on a much larger scale to reevaluate the entropy rate h via Amazon’s Mechanical Turk, a crowd-sourcing service. The online subjects recruited through Mechanical Turk were each asked to guess the succeeding character after being shown the preceding characters, until they produced the correct answer. We collected 172,954 character predictions and analyzed them with a bootstrap technique. The analysis suggests that a large number of character predictions per context length, perhaps as many as 10^3, would be necessary to obtain a convergent estimate of the entropy rate; if fewer predictions are used, the resulting value of h may be underestimated. Our final entropy estimate was h ≈ 1.22 bits per character.
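
Shannon's original analysis converts the guess-count distribution from such an experiment into upper and lower bounds on h. A minimal sketch of that conversion (the variable names are ours; the bounds are Shannon's 1951 formulas):

```python
import numpy as np

def shannon_guessing_bounds(guess_counts):
    """Sketch of Shannon's (1951) bounds on the entropy rate (bits/char)
    from a character-guessing game. guess_counts[i] = number of trials in
    which the correct character was identified on guess i+1."""
    q = np.asarray(guess_counts, dtype=float)
    q = q / q.sum()                                  # q_i = P(success on guess i)
    upper = -np.sum(q[q > 0] * np.log2(q[q > 0]))    # entropy of guess distribution
    i = np.arange(1, len(q) + 1)
    q_next = np.append(q[1:], 0.0)
    lower = np.sum(i * (q - q_next) * np.log2(i))    # Shannon's lower bound
    return lower, upper
```

The paper's bootstrap analysis then quantifies how such estimates move as the number of predictions per context length grows.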


2018 · Vol 24 (4) · pp. 1735-1758
Author(s): Jana Alkhayal · Samar Issa · Mustapha Jazar · Régis Monneau

In this paper we study a degenerate parabolic system, which is strongly coupled. We prove a general existence result, but the uniqueness question remains open. Our proof of existence is based on a crucial entropy estimate that controls both the gradient of the solution and its non-negativity. Our system is of porous-medium type and is applicable to models of seawater intrusion.
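
To indicate what such an estimate looks like in the simplest single-equation setting (the scalar porous medium equation, rather than the strongly coupled system treated in the paper), testing $u_t=\Delta(u^m)$, $m>1$, against $\log u$ under no-flux boundary conditions gives $$\frac{d}{dt}\int_\Omega u\log u\,dx=\int_\Omega \Delta(u^m)\,(\log u+1)\,dx=-\int_\Omega \nabla(u^m)\cdot\frac{\nabla u}{u}\,dx=-m\int_\Omega u^{m-2}\lvert\nabla u\rvert^2\,dx=-\frac{4}{m}\int_\Omega \bigl\lvert\nabla\bigl(u^{m/2}\bigr)\bigr\rvert^2\,dx\le 0,$$ so the entropy $\int_\Omega u\log u\,dx$ is non-increasing and its dissipation controls $\nabla(u^{m/2})$ in $L^2$; the non-negativity of $u$ is what keeps $u\log u$ well defined.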


2018 · Vol 32 (1) · pp. 313-318
Author(s): Katarzyna Tarchała · Paweł Walczak

We provide an entropy estimate from below for a finitely generated group of transformations of a compact metric space that contains a ping-pong game with several players located anywhere in the group.


2016 · Vol 113 (11) · pp. 2839-2844
Author(s): Pratyush Tiwary · B. J. Berne

In modern-day simulations of many-body systems, much of the computational complexity is shifted to the identification of slowly changing molecular order parameters called collective variables (CVs) or reaction coordinates. A vast array of enhanced-sampling methods is based on the identification and biasing of these low-dimensional order parameters, whose fluctuations are important in driving the rare events of interest. Here, we describe a new algorithm for finding optimal low-dimensional CVs for use in enhanced-sampling biasing methods such as umbrella sampling and metadynamics, when limited prior static and dynamic information is known about the system and a much larger set of candidate CVs is specified. The algorithm estimates the best combination of these candidate CVs, as quantified by a maximum path entropy estimate of the spectral gap for dynamics viewed as a function of that CV. The algorithm is called spectral gap optimization of order parameters (SGOOP). Through multiple practical examples, we show how this postprocessing procedure can lead to an optimized CV and an improvement of several orders of magnitude in the convergence of the free energy calculated through metadynamics, essentially giving the ability to extract useful information even from unsuccessful metadynamics runs.
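
A minimal sketch of the spectral-gap scoring step follows, assuming nearest-neighbor transitions along a binned trial CV and a maximum-caliber rate matrix consistent with the stationary probabilities; the function names and the eigenvalue bookkeeping are our illustrative assumptions, not the authors' code:

```python
import numpy as np

def sgoop_score(p, n_barriers=1):
    """Sketch: score a trial CV by the spectral gap of a maximum-caliber
    rate matrix built from its binned stationary probabilities p
    (all bins assumed populated, p > 0)."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    n = len(p)
    K = np.zeros((n, n))
    for i in range(n - 1):
        # nearest-neighbor rates satisfying detailed balance w.r.t. p
        K[i, i + 1] = np.sqrt(p[i + 1] / p[i])
        K[i + 1, i] = np.sqrt(p[i] / p[i + 1])
    np.fill_diagonal(K, -K.sum(axis=1))              # rows sum to zero
    rates = np.sort(np.abs(np.linalg.eigvals(K)))    # rates[0] ~ 0: stationary mode
    s = n_barriers                                   # slow modes: indices 1..s
    return rates[s + 1] - rates[s]                   # gap between slow and fast modes

# The candidate CV (e.g., a linear combination of order parameters) with
# the largest gap is taken as the best low-dimensional coordinate.
```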


2016 · Vol 37 (3) · pp. 802-823
Author(s): Teturo Kamae

We propose a new criterion for randomness of a word $x_1x_2\cdots x_n\in\mathbb{A}^n$ over a finite alphabet $\mathbb{A}$, defined by $$\Xi^n(x_1x_2\cdots x_n)=\sum_{\xi\in\mathbb{A}^+}\psi\bigl(|x_1x_2\cdots x_n|_\xi\bigr),$$ where $\mathbb{A}^+=\bigcup_{k=1}^{\infty}\mathbb{A}^k$ is the set of non-empty finite words over $\mathbb{A}$; for $\xi\in\mathbb{A}^k$, $$|x_1x_2\cdots x_n|_\xi=\#\{i;\ 1\le i\le n-k+1,\ x_i x_{i+1}\cdots x_{i+k-1}=\xi\};$$ and for $t\ge 0$, $\psi(0)=0$ and $\psi(t)=t\log t$ $(t>0)$. This value represents how random the word $x_1x_2\cdots x_n$ is from the viewpoint of block frequency. In fact, we define a randomness criterion as $$Q(x_1x_2\cdots x_n)=\frac{(1/2)(n\log n)^2}{\Xi^n(x_1x_2\cdots x_n)}.$$ Then, $$\lim_{n\to\infty}\frac{1}{n}\,Q(X_1X_2\cdots X_n)=h(X)$$ holds with probability 1 if $X_1X_2\cdots$ is an ergodic, stationary process over $\mathbb{A}$ either with a finite energy or with $h(X)=0$, where $h(X)$ is the entropy of the process. Another criterion for randomness, using $t^2$ instead of $t\log t$, has already been proposed in Kamae and Xue [An easy criterion for randomness. Sankhya A 77(1) (2015), 126-152]. In comparison, our new criterion provides a better fit with the entropy. We also claim that our criterion not only represents the entropy asymptotically but also gives a good representation of the randomness of fixed finite words.
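
The criterion is straightforward to compute directly from the definition. The sketch below is a naive O(n^2)-factor implementation under the stated definitions; the function names are illustrative:

```python
from collections import Counter
from math import log

def xi(word):
    """Naive computation of Xi^n(word): sum of psi(count) over every
    non-empty factor of the word, where psi(t) = t log t and psi(0) = 0."""
    n = len(word)
    counts = Counter(word[i:i + k]
                     for k in range(1, n + 1)
                     for i in range(n - k + 1))
    # factors occurring exactly once contribute psi(1) = 0, so skip them
    return sum(c * log(c) for c in counts.values() if c > 1)

def Q(word):
    """Randomness criterion Q = (1/2)(n log n)^2 / Xi^n(word), in nats.
    Q(word)/len(word) approximates the entropy rate h(X) for long words
    from an ergodic stationary source. Note Xi = 0 (division by zero)
    only for words in which no factor repeats."""
    n = len(word)
    return 0.5 * (n * log(n)) ** 2 / xi(word)
```

As a quick sanity check, a highly periodic word such as "01" repeated many times yields a much smaller Q/n than a word of the same length drawn uniformly at random, consistent with Q/n tracking the entropy rate.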

