Information Loss in Riffle Shuffling

2002 ◽  
Vol 11 (1) ◽  
pp. 79-95 ◽  
Author(s):  
Dudley Stark ◽ 
A. Ganesh ◽ 
Neil O’Connell

We study the asymptotic behaviour of the relative entropy (to stationarity) for a commonly used model of riffle shuffling a deck of n cards m times. Our results establish, and were motivated by, a prediction made in a recent numerical study by Trefethen and Trefethen. Loosely speaking, the relative entropy decays approximately linearly in m for m < log₂ n, and approximately exponentially for m > log₂ n. The deck becomes random in this information-theoretic sense after m = (3/2) log₂ n shuffles.
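This decay can be checked numerically in closed form: under the GSR riffle-shuffle model, the Bayer–Diaconis formula gives the probability of a permutation after m shuffles as C(2^m + n − r, n)/2^(mn), where r is its number of rising sequences, and the permutations with a given r are counted by Eulerian numbers. The Python sketch below (function names are ours) evaluates the relative entropy to uniform, in bits, for a 52-card deck; under these assumptions it reproduces the linear-then-exponential profile described above.

```python
from math import comb, log2, factorial

def eulerian(n):
    """A[k] = number of permutations of n elements with k descents,
    i.e. with k + 1 rising sequences."""
    A = [1]
    for size in range(2, n + 1):
        A = [(k + 1) * (A[k] if k < len(A) else 0)
             + (size - k) * (A[k - 1] if k >= 1 else 0)
             for k in range(size)]
    return A

def relative_entropy_bits(n, m):
    """D(P_m || uniform) in bits, where P_m(pi) = C(2^m + n - r, n) / 2^(mn)
    and r is the number of rising sequences of pi (Bayer-Diaconis)."""
    counts = eulerian(n)
    total = 0.0
    for r in range(1, n + 1):
        p = comb(2**m + n - r, n) / 2**(m * n)
        if p > 0:
            total += counts[r - 1] * p * log2(p * factorial(n))
    return total

# Decay profile for a standard deck: roughly linear until log2(52) ~ 5.7,
# then roughly exponential.
for m in range(1, 13):
    print(m, round(relative_entropy_bits(52, m), 4))
```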

2020 ◽  
Vol 9 (5) ◽  
Author(s):  
Anjishnu Bose ◽  
Parthiv Haldar ◽  
Aninda Sinha ◽  
Pritish Sinha ◽  
Shaswat Tiwari

We consider entanglement measures in 2-2 scattering in quantum field theories, focusing on relative entropy, which distinguishes two different density matrices. Relative entropy is investigated in several cases, including ϕ⁴ theory, chiral perturbation theory (χPT) describing pion scattering, and dilaton scattering in type II superstring theory. We derive a high-energy bound on the relative entropy using known bounds on the elastic differential cross-sections in massive QFTs. In χPT, the relative entropy close to threshold has simple expressions in terms of ratios of scattering lengths. In certain cases, we find definite sign properties for the relative entropy, over and above its usual positivity. We then turn to recent numerical investigations of the S-matrix bootstrap in the context of pion scattering. By imposing these sign constraints and the ρ resonance, we find restrictions on the allowed S-matrices. By performing hypothesis testing using relative entropy, we isolate two sets of S-matrices living on the boundary which give scattering lengths comparable to experiments, but one of which is far from the 1-loop χPT Adler zeros. We perform a preliminary analysis to constrain the allowed space further, using ideas involving positivity inside the extended Mandelstam region, and other quantum information-theoretic measures based on entanglement in isospin.
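The basic quantity throughout is the quantum relative entropy D(ρ‖σ) = Tr[ρ(log ρ − log σ)]. The sketch below computes it by eigendecomposition for generic density matrices; constructing the specific ρ and σ from 2-2 scattering amplitudes, as in the paper, is not reproduced here, and the example states are placeholders.

```python
import numpy as np

def quantum_relative_entropy(rho, sigma):
    """D(rho || sigma) = Tr[rho (log rho - log sigma)] in nats.
    Assumes supp(rho) is contained in supp(sigma); the support check
    is omitted in this sketch."""
    def herm_log(m):
        vals, vecs = np.linalg.eigh(m)
        # Treat (numerically) zero eigenvalues as contributing 0,
        # per the usual support convention.
        logvals = np.where(vals > 1e-12,
                           np.log(np.clip(vals, 1e-300, None)), 0.0)
        return vecs @ np.diag(logvals) @ vecs.conj().T
    return float(np.trace(rho @ (herm_log(rho) - herm_log(sigma))).real)

# Placeholder states, not the paper's: a mixed qubit state against the
# maximally mixed reference.
rho = np.array([[0.5, 0.4], [0.4, 0.5]])
sigma = np.eye(2) / 2
print(quantum_relative_entropy(rho, sigma))  # ~0.368 nats
```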


Quantum ◽  
2019 ◽  
Vol 3 ◽  
pp. 209 ◽  
Author(s):  
Francesco Buscemi ◽  
David Sutter ◽  
Marco Tomamichel

Given two pairs of quantum states, we want to decide if there exists a quantum channel that transforms one pair into the other. The theory of quantum statistical comparison and quantum relative majorization provides necessary and sufficient conditions for such a transformation to exist, but these conditions are typically difficult to check in practice. Here, building upon work by Keiji Matsumoto, we relax the problem by allowing for small errors in one of the transformations. In this way, a simple sufficient condition can be formulated in terms of one-shot relative entropies of the two pairs. In the asymptotic setting, where we consider sequences of state pairs, this implies under some mild convergence conditions that the quantum relative entropy is the only relevant quantity deciding when a pairwise state transformation is possible. More precisely, if the relative entropy of the initial state pair is strictly larger than the relative entropy of the target state pair, then a transformation with exponentially vanishing error is possible. On the other hand, if the relative entropy of the target state pair is strictly larger, then any such transformation will have an error converging exponentially to one. As an immediate consequence, we show that the rate at which pairs of states can be transformed into each other is given by the ratio of their relative entropies. We discuss applications to the resource theories of athermality and coherence, where our results imply an exponential strong converse for general state interconversion.
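The rate statement is easy to illustrate in the classical (commuting-state) special case, where the quantum relative entropy reduces to the Kullback-Leibler divergence. A minimal sketch with placeholder distributions:

```python
from math import log

def kl(p, q):
    """Classical relative entropy D(p || q) in nats."""
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Placeholder pairs: (p1, q1) is the initial state pair, (p2, q2) the
# target pair, all diagonal in a common basis.
p1, q1 = [0.9, 0.1], [0.5, 0.5]
p2, q2 = [0.8, 0.2], [0.5, 0.5]

# Asymptotic conversion rate: copies of the target pair obtainable per
# copy of the initial pair, with exponentially vanishing error below it.
print(kl(p1, q1) / kl(p2, q2))  # ~1.9
```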


Author(s):  
Ryotaro Kamimura

In this paper, we propose new information-theoretic methods to stabilize feature detection. We have previously introduced information-theoretic methods to realize competitive learning, where it turned out that mutual information maximization corresponds to a process of competition among neurons; mutual information is therefore effective in describing competitive processes. Building on this, we introduced information loss to interpret internal representations: by relaxing competitive units or deleting components such as units and connection weights, a neural network's information is decreased, and if the resulting information loss is sufficiently large, the deleted components play important roles. However, the information loss has suffered from problems such as the instability of final representations, meaning that final outputs depend significantly on the chosen parameters. To stabilize final representations, we introduce two computational methods, namely relative relaxation and weighted information loss. Relative relaxation is introduced because mutual information depends on the Gaussian width; it lets us relax competitive units, or softly delete some components, relative only to a predetermined base state. In addition, we introduce weighted information loss to take into account information on related components. We applied the methods to the well-known Iris problem and to a problem concerning the extinction of animals and plants. In the Iris problem, experimental results confirmed that final representations were significantly more stable when the parameter for the base state was chosen appropriately. In the extinction problem, weighted information loss showed better performance, with final outputs significantly more stable than those of the other methods.
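A minimal sketch of the underlying mechanism, assuming Gaussian competitive units (all names and the toy data are ours; the paper's relative relaxation and weighted loss variants are not reproduced): mutual information between inputs and units is computed from the unit posteriors, and the information loss of a unit is the drop in mutual information when that unit is deleted.

```python
import numpy as np

def competitive_posteriors(X, W, sigma):
    """p(j|x) for Gaussian units: softmax of -||x - w_j||^2 / (2 sigma^2),
    where sigma is the Gaussian width."""
    d2 = ((X[:, None, :] - W[None, :, :]) ** 2).sum(-1)
    a = np.exp(-d2 / (2 * sigma**2))
    return a / a.sum(axis=1, keepdims=True)

def mutual_information(P):
    """I(x; j) in nats for posteriors P[s, j], inputs weighted uniformly."""
    pj = P.mean(axis=0)
    return float((P * np.log(P / pj)).mean(axis=0).sum())

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))   # toy input patterns
W = rng.normal(size=(5, 4))     # 5 competitive units

P = competitive_posteriors(X, W, sigma=1.0)
I_full = mutual_information(P)
for j in range(W.shape[0]):     # information loss when unit j is deleted
    P_j = competitive_posteriors(X, np.delete(W, j, axis=0), sigma=1.0)
    print(j, I_full - mutual_information(P_j))
```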


Author(s):  
Bertrand Charpentier ◽  
Thomas Bonald

We introduce the tree sampling divergence (TSD), an information-theoretic metric for assessing the quality of the hierarchical clustering of a graph. Any hierarchical clustering of a graph can be represented as a tree whose nodes correspond to clusters of the graph. The TSD is the Kullback-Leibler divergence between two probability distributions over the nodes of this tree: those induced respectively by sampling edges and node pairs of the graph at random. A fundamental property of the proposed metric is that it is interpretable in terms of graph reconstruction: it quantifies the ability to reconstruct the graph from the tree in terms of information loss. In particular, the TSD is maximal when perfect reconstruction is feasible, i.e., when the graph has a complete hierarchical structure. Another key property of the TSD is that it applies to any tree, not necessarily binary. In particular, the TSD can be used to compress a binary tree while minimizing the information loss in terms of graph reconstruction, so as to obtain a compact representation of the hierarchical structure of a graph. We illustrate the behavior of the TSD compared to existing metrics in experiments on both synthetic and real datasets.
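A toy computation may make the definition concrete. The sketch below uses a hypothetical six-node graph (two triangles joined by a bridge), a hand-written hierarchy, and degree-based node weights as the pair-sampling model; the example and names are ours, not the authors'. Each sampled pair of graph nodes is mapped to the tree node that is their lowest common ancestor, and the TSD is the KL divergence between the two resulting distributions.

```python
from collections import Counter
from math import log

# Toy graph: two triangles {0,1,2} and {3,4,5} joined by the bridge (2, 3).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]

# Hypothetical hierarchy: leaves 0..5, internal clusters 6 (left triangle),
# 7 (right triangle), 8 (root); parent pointers, None at the root.
parent = {0: 6, 1: 6, 2: 6, 3: 7, 4: 7, 5: 7, 6: 8, 7: 8, 8: None}

def lca(u, v):
    anc = set()
    while u is not None:
        anc.add(u); u = parent[u]
    while v not in anc:
        v = parent[v]
    return v

deg = Counter()
for u, v in edges:
    deg[u] += 1; deg[v] += 1
m = len(edges)
w = {i: deg[i] / (2 * m) for i in deg}   # degree-based node weights

p = Counter()   # distribution induced by sampling edges uniformly
for u, v in edges:
    p[lca(u, v)] += 1 / m
q = Counter()   # distribution induced by sampling node pairs independently
for i in w:     # (self-pairs land on leaves, outside the support of p)
    for j in w:
        q[lca(i, j)] += w[i] * w[j]

tsd = sum(p[x] * log(p[x] / q[x]) for x in p)   # KL(p || q)
print(tsd)
```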


Fractals ◽  
1997 ◽  
Vol 05 (01) ◽  
pp. 95-104 ◽  
Author(s):  
A. Cohen ◽  
R. N. Mantegna ◽  
S. Havlin

We perform a numerical study of the statistical properties of natural texts written in English and of two types of artificial texts. As statistical tools we use the conventional Zipf analysis of the distribution of words, the inverse Zipf analysis of the distribution of word frequencies, the analysis of vocabulary growth, the Shannon entropy, and a quantity which is a nonlinear function of the word frequencies, the frequency "entropy". Our numerical results, obtained from eight complete books and sixteen related artificial texts, suggest that, among these analyses, the analysis of vocabulary growth shows the most striking difference between natural and artificial texts. They also suggest that the analyses that give greater weight to low-frequency words succeed better in distinguishing between natural and artificial texts: the inverse Zipf analysis seems to succeed better than the conventional Zipf analysis, and the frequency "entropy" better than the usual word entropy. By studying the scaling behaviour of both entropies as a function of the total number of words T of the investigated text, we find that the word relative entropy scales with the same functional form for both natural and artificial texts but with a different parameter, while the frequency relative "entropy" decreases monotonically with T for the artificial texts but has a minimum at T ≈ 10⁴ for the natural texts.
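The first four statistics are straightforward to reproduce on any text. The sketch below (the tokenization choices are ours, not the authors') computes the Zipf rank-frequency list, the inverse Zipf frequency-of-frequencies distribution, the vocabulary growth curve, and the Shannon word entropy; the paper's nonlinear frequency "entropy" is not reproduced.

```python
import re
from collections import Counter
from math import log2

def text_statistics(text):
    words = re.findall(r"[a-z']+", text.lower())   # crude tokenizer
    T, freq = len(words), Counter(words)
    # Zipf analysis: frequency of the r-th most frequent word vs. rank r
    zipf = sorted(freq.values(), reverse=True)
    # Inverse Zipf analysis: number of distinct words occurring exactly f times
    inverse_zipf = Counter(freq.values())
    # Vocabulary growth: number of distinct words after t tokens
    seen, growth = set(), []
    for t, w in enumerate(words, 1):
        seen.add(w)
        growth.append((t, len(seen)))
    # Shannon word entropy in bits
    entropy = -sum((c / T) * log2(c / T) for c in freq.values())
    return zipf, inverse_zipf, growth, entropy

# "book.txt" is a placeholder path for any text to analyze.
zipf, inverse_zipf, growth, entropy = text_statistics(open("book.txt").read())
```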


2018 ◽  
Vol 51 (4) ◽  
pp. 1005-1012
Author(s):  
Bernard Croset

An analytical method, the sections method, is developed to build a close link between the singularities of the surface of a body and the asymptotic behaviour of its amplitude form factor at large scattering vector, q. In contrast with a sphere, for which the asymptotic behaviour is q⁻², surface singularities lead to both narrow regions, for which the amplitude form factor exhibits trailing behaviour, and extended regions, for which it exhibits a rapid decrease. A numerical study of a simple example, the fourfold truncated sphere, illustrates the usefulness of these analytical predictions.
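The sphere baseline is easy to verify numerically: the normalized amplitude form factor of a sphere is A(q) = 3(sin qR − qR cos qR)/(qR)³, whose large-q envelope falls off as q⁻². A quick check (R = 1, our code, not the paper's):

```python
import numpy as np

def sphere_amplitude(q, R=1.0):
    """Normalized amplitude form factor of a sphere of radius R, A(0) = 1."""
    x = q * R
    return 3 * (np.sin(x) - x * np.cos(x)) / x**3

# The large-q envelope of |A(q)| is 3 / (qR)^2, so |A| * q^2, maximized
# over one oscillation period, should approach 3 for R = 1.
for q in (10.0, 100.0, 1000.0):
    qs = np.linspace(q, q + 2 * np.pi, 2001)
    print(q, np.max(np.abs(sphere_amplitude(qs))) * q**2)
```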


Entropy ◽  
2021 ◽  
Vol 23 (8) ◽  
pp. 1021
Author(s):  
James Fullwood ◽  
Arthur J. Parzygnat

We provide a stochastic extension of the Baez–Fritz–Leinster characterization of the Shannon information loss associated with a measure-preserving function. This recovers the conditional entropy and a closely related information-theoretic measure that we call conditional information loss. Although not functorial, these information measures are semi-functorial, a concept we introduce that is definable in any Markov category. We also introduce the notion of an entropic Bayes’ rule for information measures, and we provide a characterization of conditional entropy in terms of this rule.
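In the deterministic special case, the Baez–Fritz–Leinster information loss of a measure-preserving map f is H(p) − H(f_* p), which equals the conditional entropy H(X | f(X)). A small sketch with a placeholder distribution and map:

```python
from collections import defaultdict
from math import log2

def entropy(probs):
    return -sum(v * log2(v) for v in probs if v > 0)

def information_loss(p, f):
    """Shannon information loss of f: H(p) - H(pushforward of p along f),
    equal to the conditional entropy H(X | f(X))."""
    push = defaultdict(float)
    for x, px in p.items():
        push[f(x)] += px
    return entropy(p.values()) - entropy(push.values())

# Placeholder example: a distribution on {0,1,2,3}; f collapses to parity.
p = {0: 0.4, 1: 0.3, 2: 0.2, 3: 0.1}
print(information_loss(p, lambda x: x % 2))  # ~0.875 bits
```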

