Quantum Mutual Information and the One-time Pad

2006 ◽  
Vol 74 (4) ◽  
Author(s):  
Benjamin Schumacher ◽  
Michael D. Westmoreland

2008 ◽  
Vol 06 (supp01) ◽  
pp. 745-750 ◽  
Author(s):  
T. C. Dorlas ◽  
C. Morgan

We obtain a maximizer of the quantum mutual information for classical information sent over the quantum amplitude damping channel. This is achieved by limiting the ensemble of input states to antipodal states in the calculation of the product state capacity of the channel. We also consider the product state capacity of a convex combination of two memoryless channels and demonstrate, in particular, that it is in general not given by the minimum of the capacities of the respective memoryless channels.
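To make this type of calculation concrete, below is a minimal numerical sketch (not the authors' code) of the Holevo quantity for an equiprobable pair of antipodal pure input states sent through the standard qubit amplitude damping channel; the damping parameter gamma, the equiprobable priors, and the grid search over the Bloch angle are illustrative assumptions.

```python
import numpy as np

def von_neumann_entropy(rho):
    """Entropy in bits of a density matrix, computed from its eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return -np.sum(evals * np.log2(evals))

def amplitude_damping(rho, gamma):
    """Standard qubit amplitude damping channel with Kraus operators K0, K1."""
    K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - gamma)]])
    K1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])
    return K0 @ rho @ K0.T + K1 @ rho @ K1.T  # Kraus operators are real here

def holevo_antipodal(theta, gamma):
    """Holevo quantity chi for an equiprobable pair of antipodal pure states."""
    psi0 = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    psi1 = np.array([np.sin(theta / 2), -np.cos(theta / 2)])  # antipode of psi0
    outs = [amplitude_damping(np.outer(p, p), gamma) for p in (psi0, psi1)]
    avg = 0.5 * (outs[0] + outs[1])
    return von_neumann_entropy(avg) - 0.5 * sum(map(von_neumann_entropy, outs))

gamma = 0.3
thetas = np.linspace(0.0, np.pi, 500)
chis = [holevo_antipodal(t, gamma) for t in thetas]
best = int(np.argmax(chis))
print(f"best chi over antipodal ensembles: {chis[best]:.4f} bits "
      f"at theta = {thetas[best]:.3f} (gamma = {gamma})")
```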


Symmetry ◽  
2020 ◽  
Vol 12 (6) ◽  
pp. 881 ◽  
Author(s):  
Catalina-Lucia Cocianu ◽  
Alexandru Daniel Stan ◽  
Mihai Avramescu

The main aim of the reported work is to solve the registration problem for recognition purposes. We introduce two new evolutionary algorithms (EAs) consisting of population-based search methods, followed by or combined with a local search scheme. We used a variant of the Firefly algorithm to conduct the population-based search, while the local exploration was implemented by the Two-Membered Evolutionary Strategy (2M-ES). Both algorithms use a fitness function based on mutual information (MI) to direct the search toward an appropriate candidate solution. A good similarity measure is one that enables us to predict well, and with the symmetric MI we tie the similarity between two objects A and B directly to how well A predicts B, and vice versa. Since the search landscape of normalized mutual information proved more amenable to evolutionary computation than simple MI, we use normalized mutual information (NMI) defined as symmetric uncertainty. The proposed algorithms are tested against the well-known Principal Axes Transformation technique (PAT), a standard evolutionary strategy, and a version of the Firefly algorithm developed to align images. The accuracy and efficiency of the proposed algorithms are confirmed by our experiments, with both methods proving well suited to image registration.
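As an illustration of the fitness function described above, here is a minimal sketch (assumed, not taken from the paper) of NMI defined as symmetric uncertainty, 2·I(A;B)/(H(A)+H(B)), estimated from a joint intensity histogram of the two images; the bin count of 32 is an arbitrary choice.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def symmetric_uncertainty(img_a, img_b, bins=32):
    """NMI as symmetric uncertainty, from the joint intensity histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    hx, hy, hxy = entropy(px), entropy(py), entropy(pxy.ravel())
    mi = hx + hy - hxy                 # I(A;B) = H(A) + H(B) - H(A,B)
    return 2.0 * mi / (hx + hy)        # symmetric uncertainty, in [0, 1]
```

In a registration loop of the kind the paper describes, a candidate transform would be applied to one image and this score used as the fitness to be maximized by the population-based and local search steps.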


2015 ◽  
Vol 56 (2) ◽  
pp. 022205 ◽  
Author(s):  
Mario Berta ◽  
Kaushik P. Seshadreesan ◽  
Mark M. Wilde

2021 ◽  
Author(s):  
Benjamin Elbers

The Mutual Information segregation index M can be decomposed into a weighted average of local segregation scores. This property can be used to assess whether some units (say, occupations or geographic areas) contribute more to overall segregation than others. The related segregation index H is a normalized version of the M index, constrained to fall between 0 and 1. The question addressed in this paper is whether the local segregation scores of the M index can be normalized in a similar way, to arrive at useful local segregation scores for the H index. The paper shows that it is not possible to obtain normalized local segregation scores that fall between 0 and 1 and also aggregate to the H index. The one exception is the case in which all groups in the population are exactly equal in size. It is also (trivially) possible to decompose the H index into weighted local segregation scores; however, these have the same problems of interpretation as the local segregation scores of the M index.
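A minimal sketch of the decomposition the abstract refers to, under the standard formulation: M is the mutual information between group and unit, the local score of unit u is the Kullback-Leibler divergence of the unit's group composition from the overall composition, and H divides M by the entropy of the group distribution. The counts matrix and variable names are illustrative assumptions, not the author's code.

```python
import numpy as np

def m_decomposition(counts):
    """counts[u, g]: people of group g in unit u (e.g., an occupation)."""
    p = counts / counts.sum()
    p_unit = p.sum(axis=1)                  # marginal over units
    p_group = p.sum(axis=0)                 # marginal over groups
    cond = p / p_unit[:, None]              # group composition within each unit
    with np.errstate(divide="ignore", invalid="ignore"):
        kl = np.where(cond > 0, cond * np.log(cond / p_group), 0.0)
    local = kl.sum(axis=1)                  # local segregation score of unit u
    M = np.sum(p_unit * local)              # weighted average of local scores
    E = -np.sum(p_group * np.log(p_group))  # entropy of the group distribution
    H = M / E                               # normalized index, 0 <= H <= 1
    return M, H, local

counts = np.array([[50, 10], [10, 50], [30, 30]])
M, H, local = m_decomposition(counts)
print(f"M = {M:.4f}, H = {H:.4f}, local scores = {np.round(local, 4)}")
```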


Entropy ◽  
2019 ◽  
Vol 21 (6) ◽  
pp. 623 ◽  
Author(s):  
Damián G. Hernández ◽  
Inés Samengo

Determining the strength of nonlinear, statistical dependencies between two variables is a crucial matter in many research fields. The established measure for quantifying such relations is the mutual information. However, estimating mutual information from limited samples is a challenging task. Since the mutual information is the difference of two entropies, existing Bayesian estimators of entropy may be used to estimate it. This procedure, however, is still biased in the severely undersampled regime. Here, we propose an alternative estimator that is applicable to those cases in which the marginal distribution of one of the two variables (the one with minimal entropy) is well sampled. The other variable, as well as the joint and conditional distributions, can be severely undersampled. We obtain a consistent estimator that presents very low bias, outperforming previous methods even when the sampled data contain few coincidences. As with other Bayesian estimators, our proposal focuses on the strength of the interaction between the two variables, without seeking to model the specific way in which they are related. A distinctive property of our method is that the main data statistic determining the amount of mutual information is the inhomogeneity of the conditional distribution of the low-entropy variable in those states in which the large-entropy variable registers coincidences.
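To make the entropy-difference identity concrete, here is the naive plug-in estimator of I(X;Y) = H(X) + H(Y) - H(X,Y) (equivalently H(Y) - H(Y|X)); this is the biased baseline that the abstract contrasts with, not the authors' Bayesian estimator, and the toy data are invented for illustration.

```python
import numpy as np
from collections import Counter

def plugin_entropy(samples):
    """Naive (maximum-likelihood) entropy estimate in bits."""
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def plugin_mi(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) with plug-in entropies.
    Severely biased when samples are scarce, which is exactly the
    regime that motivates the Bayesian estimators discussed above."""
    joint = list(zip(xs, ys))
    return plugin_entropy(xs) + plugin_entropy(ys) - plugin_entropy(joint)

rng = np.random.default_rng(0)
x = rng.integers(0, 4, size=100)
y = (x + rng.integers(0, 2, size=100)) % 4  # y depends noisily on x
print(f"plug-in MI estimate: {plugin_mi(x.tolist(), y.tolist()):.3f} bits")
```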


2008 ◽  
pp. 371-380 ◽  
Author(s):  
Takao Ito

One of the most important issues in data mining is the discovery of implicit relationships between words in a large corpus and labels in a large database. The relationship between words and labels is often expressed as a function of distance measures. An effective measure would be useful not only for achieving high precision in data mining, but also for saving operation time. In previous research, many measures for calculating the one-to-many relationship have been proposed, such as the complementary similarity measure, the mutual information, and the phi coefficient. Some research has shown the complementary similarity measure to be the most effective. In this article, the author reviews previous research on measures of one-to-many relationships and proposes a new heuristic approach to finding an effective one.
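For concreteness, here is a sketch of two of the named measures computed from a 2x2 word-label contingency table; the cell naming (a, b, c, d) is the usual convention and an assumption here, and the complementary similarity measure is omitted because its definition varies across the literature.

```python
import math

def phi_coefficient(a, b, c, d):
    """Phi for a 2x2 table: a = word & label, b = word & not-label,
    c = not-word & label, d = not-word & not-label."""
    denom = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom if denom else 0.0

def mutual_information(a, b, c, d):
    """MI in bits between word presence and label presence,
    summed over the four cells of the contingency table."""
    n = a + b + c + d
    mi = 0.0
    for joint, px, py in [(a, a + b, a + c), (b, a + b, b + d),
                          (c, c + d, a + c), (d, c + d, b + d)]:
        if joint:
            mi += (joint / n) * math.log2(joint * n / (px * py))
    return mi

print(phi_coefficient(40, 10, 5, 45), mutual_information(40, 10, 5, 45))
```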


Author(s):  
Frédéric Dupuis ◽  
Jan Florjanczyk ◽  
Patrick Hayden ◽  
Debbie Leung

It is known that the maximum classical mutual information, which can be achieved between measurements on pairs of quantum systems, can drastically underestimate the quantum mutual information between them. In this article, we quantify this distinction between classical and quantum information by demonstrating that after removing a logarithmic-sized quantum system from one half of a pair of perfectly correlated bitstrings, even the most sensitive pair of measurements might yield outcomes that are essentially independent of each other. This effect is a form of information locking, but the definition we use is strictly stronger than those used previously. Moreover, we find that this property is generic, in the sense that it occurs when removing a random subsystem. As such, the effect might be relevant to statistical mechanics or black hole physics. While previous works had always assumed a uniform message, we assume only a min-entropy bound and also explore the effect of entanglement. We find that classical information is strongly locked almost until it can be completely decoded. Finally, we exhibit a quantum key distribution protocol that is 'secure' in the sense of accessible information, but in which leakage of even a logarithmic number of bits compromises the secrecy of all others.

