Quantum Mutual Information Capacity for High-Dimensional Entangled States

2012 ◽  
Vol 108 (14) ◽  
Author(s):  
P. Ben Dixon ◽  
Gregory A. Howland ◽  
James Schneeloch ◽  
John C. Howell
2001 ◽  
Vol 1 (3) ◽  
pp. 70-78
Author(s):  
M Horodecki ◽  
P Horodecki ◽  
R Horodecki ◽  
D Leung ◽  
B Terhal

We derive the general formula for the capacity of a noiseless quantum channel assisted by an arbitrary amount of noisy entanglement. In this capacity formula, the ratio of the quantum mutual information to the von Neumann entropy of the sender's share of the noisy entanglement plays the role that mutual information plays in the completely classical case. A consequence of our results is that bound entangled states cannot increase the capacity of a noiseless quantum channel.
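For reference, with \(\rho_{AB}\) the noisy entangled state shared by sender (A) and receiver (B) and \(S\) the von Neumann entropy (notation assumed here; the abstract fixes none), the quantity playing the role of classical mutual information is

\[
\frac{I(A{:}B)_\rho}{S(\rho_A)},
\qquad
I(A{:}B)_\rho = S(\rho_A) + S(\rho_B) - S(\rho_{AB}).
\]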


2008 ◽  
Vol 06 (supp01) ◽  
pp. 745-750 ◽  
Author(s):  
T. C. Dorlas ◽  
C. Morgan

We obtain a maximizer of the quantum mutual information for classical information sent over the quantum amplitude damping channel. This is achieved by limiting the ensemble of input states to antipodal states in the calculation of the channel's product-state capacity. We also consider the product-state capacity of a convex combination of two memoryless channels and demonstrate, in particular, that it is in general not given by the minimum of the capacities of the respective memoryless channels.
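For context, the qubit amplitude damping channel with damping parameter \(\gamma \in [0,1]\) (the standard definition, which the abstract does not restate) acts as

\[
\mathcal{N}_\gamma(\rho) = K_0 \rho K_0^\dagger + K_1 \rho K_1^\dagger,
\qquad
K_0 = |0\rangle\langle 0| + \sqrt{1-\gamma}\,|1\rangle\langle 1|,
\quad
K_1 = \sqrt{\gamma}\,|0\rangle\langle 1|,
\]

and the antipodal input states referred to above are pairs of pure qubit states lying at diametrically opposite points of the Bloch sphere.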


Entropy ◽  
2020 ◽  
Vol 22 (7) ◽  
pp. 727 ◽  
Author(s):  
Hlynur Jónsson ◽  
Giovanni Cherubini ◽  
Evangelos Eleftheriou

Information theory concepts are leveraged with the goal of better understanding and improving Deep Neural Networks (DNNs). The information plane of a neural network describes the behavior, during training, of the mutual information at various depths between the input/output and the hidden-layer variables. Previous analyses revealed that most of the training epochs are spent on compressing the input, in those networks where finiteness of the mutual information can be established. However, estimating mutual information is nontrivial for high-dimensional continuous random variables, so the computation of mutual information for DNNs and its visualization on the information plane have mostly been limited to low-complexity fully connected networks. In fact, even the existence of the compression phase in complex DNNs has been questioned and viewed as an open problem. In this paper, we present the convergence of mutual information on the information plane for a high-dimensional VGG-16 Convolutional Neural Network (CNN) by resorting to Mutual Information Neural Estimation (MINE), thus confirming and extending the results obtained with low-dimensional fully connected networks. Furthermore, we demonstrate the benefits of regularizing a network, especially for a large number of training epochs, by adopting mutual information estimates as additional terms in the network's loss function. Experimental results show that this regularization stabilizes the test accuracy and significantly reduces its variance.
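A minimal sketch of the MINE estimator mentioned above (the Donsker-Varadhan lower bound of Belghazi et al., 2018), in PyTorch; the network shape, the name StatNet, and the training details are illustrative assumptions, not the authors' configuration:

```python
# Minimal MINE sketch (Donsker-Varadhan bound); names and sizes are
# illustrative assumptions, not the paper's setup.
import torch
import torch.nn as nn

class StatNet(nn.Module):
    """Statistics network T(x, z) whose supremum gives the MI bound."""
    def __init__(self, x_dim, z_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, z):
        return self.net(torch.cat([x, z], dim=1))

def mine_lower_bound(T, x, z):
    """I(X;Z) >= E_P[T] - log E_{P_X x P_Z}[exp(T)].

    Samples from the product of marginals are obtained by shuffling
    z within the batch.
    """
    joint = T(x, z).mean()
    z_marg = z[torch.randperm(z.size(0))]
    log_mean_exp = torch.logsumexp(T(x, z_marg), dim=0) \
        - torch.log(torch.tensor(float(z.size(0))))
    return joint - log_mean_exp.squeeze()

# Maximizing the bound over T's parameters estimates I(X;Z); such an
# estimate can also enter the training loss as a regularization term:
# T = StatNet(x_dim=784, z_dim=64)
# opt = torch.optim.Adam(T.parameters(), lr=1e-4)
# loss = -mine_lower_bound(T, x_batch, z_batch)
```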


2015 ◽  
Vol 56 (2) ◽  
pp. 022205 ◽  
Author(s):  
Mario Berta ◽  
Kaushik P. Seshadreesan ◽  
Mark M. Wilde

2006 ◽  
Vol 20 (01) ◽  
pp. 1-23 ◽  
Author(s):  
Leonardo Neves ◽  
G. Lima ◽  
J. G. Aguirre Gómez ◽  
C. H. Monken ◽  
C. Saavedra ◽  
...  

We review recent theoretical and experimental work proposing and demonstrating the use of photon pairs created by spontaneous parametric down-conversion to generate entangled states of D-dimensional quantum systems, or qudits. This is the first demonstration of high-dimensional entanglement based on the intrinsic transverse momentum entanglement of type-II down-converted photons. The qudit space is defined by an aperture made up of an opaque screen with D slits (paths), placed in the arms of the twin photons. By manipulating the pump beam profile, we can prepare different entangled states of these possible paths. We focus our attention on a case important for applications in quantum information: the maximally entangled states. Experimental results for qudits with D=4 and D=8 are shown, and by measuring two-photon conditional interference we also demonstrate the nonclassical character of the correlations.
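In notation assumed here (the abstract gives no formulas), the D-slit apertures define path states \(|l\rangle\) for the signal (s) and idler (i) photons, and the maximally entangled states discussed above take the standard form

\[
|\Psi_D\rangle = \frac{1}{\sqrt{D}} \sum_{l=0}^{D-1} |l\rangle_s |l\rangle_i ,
\]

realized experimentally for D = 4 and D = 8.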


2008 ◽  
Vol 11 (3-4) ◽  
pp. 309-319 ◽  
Author(s):  
Boyan Bonev ◽  
Francisco Escolano ◽  
Miguel Cazorla

2021 ◽  
Vol 36 ◽  
pp. 01014
Author(s):  
Fung Yuen Chin ◽  
Yong Kheng Goh

Feature selection is the process of selecting a group of relevant features, by removing unnecessary ones, for use in constructing a predictive model. However, high-dimensional data increases the difficulty of feature selection due to the curse of dimensionality. In past research, the performance of a predictive model has always been compared with existing results. When modeling a new dataset, the current practice is to benchmark against results obtained by including all the features, redundant features and noise included. Here we propose a new optimal baseline for a dataset by means of its features ranked by mutual information score. The quality of a dataset depends on the information it contains: the more information the dataset contains, the better the performance of the predictive model. The number of features needed to achieve this new optimal baseline is obtained at the same time and serves as a guideline on the number of features required by a feature selection method. We also show experimental results demonstrating that the proposed method provides a better baseline, with fewer features, than the existing benchmark using all the features.
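As a concrete illustration of the ranking idea, a hypothetical sketch in Python with scikit-learn; the classifier choice and the helper name ranked_mi_baseline are assumptions for illustration, not the authors' code:

```python
# Hypothetical sketch of the ranked-feature baseline; classifier and
# helper name are illustrative, not the authors' implementation.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def ranked_mi_baseline(X, y, cv=5):
    """Rank features by mutual information with the label, then score
    nested subsets of top-ranked features; return the best mean
    cross-validated accuracy and the subset size achieving it."""
    mi = mutual_info_classif(X, y, random_state=0)
    order = np.argsort(mi)[::-1]          # highest-MI features first
    best_score, best_k = -np.inf, 0
    for k in range(1, X.shape[1] + 1):
        score = cross_val_score(
            LogisticRegression(max_iter=1000),
            X[:, order[:k]], y, cv=cv,
        ).mean()
        if score > best_score:
            best_score, best_k = score, k
    return best_score, best_k
```

The returned subset size then serves as the guideline on how many features a feature selection method should retain.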

