Error Exponent
Recently Published Documents

Total documents: 155 (last five years: 20)
H-index: 15 (last five years: 1)

2021
Author(s): Giuseppe Cocco, Albert Guillen i Fabregas, Josep Font-Segura

Author(s): Mario Berta, Fernando G. S. L. Brandão, Christoph Hirche

Abstract: We extend quantum Stein’s lemma in asymmetric quantum hypothesis testing to composite null and alternative hypotheses. As our main result, we show that the asymptotic error exponent for testing convex combinations of quantum states $$\rho^{\otimes n}$$ against convex combinations of quantum states $$\sigma^{\otimes n}$$ can be written as a regularized quantum relative entropy formula. We prove that such a regularization is needed in general, but we also discuss various settings where our formula, as well as extensions thereof, becomes single-letter. This includes an operational interpretation of the relative entropy of coherence in terms of hypothesis testing. For our proof, we start from the composite Stein’s lemma for classical probability distributions and lift the result to the non-commutative setting using elementary properties of quantum entropy. Finally, our findings also imply an improved recoverability lower bound on the conditional quantum mutual information in terms of the regularized quantum relative entropy, featuring an explicit and universal recovery map.
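As a hedged, schematic rendering of the regularized formula described above (our notation, not the paper's: $S$ and $T$ denote the state sets generating the composite null and alternative hypotheses, and $\zeta$ is a placeholder symbol for the Stein exponent):

```latex
% Schematic form only: the precise order of the optimizations and the
% measures over S and T are as in the paper's main theorem.
\zeta(S,T) \;=\; \lim_{n\to\infty}\frac{1}{n}\,
  \inf_{\substack{\omega_n \in \mathrm{conv}\{\rho^{\otimes n} \,:\, \rho \in S\}\\
                  \tau_n   \in \mathrm{conv}\{\sigma^{\otimes n} \,:\, \sigma \in T\}}}
  D(\omega_n \,\|\, \tau_n),
\qquad
D(\omega\|\tau) = \operatorname{Tr}\,\omega(\log\omega - \log\tau).
```

The need for the limit over $n$ is exactly the regularization the abstract refers to; in the single-letter settings the paper discusses, the $n$-fold optimization collapses to the $n=1$ term.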


Entropy, 2021, Vol. 23 (3), 265
Author(s): Ran Tamir (Averbuch), Neri Merhav

The main subject of this work is typical random codes (TRCs) in a communication scenario of source coding with side information at the decoder. We study the semi-deterministic code ensemble, a variant of the ordinary random binning code ensemble in which the relatively small type classes of the source are deterministically partitioned into the available bins in a one-to-one manner. As a consequence, the error probability decreases dramatically. We derive the random binning error exponent and the error exponent of the TRCs and prove that they are equal in a few important special cases. We show that the performance under optimal decoding can also be attained by certain universal decoders, e.g., the stochastic likelihood decoder with an empirical entropy metric. Moreover, we discuss the trade-offs between the error exponent and the excess-rate exponent for the typical random semi-deterministic code and characterize its optimal rate function. We show that for any pair of correlated information sources, both the error and excess-rate probabilities vanish exponentially as the blocklength tends to infinity.
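A minimal, hypothetical sketch of the random binning scheme this abstract builds on (binary alphabet; all names and parameters are our own, and the deterministic min-entropy decoder below is a simplification of the stochastic likelihood decoder with an empirical entropy metric mentioned above):

```python
import numpy as np
from itertools import product

# Toy illustration of random binning for source coding with side
# information (not the paper's semi-deterministic ensemble).
rng = np.random.default_rng(0)
n, R = 8, 0.6                      # blocklength, coding rate (bits/symbol)
num_bins = int(2 ** (n * R))

def empirical_cond_entropy(x, y):
    """Empirical conditional entropy H_hat(X|Y) of two binary sequences."""
    h = 0.0
    for b in (0, 1):
        idx = (y == b)
        if idx.sum() == 0:
            continue
        p = x[idx].mean()
        if 0 < p < 1:
            h += idx.mean() * (-p * np.log2(p) - (1 - p) * np.log2(1 - p))
    return h

# Random binning: every source sequence gets an i.i.d. uniform bin index.
bins = {u: rng.integers(num_bins) for u in product((0, 1), repeat=n)}

# Source and side information: Y is X observed through a BSC(0.1).
x = rng.integers(0, 2, n)
y = (x + (rng.random(n) < 0.1)) % 2
bin_idx = bins[tuple(x)]           # encoder output: the bin index only

# Universal decoder: minimum empirical conditional entropy within the bin.
candidates = [np.array(u) for u, b in bins.items() if b == bin_idx]
x_hat = min(candidates, key=lambda u: empirical_cond_entropy(u, y))
print("decoded correctly:", np.array_equal(x_hat, x))
```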


Entropy, 2021, Vol. 23 (2), 253
Author(s): Pavel Rybin, Kirill Andreev, Victor Zyablov

This paper deals with a specific construction of binary low-density parity-check (LDPC) codes. We derive lower bounds on the error exponents of these codes transmitted over the memoryless binary symmetric channel (BSC), both for the well-known maximum-likelihood (ML) decoding and for the proposed low-complexity decoding algorithms. We prove the existence of LDPC codes for which the probability of erroneous decoding decreases exponentially with the code length at coding rates below the channel capacity. We also show that the obtained error exponent lower bound under ML decoding almost coincides with the error exponents of good linear codes.
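For context, a brief sketch of the classical benchmark the last sentence compares against: Gallager's random-coding error exponent for the BSC (the standard textbook formula, not the paper's LDPC-specific bound; function names are our own):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def gallager_e0(rho, p):
    """Gallager's E0(rho) for a BSC(p) with the uniform input, in bits:
    E0 = rho - (1 + rho) * log2(p^(1/(1+rho)) + (1-p)^(1/(1+rho)))."""
    s = p ** (1 / (1 + rho)) + (1 - p) ** (1 / (1 + rho))
    return rho - (1 + rho) * np.log2(s)

def random_coding_exponent(R, p):
    """E_r(R) = max over 0 <= rho <= 1 of [E0(rho) - rho * R]."""
    res = minimize_scalar(lambda rho: -(gallager_e0(rho, p) - rho * R),
                          bounds=(0.0, 1.0), method="bounded")
    return max(0.0, -res.fun)

p = 0.05                                               # crossover probability
cap = 1 + p * np.log2(p) + (1 - p) * np.log2(1 - p)    # BSC capacity, bits
for R in (0.3, 0.5, 0.7):
    print(f"R = {R:.1f}: E_r(R) = {random_coding_exponent(R, p):.4f} "
          f"(capacity = {cap:.4f})")
```

The exponent is positive for all rates below capacity, which is the qualitative behavior the paper establishes for its LDPC constructions.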


Entropy, 2021, Vol. 23 (2), 199
Author(s): Sergio Verdú

Over the last six decades, the representation of error exponent functions for data transmission through noisy channels at rates below capacity has seen three distinct approaches: (1) through Gallager’s E0 functions (with and without cost constraints); (2) a large-deviations form, in terms of conditional relative entropy and mutual information; (3) through the α-mutual information and the Augustin–Csiszár mutual information of order α, derived from the Rényi divergence. While a fairly complete picture has emerged in the absence of cost constraints, gaps have remained in the interrelationships between the three approaches in the general case of cost-constrained encoding. Furthermore, no systematic approach has been proposed to solve the attendant optimization problems by exploiting the specific structure of the information functions. This paper closes those gaps and proposes a simple method to maximize the Augustin–Csiszár mutual information of order α under cost constraints by means of the maximization of the α-mutual information subject to an exponential average constraint.
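As a hedged illustration of how approaches (1) and (3) interlock (a known identity in this literature, stated in our notation with the α-mutual information in the Sibson sense; the paper's contribution concerns the cost-constrained generalizations):

```latex
% Bridge between Gallager's E_0 and the alpha-mutual information,
% via the change of parameter alpha = 1/(1+rho).
E_0(\rho, P_X) \;=\; \rho\, I_{\frac{1}{1+\rho}}(P_X; P_{Y|X}),
\qquad
E_{\mathrm{r}}(R) \;=\; \max_{P_X}\,\max_{0 \le \rho \le 1}
  \bigl[\, E_0(\rho, P_X) - \rho R \,\bigr].
```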


2020, Vol. 66 (12), pp. 7602-7614
Author(s): Anshoo Tandon, Vincent Y. F. Tan, Lav R. Varshney

Entropy, 2020, Vol. 22 (7), 762
Author(s): Yunus Can Gültekin, Alex Alvarado, Frans M. J. Willems

Probabilistic amplitude shaping (PAS) is a coded modulation strategy in which constellation shaping and channel coding are combined. PAS has attracted considerable attention in both wireless and optical communications. Achievable information rates (AIRs) of PAS have been investigated in the literature using Gallager’s error exponent approach. In particular, it has been shown that PAS achieves the capacity of the additive white Gaussian noise channel (Böcherer, 2018). In this work, we revisit the capacity-achieving property of PAS and derive AIRs using weak typicality. Our objective is to provide alternative proofs based on random sign-coding arguments that are as constructive as possible. Accordingly, in our proofs, only some signs of the channel inputs are drawn from a random code, while the remaining signs and amplitudes are produced constructively. We consider both symbol-metric and bit-metric decoding.
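A minimal, hypothetical sketch of the PAS transmitter structure described above (Maxwell-Boltzmann amplitude shaping plus uniform sign bits; here the signs are random bits standing in for channel-code parities, and all names and parameters are our own):

```python
import numpy as np

# Toy sketch of probabilistic amplitude shaping (PAS) for 8-ASK:
# amplitudes are drawn from a Maxwell-Boltzmann distribution, while the
# signs carry (here simulated) uniform parity bits from the channel code.
rng = np.random.default_rng(1)
amplitudes = np.array([1, 3, 5, 7])            # 8-ASK amplitude levels

def maxwell_boltzmann(nu):
    """P(a) proportional to exp(-nu * a^2) over the amplitude alphabet."""
    w = np.exp(-nu * amplitudes.astype(float) ** 2)
    return w / w.sum()

n = 10_000
amps = rng.choice(amplitudes, size=n, p=maxwell_boltzmann(nu=0.02))
signs = 1 - 2 * rng.integers(0, 2, n)          # uniform +/-1 sign bits
x = signs * amps                               # transmitted 8-ASK symbols

# Shaping lowers the average energy relative to uniform signaling.
print("shaped  E[X^2]:", (x.astype(float) ** 2).mean())
print("uniform E[X^2]:", (amplitudes.astype(float) ** 2).mean())
```

The separation visible here, shaped amplitudes on one branch and uniform signs on the other, is what the abstract's random sign-coding arguments formalize.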

