Error Exponents
Recently Published Documents

Total documents: 228 (29 in the last five years)
H-index: 22 (2 in the last five years)

2021. Authors: Lan V. Truong, Giuseppe Cocco, Josep Font-Segura, Albert Guillen i Fabregas

2021. Authors: Kenneth Palacio-Baus, Natasha Devroye

Information, 2021, Vol. 12 (7), pp. 268. Authors: Sadaf Salehkalaibar, Michèle Wigger

This paper studies binary hypothesis testing with a single sensor that communicates with two decision centers over a memoryless broadcast channel. The main focus lies on the tradeoff between the type-II error exponents achievable at the two decision centers. The proposed scheme can partially mitigate this tradeoff whenever the transmitter can distinguish, with probability larger than 1/2, the alternative hypotheses at the two decision centers, i.e., the hypotheses under which each decision center wishes to maximize its error exponent. When these hypotheses cannot be distinguished at the transmitter (because both decision centers have the same alternative hypothesis, or because the transmitter's observations have the same marginal distribution under both hypotheses), the scheme exhibits a significant tradeoff between the two exponents. The results in this paper thus reinforce conclusions previously drawn for a setup where communication takes place over a common noiseless link. Compared to that noiseless scenario, however, we observe that a small exponent tradeoff can persist even when the transmitter can distinguish the two hypotheses, simply because the channel noise prevents the transmitter from perfectly describing its guess of the hypothesis to the two decision centers.
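As classical background (not part of this paper's contribution): in the centralized version of this problem, where a single decision center observes the i.i.d. data directly and the type-I error probability is held below a fixed ε, Stein's lemma gives the best achievable type-II error exponent as the relative entropy between the two hypotheses,

\[
\lim_{n \to \infty} -\frac{1}{n} \log \beta_n(\varepsilon) \;=\; D(P_0 \,\|\, P_1) \;=\; \sum_{x} P_0(x) \log \frac{P_0(x)}{P_1(x)},
\]

and the distributed exponents traded off above fall short of this benchmark because of the rate and noise of the channel to the decision centers.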


Entropy, 2021, Vol. 23 (3), pp. 265. Authors: Ran Tamir (Averbuch), Neri Merhav

Typical random codes (TRCs) in a communication scenario of source coding with side information at the decoder are the main subject of this work. We study the semi-deterministic code ensemble, a variant of the ordinary random binning code ensemble in which the relatively small type classes of the source are deterministically partitioned into the available bins in a one-to-one manner. As a consequence, the error probability decreases dramatically. The random binning error exponent and the error exponent of the TRCs are derived and proved to be equal to one another in a few important special cases. We show that the performance under optimal decoding can also be attained by certain universal decoders, e.g., the stochastic likelihood decoder with an empirical entropy metric. Moreover, we discuss the trade-off between the error exponent and the excess-rate exponent for the typical random semi-deterministic code and characterize its optimal rate function. We show that for any pair of correlated information sources, both the error probability and the excess-rate probability vanish exponentially as the blocklength tends to infinity.
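As a toy illustration of the ordinary random binning ensemble that the semi-deterministic ensemble modifies (a minimal sketch under assumed toy parameters, not the paper's construction), one can pair an i.i.d. random bin assignment with a minimum empirical conditional entropy decoder of the universal kind mentioned above:

# Minimal sketch of random binning for source coding with side information,
# decoded by minimum empirical conditional entropy. All names and parameters
# are illustrative; the paper's semi-deterministic ensemble instead places
# small type classes into bins deterministically, one-to-one.
import itertools, math, random
from collections import Counter

def empirical_cond_entropy(x_seq, y_seq):
    """Empirical H(X|Y) of the joint type of (x_seq, y_seq), in bits."""
    n = len(x_seq)
    joint = Counter(zip(x_seq, y_seq))
    marg_y = Counter(y_seq)
    h = 0.0
    for (x, y), c in joint.items():
        h -= (c / n) * math.log2(c / marg_y[y])
    return h

def random_binning_demo(n=8, rate=0.75, p_flip=0.1, seed=0):
    rng = random.Random(seed)
    num_bins = 2 ** math.ceil(rate * n)
    # Ordinary random binning: every source sequence gets a uniform bin index.
    bins = {x: rng.randrange(num_bins)
            for x in itertools.product((0, 1), repeat=n)}
    # Correlated pair: side information Y is X observed through a BSC(p_flip).
    x = tuple(rng.randrange(2) for _ in range(n))
    y = tuple(b ^ (rng.random() < p_flip) for b in x)
    # Universal decoder: among the sequences in the received bin, pick the one
    # with minimum empirical conditional entropy given the side information.
    candidates = [u for u, b in bins.items() if b == bins[x]]
    x_hat = min(candidates, key=lambda u: empirical_cond_entropy(u, y))
    return x == x_hat

print(random_binning_demo())

At rates above the conditional entropy of the source given the side information, the bins are sparse enough that such a decoder succeeds with high probability; the semi-deterministic variant improves the exponent of the failure probability.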


Entropy, 2021, Vol. 23 (2), pp. 253. Authors: Pavel Rybin, Kirill Andreev, Victor Zyablov

This paper deals with a specific construction of binary low-density parity-check (LDPC) codes. We derive lower bounds on the error exponents of these codes transmitted over the memoryless binary symmetric channel (BSC), both under the well-known maximum-likelihood (ML) decoding and under a proposed low-complexity decoding algorithm. We prove that there exist such LDPC codes for which the probability of erroneous decoding decreases exponentially with the code length at any coding rate below the corresponding channel capacity. We also show that the obtained error exponent lower bound under ML decoding almost coincides with the error exponents of good linear codes.
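For context, the classical benchmark against which such LDPC lower bounds are compared is Gallager's random-coding exponent of the BSC, which is straightforward to evaluate numerically (a minimal sketch; function names and parameter values are illustrative, not taken from the paper):

# Gallager's random-coding error exponent E_r(R) = max_{0<=rho<=1} [E_0(rho) - rho*R]
# for the BSC with uniform input distribution; a standard benchmark.
import math

def e0_bsc(rho, p):
    """Gallager's E_0(rho) in bits for a BSC(p) with uniform inputs."""
    s = 1.0 / (1.0 + rho)
    return rho - (1.0 + rho) * math.log2(p ** s + (1.0 - p) ** s)

def random_coding_exponent(rate, p, grid=10_000):
    """E_r(R) via a grid search over rho in [0, 1]."""
    return max(e0_bsc(k / grid, p) - (k / grid) * rate
               for k in range(grid + 1))

p = 0.05                                                    # crossover probability
cap = 1 + p * math.log2(p) + (1 - p) * math.log2(1 - p)     # capacity 1 - h(p)
for rate in (0.2, 0.4, 0.6):
    print(f"R = {rate:.2f}: E_r = {random_coding_exponent(rate, p):.4f}"
          f"  (capacity = {cap:.4f})")

The exponent is positive for every rate below capacity and vanishes as the rate approaches it, which is the behavior the paper's LDPC lower bounds are shown to approach under ML decoding.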


Entropy, 2021, Vol. 23 (2), pp. 199. Author: Sergio Verdú

Over the last six decades, the representation of error exponent functions for data transmission through noisy channels at rates below capacity has seen three distinct approaches: (1) through Gallager's E0 functions (with and without cost constraints); (2) in large deviations form, in terms of conditional relative entropy and mutual information; (3) through the α-mutual information and the Augustin–Csiszár mutual information of order α, both derived from the Rényi divergence. While a fairly complete picture has emerged in the absence of cost constraints, gaps have remained in the interrelationships between the three approaches in the general case of cost-constrained encoding. Furthermore, no systematic approach has been proposed to solve the attendant optimization problems by exploiting the specific structure of the information functions. This paper closes those gaps and proposes a simple method to maximize the Augustin–Csiszár mutual information of order α under cost constraints, via maximization of the α-mutual information subject to an exponential average constraint.
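For reference, the objects the abstract connects can be written out explicitly (standard definitions, stated here for context rather than taken from the paper). For an input distribution P and channel W, Gallager's E0 function, the Rényi divergence of order α, and Sibson's α-mutual information are

\[
E_0(\rho, P) \;=\; -\log \sum_{y} \Bigl( \sum_{x} P(x)\, W(y|x)^{\frac{1}{1+\rho}} \Bigr)^{1+\rho},
\]
\[
D_\alpha(P \,\|\, Q) \;=\; \frac{1}{\alpha - 1} \log \sum_{x} P(x)^{\alpha} Q(x)^{1-\alpha},
\qquad
I_\alpha(P, W) \;=\; \min_{Q_Y} D_\alpha\bigl(P \times W \,\|\, P \times Q_Y\bigr),
\]

and the approaches meet in identities such as \(E_0(\rho, P) = \rho\, I_{1/(1+\rho)}(P, W)\), which is what makes the optimization of one representation transferable to the others.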

