On the error exponent of Markov channels with ISI and feedback

Author(s): Giacomo Como ◽ Serdar Yuksel ◽ Sekhar Tatikonda
2013 ◽ Vol 59 (8) ◽ pp. 4757-4766
Author(s): Manish Agarwal ◽ Dongning Guo ◽ Michael L. Honig

2016 ◽ Vol 2016 ◽ pp. 1-14
Author(s): Hongbo Zhao ◽ Lei Chen ◽ Wenquan Feng ◽ Chuan Lei

Recently, the problem of detecting unknown and arbitrary sparse signals has attracted much attention from researchers in various fields. Difficulties remain, however, because the key information is contained in only a small fraction of the signal and no prior knowledge is available. In this paper, we consider the more general and practical scenario of multiple observations with no prior information other than the sparsity of the signal. A new detection scheme, referred to as the likelihood ratio test with sparse estimation (LRT-SE), is presented. Under the Neyman-Pearson testing framework, LRT-SE estimates the unknown signal by employing the ℓ1-minimization technique from compressive sensing theory. The detection performance of LRT-SE is analyzed in terms of error probabilities for finite sample sizes and Chernoff consistency in the high-dimensional regime. The error exponent is introduced to describe the decay rate of the error probability as the number of observations grows. Finally, these properties of LRT-SE are demonstrated on synthetic sparse signals and on sparse signals extracted from real satellite telemetry data. The results show that the proposed detection scheme performs very close to the optimal detector.
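
As an illustration of the scheme described above, the following Python sketch implements a simplified LRT-SE-style detector under stated assumptions: an identity sensing matrix, i.i.d. Gaussian noise of known variance, and a soft-thresholding solution of the ℓ1-penalized least-squares problem (its closed form in this special case). The function names (lrt_se_statistic, neyman_pearson_threshold) and the choice of regularization parameter are illustrative and not taken from the paper.

import numpy as np

def soft_threshold(v, t):
    # Element-wise soft thresholding: the proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_estimate(y_bar, lam):
    # With an identity sensing matrix, the l1-penalized least-squares
    # (LASSO) estimate reduces to element-wise soft thresholding.
    return soft_threshold(y_bar, lam)

def lrt_se_statistic(Y, lam, sigma):
    # Y: (m, n) array of m noisy observations of the same unknown sparse signal.
    # Plug the sparse estimate into the Gaussian log-likelihood ratio for
    # "signal present" vs "noise only" (a GLRT-style statistic).
    y_bar = Y.mean(axis=0)
    x_hat = l1_estimate(y_bar, lam)
    m = Y.shape[0]
    return m * (y_bar @ x_hat - 0.5 * x_hat @ x_hat) / sigma**2

def neyman_pearson_threshold(n, m, lam, sigma, alpha, n_mc=2000, seed=None):
    # Monte-Carlo threshold under H0 (noise only) for false-alarm level alpha.
    rng = np.random.default_rng(seed)
    stats = [lrt_se_statistic(sigma * rng.standard_normal((m, n)), lam, sigma)
             for _ in range(n_mc)]
    return float(np.quantile(stats, 1.0 - alpha))

# Toy usage with a synthetic k-sparse signal.
rng = np.random.default_rng(0)
n, m, k, sigma = 256, 8, 5, 1.0
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = 3.0
Y = x + sigma * rng.standard_normal((m, n))
lam = sigma * np.sqrt(2.0 * np.log(n) / m)   # universal threshold for the averaged observation
tau = neyman_pearson_threshold(n, m, lam, sigma, alpha=0.05, seed=1)
print("detected:", lrt_se_statistic(Y, lam, sigma) > tau)

Here the Neyman-Pearson threshold is calibrated by Monte Carlo under the noise-only hypothesis for a chosen false-alarm level; the paper's analytical error-probability and error-exponent results are not reproduced by this sketch.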


Author(s): Arezou Rezazadeh ◽ Josep Font-Segura ◽ Alfonso Martinez ◽ Albert Guillén i Fàbregas

2015 ◽ Vol 47 (1) ◽ pp. 1-26
Author(s): Venkat Anantharam ◽ François Baccelli

Consider a real-valued discrete-time stationary and ergodic stochastic process, called the noise process. For each dimension n, we can choose a stationary point process in ℝⁿ and a translation-invariant tessellation of ℝⁿ. Each point is randomly displaced, with a displacement vector being a section of length n of the noise process, independent from point to point. The aim is to find a point process and a tessellation that minimize the probability of decoding error, defined as the probability that the displaced version of the typical point does not belong to the cell of this point. We consider the Shannon regime, in which the dimension n tends to ∞ while the logarithm of the intensity of the point processes, normalized by dimension, tends to a constant. We first show that this problem exhibits a sharp threshold: if the sum of the asymptotic normalized logarithmic intensity and of the differential entropy rate of the noise process is positive, then the probability of error tends to 1 with n for all point processes and all tessellations; if it is negative, then there exist point processes and tessellations for which this probability tends to 0. The error exponent function, which quantifies how quickly the probability of error goes to 0 in n, is then derived using large deviations theory. If the entropy spectrum of the noise satisfies a large deviations principle, then, below the threshold, the error probability goes exponentially fast to 0 with an exponent given in closed form in terms of the rate function of the noise entropy spectrum. This is obtained for two classes of point processes: the Poisson process and a Matérn hard-core point process. New lower bounds on error exponents are derived from this for Shannon's additive noise channel in the high signal-to-noise ratio limit; these bounds hold for all stationary and ergodic noises with the above properties and match the best known bounds in the white Gaussian noise case.
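
To make the sharp threshold concrete, here is a small Monte Carlo sketch in Python for the special case of a Poisson point process with Voronoi (nearest-point) cells and i.i.d. Gaussian noise; the general stationary ergodic noise and the Matérn hard-core case are not covered. By Slivnyak's theorem, conditioned on the typical point at the origin, the remaining points form a Poisson process, so the probability that the displaced point leaves its Voronoi cell can be computed exactly given the noise vector. With unit noise variance the differential entropy rate is 0.5·log(2πe) ≈ 1.42 nats, so the normalized log-intensity R = -1.8 lies below the threshold and R = -1.0 above it. Function names and parameter values are illustrative only.

import numpy as np
from scipy.special import gammaln

def log_ball_volume(n, r):
    # Log volume of an n-dimensional Euclidean ball of radius r.
    return 0.5 * n * np.log(np.pi) + n * np.log(r) - gammaln(0.5 * n + 1.0)

def poisson_voronoi_error(n, R, sigma=1.0, n_mc=20000, seed=0):
    # Decoding-error probability for a Poisson process of intensity exp(n*R)
    # with Voronoi cells under i.i.d. N(0, sigma^2) displacement noise.
    # Given a noise vector W, an error occurs iff some other point falls in
    # the ball of radius |W| centred at the displaced point; the number of
    # such points is Poisson with mean exp(n*R) * vol(ball), so the
    # conditional error probability is 1 - exp(-mean), averaged over W.
    rng = np.random.default_rng(seed)
    W = sigma * rng.standard_normal((n_mc, n))
    radius = np.linalg.norm(W, axis=1)
    log_mean = n * R + log_ball_volume(n, radius)
    return float(np.mean(-np.expm1(-np.exp(log_mean))))

# Below the threshold (R + h < 0) the error probability decays with n;
# above it (R + h > 0) it approaches 1.
for n in (20, 50, 100):
    print(n, poisson_voronoi_error(n, R=-1.8), poisson_voronoi_error(n, R=-1.0))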

