Error Exponents and α-Mutual Information

Entropy ◽  
2021 ◽  
Vol 23 (2) ◽  
pp. 199
Author(s):  
Sergio Verdú

Over the last six decades, the representation of error exponent functions for data transmission through noisy channels at rates below capacity has seen three distinct approaches: (1) through Gallager’s E0 functions (with and without cost constraints); (2) in large-deviations form, in terms of conditional relative entropy and mutual information; (3) through the α-mutual information and the Augustin–Csiszár mutual information of order α, both derived from the Rényi divergence. While a fairly complete picture has emerged in the absence of cost constraints, gaps have remained in the interrelationships between the three approaches in the general case of cost-constrained encoding. Furthermore, no systematic approach has been proposed to solve the attendant optimization problems by exploiting the specific structure of the information functions. This paper closes those gaps and proposes a simple method to maximize the Augustin–Csiszár mutual information of order α under cost constraints by maximizing the α-mutual information subject to an exponential average constraint.
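
For concreteness, here is a minimal numerical sketch of approach (1): Gallager’s E0 function and the resulting random-coding exponent Er(R) = max over ρ ∈ [0, 1] of E0(ρ) − ρR. These are standard textbook formulas, not code from the paper; the binary symmetric channel and the grid search over ρ are our own choices.

```python
# Hedged sketch: Gallager's E0 and the random-coding error exponent
# for a discrete memoryless channel (here a BSC with uniform input).
import numpy as np

def gallager_E0(rho, P, W):
    """E0(rho, P) = -log sum_y [sum_x P(x) W(y|x)^(1/(1+rho))]^(1+rho)."""
    inner = (P[:, None] * W ** (1.0 / (1.0 + rho))).sum(axis=0)
    return -np.log((inner ** (1.0 + rho)).sum())

def random_coding_exponent(R, P, W, grid=1000):
    """Er(R) = max over rho in [0, 1] of E0(rho, P) - rho * R (grid search)."""
    rhos = np.linspace(0.0, 1.0, grid)
    return max(gallager_E0(r, P, W) - r * R for r in rhos)

delta = 0.1                                   # BSC crossover probability
W = np.array([[1 - delta, delta],             # W[x, y] = P(y | x)
              [delta, 1 - delta]])
P = np.array([0.5, 0.5])                      # uniform input distribution
# BSC capacity in nats: log 2 minus the binary entropy of delta.
C = np.log(2) + delta * np.log(delta) + (1 - delta) * np.log(1 - delta)
print(random_coding_exponent(0.5 * C, P, W))  # strictly positive below capacity
```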

Entropy ◽  
2019 ◽  
Vol 21 (8) ◽  
pp. 778 ◽  
Author(s):  
Amos Lapidoth ◽  
Christoph Pfister

Two families of dependence measures between random variables are introduced. They are based on the Rényi divergence of order α and the relative α-entropy, respectively, and both dependence measures reduce to Shannon’s mutual information when their order α is one. The first measure shares many properties with the mutual information, including the data-processing inequality, and can be related to the optimal error exponents in composite hypothesis testing. The second measure does not satisfy the data-processing inequality, but appears naturally in the context of distributed task encoding.
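
As a point of reference, the order-α Rényi divergence that both measures build on can be computed directly. The sketch below is ours, not the authors’ code; it also checks the α → 1 reduction to the Kullback–Leibler divergence, mirroring how both dependence measures reduce to mutual information at α = 1.

```python
# Hedged sketch: Renyi divergence of order alpha between two pmfs,
# with the alpha -> 1 limit handled as the KL divergence.
import numpy as np

def renyi_divergence(P, Q, alpha):
    """D_alpha(P||Q) = log(sum_i P_i^alpha * Q_i^(1-alpha)) / (alpha - 1)."""
    if np.isclose(alpha, 1.0):                       # KL limit at alpha = 1
        return float(np.sum(P * np.log(P / Q)))
    return float(np.log(np.sum(P**alpha * Q**(1.0 - alpha))) / (alpha - 1.0))

P = np.array([0.7, 0.2, 0.1])
Q = np.array([0.4, 0.4, 0.2])
for a in (0.5, 0.999, 1.0, 2.0):
    print(a, renyi_divergence(P, Q, a))              # non-decreasing in alpha
```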


2016 ◽  
Vol 57 ◽  
pp. 421-464 ◽  
Author(s):  
Arnaud Malapert ◽  
Jean-Charles Régin ◽  
Mohamed Rezgui

We introduce an Embarrassingly Parallel Search (EPS) method for solving constraint problems in parallel, and we show that this method matches or even outperforms state-of-the-art algorithms on a number of problems using various computing infrastructures. EPS is a simple method in which a master decomposes the problem into many disjoint subproblems which are then solved independently by workers. Our approach has three advantages: it is efficient; it involves almost no communication or synchronization between workers; and it is easy to implement, because the master and the workers rely on an underlying constraint solver without requiring any modification of it. This paper describes the method and its application to various constraint problems (satisfaction, enumeration, optimization). We show that our method can be adapted to different underlying solvers (Gecode, Choco2, OR-tools) on different computing infrastructures (multi-core, data centers, cloud computing). The experiments cover unsatisfiable, enumeration and optimization problems, but do not cover first-solution search, because its inherent variability makes the results hard to analyze. The same variability can be observed for optimization problems, but to a lesser extent, because the optimality proof is still required. EPS offers good average performance, and matches or outperforms other available parallel implementations of Gecode as well as some solver portfolios. Moreover, we perform an in-depth analysis of the various factors that make this approach efficient, as well as the anomalies that can occur. Last, we show that the decomposition is a key component for efficiency and load balancing.
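
To illustrate the decomposition idea on a toy example (this sketch is ours and is far simpler than the paper’s solver-backed implementation), the master below splits N-queens into disjoint subproblems by fixing the first two queens, then hands them to a pool of workers that each run an unmodified sequential solver. Having many more subproblems than workers is what gives EPS its load balancing.

```python
# Toy EPS sketch: static decomposition by the master, independent
# workers, no communication or synchronization between workers.
from itertools import product
from multiprocessing import Pool

N = 10

def safe(partial, col):
    """Check that placing a queen in `col` conflicts with no earlier row."""
    r = len(partial)
    return all(c != col and abs(c - col) != r - i for i, c in enumerate(partial))

def count_solutions(prefix):
    """Sequential backtracking solver, used by workers as a black box."""
    if len(prefix) == N:
        return 1
    return sum(count_solutions(prefix + (c,)) for c in range(N) if safe(prefix, c))

if __name__ == "__main__":
    # Master: enumerate consistent 2-queen prefixes (disjoint subproblems).
    subproblems = [(a, b) for a, b in product(range(N), repeat=2) if safe((a,), b)]
    with Pool() as pool:                      # Workers: solve independently
        total = sum(pool.map(count_solutions, subproblems))
    print(total)                              # 724 solutions for N = 10
```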


2015 ◽  
Vol 47 (1) ◽  
pp. 1-26 ◽  
Author(s):  
Venkat Anantharam ◽  
François Baccelli

Consider a real-valued discrete-time stationary and ergodic stochastic process, called the noise process. For each dimension n, we can choose a stationary point process in ℝⁿ and a translation-invariant tessellation of ℝⁿ. Each point is randomly displaced, with the displacement vector being a section of length n of the noise process, independent from point to point. The aim is to find a point process and a tessellation that minimize the probability of decoding error, defined as the probability that the displaced version of the typical point does not belong to the cell of this point. We consider the Shannon regime, in which the dimension n tends to ∞ while the logarithm of the intensity of the point processes, normalized by dimension, tends to a constant. We first show that this problem exhibits a sharp threshold: if the sum of the asymptotic normalized logarithmic intensity and of the differential entropy rate of the noise process is positive, then the probability of error tends to 1 with n for all point processes and all tessellations; if it is negative, then there exist point processes and tessellations for which this probability tends to 0. The error exponent function, which quantifies how quickly the probability of error goes to 0 in n, is then derived using large deviations theory. If the entropy spectrum of the noise satisfies a large deviations principle, then, below the threshold, the error probability goes to 0 exponentially fast, with an exponent that is given in closed form in terms of the rate function of the noise entropy spectrum. This is obtained for two classes of point processes: the Poisson process and a Matérn hard-core point process. New lower bounds on error exponents are derived from this for Shannon's additive noise channel in the high signal-to-noise-ratio limit; they hold for all stationary and ergodic noises with the above properties and match the best known bounds in the white Gaussian noise case.
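
Restated compactly (the symbols below are our own; the abstract does not fix notation), the sharp threshold reads:

```latex
% \nu is the asymptotic normalized logarithmic intensity of the point
% processes, h is the differential entropy rate of the noise process,
% and P_e is the probability of decoding error.
\[
  \nu \;=\; \lim_{n \to \infty} \frac{1}{n} \log \lambda_n ,
  \qquad
  h \;=\; \text{differential entropy rate of the noise},
\]
\[
  \nu + h > 0 \;\Rightarrow\; P_e \to 1
  \quad \text{for all point processes and tessellations},
\]
\[
  \nu + h < 0 \;\Rightarrow\; \exists\,\text{point processes and tessellations with } P_e \to 0 .
\]
```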


2020 ◽  
Vol 15 ◽  
Author(s):  
Xiaogeng Wan ◽  
Xinying Tan

Aims: This paper presents a simple method that is efficient for protein evolutionary classification.
Background: Proteins are diverse in their sequences, structures and functions, and it is important to understand the relations between them. Many methods have been developed for protein evolutionary classification; these include machine learning methods such as LibSVM, and feature methods such as the natural vector method and the protein map. Machine learning methods use pre-labeled training sets to classify protein sequences into disjoint classes. Feature methods such as the natural vector and the protein map convert protein sequences into feature vectors and build phylogenetic trees from the distances between the feature vectors.
Objective: We propose a simple method that classifies the evolutionary relations of protein sequences using distance maps on the mutual relations between protein sequences. The new method is unsupervised and model-free, which makes it efficient for the evolutionary classification of proteins.
Method: To quantify the mutual relations and the homology of protein sequences, we use normalized mutual information rates on protein sequences, define two distance maps that convert the normalized mutual information rates into 'distances', and use UPGMA trees to present the evolutionary classifications of proteins.
Result: We use four classical protein evolutionary classification examples to demonstrate the new method, comparing the results with traditional methods such as the natural vector and the protein map, and using AUPRC curves to evaluate the classification quality of each method. The new method with the two distance maps is efficient in the evolutionary classification of the classical examples and outperforms the natural vector and the protein map.
Conclusion: The normalized mutual information rates with the two distance maps are efficient in protein evolutionary classification and outperform some classical methods.
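
A schematic sketch of the pipeline follows. It is not the paper’s estimator (the paper defines normalized mutual information rates and two specific distance maps), but it shows the shape of the computation, using scikit-learn’s labeling NMI as a stand-in and SciPy’s average-linkage clustering, which is UPGMA.

```python
# Hedged sketch: pairwise NMI-based distances between toy sequences,
# then a UPGMA tree via average-linkage hierarchical clustering.
import numpy as np
from sklearn.metrics import normalized_mutual_info_score
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

seqs = ["MKTAYIAKQR", "MKTAYLAKQR", "MKSAYIAQQR", "GGSTLPWQKA"]  # toy data

def nmi_distance(s, t):
    """Distance map d = 1 - NMI on residue labels (equal lengths here)."""
    a = [ord(c) for c in s]
    b = [ord(c) for c in t]
    return 1.0 - normalized_mutual_info_score(a, b)

n = len(seqs)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = nmi_distance(seqs[i], seqs[j])

tree = linkage(squareform(D), method="average")   # "average" = UPGMA
print(tree)                                        # merge order and heights
```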


Author(s):  
James Richard Forbes ◽  
Christopher John Damaren

The design of gain-scheduled strictly positive real (SPR) controllers using numerical optimization is considered. Our motivation is robust, yet accurate, motion control of flexible robotic systems via the passivity theorem. It is proven that a family of very strictly passive compensators scheduled via time- or state-dependent scheduling signals is itself very strictly passive. Two optimization problems are posed. First, we present a simple method to optimize the linear SPR controllers that compose the gain-scheduled controller. Second, we formulate the optimization problem associated with the gain-scheduled controller itself. Restricting our investigation to time-dependent scheduling signals, the signals are parameterized, and the optimization objective function seeks the form of the scheduling signals that minimizes a combination of the manipulator tip tracking error and the control effort. A numerical example employing a two-link flexible manipulator demonstrates the effectiveness of the optimal gain-scheduling algorithm. The closed-loop system performance is improved, and it is shown that the optimal scheduling signals are not necessarily linear.
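
As a conceptual illustration of the scheduling architecture (our sketch, not the authors’ controller or their optimization), the scheduled output below is a convex combination of two fixed SPR compensators under a smooth time-dependent scheduling signal s(t) ∈ [0, 1]. The compensators, gains, and scheduling shape are all illustrative choices.

```python
# Hedged sketch: time-dependent gain scheduling between two first-order
# SPR compensators k/(s + a), integrated with forward Euler.
import numpy as np

def first_order_spr(k, a):
    """Return a stepper for the SPR transfer function k/(s + a), a, k > 0."""
    state = {"x": 0.0}
    def step(e, dt=1e-3):
        state["x"] += dt * (-a * state["x"] + e)   # dx/dt = -a x + e
        return k * state["x"]
    return step

C1, C2 = first_order_spr(5.0, 2.0), first_order_spr(1.0, 10.0)
s = lambda t: 0.5 * (1 + np.tanh(2.0 * (t - 1.0)))  # smooth schedule 0 -> 1

t, u = 0.0, []
for _ in range(3000):                               # respond to a unit error
    e = 1.0
    u.append(s(t) * C1(e) + (1 - s(t)) * C2(e))     # scheduled control output
    t += 1e-3
print(u[-1])                                        # settles near C1's gain
```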


2005 ◽  
Vol 15 (04) ◽  
pp. 1503-1514 ◽  
Author(s):  
RADHAKRISHNAN NAGARAJAN ◽  
JANE E. AUBIN ◽  
CHARLOTTE A. PETERSON

Cell differentiation is a complex process governed by the timely activation of genes, resulting in a specific phenotype or observable physical change. Recent reports have indicated heterogeneity in gene expression even amongst identical colonies (clones). While some genes are always expressed, others are expressed with a finite probability. In this report, a mathematical framework is provided to understand the mechanism of osteoblast (bone-forming cell) differentiation. A systematic approach using a combination of entropy, pairwise dependency and a Bayesian approach is used to gain insight into the dependencies and underlying network structure. Pairwise dependencies are estimated using linear correlation and mutual information. An algorithm is proposed to identify statistically significant mutual information estimates. The robustness of the dependencies and the network structure to a decreasing number of colonies (colony size) and to perturbation is investigated. Perturbation is achieved by generating bootstrap samples. The methods discussed are generic in nature and can be extended to similar experimental paradigms.
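
A generic sketch of the pairwise step follows (ours; the paper’s exact estimator and resampling scheme may differ): mutual information between binary expression profiles across colonies, with significance assessed against a permutation null as a stand-in for the bootstrap perturbation described above.

```python
# Hedged sketch: MI between two binary gene-expression profiles and a
# permutation test for statistical significance of the MI estimate.
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)
colonies = 60
gene_a = rng.integers(0, 2, colonies)             # expressed / not expressed
gene_b = np.where(rng.random(colonies) < 0.8,     # gene_b tracks gene_a
                  gene_a, 1 - gene_a)             # 80% of the time

observed = mutual_info_score(gene_a, gene_b)
null = np.array([mutual_info_score(gene_a, rng.permutation(gene_b))
                 for _ in range(2000)])           # MI under independence
p_value = (1 + np.sum(null >= observed)) / (1 + len(null))
print(f"MI = {observed:.3f} nats, permutation p = {p_value:.4f}")
```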

