Distributed Hypothesis Testing with Privacy Constraints

Entropy ◽  
2019 ◽  
Vol 21 (5) ◽  
pp. 478 ◽  
Author(s):  
Atefeh Gilani ◽  
Selma Belhadj Amor ◽  
Sadaf Salehkalaibar ◽  
Vincent Y. F. Tan

We revisit the distributed hypothesis testing (or hypothesis testing with communication constraints) problem from the viewpoint of privacy. Instead of observing the raw data directly, the transmitter observes a sanitized or randomized version of it. We impose an upper bound on the mutual information between the raw and randomized data. Under this scenario, the receiver, which is also provided with side information, is required to make a decision on whether the null or alternative hypothesis is in effect. We first provide a general lower bound on the type-II exponent for an arbitrary pair of hypotheses. Next, we show that if the distribution under the alternative hypothesis is the product of the marginals of the distribution under the null (i.e., testing against independence), then the exponent is known exactly. Moreover, we show that the strong converse property holds. Using ideas from Euclidean information theory, we also provide an approximate expression for the exponent when the communication rate is low and the privacy level is high. Finally, we illustrate our results with a binary and a Gaussian example.
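As a rough numerical illustration of the binary example, the sketch below computes, for a doubly symmetric binary source and a sanitization channel that flips the transmitter's observation with probability delta, the privacy leakage $I(X;Z)$ together with $I(Z;Y)$, the Stein-type exponent a receiver observing the sanitized data directly could achieve when testing against independence. The communication-rate constraint of the paper is ignored here, and all parameter values are hypothetical.

```python
import numpy as np

def h2(p):
    """Binary entropy in bits, clipped away from 0 and 1 for numerical safety."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

q = 0.1                              # under H0: Y = X xor N with N ~ Bern(q); under H1: Y independent of X
deltas = np.linspace(0.0, 0.5, 6)    # sanitization: Z = X xor W with W ~ Bern(delta)

for d in deltas:
    leakage = 1 - h2(d)                       # I(X; Z), the privacy leakage in bits
    crossover = d * (1 - q) + (1 - d) * q     # P(Z != Y) under the null hypothesis
    exponent = 1 - h2(crossover)              # I(Z; Y), proxy for the testing-against-independence exponent
    print(f"delta={d:.2f}  I(X;Z)={leakage:.3f} bits  I(Z;Y)={exponent:.3f} bits")
```

Increasing the sanitization noise reduces the leakage $I(X;Z)$ but also shrinks $I(Z;Y)$, mirroring the privacy-utility tradeoff that the exact exponent characterization makes precise.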

Information ◽  
2021 ◽  
Vol 12 (7) ◽  
pp. 268 ◽
Author(s):  
Sadaf Salehkalaibar ◽  
Michèle Wigger

This paper studies binary hypothesis testing with a single sensor that communicates with two decision centers over a memoryless broadcast channel. The main focus lies on the tradeoff between the two type-II error exponents achievable at the two decision centers. Our proposed scheme can partially mitigate this tradeoff when the transmitter can distinguish, with probability larger than 1/2, the alternative hypotheses at the decision centers, i.e., the hypotheses under which the decision centers wish to maximize their error exponents. When these hypotheses cannot be distinguished at the transmitter (because both decision centers have the same alternative hypothesis, or because the transmitter’s observations have the same marginal distribution under both hypotheses), our scheme exhibits a pronounced tradeoff between the two exponents. The results in this paper thus reinforce the previous conclusions drawn for a setup where communication is over a common noiseless link. Compared to such a noiseless scenario, however, we observe here that even when the transmitter can distinguish the two hypotheses, a small exponent tradeoff can persist, simply because the channel noise prevents the transmitter from perfectly describing its guess of the hypothesis to the two decision centers.
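For orientation, the exponent pairs whose tradeoff is studied can be formalized in the usual way (the notation below is ours, not the paper's): writing $\alpha_{i,n}$ and $\beta_{i,n}$ for the type-I and type-II error probabilities at decision center $i \in \{1,2\}$ with blocklength $n$, a pair $(\theta_1,\theta_2)$ is achievable under type-I constraints $\epsilon_1,\epsilon_2$ if some coding and testing scheme satisfies $\alpha_{i,n} \le \epsilon_i$ and $\theta_i \le \liminf_{n\to\infty} -\tfrac{1}{n}\log \beta_{i,n}$ for both centers simultaneously; the tradeoff arises because the single broadcast transmission must serve both constraints at once.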


2010 ◽  
Vol 57 (4) ◽  
pp. 309-317 ◽  
Author(s):  
DAVID SALTZ

Since the formulation of hypothesis testing by Neyman and Pearson in 1933, the approach has been subject to continuous criticism. Yet, until recently this criticism, for the most part, has gone unheeded. The negative appraisal focuses mainly on the fact that P-values provide no evidential support for either the null hypothesis (H0) or the alternative hypothesis (Ha). Although hypothesis testing done under tightly controlled conditions can provide some insight regarding the alternative hypothesis based on the uncertainty of H0, strictly speaking, this does not constitute evidence. More importantly, well-controlled research environments rarely exist in field-centered sciences such as ecology. These problems are manifestly more acute in applied field sciences, such as conservation biology, that are expected to support decision making, often under crisis conditions. In conservation biology, the consequences of a Type II error are often far worse than those of a Type I error. The "advantage" afforded to H0 by setting the probability of committing a Type I error (α) to a low value (0.05), in effect, increases the probability of committing a Type II error, which can lead to disastrous practical consequences. In the past decade, multi-model inference using information-theoretic or Bayesian approaches has been offered as a better alternative. These techniques allow a series of models to be compared on equal footing. Using these approaches, it is unnecessary to select a single "best" model. Rather, the parameters needed for decision making can be averaged across all models, weighted according to the support accorded each model. Here, I present a hypothetical example of animal counts that suggest a possible population decline, and analyze the data using both hypothesis testing and an information-theoretic approach. A comparison between the two approaches highlights the shortcomings of hypothesis testing and the advantages of multi-model inference.
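To make the contrast concrete, the sketch below (not the author's actual data or analysis; the counts are invented) fits a constant-abundance model and a linear-decline model to hypothetical annual counts by least squares, ranks them with AIC, and model-averages the trend estimate with Akaike weights instead of forcing a reject/fail-to-reject decision on a null hypothesis.

```python
import numpy as np

# Hypothetical annual counts hinting at a possible decline (illustrative only).
years = np.arange(10)
counts = np.array([52, 50, 49, 51, 47, 46, 48, 44, 45, 43], dtype=float)
n = len(counts)

def fit_ls(X, y):
    """Ordinary least squares; returns coefficients and residual sum of squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, np.sum((y - X @ beta) ** 2)

def aic(rss, k):
    """Gaussian AIC with k estimated parameters (error variance included in k)."""
    return n * np.log(rss / n) + 2 * k

X_const = np.ones((n, 1))                       # model 1: constant abundance
X_trend = np.column_stack([np.ones(n), years])  # model 2: linear trend
b_const, rss_const = fit_ls(X_const, counts)
b_trend, rss_trend = fit_ls(X_trend, counts)
aics = np.array([aic(rss_const, 2), aic(rss_trend, 3)])

# Akaike weights express the relative support for each model on a common scale.
rel = np.exp(-0.5 * (aics - aics.min()))
weights = rel / rel.sum()

# Model-averaged yearly change (the constant model contributes a slope of zero).
slope_avg = weights[1] * b_trend[1]
print("Akaike weights (constant, trend):", np.round(weights, 3))
print("Model-averaged yearly change:", round(float(slope_avg), 3))
```

Rather than asking whether a null hypothesis of no decline can be rejected, the decision-relevant quantity (the expected yearly change) is carried forward with its model weights attached.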


Author(s):  
Patrick W. Kraft ◽  
Ellen M. Key ◽  
Matthew J. Lebo

Grant and Lebo (2016) and Keele et al. (2016) clarify the conditions under which the popular general error correction model (GECM) can be used and interpreted easily: in a bivariate GECM the data must be integrated in order to rely on the error correction coefficient, $\alpha_1^\ast$, to test cointegration and measure the rate of error correction between a single exogenous x and a dependent variable, y. Here we demonstrate that even if the data are all integrated, the test on $\alpha_1^\ast$ is misunderstood when there is more than a single independent variable. The null hypothesis is that there is no cointegration between y and any x, but the correct alternative hypothesis is that y is cointegrated with at least one—but not necessarily more than one—of the x's. A significant $\alpha_1^\ast$ can occur when some I(1) regressors are not cointegrated and the equation is not balanced. Thus, the correct limiting distributions of the right-hand-side long-run coefficients may be unknown. We use simulations to demonstrate the problem and then discuss implications for applied examples.
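For readers unfamiliar with the model, a bivariate GECM of the kind discussed here can be written (in our notation) as $\Delta y_t = \alpha_0 + \alpha_1^\ast y_{t-1} + \beta_0 \Delta x_t + \beta_1 x_{t-1} + \varepsilon_t$, with analogous $\Delta x_{j,t}$ and $x_{j,t-1}$ terms added for each further regressor; $\alpha_1^\ast$ is the error correction coefficient whose test is at issue above.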


2021 ◽  
Vol 111 (4) ◽  
Author(s):  
Gergely Bunth ◽  
Péter Vrana

Pairs of states, or “boxes”, are the basic objects in the resource theory of asymmetric distinguishability (Wang and Wilde, Phys. Rev. Res. 1(3):033170, 2019; doi:10.1103/PhysRevResearch.1.033170), where free operations are arbitrary quantum channels that are applied to both states. From this point of view, hypothesis testing is seen as a process by which a standard form of distinguishability is distilled. Motivated by the more general problem of quantum state discrimination, we consider boxes of a fixed finite number of states and study an extension of the relative submajorization preorder to such objects. In this relation, a tuple of positive operators is greater than another if there is a completely positive trace-nonincreasing map under which the image of the first tuple satisfies certain semidefinite constraints relative to the other one. This preorder characterizes error probabilities in the case of testing a composite null hypothesis against a simple alternative hypothesis, as well as certain error probabilities in state discrimination. We present a sufficient condition for the existence of catalytic transformations between boxes, and a characterization of an associated asymptotic preorder, both expressed in terms of sandwiched Rényi divergences. This characterization of the asymptotic preorder directly shows that the strong converse exponent for a composite null hypothesis is equal to the maximum of the corresponding exponents for the pairwise simple hypothesis testing tasks.
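For completeness, the sandwiched Rényi divergence appearing in the characterization is, for a state $\rho$, a positive semidefinite operator $\sigma$, and $\alpha \in (1,\infty)$, $\widetilde{D}_\alpha(\rho\|\sigma) = \frac{1}{\alpha-1}\log\operatorname{Tr}\big[\big(\sigma^{\frac{1-\alpha}{2\alpha}}\rho\,\sigma^{\frac{1-\alpha}{2\alpha}}\big)^{\alpha}\big]$; the conditions in the paper are expressed in terms of such divergences, suitably adapted to tuples of positive operators.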


Author(s):  
Alexander Ly ◽  
Eric-Jan Wagenmakers

The “Full Bayesian Significance Test e-value”, henceforth FBST ev, has received increasing attention across a range of disciplines including psychology. We show that the FBST ev leads to four problems: (1) the FBST ev cannot quantify evidence in favor of a null hypothesis and therefore also cannot discriminate “evidence of absence” from “absence of evidence”; (2) the FBST ev is susceptible to sampling to a foregone conclusion; (3) the FBST ev violates the principle of predictive irrelevance, such that it is affected by data that are equally likely to occur under the null hypothesis and the alternative hypothesis; (4) the FBST ev suffers from the Jeffreys-Lindley paradox in that it does not include a correction for selection. These problems also plague the frequentist p-value. We conclude that although the FBST ev may be an improvement over the p-value, it does not provide a reasonable measure of evidence against the null hypothesis.
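As a minimal numerical sketch of how the FBST ev is computed (the binomial setting, the Beta(1, 1) prior, and the grid approximation are our illustrative choices, not the authors'), the e-value in favor of a point null $H_0\colon \theta = 1/2$ is one minus the posterior mass of the region where the posterior density exceeds its value at $\theta = 1/2$:

```python
import numpy as np
from scipy.stats import beta

def fbst_ev(successes, failures, theta0=0.5, a=1.0, b=1.0, grid_size=100_001):
    """FBST e-value for H0: theta = theta0 under a Beta(a, b) prior (grid approximation)."""
    posterior = beta(a + successes, b + failures)
    theta = np.linspace(0.0, 1.0, grid_size)
    dens = posterior.pdf(theta)
    # Tangential set: parameter values whose posterior density exceeds the density at theta0.
    tangential = dens > posterior.pdf(theta0)
    mass = np.sum(dens[tangential]) * (theta[1] - theta[0])   # Riemann-sum approximation
    return 1.0 - mass

# With the observed proportion held at 0.6, evidence against theta = 0.5 accumulates
# and the e-value shrinks as the sample size grows.
for n, k in ((20, 12), (100, 60), (500, 300)):
    print(f"n={n}, successes={k}: ev = {fbst_ev(k, n - k):.3f}")
```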


Author(s):  
C. Pandit ◽  
Jianyi Huang ◽  
S. Meyn ◽  
V. Veeravalli
