Asymptotic relative submajorization of multiple-state boxes

2021 ◽  
Vol 111 (4) ◽  
Author(s):  
Gergely Bunth ◽  
Péter Vrana

Abstract Pairs of states, or "boxes," are the basic objects in the resource theory of asymmetric distinguishability (Wang and Wilde, Phys Rev Res 1(3):033170, 2019, doi:10.1103/PhysRevResearch.1.033170), where free operations are arbitrary quantum channels that are applied to both states. From this point of view, hypothesis testing is seen as a process by which a standard form of distinguishability is distilled. Motivated by the more general problem of quantum state discrimination, we consider boxes of a fixed finite number of states and study an extension of the relative submajorization preorder to such objects. In this relation, a tuple of positive operators is greater than another if there is a completely positive trace-nonincreasing map under which the image of the first tuple satisfies certain semidefinite constraints relative to the other one. This preorder characterizes error probabilities in the case of testing a composite null hypothesis against a simple alternative hypothesis, as well as certain error probabilities in state discrimination. We present a sufficient condition for the existence of catalytic transformations between boxes, and a characterization of an associated asymptotic preorder, both expressed in terms of sandwiched Rényi divergences. This characterization of the asymptotic preorder directly shows that the strong converse exponent for a composite null hypothesis is equal to the maximum of the corresponding exponents for the pairwise simple hypothesis testing tasks.
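
For reference, the sandwiched Rényi divergence appearing in this characterization has the following standard form; the precise parameter range and normalization used in the paper should be taken from the original.

```latex
% Sandwiched Renyi divergence of order alpha for positive operators rho, sigma
\[
  \widetilde{D}_{\alpha}(\rho \,\|\, \sigma)
  = \frac{1}{\alpha - 1}
    \log \operatorname{Tr}\!\left[
      \left( \sigma^{\frac{1-\alpha}{2\alpha}} \, \rho \,
             \sigma^{\frac{1-\alpha}{2\alpha}} \right)^{\alpha}
    \right]
\]
```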

Author(s):  
Patrick W. Kraft ◽  
Ellen M. Key ◽  
Matthew J. Lebo

Abstract Grant and Lebo (2016) and Keele et al. (2016) clarify the conditions under which the popular general error correction model (GECM) can be used and interpreted easily: in a bivariate GECM the data must be integrated in order to rely on the error correction coefficient, $\alpha_1^\ast$, to test cointegration and measure the rate of error correction between a single exogenous x and a dependent variable, y. Here we demonstrate that even if the data are all integrated, the test on $\alpha_1^\ast$ is misunderstood when there is more than a single independent variable. The null hypothesis is that there is no cointegration between y and any x, but the correct alternative hypothesis is that y is cointegrated with at least one (though not necessarily more than one) of the x's. A significant $\alpha_1^\ast$ can occur when some I(1) regressors are not cointegrated and the equation is not balanced. Thus, the correct limiting distributions of the right-hand-side long-run coefficients may be unknown. We use simulations to demonstrate the problem and then discuss implications for applied examples.
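
A minimal simulation sketch of the core point (our illustration, not the authors' replication code; all values are invented): y is cointegrated with x1 only, yet the t-ratio on $\alpha_1^\ast$ is large, so a rejection cannot be attributed to any particular regressor.

```python
# Sketch: y cointegrated with x1 but not x2, yet the GECM t-test on the
# error correction coefficient alpha_1* rejects. Note the t-ratio on
# alpha_1* has a nonstandard limiting distribution; the printed value is
# only illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
T = 500
x1 = np.cumsum(rng.normal(size=T))   # I(1)
x2 = np.cumsum(rng.normal(size=T))   # I(1), unrelated to y
y = x1 + rng.normal(size=T)          # cointegrated with x1 only

dy = np.diff(y)
X = np.column_stack([
    y[:-1],                           # y_{t-1}: coefficient alpha_1*
    x1[:-1], x2[:-1],                 # lagged levels
    np.diff(x1), np.diff(x2),         # first differences
])
res = sm.OLS(dy, sm.add_constant(X)).fit()
print("alpha_1* =", res.params[1], " t =", res.tvalues[1])
```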


2021 ◽  
Author(s):
Alexander Ly ◽  
Eric-Jan Wagenmakers

Abstract The "Full Bayesian Significance Test e-value," henceforth FBST ev, has received increasing attention across a range of disciplines including psychology. We show that the FBST ev leads to four problems: (1) the FBST ev cannot quantify evidence in favor of a null hypothesis and therefore also cannot discriminate "evidence of absence" from "absence of evidence"; (2) the FBST ev is susceptible to sampling to a foregone conclusion; (3) the FBST ev violates the principle of predictive irrelevance, such that it is affected by data that are equally likely to occur under the null hypothesis and the alternative hypothesis; (4) the FBST ev suffers from the Jeffreys-Lindley paradox in that it does not include a correction for selection. These problems also plague the frequentist p-value. We conclude that although the FBST ev may be an improvement over the p-value, it does not provide a reasonable measure of evidence against the null hypothesis.
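
A small sketch of problem (1) under simplifying assumptions of our own (a normal posterior and a point null at zero): in this special case the FBST ev reduces to a two-sided tail probability of the posterior z-score, mirroring a p-value, which is why it cannot accumulate evidence for the null.

```python
# Illustrative sketch (our own, simplified): for a unimodal symmetric
# normal posterior and point null theta0, the e-value is the posterior
# mass outside the tangential set {theta : density > density at theta0}.
import numpy as np
from scipy.stats import norm

def fbst_ev(post_mean, post_sd, theta0=0.0):
    """e-value in favor of H0; small values count against H0."""
    z = abs(post_mean - theta0) / post_sd
    return 2 * (1 - norm.cdf(z))

# ev = 1 whenever the posterior is centered at the null, whether the
# posterior is wide (no evidence) or razor-sharp (evidence of absence):
print(fbst_ev(0.0, 1.0))    # 1.0
print(fbst_ev(0.0, 0.01))   # 1.0
print(fbst_ev(0.2, 0.1))    # ~0.046, p-value-like behavior
```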


Econometrics ◽  
2019 ◽  
Vol 7 (2) ◽  
pp. 21
Author(s):  
Jae H. Kim ◽  
Andrew P. Robinson

This paper presents a brief review of interval-based hypothesis testing, widely used in biostatistics, medical science, and psychology, namely tests for minimum effect, equivalence, and non-inferiority. We present the methods in the contexts of a one-sample t-test and a test for linear restrictions in a regression. We present applications in testing for market efficiency, the validity of asset-pricing models, and the persistence of economic time series. We argue that, from the point of view of economics and finance, interval-based hypothesis testing provides more sensible inferential outcomes than those based on a point null hypothesis. We propose that interval-based tests be routinely employed in empirical research in business as an alternative to point null hypothesis testing, especially in the new era of big data.
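
As a concrete illustration of the equivalence variant, here is a minimal one-sample TOST (two one-sided tests) sketch; the equivalence bounds and data are invented for illustration.

```python
# Hedged sketch: equivalence testing via TOST for a one-sample mean.
# H0: mean <= low or mean >= upp; H1: low < mean < upp.
import numpy as np
from scipy import stats

def tost_one_sample(x, low, upp):
    """Return the TOST p-value: the larger of the two one-sided p-values."""
    n = len(x)
    se = np.std(x, ddof=1) / np.sqrt(n)
    m = np.mean(x)
    p_low = 1 - stats.t.cdf((m - low) / se, df=n - 1)  # test mean > low
    p_upp = stats.t.cdf((m - upp) / se, df=n - 1)      # test mean < upp
    return max(p_low, p_upp)

rng = np.random.default_rng(0)
x = rng.normal(0.02, 0.5, size=200)           # e.g. returns near zero
print(tost_one_sample(x, low=-0.1, upp=0.1))  # small => equivalence
```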


Author(s):  
M. D. Edge

Interval estimation is the attempt to define intervals that quantify the degree of uncertainty in an estimate. The standard deviation of an estimate is called a standard error. Confidence intervals are designed to cover the true value of an estimand with a specified probability. Hypothesis testing is the attempt to assess the degree of evidence for or against a specific hypothesis. One tool for frequentist hypothesis testing is the p value: the probability, computed assuming that the null hypothesis is true, that the data would depart from expectations under the null hypothesis as extremely as or more extremely than they were observed to do. In Neyman–Pearson hypothesis testing, the null hypothesis is rejected if p is less than a pre-specified value, often chosen to be 0.05. A test's power function gives the probability that the null hypothesis is rejected given the significance level γ, a sample size n, and a specified alternative hypothesis. This chapter discusses some limitations of hypothesis testing as commonly practiced in the research literature.
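
A short sketch of such a power function, for a two-sided one-sample t test evaluated with the noncentral t distribution (the effect size, n, and γ below are illustrative):

```python
# Power of a two-sided one-sample t test at significance level gamma,
# sample size n, and a specified alternative (standardized effect size).
import numpy as np
from scipy import stats

def t_test_power(effect_size, n, gamma=0.05):
    df = n - 1
    nc = effect_size * np.sqrt(n)            # noncentrality parameter
    t_crit = stats.t.ppf(1 - gamma / 2, df)
    # P(reject H0) = P(|T| > t_crit) under the alternative
    return (1 - stats.nct.cdf(t_crit, df, nc)
            + stats.nct.cdf(-t_crit, df, nc))

print(t_test_power(0.5, n=30))   # ~0.75 for a medium effect
```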


2014 ◽  
Vol 8 (4) ◽  
Author(s):  
Yin Zhang ◽  
Ingo Neumann

Abstract Deformation monitoring usually focuses on detecting whether the monitored objects satisfy given properties (e.g. being stable or not) and on making further decisions to minimise risks, for example the consequences and costs in case of collapse of artificial objects and/or natural hazards. With this intention, a methodology relying on hypothesis testing and utility theory is reviewed in this paper. The main idea of utility theory is to judge each possible outcome with a utility value. The presented methodology makes it possible to minimise the risk of an individual monitoring project by considering the costs and consequences of all possible situations within the decision process. It is not the danger that the monitored object may collapse that is reduced; rather, the risk (the utility values multiplied by the danger) can be described more appropriately, and therefore better-informed decisions can be made. In particular, the opportunity to minimise risk through the design of the measurement process is a key issue. In this paper, the application of the methodology to two classical cases in hypothesis testing is discussed in detail: 1) the probability density functions (pdfs) of the tested objects under both the null and the alternative hypothesis are known; 2) only the pdf under the null hypothesis is known and the alternative hypothesis is treated as the pure negation of the null hypothesis. Afterwards, a practical example in deformation monitoring is introduced and analysed. Additionally, the way in which the magnitudes of the utility values (the consequences of a decision) influence the decision is considered and discussed at the end.
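
A toy sketch of case 1), with all priors, pdfs, and utilities invented for illustration: the decision maximizes expected utility under the posterior, so the costs of the outcomes, not a fixed significance level, determine where the decision flips.

```python
# Illustrative sketch: decide "stable" (H0) vs "deformed" (H1) from a
# displacement measurement x, with known pdfs under both hypotheses and
# a utility for each (decision, true state) pair.
from scipy.stats import norm

prior = {"H0": 0.95, "H1": 0.05}                      # assumed
pdf = {"H0": norm(0.0, 1.0), "H1": norm(5.0, 1.0)}    # mm, assumed

# Declaring "stable" when the object is deformed carries the cost of a
# possible collapse; a false alarm only costs an inspection.
utility = {"stable":   {"H0": 0.0,   "H1": -1000.0},
           "deformed": {"H0": -10.0, "H1": -50.0}}

def decide(x):
    post = {h: prior[h] * pdf[h].pdf(x) for h in prior}
    z = sum(post.values())
    post = {h: p / z for h, p in post.items()}
    eu = {d: sum(utility[d][h] * post[h] for h in post) for d in utility}
    return max(eu, key=eu.get), eu

print(decide(1.8))   # small displacement: "stable"
print(decide(3.0))   # flips to "deformed" long before H1 dominates
```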


2017 ◽  
Vol 6 (6) ◽  
pp. 158
Author(s):  
Louis Mutter ◽  
Steven B. Kim

There are numerous statistical hypothesis tests for categorical data, including Pearson's chi-square goodness-of-fit test and other discrete versions of goodness-of-fit tests. For these hypothesis tests, the null hypothesis is simple, and the alternative hypothesis is the composite negation of the simple null hypothesis. For a power calculation, a researcher specifies a significance level, a sample size, a simple null hypothesis, and a simple alternative hypothesis. In practice, an experienced researcher may have deep and broad scientific knowledge yet suffer from a lack of statistical power because only a small sample is available. In such a case, we may formulate the hypothesis test with a simple alternative hypothesis instead of the composite alternative hypothesis. In this article, we investigate how much statistical power can be gained via a correctly specified simple alternative hypothesis, and how much can be lost under a misspecified alternative hypothesis, particularly when the available sample size is small.
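
The following simulation sketch (cell probabilities and sample size invented) contrasts the power of Pearson's chi-square test against the composite alternative with a likelihood ratio test aimed at one correctly specified simple alternative:

```python
# Sketch: for a small sample, a likelihood ratio test against a single
# specified alternative typically beats Pearson's chi-square test
# against the composite alternative. Power estimated by simulation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
p0 = np.array([0.25, 0.25, 0.25, 0.25])   # simple null
p1 = np.array([0.40, 0.20, 0.20, 0.20])   # simple (true) alternative
n, alpha, reps = 40, 0.05, 20000

chi2_crit = stats.chi2.ppf(1 - alpha, df=len(p0) - 1)
counts = rng.multinomial(n, p1, size=reps)

# Pearson chi-square statistic (composite alternative)
chi2 = ((counts - n * p0) ** 2 / (n * p0)).sum(axis=1)

# Log-likelihood ratio statistic against p1, with its null critical
# value obtained by simulation under p0
llr = counts @ np.log(p1 / p0)
llr_null = rng.multinomial(n, p0, size=reps) @ np.log(p1 / p0)
llr_crit = np.quantile(llr_null, 1 - alpha)

print("chi-square power:   ", (chi2 > chi2_crit).mean())
print("simple-alt LR power:", (llr > llr_crit).mean())
```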


2021 ◽  
Author(s):  
Martin Schnuerch ◽  
Daniel W. Heck ◽  
Edgar Erdfelder

Bayesian t tests have become an increasingly popular alternative to null-hypothesis significance testing (NHST) in psychological research. In contrast to NHST, they allow for the quantification of evidence in favor of the null hypothesis and for optional stopping. A major drawback of Bayesian t tests, however, is that the error probabilities of statistical decisions remain uncontrolled. Previous approaches to remedying this problem either involve time-consuming simulations or require the specification of prior distributions without substantive meaning. In this article, we propose a sequential probability ratio test that combines Bayesian t tests with the simple decision criteria developed by Abraham Wald in 1947. We discuss this sequential procedure, which we call the Waldian t test, in the context of default and informed Bayesian t tests. We show that Waldian t tests reliably control frequentist error probabilities, with the nominal Type I and Type II error probabilities serving as upper bounds on the actual error rates. At the same time, the prior distributions of Bayesian t tests are preserved. Thus, Waldian t tests are fully justified from both a frequentist and a Bayesian point of view. We highlight the relationship between frequentist and Bayesian error probabilities and critically discuss the implications of conventional stopping criteria for sequential Bayesian t tests. Finally, we provide a user-friendly web application that implements the proposed procedure for substantive researchers.
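
A deliberately simplified sketch of the general idea, not the authors' Waldian t test: we assume σ known and a N(0, 1) prior on the standardized effect so the Bayes factor has a closed form, and we stop when it crosses Wald's thresholds A = (1 - β)/α or B = β/(1 - α). The paper's procedure instead monitors default or informed Bayesian t tests.

```python
# Sequential test: monitor BF10 after each observation and stop at
# Wald's bounds. With sigma known and delta ~ N(0, 1) under H1, the
# sample mean is sufficient and BF10 is a ratio of normal densities.
import numpy as np
from scipy.stats import norm

def bf10(x, sigma=1.0):
    n, xbar = len(x), np.mean(x)
    m1 = norm.pdf(xbar, 0, sigma * np.sqrt(1 + 1 / n))  # marginal under H1
    m0 = norm.pdf(xbar, 0, sigma / np.sqrt(n))          # marginal under H0
    return m1 / m0

alpha, beta = 0.05, 0.10
A, B = (1 - beta) / alpha, beta / (1 - alpha)   # Wald's decision bounds

rng = np.random.default_rng(3)
x = []
for _ in range(1000):                 # cap on the sample size
    x.append(rng.normal(0.5, 1.0))    # true effect delta = 0.5
    if len(x) < 2:
        continue
    bf = bf10(np.array(x))
    if bf >= A or bf <= B:
        break
print(f"n = {len(x)}, BF10 = {bf:.2f}",
      "-> reject H0" if bf >= A else "-> accept H0")
```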


Mathematics ◽  
2020 ◽  
Vol 8 (4) ◽  
pp. 551
Author(s):  
Jung-Lin Hung ◽  
Cheng-Che Chen ◽  
Chun-Mei Lai

Taking advantage of the possibility of a fuzzy test statistic falling in the rejection region, a statistical hypothesis testing approach for fuzzy data is proposed in this study. In contrast to classical statistical testing, which yields a binary decision to reject or to accept a null hypothesis, the proposed approach determines the possibility of accepting a null hypothesis (or an alternative hypothesis). When the data are crisp, the proposed approach reduces to the classical hypothesis testing approach.
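
One way to make the idea concrete (our own construction, not necessarily the paper's formulation): for a triangular fuzzy test statistic, the possibility of rejection can be taken as the supremum of its membership function over the rejection region.

```python
# Sketch: possibility that a triangular fuzzy test statistic falls in a
# one-sided rejection region [t_crit, inf).
def triangular_membership(t, a, b, c):
    """Triangular fuzzy number with support [a, c] and peak at b."""
    if a < t < b:
        return (t - a) / (b - a)
    if b <= t <= c:
        return 1.0 if t == b else (c - t) / (c - b)
    return 0.0

def possibility_reject(a, b, c, t_crit):
    if b >= t_crit:       # peak lies inside the rejection region
        return 1.0
    return triangular_membership(t_crit, a, b, c)

# Crisp data collapse the triangle to a point, recovering the usual
# binary decision:
print(possibility_reject(1.2, 1.8, 2.4, t_crit=1.96))  # partial, ~0.73
print(possibility_reject(2.1, 2.1, 2.1, t_crit=1.96))  # 1.0 (reject)
```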

