Landmark-free, parametric hypothesis tests regarding two-dimensional contour shapes using coherent point drift registration and statistical parametric mapping

2021 ◽  
Vol 7 ◽  
pp. e542
Author(s):  
Todd C. Pataky ◽  
Masahide Yagi ◽  
Noriaki Ichihashi ◽  
Philip G. Cox

This paper proposes a computational framework for automated, landmark-free hypothesis testing of 2D contour shapes (i.e., shape outlines), and implements one realization of that framework. The proposed framework consists of point set registration, point correspondence determination, and parametric full-shape hypothesis testing. The results are calculated quickly (<2 s), yield morphologically rich detail in an easy-to-understand visualization, and are complemented by parametrically (or nonparametrically) calculated probability values. These probability values represent the likelihood that, in the absence of a true shape effect, smooth, random Gaussian shape changes would yield an effect as large as the observed one. This proposed framework nevertheless possesses a number of limitations, including sensitivity to algorithm parameters. As a number of algorithms and algorithm parameters could be substituted at each stage in the proposed data processing chain, sensitivity analysis would be necessary for robust statistical conclusions. In this paper, the proposed technique is applied to nine public datasets using a two-sample design, and an ANCOVA design is then applied to a synthetic dataset to demonstrate how the proposed method generalizes to the family of classical hypothesis tests. Extension to the analysis of 3D shapes is discussed.
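The abstract's processing chain (registration → correspondence → full-shape test) can be illustrated with a deliberately simplified sketch. This is not the authors' implementation: coherent point drift registration is replaced by a centroid-alignment stand-in, and the parametric statistical-parametric-mapping inference is replaced by a permutation test on the maximum pointwise statistic; all data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def align(contour, reference):
    # Simplified rigid-registration stand-in for coherent point drift:
    # translate to a common centroid (rotation/scale omitted for brevity).
    return contour - contour.mean(axis=0) + reference.mean(axis=0)

def two_sample_contour_test(group_a, group_b, n_perm=1000):
    # group_*: arrays of shape (n_subjects, n_points, 2), with point
    # correspondence assumed already established across subjects.
    pooled = np.concatenate([group_a, group_b])
    n_a = len(group_a)

    def max_t(data):
        a, b = data[:n_a], data[n_a:]
        se = np.sqrt(a.var(axis=0, ddof=1) / len(a)
                     + b.var(axis=0, ddof=1) / len(b))
        t = np.abs(a.mean(axis=0) - b.mean(axis=0)) / (se + 1e-12)
        return t.max()

    observed = max_t(pooled)
    # The permutation distribution of the maximum pointwise statistic
    # yields a family-wise p-value over the whole contour, playing the
    # role of the parametric random-field probability in the paper.
    perms = [max_t(rng.permutation(pooled, axis=0)) for _ in range(n_perm)]
    p = (1 + sum(m >= observed for m in perms)) / (1 + n_perm)
    return observed, p

# Synthetic demo: two groups of noisy circles, group B shifted on one arc.
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
base = np.stack([np.cos(theta), np.sin(theta)], axis=1)
group_a = base + rng.normal(0, 0.02, (10, 50, 2))
group_b = base + rng.normal(0, 0.02, (10, 50, 2))
group_b[:, :10, :] += 0.2          # true local shape effect on one arc
t_max, p_value = two_sample_contour_test(group_a, group_b)
```

Because the test statistic is the maximum over all contour points, the p-value controls the family-wise error rate across the whole outline, which is the sense in which the framework yields "morphologically rich detail" with a single probability value.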

2016 ◽  
Vol 21 (2) ◽  
pp. 136-147 ◽  
Author(s):  
James Nicholson ◽  
Sean Mccusker

This paper is a response to Gorard's article, ‘Damaging real lives through obstinacy: re-emphasising why significance testing is wrong’ in Sociological Research Online 21(1). For many years Gorard has criticised the way hypothesis tests are used in social science, but recently he has gone much further and argued that the logical basis for hypothesis testing is flawed: that hypothesis testing does not work, even when used properly. We have sympathy with the view that hypothesis testing is often carried out in social science contexts when it should not be, and that outcomes are often described in inappropriate terms, but this does not mean the theory of hypothesis testing, or its use, is flawed per se. There needs to be evidence to support such a contention. Gorard claims that: ‘Anyone knowing the problems, as described over one hundred years, who continues to teach, use or publish significance tests is acting unethically, and knowingly risking the damage that ensues.’ This is a very strong statement which impugns the integrity, not just the competence, of a large number of highly respected academics. We argue that the evidence he puts forward in this paper does not stand up to scrutiny: that the paper misrepresents what hypothesis tests claim to do, and uses a sample size far too small to reliably detect a 10% difference in means in a simulation he constructs. He then claims that this simulates emotive contexts in which a 10% difference would be important to detect, implicitly misrepresenting the simulation as a reasonable model of those contexts.
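The sample-size point in the abstract is a standard power calculation. The sketch below uses the normal-approximation formula for a two-sample comparison of means; the mean of 100 and standard deviation of 25 are illustrative assumptions, not values from either paper.

```python
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    # Normal-approximation sample size for a two-sample comparison of means:
    #   n = 2 * (z_{1-alpha/2} + z_{power})^2 * (sigma / delta)^2
    z = NormalDist().inv_cdf
    return 2 * (z(1 - alpha / 2) + z(power)) ** 2 * (sigma / delta) ** 2

# A 10% difference on an assumed mean of 100 (delta = 10), with an
# assumed standard deviation of 25, at the conventional 5% level:
n_needed = n_per_group(delta=10, sigma=25)
```

Under these assumptions roughly a hundred observations per group are needed for 80% power, so a simulation with far fewer observations would be expected to miss a real 10% difference most of the time, which is the substance of the authors' criticism.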


2007 ◽  
Vol 22 (3) ◽  
pp. 637-650 ◽  
Author(s):  
Ian T. Jolliffe

Abstract When a forecast is assessed, a single value for a verification measure is often quoted. This is of limited use, as it needs to be complemented by some idea of the uncertainty associated with the value. If this uncertainty can be quantified, it is then possible to make statistical inferences based on the value observed. There are two main types of inference: confidence intervals can be constructed for an underlying “population” value of the measure, or hypotheses can be tested regarding the underlying value. This paper will review the main ideas of confidence intervals and hypothesis tests, together with the less well known “prediction intervals,” concentrating on aspects that are often poorly understood. Comparisons will be made between different methods of constructing confidence intervals—exact, asymptotic, bootstrap, and Bayesian—and the difference between prediction intervals and confidence intervals will be explained. For hypothesis testing, multiple testing will be briefly discussed, together with connections between hypothesis testing, prediction intervals, and confidence intervals.
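Of the interval-construction methods the abstract compares, the bootstrap is the easiest to sketch. The example below attaches a percentile-bootstrap confidence interval to a single verification measure (mean absolute error); the forecasts and observations are synthetic, and the measure is an illustrative choice rather than one singled out by the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic forecast verification data.
forecast = rng.normal(20, 5, 200)
observed = forecast + rng.normal(0, 2, 200)

# Verification measure: mean absolute error, with a 95% percentile
# bootstrap confidence interval for the underlying "population" value.
errors = np.abs(forecast - observed)
mae = errors.mean()
boot = np.array([
    rng.choice(errors, size=len(errors), replace=True).mean()
    for _ in range(2000)
])
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
```

Reporting the interval (ci_low, ci_high) alongside the point value is exactly the complement the abstract argues a single quoted verification score lacks. A prediction interval, by contrast, would aim to cover the measure's value computed from a *future* verification sample, and is generally wider.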


Entropy ◽  
2019 ◽  
Vol 21 (9) ◽  
pp. 883 ◽  
Author(s):  
Luis Gustavo Esteves ◽  
Rafael Izbicki ◽  
Julio Michael Stern ◽  
Rafael Bassi Stern

This paper introduces pragmatic hypotheses and relates this concept to the spiral of scientific evolution. Previous works determined a characterization of logically consistent statistical hypothesis tests and showed that the modal operators obtained from such tests can be represented in the hexagon of oppositions. However, despite the importance of precise hypotheses in science, they cannot be accepted by logically consistent tests. Here, we show that this dilemma can be overcome by the use of pragmatic versions of precise hypotheses. These pragmatic versions allow a level of imprecision in the hypothesis that is small relative to other experimental conditions. The introduction of pragmatic hypotheses allows the evolution of scientific theories based on statistical hypothesis testing to be interpreted using the narratological structure of hexagonal spirals, as defined by Pierre Gallais.


1999 ◽  
Vol 85 (1) ◽  
pp. 3-18 ◽  
Author(s):  
Les Leventhal

Two generations of methodologists have criticized hypothesis testing by claiming that most point null hypotheses are false and that hypothesis tests do not provide the probability that the null hypothesis is true. These criticisms are answered. (1) The point-null criticism, if correct, undermines only the traditional two-tailed test, not the one-tailed test or the little-known directional two-tailed test. The directional two-tailed test is the only hypothesis test that, properly used, provides for deciding the direction of a parameter, that is, deciding whether a parameter is positive or negative or whether it falls above or below some interesting nonzero value. The point-null criticism becomes unimportant if we replace traditional one- and two-tailed tests with the directional two-tailed test, a replacement already recommended for most purposes by previous writers. (2) If one interprets probability as a relative frequency, as most textbooks do, then the concept of probability cannot meaningfully be attached to the truth of an hypothesis; hence, it is meaningless to ask for the probability that the null is true. (3) Hypothesis tests provide the next best thing, namely, a relative frequency probability that the decision about the statistical hypotheses is correct. Two arguments are offered.
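The directional two-tailed test the abstract defends is a three-outcome decision rule rather than a binary reject/retain one. The sketch below is one reading of that rule for a standard normal test statistic; the function name and the z-statistic framing are illustrative choices, not taken from the paper.

```python
from statistics import NormalDist

def directional_two_tailed(z, alpha=0.05):
    # Three-outcome decision rule: conclude the *sign* of the parameter
    # when the statistic falls in either tail; otherwise withhold any
    # directional conclusion (no "accept the null" outcome).
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    if z >= z_crit:
        return "parameter > 0"
    if z <= -z_crit:
        return "parameter < 0"
    return "no directional conclusion"
```

Because every terminal outcome is a directional claim or a suspension of judgment, the rule never asserts that the parameter is exactly zero, which is why the point-null criticism does not apply to it.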


Author(s):  
Jan Sprenger ◽  
Stephan Hartmann

According to Popper and other influential philosophers and scientists, scientific knowledge grows by repeatedly testing our best hypotheses. However, the interpretation of non-significant results—those that do not lead to a “rejection” of the tested hypothesis—poses a major philosophical challenge. To what extent do they corroborate the tested hypothesis or provide a reason to accept it? In this chapter, we prove two impossibility results for measures of corroboration that follow Popper’s criterion of measuring both predictive success and the testability of a hypothesis. Then we provide an axiomatic characterization of a more promising and scientifically useful concept of corroboration and discuss implications for the practice of hypothesis testing and the concept of statistical significance.


1998 ◽  
Vol 21 (2) ◽  
pp. 215-216 ◽  
Author(s):  
David Rindskopf

Unfortunately, reading Chow's work is likely to leave the reader more confused than enlightened. My preferred solutions to the “controversy” about null-hypothesis testing are: (1) recognize that we really want to test the hypothesis that an effect is “small,” not null, and (2) use Bayesian methods, which are much more in keeping with the way humans naturally think than are classical statistical methods.
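Testing that an effect is "small" rather than exactly null, as the comment recommends, is what equivalence testing does. The sketch below implements the two one-sided tests (TOST) procedure with a normal approximation; the particular margin and observed values are illustrative assumptions.

```python
from statistics import NormalDist

def tost_p(mean_diff, se, margin):
    # Two one-sided tests against the interval null |effect| >= margin:
    # the effect is declared "small" only if BOTH one-sided tests reject,
    # so the overall p-value is the larger of the two.
    cdf = NormalDist().cdf
    p_lower = 1 - cdf((mean_diff + margin) / se)   # H0: diff <= -margin
    p_upper = cdf((mean_diff - margin) / se)       # H0: diff >= +margin
    return max(p_lower, p_upper)

# Observed difference 0.5 (standard error 1.0) against an assumed
# smallness margin of 3.0: the effect can be declared small.
p = tost_p(mean_diff=0.5, se=1.0, margin=3.0)
```

Note that shrinking the margin toward the point null makes rejection harder, which is the precise sense in which "small, not null" is the testable version of the usual null hypothesis.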

