A Fuzzy Take on the Logical Issues of Statistical Hypothesis Testing

Philosophies, 2021, Vol. 6 (1), p. 21
Author(s): Matthew Booth, Fabien Paillusson

Statistical Hypothesis Testing (SHT) is a class of inference methods whereby one uses empirical data to test a hypothesis and often renders a judgment about whether or not to reject it. In this paper, we focus on the logical aspect of this strategy, which is largely independent of the adopted school of thought, at least within the various frequentist approaches. We identify SHT as taking the form of an unsound argument from Modus Tollens in classical logic and, in order to rescue SHT from this difficulty, we propose that it can instead be grounded in t-norm-based fuzzy logics. We reformulate the frequentists’ SHT logic by making use of a fuzzy extension of Modus Tollens to develop a model of truth valuation for its premises. Importantly, we show that it is possible to preserve the soundness of Modus Tollens by exploring the various conventions involved in constructing fuzzy negations and fuzzy implications (namely, the S and R conventions). We find that under the S convention, it is possible to conduct the Modus Tollens inference argument using Zadeh’s compositional extension and any possible t-norm. Under the R convention we find that this is not necessarily the case, but that by mixing R-implication with S-negation we can salvage the product t-norm, for example. In conclusion, we have shown that fuzzy logic is a legitimate framework in which to discuss and address the difficulties plaguing frequentist interpretations of SHT.
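The machinery named in this abstract (t-norms, the standard S-negation, and the S and R conventions for building implications) is standard in fuzzy logic. The sketch below is a hypothetical illustration of those textbook constructions, not the authors’ formalism: all function names are mine, and the lower-bound reading of graded Modus Tollens at the end is one common convention, not necessarily the paper’s.

```python
# Minimal sketch of standard fuzzy-logic building blocks; assumptions:
# truth values in [0, 1], standard negation N(x) = 1 - x.

def t_min(x, y):            # minimum (Goedel) t-norm
    return min(x, y)

def t_product(x, y):        # product t-norm
    return x * y

def t_lukasiewicz(x, y):    # Lukasiewicz t-norm
    return max(0.0, x + y - 1.0)

def neg_s(x):               # standard negation used by the S convention
    return 1.0 - x

def impl_s(a, b, t_norm=t_product):
    # S-implication I(a, b) = S(N(a), b), where S is the t-conorm
    # dual to t_norm: S(x, y) = 1 - T(1 - x, 1 - y).
    return 1.0 - t_norm(1.0 - neg_s(a), 1.0 - b)

def impl_r(a, b, t_norm=t_product, steps=10_000):
    # R-implication I(a, b) = sup{ z in [0, 1] : T(a, z) <= b },
    # approximated here on a grid; for the product t-norm this is
    # the Goguen implication (1 if a <= b, else b / a).
    return max(z / steps for z in range(steps + 1)
               if t_norm(a, z / steps) <= b)

def fuzzy_modus_tollens(v_implication, v_not_b, t_norm=t_product):
    # One common graded reading of Modus Tollens: the truth of the
    # conclusion not-A is bounded below by T(v(A -> B), v(not-B)).
    return t_norm(v_implication, v_not_b)

print(impl_s(0.8, 0.3))               # Reichenbach: 1 - 0.8 + 0.24 = 0.44
print(impl_r(0.8, 0.3))               # Goguen: 0.3 / 0.8 = 0.375
print(fuzzy_modus_tollens(0.9, 0.8))  # product t-norm bound: 0.72
```

Note how the two conventions already disagree on the same inputs (0.44 versus 0.375 above); per the abstract, it is exactly this choice of convention that decides whether the fuzzy Modus Tollens inference remains sound.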

Author(s): Sach Mukherjee

A number of important problems in data mining can be usefully addressed within the framework of statistical hypothesis testing. However, while the conventional treatment of statistical significance deals with error probabilities at the level of a single variable, practical data mining tasks tend to involve thousands, if not millions, of variables. This chapter looks at some of the issues that arise in applying hypothesis tests to multi-variable data mining problems and describes two computationally efficient procedures by which these issues can be addressed.
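The abstract does not name the chapter’s two procedures, so the sketch below illustrates the underlying multiplicity problem with a standard stand-in, the Benjamini–Hochberg step-up procedure, rather than the chapter’s own methods: with thousands of variables, a fixed per-test threshold produces hundreds of false positives, which a multiplicity-aware procedure avoids.

```python
import random

def benjamini_hochberg(p_values, alpha=0.05):
    # Benjamini-Hochberg step-up: find the largest rank k with
    # p_(k) <= (k / m) * alpha and reject the k smallest p-values.
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * alpha:
            k_max = rank
    return set(order[:k_max])   # indices of rejected hypotheses

random.seed(0)
m = 10_000
p = [random.random() for _ in range(m)]               # true nulls: p ~ Uniform(0, 1)
p[:50] = [random.random() * 1e-4 for _ in range(50)]  # 50 genuine effects

naive = sum(1 for x in p if x <= 0.05)  # fixed per-test threshold: ~550 hits, mostly false
bh = benjamini_hochberg(p, alpha=0.05)  # close to the 50 genuine effects
print(f"naive rejections: {naive}, BH rejections: {len(bh)}")
```

Benjamini–Hochberg controls the false discovery rate; it stands in here only as a generic example of multiplicity-aware testing, not as either of the procedures the chapter describes.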


2019, pp. 245-264
Author(s): Steven J. Osterlind

This chapter describes quantification during the late nineteenth century, when most ordinary people were gaining an overt awareness of probability, and probabilistic notions were seeping into everyday conversation and decision-making. At the same time, new forms of abstract mathematics were being developed, albeit with some opposition from Lewis Carroll (Charles Dodgson), who wanted to preserve traditionalist views of Euclidean geometry. The chapter introduces William Sealy Gosset, who worked in the laboratory of the Guinness brewery and developed the t-distribution, publishing it under the pseudonym “Student” (hence “Student’s t-test”). It also describes his friendship with Sir Ronald Fisher, who developed many statistical hypothesis testing methods, such as the ANOVA procedure and the F ratio, published in The Design of Experiments. Fisher also developed many research designs for hypothesis testing, both simple and complex, including the Latin squares design, and provided a classic description of inferential testing in the thought experiment known as “the lady tasting tea.”

