Foundations of Statistics
Recently Published Documents


TOTAL DOCUMENTS: 71 (five years: 6)
H-INDEX: 9 (five years: 1)

2021
Author(s): Alan Agresti, Maria Kateri

2021, Vol. 35(3), pp. 175-192
Author(s): Maximilian Kasy

A key challenge for interpreting published empirical research is the fact that published findings might be selected by researchers or by journals. Selection might be based on criteria such as significance, consistency with theory, or the surprisingness of findings or their plausibility. Selection leads to biased estimates, reduced coverage of confidence intervals, and distorted posterior beliefs. I review methods for detecting and quantifying selection based on the distribution of p-values, systematic replication studies, and meta-studies. I then discuss the conflicting recommendations regarding selection resulting from alternative objectives, in particular, the validity of inference versus the relevance of findings for decision-makers. Based on this discussion, I consider various reform proposals, such as deemphasizing significance, pre-analysis plans, journals for null results and replication studies, and a functionally differentiated publication system. In conclusion, I argue that we need alternative foundations of statistics that go beyond the single-agent model of decision theory.
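
The following is a minimal simulation sketch of the abstract's central point, that selecting studies on statistical significance biases published estimates. It is not taken from the paper; the effect size, sample size, and significance-based selection rule are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

true_effect = 0.1          # small true effect
n_per_study = 100          # observations per simulated study
n_studies = 10_000         # number of simulated studies

# Each study estimates the mean of N(true_effect, 1) from n_per_study draws.
samples = rng.normal(true_effect, 1.0, size=(n_studies, n_per_study))
estimates = samples.mean(axis=1)
std_errors = samples.std(axis=1, ddof=1) / np.sqrt(n_per_study)
z_scores = estimates / std_errors

# Hypothetical "publication" rule: only studies significant at the
# two-sided 5% level appear in print.
published = np.abs(z_scores) > 1.96

print(f"mean estimate, all studies:       {estimates.mean():.3f}")
print(f"mean estimate, published studies: {estimates[published].mean():.3f}")
print(f"share of studies published:       {published.mean():.2%}")

With these illustrative settings, the mean of the "published" estimates lands well above the true effect, which is the bias the abstract describes.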


Entropy, 2021, Vol. 23(3), pp. 310
Author(s): Michael S. Harré

This review covers some of the central relationships between artificial intelligence, psychology, and economics through the lens of information theory, specifically focusing on formal models of decision theory. In doing so, we look at a particular approach that each field has adopted and how information theory has informed the development of the ideas of each field. A key theme is expected utility theory, its connection to information theory, and the Bayesian approach to decision-making and forms of (bounded) rationality. What emerges from this review is a broadly unified formal perspective derived from three very different starting points that reflect the unique principles of each field. Each of the three approaches reviewed can, in principle at least, be implemented in a computational model in such a way that, with sufficient computational power, they could be compared with human abilities in complex tasks. However, a central critique that can be applied to all three approaches was first put forward by Savage in The Foundations of Statistics and recently brought to the fore by the economist Binmore: Bayesian approaches to decision-making work in what Savage called "small worlds" but cannot work in "large worlds". This point, in various guises, is central to some of the current debates about the power of artificial intelligence and its relationship to human-like learning and decision-making. Recent work on artificial intelligence has gone some way to bridging this gap, but significant questions still need to be answered in all three fields to make progress on these problems.
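
As a minimal sketch of the shared formal core the abstract points to, the following toy example performs Bayesian updating and then picks the action with the highest posterior expected utility in a Savage-style "small world". The states, observations, and utility values are illustrative assumptions, not taken from the review.

import numpy as np

states = ["rain", "sun"]
prior = np.array([0.3, 0.7])                 # P(state)
likelihood = np.array([[0.8, 0.2],           # P(obs = "clouds" | state)
                       [0.2, 0.8]])          # P(obs = "clear"  | state)

# Utility of each action in each state (rows: actions, columns: states).
utility = np.array([[ 1.0, 0.5],             # take umbrella
                    [-2.0, 1.0]])            # leave umbrella

def choose(observation_index: int) -> str:
    # Bayes' rule: posterior over states given the observation.
    posterior = likelihood[observation_index] * prior
    posterior /= posterior.sum()
    # Expected utility of each action under the posterior; pick the maximizer.
    expected_utility = utility @ posterior
    return ["take umbrella", "leave umbrella"][int(np.argmax(expected_utility))]

print(choose(0))  # observed "clouds"
print(choose(1))  # observed "clear"

The "large worlds" critique is precisely that real decision problems rarely come with such a complete, closed list of states, observations, and utilities.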


2020
Author(s): Thomas Edward Gladwin

This evolving document is my combined essay, tutorial, and manifesto on foundational concepts of statistics for experimental research. It is primarily meant to strengthen statistical thinking by using programming and simulated experiments, rather than formal mathematics, to make concepts concrete. It further aims to explain and justify the role of null hypothesis significance testing in experimental research. It is not an introductory textbook, but rather something to read alongside or after undergraduate modules. It also provides an introduction to data analysis and simulation using Python and NumPy.
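
In the spirit of the document's simulation-based approach, a short sketch of the kind of exercise it describes: estimating the false-positive rate of a null hypothesis significance test under a true null by simulating many two-group experiments. This is not the author's code; the group sizes, number of simulations, and significance level are illustrative assumptions.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_simulations = 5_000
n_per_group = 30
alpha = 0.05

false_positives = 0
for _ in range(n_simulations):
    # Both groups are drawn from the same distribution: the null is true.
    group_a = rng.normal(0.0, 1.0, n_per_group)
    group_b = rng.normal(0.0, 1.0, n_per_group)
    _, p_value = stats.ttest_ind(group_a, group_b)
    false_positives += p_value < alpha

print(f"Empirical false-positive rate: {false_positives / n_simulations:.3f} "
      f"(nominal alpha = {alpha})")

The empirical rate should land close to the nominal alpha, which is the property of the test that such simulations make concrete.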


2020, Vol. 18(1), pp. 2-35
Author(s): Miodrag M. Lovric

The Jeffreys-Lindley paradox is the most frequently cited divergence between the frequentist and Bayesian approaches to statistical inference. It is embedded in the very foundations of statistics and divides frequentist and Bayesian inference in an irreconcilable way. This paradox is the Gordian knot of statistical inference and data science in the zettabyte era. If statistical science is to be ready for a revolution driven by the challenges of analyzing massive data sets, the first step is to finally resolve this anomaly. For more than sixty years, the Jeffreys-Lindley paradox has been under active discussion and debate. Many solutions have been proposed, none entirely satisfactory. The paradox and its extent have frequently been misunderstood by statisticians and non-statisticians alike. This paper aims to reassess the paradox, shed new light on it, and indicate how often it occurs in practice when dealing with big data.
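
A minimal numeric sketch of the paradox, not taken from the paper (the known data standard deviation, the prior scale under the alternative, and the choice to pin the test statistic at the 5% boundary are illustrative assumptions): hold z fixed at 1.96 and watch the Bayes factor for the point null H0: theta = 0 versus H1: theta ~ N(0, tau^2) swing toward H0 as the sample size grows.

import numpy as np
from scipy import stats

sigma = 1.0   # known data standard deviation
tau = 1.0     # prior standard deviation of theta under H1
z = 1.96      # sample mean sits exactly at the two-sided 5% boundary

for n in [10, 100, 10_000, 1_000_000]:
    se = sigma / np.sqrt(n)
    x_bar = z * se
    # Marginal density of the sample mean under each hypothesis.
    m0 = stats.norm.pdf(x_bar, loc=0.0, scale=se)
    m1 = stats.norm.pdf(x_bar, loc=0.0, scale=np.sqrt(tau**2 + se**2))
    bf01 = m0 / m1
    p_value = 2 * (1 - stats.norm.cdf(z))
    print(f"n={n:>9,}  p-value={p_value:.3f}  Bayes factor BF01={bf01:.1f}")

Under these assumptions the p-value stays at 0.05 for every n, while the Bayes factor increasingly favors the null hypothesis as n grows, which is the divergence the abstract describes.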

