Of Forking Paths and Tied Hands: Selective Publication of Findings, and What Economists Should Do about It

2021 ◽  
Vol 35 (3) ◽  
pp. 175-192
Author(s):  
Maximilian Kasy

A key challenge for interpreting published empirical research is the fact that published findings might be selected by researchers or by journals. Selection might be based on criteria such as significance, consistency with theory, or the surprisingness of findings or their plausibility. Selection leads to biased estimates, reduced coverage of confidence intervals, and distorted posterior beliefs. I review methods for detecting and quantifying selection based on the distribution of p-values, systematic replication studies, and meta-studies. I then discuss the conflicting recommendations regarding selection resulting from alternative objectives, in particular, the validity of inference versus the relevance of findings for decision-makers. Based on this discussion, I consider various reform proposals, such as deemphasizing significance, pre-analysis plans, journals for null results and replication studies, and a functionally differentiated publication system. In conclusion, I argue that we need alternative foundations of statistics that go beyond the single-agent model of decision theory.
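The selection mechanism described in the abstract can be illustrated with a small simulation (illustrative only, not taken from the paper; all parameter values are arbitrary): if only estimates significant at the 5% level are "published," the published estimates systematically overstate the true effect.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

true_effect = 0.2      # hypothetical true mean effect
n, studies = 50, 10_000

# Each "study" estimates the mean of n draws from N(true_effect, 1).
estimates = rng.normal(true_effect, 1.0, size=(studies, n)).mean(axis=1)
se = 1.0 / np.sqrt(n)

# Two-sided z-test of the null of zero effect; "publish" only if p < 0.05.
pvals = 2 * stats.norm.sf(np.abs(estimates) / se)
published = estimates[pvals < 0.05]

print(f"mean of all estimates:       {estimates.mean():.3f}")
print(f"mean of published estimates: {published.mean():.3f}")
```

The full set of estimates is centered on the true effect, while the published subset is noticeably inflated, which is exactly the bias in estimates and the coverage failure of confidence intervals that the abstract describes.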

2021 ◽  
Vol 35 (3) ◽  
pp. 157-174
Author(s):  
Guido W. Imbens

The use of statistical significance and p-values has become a matter of substantial controversy in various fields using statistical methods. This has gone as far as some journals banning the use of indicators for statistical significance, or even any reports of p-values and, in one case, any mention of confidence intervals. I discuss three of the issues that have led to these often-heated debates. First, I argue that in many cases, p-values and indicators of statistical significance do not answer the questions of primary interest. Such questions typically involve making (recommendations on) decisions under uncertainty. In that case, point estimates and measures of uncertainty in the form of confidence intervals or, even better, Bayesian intervals are often more informative summary statistics. In fact, in that case, the presence or absence of statistical significance is essentially irrelevant, and including it in the discussion may confuse the matter at hand. Second, I argue that there are also cases where testing null hypotheses is a natural goal and where p-values are reasonable and appropriate summary statistics. I conclude that banning them in general is counterproductive. Third, I discuss how the overemphasis in empirical work on statistical significance has led to abuse of p-values in the form of p-hacking and publication bias. The use of pre-analysis plans and replication studies, in combination with a lower emphasis on statistical significance, may help address these problems.
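The point that interval estimates are more informative than a bare significance indicator can be sketched as follows (simulated data; the group means and sample sizes are invented for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical outcomes for a treated and a control group of 200 each.
treated = rng.normal(0.30, 1.0, 200)
control = rng.normal(0.00, 1.0, 200)

diff = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / 200 + control.var(ddof=1) / 200)

# The confidence interval reports both magnitude and uncertainty...
ci = (diff - 1.96 * se, diff + 1.96 * se)
# ...whereas a significance indicator collapses them to a single bit.
p = 2 * stats.norm.sf(abs(diff) / se)

print(f"estimate {diff:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}], p = {p:.4f}")
```

A decision-maker can read the plausible range of effects directly off the interval; "significant at 5%" discards that information.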


1997 ◽  
Vol 80 (1) ◽  
pp. 337-338 ◽  
Author(s):  
Raymond Hubbard ◽  
J. Scott Armstrong

Studies suggest a bias against the publication of null (p > .05) results. Instead of significance testing, we advocate reporting effect sizes and confidence intervals, and conducting replication studies. If statistical tests are used, power analyses should accompany them.
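A power calculation of the kind the authors recommend might look like this (a sketch using the normal approximation to the two-sample t-test; the effect size and per-group sample size are arbitrary):

```python
import numpy as np
from scipy import stats

def power_two_sample(effect_size, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample test for a
    standardized mean difference (Cohen's d), normal approximation."""
    se = np.sqrt(2.0 / n_per_group)          # SE of the standardized difference
    z_crit = stats.norm.ppf(1 - alpha / 2)   # two-sided critical value
    # Probability of rejecting in either tail under the alternative.
    return (stats.norm.sf(z_crit - effect_size / se)
            + stats.norm.cdf(-z_crit - effect_size / se))

print(f"power for d = 0.5 with 64 per group: {power_two_sample(0.5, 64):.2f}")
```

Reporting this number alongside a test clarifies how likely the study was to detect an effect of the assumed size in the first place, which is essential context for interpreting a null result.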


Econometrics ◽  
2019 ◽  
Vol 7 (2) ◽  
pp. 26 ◽  
Author(s):  
David Trafimow

There has been much debate about null hypothesis significance testing, p-values without null hypothesis significance testing, and confidence intervals. The first major section of the present article addresses some of the main reasons these procedures are problematic. The conclusion is that none of them is satisfactory. However, there is a new procedure, termed the a priori procedure (APP), that validly aids researchers in obtaining sample statistics that have acceptable probabilities of being close to their corresponding population parameters. The second major section provides a description and review of APP advances. Not only does the APP avoid the problems that plague other inferential statistical procedures, but it is also easy to perform. Although the APP can be performed in conjunction with other procedures, the present recommendation is that it be used alone.
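For the single-mean case, the APP reduces to a closed-form sample-size rule; the sketch below reflects my reading of that case, with `f` the desired precision in standard-deviation units and `c` the desired probability of achieving it:

```python
import math
from scipy import stats

def app_sample_size(f, c):
    """Minimum n such that the sample mean falls within f population
    standard deviations of the population mean with probability c."""
    z = stats.norm.ppf((1 + c) / 2)      # two-sided normal critical value
    return math.ceil((z / f) ** 2)

# e.g. a 0.95 probability of landing within 0.1 sigma of the mean:
print(app_sample_size(f=0.1, c=0.95))    # needs 385 observations
```

The sample size is chosen before data collection, which is the sense in which the procedure is "a priori": precision is guaranteed by design rather than assessed after the fact.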


2015 ◽  
Vol 40 (1) ◽  
pp. 53-58 ◽  
Author(s):  
Camiel L.M. de Roij van Zuijdewijn ◽  
Menso J. Nubé ◽  
Piet M. ter Wee ◽  
Peter J. Blankestijn ◽  
Renée Lévesque ◽  
...  

Background/Aims: Treatment time is associated with survival in hemodialysis (HD) patients and with convection volume in hemodiafiltration (HDF) patients. High-volume HDF is associated with improved survival. Therefore, we investigated whether this survival benefit is explained by treatment time. Methods: Participants were subdivided into four groups: HD and tertiles of convection volume in HDF. Three Cox regression models were fitted to calculate hazard ratios (HRs) for mortality of HDF subgroups versus HD: (1) crude, (2) adjusted for confounders, (3) model 2 plus mean treatment time. As the only difference between the latter two models is treatment time, any change in HRs is due to this variable. Results: Of the 700 analyzed individuals, 114 were treated with high-volume HDF. HRs of high-volume HDF versus HD were 0.61, 0.62, and 0.64 in the three models, respectively (all p values < 0.05). Confidence intervals of models 2 and 3 overlap. Conclusion: The survival benefit of high-volume HDF over HD is independent of treatment time.
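The nested-model logic can be written out in Cox proportional-hazards form (the notation below is ours, not the paper's):

```latex
% Notation (ours): x = HDF-subgroup indicators, z = confounders,
% t_{tx} = mean treatment time, h_0 = baseline hazard.
h(t \mid x)            = h_0(t)\,\exp(\beta' x)                              % model 1: crude
h(t \mid x, z)         = h_0(t)\,\exp(\beta' x + \gamma' z)                  % model 2: adjusted
h(t \mid x, z, t_{tx}) = h_0(t)\,\exp(\beta' x + \gamma' z + \delta\, t_{tx}) % model 3
```

The HR for high-volume HDF versus HD is the exponentiated coefficient on that subgroup's indicator; because it moves only from 0.62 (model 2) to 0.64 (model 3) when treatment time enters, treatment time can account for little of the survival benefit.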


Game Theory ◽  
2015 ◽  
Vol 2015 ◽  
pp. 1-20 ◽  
Author(s):  
Nicholas S. Kovach ◽  
Alan S. Gibson ◽  
Gary B. Lamont

When dealing with conflicts, game theory and decision theory can be used to model the interactions of the decision-makers. To date, game theory and decision theory have received considerable modeling focus, while hypergame theory has not. A metagame, known as a hypergame, occurs when one player does not know or fully understand all the strategies of a game. Hypergame theory extends the advantages of game theory by allowing a player to outmaneuver an opponent and obtain a more preferred outcome with a higher utility. The ability to outmaneuver an opponent arises because the hypergame model captures the differing views of the opponents, incorporating information unknown to other players (misperception or intentional deception). The hypergame model provides more accurate solutions for complex conflicts than game-theoretic models alone and excels where perception or information differences exist between players. This paper explores the current research in hypergame theory and presents a broad overview of the historical literature on hypergame theory.
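The perception asymmetry at the heart of a hypergame can be made concrete with a toy 2×2 example (the payoffs are invented for illustration and are not a model from the surveyed literature):

```python
import numpy as np

# Toy 2x2 hypergame; entry [i, j] = payoff when row plays i, column plays j.
row_payoff = np.array([[3, 0],
                       [2, 1]])
col_payoff = np.array([[1, 2],   # column's TRUE payoffs: strategy 1 dominates
                       [0, 3]])

# Row misperceives column's payoffs; in row's perceived game,
# column's strategy 0 dominates instead.
col_payoff_as_row_sees_it = np.array([[2, 1],
                                      [3, 0]])

# Row predicts column's dominant strategy in the PERCEIVED game
# (strategy 0 dominates pointwise there, so column sums identify it)...
col_move_predicted = int(np.argmax(col_payoff_as_row_sees_it.sum(axis=0)))
# ...and best-responds to that prediction.
row_move = int(np.argmax(row_payoff[:, col_move_predicted]))

# Column knows row's misperception (the hypergame element): she predicts
# row's move and best-responds in the TRUE game, outmaneuvering row.
col_move = int(np.argmax(col_payoff[row_move, :]))

print(f"row expected {row_payoff[row_move, col_move_predicted]}, "
      f"received {row_payoff[row_move, col_move]}")
```

Row, playing the game it believes it is in, expects a payoff of 3 but receives 0; the better-informed player exploits the perception gap, which is precisely the "outmaneuvering" the abstract describes.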

