Peer Review #2 of "Problems in using p-curve analysis and text-mining to detect rate of p-hacking and evidential value (v0.2)"

Author(s): M Head
2015
Author(s): Dorothy V Bishop, Paul A Thompson

Background: The p-curve is a plot of the distribution of p-values below .05 reported in a set of scientific studies. Comparisons between ranges of p-values have been used to evaluate fields of research in terms of the extent to which studies have genuine evidential value, and the extent to which they suffer from bias in the selection of variables and analyses for publication (p-hacking). We argue that binomial tests on the p-curve are not robust enough to be used for this purpose. Methods: P-hacking can take various forms. Here we used R code to simulate the use of ghost variables, where an experimenter gathers data on several dependent variables but reports only those with statistically significant effects. We also examined a text-mined dataset used by Head et al. (2015) and assessed its suitability for investigating p-hacking. Results: We first show that a p-curve suggestive of p-hacking can be obtained if researchers misapply parametric tests to data that depart from normality, even when no p-hacking occurs. We go on to show that when there is ghost p-hacking, the shape of the p-curve depends on whether dependent variables are intercorrelated. For uncorrelated variables, simulated p-hacked data do not give the "p-hacking bump" just below .05 that is regarded as evidence of p-hacking, though there is a negative skew when simulated variables are intercorrelated. The way p-curves vary according to features of underlying data poses problems when automated text mining is used to detect p-values in heterogeneous sets of published papers. Conclusions: A significant bump in the p-curve just below .05 is not necessarily evidence of p-hacking, and the lack of a bump is not indicative of a lack of p-hacking. Furthermore, while studies with evidential value will usually generate a right-skewed p-curve, we cannot treat a right-skewed p-curve as an indicator of the extent of evidential value unless we have a model specific to the type of p-values entered into the analysis. We conclude that it is not feasible to use the p-curve to estimate the extent of p-hacking and evidential value unless there is considerable control over the type of data entered into the analysis.
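The claim that misapplied parametric tests can mimic p-hacking is easy to probe by simulation. The following R sketch is not the authors' code; the sample size, number of simulations, log-normal distribution, and effect size are illustrative assumptions. It draws skewed data with a genuine group difference, applies t-tests, and plots the resulting p-curve:

```r
## Minimal sketch, not the authors' simulation: a p-curve from t-tests
## applied to skewed (log-normal) data with a genuine group difference.
## n, n_sims, and the meanlog shift of 0.5 are illustrative assumptions.
set.seed(1)
n_sims <- 10000
n <- 20
p_vals <- replicate(n_sims, {
  g1 <- rlnorm(n, meanlog = 0,   sdlog = 1)
  g2 <- rlnorm(n, meanlog = 0.5, sdlog = 1)  # true effect, no p-hacking
  t.test(g1, g2)$p.value
})
sig <- p_vals[p_vals < .05]  # keep only the significant p-values
hist(sig, breaks = seq(0, .05, .005),
     main = "p-curve: non-normal data, no p-hacking",
     xlab = "p-value")
```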


PeerJ, 2016, Vol 4, e1715
Author(s): Dorothy V.M. Bishop, Paul A. Thompson

Background. The p-curve is a plot of the distribution of p-values reported in a set of scientific studies. Comparisons between ranges of p-values have been used to evaluate fields of research in terms of the extent to which studies have genuine evidential value, and the extent to which they suffer from bias in the selection of variables and analyses for publication (p-hacking). Methods. P-hacking can take various forms. Here we used R code to simulate the use of ghost variables, where an experimenter gathers data on several dependent variables but reports only those with statistically significant effects. We also examined a text-mined dataset used by Head et al. (2015) and assessed its suitability for investigating p-hacking. Results. We show that when there is ghost p-hacking, the shape of the p-curve depends on whether dependent variables are intercorrelated. For uncorrelated variables, simulated p-hacked data do not give the "p-hacking bump" just below .05 that is regarded as evidence of p-hacking, though there is a negative skew when simulated variables are intercorrelated. The way p-curves vary according to features of underlying data poses problems when automated text mining is used to detect p-values in heterogeneous sets of published papers. Conclusions. The absence of a bump in the p-curve is not indicative of a lack of p-hacking. Furthermore, while studies with evidential value will usually generate a right-skewed p-curve, we cannot treat a right-skewed p-curve as an indicator of the extent of evidential value unless we have a model specific to the type of p-values entered into the analysis. We conclude that it is not feasible to use the p-curve to estimate the extent of p-hacking and evidential value unless there is considerable control over the type of data entered into the analysis. In particular, p-hacking with ghost variables is likely to be missed.
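The ghost-variable procedure lends itself to direct simulation. Below is a minimal R sketch, assuming equal pairwise correlation rho among dependent variables; the sample size, number of variables, and values of rho are illustrative choices rather than the authors' exact settings:

```r
## Minimal sketch of "ghost variable" p-hacking: several DVs are measured
## under the null, but only the smallest significant p is reported.
## Correlation between DVs is set via rho (an assumption of this sketch).
library(MASS)  # for mvrnorm
set.seed(1)
ghost_p <- function(n = 20, n_dv = 5, rho = 0) {
  Sigma <- matrix(rho, n_dv, n_dv)
  diag(Sigma) <- 1
  g1 <- mvrnorm(n, mu = rep(0, n_dv), Sigma = Sigma)  # no true effect
  g2 <- mvrnorm(n, mu = rep(0, n_dv), Sigma = Sigma)
  p <- sapply(seq_len(n_dv), function(i) t.test(g1[, i], g2[, i])$p.value)
  if (any(p < .05)) min(p) else NA  # report only a significant result
}
p_uncor <- na.omit(replicate(5000, ghost_p(rho = 0)))
p_cor   <- na.omit(replicate(5000, ghost_p(rho = 0.8)))
par(mfrow = c(1, 2))
hist(p_uncor, breaks = seq(0, .05, .005),
     main = "Uncorrelated DVs", xlab = "p-value")
hist(p_cor, breaks = seq(0, .05, .005),
     main = "Correlated DVs (rho = .8)", xlab = "p-value")
```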


2016
Author(s): Dorothy V Bishop, Paul A Thompson

Background: The p-curve is a plot of the distribution of p-values reported in a set of scientific studies. Comparisons between ranges of p-values have been used to evaluate fields of research in terms of the extent to which studies have genuine evidential value, and the extent to which they suffer from bias in the selection of variables and analyses for publication (p-hacking). Methods: P-hacking can take various forms. Here we used R code to simulate the use of ghost variables, where an experimenter gathers data on several dependent variables but reports only those with statistically significant effects. We also examined a text-mined dataset used by Head et al. (2015) and assessed its suitability for investigating p-hacking. Results: We first show that when there is ghost p-hacking, the shape of the p-curve depends on whether dependent variables are intercorrelated. For uncorrelated variables, simulated p-hacked data do not give the "p-hacking bump" just below .05 that is regarded as evidence of p-hacking, though there is a negative skew when simulated variables are intercorrelated. The way p-curves vary according to features of underlying data poses problems when automated text mining is used to detect p-values in heterogeneous sets of published papers. Conclusions: The absence of a bump in the p-curve is not indicative of a lack of p-hacking. Furthermore, while studies with evidential value will usually generate a right-skewed p-curve, we cannot treat a right-skewed p-curve as an indicator of the extent of evidential value unless we have a model specific to the type of p-values entered into the analysis. We conclude that it is not feasible to use the p-curve to estimate the extent of p-hacking and evidential value unless there is considerable control over the type of data entered into the analysis. In particular, p-hacking with ghost variables is likely to be missed.
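For reference, the "bump" mentioned in these conclusions is typically assessed with a binomial test on the two bins immediately below .05. A sketch of such a test follows; the bin boundaries are illustrative, and `mined_p_values` is a hypothetical input vector:

```r
## Sketch of a binomial "bump" test on a p-curve: is there an excess of
## p-values in the bin just below .05 relative to the adjacent bin?
## Bin boundaries are illustrative; mined_p_values is hypothetical.
bump_test <- function(p) {
  upper <- sum(p > .045 & p < .05)   # bin just below .05
  lower <- sum(p > .04 & p <= .045)  # adjacent lower bin
  binom.test(upper, upper + lower, p = 0.5, alternative = "greater")
}
## Hypothetical usage on a vector of text-mined p-values:
## bump_test(mined_p_values)
```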


Author(s):  
Stephen L. Murphy ◽  
Richard P. Steel

Abstract: Extant literature consistently demonstrates that the level of self-determination individuals experience or display during an activity can be primed. However, because most of this literature comes from a period in which p-hacking was prevalent (pre-2015), these effects may reflect false positives. The aim of the present study was to investigate whether the published literature showing autonomous and controlling motivation priming effects contains evidential value. A systematic literature search was conducted to identify relevant priming research, and predefined rules determined which effects from each study would be used in p-curve analysis. Two p-curves of 33 effects each were constructed. P-curve analyses, even after excluding surprising effects (e.g., effects large in magnitude), demonstrated that the literature showing autonomous and controlling motivation priming effects contains evidential value. The present findings support prior literature suggesting that the effects of autonomous and controlling motivation primes exist at the population level. They also reduce (but do not eliminate) concerns from broader psychology that p-hacking may underlie reported effects.
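For readers unfamiliar with how evidential value is tested in this framework, a stripped-down illustration is the right-skew binomial test: evidential value predicts that significant p-values cluster well below .05 (e.g., below .025). This is a simplification of full p-curve analysis, not the authors' pipeline, and `effect_p_values` is a hypothetical vector:

```r
## Sketch of a right-skew binomial test for evidential value: among
## significant p-values, evidential value predicts an excess below .025.
## A simplification of full p-curve analysis; effect_p_values is hypothetical.
right_skew_test <- function(p) {
  p <- p[p < .05]  # p-curve uses only significant results
  binom.test(sum(p < .025), length(p), p = 0.5, alternative = "greater")
}
## Hypothetical usage with one effect per study:
## right_skew_test(effect_p_values)
```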

