Questionable Research Practices in Single-Case Experimental Designs: Examples and Possible Solutions

2021
Author(s): Matt Tincani, Jason C Travers

Questionable research practices (QRPs) are a variety of research choices that introduce bias into the body of scientific literature. Researchers have documented the widespread presence of QRPs across disciplines and promoted practices aimed at preventing them. More recently, Single-Case Experimental Design (SCED) researchers have explored how QRPs could manifest in SCED research. In this chapter, we describe QRPs in participant selection, independent variable selection, procedural fidelity documentation, graphical depictions of behavior, and effect size measures and statistics. We also discuss QRPs in relation to the file drawer effect, publication bias, and meta-analyses of SCED research. We provide recommendations for researchers and the research community to promote practices for preventing QRPs in SCED.
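
The flexibility flagged in "effect size measures and statistics" is easy to make concrete: SCED researchers can choose among many nonoverlap metrics, and reporting only the most favorable one is itself a QRP. Below is a minimal sketch (ours, not the chapter's) of one common metric, Nonoverlap of All Pairs (NAP), computed on hypothetical single-case data.

```python
# Minimal sketch of NAP: the proportion of baseline/intervention data-point
# pairs in which the intervention point exceeds the baseline point.
# Data below are hypothetical session-by-session response counts.
from itertools import product

def nap(baseline, intervention):
    """Nonoverlap of All Pairs: ties count as half an overlap."""
    pairs = list(product(baseline, intervention))
    wins = sum(1.0 if b < i else 0.5 if b == i else 0.0 for b, i in pairs)
    return wins / len(pairs)

baseline = [3, 4, 2, 5, 3]
intervention = [6, 7, 5, 8, 9, 7]
print(f"NAP = {nap(baseline, intervention):.2f}")  # 0.98: near-complete nonoverlap
```

Because several such metrics exist (NAP, Tau-U, PND, log response ratio, and others) and often disagree, pre-specifying the metric before inspecting the data is one guard against this QRP.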

2021
Author(s): Sean Grant, Evan Mayo-Wilson, Lauren Supplee

The credibility of Prevention Services Clearinghouse designations of programs and services as “promising,” “supported,” and “well supported” is threatened by the prevalence of questionable research practices (e.g., selective non-reporting of results) in the bodies of evidence that the Clearinghouse reviews. Internationally accepted standards for reporting and interpreting the results of systematic reviews of evidence—including the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and Grading of Recommendations Assessment, Development and Evaluation (GRADE)—recommend that reviews take steps to mitigate bias associated with these questionable research practices. Moreover, Department of Health and Human Services (HHS) policies require that contractors and grantees engage in transparent, open, and reproducible research. We propose that the Clearinghouse adopt standards to mitigate the effects of these questionable research practices, which would be consistent with international guidelines and with complementary HHS policies and procedures.


2017
Author(s): Freya Acar, Ruth Seurinck, Simon B. Eickhoff, Beatrijs Moerkerke

Abstract. The importance of integrating research findings is incontrovertible, and coordinate-based meta-analyses have become a popular approach to combining the results of fMRI studies when only peaks of activation are reported. Like classical meta-analyses, coordinate-based meta-analyses may be subject to different forms of publication bias, which can distort results and possibly invalidate findings. We develop a tool that assesses robustness to potential publication bias at the cluster level. We investigate the possible influence of the file-drawer effect, in which studies that do not report certain results fail to get published, by determining the number of noise studies that can be added to an existing fMRI meta-analysis before its results are no longer statistically significant. We illustrate this tool through an example and test the effect of several parameters through extensive simulations. We provide an algorithm, with freely available code, that generates noise studies and enables users to determine the robustness of meta-analytical results.
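
The underlying robustness check is a fail-safe-N-style procedure. The sketch below is a simplified stand-in for the authors' released code, applied to a hypothetical fixed-effect meta-analysis of generic effect sizes rather than fMRI clusters: simulated null ("noise") studies are added until the pooled effect loses significance, and the number absorbed is the robustness measure. All effect sizes and variances are illustrative.

```python
# Fail-safe-N-style robustness check: add simulated null studies to a
# fixed-effect meta-analysis until the pooled effect is no longer significant.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def pooled_z(effects, variances):
    """Fixed-effect inverse-variance pooled estimate, returned as a z statistic."""
    w = 1.0 / np.asarray(variances)
    est = np.sum(w * effects) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return est / se

effects = np.array([0.45, 0.30, 0.55, 0.40])    # hypothetical study effects
variances = np.array([0.02, 0.03, 0.025, 0.04])

n_noise = 0
z = pooled_z(effects, variances)
while stats.norm.sf(abs(z)) * 2 < 0.05:          # two-sided p-value still significant
    n_noise += 1
    # append a null study: true effect 0, sampling variance comparable to the rest
    effects = np.append(effects, rng.normal(0.0, np.sqrt(0.03)))
    variances = np.append(variances, 0.03)
    z = pooled_z(effects, variances)

print(f"Result survives roughly {n_noise - 1} added null studies")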


2021
Author(s): Sean Grant, Evan Mayo-Wilson, Lauren Supplee

Questionable research practices threaten the credibility of HomVEE designations for “evidence-based early childhood home visiting service delivery models”. Internationally accepted standards for reporting and interpreting the results of reviews—including the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and Grading of Recommendations Assessment, Development and Evaluation (GRADE)—recommend that reviews take steps to mitigate bias associated with selective non-reporting of results and other questionable research practices. Moreover, Department of Health and Human Services (HHS) policies require that contractors and grantees engage in transparent and open science practices. We propose that HomVEE adopt standards to mitigate the effects of questionable research practices, which would be consistent with international guidelines and with complementary HHS policies and procedures.


2018
Author(s): Caleb Z. Marshall

We discuss how questionable research practices in behavioral science (such as p-hacking) affect meta-analyses. Moreover, we argue that abandoning meta-analytic techniques altogether is an overreaction to concerns about Type I errors.
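
A small simulation makes the first claim concrete. The sketch below (ours, not Marshall's) shows how selective reporting of significant results in the expected direction, one form of questionable practice closely related to p-hacking, inflates a naive meta-analytic average even when the true effect is exactly zero. All parameters are illustrative.

```python
# Simulate a literature with a true effect of zero, publish only significant
# positive results, and observe the inflated naive meta-analytic mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_per_group, n_studies = 30, 200

published = []
for _ in range(n_studies):
    a = rng.normal(0.0, 1.0, n_per_group)   # true standardized effect d = 0
    b = rng.normal(0.0, 1.0, n_per_group)
    t, p = stats.ttest_ind(a, b)
    d = (a.mean() - b.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    if p < 0.05 and d > 0:                  # only significant results in the
        published.append(d)                 # "right" direction get reported

print(f"{len(published)} of {n_studies} studies published")
print(f"naive meta-analytic mean d = {np.mean(published):.2f}")  # far from the true 0
```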


2017
Author(s): Pantelis Samartsidis, Silvia Montagna, Angela R. Laird, Peter T. Fox, Timothy D. Johnson, ...

Abstract. Coordinate-based meta-analyses (CBMA) allow researchers to combine the results from multiple fMRI experiments with the goal of obtaining results that are more likely to generalise. However, the interpretation of CBMA findings can be impaired by the file drawer problem, a type of publication bias referring to experiments that are carried out but never published. Using foci-per-contrast count data from the BrainMap database, we propose a zero-truncated modelling approach that allows us to estimate the prevalence of non-significant experiments. We validate our method with simulations and real coordinate data generated from the Human Connectome Project. Application of our method to the BrainMap data provides evidence for the existence of a file drawer effect, with the rate of missing experiments estimated as at least 6 per 100 reported.
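
The core idea is that a published experiment necessarily reports at least one focus, so the zero class is unobserved. The sketch below fits a plain zero-truncated Poisson, a simplified stand-in for the authors' fuller model, to hypothetical foci-per-contrast counts and back-estimates the share of zero-focus, i.e. unreported, experiments.

```python
# Fit a zero-truncated Poisson to observed (>=1) foci counts, then use the
# fitted rate to estimate the fraction of unobserved zero-count experiments.
import numpy as np
from scipy.optimize import minimize_scalar

counts = np.array([1, 2, 2, 3, 1, 4, 2, 5, 3, 1, 2, 6, 3, 2, 4])  # hypothetical foci per contrast

def neg_loglik(lam):
    # Zero-truncated Poisson: P(X=k | X>0) = Pois(k; lam) / (1 - exp(-lam)).
    # log(k!) terms are dropped since they are constant in lam.
    ll = np.sum(counts * np.log(lam) - lam - np.log(1 - np.exp(-lam)))
    return -ll

res = minimize_scalar(neg_loglik, bounds=(1e-6, 50), method="bounded")
lam = res.x
p_zero = np.exp(-lam)                              # estimated P(experiment reports 0 foci)
missing_per_100 = 100 * p_zero / (1 - p_zero)      # missing per 100 reported
print(f"lambda = {lam:.2f}, est. {missing_per_100:.0f} missing per 100 reported")
```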


2019
Vol 227 (4), pp. 261-279
Author(s): Frank Renkewitz, Melanie Keiner

Abstract. Publication biases and questionable research practices are assumed to be two of the main causes of low replication rates. Both of these problems lead to severely inflated effect size estimates in meta-analyses. Methodologists have proposed a number of statistical tools to detect such bias in meta-analytic results. We present an evaluation of the performance of six of these tools. To assess the Type I error rate and the statistical power of these methods, we simulated a large variety of literatures that differed with regard to true effect size, heterogeneity, number of available primary studies, and sample sizes of these primary studies; furthermore, simulated studies were subjected to different degrees of publication bias. Our results show that across all simulated conditions, no method consistently outperformed the others. Additionally, all methods performed poorly when true effect sizes were heterogeneous or primary studies had a small chance of being published, irrespective of their results. This suggests that in many actual meta-analyses in psychology, bias will remain undiscovered no matter which detection method is used.
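
The sketch below (ours; the paper evaluates six methods across far more conditions) illustrates the simulation logic for one such detection tool, Egger's regression test: generate literatures with a known degree of selective publication, apply the test to each, and record the detection rate. All simulation parameters are illustrative.

```python
# Estimate the detection rate of Egger's regression test on simulated
# literatures subject to selective publication of significant results.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def simulate_literature(k=20, true_d=0.0, p_publish_null=0.2):
    """Simulate k published studies under selective publication."""
    effects, ses = [], []
    while len(effects) < k:
        n = rng.integers(20, 100)          # per-group sample size
        se = np.sqrt(2 / n)                # approximate SE of Cohen's d
        d = rng.normal(true_d, se)
        significant = abs(d / se) > 1.96
        if significant or rng.random() < p_publish_null:
            effects.append(d)
            ses.append(se)
    return np.array(effects), np.array(ses)

def egger_p(effects, ses):
    """p-value for the intercept of Egger's regression (z-score on precision)."""
    x, y = 1 / ses, effects / ses
    n = len(x)
    sxx = np.sum((x - x.mean()) ** 2)
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / sxx
    b0 = y.mean() - b1 * x.mean()          # Egger's intercept: 0 if no asymmetry
    resid = y - (b0 + b1 * x)
    s2 = np.sum(resid ** 2) / (n - 2)
    se_b0 = np.sqrt(s2 * (1 / n + x.mean() ** 2 / sxx))
    return 2 * stats.t.sf(abs(b0 / se_b0), df=n - 2)

rejections = sum(egger_p(*simulate_literature()) < 0.05 for _ in range(500))
print(f"detection rate under publication bias: {rejections / 500:.2f}")
```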


2018
Author(s): Dick Bierman, Jacob Jolij

We have tested the feasibility of a method to prevent the occurrence of so-called Questionable Research Practices (QRPs). Apart from embedded pre-registration, the major aspect of the system is real-time uploading of data to a secure server. We outline the method, discuss the treatment of drop-outs, compare the system to the Born-open data method, and report on our preliminary experiences. We also discuss extending the data-integrity system from a secure server to blockchain technology.
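
The data-integrity idea can be sketched as a simple hash chain (ours, not the authors' system): each uploaded record commits to the hash of the previous one, so any retroactive edit to already-collected data breaks the chain, much as in a blockchain. The record fields below are hypothetical.

```python
# Hash-chained data log: tampering with any earlier record invalidates the chain.
import hashlib
import json

def append_record(chain, record):
    """Append a record whose hash covers the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    chain.append({"record": record, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(chain):
    """Recompute every hash; returns False if any record was altered."""
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps({"record": entry["record"], "prev": prev_hash},
                             sort_keys=True)
        if (entry["prev"] != prev_hash or
                hashlib.sha256(payload.encode()).hexdigest() != entry["hash"]):
            return False
        prev_hash = entry["hash"]
    return True

chain = []
append_record(chain, {"participant": 1, "trial": 1, "response": 0.73})
append_record(chain, {"participant": 1, "trial": 2, "response": 0.41})
print(verify(chain))                     # True
chain[0]["record"]["response"] = 0.99    # tamper with already-uploaded data
print(verify(chain))                     # False
```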

