Seven Steps Toward Transparency and Replicability in Psychological Science

2020 ◽  
Author(s):  
D. Stephen Lindsay

Psychological scientists strive to advance understanding of how and why we animals do and think and feel as we do. This is difficult, in part because flukes of chance and measurement error obscure researchers’ perceptions. Many psychologists use inferential statistical tests to peer through the murk of chance and discern relationships between variables. Those tests are powerful tools, but they must be wielded with skill. Moreover, research reports must convey to readers a detailed and accurate understanding of how the data were obtained and analyzed. Research psychologists often fall short in those regards. This paper attempts to motivate and explain ways to enhance the transparency and replicability of psychological science. Specifically, I speak to how publication bias and p-hacking contribute to effect-size exaggeration in the published literature, and how effect-size exaggeration contributes, in turn, to replication failures. Then I present seven steps toward addressing these problems: Telling the truth; upgrading statistical knowledge; standardizing aspects of research practices; documenting lab procedures in a lab manual; making materials, data, and analysis scripts transparent; addressing constraints on generality; and collaborating.

2022 ◽  
Author(s):  
Alice Winter ◽  
Carolin Dudschig ◽  
Barbara Kaup

The embodied account of language comprehension has been one of the most influential theoretical developments of recent decades addressing the question of how humans comprehend and represent language. To examine its assumptions, many studies have made use of behavioral paradigms involving basic compatibility effects. The action–sentence compatibility effect (ACE) is one of the most influential of these compatibility effects and is the most widely cited evidence for the assumptions of the embodied account of language comprehension. Recently, however, it has proved difficult to extend, or even reliably replicate, the ACE. The conflicting findings concerning the ACE and its extensions have prompted debate over whether the ACE is indeed a reliable effect or whether it might be the product of publication bias or other distorting research practices. In a first step, we conducted a meta-analysis using a random-effects model. This analysis revealed a small but significant effect size for the ACE (d = .129, p = .007). A second meta-analytic approach supported the existence of an ACE (Fisher’s method: χ² = 124.379, p < .001). Furthermore, the task parameter Delay emerged as a factor of interest in whether the ACE appears with a positive or negative effect direction. This meta-analysis further assessed potential publication bias and suggests that there is bias in the ACE literature.
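For illustration only (this is not code from the paper): Fisher’s method combines the p-values of k independent tests into a single χ² statistic with 2k degrees of freedom under the joint null hypothesis. A minimal sketch in Python, using invented p-values, might look like this:

```python
from scipy.stats import combine_pvalues

# Hypothetical p-values from k independent ACE studies (invented for illustration)
p_values = [0.03, 0.21, 0.004, 0.48, 0.06, 0.11]

# Fisher's method: X^2 = -2 * sum(ln p_i), with 2k degrees of freedom
# under the joint null that every study's null hypothesis is true.
chi2_stat, combined_p = combine_pvalues(p_values, method="fisher")

print(f"chi2 = {chi2_stat:.3f} (df = {2 * len(p_values)}), p = {combined_p:.4f}")
```

A small combined p indicates that at least some of the studies depart from the null, which is the sense in which the method supports the existence of an overall effect.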


Author(s):  
Michael D. Jennions ◽  
Christopher J. Lortie ◽  
Michael S. Rosenberg ◽  
Hannah R. Rothstein

This chapter discusses the increased occurrence of publication bias in the scientific literature. Publication bias is associated with an inaccurate representation of the merit of a hypothesis or idea. A strict definition is that it occurs when the published literature reports results that systematically differ from those of all studies and statistical tests conducted; the result is that false conclusions are drawn. The chapter presents five main approaches used either to detect potential narrow-sense publication bias or to assess how sensitive the results of a meta-analysis are to the possible exclusion of studies: funnel plots, tests for relationships between effect size and sample size using nonparametric correlation or regression, the trim-and-fill method, fail-safe numbers, and model selection.
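As a hedged illustration of the regression-based approach (not code from the chapter), an Egger-style regression tests for funnel-plot asymmetry by regressing standardized effects on precision; an intercept significantly different from zero suggests small-study effects consistent with publication bias. The effect sizes and standard errors below are invented:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical per-study effect sizes and standard errors (invented for illustration)
effects = np.array([0.42, 0.35, 0.18, 0.51, 0.12, 0.08, 0.30])
se = np.array([0.20, 0.18, 0.10, 0.25, 0.08, 0.06, 0.15])

# Egger-style regression: standardized effect on precision.
# A nonzero intercept indicates funnel-plot asymmetry (possible publication bias).
z = effects / se          # standardized effects
precision = 1.0 / se      # inverse standard errors
X = sm.add_constant(precision)
fit = sm.OLS(z, X).fit()

print(f"intercept = {fit.params[0]:.3f}, p = {fit.pvalues[0]:.3f}")
```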


2020 ◽  
Author(s):  
Daryl Brian O'Connor

There has been much talk of psychological science undergoing a renaissance, with recent years marked by dramatic changes in research practices and in the publishing landscape. This article briefly summarises a number of ways in which psychological science can improve its rigour, lessen its use of questionable research practices, and reduce publication bias. The importance of pre-registration as a useful tool to increase the transparency of science and improve the robustness of our evidence base, especially in COVID-19 times, is presented. In particular, the case for increased adoption of Registered Reports, the article format that allows peer review of research studies before the results are known, is outlined. Finally, the article argues that the scientific architecture and the academic reward structure need to change, with a move towards “slow science” and away from the “publish or perish” culture.


2019 ◽  
Vol 227 (4) ◽  
pp. 261-279 ◽  
Author(s):  
Frank Renkewitz ◽  
Melanie Keiner

Abstract. Publication biases and questionable research practices are assumed to be two of the main causes of low replication rates. Both of these problems lead to severely inflated effect size estimates in meta-analyses. Methodologists have proposed a number of statistical tools to detect such bias in meta-analytic results. We present an evaluation of the performance of six of these tools. To assess the Type I error rate and the statistical power of these methods, we simulated a large variety of literatures that differed with regard to true effect size, heterogeneity, number of available primary studies, and sample sizes of these primary studies; furthermore, simulated studies were subjected to different degrees of publication bias. Our results show that across all simulated conditions, no method consistently outperformed the others. Additionally, all methods performed poorly when true effect sizes were heterogeneous or primary studies had a small chance of being published, irrespective of their results. This suggests that in many actual meta-analyses in psychology, bias will remain undiscovered no matter which detection method is used.
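A minimal sketch of the kind of simulation described (our own illustration, not the authors’ code): generate primary studies with a known true effect, censor nonsignificant results with some probability, and compare mean observed effect sizes before and after censoring:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def simulate_literature(true_d=0.2, n_per_group=30, k=1000, p_publish_ns=0.1):
    """Simulate k two-group studies; nonsignificant results are published
    with probability p_publish_ns (significant ones always). Returns the
    mean observed Cohen's d before and after this censoring."""
    observed_d, published = [], []
    for _ in range(k):
        a = rng.normal(true_d, 1.0, n_per_group)
        b = rng.normal(0.0, 1.0, n_per_group)
        t, p = stats.ttest_ind(a, b)
        d = (a.mean() - b.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        observed_d.append(d)
        if p < 0.05 or rng.random() < p_publish_ns:
            published.append(d)
    return np.mean(observed_d), np.mean(published)

all_d, pub_d = simulate_literature()
print(f"mean d, all studies: {all_d:.3f}; published only: {pub_d:.3f}")
```

The published mean d comes out well above the true value, which is the inflation that bias-detection methods are asked to uncover.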


2021 ◽  
Vol 4 (2) ◽  
pp. 251524592110181
Author(s):  
Manikya Alister ◽  
Raine Vickers-Jones ◽  
David K. Sewell ◽  
Timothy Ballard

Judgments regarding replicability are vital to scientific progress. The metaphor of “standing on the shoulders of giants” encapsulates the notion that progress is made when new discoveries build on previous findings. Yet attempts to build on findings that are not replicable could mean a great deal of time, effort, and money wasted. In light of the recent “crisis of confidence” in psychological science, the ability to accurately judge the replicability of findings may be more important than ever. In this Registered Report, we examine the factors that influence psychological scientists’ confidence in the replicability of findings. We recruited corresponding authors of articles published in psychology journals between 2014 and 2018 to complete a brief survey in which they were asked to consider 76 specific study attributes that might bear on the replicability of a finding (e.g., preregistration, sample size, statistical methods). Participants were asked to rate the extent to which information regarding each attribute increased or decreased their confidence in the finding being replicated. We examined the extent to which each research attribute influenced average confidence in replicability. We found evidence for six reasonably distinct underlying factors that influenced these judgments and individual differences in the degree to which people’s judgments were influenced by these factors. The conclusions reveal how certain research practices affect other researchers’ perceptions of robustness. We hope our findings will help encourage the use of practices that promote replicability and, by extension, the cumulative progress of psychological science.


2021 ◽  
Author(s):  
Michelle Renee Ellefson ◽  
Daniel Oppenheimer

Failures of replication attempts in experimental psychology might have causes that extend beyond p-hacking, publication bias, or hidden moderators; reductions in experimental power can also be caused by violations of fidelity to a set of experimental protocols. In this paper, we run a series of simulations to systematically explore how manipulating fidelity influences effect size. We find statistical patterns that mimic those found in ManyLabs-style replications and meta-analyses, suggesting that fidelity violations are present in many replication attempts in psychology. Scholars in intervention science, medicine, and education have developed methods of improving and measuring fidelity; as replication becomes more mainstream in psychology, the field would benefit from adopting such approaches as well.
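One plausible way to simulate a fidelity violation (our own operationalization for illustration, not necessarily the authors’): assume only a fraction of treatment participants receive a faithful manipulation and observe how the mean observed effect size shrinks:

```python
import numpy as np

rng = np.random.default_rng(0)

def effect_with_fidelity(true_d=0.5, fidelity=1.0, n=200, reps=2000):
    """Mean observed Cohen's d when only `fidelity` of treatment participants
    actually receive the manipulation; the rest show no effect."""
    ds = []
    for _ in range(reps):
        treated = rng.random(n) < fidelity  # who got a faithful protocol
        a = rng.normal(np.where(treated, true_d, 0.0), 1.0)
        b = rng.normal(0.0, 1.0, n)
        ds.append((a.mean() - b.mean()) /
                  np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2))
    return np.mean(ds)

for f in (1.0, 0.8, 0.5):
    print(f"fidelity {f:.0%}: mean observed d = {effect_with_fidelity(fidelity=f):.3f}")
```

Under this toy model the observed effect scales roughly with the fidelity rate, so even a well-powered replication can look like a failure if the protocol is only partially implemented.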


2019 ◽  
Author(s):  
Gregory Francis ◽  
Evelina Thunell

Based on findings from six experiments, Dallas, Liu & Ubel (2019) concluded that placing calorie labels to the left of menu items influences consumers to choose lower calorie food options. Contrary to previously reported findings, they suggested that calorie labels do influence food choices, but only when placed to the left, because in that case they are read first. If true, these findings have important implications for the design of menus and may help address the obesity pandemic. However, an analysis of the reported results indicates that they seem too good to be true. We show that if the effect sizes in Dallas et al. (2019) are representative of the populations, a replication of the six studies (with the same sample sizes) has a probability of only 0.014 of producing uniformly significant outcomes. Such a low success rate suggests that the original findings might be the result of questionable research practices or publication bias. We therefore caution readers and policy makers to be skeptical about the results and conclusions reported by Dallas et al. (2019).
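The logic of this kind of analysis can be sketched as follows (illustrative only; the effect sizes and sample sizes below are invented, not the values from Dallas et al., 2019): if the reported effects are accurate, the probability that all six independent studies yield significant results is approximately the product of each study’s estimated power:

```python
from statsmodels.stats.power import TTestIndPower

# Hypothetical per-study effect sizes (Cohen's d) and per-group sample sizes
# (invented for illustration; not the values reported by Dallas et al., 2019).
studies = [(0.25, 150), (0.30, 120), (0.22, 200), (0.35, 90), (0.28, 140), (0.20, 180)]

power_calc = TTestIndPower()
p_all_significant = 1.0
for d, n in studies:
    # Post hoc power of a two-sided, two-sample t-test at alpha = .05
    power = power_calc.power(effect_size=d, nobs1=n, alpha=0.05, ratio=1.0)
    p_all_significant *= power

print(f"P(all six studies significant) = {p_all_significant:.3f}")
```

When this product is very small, a run of uniformly significant results is itself improbable, which is what motivates the “too good to be true” conclusion.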


2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Fushun Zhang ◽  
Yuanyuan Zhang ◽  
Nan Jiang ◽  
Qiao Zhai ◽  
Juanjuan Hu ◽  
...  

Background. Previous studies have reported a strong correlation between hypertension and psychological traits, including impulsive emotion and a mindful, relaxed temperament; of these, mindfulness and relaxation might have a benign influence on blood pressure, ameliorating hypertension. However, this conclusion has not been confirmed. Objective. This meta-analysis was performed to investigate and confirm the influence of mindfulness and relaxation interventions on essential hypertension. Methods. Systematic searches were conducted in common English and Chinese electronic databases (i.e., PubMed/MEDLINE, EMBASE, Web of Science, CINAHL, PsycINFO, Cochrane Library, and the Chinese Biomedical Literature Database) from 1980 to 2020. A meta-analysis including 5 studies was performed using RevMan 5.4.1 software to estimate the influence of mindfulness and relaxation on blood pressure. Publication bias and heterogeneity of samples were tested using a funnel plot. Studies were analyzed using either a random-effects model or a fixed-effects model. Results. All 5 studies investigated the influence of mindfulness and relaxation on diastolic and systolic blood pressure, with a total of 205 participants in the control group and 204 in the intervention group. A random-effects model (REM) was used to calculate the pooled effect for mindfulness and relaxation on diastolic blood pressure (I² = 0%, τ² = 0.000, P = 0.41); the pooled effect size (MD) was 0.30 (95% CI = −0.81 to 1.42, P = 0.59). REM was also used to calculate the pooled effect on systolic blood pressure (I² = 49%, τ² = 3.05, P = 0.10); the pooled effect size (MD) was −1.05 (95% CI = −3.29 to 1.18, P = 0.36). The results of this meta-analysis were influenced by publication bias to some degree. Conclusion. The results suggest that mindfulness and relaxation have little influence on diastolic or systolic blood pressure when used as interventions in treating CVD and hypertension.
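For readers unfamiliar with the pooling step reported above, here is a minimal sketch of DerSimonian–Laird random-effects pooling (illustrative only; the mean differences and standard errors are invented, not the data from the five trials):

```python
import numpy as np
from scipy import stats

# Hypothetical per-study mean differences (mmHg) and standard errors
# (invented for illustration; not the data from the five trials above).
md = np.array([0.8, -0.5, 1.2, 0.1, -0.2])
se = np.array([1.1, 0.9, 1.4, 0.8, 1.0])

# DerSimonian-Laird random-effects pooling
w_fixed = 1.0 / se**2
mu_fixed = np.sum(w_fixed * md) / np.sum(w_fixed)
Q = np.sum(w_fixed * (md - mu_fixed) ** 2)           # Cochran's Q
df = len(md) - 1
C = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - df) / C)                        # between-study variance
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0  # heterogeneity (%)

w_rand = 1.0 / (se**2 + tau2)                        # random-effects weights
mu = np.sum(w_rand * md) / np.sum(w_rand)
se_mu = np.sqrt(1.0 / np.sum(w_rand))
ci = (mu - 1.96 * se_mu, mu + 1.96 * se_mu)
p = 2 * stats.norm.sf(abs(mu / se_mu))

print(f"MD = {mu:.2f}, 95% CI = [{ci[0]:.2f}, {ci[1]:.2f}], "
      f"p = {p:.2f}, I^2 = {I2:.0f}%, tau^2 = {tau2:.3f}")
```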


2021 ◽  
Author(s):  
Alessandro Sparacio ◽  
Ivan Ropovik ◽  
Gabriela M. Jiga-Boy ◽  
Hans IJzerman

In this Registered Report, we used meta-analysis to explore whether being in nature and emotional social support are effective in reducing levels of stress. We retrieved all the relevant articles that investigated a connection between one of these two strategies and various components of stress (physiological, affective, and cognitive), as well as affective consequences of stress. We followed a stringent analysis workflow (including permutation-based selection models and multilevel regression-based models) to provide publication bias-corrected estimates. We found [no evidence for the efficacy of either strategy/evidence for one of the two strategies/evidence for both strategies] with an estimated mean effect size of [xx/xx] and we recommend [recommendation will be provided if necessary].

