Replicability, Robustness, and Reproducibility in Psychological Science

2021 ◽  
Vol 73 (1) ◽  
Author(s):  
Brian A. Nosek ◽  
Tom E. Hardwicke ◽  
Hannah Moshontz ◽  
Aurélien Allard ◽  
Katherine S. Corker ◽  
...  

Replication—an important, uncommon, and misunderstood practice—is gaining appreciation in psychology. Achieving replicability is important for making research progress. If findings are not replicable, then prediction and theory development are stifled. If findings are replicable, then interrogation of their meaning and validity can advance knowledge. Assessing replicability can be productive for generating and testing hypotheses by actively confronting current understandings to identify weaknesses and spur innovation. For psychology, the 2010s might be characterized as a decade of active confrontation. Systematic and multi-site replication projects assessed current understandings and observed surprising failures to replicate many published findings. Replication efforts highlighted sociocultural challenges such as disincentives to conduct replications and a tendency to frame replication as a personal attack rather than a healthy scientific practice, and they raised awareness that replication contributes to self-correction. Nevertheless, innovation in doing and understanding replication and its cousins, reproducibility and robustness, has positioned psychology to improve research practices and accelerate progress.


2019 ◽  
Author(s):  
Richard Ramsey

The credibility of psychological science has recently been questioned due to low levels of reproducibility and the routine use of inadequate research practices (Chambers, 2017; Open Science Collaboration, 2015; Simmons, Nelson, & Simonsohn, 2011). In response, wide-ranging reform of scientific practice has been proposed (e.g., Munafò et al., 2017), which has been dubbed a “credibility revolution” (Vazire, 2018). My aim here is to explain why and how we should embrace such reform and to discuss its likely implications.


2022 ◽  
Author(s):  
Joshua W. Clegg

Good Science is an account of psychological research emphasizing the moral foundations of inquiry. This volume brings together existing disciplinary critiques of scientism, objectivism, and instrumentalism, and then discusses how these contribute to institutionalized privilege and to less morally responsive research practices. The author draws on historical, critical, feminist, and science studies traditions to provide an alternative account of psychological science and to highlight the irreducibly moral foundations of everyday scientific practice. This work outlines a theoretical framework for thinking about and practicing psychology in ways that center moral responsibility, collective commitment, and justice. The book then applies this framework, describing psychological research practices in terms of their moral dilemmas. Also included are materials meant to aid in methods instruction and mentoring.


2021 ◽  
Vol 4 (2) ◽  
pp. 251524592110181
Author(s):  
Manikya Alister ◽  
Raine Vickers-Jones ◽  
David K. Sewell ◽  
Timothy Ballard

Judgments regarding replicability are vital to scientific progress. The metaphor of “standing on the shoulders of giants” encapsulates the notion that progress is made when new discoveries build on previous findings. Yet attempts to build on findings that are not replicable could mean a great deal of time, effort, and money wasted. In light of the recent “crisis of confidence” in psychological science, the ability to accurately judge the replicability of findings may be more important than ever. In this Registered Report, we examine the factors that influence psychological scientists’ confidence in the replicability of findings. We recruited corresponding authors of articles published in psychology journals between 2014 and 2018 to complete a brief survey in which they were asked to consider 76 specific study attributes that might bear on the replicability of a finding (e.g., preregistration, sample size, statistical methods). Participants were asked to rate the extent to which information regarding each attribute increased or decreased their confidence in the finding being replicated. We examined the extent to which each research attribute influenced average confidence in replicability. We found evidence for six reasonably distinct underlying factors that influenced these judgments and individual differences in the degree to which people’s judgments were influenced by these factors. The conclusions reveal how certain research practices affect other researchers’ perceptions of robustness. We hope our findings will help encourage the use of practices that promote replicability and, by extension, the cumulative progress of psychological science.


2020 ◽  
Author(s):  
D. Stephen Lindsay

Psychological scientists strive to advance understanding of how and why we animals do and think and feel as we do. This is difficult, in part because flukes of chance and measurement error obscure researchers’ perceptions. Many psychologists use inferential statistical tests to peer through the murk of chance and discern relationships between variables. Those tests are powerful tools, but they must be wielded with skill. Moreover, research reports must convey to readers a detailed and accurate understanding of how the data were obtained and analyzed. Research psychologists often fall short in those regards. This paper attempts to motivate and explain ways to enhance the transparency and replicability of psychological science. Specifically, I speak to how publication bias and p-hacking contribute to effect-size exaggeration in the published literature, and how effect-size exaggeration contributes, in turn, to replication failures. Then I present seven steps toward addressing these problems: telling the truth; upgrading statistical knowledge; standardizing aspects of research practices; documenting lab procedures in a lab manual; making materials, data, and analysis scripts transparent; addressing constraints on generality; and collaborating.


2020 ◽  
Author(s):  
Cameron Brick ◽  
Bruce Hood ◽  
Vebjørn Ekroll ◽  
Lee de-Wit

The reliance in psychology on verbal definitions means that psychological research is unusually moored to how humans think and communicate about categories. Psychological concepts (e.g., intelligence; attention) are easily assumed to represent objective, definable categories with an underlying essence. Like the 'vital forces' previously thought to animate life, these assumed essences can create an illusion of understanding. We describe a pervasive tendency across psychological science to assume that essences explain phenomena by synthesizing a wide range of research lines from cognitive, clinical, and biological psychology and neuroscience. Labeling a complex phenomenon can appear as theoretical progress before sufficient evidence that the described category has a definable essence or known boundary conditions. Category labels can further undermine progress by masking contingent and contextual relationships and obscuring the need to specify mechanisms. Finally, we highlight examples of promising methods that circumvent the lure of essences and we suggest four concrete strategies to identify and avoid essentialist intuitions in theory development.


2019 ◽  
Vol 33 (6) ◽  
pp. 633-642 ◽  
Author(s):  
Rebekah Russell-Bennett ◽  
Raymond P. Fisk ◽  
Mark S. Rosenbaum ◽  
Nadia Zainuddin

Purpose
The purpose of this paper is to discuss two parallel but distinct subfields of marketing that share common interests (enhancing consumers’ lives and improving well-being): social marketing and transformative service research. The authors also suggest a research agenda.

Design/methodology/approach
The paper offers a conceptual approach and research agenda by comparing and contrasting the two marketing fields of transformative service research and social marketing.

Findings
Specifically, this paper proposes three opportunities to propel both fields forward: 1) breaking boundaries that inhibit research progress, which includes collaboration between public, private and nonprofit sectors to improve well-being; 2) adopting more customer-oriented approaches that go beyond the organizational and individual levels; and 3) taking a non-linear approach to theory development that innovates and co-creates solutions.

Originality/value
This paper presents the challenges and structural barriers for two subfields seeking to improve human well-being. This paper is the first to bring these subfields together and propose a way for them to move forward together.


2021 ◽  
Author(s):  
Taym Alsalti

Concern has been mounting over the reproducibility of findings in psychology and other empirical sciences. Large-scale replication attempts have found worrying results. The high rate of false findings in the published research has been partly attributed to scientists’ engagement in questionable research practices (QRPs). I discuss reasons and solutions for this problem. Employing a content analysis of empirical studies published in the years 2007 and 2017, I found a decrease in the prevalence of QRPs over the investigated decade. I subsequently discuss possible explanations for the improvement as well as further potential contributors to the high rate of false findings in science. Most scientists agree that a change towards more open and transparent scientific practice on the part of both scientists and publishers is necessary. Debate exists as to how this should be achieved.

