The Replicability Crisis and Public Trust in Psychological Science

2019
Author(s):  
Farid Anvari ◽  
Daniel Lakens

Replication failures of past findings in several scientific disciplines, including psychology, medicine, and experimental economics, have created a ‘crisis of confidence’ among scientists. Psychological science has been at the forefront of tackling these issues, with discussions of replication failures and scientific self-criticism of questionable research practices (QRPs) increasingly taking place in public forums. How this replicability crisis affects the public’s trust is a question yet to be answered by research. Whereas some researchers believe that the public’s trust will be positively affected or maintained, others believe trust will be diminished. Because it is our field of expertise, we focus on trust in psychological science. We performed a study testing how public trust in past and future psychological research is affected by being informed about (i) replication failures, (ii) replication failures and criticisms of QRPs, and (iii) replication failures, criticisms of QRPs, and proposed reforms. Results from a mostly European sample (N = 1,129) showed that, compared to a control group, trust in past research was reduced when people were informed about these aspects of the replication crisis, whereas trust in future research was maintained, except when people were also informed about proposed reforms. Potential explanations are discussed.

2021
Author(s):  
Benjamin George Farrar ◽  
Christopher Krupenye ◽  
Alba Motes Rodrigo ◽  
Claudio Tennie ◽  
Julia Fischer ◽  
...  

Replication is an important tool for testing and developing scientific theories. Areas of biomedical and psychological research have experienced a replication crisis, in which many published findings failed to replicate. In its wake, many other scientific disciplines have begun to examine the robustness of their own findings. This chapter examines replication in primate cognition research. First, it discusses the frequency and success of replication studies in primate cognition and explores the challenges researchers face when designing and interpreting replication studies across the wide range of research designs used in the field. Next, it discusses the kinds of research that can probe the robustness of published findings, especially when replication studies are difficult to perform. The chapter concludes with a discussion of the different roles that replication can play in primate cognition research.


2020
Article 070674372096173
Author(s):  
Keith S. Dobson ◽  
Veronika Markova ◽  
Alainna Wen ◽  
Laura M. Smith

Objectives: The Working Mind is a program designed to reduce stigmatizing attitudes toward mental illness, improve resilience, and promote mental health in the general workplace. Previous research has revealed positive program effects in a variety of workplace settings. This study advances previous work by implementing randomization and a control group to assess the intervention’s efficacy. Methods: The program was evaluated using a cluster-randomized design, with pretest, posttest, and a 3-month follow-up in 2 implementation groups across 4 sites. Results: The Working Mind program was effective at decreasing mental health stigma and increasing self-reported resilience and coping skills from pretest to posttest in both delivery groups, and these effects were maintained at the 3-month follow-up. Qualitative data provided further evidence that participants benefited from the program. Conclusions: This study represents an advance over past research and provides further support for the efficacy of the Working Mind program. Directions for future research, including replication using rigorous methodological procedures and examination of program effects over longer follow-up intervals, are discussed.


2019
Author(s):  
Dustin Fife ◽  
Joseph Lee Rodgers

In light of the “replication crisis,” some (e.g., Nelson, Simmons, & Simonsohn, 2018) advocate greater policing and transparency in research methods. Others (Baumeister, 2016; Finkel, Eastwick, & Reis, 2017; Goldin-Meadow, 2016; Levenson, 2017) argue against rigid requirements that may inadvertently restrict discovery. We embrace both positions and argue that proper understanding and implementation of the well-established paradigm of Exploratory Data Analysis (EDA; Tukey, 1977) is necessary to push beyond the replication crisis. Unfortunately, many researchers do not realize EDA exists (Goldin-Meadow, 2016), fail to understand the philosophy and proper tools of exploration (Baumeister, 2016), or reject EDA as unscientific (Lindsay, 2015). This mistreatment of EDA is unfortunate and usually rests on misunderstanding its nature and goals. We develop an expanded typology that situates EDA, confirmatory data analysis (CDA), and “rough CDA” in the same framework as fishing, p-hacking, and HARKing, and argue that most, if not all, questionable research practices (QRPs) would be resolved by understanding and implementing the EDA/CDA gradient. We argue that most psychological research is rough CDA, which has often, and inadvertently, used the wrong tools. We conclude with guidelines for integrating these typologies into the kind of cumulative research program that is necessary to move beyond the replication crisis.
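A concrete way to see the EDA/CDA gradient in practice is to separate exploration from confirmation with a data split: hypotheses may be generated freely on one half, and only a single pre-specified test is run on the held-out half. The sketch below illustrates this general idea; it is not the authors’ typology, and the variable names, simulated data, and 50/50 split are assumptions for illustration.

```python
# Minimal sketch of an exploration/confirmation split (illustrative only):
# explore freely on one half of the data, then run a single pre-specified
# confirmatory test on the untouched half.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
group = rng.integers(0, 2, size=200)            # two simulated conditions
outcome = 0.3 * group + rng.normal(size=200)    # simulated outcome

explore = rng.random(200) < 0.5                 # random 50/50 split
confirm = ~explore

# EDA phase: inspect the exploratory half however you like
d_explore = (outcome[explore & (group == 1)].mean()
             - outcome[explore & (group == 0)].mean())
print(f"exploratory mean difference: {d_explore:.2f}")

# CDA phase: one pre-specified test on the held-out half
result = ttest_ind(outcome[confirm & (group == 1)],
                   outcome[confirm & (group == 0)])
print(f"confirmatory test: t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```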


2013
Vol 112 (3)
pp. 872-886
Author(s):  
Martin Voracek ◽  
Elisabeth Mohr ◽  
Michael Hagmann

Even small group-mean differences (whether combined with variance differences or not) or variance differences alone (absent mean differences) can generate marked and sometimes surprising imbalances in how the compared groups are represented in the distributional tails. Such imbalances in group representation, quantified as tail ratios, have general importance in the context of any threshold, susceptibility, diathesis-stress, selection, or similar model (including the study of sex differences), as widely conceptualized and applied in the psychological, social, medical, and biological sciences. However, commonly used effect-size measures, such as Cohen's d, largely exploit information around the center of distributions rather than from the tails, thereby missing potentially important patterns in the tail regions. This account reviews the background and history of tail ratios, emphasizes their importance for psychological research, proposes a consensus approach for defining and interpreting them, introduces a tail-ratio calculator, and outlines a future research agenda.
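To make the contrast with center-based measures concrete, here is a minimal sketch, not the authors’ tail-ratio calculator, that computes a tail ratio for two normal groups; the cutoff and the parameter values are arbitrary assumptions chosen for illustration.

```python
# Minimal sketch of a tail-ratio computation for two normal groups:
# the ratio of the proportions of each group falling above a cutoff.
# Parameter values are illustrative assumptions, not empirical estimates.
from scipy.stats import norm

def tail_ratio(cutoff, mean_a=0.0, sd_a=1.0, mean_b=0.0, sd_b=1.0):
    """P(A > cutoff) / P(B > cutoff) under normality."""
    return norm.sf(cutoff, mean_a, sd_a) / norm.sf(cutoff, mean_b, sd_b)

# A small mean difference (Cohen's d = 0.2) already skews the far tail
print(tail_ratio(cutoff=2.0, mean_a=0.2))   # ~1.6
# Equal means but a modest variance difference does the same
print(tail_ratio(cutoff=2.0, sd_a=1.1))     # ~1.5
```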


Author(s):  
Leon Schjoedt ◽  
Justin B. Craig

Purpose: Given the nature of entrepreneurship, a domain-specific self-efficacy scale should pertain to venture creation, be unidimensional, and be developed and validated using nascent entrepreneurs, the persons for whom self-efficacy may be most important. Extant measures employed in entrepreneurship research do not meet all of these criteria. The purpose of this paper is to develop and validate a unidimensional entrepreneurial self-efficacy (ESE) scale based on samples of nascent entrepreneurs. Design/methodology/approach: Data from a sample of nascent entrepreneurs and items from PSED I were used to develop and assess the validity of a new ESE scale. To further establish scale validity, a comparison group from PSED I, along with a sample of nascent entrepreneurs from PSED II, was employed. Findings: A unidimensional three-item self-efficacy scale assessing a person’s belief that s/he can successfully create a new business is developed and validated using samples of nascent entrepreneurs and a control group. Research limitations/implications: The scale offers an opportunity to enhance research-based assessment with a parsimonious, reliable, and valid unidimensional measure of ESE. It may strengthen future research findings, as well as prompt reconsideration of past findings, on many issues in the entrepreneurship literature. Originality/value: This research uses samples of nascent entrepreneurs to provide a new three-item scale for assessing ESE that is parsimonious, valid, and unidimensional.
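For readers who want to reproduce the kind of reliability evidence such a scale rests on, the sketch below computes Cronbach’s alpha for a three-item measure on simulated responses; the data are invented, and this is not the authors’ PSED analysis.

```python
# Minimal sketch: Cronbach's alpha for a three-item scale,
# alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
# Responses are simulated; this is not the authors' PSED analysis.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of item responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=300)                              # shared construct
responses = latent[:, None] + rng.normal(scale=0.8, size=(300, 3))
print(f"alpha = {cronbach_alpha(responses):.2f}")          # ~0.8 here
```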


2021
Author(s):  
Michele B. Nuijten

Increasing evidence indicates that many published findings in psychology may be overestimated or even false. An often-heard response to this “replication crisis” is to replicate more: replication studies should weed out false positives over time and increase robustness of psychological science. However, replications take time and money – resources that are often scarce. In this chapter, I propose an efficient alternative strategy: a four-step robustness check that first focuses on verifying reported numbers through reanalysis before replicating studies in a new sample.
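The reanalysis step can be as simple as checking that reported statistics are internally consistent, in the spirit of tools such as statcheck. The sketch below recomputes the two-sided p value implied by a reported t statistic and its degrees of freedom; the reported numbers and the rounding tolerance are hypothetical, and this is a sketch rather than the chapter’s actual procedure.

```python
# Minimal sketch of a reported-statistics consistency check: recompute the
# two-sided p value implied by a reported t statistic and its degrees of
# freedom, then compare it with the reported p (allowing for rounding).
from scipy.stats import t as t_dist

def check_t_test(t_value, df, reported_p, tol=0.01):
    recomputed_p = 2 * t_dist.sf(abs(t_value), df)   # two-sided p from |t|
    return recomputed_p, abs(recomputed_p - reported_p) <= tol

# Hypothetical reported result: t(28) = 2.20, p = .04
p, consistent = check_t_test(t_value=2.20, df=28, reported_p=0.04)
print(f"recomputed p = {p:.3f}, consistent with report: {consistent}")
```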


2018
Author(s):  
Jonathon McPhetres

Concerns about the generalizability, veracity, and relevance of social psychological research resurface periodically within psychology. While many changes are being implemented to improve the integrity of published research and to clarify the publication record, less attention has been given to questions of relevance. In this short commentary, I offer my perspective on questions of relevance and present some data from the website Reddit. The data show that people care greatly about psychological research: social psychology studies are among the highest-upvoted on the subreddit r/science. However, upvotes on Reddit are unrelated to the metrics researchers use to gauge importance (e.g., impact factor, journal rankings, and citations), suggesting a disconnect between what psychologists and lay audiences see as relevant. I interpret these data in light of the replication crisis and suggest that the spotlight on our field makes the need for reform all the more important. Whether we like it or not, people care about, share, and use psychological research in their lives, which means we should ensure that our findings are reported accurately and transparently.
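The claim that upvotes and academic impact metrics are unrelated is, at bottom, a rank-correlation question; the sketch below shows such a check on invented placeholder numbers, not the paper’s data.

```python
# Minimal sketch: rank correlation between Reddit upvotes and citation
# counts for a handful of posts. All numbers are invented placeholders.
from scipy.stats import spearmanr

upvotes = [15200, 310, 8900, 1200, 450, 22000, 90, 3400]
citations = [12, 85, 3, 40, 66, 7, 150, 22]

rho, p = spearmanr(upvotes, citations)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```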


Author(s):  
Anđela Keljanović

At a time when social psychologists believed they could be proud of their discipline, the devastating news broke that Diederik Stapel had committed large-scale scientific fraud. This event coincided with the start of the discussion about trust in psychological findings. It was soon followed by a report of a series of nine studies that failed to replicate the 'professor' priming study. These replication results were astounding given earlier reports of successful replications. In response to the crisis of confidence in the field's findings, the Open Science Collaboration subsequently replicated 100 correlational and experimental studies published in 2008 in Psychological Science, the Journal of Personality and Social Psychology, and the Journal of Experimental Psychology: Learning, Memory, and Cognition. Whereas 97% of the original studies reported significant positive effects, only 36% of the replications did. Even these findings have in turn been questioned through Bayes factor calculations. Beyond outright fraud, questionable research practices and publication bias, which yield false-positive results, undermine confidence in the validity of psychological research findings. A false-positive finding is precisely the costly error of rejecting a true null hypothesis. Yet had Stapel (2011) confirmed the null hypothesis, had Bargh (1996) found that priming participants did not affect walking speed, or had Dijksterhuis and van Knippenberg (1998) reported that participants primed with the word 'professor' did not improve their performance on a knowledge task, no one would have been interested in their findings. Null findings are interesting only when they contradict the main hypothesis derived from a theory or conflict with a body of previous studies. Because good experimental research is usually conducted to test theories, and because researchers can never be sure that they have chosen the optimal operationalization of a given construct or successfully controlled the third variables that may be responsible for their results, a theory can never be proven true.


2019
Author(s):  
Tobias Wingen ◽  
Jana Berkessel ◽  
Birte Englich

In current debates within psychology, the low replicability of psychological findings is a central topic. While the discussion about the replication crisis has had a substantial impact on psychological research, we know less about how it affects public trust in psychology. In this paper, we examine whether low replicability damages public trust and how such damage can be repaired. Studies 1, 2, and 3 provide correlational and experimental evidence that low replicability reduces public trust in psychology. Additionally, Studies 3, 4, and 5 evaluate the effectiveness of commonly used trust-repair strategies, such as information about increased transparency (Study 3), explanations for low replicability (Study 4), or recovered replicability (Study 5). We found no evidence that these strategies significantly repair trust. However, it remains possible that they have small but potentially meaningful effects that could be detected with larger samples. Overall, our studies highlight the importance of replicability for public trust in psychology.


2017
Vol 12 (4)
pp. 660-664
Author(s):  
Scott O. Lilienfeld

The past several years have been a time for soul searching in psychology, as we have gradually come to grips with the reality that some of our cherished findings are less robust than we had assumed. Nevertheless, the replication crisis highlights the operation of psychological science at its best, as it reflects our growing humility. At the same time, institutional variables, especially the growing emphasis on external funding as an expectation or de facto requirement for faculty tenure and promotion, pose largely unappreciated hazards for psychological science, including (a) incentives for engaging in questionable research practices, (b) a single-minded focus on programmatic research, (c) intellectual hyperspecialization, (d) disincentives for conducting direct replications, (e) stifling of creativity and intellectual risk taking, (f) researchers promising more than they can deliver, and (g) diminished time for thinking deeply. Preregistration should assist with (a), but will do little about (b) through (g). Psychology is beginning to right the ship, but it will need to confront the increasingly deleterious impact of the grant culture on scientific inquiry.

