How Do We Choose Our Giants? Perceptions of Replicability in Psychological Science

2021 · Vol 4 (2) · pp. 251524592110181
Author(s): Manikya Alister, Raine Vickers-Jones, David K. Sewell, Timothy Ballard

Judgments regarding replicability are vital to scientific progress. The metaphor of “standing on the shoulders of giants” encapsulates the notion that progress is made when new discoveries build on previous findings. Yet attempts to build on findings that are not replicable could mean a great deal of time, effort, and money wasted. In light of the recent “crisis of confidence” in psychological science, the ability to accurately judge the replicability of findings may be more important than ever. In this Registered Report, we examine the factors that influence psychological scientists’ confidence in the replicability of findings. We recruited corresponding authors of articles published in psychology journals between 2014 and 2018 to complete a brief survey in which they were asked to consider 76 specific study attributes that might bear on the replicability of a finding (e.g., preregistration, sample size, statistical methods). Participants were asked to rate the extent to which information regarding each attribute increased or decreased their confidence in the finding being replicated. We examined the extent to which each research attribute influenced average confidence in replicability. We found evidence for six reasonably distinct underlying factors that influenced these judgments, as well as individual differences in the degree to which people’s judgments were influenced by these factors. The conclusions reveal how certain research practices affect other researchers’ perceptions of robustness. We hope our findings will help encourage the use of practices that promote replicability and, by extension, the cumulative progress of psychological science.
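The six-factor result reported above comes from modeling the structure of the attribute ratings. As a rough, hypothetical illustration of that kind of analysis (the simulated data, library calls, and parameter choices below are assumptions, not the authors' actual pipeline), an exploratory factor model can be fit to a respondents-by-attributes rating matrix:

```python
# Hypothetical sketch: simulated ratings, not the study's data or analysis code.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents, n_attributes, n_factors = 300, 76, 6

# Simulate ratings that are driven by a few latent factors plus noise.
latent = rng.normal(size=(n_respondents, n_factors))
loadings = rng.normal(size=(n_factors, n_attributes))
ratings = latent @ loadings + rng.normal(scale=0.5, size=(n_respondents, n_attributes))

# Exploratory factor analysis: recover the latent structure from the ratings.
fa = FactorAnalysis(n_components=n_factors, rotation="varimax")
scores = fa.fit_transform(ratings)   # per-respondent factor scores (300 x 6)
print(fa.components_.shape)          # (6, 76): loading of each attribute on each factor
```

Inspecting which attributes load on which factor, and how the factor scores vary across respondents, is where results like the six distinct factors and the individual differences described above would show up.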

2018 · Vol 41
Author(s): Michał Białek

If we want psychological science to have a meaningful real-world impact, it has to be trusted by the public. Scientific progress is noisy; accordingly, replications sometimes fail even for true findings. We need to communicate the acceptability of uncertainty to the public and our peers, to prevent psychology from being perceived as having nothing to say about reality.


Psychology · 2013
Author(s): Robert D. Latzman, Yuri Shishido

The title of “Godfather of Personality” may well be ascribed to Gordon Allport, who was the first to make public efforts to promote the “field of personality” in the 1930s (see Allport and Vernon 1930, cited under Gordon Allport). Personality psychology—located within what many argue is the broadest, most encompassing branch of psychological science—can be defined as the study of the dynamic organization, within the individual, of psychological systems that create the person’s characteristic patterns of behaviors, thoughts, and feelings (see Allport 1961, also cited under Gordon Allport). The field of personality psychology is concerned with both individual differences—that is, the way in which people differ from one another—and intrapersonal functioning, the set of processes taking place within any individual person. The area of personality psychology is often grouped with social psychology in research programs at universities; however, these are quite different approaches to understanding individuals. While social psychology attempts to understand the individual in interpersonal or group contexts (i.e., “when placed in Situation A, how do people, in general, respond?”), personality psychology investigates individual differences (i.e., “how are people similar and different in how they respond to the same situation?”). Personality psychology has a long history and, as such, is an extremely large and broad field that includes a large number of approaches. Discerning readers will quickly note that the current chapter is largely focused on what has come to be the most commonly studied perspective, the trait approach. Those readers interested in other approaches are referred to a number of resources focusing on Other Approaches within the diverse field.


2020
Author(s): D. Stephen Lindsay

Psychological scientists strive to advance understanding of how and why we animals do, think, and feel as we do. This is difficult, in part because flukes of chance and measurement error obscure researchers’ perceptions. Many psychologists use inferential statistical tests to peer through the murk of chance and discern relationships between variables. Those tests are powerful tools, but they must be wielded with skill. Moreover, research reports must convey to readers a detailed and accurate understanding of how the data were obtained and analyzed. Research psychologists often fall short in those regards. This paper attempts to motivate and explain ways to enhance the transparency and replicability of psychological science. Specifically, I speak to how publication bias and p-hacking contribute to effect-size exaggeration in the published literature, and how effect-size exaggeration contributes, in turn, to replication failures. Then I present seven steps toward addressing these problems: telling the truth; upgrading statistical knowledge; standardizing aspects of research practices; documenting lab procedures in a lab manual; making materials, data, and analysis scripts transparent; addressing constraints on generality; and collaborating.
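As a self-contained sketch of the mechanism described above rather than the author's own analysis, the following simulation (with arbitrary, assumed parameters) shows how publishing only statistically significant results exaggerates the average reported effect size relative to the true effect:

```python
# Illustrative sketch only: shows how selective publication of significant
# results exaggerates effect sizes. All parameters are arbitrary assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_d, n_per_group, n_studies = 0.2, 20, 10_000

published = []     # effect sizes from "significant" studies only
all_effects = []
for _ in range(n_studies):
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(true_d, 1.0, n_per_group)
    t, p = stats.ttest_ind(b, a)
    # Cohen's d with the pooled standard deviation (equal group sizes).
    d = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    all_effects.append(d)
    if p < 0.05 and d > 0:     # publication filter: significant, right direction
        published.append(d)

print(f"true effect:            d = {true_d:.2f}")
print(f"mean of all studies:    d = {np.mean(all_effects):.2f}")
print(f"mean of published only: d = {np.mean(published):.2f}")   # markedly inflated
```

With these assumed settings, the mean of the published estimates comes out several times larger than the true effect, which in turn sets up the replication failures the paper discusses: a replication powered to detect the exaggerated published effect is underpowered for the true one.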


2020
Author(s): Chia-Lung Shih, Te-Yu Hung

Background: A small sample size (n < 30 per treatment group) is usually enrolled to investigate differences in efficacy between treatments for knee osteoarthritis (OA). The objective of this study was to use simulation to compare the power of four statistical methods for detecting differences in efficacy between two treatments for knee OA when sample sizes are small.
Methods: A total of 10,000 replicates of 5 sample sizes (n = 10, 15, 20, 25, and 30 per group) were generated based on previously reported measures of treatment efficacy. Four statistical methods were used to compare the differences in efficacy between treatments: the two-sample t-test (t-test), the Mann-Whitney U test (M-W test), the Kolmogorov-Smirnov test (K-S test), and the permutation test (perm-test).
Results: The bias of the simulated parameter means decreased with sample size, but the CV% of the simulated parameter means varied with sample size for all parameters. At the largest sample size (n = 30), the CV% reached a small level (<20%) for almost all parameters, but the bias did not. Among the non-parametric tests for small samples, the perm-test had the highest statistical power, and its false-positive rate was not affected by sample size. However, the power of the perm-test did not reach 80%, even at the largest sample size (n = 30).
Conclusion: The perm-test is recommended when comparing the differences in efficacy between two treatments for knee OA with small sample sizes.
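As a minimal sketch of this kind of power simulation (assuming normal data and an arbitrary effect size, not the efficacy parameters reported for knee OA), the four tests could be compared as follows:

```python
# Minimal power-simulation sketch; assumed normal data and an arbitrary effect
# size, not the efficacy measures used in the study.
import numpy as np
from scipy import stats

def power(test, n, effect, reps=1000, alpha=0.05, seed=0):
    """Fraction of simulated two-group datasets in which `test` rejects H0."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(effect, 1.0, n)
        if test(a, b, rng) < alpha:
            hits += 1
    return hits / reps

def t_test(a, b, rng):
    return stats.ttest_ind(a, b).pvalue

def mw_test(a, b, rng):
    return stats.mannwhitneyu(a, b, alternative="two-sided").pvalue

def ks_test(a, b, rng):
    return stats.ks_2samp(a, b).pvalue

def perm_test(a, b, rng, n_perm=500):
    # Permutation test on the difference in means: reshuffle group labels.
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(pooled[:len(a)].mean() - pooled[len(a):].mean()) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)

for n in (10, 15, 20, 25, 30):
    row = {name: power(fn, n, effect=0.8)
           for name, fn in [("t", t_test), ("M-W", mw_test),
                            ("K-S", ks_test), ("perm", perm_test)]}
    print(n, row)
```

Because the permutation test rebuilds the null distribution by reshuffling group labels rather than relying on distributional assumptions, it is a natural reference point for small samples, which is consistent with the recommendation above.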


2020
Author(s): Kyoshiro Sasaki, Yuki Yamada

The Registered Reports system is key to preventing questionable research practices. Under this system, manuscripts, including their detailed protocols (i.e., hypothesis, experimental design, sample size, and methods of statistical analysis), are reviewed prior to data collection. If a protocol manuscript is accepted, publication of the full manuscript, including the results and discussion, is guaranteed in principle, regardless of whether the collected data support the registered hypothesis. However, this assurance of publication might be broken under the impact of the COVID-19 pandemic. The present paper reports the first author’s real-life experience of the collapse of this assurance of publication in the Registered Reports system and discusses the drawbacks of that collapse. Furthermore, we propose the implementation of a journal section specific to protocol manuscripts as a solution to this crisis in the Registered Reports system.


2019
Author(s): Daniel Lakens

For over two centuries, researchers have been criticized for using research practices that make it easier to present data in line with what they wish to be true. With the rise of the internet, it has become easier to preregister the theoretical and empirical basis for predictions, the experimental design, the materials, and the analysis code. Whether the practice of preregistration is valuable depends on your philosophy of science. Here, I provide a conceptual analysis of the value of preregistration for psychological science from an error statistical philosophy (Mayo, 2018). The goal of preregistration is to allow others to transparently evaluate the capacity of a test to falsify a prediction, that is, the severity of the test. Researchers who aim to test predictions with severity should find value in the practice of preregistration. I differentiate the goal of preregistration from its positive externalities, discuss how preregistration itself does not make a study better or worse than a non-preregistered study, and highlight the importance of evaluating the usefulness of a tool such as preregistration based on an explicit consideration of your philosophy of science.

