Advances in Methods and Practices in Psychological Science
Latest Publications

Total documents: 151 (five years: 105)
H-index: 20 (five years: 10)
Published by: SAGE Publications
ISSN: 2515-2467, 2515-2459

2021 ◽ Vol 4 (4) ◽ pp. 251524592110453
Author(s): Eric Hehman, Sally Y. Xie

Methods in data visualization have rapidly advanced over the past decade. Although social scientists regularly need to visualize the results of their analyses, they receive little training in how to best design their visualizations. This tutorial is for individuals whose goal is to communicate patterns in data as clearly as possible to other consumers of science and is designed to be accessible to both experienced and relatively new users of R and ggplot2. In this article, we assume some basic statistical and visualization knowledge and focus on how to visualize rather than what to visualize. We distill the science and wisdom of data-visualization expertise from books, blogs, and online forum discussion threads into recommendations for social scientists looking to convey their results to other scientists. Overarching design philosophies and color decisions are discussed before giving specific examples of code in R for visualizing central tendencies, proportions, and relationships between variables.
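
The tutorial's own code is in the article; as a minimal flavor of the kind of plot it covers (a central tendency per condition with raw data and an interval of uncertainty), the sketch below uses ggplot2 on simulated data. The data frame, variable names, and values here are illustrative assumptions, not taken from the article.

```r
# Minimal ggplot2 sketch (illustrative only; not the article's code):
# group means with approximate 95% CIs drawn over jittered raw data.
library(ggplot2)

set.seed(1)
d <- data.frame(
  condition = rep(c("Control", "Treatment"), each = 50),
  score     = c(rnorm(50, 100, 15), rnorm(50, 108, 15))
)

ggplot(d, aes(x = condition, y = score)) +
  geom_jitter(width = 0.1, alpha = 0.4, colour = "grey50") +  # raw observations
  stat_summary(fun = mean, geom = "point", size = 3) +        # central tendency
  stat_summary(fun.data = mean_se, fun.args = list(mult = 1.96),
               geom = "errorbar", width = 0.1) +              # mean +/- 1.96 SE
  labs(x = NULL, y = "Score") +
  theme_minimal()
```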


2021 ◽ Vol 4 (4) ◽ pp. 251524592110459
Author(s): Marton Kovacs, Rink Hoekstra, Balazs Aczel

Errors are an inevitable consequence of human fallibility, and researchers are no exception. Most researchers can recall major frustrations or serious time delays due to human errors while collecting, analyzing, or reporting data. The present study is an exploration of mistakes made during the data-management process in psychological research. We surveyed 488 researchers regarding the type, frequency, seriousness, and outcome of mistakes that have occurred in their research team during the last 5 years. The majority of respondents suggested that mistakes occurred with very low or low frequency. Most respondents reported that the most frequent mistakes led to insignificant or minor consequences, such as time loss or frustration. The most serious mistakes caused insignificant or minor consequences for about a third of respondents, moderate consequences for almost half, and major or extreme consequences for about one fifth. The most frequently reported types of mistakes were ambiguous naming/defining of data, version-control errors, and wrong data processing/analysis. Most mistakes were reportedly due to poor project preparation or management and/or personal difficulties (physical or cognitive constraints). With these initial exploratory findings, we do not aim to provide a description representative of psychological scientists but, rather, to lay the groundwork for a systematic investigation of human fallibility in research data management and the development of solutions to reduce errors and mitigate their impact.


2021 ◽ Vol 4 (4) ◽ pp. 251524592110472
Author(s): John G. Bullock, Donald P. Green

Scholars routinely test mediation claims by using some form of measurement-of-mediation analysis whereby outcomes are regressed on treatments and mediators to assess direct and indirect effects. Indeed, it is rare for an issue of any leading journal of social or personality psychology not to include such an analysis. Statisticians have for decades criticized this method on the grounds that it relies on implausible assumptions, but these criticisms have been largely ignored. After presenting examples and simulations that dramatize the weaknesses of the measurement-of-mediation approach, we suggest that scholars instead use an approach that is rooted in experimental design. We propose implicit-mediation analysis, which adds and subtracts features of the treatment in ways that implicate some mediators and not others. We illustrate the approach with examples from recently published articles, explain the differences between the approach and other experimental approaches to mediation, and formalize the assumptions and statistical procedures that allow researchers to learn from experiments that encourage changes in mediators.
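
As a concrete reference point for the measurement-of-mediation approach being critiqued, the sketch below shows the familiar regression-based decomposition (product-of-coefficients style) on simulated data. The simulation and variable names are illustrative assumptions, not the article's examples.

```r
# Illustrative measurement-of-mediation analysis (the approach the authors
# critique), using simulated data with made-up variable names.
set.seed(2)
n <- 500
treat    <- rbinom(n, 1, 0.5)            # randomized treatment
mediator <- 0.5 * treat + rnorm(n)       # measured (not manipulated) mediator
outcome  <- 0.3 * treat + 0.4 * mediator + rnorm(n)

total <- lm(outcome ~ treat)             # total effect (c path)
m_mod <- lm(mediator ~ treat)            # a path
y_mod <- lm(outcome ~ treat + mediator)  # b path and direct effect (c')

a <- coef(m_mod)["treat"]
b <- coef(y_mod)["mediator"]

c(total    = unname(coef(total)["treat"]),
  direct   = unname(coef(y_mod)["treat"]),   # "direct effect" c'
  indirect = unname(a * b))                  # "indirect effect" a*b
# The article's point: these estimates are credible only under strong,
# usually implausible assumptions (e.g., no unmeasured confounding of the
# mediator-outcome relationship), because the mediator is merely measured.
```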


2021 ◽ Vol 4 (4) ◽ pp. 251524592110472
Author(s): Andrea L. Howard

This tutorial is aimed at researchers working with repeated measures or longitudinal data who want to enhance plots of model-implied mean-level trajectories over time with confidence bands and raw data. The intended audience is researchers who are already modeling their experimental, observational, or other repeated measures data over time using random-effects regression or latent curve modeling but who lack a comprehensive guide to visualizing trajectories over time. This tutorial uses an example plotting trajectories from two groups, as seen in random-effects models that include Time × Group interactions and latent curve models that regress the latent time slope factor onto a grouping variable. This tutorial is also geared toward researchers who are satisfied with their current software environment for modeling repeated measures data but who want to make graphics using R software. Prior knowledge of R is not assumed, and readers can follow along using data and other supporting materials available via OSF at https://osf.io/78bk5/. Readers should come away from this tutorial with the tools needed to begin visualizing mean trajectories over time from their own models and enhancing those plots with graphical estimates of uncertainty and raw data that adhere to transparent practices in research reporting.
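
The tutorial's materials are on OSF; as a rough, self-contained sketch of the general workflow it describes (fit a random-effects growth model, then plot model-implied group trajectories with confidence bands over raw data), the code below uses lme4 and ggplot2 on simulated data. The simulation, variable names, and the normal-approximation confidence bands are illustrative assumptions, not the tutorial's own code.

```r
# Illustrative sketch (not the tutorial's code): model-implied trajectories
# for two groups with approximate 95% confidence bands and raw data.
library(lme4)
library(ggplot2)

set.seed(3)
n_id <- 80; n_time <- 5
d <- expand.grid(id = 1:n_id, time = 0:(n_time - 1))
d$group <- ifelse(d$id <= n_id / 2, "A", "B")
d$y <- 2 + 0.5 * d$time + 0.4 * d$time * (d$group == "B") +
  rnorm(n_id)[d$id] + rnorm(nrow(d))

fit <- lmer(y ~ time * group + (1 | id), data = d)

# Model-implied means: predict over a grid of time x group, with approximate
# 95% CIs from the fixed-effect covariance matrix.
newd <- expand.grid(time = 0:(n_time - 1), group = c("A", "B"))
X <- model.matrix(~ time * group, newd)
newd$fit <- as.vector(X %*% fixef(fit)[colnames(X)])
newd$se  <- sqrt(diag(X %*% as.matrix(vcov(fit)) %*% t(X)))
newd$lo  <- newd$fit - 1.96 * newd$se
newd$hi  <- newd$fit + 1.96 * newd$se

ggplot(newd, aes(time, fit, colour = group, fill = group)) +
  geom_point(data = d, aes(time, y, colour = group), alpha = 0.15,
             position = position_jitter(width = 0.1), inherit.aes = FALSE) +
  geom_ribbon(aes(ymin = lo, ymax = hi), alpha = 0.2, colour = NA) +
  geom_line(linewidth = 1) +
  labs(x = "Time", y = "Outcome") +
  theme_minimal()
```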


2021 ◽ Vol 4 (3) ◽ pp. 251524592110268
Author(s): Roberta Rocca, Tal Yarkoni

Consensus on standards for evaluating models and theories is an integral part of every science. Nonetheless, in psychology, relatively little focus has been placed on defining reliable communal metrics to assess model performance. Evaluation practices are often idiosyncratic and are affected by a number of shortcomings (e.g., failure to assess models’ ability to generalize to unseen data) that make it difficult to discriminate between good and bad models. Drawing inspiration from fields such as machine learning and statistical genetics, we argue in favor of introducing common benchmarks as a means of overcoming the lack of reliable model evaluation criteria currently observed in psychology. We discuss a number of principles benchmarks should satisfy to achieve maximal utility, identify concrete steps the community could take to promote the development of such benchmarks, and address a number of potential pitfalls and concerns that may arise in the course of implementation. We argue that reaching consensus on common evaluation benchmarks will foster cumulative progress in psychology and encourage researchers to place heavier emphasis on the practical utility of scientific models.


2021 ◽ Vol 4 (3) ◽ pp. 251524592110392
Author(s): Franca Agnoli, Hannah Fraser, Felix Singleton Thorn, Fiona Fidler

Solutions to the crisis in confidence in the psychological literature have been proposed in many recent articles, including increased publication of replication studies, a solution that requires engagement by the psychology research community. We surveyed Australian and Italian academic research psychologists about the meaning and role of replication in psychology. When asked what they consider to be a replication study, nearly all participants (98% of Australians and 96% of Italians) selected options that correspond to a direct replication. Only 14% of Australians and 8% of Italians selected any options that included changing the experimental method. Majorities of psychologists from both countries agreed that replications are very important, that more replications should be done, that more resources should be allocated to them, and that they should be published more often. Majorities of psychologists from both countries reported that they or their students sometimes or often replicate studies, yet they also reported having no replication studies published in the prior 5 years. When asked to estimate the percentage of published studies in psychology that are replications, both Australians (with a median estimate of 13%) and Italians (with a median estimate of 20%) substantially overestimated the actual rate. When respondents were asked about the main obstacles to replication, difficulty publishing replication studies was the most frequently cited, along with the high value placed on innovative or novel research and the low value placed on replication studies.


2021 ◽ Vol 4 (3) ◽ pp. 251524592110275
Author(s): Emily R. Fyfe, Joshua R. de Leeuw, Paulo F. Carvalho, Robert L. Goldstone, Janelle Sherman, ...

Psychology researchers have long attempted to identify educational practices that improve student learning. However, experimental research on these practices is often conducted in laboratory contexts or in a single course, which threatens the external validity of the results. In this article, we establish an experimental paradigm for evaluating the benefits of recommended practices across a variety of authentic educational contexts—a model we call ManyClasses. The core feature is that researchers examine the same research question and measure the same experimental effect across many classes spanning a range of topics, institutions, teacher implementations, and student populations. We report the first ManyClasses study, in which we examined how the timing of feedback on class assignments, either immediate or delayed by a few days, affected subsequent performance on class assessments. Across 38 classes, the overall estimate for the effect of feedback timing was 0.002 (95% highest density interval = [−0.05, 0.05]), which indicates that there was no effect of immediate feedback compared with delayed feedback on student learning that generalizes across classes. Furthermore, there were no credibly nonzero effects for 40 preregistered moderators related to class-level and student-level characteristics. Yet our results provide hints that in certain kinds of classes, which were undersampled in the current study, there may be modest advantages for delayed feedback. More broadly, these findings provide insights regarding the feasibility of conducting within-class randomized experiments across a range of naturally occurring learning environments.


2021 ◽ Vol 4 (3) ◽ pp. 251524592110408
Author(s): Tom E. Hardwicke, Dénes Szűcs, Robert T. Thibault, Sophia Crüwell, Olmo R. van den Akker, ...

Replication studies that contradict prior findings may facilitate scientific self-correction by triggering a reappraisal of the original studies; however, the research community’s response to replication results has not been studied systematically. One approach for gauging responses to replication results is to examine how they affect citations to original studies. In this study, we explored postreplication citation patterns in the context of four prominent multilaboratory replication attempts published in the field of psychology that strongly contradicted and outweighed prior findings. Generally, we observed a small postreplication decline in the number of favorable citations and a small increase in unfavorable citations. This indicates only modest corrective effects and implies considerable perpetuation of belief in the original findings. Replication results that strongly contradict an original finding do not necessarily nullify its credibility; however, one might at least expect the replication results to be acknowledged and explicitly debated in subsequent literature. By contrast, we found substantial citation bias: The majority of articles citing the original studies neglected to cite relevant replication results. Of those articles that did cite the replication but continued to cite the original study favorably, approximately half offered an explicit defense of the original study. Our findings suggest that even replication results that strongly contradict original findings do not necessarily prompt a corrective response from the research community.


2021 ◽ Vol 4 (3) ◽ pp. 251524592110351
Author(s): Denis Cousineau, Marc-André Goulet, Bradley Harding

Plotting the data of an experiment allows researchers to illustrate the main results of a study, show effect sizes, compare conditions, and guide interpretations. To achieve all this, it is necessary to show point estimates of the results and their precision using error bars. Often, and potentially unbeknownst to them, researchers use a type of error bar, the stand-alone confidence interval, that conveys limited information. For instance, stand-alone confidence intervals are not suited to comparing results (a) between groups, (b) between repeated measures, (c) when participants are sampled in clusters, or (d) when the population size is finite. The use of such stand-alone error bars can lead to discrepancies between the plot's display and the conclusions derived from statistical tests. To overcome this problem, we propose to generalize the precision of the results (the confidence intervals) by adjusting them so that they take into account the experimental design and the sampling methodology. Unfortunately, most software packages dedicated to statistical analysis do not offer options to adjust error bars. As a solution, we developed an open-access, open-source R package, superb, that allows users to create summary plots with easily adjusted error bars.
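
To make the idea of an adjusted error bar concrete, the sketch below hand-rolls one such adjustment: a difference-adjusted 95% confidence interval for two independent groups, widened by a factor of sqrt(2) so that non-overlap roughly corresponds to a significant difference. It is built with base R and ggplot2 rather than the superb package itself (whose exact arguments are not reproduced here), and all data and names are illustrative.

```r
# Hand-rolled sketch of difference-adjusted confidence intervals for two
# independent groups (stand-alone 95% CIs widened by sqrt(2)). The superb
# package automates this kind of adjustment; this is not its API.
library(ggplot2)

set.seed(4)
d <- data.frame(
  group = rep(c("A", "B"), each = 30),
  score = c(rnorm(30, 10, 2), rnorm(30, 11.5, 2))
)

summ <- do.call(rbind, lapply(split(d$score, d$group), function(x) {
  m  <- mean(x)
  ci <- qt(0.975, df = length(x) - 1) * sd(x) / sqrt(length(x))
  data.frame(mean = m,
             lo_standalone = m - ci,           hi_standalone = m + ci,
             lo_difference = m - sqrt(2) * ci, hi_difference = m + sqrt(2) * ci)
}))
summ$group <- rownames(summ)

ggplot(summ, aes(group, mean)) +
  geom_point(size = 3) +
  geom_errorbar(aes(ymin = lo_difference, ymax = hi_difference), width = 0.1) +
  labs(x = NULL, y = "Mean (difference-adjusted 95% CI)") +
  theme_minimal()
```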

