Time to overcome the neglect of effect sizes in teaching psychological research findings.

2019 · Vol 5 (2) · pp. 128-139
Author(s): Johannes Hönekopp, Joanna Greer
2016 · Vol 18 (2) · pp. 155-182
Author(s): Hannah Proctor

Alexander Luria played a prominent role in the psychoanalytic community that flourished briefly in Soviet Russia in the decade following the 1917 October Revolution. In 1925 he co-wrote an introduction to Sigmund Freud's Beyond the Pleasure Principle with Lev Vygotsky, which argued that the conservatism of the instincts that Freud described might be overcome through the kind of radical social transformation then taking place in Russia. In attempting to bypass the backward-looking aspects of Freud's theory, however, Luria and Vygotsky also did away with the tension between Eros and the death drive, precisely the element of Freud's essay they had praised as 'dialectical'. This article theoretically unpicks Luria and Vygotsky's critique of psychoanalysis. It concludes by contrasting their optimistic ideological argument against the death drive with Luria's contemporaneous psychological research findings, proposing that Freud's ostensibly conservative theory may not have been as antithetical to revolutionary goals as Luria and Vygotsky assumed.


1998 · Vol 15 (2) · pp. 103-118
Author(s): Vinson H. Sutlive, Dale A. Ulrich

The unqualified use of statistical significance tests for interpreting the results of empirical research has been called into question by researchers in a number of behavioral disciplines. This paper reviews what statistical significance tells us and what it does not, with particular attention paid to criticisms of using the results of these tests as the sole basis for evaluating the overall significance of research findings. In addition, implications for adapted physical activity research are discussed. Drawing on recent literature from other disciplines, several recommendations for evaluating and reporting research findings are made. They include calculating and reporting effect sizes, selecting an alpha level larger than the conventional .05 level, placing greater emphasis on replication of results, evaluating results in a sample-size context, and employing simple research designs. Adapted physical activity researchers are encouraged to use specific modifiers when describing findings as significant.
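To make the first recommendation concrete, here is a minimal sketch (Python with NumPy/SciPy; the group labels and data are hypothetical, not from the paper) of reporting an effect size alongside the significance test rather than relying on the p value alone.

```python
# Minimal sketch: report Cohen's d together with the t test.
# All data below are hypothetical illustrations.
import numpy as np
from scipy import stats

def cohens_d(x, y):
    """Standardized mean difference using the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

rng = np.random.default_rng(42)
treatment = rng.normal(52, 10, size=30)   # hypothetical scores
control = rng.normal(47, 10, size=30)

t, p = stats.ttest_ind(treatment, control)
print(f"t = {t:.2f}, p = {p:.3f}, d = {cohens_d(treatment, control):.2f}")
```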


2018 · Vol 43 (1) · pp. 80-89
Author(s): Noel A. Card

Longitudinal data are common and essential to understanding human development. This paper introduces an approach to synthesizing longitudinal research findings called lag as moderator meta-analysis (LAMMA). This approach capitalizes on between-study variability in time lags studied in order to identify the impact of lag on estimates of stability and longitudinal prediction. The paper introduces linear, nonlinear, and mixed-effects approaches to LAMMA, and presents an illustrative example (with syntax and annotated output available as online Supplementary Materials). Several extensions of the basic LAMMA are considered, including artifact correction, multiple effect sizes from studies, and incorporating age as a predictor. It is hoped that LAMMA provides a framework for synthesizing longitudinal data to promote greater accumulation of knowledge in developmental science.
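The core idea of the linear case can be sketched as a simple meta-regression of study-level stability estimates on the lag each study used. The sketch below (Python/statsmodels, hypothetical study data, a plain fixed-effect weighted regression rather than the authors' full mixed-effects LAMMA) illustrates that idea.

```python
# Hypothetical sketch of "lag as moderator": regress stability estimates on lag,
# weighting studies by inverse sampling variance. Not the paper's exact model.
import numpy as np
import statsmodels.api as sm

# Hypothetical study-level data: Fisher-z stability coefficients,
# their sampling variances, and the lag (in months) between waves.
z_stability = np.array([0.62, 0.55, 0.48, 0.41, 0.37, 0.30])
variances = np.array([0.010, 0.012, 0.008, 0.015, 0.011, 0.009])
lag_months = np.array([3, 6, 12, 18, 24, 36])

X = sm.add_constant(lag_months)            # intercept + linear lag term
model = sm.WLS(z_stability, X, weights=1 / variances).fit()
print(model.params)  # intercept ~ stability at lag 0; slope ~ change per month of lag
```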


2018 · Vol 8 (1) · pp. 3-19
Author(s): Yuanyuan Zhou, Susan Troncoso Skidmore

Historically, ANOVA has been the most prevalent statistical method in educational and psychological research, and it continues to be widely used today. A comprehensive review published in 1998 examined several APA journals and found persistent concerns in ANOVA reporting practices. The present authors examined all articles published in 2012 in three APA journals (Journal of Applied Psychology, Journal of Counseling Psychology, and Journal of Personality and Social Psychology) to review ANOVA reporting practices, including p values and effect sizes. Results indicated that ANOVA remains prevalent in the reviewed journals, both as a test of the primary research question and as a test of conditional assumptions prior to the primary analysis. ANOVA reporting practices, however, are essentially unchanged from what was previously reported, although effect size reporting has improved.
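For readers unfamiliar with the reporting practice at issue, the following minimal sketch (Python with SciPy/NumPy, hypothetical data) shows a one-way ANOVA reported together with an effect size (eta squared) rather than F and p alone.

```python
# Hypothetical sketch: one-way ANOVA plus eta squared as the effect size.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
groups = [rng.normal(mu, 5, size=25) for mu in (48, 50, 53)]  # hypothetical conditions

f, p = stats.f_oneway(*groups)

all_scores = np.concatenate(groups)
grand_mean = np.mean(all_scores)
ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
ss_total = np.sum((all_scores - grand_mean) ** 2)
eta_squared = ss_between / ss_total

print(f"F = {f:.2f}, p = {p:.4f}, eta^2 = {eta_squared:.2f}")
```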


2016 · Vol 20 (4) · pp. 639-664
Author(s): Christopher D. Nye, Paul R. Sackett

Moderator hypotheses involving categorical variables are prevalent in organizational and psychological research. Despite their importance, current methods of identifying and interpreting these moderation effects have several limitations that may result in misleading conclusions about their implications. This issue has been particularly salient in the literature on differential prediction where recent research has suggested that these limitations have had a significant impact on past research. To help address these issues, we propose several new effect size indices that provide additional information about categorical moderation analyses. The advantages of these indices are then illustrated in two large databases of respondents by examining categorical moderation in the prediction of psychological well-being and the extent of differential prediction in a large sample of job incumbents.
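As background, the standard categorical-moderation setup the article builds on can be sketched as follows (Python/statsmodels, hypothetical data; this shows the conventional interaction test and a delta-R-squared summary, not the authors' proposed indices).

```python
# Hypothetical sketch: test whether a predictor-outcome slope differs across
# two groups and report the change in R^2 as a rough effect size.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 200
group = rng.integers(0, 2, size=n)        # 0/1 categorical moderator
predictor = rng.normal(0, 1, size=n)
outcome = 0.5 * predictor + 0.3 * group * predictor + rng.normal(0, 1, size=n)

X_main = sm.add_constant(np.column_stack([predictor, group]))
X_full = sm.add_constant(np.column_stack([predictor, group, predictor * group]))

r2_main = sm.OLS(outcome, X_main).fit().rsquared
full = sm.OLS(outcome, X_full).fit()
print(f"interaction p = {full.pvalues[-1]:.4f}, delta R^2 = {full.rsquared - r2_main:.3f}")
```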


2020 · Vol 14
Author(s): Aline da Silva Frost, Alison Ledgerwood

This article provides an accessible tutorial with concrete guidance for how to start improving research methods and practices in your lab. Following recent calls to improve research methods and practices within and beyond the borders of psychological science, resources have proliferated across book chapters, journal articles, and online media. Many researchers are interested in learning more about cutting-edge methods and practices but are unsure where to begin. In this tutorial, we describe specific tools that help researchers calibrate their confidence in a given set of findings. In Part I, we describe strategies for assessing the likely statistical power of a study, including when and how to conduct different types of power calculations, how to estimate effect sizes, and how to think about power for detecting interactions. In Part II, we provide strategies for assessing the likely Type I error rate of a study, including distinguishing clearly between data-independent ("confirmatory") and data-dependent ("exploratory") analyses and thinking carefully about different forms and functions of preregistration.
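A minimal sketch of the Part I material, assuming a hypothetical target effect size of d = 0.4 (Python/statsmodels): an a priori power calculation for a two-group comparison, plus the reverse question of how much power a fixed sample provides.

```python
# Hypothetical sketch of a priori and post hoc power calculations.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size needed per group for 80% power at an assumed d = 0.4.
n_per_group = analysis.solve_power(effect_size=0.4, power=0.80, alpha=0.05)
print(f"~{n_per_group:.0f} participants per group for 80% power at d = 0.4")

# Power achieved with a fixed n of 50 per group at the same assumed effect size.
achieved = analysis.power(effect_size=0.4, nobs1=50, ratio=1.0, alpha=0.05)
print(f"power with n = 50 per group: {achieved:.2f}")
```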


The Ruffin Series in Business Ethics · 1998 · Vol 1 · pp. 149-172
Author(s): David M. Messick

In this article, I want to draw attention to one strand of the complex web of processes that are involved when people group others, including themselves, into social categories. I will focus on the tendency to treat members of one's own group more favorably than nonmembers, a tendency that has been called ingroup favoritism. The structure of the article has three parts. First, I will offer an evolutionary argument as to why ingroup favoritism, or something very much like it, is required by theories of the evolution of altruism. I will then review some of the basic social psychological research findings dealing with social categorization generally, and ingroup favoritism specifically. Finally, I will examine two problems in business ethics from the point of view of ingroup favoritism to suggest ways in which social psychological principles and findings may be mobilized to help solve problems of racial or gender discrimination in business contexts.


2019
Author(s): Chris Hubertus Joseph Hartgerink, Jan G. Voelkel, Jelte M. Wicherts, Marcel A. L. M. van Assen

Scientific misconduct potentially invalidates findings in many scientific fields. Improved detection of unethical practices like data fabrication is thought to deter such practices. In two studies, we investigated the diagnostic performance of various statistical methods to detect fabricated quantitative data from psychological research. In Study 1, we tested the validity of statistical methods to detect fabricated data at the study level using summary statistics. Using (arguably) genuine data from the Many Labs 1 project on the anchoring effect (k=36) and fabricated data for the same effect by our participants (k=39), we tested the validity of our newly proposed 'reversed Fisher method', variance analyses, and extreme effect sizes, as well as a combination of these three indicators using the original Fisher method. Results indicate that the variance analyses perform fairly well when the homogeneity of population variances is accounted for, and that extreme effect sizes perform similarly well in distinguishing genuine from fabricated data. The performance of the 'reversed Fisher method' was poor and depended on the types of tests included. In Study 2, we tested the validity of statistical methods to detect fabricated data using raw data. Using (arguably) genuine data from the Many Labs 3 project on the classic Stroop task (k=21) and fabricated data for the same effect by our participants (k=28), we investigated the performance of digit analyses, variance analyses, multivariate associations, and extreme effect sizes, as well as a combination of these four methods using the original Fisher method. Results indicate that variance analyses, extreme effect sizes, and multivariate associations perform fairly well to excellent in detecting fabricated data using raw data, while digit analyses perform at chance levels. The two studies provide mixed results on how the use of random number generators affects the detection of data fabrication. Ultimately, we consider the variance analyses, effect sizes, and multivariate associations valuable tools to detect potential data anomalies in empirical (summary or raw) data. However, we argue against widespread (possibly automatic) application of these tools, because some fabricated data may be irregular in one aspect but not in another. Considering how violations of the assumptions of fabrication detection methods may yield high false positive or false negative probabilities, we recommend comparing potentially fabricated data to genuine data on the same topic.
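As a loose illustration of the "extreme effect sizes" screen described above (Python/NumPy, hypothetical effect sizes and standard errors; this is not the authors' exact procedure), one can flag studies whose effect lies implausibly far from the precision-weighted mean.

```python
# Hypothetical sketch: flag studies whose effect size is extreme relative to the
# pooled estimate, given each study's standard error.
import numpy as np

effect_sizes = np.array([0.42, 0.39, 0.45, 0.40, 1.35, 0.38])   # hypothetical ds
standard_errors = np.array([0.10, 0.12, 0.11, 0.09, 0.10, 0.13])

# Precision-weighted mean effect (simple fixed-effect estimate).
weights = 1 / standard_errors**2
pooled = np.sum(weights * effect_sizes) / np.sum(weights)

# How many standard errors each study sits from the pooled estimate.
z = (effect_sizes - pooled) / standard_errors
flagged = np.where(np.abs(z) > 3)[0]
print(f"pooled d = {pooled:.2f}; flagged studies: {flagged.tolist()}")
```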


2020 · Vol 8 (2) · pp. 617-641
Author(s): Joseph M. Pierre

Although conspiracy theories are endorsed by about half the population and occasionally turn out to be true, they are more typically false beliefs that, by definition, have a paranoid theme. Consequently, psychological research to date has focused on determining whether there are traits that account for belief in conspiracy theories (BCT) within a deficit model. Alternatively, a two-component, socio-epistemic model of BCT is proposed that seeks to account for the ubiquity of conspiracy theories, their variance along a continuum, and the inconsistency of research findings likening them to psychopathology. Within this model, epistemic mistrust is the core component underlying conspiracist ideation that manifests as the rejection of authoritative information, focuses the specificity of conspiracy theory beliefs, and can sometimes be understood as a sociocultural response to breaches of trust, inequities of power, and existing racial prejudices. Once voices of authority are negated due to mistrust, the resulting epistemic vacuum can send individuals “down the rabbit hole” looking for answers where they are vulnerable to the biased processing of information and misinformation within an increasingly “post-truth” world. The two-component, socio-epistemic model of BCT argues for mitigation strategies that address both mistrust and misinformation processing, with interventions for individuals, institutions of authority, and society as a whole.

