Heterogeneity of Research Results: A New Perspective From Which to Assess and Promote Progress in Psychological Science

2021 ◽  
pp. 174569162096419
Author(s):  
Audrey Helen Linden ◽  
Johannes Hönekopp

Heterogeneity emerges when multiple close or conceptual replications on the same subject produce results that vary more than expected from sampling error alone. Here we argue that unexplained heterogeneity reflects a lack of coherence between the concepts applied and the data observed, and therefore a lack of understanding of the subject matter. Typical levels of heterogeneity thus offer a useful but neglected perspective on the levels of understanding achieved in psychological science. Focusing on continuous outcome variables, we surveyed heterogeneity in 150 meta-analyses from cognitive, organizational, and social psychology and in 57 multiple close replications. Heterogeneity proved to be very high in meta-analyses, with powerful moderators being conspicuously absent. Population effects in the average meta-analysis vary from small to very large for reasons that are typically not understood. In contrast, heterogeneity was moderate in close replications. A newly identified relationship between heterogeneity and effect size allowed us to make predictions about expected heterogeneity levels. We discuss important implications for the formulation and evaluation of theories in psychology. On the basis of insights from the history and philosophy of science, we argue that the reduction of heterogeneity is important for progress in psychology and its practical applications, and we suggest changes to our collective research practice toward this end.
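The heterogeneity the authors survey is conventionally quantified with Cochran's Q and the I² statistic (the share of observed between-study variance beyond what sampling error predicts). As a minimal illustrative sketch — not the authors' code, with invented effect sizes and sampling variances:

```python
def heterogeneity(effects, variances):
    """Fixed-effect mean, Cochran's Q, and I^2 for a set of study effects."""
    weights = [1.0 / v for v in variances]          # inverse-variance weights
    mean = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    q = sum(w * (y - mean) ** 2 for w, y in zip(weights, effects))
    df = len(effects) - 1
    # I^2: percentage of observed variation not attributable to sampling error
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return mean, q, i2

mean, q, i2 = heterogeneity([0.2, 0.5, 0.8], [0.01, 0.02, 0.015])
# → I² ≈ 86% for these invented inputs: effects that range from small to
#   large produce exactly the high-heterogeneity regime the survey describes.
```

The inputs are hypothetical; in a real meta-analysis the effects and variances would come from the primary studies.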

2020 ◽  
Author(s):  
Or Dagan ◽  
Pasco Fearon ◽  
Carlo Schuengel ◽  
Marije Verhage ◽  
Glenn I. Roisman ◽  
...  

Since the seminal 1992 paper by van IJzendoorn, Sagi, and Lambermon, which put forward the "multiple caretaker paradox," relatively little attention has been given to the potential joint effects that early attachment networks to mother and father have on development. Recently, Dagan and Sagi-Schwartz (2018) published a paper that attempts to revive this unsettled issue, calling for research on the subject and offering a framework for posing attachment-network hypotheses. This Collaboration for Attachment Research Synthesis project uses an Individual Participant Data meta-analysis to test the hypotheses put forward in Dagan and Sagi-Schwartz (2018). Specifically, we test (a) whether the number of secure attachments (0, 1, or 2) matters in predicting a range of developmental outcomes, and (b) whether the quality of the attachment relationship with one parent contributes more than the other to these outcomes.


Author(s):  
Gavin B. Stewart ◽  
Isabelle M. Côté ◽  
Hannah R. Rothstein ◽  
Peter S. Curtis

This chapter discusses the initiation of the process of systematic research synthesis. Without a systematic approach to defining, obtaining, and collating data, meta-analyses may yield precise but erroneous results, afflicted by various forms of bias and by excess subjectivity in the choice of methods and the definition of thresholds; these devalue the rigor of any statistical approach employed. The chapter considers exactly the same issues that face an ecologist designing a field experiment: What is the question? How can I define my sampling universe? How should I collect my data? What analyses should I undertake? How can I interpret my results robustly? These questions are considered in the context of research synthesis.


1996 ◽  
Vol 26 (2) ◽  
pp. 279-287 ◽  
Author(s):  
A. Sacker ◽  
D. J. Done ◽  
T. J. Crow

Synopsis: On the basis of previous findings, we used meta-analyses to consider whether births to parents with schizophrenia have an increased risk of obstetric complications. Meta-analyses were based on published studies satisfying the following selection criteria: the schizophrenic diagnosis could apply to either parent; parents with non-schizophrenic psychoses were not included; only normal controls were accepted. In all, 14 studies provided effect sizes or data from which these could be derived. Studies were identified by searches of MEDLINE and PSYCLIT and through the references of papers relating to the subject. Births to individuals with schizophrenia incur an increased risk of pregnancy and birth complications, low birthweight, and poor neonatal condition. However, in each case the effect size is small (mean r = 0.155; 95% CI = 0.057). The risk is greater for mothers with schizophrenia and is not confined to mothers with onset pre-delivery or to the births of the children who themselves become schizophrenic.


2019 ◽  
Author(s):  
Malte Elson

Research synthesis is based on the assumption that when the same association between constructs is observed repeatedly in a field, the relationship is probably real, even if its exact magnitude can be debated. Yet this probability is not only a function of recurring results, but also of the quality and consistency of the empirical procedures that produced those results and that any meta-analysis necessarily inherits. Standardized protocols in data collection, analysis, and interpretation are important empirical properties, and a healthy sign of a discipline's maturity. This manuscript proposes that meta-analyses as typically applied in psychology would benefit from complementing their aggregates of observed effect sizes with a systematic examination of the standardization of the methodology that produced them. Potential units of analysis are described, and two examples are offered to illustrate the benefits of such efforts. Ideally, this synergistic approach emphasizes the role of methods in advancing theory by improving the quality of meta-analytic inferences.


2019 ◽  
Vol 46 (2-3) ◽  
pp. 322-333 ◽  
Author(s):  
Christopher J Carpenter

Abstract: This essay describes the apples-and-oranges problem in meta-analyses. Essentially, some meta-analyses combine original studies that examine different pairs of variables. Metaphorically, they meta-analyze the effects of fruit when they should conduct separate meta-analyses of apples and of oranges. This practice is inconsistent with the assumptions behind the meta-analytic formulae concerning sampling error and makes meta-analytic estimates difficult to interpret. Meta-analysis teams are advised to justify their choices, and types of evidence are described to assist researchers and reviewers in assessing and justifying when constructs can and cannot be combined in a meta-analysis.


2012 ◽  
Vol 16 (3) ◽  
pp. 440-452 ◽  
Author(s):  
Youngdeok Kim ◽  
Ilhyeok Park ◽  
Minsoo Kang

Abstract
Objective: The purpose of the present study was to use a meta-analytic approach to examine the convergent validity of the International Physical Activity Questionnaire (IPAQ).
Design: Systematic review by meta-analysis.
Setting: Relevant studies were surveyed from five electronic databases. Primary outcomes of interest were the product-moment correlation coefficients between the IPAQ and other instruments. Five separate meta-analyses were performed, one for each physical activity (PA) category of the IPAQ: walking, moderate PA (MPA), total moderate PA (TMPA), vigorous PA (VPA), and total PA (TPA). The corrected mean effect size (ESρ), unaffected by statistical artefacts (i.e. sampling error and reliability), was calculated for each PA category. Selected moderator variables were length of the IPAQ (short v. long form), reference period (last 7 d v. usual week), mode of administration (interviewer v. self-reported), language (English v. translated) and instrument (accelerometer, pedometer or subjective measure).
Subjects: A total of 152 ESρ across the five PA categories were retrieved from twenty-one studies.
Results: The results showed small- to medium-sized ESρ (0.27–0.49). The highest value was observed for VPA and the lowest for MPA. The ESρ were differentiated by some of the moderator variables across PA categories.
Conclusions: The study shows the overall convergent validity of the IPAQ within each PA category. Some differences in the degree of convergent validity across PA categories and moderator variables imply that different research conditions should be taken into account before deciding on the appropriate type of IPAQ to use.
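The "corrected mean effect size unaffected by statistical artefacts (i.e. sampling error and reliability)" rests on the classic Spearman correction for attenuation, in which an observed correlation is divided by the square root of the product of the two measures' reliabilities. A minimal sketch of that standard formula with invented numbers — not the authors' code:

```python
def correct_for_attenuation(r_obs, rel_x, rel_y):
    """Spearman's correction: estimate the true-score correlation from an
    observed correlation and the reliabilities of the two measures."""
    return r_obs / (rel_x * rel_y) ** 0.5

# Hypothetical inputs: an observed validity of .35 between an IPAQ score
# (reliability .80) and an accelerometer criterion (reliability .70)
# disattenuates to roughly .47.
r_corrected = correct_for_attenuation(0.35, 0.80, 0.70)
```

In a Hunter–Schmidt-style meta-analysis this correction is applied study by study (or via artefact distributions) before the corrected effects are averaged.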


2019 ◽  
Author(s):  
Francesco Margoni ◽  
Martin Shepperd

Infant research is making considerable progress. However, among infant researchers there is growing concern regarding the widespread habit of undertaking studies that have small sample sizes and employ tests with low statistical power (to detect a wide range of possible effects). For many researchers, issues of confidence may be partially resolved by relying on replications. Here, we bring further evidence that the classical logic of confirmation, according to which the result of a replication study confirms the original finding when it reaches statistical significance, could usefully be abandoned. With real examples taken from the infant literature and Monte Carlo simulations, we show that a very wide range of possible replication results would, in a formal statistical sense, constitute confirmation, as they can be explained simply by sampling error. Thus, often no useful conclusion can be derived from a single replication study or a small number of them. We suggest that, in order to accumulate and generate new knowledge, the dichotomous view of replication as confirmatory/disconfirmatory be replaced by an approach that emphasizes the estimation of effect sizes via meta-analysis. Moreover, we discuss possible solutions for reducing problems affecting the validity of conclusions drawn from meta-analyses in infant research.
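The Monte Carlo argument can be reproduced in miniature: draw many exact replications of a two-group study from one fixed population effect and observe how widely the observed standardized mean difference scatters. A minimal sketch — not the authors' simulation, with invented parameters:

```python
import random
import statistics

def simulate_replications(true_d=0.4, n_per_group=20, n_sims=5000, seed=1):
    """Observed Cohen's d across exact replications of a two-group design,
    all sampled from the same population effect."""
    rng = random.Random(seed)
    ds = []
    for _ in range(n_sims):
        treat = [rng.gauss(true_d, 1.0) for _ in range(n_per_group)]
        ctrl = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        pooled_sd = ((statistics.stdev(treat) ** 2
                      + statistics.stdev(ctrl) ** 2) / 2) ** 0.5  # equal n
        ds.append((statistics.mean(treat) - statistics.mean(ctrl)) / pooled_sd)
    return ds

ds = sorted(simulate_replications())
low, high = ds[int(0.025 * len(ds))], ds[int(0.975 * len(ds))]
# With n = 20 per group, single replications of a true d = 0.4 routinely
# range from near zero (or below) up to clearly "large" observed effects,
# so almost any single replication result is compatible with the original.
```

The sample sizes are chosen to mirror the underpowered designs the abstract criticizes; the spread shrinks as n_per_group grows.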


2019 ◽  
Vol 3 (1) ◽  
pp. 124-137 ◽  
Author(s):  
Frank A. Bosco ◽  
James G. Field ◽  
Kai R. Larsen ◽  
Yingyi Chang ◽  
Krista L. Uggerslev

In this article, we provide a review of research-curation and knowledge-management efforts that may be leveraged to advance research and education in psychological science. After reviewing the approaches and content of other efforts, we focus on the metaBUS project's platform, the most comprehensive effort to date. The metaBUS platform uses standards-based protocols in combination with human judgment to organize and make readily accessible a database of research findings that currently numbers more than 1 million. It allows users to conduct rudimentary, instant meta-analyses, and capabilities for the visualization and communication of meta-analytic findings have recently been added. We conclude by discussing challenges, opportunities, and recommendations for expanding the project beyond applied psychology.


2019 ◽  
Author(s):  
Brenton M. Wiernik ◽  
Jeffrey Alan Dahlke

Most published meta-analyses address only artefactual variance due to sampling error and ignore the role of other statistical and psychometric artefacts, such as measurement error (due to factors including unreliability of measurements, group misclassification, and variable treatment strength) and selection effects (including range restriction/enhancement and collider biases). These artefacts can have severe biasing effects on the results of individual studies and meta-analyses. Failing to account for these artefacts can lead to inaccurate conclusions about the mean effect size and between-studies effect-size heterogeneity, and can influence the results of meta-regression, publication bias, and sensitivity analyses. In this paper, we provide a brief introduction to the biasing effects of measurement error and selection effects and their relevance to a variety of research designs. We describe how to estimate the effects of these artefacts in different research designs and correct for their impacts in primary studies and meta-analyses. We consider meta-analyses of correlations, observational group differences, and experimental effects. We provide R code to implement the corrections described.
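The authors supply R code for the corrections they describe; as an independent illustration (not a port of that code), one member of the family is Thorndike's Case II formula for direct range restriction, which recovers a population correlation from one observed in a range-restricted sample. A minimal Python sketch with invented numbers:

```python
def correct_range_restriction(r_restricted, u):
    """Thorndike Case II correction for direct range restriction.
    u = SD of the predictor in the restricted sample / SD in the population
    (u < 1 means the sample covers a narrower range than the population)."""
    r = r_restricted
    return (r / u) / (1 - r ** 2 + (r / u) ** 2) ** 0.5

# Hypothetical example: a correlation of .30 observed in a sample whose
# predictor SD is only 60% of the population SD corrects to about .46,
# i.e. the restricted sample substantially understates the population effect.
r_pop = correct_range_restriction(0.3, 0.6)
```

Analogous closed-form corrections exist for indirect range restriction and for measurement error, and they compound: applying them study by study changes both the meta-analytic mean and the apparent heterogeneity.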


2020 ◽  
Author(s):  
Magdalena Siegel ◽  
Junia Eder ◽  
Jelte M. Wicherts ◽  
Jakob Pietschnig

Inflated or outright false effects plague psychological science, but advances in the identification of dissemination biases in general, and publication bias in particular, have helped in dealing with biased effects in the literature. However, the application of publication-bias detection methods appears not to be equally prevalent across subdisciplines. It has been suggested that, particularly in I/O psychology, appropriate publication-bias detection methods are underused. In this meta-meta-analysis, we present prevalence estimates, predictors, and time trends of publication bias in 128 meta-analyses published in the Journal of Applied Psychology (7,263 effect sizes, 3,000,000+ participants). Moreover, we reanalyzed the data of 87 meta-analyses, applying nine standard and more modern publication-bias detection methods. We show that (i) bias detection methods are underused (only 41% of meta-analyses use at least one method), although their use has increased in recent years, (ii) those meta-analyses that do apply such methods now use more, but mostly inappropriate, methods, and (iii) the prevalence of publication bias is disconcertingly high (15% to 20% of meta-analyses show severe, and 33% to 48% some, indication of bias) but mostly remains undetected. Although our results indicate somewhat of a trend towards higher bias awareness, they also indicate that concerns about publication bias in I/O psychology are justified and that researcher awareness of appropriate, state-of-the-art bias detection needs to be further increased. Embracing open-science practices such as data sharing and study preregistration is needed to raise reproducibility and ultimately strengthen psychological science in general and I/O psychology in particular.
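Among the standard detection methods such a reanalysis can apply, Egger's regression test is one of the most common: each study's standardized effect is regressed on its precision, and an intercept far from zero signals funnel-plot asymmetry consistent with small-study (publication) bias. A minimal sketch with invented inputs — not the authors' code:

```python
def egger_intercept(effects, ses):
    """Intercept of Egger's regression: effect/SE regressed on 1/SE.
    An intercept far from zero suggests funnel-plot asymmetry."""
    y = [e / s for e, s in zip(effects, ses)]   # standardized effects
    x = [1.0 / s for s in ses]                  # precisions
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx

# Symmetric invented data (same effect at every precision): intercept ~ 0.
b0_symmetric = egger_intercept([0.3, 0.3, 0.3, 0.3], [0.1, 0.2, 0.3, 0.4])
# Effects that grow with the standard error (small studies report bigger
# effects, the classic bias signature): intercept shifts away from 0.
b0_asymmetric = egger_intercept([0.1, 0.2, 0.3, 0.4], [0.1, 0.2, 0.3, 0.4])
```

A full application would also compute the standard error of the intercept for a significance test; this sketch shows only the point estimate.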

