Are effect sizes reported in highly cited emotion research overestimated relative to larger studies and meta-analyses addressing the same questions?

2021 ◽  
Author(s):  
Ioana Cristea ◽  
John Ioannidis ◽  
Raluca Georgescu

We assessed whether the most highly-cited studies in emotion research report larger effect sizes compared with meta-analyses and the largest studies on the same questions. We screened all reports with at least 1000 citations and identified matching meta-analyses for 40 highly-cited observational and 25 highly-cited experimental studies. Observational studies had on average 1.42-fold (95% CI 1.09 to 1.87) larger effects than meta-analyses and 1.99-fold (95% CI 1.33 to 2.99) larger effects than the largest studies on the same questions. Experimental studies had fold-increases of 1.29 (95% CI 1.01 to 1.63) versus meta-analyses and 2.02 (95% CI 1.60 to 2.57) versus the largest studies. There was substantial between-topic heterogeneity, more prominently for observational studies. Highly-cited studies were rarely the largest on their topic (12/65 topics, 18%) but were frequently the earliest published (31/65 topics, 48%). Highly-cited studies may offer, on average, exaggerated estimates of effects in both observational and experimental designs.

2021 ◽  
pp. 216770262110493
Author(s):  
Ioana A. Cristea ◽  
Raluca Georgescu ◽  
John P. A. Ioannidis

We assessed whether the most highly cited studies in emotion research reported larger effect sizes compared with meta-analyses and the largest studies on the same question. We screened all reports with at least 1,000 citations and identified matching meta-analyses for 40 highly cited observational studies and 25 highly cited experimental studies. Highly cited observational studies had effects greater on average by 1.42-fold (95% confidence interval [CI] = [1.09, 1.87]) compared with meta-analyses and 1.99-fold (95% CI = [1.33, 2.99]) compared with largest studies on the same questions. Highly cited experimental studies had increases of 1.29-fold (95% CI = [1.01, 1.63]) compared with meta-analyses and 2.02-fold (95% CI = [1.60, 2.57]) compared with the largest studies. There was substantial between-topics heterogeneity, more prominently for observational studies. Highly cited studies often did not have the largest weight in meta-analyses (12 of 65 topics, 18%) but were frequently the earliest ones published on the topic (31 of 65 topics, 48%). Highly cited studies may offer, on average, exaggerated estimates of effects in both observational and experimental designs.
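The papers' actual estimation is more involved (ratios modeled on the log scale with random effects), but the basic idea of a summary fold-increase can be sketched as the geometric mean of per-topic ratios between the highly cited effect and the comparison effect. All numbers below are invented for illustration; this is not the authors' code or data:

```python
import math

def fold_increase(highly_cited, comparison):
    """Geometric mean of per-topic effect-size ratios.

    Ratios are averaged on the log scale so that over- and
    under-estimation are treated symmetrically.
    """
    logs = [math.log(h / c) for h, c in zip(highly_cited, comparison)]
    return math.exp(sum(logs) / len(logs))

# Hypothetical absolute effect sizes for three topics
cited = [0.60, 0.45, 0.80]
meta = [0.40, 0.45, 0.50]
print(round(fold_increase(cited, meta), 2))
```

A fold-increase of 1.0 would mean highly cited studies agree, on average, with the comparison estimates; values above 1.0 indicate inflation.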


2016 ◽  
Vol 26 (4) ◽  
pp. 364-368 ◽  
Author(s):  
P. Cuijpers ◽  
E. Weitz ◽  
I. A. Cristea ◽  
J. Twisk

Aims: The standardised mean difference (SMD) is one of the most widely used effect sizes for indicating the effects of treatments. It expresses the difference between a treatment and a comparison group after treatment has ended, in terms of standard deviations. Some meta-analyses, including several highly cited and influential ones, use the pre-post SMD, which expresses the difference between baseline and post-test within one (treatment) group. Methods: In this paper, we argue that pre-post SMDs should be avoided in meta-analyses, and we describe why they can result in biased outcomes. Results: One important reason pre-post SMDs should be avoided is that the scores at baseline and post-test are not independent of each other. The value of their correlation should be used in the calculation of the SMD, but this value is typically not known. We used data from an 'individual patient data' meta-analysis of trials comparing cognitive behaviour therapy and anti-depressive medication to show that this problem can lead to considerable errors in the estimation of the SMDs. Another, even more important, reason pre-post SMDs should be avoided in meta-analyses is that they are influenced by natural processes and by characteristics of the patients and settings, and these cannot be distinguished from the effects of the intervention. Between-group SMDs are much better because they control for such variables, which affect the between-group SMD only when they are related to the effects of the intervention. Conclusions: We conclude that pre-post SMDs should be avoided in meta-analyses, as using them probably results in biased outcomes.
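The dependence of the pre-post SMD on the usually unknown pre-post correlation can be made concrete: the standard deviation of change scores, and hence the SMD, shifts substantially as the assumed correlation r varies. A minimal sketch with invented numbers, not the authors' data:

```python
import math

def prepost_smd(mean_pre, mean_post, sd_pre, sd_post, r):
    """Pre-post standardized mean difference based on the SD of change
    scores, which requires the (often unknown) pre-post correlation r."""
    sd_change = math.sqrt(sd_pre**2 + sd_post**2 - 2 * r * sd_pre * sd_post)
    return (mean_pre - mean_post) / sd_change

# The same hypothetical trial data under different assumed correlations:
# the computed "effect" changes even though the data do not.
for r in (0.2, 0.5, 0.8):
    print(r, round(prepost_smd(24.0, 16.0, 8.0, 8.0, r), 2))
```

With these numbers the SMD roughly doubles as r moves from 0.2 to 0.8, illustrating why an assumed correlation can produce considerable estimation error.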


Author(s):  
Eka Fadilah

This survey reviews statistical reporting procedures in the experimental studies appearing in ten SLA and Applied Linguistics journals from 2011 to 2017. We focus on how the authors report and interpret power analyses, effect sizes, and confidence intervals. Results reveal that, of 217 articles, the authors reported effect sizes in 70%, a priori and post hoc power in 1.8% and 6.9% respectively, and confidence intervals in 18.4%. The authors interpreted these statistics in only 5.5%, 27.2%, and 6% of articles, respectively. The call for statistical reporting reform recommended and endorsed by scholars, researchers, and editors must be echoed to shed more light on the trustworthiness and practicality of the data presented.


2012 ◽  
Vol 82 (3) ◽  
pp. 300-329 ◽  
Author(s):  
Erin Marie Furtak ◽  
Tina Seidel ◽  
Heidi Iverson ◽  
Derek C. Briggs

Although previous meta-analyses have indicated a connection between inquiry-based teaching and improved student learning, the type of instruction characterized as inquiry based has varied greatly, and few have focused on the extent to which activities are led by the teacher or student. This meta-analysis introduces a framework for inquiry-based teaching that distinguishes between cognitive features of the activity and degree of guidance given to students. This framework is used to code 37 experimental and quasi-experimental studies published between 1996 and 2006, a decade during which inquiry was the main focus of science education reform. The overall mean effect size is .50. Studies that contrasted epistemic activities or the combination of procedural, epistemic, and social activities had the highest mean effect sizes. Furthermore, studies involving teacher-led activities had mean effect sizes about .40 larger than those with student-led conditions. The importance of establishing the validity of the treatment construct in meta-analyses is also discussed.


Author(s):  
Piers Steel ◽  
Sjoerd Beugelsdijk ◽  
Herman Aguinis

Abstract: Meta-analyses summarize a field’s research base and are therefore highly influential. Despite their value, the standards for an excellent meta-analysis, one that is potentially award-winning, have changed in the last decade. Each step of a meta-analysis is now more formalized, from the identification of relevant articles to coding, moderator analysis, and reporting of results. What was exemplary a decade ago can be somewhat dated today. Using the award-winning meta-analysis by Stahl et al. (Unraveling the effects of cultural diversity in teams: A meta-analysis of research on multicultural work groups. Journal of International Business Studies, 41(4):690–709, 2010) as an exemplar, we adopted a multi-disciplinary approach (e.g., management, psychology, health sciences) to summarize the anatomy (i.e., fundamental components) of a modern meta-analysis, focusing on: (1) data collection (i.e., literature search and screening, coding), (2) data preparation (i.e., treatment of multiple effect sizes, outlier identification and management, publication bias), (3) data analysis (i.e., average effect sizes, heterogeneity of effect sizes, moderator search), and (4) reporting (i.e., transparency and reproducibility, future research directions). In addition, we provide guidelines and a decision-making tree for when even foundational and highly cited meta-analyses should be updated. Based on the latest evidence, we summarize what journal editors and reviewers should expect, authors should provide, and readers (i.e., other researchers, practitioners, and policymakers) should consider about meta-analytic reviews.


Author(s):  
Uwe Czienskowski ◽  
Stefanie Giljohann

Abstract. Reference to oneself during incidental learning of words frequently results in better recall performance than reference to other persons. However, this effect occurs under different conditions with differing strength, and sometimes it is even reversed. Meta-analyses and further experimental studies suggest that increased recall performance under a self-referential encoding task occurs only if it is compared with a nonintimate other person and if abstract material is presented, irrespective of the type of previously presented words (adjectives or nouns). In the current paper, two experiments are reported which support the assumption that this intimacy effect on memory only occurs if no pictorial or concrete features of the material (nouns) to be learned can be exploited for an improvement in encoding or remembering the material. All results agree with predicted effect sizes, which were drawn from a meta-analysis and subsequently conducted experimental studies. This suggests that a recall advantage of referring to oneself compared to other persons is subordinate to the effects of concreteness or imageability. Moreover, the current results offer a theoretical explanation of some previously reported but nevertheless puzzling results from imagery instructions, which indicate decreased recall performance for self-reference compared to other-reference.


2021 ◽  
Author(s):  
Saranrat Sadoyu ◽  
Kaniz Afroz Tanni ◽  
Nontaporn Punrum ◽  
Sobhon Paengtrai ◽  
Nai Ming Lai ◽  
...  

Abstract: Objective: To identify and describe the methodological approaches for assessing the certainty of the evidence in umbrella reviews (URs) of meta-analyses (MAs). Study Design and Setting: We included URs of SR-MAs of interventions and non-interventions. We searched 3 databases, PubMed, Embase, and the Cochrane Library, from 2010 to 2020. Results: 138 URs were included, consisting of 96 URs of interventions and 42 of non-interventions. Only 31 (32.3%) of the URs of interventions assessed certainty of evidence; among these, the GRADE approach was the most frequently used method (N=20, 64.5%), followed by credibility assessments (N=6, 12.9%). Conversely, 30 (71.4%) of the URs of non-interventions assessed certainty of evidence, mainly using criteria for credibility assessment (N=28, 93%). URs published in journals with a high journal impact factor (JIF) were more likely to assess certainty of evidence than URs published in journals with a low JIF. Conclusions: Only one third of URs of MAs of experimental designs assessed the certainty of the evidence, in contrast to the majority of URs of observational studies. Guidance and standards are therefore required to ensure methodological rigor and consistency in certainty-of-evidence assessment for URs.


Author(s):  
Anthony Petrosino ◽  
Claire Morgan ◽  
Trevor Fronius

Systematic reviews and meta-analyses have become a focal point of evidence-based policy in criminology. Systematic reviews use explicit and transparent processes to identify, retrieve, code, analyze, and report on existing research studies bearing on a question of policy or practice. Meta-analysis can combine the results from the most rigorous evaluations identified in a systematic review to provide policymakers with the best evidence on what works for a variety of interventions relevant to reducing crime and making the justice system fairer and more effective. The steps of a systematic review using meta-analysis include specifying the topic area, developing management procedures, specifying the search strategy, developing eligibility criteria, extracting data from the studies, computing effect sizes, developing an analysis strategy, and interpreting and reporting the results. In a systematic review using meta-analysis, after identifying and coding eligible studies, the researchers create a measure of effect size for each experimental versus control contrast of interest in the study. Most commonly, reviewers do this by standardizing the difference between scores of the experimental and control groups, placing outcomes that are conceptually similar but measured differently (e.g., re-arrest or reconviction) on the same common scale or metric. Though these are different indices, they do measure a program’s effect on some construct (e.g., criminality). These effect sizes are usually averaged across all similar studies to provide a summary of program impact. The effect sizes also represent the dependent variable in the meta-analysis, and more advanced syntheses explore the role of potential moderating variables, such as sample size or other characteristics related to effect size.
When done well and with full integrity, a systematic review using meta-analysis can provide the most comprehensive assessment of the available evaluative literature addressing the research question, as well as the most reliable statement about what works. Drawing from a larger body of research increases statistical power by reducing standard error; individual studies often use small sample sizes, which can result in large margins of error. In addition, conducting meta-analysis can be faster and less resource-intensive than replicating experimental studies. Using meta-analysis instead of relying on an individual program evaluation can help ensure that policy is guided by the totality of evidence, drawing upon a solid basis for generalizing outcomes.
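The effect-size steps described above can be sketched for continuous outcomes: compute a standardized mean difference per study, then average across studies with inverse-variance weights. This is an illustrative simplification with made-up numbers; real syntheses would use a dedicated package (e.g., the R metafor package or statsmodels' meta-analysis tools) and typically a random-effects model:

```python
import math

def cohens_d(m_exp, m_ctl, sd_exp, sd_ctl, n_exp, n_ctl):
    """Standardized mean difference (Cohen's d) using the pooled SD."""
    sd_pooled = math.sqrt(((n_exp - 1) * sd_exp**2 + (n_ctl - 1) * sd_ctl**2)
                          / (n_exp + n_ctl - 2))
    return (m_exp - m_ctl) / sd_pooled

def d_variance(d, n_exp, n_ctl):
    """Approximate sampling variance of d (large-sample formula)."""
    return (n_exp + n_ctl) / (n_exp * n_ctl) + d**2 / (2 * (n_exp + n_ctl))

def fixed_effect_mean(studies):
    """Inverse-variance weighted (fixed-effect) summary of d values.

    Each study is a (d, n_exp, n_ctl) tuple; larger studies get
    proportionally more weight because their variance is smaller.
    """
    num = den = 0.0
    for d, n_exp, n_ctl in studies:
        w = 1.0 / d_variance(d, n_exp, n_ctl)
        num += w * d
        den += w
    return num / den

# Three hypothetical program evaluations: (effect size, n_exp, n_ctl)
studies = [(0.30, 50, 50), (0.50, 30, 30), (0.10, 100, 100)]
print(round(fixed_effect_mean(studies), 2))
```

Note how the summary lands closest to the largest study's estimate: the weighting is what makes the pooled result more stable than any single small evaluation.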


2020 ◽  
Author(s):  
Hasan B Alam ◽  
Glenn Wakam ◽  
Michael T. Kemp

Conducting research in an intensive care unit (ICU) is both challenging and rewarding. ICU patients are heterogeneous, complex, and critically ill. Despite these challenges, the ICU is a data-rich research environment that lends itself to cutting-edge clinical investigation. To optimize research outcomes, investigators must carefully consider the principles of study design. This review discusses the most commonly used observational, experimental, and meta-analytic study designs, as well as the theoretical underpinnings of each study type. Published ICU-based research studies are used as examples to highlight key concepts.  This review contains 4 figures, 9 tables, and 33 references. Key words: clinical research, experimental studies, intensive care, meta-analyses, observational studies, study design

