Preliminary Examination of the Impact of Program Factors on Summary Effect Sizes

Author(s):  
Liana R. Taylor ◽  
Avinash Bhati ◽  
Faye S. Taxman

The Washington State Institute for Public Policy (WSIPP) uses meta-analyses to help program administrators identify effective programs that reduce recidivism, with results displayed as summary effect sizes. Yet many programs are grouped within a single category (such as Intensive Supervision or Correctional Education) even though their features suggest they may be quite different. The following research question was examined: Which program features are related to the effect sizes within WSIPP program categories? Researchers at ACE! at George Mason University reviewed the studies analyzed by WSIPP and their effect sizes. The meta-regression global models showed that recidivism decreased with certain program features, while other program features actually increased recidivism. A multivariate meta-regression showed substantial variation across Cognitive-Behavioral Therapy programs. These preliminary findings suggest the need for further research into how differing program features contribute to client-level outcomes, and for a scheme to better classify programs.

2021 ◽  
Vol 5 (1) ◽  
pp. e100135
Author(s):  
Xue Ying Zhang ◽  
Jan Vollert ◽  
Emily S Sena ◽  
Andrew SC Rice ◽  
Nadia Soliman

Objective: Thigmotaxis is an innate predator-avoidance behaviour of rodents that is enhanced when animals are under stress. It is characterised by the preference of a rodent to seek shelter rather than expose itself to an aversive open area. The behaviour has been proposed as a measurable construct that can address the impact of pain on rodent behaviour. This systematic review will assess whether thigmotaxis can be influenced by experimental persistent pain and attenuated by pharmacological interventions in rodents. Search strategy: We will search three electronic databases to identify studies in which thigmotaxis was used as an outcome measure in a rodent model associated with persistent pain. All studies published up to the date of the search will be considered. Screening and annotation: Two independent reviewers will screen studies based on (1) titles and abstracts, and then (2) full texts. Data management and reporting: For meta-analysis, we will extract thigmotactic behavioural data and calculate effect sizes. Effect sizes will be combined using a random-effects model. We will assess heterogeneity and identify its sources. A risk-of-bias assessment will be conducted to evaluate study quality. Publication bias will be assessed using funnel plots, Egger's regression and trim-and-fill analysis. We will also extract stimulus-evoked limb withdrawal data to assess its correlation with thigmotaxis in the same animals. The evidence obtained will provide a comprehensive understanding of the strengths and limitations of using the thigmotactic outcome measure in animal pain research so that future experimental designs can be optimised. We will follow the Preferred Reporting Items for Systematic Reviews and Meta-Analyses reporting guidelines and disseminate the review findings through publication and conference presentation.
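The random-effects combination of effect sizes described in this protocol can be sketched as follows. This is an illustrative implementation of the common DerSimonian-Laird estimator, not the authors' own code; the function name is hypothetical.

```python
import math

def pool_random_effects(effects, variances):
    """Pool study effect sizes under a DerSimonian-Laird random-effects model.

    effects   -- list of per-study effect sizes (e.g. standardised mean differences)
    variances -- list of their sampling variances
    Returns (pooled_effect, standard_error, I_squared_percent).
    """
    k = len(effects)
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q and the between-study variance tau^2
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    # Random-effects weights incorporate tau^2
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    return pooled, se, i2
```

The I² statistic returned here is the heterogeneity measure the protocol proposes to assess; funnel-plot and trim-and-fill diagnostics would be built on the same per-study effects and variances.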


2013 ◽  
Vol 2013 ◽  
pp. 1-9 ◽  
Author(s):  
Liansheng Larry Tang ◽  
Michael Caudy ◽  
Faye Taxman

Multiple meta-analyses may use similar search criteria and focus on the same topic of interest, but they may yield different or sometimes discordant results. The lack of statistical methods for synthesizing these findings makes it challenging to properly interpret the results from multiple meta-analyses, especially when their results are conflicting. In this paper, we first introduce a method to synthesize the meta-analytic results when multiple meta-analyses use the same type of summary effect estimates. When meta-analyses use different types of effect sizes, the meta-analysis results cannot be directly combined. We propose a two-step frequentist procedure to first convert the effect size estimates to the same metric and then summarize them with a weighted mean estimate. Our proposed method offers several advantages over existing methods by Hemming et al. (2012). First, different types of summary effect sizes are considered. Second, our method provides the same overall effect size as conducting a meta-analysis on all individual studies from multiple meta-analyses. We illustrate the application of the proposed methods in two examples and discuss their implications for the field of meta-analysis.
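The two-step procedure described above can be sketched in miniature: first convert each summary effect to a common metric, then combine with an inverse-variance weighted mean. The conversion shown is the standard log-odds-ratio-to-SMD transformation (Hasselblad & Hedges, 1995); function names and the example numbers are illustrative, not taken from the paper.

```python
import math

def log_or_to_smd(log_or, var_log_or):
    """Convert a log odds ratio and its variance to a standardised mean
    difference (SMD), using the factor sqrt(3)/pi."""
    factor = math.sqrt(3) / math.pi
    return log_or * factor, var_log_or * 3 / math.pi ** 2

def weighted_mean(effects, variances):
    """Inverse-variance weighted mean of effects already on a common metric."""
    w = [1.0 / v for v in variances]
    est = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    return est, se

# Step 1: bring each meta-analysis' summary effect onto the SMD scale
smd1, v1 = log_or_to_smd(0.4, 0.02)   # hypothetical meta-analysis reporting a log OR
smd2, v2 = 0.25, 0.01                 # hypothetical meta-analysis already on the SMD scale
# Step 2: summarise with a weighted mean
overall, se = weighted_mean([smd1, smd2], [v1, v2])
```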


2021 ◽  
pp. bmjspcare-2021-003163
Author(s):  
Ronald Chow ◽  
Robert Bergner ◽  
Elizabeth Prsic

Objectives: Several reviews and meta-analyses have reported on music therapy for physical and emotional well-being among patients with cancer. However, the duration of music therapy offered may range from less than 1 hour to several hours. The aim of this study is to assess whether longer duration of music therapy is associated with different levels of improvement in physical and mental well-being. Methods: Ten studies were included in this paper, reporting on the endpoints of quality of life and pain. A meta-regression, using an inverse-variance model, was performed to assess the impact of total music therapy time. A sensitivity analysis was conducted for the outcome of pain, among low risk-of-bias trials. Results: Our meta-regression found a trend toward a positive association between greater total music therapy time and improved pain control, but it was not statistically significant. Conclusion: There is a need for more high-quality studies examining music therapy for patients with cancer, with a focus on total music therapy time and patient-related outcomes including quality of life and pain.


2020 ◽  
Author(s):  
Michael W. Beets ◽  
R. Glenn Weaver ◽  
John P.A. Ioannidis ◽  
Alexis Jones ◽  
Lauren von Klinggraeff ◽  
...  

Abstract Background: Pilot/feasibility studies, and studies with small sample sizes, may be associated with inflated effects. This study explores the vibration of effect sizes (VoE) in meta-analyses when different inclusion criteria based upon sample size or pilot/feasibility status are considered. Methods: Searches were conducted for meta-analyses of behavioral interventions on topics related to the prevention/treatment of childhood obesity published from 01-2016 to 10-2019. The computed summary effect sizes (ES) were extracted from each meta-analysis. Individual studies included in the meta-analyses were classified into one of four categories: self-identified pilot/feasibility studies, or by sample size (N≤100, N>100, and N>370, the upper 75th percentile of sample sizes). The VoE was defined as the absolute difference (ABS) between the summary ES re-estimated on restricted study classifications and the originally reported summary ES. Concordance (kappa) of statistical significance between summary ES was assessed. Fixed- and random-effects models and meta-regressions were estimated. Three case studies are presented to illustrate the impact of including pilot/feasibility and N≤100 studies on the estimated summary ES. Results: A total of 1,602 effect sizes, representing 145 reported summary ES, were extracted from 48 meta-analyses containing 603 unique studies (average 22 per meta-analysis, range 2-108) and 227,217 participants. Pilot/feasibility and N≤100 studies comprised 22% (0-58%) and 21% (0-83%) of studies, respectively. Meta-regression indicated that where N≤100 studies made up ≥40% of the studies contributing to a summary ES, the ABS between the re-estimated and original summary ES was 0.29; it was 0.46 where pilot/feasibility and N≤100 studies together made up >80% of the contributing studies. Where ≤40% of the contributing studies had N>370, the ABS ES ranged from 0.20-0.30. Concordance was low when removing both pilot/feasibility and N≤100 studies (kappa=0.53) and when restricting analyses to the largest studies (N>370, kappa=0.35), with 20% and 26% of the originally reported statistically significant ES rendered non-significant, respectively. Reanalysis of the three case-study meta-analyses produced re-estimated ES that were either non-significant or half the originally reported ES. Conclusions: When meta-analyses of behavioral interventions include a substantial proportion of both pilot/feasibility and N≤100 studies, summary ES can be affected markedly and should be interpreted with caution.
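The core VoE computation, the absolute difference between a summary effect re-estimated on a restricted subset and the original summary, can be sketched as below. For simplicity this uses a fixed-effect (inverse-variance) summary, whereas the authors estimated both fixed- and random-effects models; names and the subset predicate are illustrative.

```python
def fixed_effect_summary(effects, variances):
    """Inverse-variance (fixed-effect) summary of a set of study effects."""
    w = [1.0 / v for v in variances]
    return sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)

def vibration_of_effects(studies, keep):
    """Absolute difference (ABS) between the summary effect re-estimated on a
    restricted subset of studies and the summary from the full set.

    studies -- list of (effect_size, variance, sample_size) tuples
    keep    -- predicate on sample size, e.g. lambda n: n > 100
    """
    full = fixed_effect_summary([e for e, v, n in studies],
                                [v for e, v, n in studies])
    subset = [(e, v) for e, v, n in studies if keep(n)]
    restricted = fixed_effect_summary([e for e, v in subset],
                                      [v for e, v in subset])
    return abs(restricted - full)
```

For example, dropping small studies whose effects run larger than the rest pulls the re-estimated summary down, and the returned ABS quantifies that shift.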


2016 ◽  
Vol 8 (2) ◽  
Author(s):  
Θεόδωρος Μητράκος

This study synthesizes the findings of the available empirical literature by means of meta-analyses of the impact of immigration on the Greek economy, in order to detect whether consensus conclusions are emerging and whether differences in results across studies can be explained. For this purpose, the study recodes the contribution of migrants to the Greek economy and labour market reported in the available studies as benefiting or harming the Greek economy and the native born, and estimates alternative probit and ordered probit models to assess the relationship between this observed impact and key study characteristics such as methodology, period of investigation, survey design and publication year. Even though the sample of studies available to generate comparable effect sizes remains severely limited by the heterogeneity of approaches, the study shows that the contribution of immigrants in terms of economic growth, wages and employment is clearly positive, although relatively small.


2021 ◽  
Author(s):  
Loretta Gasparini ◽  
Sho Tsuji ◽  
Christina Bergmann

Meta-analyses provide researchers with an overview of the body of evidence in a topic, with quantified estimates of effect sizes and the role of moderators, and weighting studies according to their precision. We provide a guide for conducting a transparent and reproducible meta-analysis in the field of developmental psychology within the framework of the MetaLab platform, in 10 steps: 1) Choose a topic for your meta-analysis, 2) Formulate your research question and specify inclusion criteria, 3) Preregister and carefully document all stages of your meta-analysis, 4) Conduct the literature search, 5) Collect and screen records, 6) Extract data from eligible studies, 7) Read the data into analysis software and compute effect sizes, 8) Create meta-analytic models to assess the strength of the effect and investigate possible moderators, 9) Visualize your data, 10) Write up and promote your meta-analysis. Meta-analyses can inform future studies, through power calculations, by identifying robust methods and exposing research gaps. By adding a new meta-analysis to MetaLab, datasets across multiple topics of developmental psychology can be synthesized, and the dataset can be maintained as a living, community-augmented meta-analysis to which researchers add new data, allowing for a cumulative approach to evidence synthesis.
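Step 7 above, computing effect sizes from extracted study data, is typically done with a standardised mean difference such as Hedges' g. A minimal sketch, not MetaLab's own code, is given below; the function name and example inputs are illustrative.

```python
import math

def hedges_g(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Hedges' g: Cohen's d with the small-sample correction factor J.

    Takes the mean, standard deviation, and sample size of a treatment
    group and a control group; returns (g, sampling_variance).
    """
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                   / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sp
    j = 1 - 3 / (4 * (n_t + n_c) - 9)   # approximate small-sample correction
    g = j * d
    # Standard approximation to the sampling variance of g
    var = j ** 2 * ((n_t + n_c) / (n_t * n_c) + d ** 2 / (2 * (n_t + n_c)))
    return g, var
```

The per-study (g, variance) pairs produced this way feed directly into the meta-analytic models of step 8, which weight studies by their precision.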


2021 ◽  
Vol 36 (6) ◽  
pp. 1095-1095
Author(s):  
Nicholas S Lackey ◽  
Natasha Nemanim ◽  
Alexander O Hauson ◽  
Eric J Connors ◽  
Anna Pollard ◽  
...  

Abstract Objective A previous meta-analysis utilized the Trail Making Test A (TMT-A) to measure the impact of heart failure (HF) on attention. A near-medium effect size with moderate heterogeneity was observed; the HF group performed worse than healthy controls (HC). This study explores whether the age of the HF group moderates differences in the performance of individuals with HF versus HC on the TMT-A. Data Selection Two researchers searched eight databases, extracted data, and calculated effect sizes as part of a larger study. Inclusion criteria were: (a) adults with HF (New York Heart Association severity II or higher), (b) comparison to an HC group, (c) standardized neuropsychological/cognitive testing, and (d) adequate data to calculate effect sizes. Exclusion criteria were: (a) participants had other types of major organ failure, (b) the article was not in English, or (c) there was a risk of sample overlap with another included study. A total of six articles were included in this sub-study (total HF n = 602 and HC n = 342). The unrestricted maximum likelihood computational model was used for the meta-regression. Data Synthesis Studies included in the meta-regression evidenced a statistically significant medium effect size estimate with moderate heterogeneity (k = 6, g = 0.636, p < 0.001, I2 = 56.85%). The meta-regression was statistically significant (slope = −0.0515, p = 0.0016, Qmodel = 9.86, df = 1, p = 0.0016). Conclusions Individuals with HF performed worse on the TMT-A than HC. Age accounted for a significant proportion of the observed heterogeneity in the meta-regression. Future research should examine the relationship of age on cognition in individuals with HF.
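The slope reported in such a meta-regression comes from regressing study effect sizes on a moderator (here, mean age of the HF group) with inverse-variance weights. The sketch below is a simple fixed-effect weighted least-squares version, deliberately simpler than the unrestricted maximum likelihood model the authors used; names and example values are illustrative.

```python
def meta_regression(effects, variances, moderator):
    """Weighted least-squares meta-regression of effect size on one moderator,
    with inverse-variance weights. Returns (intercept, slope)."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    # Weighted means of the moderator and the effect sizes
    xbar = sum(wi * xi for wi, xi in zip(w, moderator)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    # Weighted sums of squares and cross-products
    sxx = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, moderator))
    sxy = sum(wi * (xi - xbar) * (yi - ybar)
              for wi, xi, yi in zip(w, moderator, effects))
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    return intercept, slope
```

A negative slope, as reported in the abstract, means the HF-versus-HC effect size shrinks as the mean age of the HF sample increases.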


2021 ◽  
Vol 36 (6) ◽  
pp. 1096-1096
Author(s):  
Natasha Nemanim ◽  
Nicholas Lackey ◽  
Eric J Connors ◽  
Alexander O Hauson ◽  
Anna Pollard ◽  
...  

Abstract Objective A previous meta-analysis assessing the impact of heart failure (HF) on cognition found that the HF group performed more poorly than healthy controls (HC) on global cognition measures. The study observed a medium effect and moderate heterogeneity when using the Mini-Mental State Examination (MMSE) to measure HF's impact on global cognition. The current meta-regression explores whether the mean age of the HF group moderates performance on the MMSE when comparing HF patients to HC. Data Selection Two researchers independently searched eight databases, extracted data, and calculated effect sizes as part of a larger study. Inclusion criteria were: (a) adults with a diagnosis of HF, (b) comparison of HF patients to HC, and (c) adequate data to calculate effect sizes. Articles were excluded if patients had other types of organ failure, the article was not available in English, or there was a risk of sample overlap with another included study. Twelve articles (HF n = 1166 and HC n = 1948) were included. The unrestricted maximum likelihood computational model was used for the meta-regression. Data Synthesis Studies included in the meta-regression evidenced a statistically significant medium effect size estimate with moderate heterogeneity (k = 12, g = 0.671, p < 0.001, I2 = 80.91%). The meta-regression was statistically significant (slope = −0.023, p = 0.0022, Qmodel = 5.26, df = 1, p = 0.022). Conclusions Individuals with HF performed more poorly on the MMSE than HC. Larger effect sizes on the MMSE were observed in studies with younger participants than in studies with older participants. Future research should continue to delineate the impact of age on global cognition in individuals with HF.


Author(s):  
Michael J. Lambert ◽  
Jason L. Whipple ◽  
Maria Kleinstäuber

This meta-analysis examines the impact of measuring, monitoring, and feeding back information on client progress to clinicians while they deliver psychotherapy. It considers the effects of the two most frequently studied routine outcome monitoring practices: the Partners for Change Outcome System and the Outcome Questionnaire System. Meta-analyses of 24 studies produced small to moderate effect sizes. Feedback practices reduced deterioration rates and nearly doubled clinically significant/reliable change rates among clients predicted to have a poor outcome. Clinical examples, diversity considerations, and therapeutic advances are provided.


2017 ◽  
Vol 10 (3) ◽  
pp. 485-488
Author(s):  
Ernest H. O'Boyle

Tett, Hundley, and Christiansen (2017) make a compelling case against meta-analyses that focus on mean effect sizes (e.g., rxy and ρ) while largely disregarding the precision of the estimate and true score variance. This is a reasonable point, but meta-analyses that myopically focus on mean effects at the expense of variance are not examples of validity generalization (VG)—they are examples of bad meta-analyses. VG and situational specificity (SS) fall along a continuum, and claims about generalization are confined to the research question and the type of generalization one is seeking (e.g., directional generalization, magnitude generalization). What Tett et al. (2017) successfully debunk is an extreme position along the generalization continuum significantly beyond the tenets of VG that few, if any, in the research community hold. The position they argue against is essentially a fixed-effects assumption, which runs counter to VG. Describing VG in this way is akin to describing SS as a position that completely ignores sampling error and treats every between-sample difference in effect size as true score variance. Both are strawmen that were knocked down decades ago (Schmidt et al., 1985). There is great value in debating whether a researcher should or can argue for generalization, but this debate must start with (a) an accurate portrayal of VG, (b) a discussion of different forms of generalization, and (c) the costs of trying to establish universal thresholds for VG.

