Internet- and Mobile-Based Interventions for Mental and Somatic Conditions in Children and Adolescents

Author(s):  
Matthias Domhardt ◽  
Lena Steubl ◽  
Harald Baumeister

Abstract. This meta-review integrates the current meta-analysis literature on the efficacy of internet- and mobile-based interventions (IMIs) for mental disorders and somatic diseases in children and adolescents. Further, it summarizes the moderators of treatment effects in this age group. Using a systematic literature search of PsycINFO and MEDLINE/PubMed, we identified eight meta-analyses (N = 8,417) that met all inclusion criteria. Current meta-analytic evidence for IMIs exists for depression (range of standardized mean differences, SMDs = .16 to .76; 95% CI: –.12 to 1.12; k = 3 meta-analyses), anxiety (SMDs = .30 to 1.4; 95% CI: –.53 to 2.44; k = 5), and chronic pain (SMD = .41; 95% CI: .07 to .74; k = 1), with predominantly nonactive control conditions (waiting list; placebo). The effect size for IMIs across mental disorders reported in one meta-analysis is SMD = 1.27 (95% CI: .96 to 1.59; k = 1); the effect size of IMIs for different somatic conditions is SMD = .49 (95% CI: .33 to .64; k = 1). Moderators of treatment effects are age (k = 3), symptom severity (k = 1), and source of outcome assessment (k = 1). Quality ratings with the AMSTAR-2 checklist indicate acceptable methodological rigor of the included meta-analyses. Taken together, this meta-review suggests that IMIs are efficacious for some health conditions in youths, with evidence so far existing primarily for depression and anxiety. The findings point to the potential of IMIs to augment evidence-based mental healthcare for children and adolescents.
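The SMDs summarized above are typically computed per trial as Hedges' g from group means, standard deviations, and sample sizes, then pooled. A minimal sketch with invented trial numbers (not data from the included meta-analyses):

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference with Hedges' small-sample correction,
    plus its approximate sampling variance."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)   # small-sample correction factor
    g = j * d
    var_g = j**2 * ((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return g, var_g

# Invented two-group trial (e.g., intervention vs. waiting-list control)
g, v = hedges_g(12.0, 5.0, 50, 10.0, 5.0, 50)
```

The variance `var_g` is what gives each study its weight in the pooled analyses reported above.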

2016 ◽  
Vol 106 (8) ◽  
pp. 792-806 ◽  
Author(s):  
L. V. Madden ◽  
H.-P. Piepho ◽  
P. A. Paul

Meta-analysis, the methodology for analyzing the results from multiple independent studies, has grown tremendously in popularity over the last four decades. Although most meta-analyses involve a single effect size (summary result, such as a treatment difference) from each study, there are often multiple treatments of interest across the network of studies in the analysis. Multi-treatment (or network) meta-analysis can be used for simultaneously analyzing the results from all the treatments. However, the methodology is considerably more complicated than for the analysis of a single effect size, and there have not been adequate explanations of the approach for agricultural investigations. We review the methods and models for conducting a network meta-analysis based on frequentist statistical principles, and demonstrate the procedures using a published multi-treatment plant pathology data set. A major advantage of network meta-analysis is that correlations of estimated treatment effects are automatically taken into account when an appropriate model is used. Moreover, treatment comparisons may be possible in a network meta-analysis that are not possible in a single study because all treatments of interest may not be included in any given study. We review several models that consider the study effect as either fixed or random, and show how to interpret model-fitting output. We further show how to model the effect of moderator variables (study-level characteristics) on treatment effects, and present one approach to test for the consistency of treatment effects across the network. Online supplemental files give explanations on fitting the network meta-analytical models using SAS.
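The core of a frequentist contrast-based network meta-analysis can be sketched as a weighted least-squares fit of study contrasts onto basic parameters (effects of each treatment versus a reference). The treatments, estimates, and variances below are invented for illustration; the article's own worked example uses SAS:

```python
import numpy as np

# Hypothetical three-treatment network: A (reference), B, C. Each row is one
# two-arm study contrast: (treatment, comparator, estimate, variance).
studies = [
    ("B", "A", 0.50, 0.04),
    ("B", "A", 0.40, 0.05),
    ("C", "A", 0.80, 0.06),
    ("C", "B", 0.35, 0.05),  # the link that closes the loop in the network
]

params = ["B", "C"]  # basic parameters: effect of B and C versus reference A
X = np.zeros((len(studies), len(params)))
y = np.zeros(len(studies))
w = np.zeros(len(studies))
for i, (t, c, est, var) in enumerate(studies):
    if t in params:
        X[i, params.index(t)] = 1.0
    if c in params:
        X[i, params.index(c)] = -1.0
    y[i] = est
    w[i] = 1.0 / var

W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # fixed-effect WLS estimate
cov = np.linalg.inv(X.T @ W @ X)                  # covariance of the estimates
cb = beta[1] - beta[0]   # C-vs-B comparison, recovered from the network
```

Note how the C-versus-B comparison is estimable even though only one study tests it directly: the network borrows strength through the shared reference, and `cov` automatically carries the correlations between estimated treatment effects.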


Sports ◽  
2020 ◽  
Vol 8 (6) ◽  
pp. 88 ◽  
Author(s):  
Håvard Lorås

Appropriate levels of motor competence are an integrated part of individuals’ health-related fitness, and physical education is proposed as an important context for developing a broad range of motor skills. The aim of the current study was to apply meta-analyses to assess the effectiveness of curriculum-based physical education on the development of the overall motor competence of children and adolescents. Studies were located by searching seven databases and included according to predefined criteria. Random effects models using the standardized effect size (Hedges’ g) were used to aggregate results, including an examination of heterogeneity and inconsistency. The meta-analysis included 20 studies, and a total of 38 effect sizes were calculated. A statistically significant improvement in motor competence following curriculum-based physical education compared to active control groups was observed in children and adolescents (g = −0.69, 95% CI −0.91 to −0.46, n = 23). Participants’ ages, total time for physical education intervention, and type of motor competence assessment did not appear to be statistically significant moderators of effect size. Physical education with various curricula can, therefore, increase overall motor competence in children and adolescents.
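A random-effects aggregation of Hedges' g values like the one described above is commonly done with the DerSimonian-Laird estimator. A minimal sketch with invented effect sizes (negative g following the sign convention reported above), not the study's data:

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooling (DerSimonian-Laird) with the Q statistic,
    between-study variance tau^2, and the I^2 inconsistency index."""
    k = len(effects)
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)        # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    i2 = max(0.0, (q - (k - 1)) / q) * 100.0 if q > 0 else 0.0
    return pooled, se, tau2, i2

pooled, se, tau2, i2 = dersimonian_laird([-0.9, -0.3, -0.8],
                                         [0.04, 0.05, 0.03])
ci = (pooled - 1.96 * se, pooled + 1.96 * se)
```

The I^2 value quantifies the heterogeneity and inconsistency the abstract says was examined alongside the pooled estimate.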


2020 ◽  
Vol 15 (4) ◽  
pp. 1026-1041 ◽  
Author(s):  
Joshua R. Polanin ◽  
Emily A. Hennessy ◽  
Sho Tsuji

Systematic review and meta-analysis are possible as viable research techniques only through transparent reporting of primary research; thus, one might expect meta-analysts to demonstrate best practice in their reporting of results and have a high degree of transparency leading to reproducibility of their work. This assumption has yet to be fully tested in the psychological sciences. We therefore aimed to assess the transparency and reproducibility of psychological meta-analyses. We conducted a meta-review by sampling 150 studies from Psychological Bulletin to extract information about each review’s transparent and reproducible reporting practices. The results revealed that authors reported on average 55% of criteria and that transparent reporting practices increased over the three decades studied (b = 1.09, SE = 0.24, t = 4.519, p < .001). Review authors consistently reported eligibility criteria, effect-size information, and synthesis techniques. Review authors, however, on average, did not report specific search results, screening and extraction procedures, and most importantly, effect-size and moderator information from each individual study. Far fewer studies provided the statistical code required for complete analytical replication. We argue that the field of psychology and research synthesis in general should require review authors to report these elements in a transparent and reproducible manner.


2019 ◽  
Author(s):  
Michael P. Hengartner ◽  
Janus Christian Jakobsen ◽  
Anders Sorensen ◽  
Martin Plöderl

Background: It has been claimed that efficacy estimates based on the Hamilton Depression Rating Scale (HDRS) underestimate antidepressants’ true treatment effects due to the instrument’s poor psychometric properties. The aim of this study is to compare efficacy estimates based on the HDRS with those based on the gold-standard instrument, the Montgomery-Åsberg Depression Rating Scale (MADRS). Methods and findings: We conducted a meta-analysis based on the comprehensive dataset of acute antidepressant trials provided by Cipriani et al. We included all placebo-controlled trials that reported continuous outcomes based on either the 17-item HDRS version or the MADRS. We computed standardised mean difference effect size estimates and raw score drug-placebo differences to evaluate thresholds for clinician-rated minimal improvement (clinical significance). We selected 109 trials (n = 32,399) that assessed the HDRS-17 and 28 trials (n = 11,705) that assessed the MADRS. The summary estimate (effect size) for the HDRS-17 was 0.27 (0.23 to 0.30), compared to 0.30 (0.22 to 0.38) for the MADRS. The difference between HDRS-17 and MADRS was not statistically significant according to both subgroup analysis (p = 0.47) and meta-regression (p = 0.44). The drug-placebo raw score difference was 2.07 (1.76 to 2.37) points on the HDRS-17 (threshold for minimal improvement: 7 points) and 2.99 (2.24 to 3.74) points on the MADRS (threshold for minimal improvement: 8 points). Conclusions: Overall, there was no difference between the HDRS-17 and the MADRS. These findings suggest that previous meta-analyses, which were mostly based on the HDRS, did not underestimate the drugs’ true treatment effect as assessed with the MADRS, the preferred outcome rating scale. Moreover, the drug-placebo differences in raw scores suggest that treatment effects are indeed small and of questionable importance for the average patient.
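The subgroup comparison described here (HDRS-17 versus MADRS summary estimates) amounts to a Wald-type z test between two independent pooled estimates, equivalent to a Q-between test with one degree of freedom. A sketch with invented SMDs and variances standing in for the two scale subgroups:

```python
import math

def pooled_fixed(effects, variances):
    """Fixed-effect (inverse-variance) pooled estimate and its variance."""
    w = [1.0 / v for v in variances]
    est = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    return est, 1.0 / sum(w)

def subgroup_z_test(est_a, var_a, est_b, var_b):
    """Wald-type z test for the difference between two independent
    subgroup summaries (equivalent to a Q-between test with 1 df)."""
    z = (est_a - est_b) / math.sqrt(var_a + var_b)
    p = math.erfc(abs(z) / math.sqrt(2))   # two-sided normal p-value
    return z, p

a, va = pooled_fixed([0.25, 0.29, 0.27], [0.002, 0.003, 0.004])
b, vb = pooled_fixed([0.28, 0.33], [0.006, 0.008])
z, p = subgroup_z_test(a, va, b, vb)
```

A large p-value here, as in the abstract, means the two scales' summary estimates are statistically indistinguishable.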


2021 ◽  
pp. 146531252110272
Author(s):  
Despina Koletsi ◽  
Anna Iliadi ◽  
Theodore Eliades

Objective: To evaluate all available evidence on the prediction of rotational tooth movements with aligners. Data sources: Seven databases of published and unpublished literature were searched up to 4 August 2020 for eligible studies. Data selection: Studies were deemed eligible if they included evaluation of rotational tooth movement with any type of aligner, through the comparison of software-based and actually achieved data after patient treatment. Data extraction and data synthesis: Data extraction was done independently and in duplicate and risk of bias assessment was performed with the use of the QUADAS-2 tool. Random effects meta-analyses with effect sizes and their 95% confidence intervals (CIs) were performed and the quality of the evidence was assessed through GRADE. Results: Seven articles were included in the qualitative synthesis, of which three contributed to meta-analyses. Overall results revealed inaccurate prediction of the outcome by the software-based data, irrespective of the use of attachments or interproximal enamel reduction (IPR). Maxillary canines demonstrated the lowest percentage accuracy for rotational tooth movement (three studies: effect size = 47.9%; 95% CI = 27.2–69.5; P < 0.001), although high levels of heterogeneity were identified (I2 = 86.9%; P < 0.001). In contrast, mandibular incisors presented the highest percentage accuracy for predicted rotational movement (two studies: effect size = 70.7%; 95% CI = 58.9–82.5; P < 0.001; I2 = 0.0%; P = 0.48). Risk of bias was unclear to low overall, while the quality of the evidence ranged from low to moderate. Conclusion: Allowing for all identified caveats, prediction of rotational tooth movements with aligner treatment does not appear accurate, especially for canines. Careful selection of patients and malocclusions suitable for aligner treatment therefore remains challenging.


2012 ◽  
Vol 9 (5) ◽  
pp. 610-620 ◽  
Author(s):  
Thomas A Trikalinos ◽  
Ingram Olkin

Background Many comparative studies report results at multiple time points. Such data are correlated because they pertain to the same patients, but are typically meta-analyzed as separate quantitative syntheses at each time point, ignoring the correlations between time points. Purpose To develop a meta-analytic approach that estimates treatment effects at successive time points and takes account of the stochastic dependencies of those effects. Methods We present both fixed and random effects methods for multivariate meta-analysis of effect sizes reported at multiple time points. We provide formulas for calculating the covariance (and correlations) of the effect sizes at successive time points for four common metrics (log odds ratio, log risk ratio, risk difference, and arcsine difference) based on data reported in the primary studies. We work through an example of a meta-analysis of 17 randomized trials of radiotherapy and chemotherapy versus radiotherapy alone for the postoperative treatment of patients with malignant gliomas, where in each trial survival is assessed at 6, 12, 18, and 24 months post randomization. We also provide software code for the main analyses described in the article. Results We discuss the estimation of fixed and random effects models and explore five options for the structure of the covariance matrix of the random effects. In the example, we compare separate (univariate) meta-analyses at each of the four time points with joint analyses across all four time points using the proposed methods. Although results of univariate and multivariate analyses are generally similar in the example, there are small differences in the magnitude of the effect sizes and the corresponding standard errors. We also discuss conditional multivariate analyses where one compares treatment effects at later time points given observed data at earlier time points. 
Limitations Simulation and empirical studies are needed to clarify the gains of multivariate analyses compared with separate meta-analyses under a variety of conditions. Conclusions Data reported at multiple time points are multivariate in nature and are efficiently analyzed using multivariate methods. The latter are an attractive alternative or complement to performing separate meta-analyses.
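The joint analysis across time points the authors describe can be sketched, in the fixed-effect case, as a generalized least-squares pooling of each study's effect vector weighted by its inverse within-study covariance matrix. The numbers below are invented, and use two time points rather than the article's four:

```python
import numpy as np

# Each study reports effects at two time points; the within-study covariance
# reflects that both estimates come from the same patients.
y = [np.array([0.30, 0.25]), np.array([0.45, 0.35]), np.array([0.20, 0.30])]
S = [np.array([[0.040, 0.020], [0.020, 0.050]]),
     np.array([[0.030, 0.015], [0.015, 0.040]]),
     np.array([[0.050, 0.025], [0.025, 0.060]])]

# Fixed-effect multivariate pooling by generalized least squares:
# weight each study's effect vector by its inverse covariance matrix.
W = [np.linalg.inv(s) for s in S]
V_pooled = np.linalg.inv(sum(W))              # covariance of the pooled vector
mu = V_pooled @ sum(w @ yi for w, yi in zip(W, y))
se = np.sqrt(np.diag(V_pooled))               # standard errors per time point
```

With positive within-study correlations, each time point's pooled estimate borrows information from the other, which is the efficiency gain over separate univariate meta-analyses.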


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Duygu Akçay ◽  
Nuray Barış

Purpose The purpose of this paper is to evaluate the impact of interventions focused on reducing screen time in children. Design/methodology/approach Studies that aim to investigate the effects of interventions aimed at reducing the time spent in front of the screen (i.e. screen time). A Random-effects model was used to calculate the pooled standard mean differences. The outcome was to evaluate the screen time in children in the 0–18 age range. A subgroup analysis was performed to reveal the extent to which the overall effect size varied by subgroups (participant age, duration of intervention and follow). Findings For the outcome, the meta-analysis included 21 studies, and the standard difference in mean change in screen time in the intervention group compared with the control group was −0.16 (95% confidence interval [CI], −0.21 to −0.12) (p < 0.001). The effect size was found to be higher in long-term (=7 months) interventions and follow-ups (p < 0.05). Originality/value Subgroup analysis showed that a significant effect of screen time reduction was observed in studies in which the duration of intervention and follow-up was =7 months. As the evidence base grows, future researchers can contribute to these findings by conducting a more comprehensive analysis of effect modifiers and optimizing interventions to reduce screen time.


2013 ◽  
Vol 2013 ◽  
pp. 1-9 ◽  
Author(s):  
Liansheng Larry Tang ◽  
Michael Caudy ◽  
Faye Taxman

Multiple meta-analyses may use similar search criteria and focus on the same topic of interest, but they may yield different or sometimes discordant results. The lack of statistical methods for synthesizing these findings makes it challenging to properly interpret the results from multiple meta-analyses, especially when their results are conflicting. In this paper, we first introduce a method to synthesize the meta-analytic results when multiple meta-analyses use the same type of summary effect estimates. When meta-analyses use different types of effect sizes, the meta-analysis results cannot be directly combined. We propose a two-step frequentist procedure to first convert the effect size estimates to the same metric and then summarize them with a weighted mean estimate. Our proposed method offers several advantages over existing methods by Hemming et al. (2012). First, different types of summary effect sizes are considered. Second, our method provides the same overall effect size as conducting a meta-analysis on all individual studies from multiple meta-analyses. We illustrate the application of the proposed methods in two examples and discuss their implications for the field of meta-analysis.
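The two-step procedure described above can be illustrated with the standard logistic-distribution approximation for converting a log odds ratio to an SMD, followed by an inverse-variance weighted mean. The two summary estimates below are invented:

```python
import math

def log_or_to_smd(log_or, var_log_or):
    """Convert a log odds ratio (and its variance) to a standardized mean
    difference via the logistic approximation d = ln(OR) * sqrt(3) / pi."""
    return log_or * math.sqrt(3) / math.pi, var_log_or * 3 / math.pi ** 2

# Hypothetical summaries from two meta-analyses of the same question:
# one reports an SMD, the other a log odds ratio.
smd, v_smd = 0.40, 0.010
lor, v_lor = 0.90, 0.060

# Step 1: convert everything to the SMD metric
est2, var2 = log_or_to_smd(lor, v_lor)

# Step 2: inverse-variance weighted mean of the common-metric estimates
w1, w2 = 1.0 / v_smd, 1.0 / var2
overall = (w1 * smd + w2 * est2) / (w1 + w2)
se_overall = math.sqrt(1.0 / (w1 + w2))
```

Weighting by inverse variance is what makes this equivalent to pooling all the underlying individual studies at once, the property the authors highlight.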


2020 ◽  
Author(s):  
Nasrin Amiri Dashatan ◽  
Marzieh Ashrafmansouri ◽  
Mehdi Koushki ◽  
Nayebali Ahmadi

Abstract Background Leishmaniasis is one of the most important health problems worldwide. Evidence has suggested that resveratrol (RSV) and its derivatives have anti-leishmanial effects; however, the results are inconsistent and inconclusive. The aim of this study was to assess the effect of resveratrol and its derivatives on Leishmania viability through a systematic review and meta-analysis of the available relevant studies. Methods The electronic databases PubMed, ScienceDirect, Embase, Web of Science and Scopus were queried between October 2000 and April 2020 using a comprehensive search strategy. Eligible articles were selected and data extraction was conducted by two reviewers. Mean differences of IC50 (the concentration leading to a 50% reduction of Leishmania) for each outcome were calculated using random-effects models. Sensitivity analyses and prespecified subgroup analyses were conducted to evaluate potential heterogeneity and the stability of the pooled results. Publication bias was evaluated using Egger’s and Begg’s tests. We also followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines for this review. Results Ten studies were included in the meta-analysis. We observed that RSV and its derivatives had significant reducing effects on Leishmania viability in the promastigote [24.02 µg/ml; (95% CI 17.1, 30.8); P < 0.05; I2 = 99.8%; P heterogeneity = 0.00] and amastigote [18.3 µg/ml; (95% CI 13.5, 23.2); P < 0.05; I2 = 99.6%; P heterogeneity = 0.00] stages of Leishmania. A significant publication bias was observed in the meta-analysis. Sensitivity analyses showed a similar effect size while reducing the heterogeneity. Subgroup analysis indicated that the pooled leishmanicidal effects of resveratrol and its derivatives were affected by the type of stilbene and the Leishmania species.
Conclusions Our findings clearly suggest that strategies for the treatment of leishmaniasis should focus on natural products such as RSV and its derivatives. Further study is needed to identify the mechanisms mediating these protective effects of RSV and its derivatives in leishmaniasis.
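Egger's test, one of the publication-bias checks used in this review, regresses the standardized effect on precision; an intercept far from zero indicates funnel-plot asymmetry. A minimal sketch with invented study data:

```python
import math

def egger_test(effects, ses):
    """Egger's regression test: regress the standardized effect (y/se)
    on precision (1/se); an intercept far from zero signals funnel-plot
    asymmetry (small-study effects)."""
    x = [1.0 / s for s in ses]                      # precision
    z = [y / s for y, s in zip(effects, ses)]       # standardized effect
    n = len(x)
    mx, mz = sum(x) / n, sum(z) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxz = sum((xi - mx) * (zi - mz) for xi, zi in zip(x, z))
    slope = sxz / sxx
    intercept = mz - slope * mx
    resid = [zi - (intercept + slope * xi) for xi, zi in zip(x, z)]
    s2 = sum(r ** 2 for r in resid) / (n - 2)       # residual variance
    se_int = math.sqrt(s2 * (1.0 / n + mx ** 2 / sxx))
    return intercept, intercept / se_int            # intercept and its t value

# Invented study effects whose magnitude shrinks as precision grows,
# the asymmetry pattern the test is designed to detect
b0, t = egger_test([1.2, 0.9, 0.6, 0.5, 0.4, 0.35],
                   [0.50, 0.40, 0.30, 0.25, 0.20, 0.15])
```

The t value is compared against a t distribution with n − 2 degrees of freedom; a large value, as in this review, flags significant small-study effects.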


2021 ◽  
Author(s):  
Megha Joshi ◽  
James E Pustejovsky ◽  
S. Natasha Beretvas

The most common and well-known meta-regression models work under the assumption that there is only one effect size estimate per study and that the estimates are independent. However, meta-analytic reviews of social science research often include multiple effect size estimates per primary study, leading to dependence in the estimates. Some meta-analyses also include multiple studies conducted by the same lab or investigator, creating another potential source of dependence. An increasingly popular method to handle dependence is robust variance estimation (RVE), but this method can result in inflated Type I error rates when the number of studies is small. Small-sample correction methods for RVE have been shown to control Type I error rates adequately but may be overly conservative, especially for tests of multiple-contrast hypotheses. We evaluated an alternative method for handling dependence, cluster wild bootstrapping, which has been examined in the econometrics literature but not in the context of meta-analysis. Results from two simulation studies indicate that cluster wild bootstrapping maintains adequate Type I error rates and provides more power than extant small-sample correction methods, particularly for multiple-contrast hypothesis tests. We recommend using cluster wild bootstrapping to conduct hypothesis tests for meta-analyses with a small number of studies. We have also created an R package that implements such tests.
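The idea of cluster wild bootstrapping can be sketched compactly: refit the model on pseudo-data built from null-imposed residuals flipped by one Rademacher sign per study (cluster), and compare the observed cluster-robust t statistic against the bootstrap distribution. This is a simplified illustration (intercept plus one moderator, CR0 standard errors), not the authors' R package:

```python
import numpy as np

rng = np.random.default_rng(2024)

def cluster_wild_bootstrap_p(y, x, cluster, reps=999):
    """P-value for the slope in a meta-regression y = b0 + b1*x + e,
    using a cluster wild bootstrap with Rademacher weights and the
    null (b1 = 0) imposed when generating bootstrap samples."""
    X = np.column_stack([np.ones_like(x), x])

    def slope_t(yv):
        beta, *_ = np.linalg.lstsq(X, yv, rcond=None)
        resid = yv - X @ beta
        bread = np.linalg.inv(X.T @ X)
        meat = np.zeros((2, 2))
        for g in np.unique(cluster):
            s = X[cluster == g].T @ resid[cluster == g]
            meat += np.outer(s, s)          # CR0 cluster-robust "meat"
        V = bread @ meat @ bread
        return beta[1] / np.sqrt(V[1, 1])

    t_obs = slope_t(y)
    u0 = y - y.mean()                       # residuals under the null model
    count = 0
    for _ in range(reps):
        # one Rademacher sign per cluster, shared by all its effect sizes
        signs = {g: rng.choice([-1.0, 1.0]) for g in np.unique(cluster)}
        y_star = y.mean() + u0 * np.array([signs[g] for g in cluster])
        if abs(slope_t(y_star)) >= abs(t_obs):
            count += 1
    return count / reps

# Simulated data: 8 studies contributing 3 dependent effect sizes each,
# with a genuine moderator effect, so the test should tend to reject.
cluster = np.repeat(np.arange(8), 3)
x = rng.normal(size=24)
y = (0.2 + 1.0 * x + rng.normal(scale=0.05, size=24)
     + np.repeat(rng.normal(scale=0.05, size=8), 3))
p = cluster_wild_bootstrap_p(y, x, cluster)
```

Because signs are drawn per cluster rather than per observation, the bootstrap preserves the within-study dependence structure that motivates RVE in the first place.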

