Letter to the Editor: Claims about the effects of botulinum toxin on depression should raise some eyebrows.

2021 ◽  
Author(s):  
Nicholas Alvaro Coles ◽  
Jeff T. Larsen

Recently, Schulze and colleagues (2021) and we (Coles, Larsen, Kuribayashi, & Kuelz, 2019) published two separate meta-analyses examining whether glabellar-region botulinum toxin injections can decrease depression. Both meta-analysis teams reviewed similar studies; discussed similar mechanisms-of-action; observed unexpectedly large effect sizes; observed asymmetry in funnel plot distributions; and acknowledged that it is difficult to blind participants. Yet, our two teams came to starkly different conclusions. Whereas Schulze and colleagues concluded that the treatment reaches rigorous “1a level of evidence” standards (p. 338), we concluded the opposite: that the claim is “not yet well substantiated by a credible balance of evidence” (p. 11). In this Letter to the Editor, we clarify why we believe that a more careful consideration of the quality of the evidence, potential costs, and potential benefits is necessary before promoting botulinum toxin as an off-label treatment for depression.

2019 ◽  
Vol 11 (4) ◽  
pp. 294-309
Author(s):  
Nicholas A. Coles ◽  
Jeff T. Larsen ◽  
Joyce Kuribayashi ◽  
Ashley Kuelz

Researchers have proposed that blocking facial feedback via glabellar-region botulinum toxin injections (GBTX) can reduce depression. Random-effects meta-analyses of studies that administered GBTX to individuals with depression indicate that, 6 weeks postintervention, GBTX groups were significantly less depressed compared to placebo groups (d = 0.83) and pretreatment levels (d = 1.57). However, we noted the following concerns: (a) effect sizes were extraordinarily large, (b) authors failed to provide information to compute 51% of relevant effect sizes, (c) 96% of effect sizes came from studies conducted by investigators with conflicts of interest, (d) there is some evidence of publication bias, and (e) studies used ineffective blinding procedures. These considerations suggest that confidence in GBTX as a treatment for depression is premature.
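The pooled estimates above come from random-effects models. As a minimal sketch of how such a pooled d is typically computed (here with the DerSimonian-Laird estimator; the study-level values below are hypothetical, not the GBTX trial data):

```python
import numpy as np

# Hypothetical study-level standardized mean differences (Cohen's d) and
# their sampling variances -- illustrative values only, not the GBTX data.
d = np.array([0.95, 0.60, 1.10, 0.70])
v = np.array([0.08, 0.05, 0.12, 0.06])

# Fixed-effect (inverse-variance) weights and the Q heterogeneity statistic
w = 1.0 / v
d_fe = np.sum(w * d) / np.sum(w)
q = np.sum(w * (d - d_fe) ** 2)

# DerSimonian-Laird estimate of the between-study variance tau^2
df = len(d) - 1
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights, pooled effect, and a 95% confidence interval
w_re = 1.0 / (v + tau2)
d_re = np.sum(w_re * d) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled d = {d_re:.2f}, 95% CI [{d_re - 1.96*se:.2f}, {d_re + 1.96*se:.2f}]")
```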


2019 ◽  
Vol 39 (9) ◽  
pp. 927-942 ◽  
Author(s):  
Andrew A Jacono ◽  
A Sean Alemi ◽  
Joseph L Russell

Abstract
Background: Sub-superficial musculo-aponeurotic system (SMAS) rhytidectomy techniques are considered to have a higher complication profile, especially for facial nerve injury, compared with less invasive SMAS techniques. This results in surgeons avoiding sub-SMAS dissection.
Objectives: The authors sought to aggregate and summarize data on complications among different SMAS facelift techniques.
Methods: A broad systematic search was performed. All included studies: (1) described a SMAS facelifting technique categorized as SMAS plication, SMASectomy/imbrication, SMAS flap, high lateral SMAS flap, deep plane, or composite; and (2) reported the number of postoperative complications in participants. Meta-analysis was performed in accordance with Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines.
Results: A total of 183 studies were included. High lateral SMAS (1.85%) and composite rhytidectomy (1.52%) had the highest rates of temporary nerve injury and were the only techniques to show a statistically significant difference compared with SMAS plication (odds ratio [OR] = 2.71 and 2.22, respectively; P < 0.05). Risk of permanent injury did not differ among techniques. An increase in major hematoma was found for the deep plane (1.22%, OR = 1.67, P < 0.05) and SMAS imbrication (1.92%, OR = 2.65, P < 0.01). Skin necrosis was higher with the SMAS flap (1.57%, OR = 2.29, P < 0.01).
Conclusions: There are statistically significant differences in complication rates between SMAS facelifting techniques for temporary facial nerve injury, hematoma, seroma, necrosis, and infection. Technique should be selected based on quality of results and not the complication profile.
Level of Evidence: 2
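The comparisons above are reported as odds ratios on complication counts. A minimal sketch of how an odds ratio and its Wald confidence interval are obtained from two techniques' event counts (the counts below are hypothetical, not drawn from the included studies):

```python
import math

def odds_ratio(events_a, total_a, events_b, total_b):
    """Odds ratio of technique A vs technique B, with a Wald 95% CI."""
    a, b = events_a, total_a - events_a
    c, d = events_b, total_b - events_b
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, (lo, hi)

# Hypothetical counts: 18 temporary nerve injuries in 1000 deep-plane cases
# vs 7 in 1000 SMAS-plication cases.
print(odds_ratio(18, 1000, 7, 1000))
```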


2020 ◽  
Vol 29 ◽  
Author(s):  
Nickolas D. Frost ◽  
Thomas W. Baskin ◽  
Bruce E. Wampold

Abstract
Aims: The purpose of this review is to examine replication attempts of psychotherapy clinical trials for depression and anxiety. We focus specifically on replications of trials that exhibit large differences between psychotherapies. The replicability of these trials is especially important for meta-analysis, where the inclusion of false-positive trials can lead to erroneous conclusions about treatment efficacy.
Methods: Standard replication criteria were developed to distinguish direct from conceptual replication methodologies. Next, an exhaustive literature search was conducted for published meta-analyses of psychotherapy comparisons. Trials that exhibited large effects (d > 0.8) were culled from these meta-analyses. For each trial, a cited-replication search was conducted to determine whether the trial had been subsequently replicated by either 'direct' or 'conceptual' methods. Finally, a broader search was conducted to examine the extent of replication efforts in the psychotherapy literature overall.
Results: In the meta-analytic search, a total of N = 10 meta-analyses met the inclusion criteria. From these meta-analyses, N = 12 distinct trials exhibited large effect sizes. The meta-analyses containing more than two large-effect trials reported evidence for treatment superiority. The cited-replication search yielded no direct replication attempts (N = 0) for the trials with large effects and N = 4 conceptual replication attempts of average or above-average quality. Of these four attempts, however, only two partially corroborated the results of their original trials.
Conclusion: Meta-analytic reviews are influenced by trials with large effects, and it is not uncommon for these reviews to contain several such trials. Since we find no evidence that trials with such large effects are directly replicable, treatment-superiority conclusions from these reviews are highly questionable. To enhance the quality of clinical science, the development of authoritative replication criteria for clinical trials is needed. Moreover, quality benchmarks should be considered before trials are included in a meta-analysis or replications are attempted.
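For reference, the d > 0.8 cutoff used above to flag large between-treatment differences refers to Cohen's standardized mean difference, conventionally defined as:

```latex
d = \frac{\bar{x}_1 - \bar{x}_2}{s_p},
\qquad
s_p = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
```

with d of roughly 0.2, 0.5, and 0.8 read, by Cohen's conventions, as small, medium, and large effects.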


BMJ Open ◽  
2020 ◽  
Vol 10 (5) ◽  
pp. e034846 ◽  
Author(s):  
Rutger MJ de Zoete ◽  
James H McAuley ◽  
Nigel R Armfield ◽  
Michele Sterling

Introduction: Neck pain is a burdensome global problem, with a large proportion of cases becoming chronic. Although physical exercise is a commonly prescribed treatment, the evidence on the effectiveness of isolated exercise interventions remains limited. Traditional pairwise randomised controlled trials (RCTs) and meta-analyses are limited to comparing two interventions. This protocol describes the design of a network meta-analysis, which enables a comparative investigation of all physical exercise interventions for which RCTs are available. We aim to systematically compare the effectiveness of different types of physical exercise in people with chronic non-specific neck pain.
Methods and analysis: Nine electronic databases (AMED, CINAHL, Cochrane Central Register of Controlled Trials, Embase, MEDLINE, Physiotherapy Evidence Database, PsycINFO, Scopus and SPORTDiscus) were searched for RCTs from inception to 12 March 2019. Titles and abstracts, and then full-text papers, will be screened by two reviewers. Data will be extracted by two reviewers. The primary outcome measure is effectiveness of the intervention. Methodological quality of included studies will be assessed by two reviewers using the PEDro scale. The overall quality of evidence will be assessed with the Grading of Recommendations Assessment, Development and Evaluation (GRADE) framework, which has been adapted for network meta-analyses. The available evidence will be summarised using a network diagram. A contribution matrix will be presented to allow assessment of direct and indirect evidence. Forest plots will be constructed to visualise the effects of all included exercise interventions. Pairwise effect sizes will be calculated by including all evidence available in the network. Effect measures for treatments that have not been compared in a pairwise RCT can be compared indirectly by contrasting effect sizes of comparisons with a common comparator.
Ethics and dissemination: This work synthesises evidence from previously published studies and does not require ethics review or approval. A manuscript describing the findings will be submitted for publication in a peer-reviewed scientific journal.
PROSPERO registration number: CRD42019126523.
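The indirect-comparison step described in this protocol, contrasting two interventions through a common comparator, is in its simplest form the Bucher adjusted indirect comparison. A minimal sketch with made-up effect sizes (illustrative only, not part of the protocol's analysis plan):

```python
import math

# Hypothetical pairwise results against a common comparator C (e.g., usual care):
# standardized mean differences and their standard errors.
d_ac, se_ac = -0.50, 0.15   # exercise type A vs C
d_bc, se_bc = -0.20, 0.18   # exercise type B vs C

# Bucher adjusted indirect comparison of A vs B through the common comparator C
d_ab = d_ac - d_bc
se_ab = math.sqrt(se_ac**2 + se_bc**2)
print(f"indirect A vs B: d = {d_ab:.2f}, "
      f"95% CI [{d_ab - 1.96*se_ab:.2f}, {d_ab + 1.96*se_ab:.2f}]")
```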


2020 ◽  
Vol 46 (2-3) ◽  
pp. 334-342
Author(s):  
Thomas Hugh Feeley

Abstract
Meta-analyses rest on an assumption about the quality of the primary studies included for analysis. Specifically, the process assumes each study is a valid estimation of a hypothesized relationship of interest. In instances when a primary study's quality is below an acceptable standard, one option is for the study to be excluded from further analyses. Alternatively, studies of acceptable merit could be further investigated through moderator analyses in an attempt to explain heterogeneity among effect sizes due to quality elements. The current essay discusses methods for evaluating study quality before proposing guidelines for their assessment. It is recommended that future meta-analyses in communication include a dedicated section detailing how study quality is addressed when reviewing studies for inclusion.
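One way to operationalize the moderator-analysis option discussed here is a subgroup comparison: pool higher- and lower-quality studies separately and test whether the pooled effects differ using a between-subgroups Q statistic. A minimal fixed-effect sketch with hypothetical study data (illustrative only):

```python
import numpy as np
from scipy import stats

def pooled(effects, variances):
    """Fixed-effect pooled estimate and its variance (inverse-variance weights)."""
    w = 1.0 / np.asarray(variances)
    est = np.sum(w * effects) / np.sum(w)
    return est, 1.0 / np.sum(w)

# Hypothetical effect sizes and variances, split by a study-quality rating
high_q = pooled([0.30, 0.25, 0.40], [0.02, 0.03, 0.02])
low_q  = pooled([0.70, 0.90, 0.65], [0.04, 0.05, 0.03])

# Between-subgroups Q statistic (1 df for two subgroups)
q_between = (high_q[0] - low_q[0]) ** 2 / (high_q[1] + low_q[1])
p = stats.chi2.sf(q_between, df=1)
print(f"Q_between = {q_between:.2f}, p = {p:.3f}")
```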


2019 ◽  
Author(s):  
Malte Elson

Research synthesis is based on the assumption that when the same association between constructs is observed repeatedly in a field, the relationship is probably real, even if its exact magnitude can be debated. Yet this probability is not only a function of recurring results, but also of the quality and consistency of the empirical procedures that produced those results, which any meta-analysis necessarily inherits. Standardized protocols in data collection, analysis, and interpretation are important empirical properties and a healthy sign of a discipline's maturity. This manuscript proposes that meta-analyses as typically applied in psychology would benefit from complementing their aggregates of observed effect sizes with a systematic examination of the standardization of the methodology that deterministically produced them. Potential units of analysis are described, and two examples are offered to illustrate the benefits of such efforts. Ideally, this synergetic approach emphasizes the role of methods in advancing theory by improving the quality of meta-analytic inferences.


2016 ◽  
Vol 33 (S1) ◽  
pp. S550-S551
Author(s):  
J.M. Rubio ◽  
G. Inczedy-Farkas ◽  
S. Leucht ◽  
J.M. Kane ◽  
C. Correll

Antipsychotics are the cornerstone of treatment for schizophrenia, but they have limited effectiveness, as most patients require subsequent strategies at some point in their treatment. Despite being widely used, the efficacy of pharmacologic augmentation of antipsychotics is controversial, and no combination treatment has been approved for schizophrenia. We conducted a systematic review in PubMed and PsycInfo on June 1st, 2015, and a random-effects meta-analysis of meta-analyses of short-term, placebo-controlled studies of pharmacological augmentation strategies of antipsychotics in schizophrenia. Methodological quality of the meta-analyses was measured using the AMSTAR, plus 6 additional items developed to rate the content quality of the meta-analyzed trials. Out of 3062 publications, we identified 36 eligible augmenting strategies. For total symptom reduction, 25 strategies augmenting antipsychotics and 5 strategies augmenting clozapine were eligible and examined. Eleven strategies were more efficacious than placebo, none of them augmenting clozapine. Significant effect sizes ranged between SMD −1.03 and −0.23. Efficacy was not correlated with the quality of the meta-analyses. Only the meta-analysis for NSAID augmentation had a score greater than half of the possible points for content quality. Only antipsychotics, azapirones, antidepressants, and lithium were less often discontinued than placebo. Serotonin-3-receptor antagonists, lamotrigine, mirtazapine/mianserine, minocycline, and estrogens had large effect sizes when augmenting antipsychotics. However, the quality of the content of most meta-analyses was low. The NSAID augmentation meta-analysis had the best content quality, yet a low effect size for efficacy. The evidence for short-term augmentation strategies of antipsychotics in schizophrenia is inconclusive, due to the limited quality of the available trials.
Disclosure of interest: The authors have not supplied their declaration of competing interest.


PLoS ONE ◽  
2021 ◽  
Vol 16 (12) ◽  
pp. e0261120
Author(s):  
Dongil Kim ◽  
Seohyeon Choi

Data-based instruction (DBI) is an ongoing process of using student data to determine when and how to intensify intervention. It is an educational approach suggested to be effective for enhancing the achievement of struggling learners, particularly those who have not responded to intervention delivered in the usual ways. In Korea, DBI has been applied to students with learning difficulties since 2000, when the first Korean curriculum-based measurement (CBM), the Basic Academic Skills Assessment, was developed. Although a number of studies have accumulated since then, little research has examined whether DBI research meets evidence-based practice (EBP) standards. Thus, the present study sought to synthesize the DBI research conducted so far in Korea by analyzing the effectiveness of DBI for school-aged students with learning difficulties via meta-analysis and by evaluating the quality of the research. A total of 32 single-subject design studies were used. Multilevel meta-analysis revealed that the mean effect size of DBI was statistically significant (B = 1.34) and that there was significant variance in effect sizes across participants. The results from the conditional model showed that exceptionality type, the number of sessions, and the length of each session significantly accounted for the variability in effect sizes. In addition, the results of the qualitative analysis indicated that the overall quality of the DBI research was acceptable, with some limitations. Based on these findings, implications and study limitations are discussed.
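The multilevel approach used here nests effect sizes within participants and participants within studies. One plausible specification of the unconditional model (a sketch, not necessarily the exact model fit in this study) is:

```latex
% Effect size i for participant j in study k
ES_{ijk} = \beta_0 + u_k + v_{jk} + e_{ijk},
\qquad
u_k \sim N(0, \tau^2_{\text{study}}), \quad
v_{jk} \sim N(0, \tau^2_{\text{participant}}), \quad
e_{ijk} \sim N(0, \sigma^2_{ijk})
```

where β0 corresponds to the overall mean effect (the B = 1.34 reported above), u_k, v_jk, and e_ijk are study-level, participant-level, and sampling deviations, and moderators such as exceptionality type or session length would enter as additional fixed-effect predictors in the conditional model.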


2020 ◽  
pp. 106907272098503
Author(s):  
Francis Milot-Lapointe ◽  
Yann Le Corff ◽  
Nicole Arifoulline

This article reports the results of the first meta-analysis of the association between working alliance and outcomes of individual career counseling. This random-effects meta-analysis included 18 published and unpublished studies that produced a weighted mean effect size of r = .42. This effect size was heterogeneous across studies. Separate meta-analyses were conducted for several types of outcomes: career outcomes, mental health outcomes, and client-perceived quality of the intervention. Average effect sizes for the association between working alliance and these types of outcomes were .28, .18, and .62, respectively. Moderator analyses indicated that the overall mean effect size (r = .42) varied substantially as a function of the type of outcome and the time at which working alliance was assessed (first session, mid-counseling, or at termination of the counseling service). Our results confirm that working alliance is associated with career counseling effectiveness and suggest that career counselors should emphasize the working alliance during the career counseling process. In conclusion, this article provides suggestions for practice in individual career counseling and avenues for research on working alliance in this context.
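Meta-analyses of correlational effect sizes such as the r = .42 reported here commonly pool Fisher z-transformed correlations and back-transform the pooled value. A minimal inverse-variance sketch with hypothetical study correlations (not the 18 studies analysed here); a random-effects version would add an estimate of between-study variance to each sampling variance, as in the earlier DerSimonian-Laird sketch:

```python
import numpy as np

# Hypothetical study correlations and sample sizes (illustrative only)
r = np.array([0.35, 0.50, 0.40, 0.45])
n = np.array([60, 80, 45, 120])

# Fisher z transform; sampling variance of z is 1 / (n - 3)
z = np.arctanh(r)
v = 1.0 / (n - 3)

# Inverse-variance pooled z, back-transformed to the correlation metric
w = 1.0 / v
z_pooled = np.sum(w * z) / np.sum(w)
print(f"pooled r = {np.tanh(z_pooled):.2f}")
```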

