A Meta-Analysis of the Facial Feedback Literature: Effects of Facial Feedback on Emotional Experience Are Small and Variable

Author(s):  
Nicholas Alvaro Coles ◽  
Jeff T. Larsen ◽  
Heather Lench

The facial feedback hypothesis suggests that an individual’s experience of emotion is influenced by feedback from their facial movements. To evaluate the cumulative evidence for this hypothesis, we conducted a meta-analysis on 286 effect sizes derived from 138 studies that manipulated facial feedback and collected emotion self-reports. Using random effects meta-regression with robust variance estimates, we found that the overall effect of facial feedback was significant, but small. Results also indicated that feedback effects are stronger in some circumstances than others. We examined 12 potential moderators, and three were associated with differences in effect sizes. 1. Type of emotional outcome: Facial feedback influenced emotional experience (e.g., reported amusement) and, to a greater degree, affective judgments of a stimulus (e.g., the objective funniness of a cartoon). Three publication bias detection methods did not reveal evidence of publication bias in studies examining the effects of facial feedback on emotional experience, but all three methods revealed evidence of publication bias in studies examining affective judgments. 2. Presence of emotional stimuli: Facial feedback effects on emotional experience were larger in the absence of emotionally evocative stimuli (e.g., cartoons). 3. Type of stimuli: When participants were presented with emotionally evocative stimuli, facial feedback effects were larger in the presence of some types of stimuli (e.g., emotional sentences) than others (e.g., pictures). The available evidence supports the facial feedback hypothesis’ central claim that facial feedback influences emotional experience, although these effects tend to be small and heterogeneous.
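
The meta-analysis above pools 286 effect sizes using random-effects meta-regression with robust variance estimates. As a rough illustration of the underlying random-effects machinery (not the authors' analysis code), the sketch below pools a handful of made-up standardized mean differences with the DerSimonian-Laird estimator and reports the pooled effect, its confidence interval, and the I² heterogeneity index.

```python
# Illustrative sketch (not the authors' analysis code): DerSimonian-Laird
# random-effects pooling of a handful of made-up standardized mean differences.
import numpy as np
from scipy import stats

yi = np.array([0.35, 0.10, 0.22, -0.05, 0.41])   # hypothetical study effect sizes (d)
vi = np.array([0.04, 0.02, 0.05, 0.03, 0.06])    # hypothetical sampling variances

# Fixed-effect weights and Cochran's Q
wi = 1.0 / vi
y_fixed = np.sum(wi * yi) / np.sum(wi)
Q = np.sum(wi * (yi - y_fixed) ** 2)
df = len(yi) - 1

# DerSimonian-Laird estimate of between-study variance tau^2
C = np.sum(wi) - np.sum(wi ** 2) / np.sum(wi)
tau2 = max(0.0, (Q - df) / C)

# Random-effects weights, pooled estimate, 95% CI, and z-test
wi_re = 1.0 / (vi + tau2)
y_re = np.sum(wi_re * yi) / np.sum(wi_re)
se_re = np.sqrt(1.0 / np.sum(wi_re))
ci_lo, ci_hi = y_re - 1.96 * se_re, y_re + 1.96 * se_re
p = 2 * (1 - stats.norm.cdf(abs(y_re / se_re)))

# I^2: share of total variability attributable to between-study heterogeneity
I2 = max(0.0, (Q - df) / Q) * 100

print(f"pooled d = {y_re:.3f}, 95% CI [{ci_lo:.3f}, {ci_hi:.3f}], p = {p:.3f}, "
      f"tau^2 = {tau2:.3f}, I^2 = {I2:.1f}%")
```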

2020 ◽  
Author(s):  
Nicholas Alvaro Coles ◽  
Lowell Gaertner ◽  
Brooke Frohlich ◽  
Jeff T. Larsen ◽  
Dana Basnight-Brown

The facial feedback hypothesis suggests that an individual’s facial expressions can influence their emotional experience (e.g., that smiling can make one feel happier). However, a recurring concern is that demand characteristics drive this effect. Across three experiments (n = 250, 192, 131), university students in the United States and Kenya posed happy, angry, and neutral expressions and self-reported their emotions following a demand characteristics manipulation. To manipulate demand characteristics we either (a) told participants we hypothesized their poses would influence their emotions, (b) told participants we hypothesized their poses would not influence their emotions, or (c) did not tell participants a hypothesis. Results indicated that demand characteristics moderated the effects of facial poses on self-reported emotion. However, facial poses still influenced self-reported emotion when participants were told we hypothesized their poses would not influence emotion. These results indicate that facial feedback effects are not solely an artifact of demand characteristics.


2019 ◽  
Author(s):  
Amanda Kvarven ◽  
Eirik Strømland ◽  
Magnus Johannesson

Andrews & Kasy (2019) propose an approach for adjusting effect sizes in meta-analysis for publication bias. We use the Andrews-Kasy estimator to adjust the result of 15 meta-analyses and compare the adjusted results to 15 large-scale multiple labs replication studies estimating the same effects. The pre-registered replications provide precisely estimated effect sizes, which do not suffer from publication bias. The Andrews-Kasy approach leads to a moderate reduction of the inflated effect sizes in the meta-analyses. However, the approach still overestimates effect sizes by a factor of about two or more and has an estimated false positive rate of between 57% and 100%.


2018 ◽  
Vol 28 (03) ◽  
pp. 268-274 ◽  
Author(s):  
T. Munder ◽  
C. Flückiger ◽  
F. Leichsenring ◽  
A. A. Abbass ◽  
M. J. Hilsenroth ◽  
...  

Abstract. Aims: The aim of this study was to reanalyse the data from Cuijpers et al.'s (2018) meta-analysis, to examine Eysenck's claim that psychotherapy is not effective. Cuijpers et al., after correcting for bias, concluded that the effect of psychotherapy for depression was small (standardised mean difference, SMD, between 0.20 and 0.30), providing evidence that psychotherapy is not as effective as generally accepted. Methods: The data for this study were the effect sizes included in Cuijpers et al. (2018). We removed outliers from the data set of effects, corrected for publication bias and segregated psychotherapy from other interventions. In our study, we considered wait-list (WL) controls as the most appropriate estimate of the natural history of depression without intervention. Results: The SMD for all interventions and for psychotherapy compared to WL controls was approximately 0.70, a value consistent with past estimates of the effectiveness of psychotherapy. Psychotherapy was also more effective than care-as-usual (SMD = 0.31) and other control groups (SMD = 0.43). Conclusions: The re-analysis reveals that psychotherapy for adult patients diagnosed with depression is effective.


2019 ◽  
Vol 35 (2) ◽  
pp. 350-356 ◽  
Author(s):  
Juan Botella ◽  
Juan I. Durán

Meta-analysis is a firmly established methodology and an integral part of the process of generating knowledge across the empirical sciences. Meta-analysis has also turned its attention to methodology itself and has become a prominent critic of methodological shortcomings. We highlight several problematic features of how research is conducted in psychology: excessive heterogeneity in results and difficulties with replication, publication bias, suboptimal methodological quality, and questionable research practices. These and other problems have led to a “crisis of confidence” in psychology. We discuss how the meta-analytic perspective and its procedures can help overcome the crisis. A more cooperative, rather than competitive, perspective would treat replication as a more valuable contribution. Knowledge cannot be based on isolated studies. Given the nature of psychology's object of study, the natural unit for generating knowledge must be the estimated distribution of effect sizes, not dichotomous decisions about statistical significance in individual studies. Some suggestions are offered on how to redirect researchers' practices so that their personal interests and those of science are better aligned.


2020 ◽  
Vol 25 (1) ◽  
pp. 51-72 ◽  
Author(s):  
Christian Franz Josef Woll ◽  
Felix D. Schönbrodt

Abstract. Recent meta-analyses come to conflicting conclusions about the efficacy of long-term psychoanalytic psychotherapy (LTPP). Our first goal was to reproduce the most recent meta-analysis by Leichsenring, Abbass, Luyten, Hilsenroth, and Rabung (2013), who found evidence for the efficacy of LTPP in the treatment of complex mental disorders. Our replicated effect sizes were in general slightly smaller. Second, we conducted an updated meta-analysis of randomized controlled trials comparing LTPP (lasting for at least 1 year and 40 sessions) to other forms of psychotherapy in the treatment of complex mental disorders. We followed a transparent research process according to open science standards and applied a series of elaborated meta-analytic procedures to test and control for publication bias. Our updated meta-analysis, comprising 191 effect sizes from 14 eligible studies, revealed small, statistically significant effect sizes at post-treatment for the outcome domains psychiatric symptoms, target problems, social functioning, and overall effectiveness (Hedges’ g ranging between 0.24 and 0.35). The effect size for the domain personality functioning (0.24) was not significant (p = .08). No signs of publication bias were detected. In light of a heterogeneous study set and some methodological shortcomings in the primary studies, these results should be interpreted cautiously. In conclusion, LTPP might be superior to other forms of psychotherapy in the treatment of complex mental disorders. Notably, our effect sizes represent the additional gain of LTPP over other forms of primarily long-term psychotherapy, so large differences in effect sizes are not to be expected.
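
The review above reports Hedges' g values between 0.24 and 0.35. For readers unfamiliar with the metric, the following sketch (with invented group means, SDs, and sample sizes) shows one common way to compute Hedges' g and its sampling variance; it is an illustration, not the authors' code.

```python
# Illustrative sketch: Hedges' g and its sampling variance for one hypothetical
# two-group comparison (all values are made up).
import numpy as np

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Bias-corrected standardized mean difference (Hedges' g) and its variance."""
    s_pooled = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled                      # Cohen's d
    J = 1 - 3 / (4 * (n1 + n2 - 2) - 1)           # small-sample correction factor
    g = J * d
    var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    return g, J**2 * var_d

g, vg = hedges_g(m1=22.1, sd1=8.0, n1=45, m2=19.3, sd2=7.5, n2=47)
print(f"g = {g:.2f}, SE = {np.sqrt(vg):.2f}")
```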


Author(s):  
Andy P. Field

This chapter discusses meta-analysis, effect sizes (what they are and why they are useful), principles of meta-analysis, types of meta-analysis, methods for performing a meta-analysis (Hedges’ method, Hunter and Schmidt method), and problems that can occur in meta-analysis (publication bias, artefacts, misapplications of meta-analysis, methodological error).
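
As a minimal illustration of the Hunter and Schmidt approach mentioned in the chapter, the sketch below runs a "bare-bones" meta-analysis of made-up correlations: sample-size weighting plus a correction of the observed variance for sampling error only, with no further artifact corrections.

```python
# Illustrative sketch of a Hunter-Schmidt "bare-bones" meta-analysis of
# correlations. Correlations and sample sizes are made up for demonstration.
import numpy as np

r = np.array([0.30, 0.18, 0.25, 0.40, 0.22])   # hypothetical study correlations
n = np.array([120, 85, 200, 60, 150])          # hypothetical sample sizes

r_bar = np.sum(n * r) / np.sum(n)                    # sample-size-weighted mean r
var_obs = np.sum(n * (r - r_bar) ** 2) / np.sum(n)   # observed variance of r
var_err = (1 - r_bar ** 2) ** 2 / (np.mean(n) - 1)   # expected sampling-error variance
var_rho = max(0.0, var_obs - var_err)                # estimated variance of population correlations

print(f"mean r = {r_bar:.3f}, SD(rho) = {np.sqrt(var_rho):.3f}")
```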


2021 ◽  
Vol 12 ◽  
Author(s):  
Hanna Suh ◽  
Jisun Jeong

Objectives: Self-compassion functions as a psychological buffer in the face of negative life experiences. Considering that suicidal thoughts and behaviors (STBs) and non-suicidal self-injury (NSSI) are often accompanied by intense negative feelings about the self (e.g., self-loathing, self-isolation), self-compassion may have the potential to alleviate these negative attitudes and feelings toward oneself. This meta-analysis investigated the associations of self-compassion with STBs and NSSI. Methods: A literature search finalized in August 2020 identified 18 eligible studies (13 STB effect sizes and 7 NSSI effect sizes), including 8,058 participants. Two studies were longitudinal; the remainder were cross-sectional. A random-effects meta-analysis was conducted using CMA 3.0. Subgroup analyses, meta-regression, and publication bias analyses were conducted to probe potential sources of heterogeneity. Results: For STBs, a moderate effect size was found for self-compassion (r = −0.34, k = 13). The positively worded subscales exhibited statistically significant effect sizes: self-kindness (r = −0.21, k = 4), common humanity (r = −0.20, k = 4), and mindfulness (r = −0.15, k = 4). For NSSI, a small effect size was found for self-compassion (r = −0.29, k = 7). Heterogeneity was large (I2 = 80.92% for STBs, I2 = 86.25% for NSSI), and publication bias was minimal. Subgroup analyses showed that sample characteristics moderated the effect, with a larger effect size observed in clinical patients than in sexually/racially marginalized individuals, college students, and healthy-functioning community adolescents. Conclusions: Self-compassion was negatively associated with STBs and NSSI, and the effect size was larger for STBs than for NSSI. More evidence from future longitudinal and intervention studies is needed to gauge the clinically significant protective role that self-compassion may play.
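
The review above pooled correlations with CMA 3.0. As a rough, tool-agnostic illustration (not the authors' analysis), the sketch below pools made-up correlations on the Fisher-z scale and back-transforms the result; a random-effects version would add a between-study variance term, as in the earlier DerSimonian-Laird sketch.

```python
# Illustrative sketch: pooling correlations on the Fisher-z scale.
# Correlations and sample sizes are made up for demonstration.
import numpy as np

r = np.array([-0.40, -0.28, -0.35, -0.30])   # hypothetical study correlations
n = np.array([310, 150, 220, 95])            # hypothetical sample sizes

z = np.arctanh(r)          # Fisher r-to-z transform
v = 1.0 / (n - 3)          # approximate sampling variance of z
w = 1.0 / v

z_pooled = np.sum(w * z) / np.sum(w)         # fixed-effect pool on the z scale
se = np.sqrt(1.0 / np.sum(w))
lo, hi = z_pooled - 1.96 * se, z_pooled + 1.96 * se

# Back-transform the point estimate and CI to the correlation metric
print(f"pooled r = {np.tanh(z_pooled):.3f}, 95% CI [{np.tanh(lo):.3f}, {np.tanh(hi):.3f}]")
```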


Author(s):  
Katrin Auspurg ◽  
Thomas Hinz

Summary: Significance tests were originally developed to enable more objective evaluations of research results. Yet the strong orientation towards statistical significance encourages biased results, a phenomenon termed “publication bias”. Publication bias occurs whenever the likelihood or time lag of publication, the prominence, language, or impact factor of the publishing journal, the amount of journal space, or the citation rate of studies depends on the direction and significance of research findings. Although there is much evidence for the existence of publication bias across scientific disciplines, and although its detrimental consequences for the progress of science have long been known, all attempts to eliminate the bias have failed. The present article reviews the history and logic of significance testing, the state of research on publication bias, and existing practical recommendations. After demonstrating that more systematic research on the risk factors of publication bias is needed, the paper suggests two new directions for publication bias research. First, a more comprehensive theoretical model is sketched out, based on theories of rational choice and economics as well as the sociology of science. Publication bias is recognized as the outcome of a social dilemma that cannot be overcome by moral pleas alone. Second, detection methods for publication bias that go beyond meta-analysis, and that are more suitable for testing causal hypotheses, are discussed. In particular, the “caliper test” seems well suited for theoretically motivated comparisons across heterogeneous research fields such as sociology. Its potential is demonstrated by testing hypotheses on (a) the relevance of explicitly vs. implicitly stated research propositions and (b) the relevance of the number of authors for the incidence of publication bias in 50 papers published in leading German sociology journals.
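
As a minimal illustration of the caliper test described above, the sketch below compares how many made-up z-statistics fall just above versus just below the 5% critical value; an excess of just-significant results is taken as a sign of publication bias. The caliper width and the data are arbitrary choices for demonstration, not values from the article.

```python
# Illustrative sketch of a caliper test: under no publication bias, test statistics
# should fall just above and just below the critical value about equally often.
import numpy as np
from scipy.stats import binomtest

z_values = np.array([2.01, 1.97, 2.10, 1.91, 2.03, 1.99, 2.05, 1.88, 2.02, 1.98])
z_crit, caliper = 1.96, 0.15                       # 5% critical value, caliper width

in_caliper = (z_values > z_crit - caliper) & (z_values < z_crit + caliper)
over = int(np.sum(z_values[in_caliper] > z_crit))  # just-significant results
total = int(np.sum(in_caliper))

# An excess of just-significant results suggests publication bias
result = binomtest(over, total, p=0.5, alternative="greater")
print(f"{over}/{total} just over the threshold, p = {result.pvalue:.3f}")
```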


2020 ◽  
Vol 2020 ◽  
pp. 1-24
Author(s):  
Mesfin Wudu Kassaw ◽  
Aschalew Afework Bitew ◽  
Alemayehu Digssie Gebremariam ◽  
Netsanet Fentahun ◽  
Murat Açık ◽  
...  

Background. Malnutrition is a major public health problem worldwide, particularly in developing countries including Ethiopia. In 2016, out of 667 million children under five years of age worldwide, 159 million were stunted. In Ethiopia, the prevalence of stunting has decreased substantially, from 58% in 2000 to 44% in 2011 and 38% in 2016. However, the prevalence of stunting remains high and is still a public health problem for the country. The aim of this systematic review and meta-analysis was to assess the prevalence of stunting and its association with wealth index among children under five years of age in Ethiopia. Methodology. The databases searched were MEDLINE, Scopus, HINARI, and grey literature sources. Study quality was assessed independently by two reviewers using the Joanna Briggs Institute (JBI) critical appraisal checklist, and any disagreement was resolved by other reviewers. The JBI checklist was used to assess the risk of bias and the methods of measurement for both outcome and independent variables; in particular, the study design, study participants, definition of stunting, statistical methods used to identify associations, data presentation, and odds ratios (ORs) with confidence intervals (CIs) were assessed. In the statistical analysis, the funnel plot, Egger’s test, and Begg’s test were used to assess publication bias, and the I2 statistic, forest plot, and Cochran’s Q-test were used to assess heterogeneity. Results. In this review, 35 studies were included to estimate the pooled prevalence of stunting, and 16 studies were used to estimate the effect of wealth index on stunting. The pooled prevalence of stunting among children under five years of age was 41.5%, with considerable heterogeneity (I2 = 97.6%, p < 0.001, Q = 1461.93) but no detectable publication bias (Egger’s test p = 0.26; Begg’s test p = 0.87). Children from households with a medium or low/poor wealth index had higher odds of stunting than children from households with a high/rich wealth index (AOR: 1.33, 95% CI 1.07, 1.65 and AOR: 1.92, 95% CI 1.46, 2.54, respectively). Both estimated effects showed substantial heterogeneity (I2 = 63.8%, p < 0.001, Q = 44.21 and I2 = 78.3%, p < 0.001, Q = 73.73, respectively), with no evidence of publication bias (small-study effects; Egger’s and Begg’s tests, p > 0.05). Conclusions. The pooled prevalence of stunting was high. In the subgroup analysis, the Amhara region had the highest prevalence of stunting, followed by the Oromia and Tigray regions. Low economic status was associated with stunting in Ethiopia, and this association was more pronounced in the Oromia and Amhara regions. The government should emphasize community-based nutrition programs, scaling them up in these regions in line with the Seqota Declaration.
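
The review above uses funnel plots together with Egger’s and Begg’s tests to check for publication bias. As a rough illustration of the Egger regression idea (not the authors' analysis), the sketch below regresses made-up standardized effects on their precision and tests whether the intercept departs from zero.

```python
# Illustrative sketch of Egger's regression test for funnel-plot asymmetry:
# regress the standardized effect (effect / SE) on precision (1 / SE) and test
# whether the intercept differs from zero. Data below are made up.
import numpy as np
import statsmodels.api as sm

yi = np.array([0.42, 0.55, 0.31, 0.60, 0.25, 0.48])   # hypothetical log odds ratios
se = np.array([0.10, 0.22, 0.12, 0.30, 0.09, 0.18])   # hypothetical standard errors

snd = yi / se                  # standard normal deviate of each study
precision = 1.0 / se

X = sm.add_constant(precision)            # the intercept term carries the bias test
fit = sm.OLS(snd, X).fit()

intercept, p_value = fit.params[0], fit.pvalues[0]
print(f"Egger intercept = {intercept:.2f}, p = {p_value:.3f}")
```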


2015 ◽  
Vol 206 (1) ◽  
pp. 7-16 ◽  
Author(s):  
Ioana A. Cristea ◽  
Robin N. Kok ◽  
Pim Cuijpers

Background: Cognitive bias modification (CBM) interventions are strongly advocated in research and clinical practice. Aims: To examine the efficacy of CBM for clinically relevant outcomes, along with study quality, publication bias and potential moderators. Method: We included randomised controlled trials (RCTs) of CBM interventions that reported clinically relevant outcomes assessed with standardised instruments. Results: We identified 49 trials and grouped outcomes into anxiety and depression. Effect sizes were small across all samples, and mostly non-significant for patient samples. Effect sizes became non-significant when outliers were excluded and after adjustment for publication bias. The quality of the RCTs was suboptimal. Conclusions: CBM may have small effects on mental health problems, but it is also quite possible that there are no significant clinically relevant effects. Research in this field is hampered by small, low-quality trials and by risk of publication bias. Many positive outcomes are driven by extreme outliers.
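
The conclusion that many positive outcomes are driven by extreme outliers points to a simple sensitivity check: re-pooling after excluding extreme effect sizes. The sketch below illustrates that idea with made-up data and an arbitrary |g| > 2 cut-off; it is not the trial-level analysis used in the review.

```python
# Illustrative sketch of an outlier sensitivity check: compare the inverse-variance
# pooled estimate with and without extreme effect sizes. All values are made up.
import numpy as np

g = np.array([0.15, 0.22, 0.05, 2.80, 0.10, 0.18, 3.10, 0.08])   # hypothetical effects
v = np.array([0.03, 0.04, 0.02, 0.20, 0.03, 0.05, 0.25, 0.02])   # hypothetical variances

def pooled(effects, variances):
    """Inverse-variance (fixed-effect) pooled estimate."""
    w = 1.0 / variances
    return np.sum(w * effects) / np.sum(w)

keep = np.abs(g) <= 2.0   # arbitrary cut-off for 'extreme' effects
print(f"all studies: {pooled(g, v):.2f}; outliers excluded: {pooled(g[keep], v[keep]):.2f}")
```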

