Action Effects on Visual Perception of Distances: A Multilevel Bayesian Meta-Analysis

2019 ◽  
Author(s):  
Lisa Molto ◽  
Ladislas Nalborczyk ◽  
Richard Palluel-Germain ◽  
Nicolas Morgado

Some studies have suggested that action constraints influence visual perception of distances. For instance, the greater the effort to cover a distance, the longer people perceive this distance to be. The present multilevel Bayesian meta-analysis supports the existence of a small action-constraint effect on distance estimation, Hedges' g = 0.29, 95% CrI [0.16, 0.47] (Nstudies = 37, Nparticipants = 1,035). This effect varied slightly according to the action-constraint category (i.e., effort, weight, and tool use) but not according to participants' motor intention. Some authors have argued that such effects reflect experimental demand biases rather than genuine perceptual effects. Our meta-analysis did not allow us to dismiss this possibility, but it did not support it either. We provide field-specific conventions for interpreting action-constraint effect sizes and the minimum sample sizes required to detect them with various levels of power. We encourage researchers to help us update this meta-analysis by sending their published or unpublished data to our online repository (https://osf.io/bc3wn/).

2020 ◽  
Vol 31 (5) ◽  
pp. 488-504
Author(s):  
Lisa Molto ◽  
Ladislas Nalborczyk ◽  
Richard Palluel-Germain ◽  
Nicolas Morgado

Previous studies have suggested that action constraints influence visual perception of distances. For instance, the greater the effort to cover a distance, the longer people perceive this distance to be. The present multilevel Bayesian meta-analysis (37 studies with 1,035 total participants) supported the existence of a small action-constraint effect on distance estimation, Hedges’s g = 0.29, 95% credible interval = [0.16, 0.47]. This effect varied slightly according to the action-constraint category (effort, weight, tool use) but not according to participants’ motor intention. Some authors have argued that such effects reflect experimental demand biases rather than genuine perceptual effects. Our meta-analysis did not allow us to dismiss this possibility, but it also did not support it. We provide field-specific conventions for interpreting action-constraint effect sizes and the minimum sample sizes required to detect them with various levels of power. We encourage researchers to help us update this meta-analysis by directly uploading their published or unpublished data to our online repository (https://osf.io/bc3wn/).
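
For readers who want to see the shape of such an analysis, below is a minimal sketch of a two-level Bayesian random-effects meta-analysis of Hedges' g values in Python with PyMC. It is not the authors' model or data (their materials are in the OSF repository above): the effect sizes and standard errors are placeholders, and the published model additionally nests multiple effect sizes within studies.

```python
# Minimal sketch of a Bayesian random-effects meta-analysis of Hedges' g.
# NOT the authors' model: data are placeholders, and the paper's multilevel
# model additionally includes a study-level random effect.
import numpy as np
import pymc as pm
import arviz as az

g = np.array([0.41, 0.12, 0.55, 0.20, 0.33])   # per-effect Hedges' g (illustrative)
se = np.array([0.18, 0.22, 0.25, 0.15, 0.20])  # corresponding standard errors (illustrative)

with pm.Model() as meta_model:
    mu = pm.Normal("mu", mu=0.0, sigma=1.0)                       # overall effect
    tau = pm.HalfNormal("tau", sigma=0.5)                         # between-effect heterogeneity
    theta = pm.Normal("theta", mu=mu, sigma=tau, shape=len(g))    # true effect per entry
    pm.Normal("y", mu=theta, sigma=se, observed=g)                # observed effects, known SEs

    idata = pm.sample(2000, tune=1000, target_accept=0.9, random_seed=1)

# Posterior mean and 95% credible interval for the pooled effect and heterogeneity
print(az.summary(idata, var_names=["mu", "tau"], hdi_prob=0.95))
```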


2020 ◽  
Author(s):  
Michael W. Beets ◽  
R. Glenn Weaver ◽  
John P.A. Ioannidis ◽  
Alexis Jones ◽  
Lauren von Klinggraeff ◽  
...  

Background: Pilot/feasibility studies and studies with small sample sizes may be associated with inflated effects. This study explores the vibration of effect sizes (VoE) in meta-analyses when different inclusion criteria, based upon sample size or pilot/feasibility status, are considered. Methods: Searches were conducted for meta-analyses of behavioral interventions on topics related to the prevention/treatment of childhood obesity from 01-2016 to 10-2019. The computed summary effect sizes (ES) were extracted from each meta-analysis. Individual studies included in the meta-analyses were classified into one of four categories: self-identified pilot/feasibility studies, or, based upon sample size, N≤100, N>100, and N>370 (the upper 75th percentile of sample size). The VoE was defined as the absolute difference (ABS) between the summary ES re-estimated from each study classification and the originally reported summary ES. Concordance (kappa) of statistical significance between summary ES was assessed. Fixed- and random-effects models and meta-regressions were estimated. Three case studies are presented to illustrate the impact of including pilot/feasibility and N≤100 studies on the estimated summary ES. Results: A total of 1,602 effect sizes, representing 145 reported summary ES, were extracted from 48 meta-analyses containing 603 unique studies (an average of 22 per meta-analysis, range 2-108) and 227,217 participants. Pilot/feasibility and N≤100 studies comprised 22% (range 0-58%) and 21% (range 0-83%) of the studies, respectively. Meta-regression indicated that the ABS between the re-estimated and original summary ES was 0.29 where ≥40% of the studies contributing to a summary ES had N≤100. The ABS was 0.46 when >80% of the contributing studies were either pilot/feasibility or N≤100 studies. Where ≤40% of the studies comprising a summary ES had N>370, the ABS ranged from 0.20 to 0.30. Concordance was low when removing both pilot/feasibility and N≤100 studies (kappa=0.53) and when restricting analyses to only the largest studies (N>370, kappa=0.35), with 20% and 26% of the originally reported statistically significant ES rendered non-significant, respectively. Reanalysis of the three case-study meta-analyses rendered the re-estimated ES either non-significant or reduced to half of the originally reported ES. Conclusions: When meta-analyses of behavioral interventions include a substantial proportion of pilot/feasibility and/or N≤100 studies, summary ES can be affected markedly and should be interpreted with caution.
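
As a rough illustration of the vibration-of-effects comparison described above, the sketch below re-estimates an inverse-variance weighted summary effect on a restricted study subset and reports the absolute difference from the original summary. All function names and numbers are illustrative, not taken from the study.

```python
# Minimal sketch of the vibration of effects (VoE): absolute difference between
# a summary ES re-estimated on a study subset and the original summary ES.
# Illustrative data only; not the study's code.
import numpy as np

def summary_es(es, se):
    """Inverse-variance weighted (fixed-effect) summary effect size."""
    w = 1.0 / se**2
    return np.sum(w * es) / np.sum(w)

def vibration_of_effect(es, se, keep_mask):
    """Absolute difference between the restricted and the original summary ES."""
    original = summary_es(es, se)
    restricted = summary_es(es[keep_mask], se[keep_mask])
    return abs(restricted - original)

# Illustrative per-study effect sizes, standard errors, and sample sizes
es = np.array([0.62, 0.15, 0.48, 0.10, 0.55])
se = np.array([0.30, 0.10, 0.28, 0.08, 0.32])
n = np.array([45, 380, 60, 510, 52])

print(vibration_of_effect(es, se, keep_mask=(n > 100)))  # re-estimate without N<=100 studies
```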


2020 ◽  
Vol 11 ◽  
Author(s):  
Jonathan Z. Bakdash ◽  
Laura R. Marusich ◽  
Jared B. Kenworthy ◽  
Elyssa Twedt ◽  
Erin G. Zaroukian

Whether in meta-analysis or single experiments, selecting results based on statistical significance leads to overestimated effect sizes, impeding falsification. We critique a quantitative synthesis that used significance to score and select previously published effects for situation awareness–performance associations (Endsley, 2019). How much does selection using statistical significance quantitatively impact results in a meta-analytic context? We evaluate and compare results using significance-filtered effects versus analyses with all effects as reported. Endsley reported high predictiveness scores and large positive mean correlations but used atypical methods: the hypothesis was used to select papers and effects. Papers were assigned the maximum predictiveness score if they contained at least one significant effect, yet most papers reported multiple effects, and the number of non-significant effects did not impact the score. Thus, the predictiveness score was rarely less than the maximum. In addition, only significant effects were included in Endsley’s quantitative synthesis. Filtering excluded half of all reported effects, with guaranteed minimum effect sizes based on sample size. Results for filtered compared to as-reported effects clearly diverged. Compared to the mean of as-reported effects, the filtered mean was overestimated by 56%. Furthermore, 92% (222 out of 241) of the as-reported effects were below the mean of filtered effects. We conclude that outcome-dependent selection of effects is circular, predetermining results and running contrary to the purpose of meta-analysis. Instead of using significance to score and filter effects, meta-analyses should follow established research practices.
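
The "guaranteed minimum effect sizes based on sample size" point can be made concrete with a short sketch: for a Pearson correlation tested at a two-tailed alpha of .05, the smallest r that can reach significance depends only on the sample size. The code below is an illustration of that arithmetic, not the authors' analysis, and the sample sizes are placeholders.

```python
# Minimal sketch: the smallest |r| that can be statistically significant at a
# two-tailed alpha, as a function of sample size. Averaging only significant
# effects therefore cannot yield values below this floor.
import numpy as np
from scipy import stats

def min_significant_r(n, alpha=0.05):
    """Smallest |r| reaching two-tailed significance for a given sample size."""
    df = n - 2
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return t_crit / np.sqrt(t_crit**2 + df)

for n in (20, 50, 100, 400):   # illustrative sample sizes
    print(n, round(min_significant_r(n), 3))
```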


2019 ◽  
Author(s):  
Shinichi Nakagawa ◽  
Malgorzata Lagisz ◽  
Rose E O'Dea ◽  
Joanna Rutkowska ◽  
Yefeng Yang ◽  
...  

‘Classic’ forest plots show the effect sizes from individual studies and the aggregate effect from a meta-analysis. However, in ecology and evolution, meta-analyses routinely contain over 100 effect sizes, making the classic forest plot of limited use. We surveyed 102 meta-analyses in ecology and evolution, finding that only 11% use the classic forest plot. Instead, most used a ‘forest-like plot’, showing point estimates (with 95% confidence intervals; CIs) from a series of subgroups or categories in a meta-regression. We propose a modification of the forest-like plot, which we name the ‘orchard plot’. Orchard plots, in addition to showing overall mean effects and CIs from meta-analyses/regressions, also include 95% prediction intervals (PIs) and the individual effect sizes scaled by their precision. The PI allows the user and reader to see the range in which an effect size from a future study may be expected to fall. The PI therefore provides an intuitive interpretation of any heterogeneity in the data. Supplementing the PI, the inclusion of the underlying effect sizes also allows the user to see any influential or outlying effect sizes. We showcase the orchard plot with example datasets from ecology and evolution, using the R package orchard, which includes several functions for visualizing meta-analytic data with forest-plot derivatives. We consider the orchard plot a variant of the classic forest plot, cultivated to the needs of meta-analysts in ecology and evolution. Hopefully, the orchard plot will prove fruitful for visualizing large collections of heterogeneous effect sizes regardless of the field of study.
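
For context, the prediction interval shown in an orchard plot can be computed from a standard random-effects summary as mu ± t(k−2) × sqrt(tau² + SE(mu)²). The sketch below illustrates that formula with placeholder numbers; it does not use the orchard package itself.

```python
# Minimal sketch of a 95% prediction interval for a new effect size under a
# random-effects model (Higgins-style formula). Numbers are illustrative.
import numpy as np
from scipy import stats

def prediction_interval(mu_hat, se_mu, tau2, k, level=0.95):
    """Prediction interval around the pooled effect, widened by heterogeneity tau^2."""
    t_crit = stats.t.ppf(1 - (1 - level) / 2, k - 2)
    half_width = t_crit * np.sqrt(tau2 + se_mu**2)
    return mu_hat - half_width, mu_hat + half_width

# Illustrative summary: pooled effect 0.30, SE 0.08, tau^2 = 0.04, k = 40 studies
print(prediction_interval(mu_hat=0.30, se_mu=0.08, tau2=0.04, k=40))
```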


2019 ◽  
Author(s):  
Amanda Kvarven ◽  
Eirik Strømland ◽  
Magnus Johannesson

Andrews & Kasy (2019) propose an approach for adjusting effect sizes in meta-analysis for publication bias. We use the Andrews-Kasy estimator to adjust the result of 15 meta-analyses and compare the adjusted results to 15 large-scale multiple labs replication studies estimating the same effects. The pre-registered replications provide precisely estimated effect sizes, which do not suffer from publication bias. The Andrews-Kasy approach leads to a moderate reduction of the inflated effect sizes in the meta-analyses. However, the approach still overestimates effect sizes by a factor of about two or more and has an estimated false positive rate of between 57% and 100%.


2019 ◽  
Author(s):  
Bettina Moltrecht ◽  
Jessica Deighton ◽  
Praveetha Patalay ◽  
Julian Childs

Background: Research investigating the role of emotion regulation (ER) in the development and treatment of psychopathology has increased in recent years. Evidence suggests that an increased focus on ER in treatment can improve existing interventions. Most ER research has neglected young people; therefore, the present meta-analysis summarizes the evidence for existing psychosocial interventions and their effectiveness in improving ER in youth. Methods: A systematic review and meta-analysis was conducted according to the PRISMA guidelines. Twenty-one randomized controlled trials (RCTs) assessed changes in ER following a psychological intervention in youth exhibiting various psychopathological symptoms. Results: We found moderate effect sizes for current interventions in decreasing emotion dysregulation in youth (g = -0.46) and small effect sizes in improving emotion regulation (g = 0.36). Differences between studies in intervention components, ER measures, and populations studied resulted in large heterogeneity. Conclusion: This is the first meta-analysis summarizing the effectiveness of existing interventions for improving ER in youth. The results suggest that interventions can enhance ER in youth and that these improvements correlate with improvements in psychopathology. More RCTs with larger sample sizes, different age groups, and different psychopathologies are needed to increase our understanding of what works for whom and when.
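
For reference, the effect sizes reported above are Hedges' g, the standardized mean difference with a small-sample correction. The sketch below shows the standard computation from group summary statistics; the numbers are illustrative placeholders, not data from the included trials.

```python
# Minimal sketch: Hedges' g from group means, SDs, and sample sizes.
# Illustrative placeholder numbers only.
import numpy as np

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference (Cohen's d) with Hedges' small-sample correction."""
    sd_pooled = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sd_pooled
    correction = 1 - 3 / (4 * (n1 + n2) - 9)   # Hedges' J, with df = n1 + n2 - 2
    return d * correction

# e.g., intervention vs. control group on an emotion-dysregulation scale
print(hedges_g(m1=18.2, sd1=5.1, n1=40, m2=21.0, sd2=5.6, n2=42))
```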


2017 ◽  
Author(s):  
Nicholas Alvaro Coles ◽  
Jeff T. Larsen ◽  
Heather Lench

The facial feedback hypothesis suggests that an individual’s experience of emotion is influenced by feedback from their facial movements. To evaluate the cumulative evidence for this hypothesis, we conducted a meta-analysis on 286 effect sizes derived from 138 studies that manipulated facial feedback and collected emotion self-reports. Using random effects meta-regression with robust variance estimates, we found that the overall effect of facial feedback was significant, but small. Results also indicated that feedback effects are stronger in some circumstances than others. We examined 12 potential moderators, and three were associated with differences in effect sizes. 1. Type of emotional outcome: Facial feedback influenced emotional experience (e.g., reported amusement) and, to a greater degree, affective judgments of a stimulus (e.g., the objective funniness of a cartoon). Three publication bias detection methods did not reveal evidence of publication bias in studies examining the effects of facial feedback on emotional experience, but all three methods revealed evidence of publication bias in studies examining affective judgments. 2. Presence of emotional stimuli: Facial feedback effects on emotional experience were larger in the absence of emotionally evocative stimuli (e.g., cartoons). 3. Type of stimuli: When participants were presented with emotionally evocative stimuli, facial feedback effects were larger in the presence of some types of stimuli (e.g., emotional sentences) than others (e.g., pictures). The available evidence supports the facial feedback hypothesis’ central claim that facial feedback influences emotional experience, although these effects tend to be small and heterogeneous.
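
As a simplified illustration of moderator analysis in this setting, the sketch below fits an inverse-variance weighted meta-regression with one categorical moderator. It is a fixed-effect simplification of the random-effects meta-regression with robust variance estimation described above, and all data and moderator coding are illustrative placeholders.

```python
# Minimal sketch: meta-regression with a categorical moderator via
# inverse-variance weighted least squares. Illustrative placeholder data.
import numpy as np
import statsmodels.api as sm

es = np.array([0.35, 0.10, 0.42, 0.05, 0.28, 0.15])   # per-study effect sizes
se = np.array([0.12, 0.10, 0.15, 0.09, 0.14, 0.11])   # standard errors
judgment = np.array([1, 0, 1, 0, 1, 0])               # 1 = affective judgment, 0 = experience

X = sm.add_constant(judgment)                     # intercept + moderator
fit = sm.WLS(es, X, weights=1.0 / se**2).fit()    # weight each study by 1/SE^2
print(fit.params)    # intercept = mean effect for experience outcomes; slope = moderator difference
print(fit.pvalues)
```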

