Misunderstandings of Effect Sizes in Message Effects Research

2017 ◽  
Vol 11 (3) ◽  
pp. 210-219 ◽  
Author(s):  
Daniel J. O’Keefe
2020 ◽  
Vol 29 (3) ◽  
pp. 1574-1595
Author(s):  
Chaleece W. Sandberg ◽  
Teresa Gray

Purpose: We report on a study that replicates previous treatment studies using Abstract Semantic Associative Network Training (AbSANT), which was developed to help persons with aphasia improve their ability to retrieve abstract words, as well as thematically related concrete words. We hypothesized that previous results would be replicated; that is, when abstract words are trained using this protocol, improvement would be observed for both abstract and concrete words in the same context-category, but when concrete words are trained, no improvement for abstract words would be observed. We then frame the results of this study with the results of previous studies that used AbSANT to provide better evidence for the utility of this therapeutic technique. We also discuss proposed mechanisms of AbSANT.

Method: Four persons with aphasia completed one phase of concrete word training and one phase of abstract word training using the AbSANT protocol. Effect sizes were calculated for each word type for each phase. Effect sizes for this study are compared with the effect sizes from previous studies.

Results: As predicted, training abstract words resulted in both direct training and generalization effects, whereas training concrete words resulted in only direct training effects. The reported results are consistent across studies. Furthermore, when the data are compared across studies, there is a distinct pattern of the added benefit of training abstract words using AbSANT.

Conclusion: Treatment for word retrieval in aphasia is most often aimed at concrete words, despite the usefulness and pervasiveness of abstract words in everyday conversation. We show the utility of AbSANT as a means of improving not only abstract word retrieval but also concrete word retrieval and hope this evidence will help foster its application in clinical practice.
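The abstract does not specify the effect size formula, but single-case aphasia treatment studies in this line of work commonly use Busk and Serlin's d: the gain from baseline to post-treatment probes, scaled by the baseline standard deviation. A minimal sketch under that assumption, with hypothetical probe scores:

```python
import numpy as np

def busk_serlin_d(baseline, post):
    """Single-case effect size often used in aphasia treatment research:
    mean post-treatment probe score minus mean baseline probe score,
    divided by the baseline standard deviation (Busk & Serlin, 1992)."""
    baseline, post = np.asarray(baseline, float), np.asarray(post, float)
    return (post.mean() - baseline.mean()) / baseline.std(ddof=1)

# Hypothetical naming probes (items correct) for one participant's
# trained abstract words, before and after an AbSANT training phase.
print(f"d = {busk_serlin_d(baseline=[3, 4, 3], post=[9, 10, 11]):.1f}")
```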


Methodology ◽  
2019 ◽  
Vol 15 (3) ◽  
pp. 97-105
Author(s):  
Rodrigo Ferrer ◽  
Antonio Pardo

Abstract. In a recent paper, Ferrer and Pardo (2014) tested several distribution-based methods designed to assess when test scores obtained before and after an intervention reflect a statistically reliable change. However, we still do not know how these methods perform in terms of false negatives. To address this question, we simulated change scenarios (different effect sizes in a pre-post-test design) with distributions of different shapes and with different sample sizes. For each simulated scenario, we generated 1,000 samples. In each sample, we recorded the false-negative rate of the five distribution-based methods with the best false-positive performance. Our results revealed unacceptable false-negative rates even for very large effects, ranging from 31.8% in the most optimistic scenario (effect size of 2.0, normal distribution) to 99.9% in the worst scenario (effect size of 0.2, highly skewed distribution). Our results therefore suggest that the widely used distribution-based methods must be applied with caution in a clinical context, because they require huge effect sizes to detect a true change. We also offer some considerations regarding effect size and the commonly used cut-off points, which allow for more precise estimates.
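The classic distribution-based method in this literature is the Jacobson-Truax Reliable Change Index (RCI), which flags a pre-post difference as reliable when it exceeds 1.96 standard errors of the difference score. A minimal sketch (illustrative values, not the authors' simulation code) that also shows how easily a sizeable change can fall short of the criterion, i.e., the kind of false negative the study quantifies:

```python
import math

def reliable_change_index(pre, post, sd_pre, reliability):
    """Jacobson-Truax RCI: the pre-post difference divided by the
    standard error of the difference score."""
    sem = sd_pre * math.sqrt(1 - reliability)   # standard error of measurement
    se_diff = math.sqrt(2) * sem                # SE of a difference score
    return (post - pre) / se_diff

# A change counts as statistically reliable when |RCI| > 1.96. Here a
# 12-point gain (1.2 pre-test SDs) still misses the criterion.
rci = reliable_change_index(pre=40, post=52, sd_pre=10, reliability=0.80)
print(f"RCI = {rci:.2f}, reliable change: {abs(rci) > 1.96}")
```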


2019 ◽  
Vol 50 (5-6) ◽  
pp. 292-304 ◽  
Author(s):  
Mario Wenzel ◽  
Marina Lind ◽  
Zarah Rowland ◽  
Daniela Zahn ◽  
Thomas Kubiak

Abstract. Evidence on the existence of the ego depletion phenomenon, as well as on the size of the effect and its potential moderators and mediators, is ambiguous. Building on a crossover design that enables superior statistical power within a single study, we investigated the robustness of the ego depletion effect between and within subjects, along with moderating and mediating influences of the ego depletion manipulation checks. Our results, based on a sample of 187 participants, demonstrated that (a) the between- and within-subject ego depletion effects had only negligible effect sizes and that there was (b) large interindividual variability that (c) could not be explained by differences in the ego depletion manipulation checks. We discuss the implications of these results and outline a future research agenda.
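In a crossover design of this kind, the average within-subject effect and its interindividual variability can be estimated jointly with a mixed model in which each participant gets their own condition slope. A minimal sketch on simulated data (the trial layout and parameter values are hypothetical, chosen to mimic the reported pattern of a negligible mean effect with large interindividual variability):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj, n_trials = 187, 10   # 187 matches the reported sample size

# Hypothetical crossover data: every participant completes n_trials in a
# control condition (0) and n_trials in a depletion condition (1).
subject = np.repeat(np.arange(n_subj), 2 * n_trials)
depletion = np.tile(np.repeat([0, 1], n_trials), n_subj)
intercepts = rng.normal(0.0, 1.0, n_subj)   # stable person differences
slopes = rng.normal(-0.05, 0.8, n_subj)     # person-specific depletion effect
y = intercepts[subject] + slopes[subject] * depletion \
    + rng.normal(0, 1, len(subject))

df = pd.DataFrame({"y": y, "subject": subject, "depletion": depletion})
# Random intercept and random depletion slope per subject: the fixed effect
# is the average depletion effect, and the slope variance quantifies the
# interindividual variability around it.
fit = smf.mixedlm("y ~ depletion", df, groups="subject",
                  re_formula="~depletion").fit()
print(fit.summary())
```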


2019 ◽  
Author(s):  
Joel L Pick ◽  
Nyil Khwaja ◽  
Michael A. Spence ◽  
Malika Ihle ◽  
Shinichi Nakagawa

We often quantify a behaviour by counting the number of times it occurs within a specific, short observation period. Measuring behaviour in this way is typically unavoidable but induces error. This error acts to systematically reduce effect sizes, including metrics of particular interest to behavioural and evolutionary ecologists such as R², repeatability (intra-class correlation, ICC), and heritability. By introducing a null model for the frequency of behaviour, the Poisson process, we give a mechanistic explanation of how this problem arises and demonstrate how it makes comparisons between studies and species problematic: the magnitude of the error depends on how frequently the behaviour has been observed (e.g., as a function of the observation period) as well as on how biologically variable the behaviour is. Importantly, the degree of error is predictable and so can be corrected for. Using the example of parental provisioning rate in birds, we assess the applicability of our null model for modelling the frequency of behaviour. We then review recent literature and demonstrate that the error is rarely accounted for in current analyses. We highlight the problems that arise from this and provide solutions. We further discuss the biological implications of deviations from our null model and highlight the new avenues of research that they may provide. Adopting our recommendations into analyses of behavioural counts will improve the accuracy of estimated effect sizes and allow meaningful comparisons to be made between studies.
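A minimal simulation of the attenuation mechanism under a Poisson null model of this kind (parameter values are illustrative, not from the paper): individuals differ in their true rate of a behaviour, counting over a window of length T adds Poisson noise on top, and the estimated repeatability shrinks as T shrinks, even though the true rates, and hence their repeatability, are fixed.

```python
import numpy as np

rng = np.random.default_rng(42)

def estimated_repeatability(T, n_ind=500, n_obs=4, mean_rate=5.0, sd_between=2.0):
    """Simulate behavioural counts and estimate repeatability (ICC) from
    one-way ANOVA variance components on the observed rates."""
    rates = rng.normal(mean_rate, sd_between, n_ind).clip(min=0.1)  # true rates
    obs = rng.poisson(rates[:, None] * T, size=(n_ind, n_obs)) / T  # observed rates
    ms_within = obs.var(axis=1, ddof=1).mean()
    var_between = obs.mean(axis=1).var(ddof=1) - ms_within / n_obs
    return var_between / (var_between + ms_within)

# True rates are fixed per individual, so their repeatability is 1.0, yet
# Poisson counting noise drags the estimate down -- and less so the longer
# the observation period T.
for T in (0.5, 2, 8, 32):
    print(f"T = {T:4.1f}  estimated repeatability = {estimated_repeatability(T):.2f}")
```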


2019 ◽  
Author(s):  
Shinichi Nakagawa ◽  
Malgorzata Lagisz ◽  
Rose E O'Dea ◽  
Joanna Rutkowska ◽  
Yefeng Yang ◽  
...  

‘Classic’ forest plots show the effect sizes from individual studies and the aggregate effect from a meta-analysis. However, in ecology and evolution, meta-analyses routinely contain over 100 effect sizes, making the classic forest plot of limited use. We surveyed 102 meta-analyses in ecology and evolution and found that only 11% use the classic forest plot. Instead, most used a ‘forest-like plot’, showing point estimates (with 95% confidence intervals; CIs) from a series of subgroups or categories in a meta-regression. We propose a modification of the forest-like plot, which we name the ‘orchard plot’. Orchard plots, in addition to showing overall mean effects and CIs from meta-analyses/regressions, also include 95% prediction intervals (PIs) and the individual effect sizes scaled by their precision. The PI allows the user and reader to see the range in which an effect size from a future study may be expected to fall, and therefore provides an intuitive interpretation of any heterogeneity in the data. Supplementing the PI, the inclusion of underlying effect sizes also allows the user to see any influential or outlying effect sizes. We showcase the orchard plot with example datasets from ecology and evolution, using the R package orchard, which includes several functions for visualizing meta-analytic data with forest-plot derivatives. We consider the orchard plot a variant of the classic forest plot, cultivated to the needs of meta-analysts in ecology and evolution. Hopefully, the orchard plot will prove fruitful for visualizing large collections of heterogeneous effect sizes, regardless of the field of study.
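The authors' R package implements the plot itself; as a language-agnostic illustration of the ingredients, here is a minimal matplotlib sketch on simulated data (not the orchard package) drawing the three layers an orchard plot combines: precision-scaled individual effect sizes, the 95% CI around the random-effects mean, and the wider 95% PI, approximately mu ± 1.96 × sqrt(tau² + SE(mu)²).

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)

# Simulated meta-analytic data: k effect sizes with sampling variances vi
# and true between-study heterogeneity tau^2 = 0.04.
k = 60
vi = rng.uniform(0.005, 0.1, k)
yi = rng.normal(0.3, np.sqrt(0.04 + vi))

# DerSimonian-Laird random-effects estimates.
w = 1 / vi
q = np.sum(w * (yi - np.sum(w * yi) / w.sum()) ** 2)
tau2 = max(0.0, (q - (k - 1)) / (w.sum() - (w**2).sum() / w.sum()))
w_re = 1 / (vi + tau2)
mu = np.sum(w_re * yi) / w_re.sum()
se = np.sqrt(1 / w_re.sum())
ci = (mu - 1.96 * se, mu + 1.96 * se)                    # 95% CI for the mean
pi = (mu - 1.96 * np.sqrt(tau2 + se**2),
      mu + 1.96 * np.sqrt(tau2 + se**2))                 # approximate 95% PI

fig, ax = plt.subplots(figsize=(7, 2.5))
ax.scatter(yi, rng.uniform(-0.2, 0.2, k),                # jittered "fruit",
           s=8 / np.sqrt(vi), alpha=0.4)                 # sized by precision
ax.plot(pi, [0, 0], lw=2, color="k")                     # 95% PI ("branch")
ax.plot(ci, [0, 0], lw=7, color="k")                     # 95% CI ("trunk")
ax.plot(mu, 0, "o", ms=9, mfc="w", mec="k")              # mean effect
ax.axvline(0, ls=":", color="grey")
ax.set(xlabel="Effect size", yticks=[])
plt.tight_layout()
plt.show()
```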


2019 ◽  
Author(s):  
Amanda Kvarven ◽  
Eirik Strømland ◽  
Magnus Johannesson

Andrews & Kasy (2019) propose an approach for adjusting effect sizes in meta-analysis for publication bias. We use the Andrews-Kasy estimator to adjust the result of 15 meta-analyses and compare the adjusted results to 15 large-scale multiple labs replication studies estimating the same effects. The pre-registered replications provide precisely estimated effect sizes, which do not suffer from publication bias. The Andrews-Kasy approach leads to a moderate reduction of the inflated effect sizes in the meta-analyses. However, the approach still overestimates effect sizes by a factor of about two or more and has an estimated false positive rate of between 57% and 100%.

