Using Anchor-Based Methods to Determine the Smallest Effect Size of Interest

Author(s):  
Farid Anvari ◽  
Daniel Lakens

Effect sizes are an important outcome of quantitative research, but few guidelines exist that explain how researchers can determine which effect sizes are meaningful. Psychologists often want to study effects that are large enough to make a difference to people’s subjective experience. Thus, subjective experience is one way to gauge the meaningfulness of an effect. We illustrate how to quantify the minimum subjectively experienced difference—the smallest change in an outcome measure that individuals consider to be meaningful enough in their subjective experience such that they are willing to rate themselves as feeling different—using an anchor-based method with a global rating of change question applied to the Positive and Negative Affect Scale. For researchers interested in people’s subjective experiences, this anchor-based method provides one way to specify a smallest effect size of interest, which allows researchers to interpret observed results in terms of their theoretical and practical significance.
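A minimal sketch of the anchor-based computation described in the abstract, assuming a hypothetical data layout: outcome scores at Time 1 and Time 2 and a global rating of change item scored so that 0 means "no change", 1 means "a little" change, and 2 or more means "a lot" of change. All column names and values are illustrative, not the authors' actual materials.

```python
import pandas as pd

def minimum_subjective_difference(df: pd.DataFrame) -> float:
    """Average absolute Time2 - Time1 change among people who report a small change."""
    change = df["outcome_t2"] - df["outcome_t1"]
    changed_a_little = df["rating_of_change"].abs() == 1  # small self-reported change
    return change[changed_a_little].abs().mean()

# Example with made-up data
df = pd.DataFrame({
    "outcome_t1":       [30, 28, 35, 32, 27],
    "outcome_t2":       [33, 28, 31, 36, 27],
    "rating_of_change": [ 1,  0, -1,  2,  0],
})
print(minimum_subjective_difference(df))  # mean absolute change for |rating| == 1
```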

2021 ◽  
Author(s):  
Farid Anvari

In some fields of research, psychologists are interested in effect sizes that are large enough to make a difference to people’s subjective experience. Recently, an anchor-based method using a global rating of change was proposed as a way to quantify the smallest subjectively experienced difference—the smallest numerical difference in the outcome measure that, on average, corresponds to reported changes in people’s subjective experience. According to the method, the construct of interest is measured on two occasions (Time 1 and Time 2). At Time 2, people also use an anchor-item to report how much they experienced a change in the construct. Participants are then categorized as those who stayed the same, those who changed a lot, and those who changed a little. The average change score for those who changed a little is the estimate of the smallest subjectively experienced difference. In the present study, I examined two aspects of the method’s validity. First, I tested whether presenting the anchor-item before or after the Time 2 outcome measure influences the results. The results suggest that any potential influence of the anchor-position, assuming there is an influence, is likely to be small. Second, I tested whether the pattern of the anchor-item’s validity correlations is improved when the delay between Time 1 and 2 is one day, as opposed to the pattern found in past research where the delay was two and five days. The observed pattern of validity correlations remained largely the same. I note directions for future research.
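A small sketch of the kind of validity correlation checked in this line of work: the anchor item should correlate more strongly with observed change (and with Time 2 scores) than with Time 1 scores. The simulated data and variable names below are hypothetical, included only to show the computation.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
t1 = rng.normal(30, 5, size=200)                    # Time 1 outcome scores
true_change = rng.normal(0, 3, size=200)            # latent change between sessions
t2 = t1 + true_change                               # Time 2 outcome scores
anchor = true_change + rng.normal(0, 2, size=200)   # self-reported change (anchor item)

r_anchor_change, _ = pearsonr(anchor, t2 - t1)
r_anchor_t2, _ = pearsonr(anchor, t2)
r_anchor_t1, _ = pearsonr(anchor, t1)
print(f"anchor-change r = {r_anchor_change:.2f}, "
      f"anchor-T2 r = {r_anchor_t2:.2f}, anchor-T1 r = {r_anchor_t1:.2f}")
```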


2013 ◽  
Vol 12 (3) ◽  
pp. 345-351 ◽  
Author(s):  
Jessica Middlemis Maher ◽  
Jonathan C. Markey ◽  
Diane Ebert-May

Statistical significance testing is the cornerstone of quantitative research, but studies that fail to report measures of effect size are potentially missing a robust part of the analysis. We provide a rationale for why effect size measures should be included in quantitative discipline-based education research. Examples from both biological and educational research demonstrate the utility of effect size for evaluating practical significance. We also provide details about some effect size indices that are paired with common statistical significance tests used in educational research and offer general suggestions for interpreting effect size measures. Finally, we discuss some inherent limitations of effect size measures and provide further recommendations about reporting confidence intervals.
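As an illustration of pairing a significance test with its effect size, the sketch below reports an independent-samples t-test alongside Cohen's d and an approximate 95% confidence interval for d (a normal-approximation interval; dedicated packages give exact noncentral-t intervals). The data are simulated.

```python
import numpy as np
from scipy import stats

def cohens_d_with_ci(x, y, alpha=0.05):
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * np.var(x, ddof=1) +
                         (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2))
    d = (np.mean(x) - np.mean(y)) / pooled_sd
    # Approximate standard error of d
    se = np.sqrt((nx + ny) / (nx * ny) + d**2 / (2 * (nx + ny)))
    z = stats.norm.ppf(1 - alpha / 2)
    return d, (d - z * se, d + z * se)

rng = np.random.default_rng(1)
treatment = rng.normal(0.5, 1.0, size=40)
control = rng.normal(0.0, 1.0, size=40)

t, p = stats.ttest_ind(treatment, control)
d, ci = cohens_d_with_ci(treatment, control)
print(f"t = {t:.2f}, p = {p:.3f}, d = {d:.2f}, 95% CI = [{ci[0]:.2f}, {ci[1]:.2f}]")
```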


Author(s):  
H. S. Styn ◽  
S. M. Ellis

The determination of the significance of differences in means and of relationships between variables is important in many empirical studies. Usually only statistical significance is reported, which does not necessarily indicate an important (practically significant) difference or relationship. In studies based on probability samples, effect size indices should be reported in addition to statistical significance tests in order to comment on practical significance. Where complete populations or convenience samples are used, the determination of statistical significance is, strictly speaking, no longer relevant, and effect size indices can instead be used as a basis for judging significance. In this article, attention is paid to the use of effect size indices to establish practical significance. It is also shown how these indices are utilized in a few fields of statistical application and how they receive attention in the statistical literature and in computer packages. The use of effect sizes is illustrated by a few examples from the research literature.
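A brief sketch of the population case described above, under stated assumptions: when two complete subgroups (rather than samples) are compared, no significance test is needed and the effect size itself is judged, for example against conventional benchmarks. The data and the benchmark cut-offs (Cohen's conventional values) are illustrative only.

```python
import numpy as np

def population_cohens_d(group_a, group_b):
    pooled_sd = np.sqrt((np.var(group_a) + np.var(group_b)) / 2)  # population SDs
    return (np.mean(group_a) - np.mean(group_b)) / pooled_sd

def interpret_d(d, small=0.2, medium=0.5, large=0.8):
    magnitude = abs(d)
    if magnitude >= large:
        return "large"
    if magnitude >= medium:
        return "medium"
    if magnitude >= small:
        return "small"
    return "negligible"

group_a = np.array([12.1, 13.4, 11.8, 14.0, 12.9])  # complete subgroup A
group_b = np.array([11.0, 11.9, 10.7, 12.2, 11.4])  # complete subgroup B
d = population_cohens_d(group_a, group_b)
print(f"d = {d:.2f} ({interpret_d(d)})")
```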


2019 ◽  
Author(s):  
Miguel Alejandro Silan

One of the main criticisms of NHST is that statistical significance is not practical significance. This evaluation of the practical significance of effects often takes an implicit but consequential form in the field: from informal conversations among researchers when evaluating findings, to peer reviewers deciding on the importance of an article. This primer seeks to make explicit what we mean when we talk about practical significance, to organize what we know of it, and to propose a framework for how we can evaluate and establish it. The practical significance of effects is appraised by analyzing them (i) along different levels of analysis, (ii) across different outcomes, (iii) across time, and (iv) across relevant moderators; these dimensions also underlie the conditions under which small effect sizes can be consequential. Practical significance is contrasted with often-conflated terms, including statistical significance, effect size and effect size benchmarks, as well as theoretical significance. Promising directions are then presented.


2018 ◽  
Author(s):  
Robert Calin-Jageman

This paper has now been published in the Journal of Undergraduate Neuroscience Education: http://www.funjournal.org/wp-content/uploads/2018/04/june-16-e21.pdf?x91298. See also this record on PubMed and PubMed Central: https://www.ncbi.nlm.nih.gov/pubmed/30057503. An ongoing reform in statistical practice is to report and interpret effect sizes. This paper provides a short tutorial on effect sizes and some tips on how to help your students think in terms of effect sizes when analyzing data. An effect size is just a quantitative answer to a research question. Effect sizes should always be accompanied by a confidence interval or some other means of expressing uncertainty in generalizing from the sample to the population. Effect sizes are best interpreted in raw scores, but can also be expressed in standardized terms; several popular standardized effect size measures are explained and compared. Training your students to report and interpret effect sizes can help them become better scientists: it will help them think critically about the practical significance of their results, make uncertainty salient, foster better planning for subsequent experiments, encourage meta-analytic thinking, and help focus their efforts on optimizing measurement. You can help your students start to think in effect sizes by giving them tools to visualize and translate between different effect size measures, and by tasking them with building a ‘library’ of effect sizes in a research field of interest.
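A small sketch of the "translate between effect size measures" idea mentioned above: a raw mean difference is converted to Cohen's d and then to the common-language effect size (the probability that a randomly chosen treated person scores higher than a randomly chosen control person, assuming normal distributions). The numbers are illustrative only.

```python
import math

raw_difference = 4.0   # raw mean difference on the outcome scale
pooled_sd = 10.0       # pooled standard deviation of the outcome

d = raw_difference / pooled_sd          # standardized mean difference
cles = 0.5 * (1 + math.erf(d / 2))      # Phi(d / sqrt(2)), common-language effect size
print(f"d = {d:.2f}, P(random treated > random control) = {cles:.2f}")
```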


2021 ◽  
Vol 2021 (1) ◽  
Author(s):  
Dobromir Rahnev

Abstract Many studies have shown that confidence and accuracy can be dissociated in a variety of tasks. However, most of these dissociations involve small effect sizes, occur only in a subset of participants, and include a reaction time (RT) confound. Here, I develop a new method for inducing confidence–accuracy dissociations that overcomes these limitations. The method uses an external noise manipulation and relies on the phenomenon of criterion attraction where criteria for different tasks become attracted to each other. Subjects judged the identity of stimuli generated with either low or high external noise. The results showed that the two conditions were matched on accuracy and RT but produced a large difference in confidence (effect appeared for 25 of 26 participants, effect size: Cohen’s d = 1.9). Computational modeling confirmed that these results are consistent with a mechanism of criterion attraction. These findings establish a new method for creating conditions with large differences in confidence without differences in accuracy or RT. Unlike many previous studies, however, the current method does not lead to differences in subjective experience and instead produces robust confidence–accuracy dissociations by exploiting limitations in post-perceptual, cognitive processes.
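A toy signal-detection sketch of the criterion-attraction idea described above (not the author's actual computational model): two conditions are matched on d' (accuracy), but the high-external-noise condition has wider evidence distributions. If the confidence criteria are shared, i.e. attracted to common values, rather than scaled to each condition's noise, the noisier condition yields more high-confidence responses at the same accuracy. All parameters below are made up.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
conf_criterion = 1.5  # common high-confidence criterion on the evidence axis

for label, sigma in [("low noise", 1.0), ("high noise", 2.0)]:
    mu = 1.0 * sigma                    # scale signal so d' = mu/sigma = 1 in both conditions
    stim = rng.choice([-1, 1], size=n)  # two stimulus classes
    evidence = stim * mu + rng.normal(0, sigma, size=n)
    accuracy = np.mean(np.sign(evidence) == stim)
    high_conf = np.mean(np.abs(evidence) > conf_criterion)
    print(f"{label}: accuracy = {accuracy:.3f}, p(high confidence) = {high_conf:.3f}")
```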


2020 ◽  
Author(s):  
César Villacura-Herr ◽  
Nicolas Kenner

Effect sizes are highly relevant in quantitative research. They facilitate the comparison and quantitative synthesis of scientific studies. The main objective of this report is to present: a) a brief summary of the formulas used for conversion between the three main effect sizes used in meta-analysis: the correlation coefficient, the standardized mean difference, and the odds ratio; and b) the Rapid Effect Size Converter for Meta-Analysis (rESCMA), an open-source, browser-based app for efficiently converting and bulk-converting effect sizes and their variances based on the formulas presented in this paper. In addition, a table summarizing the formulas is provided for easy accessibility and use.
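For orientation, the sketch below shows the standard point-estimate conversions between the three metrics named above, following the formulas commonly used in meta-analysis (e.g., Borenstein et al.); the corresponding variance conversions are omitted for brevity, and this is not a reproduction of the rESCMA code.

```python
import math

def d_from_r(r):
    """Standardized mean difference from a correlation (equal group sizes assumed)."""
    return 2 * r / math.sqrt(1 - r**2)

def r_from_d(d):
    """Correlation from a standardized mean difference (equal group sizes assumed)."""
    return d / math.sqrt(d**2 + 4)

def d_from_log_odds_ratio(log_or):
    """Standardized mean difference from a log odds ratio (logistic model)."""
    return log_or * math.sqrt(3) / math.pi

def log_odds_ratio_from_d(d):
    """Log odds ratio from a standardized mean difference (logistic model)."""
    return d * math.pi / math.sqrt(3)

# Round-trip example
r = 0.30
d = d_from_r(r)
print(f"r = {r:.2f} -> d = {d:.3f} -> OR = {math.exp(log_odds_ratio_from_d(d)):.2f}")
```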


2017 ◽  
Author(s):  
Erin Michelle Buchanan ◽  
John E. Scofield

As effect sizes gain ground as important indicators of practical significance and as a meta-analytic tool, we must critically understand their limitations and biases. This project expands on research by Okada (2013), which highlighted the positive bias of eta squared and suggested the use of omega squared or epsilon squared for their lack of bias. These variance overlap measures were examined for potential bias in different data scenarios (i.e., truncated and Likert-type data) to elucidate differences in bias from previous research. We found that data precision and truncation affected effect size bias, often lowering the bias in eta squared. This work expands our understanding of bias in variance overlap measures and allows researchers to make an informed choice about the type of effect size to report given their research study. Implications for sample size planning and power are also discussed.
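For reference, the sketch below computes the three variance-overlap measures discussed above from a one-way ANOVA decomposition: eta squared is the positively biased estimator, while omega squared and epsilon squared apply small-sample corrections. The data are simulated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
groups = [rng.normal(loc, 1.0, size=20) for loc in (0.0, 0.3, 0.6)]

grand_mean = np.mean(np.concatenate(groups))
ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
ss_within = sum(np.sum((g - np.mean(g)) ** 2) for g in groups)
ss_total = ss_between + ss_within
df_between = len(groups) - 1
df_within = sum(len(g) for g in groups) - len(groups)
ms_within = ss_within / df_within

eta_sq = ss_between / ss_total
omega_sq = (ss_between - df_between * ms_within) / (ss_total + ms_within)
epsilon_sq = (ss_between - df_between * ms_within) / ss_total
print(f"eta^2 = {eta_sq:.3f}, omega^2 = {omega_sq:.3f}, epsilon^2 = {epsilon_sq:.3f}")
```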


Author(s):  
Scott B. Morris ◽  
Arash Shokri

To understand and communicate research findings, it is important for researchers to consider two types of information provided by research results: the magnitude of the effect and the degree of uncertainty in the outcome. Statistical significance tests have long served as the mainstream method for statistical inference. However, the widespread misinterpretation and misuse of significance tests has led critics to question their usefulness in evaluating research findings and to raise concerns about the far-reaching effects of this practice on scientific progress. An alternative approach involves reporting and interpreting measures of effect size along with confidence intervals. An effect size is an indicator of the magnitude and direction of a statistical observation. Effect size statistics have been developed to represent a wide range of research questions, including indicators of the mean difference between groups, the relative odds of an event, or the degree of correlation among variables. Effect sizes play a key role in evaluating practical significance, conducting power analysis, and conducting meta-analysis. While effect sizes summarize the magnitude of an effect, confidence intervals represent the degree of uncertainty in the result. By presenting a range of plausible alternative values that might have occurred due to sampling error, confidence intervals provide an intuitive indicator of how strongly researchers should rely on the results of a single study.
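As one concrete instance of reporting an effect size with its confidence interval, the sketch below computes an approximate 95% interval for a correlation via the Fisher z transformation; the numbers are illustrative only.

```python
import math

def correlation_ci(r, n, alpha_z=1.959963984540054):
    """Approximate 95% CI for a Pearson correlation via the Fisher z transform."""
    z = math.atanh(r)                 # Fisher z transform
    se = 1 / math.sqrt(n - 3)         # standard error of z
    lo, hi = z - alpha_z * se, z + alpha_z * se
    return math.tanh(lo), math.tanh(hi)

r, n = 0.35, 80
lo, hi = correlation_ci(r, n)
print(f"r = {r:.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")
```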


2019 ◽  
Vol 1 (1) ◽  
pp. 45
Author(s):  
Bahrul Hayat

Many experimental studies have been conducted in recent years to examine the effect of the mastery learning approach on students’ cognitive behavior and affective characteristics. The question, however, is how much scientific evidence the existing results provide when they are combined. By treating the different mastery learning experiments as research replications, their results can be combined using meta-analysis techniques. This paper shows how a quantitative research synthesis can effectively be used to combine the statistical evidence of studies conducted separately and independently. The effect of mastery learning on the affective characteristics of students was selected for this research synthesis. The mastery learning approach investigated here is the Bloom-type mastery learning strategy. Using 26 independent comparisons, the results of the study show that: a) the effect sizes of mastery learning on affective characteristics of students are heterogeneous across studies; b) the source of study, whether dissertation or journal article, does not explain the variability among the effect sizes; c) mastery learning programs using a ≥75% mastery criterion seem to have a positive affective impact on students, while those using a <75% criterion have no impact on the affective characteristics of students; d) the mean effect size shows a decreasing trend as the level of education increases; e) the mean effect size is strongly positive for mathematics classes and only weakly positive for science and social studies; and f) short treatment durations have a much larger positive effect size than long treatment durations.
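A minimal sketch of the kind of research synthesis described above: fixed-effect (inverse-variance) pooling of study effect sizes, with Cochran's Q and I² as heterogeneity measures. The effect sizes and variances below are made up and are not the 26 comparisons analyzed in the paper.

```python
import numpy as np

effect_sizes = np.array([0.45, 0.30, 0.62, 0.10, 0.55])  # e.g., standardized mean differences
variances = np.array([0.04, 0.05, 0.06, 0.03, 0.08])     # their sampling variances

weights = 1 / variances                                    # inverse-variance weights
pooled = np.sum(weights * effect_sizes) / np.sum(weights)  # fixed-effect pooled estimate
q = np.sum(weights * (effect_sizes - pooled) ** 2)         # Cochran's Q
df = len(effect_sizes) - 1
i_squared = max(0.0, (q - df) / q) * 100                   # % of variability due to heterogeneity

print(f"pooled effect = {pooled:.2f}, Q = {q:.2f} (df = {df}), I^2 = {i_squared:.0f}%")
```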

