sufficient statistical power
Recently Published Documents

TOTAL DOCUMENTS: 26 (last five years: 12)
H-INDEX: 6 (last five years: 1)

2021 · Vol 5 (Supplement_1) · pp. 275-275 · Author(s): Igor Akushevich

Abstract: This study uses Medicare data to non-parametrically evaluate race- and place-of-residence-related disparities in AD/ADRD prevalence and incidence-based mortality, to separate them into their epidemiological causal components, including race-related disparities in incidence and survival, and finally to explain these in terms of health-care-related factors using causal methods for group variable effects (propensity scores and the rank-and-replace method) and regression-based analyses (an extended Fairlie model and a generalized Oaxaca-Blinder approach for censored outcomes). Partitioning analysis showed that the incidence rate is the main predictor of temporal changes and racial disparities in AD/ADRD prevalence and mortality, though survival began to play a role after 2010. Arterial hypertension is the leading predictor responsible for racial disparities in AD/ADRD risk. This study demonstrated that Medicare data have sufficient statistical power and potential for studying disparities in AD/ADRD along three interacting dimensions: the multi-ethnic structure of the population, place of residence, and time period.


2021 · Author(s): Nick J. Broers, Henry Otgaar

Since the early work of Cohen (1962), psychological researchers have become aware of the importance of conducting a power analysis to ensure that the predicted effect will be detectable with sufficient statistical power. APA guidelines require researchers to justify the chosen sample size with reference to the expected effect size, an expectation that should be based on previous research. However, we argue that a credible estimate of an expected effect size is reasonable only under two conditions: either the new study is a direct replication of earlier work, or the outcome scale uses meaningful and familiar units that allow the quantification of a minimal effect of psychological interest. In practice, neither condition is usually met. We propose a different rationale for power analysis that ensures researchers can justify their sample size as meaningful and adequate.
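To make the dependence on the assumed effect size concrete, the hedged sketch below (not taken from the paper; the candidate effect sizes, alpha level, and power target are illustrative assumptions) uses statsmodels to show how strongly the required sample size for an independent-samples t-test varies with the expected Cohen's d.

```python
# A priori power analysis for a two-group comparison (independent-samples t-test).
# Illustrative only: the candidate effect sizes, alpha = .05, and 80% power are assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

for d in (0.2, 0.5, 0.8):  # conventional "small", "medium", "large" Cohen's d
    n_per_group = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80,
                                       ratio=1.0, alternative='two-sided')
    print(f"d = {d:.1f}: about {n_per_group:.0f} participants per group")

# The required n roughly quadruples when d is halved, which is why an
# unjustified guess about the expected effect size undermines the rationale.
```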


2021 · Author(s): Abigail Davis, Robin Stewart Samuel Kramer

Attachment styles in individuals with autism are not well understood, and research into the topic is limited to date. Authors regularly utilise standardised measures to classify attachment in adulthood, and this is the case for research with neurotypical and autistic populations. Here, we argue that there may be fundamental problems with using such measures, developed for neurotypical populations, in order to quantify attachment in those with autism. Crucially, such tools may be unable to differentiate between autistic behaviours and behaviours associated with insecure attachment styles. Furthermore, many studies which have investigated attachment and autism may lack sufficient statistical power due to the use of time-consuming attachment interviews or student populations which typically do not contain sufficient numbers of adults with autism. We argue that it is essential that measures are developed which accurately distinguish between insecure attachment styles and behaviours associated with autism, with the goal of better understanding attachment in those with autism for both parental and romantic relationships.


2021 · pp. 133-151 · Author(s): R. Barker Bausell

While replication of research is the ultimate arbiter of reproducibility, the process is a bit more complex than it appears, and, like any empirical study, a replication can itself be wrong. However, replications are the best tool available for determining reproducibility if (a) they employ sufficient statistical power; (b) they follow the original study procedures as closely as possible (sans any questionable research practices present therein); (c) their investigators are able to obtain the necessary information, advice, and materials from the original authors; and (d) the replication protocol is preregistered. The chapter describes different types of replications: exact (seldom possible for experimental research); direct (the recommended approach, which employs the same methodological procedures, outcome variables, and statistical approaches as the original study); conceptual (not recommended, since such replications customarily presume the original results to be correct and are conducted to determine how far those results can be extended); self (primarily useful for the original investigators, who replicate their own study to convince themselves that its results are reproducible); and partial (seldom necessary, but useful when there is no alternative, such as when all of the procedures cannot be duplicated for ethical reasons).


2021 · Vol 8 (2) · Author(s): Alba Motes-Rodrigo, Roger Mundry, Josep Call, Claudio Tennie

The ability to imitate has been deemed crucial for the emergence of human culture. Although non-human animals also possess culture, the acquisition mechanisms underlying behavioural variation between populations in other species are still under debate. It is especially controversial whether great apes can spontaneously imitate. Action- and subject-specific factors have been suggested to influence the likelihood that an action will be imitated. However, few studies have jointly tested these hypotheses. Just one study to date has reported spontaneous imitation in chimpanzees (Persson et al. 2017 Primates 59, 19–29), although important methodological limitations were not accounted for. Here, we present a study in which we (i) replicate the above-mentioned study, addressing its limitations, in an observational study of human–chimpanzee imitation; and (ii) aim to test the influence of action- and subject-specific factors on action copying in chimpanzees by providing human demonstrations of multiple actions to chimpanzees of varying rearing backgrounds. To properly address our second aim, we conducted a preparatory power analysis using simulated data. Contrary to Persson et al.'s study, we found extremely low rates of spontaneous chimpanzee imitation, and we did not find enough cases of action matching to be able to apply our planned model with sufficient statistical power. We discuss possible factors explaining the lack of observed action matching in our experiments compared with previous studies.
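A preparatory power analysis with simulated data typically works by generating many datasets under an assumed effect, analysing each one with the planned test, and recording how often the effect is detected. The sketch below is a generic, hedged illustration of that logic; it is not the authors' planned model, and the group sizes, assumed matching rates, trial count, and choice of a Mann-Whitney test are all placeholder assumptions.

```python
# Simulation-based power estimate: how often would the planned test detect
# an assumed group difference in action-matching rates?
# All numbers below are illustrative assumptions, not values from the study.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2021)

n_per_group = 20                     # assumed participants per rearing-background group
n_trials = 30                        # assumed demonstration trials per participant
p_group_a, p_group_b = 0.10, 0.25    # assumed matching probabilities per group
n_sims, alpha = 2000, 0.05

hits = 0
for _ in range(n_sims):
    # Per-participant proportion of trials on which the demonstrated action was matched
    a = rng.binomial(n_trials, p_group_a, size=n_per_group) / n_trials
    b = rng.binomial(n_trials, p_group_b, size=n_per_group) / n_trials
    _, p = mannwhitneyu(a, b, alternative='two-sided')
    hits += (p < alpha)

print(f"Estimated power: {hits / n_sims:.2f}")
# If the estimated power is too low, the simulation is rerun with more
# participants or trials until the planned design reaches the target power.
```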


2021 · Author(s): Blair Saunders, Michael Inzlicht

Recent years have witnessed calls for increased rigour and credibility in the cognitive and behavioural sciences, including psychophysiology. Many procedures exist to increase rigour, and among the most important is the need to increase statistical power. Achieving sufficient statistical power, however, is a considerable challenge for resource-intensive methodologies, particularly for between-subjects designs. Meta-analysis is one potential solution; yet the validity of such quantitative reviews is limited by potential bias both in the primary literature and in the meta-analysis itself. Here, we provide a non-technical overview and evaluation of open science methods that could be adopted to increase the transparency of novel meta-analyses. We also contrast post hoc statistical procedures that can be used to correct for publication bias in the primary literature. We suggest that traditional meta-analyses, as applied in ERP research, are exploratory in nature, providing a range of plausible effect sizes without necessarily having the ability to confirm (or disconfirm) existing hypotheses. To complement traditional approaches, we detail how prospective meta-analyses, combined with multisite collaboration, could be used to conduct statistically powerful, confirmatory ERP research.
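As a concrete illustration of the kind of quantitative review being discussed, the hedged sketch below pools a handful of made-up ERP-style effect sizes with an inverse-variance fixed-effect estimate and a DerSimonian-Laird random-effects estimate, and runs an Egger-style regression as one common post hoc check for small-study asymmetry. The effect sizes and standard errors are fabricated placeholders, and this is only one of several bias-correction procedures the paper contrasts.

```python
# Inverse-variance pooling (fixed effect and DerSimonian-Laird random effects)
# plus an Egger-style regression test for funnel-plot asymmetry.
# The effect sizes and standard errors below are placeholder values.
import numpy as np
import statsmodels.api as sm

y  = np.array([0.42, 0.10, 0.55, 0.31, 0.68, 0.05])   # per-study effect sizes (e.g., Hedges' g)
se = np.array([0.12, 0.20, 0.25, 0.15, 0.30, 0.18])   # their standard errors
v  = se ** 2

# Fixed-effect (inverse-variance) pooled estimate
w_fe = 1.0 / v
mu_fe = np.sum(w_fe * y) / np.sum(w_fe)

# DerSimonian-Laird estimate of between-study variance tau^2
q = np.sum(w_fe * (y - mu_fe) ** 2)
c = np.sum(w_fe) - np.sum(w_fe ** 2) / np.sum(w_fe)
tau2 = max(0.0, (q - (len(y) - 1)) / c)

# Random-effects pooled estimate
w_re = 1.0 / (v + tau2)
mu_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))

# Egger-style test: regress standardized effects on precision;
# an intercept far from zero suggests small-study asymmetry.
egger = sm.OLS(y / se, sm.add_constant(1.0 / se)).fit()

print(f"fixed effect  = {mu_fe:.3f}")
print(f"random effect = {mu_re:.3f} (SE {se_re:.3f}, tau^2 {tau2:.3f})")
print(f"Egger intercept = {egger.params[0]:.3f}, p = {egger.pvalues[0]:.3f}")
```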


2020 · pp. 109442812092193 · Author(s): Jeffrey M. Stanton

Testing and rejecting the null hypothesis is a routine part of quantitative research, but relatively few organizational researchers prepare for confirming the null or, similarly, testing a hypothesis of equivalence (e.g., that two group means are practically identical). Both theory and practice could benefit from greater attention to this capability. Planning ahead for equivalence testing also provides helpful input on assuring sufficient statistical power in a study. This article provides background on these ideas plus guidance on the use of two frequentist and two Bayesian techniques for testing a hypothesis of no nontrivial effect. The guidance highlights some faulty strategies and how to avoid them. An organizationally relevant example illustrates how to put these techniques into practice. A simulation compares the four techniques to support recommendations of when and how to use each one. A nine-step process table describes separate analytical tracks for frequentist and Bayesian equivalence techniques.
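For readers unfamiliar with equivalence testing, the hedged sketch below implements the frequentist two one-sided tests (TOST) procedure for two independent group means. It is a generic textbook-style illustration rather than a reproduction of any of the article's four specific techniques, and the equivalence bounds and simulated data are placeholder assumptions.

```python
# Two one-sided tests (TOST) for equivalence of two independent group means.
# The equivalence bounds and the simulated data are illustrative assumptions.
import numpy as np
from scipy import stats

def tost_ind(x, y, low, upp, alpha=0.05):
    """Pooled-variance TOST: is the mean difference inside (low, upp)?"""
    n1, n2 = len(x), len(y)
    diff = np.mean(x) - np.mean(y)
    sp = np.sqrt(((n1 - 1) * np.var(x, ddof=1) + (n2 - 1) * np.var(y, ddof=1))
                 / (n1 + n2 - 2))
    se = sp * np.sqrt(1 / n1 + 1 / n2)
    df = n1 + n2 - 2
    p_lower = stats.t.sf((diff - low) / se, df)   # H0: diff <= low
    p_upper = stats.t.cdf((diff - upp) / se, df)  # H0: diff >= upp
    p = max(p_lower, p_upper)                     # equivalence if both rejected
    return diff, p, p < alpha

rng = np.random.default_rng(0)
group_a = rng.normal(100, 15, 60)   # e.g., scores under condition A
group_b = rng.normal(101, 15, 60)   # e.g., scores under condition B

diff, p, equivalent = tost_ind(group_a, group_b, low=-5, upp=5)
print(f"mean difference = {diff:.2f}, TOST p = {p:.3f}, equivalent: {equivalent}")
```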


Author(s): Laura Mieth, Raoul Bell, Axel Buchner

Abstract. This registered report aims at replicating the so-called “mnemonic time-travel” effect. Aksentijevic, Brandt, Tsakanikos, and Thorpe (2019) reported that memory was improved when their participants experienced backward motion before a memory test, in comparison to when they experienced forward motion or no motion. This finding was interpreted as suggesting that backward motion brought individuals back to the moment of encoding. In the original study, the mnemonic time-travel effect was robustly found with various types of backward motion (real, simulated, and imagined). Such a spectacular finding calls for a preregistered replication. To determine the robustness of the effect, we performed a close replication of Experiment 4 of Aksentijevic et al., in which the mnemonic time-travel effect was most pronounced. Despite sufficient statistical power to detect an effect considerably smaller than the one reported by Aksentijevic et al., we found no significant differences among the motion conditions. The present results thus disconfirm the idea that experiencing backward motion improves memory and suggest that the empirical robustness of the mnemonic time-travel effect should be further scrutinized before any conclusions about mnemonic space and time can be drawn.


2019 · Vol 7 (30) · pp. 63-66 · Author(s): Shengpin Yang, Gilbert Berdine

I am planning a clinical trial to compare two diets on reducing the risk of type II diabetes. Because there is a restriction on the total budget, I would prefer to enroll a small number of participants. Meanwhile, it is important that there is sufficient statistical power to detect a clinically meaningful difference. Is there any study design that can be utilized?
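One design feature often considered in this situation is measuring each participant under both conditions (a paired or crossover-style design), because removing between-subject variability can deliver the same power with fewer participants. The hedged sketch below compares required sample sizes for a parallel two-group design and a within-subject comparison under an assumed within-person correlation; the effect size, correlation, alpha, and power target are placeholder assumptions, not values from the published answer.

```python
# Required sample size: parallel two-group design vs. a within-subject
# (paired/crossover-style) comparison of the same assumed effect.
# d, the within-person correlation r, alpha, and power are all assumptions.
from math import sqrt, ceil
from statsmodels.stats.power import TTestIndPower, TTestPower

d, r, alpha, power = 0.5, 0.6, 0.05, 0.80

# Parallel design: n per group for an independent-samples t-test
n_parallel = TTestIndPower().solve_power(effect_size=d, alpha=alpha,
                                         power=power, ratio=1.0)

# Within-subject design: the paired effect size on difference scores is
# d_z = d / sqrt(2 * (1 - r)) under equal variances in the two conditions.
d_z = d / sqrt(2 * (1 - r))
n_paired = TTestPower().solve_power(effect_size=d_z, alpha=alpha, power=power)

print(f"parallel design : {ceil(n_parallel)} participants per group "
      f"({2 * ceil(n_parallel)} total)")
print(f"within-subject  : {ceil(n_paired)} participants in total")
```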


2019 · Vol 2 (3) · pp. 199-213 · Author(s): Marc-André Goulet, Denis Cousineau

When running statistical tests, researchers can commit a Type II error, that is, fail to reject the null hypothesis when it is false. To diminish the probability of committing a Type II error (β), statistical power must be augmented. Typically, this is done by increasing sample size, as more participants provide more power. When the estimated effect size is small, however, the sample size required to achieve sufficient statistical power can be prohibitive. To alleviate this lack of power, a common practice is to measure participants multiple times under the same condition. Here, we show how to estimate statistical power by taking into account the benefit of such replicated measures. To that end, two additional parameters are required: the correlation between the multiple measures within a given condition and the number of times the measure is replicated. An analysis of a sample of 15 studies (total of 298 participants and 38,404 measurements) suggests that in simple cognitive tasks, the correlation between multiple measures is approximately .14. Although multiple measurements increase statistical power, this effect is not linear, but reaches a plateau past 20 to 50 replications (depending on the correlation). Hence, multiple measurements do not replace the added population representativeness provided by additional participants.
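The plateau can be illustrated with a simple compound-symmetry approximation: if a participant's k measures are equicorrelated with correlation rho, the variance of the participant's mean scales as (1 + (k - 1) * rho) / k, which approaches rho rather than zero as k grows. The hedged sketch below uses that approximation (a simplification, not necessarily the authors' exact model) with the article's rho of about .14 and otherwise made-up values for the effect size and sample size.

```python
# How statistical power changes with the number of replicated measures k,
# assuming equicorrelated trials (compound symmetry) with correlation rho.
# d = 0.3 and n = 30 per group are illustrative assumptions; rho = .14 is the
# approximate correlation reported in the article.
from math import sqrt
from statsmodels.stats.power import TTestIndPower

d, n_per_group, rho, alpha = 0.3, 30, 0.14, 0.05
analysis = TTestIndPower()

for k in (1, 5, 10, 20, 50, 100):
    # Averaging k equicorrelated trials shrinks the error variance by this factor,
    # so the effective effect size grows, but only up to d / sqrt(rho).
    shrink = (1 + (k - 1) * rho) / k
    d_eff = d / sqrt(shrink)
    power = analysis.power(effect_size=d_eff, nobs1=n_per_group,
                           alpha=alpha, ratio=1.0)
    print(f"k = {k:3d}: effective d = {d_eff:.2f}, power = {power:.2f}")

# Power rises quickly over the first handful of replications and then levels
# off, consistent with the plateau past roughly 20-50 replications.
```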

