Statistical Power and Sample Size in PLS Path Analysis

Author(s):  
J. Christopher Westland
2008 ◽  
Vol 4 ◽  
pp. T263-T264
Author(s):  
Steven D. Edland ◽  
Linda K. McEvoy ◽  
Dominic Holland ◽  
John C. Roddey ◽  
Christine Fennema-Notestine ◽  
...  

1990 ◽  
Vol 47 (1) ◽  
pp. 2-15 ◽  
Author(s):  
Randall M. Peterman

Ninety-eight percent of recently surveyed papers in fisheries and aquatic sciences that did not reject some null hypothesis (H0) failed to report β, the probability of making a type II error (failing to reject H0 when it is false), or statistical power (1 – β). However, 52% of those papers drew conclusions as if H0 were true. A false H0 could have been missed because of a low-power experiment caused by small sample size or large sampling variability. Costs of type II errors can be large (for example, failing to detect harmful effects of an industrial effluent or a significant effect of fishing on stock depletion). Past statistical power analyses show that abundance estimation techniques usually have high β and that only large effects are detectable. I review relationships among β, power, detectable effect size, sample size, and sampling variability. I show how statistical power analysis can help interpret past results and improve designs of future experiments, impact assessments, and management regulations. I make recommendations for researchers and decision makers, including routine application of power analysis, more cautious management, and reversal of the burden of proof to put it on industry, not management agencies.
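
As an illustration of these relationships (not drawn from the surveyed papers), the sketch below approximates power for a two-sided, two-sample comparison across a few illustrative sample sizes and standardized effect sizes, using a normal approximation.

```python
# Sketch: how power (1 - beta) varies with sample size, effect size, and alpha
# for a two-sided, two-sample comparison, using a normal approximation.
# The effect sizes and sample sizes are illustrative, not from the surveyed papers.
from scipy.stats import norm

def two_sample_power(d, n_per_group, alpha=0.05):
    """Approximate power to detect standardized effect d with n per group."""
    z_crit = norm.ppf(1 - alpha / 2)
    ncp = d * (n_per_group / 2) ** 0.5          # noncentrality under the alternative
    return norm.cdf(ncp - z_crit) + norm.cdf(-ncp - z_crit)

for n in (10, 30, 100):
    for d in (0.2, 0.5, 0.8):                   # small, medium, large effects
        print(f"n/group={n:3d}  d={d:.1f}  power={two_sample_power(d, n):.2f}")
```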


2018 ◽  
Vol 53 (7) ◽  
pp. 716-719
Author(s):  
Monica R. Lininger ◽  
Bryan L. Riemann

Objective: To describe the concept of statistical power as related to comparative interventions and how various factors, including sample size, affect statistical power. Background: Having a sufficiently sized sample for a study is necessary for an investigation to demonstrate that an effective treatment is statistically superior. Many researchers fail to conduct and report a priori sample-size estimates, which makes nonsignificant results difficult to interpret and causes the clinician to question the planning of the research design. Description: Statistical power is the probability of statistically detecting a treatment effect when one truly exists. The α level, the effect size (a measure of the differences between groups), the variability of the data, and the sample size all affect statistical power. Recommendations: Authors should conduct and provide the results of a priori sample-size estimations in the literature. This will assist clinicians in determining whether the lack of a statistically significant treatment effect is due to an underpowered study or to a treatment's actually having no effect.
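
A minimal sketch of the kind of a priori sample-size estimate the authors recommend, assuming a two-group comparison with a medium standardized effect (d = 0.5), α = .05, and 80% power; the assumed values are illustrative, not taken from the article.

```python
# Sketch of an a priori sample-size estimate for a two-group comparison.
# The assumed effect size (d = 0.5), alpha, and target power are illustrative.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                   ratio=1.0, alternative='two-sided')
print(f"Participants needed per group: {n_per_group:.0f}")   # about 64 per group
```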


2018 ◽  
Author(s):  
Kathleen Wade Reardon ◽  
Avante J Smack ◽  
Kathrin Herzhoff ◽  
Jennifer L Tackett

Although an emphasis on adequate sample size and statistical power has a long history in clinical psychological science (Cohen, 1992), increased attention to the replicability of scientific findings has renewed interest in the importance of statistical power (Bakker, van Dijk, & Wicherts, 2012). These recent efforts have not yet circled back to modern clinical psychological research, despite the continued importance of sample size and power in producing a credible body of evidence. As one step in this process of scientific self-examination, the present study estimated an N-pact Factor (the statistical power of published empirical studies to detect typical effect sizes; Fraley & Vazire, 2014) in two leading clinical journals, the Journal of Abnormal Psychology (JAP) and the Journal of Consulting and Clinical Psychology (JCCP), for the years 2000, 2005, 2010, and 2015. Study sample size, as one proxy for statistical power, is a useful focus because it allows direct comparisons with other subfields and may highlight some of the core methodological differences between clinical and other areas (e.g., hard-to-reach populations, greater emphasis on correlational designs). We found that, across all years examined, the average median sample size in clinical research is 179 participants (175 for JAP and 182 for JCCP). The power to detect a small-to-medium effect size of .20 is just below 80% for both journals. Although the clinical N-pact Factor was higher than that estimated for social psychology, the statistical power in clinical journals is still limited for detecting many effects of interest to clinical psychologists, with little evidence of improvement in sample sizes over time.
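
As a rough check on the power figure above, the sketch below approximates the power to detect a correlation of r = .20 at the reported median sample sizes via the Fisher z transformation; this approximation is not necessarily the authors' exact method.

```python
# Rough check: power to detect r = .20 at the reported median sample sizes,
# using the Fisher z approximation (not necessarily the authors' exact method).
from math import atanh, sqrt
from scipy.stats import norm

def corr_power(r, n, alpha=0.05):
    z_crit = norm.ppf(1 - alpha / 2)
    ncp = atanh(r) * sqrt(n - 3)                 # Fisher z noncentrality
    return norm.cdf(ncp - z_crit)

for n in (175, 179, 182):                        # JAP, overall, JCCP medians
    print(f"N={n}: power to detect r=.20 is about {corr_power(0.20, n):.2f}")
```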


2019 ◽  
Author(s):  
Peter E Clayson ◽  
Kaylie Amanda Carbine ◽  
Scott Baldwin ◽  
Michael J. Larson

Methodological reporting guidelines for studies of event-related potentials (ERPs) were updated in Psychophysiology in 2014. These guidelines facilitate the communication of key methodological parameters (e.g., preprocessing steps). Failing to report key parameters represents a barrier to replication efforts, and difficulty with replicability increases in the presence of small sample sizes and low statistical power. We assessed whether the guidelines are followed and estimated the average sample size and power in recent research. Reporting behavior, sample sizes, and statistical designs were coded for 150 randomly sampled articles published from 2011 to 2017 in five high-impact journals that frequently publish ERP research. An average of 63% of guidelines were reported, and reporting behavior was similar across journals, suggesting that gaps in reporting are a shortcoming of the field rather than of any specific journal. Publication of the guidelines paper had no impact on reporting behavior, suggesting that editors and peer reviewers are not enforcing these recommendations. The average sample size per group was 21. Statistical power was conservatively estimated as .72-.98 for a large effect size, .35-.73 for a medium effect, and .10-.18 for a small effect. These findings indicate that failing to report key guidelines is ubiquitous and that ERP studies are primarily powered to detect large effects. Such low power and insufficient adherence to reporting guidelines represent substantial barriers to replication efforts. The methodological transparency and replicability of studies can be improved by the open sharing of processing code and experimental tasks and by a priori sample-size calculations to ensure adequately powered studies.
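
For a sense of scale, the sketch below approximates power for a between-groups comparison with 21 participants per group (the average reported above) at small, medium, and large effect sizes; the published ranges span several designs, so this two-sample case is only illustrative.

```python
# Sketch: approximate power with 21 participants per group for small, medium,
# and large standardized effects. The published ranges span several designs;
# this independent-groups case is illustrative only.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for label, d in (("small", 0.2), ("medium", 0.5), ("large", 0.8)):
    p = analysis.power(effect_size=d, nobs1=21, alpha=0.05, ratio=1.0,
                       alternative='two-sided')
    print(f"{label:6s} effect (d={d}): power = {p:.2f}")
```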


Author(s):  
Luh Ade Yumita Handriani ◽  
Sudarsana Arka

This study aims to analyze the impact of the BPNT program on household consumption and on the consumption patterns of BPNT-recipient households in Mengwi District, Badung Regency. The research was conducted in Mengwi District, Badung Regency, using a questionnaire distributed to a sample of 96 KPM (beneficiary households). The study uses path analysis to estimate the direct effects and the Sobel test to assess the indirect effect. Based on the path analysis, the study concluded that the BPNT variable had a positive and significant effect on the consumption of BPNT-recipient households in Mengwi District, Badung Regency. The BPNT variable had no effect on the consumption pattern of BPNT-recipient households in Mengwi District, Badung Regency. The household consumption variable had a negative and significant effect on the consumption pattern of BPNT-recipient households in Mengwi District, Badung Regency. The household consumption variable did mediate the effect of the BPNT program on the consumption pattern of BPNT-recipient households in Mengwi District, Badung Regency.
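
A minimal sketch of the Sobel test for an indirect (mediated) effect of the kind used here; the path coefficients and standard errors below are placeholders, not values from the study.

```python
# Minimal sketch of the Sobel test for an indirect (mediated) effect.
# a, b are the path coefficients; se_a, se_b their standard errors.
# The numbers below are placeholders, not values from the study.
from math import sqrt
from scipy.stats import norm

def sobel_test(a, se_a, b, se_b):
    indirect = a * b
    se_indirect = sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    z = indirect / se_indirect
    p = 2 * (1 - norm.cdf(abs(z)))               # two-sided p-value
    return indirect, z, p

indirect, z, p = sobel_test(a=0.45, se_a=0.10, b=-0.30, se_b=0.08)
print(f"indirect effect = {indirect:.3f}, z = {z:.2f}, p = {p:.3f}")
```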


2019 ◽  
Author(s):  
Rob Cribbie ◽  
Nataly Beribisky ◽  
Udi Alter

Many bodies recommend that a sample-planning procedure, such as a traditional NHST a priori power analysis, be conducted during the planning stages of a study. Power analysis allows the researcher to estimate how many participants are required to detect a minimally meaningful effect size at a specified level of power and Type I error rate. However, there are several drawbacks to the procedure that render it "a mess." Specifically, identifying the minimally meaningful effect size is often difficult yet unavoidable if the procedure is to be conducted properly, the procedure is not precision oriented, and it does not encourage the researcher to collect as many participants as is feasible. In this study, we explore how these three theoretical issues are reflected in applied psychological research in order to better understand whether they are concerns in practice. To investigate how power analysis is currently used, we reviewed the reporting of 443 power analyses in high-impact psychology journals in 2016 and 2017. We found that researchers rarely use the minimally meaningful effect size as the rationale for the effect size chosen in a power analysis. Further, precision-based approaches and collecting the maximum feasible sample size are almost never used in tandem with power analyses. In light of these findings, we suggest that researchers focus on tools beyond traditional power analysis when planning sample sizes, such as collecting the maximum sample size that is feasible.
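
One precision-oriented alternative alluded to above is to plan the sample around the width of a confidence interval rather than around a significance test; the sketch below chooses a per-group n so that the 95% CI for a standardized mean difference has a desired half-width, with an illustrative target value.

```python
# Sketch of a precision-based alternative to a priori power analysis: choose n
# so the 95% CI for a two-group standardized mean difference has a target
# half-width. The target half-width is an illustrative assumption.
from math import ceil
from scipy.stats import norm

def n_for_ci_halfwidth(halfwidth, sd=1.0, alpha=0.05):
    """Approximate per-group n for a mean-difference CI of the given half-width."""
    z = norm.ppf(1 - alpha / 2)
    return ceil(2 * (z * sd / halfwidth) ** 2)

print(n_for_ci_halfwidth(0.25))                  # about 123 per group
```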

