When to Use Different Tests for Power Analysis and Data Analysis for Mediation

2021
Author(s): Jessica L Fossum, Amanda Kay Montoya

Several options exist for conducting inference on indirect effects in mediation analysis. While methods that use bootstrapping are the preferred inferential approach for testing mediation, they are time consuming when the test must be performed many times, as in a power analysis. More computationally efficient alternatives are less robust, meaning that the accuracy of their inferences is more affected by nonnormal and heteroskedastic data (Biesanz et al., 2010). Whereas previous research focused on the different sample sizes needed to achieve the same power across inferential approaches (Fritz & MacKinnon, 2007), we explore how similar the power estimates are at the same sample size. We compare the power estimates from six tests using a Monte Carlo simulation study, varying the path coefficients and the tests of the indirect effect. If tests produce similar power estimates, the more computationally efficient test can be used for power analysis and the more intensive resampling test can be used for data analysis. We found that when the assumptions of linear regression are met, three tests consistently perform similarly: the joint significance test, the Monte Carlo confidence interval, and the percentile bootstrap confidence interval. Based on these results, we recommend using the more computationally efficient joint significance test for power analysis and then using the percentile bootstrap confidence interval for data analysis.
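A minimal sketch of the computationally efficient approach the abstract recommends for power analysis: estimating power for the indirect effect a*b in a single-mediator model via the joint significance test, which declares mediation significant when both the a path (X → M) and the b path (M → Y, controlling for X) are significant. The path coefficients, sample size, and replication count below are illustrative assumptions, not values from the study.

```python
import numpy as np
import statsmodels.api as sm

def joint_significance_power(a=0.39, b=0.39, n=100, reps=2000,
                             alpha=0.05, seed=1):
    """Monte Carlo power estimate for a*b via the joint significance test."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        x = rng.standard_normal(n)
        m = a * x + rng.standard_normal(n)   # M = a*X + e_M
        y = b * m + rng.standard_normal(n)   # Y = b*M + e_Y (direct effect 0)
        # a path: slope p-value from regressing M on X
        p_a = sm.OLS(m, sm.add_constant(x)).fit().pvalues[1]
        # b path: p-value for M from regressing Y on M and X
        p_b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().pvalues[1]
        hits += (p_a < alpha) and (p_b < alpha)
    return hits / reps

print(joint_significance_power())  # estimated power for these illustrative paths
```

Because each replication here requires only two regressions rather than thousands of bootstrap resamples, this test is orders of magnitude faster inside a power-analysis loop.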

Symmetry, 2019, Vol 11 (4), pp. 484
Author(s): Gadde Srinivasa Rao, Mohammed Albassam, Muhammad Aslam

This paper assesses bootstrap confidence intervals for a newly proposed process capability index (PCI) for the Weibull distribution, based on the logarithm of the analyzed data. These methods can be applied when the quality characteristic of interest has a non-symmetrical distribution. Three bootstrap confidence intervals, namely the standard bootstrap (SB), percentile bootstrap (PB), and bias-corrected percentile bootstrap (BCPB) intervals, are constructed for the proposed method. A Monte Carlo simulation study is used to compare the efficiency of the newly proposed index Cpkw with the existing method in terms of coverage probabilities and average widths. The results show that the BCPB confidence interval performs best and is therefore recommended. The methodology of the proposed index is illustrated using real data on the breaking stress of carbon fibers.
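To illustrate the percentile bootstrap (PB) machinery used above, the sketch below computes a PB interval for a quantile-based capability index on simulated Weibull data. The index here is a common non-normal stand-in, not the paper's Cpkw, and the spec limits, Weibull parameters, and resample count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.weibull(2.0, size=100) * 3.0   # Weibull: shape 2, scale 3 (assumed)
LSL, USL = 0.5, 8.0                       # hypothetical specification limits

def pci(x):
    """Quantile-based capability index for non-normal data."""
    lo, med, hi = np.quantile(x, [0.00135, 0.5, 0.99865])
    return min((USL - med) / (hi - med), (med - LSL) / (med - lo))

B = 2000
boot = np.array([pci(rng.choice(data, size=data.size, replace=True))
                 for _ in range(B)])
lower, upper = np.quantile(boot, [0.025, 0.975])   # 95% percentile interval
print(f"PCI = {pci(data):.3f}, 95% PB CI = ({lower:.3f}, {upper:.3f})")
```

The SB interval would instead use the bootstrap standard deviation around the point estimate, and the BCPB interval adjusts the percentile cut points for the estimated bias of the bootstrap distribution.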


Methodology, 2011, Vol 7 (3), pp. 81-87
Author(s): Shuyan Sun, Wei Pan, Lihshing Leigh Wang

Observed power analysis is recommended by many scholarly journal editors and reviewers, especially for studies with statistically nonsignificant test results. However, researchers may not fully realize that blind adherence to this recommendation can be an unfruitful effort, despite repeated warnings from methodologists. Through both a review of 14 published empirical studies and a Monte Carlo simulation study, the present study demonstrates that observed power is usually not as informative or helpful as we think because (a) observed power for a nonsignificant test is generally low and therefore provides no information beyond the test result itself, and (b) a low observed power does not always indicate that the test is underpowered. Implications and suggestions for statistical power analysis for quantitative researchers are discussed.
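Point (a) follows from the fact that, for a given test, observed power is a one-to-one function of the p value. The sketch below demonstrates this for a two-sided z test (the specific test and alpha level are illustrative): a result that is exactly significant at p = .05 maps to an observed power of almost exactly .50, and any nonsignificant p maps to something lower, regardless of the true underlying power.

```python
from scipy.stats import norm

def observed_power(p, alpha=0.05):
    """Power of a two-sided z test evaluated at the observed effect size."""
    z_obs = norm.isf(p / 2)            # |z| implied by the two-sided p value
    z_crit = norm.isf(alpha / 2)       # critical value at the alpha level
    # P(reject) when the true standardized effect equals the observed one
    return norm.sf(z_crit - z_obs) + norm.cdf(-z_crit - z_obs)

for p in [0.01, 0.05, 0.10, 0.30, 0.50]:
    print(f"p = {p:.2f}  ->  observed power = {observed_power(p):.3f}")
# p = 0.05 yields observed power of about 0.500; larger p values yield
# progressively lower observed power, carrying no new information.
```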


2004, Vol 21 (03), pp. 407-419
Author(s): Jae-Hak Lim, Sang Wook Shin, Dae Kyung Kim, Dong Ho Park

Steady-state availability, denoted by A, has been widely used as a measure to evaluate the reliability of a repairable system. In this paper, we develop new confidence intervals for steady-state availability based on four bootstrap methods: the standard bootstrap confidence interval, the percentile bootstrap confidence interval, the bootstrap-t confidence interval, and the bias-corrected and accelerated (BCa) confidence interval. We also investigate the accuracy of these bootstrap confidence intervals by calculating their coverage probabilities and average interval lengths.
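A minimal sketch of the second of these methods, the percentile bootstrap, for steady-state availability estimated as A = mean(uptime) / (mean(uptime) + mean(downtime)) from paired operating and repair times. The exponential data, sample size, and resample count are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(7)
up = rng.exponential(100.0, size=50)    # simulated times to failure (assumed)
down = rng.exponential(5.0, size=50)    # simulated repair times (assumed)

def availability(u, d):
    """Point estimate of steady-state availability."""
    return u.mean() / (u.mean() + d.mean())

B = 2000
boot = np.empty(B)
for i in range(B):
    # Resample failure/repair cycles jointly to preserve pairing
    idx = rng.integers(0, up.size, size=up.size)
    boot[i] = availability(up[idx], down[idx])

lo, hi = np.quantile(boot, [0.025, 0.975])
print(f"A-hat = {availability(up, down):.4f}, 95% PB CI = ({lo:.4f}, {hi:.4f})")
```

The bootstrap-t and BCa variants replace the raw percentile cut points with studentized and bias/skewness-adjusted ones, respectively, which typically improves coverage for skewed availability estimators.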


2016, Vol 27 (8), pp. 2478-2503
Author(s): Shi-Fang Qiu, Heng Lian, GY Zou, Xiao-Song Zeng

Double-sampling schemes, in which one classifier assesses the whole sample and another assesses a subset of it, have been introduced to reduce classification errors when an infallible (gold standard) classifier is unavailable or impractical. Inference procedures have previously been proposed for situations where an infallible classifier is available to validate a subset of the sample already classified by a fallible classifier. Here, we consider the case where both classifiers are fallible, proposing and evaluating several confidence interval procedures for a proportion under two models that differ in their assumptions about how the two classifiers operate. Under the model with the conditional independence assumption, simulation results suggest that the modified Wald-based confidence interval, the score-based confidence interval, two Bayesian credible intervals, and the percentile bootstrap confidence interval perform reasonably well, even for small binomial proportions and small validated samples. Under the model without the conditional independence assumption, the confidence interval derived from the Wald test with nuisance parameters appropriately evaluated, the likelihood ratio-based confidence interval, the score-based confidence interval, and the percentile bootstrap confidence interval perform satisfactorily in terms of coverage. Moreover, confidence intervals based on log and logit transformations also perform well under both models when the binomial proportion and the ratio of the validated sample are not very small. Two examples illustrate the procedures.
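To make the double-sampling idea concrete, the sketch below builds a percentile bootstrap interval in the simpler, previously studied setting the abstract mentions, where the validating classifier is treated as a gold standard on the subsample. It estimates the fallible classifier's sensitivity and specificity from the validated pairs and applies a Rogan-Gladen-style correction to the crude proportion; this is a simplified illustration, not the paper's both-fallible procedure, and all data and error rates are simulated assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
# Assumed truth: prevalence 0.25, sensitivity 0.90, specificity 0.85,
# which implies a fallible positive rate of about 0.3375.
fallible = rng.binomial(1, 0.3375, size=500)          # fallible labels, full sample
truth_v = rng.binomial(1, 0.25, size=100)             # gold-standard labels, subsample
fall_v = np.where(truth_v == 1,
                  rng.binomial(1, 0.90, size=100),    # draws at the sensitivity rate
                  rng.binomial(1, 0.15, size=100))    # draws at 1 - specificity

def corrected_prop(fall, tv, fv):
    """Rogan-Gladen-corrected proportion, clipped to [0, 1]."""
    se = fv[tv == 1].mean()                 # estimated sensitivity
    sp = 1.0 - fv[tv == 0].mean()           # estimated specificity
    return np.clip((fall.mean() - (1.0 - sp)) / (se + sp - 1.0), 0.0, 1.0)

B = 2000
boot = np.empty(B)
for b in range(B):
    f_star = rng.choice(fallible, size=fallible.size, replace=True)
    idx = rng.integers(0, truth_v.size, size=truth_v.size)  # resample pairs jointly
    boot[b] = corrected_prop(f_star, truth_v[idx], fall_v[idx])

lo, hi = np.quantile(boot, [0.025, 0.975])
print(f"corrected proportion = {corrected_prop(fallible, truth_v, fall_v):.3f}, "
      f"95% PB CI = ({lo:.3f}, {hi:.3f})")
```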


2017, Vol 8 (4), pp. 379-386
Author(s): Alexander M. Schoemann, Aaron J. Boulton, Stephen D. Short

Mediation analyses abound in social and personality psychology. Current recommendations for assessing power and sample size in mediation models include using a Monte Carlo power analysis simulation and testing the indirect effect with a bootstrapped confidence interval. Unfortunately, these methods have rarely been adopted by researchers because of limited software options and the computation time required. We propose a new method, with convenient tools, for determining sample size and power in mediation models, and we demonstrate it through an easy-to-use application. These developments will allow researchers to quickly and easily determine power and sample size for simple and complex mediation models.
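A minimal sketch of the Monte Carlo confidence interval for an indirect effect a*b, the fast test that makes simulation-based power analysis of this kind tractable: the two path estimates are drawn from their normal sampling distributions and percentiles of the resulting products form the interval. The estimates and standard errors below are illustrative assumptions.

```python
import numpy as np

def mc_ci(a_hat, se_a, b_hat, se_b, draws=20_000, conf=0.95, seed=0):
    """Monte Carlo confidence interval for the indirect effect a*b."""
    rng = np.random.default_rng(seed)
    # Sample each path from its estimated sampling distribution, multiply
    ab = rng.normal(a_hat, se_a, draws) * rng.normal(b_hat, se_b, draws)
    tail = (1 - conf) / 2
    return np.quantile(ab, [tail, 1 - tail])

lo, hi = mc_ci(a_hat=0.35, se_a=0.10, b_hat=0.40, se_b=0.12)
print(f"95% Monte Carlo CI for a*b: ({lo:.3f}, {hi:.3f})")
# In a power analysis, this test is repeated across many simulated data
# sets; power is the proportion of intervals that exclude zero.
```

Because the interval needs only the path estimates and standard errors, no resampling of the raw data is required, which is what keeps the repeated-simulation power analysis fast.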

