Type I Error
Recently Published Documents


TOTAL DOCUMENTS

1194
(FIVE YEARS 415)

H-INDEX

53
(FIVE YEARS 7)

Author(s):  
Matthew A. Powell ◽  
Virginia L. Filiaci ◽  
Martee L. Hensley ◽  
Helen Q. Huang ◽  
Kathleen N. Moore ◽  
...  

PURPOSE This phase III randomized trial (NCT00954174) tested the null hypothesis that paclitaxel and carboplatin (PC) is inferior to paclitaxel and ifosfamide (PI) for treating uterine carcinosarcoma (UCS).

PATIENTS AND METHODS Adults with chemotherapy-naïve UCS or ovarian carcinosarcoma (OCS) were randomly assigned to PC or PI with 3-week cycles for 6-10 cycles. With 264 events in patients with UCS, the power for an overall survival (OS) hybrid noninferiority design was 80% for a null hazard ratio (HR) of 1.2 against a 13% greater death rate on PI, with a type I error of 5% for a one-tailed test.

RESULTS The study enrolled 536 patients with UCS and 101 patients with OCS, with 449 and 90 eligible, respectively. The primary analysis was of patients with UCS, distributed as follows: 40% stage I, 6% stage II, 31% stage III, 15% stage IV, and 8% recurrent. Among eligible patients with UCS, PC was assigned to 228 and PI to 221. PC was not inferior to PI. The median OS was 37 versus 29 months (HR = 0.87; 90% CI, 0.70 to 1.075; P < .01 for noninferiority, P > .1 for superiority). The median progression-free survival was 16 versus 12 months (HR = 0.73; P < .01 for noninferiority, P < .01 for superiority). Toxicities were similar, except that more patients in the PC arm had hematologic toxicity and more patients in the PI arm had confusion and genitourinary hemorrhage. Among 90 eligible patients with OCS, those in the PC arm had longer OS (30 v 25 months) and progression-free survival (15 v 10 months) than those in the PI arm, but with limited precision, these differences were not statistically significant.

CONCLUSION PC was not inferior to the active regimen PI and should be standard treatment for UCS.
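The design quantities quoted in this abstract (264 events, null HR of 1.2, one-tailed 5% test) can be sanity-checked with a quick Monte Carlo sketch. The normal approximation for the log-HR with D events (standard error roughly sqrt(4/D)) is an assumption of this sketch, not the trial's actual hybrid noninferiority design:

```python
import numpy as np

# Monte Carlo check that a one-tailed noninferiority test calibrated
# at a null HR of 1.2 has ~5% type I error. SE ~ sqrt(4/D) for the
# log-HR with D events is a textbook approximation, assumed here.
rng = np.random.default_rng(0)
events, null_hr, sims = 264, 1.2, 100_000
se = np.sqrt(4 / events)
z_crit = 1.645  # one-tailed 5% critical value

# Simulate log-HR estimates under the null (true HR = 1.2) and count
# how often noninferiority is (wrongly) declared.
log_hr_hat = rng.normal(np.log(null_hr), se, sims)
z = (np.log(null_hr) - log_hr_hat) / se
type_i = np.mean(z > z_crit)
print(round(type_i, 3))  # ≈ 0.05
```

Under the null the z-statistic is standard normal, so the rejection rate lands at the nominal 5% regardless of the event count; the event count instead drives power against the alternative.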


2022 ◽  
pp. 001316442110684
Author(s):  
Natalie A. Koziol ◽  
J. Marc Goodrich ◽  
HyeonJin Yoon

Differential item functioning (DIF) analysis is often used to examine validity evidence for alternate-form test accommodations. Unfortunately, traditional approaches for evaluating DIF are prone to selection bias. This article proposes a novel DIF framework that capitalizes on regression discontinuity design analysis to control for selection bias. A simulation study was performed to compare the new framework with traditional logistic regression with respect to the Type I error and power rates of the uniform DIF test statistics, and the bias and root mean square error of the corresponding effect size estimators. The new framework better controlled the Type I error rate and demonstrated minimal bias, but suffered from low power and a lack of precision. Implications for practice are discussed.


2022 ◽  
Author(s):  
Mikkel Helding Vembye ◽  
James E Pustejovsky ◽  
Terri Pigott

Meta-analytic models for dependent effect sizes have grown increasingly sophisticated over the last few decades, which has created challenges for a priori power calculations. We introduce power approximations for tests of average effect sizes based upon the most common models for handling dependent effect sizes. In a Monte Carlo simulation, we show that the new power formulas can accurately approximate the true power of common meta-analytic models for dependent effect sizes. Lastly, we investigate the Type I error rate and power for several common models, finding that tests using robust variance estimation provide better Type I error calibration than tests with model-based variance estimation. We consider implications for practice with respect to selecting a working model and an inferential approach.
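As a rough illustration of this kind of a priori calculation, a normal-approximation power formula for a z-test of the average effect under a simple random-effects working model might look as follows. The function name, symbols, and the simplified variance structure are assumptions of this sketch; the paper's approximations cover dependent effect sizes and more realistic working models:

```python
import numpy as np
from scipy import stats

def approx_power(mu, tau2, v_bar, k, alpha=0.05):
    """Two-sided z-test power for average effect size mu, given
    between-study variance tau2, average sampling variance v_bar,
    and k independent studies (simplified sketch)."""
    se = np.sqrt((tau2 + v_bar) / k)   # SE of the pooled average effect
    z_crit = stats.norm.ppf(1 - alpha / 2)
    lam = mu / se                      # noncentrality parameter
    return stats.norm.cdf(lam - z_crit) + stats.norm.cdf(-lam - z_crit)

# Example: small-to-moderate effect, 40 studies
print(round(approx_power(mu=0.2, tau2=0.05, v_bar=0.05, k=40), 3))  # → 0.979
```

Dependence among effect sizes inflates the variance of the pooled estimate relative to this independent-studies formula, which is exactly why dedicated approximations are needed.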


PLoS ONE ◽  
2022 ◽  
Vol 17 (1) ◽  
pp. e0259994
Author(s):  
Ahmet Faruk Aysan ◽  
Ibrahim Guney ◽  
Nicoleta Isac ◽  
Asad ul Islam Khan

This paper evaluates the performance of eight tests with a null hypothesis of cointegration on the basis of the probabilities of type I and II errors, using Monte Carlo simulations. The study uses 132 different data generations covering three cases of the deterministic part and four sample sizes. The three cases of the deterministic part are: absence of both intercept and linear time trend, presence of only the intercept, and presence of both the intercept and linear time trend. It is found that when asymptotic critical values are used, all of the tests have either larger or smaller probabilities of type I error than the nominal level, so the tests suffer from either over-rejection or under-rejection. The use of simulated critical values, in contrast, leads to a controlled probability of type I error; asymptotic critical values should therefore be avoided and simulated critical values are highly recommended. Finally, the simple LM test based on the KPSS statistic is found to perform better than the rest for all specifications of the deterministic part and all sample sizes.
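Simulated critical values of the kind recommended here are obtained by generating the statistic's null distribution at the actual sample size and taking an empirical quantile. A minimal numpy sketch for a KPSS-type LM statistic (level-stationarity case, no long-run variance correction; the helper names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def kpss_stat(e):
    """Simplified KPSS-type LM statistic: scaled partial sums of
    demeaned residuals (no lag truncation / long-run correction)."""
    s = np.cumsum(e - e.mean())
    return (s @ s) / (len(e) ** 2 * e.var())

# Empirical 5% critical value at the actual sample size, rather than
# the asymptotic table value (~0.463 for the level case).
n, sims = 100, 20_000
null_stats = np.array([kpss_stat(rng.standard_normal(n)) for _ in range(sims)])
crit_5 = np.quantile(null_stats, 0.95)
```

The same recipe applies to each deterministic-part specification: simulate under the corresponding null (demeaned or detrended) and read off the quantile, so the test's size is controlled by construction at that sample size.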


2021 ◽  
Author(s):  
Ye Yue ◽  
Yijuan Hu

Abstract

Background: Understanding whether and which microbes play a mediating role between an exposure and a disease outcome is essential for researchers to develop clinical interventions that treat the disease by modulating the microbes. Existing methods for mediation analysis of the microbiome are often limited to a global test of community-level mediation, or to selection of mediating microbes without control of the false discovery rate (FDR). Further, while the null hypothesis of no mediation at each microbe is a composite null consisting of three types of null (no exposure-microbe association, no microbe-outcome association given the exposure, or neither), most existing methods for the global test, such as MedTest and MODIMA, treat the microbes as if they were all under the same type of null.

Results: We propose a new approach based on inverse regression that regresses the (possibly transformed) relative abundance of each taxon on the exposure and the exposure-adjusted outcome, to assess the exposure-taxon and taxon-outcome associations simultaneously. The association p-values are then used to test mediation at both the community and individual taxon levels. This approach fits naturally into our Linear Decomposition Model (LDM) framework, so the new method is implemented in the LDM and enjoys all of its features: allowing an arbitrary number of taxa to be tested; supporting continuous, discrete, or multivariate exposures and outcomes as well as adjustment for confounding covariates; accommodating clustered data; and offering analysis at the relative abundance or presence-absence scale. We refer to this new method as LDM-med. Using extensive simulations, we show that LDM-med always controlled the type I error of the global test and had compelling power over existing methods, and that it always preserved the FDR when testing individual taxa, with much better sensitivity than alternative approaches. In contrast, MedTest and MODIMA had severely inflated type I error when different taxa were under different types of null. The flexibility of LDM-med for a variety of mediation analyses is illustrated by an application to a murine microbiome dataset, which identified a plausible mediator.

Conclusions: Inverse regression coupled with the LDM is a strategy that performs well and can handle mediation analysis in a wide variety of microbiome studies.
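A common generic way to handle the composite null described above is the joint-significance rule (the per-taxon mediation p-value is the maximum of the two association p-values), followed by Benjamini-Hochberg FDR control across taxa. This is only a sketch of that general idea, not the LDM-med procedure itself; the p-values below are made up:

```python
import numpy as np

def bh_reject(p, q=0.05):
    """Benjamini-Hochberg step-up: boolean mask of rejected hypotheses."""
    p = np.asarray(p, dtype=float)
    order = np.argsort(p)
    m = len(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

# Joint significance: a taxon mediates only if BOTH the
# exposure-taxon and taxon-outcome associations are significant.
p_exposure_taxon = np.array([0.001, 0.002, 0.300, 0.040])
p_taxon_outcome  = np.array([0.004, 0.500, 0.010, 0.030])
p_mediation = np.maximum(p_exposure_taxon, p_taxon_outcome)
print(bh_reject(p_mediation))  # taxa rejected at FDR 5%
```

Taking the maximum is conservative when a taxon is under only one of the three null types, which is one motivation for methods that account for the composite-null structure explicitly.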


2021 ◽  
Author(s):  
Angély Loubert ◽  
Antoine Regnault ◽  
Véronique Sébille ◽  
Jean-Benoit Hardouin

Abstract

Background: In the analysis of clinical trial endpoints, calibration of patient-reported outcome (PRO) instruments ensures that the resulting "scores" represent the same quantity of the measured concept across applications. Rasch measurement theory (RMT) is a psychometric approach that guarantees algebraic separation of person and item parameter estimates, allowing formal calibration of PRO instruments. In the RMT framework, calibration is performed using the item parameter estimates obtained from a previous "calibration" study. But if calibration is based on poorly estimated item parameters (e.g., because the sample size of the calibration study was low), this may hamper the ability to detect a treatment effect, and direct estimation of item parameters from the trial data (non-calibration) may then be preferred. The objective of this simulation study was to assess the impact of calibration on the comparison of PRO results between treatment groups, using different analysis methods.

Methods: PRO results were simulated following a polytomous Rasch model for a calibration sample and a trial sample. Scenarios included varying sample sizes, instruments with varying numbers of items and modalities, and varying item parameter distributions. Different treatment effect sizes and distributions of the two patient samples were also explored. Treatment groups were compared using different methods based on a random-effects Rasch model. Calibrated and non-calibrated approaches were compared on the type I error, power, bias, and variance of the estimates of the difference between groups.

Results: The calibration approach had no impact on type I error, power, bias, or dispersion of the estimates. Among other findings, mistargeting between the PRO instrument and the patients in the trial sample (regarding the level of the measured concept) resulted in lower power and higher position bias than appropriate targeting.

Conclusions: Calibration of PROs in clinical trials does not compromise the ability to accurately assess a treatment effect and is essential to properly interpret PRO results. Given its important added value, calibration should always be performed in the RMT framework when a PRO instrument is used as an endpoint in a clinical trial.
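For reference, the dichotomous Rasch model underlying this framework gives the probability that a person with ability θ endorses an item with difficulty b; the polytomous model used in the study generalizes this with category thresholds. The function name below is illustrative:

```python
import math

def rasch_prob(theta, b):
    """Dichotomous Rasch model: P(endorse | ability theta, difficulty b).
    Calibration amounts to fixing b at estimates from a prior study."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

print(rasch_prob(0.0, 0.0))  # person matched to item difficulty → 0.5
```

The mistargeting finding in the Results maps directly onto this curve: when θ sits far from b, probabilities saturate near 0 or 1 and responses carry little information about ability differences.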


Author(s):  
Mbanefo S. Madukaife

This paper compares the empirical power performance of eight tests for multivariate normality belonging to the Baringhaus-Henze-Epps-Pulley (BHEP) class. The tests are compared under eight different alternative distributions. The results show that all eight statistics have good control of type I error, that some tests are more sensitive than others to distributional differences in terms of power, and that some tests are generally more powerful than others. The generally most powerful tests are therefore recommended.
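BHEP statistics measure a weighted squared distance between the empirical characteristic function of the standardized data and the standard normal characteristic function; different smoothing parameters β give the different members of the class. A numpy sketch of the standard closed form (the function name and default β are assumptions; critical values must come from tables or simulation):

```python
import numpy as np

def bhep_statistic(X, beta=1.0):
    """BHEP test statistic for multivariate normality (sketch).
    X is an (n, d) data matrix; beta is the smoothing parameter."""
    n, d = X.shape
    S = np.cov(X, rowvar=False)
    L = np.linalg.cholesky(S)
    Y = (X - X.mean(axis=0)) @ np.linalg.inv(L).T   # standardize: cov(Y) = I
    D2 = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)  # pairwise sq. dists
    R2 = (Y ** 2).sum(axis=1)
    t1 = np.exp(-beta ** 2 * D2 / 2).sum() / n
    t2 = 2 * (1 + beta ** 2) ** (-d / 2) \
        * np.exp(-beta ** 2 * R2 / (2 * (1 + beta ** 2))).sum()
    t3 = n * (1 + 2 * beta ** 2) ** (-d / 2)
    return t1 - t2 + t3

X = np.random.default_rng(0).standard_normal((60, 3))
t = bhep_statistic(X)  # nonnegative by construction; large values reject
```

Large values of the statistic indicate departure from multivariate normality; the choice of β controls which kinds of departure the test is most sensitive to, which is the source of the power differences the paper reports.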


2021 ◽  
Author(s):  
Sebastian Sosa ◽  
Cristian Pasquaretta ◽  
Ivan Puga-Gonzalez ◽  
F Stephen Dobson ◽  
Vincent A Viblanc ◽  
...  

Animal social network analyses (ASNA) have led to a foundational shift in our understanding of animal sociality that transcends the disciplinary boundaries of genetics, spatial movements, epidemiology, information transmission, evolution, species assemblages and conservation. However, some analytical protocols (i.e., permutation tests) used in ASNA have recently been called into question due to the unacceptable rates of false positives (type I error) and false negatives (type II error) they generate in statistical hypothesis testing. Here, we show that these rates are related to the way in which observation heterogeneity is accounted for in association indices. To solve this issue, we propose a method termed the "global index" (GI) that consists of computing the average of individual association indices per unit of time. In addition, we developed an "index of interactions" (II) that allows the use of the GI approach for directed behaviours. Our simulations show that the GI: 1) returns more reasonable rates of false positives and negatives, with or without observational biases in the collected data, 2) can be applied to both directed and undirected behaviours, 3) can be applied to focal sampling, scan sampling or "gambit of the group" data collection protocols, and 4) can be applied to first- and second-order social network measures. Finally, we provide a method to control for non-social biological confounding factors using linear regression residuals. By providing a reliable approach for a wide range of scenarios, we propose a novel methodology in ASNA with the aim of better understanding social interactions from a mechanistic, ecological and evolutionary perspective.
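As a generic illustration of the quantities involved, the widely used simple ratio index (SRI) for a dyad, computed per observation period and then averaged over periods, captures the spirit of "average of individual association indices per unit of time". The paper's exact GI and II definitions differ in detail, and all numbers here are made up:

```python
import numpy as np

def simple_ratio_index(x, ya, yb, yab):
    """Simple ratio index for dyad (a, b): x = times seen together,
    ya / yb = times only a / only b was seen, yab = times both were
    seen but apart, all within one observation period."""
    return x / (x + ya + yb + yab)

# Per-period dyadic indices, then averaged per unit of time
# (the core idea behind a global, time-averaged index).
period_indices = np.array([
    simple_ratio_index(3, 1, 2, 0),   # period 1
    simple_ratio_index(2, 2, 1, 1),   # period 2
    simple_ratio_index(4, 0, 1, 1),   # period 3
])
gi = period_indices.mean()
print(round(gi, 3))  # → 0.5
```

Averaging per period, rather than pooling all sightings first, is what keeps heterogeneous observation effort across periods from distorting the dyadic association estimate.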


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Vahid Ebrahimi ◽  
Zahra Bagheri ◽  
Zahra Shayan ◽  
Peyman Jafari

Assessing differential item functioning (DIF) using the ordinal logistic regression (OLR) model highly depends on the asymptotic sampling distribution of the maximum likelihood (ML) estimators. The ML estimation method, which is often used to estimate the parameters of the OLR model for DIF detection, may be substantially biased with small samples. This study proposes a new application of the elastic net regularized OLR model, as a special type of machine learning method, for assessing DIF between two groups with small samples. Accordingly, a simulation study was conducted to compare the powers and type I error rates of the regularized and nonregularized OLR models in detecting DIF under various conditions, including moderate and severe magnitudes of DIF (DIF = 0.4 and 0.8), sample size (N), sample size ratio (R), scale length (I), and weighting parameter (w). The simulation results revealed that for I = 5 and regardless of R, the elastic net regularized OLR model with w = 0.1, as compared with the nonregularized OLR model, increased the power of detecting moderate uniform DIF (DIF = 0.4) by approximately 35% and 21% for N = 100 and 150, respectively. Moreover, for I = 10 and severe uniform DIF (DIF = 0.8), the average power of the elastic net regularized OLR model with 0.03 ≤ w ≤ 0.06, as compared with the nonregularized OLR model, increased by approximately 29.3% and 11.2% for N = 100 and 150, respectively. In these cases, the type I error rates of the regularized and nonregularized OLR models were below or close to the nominal level of 0.05. In general, this simulation study showed that the elastic net regularized OLR model outperformed the nonregularized OLR model, especially in extremely small sample size groups. Furthermore, the present research provides a guideline and some recommendations for researchers who conduct DIF studies with small sample sizes.
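The elastic net adds a weighted mix of L1 (lasso) and L2 (ridge) penalties to the OLR negative log-likelihood, and the weighting parameter w in the abstract plays the role of the mixing weight. A minimal sketch of the penalty term (the function name and exact parameterization are assumptions; implementations differ in how they scale the two parts):

```python
import numpy as np

def elastic_net_penalty(beta, lam, w):
    """Penalty added to the negative log-likelihood:
    lam * (w * ||beta||_1 + (1 - w) / 2 * ||beta||_2^2).
    w = 1 gives the lasso; w = 0 gives ridge regression."""
    beta = np.asarray(beta, dtype=float)
    l1 = np.abs(beta).sum()
    l2 = (beta ** 2).sum()
    return lam * (w * l1 + (1 - w) / 2 * l2)

print(elastic_net_penalty([1.0, -2.0], lam=1.0, w=0.5))  # → 2.75
```

Shrinking the DIF coefficient toward zero trades a little bias for a large reduction in sampling variance, which is why small values of w help most in the small-sample conditions the study examines.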


2021 ◽  
pp. 096228022110616
Author(s):  
Bo Chen ◽  
Wei Xu

Functional regression has been widely used on longitudinal data, but it is not clear how to apply it to microbiome sequencing data. We propose a novel functional response regression model for analyzing correlated longitudinal microbiome sequencing data, which extends the classic functional response regression model that only works for independent functional responses. We derive the theory of generalized least squares estimators for predictors' effects when functional responses are correlated, and develop a data transformation technique to solve the computational challenge of analyzing correlated functional response data with existing functional regression methods. We show by extensive simulations that our proposed method provides unbiased estimates of predictors' effects, and that our model has accurate type I error and power performance for correlated functional response data compared with the classic functional response regression model. Finally, we apply our method to a real infant gut microbiome study to evaluate the relationship of clinical factors to the predominant taxa over time.
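The generalized least squares estimator at the heart of this kind of model weights observations by the inverse of their covariance, β̂ = (XᵀΣ⁻¹X)⁻¹XᵀΣ⁻¹y, and reduces to ordinary least squares when Σ = I. A minimal scalar-response sketch (names are illustrative; the paper works with functional responses):

```python
import numpy as np

def gls(X, y, Sigma):
    """Generalized least squares: beta = (X' S^-1 X)^-1 X' S^-1 y."""
    Si = np.linalg.inv(Sigma)
    return np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)

# With Sigma = I, GLS reduces to ordinary least squares.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(30), rng.standard_normal(30)])
y = 1.0 + 2.0 * X[:, 1] + 0.1 * rng.standard_normal(30)
beta = gls(X, y, np.eye(30))
```

The data transformation mentioned in the abstract plays the role of the whitening step Σ^(-1/2) here: after transforming, an existing independent-response method can be applied directly.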

