assumption violations
Recently Published Documents

TOTAL DOCUMENTS: 46 (five years: 13)
H-INDEX: 13 (five years: 1)

2021 · Vol 12
Author(s): Peng Huang, Yixin Zou, Xingyu Zhang, Xiangyu Ye, Yidi Wang, ...

Psychiatric disorders, including bipolar disorder (BD), major depressive disorder (MDD), and schizophrenia (SCZ), affect millions of people around the world. Understanding the causal mechanisms underlying these three diseases and identifying their modifiable risk factors hold the key to developing effective prevention and treatment strategies. We used a two-sample Mendelian randomization method to assess the causal effect of insomnia on the risk of BD, MDD, and SCZ in a European population. We collected one dataset for insomnia, three for BD, one for MDD, and three for SCZ, performed a meta-analysis for each trait, and verified the analysis through extensive complementary and sensitivity analyses. Among the three psychiatric disorders, we found that insomnia is causally associated with MDD only, with greater insomnia increasing the risk of MDD. Specifically, the odds ratio of MDD per unit increase in insomnia is estimated to be 1.408 [95% confidence interval (CI): 1.210–1.640, p = 1.03E-05] in the European population. The identified causal relationship between insomnia and MDD is robust to the choice of statistical method and is validated through extensive sensitivity analyses that guard against various model assumption violations. Our results provide new evidence supporting a causal effect of insomnia on MDD and pave the way for reducing the burden of psychiatric disorders.
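For readers unfamiliar with two-sample Mendelian randomization, the sketch below shows the core fixed-effect inverse-variance weighted (IVW) calculation that turns SNP-level summary statistics into a causal odds ratio with a 95% CI. The SNP effects here are invented for illustration, and the study's actual pipeline (meta-analysis plus complementary and sensitivity methods) is considerably richer than this.

```python
import numpy as np

# Hypothetical SNP-level summary statistics (not from the study): per-SNP effects
# of the instruments on insomnia (exposure) and on MDD (outcome), with outcome SEs.
beta_exposure = np.array([0.08, 0.05, 0.11, 0.06])
beta_outcome = np.array([0.03, 0.02, 0.05, 0.02])
se_outcome = np.array([0.010, 0.012, 0.015, 0.011])

# Fixed-effect IVW estimate: a weighted regression of outcome effects on
# exposure effects through the origin.
weights = beta_exposure**2 / se_outcome**2
beta_ivw = np.sum(beta_exposure * beta_outcome / se_outcome**2) / np.sum(weights)
se_ivw = np.sqrt(1.0 / np.sum(weights))

# Exponentiate to the odds-ratio scale, as reported in the abstract.
odds_ratio = np.exp(beta_ivw)
ci_low, ci_high = np.exp(beta_ivw - 1.96 * se_ivw), np.exp(beta_ivw + 1.96 * se_ivw)
print(f"OR = {odds_ratio:.3f} (95% CI {ci_low:.3f}-{ci_high:.3f})")
```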


2021
Author(s): Áki Jarl Láruson, Matthew C Fitzpatrick, Stephen R Keller, Benjamin C Haller, Katie E Lotterhos

Gradient Forest (GF) is increasingly being used to forecast climate change impacts, but it remains mostly untested for this purpose. We explore its robustness to assumption violations, and its relationship to measures of fitness, using SLiM simulations with explicit genome architecture and a spatial metapopulation. We evaluate measures of GF offset in: (1) a neutral model with no environmental adaptation; (2) a monogenic "population genetic" model with a single environmentally adapted locus; and (3) a polygenic "quantitative genetic" model with two adaptive traits, each adapting to a different environment. We found GF offset to be broadly correlated with fitness offsets under both single-locus and polygenic architectures, but it could also be confounded by neutral demography, genomic architecture, and the nature of the adaptive environment. GF offset is a promising tool, but it is important to understand its limitations and underlying assumptions, especially when it is used to forecast maladaptation.
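As a rough illustration of the quantity being evaluated, GF offset is in essence a distance between GF-transformed environmental values under current and projected climates. The turnover functions and site values below are invented stand-ins; real analyses fit these functions with the gradientForest R package, and the paper pairs them with SLiM simulations.

```python
import numpy as np

# Hypothetical turnover (cumulative-importance) functions that a fitted GF model
# would provide, here written as simple monotone transforms of two variables.
def turnover_temp(x):    # assumed turnover curve for temperature
    return 1.0 - np.exp(-0.5 * x)

def turnover_precip(x):  # assumed turnover curve for precipitation
    return 0.3 * np.log1p(x)

# Made-up current and projected future environments at three sites.
current = {"temp": np.array([1.0, 2.0, 3.0]), "precip": np.array([10.0, 20.0, 5.0])}
future = {"temp": np.array([2.5, 3.5, 4.0]), "precip": np.array([8.0, 15.0, 4.0])}

# GF offset: Euclidean distance between GF-transformed current and future
# environments, i.e. the predicted genomic turnover required to track the change.
cur = np.column_stack([turnover_temp(current["temp"]), turnover_precip(current["precip"])])
fut = np.column_stack([turnover_temp(future["temp"]), turnover_precip(future["precip"])])
gf_offset = np.linalg.norm(fut - cur, axis=1)
print(gf_offset)
```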


Author(s): Daniela R. Crișan, Jorge N. Tendeiro, Rob R. Meijer

Purpose: In Mokken scaling, the Crit index was proposed and is sometimes used as evidence (or lack thereof) of violations of some common model assumptions. The main goal of our study was twofold: to make the formulation of the Crit index explicit and accessible, and to investigate its distribution under various measurement conditions. Methods: We conducted two simulation studies in the context of dichotomously scored item responses. We manipulated the type of assumption violation, the proportion of violating items, sample size, and quality. False positive rates and power to detect assumption violations were our main outcome variables. Furthermore, we applied the Crit coefficient in a Mokken scale analysis to a set of responses to the General Health Questionnaire (GHQ-12), a self-administered questionnaire for assessing current mental health. Results: We found that the false positive rates of Crit were close to the nominal rate in most conditions, and that power to detect misfit depended on the sample size, type of violation, and number of assumption-violating items. Overall, Crit lacked power to detect misfit in small samples, and in larger samples power differed considerably depending on the type of violation and the proportion of misfitting items. Furthermore, in our empirical example we found that the Crit index may fail to detect assumption violations even in large samples. Discussion: Even in large samples, the Crit coefficient showed limited usefulness for detecting moderate and severe violations of monotonicity. Our findings are relevant to researchers and practitioners who use Mokken scaling for scale and questionnaire construction and revision.
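The Crit index itself is implemented in the R package mokken, so the sketch below only illustrates the generic Monte Carlo logic behind the reported outcome measures: the false positive rate is the flag rate when no assumption is violated, and power is the flag rate when a violation is built into the data. The data-generating model and the stand-in diagnostic (a negative item-rest correlation) are assumptions of this example, not the study's design.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_responses(n_persons, n_items, violate=False):
    """Dichotomous item responses from a simple monotone IRT-style model.
    With violate=True, the last item's response function is inverted to mimic
    a monotonicity violation (an illustrative stand-in, not the study's design)."""
    theta = rng.normal(size=n_persons)
    difficulty = np.linspace(-1.5, 1.5, n_items)
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - difficulty[None, :])))
    if violate:
        p[:, -1] = 1.0 - p[:, -1]  # non-monotone item
    return (rng.random((n_persons, n_items)) < p).astype(int)

def flags_misfit(data):
    """Stand-in diagnostic (the Crit index itself lives in the R package mokken):
    flag the scale if any item-rest correlation is negative."""
    rest = data.sum(axis=1, keepdims=True) - data
    r = [np.corrcoef(data[:, j], rest[:, j])[0, 1] for j in range(data.shape[1])]
    return any(v < 0 for v in r)

def flag_rate(violate, n_persons=500, n_items=10, reps=200):
    """Proportion of simulated data sets that get flagged."""
    return np.mean([flags_misfit(simulate_responses(n_persons, n_items, violate))
                    for _ in range(reps)])

print("false positive rate (no violation):", flag_rate(violate=False))
print("power (one non-monotone item):", flag_rate(violate=True))
```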


2021
Author(s): Dustin Fife

Users of statistics frequently fit multivariate models to make conditional inferences (e.g., stress affects depression, after controlling for gender). These inferences are often made without adequately considering (or understanding) the assumptions one makes when claiming them. A particularly problematic instance of assumption violations involves nonlinear and/or interactive effects. Many of these inferences are not merited because the inference is "contaminated" by the other variables and their relationships within the model. In this paper, we highlight when conditional inferences are contaminated by other features of the model and identify the conditions under which variable effects are marginally independent. We then demonstrate a strategy for partitioning multivariate effects into uncontaminated blocks using visualizations. This approach simplifies multivariate analyses immensely, without oversimplifying the analysis.
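A minimal, hypothetical illustration of the "contamination" described above, reusing the abstract's stress/depression/gender example: when the true stress effect differs by gender, the single "stress, controlling for gender" coefficient from an additive model is a blend of two different conditional slopes. The data-generating values are invented, and this is not the author's worked example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
gender = rng.integers(0, 2, n)   # 0/1 grouping variable
stress = rng.normal(size=n)
# True model: the effect of stress on depression exists only when gender == 1.
depression = 0.2 + 0.8 * stress * gender + 0.1 * gender + rng.normal(scale=0.5, size=n)

# Additive model: the usual "effect of stress, controlling for gender".
X_add = np.column_stack([np.ones(n), stress, gender])
b_add, *_ = np.linalg.lstsq(X_add, depression, rcond=None)

# Model with the interaction: the stress slope is conditional on gender.
X_int = np.column_stack([np.ones(n), stress, gender, stress * gender])
b_int, *_ = np.linalg.lstsq(X_int, depression, rcond=None)

print("additive-model stress slope (a blend of the two groups):", round(b_add[1], 2))
print("stress slope when gender = 0:", round(b_int[1], 2))
print("stress slope when gender = 1:", round(b_int[1] + b_int[3], 2))
```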


2021
Author(s): Amanda Kay Montoya, Chris Aberson, Jessica Fossum, Donna Chen, Oscar Gonzalez

Mediation analysis is commonly used in social-personality psychology to evaluate potential mechanisms of effects. With the recent replicability crisis, researchers are turning to power analysis to help plan studies; however, power analysis for mediation is not implemented in popular software (e.g., G*Power). Our symposium includes two presentations focusing on the implementation of power analysis for mediation: (1) describing easy-to-use tools for implementing power analysis (e.g., the pwr2ppl R package), and (2) evaluating whether different inferential methods result in similar recommended sample sizes and the role of assumption violations in these differences. Two presenters focus on study characteristics that can affect power: (1) use of the bias-corrected confidence interval and alternatives that better balance power and Type I error, and (2) how measurement error on the mediator can impact power and how to correct this issue with latent variable models. Presentations will include applied examples, aimed at a social-personality audience, and provide concrete steps for increasing the validity and replicability of mediation analyses conducted in social-personality research. (Symposium presented at SPSP 2021)
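As a sketch of what simulation-based power analysis for mediation involves (the pwr2ppl tools mentioned above are an R package, and the presenters compare several inferential methods), the example below estimates power for the indirect effect a*b using the Monte Carlo confidence-interval method. The path values, sample size, and absence of a direct effect are all assumptions of this illustration, not recommendations from the symposium.

```python
import numpy as np

rng = np.random.default_rng(42)

def ols(X, y):
    """OLS coefficients and standard errors for a design matrix X with intercept."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    return beta, np.sqrt(np.diag(sigma2 * XtX_inv))

def power_indirect(n, a=0.3, b=0.3, reps=500, mc_draws=5000, alpha=0.05):
    """Monte Carlo power estimate for the indirect effect a*b, using a Monte Carlo
    confidence interval for each simulated data set (one of several possible
    inferential methods; assumed here to keep the example short)."""
    hits = 0
    for _ in range(reps):
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)
        y = b * m + rng.normal(size=n)  # no direct X -> Y effect, for simplicity
        beta_a, se_a = ols(np.column_stack([np.ones(n), x]), m)
        beta_b, se_b = ols(np.column_stack([np.ones(n), m, x]), y)
        # Sample the approximate sampling distributions of a-hat and b-hat and
        # form a percentile interval for their product (the indirect effect).
        ab = rng.normal(beta_a[1], se_a[1], mc_draws) * rng.normal(beta_b[1], se_b[1], mc_draws)
        lo, hi = np.quantile(ab, [alpha / 2, 1 - alpha / 2])
        hits += (lo > 0) or (hi < 0)
    return hits / reps

print("estimated power at n = 100:", power_indirect(100))
```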


2019
Author(s): Daniela Ramona Crișan, Jorge Tendeiro, Rob Meijer

In empirical use of Mokken scaling, the Crit index is used as evidence (or lack thereof) of violations of some common model assumptions. The main goal of our study was twofold: to make the formulation of the Crit index explicit and accessible, and to investigate its distribution under various measurement conditions. We conducted two simulation studies in the context of dichotomously scored item responses. False positive rates and power to detect assumption violations were considered. We found that the false positive rates of Crit were close to the nominal rate in most conditions, and that power to detect misfit depended on the sample size, type of violation, and number of assumption-violating items. Our findings are relevant to all practitioners who use Mokken scaling for scale and questionnaire construction and revision.


2019 · Vol 3
Author(s): Matt N Williams, Casper Albers

Virtually any inferential statistical analysis relies on distributional assumptions of some kind. Violating these assumptions can have consequences ranging from small changes in error rates to substantially biased estimates and parameters that fundamentally lose their intended interpretations. Conventionally, researchers have conducted assumption checks after collecting data and then changed the primary analysis technique if violations of distributional assumptions are observed. However, an approach to dealing with distributional assumptions that requires decisions to be made contingent on the observed data is problematic in preregistered research, where researchers attempt to specify all important analysis decisions prior to collecting data. Limited methodological advice is currently available regarding how to deal with the prospect of distributional assumption violations in preregistered research. In this article, we examine several strategies that researchers could use in preregistrations to reduce the potential impact of distributional assumption violations. We suggest that pre-emptively selecting analysis methods that are as robust as possible to assumption violations, performing planned robustness analyses, and/or supplementing preregistered confirmatory analyses with exploratory checks of distributional assumptions may all be useful strategies. On the other hand, we suggest that prespecifying “decision trees” for selecting data analysis methods based on the distributional characteristics of the data may not be practical in most situations.
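To make two of the suggested strategies concrete, the hypothetical sketch below pairs a primary analysis chosen for its robustness (Welch's t-test) with a preregistered rank-based robustness analysis. The data and the specific pair of tests are assumptions of this example, not prescriptions from the article.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical two-group data with skewed (non-normal) outcomes.
control = rng.exponential(scale=1.0, size=60)
treatment = rng.exponential(scale=1.3, size=60)

# Primary analysis, pre-emptively chosen for robustness to unequal variances.
t_res = stats.ttest_ind(treatment, control, equal_var=False)

# Planned robustness analysis: a rank-based test with weaker distributional
# assumptions, reported alongside the primary result rather than replacing it.
u_res = stats.mannwhitneyu(treatment, control, alternative="two-sided")

print(f"Welch t-test:   t = {t_res.statistic:.2f}, p = {t_res.pvalue:.3f}")
print(f"Mann-Whitney U: U = {u_res.statistic:.1f}, p = {u_res.pvalue:.3f}")
```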

