Model Misspecification
Recently Published Documents


TOTAL DOCUMENTS: 505 (last five years: 176)
H-INDEX: 35 (last five years: 5)

2022, Vol 22 (1)
Author(s): James H. McVittie, David B. Wolfson, Vittorio Addona, Zhaoheng Li

Abstract: When modelling the survival distribution of a disease whose symptomatic progression is insidious, it is not always clear how to measure the failure/censoring times from some true date of disease onset. In a prevalent cohort study with follow-up, one approach to removing any influence of uncertainty in the measurement of the true onset dates is to use only the residual lifetimes. Because the residual lifetimes are measured from a well-defined screening date (prevalence day) to failure/censoring, these observed durations are essentially error free. Using residual lifetime data, the nonparametric maximum likelihood estimator (NPMLE) may be used to estimate the underlying survival function, but the resulting estimator can yield exceptionally wide confidence intervals. Alternatively, parametric maximum likelihood estimation can yield narrower confidence intervals but may not be robust to model misspecification. Using only right-censored residual lifetime data, we propose a stacking procedure to overcome this non-robustness: the proposed estimator is a linear combination of individual nonparametric/parametric survival function estimators, with optimal stacking weights obtained by minimizing a Brier score loss function.
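A minimal sketch of the stacking idea described above, assuming the lifelines library: a Kaplan-Meier and a Weibull survival estimator are combined with a weight chosen to minimize a simplified Brier score on simulated right-censored residual lifetimes. All names and data here are illustrative, not the authors' implementation, and the Brier score below drops censored-by-t observations rather than applying the censoring adjustments a full analysis would use.

```python
import numpy as np
from scipy.optimize import minimize
from lifelines import KaplanMeierFitter, WeibullFitter

rng = np.random.default_rng(0)
T = rng.weibull(1.5, 200) * 10          # simulated residual lifetimes
C = rng.uniform(0, 15, 200)             # simulated censoring times
durations = np.minimum(T, C)
events = (T <= C).astype(int)

km = KaplanMeierFitter().fit(durations, events)   # nonparametric estimator
wb = WeibullFitter().fit(durations, events)       # parametric estimator

grid = np.linspace(0.5, 10, 20)
S_km = km.survival_function_at_times(grid).to_numpy()
S_wb = wb.survival_function_at_times(grid).to_numpy()

def brier_loss(w):
    """Simplified Brier score of the stacked estimator S = w*KM + (1-w)*Weibull.

    Observations censored before t are dropped; a full treatment would use,
    e.g., inverse-probability-of-censoring weights."""
    S_stack = w * S_km + (1 - w) * S_wb
    loss = 0.0
    for j, t in enumerate(grid):
        at_risk = durations > t                     # survival status at t known: 1
        failed = (durations <= t) & (events == 1)   # survival status at t known: 0
        known = at_risk | failed
        y = at_risk[known].astype(float)
        loss += np.mean((y - S_stack[j]) ** 2)
    return loss / len(grid)

res = minimize(lambda w: brier_loss(w[0]), x0=[0.5], bounds=[(0.0, 1.0)])
print("stacking weight on Kaplan-Meier:", res.x[0])
```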


2022, pp. 001316442110669
Author(s): Bitna Lee, Wonsook Sohn

A Monte Carlo study was conducted to compare the performance of a level-specific (LS) fit evaluation with that of a simultaneous (SI) fit evaluation in multilevel confirmatory factor analysis (MCFA) models. We extended previous studies by examining their performance under MCFA models with different factor structures across levels. In addition, various design factors and interaction effects between intraclass correlation (ICC) and misspecification type (MT) on their performance were considered. The simulation results demonstrate that the LS evaluation outperformed the SI evaluation in detecting model misspecification at the between-group level, even in MCFA models with different factor structures across levels. In particular, the performance of the LS fit indices depended on the ICC, group size (GS), or MT. More specifically, the results are as follows. First, the root mean square error of approximation (RMSEA) was more promising in detecting misspecified between-level models as GS or ICC increased. Second, the effect of ICC on the performance of the comparative fit index (CFI) or Tucker–Lewis index (TLI) depended on the MT. Third, the performance of the standardized root mean squared residual (SRMR) improved as ICC increased, and this pattern was clearer for structure misspecification than for measurement misspecification. Finally, the summary and implications of the results are discussed.
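For reference, the approximate fit indices named above can be computed from chi-square statistics with the standard formulas sketched below. The numbers are placeholders, not results from the study, and a level-specific evaluation would apply these formulas to the between-level part of the model only.

```python
import numpy as np

def fit_indices(chi2, df, chi2_base, df_base, n):
    """Standard formulas for RMSEA, CFI and TLI from model and baseline chi-squares."""
    rmsea = np.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))
    cfi = 1.0 - max(chi2 - df, 0.0) / max(chi2_base - df_base, chi2 - df, 0.0)
    tli = ((chi2_base / df_base) - (chi2 / df)) / ((chi2_base / df_base) - 1.0)
    return rmsea, cfi, tli

# Illustrative numbers only (not taken from the study):
print(fit_indices(chi2=85.3, df=48, chi2_base=910.4, df_base=66, n=500))
```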


2022
Author(s): Mia S. Tackney, Tim Morris, Ian White, Clemence Leyrat, Karla Diaz-Ordaz, ...

Abstract: Adjustment for baseline covariates in randomized trials has been shown to yield gains in power and can protect against chance imbalances in covariates. For continuous covariates, there is a risk that the form of the relationship between the covariate and outcome is misspecified when taking an adjusted approach. Using a simulation study focusing on small to medium-sized individually randomized trials, we explore whether a range of adjustment methods are robust to misspecification, either in the covariate-outcome relationship or through an omitted covariate-treatment interaction. Specifically, we aim to identify settings where G-computation, Inverse Probability of Treatment Weighting (IPTW), Augmented Inverse Probability of Treatment Weighting (AIPTW) and Targeted Maximum Likelihood Estimation (TMLE) offer improvement over the commonly used Analysis of Covariance (ANCOVA). Our simulations show that all adjustment methods are generally robust to model misspecification if adjusting for a few covariates, the sample size is 100 or larger, and there are no covariate-treatment interactions. When there is a non-linear interaction of treatment with a skewed covariate and the sample size is small, all adjustment methods can suffer from bias; however, methods that allow for interactions (such as G-computation with interaction and IPTW) show improved results compared to ANCOVA. When there are many covariates to adjust for, ANCOVA retains good properties while the other methods suffer from under- or over-coverage. An outstanding issue for G-computation, IPTW and AIPTW in small samples is that standard errors are underestimated; development of small-sample corrections is needed.
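A minimal sketch contrasting ANCOVA-style adjustment with G-computation and IPTW for a continuous outcome in a randomized trial, assuming statsmodels is available. The simulated data, variable names, and the omission of AIPTW/TMLE are all simplifications for illustration; this is not the paper's simulation code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
x = rng.gamma(2, 2, n)                          # skewed baseline covariate
a = rng.integers(0, 2, n)                       # randomized treatment indicator
y = 1.0 * a + 0.5 * x + 0.2 * a * x + rng.normal(0, 1, n)
df = pd.DataFrame({"y": y, "a": a, "x": x})

# ANCOVA: the treatment effect is the coefficient on `a`
ancova = smf.ols("y ~ a + x", data=df).fit()

# G-computation with a treatment-covariate interaction:
# fit an outcome model, then average predictions under a=1 and a=0
gmodel = smf.ols("y ~ a * x", data=df).fit()
mu1 = gmodel.predict(df.assign(a=1)).mean()
mu0 = gmodel.predict(df.assign(a=0)).mean()

# IPTW: weight by the inverse estimated propensity score
ps = smf.logit("a ~ x", data=df).fit(disp=0).predict(df)
w = a / ps + (1 - a) / (1 - ps)
iptw = (np.average(y[a == 1], weights=w[a == 1])
        - np.average(y[a == 0], weights=w[a == 0]))

print("ANCOVA:", ancova.params["a"], "G-comp:", mu1 - mu0, "IPTW:", iptw)
```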


Author(s): Máté Mihalovits, Sándor Kemény

Pharmaceutical stability studies are conducted to estimate the shelf life, i.e. the period during which the drug product maintains its identity and stability. In the evaluation process, a regression curve is fitted to the data obtained during the study and the shelf life is determined from the fitted curve. The evaluation procedure suggested by ICH considers only the case in which the true relationship between the measured attribute and time is linear. However, no method is suggested to help the practitioner decide whether the linear model is appropriate for their dataset. This is a major problem, as a falsely selected model may distort the estimated shelf life to a great extent, resulting in unreliable quality control. The difficulty of detecting model misspecification in stability studies is that very few observations are available, so the conventional methods used for model verification may be inappropriate or inefficient at such small sample sizes. In this paper, this problem is addressed and several methods are proposed to detect model misspecification. The methods can be applied to any process where regression estimation is performed on independent small samples. Besides stability studies, the routine construction of single calibration curves for an analytical measurement is another case where the methods may be applied. It is shown that our methods are statistically appropriate and that some of them detect model misspecification with high efficiency when applied in simulated situations resembling pre-approval and post-approval stability studies.
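As a generic illustration of the linearity question raised above (not the specific procedures proposed in the paper), the sketch below checks a small stability-study-like dataset by comparing a linear fit with a quadratic fit through an extra-sum-of-squares F-test. The assay values and time points are invented.

```python
import numpy as np
from scipy import stats

months = np.array([0, 3, 6, 9, 12, 18, 24], dtype=float)
assay = np.array([100.1, 99.6, 99.4, 98.7, 98.5, 97.4, 96.1])  # illustrative % of label claim

def rss(X, y):
    """Residual sum of squares of a least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

X_lin = np.column_stack([np.ones_like(months), months])
X_quad = np.column_stack([np.ones_like(months), months, months ** 2])

rss_lin, rss_quad = rss(X_lin, assay), rss(X_quad, assay)
df_extra, df_resid = 1, len(months) - X_quad.shape[1]
F = ((rss_lin - rss_quad) / df_extra) / (rss_quad / df_resid)
p = 1 - stats.f.cdf(F, df_extra, df_resid)
print(f"F = {F:.2f}, p = {p:.3f}")  # a small p-value suggests the linear model is inadequate
```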


2021, pp. 1-20
Author(s): Zeynep Kantur, Gülserim Özcan

The last decades have shown that policymaking without considering uncertainty is impracticable. In an environment of uncertainty, policymakers have doubts about the policy models they routinely use. This paper focuses specifically on the situation where uncertainty on the financial side of the economy leads to misspecification in the policy model. We describe a coherent strategy for policymakers who are averse to model misspecification and analyze optimal policy design in the face of Knightian uncertainty. To do so, we augment a financial dynamic stochastic general equilibrium model with model misspecification in a simple minimax framework in which the central bank plays a zero-sum game against a hypothetical evil agent. The policy is tailored to insure against worst-case outcomes. We show that model ambiguity on the financial side requires a passive monetary policy stance. However, if the uncertainty originates from the supply side of the economy, an aggressive interest rate response is required. We also show the impact of an additional macroprudential tool on the dynamics of the economy.


2021, Vol 5 (1)
Author(s): Cassandra Lisitza

In this report, we first review the maximin space-filling design methods that are often applied and discussed in the literature (for example, Müller (2007)). We then examine the robustness of the maximin space-filling design against model misspecification via numerical simulation. For this purpose, we generate spatial data sets on an n x n grid, and design points are selected from the n² locations. Predictions at the unsampled locations are made based on the observations at these design points, and the mean of the squared prediction errors is estimated as a measure of the robustness of the designs against possible model misspecification. Surprisingly, according to the simulation results, the maximin space-filling designs appear robust against model misspecification, in the sense that the mean squared prediction error does not increase significantly when the model is misspecified. Although these results were obtained with simple models, they are encouraging and will guide the further numerical and theoretical studies planned as future work.
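A minimal sketch of one common way to construct an approximate maximin space-filling design on a grid: greedily add the candidate point whose minimum distance to the already-selected points is largest. The greedy heuristic and all names below are illustrative assumptions; the report's simulation additionally fits a spatial model and computes prediction errors, which is not shown.

```python
import numpy as np

def greedy_maximin(grid_points, k, seed=0):
    """Greedily pick k of the grid points to approximately maximize the
    minimum distance between selected points."""
    rng = np.random.default_rng(seed)
    chosen = [rng.integers(len(grid_points))]          # arbitrary starting point
    for _ in range(k - 1):
        # distance from every candidate to its nearest already-chosen point
        d = np.min(
            np.linalg.norm(grid_points[:, None, :] - grid_points[chosen][None, :, :], axis=2),
            axis=1,
        )
        d[chosen] = -np.inf                            # never re-select a point
        chosen.append(int(np.argmax(d)))
    return grid_points[chosen]

n = 10
xs, ys = np.meshgrid(np.arange(n), np.arange(n))
grid = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
design = greedy_maximin(grid, k=15)
print(design)
```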


Author(s): Kara Layne Johnson, Jennifer L. Walsh, Yuri A. Amirkhanian, Nicole Bohme Carnegie

Leveraging social influence is an increasingly common strategy to change population behavior or acceptance of public health policies and interventions; however, assessing the effectiveness of these social network interventions and projecting their performance at scale requires modeling of the opinion diffusion process. We previously developed a genetic algorithm to fit the DeGroot opinion diffusion model in settings with small social networks and limited follow-up of opinion change. Here, we present an assessment of the algorithm performance under the less-than-ideal conditions likely to arise in practical applications. We perform a simulation study to assess the performance of the algorithm in the presence of ordinal (rather than continuous) opinion measurements, network sampling, and model misspecification. We found that the method handles alternate models well, performance depends on the precision of the ordinal scale, and sampling the full network is not necessary to use this method. We also apply insights from the simulation study to investigate notable features of opinion diffusion models for a social network intervention to increase uptake of pre-exposure prophylaxis (PrEP) among Black men who have sex with men (BMSM).
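For context, the DeGroot model referenced above updates each person's opinion to a weighted average of their neighbours' opinions, x_{t+1} = W x_t with a row-stochastic weight matrix W. The sketch below simulates that update on a tiny invented network; the genetic algorithm the authors use to estimate W from observed opinion change is not reproduced here.

```python
import numpy as np

def degroot_trajectory(W, x0, steps):
    """Iterate the DeGroot update x_{t+1} = W x_t and return opinions at each step."""
    W = np.asarray(W, dtype=float)
    assert np.allclose(W.sum(axis=1), 1.0), "rows of W must sum to 1"
    traj = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        traj.append(W @ traj[-1])
    return np.array(traj)

# Small illustrative network of three people with opinions on a 0-1 scale
W = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])
print(degroot_trajectory(W, x0=[0.9, 0.2, 0.5], steps=5))
```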


Econometrics, 2021, Vol 9 (4), pp. 44
Author(s): Kimon Ntotsis, Alex Karagrigoriou, Andreas Artemiou

When it comes to variable interpretation, multicollinearity is among the biggest issues that must be surmounted, especially in this new era of Big Data Analytics. Since even moderate multicollinearity can prevent proper interpretation, dedicated diagnostics must be recommended and implemented for identification purposes. Nonetheless, in econometrics and statistics, among other fields, these diagnostics are controversial with regard to how successful they are: it has been remarked that they frequently fail to provide a proper model assessment due to information complexity, resulting in model misspecification. This work proposes and investigates a robust and easily interpretable methodology, termed the Elastic Information Criterion, capable of capturing multicollinearity rather accurately and effectively and thus providing a proper model assessment. Its performance is investigated via simulated and real data.
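The proposed Elastic Information Criterion is not detailed in the abstract, so the sketch below only illustrates the conventional diagnostics it is compared against: variance inflation factors and the condition number, computed on a deliberately near-collinear design. Everything here is an illustrative assumption, not the authors' method.

```python
import numpy as np

def vif(X):
    """Variance inflation factor of each column of X: 1 / (1 - R^2) from
    regressing that column on the remaining columns (with an intercept)."""
    X = np.asarray(X, dtype=float)
    out = []
    for j in range(X.shape[1]):
        y, Z = X[:, j], np.delete(X, j, axis=1)
        Z = np.column_stack([np.ones(len(Z)), Z])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(2)
x1 = rng.normal(size=100)
x2 = x1 + rng.normal(scale=0.1, size=100)   # nearly collinear with x1
x3 = rng.normal(size=100)
X = np.column_stack([x1, x2, x3])
print("VIF:", vif(X), "condition number:", np.linalg.cond(X))
```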


2021, Vol 9
Author(s): Mark L. Taper, Subhash R. Lele, José M. Ponciano, Brian Dennis, Christopher L. Jerde

Scientists need to compare the support for models based on observed phenomena. The main goal of the evidential paradigm is to quantify the strength of evidence in the data for a reference model relative to an alternative model. This is done via an evidence function, such as ΔSIC, an estimator of the sample size scaled difference of divergences between the generating mechanism and the competing models. To use evidence, either for decision making or as a guide to the accumulation of knowledge, an understanding of the uncertainty in the evidence is needed. This uncertainty is well characterized by the standard statistical theory of estimation. Unfortunately, the standard theory breaks down if the models are misspecified, as is commonly the case in scientific studies. We develop non-parametric bootstrap methodologies for estimating the sampling distribution of the evidence estimator under model misspecification. This sampling distribution allows us to determine how secure we are in our evidential statement. We characterize this uncertainty in the strength of evidence with two different types of confidence intervals, which we term “global” and “local.” We discuss how evidence uncertainty can be used to improve scientific inference and illustrate this with a reanalysis of the model identification problem in a prominent landscape ecology study using structural equations.
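A minimal sketch of a nonparametric bootstrap of an evidence function, under the simplifying assumption that ΔSIC can be taken as the difference in BIC between two Gaussian linear models: resampling rows of the data gives an empirical sampling distribution of the evidence, from which a percentile interval can be read. The "global" versus "local" interval distinction the authors develop is not reproduced here, and the models and data are invented.

```python
import numpy as np

def bic_linear(X, y):
    """BIC (Schwarz information criterion) of a Gaussian linear model fit by least squares."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = np.mean((y - X @ beta) ** 2)
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return -2 * loglik + (p + 1) * np.log(n)

rng = np.random.default_rng(3)
n = 150
x = rng.normal(size=n)
y = 1 + 0.5 * x + 0.3 * x ** 2 + rng.normal(scale=1.0, size=n)   # generating model is quadratic
X1 = np.column_stack([np.ones(n), x])            # reference model: linear
X2 = np.column_stack([np.ones(n), x, x ** 2])    # alternative model: quadratic

delta_sic = []
for _ in range(1000):
    idx = rng.integers(0, n, n)                  # resample rows with replacement
    delta_sic.append(bic_linear(X1[idx], y[idx]) - bic_linear(X2[idx], y[idx]))
print("bootstrap 90% interval for Delta-SIC:", np.percentile(delta_sic, [5, 95]))
```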

