Putting It Together: Fitting a Dynamic Model

2021 ◽  
pp. 283-294
Author(s):  
Timothy E. Essington

The chapter “Putting It Together: Fitting a Dynamic Model” synthesizes the material presented in the book through a worked example that brings together density dependence, complex model dynamics, parameter estimation, and model selection using the Akaike information criterion (AIC). The example chosen is the recovery of the gray wolf (Canis lupus) population in Washington State since 2008. The chapter begins by explaining how to fit an observation error model, then examines how to fit a process error model, and then discusses parameter estimates and model selection. It concludes by discussing how to determine whether the example population exhibits complex population dynamics.
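The workflow the chapter describes, fitting alternative error structures and comparing them with AIC, can be sketched in a few lines. The counts below are hypothetical stand-ins, not the actual Washington wolf data, and the two models are deliberately simplified: deterministic exponential growth with observation error versus a stochastic growth process.

```python
import numpy as np

# Hypothetical annual wolf counts (illustrative only, not the real WA data)
years = np.arange(2008, 2017)
counts = np.array([5, 9, 14, 22, 31, 44, 60, 79, 99], dtype=float)

def aic(n, rss, k):
    """Gaussian AIC up to an additive constant: n*ln(RSS/n) + 2k."""
    return n * np.log(rss / n) + 2 * k

# Observation-error model: deterministic exponential growth,
# all noise attributed to the counts themselves.
t = years - years[0]
b, a = np.polyfit(t, np.log(counts), 1)   # log N_t = a + b*t
rss_obs = np.sum((np.log(counts) - (a + b * t)) ** 2)
aic_obs = aic(len(counts), rss_obs, k=3)  # a, b, sigma

# Process-error model: stochastic growth, one-step-ahead predictions,
# all noise attributed to the dynamics.
r = np.diff(np.log(counts))               # realized log growth rates
rss_proc = np.sum((r - r.mean()) ** 2)
aic_proc = aic(len(r), rss_proc, k=2)     # mean growth rate, sigma

print(aic_obs, aic_proc)                  # smaller AIC = preferred model
```

The model with the lower AIC is preferred; with real data one would compare a richer candidate set (e.g., density-dependent variants) on the same footing.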

Author(s):  
James Cui

The generalized estimating equation (GEE) approach is a widely used statistical method for analyzing longitudinal data in clinical and epidemiological studies. It extends the generalized linear model (GLM) method to correlated data so that valid standard errors of the parameter estimates can be obtained. Unlike the GLM method, which is based on maximum likelihood theory for independent observations, the GEE method is based on quasilikelihood theory, and no assumption is made about the distribution of the response observations. Therefore, Akaike's information criterion, a widely used method for model selection in GLM, is not directly applicable to GEE. However, Pan (Biometrics 2001; 57: 120–125) proposed a model-selection method for GEE and termed it the quasilikelihood under the independence model criterion (QIC). This criterion can also be used to select the best working correlation structure. From Pan's methods, I developed a general Stata program, qic, that accommodates all the distribution and link functions and correlation structures available in Stata version 9. In this paper, I introduce this program and demonstrate how to use it to select the best working correlation structure and the best subset of covariates through two examples from longitudinal studies.
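A rough sketch of what QIC computes, written from Pan's formula rather than the Stata program itself: for a Poisson model fit under the independence working correlation, QIC = -2Q(β̂; I) + 2·trace(Ω̂_I · V̂_r), where Ω̂_I is the quasi-information matrix under independence and V̂_r the robust (sandwich) covariance. The data here are simulated and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical longitudinal Poisson data: n subjects, m visits each
n, m = 40, 4
subj = np.repeat(np.arange(n), m)
x = rng.normal(size=n * m)
u = np.repeat(rng.normal(scale=0.3, size=n), m)   # subject effect -> within-subject correlation
y = rng.poisson(np.exp(0.3 + 0.5 * x + u))
X = np.column_stack([np.ones(n * m), x])

# Fit the Poisson GLM by IRLS (the independence-working-model fit)
beta = np.zeros(2)
for _ in range(50):
    mu = np.exp(X @ beta)
    W = mu                                        # Poisson variance = mean
    beta = np.linalg.solve(X.T @ (W[:, None] * X),
                           X.T @ (W * (X @ beta) + (y - mu)))

mu = np.exp(X @ beta)
quasi_ll = np.sum(y * np.log(mu) - mu)            # Poisson quasi-likelihood Q

# Quasi-information under independence and robust (sandwich) covariance
A = X.T @ (mu[:, None] * X)                       # Omega_I
B = np.zeros((2, 2))                              # "meat", clustered by subject
for i in range(n):
    idx = subj == i
    s = X[idx].T @ (y[idx] - mu[idx])
    B += np.outer(s, s)
V_r = np.linalg.solve(A, B) @ np.linalg.inv(A)    # A^-1 B A^-1

qic = -2 * quasi_ll + 2 * np.trace(A @ V_r)
print(round(qic, 2))
```

When the working correlation is exactly right, the penalty term collapses to 2p, recovering an AIC-like form; comparing QIC across working correlation structures is how the "best working" structure is chosen.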


2004 ◽  
Vol 16 (5) ◽  
pp. 1077-1104 ◽  
Author(s):  
Masashi Sugiyama ◽  
Motoaki Kawanabe ◽  
Klaus-Robert Müller

A well-known result by Stein (1956) shows that in particular situations, biased estimators can yield better parameter estimates than their generally preferred unbiased counterparts. This letter follows the same spirit: we stabilize unbiased generalization error estimates by regularization and thereby obtain more robust model selection criteria for learning. We trade a small bias against a larger variance reduction, which has the beneficial effect of being more precise on a single training set. We focus on the subspace information criterion (SIC), an unbiased estimator of the expected generalization error measured by the reproducing kernel Hilbert space norm. SIC can be applied to kernel regression, and earlier experiments showed that a small regularization of SIC has a stabilizing effect. However, it remained open how to appropriately determine the degree of regularization in SIC. In this article, we derive an unbiased estimator of the expected squared error between SIC and the expected generalization error, and we propose determining the degree of regularization of SIC such that this estimator is minimized. Computer simulations with artificial and real data sets illustrate that the proposed method effectively improves the precision of SIC, especially in high-noise cases. We furthermore compare the proposed method with the original SIC, cross-validation, and an empirical Bayesian method in ridge parameter selection, with good results.
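The bias-variance trade the letter exploits is easy to demonstrate with plain ridge regression, used here as a stand-in for the authors' SIC machinery: in a small-sample, high-noise regime, a slightly biased shrinkage estimator beats the unbiased OLS estimator in mean squared error. All numbers below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: few observations, high noise -- the regime where a
# small bias buys a large variance reduction.
n, p, sigma, lam = 20, 10, 5.0, 5.0
beta_true = np.full(p, 0.3)            # modest true coefficients

mse_ols, mse_ridge, reps = 0.0, 0.0, 200
for _ in range(reps):
    X = rng.normal(size=(n, p))
    y = X @ beta_true + rng.normal(scale=sigma, size=n)
    # Unbiased OLS estimate
    b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    # Biased but lower-variance ridge estimate
    b_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
    mse_ols += np.sum((b_ols - beta_true) ** 2) / reps
    mse_ridge += np.sum((b_ridge - beta_true) ** 2) / reps

print(mse_ols, mse_ridge)   # ridge achieves lower parameter MSE here
```

The open question the letter answers is the analogue of choosing lam: how much regularization to apply to the error estimate itself.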


Genetics ◽  
1996 ◽  
Vol 143 (4) ◽  
pp. 1819-1829 ◽  
Author(s):  
G Thaller ◽  
L Dempfle ◽  
I Hoeschele

Abstract Maximum likelihood methodology was applied to determine the mode of inheritance of rare binary traits with data structures typical of swine populations. The genetic models considered included a monogenic, a digenic, a polygenic, and three mixed polygenic and major gene models. The main emphasis was on the detection of major genes acting on a polygenic background. Deterministic algorithms were employed to integrate and maximize likelihoods. A simulation study was conducted to evaluate model selection and parameter estimation. Three designs were simulated that differed in the number of sires/number of dams within sires (10/10, 30/30, 100/30). Major gene effects of at least one SD of the liability were detected with satisfactory power under the mixed model of inheritance, except for the smallest design. Parameter estimates were empirically unbiased with acceptable standard errors, again except for the smallest design, and allowed the genetic models to be distinguished clearly. Distributions of the likelihood ratio statistic were evaluated empirically, because asymptotic theory did not hold. For each simulation model, the Average Information Criterion was computed for all models of analysis; the model with the smallest value was chosen as the best model and was equal to the true model in almost every case studied.


Metrika ◽  
2021 ◽  
Author(s):  
Andreas Anastasiou ◽  
Piotr Fryzlewicz

Abstract We introduce a new approach, called Isolate-Detect (ID), for the consistent estimation of the number and location of multiple generalized change-points in noisy data sequences. Examples of signal changes that ID can handle are changes in the mean of a piecewise-constant signal and changes, continuous or not, in the linear trend. The number of change-points is allowed to increase with the sample size. Our method is based on an isolation technique, which prevents the consideration of intervals that contain more than one change-point. This isolation enhances ID's accuracy, as it allows detection in the presence of frequent changes of possibly small magnitudes. In ID, model selection is carried out via thresholding, an information criterion, SDLL, or a hybrid involving the first two. The hybrid model selection leads to a general method with very good practical performance and minimal parameter choice. In the scenarios tested, ID is at least as accurate as the state-of-the-art methods, and most of the time it outperforms them. ID is implemented in the R packages IDetect and breakfast, available from CRAN.
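ID itself is available in the IDetect and breakfast R packages; as a rough illustration of threshold-based change-point model selection, the sketch below uses classical binary segmentation with a CUSUM statistic rather than ID's isolation intervals, on a simulated piecewise-constant signal.

```python
import numpy as np

def cusum(x):
    """CUSUM statistics for a single mean change at each split point."""
    n = len(x)
    b = np.arange(1, n)
    left = np.cumsum(x)[:-1] / b
    right = (np.sum(x) - np.cumsum(x)[:-1]) / (n - b)
    return np.sqrt(b * (n - b) / n) * np.abs(left - right)

def binseg(x, start, thresh, found):
    """Recursive binary segmentation: split while the max CUSUM exceeds thresh."""
    if len(x) < 3:
        return
    stats = cusum(x)
    b = int(np.argmax(stats)) + 1
    if stats[b - 1] > thresh:
        found.append(start + b)
        binseg(x[:b], start, thresh, found)
        binseg(x[b:], start + b, thresh, found)

rng = np.random.default_rng(1)
# Piecewise-constant signal: mean 0, then 3, then 0 (50 points each)
signal = np.concatenate([np.zeros(50), np.full(50, 3.0), np.zeros(50)])
x = signal + rng.normal(scale=0.5, size=150)

found = []
binseg(x, 0, thresh=4.0, found=found)
print(sorted(found))   # estimated change-points, near 50 and 100
```

ID's isolation step improves on this kind of recursion precisely when changes are frequent and small, because each candidate interval is guaranteed to contain at most one change-point.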


Polar Biology ◽  
2021 ◽  
Vol 44 (2) ◽  
pp. 259-273
Author(s):  
Céline Cunen ◽  
Lars Walløe ◽  
Kenji Konishi ◽  
Nils Lid Hjort

Abstract Changes in the body condition of Antarctic minke whales (Balaenoptera bonaerensis) have been investigated in a number of studies, but remain contested. Here we provide a new analysis of body condition measurements, with particularly careful attention to statistical model building and to model selection issues. We analyse body condition data for a large number (4704) of minke whales caught between 1987 and 2005. The data consist of five variables related to body condition (fat weight, blubber thickness and girth) and a number of temporal, spatial and biological covariates. The body condition variables are analysed using linear mixed-effects models, for which we provide sound biological motivation. Further, we conduct model selection with the focused information criterion (FIC), reflecting the fact that we have a clearly specified research question, which leads us to a clear focus parameter of particular interest. We find that there has been a substantial decline in body condition over the study period (the net declines are estimated at 10% for fat weight, 7% for blubber thickness and 3% for girth). Interestingly, there seem to be some differences in body condition trends between males and females and between different regions of the Antarctic. The decline in body condition could indicate major changes in the Antarctic ecosystem, in particular increased competition from some of the larger krill-eating whale species.


2014 ◽  
Vol 2014 ◽  
pp. 1-13
Author(s):  
Qichang Xie ◽  
Meng Du

The essential task of risk investment is to select an optimal tracking portfolio among various portfolios. Statistically, this process can be achieved by choosing an optimal restricted linear model. This paper develops a statistical procedure to do this, based on selecting appropriate weights for averaging approximately restricted models. The method of weighted average least squares is adopted to estimate the approximately restricted models under a dependent error setting. The optimal weights are selected by minimizing a k-class generalized information criterion (k-GIC), which is an estimate of the average squared error from the model average fit. This model selection procedure is shown to be asymptotically optimal in the sense of obtaining the lowest possible average squared error. Monte Carlo simulations illustrate that the suggested method has efficiency comparable to some alternative model selection techniques.
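The weight-selection idea can be illustrated with a Mallows-type criterion, a well-known relative of the paper's k-GIC (this sketch is a simplification: independent errors, two nested candidate models, and a one-dimensional weight grid, none of which matches the paper's dependent-error generality).

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100
x = rng.normal(size=n)
y = 1.0 + 0.8 * x + 0.2 * x**2 + rng.normal(scale=1.0, size=n)

# Two candidate ("approximately restricted") nested models
X1 = np.column_stack([np.ones(n), x])            # restricted: linear
X2 = np.column_stack([np.ones(n), x, x**2])      # unrestricted: quadratic

def fit(X):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return X @ beta, X.shape[1]

mu1, k1 = fit(X1)
mu2, k2 = fit(X2)
sigma2 = np.sum((y - mu2) ** 2) / (n - k2)       # noise variance from largest model

# Mallows-type criterion over the weight simplex:
#   C(w) = ||y - mu_w||^2 + 2 * sigma2 * k_w,  k_w = effective parameter count
grid = np.linspace(0.0, 1.0, 101)
crit = [np.sum((y - (w * mu1 + (1 - w) * mu2)) ** 2)
        + 2.0 * sigma2 * (w * k1 + (1 - w) * k2) for w in grid]
w_opt = grid[int(np.argmin(crit))]
mu_avg = w_opt * mu1 + (1 - w_opt) * mu2         # model-averaged fit
print(w_opt)
```

The criterion estimates the average squared error of each weighted combination, so minimizing it over the weights is the averaging analogue of picking the single model with the smallest information criterion.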


Economies ◽  
2020 ◽  
Vol 8 (2) ◽  
pp. 49 ◽  
Author(s):  
Waqar Badshah ◽  
Mehmet Bulut

The Bounds test of cointegration uses only unstructured single-path model selection techniques, i.e., information criteria, for model selection. The aim of this paper was twofold: first, to evaluate the performance of five routinely used information criteria {Akaike Information Criterion (AIC), Akaike Information Criterion Corrected (AICC), Schwarz/Bayesian Information Criterion (SIC/BIC), Schwarz/Bayesian Information Criterion Corrected (SICC/BICC), and Hannan and Quinn Information Criterion (HQC)} and three structured approaches (Forward Selection, Backward Elimination, and Stepwise) by assessing their size and power properties at different sample sizes based on Monte Carlo simulations; and second, to carry out the same assessment on real economic data. The second aim was achieved by evaluating the long-run relationship between three pairs of macroeconomic variables, i.e., Energy Consumption and GDP, Oil Price and GDP, and Broad Money and GDP, for the BRICS (Brazil, Russia, India, China and South Africa) countries using the Bounds cointegration test. It was found that the information criteria and the structured procedures have the same power for a sample size of 50 or greater; however, BICC and Stepwise perform better at small sample sizes. In light of the simulation and real data results, a modified Bounds test with the Stepwise model selection procedure may be used, as it is strongly supported theoretically and avoids noise in the model selection process.
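A minimal sketch of how three of the listed criteria behave on a lag-selection problem (plain AR order selection on simulated data, rather than the paper's full ARDL/Bounds setup): each criterion shares the same fit term and differs only in its complexity penalty, so the stricter penalties (BIC, then HQC) never pick a larger lag than AIC.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate an AR(2) series: y_t = 0.5*y_{t-1} - 0.4*y_{t-2} + e_t
n = 500
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.5 * y[t - 1] - 0.4 * y[t - 2] + rng.normal()

def criteria(p, pmax=5):
    """Fit AR(p) by least squares on a common sample; return (AIC, BIC, HQC)."""
    Y = y[pmax:]
    X = np.column_stack([y[pmax - j:n - j] for j in range(1, p + 1)])
    beta = np.linalg.lstsq(X, Y, rcond=None)[0]
    m = len(Y)
    rss = np.sum((Y - X @ beta) ** 2)
    k = p
    return (m * np.log(rss / m) + 2 * k,               # AIC
            m * np.log(rss / m) + k * np.log(m),       # BIC
            m * np.log(rss / m) + 2 * k * np.log(np.log(m)))  # HQC

scores = np.array([criteria(p) for p in range(1, 6)])
p_aic, p_bic, p_hqc = 1 + np.argmin(scores, axis=0)
print(p_aic, p_bic, p_hqc)   # orders selected by each criterion
```

The structured procedures the paper evaluates (forward, backward, stepwise) instead search over covariate subsets, but the comparison logic, fit versus penalty, is the same.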


2012 ◽  
Vol 12 (8) ◽  
pp. 2550-2565 ◽  
Author(s):  
Marcelo N. Kapp ◽  
Robert Sabourin ◽  
Patrick Maupin

2017 ◽  
Author(s):  
Rebecca L. Koscik ◽  
Derek L. Norton ◽  
Samantha L. Allison ◽  
Erin M. Jonaitis ◽  
Lindsay R. Clark ◽  
...  

Objective: In this paper we apply Information-Theoretic (IT) model averaging to characterize a set of complex interactions in a longitudinal study on cognitive decline. Prior research has identified numerous genetic (including sex), education, health and lifestyle factors that predict cognitive decline. Traditional model selection approaches (e.g., backward or stepwise selection) attempt to find the models that best fit the observed data; these techniques risk the interpretation that only the selected predictors are important. In reality, several models may fit similarly well but lead to different conclusions (e.g., about the size and significance of parameter estimates); inference from traditional model selection approaches can therefore lead to overly confident conclusions. Method: Here we use longitudinal cognitive data from ~1550 late-middle-aged adults in the Wisconsin Registry for Alzheimer’s Prevention study to examine the effects of sex, the Apolipoprotein E (APOE) ɛ4 allele (non-modifiable factors), and literacy achievement (modifiable) on cognitive decline. For each outcome, we applied IT model averaging to a model set with combinations of interactions among sex, APOE, literacy, and age. Results: For a list-learning test, model-averaged results showed better performance for women than men, with faster decline among men; increased literacy was associated with better performance, particularly among men. APOE had less of an effect on cognitive performance in this age range (~40-70). Conclusions: These results illustrate the utility of the IT approach and point to literacy as a potential modifier of decline. Whether the protective effect of literacy is due to educational attainment or to intrinsic verbal intellectual ability is the topic of ongoing work.
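The IT model-averaging step can be sketched with Akaike weights: each candidate model's AIC is converted to a weight, and the coefficient of interest is averaged across models instead of being read off a single "winner". The AIC values and coefficients below are invented for illustration and do not come from the study.

```python
import numpy as np

# Hypothetical AICs and estimates of one shared parameter (e.g., a
# sex-by-literacy interaction) from four candidate models
aic = np.array([1012.4, 1010.1, 1015.8, 1011.0])
coef = np.array([0.21, 0.18, 0.25, 0.19])

delta = aic - aic.min()            # AIC differences from the best model
w = np.exp(-0.5 * delta)
w /= w.sum()                       # Akaike weights; they sum to 1

coef_avg = np.sum(w * coef)        # model-averaged estimate
print(np.round(w, 3), round(coef_avg, 3))
```

Because no single model gets all the weight, the averaged estimate (and its unconditional standard error, not shown here) reflects model-selection uncertainty that a backward or stepwise pick would hide.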

