Obtaining Interpretable Parameters From Reparameterized Longitudinal Models: Transformation Matrices Between Growth Factors in Two Parameter Spaces

2021 ◽ pp. 107699862110520 ◽ Author(s): Jin Liu, Robert A. Perera, Le Kang, Roy T. Sabo, Robert M. Kirkpatrick

This study proposes transformation functions and matrices between coefficients in the original and reparameterized parameter spaces for an existing linear-linear piecewise model to derive the interpretable coefficients directly related to the underlying change pattern. Additionally, the study extends the existing model to allow individual measurement occasions and investigates predictors for individual differences in change patterns. We present the proposed methods with simulation studies and a real-world data analysis. Our simulation study demonstrates that the method can generally provide an unbiased and accurate point estimate and appropriate confidence interval coverage for each parameter. The empirical analysis shows that the model can estimate the growth factor coefficients and path coefficients directly related to the underlying developmental process, thereby providing meaningful interpretation.
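As a rough illustration of the kind of transformation involved (a sketch only: the knot is treated as fixed, and the reparameterization assumed here — the level at the knot, the mean of the two slopes, and half their difference — is a common choice for linear-linear piecewise models rather than necessarily the paper's exact specification):

    import numpy as np

    def to_reparameterized(gamma):
        # Matrix mapping the original growth factors (intercept, first slope,
        # second slope) to (level at the knot, mean slope, half the slope
        # difference) for a fixed knot location gamma. Illustrative only.
        return np.array([
            [1.0, gamma, 0.0],   # level at the knot: eta0 + gamma * eta1
            [0.0, 0.5,   0.5],   # mean of the two slopes
            [0.0, -0.5,  0.5],   # half the difference between the slopes
        ])

    gamma = 4.0                             # hypothetical knot location
    original = np.array([10.0, 2.0, -1.0])  # intercept, first slope, second slope
    T = to_reparameterized(gamma)
    reparameterized = T @ original                    # coefficients in the other space
    recovered = np.linalg.solve(T, reparameterized)   # inverse map recovers the originals
    print(reparameterized, recovered)

Because the map is linear once the knot is fixed, coefficients estimated in one space can be carried to the other with the same matrix, which is what makes a matrix formulation convenient for interpretation.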

2021 ◽ Vol 11 (1) ◽ Author(s): Luca Gamberi, Yanik-Pascal Förster, Evan Tzanis, Alessia Annibale, Pierpaolo Vivo

An important question in representative democracies is how to determine the optimal parliament size of a given country. According to an old conjecture, known as the cubic root law, there is a fairly universal power-law relation, with an exponent equal to 1/3, between the size of an elected parliament and the country’s population. Empirical data in modern European countries support such universality but are consistent with a larger exponent. In this work, we analyse this intriguing regularity using tools from complex network theory. We model the population of a democratic country as a random network, drawn from a growth model, where each node is assigned a constituency membership sampled from an available set of size D. We calculate analytically the modularity of the population and find that its functional relation with the number of constituencies is strongly non-monotonic, exhibiting a maximum that depends on the population size. The criterion of maximal modularity allows us to predict that the number of representatives should scale as a power law of the population size, a finding that is qualitatively confirmed by the empirical analysis of real-world data.
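For orientation, the empirical scaling referenced above is simple to state (a toy illustration of the power-law form only, not of the paper's network/modularity derivation; the prefactor and populations below are arbitrary):

    def parliament_size(population, alpha=1/3, c=1.0):
        # Cube-root-law style scaling S = c * P**alpha between assembly size S
        # and population P; alpha = 1/3 is the classical conjecture, while the
        # empirical fits discussed above point to a somewhat larger exponent.
        return c * population ** alpha

    for pop in (5e6, 6e7, 3e8):
        print(f"population {pop:.0e}: predicted size ~ {parliament_size(pop):.0f}")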


2001 ◽ Vol 33 (3) ◽ pp. 279-292 ◽ Author(s): Sharon L. Lewis, Douglas C. Montgomery, Raymond H. Myers

2018 ◽ Vol 28 (6) ◽ pp. 1741-1760 ◽ Author(s): Cheng Ju, Joshua Schwab, Mark J van der Laan

The positivity assumption, or the experimental treatment assignment (ETA) assumption, is important for identifiability in causal inference. Even if the positivity assumption holds, practical violations of this assumption may jeopardize the finite sample performance of the causal estimator. One of the consequences of practical violations of the positivity assumption is extreme values in the estimated propensity score (PS). A common practice to address this issue is truncating the PS estimate when constructing PS-based estimators. In this study, we propose a novel adaptive truncation method, Positivity-C-TMLE, based on the collaborative targeted maximum likelihood estimation (C-TMLE) methodology. We demonstrate the outstanding performance of our novel approach in a variety of simulations by comparing it with other commonly studied estimators. Results show that by adaptively truncating the estimated PS with a more targeted objective function, the Positivity-C-TMLE estimator achieves the best performance for both point estimation and confidence interval coverage among all estimators considered.
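For context, the conventional fixed-bound truncation that the adaptive method improves on looks roughly like the following (a sketch of a plain truncated inverse-probability-weighted estimator, not of Positivity-C-TMLE itself; names and the bound value are illustrative):

    import numpy as np

    def truncated_ipw_ate(y, a, ps, bound=0.025):
        # Average treatment effect by inverse probability weighting, with the
        # estimated propensity score clipped to [bound, 1 - bound] to tame the
        # extreme values produced by practical positivity violations.
        ps_t = np.clip(ps, bound, 1.0 - bound)
        return np.mean(a * y / ps_t) - np.mean((1 - a) * y / (1 - ps_t))

The bound is usually chosen ad hoc (e.g., 0.025 or 0.01); the contribution described above is to choose the truncation level adaptively against a targeted objective function rather than fixing it in advance.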


Author(s): Heather Kitada Smalley, Sarah C. Emerson, Virginia Lesser

In this chapter, we develop theory and methodology to support mode adjustment and hindcasting/forecasting in the presence of different possible mode effect types, including additive effects and odds-multiplicative effects. Mode adjustment is particularly important if the ultimate goal is to report one aggregate estimate of response parameters, and to allow comparison with historical surveys performed with different modes. The effect type has important consequences for inferential validity when the baseline response changes over time (i.e., when there is a time trend or time effect). We present a methodology that provides inference for additive and odds-multiplicative effect types, and demonstrate its performance in a simulation study. We also show that if the wrong effect type is assumed, the resulting inference can be invalid: confidence interval coverage is greatly reduced and estimates can be biased.
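The two effect types can be sketched as follows (the notation and function are illustrative, not the chapter's; p is a baseline response proportion under the reference mode):

    def apply_mode_effect(p, effect, effect_type="additive"):
        # Additive effect: the new mode shifts the response proportion directly.
        # Odds-multiplicative effect: the new mode scales the odds of response,
        # so the induced shift in the proportion depends on the baseline level.
        if effect_type == "additive":
            return p + effect
        odds = p / (1.0 - p)
        new_odds = effect * odds
        return new_odds / (1.0 + new_odds)

The distinction matters precisely because, under a time trend, the baseline p changes: an odds-multiplicative effect then implies a different absolute shift at each time point, so assuming the wrong form misattributes part of the mode effect to the trend (or vice versa).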


2021 ◽ Vol 17 (10) ◽ pp. e1009159 ◽ Author(s): Jennifer Laura Lee, Wei Ji Ma

The spatial distribution of visual items allows us to infer the presence of latent causes in the world. For instance, a spatial cluster of ants allows us to infer the presence of a common food source. However, optimal inference requires the integration of a computationally intractable number of world states in real-world situations. For example, optimal inference about whether a common cause exists based on N spatially distributed visual items requires marginalizing over both the location of the latent cause and 2^N possible affiliation patterns (where each item may be affiliated or non-affiliated with the latent cause). How might the brain approximate this inference? We show that subject behaviour deviates qualitatively from Bayes-optimal behaviour, in particular showing an unexpected positive effect of N (the number of visual items) on the false-alarm rate. We propose several “point-estimating” observer models that fit subject behaviour better than the Bayesian model. They each avoid a computationally costly marginalization over at least one of the variables of the generative model by “committing” to a point estimate of at least one of the two generative model variables. These findings suggest that the brain may implement partially committal variants of Bayesian models when detecting latent causes based on complex real-world data.
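To see the scale of the marginalization being avoided (a toy count only; the actual generative model, priors, and likelihoods are not reproduced here):

    from itertools import product

    def affiliation_patterns(N):
        # Each item is either affiliated (1) or not (0) with the latent cause,
        # so full marginalization sums over all 2**N binary vectors; a
        # "point-estimating" observer commits to a single one of them instead.
        return list(product([0, 1], repeat=N))

    print([len(affiliation_patterns(N)) for N in range(1, 7)])  # 2, 4, 8, 16, 32, 64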


2017 ◽ Vol 28 (4) ◽ pp. 1044-1063 ◽ Author(s): Cheng Ju, Richard Wyss, Jessica M Franklin, Sebastian Schneeweiss, Jenny Häggström, ...

Propensity score-based estimators are increasingly used for causal inference in observational studies. However, model selection for propensity score estimation in high-dimensional data has received little attention. In these settings, propensity score models have traditionally been selected based on the goodness-of-fit for the treatment mechanism itself, without consideration of the causal parameter of interest. Collaborative minimum loss-based estimation is a novel methodology for causal inference that takes into account information on the causal parameter of interest when selecting a propensity score model. This “collaborative learning” considers variable associations with both treatment and outcome when selecting a propensity score model in order to minimize a bias-variance tradeoff in the estimated treatment effect. In this study, we introduce a novel approach for collaborative model selection when using the LASSO estimator for propensity score estimation in high-dimensional covariate settings. To demonstrate the importance of selecting the propensity score model collaboratively, we designed quasi-experiments based on a real electronic healthcare database, where only the potential outcomes were manually generated, and the treatment and baseline covariates remained unchanged. Results showed that the collaborative minimum loss-based estimation algorithm outperformed other competing estimators for both point estimation and confidence interval coverage. In addition, the propensity score model selected by collaborative minimum loss-based estimation could be applied to other propensity score-based estimators, which also resulted in substantive improvement for both point estimation and confidence interval coverage. We illustrate the discussed concepts through an empirical example comparing the effects of non-selective nonsteroidal anti-inflammatory drugs with selective COX-2 inhibitors on gastrointestinal complications in a population of Medicare beneficiaries.
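As a minimal sketch of the estimation ingredient being selected over (assuming an L1-penalized logistic regression for the propensity score fit over a grid of penalty strengths; the collaborative step that scores each fit against the causal parameter is not shown):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def lasso_propensity_fits(X, a, c_grid=(0.01, 0.1, 1.0)):
        # Fit one L1-penalized propensity score model per regularization strength.
        # Conventional practice picks the strength by treatment-model fit alone;
        # collaborative selection instead compares the resulting causal estimates.
        fits = {}
        for c in c_grid:
            model = LogisticRegression(penalty="l1", solver="liblinear", C=c)
            model.fit(X, a)
            fits[c] = model.predict_proba(X)[:, 1]  # estimated P(A = 1 | X)
        return fits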

