Effects of error covariance structure on estimation of model averaging weights and predictive performance

2013 · Vol. 49 (9) · pp. 6029–6047
Author(s): Dan Lu, Ming Ye, Philip D. Meyer, Gary P. Curtis, Xiaoqing Shi, ...

2016 · Vol. 41 (3) · pp. 444–455
Author(s): Cherng G. Ding, Ten-Der Jane, Chiu-Hui Wu, Hang-Rung Lin, Chih-Kang Shen

It has been pointed out in the literature that misspecification of the level-1 error covariance structure in latent growth modeling (LGM) has detrimental effects on inferences about growth parameters. Since the correct covariance structure is difficult to specify on theoretical grounds, identification must rely on a specification search, which, however, has not been addressed systematically in the literature. In this study, we first discuss the characteristics of various covariance structures and their nested relations, and on that basis propose a systematic approach to facilitate identifying a plausible covariance structure. The approach involves a test for stationarity of the error process and the sequential chi-square difference test. Preliminary simulation results indicate that the approach performs well when the sample size is sufficiently large. The approach is illustrated with empirical data. We recommend that it be used in empirical LGM studies to improve the quality of the specification of the error covariance structure.
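The sequential chi-square difference test mentioned in this abstract compares nested covariance structures via likelihood-ratio statistics. Below is a minimal sketch, not the authors' implementation: the fit results, structure names, and the `sequential_search` helper are hypothetical, assuming each candidate structure has already been fitted by maximum likelihood.

```python
import scipy.stats as st

def chisq_diff_test(ll_restricted, k_restricted, ll_general, k_general, alpha=0.05):
    """Chi-square difference (likelihood-ratio) test between two nested models.

    ll_*: maximized log-likelihoods; k_*: numbers of free parameters.
    The restricted structure must be nested within the general one.
    """
    stat = 2.0 * (ll_general - ll_restricted)   # LR statistic
    ddf = k_general - k_restricted              # difference in free parameters
    p = st.chi2.sf(stat, ddf)                   # upper-tail p-value
    return stat, ddf, p, p < alpha              # rejection favors the general model

def sequential_search(fits, alpha=0.05):
    """Sequential search over structures ordered from most to least restrictive.

    `fits` is a hypothetical list of (name, log_likelihood, n_params) tuples,
    e.g. [("identity", ll0, k0), ("AR(1)", ll1, k1), ("unstructured", ll2, k2)].
    Move to a more general structure only while the simpler one is rejected.
    """
    chosen = fits[0]
    for simpler, general in zip(fits, fits[1:]):
        _, _, _, reject = chisq_diff_test(simpler[1], simpler[2],
                                          general[1], general[2], alpha)
        if reject:
            chosen = general   # simpler structure fits significantly worse
        else:
            break              # stop at the first non-rejection
    return chosen[0]
```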


2009 · Vol. 100 (10) · pp. 2376–2388
Author(s): Xinyu Zhang, Ti Chen, Alan T. K. Wan, Guohua Zou

2021 · Vol. 72 · pp. 901–942
Author(s): Aliaksandr Hubin, Geir Storvik, Florian Frommlet

Regression models are used in a wide range of applications, providing a powerful scientific tool for researchers from many fields. Linear, or simple parametric, models are often not sufficient to describe complex relationships between input variables and a response. Such relationships can be better described by flexible approaches such as neural networks, but this results in less interpretable models and potential overfitting. Alternatively, specific parametric nonlinear functions can be used, but the specification of such functions is in general complicated. In this paper, we introduce a flexible approach for the construction and selection of highly flexible nonlinear parametric regression models. Nonlinear features are generated hierarchically, similarly to deep learning, but with additional flexibility in the types of features that can be considered. This flexibility, combined with variable selection, allows us to find a small set of important features and thereby more interpretable models. Within the space of possible functions, a Bayesian approach is considered, introducing priors for functions based on their complexity. A genetically modified mode jumping Markov chain Monte Carlo algorithm is adopted to perform Bayesian inference and estimate posterior probabilities for model averaging. In various applications, we illustrate how our approach is used to obtain meaningful nonlinear models. Additionally, we compare its predictive performance with several machine learning algorithms.
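To illustrate how posterior model probabilities induce a model-averaged prediction, here is a minimal sketch under stated assumptions: it is not the paper's genetically modified mode jumping MCMC, but a BIC-based approximation to posterior model probabilities over a few hypothetical candidate feature sets, each fitted by least squares.

```python
import numpy as np

def bic(rss, n, k):
    """BIC for a Gaussian linear model: n*log(RSS/n) + k*log(n)."""
    return n * np.log(rss / n) + k * np.log(n)

def bma_weights(bics):
    """Approximate posterior model probabilities, p(M_i | data) ∝ exp(-BIC_i / 2),
    assuming equal prior probabilities over the candidate models."""
    b = np.asarray(bics, dtype=float)
    w = np.exp(-(b - b.min()) / 2.0)   # shift by the minimum for numerical stability
    return w / w.sum()

# Hypothetical data and candidate feature sets (illustrative only).
rng = np.random.default_rng(0)
n = 200
x = rng.uniform(-2, 2, n)
y = np.sin(x) + 0.1 * rng.standard_normal(n)

candidates = [
    np.column_stack([np.ones(n), x]),               # linear features
    np.column_stack([np.ones(n), x, x**2, x**3]),   # cubic polynomial features
    np.column_stack([np.ones(n), np.sin(x)]),       # a nonlinear feature
]

bics, preds = [], []
for X in candidates:
    beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(rss[0]) if rss.size else float(((y - X @ beta) ** 2).sum())
    bics.append(bic(rss, n, X.shape[1]))
    preds.append(X @ beta)

w = bma_weights(bics)
y_bma = sum(wi * p for wi, p in zip(w, preds))   # model-averaged fit
print("posterior weights ≈", np.round(w, 3))
```

The paper's actual method explores a far larger, hierarchically generated feature space with MCMC; the BIC weighting above only shows the final averaging step, in which each candidate model's prediction is weighted by its approximate posterior probability.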

