Uncovering and Treating Unobserved Heterogeneity with FIMIX-PLS: Which Model Selection Criterion Provides an Appropriate Number of Segments?

2011, Vol. 63 (1), pp. 34-62
Author(s): Marko Sarstedt, Jan-Michael Becker, Christian M. Ringle, Manfred Schwaiger

Forecasting, 2021, Vol. 3 (1), pp. 56-90
Author(s): Monica Defend, Aleksey Min, Lorenzo Portelli, Franz Ramsauer, Francesco Sandrini, ...

This article considers the estimation of Approximate Dynamic Factor Models with homoscedastic, cross-sectionally correlated errors for incomplete panel data. In contrast to existing estimation approaches, the presented method comprises two expectation-maximization algorithms and uses conditional factor moments in closed form. To determine the unknown factor dimension and autoregressive order, we propose a two-step information-based model selection criterion. The performance of our estimation procedure and the model selection criterion is investigated in a Monte Carlo study. Finally, we apply the Approximate Dynamic Factor Model to real-economy vintage data to support investment decisions and risk management. For this purpose, an autoregressive model with the estimated factor span of the mixed-frequency data as exogenous variables maps the behavior of weekly S&P 500 log-returns. We identify the main drivers of the index's development and define two dynamic trading strategies derived from prediction intervals for the subsequent returns.
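The abstract describes a two-step, information-based choice of the factor dimension and the autoregressive order, followed by a return equation that uses the estimated factors as exogenous variables. The sketch below illustrates that workflow on simulated data only; plain principal components and OLS stand in for the paper's EM-based estimators, and the specific criteria (a Bai/Ng-style panel criterion and BIC), the toy return series, and all variable names are illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a small balanced panel driven by two latent factors.
T, N, r_true = 240, 20, 2
F_true = rng.normal(size=(T, r_true))
Lam = rng.normal(size=(r_true, N))
X = F_true @ Lam + 0.5 * rng.normal(size=(T, N))

def pca_factors(X, r):
    """Principal-component factor estimates (stand-in for the EM estimator)."""
    Xc = X - X.mean(axis=0)
    U, s, _ = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :r] * s[:r]                      # T x r factor estimates

def ic_panel(X, r):
    """Bai/Ng-style criterion: log residual variance plus a penalty in r."""
    T, N = X.shape
    F = pca_factors(X, r)
    Xc = X - X.mean(axis=0)
    resid = Xc - F @ np.linalg.lstsq(F, Xc, rcond=None)[0]
    penalty = r * (N + T) / (N * T) * np.log(N * T / (N + T))
    return np.log(np.mean(resid ** 2)) + penalty

# Step 1: pick the factor dimension by minimizing the panel criterion.
r_hat = min(range(1, 7), key=lambda r: ic_panel(X, r))
F_hat = pca_factors(X, r_hat)

# Toy weekly "log-return" series with an AR(1) component and a factor driver.
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.4 * y[t - 1] + 0.3 * F_true[t, 0] + 0.1 * rng.normal()

def bic_arx(y, F, p):
    """BIC of a regression of y on its own p lags and the estimated factors."""
    n = len(y) - p
    Z = np.column_stack(
        [np.ones(n)]
        + [y[p - k:len(y) - k] for k in range(1, p + 1)]
        + [F[p:]]
    )
    beta, *_ = np.linalg.lstsq(Z, y[p:], rcond=None)
    resid = y[p:] - Z @ beta
    return n * np.log(np.mean(resid ** 2)) + Z.shape[1] * np.log(n)

# Step 2: pick the autoregressive order given the estimated factors.
p_hat = min(range(1, 5), key=lambda p: bic_arx(y, F_hat, p))
print(f"selected factor dimension r = {r_hat}, AR order p = {p_hat}")
```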


2000, Vol. 12 (8), pp. 1889-1900
Author(s): Yoshua Bengio

Many machine learning algorithms can be formulated as the minimization of a training criterion that involves a hyperparameter. This hyperparameter is usually chosen by trial and error with a model selection criterion. In this article we present a methodology to optimize several hyperparameters, based on the computation of the gradient of a model selection criterion with respect to the hyperparameters. In the case of a quadratic training criterion, the gradient of the selection criterion with respect to the hyperparameters is efficiently computed by backpropagating through a Cholesky decomposition. In the more general case, we show that the implicit function theorem can be used to derive a formula for the hyperparameter gradient involving second derivatives of the training criterion.
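For the quadratic case mentioned in the abstract, ridge regression is the textbook instance: the trained weights solve a linear system in the regularization hyperparameter, so the gradient of a validation criterion with respect to that hyperparameter can be obtained by differentiating through the solve, here carried out with a Cholesky factorization. The sketch below is a minimal illustration of that idea under assumed data, names, and a finite-difference check; it is not the article's implementation.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(1)
Xtr, ytr = rng.normal(size=(80, 10)), rng.normal(size=80)   # training split
Xva, yva = rng.normal(size=(40, 10)), rng.normal(size=40)   # validation split

def val_error_and_grad(lam):
    # Ridge weights w(lam) = (Xtr'Xtr + lam*I)^(-1) Xtr'ytr, via Cholesky.
    A = Xtr.T @ Xtr + lam * np.eye(Xtr.shape[1])
    c = cho_factor(A)
    w = cho_solve(c, Xtr.T @ ytr)
    # Model selection criterion: mean squared validation error.
    res = Xva @ w - yva
    E = 0.5 * np.mean(res ** 2)
    dE_dw = Xva.T @ res / len(yva)
    # Implicit differentiation of A(lam) w = b gives dw/dlam = -A^(-1) w;
    # the Cholesky factor is reused for this second solve.
    dw_dlam = -cho_solve(c, w)
    return E, float(dE_dw @ dw_dlam)

lam = 0.5
E, g = val_error_and_grad(lam)
# Sanity check against a centered finite difference.
eps = 1e-5
g_fd = (val_error_and_grad(lam + eps)[0] - val_error_and_grad(lam - eps)[0]) / (2 * eps)
print(f"dE/dlam analytic = {g:.6f}, finite difference = {g_fd:.6f}")
```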

