Finite Mixture Dynamic Regression Modeling of Panel Data With Implications for Dynamic Response Analysis

2005 ◽  
Vol 30 (2) ◽  
pp. 169-187 ◽  
Author(s):  
David Kaplan

This article considers the problem of estimating dynamic linear regression models when the data are generated from a finite mixture probability density function whose mixture components are characterized by different dynamic regression model parameters. Conventional linear models assume that the data are generated by a single probability density function characterized by a single set of regression model parameters. When the true generating model is a finite mixture density function, however, estimating conventional linear models under the assumption of a single density function may lead to erroneous conclusions. Instead, it may be desirable to estimate the regression model under the assumption that the data derive from a finite mixture density function and to examine differences in the model parameters within each mixture component. Dynamic regression models and subsequent dynamic response analysis using dynamic multipliers are also likely to be affected by the existence of a finite mixture density, because dynamic multipliers are functions of the regression model parameters. Applying finite mixture modeling to two real data examples, this article shows that dynamic responses to changes in exogenous variables can be quite different depending on the number and nature of the underlying mixture components. Implications for substantive conclusions based on the use of dynamic multipliers are discussed.
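As a sketch of how mixture structure propagates into dynamic response analysis, consider a first-order autoregressive distributed-lag specification (a generic illustration, not necessarily the exact model estimated in the article). Within mixture component $k$,

```latex
y_t = \alpha_k + \phi_k\, y_{t-1} + \beta_k x_t + \varepsilon_t,
\qquad |\phi_k| < 1,
\qquad
\frac{\partial y_{t+j}}{\partial x_t} = \beta_k \phi_k^{\,j},
\qquad
\text{long-run multiplier} = \frac{\beta_k}{1-\phi_k},
```

so components with different $(\phi_k, \beta_k)$ imply different impact multipliers, different decay paths $\beta_k \phi_k^{\,j}$, and different long-run responses to the same change in the exogenous variable.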

2019 ◽  
Vol 11 (01n02) ◽  
pp. 1950003
Author(s):  
Fábio Prataviera ◽  
Gauss M. Cordeiro ◽  
Edwin M. M. Ortega ◽  
Adriano K. Suzuki

In several applications, the distribution of the data is frequently unimodal, asymmetric or bimodal. The regression models commonly used for data with real support are the normal, skew normal, beta normal and gamma normal, among others. We define a new regression model based on the odd log-logistic geometric normal distribution for modeling asymmetric or bimodal data with support in [Formula: see text], which generalizes some known regression models, including the widely used heteroscedastic linear regression. We adopt the maximum likelihood method for estimating the model parameters and define diagnostic measures to detect influential observations. For several parameter settings, sample sizes and systematic structures, various simulations are performed to verify the adequacy of the estimators of the model parameters. The empirical distribution of the quantile residuals is investigated and compared with the standard normal distribution. We demonstrate the usefulness of the proposed models by means of three applications to real data.
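The quantile-residual check mentioned above can be sketched in a few lines. This is a minimal illustration using an ordinary normal model fitted by maximum likelihood, not the odd log-logistic geometric normal distribution itself (whose density is not given here): each response is passed through the fitted CDF and then through the standard normal quantile function, and an adequate model yields residuals that behave like draws from N(0, 1).

```python
# Quantile residuals: r_i = Phi^{-1}( F(y_i; theta_hat) ).
# If the fitted model is adequate, r should look standard normal.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical data actually generated by a normal model
y = rng.normal(loc=5.0, scale=2.0, size=5000)

# Maximum likelihood fit of a normal model (closed form here)
mu_hat, sigma_hat = y.mean(), y.std(ddof=0)

# Transform through the fitted CDF, then the standard normal quantile
u = stats.norm.cdf(y, loc=mu_hat, scale=sigma_hat)
r = stats.norm.ppf(u)

print(round(r.mean(), 3), round(r.std(), 3))  # close to 0 and 1
```

For the more flexible distributions discussed in the article, only the CDF in the first transformation changes; the comparison with the standard normal distribution is the same.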


2016 ◽  
Vol 5 (3) ◽  
pp. 9 ◽  
Author(s):  
Elizabeth M. Hashimoto ◽  
Gauss M. Cordeiro ◽  
Edwin M.M. Ortega ◽  
G.G. Hamedani

We propose and study a new log-gamma Weibull regression model. We obtain explicit expressions for the raw and incomplete moments, quantile and generating functions and mean deviations of the log-gamma Weibull distribution. We demonstrate that the new regression model can be applied to censored data, since it represents a parametric family that includes several widely known regression models as sub-models, and can therefore be used more effectively in the analysis of survival data. We obtain the maximum likelihood estimates of the model parameters from censored data and evaluate local influence on the parameter estimates under different perturbation schemes. Some global influence measures are also investigated. Further, for different parameter settings, sample sizes and censoring percentages, various simulations are performed. In addition, the empirical distribution of some modified residuals is displayed and compared with the standard normal distribution. These studies suggest that the residual analysis usually performed in normal linear regression models can be extended to a modified deviance residual in the proposed regression model applied to censored data. We demonstrate that our extended regression model is very useful for the analysis of real data and may give more realistic fits than other special regression models.
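Maximum likelihood estimation from right-censored data, as used above, can be sketched with a plain Weibull model (a much simpler special case than the log-gamma Weibull family; the parameter values and censoring point below are hypothetical). Observed events contribute the log-density and censored times contribute the log-survival function:

```python
# Right-censored Weibull MLE: events contribute log f(t),
# censored observations contribute log S(t) = -(t/lam)^k.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, shape_true, scale_true, c = 3000, 1.5, 2.0, 3.0

t_latent = scale_true * rng.weibull(shape_true, size=n)
t = np.minimum(t_latent, c)             # right-censoring at c
delta = (t_latent < c).astype(float)    # 1 = event observed, 0 = censored

def neg_loglik(params):
    log_k, log_lam = params              # log-parameterization keeps k, lam > 0
    k, lam = np.exp(log_k), np.exp(log_lam)
    z = (t / lam) ** k
    ll = delta * (np.log(k) - k * np.log(lam) + (k - 1) * np.log(t)) - z
    return -ll.sum()

res = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
k_hat, lam_hat = np.exp(res.x)
print(round(k_hat, 2), round(lam_hat, 2))  # near the true shape 1.5 and scale 2.0
```

The extended model in the article adds extra shape parameters to this likelihood, but the censoring mechanics are identical.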


2011 ◽  
Vol 255-260 ◽  
pp. 2601-2605
Author(s):  
Zhang Jun Liu ◽  
Yao Long Lei

An orthogonal expansion method for earthquake ground motion is introduced in the first part of the paper. In this method, the seismic acceleration process is represented as a linear combination of deterministic functions modulated by 10 uncorrelated random variables. In the second part of the paper, the recently developed probability density evolution method (PDEM) is employed to study the linear random response of structures subjected to external excitations. In the PDEM, a completely uncoupled one-dimensional governing partial differential equation, the generalized density evolution equation, is first derived with respect to the evolutionary probability density function of the stochastic response of nonlinear structures. The solution of this equation yields the instantaneous probability density function, so it is natural to combine the PDEM with the orthogonal expansion of seismic ground motion to study the linear random earthquake response. Finally, using the example of a linear frame structure subjected to non-stationary ground motions, the paper validates the proposed approach and illustrates its application.
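For reference, the generalized density evolution equation mentioned above is commonly written (in the one-dimensional form due to Li and Chen) as

```latex
\frac{\partial p_{Z\Theta}(z,\theta,t)}{\partial t}
+ \dot{Z}(\theta,t)\,
\frac{\partial p_{Z\Theta}(z,\theta,t)}{\partial z} = 0,
\qquad
p_{Z}(z,t) = \int_{\Omega_\Theta} p_{Z\Theta}(z,\theta,t)\,\mathrm{d}\theta,
```

where $Z(\theta,t)$ is the response quantity of interest, $\Theta$ collects the random parameters (here, the 10 uncorrelated variables of the orthogonal expansion), and $p_{Z\Theta}$ is their joint density; integrating out $\theta$ recovers the instantaneous probability density of the response.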


2018 ◽  
Vol 41 (1) ◽  
pp. 75-86
Author(s):  
Taciana Shimizu ◽  
Francisco Louzada ◽  
Adriano Suzuki

In this paper, we evaluate the efficiency of volleyball players according to their performance in attack, block and serve, taking into account the compositional structure of the data related to these fundamentals. A finite mixture of regression models fitted the data better than the usual regression model. The maximum likelihood estimates are obtained via an EM algorithm. A simulation study reveals that the estimates are close to the true values and that the estimators are asymptotically unbiased for the parameters. A real Brazilian volleyball dataset on the efficiency of the players is analysed.
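The EM algorithm for a finite mixture of regressions can be sketched as below. This is a generic two-component example with synthetic data and made-up coefficients, not the authors' volleyball model: the E-step computes each point's responsibility under each component, and the M-step runs weighted least squares per component.

```python
# EM for a two-component finite mixture of linear regressions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 1000
x = rng.uniform(0, 10, size=n)
comp = rng.random(n) < 0.5
y = np.where(comp, 1.0 + 3.0 * x, 8.0 - 1.0 * x) + rng.normal(0.0, 1.0, n)

X = np.column_stack([np.ones(n), x])
betas = np.array([[0.0, 1.0], [5.0, -0.5]])   # initial (intercept, slope) guesses
sigmas = np.array([2.0, 2.0])
weights = np.array([0.5, 0.5])

for _ in range(200):
    # E-step: posterior probability that each point belongs to each component
    dens = np.stack([w * norm.pdf(y, X @ b, s)
                     for w, b, s in zip(weights, betas, sigmas)])
    resp = dens / dens.sum(axis=0)

    # M-step: weighted least squares and weighted residual variance
    for k in range(2):
        r = resp[k]
        betas[k] = np.linalg.solve(X.T @ (X * r[:, None]), X.T @ (r * y))
        sigmas[k] = np.sqrt(np.sum(r * (y - X @ betas[k]) ** 2) / r.sum())
    weights = resp.mean(axis=1)

print(np.round(betas, 2))  # slopes near 3 and -1
```

In practice one would also monitor the log-likelihood for convergence and try several starting values, since EM can stop at local maxima.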


2021 ◽  
Author(s):  
Jose Pina-Sánchez ◽  
David Buil-Gil ◽  
Ian Brunton-Smith ◽  
Alexandru Cernat

Objectives: Assess the extent to which measurement error in police recorded crime rates impacts the estimates of regression models exploring the causes and consequences of crime.

Methods: We focus on linear models where crime rates are included either as the response or as an explanatory variable, in their original scale or log-transformed. Two measurement error mechanisms are considered: systematic errors in the form of under-recorded crime, and random errors in the form of recording inconsistencies across areas. The extent to which such measurement error mechanisms impact model parameters is demonstrated algebraically, using formal notation, and graphically, using simulations.

Results: Most coefficients and measures of uncertainty from models where crime rates are included in their original scale are severely biased. In many cases, however, this problem can be minimised, or altogether eliminated, by log-transforming crime rates. This transforms the multiplicative measurement error observed in police recorded crime rates into a less harmful additive mechanism.

Conclusions: The validity of findings from regression models where police recorded crime rates are used in their original scale is called into question. In interpreting the large evidence base exploring the effects and consequences of crime using police statistics, we urge researchers to consider the biasing effects shown here. Equally, we urge researchers to log-transform crime rates before introducing them into statistical models.
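The log-transformation argument can be stated in one line: if the recorded crime rate $\tilde{C}_i$ relates to the true rate $C_i$ through a multiplicative error $\varepsilon_i > 0$, then

```latex
\tilde{C}_i = C_i\,\varepsilon_i
\;\;\Longrightarrow\;\;
\log \tilde{C}_i = \log C_i + \log \varepsilon_i ,
```

so on the log scale the error enters additively; if $\log \varepsilon_i$ has constant mean across areas, it is largely absorbed into the intercept rather than biasing the slope coefficients.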


2019 ◽  
Vol 67 (4) ◽  
pp. 283-303
Author(s):  
Chettapong Janya-anurak ◽  
Thomas Bernard ◽  
Jürgen Beyerer

Many industrial and environmental processes are characterized as complex spatio-temporal systems. Such systems, known as distributed parameter systems (DPSs), are usually highly complex, and it is difficult to establish the relation between model inputs, model outputs and model parameters. Moreover, the solutions of physics-based models commonly differ from the measurements. In this work, appropriate Uncertainty Quantification (UQ) approaches are selected and combined systematically to analyze and identify systems. There are, however, two main challenges when applying UQ approaches to nonlinear distributed parameter systems: (1) how the uncertainties are modeled and (2) the computational effort, as conventional methods require numerous evaluations of the model to compute the probability density function of the response. This paper presents a framework to address these two issues. Within the Bayesian framework, incomplete knowledge about the system is treated as uncertainty of the system. The uncertainties are represented by random variables, whose probability density function is obtained by converting the knowledge of the parameters using the Principle of Maximum Entropy. The generalized Polynomial Chaos (gPC) expansion is employed to reduce the computational effort. The proposed framework, using gPC based on Bayesian UQ, is capable of analyzing systems systematically and of reducing the disagreement between model predictions and measurements of the real processes to fulfill user-defined performance criteria. The efficiency of the framework is assessed by applying it to a benchmark model (the neutron diffusion equation) and to a model of a complex rheological forming process. These applications illustrate that the framework can systematically analyze the system and optimally calibrate the model parameters.
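How a gPC expansion reduces the cost of computing response statistics can be shown in one dimension. The sketch below uses a toy model $u(\xi) = e^{\xi}$ with $\xi \sim N(0,1)$ (not one of the paper's applications): the response is projected onto probabilists' Hermite polynomials with Gauss quadrature, and the mean and variance are then read directly off the coefficients instead of being estimated by Monte Carlo.

```python
# One-dimensional gPC: expand u(xi), xi ~ N(0,1), in probabilists'
# Hermite polynomials He_k, which satisfy E[He_j He_k] = k! * delta_jk.
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial

order, nquad = 8, 40
nodes, wts = He.hermegauss(nquad)      # Gauss nodes/weights for exp(-x^2/2)
wts = wts / np.sqrt(2 * np.pi)         # normalize to the N(0,1) density

u = np.exp(nodes)                      # model evaluated once per quadrature node

# Projection: c_k = E[u(xi) He_k(xi)] / k!
coeffs = np.array([
    np.sum(wts * u * He.hermeval(nodes, [0.0] * k + [1.0])) / factorial(k)
    for k in range(order + 1)
])

mean_gpc = coeffs[0]
var_gpc = sum(coeffs[k] ** 2 * factorial(k) for k in range(1, order + 1))
print(round(float(mean_gpc), 4), round(float(var_gpc), 4))
# exact values: mean = e^{1/2} ~ 1.6487, variance = e^2 - e ~ 4.6708
```

Here 40 model evaluations suffice, whereas a Monte Carlo estimate of the same moments to four digits would need millions of samples; this is the computational saving the abstract refers to.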


Mathematics ◽  
2021 ◽  
Vol 9 (5) ◽  
pp. 548
Author(s):  
Yuri S. Popkov

The problem of randomized maximum entropy estimation of the probability density function of random model parameters, given real data and measurement noise, is formulated. The estimation procedure maximizes an information entropy functional on a set of integral equalities that depend on the real data set. The technique of Gâteaux derivatives is developed to solve this problem in analytical form. The probability density function estimates depend on Lagrange multipliers, which are obtained by balancing the model's output with the real data. A global theorem on the implicit dependence of these Lagrange multipliers on the length of the data sample is established using the rotation of homotopic vector fields. A theorem on the asymptotic efficiency of the randomized maximum entropy estimate in terms of stationary Lagrange multipliers is formulated and proved. The proposed method is illustrated on the problem of forecasting the evolution of the thermokarst lake area in Western Siberia.
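Schematically (a generic statement of the maximum-entropy program, not the paper's exact functional), the estimate solves

```latex
\max_{p}\; H[p] = -\int p(\theta)\,\ln p(\theta)\,\mathrm{d}\theta
\quad \text{s.t.} \quad
\int p(\theta)\, g_m(\theta)\,\mathrm{d}\theta = d_m,
\quad m = 1,\dots,M,
```

whose solution takes the exponential form

```latex
p^{*}(\theta) \propto \exp\!\Bigl(-\sum_{m=1}^{M} \lambda_m\, g_m(\theta)\Bigr),
```

with the Lagrange multipliers $\lambda_m$ fixed by substituting $p^{*}$ back into the constraints, that is, by balancing the model's output with the data.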


2020 ◽  
pp. 65-92
Author(s):  
Bendix Carstensen

This chapter evaluates regression models, focusing on the normal linear regression model. The normal linear regression model establishes a relationship between a quantitative response (also called outcome or dependent) variable, assumed to be normally distributed, and one or more explanatory (also called regression, predictor, or independent) variables about which no distributional assumptions are made. The model is usually referred to as 'the general linear model'. The chapter then differentiates between simple linear regression and multiple regression. The term 'simple linear regression' covers the regression model where there is one response variable and one explanatory variable, assuming a linear relationship between the two. The chapter also discusses the model formulae in R; generalized linear models; collinearity and aliasing; and logarithmic transformations.
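Simple linear regression, as defined above, can be illustrated numerically in a few lines (shown here in Python with synthetic data and made-up coefficients; the chapter itself works with R model formulae such as `y ~ x`):

```python
# Simple linear regression by ordinary least squares: one response,
# one explanatory variable, assuming a linear relationship.
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(0, 10, 200)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5, size=x.size)  # true line y = 1 + 2x

# Closed-form OLS estimates: slope = cov(x, y) / var(x)
slope = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
intercept = y.mean() - slope * x.mean()
print(round(intercept, 2), round(slope, 2))  # near 1 and 2
```

Multiple regression generalizes this by adding further explanatory variables, at which point the matrix form of the normal equations replaces the two closed-form expressions.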

