Improved Estimation of Coefficients in Regression Models with Incomplete Prior Information

1983 ◽  
Vol 25 (8) ◽  
pp. 775-782 ◽  
Author(s):  
V. K. Srivastava ◽  
A. K. Srivastava

Author(s):  
Kento Terashima ◽  
Hirotaka Takano ◽  
Junichi Murata

Reinforcement learning is applicable to complex or unknown problems because the solution is found by trial-and-error search. However, the calculation time of this search grows as the scale of the problem increases. Several methods have therefore been proposed that use prior information about the problem to reduce calculation time. This paper improves a previously proposed method that uses options as prior information. To keep learning fast even when the options are wrong, methods for option correction are proposed: forgetting the learned policy and extending the initiation sets.
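As a rough illustration of how prior information can shorten trial-and-error search, the sketch below (not from the paper; the chain MDP, constants, and the prior-initialization trick are all illustrative) runs tabular Q-learning on a small chain, biasing the initial Q-values toward the action a prior recommends:

```python
import numpy as np

# Tiny deterministic chain MDP: move left/right, reward 1 at the goal state.
N_STATES, GOAL = 6, 5
MOVES = (-1, +1)

def step(s, a):
    s2 = min(max(s + MOVES[a], 0), N_STATES - 1)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

def q_learning(prior_action=None, episodes=200, alpha=0.5, gamma=0.9, eps=0.1):
    rng = np.random.default_rng(0)
    Q = np.zeros((N_STATES, 2))
    if prior_action is not None:
        # Prior information biases the initial search toward one action,
        # cutting down the trial-and-error phase; a wrong prior would be
        # gradually overwritten by the updates below.
        Q[:, prior_action] = 0.5
    for _ in range(episodes):
        s = 0
        for _ in range(50):
            a = int(rng.integers(2)) if rng.random() < eps else int(Q[s].argmax())
            s2, r, done = step(s, a)
            target = r if done else r + gamma * Q[s2].max()
            Q[s, a] += alpha * (target - Q[s, a])
            s = s2
            if done:
                break
    return Q

Q = q_learning(prior_action=1)     # prior: "moving right is promising"
policy = Q.argmax(axis=1)
print(policy[:GOAL])
```

Gradually overwriting a misleading prior with fresh experience is analogous in spirit to the paper's "forgetting the policy" correction, though the paper's actual mechanism operates on options rather than raw Q-values.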


2020 ◽  
Vol 2020 ◽  
pp. 1-7
Author(s):  
Manickavasagar Kayanan ◽  
Pushpakanthie Wijekoon

Among the many variable selection methods, LASSO is the most desirable estimation procedure for handling regularization and variable selection simultaneously in high-dimensional linear regression models when multicollinearity exists among the predictor variables. Since LASSO is unstable under high multicollinearity, the elastic-net (Enet) estimator has been used to overcome this issue. According to the literature, the estimation of regression parameters can be improved by adding prior information about the regression coefficients to the model, available in the form of exact or stochastic linear restrictions. In this article, we propose a stochastic restricted LASSO-type estimator (SRLASSO) that incorporates stochastic linear restrictions. Furthermore, we compare the performance of SRLASSO with LASSO and Enet under the root mean square error (RMSE) and mean absolute prediction error (MAPE) criteria, based on a Monte Carlo simulation study. Finally, a real-world example is used to demonstrate the performance of SRLASSO.
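A stochastic linear restriction can be folded into a LASSO fit by mixed-estimation-style data augmentation: stack weighted restriction rows onto X and y, then run an ordinary LASSO on the augmented system. The sketch below shows the idea only; it is not the authors' SRLASSO estimator, and the coordinate-descent solver, weight, and simulated data are all assumptions:

```python
import numpy as np

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=500):
    """Plain cyclic coordinate-descent LASSO for
    0.5 * ||y - X b||^2 + lam * ||b||_1 (unstandardized, illustrative)."""
    beta = np.zeros(X.shape[1])
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            resid_j = y - X @ beta + X[:, j] * beta[j]   # partial residual
            beta[j] = soft(X[:, j] @ resid_j, lam) / col_sq[j]
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
beta_true = np.array([2.0, 0.0, -1.0])
y = X @ beta_true + 0.5 * rng.standard_normal(100)

# Stochastic restriction beta_0 + beta_2 = 1 (true here), appended as a
# weighted pseudo-observation in the style of mixed estimation.
R, r, w = np.array([[1.0, 0.0, 1.0]]), np.array([1.0]), 3.0
X_aug = np.vstack([X, w * R])
y_aug = np.concatenate([y, w * r])

beta_hat = lasso_cd(X_aug, y_aug, lam=5.0)
print(np.round(beta_hat, 2))   # close to the true coefficients
```

The weight `w` plays the role of the restriction's precision: the larger it is, the more the fit is pulled toward satisfying the restriction.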


2018 ◽  
Vol 2018 ◽  
pp. 1-8 ◽  
Author(s):  
Manickavasagar Kayanan ◽  
Pushpakanthie Wijekoon

The analysis of misspecification was extended to the recently introduced stochastic restricted biased estimators when multicollinearity exists among the explanatory variables. The Stochastic Restricted Ridge Estimator (SRRE), Stochastic Restricted Almost Unbiased Ridge Estimator (SRAURE), Stochastic Restricted Liu Estimator (SRLE), Stochastic Restricted Almost Unbiased Liu Estimator (SRAULE), Stochastic Restricted Principal Component Regression Estimator (SRPCRE), Stochastic Restricted r-k (SRrk) class estimator, and Stochastic Restricted r-d (SRrd) class estimator were examined in the misspecified regression model due to missing relevant explanatory variables when incomplete prior information of the regression coefficients is available. Further, the superiority conditions between estimators and their respective predictors were obtained in the mean square error matrix (MSEM) sense. Finally, a numerical example and a Monte Carlo simulation study were used to illustrate the theoretical findings.
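One common way to combine a stochastic restriction with ridge-type shrinkage is to add the restriction's precision to the ridge-regularized normal equations, in the spirit of Theil–Goldberger mixed estimation. The sketch below is an illustrative combination under simulated multicollinearity, not necessarily the exact SRRE studied in the paper; the data, ridge constant, and restriction variance are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
x1 = rng.standard_normal(n)
x2 = x1 + 0.01 * rng.standard_normal(n)    # severe multicollinearity
X = np.column_stack([x1, x2])
beta_true = np.array([1.0, 1.0])
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# Stochastic restriction r = R beta + v with v ~ N(0, W): here beta_1 = beta_2.
R = np.array([[1.0, -1.0]])
r = np.array([0.0])
W = np.array([[1e-4]])

k = 0.1                                     # ridge constant
S = X.T @ X
W_inv = np.linalg.inv(W)
A = S + k * np.eye(2) + R.T @ W_inv @ R     # data + ridge + restriction precision
b_srre = np.linalg.solve(A, X.T @ y + R.T @ W_inv @ r)
b_ols = np.linalg.solve(S, X.T @ y)         # unstable under collinearity
print(b_srre, b_ols)
```

Because the restriction pins down the nearly unidentified difference between the two collinear coefficients, the restricted ridge-type estimate stays close to the truth while plain OLS is highly variable.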


1988 ◽  
Vol 5 (1) ◽  
pp. 49-57 ◽  
Author(s):  
A.K. Kashyap ◽  
P.A.V.B. Swamy ◽  
J.S. Mehta ◽  
R.D. Porter

2018 ◽  
Author(s):  
Paul-Christian Bürkner ◽  
Emmanuel Charpentier

Ordinal predictors are commonly used in regression models. They are often incorrectly treated as either nominal or metric, thus under- or overestimating the information they contain. Such practices may lead to worse inference and predictions than methods that are specifically designed for this purpose. We propose a new method for modeling ordinal predictors that applies in situations in which it is reasonable to assume their effects to be monotonic. The parameterization of such monotonic effects is realized in terms of a scale parameter $b$ representing the direction and size of the effect and a simplex parameter $\zeta$ modeling the normalized differences between categories. This ensures that predictions increase or decrease monotonically, while changes between adjacent categories may vary across categories. The formulation generalizes to interaction terms as well as multilevel structures. Monotonic effects may be applied not only to ordinal predictors but also to other discrete variables for which a monotonic relationship is plausible. In simulation studies, we show that the model is well calibrated and, in the case of monotonicity, has predictive performance similar to or even better than that of other approaches designed to handle ordinal predictors. Using Stan, we developed a Bayesian estimation method for monotonic effects that makes it possible to incorporate prior information and to check the assumption of monotonicity. We have implemented this method in the R package brms, so that fitting monotonic effects in a fully Bayesian framework is now straightforward.
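Under the parameterization described in the abstract, the effect of an ordinal category can be read as the scale parameter $b$ times the cumulative sum of the simplex $\zeta$, so the effect moves monotonically from 0 at the lowest category to $b$ at the highest. A minimal sketch follows; the scaling convention is a simplified reading of the abstract, not necessarily brms's exact internal formula:

```python
import numpy as np

def monotonic_effect(x, b, zeta):
    """Effect of ordinal category x (0-based): b times the cumulative sum
    of the simplex zeta, so 0 at the lowest category and b at the highest."""
    zeta = np.asarray(zeta)
    assert (zeta >= 0).all() and np.isclose(zeta.sum(), 1.0)
    return b * zeta[:x].sum()

zeta = np.array([0.5, 0.3, 0.2])   # normalized differences (4 categories)
b = 2.0                            # direction and total size of the effect
effects = [monotonic_effect(x, b, zeta) for x in range(4)]
print(effects)                     # monotonically increasing, ends at b
```

The simplex constraint is what guarantees monotonicity while still letting the step between adjacent categories vary, exactly the property the abstract describes.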


2016 ◽  
Vol 24 (3) ◽  
pp. 339-355 ◽  
Author(s):  
Carlisle Rainey

When facing small numbers of observations or rare events, political scientists often encounter separation, in which explanatory variables perfectly predict binary events or nonevents. In this situation, maximum likelihood provides implausible estimates and the researcher might want to incorporate some form of prior information into the model. The most sophisticated research uses Jeffreys’ invariant prior to stabilize the estimates. While Jeffreys’ prior has the advantage of being automatic, I show that it often provides too much prior information, producing smaller point estimates and narrower confidence intervals than even highly skeptical priors. To help researchers assess the amount of information injected by the prior distribution, I introduce the concept of a partial prior distribution and develop the tools required to compute the partial prior distribution of quantities of interest, estimate the subsequent model, and summarize the results.
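The effect of separation, and of stabilizing it with a prior, can be seen in a toy logit: with perfectly separated data the maximum-likelihood coefficient grows without bound, while a normal prior yields a finite estimate. In this sketch a simple ridge penalty stands in for the priors discussed above, and the data and prior scale are illustrative:

```python
import numpy as np

x = np.array([-2.0, -1.0, 1.0, 2.0])
y = np.array([0.0, 0.0, 1.0, 1.0])       # perfectly separated at x = 0

def fit(prior_sd=None, steps=2000, lr=0.1):
    """Gradient ascent on the (penalized) logistic log-likelihood;
    slope only, no intercept."""
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-b * x))
        grad = np.sum((y - p) * x)
        if prior_sd is not None:
            grad -= b / prior_sd ** 2    # gradient of a N(0, prior_sd^2) log-prior
        b += lr * grad
    return b

b_prior = fit(prior_sd=1.5)   # finite, moderate estimate
b_free = fit()                # no prior: the estimate keeps drifting upward
print(b_prior, b_free)
```

Varying `prior_sd` shows the point the abstract makes quantitatively: a tighter prior injects more information and pulls the estimate further toward zero.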

