On Constraints in Parameter Estimation and Model Misspecification

Author(s):  
Christ D. Richmond
2020 ◽  
Vol 34 (04) ◽  
pp. 5692-5699
Author(s):  
Zheyan Shen ◽  
Peng Cui ◽  
Tong Zhang ◽  
Kun Kuang

We consider the problem of learning linear prediction models under model misspecification bias. In such cases, collinearity among input variables may inflate the error of parameter estimation, resulting in unstable predictions when the training and test distributions do not match. In this paper we theoretically analyze this fundamental problem and propose a sample reweighting method that reduces collinearity among the input variables. Our method can be seen as a pretreatment of the data that improves the conditioning of the design matrix, and it can then be combined with any standard learning method for parameter estimation and variable selection. Empirical studies on both simulated and real datasets demonstrate the effectiveness of our method in achieving more stable performance across differently distributed data.
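The instability described in the abstract can be reproduced in a few lines. The following is a minimal sketch (not the authors' code), assuming NumPy; the data-generating parameters are illustrative. Two nearly collinear inputs inflate the condition number of the design matrix, and repeated least-squares fits under fresh noise show the resulting variance of the coefficient estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Two nearly collinear inputs: x2 is x1 plus a small perturbation.
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)
X = np.column_stack([x1, x2])

# The design matrix is badly conditioned under collinearity.
print(np.linalg.cond(X))

# Repeated OLS fits under fresh noise: the coefficient estimates swing widely
# even though the true coefficients never change.
beta_true = np.array([1.0, 1.0])
estimates = []
for _ in range(200):
    y = X @ beta_true + 0.1 * rng.normal(size=n)
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    estimates.append(b)
print(np.std(estimates, axis=0))  # inflated standard deviation per coefficient
```

Reweighting or otherwise decorrelating the inputs before fitting lowers this condition number, which is exactly the pretreatment role the abstract describes.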


1981 ◽  
Vol 18 (1) ◽  
pp. 87-93 ◽  
Author(s):  
Frank J. Carmone ◽  
Paul E. Green

Most applications of conjoint analysis have emphasized main-effects models, largely because fewer data points are needed to fit that type of model at the individual level. The authors suggest that such simplifications can lead to poor predictions when the underlying utility functions depart from the simplicity of a main-effects model. They also show how compromise designs, which allow orthogonal estimation of selected two-way interactions (as well as main effects), can provide a more general experimental design in cases where a specified set of two-way interactions is suspected.
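A small worked example may clarify why a main-effects-only fit fails when a two-way interaction drives the utilities. This is an illustrative sketch assuming NumPy, not the authors' design: in a full 2^3 factorial with effects coding, the interaction column for attributes 0 and 1 is orthogonal to the main effects, so adding it recovers the utilities exactly, while the main-effects model leaves a constant-magnitude residual.

```python
import numpy as np

# Full-factorial design on three two-level attributes, coded -1/+1.
levels = np.array([-1.0, 1.0])
design = np.array([[a, b, c] for a in levels for b in levels for c in levels])

# True utilities include a two-way interaction between attributes 0 and 1.
beta = np.array([1.0, 0.8, 0.5])
utility = design @ beta + 0.9 * design[:, 0] * design[:, 1]

# Main-effects-only fit vs. a fit that also estimates the 0x1 interaction.
X_main = np.column_stack([np.ones(len(design)), design])
X_int = np.column_stack([X_main, design[:, 0] * design[:, 1]])
b_main, *_ = np.linalg.lstsq(X_main, utility, rcond=None)
b_int, *_ = np.linalg.lstsq(X_int, utility, rcond=None)

rmse_main = np.sqrt(np.mean((X_main @ b_main - utility) ** 2))
rmse_int = np.sqrt(np.mean((X_int @ b_int - utility) ** 2))
print(rmse_main, rmse_int)  # 0.9 (the interaction weight) vs. ~0
```

Because the interaction column is orthogonal to the main-effects columns here, estimating it costs nothing in terms of the main-effect estimates, which is the appeal of designs that admit orthogonal estimation of selected two-way interactions.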


2020 ◽  
Vol 34 (04) ◽  
pp. 4485-4492
Author(s):  
Kun Kuang ◽  
Ruoxuan Xiong ◽  
Peng Cui ◽  
Susan Athey ◽  
Bo Li

For many machine learning algorithms, two main assumptions are required to guarantee performance. One is that the test data are drawn from the same distribution as the training data, and the other is that the model is correctly specified. In real applications, however, we often have little prior knowledge of the test data and of the underlying true model. Under model misspecification, agnostic distribution shift between training and test data leads to inaccurate parameter estimation and instability of prediction across unknown test data. To address these problems, we propose a novel Decorrelated Weighting Regression (DWR) algorithm which jointly optimizes a variable decorrelation regularizer and a weighted regression model. The variable decorrelation regularizer estimates a weight for each sample such that the variables are decorrelated on the weighted training data. These weights are then used in the weighted regression to improve the accuracy of estimating the effect of each variable, thus helping to improve the stability of prediction across unknown test data. Extensive experiments clearly demonstrate that our DWR algorithm can significantly improve the accuracy of parameter estimation and the stability of prediction under model misspecification and agnostic distribution shift.
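A hedged sketch of the decorrelation idea, not the DWR optimizer itself: for roughly Gaussian inputs, weighting each sample by the ratio of the product of fitted marginal densities to the fitted joint density pushes the weighted distribution toward independent inputs, shrinking the weighted correlation. The data-generating process and density model below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
# Two inputs correlated through a shared latent factor (correlation ~ 0.5).
z = rng.normal(size=n)
X = np.column_stack([z + rng.normal(size=n), z + rng.normal(size=n)])

def weighted_corr(w, X):
    """Correlation between the two columns under sample weights w."""
    w = w / w.sum()
    mu = w @ X
    Xc = X - mu
    C = (Xc * w[:, None]).T @ Xc
    return C[0, 1] / np.sqrt(C[0, 0] * C[1, 1])

def gauss1(x, m, v):
    """Univariate normal density."""
    return np.exp(-(x - m) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)

# Density-ratio weights: fitted marginals' product over the fitted joint.
mu = X.mean(axis=0)
S = np.cov(X.T)
Xc = X - mu
quad = np.einsum('ij,jk,ik->i', Xc, np.linalg.inv(S), Xc)
joint = np.exp(-0.5 * quad) / (2 * np.pi * np.sqrt(np.linalg.det(S)))
w = gauss1(X[:, 0], mu[0], S[0, 0]) * gauss1(X[:, 1], mu[1], S[1, 1]) / joint

before = weighted_corr(np.ones(n), X)
after = weighted_corr(w, X)
print(before, after)  # the weighted correlation is driven toward zero
```

DWR itself learns the weights jointly with the regression rather than from a parametric density ratio; this sketch only shows why decorrelating weights exist at all.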


Author(s):  
H. Thomas Banks ◽  
Jared Catenacci ◽  
Shuhua Hu

Normalized differences of several adjacent observations, referred to as pseudo-measurement errors in this paper, are used in so-called difference-based estimation methods as building blocks for the variance estimate of measurement errors. Numerical results demonstrate that pseudo-measurement errors can serve the role of measurement errors. Based on this information, we propose using pseudo-measurement errors to determine an appropriate statistical model and then to investigate whether there is a mathematical model misspecification or error. We also propose using the information provided by pseudo-measurement errors to quantify uncertainty in parameter estimation via bootstrapping methods. A number of numerical examples are given to illustrate the effectiveness of these proposed methods.
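As an illustration of the idea (a sketch, not the authors' implementation), a Rice-type difference-based estimator built from pseudo-measurement errors recovers the noise standard deviation of a smoothly varying signal; the signal, noise level, and sample size below are assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
t = np.linspace(0.0, 1.0, n)
sigma = 0.3
# Smooth signal plus iid measurement noise.
y = np.sin(2 * np.pi * t) + sigma * rng.normal(size=n)

# Pseudo-measurement errors: normalized differences of adjacent observations.
# The smooth trend nearly cancels in the difference, leaving scaled noise
# (each difference of two iid errors has variance 2*sigma^2, hence the sqrt(2)).
pseudo = np.diff(y) / np.sqrt(2.0)

# Difference-based variance estimate from the pseudo-measurement errors.
sigma_hat = np.sqrt(np.mean(pseudo ** 2))
print(sigma_hat)  # close to the true sigma = 0.3
```

The pseudo-measurement errors themselves can then be inspected (e.g., for normality or heteroscedasticity) to choose a statistical model, which is the diagnostic use the abstract proposes.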


Optimization ◽  
1976 ◽  
Vol 7 (5) ◽  
pp. 665-672
Author(s):  
H. Burke ◽  
C. Hennig ◽  
W. H. Schmidt

2019 ◽  
Vol 24 (4) ◽  
pp. 492-515 ◽  
Author(s):  
Ken Kelley ◽  
Francis Bilson Darku ◽  
Bhargab Chattopadhyay
