Bayesian Versus Maximum Likelihood Estimation of Treatment Effects in Bivariate Probit Instrumental Variable Models

2018 ◽  
Vol 7 (3) ◽  
pp. 651-659 ◽  
Author(s):  
Florian M. Hollenbach ◽  
Jacob M. Montgomery ◽  
Adriana Crespo-Tenorio

Bivariate probit models are a common choice for scholars wishing to estimate causal effects in instrumental variable models where both the treatment and outcome are binary. However, standard maximum likelihood approaches for estimating bivariate probit models are problematic. Numerical routines in popular software suites frequently generate inaccurate parameter estimates, and even when the parameters are estimated correctly, maximum likelihood routines provide no straightforward way to produce estimates of uncertainty for causal quantities of interest. In this note, we show that adopting a Bayesian approach provides more accurate estimates of key parameters and facilitates the direct calculation of causal quantities along with their attendant measures of uncertainty.
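The maximum likelihood problem the note describes can be made concrete with a small sketch. Below is a minimal, illustrative Python implementation of the bivariate probit IV log-likelihood fitted to simulated data; the instrument strength (0.8), treatment effect (0.7), and error correlation (0.5) are invented for the simulation, and this is neither the authors' code nor their Bayesian estimator:

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 300
z = rng.normal(size=n)                                   # instrument
errs = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n)
t = (0.8 * z + errs[:, 0] > 0).astype(float)             # binary treatment
y = (0.3 + 0.7 * t + errs[:, 1] > 0).astype(float)       # binary outcome

def negloglik(par):
    g, a, b = par[:3]
    rho = np.tanh(par[3])                                # keep rho in (-1, 1)
    q1, q2 = 2 * t - 1, 2 * y - 1                        # sign flips for the bivariate CDF
    pts = np.column_stack([q1 * g * z, q2 * (a + b * t)])
    p = np.empty(n)
    for s in (1.0, -1.0):                                # group obs by sign of q1*q2
        m = q1 * q2 == s
        if m.any():
            cov = [[1.0, s * rho], [s * rho, 1.0]]
            p[m] = multivariate_normal.cdf(pts[m], mean=[0, 0], cov=cov)
    return -np.log(np.clip(p, 1e-12, None)).sum()

start = np.array([0.5, 0.0, 0.5, 0.0])
res = minimize(negloglik, start, method="Nelder-Mead", options={"maxiter": 300})
```

Each likelihood evaluation requires many bivariate normal CDF calls, which is one source of the numerical fragility the note points to; a Bayesian sampler faces the same likelihood but returns draws from which causal quantities and their uncertainty fall out directly.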

2017 ◽  
Vol 41 (6) ◽  
pp. 456-471 ◽  
Author(s):  
Yinhong He ◽  
Ping Chen ◽  
Yong Li ◽  
Shumei Zhang

Online calibration techniques are widely employed to calibrate new items because of their practical advantages. Method A is the simplest online calibration method and has recently attracted much attention from researchers. However, a key assumption of Method A is that it treats person-parameter estimates θ̂ (obtained by maximum likelihood estimation [MLE]) as their true values θ; the deviation of the estimates θ̂ from the true values can therefore yield inaccurate item calibration when that deviation is nonignorable. To improve the performance of Method A, a new method, MLE-LBCI-Method A, is proposed. This new method combines a modified Lord's bias-correction method (maximum likelihood estimation-Lord's bias-correction with iteration [MLE-LBCI]) with the original Method A in an effort to correct the deviation of θ̂, which may adversely affect item calibration precision. Two simulation studies were carried out to explore the performance of both MLE-LBCI and MLE-LBCI-Method A under several scenarios. Simulation results showed that MLE-LBCI significantly improved the MLE ability estimates, and MLE-LBCI-Method A outperformed Method A in almost all experimental conditions.
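To see why the deviation of θ̂ matters, here is a minimal sketch (not the paper's MLE-LBCI code) of the plain MLE ability estimate that Method A treats as the true value, for a 2PL model with known item parameters; the item values and response pattern below are invented for illustration:

```python
import numpy as np

def mle_theta(responses, a, b, iters=50):
    """Newton-Raphson MLE of ability theta under a 2PL model with
    known item discriminations a and difficulties b."""
    theta = 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # P(correct | theta)
        grad = np.sum(a * (responses - p))           # score (first derivative)
        info = np.sum(a**2 * p * (1.0 - p))          # Fisher information
        theta += grad / info
    return theta

# hypothetical 5-item test and one examinee's response pattern
a = np.ones(5)
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
resp = np.array([1, 1, 1, 0, 0])
theta_hat = mle_theta(resp, a, b)   # Method A would treat this as the true theta
```

On short tests such a θ̂ is biased, and that deviation from θ is exactly what the paper's MLE-LBCI step is designed to correct before the item calibration runs.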


2017 ◽  
Vol 12 (02) ◽  
pp. 1750010 ◽  
Author(s):  
K. FERGUSSON

A discounted equity index is computed as the ratio of an equity index to the accumulated savings account denominated in the same currency. In this way, discounting provides a natural way of separating the modeling of the short rate from the market price of risk component of the equity index. In this vein, we investigate the applicability of maximum likelihood estimation to stochastic models of a discounted equity index, providing explicit formulae for parameter estimates. We restrict our consideration to two important index models, namely the Black–Scholes model and the minimal market model of Platen, each having an explicit formula for the transition density function. Explicit formulae for estimates of the model parameters and their standard errors are derived and are used in fitting the two models to US data. Further, we demonstrate the effect of the model choice on the no-arbitrage assumption employed in risk neutral pricing.
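As a concrete illustration of the kind of explicit estimator the paper derives, the standard Black-Scholes (geometric Brownian motion) case admits closed-form MLEs from equally spaced log returns. The drift, volatility, and sample size below are invented for the simulation; the paper's own formulae for the discounted index and for Platen's minimal market model are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, dt, n = 0.08, 0.2, 1 / 252, 5000   # illustrative parameters

# simulate daily log returns of a GBM index: r ~ N((mu - sigma^2/2) dt, sigma^2 dt)
r = rng.normal((mu - sigma**2 / 2) * dt, sigma * np.sqrt(dt), n)

# closed-form maximum likelihood estimates
sigma2_hat = np.mean((r - r.mean())**2) / dt   # MLE of sigma^2
mu_hat = r.mean() / dt + sigma2_hat / 2        # MLE of the drift
```

Note the asymmetry the example makes visible: the volatility estimate converges quickly with the number of observations, while the drift estimate depends only on the total time span and stays noisy, which is why closed-form standard errors matter when fitting such models to index data.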


2021 ◽  
pp. 263-280
Author(s):  
Timothy E. Essington

The chapter “Skills for Fitting Models to Data” provides worked examples of the parameter-estimation and model-selection examples presented in Part 2, both in spreadsheets and in R. The chapter presumes that the reader is reasonably comfortable setting up spreadsheets and R code and applying the modeling skills presented in Chapter 15. It begins with maximum likelihood estimation, presenting first a direct method and then numerical methods that usually yield more precise maximum likelihood parameter estimates. It then examines how to estimate parameters that do not appear in probability functions (e.g., a model in which the survivorship rate is density dependent). The chapter concludes by discussing likelihood profiles.
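The chapter works in spreadsheets and R; as a language-neutral illustration of the direct-versus-numerical contrast, and of a likelihood profile, here is a sketch in Python with invented Poisson count data (not the book's own example):

```python
import numpy as np
from scipy.optimize import minimize_scalar

counts = np.array([2, 4, 3, 5, 1, 3, 4, 2])    # hypothetical count data

def nll(lam):
    # Poisson negative log-likelihood, dropping the constant log(y!) term
    return np.sum(lam - counts * np.log(lam))

# direct method: the Poisson MLE has a closed form, the sample mean
direct = counts.mean()

# numerical method: minimize the negative log-likelihood
numeric = minimize_scalar(nll, bounds=(0.01, 20), method="bounded").x

# likelihood profile: evaluate the objective over a grid of lambda values
grid = np.linspace(1, 6, 51)
profile = np.array([nll(l) for l in grid])
```

For models without closed-form solutions only the numerical route is available, and the profile is what one inspects to build likelihood-based confidence intervals.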


2013 ◽  
Vol 172 (1) ◽  
pp. 77-89 ◽  
Author(s):  
Honglin Wang ◽  
Emma M. Iglesias ◽  
Jeffrey M. Wooldridge

2004 ◽  
Vol 1 (1) ◽  
pp. 109-118
Author(s):  
Ibrahim M. Abdalla ◽  
Mohamed Y. Hassan

In this paper the Lorenz curve proposed by Abdalla and Hassan is fitted to grouped income data from the 1997 Abu-Dhabi Emirate family expenditure survey, using the maximum likelihood estimation method and assuming that income shares follow a Dirichlet distribution. Employing Abdalla and Hassan's model together with some known parametric Lorenz models, estimates based on maximum likelihood are compared with those based on nonlinear least squares techniques. Given the nature of the distribution of income and the distinct characteristics of the Abu-Dhabi Emirate, it is evident that the maximum likelihood approach produces parameter estimates comparable to those of the nonlinear least squares techniques, but with higher standard errors and poorer goodness of fit. Under both estimation techniques, the model proposed by Abdalla and Hassan performed better than some well-known parametric models in the literature.
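As a sketch of the nonlinear-least-squares side of the comparison, the following fits a generic two-parameter Lorenz form to invented grouped shares and computes the implied Gini coefficient; both the functional form and the data are illustrative, not Abdalla and Hassan's model or the Abu-Dhabi survey data:

```python
import numpy as np
from scipy.optimize import curve_fit

# invented grouped data: population share p, cumulative income share L(p)
p = np.array([0.2, 0.4, 0.6, 0.8])
L = np.array([0.05, 0.15, 0.32, 0.56])

def lorenz(p, a, b):
    # generic parametric Lorenz form (illustrative only): L(0)=0, L(1)=1
    return p**a * (1.0 - (1.0 - p)**b)

(a_hat, b_hat), _ = curve_fit(lorenz, p, L, p0=[1.0, 1.0])

# Gini coefficient: 1 minus twice the area under the fitted Lorenz curve
grid = np.linspace(0.0, 1.0, 1001)
vals = lorenz(grid, a_hat, b_hat)
area = np.sum((vals[:-1] + vals[1:]) / 2 * np.diff(grid))
gini = 1.0 - 2.0 * area
```

The maximum likelihood alternative studied in the paper instead treats the group income shares as one draw from a Dirichlet distribution whose parameters depend on the Lorenz model, which is what allows likelihood-based standard errors to be reported alongside the fit.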


2010 ◽  
Vol 230 (5) ◽  
Author(s):  
Andreas Ziegler

Summary: This paper analyzes small-sample properties of several versions of z-tests in multinomial probit models under simulated maximum likelihood estimation. Our Monte Carlo experiments show that z-tests on utility-function coefficients provide more robust results than z-tests on variance-covariance parameters. As expected, both the number of observations and the number of random draws in the incorporated Geweke-Hajivassiliou-Keane (GHK) simulator have, on average, a positive impact on the agreement between the shares of type I errors and the nominal significance levels. Furthermore, an increase in the number of observations leads to the expected decrease in the shares of type II errors, whereas the number of random draws in the GHK simulator surprisingly has no significant effect in this respect. One main result of our study is that the use of the robust version of the simulated z-test statistics is not systematically more favorable than the use of other versions. However, the application of z-test statistics that exclusively use the Hessian matrix of the simulated log-likelihood function to estimate the information matrix often leads to substantial computational problems.
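The GHK simulator at the center of these experiments can be sketched in the bivariate case. The following minimal illustration (not the paper's implementation) estimates P(X1 < a, X2 < b) for a standard bivariate normal by drawing the first error from its truncated distribution via the inverse CDF and averaging the resulting conditional probabilities:

```python
import numpy as np
from scipy.stats import norm

def ghk_biv(a, b, rho, draws=20000, seed=0):
    """GHK estimate of P(X1 < a, X2 < b) for a standard bivariate
    normal with correlation rho, using its Cholesky factorization."""
    rng = np.random.default_rng(seed)
    p1 = norm.cdf(a)                             # P(e1 < a)
    u = rng.uniform(low=1e-12, size=draws)
    e1 = norm.ppf(u * p1)                        # draw e1 truncated to (-inf, a)
    # conditional probability for the second component given e1
    p2 = norm.cdf((b - rho * e1) / np.sqrt(1 - rho**2))
    return np.mean(p1 * p2)
```

In higher dimensions the same recursion runs component by component, and the number of draws controls the simulation noise in the log-likelihood, which is the knob whose effect on test sizes the paper's Monte Carlo experiments vary.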

