The Economics of Labor Adjustment: Mind the Gap: Reply

2009 ◽  
Vol 99 (5) ◽  
pp. 2267-2276 ◽  
Author(s):  
Russell Cooper ◽  
Jonathan L. Willis

This note responds to Christian Bayer (2009). Cooper and Willis (2004), hereafter CW, find that the aggregate nonlinearities reported in Ricardo Caballero and Eduardo Engel (1993) and in Caballero, Engel, and John Haltiwanger (1997) reflect mismeasurement of the employment gap, not nonlinearities in plant-level adjustment. Bayer concludes that the CW result is not robust to alternative aggregate shock processes. We concur, but argue that the nonlinearity created by mismeasurement does not disappear; instead, it is directly related to the level of the aggregate shock. The CW findings are robust for the natural case of unobserved gaps. (JEL E24, J23)


2009 ◽  
Vol 99 (5) ◽  
pp. 2258-2266 ◽  
Author(s):  
Christian Bayer

This comment addresses a point raised in Russell Cooper and Jonathan Willis (2003, 2004) concerning whether the “gap approach” is appropriate for describing the adjustment of production factors. They show that this approach to labor adjustment, as applied in Ricardo J. Caballero, Eduardo Engel, and John C. Haltiwanger (1997) and in Caballero and Engel (1993), can falsely generate evidence in favor of nonconvex adjustment costs even when costs are quadratic. Simulating a dynamic model of firm-level employment decisions with quadratic adjustment costs and estimating a gap model from the simulated data, they identify two factors producing this spurious evidence: approximating dynamic adjustment targets by static ones, and estimating the static targets themselves. This comment reassesses whether the first factor indeed leads to spurious evidence in favor of fixed adjustment costs. We show that the numerical approximation of the productivity process is pivotal for Cooper and Willis's finding. With more precise approximations of the productivity process, it becomes rare to falsely reject the quadratic adjustment cost model because of the approximation of dynamic targets by static ones. (JEL E24, J3)
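Bayer's point about the numerical approximation of the productivity process can be illustrated with a standard tool for that job. Below is a hedged sketch (not the authors' code) of the Tauchen (1986) method for discretizing an AR(1) productivity process onto a finite grid; the persistence, volatility, and grid sizes are illustrative. A coarse grid visibly distorts the persistence implied by the discretized chain, which is the kind of approximation error at issue.

```python
# Illustrative sketch: Tauchen (1986) discretization of an AR(1) process
# ln(A') = rho * ln(A) + eps, eps ~ N(0, sigma^2). The grid size n is the
# precision knob; parameters here are invented for the example.
import numpy as np
from scipy.stats import norm

def tauchen(n, rho, sigma, m=3.0):
    """Discretize the AR(1) onto an n-point grid spanning +/- m
    unconditional standard deviations; returns (grid, transition matrix)."""
    std_uncond = sigma / np.sqrt(1 - rho**2)
    grid = np.linspace(-m * std_uncond, m * std_uncond, n)
    step = grid[1] - grid[0]
    P = np.empty((n, n))
    for i in range(n):
        mean = rho * grid[i]
        # interior transition probabilities: mass of N(mean, sigma^2)
        # falling in the half-open cell around each grid point
        P[i, :] = norm.cdf((grid - mean + step / 2) / sigma) - \
                  norm.cdf((grid - mean - step / 2) / sigma)
        # endpoint cells absorb the tails so each row sums to one
        P[i, 0] = norm.cdf((grid[0] - mean + step / 2) / sigma)
        P[i, -1] = 1 - norm.cdf((grid[-1] - mean - step / 2) / sigma)
    return grid, P

def implied_autocorr(grid, P):
    """First-order autocorrelation of the discretized chain."""
    # stationary distribution: left eigenvector of P for eigenvalue 1
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    pi = pi / pi.sum()
    mu = pi @ grid
    var = pi @ (grid - mu) ** 2
    cov = sum(pi[i] * (grid[i] - mu) * (P[i] @ grid - mu)
              for i in range(len(grid)))
    return cov / var

for n in (5, 25):
    g, P = tauchen(n, rho=0.9, sigma=0.1)
    print(f"n={n}: implied autocorrelation {implied_autocorr(g, P):.3f}")
```

With a fine grid the implied autocorrelation sits close to the true rho; a coarse grid moves it away, which is one concrete way the approximation of the productivity process can contaminate downstream estimates.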


2002 ◽  
Vol 18 (2) ◽  
pp. 420-468 ◽  
Author(s):  
Oliver Linton ◽  
Yoon-Jae Whang

We introduce a kernel-based estimator of the density function and regression function for data that have been grouped into family totals. We allow for a common intrafamily component but require that observations from different families be independent. We establish consistency and asymptotic normality for our procedures. As usual, the rates of convergence can be very slow depending on the behavior of the characteristic function at infinity. We investigate the practical performance of our method in a simple Monte Carlo experiment.
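The core idea of recovering an individual-level density from family totals can be sketched with a characteristic-function argument: if a family total is the sum of k iid member values, the members' characteristic function is the k-th root of the totals' characteristic function. The toy estimator below is a hedged illustration of that logic, not Linton and Whang's estimator; the damping kernel, bandwidth, and grids are arbitrary choices.

```python
# Illustrative sketch only: invert the k-th root of the empirical CF of
# family totals, damped by a Gaussian kernel's CF, to estimate the member
# density. All tuning choices here are invented for the example.
import numpy as np

def density_from_totals(totals, k, x_grid, h):
    """Estimate the density of one family member at x_grid, observing only
    totals of k iid members."""
    t = np.linspace(-15, 15, 1201)                       # frequency grid
    ecf = np.exp(1j * np.outer(t, totals)).mean(axis=1)  # empirical CF of totals
    # principal k-th root; a valid approximation where the CF stays away
    # from zero (slow decay of the CF is what drives the slow rates)
    cf_member = np.abs(ecf) ** (1.0 / k) * np.exp(1j * np.angle(ecf) / k)
    damp = np.exp(-(h * t) ** 2 / 2)                     # Gaussian kernel CF
    dt = t[1] - t[0]
    # Fourier inversion: f(x) = (1/2pi) * integral exp(-i t x) cf(t) dt
    fx = (np.exp(-1j * np.outer(x_grid, t)) @ (cf_member * damp)).real \
         * dt / (2 * np.pi)
    return np.clip(fx, 0.0, None)

rng = np.random.default_rng(0)
members = rng.normal(0.0, 1.0, size=(2000, 2))  # unobserved individual values
totals = members.sum(axis=1)                    # only family totals observed
xs = np.linspace(-3, 3, 61)
fhat = density_from_totals(totals, k=2, x_grid=xs, h=0.35)
```

On this standard-normal example the recovered density peaks near zero, as it should; the damping bandwidth h trades bias against the noise amplified by the root of the CF in its tails.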


1994 ◽  
Vol 10 (2) ◽  
pp. 357-371 ◽  
Author(s):  
Masahito Kobayashi

This paper compares the local power of tests for a nonlinear transformation of the dependent variable in a regression model against the alternative hypothesis of a linear transformation. It is shown that the local power of the Cox test is higher than that of the extended projection test of MacKinnon, White, and Davidson and that of Bera and McAleer's test. The theoretical result is supported by a Monte Carlo experiment testing a regression model with a logarithmically transformed dependent variable against a linear regression model.
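To see this kind of comparison in miniature, here is a hedged Monte Carlo sketch using one of the competing procedures, the PE test of MacKinnon, White, and Davidson, to test a linear specification against a log-linear one. The data-generating process, sample size, and replication count are invented for illustration; this is not the paper's experiment.

```python
# Illustrative Monte Carlo: power of the PE test against a log-linear DGP.
# H0: y is linear in X; H1: ln y is linear in X.
import numpy as np

def pe_test_stat(y, x):
    """t-statistic on the PE artificial regressor."""
    n = len(y)
    Z = np.column_stack([np.ones(n), x])
    b_lin, *_ = np.linalg.lstsq(Z, y, rcond=None)
    b_log, *_ = np.linalg.lstsq(Z, np.log(y), rcond=None)
    yhat_lin = Z @ b_lin
    loghat = Z @ b_log
    # artificial regressor: log of the linear fit minus the log-model fit
    z = np.log(np.clip(yhat_lin, 1e-8, None)) - loghat
    W = np.column_stack([Z, z])
    b, *_ = np.linalg.lstsq(W, y, rcond=None)
    resid = y - W @ b
    s2 = resid @ resid / (n - W.shape[1])
    cov = s2 * np.linalg.inv(W.T @ W)
    return b[-1] / np.sqrt(cov[-1, -1])

rng = np.random.default_rng(1)
n, reps, rejections = 200, 200, 0
for _ in range(reps):
    x = rng.uniform(1, 3, n)
    y = np.exp(0.5 + 0.8 * x + 0.3 * rng.standard_normal(n))  # log-linear DGP
    if abs(pe_test_stat(y, x)) > 1.96:
        rejections += 1
power = rejections / reps
```

Since the true model is log-linear, a well-behaved test of the linear null should reject in a large share of replications; local-power comparisons of the kind in the paper rank competing tests by how fast that share rises near the null.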


2019 ◽  
Vol 289 (2) ◽  
pp. 495-501
Author(s):  
Mike G. Tsionas ◽  
Athanasios Andrikopoulos

We extend the uniform mixture model of Gao et al. (Ann Oper Res, 2019, doi:10.1007/s10479-019-03236-9) to the case of linear regression. Gao et al. proposed the uniform mixture model, a weighted combination of multiple uniform distribution components, to characterize the probability distributions of multimodal and irregular data observed in engineering. This case is of empirical interest since, in many instances, the distribution of the error term in a linear regression model cannot be assumed unimodal. Bayesian methods of inference organized around Markov chain Monte Carlo are proposed. In a Monte Carlo experiment, significant efficiency gains over least squares are found, justifying the use of the uniform mixture model.
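To make the error-term assumption concrete, the sketch below constructs a uniform mixture error distribution and simulates linear-regression data from it. The weights and supports are invented for illustration, and the paper's MCMC estimation step is not reproduced here.

```python
# Illustrative sketch: a regression error that is a weighted mixture of
# uniform components, giving a multimodal, clearly non-Gaussian error term.
import numpy as np

WEIGHTS = np.array([0.6, 0.4])
LOWS = np.array([-2.0, 1.0])
HIGHS = np.array([0.0, 3.0])     # disjoint supports -> bimodal errors

def mixture_pdf(e):
    """Density of the uniform mixture, evaluated elementwise at e."""
    e = np.asarray(e, dtype=float)
    dens = np.zeros_like(e)
    for w, a, b in zip(WEIGHTS, LOWS, HIGHS):
        dens += w * ((e >= a) & (e <= b)) / (b - a)
    return dens

def sample_errors(rng, n):
    """Draw n errors: pick a component by weight, then draw uniformly."""
    comp = rng.choice(len(WEIGHTS), size=n, p=WEIGHTS)
    return rng.uniform(LOWS[comp], HIGHS[comp])

rng = np.random.default_rng(0)
eps = sample_errors(rng, 100_000)
# regression data whose error term follows the mixture (true line: 1 + 2x)
x = rng.uniform(0, 5, 100_000)
y = 1.0 + 2.0 * x + eps
grid = np.linspace(-3, 4, 701)
pdf_vals = mixture_pdf(grid)
```

Because the error mean here is nonzero and the density has a gap between the components, Gaussian-based least squares is misspecified for such data, which is the motivation for modeling the error with the mixture directly.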


2011 ◽  
Vol 24 (14) ◽  
pp. 3781-3795 ◽  
Author(s):  
Quan Dong ◽  
Xing Chen ◽  
Tiexi Chen

Many studies suggest that the intensity of extreme precipitation may be changing against the background of global warming. Given the importance of extreme precipitation in the Yellow–Huaihe and Yangtze–Huaihe River basins of China, and to compare spatial differences, the generalized Pareto distribution (GPD) is fitted to daily precipitation series in these basins, yielding an estimate of the spatial distribution of extreme precipitation. Long-term trends over the period 1951–2004 are estimated using a generalized linear model (GLM) based on the GPD. High-quality daily precipitation data from 215 observation stations over the area are used in this study. The statistical significance of the trend fields is tested with a Monte Carlo experiment based on a two-dimensional Hurst coefficient, H2. The spatial distribution of the GPD shape parameter indicates that the upper reaches of the Huaihe River (HuR) basin have the largest probability of extreme rainfall events, consistent with most historical flood records in this region. Spatial variation in extreme precipitation trends is found: significant positive trends appear in the upper reaches of Poyang Lake in the Yangtze River (YaR) basin and a significant negative trend in the mid- to lower reaches of the Yellow River (YeR) and Haihe River (HaR) basins, while the trends in the HuR basin and the lower reaches of Poyang Lake in the YaR basin are nearly neutral. All trend fields are significant at the 5% level according to the Monte Carlo experiments.
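The peaks-over-threshold step of such an analysis can be sketched as follows. The code fits a GPD to exceedances of synthetic daily "precipitation" over a high threshold using scipy; the gamma data-generating process and the 95th-percentile threshold are chosen purely for illustration, and the station records, the GLM trend model, and the Hurst-coefficient Monte Carlo are beyond this sketch.

```python
# Illustrative sketch: peaks-over-threshold GPD fit on synthetic data.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(42)
# synthetic "daily precipitation": many dry/light days, a heavy upper tail
precip = rng.gamma(shape=0.4, scale=8.0, size=54 * 365)  # ~54 years of days

threshold = np.quantile(precip, 0.95)
excess = precip[precip > threshold] - threshold

# fit the GPD to exceedances; loc fixed at 0 since `excess` starts at 0
shape, loc, scale = genpareto.fit(excess, floc=0)

# the shape parameter governs tail heaviness: a larger shape means a
# fatter tail, hence a higher probability of extreme rainfall events
p_extreme = genpareto.sf(50.0, shape, loc=0, scale=scale)  # P(excess > 50)
```

In the basin study, a map of the fitted shape parameter across stations is what identifies where extreme rainfall is most probable; here the fitted shape is near zero because the synthetic gamma tail is close to exponential.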

