Decision Makers' Ability To Identify Unusual Costs And Implications For Alternative Estimation Procedures

2011 ◽  
Vol 7 (4) ◽  
pp. 36
Author(s):  
MaryAnne Atkinson ◽  
Scott Jones

This paper reports the results of an experiment in which individuals visually fitted a cost function to data. The inclusion or omission of unusual data points within the data set was experimentally manipulated. The results indicate that individuals omit outliers from their visual fits, but do not omit influential points. Evidence also suggests that the weighting rule used by individuals is more robust than the weighting rule used in the ordinary least squares criterion.
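The contrast the abstract draws between robust visual fits and the OLS criterion can be illustrated with a minimal sketch: a closed-form OLS line fit on made-up data, showing how a single vertical outlier pulls the least-squares slope away from the trend the remaining points define (the data and function names below are illustrative, not from the paper).

```python
def ols_fit(xs, ys):
    """Ordinary least squares line fit via the normal equations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx              # slope
    a = my - b * mx            # intercept
    return a, b

# Clean synthetic data on the line y = 2x + 1
xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]
a, b = ols_fit(xs, ys)                      # recovers slope 2, intercept 1

# One vertical outlier drags the OLS slope far from 2
a_out, b_out = ols_fit(xs + [6], ys + [40])
```

Because every residual enters the OLS criterion squared, the outlier dominates the fit; a visual fitter, per the experiment's findings, would simply discount it.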

2009 ◽  
Vol 2009 ◽  
pp. 1-8 ◽  
Author(s):  
Janet Myhre ◽  
Daniel R. Jeske ◽  
Michael Rennie ◽  
Yingtao Bi

A heteroscedastic linear regression model is developed from plausible assumptions that describe the time evolution of performance metrics for equipment. The inherent motivation for the related weighted least squares analysis of the model is an essential and attractive selling point to engineers interested in equipment surveillance methodologies. A simple test for the significance of the heteroscedasticity suggested by a data set is derived, and a simulation study is used to evaluate the power of the test and compare it with several other applicable tests that were designed in different contexts. Tolerance intervals within the context of the model are derived, thus generalizing well-known tolerance intervals for ordinary least squares regression. Use of the model and its associated analyses is illustrated with an aerospace application where hundreds of electronic components are continuously monitored by an automated system that flags components that are suspected of unusual degradation patterns.
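The weighted least squares analysis the abstract motivates can be sketched as follows: when the error variance grows with time, each observation is weighted by the reciprocal of its (assumed) variance. This is a generic WLS illustration with invented data, assuming variance proportional to elapsed time; it is not the paper's specific model.

```python
def wls_fit(xs, ys, ws):
    """Weighted least squares line fit; ws are inverse-variance weights."""
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    b = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys)) \
        / sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    a = my - b * mx
    return a, b

# Hypothetical degradation metric vs. time, variance assumed ~ t,
# so each point gets weight 1/t (earlier, quieter points count more)
ts = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.0, 5.8, 8.3]
ws = [1.0 / t for t in ts]
a, b = wls_fit(ts, ys, ws)   # slope close to the underlying rate 2
```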


1979 ◽  
Vol 25 (3) ◽  
pp. 432-438 ◽  
Author(s):  
P J Cornbleet ◽  
N Gochman

Abstract The least-squares method is frequently used to calculate the slope and intercept of the best line through a set of data points. However, least-squares regression slopes and intercepts may be incorrect if the underlying assumptions of the least-squares model are not met. Two factors in particular that may result in incorrect least-squares regression coefficients are: (a) imprecision in the measurement of the independent (x-axis) variable and (b) inclusion of outliers in the data analysis. We compared the methods of Deming, Mandel, and Bartlett in estimating the known slope of a regression line when the independent variable is measured with imprecision, and found the method of Deming to be the most useful. Significant error in the least-squares slope estimation occurs when the ratio of the standard deviation of measurement of a single x value to the standard deviation of the x-data set exceeds 0.2. Errors in the least-squares coefficients attributable to outliers can be avoided by eliminating data points whose vertical distance from the regression line exceeds four times the standard error of the estimate.
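The Deming method the abstract recommends has a closed-form slope that accounts for measurement error in x as well as y. Below is a minimal sketch of that standard formula (with lam, the assumed ratio of y-error variance to x-error variance, defaulting to 1); the example data are synthetic, not from the study.

```python
import math

def deming_fit(xs, ys, lam=1.0):
    """Deming regression: errors-in-both-variables line fit.
    lam = (variance of y measurement error) / (variance of x error)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    # Closed-form slope of the Deming estimator
    b = (syy - lam * sxx
         + math.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
    a = my - b * mx
    return a, b

# On exact data the Deming and OLS fits coincide: y = 2x + 1
a, b = deming_fit([1, 2, 3, 4, 5], [3, 5, 7, 9, 11])
```

Unlike OLS, the slope estimate stays consistent when x itself is noisy, which is exactly the method-comparison setting of this paper.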


2011 ◽  
Vol 19 (01) ◽  
pp. 71-100 ◽  
Author(s):  
A. R. ORTIZ ◽  
H. T. BANKS ◽  
C. CASTILLO-CHAVEZ ◽  
G. CHOWELL ◽  
X. WANG

A method for estimating parameters in dynamic stochastic (Markov Chain) models based on Kurtz's limit theory coupled with inverse problem methods developed for deterministic dynamical systems is proposed and illustrated in the context of disease dynamics. This methodology relies on finding an approximate large-population behavior of an appropriate scaled stochastic system. The approach leads to a deterministic approximation obtained as solutions of rate equations (ordinary differential equations) in terms of the large sample size average over sample paths or trajectories (limits of pure jump Markov processes). Using the resulting deterministic model, we select parameter subset combinations that can be estimated using an ordinary-least-squares (OLS) or generalized-least-squares (GLS) inverse problem formulation with a given data set. The selection is based on two criteria of the sensitivity matrix: the degree of sensitivity measured in the form of its condition number and the degree of uncertainty measured in the form of its parameter selection score. We illustrate the ideas with a stochastic model for the transmission of vancomycin-resistant enterococcus (VRE) in hospitals and VRE surveillance data from an oncology unit.
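One of the two selection criteria named above, the condition number of the sensitivity matrix, can be sketched for the two-parameter case: a large ratio of largest to smallest singular value signals near-collinear parameter sensitivities and hence poor identifiability. The matrix below is an invented illustration, not the VRE model's actual sensitivities.

```python
import math

def condition_number(S):
    """Condition number of an n x 2 sensitivity matrix S: the ratio of
    its largest to smallest singular value, obtained from the
    eigenvalues of the 2x2 normal matrix S^T S."""
    a = sum(r[0] * r[0] for r in S)
    b = sum(r[0] * r[1] for r in S)
    d = sum(r[1] * r[1] for r in S)
    tr, det = a + d, a * d - b * b
    disc = math.sqrt(tr * tr / 4 - det)
    lam_max, lam_min = tr / 2 + disc, tr / 2 - disc
    return math.sqrt(lam_max / lam_min)

# Hypothetical sensitivities of a model output to two parameters:
# the columns are nearly collinear, so kappa is large and the pair
# would score poorly for joint estimation
S = [[1.0, 0.9], [1.0, 1.1], [1.0, 1.0]]
kappa = condition_number(S)
```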


2014 ◽  
Vol 3 (2) ◽  
pp. 174
Author(s):  
Yaser Abdelhadi

Linear transformations are performed for selected exponential engineering functions. The optimum values of the parameters of the linear model equation that fits the set of experimental or simulated data points are determined by the linear least squares method. The classical and matrix forms of ordinary least squares are illustrated. Keywords: Exponential Functions; Linear Modeling; Ordinary Least Squares; Parametric Estimation; Regression Steps.
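The linearisation described above can be sketched for the common case y = A·exp(Bx): taking logarithms gives ln y = ln A + Bx, which OLS fits directly. A minimal illustration with synthetic data (valid only for positive y values):

```python
import math

def fit_exponential(xs, ys):
    """Fit y = A * exp(B * x) by OLS on the log-linearised model
    ln y = ln A + B * x."""
    lys = [math.log(y) for y in ys]
    n = len(xs)
    mx, my = sum(xs) / n, sum(lys) / n
    B = sum((x - mx) * (ly - my) for x, ly in zip(xs, lys)) \
        / sum((x - mx) ** 2 for x in xs)
    A = math.exp(my - B * mx)
    return A, B

# Synthetic data generated from y = 2 * exp(x)
xs = [0, 1, 2, 3]
ys = [2.0, 2.0 * math.e, 2.0 * math.e ** 2, 2.0 * math.e ** 3]
A, B = fit_exponential(xs, ys)   # recovers A = 2, B = 1
```

Note that least squares on the log scale minimises relative rather than absolute errors in y, a design choice worth keeping in mind when transforming back.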


2018 ◽  
Vol 22 (5) ◽  
pp. 358-371 ◽  
Author(s):  
Radoslaw Trojanek ◽  
Michal Gluszak ◽  
Justyna Tanas

In the paper, we analysed the impact of proximity to urban green areas on apartment prices in Warsaw. The data set contained 43,075 geo-coded apartment transactions for the years 2010 to 2015. In this research, the hedonic method was used with Ordinary Least Squares (OLS), Weighted Least Squares (WLS) and Median Quantile Regression (Median QR) models. We found substantial evidence that proximity to an urban green area is positively linked with apartment prices. On average, the presence of a green area within 100 meters of an apartment increases the price of a dwelling by 2.8% to 3.1%. The effect of park/forest proximity on house prices is more significant for newer apartments than for those built before 1989: proximity to a park or a forest is particularly important (and consequently carries a higher implicit price) for buildings constructed after 1989. The impact of an urban green area was particularly high in the case of post-transformation housing estates. Close vicinity (less than 100 m) to an urban green area increased the sale prices of apartments in new residential buildings by 8.0–8.6%, depending on the model.


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Mao-Feng Kao ◽  
Lynn Hodgkinson ◽  
Aziz Jaafar

Purpose Using a data set of Taiwanese listed firms from 2002 to 2015, this paper aims to examine the determinants of the voluntary appointment of independent directors. Design/methodology/approach This study uses panel estimation to exploit both the cross-section and time-series nature of the data. Further, this paper uses Tobit regression and a generalized linear model (GLM) in additional analysis, and two-stage least squares to mitigate a possible endogeneity issue. Findings The main findings show that Taiwanese firms with large board sizes tend to voluntarily appoint independent directors, and firms that already have independent supervisors are more willing to accept additional independent directors onto the board. Furthermore, ownership concentration and institutional ownership are positively associated with the voluntary appointment of independent directors. On the contrary, firms controlled by family members are generally reluctant to voluntarily appoint independent directors. Research limitations/implications The findings are important for managers, shareholders, creditors and policymakers. In particular, when considering the determinants of the voluntary appointment of independent directors, the results indicate that independent supervisors, outside shareholders and institutional investors are significant factors in influencing effective internal and external corporate governance mechanisms. This research work focuses on the voluntary appointment of independent directors. It would be interesting to compare the effectiveness of voluntary appointments with mandatory appointments within Taiwan and with other jurisdictions. Originality/value This study incrementally contributes to the corporate governance literature in several ways. First, this study extends the earlier research by using a more comprehensive data set of non-financial Taiwanese firms and using alternative methodologies to investigate the determinants of the voluntary appointment of independent directors.
Second, prior studies tend to neglect the possible issue of using a censored and fractional dependent variable, the proportion of independent directors, which might yield biased and inconsistent parameter estimates when using ordinary least squares regression estimation. Finally, this study addresses the relevant econometric issues by using the Tobit, GLM and the two-stage least squares for a possible endogeneity concern.


1993 ◽  
Vol 24 (4) ◽  
pp. 118-123 ◽  
Author(s):  
D. C. Bowie ◽  
D. J. Bradfield

In this article we focus on beta estimation in the thinly-traded environment of the Johannesburg Stock Exchange (JSE). We build on existing literature by evaluating a beta estimation procedure known as trade-to-trade, which has not until now been considered in the context of the JSE. We contrast our results with two known estimation procedures, i.e. that of Cohen et al. and traditional ordinary least squares (OLS). The trade-to-trade methodology, the estimator proposed by Cohen et al. and OLS are objectively assessed for shares typical of the JSE on the basis of unbiasedness and efficiency in the controlled environment of a simulation study. The trade-to-trade technique is found to be superior on both counts and is recommended as the appropriate technique for beta estimation on the JSE.


1989 ◽  
Vol 19 (5) ◽  
pp. 664-673 ◽  
Author(s):  
Andrew J. R. Gillespie ◽  
Tiberius Cunia

Biomass tables are often constructed from cluster samples by means of ordinary least squares regression estimation procedures. These procedures assume that sample observations are uncorrelated, which ignores the intracluster correlation of cluster samples and results in underestimates of the model error. We tested alternative estimation procedures by simulation under a variety of cluster sampling methods, to determine combinations of sampling and estimation procedures that yield accurate parameter estimates and reliable estimates of error. Modified, generalized, and jack-knife least squares procedures gave accurate parameter and error estimates when sample trees were selected with equal probability. Regression models that did not include height as a predictor variable yielded biased parameter estimates when sample trees were selected with probability proportional to tree size. Models that included height did not yield biased estimates. There was no discernible gain in precision associated with sampling with probability proportional to size. Random coefficient regressions generally gave biased point estimates with poor precision, regardless of sampling method.


2010 ◽  
Vol 62 (4) ◽  
pp. 875-882 ◽  
Author(s):  
A. Dembélé ◽  
J.-L. Bertrand-Krajewski ◽  
B. Barillon

Regression models are among the models most frequently used to estimate pollutant event mean concentrations (EMCs) in wet weather discharges in urban catchments. Two main questions concerning the calibration of EMC regression models are investigated: i) the sensitivity of models to the size and content of the data sets used for their calibration; ii) the change in modelling results when models are re-calibrated as data sets grow and change over time with newly collected experimental data. Based on an experimental data set of 64 rain events monitored in a densely urbanised catchment, four TSS EMC regression models (two log-linear and two linear) with two or three explanatory variables have been derived and analysed. Model calibration with the iteratively re-weighted least squares method is less sensitive and leads to more robust results than the ordinary least squares method. Three calibration options have been investigated: two accounting for the chronological order of the observations, and one using random samples of events from the whole available data set. Results obtained with the best-performing non-linear model clearly indicate that the model is highly sensitive to the size and content of the data set used for its calibration.
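The iteratively re-weighted least squares idea credited above with the more robust calibration can be sketched generically: fit, compute residuals, down-weight points with large residuals, and refit until the weights stabilise. This sketch uses Huber weights and invented data; it is an illustration of the general technique, not the paper's EMC models.

```python
def irls_fit(xs, ys, n_iter=50, delta=1.0):
    """Iteratively re-weighted least squares line fit with Huber
    weights: residuals beyond delta are progressively down-weighted."""
    ws = [1.0] * len(xs)
    a = b = 0.0
    for _ in range(n_iter):
        sw = sum(ws)
        mx = sum(w * x for w, x in zip(ws, xs)) / sw
        my = sum(w * y for w, y in zip(ws, ys)) / sw
        b = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys)) \
            / sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
        a = my - b * mx
        # Huber weight: 1 inside the delta band, delta/|r| outside it
        ws = [1.0 if abs(r) <= delta else delta / abs(r)
              for r in (y - (a + b * x) for x, y in zip(xs, ys))]
    return a, b

# Five points near y = 2x + 1 plus one gross outlier: plain OLS gives
# a slope near 5.9, while IRLS stays close to the underlying trend
xs = [1, 2, 3, 4, 5, 6]
ys = [3, 5, 7, 9, 11, 40]
a, b = irls_fit(xs, ys)
```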


2021 ◽  
Vol 2090 (1) ◽  
pp. 012099
Author(s):  
Elena Rodríguez-Rojo ◽  
Javier Cubas ◽  
Santiago Pindado

Abstract In the present work, a method for magnetometer calibration through least squares fitting is presented. This method has been applied to the magnetometer data set obtained during the integration tests of the Attitude Determination and Control Subsystem (ADCS) of UPMSat-2. The UPMSat-2 mission is a 50-kg satellite designed and manufactured by the Technical University of Madrid (Universidad Politécnica de Madrid), and finally launched in September 2020. The satellite has three fluxgate magnetometers (one of them experimental) whose calibration is critical to obtain correct measurements to be used by the ADCS. Among several mathematical methods suitable for obtaining the calibration parameters, an ordinary least squares fitting algorithm is selected as the first step of the calibration process. The estimated surface is an ellipsoid: the surface traced out by the magnetometer's measurements of the Earth's magnetic field at a point in space. The calibration elements of the magnetometers are related to the coefficients of the estimated ellipsoid.
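The surface-fitting step described above can be sketched in its simplest form: fitting a sphere rather than a full ellipsoid, which already recovers the constant (hard-iron) bias as the sphere's centre. Writing ||m − c||² = R² as 2c·m + (R² − ||c||²) = ||m||² makes the problem linear and solvable by ordinary least squares via the normal equations. The readings below are synthetic, and this bias-only model is a deliberate simplification of the ellipsoid fit the paper uses.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def sphere_fit(points):
    """Least squares sphere fit: ||m - c||^2 = R^2 rewritten as a model
    linear in (cx, cy, cz, d) with d = R^2 - ||c||^2."""
    A, b = [], []
    for x, y, z in points:
        A.append([2 * x, 2 * y, 2 * z, 1.0])
        b.append(x * x + y * y + z * z)
    # Normal equations: (A^T A) p = A^T b
    AtA = [[sum(row[i] * row[j] for row in A) for j in range(4)]
           for i in range(4)]
    Atb = [sum(A[r][i] * b[r] for r in range(len(A))) for i in range(4)]
    cx, cy, cz, d = solve(AtA, Atb)
    R = (d + cx * cx + cy * cy + cz * cz) ** 0.5
    return (cx, cy, cz), R

# Synthetic readings on a unit-field sphere offset by a hard-iron
# bias of (0.2, -0.1, 0.05)
pts = [(1.2, -0.1, 0.05), (-0.8, -0.1, 0.05), (0.2, 0.9, 0.05),
       (0.2, -1.1, 0.05), (0.2, -0.1, 1.05), (0.2, -0.1, -0.95)]
centre, R = sphere_fit(pts)   # recovers the bias and radius 1
```

The full ellipsoid fit adds scale and cross-axis (soft-iron) terms but follows the same linear least squares pattern.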

