Tolerance Intervals in a Heteroscedastic Linear Regression Context with Applications to Aerospace Equipment Surveillance

2009 ◽  
Vol 2009 ◽  
pp. 1-8 ◽  
Author(s):  
Janet Myhre ◽  
Daniel R. Jeske ◽  
Michael Rennie ◽  
Yingtao Bi

A heteroscedastic linear regression model is developed from plausible assumptions that describe the time evolution of performance metrics for equipment. The inherited motivation for the related weighted least squares analysis of the model is an essential and attractive selling point to engineers with interest in equipment surveillance methodologies. A simple test for the significance of the heteroscedasticity suggested by a data set is derived and a simulation study is used to evaluate the power of the test and compare it with several other applicable tests that were designed under different contexts. Tolerance intervals within the context of the model are derived, thus generalizing well-known tolerance intervals for ordinary least squares regression. Use of the model and its associated analyses is illustrated with an aerospace application where hundreds of electronic components are continuously monitored by an automated system that flags components that are suspected of unusual degradation patterns.
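The weighted least squares fit and pointwise tolerance bound described above can be sketched as follows. This is a minimal illustration, not the paper's actual model: the variance model Var(ε_i) ∝ t_i, the coverage factor k, and all data values are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical degradation data: noise variance grows with time-in-service t.
t = np.linspace(1.0, 10.0, 50)
y = 2.0 + 0.5 * t + rng.normal(scale=0.3 * np.sqrt(t))

# Weighted least squares with weights w_i = 1 / t_i (assuming Var(eps_i) ∝ t_i).
X = np.column_stack([np.ones_like(t), t])
w = 1.0 / t
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Normal-theory pointwise two-sided bound at t0: yhat(t0) ± k * sigma_hat * sqrt(t0).
# Here k = 2.0 is a placeholder; a proper tolerance factor depends on the desired
# content and confidence levels.
resid = y - X @ beta
sigma2_hat = (resid * w @ resid) / (len(y) - 2)    # weighted residual variance
k = 2.0
t0 = 5.0
yhat0 = beta[0] + beta[1] * t0
half_width = k * np.sqrt(sigma2_hat * t0)
lower, upper = yhat0 - half_width, yhat0 + half_width
```

Because the weights undo the variance growth, the interval correctly widens with t0 instead of staying constant as an OLS interval would.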

2014 ◽  
Vol 71 (1) ◽  
Author(s):  
Bello Abdulkadir Rasheed ◽  
Robiah Adnan ◽  
Seyed Ehsan Saffari ◽  
Kafi Dano Pati

In a linear regression model, the ordinary least squares (OLS) method is considered the best method to estimate the regression parameters when its assumptions are met; if the data do not satisfy the underlying assumptions, the results will be misleading. Violation of the constant-variance assumption in least squares regression is caused by the presence of outliers and heteroscedasticity in the data. This assumption of constant variance (homoscedasticity) is very important in linear regression, since it is under homoscedasticity that the least squares estimators enjoy the property of minimum variance. A robust regression method is therefore required to handle outliers in the data, while weighted least squares (WLS) can be used to estimate the regression coefficients when the assumption of constant error variance is violated. WLS estimation is equivalent to carrying out OLS on transformed variables, but WLS itself is easily affected by outliers. To remedy this, we suggest a robust technique for estimating regression parameters in the presence of both heteroscedasticity and outliers. We apply robust M-estimation using iteratively reweighted least squares (IRWLS) with the Huber and Tukey bisquare functions, together with the resistant least trimmed squares estimator, to estimate the parameters of a model of state-wide crime in the United States in 1993. The results indicate that the estimators obtained from the M-estimation techniques and the least trimmed squares method are more effective than those obtained from OLS.
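The core of M-estimation via IRWLS is a loop that refits WLS with weights derived from the previous iteration's residuals. A minimal numpy sketch with the Huber weight function is below; the synthetic data (a linear trend contaminated with a few gross outliers) stand in for the crime data set, and the tuning constant k = 1.345 is the conventional Huber choice.

```python
import numpy as np

def huber_irls(X, y, k=1.345, tol=1e-8, max_iter=100):
    """M-estimation via iteratively reweighted least squares with Huber weights."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]            # OLS starting values
    for _ in range(max_iter):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745   # robust MAD scale
        u = np.abs(r) / max(s, 1e-12)
        w = np.minimum(1.0, k / np.maximum(u, 1e-12))      # Huber weight function
        Xw = X * w[:, None]                                # weighted design
        beta_new = np.linalg.solve(X.T @ Xw, Xw.T @ y)     # one WLS step
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

# Synthetic stand-in for the crime data: true line y = 1 + 2x, plus outliers.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 60)
y = 1.0 + 2.0 * x + rng.normal(scale=1.0, size=60)
y[:3] += 30.0                                              # gross contamination
X = np.column_stack([np.ones_like(x), x])

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
beta_huber = huber_irls(X, y)
```

The Tukey bisquare version differs only in the weight function (it drops observations entirely beyond its cutoff rather than merely downweighting them).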


2011 ◽  
Vol 130-134 ◽  
pp. 730-733
Author(s):  
Narong Phothi ◽  
Somchai Prakancharoen

This research proposed a comparison of accuracy, based on data imputation, between unconstrained structural equation modeling (Uncon-SEM) and weighted least squares (WLS) regression. The model is developed using data from the University of California, Irvine (UCI) and evaluated using the mean magnitude of relative error (MMRE). The experimental data set was created using a waveform generator and contains 21 indicators over 1,200 samples, divided into a training group (1,000 samples) and a testing group (200 samples). The training group was analyzed through three main factors (F1, F2, and F3) to create the models. The MMRE of the Uncon-SEM method on the testing group is 34.29% (accuracy 65.71%), whereas the WLS method produces an MMRE of 55.54% (accuracy 44.46%). Uncon-SEM is therefore more accurate than the WLS method by 21.25 percentage points.
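The accuracy measure used above, MMRE, is simply the average relative error between actual and predicted values; note that 34.29% + 65.71% = 100%, so "accuracy" here is 1 − MMRE. A minimal sketch (the sample values are illustrative):

```python
import numpy as np

def mmre(actual, predicted):
    """Mean magnitude of relative error: mean(|actual - predicted| / |actual|)."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs(actual - predicted) / np.abs(actual)))

# Illustrative values: 10% relative error on each of two test cases.
error = mmre([100.0, 200.0], [90.0, 220.0])   # 0.10, i.e. accuracy 90%
```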


2018 ◽  
Vol 22 (5) ◽  
pp. 358-371 ◽  
Author(s):  
Radoslaw Trojanek ◽  
Michal Gluszak ◽  
Justyna Tanas

In the paper, we analysed the impact of proximity to urban green areas on apartment prices in Warsaw. The data set contained 43,075 geo-coded apartment transactions for the years 2010 to 2015. In this research, the hedonic method was used with Ordinary Least Squares (OLS), Weighted Least Squares (WLS) and Median Quantile Regression (Median QR) models. We found substantial evidence that proximity to an urban green area is positively linked with apartment prices. On average, the presence of a green area within 100 metres of an apartment increases the price of a dwelling by 2.8% to 3.1%. The effect of park/forest proximity on house prices is stronger for newer apartments than for those built before 1989: proximity to a park or a forest is particularly important (and consequently carries a higher implicit price) for buildings constructed after 1989. The impact of urban green was particularly high for post-transformation housing estates. Close vicinity (less than 100 m) to an urban green area increased the sales prices of apartments in new residential buildings by 8.0–8.6%, depending on the model.
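Hedonic price equations of this kind are typically log-price regressions, in which the coefficient b on a proximity dummy converts to a percentage premium via exp(b) − 1. A minimal OLS sketch with synthetic data; the planted 3% effect mirrors the magnitude reported above, but every number and variable here is illustrative, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
area = rng.uniform(30, 100, n)                 # floor area, m^2
green = rng.integers(0, 2, n).astype(float)    # 1 if green area within 100 m
# Log-price with a planted ~3% green-proximity premium.
log_price = 8.0 + 0.01 * area + 0.03 * green + rng.normal(scale=0.05, size=n)

X = np.column_stack([np.ones(n), area, green])
beta = np.linalg.lstsq(X, log_price, rcond=None)[0]

# Dummy coefficient -> percentage premium on price.
premium = np.exp(beta[2]) - 1.0
```

A median quantile regression on the same specification would replace the squared-error objective with the pinball loss, making the estimate robust to the heavy right tail typical of housing prices.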


2021 ◽  
Author(s):  
Mohammad Sina Jahangir ◽  
John Quilty

Hydrological forecasts at different horizons are often made using different models. These forecasts are usually temporally inconsistent (e.g., monthly forecasts may not sum to yearly forecasts), which may lead to misaligned or conflicting decisions. Temporal hierarchical reconciliation (or simply, hierarchical reconciliation) methods can be used to obtain consistent forecasts across horizons. However, their effectiveness in the field of hydrology has not yet been investigated. Thus, this research assesses hierarchical reconciliation for precipitation forecasting, given its high importance in hydrological applications (e.g., reservoir operations, irrigation, drought and flood forecasting). Original precipitation forecasts (ORF) were produced using three different models: 'automatic' Exponential Time-Series Smoothing (ETS), Artificial Neural Networks (ANN), and Seasonal Auto-Regressive Integrated Moving Average (SARIMA). The forecasts were produced at six timescales (monthly, 2-monthly, quarterly, 4-monthly, bi-annual, and annual) for 84 basins selected from the Canadian model parameter experiment (CANOPEX) dataset. Hierarchical reconciliation methods including Hierarchical Least Squares (HLS), Weighted Least Squares (WLS), and Ordinary Least Squares (OLS), along with the Bottom-Up (BU) method, were applied to obtain consistent forecasts at all timescales.

Generally, ETS and ANN showed the best and worst performance, respectively, according to a wide range of performance metrics (root mean square error (RMSE), normalized RMSE (nRMSE), mean absolute error (MAE), normalized MAE (nMAE), and Nash-Sutcliffe Efficiency index (NSE)). The results indicated that hierarchical reconciliation has a dissimilar impact on the ORFs' accuracy across basins and timescales, improving the RMSE in some cases while worsening it in others. Also, the hierarchical reconciliation methods showed different levels of performance for different forecast models. According to the RMSE and MAE, the BU method outperformed the hierarchical methods for ETS forecasts, while for ANN and SARIMA forecasts, HLS and OLS, respectively, improved the forecasts more substantially. The sensitivity of the ORF to hierarchical reconciliation was assessed using the RMSE: both accurate and inaccurate ORF could be improved through hierarchical reconciliation, and the effectiveness of hierarchical reconciliation appears to depend more on the ORF accuracy than on the type of reconciliation method.

While the present work assessed the effectiveness of hierarchical reconciliation for hydrological forecasting via data-driven models, the methodology can easily be extended to process-based or hybrid (process-based data-driven) models. Further, since hydrological forecasts at different timescales may have different levels of importance to water resources managers and/or policymakers, hierarchical reconciliation can be used to weight the different timescales according to the user's preferences and goals.
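The reconciliation idea can be illustrated with a minimal two-level temporal hierarchy (four quarters aggregating to one year), which is a simplified assumption relative to the six-level hierarchy in the study. OLS reconciliation projects the incoherent base forecasts onto the coherent subspace spanned by the summing matrix S, while bottom-up simply aggregates the lowest level; the forecast values are illustrative.

```python
import numpy as np

# Summing matrix S: rows map the 4 bottom (quarterly) series to every node
# in the hierarchy -- first the annual total, then the quarters themselves.
S = np.vstack([np.ones((1, 4)), np.eye(4)])          # shape (5, 4)

# Incoherent base forecasts: annual first, then quarters. The quarters sum
# to 41, not 45, so the forecasts disagree across timescales.
y_hat = np.array([45.0, 10.0, 11.0, 9.0, 11.0])

# OLS reconciliation: y_tilde = S (S'S)^{-1} S' y_hat, the orthogonal
# projection of y_hat onto the set of coherent forecast vectors.
P = S @ np.linalg.solve(S.T @ S, S.T)
y_tilde = P @ y_hat

# Bottom-up: discard the annual forecast and aggregate the quarters.
y_bu = S @ y_hat[1:]
```

After reconciliation the annual entry of `y_tilde` equals the sum of its quarterly entries exactly; WLS and HLS variants differ only in inserting a weighting matrix into the projection.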


2010 ◽  
Vol 62 (4) ◽  
pp. 875-882 ◽  
Author(s):  
A. Dembélé ◽  
J.-L. Bertrand-Krajewski ◽  
B. Barillon

Regression models are among the most frequently used models to estimate pollutant event mean concentrations (EMC) in wet weather discharges in urban catchments. Two main questions concerning the calibration of EMC regression models are investigated: i) the sensitivity of models to the size and content of the data sets used for their calibration, and ii) the change in modelling results when models are re-calibrated as data sets grow and change over time with newly collected experimental data. Based on an experimental data set of 64 rain events monitored in a densely urbanised catchment, four TSS EMC regression models (two log-linear and two linear) with two or three explanatory variables have been derived and analysed. Model calibration with the iteratively reweighted least squares method is less sensitive and leads to more robust results than the ordinary least squares method. Three calibration options have been investigated: two accounting for the chronological order of the observations, and one using random samples of events from the whole available data set. Results obtained with the best performing nonlinear model clearly indicate that the model is highly sensitive to the size and content of the data set used for its calibration.
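The two calibration strategies (chronological growing windows versus random samples) can be sketched as below. The regression itself, the variable names, and the data are illustrative assumptions; only the 64-event count comes from the study.

```python
import numpy as np

rng = np.random.default_rng(3)
n_events = 64                                        # events, chronological order
x = rng.uniform(0.0, 5.0, n_events)                  # explanatory variable
y = 1.5 + 0.8 * x + rng.normal(scale=0.4, size=n_events)

# Chronological option: re-calibrate on growing windows as events accumulate.
slopes_chrono = [np.polyfit(x[:k], y[:k], 1)[0]
                 for k in range(24, n_events + 1, 10)]

# Random option: calibrate on fixed-size random samples of events.
idx_sets = [rng.choice(n_events, size=40, replace=False) for _ in range(5)]
slopes_random = [np.polyfit(x[i], y[i], 1)[0] for i in idx_sets]
```

Comparing the spread of the recalibrated coefficients across windows or samples is one direct way to quantify the sensitivity the abstract reports.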


Separations ◽  
2018 ◽  
Vol 5 (4) ◽  
pp. 49 ◽  
Author(s):  
Juan Sanchez

It is necessary to determine the limit of detection when validating any analytical method. For methods with a linear response, a simple, low-labor procedure is to use the linear regression parameters obtained in the calibration to estimate the blank standard deviation from the residual standard deviation (sres) or the intercept standard deviation (sb0). In this study, multiple experimental calibrations are evaluated, applying both ordinary and weighted least squares. Moreover, the analyses of replicated blank matrices, spiked at 2–5 times the lowest calculated limit values with the two regression methods, are performed to obtain the standard deviation of the blank. The limits of detection obtained with ordinary least squares, weighted least squares, the signal-to-noise ratio, and replicate blank measurements are then compared. Ordinary least squares, which is the simplest and most commonly applied calibration regression methodology, always overestimates the values of the standard deviations at the lower levels of calibration ranges. As a result, the detection limits are up to one order of magnitude greater than those obtained with the other approaches studied, which all gave similar limits.
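The two regression-based blank estimates mentioned above can be computed directly from an ordinary least squares calibration line. A minimal sketch with an invented calibration data set; the factor 3.3 is the commonly used ICH-style multiplier, and all concentrations and signals are illustrative.

```python
import numpy as np

# Hypothetical calibration line: concentration vs. instrument signal.
conc = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 16.0])
signal = np.array([0.02, 1.05, 1.98, 4.10, 7.95, 16.08])

n = len(conc)
slope, intercept = np.polyfit(conc, signal, 1)
resid = signal - (slope * conc + intercept)
s_res = np.sqrt(np.sum(resid**2) / (n - 2))          # residual std deviation

# Intercept standard deviation from standard OLS theory.
s_b0 = s_res * np.sqrt(np.sum(conc**2)
                       / (n * np.sum((conc - conc.mean())**2)))

# LOD estimates: 3.3 * s / slope, with either std deviation as blank proxy.
lod_res = 3.3 * s_res / slope
lod_b0 = 3.3 * s_b0 / slope
```

A weighted least squares version would replace the unweighted residual sum with one weighted by the inverse variance at each level, which is what prevents the overestimation at low concentrations that the abstract describes.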


2015 ◽  
Vol 31 (1) ◽  
pp. 61-75 ◽  
Author(s):  
Jianzhu Li ◽  
Richard Valliant

Abstract An extensive set of diagnostics for linear regression models has been developed to handle nonsurvey data. The models and the sampling plans used for finite populations often entail stratification, clustering, and survey weights, which renders many of the standard diagnostics inappropriate. In this article we adapt some influence diagnostics that have been formulated for ordinary or weighted least squares for use with stratified, clustered survey data. The statistics considered here include DFBETAS, DFFITS, and Cook's D. The differences in the performance of ordinary least squares and survey-weighted diagnostics are compared using complex survey data where the values of weights, response variables, and covariates vary substantially.
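Of the diagnostics named above, Cook's D for an ordinary (unweighted) OLS fit has a closed form in terms of the hat matrix; the survey-weighted versions in the article modify this with the sampling weights. A minimal sketch on invented data with one deliberately influential point:

```python
import numpy as np

def cooks_distance(X, y):
    """Cook's D for each observation of an (unweighted) OLS fit:
    D_i = r_i^2 * h_i / (p * s^2 * (1 - h_i)^2)."""
    n, p = X.shape
    H = X @ np.linalg.solve(X.T @ X, X.T)        # hat matrix
    h = np.diag(H)                               # leverages
    resid = y - H @ y
    s2 = resid @ resid / (n - p)                 # residual variance
    return (resid**2 / (p * s2)) * (h / (1.0 - h)**2)

# One influential point: a large residual at the highest-leverage x value.
x = np.arange(10, dtype=float)
y = 2.0 * x
y[-1] += 20.0
X = np.column_stack([np.ones_like(x), x])
d = cooks_distance(X, y)                         # d[9] dominates
```

DFFITS and DFBETAS follow from the same hat-matrix quantities, measuring the shift in the fitted value and in each coefficient when an observation is deleted.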


Talanta ◽  
2010 ◽  
Vol 80 (3) ◽  
pp. 1102-1109 ◽  
Author(s):  
Rosilene S. Nascimento ◽  
Roberta E.S. Froes ◽  
Nilton O.C. e Silva ◽  
Rita L.P. Naveira ◽  
Denise B.C. Mendes ◽  
...  

2016 ◽  
Vol 23 (5) ◽  
pp. 1138-1145 ◽  
Author(s):  
António Almeida ◽  
Brian Garrod

Mature tourism destinations increasingly need to diversify their products and markets. To be successful, such strategies require a very detailed understanding of potential tourists' levels and patterns of spending. Empirical studies of tourist expenditure have tended to employ ordinary least squares regression for this purpose. There are, however, a number of important limitations to this technique, chief among which is its inability to distinguish between tourists who have higher- and lower-than-average levels of spending. As such, some researchers recommend the use of an alternative estimation technique, known as quantile regression, which does allow such distinctions to be made. This study uses a single data set, collected among rural tourists in Madeira, to analyse the determinants of tourist expenditure using both techniques. This enables a direct comparison to be made and illustrates the additional insights to be gained using quantile regression.
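Quantile regression replaces the squared-error objective of OLS with the pinball (check) loss, so different quantiles of spending can be modelled separately. Production work would typically use a dedicated routine (e.g. statsmodels' QuantReg); below is a minimal sketch via direct loss minimization, with synthetic right-skewed expenditure data standing in for the Madeira survey.

```python
import numpy as np
from scipy.optimize import minimize

def quantile_fit(X, y, q=0.5):
    """Linear quantile regression by minimizing the pinball (check) loss."""
    def pinball(beta):
        u = y - X @ beta
        return np.sum(np.where(u >= 0, q * u, (q - 1.0) * u))
    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]        # OLS warm start
    res = minimize(pinball, beta0, method="Nelder-Mead",
                   options={"xatol": 1e-9, "fatol": 1e-9, "maxiter": 50000})
    return res.x

# Synthetic expenditure data with right-skewed noise: the median (q=0.5) line
# sits below the OLS mean line, the distinction OLS cannot make.
rng = np.random.default_rng(4)
x = rng.uniform(0, 10, 200)
y = 50.0 + 5.0 * x + rng.exponential(scale=10.0, size=200)
X = np.column_stack([np.ones_like(x), x])

beta_median = quantile_fit(X, y, q=0.5)
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
```

Fitting q = 0.1 and q = 0.9 with the same routine recovers separate models for low and high spenders, which is the additional insight the abstract attributes to quantile regression.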

