Predictors of age-associated decline in maximal aerobic capacity: a comparison of four statistical models

1998 ◽  
Vol 84 (6) ◽  
pp. 2163-2170 ◽  
Author(s):  
Mitchell J. Rosen ◽  
John D. Sorkin ◽  
Andrew P. Goldberg ◽  
James M. Hagberg ◽  
Leslie I. Katzel

Studies assessing changes in maximal aerobic capacity (V̇O2max) associated with aging have traditionally employed the ratio of V̇O2max to body weight. Log-linear, ordinary least-squares, and weighted least-squares models may avoid some of the inherent weaknesses associated with the use of ratios. In this study we used four different methods to examine the age-associated decline in V̇O2max in a cross-sectional sample of 276 healthy men, aged 45–80 yr. Sixty-one of the men were aerobically trained athletes, and the remainder were sedentary. The model that accounted for the largest proportion of variance was a weighted least-squares model that included age, fat-free mass, and an indicator variable denoting exercise training status. The model accounted for 66% of the variance in V̇O2max and satisfied all the important general linear model assumptions. The other approaches failed to satisfy one or more of these assumptions. The results indicated that V̇O2max declines at the same rate in athletic and sedentary men (0.24 l/min or 9%/decade) and that 35% of this decline (0.08 l·min⁻¹·decade⁻¹) is due to the age-associated loss of fat-free mass.
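Below is a minimal sketch, not the authors' code, of the kind of weighted least-squares model the abstract singles out: V̇O2max regressed on age, fat-free mass, and a training indicator. The file name, column names, and the variance-function weighting rule are illustrative assumptions.

```python
# Hedged sketch of a WLS model for VO2max (l/min) on age, fat-free mass, and a
# training indicator. File, columns, and the weight rule are assumptions.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("vo2max_sample.csv")          # hypothetical file: vo2max, age, ffm, athlete
X = sm.add_constant(df[["age", "ffm", "athlete"]])

# First pass: ordinary least squares to gauge how the residual variance behaves.
ols = sm.OLS(df["vo2max"], X).fit()

# Assumed weighting scheme: inverse of a variance function fitted to squared OLS
# residuals (one common way to build WLS weights; the paper's exact rule may differ).
var_fit = sm.OLS(ols.resid**2, sm.add_constant(df[["ffm"]])).fit()
weights = 1.0 / var_fit.fittedvalues.clip(lower=1e-6)

wls = sm.WLS(df["vo2max"], X, weights=weights).fit()
print(wls.summary())                            # slope on age ~ decline in l/min per year
```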

Author(s):  
Daniel Hoechle

I present a new Stata program, xtscc, that estimates pooled ordinary least-squares/weighted least-squares regression and fixed-effects (within) regression models with Driscoll and Kraay (Review of Economics and Statistics 80: 549–560) standard errors. By running Monte Carlo simulations, I compare the finite-sample properties of the cross-sectional dependence–consistent Driscoll–Kraay estimator with the properties of other, more commonly used covariance matrix estimators that do not account for cross-sectional dependence. The results indicate that Driscoll–Kraay standard errors are well calibrated when cross-sectional dependence is present. However, erroneously ignoring cross-sectional correlation in the estimation of panel models can lead to severely biased statistical results. I illustrate the xtscc program by considering an application from empirical finance. Thereby, I also propose a Hausman-type test for fixed effects that is robust to general forms of cross-sectional and temporal dependence.
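The abstract describes a Stata command; as a rough Python analogue (an assumption about the closest equivalent, not the author's xtscc), the linearmodels package offers a Driscoll-Kraay-type kernel covariance for fixed-effects panel regressions. The panel file and variable names below are placeholders.

```python
# Hedged sketch: fixed-effects (within) regression with a Driscoll-Kraay-style
# kernel covariance, which stays valid under cross-sectional and temporal dependence.
import pandas as pd
from linearmodels.panel import PanelOLS

panel = pd.read_csv("firms_panel.csv")                    # hypothetical long-format panel
panel = panel.set_index(["firm_id", "year"])              # entity/time MultiIndex

mod = PanelOLS(panel["y"], panel[["x1", "x2"]], entity_effects=True)
res = mod.fit(cov_type="kernel", kernel="bartlett")       # kernel covariance over time
print(res.summary)
```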


2009 ◽  
Vol 12 (03) ◽  
pp. 297-317 ◽  
Author(s):  
ANOUAR BEN MABROUK ◽  
HEDI KORTAS ◽  
SAMIR BEN AMMOU

In this paper, fractional integration dynamics in the return and volatility series of stock market indices are investigated. The investigation uses wavelet ordinary least squares, wavelet weighted least squares, and the approximate maximum likelihood estimator. It is shown that the long memory property in stock returns is mainly associated with emerging markets rather than developed ones, while strong evidence of long range dependence is found for all volatility series. The relevance of the wavelet-based estimators, especially the approximate maximum likelihood and weighted least squares techniques, is demonstrated in terms of stability and estimation accuracy.
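A hedged sketch of the general wavelet-regression idea behind such estimators: regress the log2 wavelet-coefficient energy on scale and read the long-memory parameter off the slope, by ordinary or weighted least squares. This is an illustrative Abry-Veitch-style estimator, not the authors' exact wavelet OLS/WLS or approximate maximum likelihood procedures; the wavelet choice, level count, and slope-to-d conversion are assumptions.

```python
# Illustrative wavelet regression estimate of the long-memory parameter d.
import numpy as np
import pywt

def wavelet_d_estimate(x, wavelet="db4", max_level=6):
    """Regress log2 wavelet-coefficient energy on scale; slope/2 is a rough d estimate."""
    coeffs = pywt.wavedec(x, wavelet, level=max_level)
    details = coeffs[1:][::-1]                     # detail coefficients, finest scale first
    scales, log_energy, n_coeffs = [], [], []
    for j, d_j in enumerate(details, start=1):
        scales.append(j)
        log_energy.append(np.log2(np.mean(d_j**2)))
        n_coeffs.append(len(d_j))
    scales = np.array(scales, float)
    log_energy = np.array(log_energy)
    w = np.sqrt(np.array(n_coeffs, float))         # WLS weights: more coefficients = more reliable
    slope_ols = np.polyfit(scales, log_energy, 1)[0]
    slope_wls = np.polyfit(scales, log_energy, 1, w=w)[0]
    return slope_ols / 2.0, slope_wls / 2.0

returns = np.random.standard_normal(2048)          # placeholder for an index return series
print(wavelet_d_estimate(returns))
```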


2009 ◽  
Vol 2009 ◽  
pp. 1-8 ◽  
Author(s):  
Janet Myhre ◽  
Daniel R. Jeske ◽  
Michael Rennie ◽  
Yingtao Bi

A heteroscedastic linear regression model is developed from plausible assumptions that describe the time evolution of performance metrics for equipment. The inherent motivation for the associated weighted least squares analysis of the model is an essential and attractive selling point to engineers interested in equipment surveillance methodologies. A simple test for the significance of the heteroscedasticity suggested by a data set is derived, and a simulation study is used to evaluate the power of the test and compare it with several other applicable tests that were designed under different contexts. Tolerance intervals within the context of the model are derived, thus generalizing well-known tolerance intervals for ordinary least squares regression. Use of the model and its associated analyses is illustrated with an aerospace application in which hundreds of electronic components are continuously monitored by an automated system that flags components suspected of unusual degradation patterns.
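As an illustration of the workflow (not the paper's own test, which is not reproduced here), the sketch below fits OLS, checks for heteroscedasticity with a standard Breusch-Pagan test, and refits by weighted least squares under an assumed variance function. File and column names are hypothetical.

```python
# Sketch of the workflow: OLS fit, heteroscedasticity check, then WLS refit.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

df = pd.read_csv("component_metrics.csv")       # hypothetical: metric vs. operating_hours
X = sm.add_constant(df[["operating_hours"]])
ols = sm.OLS(df["metric"], X).fit()

# Test whether residual variance changes with the regressor (stand-in test).
lm_stat, lm_pvalue, _, _ = het_breuschpagan(ols.resid, X)
print(f"Breusch-Pagan p-value: {lm_pvalue:.3f}")

# If heteroscedasticity is indicated, refit with weights inversely proportional
# to an assumed variance function of operating hours.
weights = 1.0 / df["operating_hours"]
wls = sm.WLS(df["metric"], X, weights=weights).fit()
print(wls.params)
```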


2020 ◽  
pp. 636-645
Author(s):  
Hussain Karim Nashoor ◽  
Ebtisam Karim Abdulah

Examination of skewness makes researchers more aware of the importance of accurate statistical analysis. Most phenomena contain a certain degree of skewness, which gives rise to what is called "asymmetry" and, consequently, to the importance of the skew normal family. The epsilon skew normal distribution ESN(μ, σ, ε) is one of the probability distributions that provide a more flexible model, because the skewness parameter allows the distribution to range from normal to skewed. Theoretically, estimating the parameters of a linear regression model whose error mean is not zero is a major challenge, since no explicit formula for these estimates can be obtained; in practice, the estimates can only be obtained by numerical methods. This paper is dedicated to estimating the parameters of the Epsilon Skew Normal General Linear Model (ESNGLM) using an adaptive least squares method, alongside the ordinary least squares method for estimating the parameters of the General Linear Model (GLM). The coefficient of determination was used as a criterion to compare the models. These methods were applied to real data on dollar exchange rates. The analysis was carried out in Matlab, and the results showed that the ESNGLM is a satisfactory model.
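Since no closed-form estimates exist, a natural route is numerical optimization of the ESN likelihood. The sketch below fits a regression with epsilon-skew-normal errors by numerical maximum likelihood; the ESN parameterization shown is one common convention and may differ from the paper's, and the data are a synthetic stand-in for the exchange-rate series.

```python
# Hedged sketch: regression with epsilon-skew-normal (ESN) errors fitted by
# numerical maximum likelihood. Parameterization and data are assumptions.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def esn_neg_loglik(params, X, y):
    beta, log_sigma, eps = params[:-2], params[-2], np.tanh(params[-1])  # keep eps in (-1, 1)
    sigma = np.exp(log_sigma)
    r = y - X @ beta
    # One common ESN convention: scale sigma*(1+eps) below the mode, sigma*(1-eps) above.
    scale = np.where(r < 0, sigma * (1 + eps), sigma * (1 - eps))
    return -np.sum(norm.logpdf(r / scale) - np.log(sigma))

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])    # toy design matrix
y = X @ np.array([1.0, 2.0]) + rng.normal(size=200)          # placeholder for exchange-rate data

start = np.zeros(X.shape[1] + 2)
fit = minimize(esn_neg_loglik, start, args=(X, y), method="Nelder-Mead")
print("ESN regression estimates:", fit.x[:X.shape[1]])
```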


Author(s):  
M Kavousian ◽  
A Salehi sashlabadi ◽  
MJ Jafari ◽  
S Khodakarim ◽  
H Rabiei

Introduction: Given the importance of matching workers' physical and mental capabilities to their job demands, measuring, maintaining, and upgrading their work ability has become an essential task. This study aimed to investigate the Work Ability Index (WAI) and its relationship with VO2max at a cement company. Materials and Methods: This cross-sectional study was conducted among 130 employees of a cement company in Iran in 2018. Data were collected with the WAI, the Queen's step test for maximum oxygen consumption, and a researcher-designed questionnaire on socio-demographic and work-related factors. SPSS 21 was used to analyze the data. Results: The mean ± standard deviation of the WAI among staff was 39.35 ± 4.64. Among the demographic and work-related variables, sports activity (P = 0.04), sleep quality (P < 0.001), and work experience (P = 0.046) were significantly correlated with WAI. There was a significant positive correlation between the WAI score and VO2max (r = 0.21, p < 0.05). Regression modeling showed that VO2max was the only significant predictor of WAI. Conclusion: According to the results, occupational intervention programs to maintain and enhance staff work ability should focus on improving sleep quality and increasing exercise. Also, given the positive relationship between VO2max and WAI, it is recommended to select employees whose aerobic capacity suits the workload of the job.
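A small illustrative sketch (not the study's SPSS analysis) of the reported steps: a Pearson correlation between WAI and VO2max followed by a multiple regression. The file and column names are assumptions.

```python
# Illustrative correlation and regression of WAI on VO2max and related factors.
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import pearsonr

df = pd.read_csv("cement_workers.csv")          # hypothetical: wai, vo2max, sleep_quality, exercise
r, p = pearsonr(df["wai"], df["vo2max"])
print(f"r = {r:.2f}, p = {p:.3f}")

model = smf.ols("wai ~ vo2max + sleep_quality + exercise", data=df).fit()
print(model.summary())                          # is vo2max the only significant predictor?
```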


Entropy ◽  
2022 ◽  
Vol 24 (1) ◽  
pp. 95
Author(s):  
Pontus Söderbäck ◽  
Jörgen Blomvall ◽  
Martin Singull

Liquid financial markets, such as the options market of the S&P 500 index, create vast amounts of data every day, i.e., so-called intraday data. However, this highly granular data is often reduced to a single time point when used to estimate financial quantities. This under-utilization of the data may reduce the quality of the estimates. In this paper, we study the impact on estimation quality of using intraday data to estimate dividends. The methodology is based on earlier linear regression (ordinary least squares) estimates, which have been adapted to intraday data. Further, the method is generalized in two aspects. First, the dividends are expressed as present values of future dividends rather than dividend yields. Second, to account for heteroscedasticity, the estimation methodology is formulated as a weighted least squares problem, where the weights are determined from the market data. This method is compared with a traditional method on out-of-sample S&P 500 European options market data. The results show that estimates based on intraday data have, with statistical significance, a higher quality than the corresponding single-time estimates. Additionally, the two generalizations of the methodology are shown to improve the estimation quality further.
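The regression idea can be sketched from European put-call parity, C − P = S − PV(D) − K·e^(−rT): regressing C − P on strike K recovers PV(D) from the intercept and the discount factor from the slope. The weighting by quoted bid-ask spreads below is an assumed stand-in for the paper's market-data-based weights, and the quotes are toy numbers.

```python
# Hedged sketch: dividend present value from a put-call-parity WLS regression.
import numpy as np
import statsmodels.api as sm

def dividend_pv_wls(strikes, calls, puts, spot, spreads):
    y = calls - puts                              # left-hand side of put-call parity
    X = sm.add_constant(strikes)                  # columns: [1, K]
    weights = 1.0 / np.square(spreads)            # tighter quotes get more weight (assumed rule)
    res = sm.WLS(y, X, weights=weights).fit()
    intercept, slope = res.params
    pv_dividends = spot - intercept               # S - (S - PV(D))
    discount_factor = -slope                      # exp(-rT)
    return pv_dividends, discount_factor

# Toy quotes for one expiry (illustrative numbers only).
K = np.array([3800.0, 3900.0, 4000.0, 4100.0])
C = np.array([420.0, 345.0, 278.0, 218.0])
P = np.array([150.0, 172.0, 203.0, 240.0])
spread = np.array([1.2, 0.9, 0.8, 1.1])
print(dividend_pv_wls(K, C, P, spot=4050.0, spreads=spread))
```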


2014 ◽  
Vol 71 (1) ◽  
Author(s):  
Bello Abdulkadir Rasheed ◽  
Robiah Adnan ◽  
Seyed Ehsan Saffari ◽  
Kafi Dano Pati

In a linear regression model, the ordinary least squares (OLS) method is considered the best method to estimate the regression parameters if the assumptions are met. However, if the data do not satisfy the underlying assumptions, the results will be misleading. The violation of the assumption of constant variance in least squares regression is caused by the presence of outliers and heteroscedasticity in the data. This assumption of constant variance (homoscedasticity) is very important in linear regression, because under it the least squares estimators enjoy the property of minimum variance. Therefore, a robust regression method is required to handle the problem of outliers in the data. This research uses weighted least squares (WLS) techniques to estimate the regression coefficients when the assumption of constant error variance is violated. Estimation by WLS is equivalent to carrying out OLS on transformed variables; however, WLS can easily be affected by outliers. To remedy this, we suggest a robust technique for estimating the regression parameters in the presence of heteroscedasticity and outliers. We apply robust M-estimation using iteratively reweighted least squares (IRWLS) with the Huber and Tukey bisquare functions, together with the resistant least trimmed squares estimator, to estimate the parameters of a model for state-wide crime in the United States in 1993. The outcomes of the study indicate that the estimators obtained from the M-estimation techniques and the least trimmed squares method are more effective than those obtained from OLS.
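A brief sketch of the M-estimation part of the comparison, using statsmodels' RLM (which fits by iteratively reweighted least squares) with the Huber and Tukey bisquare norms; the least trimmed squares estimator is omitted here. The crime-data file and predictor names are illustrative assumptions.

```python
# Sketch: OLS versus robust M-estimation (Huber, Tukey bisquare) via IRLS.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("statewide_crime_1993.csv")     # hypothetical stand-in for the crime data
X = sm.add_constant(df[["poverty", "single"]])
y = df["violent_crime"]

ols = sm.OLS(y, X).fit()
huber = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()
tukey = sm.RLM(y, X, M=sm.robust.norms.TukeyBiweight()).fit()

print(pd.DataFrame({"OLS": ols.params, "Huber": huber.params, "Tukey": tukey.params}))
```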


1985 ◽  
Vol 15 (2) ◽  
pp. 331-340 ◽  
Author(s):  
T. Cunia ◽  
R. D. Briggs

To construct biomass tables for various tree components that are consistent with each other, one may use linear regression techniques with dummy variables. When the biomass of these components is measured on the same sample trees, one should also use the generalized rather than the ordinary least squares method. A procedure is shown which allows the estimation of the covariance matrix of the sample biomass values and circumvents the problem of storing and inverting large covariance matrices. Applied to 20 sets of sample tree data, the generalized least squares regressions generated estimates which, on average, were slightly higher (about 1%) than the sample data. The confidence and prediction bands about the regression function were wider, sometimes considerably so, than those estimated by ordinary weighted least squares.
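A minimal sketch of the generalized least squares idea: component equations stacked with dummy variables and fitted with a covariance matrix that allows correlation among components measured on the same tree. The toy data, design, and covariance structure below are assumptions, not the authors' procedure.

```python
# Sketch: GLS fit of stacked component-biomass equations with dummy variables
# and a block covariance that correlates components within a tree.
import numpy as np
import statsmodels.api as sm

n, k = 120, 3                                     # toy dimensions: observations, components
rng = np.random.default_rng(1)
dbh = np.repeat(rng.uniform(10, 50, n // k), k)   # one diameter per tree, repeated per component
component = np.tile(np.eye(k), (n // k, 1))       # dummy variables for stem/branch/foliage
X = np.column_stack([component, component * dbh[:, None]])
y = X @ rng.uniform(0.5, 2.0, 2 * k) + rng.normal(scale=dbh / 10.0)

# Assumed covariance: correlated errors within a tree, independent across trees.
block = 0.6 * np.ones((k, k)) + 0.4 * np.eye(k)
sigma = np.kron(np.eye(n // k), block)
gls = sm.GLS(y, X, sigma=sigma).fit()
print(gls.params)
```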


1986 ◽  
Vol 16 (2) ◽  
pp. 249-255 ◽  
Author(s):  
Edwin J. Green ◽  
William E. Strawderman

A Stein-rule estimator, which shrinks least squares estimates of regression parameters toward their weighted average, was employed to estimate the coefficient in the constant form factor volume equation for 18 species simultaneously. The Stein-rule procedure was applied to ordinary least squares estimates and weighted least squares estimates. Simulation tests on independent validation data sets revealed that the Stein-rule estimates were biased, but predicted better than the corresponding least squares estimates. The Stein-rule procedures also yielded lower estimated mean square errors for the volume equation coefficient than the corresponding least squares procedure. Different methods of withdrawing sample data from the total sample available for each species revealed that the superiority of Stein-rule procedures over least squares decreased as the sample size increased and that the Stein-rule procedures were robust to unequal sample sizes, at least on the scale studied here.
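A hedged sketch of the shrinkage idea: a textbook positive-part James-Stein adjustment that pulls per-species least squares coefficients toward their inverse-variance weighted average. This is not the authors' exact Stein-rule estimator, and the coefficients and variances shown are toy values.

```python
# Illustrative James-Stein-type shrinkage of per-species coefficients toward
# their precision-weighted mean (a textbook version, not the paper's estimator).
import numpy as np

def stein_shrink(coefs, variances):
    """Shrink each estimate toward the precision-weighted mean of all estimates."""
    coefs = np.asarray(coefs, float)
    variances = np.asarray(variances, float)
    w = 1.0 / variances
    center = np.sum(w * coefs) / np.sum(w)
    p = len(coefs)
    s2 = np.mean(variances)                       # average sampling variance (assumed common)
    shrink = max(0.0, 1.0 - (p - 3) * s2 / np.sum((coefs - center) ** 2))
    return center + shrink * (coefs - center)

# Toy per-species form-factor coefficients and their estimated variances.
b = np.array([0.42, 0.38, 0.55, 0.47, 0.40, 0.51])
v = np.array([0.004, 0.006, 0.010, 0.005, 0.003, 0.008])
print(stein_shrink(b, v))
```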

