autocorrelated error
Recently Published Documents


TOTAL DOCUMENTS

25
(FIVE YEARS 3)

H-INDEX

9
(FIVE YEARS 0)

2020 ◽  
Vol 5 (3) ◽  
pp. 70
Author(s):  
Nayla Desviona ◽  
Ferra Yanuar

The purpose of this study is to compare the ability of the classical Quantile Regression method and the Bayesian Quantile Regression method to estimate models that contain autocorrelated errors, using simulation studies. In the quantile regression approach, the response data are divided into several parts, or quantiles, conditional on the indicator variables, and the model parameters are then estimated for each selected quantile. The parameters are estimated using conditional quantile functions obtained by minimizing the asymmetric absolute error. In the Bayesian quantile regression method, the error is assumed to follow an asymmetric Laplace distribution. The Bayesian approach to quantile regression uses the Markov chain Monte Carlo method with the Gibbs sampling algorithm to produce a convergent posterior mean. The best estimation method is the one that produces the smallest absolute bias and the narrowest confidence interval. This study found that the Bayesian quantile method produces smaller absolute bias values and narrower confidence intervals than the classical quantile regression method. These results indicate that the Bayesian Quantile Regression method tends to produce better estimates than the Quantile Regression method in the presence of autocorrelated errors.

Keywords: Quantile Regression Method, Bayesian Quantile Regression Method, Confidence Interval, Autocorrelation.
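The asymmetric absolute ("check") loss mentioned in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' simulation code: it defines the check loss, estimates a quantile of a response with AR(1)-autocorrelated errors by minimizing the summed loss over candidate values, and the AR coefficient 0.7 and intercept 10.0 are arbitrary assumed values.

```python
import numpy as np

def check_loss(u, tau):
    """Asymmetric absolute ('check') loss used in quantile regression."""
    return u * (tau - (u < 0))

def fit_quantile(y, tau):
    """Estimate the tau-th quantile of y by minimizing the summed check loss
    over the observed values (a minimizer is an empirical quantile)."""
    losses = [check_loss(y - c, tau).sum() for c in y]
    return y[int(np.argmin(losses))]

rng = np.random.default_rng(0)
# Response with AR(1)-autocorrelated errors, as in the simulation setting
e = np.zeros(500)
for t in range(1, 500):
    e[t] = 0.7 * e[t - 1] + rng.normal()
y = 10.0 + e

q50 = fit_quantile(y, 0.5)   # close to the sample median
```

For tau = 0.5 the check loss reduces to half the absolute error, so the minimizer is a sample median; other tau values trace out the conditional quantiles estimated per quantile in the study.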


2020 ◽  
Author(s):  
Kris Villez ◽  
Dario Del Giudice ◽  
Marc B. Neumann ◽  
Jörg Rieckermann

In engineering practice, model-based design requires not only a good process-based model but also a good description of stochastic disturbances and measurement errors, so that credible parameter values can be learned from observations. However, typical methods use Gaussian error models, which often cannot describe the complex temporal patterns of the residuals. This results in overconfidence in the identified parameters and, in turn, optimistic reactor designs. In this work, we assess the strengths and weaknesses of a method that statistically describes these patterns with autocorrelated error models. Including the bias term widens the credible prediction intervals, which in turn leads to more conservative design choices. However, we also show that the augmented error model is not a universal tool, as its application cannot guarantee the desired reliability of the resulting wastewater reactor design.
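The overconfidence that motivates this work can be illustrated with a standard textbook result (not the authors' method): under an AR(1) error model with coefficient phi, the variance of a sample mean is inflated by roughly (1 + phi)/(1 - phi) relative to the i.i.d. Gaussian assumption, so intervals computed under independence are too narrow. The values phi = 0.8 and n = 2000 below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
phi, n = 0.8, 2000

# AR(1) residual series: e[t] = phi * e[t-1] + white noise
e = np.zeros(n)
for t in range(1, n):
    e[t] = phi * e[t - 1] + rng.normal()

# Naive i.i.d. standard error of the mean (overconfident)
se_iid = e.std(ddof=1) / np.sqrt(n)

# Large-sample AR(1) correction: the variance of the mean inflates by
# approximately (1 + phi) / (1 - phi)
inflation = (1 + phi) / (1 - phi)
se_ar1 = se_iid * np.sqrt(inflation)
```

With phi = 0.8 the corrected standard error is three times the naive one, which is the mechanism behind the wider, more conservative credible intervals reported in the paper.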


Author(s):  
Samuel Olorunfemi Adams ◽  
Rueben Adeyemi Ipinyomi

Spline smoothing is used to filter out noise or disturbances in an observation; its performance depends on the choice of smoothing parameter. There are many methods for estimating the smoothing parameter; the most popular among them are Generalized Maximum Likelihood (GML), Generalized Cross-Validation (GCV), and Unbiased Risk (UBR), but these methods tend to overfit the smoothing parameter in the presence of autocorrelated errors. A new spline smoothing estimation method is proposed and compared with the three existing methods in order to eliminate the overfitting associated with autocorrelation in the error term. This is demonstrated through a simulation study, performed with a program written in R, based on the predictive mean square error (PMSE) criterion. The results indicate that the PMSE of the four smoothing methods decreases as the smoothing parameter increases and as the sample size increases. The study found that the proposed smoothing method is the best for time series observations with autocorrelated errors because it does not overfit and works well for large sample sizes. This study will help researchers overcome the overfitting associated with applying the smoothing spline method to time series observations.
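The GCV criterion named in the abstract can be sketched with a discrete analogue of a smoothing spline (a second-difference roughness penalty). This is a generic illustration of GCV, not the paper's proposed method; the test function, noise level, and candidate lambda grid are assumptions.

```python
import numpy as np

def smoother_matrix(n, lam):
    """Hat matrix of a discrete smoothing-spline analogue:
    y_hat = (I + lam * D'D)^{-1} y, with D the second-difference operator."""
    D = np.diff(np.eye(n), n=2, axis=0)        # (n-2) x n second differences
    return np.linalg.inv(np.eye(n) + lam * D.T @ D)

def gcv(y, lam):
    """Generalized cross-validation score: n * RSS / (n - tr(H))^2."""
    n = len(y)
    H = smoother_matrix(n, lam)
    resid = y - H @ y
    return n * (resid @ resid) / (n - np.trace(H)) ** 2

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 100)
y = np.sin(2 * np.pi * x) + 0.3 * rng.normal(size=100)

lams = [0.1, 1.0, 10.0, 100.0]
best = min(lams, key=lambda lam: gcv(y, lam))   # GCV-selected lambda
```

When the errors are autocorrelated, the residual sum of squares in the numerator is deflated by the smoother tracking the correlated noise, which is why GCV (and GML, UBR) tend to pick a lambda that is too small, i.e. to overfit.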


Author(s):  
Monday Osagie Adenomon ◽  
Benjamin Agboola Oyejola

The goal of VAR or BVAR modelling is the characterization of the dynamics and endogenous relationships among time series; VAR models are also known for their applications to forecasting and policy analysis. This paper compares the performance of VAR and Sims-Zha Bayesian VAR models when the multiple time series are jointly influenced by different levels of collinearity and autocorrelation over short horizons (T = 16, 32, 64, and 128). Five levels (-0.9, -0.5, 0, +0.5, +0.9) of collinearity and autocorrelation were considered. The results of the simulation study reveal that the VAR(2) model dominated for zero and moderate levels of autocorrelation (-0.5, 0, +0.5) irrespective of the collinearity level, except in a few cases when T = 16, while the BVAR models dominated for high autocorrelation levels (-0.9 and +0.9) irrespective of the collinearity level, except in a few cases when T = 128. The performance of the models varies with the levels of collinearity and autocorrelated error, and also with the length of the short-term period. Furthermore, the values of the RMSE and MAE criteria decrease as the time series length increases. In conclusion, the performance of the forecasting models depends on the structure and length of the time series data. It is therefore recommended that the data structure and series length be considered when choosing a forecasting model.
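A minimal version of the kind of experiment described above can be sketched with a bivariate VAR(1) rather than the paper's VAR(2)/BVAR setup: simulate a stable VAR with correlated ("collinear") innovations, fit it by OLS, and score one-step forecasts by RMSE. The coefficient matrix, innovation covariance, and train/test split are assumed values for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)
A = np.array([[0.5, 0.1],
              [0.2, 0.4]])            # assumed stable VAR(1) coefficients
T = 128                               # one of the short-term lengths studied

# Simulate y_t = A y_{t-1} + u_t with correlated innovations
cov = np.array([[1.0, 0.5],
                [0.5, 1.0]])
u = rng.multivariate_normal([0.0, 0.0], cov, size=T)
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + u[t]

# OLS estimate of A from lagged regressors, then one-step RMSE on a holdout
train, test = y[:96], y[96:]
X, Y = train[:-1], train[1:]
A_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)   # A_hat.T approximates A
pred = test[:-1] @ A_hat
rmse = np.sqrt(np.mean((test[1:] - pred) ** 2))
```

Repeating such a simulation across autocorrelation and collinearity levels, and comparing the OLS fit against a shrinkage (Bayesian) estimator, is the structure of the comparison the paper reports.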


2017 ◽  
Vol 52 (12) ◽  
pp. 1241-1252 ◽  
Author(s):  
Leandro Félix Demuner ◽  
Diana Suckeveris ◽  
Julian Andrés Muñoz ◽  
Vinicius Camargo Caetano ◽  
Cesar Gonçalves de Lima ◽  
...  

Abstract: The objective of this work was to investigate the fit of the Gompertz, Logistic, von Bertalanffy, and Richards growth models to male and female chickens of the Cobb 500, Ross 308, and Hubbard Flex lines. Initially, 1,800 chickens were randomly housed in 36 pens, with six replicates per line and sex, fed ad libitum according to sex, and reared until 56 days of age. The average weekly body weight for each line and sex was used to estimate the model parameters by ordinary least squares, by least squares weighted by the inverse variance of the body weight, and by weighted least squares with a first-order autocorrelated error structure. The weighted models and the weighted autocorrelated-error models showed parameter values different from those of the unweighted models, shifting the inflection point of the curve, and they exhibited better fits according to the adjusted coefficient of determination, the residual standard deviation, and the Akaike information criterion. Among the models studied, the Richards and Gompertz models gave the best fits in all situations, with more realistic parameter estimates. However, the weighted Richards model, with or without the first-order autoregressive AR(1) structure, exhibited the best fit in females and males, respectively.
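The Gompertz model and its inflection point, central to this abstract, can be written down directly. This is a generic sketch, not the authors' fitting code: the parameter values are hypothetical broiler-scale numbers, and `weighted_sse` only states the inverse-variance criterion a weighted fit would minimize.

```python
import numpy as np

def gompertz(t, A, b, c):
    """Gompertz growth curve: asymptote A, shape b, rate c (per day)."""
    return A * np.exp(-b * np.exp(-c * t))

# The inflection point is at t* = ln(b)/c, where the weight equals A/e
A, b, c = 4000.0, 4.0, 0.08          # hypothetical broiler-scale values
t_star = np.log(b) / c
w_star = gompertz(t_star, A, b, c)   # equals A / e

def weighted_sse(params, t, w, var):
    """Inverse-variance weighted SSE, the criterion of the weighted fit."""
    resid = w - gompertz(t, *params)
    return np.sum(resid ** 2 / var)
```

Because weekly body-weight variance grows with age, inverse-variance weighting downweights the late, noisy observations, which is why the weighted fits shift the estimated inflection point relative to ordinary least squares.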


2015 ◽  
Vol 172 ◽  
pp. 325-334 ◽  
Author(s):  
John Wiedenmann ◽  
Michael J. Wilberg ◽  
Andrea Sylvia ◽  
Thomas J. Miller
