The Assumptions of Randomness, Zero Mean, Constant Variance and Normality of the Disturbance Variable u

1977 ◽  
pp. 179-199
Author(s):  
A. Koutsoyiannis

1998 ◽  
Vol 7 (2) ◽  
pp. 149-171
Author(s):  
Steve Hadjiyannakis ◽  
Louis Culumovic ◽  
Robert L. Welch

Sensors ◽  
2018 ◽  
Vol 18 (9) ◽  
pp. 2964 ◽  
Author(s):  
Gaël Kermarrec ◽  
Hamza Alkhatib ◽  
Ingo Neumann

For a trustworthy least-squares (LS) solution, a good description of the stochastic properties of the measurements is indispensable. For a terrestrial laser scanner (TLS), the range variance can be described by a power-law function of the intensity of the reflected signal. The power and scaling factors depend on the laser scanner under consideration and can be determined accurately by means of calibrations in 1D mode or by residual analysis of an LS adjustment. However, such procedures complicate the use of empirical intensity models (IM) considerably. Moreover, the extent to which a point-wise weighting is suitable when the derived variance covariance matrix (VCM) is further used in an LS adjustment remains questionable. Using closed-loop simulations, in which both the true geometry and the stochastic model are under control, we investigate how variations of the parameters of the IM affect the results of an LS adjustment. As a case study, we consider the determination of the Cartesian coordinates of the control points (CP) of a B-spline curve. We show that a constant variance can be assigned to all points of an object with homogeneous properties without affecting the a posteriori variance factor or the efficiency of the LS solution. The results from a real-case scenario highlight that the conclusions of the simulations remain valid even for more challenging geometries. A procedure to determine the range variance is proposed to simplify the computation of the VCM.
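As an illustration of the weighting question discussed above, the sketch below builds a diagonal VCM from a power-law intensity model, sigma^2(I) = a * I**b, and compares a point-wise weighted LS adjustment with one that assigns a constant variance to all points. The scaling factor, power, simulated intensities and the polynomial test geometry are illustrative assumptions, not the authors' TLS calibration values or B-spline case study.

```python
# Minimal sketch, assuming a power-law intensity model sigma^2(I) = a * I**b
# for the range variance; the scaling factor a, power b, simulated intensities
# and the cubic test geometry are illustrative, not calibrated TLS values.
import numpy as np

rng = np.random.default_rng(1)

# Simulated intensities and the corresponding point-wise range variances
intensity = rng.uniform(0.2, 1.0, size=200)
a, b = 1e-6, -1.5                        # assumed scaling factor and power
sigma2 = a * intensity**b

# Simulated observations of a simple cubic "geometry"
x = np.linspace(0.0, 1.0, intensity.size)
A = np.vander(x, 4, increasing=True)     # design matrix of the LS adjustment
beta_true = np.array([0.5, -1.0, 2.0, 0.3])
y = A @ beta_true + rng.normal(scale=np.sqrt(sigma2))

def weighted_ls(A, y, sigma2):
    """Weighted LS with a diagonal VCM; returns the estimates and the
    a posteriori variance factor."""
    w = 1.0 / sigma2
    N = A.T @ (w[:, None] * A)
    beta = np.linalg.solve(N, A.T @ (w * y))
    v = y - A @ beta
    s0_sq = (v * w * v).sum() / (len(y) - A.shape[1])
    return beta, s0_sq

# Point-wise intensity-model weighting vs. one constant variance for all points
beta_im, s0_im = weighted_ls(A, y, sigma2)
beta_c, s0_c = weighted_ls(A, y, np.full_like(sigma2, sigma2.mean()))
print("a posteriori variance factor, point-wise weighting:", s0_im)
print("a posteriori variance factor, constant variance:   ", s0_c)
```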


1983 ◽  
pp. 35-71 ◽  
Author(s):  
P. McCullagh ◽  
J. A. Nelder

1989 ◽  
pp. 48-97
Author(s):  
P. McCullagh ◽  
J. A. Nelder

2018 ◽  
Vol 615 ◽  
pp. A111 ◽  
Author(s):  
N. Olspert ◽  
J. Pelt ◽  
M. J. Käpylä ◽  
J. Lehtinen

Context. Period estimation is one of the central topics in astronomical time series analysis, in which data are often unevenly sampled. Studies of stellar magnetic cycles are especially challenging, as the periods expected in those cases are approximately the same length as the datasets themselves. The datasets often contain trends, the origin of which is either a real long-term cycle or an instrumental effect. These effects cannot be reliably separated, yet they can lead to erroneous period determinations if not properly handled. Aims. In this study we aim to develop a method that handles the trends properly. By performing an extensive set of tests, we show that this is the optimal procedure when contrasted with methods that do not include the trend directly in the model. The effect of the form of the noise (whether constant or heteroscedastic) on the results is also investigated. Methods. We introduce a Bayesian generalised Lomb-Scargle periodogram with trend (BGLST), which is a probabilistic linear regression model using Gaussian priors for the coefficients of the fit and a uniform prior for the frequency parameter. Results. We show, using synthetic data, that when there is no prior information on whether and to what extent the true model of the data contains a linear trend, the BGLST method is preferable to methods that either detrend the data or opt not to detrend the data before fitting the periodic model. Whether to use noise with other than constant variance in the model depends on the density of the data sampling and on the true noise type of the process.
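The sketch below illustrates the core idea of including the trend in the periodic model rather than detrending first: at each trial frequency the data are fit with the basis {cos wt, sin wt, t, 1} and the chi-square reduction relative to a trend-only fit is recorded. This is a simplified least-squares analogue; it omits the Gaussian priors on the coefficients and the marginal likelihood of the full BGLST formulation, and the synthetic unevenly sampled series is an assumption.

```python
# Simplified sketch: a least-squares periodogram that includes a linear trend
# in the model at every trial frequency, instead of detrending beforehand.
# It omits the Gaussian priors and marginal likelihood of the Bayesian BGLST
# formulation; the synthetic unevenly sampled series is an assumption.
import numpy as np

rng = np.random.default_rng(2)

# Unevenly sampled series: sinusoid (period 7) + linear trend + noise
t = np.sort(rng.uniform(0.0, 30.0, size=150))
y = np.sin(2 * np.pi * t / 7.0) + 0.05 * t + rng.normal(0.0, 0.3, t.size)

def power_with_trend(t, y, freqs):
    """Chi-square reduction of a harmonic + trend model relative to a
    trend-only model, evaluated on a grid of trial frequencies."""
    A0 = np.column_stack([t, np.ones_like(t)])            # trend-only basis
    r0 = y - A0 @ np.linalg.lstsq(A0, y, rcond=None)[0]
    chi2_0 = (r0**2).sum()
    power = np.empty(freqs.size)
    for i, f in enumerate(freqs):
        w = 2 * np.pi * f
        A = np.column_stack([np.cos(w * t), np.sin(w * t), t, np.ones_like(t)])
        r = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
        power[i] = 1.0 - (r**2).sum() / chi2_0
    return power

freqs = np.linspace(0.01, 0.5, 500)
power = power_with_trend(t, y, freqs)
print("estimated period:", 1.0 / freqs[np.argmax(power)])
```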


2000 ◽  
Vol 46 (10) ◽  
pp. 1669-1680 ◽  
Author(s):  
James M Davenport ◽  
Brian Schlain

Abstract Manufacturers and users of medical diagnostic devices are provided with a statistical decision tool for investigating a claimed minimal detectable concentration (MDC). The MDC is defined by setting two probabilities: the probability that a blank sample being analyzed is determined to contain analyte, and the probability that the device fails to detect a low concentration of analyte present at the MDC. The statistical procedure for simultaneously testing these two analytical decision errors assumes that signal responses follow a Gaussian distribution but does not require a fitted calibration curve, knowledge of distribution parameters, or the assumption of constant variance in the low assay range. Evaluation of the operating characteristics of the procedure requires knowledge only of the variance ratio between the MDC and zero-dose signal distributions, which usually is well known.
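A hedged sketch of a generic, Currie-style detection-limit calculation in the two-error-rate framing described above: given the false-positive rate alpha, the false-negative rate beta at the MDC, and the ratio of the MDC to zero-dose signal standard deviations, it returns the required separation between the mean MDC and blank signals. This is not the specific simultaneous test of the paper; all numeric inputs are illustrative.

```python
# Hedged sketch of a generic Currie-style detection-limit calculation; not the
# specific simultaneous test of the paper.  alpha: probability of declaring
# analyte in a blank; beta: probability of missing analyte at the MDC;
# r: ratio of the MDC to zero-dose signal standard deviations.
from scipy.stats import norm

def mdc_signal_separation(alpha: float, beta: float, r: float) -> float:
    """Required separation between the mean MDC signal and the mean blank
    signal, in units of the zero-dose standard deviation."""
    return norm.ppf(1 - alpha) + r * norm.ppf(1 - beta)

# Illustrative inputs: 5% false-positive and false-negative rates, with the
# MDC signal spread assumed 1.2 times the blank spread.
k = mdc_signal_separation(alpha=0.05, beta=0.05, r=1.2)
print(f"required separation: {k:.2f} blank standard deviations")
```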


2014 ◽  
Vol 71 (1) ◽  
Author(s):  
Bello Abdulkadir Rasheed ◽  
Robiah Adnan ◽  
Seyed Ehsan Saffari ◽  
Kafi Dano Pati

In a linear regression model, the ordinary least squares (OLS) method is considered the best method to estimate the regression parameters if the assumptions are met. However, if the data do not satisfy the underlying assumptions, the results will be misleading. The violation of the assumption of constant variance in least squares regression is caused by the presence of outliers and heteroscedasticity in the data. This assumption of constant variance (homoscedasticity) is very important in linear regression, since it is under this assumption that the least squares estimators enjoy the property of minimum variance. A robust regression method is therefore required to handle the problem of outliers in the data. This research uses weighted least squares (WLS) techniques to estimate the regression coefficients when the assumption of constant error variance is violated. WLS estimation is equivalent to carrying out OLS on transformed variables, and it can easily be affected by outliers. To remedy this, we suggest a robust technique for the estimation of regression parameters in the presence of heteroscedasticity and outliers. We apply robust M-estimation regression using iteratively reweighted least squares (IRWLS) with the Huber and Tukey bisquare functions, together with the resistant least trimmed squares (LTS) estimator, to estimate the model parameters for state-wide crime data of the United States in 1993. The outcomes of the study indicate that the estimators obtained from the M-estimation techniques and the least trimmed squares method are more effective than those obtained from OLS.
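For concreteness, the sketch below implements M-estimation by iteratively reweighted least squares with the Huber weight function, one of the robust techniques named above, and contrasts it with OLS on simulated data containing gross outliers. The simulated data and the tuning constant c = 1.345 are assumptions for illustration; the state-wide crime data set is not reproduced here.

```python
# Minimal sketch of M-estimation via iteratively reweighted least squares
# (IRWLS) with the Huber weight function, contrasted with OLS on simulated
# data containing gross outliers.  The data and the tuning constant c = 1.345
# are illustrative assumptions; the crime data set is not reproduced here.
import numpy as np

rng = np.random.default_rng(3)

# Simulated regression data with a few gross outliers in the response
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([2.0, 3.0]) + rng.normal(size=n)
y[:5] += 15.0                                    # outliers

def huber_irwls(X, y, c=1.345, n_iter=50, tol=1e-8):
    """Huber M-estimate of the regression coefficients via IRWLS."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # start from the OLS fit
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745   # robust MAD scale
        u = np.maximum(np.abs(r / (s + 1e-12)), 1e-12)
        w = np.minimum(1.0, c / u)               # Huber weights
        WX = w[:, None] * X
        beta_new = np.linalg.solve(X.T @ WX, WX.T @ y)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

print("OLS:  ", np.linalg.lstsq(X, y, rcond=None)[0])
print("Huber:", huber_irwls(X, y))
```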


Author(s):  
Carlos A. P. Bengaly ◽  
Uendert Andrade ◽  
Jailson S. Alcaniz

Abstract We address the ≃4.4σ tension between the local and CMB measurements of the Hubble constant using simulated Type Ia Supernova (SN) datasets. We probe its directional dependence by means of a hemispherical comparison across the entire celestial sphere as an estimator of the H0 cosmic variance. We perform Monte Carlo simulations assuming isotropic and non-uniform distributions of data points, the latter coinciding with the real data. This allows us to incorporate observational features, such as sample incompleteness, in our estimation. We find that this tension can be alleviated to 3.4σ for isotropic realizations, and 2.7σ for non-uniform ones. We also find that the H0 variance is largely reduced if the datasets are augmented to 4 and 10 times the current size. Future surveys will be able to tell whether the Hubble constant tension arises from unaccounted-for cosmic variance, or whether it is an actual indication of physics beyond the standard cosmological model.
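The toy sketch below illustrates the hemispherical-comparison estimator in its simplest form: per-object H0 estimates are scattered isotropically on the sky, and for many random axes the mean H0 in opposite hemispheres is compared. The sample size, fiducial H0 and per-object scatter are assumptions; this is not the paper's SN simulation pipeline, which also models the non-uniform sky coverage of the real data.

```python
# Toy sketch of a hemispherical-comparison estimate of the H0 scatter: mock
# per-object H0 values are placed isotropically on the sky and, for many
# random axes, the mean H0 in opposite hemispheres is compared.  Sample size,
# fiducial H0 and per-object scatter are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)

def random_unit_vectors(n, rng):
    """Directions distributed uniformly on the unit sphere."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

# Isotropic mock sample: sky position and a noisy per-object H0 estimate
n_obj = 1000
positions = random_unit_vectors(n_obj, rng)
h0 = rng.normal(loc=73.0, scale=2.0, size=n_obj)

# Hemispherical comparison over many random axes
axes = random_unit_vectors(2000, rng)
delta_h0 = np.empty(axes.shape[0])
for i, axis in enumerate(axes):
    north = positions @ axis > 0.0
    delta_h0[i] = h0[north].mean() - h0[~north].mean()

print(f"hemispherical H0 variation (std of Delta H0): {delta_h0.std():.3f} km/s/Mpc")
```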


Environments ◽  
2019 ◽  
Vol 6 (12) ◽  
pp. 124
Author(s):  
Johannes Ranke ◽  
Stefan Meinecke

In the kinetic evaluation of chemical degradation data, degradation models are fitted to the data by varying degradation model parameters to obtain the best possible fit. Today, constant variance of the deviations of the observed data from the model is frequently assumed (error model “constant variance”). Allowing for a different variance for each observed variable (“variance by variable”) has been shown to be a useful refinement. On the other hand, experience gained in analytical chemistry shows that the absolute magnitude of the analytical error often increases with the magnitude of the observed value, which can be explained by an error component which is proportional to the true value. Therefore, kinetic evaluations of chemical degradation data using a two-component error model with a constant component (absolute error) and a component increasing with the observed values (relative error) are newly proposed here as a third possibility. In order to check which of the three error models is most adequate, they have been used in the evaluation of datasets obtained from pesticide evaluation dossiers published by the European Food Safety Authority (EFSA). For quantitative comparisons of the fits, the Akaike information criterion (AIC) was used, as the commonly used error level defined by the FOrum for the Coordination of pesticide fate models and their USe (FOCUS) is based on the assumption of constant variance. A set of fitting routines was developed within the mkin software package that allow for robust fitting of all three error models. Comparisons using parent-only degradation datasets, as well as datasets with the formation and decline of transformation products, showed that in many cases, the two-component error model proposed here provides the most adequate description of the error structure. While it was confirmed that the variance by variable error model often provides an improved representation of the error structure in kinetic fits with metabolites, it could be shown that in many cases, the two-component error model leads to a further improvement. In addition, it can be applied to parent-only fits, potentially improving the accuracy of the fit towards the end of the decline curve, where concentration levels are lower.
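As a minimal illustration of the two-component error model, the sketch below fits a single first-order decline curve by maximum likelihood with sigma(pred) = sqrt(sigma_abs**2 + (rho * pred)**2), i.e. a constant absolute error component plus a component proportional to the modelled value. The synthetic degradation data are an assumption, and the mkin package itself (written in R) is not used; this is only a Python illustration of the error structure.

```python
# Minimal sketch: maximum-likelihood fit of a single first-order (SFO) decline
# curve under the two-component error model sigma(pred) = sqrt(sigma_abs**2 +
# (rho * pred)**2).  The synthetic degradation data are an assumption; the
# R package mkin is not used here.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Synthetic parent-only degradation data: C0 = 100, k = 0.1 / day
t = np.array([0, 1, 3, 7, 14, 28, 56, 90], dtype=float)
true = 100.0 * np.exp(-0.1 * t)
obs = true + rng.normal(scale=np.sqrt(1.0**2 + (0.05 * true) ** 2))

def negloglik(params):
    """Negative Gaussian log-likelihood with the two-component error model."""
    c0, k, sigma_abs, rho = params
    pred = c0 * np.exp(-k * t)
    var = sigma_abs**2 + (rho * pred) ** 2
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (obs - pred) ** 2 / var)

res = minimize(negloglik, x0=[90.0, 0.05, 1.0, 0.1], method="Nelder-Mead")
c0, k, sigma_abs, rho = res.x
print(f"C0 = {c0:.1f}, k = {k:.3f}/day, "
      f"sigma_abs = {abs(sigma_abs):.2f}, rho = {abs(rho):.3f}")
```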

