Uncertainty of a Least Squares Curve Fit through Points with Known Uncertainties

1995 ◽  
Vol 23 (4) ◽  
pp. 315-326
Author(s):  
Ronald D. Flack

Uncertainties in least squares curve fits to data with uncertainties are examined. First, experimental data with nominal curve shapes, representing property profiles between boundaries, are simulated by adding known uncertainties to individual points. Next, curve fits to the simulated data are computed and compared to the nominal curves. By using a large number of different sets of data, statistical differences between the two curves are quantified and, thus, the uncertainty of the curve fit is derived. Studies for linear, quadratic, and higher-order nominal curves with curve fits up to fourth order are presented herein. Typically, the curve-fit uncertainty is 50% or less of the uncertainty of the individual data points. These uncertainties increase with increasing order of the least squares curve fit and decrease with increasing number of data points on the curve.
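As a rough illustration of the procedure described, the sketch below simulates many noisy realizations of a nominal quadratic profile, fits each realization by least squares, and measures the scatter of the fitted curves about the nominal one. The profile shape, Gaussian point uncertainty, and use of numpy.polyfit are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Nominal quadratic profile between two boundaries (illustrative choice).
x = np.linspace(0.0, 1.0, 20)           # number of data points on the curve
nominal = 1.0 + 2.0 * x - 1.5 * x**2    # "true" property profile
sigma = 0.05                            # known uncertainty of each point

n_trials = 5000
fits = np.empty((n_trials, x.size))
for i in range(n_trials):
    noisy = nominal + rng.normal(0.0, sigma, x.size)  # one simulated data set
    coeffs = np.polyfit(x, noisy, deg=2)              # least-squares curve fit
    fits[i] = np.polyval(coeffs, x)

# RMS deviation of the fitted curve from the nominal curve,
# expressed relative to the uncertainty of the individual points.
fit_rms = np.sqrt(((fits - nominal) ** 2).mean(axis=0))
print("fit / point uncertainty:", (fit_rms / sigma).round(2))
```

Repeating this with different polynomial orders and point counts reproduces the qualitative trends reported: the ratio grows with fit order and shrinks as points are added.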

1978 ◽  
Vol 24 (4) ◽  
pp. 611-620 ◽  
Author(s):  
R B Davis ◽  
J E Thompson ◽  
H L Pardue

Abstract This paper discusses properties of several statistical parameters that are useful in judging the quality of least-squares fits of experimental data and in interpreting least-squares results. The presentation includes simplified equations that emphasize similarities and dissimilarities among the standard error of estimate, the standard deviations of slopes and intercepts, the correlation coefficient, and the degree of correlation between the least-squares slope and intercept. The equations are used to illustrate dependencies of these parameters upon experimentally controlled variables such as the number of data points and the range and average value of the independent variable. Results are interpreted in terms of which parameters are most useful for different kinds of applications. The paper also includes a discussion of joint confidence intervals that should be used when slopes and intercepts are highly correlated and presents equations that can be used to judge the degree of correlation between these coefficients and to compute the elliptical joint confidence intervals. The parabolic confidence intervals for calibration curves are also discussed briefly.
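The quantities discussed here are standard for a straight-line fit; the sketch below collects them in one place using the textbook expressions (assumed here, not taken from the paper).

```python
import numpy as np

def line_fit_diagnostics(x, y):
    """Straight-line least squares plus the quality parameters discussed above."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size
    b, a = np.polyfit(x, y, 1)                   # slope b, intercept a
    resid = y - (a + b * x)
    s = np.sqrt(resid @ resid / (n - 2))         # standard error of estimate
    sxx = np.sum((x - x.mean()) ** 2)
    s_slope = s / np.sqrt(sxx)                   # standard deviation of slope
    s_intercept = s * np.sqrt(np.mean(x**2) / sxx)  # std. deviation of intercept
    r = np.corrcoef(x, y)[0, 1]                  # correlation coefficient
    rho_ab = -x.mean() / np.sqrt(np.mean(x**2))  # slope-intercept correlation
    return a, b, s, s_intercept, s_slope, r, rho_ab
```

The last expression also shows why centering the independent variable (making the mean of x zero) removes the slope-intercept correlation, which is the situation in which separate, rather than elliptical joint, confidence intervals suffice.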


2014 ◽  
Vol 3 (2) ◽  
pp. 174
Author(s):  
Yaser Abdelhadi

Linear transformations are performed for selected exponential engineering functions. The optimum values of the parameters of the linear model equation that fits the set of experimental or simulated data points are determined by the linear least squares method. The classical and matrix forms of ordinary least squares are illustrated.
Keywords: Exponential Functions; Linear Modeling; Ordinary Least Squares; Parametric Estimation; Regression Steps.
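A minimal sketch of the idea for one exponential function, y = a·exp(b·x): the logarithmic transformation makes the model linear in its parameters, and the matrix form of ordinary least squares recovers them. The data and parameter values below are invented for illustration.

```python
import numpy as np

# Simulated data from y = a * exp(b * x) with small multiplicative noise.
rng = np.random.default_rng(1)
x = np.linspace(0.1, 2.0, 15)
y = 3.0 * np.exp(1.2 * x) * rng.lognormal(0.0, 0.02, x.size)

# Linear transformation: ln(y) = ln(a) + b * x.
Y = np.log(y)

# Matrix form of ordinary least squares: beta = (X^T X)^{-1} X^T Y.
X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.solve(X.T @ X, X.T @ Y)

a_hat, b_hat = np.exp(beta[0]), beta[1]
print(f"a = {a_hat:.3f}, b = {b_hat:.3f}")
```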


Author(s):  
Duncan C. Thomas

The biological effects of genes depend upon how they are expressed in target tissues at various points in time, which is determined by their epigenetic state and in turn may be influenced by the environment. Some experimental data suggest that such influences can be transmitted across generations. In this chapter, I propose a general statistical framework for modelling how environmental and germline genetic influences on disease are mediated by epigenetics, both within the individual and across generations. The approach is illustrated on simulated data and on a study of the effect of air pollution and the ARG/NOS family of genes on childhood respiratory disease.


1980 ◽  
Vol 34 (5) ◽  
pp. 539-548 ◽  
Author(s):  
David M. Haaland ◽  
Robert G. Easterling

Improved sensitivity and precision in the quantitative analysis of trace gases by Fourier transform infrared spectroscopy have been achieved by the application of new spectral least squares methods. By relating all of the spectral information present in the reference spectrum of a trace gas to that of the unknown sample and by appropriately fitting the baseline, detections of trace gases can be obtained even though the individual spectral features may lie well below the noise level. Four least squares methods incorporating different baseline assumptions were investigated and compared using calibrated gases of CO, N2O, and CO2 in dry air. These methods include: (I) baseline known, (II) baseline linear over the spectral region of interest, (III) baseline linear over each spectral peak, and (IV) negligible baseline shift between successive data points. Methods III and IV were found to be most reliable for the gases studied. When method III is applied to the spectra of these trace gases, detection limits improved by factors of 5 to 7 over conventional methods applied to the same data. “Three sigma” detection limits are equal to 0.6, 0.2, and 0.08 ppm for CO, N2O, and CO2, respectively, when a 10-cm pathlength at a total pressure of 640 Torr is used with a ∼35 min measurement time at 0.06 cm⁻¹ resolution.
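To show how such a fit can be set up, the sketch below implements the spirit of method II (baseline linear over the spectral region) for a single reference spectrum: the sample spectrum is regressed onto the reference plus a constant offset and a linear baseline term. The function name and single-component restriction are simplifications for illustration, not the authors' code.

```python
import numpy as np

def cls_fit_linear_baseline(wavenumber, sample, reference):
    """Least-squares fit of one reference spectrum to a sample spectrum,
    assuming the baseline is linear over the spectral region (method II).
    Design matrix columns: reference spectrum, constant offset, baseline slope."""
    X = np.column_stack([reference,
                         np.ones_like(wavenumber),
                         wavenumber])
    coef, *_ = np.linalg.lstsq(X, sample, rcond=None)
    concentration_ratio, offset, slope = coef
    return concentration_ratio, offset, slope
```

Because every point in the band contributes to the fit, the recovered concentration ratio can be meaningful even when no single spectral feature rises above the noise.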


Geophysics ◽  
1977 ◽  
Vol 42 (6) ◽  
pp. 1265-1276 ◽  
Author(s):  
Anthony F. Gangi ◽  
James N. Shapiro

An algorithm is described which iteratively solves for the coefficients of successively higher‐order, least‐squares polynomial fits in terms of the results for the previous, lower‐order polynomial fit. The technique takes advantage of the special properties of the least‐squares or Hankel matrix, for which a_{i,j} = a_{i-1,j+1}, i.e., the entries are constant along each antidiagonal. Only the first and last column vectors of the inverse matrix are needed at each stage to continue the iteration to the next higher stage. An analogous procedure may be used to determine the inverse of such least‐squares type matrices. The inverse of each square submatrix is determined from the inverse of the previous, lower‐order submatrix. The results using this algorithm are compared with the method of fitting orthogonal polynomials to data points. While the latter method gives higher accuracy when high‐order polynomials are fitted to the data, it requires many more computations. The increased accuracy of the orthogonal‐polynomial fit is valuable when high precision of fitting is required; however, for experimental data with inherent inaccuracies, the added computations outweigh the possible benefit derived from the more accurate fitting. A Fortran listing of the algorithm is given.
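The structure the algorithm exploits is visible when the normal-equations matrix of a polynomial fit is built explicitly: its entries depend only on i + j, so it is Hankel. The sketch below forms and solves that system directly; the paper's actual contribution, the order-by-order update using only the first and last columns of the inverse, is not reproduced here.

```python
import numpy as np
from scipy.linalg import hankel

def polyfit_via_hankel(x, y, order):
    """Polynomial least squares through the Hankel normal equations.
    The normal matrix has entries A[i, j] = sum(x**(i + j)), hence it is
    constant along antidiagonals: A[i, j] = A[i-1, j+1]."""
    moments = np.array([np.sum(x**k) for k in range(2 * order + 1)])
    A = hankel(moments[:order + 1], moments[order:])   # (order+1) x (order+1)
    rhs = np.array([np.sum(y * x**k) for k in range(order + 1)])
    return np.linalg.solve(A, rhs)   # coefficients of c0 + c1*x + c2*x**2 + ...
```

Only 2·order + 1 distinct numbers define the whole matrix, which is what makes an iterative, low-cost update from one order to the next possible.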


2021 ◽  
pp. 0272989X2110680
Author(s):  
Loukia M. Spineli

Background: The unrelated mean effects (UME) model has been proposed for evaluating the consistency assumption globally in a network of interventions. However, the UME model does not accommodate multiarm trials properly and omits comparisons between nonbaseline interventions in multiarm trials that are not investigated in 2-arm trials.
Methods: We proposed a refinement of the UME model that tackles the limitations mentioned above. We also accompanied the scatterplots of the posterior mean deviance contributions of the trial arms under the network meta-analysis (NMA) and UME models with Bland-Altman plots to detect outlying trials contributing to poor model fit. We applied the refined and original UME models to 2 networks with multiarm trials.
Results: The original UME model omitted more than 20% of the observed comparisons in both networks. A thorough inspection of the individual data points' deviance contributions using complementary plots, in conjunction with the measures of model fit and the estimated between-trial variance, indicated that the refined and original UME models revealed possible inconsistency in both examples.
Conclusions: The refined UME model allows proper accommodation of multiarm trials and visualization of all observed evidence in complex networks of interventions. Furthermore, considering several complementary plots to investigate deviance helps draw informed conclusions on the possibility of global inconsistency in the network.
Highlights: We have refined the unrelated mean effects (UME) model to incorporate multiarm trials properly and to estimate all observed comparisons in complex networks of interventions. Forest plots with posterior summaries of all observed comparisons under the network meta-analysis and refined UME models can uncover the consequences of potential inconsistency in the network. Using complementary plots to investigate the individual data points' deviance contributions, in conjunction with model fit measures and estimated heterogeneity, aids in detecting possible inconsistency.
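For readers unfamiliar with the complementary plot, a generic Bland-Altman construction over per-arm deviance contributions looks like the following sketch; the array names and their arm-by-arm alignment are assumptions, and this is not the author's code.

```python
import numpy as np
import matplotlib.pyplot as plt

def bland_altman(dev_nma, dev_ume):
    """Bland-Altman plot of posterior mean deviance contributions per trial arm
    under the NMA and UME models (inputs assumed arm-aligned)."""
    dev_nma, dev_ume = np.asarray(dev_nma), np.asarray(dev_ume)
    mean = (dev_nma + dev_ume) / 2.0      # agreement on the x-axis
    diff = dev_nma - dev_ume              # disagreement on the y-axis
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)         # 95% limits of agreement
    plt.scatter(mean, diff)
    plt.axhline(bias, linestyle="--")
    plt.axhline(bias + loa, linestyle=":")
    plt.axhline(bias - loa, linestyle=":")
    plt.xlabel("Mean deviance contribution")
    plt.ylabel("NMA - UME difference")
    plt.show()
```

Arms falling outside the limits of agreement are candidates for the outlying trials that contribute to poor model fit.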


RSC Advances ◽  
2014 ◽  
Vol 4 (94) ◽  
pp. 52379-52383 ◽  
Author(s):  
Kunihiro Ichimura ◽  
Shusaku Nagano

The fourth-order derivative spectra reveal the individual generation of photodichroism of non-aggregated and aggregated species of a liquid-crystalline azo-polymer.
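The abstract does not say how the derivative spectra were obtained; a common way to compute a fourth-order derivative spectrum from sampled data is Savitzky-Golay differentiation, sketched here on a placeholder absorption band.

```python
import numpy as np
from scipy.signal import savgol_filter

# Hypothetical absorbance spectrum on an evenly spaced wavelength grid.
wavelength = np.linspace(300.0, 600.0, 601)               # nm, 0.5 nm spacing
absorbance = np.exp(-((wavelength - 450.0) / 30.0) ** 2)  # placeholder band

# Fourth-order derivative spectrum via Savitzky-Golay smoothing-differentiation
# (window and polynomial order are illustrative; polyorder must exceed deriv).
d4 = savgol_filter(absorbance, window_length=25, polyorder=6,
                   deriv=4, delta=wavelength[1] - wavelength[0])
```

Higher-order derivatives sharpen overlapping bands, which is what lets the contributions of non-aggregated and aggregated species be separated.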


Author(s):  
E. M. Hudson

This paper describes a technique for conducting multiparameter experiments in a manner such that the number of data points investigated is reduced to a minimum. The method is based upon the observation that human responses to psychophysiological inputs are lawful rather than random, and hence can be predicted from mathematical equations. The procedure is to: (a) collect data on human responses at a few points in the experimental matrix; (b) fit these data with a low-order polynomial, using a computer program to evaluate the coefficients of the equation as a function of the collected data points; and (c) use the developed equation to predict the values that would be observed at other data points. If these computed values are close enough to the observed values at those points, the equation is assumed to be correct. If the values are not close enough, the new data are entered into the computer and a higher-order equation is fitted by the method of least squares. The procedure is iterative, and is continued until the residual error between computed and observed values for all points falls below some desired value. The importance of the technique is that in multiparameter experiments it can reduce the necessary number of observations by several orders of magnitude compared with conventional techniques.
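For a single independent variable, the iterative loop looks like the sketch below; the paper's setting is multiparameter, and the tolerance, maximum order, and use of numpy.polyfit are illustrative assumptions.

```python
import numpy as np

def iterative_fit(x, y, tol, max_order=6):
    """Fit polynomials of increasing order until the worst residual between
    computed and observed values falls below tol (single-variable version
    of the procedure sketched above)."""
    for order in range(1, max_order + 1):
        coeffs = np.polyfit(x, y, order)                   # least-squares fit
        residual = np.max(np.abs(y - np.polyval(coeffs, x)))
        if residual < tol:
            return coeffs, order, residual                 # equation accepted
    return coeffs, order, residual   # best effort at the maximum order
```

In practice, each iteration would also fold newly collected observations into x and y before refitting, which is where the experimental savings come from.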


2019 ◽  
Author(s):  
Christine Nothelfer ◽  
Steven Franconeri

The power of data visualization is not to convey absolute values of individual data points, but to allow the exploration of relations (increases or decreases in a data value) among them. One approach to highlighting these relations is to explicitly encode the numeric differences (deltas) between data values. Because this approach removes the context of the individual data values, it is important to measure how much of a performance improvement it actually offers, especially across differences in encodings and tasks, to ensure that it is worth adding to a visualization design. Across 3 different tasks, we measured the increase in visual processing efficiency for judging the relations between pairs of data values, from when only the values were shown, to when the deltas between the values were explicitly encoded, across position and length visual feature encodings (and slope encodings in Experiments 1 & 2). In Experiment 1, the participant’s task was to locate a pair of data values with a given relation (e.g., Find the ‘small bar to the left of a tall bar’ pair) among pairs of the opposite relation, and we measured processing efficiency from the increase in response times as the number of pairs increased. In Experiment 2, the task was to judge which of two relation types was more prevalent in a briefly presented display of 10 data pairs (e.g., Are there more ‘small bar to the left of a tall bar’ pairs or more ‘tall bar to the left of a small bar’ pairs?). In the final experiment, the task was to estimate the average delta within a briefly presented display of 6 data pairs (e.g., What is the average bar height difference across all ‘small bar to the left of a tall bar’ pairs?). Across all three experiments, visual processing of relations between pairs of data values was significantly better when the relations were directly encoded as deltas rather than left implicit between individual data points, and the improvement varied substantially with the task, ranging from 25% to 95%. Considering the ubiquity of bar charts and dot plots, relation perception across individual data values is highly inefficient, which confirms the need for alternative designs that provide not only absolute values but also direct encoding of critical relationships between those values.


2019 ◽  
Vol 11 (3) ◽  
pp. 396-409
Author(s):  
Abbas Khalaf MOHAMMAD ◽  
Nawras Shareef SABEEH

Adsorption and desorption kinetic curves for an equimolar hydrogen–methane mixture on molecular sieve type 5A were experimentally obtained over the pressure range 0.122–3.546 MPa. The linear driving force rate expression model was used to simulate the dynamics of adsorption and desorption in an adiabatic fixed-bed adsorber. The model takes into account the interference effects for non-linear isotherms and a non-isothermal system. The equations were solved by the backward finite difference method with a fixed gridding technique. The individual mass transfer parameters were obtained by matching the theoretical results with the experimental data and were found to be 8.510 s⁻¹ and 0.783 s⁻¹ for hydrogen and methane, respectively. The predicted effluent histories were shown to be in close agreement with the experimental data for the system. The lowest relative capacity of the bed for methane was approximately 95% of the predicted equilibrium capacity. The predicted temperature profiles tracked the experimental temperature data points, but with higher values. Furthermore, the maximum temperature rise, 44 K, was observed for the adsorption of methane onto the 5A molecular sieve at 35 atmospheres.
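The linear driving force model reduces intraparticle mass transfer to dq/dt = k(q* − q); for a constant equilibrium loading q* it integrates in closed form, as sketched below with the two reported coefficients. The constant q* and the omission of the isotherm and energy-balance coupling are simplifications for illustration.

```python
import numpy as np

def ldf_uptake(t, k, q_star, q0=0.0):
    """Linear driving force (LDF) rate expression dq/dt = k * (q_star - q),
    integrated analytically for a constant equilibrium loading q_star."""
    return q_star - (q_star - q0) * np.exp(-k * t)

# Uptake curves using the mass-transfer coefficients reported above
# (q_star = 1.0 is an arbitrary normalized equilibrium loading).
t = np.linspace(0.0, 10.0, 50)              # s
q_h2 = ldf_uptake(t, k=8.510, q_star=1.0)   # hydrogen, k in s^-1
q_ch4 = ldf_uptake(t, k=0.783, q_star=1.0)  # methane, k in s^-1
```

The order-of-magnitude gap between the two coefficients is what produces the much faster approach of hydrogen to its equilibrium loading.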

