THE EFFECT OF RANDOM ERRORS IN GRAVITY DATA ON SECOND DERIVATIVE VALUES

Geophysics ◽  
1952 ◽  
Vol 17 (1) ◽  
pp. 70-88 ◽  
Author(s):  
Thomas A. Elkins

Two random error grids were prepared using a set of 111 balls marked according to the Gaussian normal error law. For these grids, considered as grids of errors in gravity data, the second derivative values were computed and contoured. The resulting maps show strikingly the dangers in uncritical interpretations of second derivative maps based on insufficiently accurate data. Statistical checks were applied both to the random error grids and to the computed second derivative values. The check on the latter necessitated the development of a theory of the correlation between second derivative values which is also applicable to many other quantities, besides second derivatives, which are computed by coefficients.
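The amplification of random errors by a derivative operator can be illustrated with a minimal sketch. The abstract does not give Elkins's coefficient set, so the example below uses the simplest grid operator for the second vertical derivative, obtained from Laplace's equation; the grid size, spacing and error level are assumed purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
s = 1.0        # grid spacing (arbitrary units, assumed)
sigma = 0.1    # standard error of the gravity values (assumed)

# "Random error grid": pure Gaussian noise standing in for errors in gravity data.
g = rng.normal(0.0, sigma, size=(30, 30))

# Second vertical derivative by coefficients. For a harmonic field
# g_zz = -(g_xx + g_yy), whose simplest 5-point grid approximation is
# (4*g0 - sum of the four nearest neighbours) / s**2. Elkins's published
# coefficient sets use more rings, but any such operator is a fixed linear
# combination of grid values, which is the point here.
gzz = (4.0 * g[1:-1, 1:-1]
       - g[:-2, 1:-1] - g[2:, 1:-1]
       - g[1:-1, :-2] - g[1:-1, 2:]) / s**2

# Noise gain of the operator: sqrt(sum of squared coefficients) / s**2.
gain = np.sqrt(4.0**2 + 4 * 1.0**2) / s**2
print("std of input errors      :", g.std())
print("std of derivative errors :", gzz.std())
print("predicted std            :", sigma * gain)
```

Because each output value is a fixed linear combination of noisy grid values, neighbouring second derivative values share inputs and are therefore correlated, which is exactly the situation addressed by the correlation theory mentioned in the abstract.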

Geophysics ◽  
1953 ◽  
Vol 18 (3) ◽  
pp. 720-724
Author(s):  
G. Ramaswamy

In an interesting paper, “The Effect of Random Errors in Gravity Data on Second Derivative Values,” Thomas A. Elkins (1952) points out the need for eliminating the random component of gravity data before proceeding to interpret them with the aid of second derivative maps. There is a prima facie case for the elimination of random errors, if only to ensure reliability in the results. The need becomes more apparent, however, when it is remembered (a) that the second derivative method of interpreting gravity (or magnetic) data is one of high resolving power, and (b) that errors creeping into those data may therefore considerably vitiate their interpretation.


2017 ◽  
Vol 919 (1) ◽  
pp. 7-12
Author(s):  
N. A. Sorokin

The method of determining geopotential parameters from gradiometry data is considered. The second derivative of the gravitational potential with respect to the rectangular coordinates x, y, z is used as the measured quantity in the correction equation. The Cunningham polynomials are used to obtain the calculated value of the measured quantity needed to form the free term of the correction equation. Algorithms are given for computing the second derivatives of the Cunningham polynomials with respect to the rectangular coordinates x, y, z, which make it possible to calculate the second derivatives of the geopotential with respect to those coordinates. These derivatives are then transformed from the Cartesian coordinate system into the coordinate system of the gradiometer, which allows the free term of the correction equation to be calculated. The coefficients of the correction equation are obtained by differentiating the formula for the second derivative of the gravitational potential with respect to the rectangular coordinates x, y, z. The result is a coefficient matrix of the correction equations and a vector of free terms for each component of the geopotential tensor. Since the number of condition equations is much greater than the number of parameters to be determined, a system of normal equations is formed, and its solution yields the required corrections to the harmonic coefficients.
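The final least-squares step described above follows a standard pattern. The sketch below is a generic illustration with random placeholder data, not the author's software: the design matrix A stands in for the partial derivatives of the modelled gradient components with respect to the harmonic coefficients, and l for the observed-minus-computed free terms.

```python
import numpy as np

# Many correction (observation) equations A @ dx = l + v, far more equations
# than unknown harmonic-coefficient corrections dx, reduced to normal
# equations and solved. A and l are random placeholders here.
rng = np.random.default_rng(1)
n_obs, n_par = 500, 12
A = rng.normal(size=(n_obs, n_par))   # design matrix of the correction equations
l = rng.normal(size=n_obs)            # free terms (misclosures)

N = A.T @ A                           # normal-equation matrix
b = A.T @ l                           # right-hand side
dx = np.linalg.solve(N, b)            # corrections to the harmonic coefficients

# Equivalent, numerically safer route for comparison:
dx_lstsq, *_ = np.linalg.lstsq(A, l, rcond=None)
print(np.allclose(dx, dx_lstsq))
```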


1937 ◽  
Vol 33 (4) ◽  
pp. 444-450 ◽  
Author(s):  
Harold Jeffreys

1. It often happens that we have a series of observed data for different values of the argument and with known standard errors, and wish to remove the random errors as far as possible before interpolation. In many cases previous considerations suggest a form for the true value of the function; then the best method is to determine the adjustable parameters in this function by least squares. If the number required is not initially known, as for a polynomial where we do not know how many terms to retain, the number can be determined by finding out at what stage the introduction of a new parameter is not supported by the observations. In many other cases, again, existing theory does not suggest a form for the solution, but the observations themselves suggest one when the departures from some simple function are found to be much less than the whole range of variation and to be consistent with the standard errors. The same method can then be used. There are, however, further cases where no simple function is suggested either by previous theory or by the data themselves. Even in these the presence of errors in the data is expected. If ε is the actual error of any observed value and σ the standard error, the expectation of Σε²/σ² is equal to the number of observed values. Part, at least, of any irregularity in the data, such as is revealed by the divided differences, can therefore be attributed to random error, and we are entitled to try to reduce it.
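The criterion that the expectation of Σε²/σ² equals the number of observations suggests a simple numerical check. The sketch below uses made-up data from a quadratic "true" function, fits polynomials of increasing degree by weighted least squares, and watches the chi-square per degree of freedom; the stopping rule shown is a simplified stand-in for Jeffreys's own significance test.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 40)
sigma = 0.05 * np.ones_like(x)          # known standard errors (assumed)
y_true = 1.0 + 2.0 * x - 1.5 * x**2     # underlying smooth function (assumed quadratic)
y = y_true + rng.normal(0.0, sigma)     # observations with random errors

# The expectation of sum(eps**2 / sigma**2) over the raw errors equals the
# number of observations, so chi-square per degree of freedom near 1 signals
# that the remaining scatter is consistent with the quoted standard errors.
for degree in range(6):
    coeffs = np.polynomial.polynomial.polyfit(x, y, degree, w=1.0 / sigma)
    resid = y - np.polynomial.polynomial.polyval(x, coeffs)
    chi2 = np.sum((resid / sigma) ** 2)
    dof = len(x) - (degree + 1)
    print(f"degree {degree}: chi2/dof = {chi2 / dof:.2f}")
# Simplified stopping rule: stop adding terms once chi2/dof is close to 1 and
# a further term no longer reduces it appreciably.
```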


2002 ◽  
Vol 5 (6a) ◽  
pp. 969-976 ◽  
Author(s):  
Rudolf Kaaks ◽  
Pietro Ferrari ◽  
Antonio Ciampi ◽  
Martyn Plummer ◽  
Elio Riboli

Objective: To examine statistical models that account for correlation between random errors of different dietary assessment methods in dietary validation studies. Setting: In nutritional epidemiology, sub-studies on the accuracy of dietary questionnaire measurements are used to correct for biases in relative risk estimates induced by dietary assessment errors. Generally, such validation studies are based on the comparison of questionnaire measurements (Q) with food consumption records or 24-hour diet recalls (R). In recent years, the statistical analysis of such studies has been formalised more in terms of statistical models, which has made the crucial model assumptions more explicit. One key assumption is that random errors must be uncorrelated between measurements Q and R, as well as between replicate measurements R1 and R2 within the same individual. These assumptions may not hold in practice, however. Therefore, more complex statistical models have been proposed to validate measurements Q by simultaneous comparisons with measurements R plus a biomarker M, accounting for correlations between the random errors of Q and R. Conclusions: The more complex models accounting for random error correlations may work only for validation studies that include markers of diet based on physiological knowledge of quantitative recovery, e.g. in urine, of specific elements such as nitrogen or potassium, or of stable isotopes administered to the study subjects (e.g. the doubly labelled water method for assessment of energy expenditure). This type of marker, however, eliminates the problem of correlated random errors between Q and R by simply taking the place of R, thus rendering the complex statistical models unnecessary.
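A small simulation can make the key assumption concrete. The sketch below is not one of the models discussed in the paper: it generates a questionnaire Q and replicate reference measurements R whose random errors share a common component, plus replicate biomarker measurements M with independent errors; all variances are assumed for illustration. The usual validity estimate based on R is inflated, while the one based on M is not.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
T = rng.normal(size=n)                      # true intake (standardised)
b = 0.6 * rng.normal(size=n)                # error component shared by Q and R

Q  = T + b + 0.8 * rng.normal(size=n)       # questionnaire
R1 = T + b + 0.8 * rng.normal(size=n)       # replicate reference measurements,
R2 = T + b + 0.8 * rng.normal(size=n)       # errors correlated with Q's
M1 = T + rng.normal(size=n)                 # replicate biomarker measurements,
M2 = T + rng.normal(size=n)                 # errors independent of Q's

r = lambda a, c: np.corrcoef(a, c)[0, 1]
# Classical-model validity estimate: corr(Q, R) / sqrt(corr(R1, R2)).
print("true validity corr(Q,T) :", round(r(Q, T), 3))                       # ~0.71
print("estimate via reference R:", round(r(Q, R1) / np.sqrt(r(R1, R2)), 3)) # ~0.82, inflated
print("estimate via biomarker M:", round(r(Q, M1) / np.sqrt(r(M1, M2)), 3)) # ~0.71
```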


1995 ◽  
Vol 28 (5) ◽  
pp. 590-593 ◽  
Author(s):  
R. A. Winholtz

Two corrections are made to the equations for estimating the counting statistical errors in diffraction stress measurements. It is shown that the previous equations provide a conservative estimate of the counting-statistical component of the random errors in stress measurements. The results from the corrected equations are compared to a Monte Carlo model and to replicated measurements. A procedure to handle other sources of random error is also suggested.
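The style of check described, an analytic error formula compared against a Monte Carlo model, can be illustrated with a much simpler quantity than a diffraction stress. The sketch below propagates Poisson counting errors through a net-intensity calculation; the count levels are assumed.

```python
import numpy as np

rng = np.random.default_rng(4)
mu_peak, mu_bkg = 10_000.0, 4_000.0   # assumed peak and background counts

# Analytic estimate: counts are Poisson, so var(N) = N and
# sigma_net = sqrt(mu_peak + mu_bkg) for the background-subtracted intensity.
sigma_analytic = np.sqrt(mu_peak + mu_bkg)

# Monte Carlo model: simulate many measurements and take the sample std.
n_trials = 100_000
net = rng.poisson(mu_peak, n_trials) - rng.poisson(mu_bkg, n_trials)
print("analytic sigma   :", sigma_analytic)
print("Monte Carlo sigma:", net.std(ddof=1))
```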


Geophysics ◽  
1953 ◽  
Vol 18 (4) ◽  
pp. 907-909 ◽  
Author(s):  
L. J. Peters ◽  
T. A. Elkins

We would like to call attention to a point of considerable practical importance which is neglected in this interesting and ingenious paper on the computation of the second derivative. Gravity field data inevitably contain errors, so the second derivative values computed from them by coefficients will also contain errors, and these may be of such magnitude as to mask the real effects of the geologic structure whose discovery was the purpose of the gravity survey.


2006 ◽  
Vol 36 (10) ◽  
pp. 2515-2522 ◽  
Author(s):  
Michael Newton ◽  
Elizabeth C Cole

Deceleration of growth rates can give an indication of competition and the need for thinning in early years but can be difficult to detect. We computed the first and second derivatives of the von Bertalanffy – Richards equation to assess impacts of density and vegetation control in young plantations in western Oregon. The first derivative describes the response in growth and the second derivative describes the change in growth over time. Three sets of density experiments were used: (i) pure Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco), (ii) mixed Douglas-fir and grand fir (Abies grandis (Dougl. ex D. Don) Lindl.), and (iii) mixed western hemlock (Tsuga heterophylla (Raf.) Sarg.) and red alder (Alnus rubra Bong.). Original planting densities ranged from 475 to 85 470 trees·ha⁻¹ (4.6 m × 4.6 m to 0.34 m × 0.34 m spacing); the western hemlock and red alder plots included both weeded and unweeded treatments. For the highest densities, the second derivative was rarely above zero for any of the time periods, indicating that the planting densities were too high for tree growth to enter an exponential phase. As expected, the lower the density, the greater and later the peak in growth for both the first and second derivatives. Weeding increased the growth peaks, and peaks were reached earlier in weeded than in unweeded plots. Calculations of this sort may help modelers identify when modifiers for competition and density are needed in growth equations. Specific applications include defining the onset of competition, precisely determining the timing of peak growth, identifying the period of growth acceleration, and characterising the interaction of spacing and age in determining peaks of increment, acceleration, or deceleration.
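As a rough illustration of the derivative calculations, the sketch below differentiates one common parameterisation of the von Bertalanffy – Richards curve, Y(t) = A(1 − e^(−kt))^c, and locates the ages of peak growth rate and peak acceleration; the parameter values are invented, not taken from the study.

```python
import numpy as np
import sympy as sp

# One common von Bertalanffy-Richards parameterisation (assumed, illustrative).
t, A, k, c = sp.symbols("t A k c", positive=True)
Y = A * (1 - sp.exp(-k * t)) ** c

dY  = sp.diff(Y, t)        # growth rate (first derivative)
d2Y = sp.diff(Y, t, 2)     # change in growth rate (second derivative)

# Illustrative parameter values, not fitted to the study's data.
f1 = sp.lambdify(t, dY.subs({A: 30.0, k: 0.15, c: 4.0}), "numpy")
f2 = sp.lambdify(t, d2Y.subs({A: 30.0, k: 0.15, c: 4.0}), "numpy")

ts = np.linspace(0.1, 60.0, 2000)
print("age of peak growth rate        :", ts[np.argmax(f1(ts))])
print("age of peak growth acceleration:", ts[np.argmax(f2(ts))])
# A second derivative that never rises above zero would indicate, as in the
# densest plantings above, that growth never enters an accelerating phase.
```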


2011 ◽  
Vol 11 (4) ◽  
pp. 239-248 ◽  
Author(s):  
Saikat Das ◽  
Subhashini John ◽  
Paul Ravindran ◽  
Rajesh Isiah ◽  
Rajesh B ◽  
...  

Context: Setup error significantly affects the accuracy of treatment and outcome in high-precision radiotherapy. Aims: To determine the total, systematic and random errors and the clinical target volume (CTV) to planning target volume (PTV) margins with alpha cradle (VL) and ray cast (RC) immobilisation in the abdominopelvic region. Methods and material: Setup error was assessed by comparing the digitally reconstructed radiograph (DRR), used as the reference image, with electronic portal images (EPI) taken during treatment. Statistical analysis used: The total errors in the mediolateral (ML), craniocaudal (CC) and anteroposterior (AP) directions were compared by t-test; for the systematic and random errors the variance ratio test (F-statistics) was used. Margins were calculated using the International Commission on Radiation Units and Measurements (ICRU), Stroom's and van Herk's formulae. Results: A total of 306 portal images were analysed, 144 in the RC group and 162 in the VL group. For VL, the systematic errors in the ML, CC and AP directions were (0.45, 0.29, 0.41) cm, the random errors (0.48, 0.32, 0.58) cm, and the CTV-to-PTV margins (1.24, 0.80, 1.25) cm, respectively. For RC, the systematic errors were (0.25, 0.37, 0.80) cm, the random errors (0.46, 0.80, 0.33) cm, and the CTV-to-PTV margins (0.82, 1.30, 1.08) cm, respectively. The differences in random error in the CC and AP directions were statistically significant. Conclusions: Geometric errors and CTV-to-PTV margins differ between directions. For the abdomen and pelvis, the margin ranged from 8 mm to 12.4 mm with VL immobilisation and from 8.2 mm to 13 mm with RC. Therefore, a margin of 10 mm with online correction would be adequate.
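The margin recipes cited above are commonly written as 2Σ + 0.7σ (Stroom) and 2.5Σ + 0.7σ (van Herk), with Σ the systematic and σ the random error. The sketch below applies both to the VL errors quoted in the abstract; it is an illustration only, since the paper's own margins may involve additional terms or rounding.

```python
# Systematic (Sigma) and random (sigma) setup errors for VL immobilisation,
# taken from the abstract above, in cm.
systematic = {"ML": 0.45, "CC": 0.29, "AP": 0.41}
random_err = {"ML": 0.48, "CC": 0.32, "AP": 0.58}

for axis in ("ML", "CC", "AP"):
    S, s = systematic[axis], random_err[axis]
    stroom   = 2.0 * S + 0.7 * s   # Stroom's margin recipe
    van_herk = 2.5 * S + 0.7 * s   # van Herk's margin recipe
    print(f"{axis}: Stroom = {stroom:.2f} cm, van Herk = {van_herk:.2f} cm")
# The Stroom values (1.24, 0.80, 1.23 cm) are close to the CTV-to-PTV margins
# quoted above for the VL setup.
```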


Geophysics ◽  
1993 ◽  
Vol 58 (12) ◽  
pp. 1779-1784 ◽  
Author(s):  
El‐Sayed M. Abdelrahman ◽  
Tarek M. El‐Araby

We have developed a least-squares minimization method to estimate the depth of a buried structure from moving average residual gravity anomalies. The method involves fitting simple models convolved with the same moving average filter as applied to the observed gravity data. As a result, it can be applied not only to residuals but also to Bouguer gravity data along a short profile. The method is applied to synthetic data with and without random errors, and its validity is tested in detail on two field examples from the United States and Senegal.
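The central idea, filtering the candidate model with the same moving-average operator as the data before the least-squares comparison, can be sketched as below. This is not the paper's exact formulation: it uses a point-mass (sphere) anomaly, a simple running-mean residual, and a grid search over depth with the amplitude fitted linearly; the profile geometry, noise level and filter width are all assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.arange(-50.0, 50.0, 1.0)                   # profile coordinate (assumed units)
true_depth, true_amp = 8.0, 100.0                 # synthetic source parameters

point_mass = lambda z: z / (x**2 + z**2) ** 1.5   # shape of a point-mass (sphere) anomaly

g_obs = true_amp * point_mass(true_depth) + rng.normal(0.0, 0.002, x.size)

def moving_average_residual(g, w=11):
    """Observed values minus a centred running mean of width w."""
    kernel = np.ones(w) / w
    return g - np.convolve(g, kernel, mode="same")

r_obs = moving_average_residual(g_obs)

best = None
for z in np.arange(2.0, 20.0, 0.1):               # grid search over trial depths
    r_mod = moving_average_residual(point_mass(z))  # model filtered the same way as the data
    amp = (r_mod @ r_obs) / (r_mod @ r_mod)         # best-fitting amplitude for this depth
    misfit = np.sum((r_obs - amp * r_mod) ** 2)
    if best is None or misfit < best[0]:
        best = (misfit, z, amp)

print("estimated depth:", round(best[1], 1))      # should be close to 8.0
```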

