Estimating the distribution of a variable measured with error: stand densities in a forest inventory

1991 ◽  
Vol 21 (4) ◽  
pp. 469-473 ◽  
Author(s):  
Juha Lappi

If a variable is measured (or estimated) with error, the distribution of the measurements is flatter than the true distribution: the variance of a measured variable is the sum of the true variance and the measurement error variance. If we shrink the measured values towards their mean so that their variance equals the true population variance, or its estimate, the resulting empirical distribution is more similar to the true distribution than is the distribution of the raw measured values. Estimating the population variance requires an estimate of the variance of the measurement errors. If stand densities are measured by counting trees on fixed-area or angle-gauge plots, a first approximation for the measurement (sampling) error variance can be computed by assuming a random (Poisson) spatial pattern of trees. The suggested estimation method is illustrated using an assumed distribution of stand densities.
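The whole procedure fits in a few lines: estimate the sampling-error variance from the Poisson assumption, subtract it from the total variance, and rescale deviations from the mean. A minimal sketch (not from the paper; the plot size and the gamma distribution of true densities are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setting: true stand densities (trees/ha) and fixed-area plots.
plot_area_ha = 0.05                            # assumed plot size (500 m^2)
true_density = rng.gamma(shape=8.0, scale=150.0, size=400)

# Measured density = Poisson tree count on the plot, scaled to trees/ha.
counts = rng.poisson(true_density * plot_area_ha)
measured = counts / plot_area_ha

# Poisson assumption: Var(count) = E(count), so the average sampling-error
# variance on the density scale is mean(count) / area^2 = mean(density) / area.
error_var = measured.mean() / plot_area_ha

# True population variance = total variance minus sampling-error variance.
total_var = measured.var(ddof=1)
true_var = max(total_var - error_var, 0.0)

# Shrink measured values toward their mean to match the true variance.
adjusted = measured.mean() + np.sqrt(true_var / total_var) * (measured - measured.mean())

print(f"total var: {total_var:.0f}  error var: {error_var:.0f}")
print(f"adjusted var: {adjusted.var(ddof=1):.0f}  true var: {true_density.var(ddof=1):.0f}")
```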

Author(s):  
Dragan Popović ◽  
Miloš Popović ◽  
Evagelia Boli ◽  
Hamidović Mensur ◽ 
Marina Jovanović

Due to its simplicity and the explicit algebraic and geometric meaning of the latent dimensions and of the identification structures associated with those dimensions, the reliability of latent dimensions obtained by orthoblique transformation of principal components can be determined in a clear and unambiguous manner. Let $G = (g_{ij})$, $i = 1, \ldots, n$; $j = 1, \ldots, m$, be the (unknown) matrix of measurement errors in the description of a set of entities $E$ on a set of variables $V$. If $Z$ is the matrix of observed standardized results of the entities from $E$ on the variables from $V$, the matrix of true results is $Y = Z - G$. Assume, in accordance with the classical theory of measurement (Gulliksen, 1950; Lord & Novick, 1968; Pfanzagl, 1968), that $G$ satisfies $Y^tG = 0$ and $G^tGn^{-1} = E^2 = (e_{jj}^2)$, where $E^2$ is a diagonal matrix. The covariance matrix of the true results is then $H = Y^tYn^{-1} = R - E^2$, where $R = Z^tZn^{-1}$ is the intercorrelation matrix of the variables from $V$ defined on $E$. Suppose the reliability coefficients of the variables from $V$ are known, and let $P$ be the diagonal matrix whose elements $\rho_j$ are these reliability coefficients. The variances of the measurement errors of the standardized results on the variables from $V$ are then the elements of the matrix $E^2 = I - P$. The true values on the latent dimensions are the elements of the matrix $\Lambda = (Z - G)Q$, where $Q$ is the matrix of the orthoblique transformation, with covariance matrix $\Gamma = \Lambda^t\Lambda n^{-1} = Q^tHQ = Q^tRQ - Q^tE^2Q = (\gamma_{pq})$. The true variances of the latent dimensions are therefore the diagonal elements of $\Gamma$; denote these elements by $\lambda_p^2$. From the formal definition of the reliability coefficient of a variable, $\rho = \sigma_t^2 / \sigma^2$, where $\sigma_t^2$ is the true variance of the variable and $\sigma^2$ is the total variance, that is, the variance that also includes the error variance, the reliability coefficients of the latent dimensions, if the reliability coefficients of the variables from which these dimensions have been derived are known, are $\rho_p = \lambda_p^2 / s_p^2 = 1 - (q_p^tE^2q_p)(q_p^tRq_p)^{-1}$, $p = 1, \ldots, k$.
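Read literally, the final formula needs only $R$, the diagonal reliabilities, and the transformation vectors $q_p$. A minimal numerical sketch (not from the paper; the correlation matrix, the reliabilities, and the use of unrotated principal components in place of an orthoblique solution are all illustrative assumptions):

```python
import numpy as np

# Illustrative 4-variable intercorrelation matrix R and known reliabilities.
R = np.array([[1.0, 0.6, 0.5, 0.4],
              [0.6, 1.0, 0.5, 0.4],
              [0.5, 0.5, 1.0, 0.3],
              [0.4, 0.4, 0.3, 1.0]])
rho = np.array([0.85, 0.80, 0.75, 0.90])   # assumed variable reliabilities
E2 = np.diag(1.0 - rho)                    # error variances: E^2 = I - P

# Stand-in for the orthoblique transformation: eigenvectors of R as the
# columns q_p. A real application would insert the orthoblique solution.
eigvals, Q = np.linalg.eigh(R)
Q = Q[:, ::-1]                             # descending eigenvalue order

# Reliability of each latent dimension:
# rho_p = 1 - (q_p' E^2 q_p) / (q_p' R q_p)
for p in range(Q.shape[1]):
    q = Q[:, p]
    rel = 1.0 - (q @ E2 @ q) / (q @ R @ q)
    print(f"dimension {p + 1}: reliability = {rel:.3f}")
```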


2020 ◽  
Vol 29 (9) ◽  
pp. 2411-2444
Author(s):  
Anna R S Marinho ◽  
Rosangela H Loschi

Cure fraction models have been widely used to model time-to-event data when a fraction of the individuals survives long-term after the disease and is considered cured. Most cure fraction models neglect the measurement error that some covariates may be subject to, which leads to poor estimates of the cure fraction. We introduce a Bayesian promotion time cure model that accounts for both mismeasured covariates and atypical measurement errors. This is attained by assuming a scale mixture of the normal distribution to describe the uncertainty about the measurement error. Extending previous works, we also assume that the measurement error variance is unknown and should be estimated. Three classes of prior distributions are assumed to model the uncertainty about the measurement error variance. Simulation studies evaluate the proposed model in different scenarios and compare it to the standard promotion time cure fraction model, showing that the proposed models are competitive. The proposed model is fitted to a dataset from a melanoma clinical trial, assuming that the Breslow depth is mismeasured.
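The promotion time structure makes the sensitivity to covariate error concrete: the cure fraction is $\exp(-\theta(x))$ with $\theta(x) = \exp(\beta_0 + \beta_1 x)$, so noise in $x$ passes through two exponentials. A toy simulation (not the authors' Bayesian model; the coefficients, the normal error, and its variance are assumptions) illustrates the distortion a naive plug-in of mismeasured covariates produces:

```python
import numpy as np

rng = np.random.default_rng(1)
b0, b1 = -0.5, 1.0                         # assumed promotion-time coefficients
n = 100_000

x = rng.normal(size=n)                     # true (standardized) covariate
x_err = x + rng.normal(scale=0.7, size=n)  # mismeasured version

# Promotion time cure model: cure fraction = exp(-theta), theta = exp(b0 + b1*x).
cure_true = np.exp(-np.exp(b0 + b1 * x))
cure_naive = np.exp(-np.exp(b0 + b1 * x_err))

print(f"mean cure fraction, true covariate:        {cure_true.mean():.3f}")
print(f"mean cure fraction, mismeasured covariate: {cure_naive.mean():.3f}")
```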


1981 ◽  
Vol 38 (6) ◽  
pp. 711-720 ◽  
Author(s):  
Donald Ludwig ◽  
Carl J. Walters

A procedure is developed for estimating stock and recruitment parameters in the presence of measurement errors. It requires an independent assessment of the ratio of environmental and measurement error variances, and provides maximum likelihood estimates of the time series of errors as well as the average stock–recruit parameters. Measures of parameter uncertainty are also provided and are incorporated into an analysis of optimum spawning stocks. This analysis indicates that much higher, or at least more variable, spawning runs should be allowed in many Pacific salmon stocks. An immediate need in salmon management is to obtain estimates of the measurement error variance, so that recent historical data can be made more useful. Key words: statistics, stock recruitment, optimum spawning, uncertainty
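The required variance-ratio input is the same one classical errors-in-variables regression uses. The sketch below applies Deming regression to a linearized Ricker model, $\ln(R/S) = a - bS$, with an assumed known ratio of process to measurement error variance; this illustrates the ingredient rather than reproducing the authors' full maximum likelihood procedure, and all numbers are invented:

```python
import numpy as np

def deming_slope(x, y, lam):
    """Errors-in-variables slope when Var(y error) / Var(x error) = lam."""
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    d = syy - lam * sxx
    return (d + np.sqrt(d * d + 4.0 * lam * sxy * sxy)) / (2.0 * sxy)

rng = np.random.default_rng(2)
n, a, b = 40, 2.0, 0.002                       # assumed Ricker parameters

S_true = rng.uniform(200, 1200, size=n)        # true spawning stock
y_true = a - b * S_true                        # ln(R/S) without noise
S_obs = S_true + rng.normal(scale=80, size=n)  # measurement error in stock
y_obs = y_true + rng.normal(scale=0.3, size=n) # environmental (process) noise

lam = 0.3**2 / 80**2                           # assumed known variance ratio
b_eiv = -deming_slope(S_obs, y_obs, lam)
b_ols = -np.polyfit(S_obs, y_obs, 1)[0]
print(f"true b: {b:.4f}  OLS: {b_ols:.4f}  errors-in-variables: {b_eiv:.4f}")
```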


2000 ◽  
Vol 86 (1) ◽  
pp. 321-332 ◽  
Author(s):  
Laura T. Flannelly ◽  
Kevin J. Flannelly ◽  
Malcolm S. McLeod

Three surveys compared the accuracy of predictions based on forced-choice and subjective probability scales. The latter produced significantly more accurate election predictions and significantly reduced the percentage of undecided, or “Don't Know” responses, compared to forced-choice scales in all three surveys. Analysis indicates subjective probability scales decrease sampling error and confirms there is an inherent source of error in traditional forced-choice questions about voting intentions not attributable to sampling error. The results are discussed with respect to (1) sampling and measurement errors in forced-choice and subjective probability scales measuring behavioral intentions, (2) their practical application, and (3) cognitive theory, especially support theory.


1986 ◽  
Vol 67 (2) ◽  
pp. 177-185 ◽  
Author(s):  
Lauren L. Morone

Data collected from aircraft equipped with AIDS (Aircraft Integrated Data System) instrumentation during the Global Weather Experiment year of 1979 are used to estimate the observational error of winds at flight level from this and other aircraft automated wind-reporting systems. Structure functions are computed from reports that are paired using specific criteria. The value of this function extrapolated to zero separation distance is an estimate of twice the random measurement-error variance of the AIDS-measured winds. Component-wind errors computed in this way range from 2.1 to 3.1 m s$^{-1}$ for the two months of data examined, January and August 1979. Observational error, specified in optimum-interpolation analyses to allow the analysis to distinguish among observations of differing quality, is composed of both measurement error and the error of unrepresentativeness. The latter type of error is a function of the resolvable scale of the analysis-prediction system. The structure function, which measures the variability of a field as a function of separation distance, includes both of these types of error. If the resolvable scale of an analysis procedure is known, an estimate of the observational error can be computed from the structure function at that particular distance. An observational error of 5.3 m s$^{-1}$ was computed for the u and v wind components for a sample resolvable scale of 300 km. The errors computed from the structure functions are compared to colocation statistics from radiosondes. The errors associated with automated wind reports are found to compare favorably with those estimated for radiosonde winds at that level.
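The estimation step is mechanical once pairs are formed: bin the squared component differences by separation distance and extrapolate the binned means to zero separation, where the intercept equals twice the measurement-error variance. A schematic version on synthetic pairs (the exponential form assumed for the true structure function and all numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic paired reports: the true structure function D(r) grows with
# separation r, and each report carries independent measurement error.
n_pairs = 20_000
sigma_err = 2.5                               # assumed error std (m/s)
r = rng.uniform(10, 500, size=n_pairs)        # separations (km)
d_true = 30.0 * (1.0 - np.exp(-r / 300.0))    # assumed true D(r), (m/s)^2

# Expected squared component difference: D_true(r) + 2 * error variance.
sq_diff = rng.normal(scale=np.sqrt(d_true + 2 * sigma_err**2), size=n_pairs) ** 2

# Bin by distance and extrapolate binned means to r = 0 with a quadratic.
bins = np.linspace(10, 500, 25)
idx = np.digitize(r, bins)
r_mid = np.array([r[idx == i].mean() for i in range(1, len(bins))])
d_mean = np.array([sq_diff[idx == i].mean() for i in range(1, len(bins))])

intercept = np.polyval(np.polyfit(r_mid, d_mean, 2), 0.0)  # ~ 2 * error var
print(f"estimated error std: {np.sqrt(intercept / 2):.2f} m/s  (true {sigma_err})")
```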


2021 ◽  
Author(s):  
Hongmei Xu ◽  
Juan Liu ◽  
Kun Wang ◽  
Songtao Kong ◽  
Yong Shi

Abstract A hybrid fuzzy inference-quantum particle swarm optimization (FI-QPSO) algorithm is developed to estimate the temperature-dependent thermal properties of grain. A fuzzy inference scheme is established to determine the contraction-expansion coefficient according to the aggregation degree of the particles. The heat transfer process in the grain bulk is solved using the finite element method (FEM), and the estimation task is formulated as an inverse problem. Numerical experiments are performed to study the effects of the surface heat flux, the number of measurement points, measurement errors, and the individual space on the estimation results. Comparisons with the quantum particle swarm optimization (QPSO) algorithm and the conjugate gradient method (CGM) are also conducted and demonstrate the validity of the estimation method established in this paper.
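The QPSO update that the contraction-expansion coefficient $\beta$ controls is compact enough to sketch. Below is a bare-bones QPSO on a toy objective (not the authors' FEM-coupled inverse problem); the fuzzy inference is replaced by a simple linear decay of $\beta$, which is a common baseline, and all parameters are assumptions:

```python
import numpy as np

def qpso(objective, dim, n_particles=30, iters=200, lo=-5.0, hi=5.0, seed=4):
    """Quantum-behaved PSO; beta is the contraction-expansion coefficient."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in pbest])
    gbest = pbest[np.argmin(pbest_val)].copy()

    for t in range(iters):
        beta = 1.0 - 0.5 * t / iters          # linear decay 1.0 -> 0.5
        mbest = pbest.mean(axis=0)            # mean best position
        phi = rng.random((n_particles, dim))
        u = rng.random((n_particles, dim))
        sign = np.where(rng.random((n_particles, dim)) < 0.5, -1.0, 1.0)
        attractor = phi * pbest + (1.0 - phi) * gbest
        x = attractor + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
        x = np.clip(x, lo, hi)
        vals = np.array([objective(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Toy stand-in for the inverse heat-conduction objective: sphere function.
best, val = qpso(lambda p: float(np.sum(p**2)), dim=5)
print(f"best value {val:.2e} at {np.round(best, 4)}")
```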


2019 ◽  
Vol 9 (4) ◽  
pp. 813-850 ◽  
Author(s):  
Jay Mardia ◽  
Jiantao Jiao ◽  
Ervin Tánczos ◽  
Robert D Nowak ◽  
Tsachy Weissman

Abstract We study concentration inequalities for the Kullback–Leibler (KL) divergence between the empirical distribution and the true distribution. Applying a recursion technique, we improve over the method of types bound uniformly in all regimes of sample size $n$ and alphabet size $k$, and the improvement becomes more significant when $k$ is large. We discuss the applications of our results in obtaining tighter concentration inequalities for $L_1$ deviations of the empirical distribution from the true distribution, and the difference between concentration around the expectation and concentration around zero. We also obtain asymptotically tight bounds on the variance of the KL divergence between the empirical and true distributions, and demonstrate that its behaviour differs quantitatively depending on whether the sample size is small or large relative to the alphabet size.
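The baseline being improved is easy to state and probe numerically: the method of types yields $\Pr[D(\hat{P}_n \| P) \geq \varepsilon] \leq (n+1)^k e^{-n\varepsilon}$. A quick Monte Carlo check (illustrating how loose this baseline is for moderate $k$, not the paper's sharpened bound; the uniform $P$ and all constants are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(5)
k, n, eps, trials = 4, 50, 0.1, 200_000
P = np.full(k, 1.0 / k)                  # uniform true distribution (assumed)

# Empirical distributions from n i.i.d. draws, repeated over many trials.
counts = rng.multinomial(n, P, size=trials)
P_hat = counts / n

# KL divergence D(P_hat || P), with 0 log 0 = 0.
with np.errstate(divide="ignore", invalid="ignore"):
    terms = np.where(P_hat > 0, P_hat * np.log(P_hat / P), 0.0)
kl = terms.sum(axis=1)

empirical_tail = (kl >= eps).mean()
types_bound = (n + 1) ** k * np.exp(-n * eps)
print(f"empirical tail: {empirical_tail:.4f}")
print(f"method-of-types bound: {types_bound:.1f}  (vacuous here)")
```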


2020 ◽  
Vol 148 (3) ◽  
pp. 877-890 ◽  
Author(s):  
Christopher A. Kerr ◽  
Xuguang Wang

Abstract The potential future installation of a multifunction phased-array radar (MPAR) network will provide capabilities for case-specific adaptive scanning. Knowing the impacts adaptive scanning may have on short-term forecasts will inform scanning-strategy decision-making, in hopes of producing the best possible ensemble forecast while also benefiting human severe weather warning decisions. An ensemble-based targeted observation algorithm is applied to an observing system simulation experiment (OSSE) in which the impacts of synthetic idealized supercell radial velocity observations are estimated before the observations are “collected” and assimilated. The forecast metric of interest is the low-level rotation forecast metric (0–1-km updraft helicity), a surrogate for tornado prediction. It is found that the ensemble-based targeted observation approach can reasonably estimate the true error variance reduction when an effective method that treats sampling error is applied, the forecast period involves a lower degree of nonlinearity, and the observation information content relative to the background forecast is larger. In some scenarios, assimilating a subset of a full-volume scan produces better forecasts than assimilating all observations within the full volume. Assimilating the full volume increases the number of potential spurious correlations between the forecast metric and the state perturbations induced by the radial velocity observations, which may degrade forecast-metric accuracy.
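At the heart of such ensemble-based targeting is a scalar Kalman filter identity: assimilating a single observation with error variance $\sigma_o^2$ reduces the forecast-metric variance by $\mathrm{cov}(J, h)^2 / (\mathrm{var}(h) + \sigma_o^2)$, where $h$ is the ensemble estimate of the observation. A schematic with synthetic ensemble output (the data, the correlation-threshold treatment of sampling error, and all numbers are assumptions, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(6)
n_ens, n_obs = 50, 300
sigma_o = 2.0                                  # assumed obs error std (m/s)

# Synthetic prior ensemble: radial-velocity estimates h and a forecast
# metric J (e.g., 0-1-km updraft helicity) driven by a few observations.
h = rng.normal(size=(n_ens, n_obs))
weights = np.zeros(n_obs)
weights[:5] = rng.uniform(1.5, 2.5, size=5)    # only 5 obs truly matter
J = h @ weights + rng.normal(size=n_ens)

Jp = J - J.mean()
hp = h - h.mean(axis=0)
cov_Jh = Jp @ hp / (n_ens - 1)
var_h = hp.var(axis=0, ddof=1)

# Kalman identity: metric-variance reduction from assimilating each
# observation on its own.
reduction = cov_Jh**2 / (var_h + sigma_o**2)

# Crude stand-in for a sampling-error treatment: zero out observations
# whose correlation with J is indistinguishable from zero.
corr = cov_Jh / np.sqrt(var_h * Jp.var(ddof=1))
reduction[np.abs(corr) < 2.0 / np.sqrt(n_ens)] = 0.0

top = np.argsort(reduction)[::-1][:5]
print("top target obs:", top, "(truth: 0-4)")
print("estimated variance reductions:", np.round(reduction[top], 3))
```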


1987 ◽  
Vol 62 (5) ◽  
pp. 2083-2093 ◽  
Author(s):  
H. H. Stratton ◽  
P. J. Feustel ◽  
J. C. Newell

To test hypotheses regarding relations between meaningful parameters, it is often necessary to calculate these parameters from other directly measured variables. For example, the relationship between O2 consumption and O2 delivery may be of interest, although both may be computed from measurements of cardiac output and blood O2 contents. If a measured variable is used in the calculation of two derived parameters, error in the measurement will couple the calculated parameters and introduce a bias, which can lead to incorrect conclusions. This paper presents a method of correcting for this bias in the linear regression coefficient and the Pearson correlation coefficient when the calculations involve nonlinear and linear combinations of the measured variables. The general solution is obtained when the first two terms of a Taylor series expansion of the function can be used to represent the function, as in the case of multiplication. A significance test for the hypothesis that the regression coefficient is equal to zero is also presented. Physiological examples are provided demonstrating this technique, and the correction methods are also applied in simulations to verify the adequacy of the technique and to test for the magnitude of the coupling effect. In two previous studies of O2 consumption and delivery, the effect of coupled error is shown to be small when the range of O2 deliveries studied is large and measurement errors are of reasonable size.
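The coupling artifact the correction targets is easy to reproduce: when cardiac output enters both derived quantities, its measurement error alone inflates the correlation between them. A toy simulation (physiological values and error sizes are invented; the paper's Taylor series correction itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500

# True values with no built-in relation between consumption and delivery
# beyond the shared cardiac output.
co = rng.normal(5.0, 0.2, size=n)            # cardiac output (L/min)
cao2 = rng.normal(0.20, 0.01, size=n)        # arterial O2 content (L/L)
cvo2 = rng.normal(0.15, 0.01, size=n)        # venous O2 content (L/L)

# Measurement error in cardiac output couples the two derived parameters.
co_meas = co + rng.normal(scale=0.6, size=n)

vo2 = co_meas * (cao2 - cvo2)                # derived O2 consumption
do2 = co_meas * cao2                         # derived O2 delivery

r_coupled = np.corrcoef(vo2, do2)[0, 1]
r_true = np.corrcoef(co * (cao2 - cvo2), co * cao2)[0, 1]
print(f"correlation with shared measurement error: {r_coupled:.2f}")
print(f"correlation without measurement error:     {r_true:.2f}")
```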


2007 ◽  
Vol 135 (12) ◽  
pp. 4117-4134 ◽  
Author(s):  
Brian Ancell ◽  
Gregory J. Hakim

Abstract The sensitivity of numerical weather forecasts to small changes in initial conditions is estimated using ensemble samples of analysis and forecast errors. Ensemble sensitivity is defined here by linear regression of analysis errors onto a given forecast metric. It is shown that ensemble sensitivity is proportional to the projection of the analysis-error covariance onto the adjoint-sensitivity field. Furthermore, the ensemble-sensitivity approach proposed here involves a small calculation that is easy to implement. Ensemble- and adjoint-based sensitivity fields are compared for a representative wintertime flow pattern near the west coast of North America for a 90-member ensemble of independent initial conditions derived from an ensemble Kalman filter. The forecast metric is taken for simplicity to be the 24-h forecast of sea level pressure at a single point in western Washington State. Results show that adjoint and ensemble sensitivities are very different in terms of location, scale, and magnitude. Adjoint-sensitivity fields reveal mesoscale lower-tropospheric structures that tilt strongly upshear, whereas ensemble-sensitivity fields emphasize synoptic-scale features that tilt modestly throughout the troposphere and are associated with significant weather features at the initial time. Optimal locations for targeting can easily be determined from ensemble sensitivity, and results indicate that the primary targeting locations are located away from regions of greatest adjoint and ensemble sensitivity. It is shown that this method of targeting is similar to previous ensemble-based methods that estimate forecast-error variance reduction, but easily allows for the application of statistical confidence measures to deal with sampling error.
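The ensemble sensitivity defined here is simply a per-variable linear regression of the forecast metric onto analysis perturbations, $\partial J/\partial x_i \approx \mathrm{cov}(J, x_i)/\mathrm{var}(x_i)$. A compact sketch with a synthetic 90-member ensemble (the field size, true sensitivities, and noise level are invented):

```python
import numpy as np

rng = np.random.default_rng(8)
n_ens, n_grid = 90, 1000                # 90 members, as in the paper

# Synthetic analysis perturbations x and a forecast metric J (e.g., the
# 24-h SLP forecast at a point) truly driven by 3 analysis variables.
x = rng.normal(size=(n_ens, n_grid))
true_sens = np.zeros(n_grid)
true_sens[:3] = rng.uniform(2.0, 3.0, size=3)
J = x @ true_sens + rng.normal(scale=0.5, size=n_ens)

# Ensemble sensitivity: regression of J onto each analysis variable,
# dJ/dx_i ~= cov(J, x_i) / var(x_i).
xp = x - x.mean(axis=0)
Jp = J - J.mean()
sensitivity = (Jp @ xp / (n_ens - 1)) / xp.var(axis=0, ddof=1)

# With 90 members, some of the 997 zero-sensitivity points will show
# spurious regression values -- the sampling error that motivates the
# statistical confidence measures mentioned in the abstract.
top = np.argsort(np.abs(sensitivity))[::-1][:3]
print("largest-sensitivity points:", top, "(truth: 0-2)")
print("estimated:", np.round(sensitivity[top], 2), "true:", np.round(true_sens[top], 2))
```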

