How to estimate the measurement error variance associated with ancestry proportion estimates

2011 ◽  
Vol 4 (3) ◽  
pp. 327-337 ◽  
Author(s):  
David B. Allison ◽  
Raymond J. Carroll ◽  
Jasmin Divers ◽  
David T. Redden

1986 ◽
Vol 67 (2) ◽  
pp. 177-185 ◽  
Author(s):  
Lauren L. Morone

Data collected from aircraft equipped with AIDS (Aircraft Integrated Data System) instrumentation during the Global Weather Experiment year of 1979 are used to estimate the observational error of winds at flight level from this and other automated aircraft wind-reporting systems. Structure functions are computed from reports that are paired using specific criteria. The value of this function extrapolated to zero separation distance is an estimate of twice the random measurement-error variance of the AIDS-measured winds. Component-wind errors computed in this way range from 2.1 to 3.1 m·s⁻¹ for the two months of data examined, January and August 1979. Observational error, specified in optimum-interpolation analyses so that the analysis can distinguish among observations of differing quality, is composed of both measurement error and the error of unrepresentativeness. The latter type of error is a function of the resolvable scale of the analysis-prediction system. The structure function, which measures the variability of a field as a function of separation distance, includes both types of error. If the resolvable scale of an analysis procedure is known, an estimate of the observational error can be computed from the structure function at that particular distance. An observational error of 5.3 m·s⁻¹ was computed for the u and v wind components for a sample resolvable scale of 300 km. The errors computed from the structure functions are compared with collocation statistics from radiosondes. The errors associated with automated wind reports are found to compare favorably with those estimated for radiosonde winds at that level.
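The extrapolation step this abstract describes is mechanical enough to sketch. Below is a minimal Python illustration, not the paper's code: it bins squared differences of paired wind components by separation distance, fits a line through the shortest-separation bins, and reads off half the intercept as the random error variance. The bin edges, the linear fit, and the synthetic data are all assumptions.

```python
import numpy as np

def structure_function(obs_diff, separations, bin_edges):
    """Mean squared difference of paired reports, binned by separation distance."""
    sq = obs_diff ** 2
    bins = np.digitize(separations, bin_edges)
    d = np.array([sq[bins == i].mean() for i in range(1, len(bin_edges))])
    centers = 0.5 * (bin_edges[:-1] + bin_edges[1:])
    return centers, d

def error_variance_at_zero(centers, d, n_fit=3):
    """Extrapolate D(r) to r = 0; D(0) estimates twice the random error variance."""
    slope, intercept = np.polyfit(centers[:n_fit], d[:n_fit], 1)
    return 0.5 * intercept

# Toy usage with synthetic paired wind-component reports.
rng = np.random.default_rng(0)
n = 5000
sep = rng.uniform(10, 500, n)                     # pair separation, km
true_diff = 0.004 * sep * rng.standard_normal(n)  # atmospheric difference grows with r
err_a, err_b = 2.5 * rng.standard_normal((2, n))  # ~2.5 m/s random error per report
centers, d = structure_function(true_diff + err_a - err_b, sep, np.linspace(0.0, 500.0, 11))
sigma = np.sqrt(error_variance_at_zero(centers, d))
print(f"estimated component error: {sigma:.2f} m/s")  # recovers ~2.5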


2016 ◽  
Vol 44 (7) ◽  
pp. 2909-2933 ◽  
Author(s):  
Aaron F. McKenny ◽  
Herman Aguinis ◽  
Jeremy C. Short ◽  
Aaron H. Anglin

Computer-aided text analysis (CATA) is a form of content analysis that enables the measurement of constructs by processing text into quantitative data based on the frequency of words. CATA has been proposed as a useful measurement approach with the potential to lead to important theoretical advancements. Ironically, although CATA has been offered as a way to overcome known deficiencies in existing measurement approaches, research has lagged in assessing the technique's measurement rigor. Our article addresses this knowledge gap and describes important implications for past as well as future research using CATA. First, we describe three sources of measurement error variance that are particularly relevant to studies using CATA: transient error, specific factor error, and algorithm error. Second, we describe and demonstrate how to calculate measurement error variance with the entrepreneurial orientation, market orientation, and organizational ambidexterity constructs, offering evidence that the effect sizes underlying past substantive conclusions have been underestimated. Third, we offer best-practice recommendations and demonstrate how to reduce measurement error variance by refining existing CATA measures. In short, we demonstrate that although measurement error variance in CATA has not been measured thus far, it does exist and it affects substantive conclusions. Consequently, our article has implications for theory and practice, as well as for how to assess and minimize measurement error in future CATA research, with the goal of improving the accuracy of substantive conclusions.
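As one concrete illustration of how such error variance can be quantified, the sketch below reflects our assumptions rather than the authors' procedure: it scores the same texts with two alternate word lists, treats them as parallel forms whose correlation estimates reliability in the presence of specific factor error, and uses that reliability to disattenuate an observed correlation.

```python
import numpy as np

def disattenuate(r_xy, rel_x, rel_y=1.0):
    """Correct an observed correlation for measurement error variance."""
    return r_xy / np.sqrt(rel_x * rel_y)

# Toy data; construct names and noise levels are illustrative assumptions.
rng = np.random.default_rng(1)
true_eo = rng.standard_normal(200)                  # latent entrepreneurial orientation
list_a = true_eo + 0.6 * rng.standard_normal(200)   # score from word list A
list_b = true_eo + 0.6 * rng.standard_normal(200)   # score from alternate word list B
outcome = 0.4 * true_eo + rng.standard_normal(200)  # some substantive outcome

rel = np.corrcoef(list_a, list_b)[0, 1]             # parallel-forms reliability
r_obs = np.corrcoef(list_a, outcome)[0, 1]          # attenuated observed correlation
print(f"reliability={rel:.2f}  observed r={r_obs:.2f}  "
      f"corrected r={disattenuate(r_obs, rel):.2f}")
```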


2020 ◽  
Vol 29 (9) ◽  
pp. 2411-2444 ◽
Author(s):  
Anna R S Marinho ◽  
Rosangela H Loschi

Cure fraction models have been widely used to model time-to-event data when a fraction of individuals survives long-term after the disease and is considered cured. Most cure fraction models neglect the measurement error that some covariates may be subject to, which leads to poor estimates of the cure fraction. We introduce a Bayesian promotion time cure model that accounts for both mismeasured covariates and atypical measurement errors. This is attained by assuming a scale mixture of the normal distribution to describe the uncertainty about the measurement error. Extending previous work, we also assume that the measurement error variance is unknown and must be estimated. Three classes of prior distributions are assumed to model the uncertainty about the measurement error variance. Simulation studies evaluate the proposed model in different scenarios and compare it to the standard promotion time cure fraction model. Results show that the proposed models are competitive. The proposed model is fitted to a dataset from a melanoma clinical trial, assuming that Breslow depth is mismeasured.
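For readers unfamiliar with the promotion time structure, the following sketch shows the survival function it implies, S_pop(t | x) = exp{-θ(x)F(t)}, whose limit exp{-θ(x)} is the cure fraction, alongside a heavy-tailed (Student-t) measurement error, one member of the scale-mixture-of-normals family. The log link, the exponential baseline, and all parameter values are illustrative assumptions, not the authors' specification.

```python
import numpy as np

def pop_survival(t, theta, baseline_cdf):
    """Promotion time cure model: survival of the whole population at time t."""
    return np.exp(-theta * baseline_cdf(t))

rng = np.random.default_rng(2)
x_true = rng.standard_normal(1000)                      # e.g., true Breslow depth (standardized)
x_obs = x_true + 0.5 * rng.standard_t(df=3, size=1000)  # heavy-tailed measurement error

beta0, beta1 = 0.2, 0.8
theta = np.exp(beta0 + beta1 * x_true)                  # log link for the promotion rate
cure_fraction = np.exp(-theta)                          # limit of S_pop as t -> infinity

exp_cdf = lambda t: 1.0 - np.exp(-t)                    # assumed exponential(1) promotion times
print(f"mean cure fraction: {cure_fraction.mean():.2f}")
print(f"S_pop(1) for a typical subject: {pop_survival(1.0, np.exp(beta0), exp_cdf):.2f}")
```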


Author(s):  
Erik Meijer ◽  
Edward Oczkowski ◽  
Tom Wansbeek

Measurement error biases OLS results. When the measurement error variance is known in absolute or relative (reliability) form, adjustment is simple. We link the (known) estimators for these cases to GMM theory and provide simple derivations of their standard errors. Our focus is on the test statistics. We show monotonic relations between the t-statistics and R²s of the (infeasible) estimator that would apply if there were no measurement error, the inconsistent OLS estimator, and the consistent estimator that corrects for measurement error, and we show the relation between the t-value and the magnitude of the assumed measurement error variance or reliability. We also discuss how standard errors can be computed when the measurement error variance or reliability is estimated rather than known, and we indicate how the estimators generalize to the panel data context, where dependence among observations must be dealt with. By way of illustration, we estimate a hedonic wine price function for different values of the reliability of the proxy used for the wine quality variable.
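The "simple adjustment" referred to above can be written in a few lines. The sketch below is ours, not the article's GMM treatment: it corrects the attenuated OLS slope using a known error variance σ²_me, equivalently a known reliability λ = (s²_x − σ²_me)/s²_x. The wine-quality framing of the toy data is borrowed from the illustration mentioned in the abstract.

```python
import numpy as np

def corrected_slope(x_obs, y, sigma2_me):
    """OLS slope and its correction for a known measurement error variance."""
    s2_x = x_obs.var(ddof=1)
    beta_ols = np.cov(x_obs, y, ddof=1)[0, 1] / s2_x
    lam = (s2_x - sigma2_me) / s2_x   # implied reliability of x_obs
    return beta_ols, beta_ols / lam

# Toy data; the true slope and error variance are assumptions.
rng = np.random.default_rng(3)
x = rng.standard_normal(500)                          # e.g., true wine quality
x_obs = x + np.sqrt(0.5) * rng.standard_normal(500)   # proxy with known sigma2_me = 0.5
y = 1.0 * x + rng.standard_normal(500)                # e.g., log price

beta_ols, beta_c = corrected_slope(x_obs, y, sigma2_me=0.5)
print(f"OLS: {beta_ols:.2f}  corrected: {beta_c:.2f}  (true slope 1.00)")
```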


2020 ◽  
Vol 3 (1) ◽  
pp. 94-123 ◽  
Author(s):  
Brenton M. Wiernik ◽  
Jeffrey A. Dahlke

Most published meta-analyses address only artifactual variance due to sampling error and ignore the role of other statistical and psychometric artifacts, such as measurement error variance (due to factors including unreliability of measurements, group misclassification, and variable treatment strength) and selection effects (including range restriction or enhancement and collider biases). These artifacts can have severe biasing effects on the results of individual studies and meta-analyses. Failing to account for these artifacts can lead to inaccurate conclusions about the mean effect size and between-studies effect-size heterogeneity, and can influence the results of meta-regression, publication-bias, and sensitivity analyses. In this article, we provide a brief introduction to the biasing effects of measurement error variance and selection effects and their relevance to a variety of research designs. We describe how to estimate the effects of these artifacts in different research designs and correct for their impacts in primary studies and meta-analyses. We consider meta-analyses of correlations, observational group differences, and experimental effects. We provide R code to implement the corrections described.
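The article itself supplies R code; as a language-neutral illustration only, the Python sketch below applies two of the classic artifact corrections just described, disattenuation for unreliability and Thorndike's Case II range-restriction correction, to a single observed correlation. The reliabilities, the ratio u, and the order of corrections are assumed for illustration; the appropriate ordering depends on the study design.

```python
import math

def correct_unreliability(r, rel_x, rel_y):
    """Disattenuate r for measurement error variance in x and y."""
    return r / math.sqrt(rel_x * rel_y)

def correct_range_restriction(r, u):
    """Thorndike Case II; u = restricted SD / unrestricted SD of x."""
    return (r / u) / math.sqrt(1 + r**2 * (1 / u**2 - 1))

# Hypothetical inputs a meta-analyst would supply for one study.
r_obs = 0.25
r_true = correct_range_restriction(correct_unreliability(r_obs, 0.80, 0.70), u=0.6)
print(f"observed r = {r_obs:.2f}, corrected r = {r_true:.3f}")
```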

