measurement error variance
Recently Published Documents

TOTAL DOCUMENTS: 27 (FIVE YEARS: 0)
H-INDEX: 8 (FIVE YEARS: 0)

Author(s): Erik Meijer, Edward Oczkowski, Tom Wansbeek

Abstract Measurement error biases OLS results. When the measurement error variance is known in absolute or relative (reliability) form, adjustment is simple. We link the (known) estimators for these cases to GMM theory and provide simple derivations of their standard errors. Our focus is on the test statistics. We show monotonic relations between the t-statistics and $R^2$s of the (infeasible) estimator if there were no measurement error, the inconsistent OLS estimator, and the consistent estimator that corrects for measurement error, and we show the relation between the t-value and the magnitude of the assumed measurement error variance or reliability. We also discuss how standard errors can be computed when the measurement error variance or reliability is estimated rather than known, and we indicate how the estimators generalize to the panel data context, where we have to deal with dependency among observations. By way of illustration, we estimate a hedonic wine price function for different values of the reliability of the proxy used for the wine quality variable.
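The known-variance adjustment the abstract describes can be sketched in a few lines: OLS on an error-ridden proxy is attenuated toward zero, and with a known measurement error variance the slope is rescaled by the estimated reliability. A minimal simulation with hypothetical numbers, not the authors' GMM implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)                # true regressor
y = 2.0 * x + rng.normal(size=n)      # true slope beta = 2
sigma_u2 = 0.5                        # measurement error variance, assumed known
w = x + rng.normal(scale=np.sqrt(sigma_u2), size=n)  # observed proxy

C = np.cov(w, y)                      # 2x2 sample covariance matrix
b_ols = C[0, 1] / C[0, 0]             # attenuated OLS slope (~ 1.33 here)

# Reliability lambda = var(x) / var(w), recovered from the known sigma_u2
lam = (C[0, 0] - sigma_u2) / C[0, 0]
b_corrected = b_ols / lam             # consistent estimator (~ 2)
```

The correction simply divides the OLS slope by the reliability, which is the scalar version of the estimators the paper embeds in GMM theory to obtain standard errors.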



2020, Vol 3 (1), pp. 94-123
Author(s): Brenton M. Wiernik, Jeffrey A. Dahlke

Most published meta-analyses address only artifactual variance due to sampling error and ignore the role of other statistical and psychometric artifacts, such as measurement error variance (due to factors including unreliability of measurements, group misclassification, and variable treatment strength) and selection effects (including range restriction or enhancement and collider biases). These artifacts can have severe biasing effects on the results of individual studies and meta-analyses. Failing to account for these artifacts can lead to inaccurate conclusions about the mean effect size and between-studies effect-size heterogeneity, and can influence the results of meta-regression, publication-bias, and sensitivity analyses. In this article, we provide a brief introduction to the biasing effects of measurement error variance and selection effects and their relevance to a variety of research designs. We describe how to estimate the effects of these artifacts in different research designs and correct for their impacts in primary studies and meta-analyses. We consider meta-analyses of correlations, observational group differences, and experimental effects. We provide R code to implement the corrections described.
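The article provides R code for its corrections; that code is not reproduced here. As a flavor of the simplest such correction, the classical correction for attenuation divides an observed correlation by the square root of the product of the two reliabilities (all numbers below are hypothetical):

```python
import math

def disattenuate(r_obs, rxx, ryy):
    """Correct an observed correlation for measurement error variance
    in both variables (classical correction for attenuation)."""
    return r_obs / math.sqrt(rxx * ryy)

# Observed r = .30 with hypothetical reliabilities .80 and .70
r_corrected = disattenuate(0.30, 0.80, 0.70)  # ~ .40
```

Note that the corrected value is larger than the observed one, which is exactly why failing to account for unreliability biases meta-analytic means toward zero.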



2020, Vol 29 (9), pp. 2411-2444
Author(s): Anna R S Marinho, Rosangela H Loschi

Cure fraction models have been widely used to model time-to-event data when a fraction of the individuals survives long-term after the disease and is considered cured. Most cure fraction models neglect the measurement error that some covariates may be subject to, which leads to poor estimates of the cure fraction. We introduce a Bayesian promotion time cure model that accounts for both mismeasured covariates and atypical measurement errors. This is attained by assuming a scale mixture of the normal distribution to describe the uncertainty about the measurement error. Extending previous works, we also assume that the measurement error variance is unknown and should be estimated. Three classes of prior distributions are assumed to model the uncertainty about the measurement error variance. Simulation studies evaluate the proposed model in different scenarios and compare it to the standard promotion time cure fraction model. Results show that the proposed models are competitive. The proposed model is fitted to analyze a dataset from a melanoma clinical trial, assuming that the Breslow depth is mismeasured.
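For readers unfamiliar with the promotion time formulation, the population survival function it rests on can be sketched directly; this is only the deterministic skeleton of the model (with hypothetical parameter values), not the Bayesian measurement-error machinery the paper develops:

```python
import numpy as np

def promotion_time_survival(t, theta, F):
    """Population survival under the promotion time cure model:
    S_pop(t) = exp(-theta * F(t)), where F is a proper baseline CDF.
    The cured proportion is the limit as t -> inf, i.e. exp(-theta)."""
    return np.exp(-theta * F(t))

# Illustration with an exponential baseline CDF, F(t) = 1 - exp(-t)
theta = 1.2                                    # hypothetical value
F = lambda t: 1.0 - np.exp(-t)
cure_fraction = np.exp(-theta)                 # ~ 0.30 of the population cured
S_large_t = promotion_time_survival(50.0, theta, F)  # plateaus at cure_fraction
```

Because covariates typically enter through theta, error in a covariate such as Breslow depth propagates directly into the estimated cure fraction, which is the bias the paper corrects.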





Biometrics, 2018, Vol 75 (1), pp. 297-307
Author(s): Aurélie Bertrand, Ingrid Van Keilegom, Catherine Legrand


2017, Vol 6 (3), pp. 335-359
Author(s): Brady T West, Frederick G Conrad, Frauke Kreuter, Felicitas Mittereder


2016, Vol 44 (7), pp. 2909-2933
Author(s): Aaron F. McKenny, Herman Aguinis, Jeremy C. Short, Aaron H. Anglin

Computer-aided text analysis (CATA) is a form of content analysis that enables the measurement of constructs by processing text into quantitative data based on the frequency of words. CATA has been proposed as a useful measurement approach with the potential to lead to important theoretical advancements. Ironically, while CATA has been offered to overcome some of the known deficiencies in existing measurement approaches, we have lagged behind in assessing the technique's measurement rigor. Our article addresses this knowledge gap and describes important implications for past as well as future research using CATA. First, we describe three sources of measurement error variance that are particularly relevant to studies using CATA: transient error, specific factor error, and algorithm error. Second, we describe and demonstrate how to calculate measurement error variance with the entrepreneurial orientation, market orientation, and organizational ambidexterity constructs, offering evidence that effect sizes in past substantive research have been underestimated. Third, we offer best-practice recommendations and demonstrate how to reduce measurement error variance by refining existing CATA measures. In short, we demonstrate that although measurement error variance in CATA has not been measured thus far, it does exist and it affects substantive conclusions. Consequently, our article has implications for theory and practice, as well as for how to assess and minimize measurement error in future CATA research, with the goal of improving the accuracy of substantive conclusions.
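The word-frequency measurement that CATA rests on can be sketched in a few lines. The dictionary below is a hypothetical stand-in, not one of the validated wordlists the article evaluates; the point is only to show where dictionary coverage (a source of specific factor and algorithm error) enters the measurement:

```python
import re

# Hypothetical mini-dictionary for an "entrepreneurial orientation"-style
# construct; real validated CATA wordlists are much larger.
DICTIONARY = {"innovate", "innovative", "venture", "proactive", "risk"}

def cata_score(text):
    """Score a text as the proportion of its words that match the
    dictionary, the basic word-frequency measurement behind CATA."""
    words = re.findall(r"[a-z]+", text.lower())
    if not words:
        return 0.0
    hits = sum(w in DICTIONARY for w in words)
    return hits / len(words)

score = cata_score("We pursue innovative, proactive ventures despite risk.")
```

Note that "ventures" does not match the entry "venture": whether the algorithm stems words is exactly the kind of choice that generates algorithm error variance across CATA implementations.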



2015, Vol 26 (6), pp. 2885-2896
Author(s): Zeynep Kalaylioglu, Haydar Demirhan

Joint mixed modeling is an attractive approach for the analysis of a scalar response measured at a primary endpoint and longitudinal measurements on a covariate. In the standard Bayesian analysis of these models, the measurement error variance and the variance/covariance of the random effects are a priori modeled independently. The key point is that these variances cannot be assumed independent given the total variation in a response. This article presents a joint Bayesian analysis in which these variance terms are a priori modeled jointly. Simulations illustrate that analyses with the multivariate variance prior generally lead to reduced bias (smaller relative bias) and improved efficiency (smaller interquartile range) in the posterior inference compared with analyses using independent variance priors.



2015, Vol 8 (2), pp. e1-e4
Author(s): Frederick L. Oswald, Seydahmet Ercan, Samuel T. McAbee, Jisoo Ock, Amy Shaw

There is understandable concern by LeBreton, Scherer, and James (2014) that psychometric corrections in organizational research are nothing more than a form of statistical hydraulics. Statistical corrections for measurement error variance and range restriction might inappropriately ratchet observed effects upward into regions of practical significance and publication glory, at the cost of highly questionable results.
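The "ratcheting" at issue is easy to see in the standard range restriction correction (Thorndike Case II for direct range restriction); the numbers below are hypothetical and simply show how far a modest observed correlation can be pushed upward:

```python
import math

def correct_range_restriction(r, u):
    """Thorndike Case II correction for direct range restriction,
    where u = unrestricted SD / restricted SD of the predictor."""
    return r * u / math.sqrt(1.0 + r * r * (u * u - 1.0))

# Observed r = .25 in a sample whose predictor SD is 2/3 of the
# population SD (u = 1.5, hypothetical values)
r_corrected = correct_range_restriction(0.25, 1.5)  # ~ .36
```

An observed .25 becomes roughly .36 after correction, which is precisely the kind of upward movement the commentary worries can cross thresholds of practical significance.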



2013, Vol 29 (2), pp. 277-297
Author(s): Brady T. West, Frauke Kreuter, Ursula Jaenichen

Abstract Recent research has attempted to examine the proportion of interviewer variance that is due to interviewers systematically varying in their success in obtaining cooperation from respondents with varying characteristics (i.e., nonresponse error variance), rather than variance among interviewers in systematic measurement difficulties (i.e., measurement error variance): that is, whether correlated responses within interviewers arise from variance among interviewers in the pools of respondents recruited, or from variance in interviewer-specific mean response biases. Unfortunately, work to date has considered data only from a CATI survey and thus suffers from two limitations: interviewer effects are commonly much smaller in CATI surveys, and, more importantly, sample units are often contacted by several CATI interviewers before a final outcome (response or final refusal) is achieved. The latter introduces difficulties in assigning nonrespondents to interviewers, so interviewer variance components are only estimable under strong assumptions. This study aims to replicate the initial work by analyzing data from a national CAPI survey in Germany in which CAPI interviewers were responsible for working a fixed subset of cases.
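The interviewer variance being decomposed here is usually summarized by an intraclass correlation. A hypothetical simulation (not the study's data) sketches the standard one-way ANOVA estimator, with interviewer-specific mean biases generating the within-interviewer correlation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical interpenetrated design: each interviewer works a fixed
# set of cases, and interviewer-specific mean biases induce correlated
# responses within interviewers.
n_int, n_per = 50, 40                              # interviewers, cases each
bias = rng.normal(scale=0.5, size=n_int)           # interviewer effects
y = bias[:, None] + rng.normal(size=(n_int, n_per))

# One-way ANOVA estimator of the interviewer intraclass correlation
msb = n_per * np.var(y.mean(axis=1), ddof=1)       # between-interviewer MS
msw = np.mean(np.var(y, axis=1, ddof=1))           # within-interviewer MS
rho_int = (msb - msw) / (msb + (n_per - 1) * msw)  # true value here is 0.2
```

Separating how much of such a rho reflects recruitment differences (nonresponse error variance) versus response biases (measurement error variance) requires knowing which interviewer is responsible for each nonrespondent, which is exactly what the fixed CAPI assignments make possible.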


