Adjustment of recall errors in duration data using SIMEX

2016 ◽  
Vol 13 (1) ◽  
Author(s):  
Jose Pina-Sánchez

It is widely accepted that, due to memory failures, retrospective survey questions tend to be prone to measurement error. However, the proportion of studies using such data that attempt to adjust for the measurement problem is shockingly low. Arguably, this is to a great extent due to both the complexity of the methods available and the need to access a subsample containing either a gold standard or replicated values. Here I suggest the implementation of a version of SIMEX capable of adjusting for the types of multiplicative measurement error associated with memory failures in the retrospective report of durations of life-course events. SIMEX is a relatively simple method to implement, and it does not require replicated or validation data so long as the error process can be adequately specified. To assess the effectiveness of the method I use simulated data. I create twelve scenarios based on the combinations of three outcome models (linear, logit and Poisson) and four types of multiplicative error (non-systematic, systematic negative, systematic positive and heteroscedastic) affecting one of the explanatory variables. I show that SIMEX can be satisfactorily implemented in each of these scenarios. Furthermore, the method can achieve partial adjustments even in scenarios where the actual distribution and prevalence of the measurement error differ substantially from what is assumed in the adjustment, which makes it an interesting sensitivity tool in those cases where all that is known about the error process amounts to an educated guess.
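The simulation-extrapolation idea can be sketched numerically. The following is a minimal illustration, not the article's implementation: it assumes a linear outcome model, non-systematic multiplicative error, and a known log-scale error standard deviation (`sigma_u`, here 0.2); all names and settings are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x_true = rng.uniform(1.0, 10.0, n)                # true durations (years)
beta0, beta1 = 2.0, 0.5
y = beta0 + beta1 * x_true + rng.normal(0.0, 1.0, n)

sigma_u = 0.2                                     # assumed log-scale error sd
x_obs = x_true * np.exp(rng.normal(0.0, sigma_u, n))  # multiplicative recall error

def ols_slope(x, y):
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

beta1_naive = ols_slope(x_obs, y)                 # attenuated by the error

# Simulation step: add extra multiplicative error so that the total
# log-scale error variance is inflated to sigma_u^2 * (1 + lam)
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
B = 200
mean_slopes = []
for lam in lambdas:
    est = [ols_slope(x_obs * np.exp(rng.normal(0.0, sigma_u * np.sqrt(lam), n)), y)
           for _ in range(B)]
    mean_slopes.append(np.mean(est))

# Extrapolation step: fit a quadratic in lambda and evaluate at lambda = -1,
# the hypothetical error-free point
beta1_simex = np.polyval(np.polyfit(lambdas, mean_slopes, 2), -1.0)
```

The extrapolation to λ = −1 represents the hypothetical error-free measurement; a quadratic extrapolant is a common default. If the assumed `sigma_u` differs from the true error scale, the adjustment is only partial, which is what makes SIMEX usable as a sensitivity tool.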

2017 ◽  
Vol 28 (3) ◽  
pp. 670-680 ◽  
Author(s):  
Monica M Vasquez ◽  
Chengcheng Hu ◽  
Denise J Roe ◽  
Marilyn Halonen ◽  
Stefano Guerra

Measurement of serum biomarkers by multiplex assays may be more variable than measurement by single-biomarker assays. Measurement error in these data may bias parameter estimates in regression analysis, which can mask true associations of serum biomarkers with an outcome. The Least Absolute Shrinkage and Selection Operator (LASSO) can be used for variable selection in such high-dimensional data. Furthermore, when the distribution of the measurement error is known, or is estimated from replication data, a simple measurement error correction can be applied to the LASSO. In practice, however, the distribution of the measurement error is unknown, and estimating it through replication is expensive, both in monetary cost and in its demand for additional sample material, which is often limited in quantity. We adapt an existing bias correction approach by estimating the measurement error from validation data in which a subset of serum biomarkers is re-measured on a random subset of the study sample. We evaluate this method using simulated data and data from the Tucson Epidemiological Study of Airway Obstructive Disease (TESAOD). We show that bias in parameter estimation is reduced and variable selection is improved.
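The correction strategy can be illustrated with a generic corrected-LASSO sketch (not necessarily the authors' exact estimator): estimate each biomarker's error variance from re-measured validation pairs, then subtract it from the diagonal of the Gram matrix before running coordinate descent. All dimensions and parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 2000, 10
beta = np.zeros(p)
beta[:3] = [1.0, -0.8, 0.6]                      # three truly associated biomarkers
X = rng.normal(0.0, 1.0, (n, p))                 # true biomarker levels
y = X @ beta + rng.normal(0.0, 0.5, n)

sigma_u = 0.5
W = X + rng.normal(0.0, sigma_u, (n, p))         # error-prone multiplex measurements

# Validation data: re-measure a random 10% of subjects. A pair of measurements
# of the same subject differs only by measurement error, so
# Var(U_j) can be estimated as E[(W1_j - W2_j)^2] / 2.
m = n // 10
idx = rng.choice(n, m, replace=False)
W2 = X[idx] + rng.normal(0.0, sigma_u, (m, p))
var_u = np.mean((W[idx] - W2) ** 2, axis=0) / 2.0

def lasso_cd(gram, cov_xy, lam, iters=500):
    """Coordinate descent on 0.5 b'(gram)b - cov_xy'b + lam*||b||_1."""
    b = np.zeros(len(cov_xy))
    for _ in range(iters):
        for j in range(len(b)):
            r = cov_xy[j] - gram[j] @ b + gram[j, j] * b[j]
            b[j] = np.sign(r) * max(abs(r) - lam, 0.0) / gram[j, j]
    return b

lam = 0.05
cov_wy = W.T @ y / n
gram_naive = W.T @ W / n
gram_corr = gram_naive - np.diag(var_u)          # subtract estimated error variance

beta_naive = lasso_cd(gram_naive, cov_wy, lam)
beta_corr = lasso_cd(gram_corr, cov_wy, lam)
```

The corrected Gram matrix undoes the attenuation that measurement error induces in the naive coefficients; with larger error variances the corrected matrix can lose positive semi-definiteness, which is where more careful formulations are needed.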


2020 ◽  
Vol 11 (3) ◽  
pp. 289-306
Author(s):  
Harvey Goldstein ◽  
Michele Haynes ◽  
George Leckie ◽  
Phuong Tran

The presence of randomly distributed measurement errors in scale scores such as those used in educational and behavioural assessments implies that careful adjustments are required to statistical model estimation procedures if inferences are required for ‘true’ as opposed to ‘observed’ relationships. In many cases this requires the use of external values for ‘reliability’ statistics or ‘measurement error variances’ which may be provided by a test constructor or else inferred or estimated by the data analyst. Popular measures are those described as ‘internal consistency’ estimates and sometimes other measures based on data grouping. All such measures, however, make particular assumptions that may be questionable but are often not examined. In this paper we focus on scaled scores derived from aggregating a set of indicators, and set out a general methodological framework for exploring different ways of estimating reliability statistics and measurement error variances, critiquing certain approaches and suggesting more satisfactory methods in the presence of longitudinal data. In particular, we explore the assumption of local (conditional) item response independence and show how a failure of this assumption can lead to biased estimates in statistical models using scaled scores as explanatory variables. We illustrate our methods using a large longitudinal data set of mathematics test scores from Queensland, Australia.
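As a concrete example of the internal-consistency measures discussed, Cronbach's alpha can be computed from item-level data; the same simulation also shows the paper's central caveat: a violation of local item independence (here, a shared nuisance factor) inflates alpha relative to the scale's reliability with respect to the target trait. All settings are illustrative.

```python
import numpy as np

def cronbach_alpha(items):
    """Internal-consistency estimate from an (n_subjects, k_items) array."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
n, k = 5000, 10
theta = rng.normal(0.0, 1.0, n)                  # target trait ('true' score)
noise = rng.normal(0.0, 1.0, (n, k))             # locally independent item errors
items_ind = theta[:, None] + noise

# Violate local independence: a nuisance factor shared by every item
shared = rng.normal(0.0, 0.7, n)
items_dep = theta[:, None] + shared[:, None] + noise

alpha_ind = cronbach_alpha(items_ind)            # close to the true reliability
alpha_dep = cronbach_alpha(items_dep)            # inflated: shared variance is
                                                 # counted as if it were the trait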


Author(s):  
Сергій Вікторович Губін ◽  
Сергій Олександрович Тишко ◽  
Олег Євгенович Забула ◽  
Юрій Миколайович Черниченко

The subject matter of the article is oscilloscope methods for measuring the phase shift of two harmonic signals after their two-half-period (full-wave) transformation and summation. The goal is to develop ways to implement an oscilloscope method of measuring the phase shift of two harmonic signals that significantly reduces the error component caused by phase asymmetry of the transmission channels, by reducing their length, and to analyse the measurement error of each method for determining the phase shift using the two-half-period transformation. The tasks: statement of the measurement problem of determining the phase shift of two harmonic signals; analysis of known oscilloscope methods of phase-shift measurement; development of methods for implementing the oscilloscope method based on analysis of the characteristics of the total signal obtained after the two-half-period transformation; and estimation of the measurement errors of each method. The methods used are the methodology for estimating measurement errors in indirect measurements. The following results were obtained. Methods are proposed for implementing an oscilloscope measurement method using the total signal after the two-half-period transformation, based on analysis of the temporal characteristics and local extrema of this signal. The list of measuring operations that implement each method is defined. The components of the measurement error were analysed and their degree of correlation determined. Relations for calculating the measurement error were synthesized. Conclusions.
The scientific novelty of the obtained results is the following: an oscilloscope method has been developed that substantially reduces the error component caused by phase asymmetry of the signal transmission channels; relations were obtained for implementing the oscilloscope measurement method using the two-half-period transformation; and relations were obtained for calculating the standard deviation of the total measurement error for each of the proposed methods.
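The article's methods work on an oscilloscope trace of the rectified and summed signals. As a generic numerical statement of the underlying measurement problem only (not the authors' oscilloscope procedure), the phase shift of two sampled harmonic signals can be estimated by synchronous detection against quadrature references:

```python
import numpy as np

fs, f0 = 10_000.0, 50.0                 # sample rate and signal frequency, Hz
t = np.arange(0, 0.2, 1.0 / fs)         # an integer number of signal periods
phi_true = np.deg2rad(37.0)
s1 = np.sin(2 * np.pi * f0 * t)
s2 = np.sin(2 * np.pi * f0 * t + phi_true)

ref_sin = np.sin(2 * np.pi * f0 * t)
ref_cos = np.cos(2 * np.pi * f0 * t)

def phase(sig):
    # Synchronous detection: project the signal onto quadrature references
    return np.arctan2(sig @ ref_cos, sig @ ref_sin)

phi_est = phase(s2) - phase(s1)         # phase shift of s2 relative to s1
```

Taking the difference of two phase estimates cancels any reference phase common to both channels; the article's concern is the residual error when the two physical channels are not phase-symmetric.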


2020 ◽  
Author(s):  
Paul Robert Connor ◽  
Ellen Riemke Katrien Evers

Payne, Vuletich, and Lundberg’s bias-of-crowds model proposes that a number of empirical puzzles can be resolved by conceptualizing implicit bias as a feature of situations rather than a feature of individuals. In the present article we argue against this model and propose that, given the existing evidence, implicit bias is best understood as an individual-level construct measured with substantial error. First, using real and simulated data, we show how each of Payne and colleagues’ proposed puzzles can be explained as being the result of measurement error and its reduction via aggregation. Second, we discuss why the authors’ counterarguments against this explanation have been unconvincing. Finally, we test a hypothesis derived from the bias-of-crowds model about the effect of an individually targeted “implicit-bias-based expulsion program” within universities and show the model to lack empirical support. We conclude by considering the implications of conceptualizing implicit bias as a noisily measured individual-level construct for ongoing implicit-bias research. All data and code are available at https://osf.io/tj8u6/.
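The aggregation argument can be made concrete: a noisily measured individual-level construct correlates weakly with the truth person by person, while its regional average correlates strongly, because independent measurement error cancels in the mean. A minimal simulation (all numbers illustrative, not from the article's data):

```python
import numpy as np

rng = np.random.default_rng(3)
n_regions, per_region = 100, 500
region_mean = rng.normal(0.0, 1.0, n_regions)            # regional level of the construct
true = region_mean[:, None] + rng.normal(0.0, 1.0, (n_regions, per_region))
observed = true + rng.normal(0.0, 3.0, (n_regions, per_region))  # heavy measurement error

# Individual level: the noisy measure correlates weakly with the construct
r_individual = np.corrcoef(true.ravel(), observed.ravel())[0, 1]
# Regional averages: independent errors cancel, so the correlation is strong
r_aggregate = np.corrcoef(true.mean(axis=1), observed.mean(axis=1))[0, 1]
```

Stable aggregate patterns alongside unstable individual scores are thus exactly what an individual-level construct measured with substantial error would produce, with no need to locate the bias in situations.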


2017 ◽  
Vol 928 (10) ◽  
pp. 58-63 ◽  
Author(s):  
V.I. Salnikov

The initial subject of study is sums of measurement errors. It is assumed that the errors follow the normal law, but with a limit on the value of the marginal error, Δpred = 2m. It is known that to each number of terms ni there corresponds a confidence interval that covers the value of the sum, which is equal to zero. The paradox is that the probability of such an event is zero; therefore, it is impossible to determine the value of ni at which the sum becomes zero. The article proposes to consider instead the event that the sum of errors varies within the 2m limits with a confidence level of 0.954. Within the group, all the sums then have a limiting error. These tolerances are proposed for use as discrepancy limits in geodesy instead of 2m·√(ni). The concept of "the law of the truncated normal distribution with Δpred = 2m" is suggested to be introduced.
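The practical implication can be illustrated by simulation: if individual errors are drawn from a normal law truncated at the marginal error Δpred = 2m, the 0.954-level tolerance for a sum of n errors comes out tighter than the conventional 2m·√(n). A sketch under these assumptions (values illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
m = 1.0                      # standard error of one measurement
n_terms = 16                 # number of errors in each sum
trials = 100_000

# Errors from a normal law truncated at the marginal error 2m
# (simple rejection sampling; about 95.4% of draws are accepted)
draws = rng.normal(0.0, m, (trials, n_terms * 2))
accepted = draws[np.abs(draws) <= 2 * m]
errors = accepted[: trials * n_terms].reshape(trials, n_terms)
sums = errors.sum(axis=1)

# Empirical 0.954-level tolerance for the sum, vs the conventional one
tol_trunc = np.quantile(np.abs(sums), 0.954)
tol_conv = 2 * m * np.sqrt(n_terms)
```

Truncation removes the tails that inflate the variance of the sum, which is why the truncated-law tolerances are tighter than 2m·√(n).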


2021 ◽  
pp. 1-22
Author(s):  
Daisuke Kurisu ◽  
Taisuke Otsu

This paper studies the uniform convergence rates of Li and Vuong’s (1998, Journal of Multivariate Analysis 65, 139–165; hereafter LV) nonparametric deconvolution estimator and its regularized version by Comte and Kappus (2015, Journal of Multivariate Analysis 140, 31–46) for the classical measurement error model, where repeated noisy measurements on the error-free variable of interest are available. In contrast to LV, our assumptions allow unbounded supports for the error-free variable and measurement errors. Compared to Bonhomme and Robin (2010, Review of Economic Studies 77, 491–533) specialized to the measurement error model, our assumptions do not require existence of the moment generating functions of the square and product of repeated measurements. Furthermore, by utilizing a maximal inequality for the multivariate normalized empirical characteristic function process, we derive uniform convergence rates that are faster than the ones derived in these papers under such weaker conditions.
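A simplified version of the deconvolution construction can be sketched. With repeated measurements w1 = x + e1 and w2 = x + e2 and i.i.d. symmetric errors, E[cos(t(w1 − w2))] equals |φ_e(t)|², so the error characteristic function can be estimated from the data themselves; the density of x is then recovered by Fourier inversion with a spectral cutoff as a crude stand-in for the papers' regularization. Distributions and thresholds below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50_000
x = rng.normal(0.0, 1.0, n)             # error-free variable of interest
w1 = x + rng.normal(0.0, 0.5, n)        # repeated noisy measurements
w2 = x + rng.normal(0.0, 0.5, n)

t = np.linspace(-6.0, 6.0, 401)

# For i.i.d. symmetric errors, E[cos(t(w1 - w2))] = |phi_err(t)|^2
phi_err2 = np.array([np.mean(np.cos(ti * (w1 - w2))) for ti in t])
phi_err = np.sqrt(np.clip(phi_err2, 1e-12, None))

# Empirical characteristic function of one measurement
phi_w = np.array([np.mean(np.exp(1j * ti * w1)) for ti in t])

# Deconvolve, regularizing with a spectral cutoff where phi_err is small
phi_x = np.where(phi_err > 0.1, phi_w / phi_err, 0.0)

dt = t[1] - t[0]
def density(x0):
    # Fourier inversion restricted to the kept frequency band
    return np.sum(np.real(np.exp(-1j * t * x0) * phi_x)) * dt / (2 * np.pi)

f0 = density(0.0)                        # estimate of the density of x at 0
```

The cutoff is where the uniform convergence rates are decided: dividing by a small, noisily estimated φ_err amplifies error, and how fast the cutoff can grow with n under unbounded supports is exactly what the paper's maximal inequality controls.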


2000 ◽  
Vol 30 (2) ◽  
pp. 306-310 ◽  
Author(s):  
M S Williams ◽  
H T Schreuder

Assuming volume equations with multiplicative errors, we derive simple conditions for determining when measurement error in total height is large enough that only using tree diameter, rather than both diameter and height, is more reliable for predicting tree volumes. Based on data for different tree species of excurrent form, we conclude that measurement errors up to ±40% of the true height can be tolerated before inclusion of estimated height in volume prediction is no longer warranted.
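The trade-off can be reproduced in a small simulation: fit volume equations with and without height on accurately measured calibration trees, then apply them to field trees whose heights carry up to ±40% error. The allometric parameters below are invented for illustration, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(6)

def simulate(n):
    D = rng.uniform(20.0, 60.0, n)                            # diameter, cm
    H = 1.3 + 0.8 * D ** 0.9 + rng.normal(0.0, 2.0, n)        # height, m
    V = 4e-5 * D ** 2 * H * np.exp(rng.normal(0.0, 0.05, n))  # multiplicative error
    return D, H, V

# Fit both equations (log-log form) on accurately measured calibration trees
D, H, V = simulate(5000)
def fit(feats, y):
    X = np.column_stack([np.ones(len(y))] + feats)
    return np.linalg.lstsq(X, np.log(y), rcond=None)[0]
coef_dh = fit([np.log(D), np.log(H)], V)
coef_d = fit([np.log(D)], V)

# Apply them to field trees whose measured heights carry up to +/-40% error
Df, Hf, Vf = simulate(5000)
H_noisy = Hf * (1 + rng.uniform(-0.4, 0.4, len(Hf)))

def rmse(coef, feats):
    X = np.column_stack([np.ones(len(Vf))] + feats)
    return np.sqrt(np.mean((X @ coef - np.log(Vf)) ** 2))

rmse_dh_true = rmse(coef_dh, [np.log(Df), np.log(Hf)])     # height helps...
rmse_dh_noisy = rmse(coef_dh, [np.log(Df), np.log(H_noisy)])
rmse_d = rmse(coef_d, [np.log(Df)])                        # ...until it is too noisy
```

Because height is strongly correlated with diameter, the diameter-only equation captures most of the signal, so a badly measured height adds more noise than information.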


2002 ◽  
pp. 323-332 ◽  
Author(s):  
A Sartorio ◽  
G De Nicolao ◽  
D Liberati

OBJECTIVE: The quantitative assessment of gland responsiveness to exogenous stimuli is typically carried out using the peak value of the hormone concentrations in plasma, the area under its curve (AUC), or through deconvolution analysis. However, none of these methods is satisfactory, due to either sensitivity to measurement errors or various sources of bias. The objective was to introduce and validate an easy-to-compute responsiveness index, robust in the face of measurement errors and interindividual variability of kinetics parameters. DESIGN: The new method has been tested on responsiveness tests for the six pituitary hormones (using GH-releasing hormone, thyrotrophin-releasing hormone, gonadotrophin-releasing hormone and corticotrophin-releasing hormone as secretagogues), for a total of 174 tests. Hormone concentrations were assayed in six to eight samples between -30 min and 120 min from the stimulus. METHODS: An easy-to-compute direct formula has been worked out to assess the 'stimulated AUC', that is the part of the AUC of the response curve depending on the stimulus, as opposed to pre- and post-stimulus spontaneous secretion. The weights of the formula have been reported for the six pituitary hormones and some popular sampling protocols. RESULTS AND CONCLUSIONS: The new index is less sensitive to measurement error than the peak value. Moreover, it provides results that cannot be obtained from a simple scaling of either the peak value or the standard AUC. Future studies are needed to show whether the reduced sensitivity to measurement error and the proportionality to the amount of released hormone render the stimulated AUC indeed a valid alternative to the peak value for the diagnosis of the different pathophysiological states, such as, for instance, GH deficits.
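The paper's stimulated AUC is computed by a direct weighted formula whose weights are tabulated for each hormone and sampling protocol. As a simplified stand-in for the idea, the stimulus-dependent part of the AUC can be sketched by subtracting the pre-stimulus baseline before integrating; the sampling times below follow the protocol described, but the concentrations are invented.

```python
import numpy as np

# Sampling times (min, relative to the stimulus) and hormone concentrations
t = np.array([-30.0, 0.0, 15.0, 30.0, 60.0, 90.0, 120.0])
conc = np.array([2.1, 2.0, 8.5, 12.0, 9.0, 5.5, 3.2])

baseline = conc[t <= 0].mean()          # pre-stimulus spontaneous level
post = t >= 0
above = conc[post] - baseline

# Trapezoidal AUC of the response above the spontaneous baseline
auc_stim = np.sum((above[1:] + above[:-1]) / 2.0 * np.diff(t[post]))
```

Unlike the peak value, an integral of this kind pools several samples, which is why it is less sensitive to measurement error in any single assay.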


1999 ◽  
Vol 56 (7) ◽  
pp. 1234-1240
Author(s):  
W R Gould ◽  
L A Stefanski ◽  
K H Pollock

All catch-effort estimation methods implicitly assume that catch and effort are known quantities, whereas in many cases they have been estimated and are subject to error. We evaluate the application of a simulation-based estimation procedure for measurement error models (J.R. Cook and L.A. Stefanski. 1994. J. Am. Stat. Assoc. 89: 1314-1328) in catch-effort studies. The technique involves a simulation component and an extrapolation step, hence the name SIMEX estimation. We describe SIMEX estimation in general terms and illustrate its use with applications to real and simulated catch and effort data. Correcting for measurement error with SIMEX estimation resulted in population size and catchability coefficient estimates that were, in some cases, substantially smaller than naive estimates ignoring measurement error. In a simulation of the procedure, we compared SIMEX estimators with "naive" estimators that ignore measurement errors in catch and effort to determine the ability of SIMEX to produce bias-corrected estimates. The SIMEX estimators were less biased than the naive estimators but in some cases were also more variable. Despite the bias reduction, the SIMEX estimator had a larger mean squared error than the naive estimator for one of the two artificial populations studied. However, our results suggest the SIMEX estimator may outperform the naive estimator in terms of bias and precision for larger populations.


Dose-Response ◽  
2005 ◽  
Vol 3 (4) ◽  
pp. dose-response.0 ◽  
Author(s):  
Kenny S. Crump

Although statistical analyses of epidemiological data usually treat the exposure variable as being known without error, estimated exposures in epidemiological studies often involve considerable uncertainty. This paper investigates the theoretical effect of random errors in exposure measurement upon the observed shape of the exposure response. The model utilized assumes that true exposures are log-normally distributed, and multiplicative measurement errors are also log-normally distributed and independent of the true exposures. Under these conditions it is shown that whenever the true exposure response is proportional to exposure to a power r, the observed exposure response is proportional to exposure to a power K, where K < r. This implies that the observed exposure response exaggerates risk, and by arbitrarily large amounts, at sufficiently small exposures. It also follows that a truly linear exposure response will appear to be supra-linear—i.e., a linear function of exposure raised to the K-th power, where K is less than 1.0. These conclusions hold generally under the stated log-normal assumptions whenever there is any amount of measurement error, including, in particular, when the measurement error is unbiased either in the natural or log scales. Equations are provided that express the observed exposure response in terms of the parameters of the underlying log-normal distribution. A limited investigation suggests that these conclusions do not depend upon the log-normal assumptions, but hold more widely. Because of this problem, in addition to other problems in exposure measurement, shapes of exposure responses derived empirically from epidemiological data should be treated very cautiously. In particular, one should be cautious in concluding that the true exposure response is supra-linear on the basis of an observed supra-linear form.
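The attenuation of the power can be checked numerically. Under the stated assumptions, with log-normal true exposure (log-scale sd σx) and independent log-normal multiplicative error (log-scale sd σu), a true response proportional to x^r appears, as a function of the observed exposure, proportional to w^K with K = r·σx²/(σx² + σu²) < r. A sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 200_000
sx, su = 1.0, 0.5                         # log-sd of true exposure and of error
x = np.exp(rng.normal(0.0, sx, n))        # true exposure, log-normal
w = x * np.exp(rng.normal(0.0, su, n))    # multiplicative log-normal error

r = 2.0
y = x ** r                                # true exposure response ~ x^r

# Observed log-log slope is attenuated to K = r * sx^2 / (sx^2 + su^2) < r
K_theory = r * sx ** 2 / (sx ** 2 + su ** 2)
slope = np.polyfit(np.log(w), np.log(y), 1)[0]
```

With r = 1 this gives K < 1: a truly linear exposure response looks supra-linear at the observed exposures, matching the paper's warning about inferring curvature from epidemiological data.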

