classical measurement
Recently Published Documents

TOTAL DOCUMENTS: 41 (FIVE YEARS: 8)
H-INDEX: 9 (FIVE YEARS: 1)

2021 ◽  
pp. 1-23
Author(s):  
Daniel L. Millimet ◽  
Christopher F. Parmeter

Abstract While classical measurement error in the dependent variable in a linear regression framework results only in a loss of precision, nonclassical measurement error can lead to estimates that are biased and inference that lacks power. Here, we consider a particular type of nonclassical measurement error: skewed errors. Unfortunately, skewed measurement error is likely to be a relatively common feature of many outcomes of interest in political science research. This study highlights the bias that can result even from relatively “small” amounts of skewed measurement error, particularly if the measurement error is heteroskedastic. We also assess potential solutions to this problem, focusing on the stochastic frontier model and nonlinear least squares. Simulations and three replications highlight the importance of thinking carefully about skewed measurement error as well as appropriate solutions.
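As a rough illustration of the point about heteroskedastic skewed error (a minimal sketch with made-up parameters, not the authors' simulation design), the following simulates a one-sided, heteroskedastic error in the dependent variable and shows how the ordinary least squares slope absorbs the resulting bias:

```python
# Hedged illustration (not the authors' code): simulate skewed, heteroskedastic
# measurement error in the dependent variable and show the resulting OLS bias.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
beta0, beta1 = 1.0, 2.0

x = rng.uniform(1.0, 3.0, size=n)
y_true = beta0 + beta1 * x + rng.normal(0.0, 1.0, size=n)   # symmetric noise only

# Skewed, heteroskedastic measurement error: one-sided (half-normal) with a
# scale that grows with x, so its conditional mean is nonzero and varies with x.
skewed_err = np.abs(rng.normal(0.0, 0.5 * x))
y_obs = y_true + skewed_err

X = np.column_stack([np.ones(n), x])
b_true = np.linalg.lstsq(X, y_true, rcond=None)[0]
b_obs = np.linalg.lstsq(X, y_obs, rcond=None)[0]

print("slope, clean outcome :", b_true[1])   # close to 2.0
print("slope, skewed error  :", b_obs[1])    # biased away from 2.0
```

Because the one-sided error's conditional mean grows with x, OLS folds it into the slope; a stochastic frontier model, which represents the one-sided component explicitly, is one of the corrections the abstract points to.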


Author(s):  
David B Richardson ◽  
Alexander P Keil ◽  
Stephen R Cole ◽  
Jessie K Edwards

Abstract Suppose that an investigator wants to estimate an association between a continuous exposure variable and an outcome, adjusting for a set of confounders. If the exposure variable suffers classical measurement error, in which the measured exposures are distributed with independent error around the true exposure, then an estimate of the covariate-adjusted exposure-outcome association may be biased. We propose an approach to estimate a marginal exposure-outcome association in the setting of classical exposure measurement error using a disease score-based approach to standardization to the exposed sample. First, we show that the proposed marginal estimate of the exposure-outcome association will suffer less bias due to classical measurement error than the covariate-conditional estimate of association when the covariates are predictors of exposure. Second, we show that if an exposure validation study is available with which to assess exposure measurement error, then the proposed marginal estimate of the exposure-outcome association can be corrected for measurement error more efficiently than the covariate-conditional estimate of association. We illustrate both of these points using simulations and an empirical example using data from the Orinda Longitudinal Study of Myopia (1989-2001).
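The first claim can be sketched with a small simulation (illustrative only; it relies on the textbook attenuation mechanism rather than the authors' disease score-based standardization): with classical error in the exposure, conditioning on a covariate that predicts exposure removes true-exposure variance but not error variance, so the covariate-conditional slope is attenuated more than the marginal one.

```python
# Hedged sketch of the attenuation intuition (not the authors' method).
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
beta = 1.0

z = rng.normal(size=n)                       # covariate that predicts exposure
x = 0.8 * z + rng.normal(size=n)             # true exposure
u = rng.normal(size=n)                       # classical measurement error
w = x + u                                    # mismeasured exposure
y = beta * x + rng.normal(size=n)            # z affects y only through x here

def slope(design, y):
    return np.linalg.lstsq(design, y, rcond=None)[0]

ones = np.ones(n)
b_marginal = slope(np.column_stack([ones, w]), y)[1]
b_conditional = slope(np.column_stack([ones, w, z]), y)[1]

print("marginal slope   :", b_marginal)      # attenuated toward 0, but less so
print("conditional slope:", b_conditional)   # attenuated more, since Var(x|z) < Var(x)
```

In this toy setup both slopes target the same true effect, so the gap between them is pure measurement-error bias; the paper's standardized estimate additionally handles confounding, which this sketch omits.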


2020 ◽  
Vol 27 (03) ◽  
pp. 448-454
Author(s):  
Aamir Furqan ◽  
Rahat Akhtar ◽  
Masood Alam ◽  
Rana Altaf Ahmed

Objectives: This article compares and contrasts item response theory (IRT) measurement with classical measurement theory (CMT) and examines the advantages item response theory offers in the setting of medical education. Summary: Classical measurement theory, being intuitive and straightforward to apply, is used more often than other models in medical education. However, one restriction encountered in its use is that it is sample dependent: the results are confounded with the specific sample the researcher has assessed. In item response theory, by contrast, the score is separated from the sample and from the assessment stimuli. Item response theory is consistent across administrations; it allows examination scores to be placed on a common measurement scale and changes in students' ability to be compared over time. Of the various item response theory models, three are discussed along with their statistical assumptions. Conclusions: Item response theory is a capable tool that resolves a major issue of classical measurement theory, namely the confounding of examinee skill with item characteristics. Item response theory measurement also addresses practical problems in medical education, such as removing rater errors from evaluation.
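For concreteness, the standard dichotomous IRT models can be written as item response functions. The sketch below shows the 1PL/Rasch, 2PL, and 3PL forms with illustrative parameter values; the abstract does not state which three models the article discusses, so these are assumptions, not a summary of the paper.

```python
# Hedged sketch: common IRT item response functions (1PL/Rasch, 2PL, 3PL).
# Parameter values are illustrative, not taken from the article.
import numpy as np

def p_correct(theta, a=1.0, b=0.0, c=0.0):
    """3PL probability of a correct response: a = discrimination, b = difficulty,
    c = guessing. Setting c=0 gives the 2PL; a=1 and c=0 gives the 1PL/Rasch."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 7)                 # examinee ability
print("1PL:", p_correct(theta, a=1.0, b=0.0))
print("2PL:", p_correct(theta, a=1.7, b=0.5))
print("3PL:", p_correct(theta, a=1.7, b=0.5, c=0.2))
```

The response probability depends on both examinee ability and item parameters, which is how IRT separates the examinee's skill from the characteristics of the items, the confounding the conclusion refers to.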


2019 ◽  
Vol 17 (2) ◽  
Author(s):  
Edward Kroc ◽  
Bruno D. Zumbo

Traditional notions of measurement error typically rely on a strong mean-zero assumption on the expectation of the errors conditional on an unobservable “true score” (classical measurement error) or on the data themselves (Berkson measurement error). Weakly calibrated measurements for an unobservable true quantity are defined based on a weaker mean-zero assumption, giving rise to a measurement model of differential error. Applications show that weakly calibrated measurements retain many attractive features of estimation and inference when performing a naive data analysis (i.e., an analysis on the error-prone measurements themselves), along with other interesting properties not present in the classical or Berkson cases. Applied researchers concerned with measurement error should consider weakly calibrated errors, relying on the stronger formulations only when a stronger model's assumptions are both justifiable and likely to yield appreciable inferential gains.
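To situate the weaker assumption, the sketch below contrasts the two stronger error models it relaxes (a textbook illustration, not the authors' weak-calibration construction): classical error attenuates a naive regression slope, while Berkson error leaves it unbiased in a simple linear model.

```python
# Hedged sketch: classical vs Berkson error in a naive linear regression.
import numpy as np

rng = np.random.default_rng(2)
n = 500_000
beta = 1.0

def naive_slope(w, y):
    X = np.column_stack([np.ones_like(w), w])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# Classical error: W = X + U, with U independent of the true value X.
x = rng.normal(0.0, 1.0, size=n)
w_classical = x + rng.normal(0.0, 1.0, size=n)
y = beta * x + rng.normal(0.0, 1.0, size=n)
print("classical error, naive slope:", naive_slope(w_classical, y))  # ~0.5, attenuated

# Berkson error: X = W + U, with U independent of the measured value W.
w_berkson = rng.normal(0.0, 1.0, size=n)
x_b = w_berkson + rng.normal(0.0, 1.0, size=n)
y_b = beta * x_b + rng.normal(0.0, 1.0, size=n)
print("Berkson error, naive slope:  ", naive_slope(w_berkson, y_b))  # ~1.0, unbiased
```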


2018 ◽  
Vol 29 (1) ◽  
pp. 100-128 ◽  
Author(s):  
Günter Trendler

According to classical measurement theory, fundamental measurement necessarily requires the operation of concatenation qua physical addition. Quantities which do not allow this operation are measurable only indirectly by means of derived measurement. Since only extensive quantities sustain the operation of physical addition, measurement in psychology has been considered problematic. In contrast, the theory of conjoint measurement, as developed in representational measurement theory, proposes that the operation of ordering is sufficient for establishing fundamental measurement. The validity of this view is questioned. The misconception about the advantages of conjoint measurement, it is argued, results from the failure to notice that magnitudes of derived quantities cannot be determined directly, i.e., without the help of associated quantitative indicators. This takes away the advantages conjoint measurement has over derived measurement, making it practically useless.

