Calibration of Measurements

2019 · Vol 17 (2)
Author(s): Edward Kroc, Bruno D. Zumbo

Traditional notions of measurement error typically rely on a strong mean-zero assumption on the expectation of the errors conditional on an unobservable "true score" (classical measurement error) or on the data themselves (Berkson measurement error). Weakly calibrated measurements for an unobservable true quantity are defined based on a weaker mean-zero assumption, giving rise to a measurement model of differential error. Applications show that this model retains many attractive features of estimation and inference under a naive data analysis (i.e., an analysis performed on the error-prone measurements themselves), as well as other interesting properties not present in the classical or Berkson cases. Applied researchers concerned with measurement error should consider weakly calibrated errors, relying on the stronger formulations only when a stronger model's assumptions are both justifiable and likely to yield appreciable inferential gains.
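The distinction can be made concrete with a small simulation. The sketch below (all values hypothetical, not from the paper) contrasts a classical error, whose mean is zero conditional on the true score, with a weakly calibrated error that averages to zero only unconditionally:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(10.0, 2.0, n)               # unobservable true scores

# Classical error: E[W - X | X] = 0 (error independent of the true score)
w_classical = x + rng.normal(0.0, 1.0, n)

# Weakly calibrated error: only E[W - X] = 0 overall; the error may depend
# on X (differential error). Here it shrinks high scores and inflates low
# ones, yet still averages to zero across the population.
u = -0.3 * (x - x.mean()) + rng.normal(0.0, 1.0, n)
w_weak = x + u

print(np.mean(w_classical - x))          # ~0, and ~0 within any slice of x
print(np.mean(w_weak - x))               # ~0 overall ...
print(np.mean((w_weak - x)[x > 12]))     # ... but clearly negative given large x
```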

1969 · Vol 21 (4) · pp. 641-654
Author(s): Edward R. Tufte

Students of politics use statistical and quantitative techniques to: summarize a large body of numbers into a small collection of typical values; confirm (and perhaps sanctify) the results of the analysis by using tests of statistical significance that help protect against sampling and measurement error; discover what's going on in their data and expose some new relationships; and inform their audience what's going on in the data.


2018 · Vol 37 (2) · pp. 232-256
Author(s): Bradley C. Smith, William Spaniel

The causes and consequences of nuclear proficiency are central to important questions in international relations. At present, researchers tend to use observable characteristics as proxies. However, aggregation is a problem: existing measures implicitly assume that each indicator is equally informative and that measurement error is not a concern. We overcome these issues by applying a statistical measurement model to directly estimate nuclear proficiency from observed indicators. The resulting estimates form a new dataset on nuclear proficiency which we call ν-CLEAR. We demonstrate that these estimates are consistent with known patterns of nuclear proficiency while also uncovering more nuance than existing measures. Additionally, we demonstrate how scholars can use these estimates to account for measurement error by revisiting existing results with our measure.
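The general approach can be illustrated with a toy latent-variable sketch. The code below fits a simple one-parameter logistic item response model by maximum likelihood to recover a latent "proficiency" from binary indicators; all names and values are hypothetical, and the authors' actual specification may well differ:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_countries, n_items = 50, 8

# Hypothetical binary indicators (e.g., "has an enrichment facility"),
# generated from a one-parameter logistic model for illustration.
theta_true = rng.normal(0, 1, n_countries)      # latent proficiency
difficulty = rng.normal(0, 1, n_items)          # item "difficulty"
p = 1 / (1 + np.exp(-(theta_true[:, None] - difficulty[None, :])))
y = rng.binomial(1, p)

def neg_loglik(theta_i, y_i, b):
    """Negative Bernoulli log-likelihood of one country's indicators."""
    eta = theta_i - b
    return -np.sum(y_i * eta - np.log1p(np.exp(eta)))

# Estimate each latent score, treating item difficulties as known.
# (All-zero or all-one response patterns have no finite MLE; the optimizer
# then simply returns a large-magnitude value, which is fine for a sketch.)
theta_hat = np.array([
    minimize(neg_loglik, x0=0.0, args=(y[i], difficulty)).x[0]
    for i in range(n_countries)
])
print(np.corrcoef(theta_true, theta_hat)[0, 1])   # recovery check
```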


1981 · Vol 18 (1) · pp. 39-50
Author(s): Claes Fornell, David F. Larcker

The statistical tests used in the analysis of structural equation models with unobservable variables and measurement error are examined. A drawback of the commonly applied chi-square test, in addition to the known problems related to sample size and power, is that it may indicate an increasing correspondence between the hypothesized model and the observed data as both the measurement properties and the relationship between constructs decline. Further, and contrary to common assertion, the risk of making a Type II error can be substantial even when the sample size is large. Moreover, the present testing methods are unable to assess a model's explanatory power. To overcome these problems, the authors develop and apply a testing system based on measures of shared variance within the structural model, measurement model, and overall model.
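The shared-variance measures proposed in this line of work are now commonly known as the average variance extracted (AVE) and composite reliability, both computable from standardized loadings. A minimal sketch with hypothetical loadings:

```python
import numpy as np

def average_variance_extracted(loadings):
    """AVE: mean squared standardized loading of a construct's indicators."""
    lam = np.asarray(loadings)
    return np.mean(lam**2)

def composite_reliability(loadings):
    """Composite reliability, taking error variances as 1 - lambda^2
    (standardized indicators assumed)."""
    lam = np.asarray(loadings)
    errors = 1.0 - lam**2
    return lam.sum()**2 / (lam.sum()**2 + errors.sum())

lam = [0.81, 0.76, 0.68, 0.72]   # hypothetical standardized loadings
print(average_variance_extracted(lam))  # > 0.5 suggests adequate convergent validity
print(composite_reliability(lam))
```

A common diagnostic built on these quantities compares each construct's AVE against its squared correlations with the other constructs: shared variance within a construct should exceed shared variance between constructs.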


2018 · Vol 96 (7) · pp. 738-748
Author(s): Peter D. Wentzell, Chelsi C. Wicks, Jez W.B. Braga, Liz F. Soares, Tereza C.M. Pastore, ...

The analysis of multivariate chemical data is commonplace in fields ranging from metabolomics to forensic classification. Many of these studies rely on exploratory visualization methods that represent the multidimensional data in spaces of lower dimensionality, such as hierarchical cluster analysis (HCA) or principal components analysis (PCA). However, such methods rely on assumptions of independent measurement errors with uniform variance and can fail to reveal important information when these assumptions are violated, as they often are for chemical data. This work demonstrates how two alternative methods, maximum likelihood principal components analysis (MLPCA) and projection pursuit analysis (PPA), can reveal chemical information hidden from more traditional techniques. The experimental data used to compare the methods consist of near-infrared (NIR) reflectance spectra from 108 wood samples derived from four species of Brazilian trees. The measurement error characteristics of the spectra are examined, and it is shown that, by incorporating measurement error information into the data analysis (through MLPCA) or using alternative projection criteria (i.e., PPA), samples can be separated by species. These techniques are proposed as powerful tools for multivariate data analysis in chemistry.
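For the special case of independent errors with known, channel-specific variances, the maximum-likelihood idea reduces to weighting each channel before the decomposition. The sketch below uses simulated data and assumed error variances; full MLPCA for general error covariance structures requires an alternating least squares solution, which is not shown here:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 108, 200                       # e.g., 108 wood samples, 200 NIR channels

# Hypothetical low-rank "spectra" plus heteroscedastic measurement noise
true_scores = rng.normal(size=(n, 2))
loadings = rng.normal(size=(2, p))
err_sd = rng.uniform(0.2, 2.0, p)     # per-channel error SDs (assumed known)
X = true_scores @ loadings + rng.normal(size=(n, p)) * err_sd

# Ordinary PCA (SVD on column-centered data) treats all channels equally
Xc = X - X.mean(axis=0)
U, s, _ = np.linalg.svd(Xc, full_matrices=False)
scores_pca = U[:, :2] * s[:2]

# Simplified maximum-likelihood weighting: dividing each channel by its
# error SD before the SVD down-weights the noisy channels, so the leading
# components track the underlying structure rather than the noise.
Uw, sw, _ = np.linalg.svd(Xc / err_sd, full_matrices=False)
scores_weighted = Uw[:, :2] * sw[:2]
```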


2021 · Vol 7 (2) · pp. 285-298
Author(s): Yonathan Natanael, Yusak Novanto

Many researchers err in data analysis by analyzing raw scores from an instrument with an ordinal scale. This error can be overcome through measurement model testing, namely of the tau-equivalent and parallel models. The purpose of this study is to identify the best measurement model for the Satisfaction with Life Scale (SWLS). The research method is a Secondary Data Analysis (SDA) approach, with secondary data combined from two previous studies. Confirmatory factor analysis was the quantitative technique used to test the three measurement models of the SWLS. The unidimensional confirmatory factor analysis indicates that the tau-equivalent model is the best measurement model for the SWLS (χ²(9) = 13.759, p > .05; RMSEA < .05). The result implies that raw scores from an instrument can be used when the instrument's measurement model is tau-equivalent.
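The reported fit statistics can be reproduced from the model chi-square, its degrees of freedom, and the sample size. A minimal sketch (the pooled sample size below is a placeholder, not the study's actual n):

```python
from math import sqrt
from scipy.stats import chi2

def rmsea(chisq, df, n):
    """Point estimate of RMSEA from a model chi-square, its df, and sample size."""
    return sqrt(max((chisq - df) / (df * (n - 1)), 0.0))

chisq, df, n = 13.759, 9, 300   # n = 300 is hypothetical
p_value = chi2.sf(chisq, df)
print(p_value)                  # > .05: the tau-equivalent model is not rejected
print(rmsea(chisq, df, n))      # < .05 indicates close approximate fit
```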


2017
Author(s): Marko Bachl, Michael Scharkow

Linkage analysis is a sophisticated media effects research design that reconstructs individual survey respondents' likely exposure to relevant media messages by complementing the survey data with a content analysis. It is an important improvement over survey-only designs: instead of predicting some outcome of interest from media use while implicitly assuming what kind of media messages the respondents were exposed to, linkage analysis explicitly takes the media messages into account (de Vreese & Neijens, 2016; Scharkow & Bachl, 2017; Schuck, Vliegenthart, & de Vreese, 2016; Shoemaker & Reese, 1990; Slater, 2016; Valkenburg & Peter, 2013). The design in its modern form was pioneered by Miller, Goldenberg, and Erbring (1979) and is today considered a "state-of-the-art analysis of the impact of specific news consumption" (Fazekas & Larsen, 2015, p. 196). Its widespread use, especially in the field of political communication, and its still increasing popularity demonstrate the relevance of the design. The main advantage of a linkage analysis is the use of one or more message exposure variables that combine information about media use and media content. However, both constitutive sources are often measured with error: survey respondents are not very good at reliably reporting their media use, and coders will often make errors when classifying the relevant messages.

In this article, we first give a short overview of the prevalence and consequences of measurement error in both data sources. The arguments are based on a literature review and a simulation study published elsewhere in full detail (Scharkow & Bachl, 2017). We continue with a discussion of possible remedies in measurement and data analysis. Beyond the obvious need to improve the measures themselves, we highlight the importance of serious diagnostics of measurement quality. Such information can then be incorporated into the data analysis using estimation or imputation approaches, which are introduced in the main section of this chapter. We conclude by noting that (1) the improvement of measurements and the diagnosis of measurement error in both parts of a linkage analysis must be taken seriously; (2) many tools for correcting measurement error in single parts of a linkage analysis already exist and should be used; and (3) methodological research is needed to develop an integrated analysis workflow that accounts for measurement error and uncertainty in both data sources.
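The core linkage step, weighting each respondent's reported media use by the coded content of each outlet, can be sketched in a few lines. Everything below is hypothetical (variable names, scales, and the coder-error magnitude) and is meant only to show the shape of the computation, not the chapter's actual workflow:

```python
import numpy as np

rng = np.random.default_rng(3)
n_resp, n_outlets = 500, 6

# Hypothetical inputs: self-reported days/week of use per outlet (survey)
# and the coded share of relevant messages per outlet (content analysis).
use = rng.integers(0, 8, size=(n_resp, n_outlets)).astype(float)
content = rng.uniform(0.0, 1.0, n_outlets)

# Linkage: weight each respondent's use of an outlet by that outlet's
# coded message content, then sum across outlets for individual exposure.
exposure = use @ content

# Both inputs carry error; a crude sensitivity check perturbs the coded
# content by an assumed coding-error SD and re-derives exposure.
coding_sd = 0.1                       # hypothetical coder-error SD
exposure_perturbed = use @ (content + rng.normal(0.0, coding_sd, n_outlets))
print(np.corrcoef(exposure, exposure_perturbed)[0, 1])
```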


2018 · Vol 1 (2) · pp. 167-185
Author(s): Farooq Miiro, Mohd Ibrahim Burhan

Organizational culture plays a pivotal role in the development and change of organizations. To achieve institutional competitiveness and repositioning in the world market, all key players in institutional development need to be on the same page in terms of organizational culture. A multitude of studies have explored organizational culture structure in the past, but no attempt had been made to validate and measure the construct on employee behaviour and thoughts at the Islamic University in Uganda. The purpose of this study is therefore to measure and validate the organizational culture construct as perceived by staff at the Islamic University in Uganda. The study employed four dimensions to examine organizational culture, and 361 randomly selected staff participated. To fulfill this purpose, the SEM-Amos technique of data analysis was used to confirm the hypothesized measurement model. The results indicated that meaningful value, support and promotion of values, discipline values, and free style value are true and valid predictors of organizational culture structure.
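A reliability check of the kind typically reported alongside such a confirmatory measurement model is Cronbach's alpha per dimension. The sketch below uses simulated item responses (the factor structure and item count are assumptions, not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(4)
common = rng.normal(size=(361, 1))                     # shared "culture" factor
items = common + rng.normal(scale=0.8, size=(361, 5))  # five hypothetical items
print(cronbach_alpha(items))                           # ~0.8 for this simulation
```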

