The Role of Sample Cluster Means in Multilevel Models

Methodology ◽  
2011 ◽  
Vol 7 (4) ◽  
pp. 121-133 ◽  
Author(s):  
Leonardo Grilli ◽  
Carla Rampichini

The paper explores some issues related to endogeneity in multilevel models, focusing on the case where the random effects are correlated with a level 1 covariate in a linear random intercept model. We consider two basic specifications, without and with the sample cluster mean. It is generally acknowledged that the omission of the cluster mean may cause omitted-variable bias. However, it is often neglected that the inclusion of the sample cluster mean in place of the population cluster mean entails a measurement error that yields biased estimators for both the slopes and the variance components. In particular, the contextual effect is attenuated, while the level 2 variance is inflated. We derive explicit formulae for the measurement error biases that allow us to implement simple post-estimation corrections based on the reliability of the covariate. In the first part of the paper, the issue is tackled in a standard framework where the population cluster mean is treated as a latent variable. Later we consider a different framework, arising when sampling from clusters of finite size, in which latent variable methods may perform poorly, and we show how to modify the measurement error correction effectively. The theoretical analysis is supplemented with a simulation study and a discussion of the implications for effectiveness evaluation.
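
A worked illustration of the kind of correction the paper motivates, as a minimal sketch under classical measurement-error assumptions with equal cluster sizes n (the symbols are our notation, not necessarily the authors': τ_x² is the between-cluster variance of the covariate, σ_x² its within-cluster variance, λ the reliability of the sample cluster mean, and β_c the contextual effect):

```latex
% Minimal sketch: reliability of the sample cluster mean \bar{x}_j as a
% proxy for the population cluster mean \mu_j, assuming equal cluster
% sizes n, and the implied attenuation correction for the estimated
% contextual effect. Notation is ours, not necessarily the paper's.
\[
  \lambda \;=\; \frac{\tau_x^2}{\tau_x^2 + \sigma_x^2 / n},
  \qquad
  \hat{\beta}_c^{\,\mathrm{corr}} \;=\; \frac{\hat{\beta}_c}{\hat{\lambda}} .
\]
```

The reliability λ shrinks toward zero as the within-cluster variance grows or the cluster size shrinks, which is exactly when the attenuation is worst.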

2019 ◽  
Author(s):  
Curtis David Von Gunten ◽  
Bruce D Bartholow ◽  
Jorge S. Martins

Executive functioning (EF) is defined as a set of top-down processes used in reasoning, goal formation, planning, concentration, and inhibition. It is widely believed that these processes are critical to self-regulation and, therefore, that performance on behavioral task measures of EF should be associated with individual differences in everyday life outcomes. The purpose of the present study was to test this core assumption, focusing on the EF facet of inhibition. A sample of 463 undergraduates completed five laboratory inhibition tasks, along with three self-report measures of self-control and 28 self-report measures of life outcomes. Results showed that although most of the life outcome measures were associated with self-reported self-control, none of the life outcomes were associated with inhibition task performance at the latent-variable level, and few associations were found at the individual-task level. These findings challenge the criterion validity of lab-based inhibition tasks. More generally, when considered alongside the known lack of convergent validity between inhibition tasks and self-report measures of self-control, the findings cast doubt on the tasks' construct validity as measures of self-control processes. Potential methodological and theoretical reasons for the poor performance of laboratory-based inhibition tasks are discussed.
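
For readers unfamiliar with latent-variable-level tests of this kind, the sketch below shows one common way to set one up in Python with the third-party semopy package; the variable names (stroop, flanker, and so on) and data file are hypothetical placeholders, not the authors' analysis.

```python
# Hypothetical sketch: test whether a latent inhibition factor, indicated
# by five task scores, predicts a life outcome. Uses the third-party
# `semopy` package; all variable and file names are placeholders.
import pandas as pd
import semopy

# Measurement model (one inhibition factor) plus a structural path
# from the factor to the outcome.
MODEL_DESC = """
Inhibition =~ stroop + flanker + gonogo + antisaccade + stopsignal
outcome ~ Inhibition
"""

df = pd.read_csv("inhibition_tasks.csv")  # hypothetical data file
model = semopy.Model(MODEL_DESC)
model.fit(df)
print(model.inspect())  # loadings, structural slope, SEs, p-values
```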


Psychometrika ◽  
2021 ◽  
Author(s):  
Li Cai ◽  
Carrie R. Houts

With decades of advances in research and recent developments in the drug and medical device regulatory approval process, patient-reported outcomes (PROs) are becoming increasingly important in clinical trials. While clinical trial analyses typically treat scores from PROs as observed variables, the potential to use latent variable models when analyzing patient responses in clinical trial data presents novel opportunities for both psychometrics and regulatory science. An accessible overview of analyses commonly used to analyze longitudinal trial data and statistical models familiar in both psychometrics and biometrics, such as growth models, multilevel models, and latent variable models, is provided to call attention to connections and common themes among these models, which have found use across many research areas. Additionally, examples using empirical data from a randomized clinical trial provide concrete demonstrations of the implementation of these models. The increasing availability of high-quality, psychometrically rigorous assessment instruments in clinical trials, of which the Patient-Reported Outcomes Measurement Information System (PROMIS®) is a prominent example, provides rare possibilities for psychometrics to help improve the statistical tools used in regulatory science.
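
As a concrete illustration of one model family the authors connect, a linear growth model for longitudinal PRO scores can be fit as a random-slope multilevel model; the sketch below uses statsmodels, with hypothetical column names rather than the paper's trial data.

```python
# Sketch: linear growth model for longitudinal PRO scores, fit as a
# random-intercept, random-slope multilevel model. Column names (score,
# time, arm, subject) are hypothetical placeholders for trial data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("trial_pro_scores.csv")  # hypothetical long-format data

# Fixed effects: time, treatment arm, and their interaction (the
# interaction captures differential change across arms). Random
# effects: subject-specific intercepts and slopes for time.
growth = smf.mixedlm("score ~ time * arm", data=df,
                     groups=df["subject"], re_formula="~time")
result = growth.fit()
print(result.summary())
```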


2015 ◽  
Vol 5 (2) ◽  
pp. 149-156 ◽  
Author(s):  
Priscillia Hunt ◽  
Jeremy N. V. Miles

Purpose – Studies in criminal psychology are inevitably undertaken in a context of uncertainty. One class of methods addressing such uncertainties is Monte Carlo (MC) simulation. The purpose of this paper is to provide an introduction to MC simulation for representing uncertainty, focusing on likely uses in studies of criminology and psychology. In addition to describing the method and providing a step-by-step guide to implementing an MC simulation, this paper provides examples using the Fragile Families and Child Wellbeing Survey data. Results show MC simulation can be a useful technique for testing biased estimators and for evaluating the effect of bias on the power of statistical tests. Design/methodology/approach – After describing MC simulation methods in detail, this paper provides a step-by-step guide to conducting a simulation. Then, a series of examples is provided. First, the authors present a brief example of how to generate data using MC simulation and the implications of alternative probability distribution assumptions. The second example uses actual data to evaluate the impact that omitted variable bias can have on least squares estimators. A third example evaluates the impact heteroskedasticity can have on the power of statistical tests. Findings – This study shows that MC-simulated variable means are very similar to those in the actual data, but the standard deviations are considerably smaller in MC simulation-generated data. Using actual data on criminal convictions and income of fathers, the authors demonstrate the impact of omitted variable bias on the standard errors of the least squares estimator. Lastly, the authors show that p-values are systematically larger, and rejection frequencies correspondingly smaller, in heteroskedastic error models compared to a model with homoskedastic errors. Originality/value – The aim of this paper is to provide a better understanding of what MC simulation methods are and what can be achieved with them. A key value of this paper is its focus on the concepts of MC simulation for researchers in statistics and psychology in particular. Furthermore, the authors provide a step-by-step description of the MC simulation approach and provide examples using real survey data on criminal convictions and economic characteristics of fathers in large US cities.
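
To make the omitted-variable example concrete, the sketch below reproduces its logic in a few lines of NumPy; the parameter values are illustrative and not taken from the Fragile Families data.

```python
# Illustrative Monte Carlo study of omitted-variable bias: y depends on
# correlated predictors x and z, but the fitted model omits z, biasing
# the estimated slope on x. Parameter values are made up for the sketch.
import numpy as np

rng = np.random.default_rng(42)
n, reps = 500, 2000
beta_x, beta_z, rho = 1.0, 0.5, 0.6  # true slopes and corr(x, z)

estimates = np.empty(reps)
for r in range(reps):
    x = rng.standard_normal(n)
    z = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)
    y = beta_x * x + beta_z * z + rng.standard_normal(n)
    # OLS of y on x alone (z omitted): slope = cov(x, y) / var(x)
    estimates[r] = np.cov(x, y)[0, 1] / np.var(x, ddof=1)

print(f"true beta_x = {beta_x}, mean estimate = {estimates.mean():.3f}")
# The average estimate converges to beta_x + rho * beta_z = 1.3, not 1.0.
```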


2018 ◽  
Vol 37 (2) ◽  
pp. 232-256 ◽  
Author(s):  
Bradley C. Smith ◽  
William Spaniel

The causes and consequences of nuclear proficiency are central to important questions in international relations. At present, researchers tend to use observable characteristics as proxies. However, aggregation is a problem: existing measures implicitly assume that each indicator is equally informative and that measurement error is not a concern. We overcome these issues by applying a statistical measurement model to directly estimate nuclear proficiency from observed indicators. The resulting estimates form a new dataset on nuclear proficiency, which we call ν-CLEAR. We demonstrate that these estimates are consistent with known patterns of nuclear proficiency while also uncovering more nuance than existing measures. Additionally, we demonstrate how scholars can use these estimates to account for measurement error by revisiting existing results with our measure.
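
The sketch below shows a generic two-parameter IRT-style measurement model of the broad type described, written with the third-party PyMC package; it is an illustration of the approach, not the authors' exact ν-CLEAR specification, and the data are simulated placeholders.

```python
# Generic two-parameter IRT-style measurement model: estimate a latent
# proficiency from binary indicators. An illustration of the broad
# approach, not the authors' exact specification. Uses PyMC; the data
# below are random placeholders for a country-year indicator matrix.
import numpy as np
import pymc as pm

Y = np.random.default_rng(0).integers(0, 2, size=(150, 10))  # placeholder

with pm.Model():
    theta = pm.Normal("theta", 0.0, 1.0, shape=Y.shape[0])     # proficiency
    alpha = pm.LogNormal("alpha", 0.0, 0.5, shape=Y.shape[1])  # discrimination
    beta = pm.Normal("beta", 0.0, 2.0, shape=Y.shape[1])       # difficulty
    p = pm.math.invlogit(alpha * (theta[:, None] - beta))
    pm.Bernoulli("obs", p=p, observed=Y)
    trace = pm.sample(1000, tune=1000)  # posterior over proficiency scores
```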


2014 ◽  
Vol 21 (1) ◽  
pp. 48-68 ◽  
Author(s):  
Andrea J. Hester

Purpose – This paper aims to examine organizational information systems based on Web 2.0 technology as socio-technical systems that involve interacting relationships among actors, structure, tasks and technology. Alignment within these relationships may facilitate increased technology use; however, gaps in alignment may impede technology use and result in poor performance or system failure. The technology examined is an organizational wiki used for collaborative knowledge management. Design/methodology/approach – Results of a survey administered to employees of an organization providing cloud computing services are presented. The research model depicts the socio-technical component relationships and their influence on use of the wiki. Hierarchical latent variable modelling is used to operationalize the six main constructs. Hypotheses propose that as alignment of a relationship increases, wiki use increases. The partial least squares (PLS) method is used to examine the hypotheses. Findings – Based on the results, increased perceptions of alignment between technology and structure increase wiki use. Further analysis indicates that low usage may be linked to gaps in alignment. Many respondents with lower usage scores also indicated “low alignment” in the actor-task, actor-technology, and task-structure relationships. Research limitations/implications – The sample size is rather small; however, the results give an indication of the appropriateness of the dimensions chosen to represent the alignment relationships. Socio-technical systems theory (STS) is often utilized in qualitative studies. This paper introduces a measurement instrument designed to evaluate STS through quantitative analysis. Practical implications – User acceptance and change management continue to be important topics for both researchers and practitioners. The model proposed here provides measures that may serve as predictive indicators of increased information system use. Alternatively, practitioners may use the diagnostic tool presented here to assess underlying factors that impede effective technology utilization. Originality/value – The paper presents a diagnostic tool that may help management better uncover misaligned relationships leading to underutilization of technology. Practical advice and guidelines are provided, allowing for a plan to rectify the situation and improve technology usage and performance outcomes.
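
Dedicated PLS-SEM tooling is thin in Python, so the sketch below is explicitly a simplified stand-in for the paper's PLS analysis: unit-weighted alignment composites (where PLS would estimate the indicator weights), with wiki use regressed on the composites via OLS, and hypothetical survey item names throughout.

```python
# Simplified stand-in for a PLS path model (explicitly not the paper's
# analysis): unit-weighted composites for three alignment relationships,
# then OLS of wiki use on the composites. PLS would instead estimate the
# indicator weights. All column names are hypothetical survey items.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("wiki_survey.csv")  # hypothetical survey data

df["actor_task"] = df[["at1", "at2", "at3"]].mean(axis=1)
df["actor_tech"] = df[["ax1", "ax2", "ax3"]].mean(axis=1)
df["task_structure"] = df[["ts1", "ts2", "ts3"]].mean(axis=1)

fit = smf.ols("wiki_use ~ actor_task + actor_tech + task_structure",
              data=df).fit()
print(fit.summary())
```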


2020 ◽  
Author(s):  
Gordana Rajlic

In the realities of measurement in the social and behavioral sciences, responses to the items in a measure can reflect not only the characteristic(s) of the respondents targeted by the measurement but also other influences (other characteristics of the respondents and of the items). The current study further investigated different levels of deviation from strict unidimensionality in measures and the accuracy of the parameter estimates of widely used unidimensional latent variable measurement models. Of interest were unidimensionality violations in measures intended/designed as unidimensional, that is, when the items primarily reflect a dominant latent dimension, as intended, but also reflect some additional influences to a smaller degree. In the simulated conditions of interest, varying degrees of systematic error (bias) in the unidimensional model's item and person parameter estimates were demonstrated (e.g., overestimation of factor loadings and underestimation of measurement error). The strength of the relevant relations and the size of the bias were examined. If the size of these systematic distortions goes uncommunicated, various negative consequences can ensue for substantive research and applied measurement (in relation to the reliability, validity, and fairness of research/measurement outcomes) when the model estimates are used. The utility of the approach employed in the study is discussed.
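
The simulation logic described can be reproduced in miniature: generate items that mainly reflect one dominant factor plus a weaker secondary influence, then fit a strictly one-factor model. The sketch below uses NumPy and the third-party factor_analyzer package, with illustrative loading values rather than the study's conditions.

```python
# Miniature version of the simulation logic described above: items load
# mainly on a dominant factor (0.7) but also on a nuisance factor (0.3);
# a strictly one-factor model absorbs part of the nuisance variance, so
# loadings are overstated and measurement error understated. Values are
# illustrative. Requires the third-party `factor_analyzer` package.
import numpy as np
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(1)
n, k = 2000, 8
f_dom = rng.standard_normal(n)   # dominant factor
f_nui = rng.standard_normal(n)   # secondary (nuisance) influence
l_dom, l_nui = 0.7, 0.3

X = (l_dom * f_dom[:, None] + l_nui * f_nui[:, None]
     + np.sqrt(1 - l_dom**2 - l_nui**2) * rng.standard_normal((n, k)))

fa = FactorAnalyzer(n_factors=1, rotation=None)
fa.fit(X)
print(fa.loadings_.ravel())  # tends to exceed the true dominant loading 0.7
```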


2020 ◽  
Vol 57 (6) ◽  
pp. 692-700 ◽  
Author(s):  
Kyle L Marquardt

Expert-coded datasets provide scholars with otherwise unavailable data on important concepts. However, expert coders vary in their reliability and scale perception, potentially resulting in substantial measurement error. These concerns are acute in expert coding of key concepts for peace research. Here I examine (1) the implications of these concerns for applied statistical analyses, and (2) the degree to which different modeling strategies ameliorate them. Specifically, I simulate expert-coded country-year data with different forms of error and then regress civil conflict onset on these data, using five different modeling strategies. Three of these strategies involve regressing conflict onset on point estimate aggregations of the simulated data: the mean and median over expert codings, and the posterior median from a latent variable model. The remaining two strategies incorporate measurement error from the latent variable model into the regression process by using multiple imputation and a structural equation model. Analyses indicate that expert-coded data are relatively robust: across simulations, almost all modeling strategies yield regression results roughly in line with the assumed true relationship between the expert-coded concept and outcome. However, the introduction of measurement error to expert-coded data generally results in attenuation of the estimated relationship between the concept and conflict onset. The level of attenuation varies across modeling strategies: a structural equation model is the most consistently robust estimation technique, while the median over expert codings and multiple imputation are the least robust.
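
A miniature version of the simulation's core step is sketched below: experts code a latent concept with varying reliability, the codings are aggregated by mean or median, and conflict onset is regressed on the aggregate. All parameter values are illustrative, not the paper's.

```python
# Miniature sketch of the simulation's core step: noisy expert codings of
# a latent concept are aggregated and used in a logistic regression of
# conflict onset. Parameter values are illustrative, not the paper's.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n, n_experts, true_beta = 1000, 5, 1.0

z = rng.standard_normal(n)  # true latent concept
onset = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-1.0 + true_beta * z))))

# Experts differ in reliability: each adds noise with its own variance.
sigmas = rng.uniform(0.5, 2.0, size=n_experts)
codings = z[:, None] + rng.standard_normal((n, n_experts)) * sigmas

for name, agg in [("mean", codings.mean(axis=1)),
                  ("median", np.median(codings, axis=1))]:
    fit = sm.Logit(onset, sm.add_constant(agg)).fit(disp=0)
    print(name, "slope:", round(fit.params[1], 3))  # attenuated below 1.0
```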


2017 ◽  
Vol 78 (5) ◽  
pp. 905-917 ◽  
Author(s):  
Tenko Raykov ◽  
Natalja Menold ◽  
George A. Marcoulides

Validity coefficients for multicomponent measuring instruments are known to be affected by measurement error that attenuates them, affects associated standard errors, and influences results of statistical tests with respect to population parameter values. To account for measurement error, a latent variable modeling approach is discussed that allows point and interval estimation of the relationship of an underlying latent factor to a criterion variable in a setting that is more general than the commonly considered homogeneous psychometric test case. The method is particularly helpful in validity studies for scales with a second-order factorial structure, by allowing evaluation of the relationship between the second-order factor and a criterion variable. The procedure is similarly useful in studies of discriminant, convergent, concurrent, and predictive validity of measuring instruments with complex latent structure, and is readily applicable when measuring interrelated traits that share a common variance source. The outlined approach is illustrated using data from an authoritarianism study.
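
A minimal sketch of the kind of model described, a second-order factor predicting a criterion variable, is given below using the third-party semopy package; the variable names and data file are hypothetical, and this is not the authors' software or data.

```python
# Hypothetical sketch: second-order factor model with a structural path
# from the higher-order factor G to a criterion, the validity relation
# discussed above. Uses the third-party `semopy` package; variable and
# file names are placeholders, not the authoritarianism study's data.
import pandas as pd
import semopy

MODEL_DESC = """
F1 =~ y1 + y2 + y3
F2 =~ y4 + y5 + y6
F3 =~ y7 + y8 + y9
G =~ F1 + F2 + F3
criterion ~ G
"""

df = pd.read_csv("scale_data.csv")  # hypothetical data file
model = semopy.Model(MODEL_DESC)
model.fit(df)
print(model.inspect())  # point estimates and standard errors
```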

