systematic measurement error
Recently Published Documents


TOTAL DOCUMENTS: 22 (five years: 7)
H-INDEX: 5 (five years: 1)

Author(s): Claudia Cappa, Nicole Petrowski, Elga Filipa De Castro, Emily Geisen, Patricia LeBaron, ...

Challenges in measuring early childhood development (ECD) at scale have been documented, yet little is known about the specific difficulties related to questionnaire design and question interpretation. The purpose of this paper is to discuss the challenges of measuring ECD at scale in the context of household surveys and to show how they can be overcome. The paper draws on examples from the cognitive interviewing exercises conducted as part of the methodological work to develop a measure of ECD outcomes, the ECDI2030. It describes the methodological work carried out to inform the selection and refinement of question items and survey implementation tools as a fundamental step to reduce and mitigate systematic measurement error and improve data quality. The project consisted of five rounds of testing, comprising 191 one-on-one, in-depth cognitive interviews across six countries (Bulgaria, India, Jamaica, Mexico, Uganda, and the USA). Qualitative data analysis methods were used to identify matches and mismatches between the intended meaning of items and respondents' interpretations, and to detect false-positive or false-negative answers among subgroups of respondents. Key themes emerged that could potentially lead to systematic measurement error in population-based surveys on ECD: (1) the child's willingness versus ability to perform a task; (2) performing a task versus performing it correctly; (3) identifying letters or numbers versus recognizing them; (4) consistently versus correctly performing a task; (5) applicability versus observability of the skills being asked about; and (6) language production versus language comprehension. Through an iterative process of testing and revision, improvements were made to item wording, response options, and interviewer training instructions. Given the difficulties inherent in population-level data collection for global monitoring, the study's findings confirm the importance of cognitive testing as a crucial step in careful, culturally relevant, and sensitive questionnaire design and as a means to reduce response bias in cross-cultural contexts.


2021, Vol 8 (3), pp. 205316802110440
Author(s): Steven C. Rosenzweig

Research in political science and other social sciences often relies on survey data to study a range of questions about politics in the developing world. This study identifies systematic measurement error in some of the most frequently used datasets with respect to one commonly employed variable: respondent’s age. It shows evidence of substantial measurement error that is correlated with observable characteristics, and discusses and illustrates the implications for empirical analysis with an example from a recently published study. In doing so, it demonstrates tools for identifying and diagnosing systematic measurement error in survey data, as well as for investigating the robustness of one’s findings when the problem arises.
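The paper's own diagnostics are not reproduced here, but a standard first screen for systematic error in reported ages is Whipple's index of age heaping, which measures over-reporting of ages ending in 0 or 5. A minimal Python sketch follows; the simulated survey, the 30% rounding rate, and the age range are illustrative assumptions, not figures from the study:

import numpy as np

def whipple_index(ages):
    # Whipple's index of age heaping over reported ages 23-62:
    # 100 = no preference for ages ending in 0 or 5; 500 = all ages heaped.
    ages = np.asarray(ages)
    window = ages[(ages >= 23) & (ages <= 62)]
    heaped = np.isin(window % 10, [0, 5]).sum()
    return 100 * heaped / (len(window) / 5)

rng = np.random.default_rng(0)
true_ages = rng.integers(18, 80, size=10_000)
# Assumption: 30% of respondents round their age to the nearest multiple of 5.
rounds = rng.random(10_000) < 0.3
reported = np.where(rounds, (true_ages / 5).round().astype(int) * 5, true_ages)
print(whipple_index(true_ages))   # close to 100: no heaping
print(whipple_index(reported))    # well above 100: heaping detected

Because heaping rates can then be tabulated by region, education, or interview mode, the same index also reveals whether the error is correlated with observable characteristics, which is the condition under which it becomes systematic rather than random noise.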


2019, Vol 59 (2), pp. 171-184
Author(s): Ranoua Bouchouicha, Lachlan Deer, Ashraf Galal Eid, Peter McGee, Daniel Schoch, ...

Abstract: Gender effects in risk taking have attracted much attention from economists and remain debated. Loss aversion, the stylized finding that a given loss carries substantially greater weight than a monetarily equivalent gain, is a fundamental driver of risk aversion. We deploy four definitions of loss aversion commonly used in the literature to investigate gender effects. Even though the definitions differ only in subtle ways, we find women to be more loss averse than men under one definition, no gender difference under another, and women to be less loss averse than men under the remaining two. Conceptually, these contradictory effects can be explained by systematic measurement error arising from model mis-specification relative to the true underlying decision process.
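The abstract does not spell out the four definitions, but the best-known parametric one comes from the prospect-theory value function, in which a coefficient lambda scales losses relative to equivalent gains (a standard textbook formulation, not necessarily the exact specification the authors estimate):

v(x) =
\begin{cases}
x^{\alpha} & \text{if } x \ge 0, \\
-\lambda\,(-x)^{\beta} & \text{if } x < 0,
\end{cases}
\qquad \lambda > 1 \ \text{indicates loss aversion.}

Definitions in the literature diverge over details such as whether alpha and beta are constrained to be equal, the stake size at which -v(-x)/v(x) is evaluated, or whether lambda is inferred from rejections of small mixed gambles; such subtle specification choices are exactly what can flip the sign of an estimated gender gap.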


Author(s): Inken von Borzyskowski, Michael Wahman

Abstract: What are the causes and consequences of systematic measurement error in violence measures drawn from media-based conflict event data? More specifically, how valid are such event data for geocoding and capturing election violence? This study examines sub-national variation in election violence and uses original data from domestic election monitor surveys as a comparison to widely used sources of event data. The authors show that conventional data under-report events throughout the election cycle, particularly in sparsely populated areas and outside anticipated violence hotspots. Moreover, systematic measurement error in media-based event data can generate significant relationships where none exist and can distort effect magnitudes. The article suggests areas for future research and indicates ways in which existing work on election violence may have been affected by systematic measurement error.
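The mechanism is easy to reproduce in simulation. The Python sketch below is not the authors' data or design; every parameter is invented for illustration. It generates violence counts that are unrelated to population density, records them with density-dependent media coverage, and shows that only the recorded series exhibits a density "effect":

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2_000  # hypothetical constituencies

# True process: violent events are unrelated to population density.
density = rng.lognormal(mean=0.0, sigma=1.0, size=n)
true_events = rng.poisson(lam=3.0, size=n)

# Media-based recording: events in denser areas are more likely to be reported.
p_report = np.where(density > np.median(density), 0.8, 0.2)
recorded = rng.binomial(true_events, p_report)

for y, label in [(true_events, "true"), (recorded, "recorded")]:
    fit = sm.OLS(y, sm.add_constant(np.log(density))).fit()
    print(label, round(fit.params[1], 3), round(fit.pvalues[1], 4))
# Only the recorded series shows a significant relationship with density.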


2018, Vol 49 (5), pp. 713-734
Author(s): Diana Boer, Katja Hanke, Jia He

One major threat to revealing cultural influences on psychological states or processes is the presence of bias (i.e., systematic measurement error). When quantitative measures do not target the same construct or differ in metric across cultures, the validity of inferences about cultural variability (and universality) is in doubt. The objectives of this article are to review what can be done about this problem and what is currently being done about it. To date, a multitude of useful techniques and methods to reduce or assess bias in cross-cultural research have been developed. We explore the limits of invariance/equivalence testing and suggest more flexible means of dealing with bias. First, we review currently available established and novel methods that reveal bias in cross-cultural research. Second, we analyze current practices through a systematic content analysis of more than 500 culture-comparative quantitative studies (published from 2008 to 2015 in three outlets in cross-cultural, social, and developmental psychology), gauging current approaches to the assessment of measurement equivalence/invariance. Surprisingly, the analysis revealed a rather low penetration of invariance testing in cross-cultural research: although a multitude of classical and novel approaches is available, they are employed infrequently rather than habitually. We discuss reasons for this hesitation and derive suggestions for creatively assessing and handling biases across different research paradigms and designs.
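One established member of this methodological toolbox is differential item functioning (DIF) screening via logistic regression (Swaminathan and Rogers, 1990), which tests whether group membership predicts an item response after conditioning on the underlying trait. A minimal Python sketch on simulated data follows; the group labels, effect sizes, and item parameters are all invented for illustration:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

rng = np.random.default_rng(2)
n = 1_000
df = pd.DataFrame({
    "group": rng.integers(0, 2, size=n),    # two cultural groups
    "total": rng.normal(0.0, 1.0, size=n),  # matching score (trait proxy)
})
# Build an item with uniform DIF: group 1 endorses it more at equal trait levels.
logit = -0.5 + 1.2 * df["total"] + 0.8 * df["group"]
df["item"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

base = smf.logit("item ~ total", data=df).fit(disp=0)
full = smf.logit("item ~ total + group + total:group", data=df).fit(disp=0)
lr = 2 * (full.llf - base.llf)  # likelihood-ratio test with 2 df
print(lr, chi2.sf(lr, df=2))   # a small p-value flags the item as biased

Item-level screens like this complement the multi-group confirmatory factor analysis workflow (configural, metric, then scalar invariance) that the content analysis found to be so rarely applied.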

