Combining List Experiment and Direct Question Estimates of Sensitive Behavior Prevalence

2015 ◽  
Vol 3 (1) ◽  
pp. 43-66 ◽  
Author(s):  
P. M. Aronow ◽  
A. Coppock ◽  
F. W. Crawford ◽  
D. P. Green

2019 ◽  
Vol 63 ◽  
pp. 41-48 ◽  
Author(s):  
Dominique Roe-Sepowitz ◽  
Stephanie Bontrager ◽  
Justin T. Pickett ◽  
Anna E. Kosloski

2017 ◽  
Vol 25 (2) ◽  
pp. 241-259 ◽  
Author(s):  
Gregory Eady

What explains why some survey respondents answer truthfully to a sensitive survey question, while others do not? This question is central to our understanding of a wide variety of attitudes, beliefs, and behaviors, but has remained difficult to investigate empirically due to the inherent problem of distinguishing those who are telling the truth from those who are misreporting. This article proposes a solution to this problem. It develops a method to model, within a multivariate regression context, whether survey respondents provide one response to a sensitive item in a list experiment, but answer otherwise when asked to reveal that belief openly in response to a direct question. As an empirical application, the method is applied to an original large-scale list experiment to investigate whether those on the ideological left are more likely to misreport their responses to questions about prejudice than those on the right. The method is implemented for researchers as open-source software.


PLoS ONE ◽  
2021 ◽  
Vol 16 (3) ◽  
pp. e0247201
Author(s):  
Heidi Moseson ◽  
Ruvani Jayaweera ◽  
Sarah Huber-Krum ◽  
Sarah Garver ◽  
Alison Norris ◽  
...  

Background: Accurately measuring abortion incidence poses many challenges. The list experiment is a method designed to increase the reporting of sensitive or stigmatized behaviors in surveys, but has only recently been applied to the measurement of abortion. To further test the utility of the list experiment for measuring abortion incidence, we conducted list experiments in two countries, over two time periods. Materials and methods: The list experiment is an indirect method of measuring sensitive experiences that protects respondent confidentiality by hiding individual responses to a binary sensitive item (i.e., abortion) by combining this response with answers to other non-sensitive binary control items. Respondents report the number of list items that apply to them, not which ones. We conducted a list experiment to measure cumulative lifetime incidence of abortion in Malawi, and separately to measure cumulative five-year incidence of abortion in Senegal, among cisgender women of reproductive age. Results: Among 810 eligible respondents in Malawi, list experiment results estimated a cumulative lifetime incidence of abortion of 0.9% (95%CI: 0.0, 7.6). Among 1016 eligible respondents in Senegal, list experiment estimates indicated a cumulative five-year incidence of abortion of 2.8% (95%CI: 0.0, 10.4) which, while lower than anticipated, is seven times the proportion estimated from a direct question on abortion (0.4%). Conclusions: Two test applications of the list experiment to measure abortion experiences in Malawi and Senegal likely underestimated abortion incidence. Future efforts should include context-specific formative qualitative research for the development and selection of list items, enumerator training, and method delivery to assess if and how these changes can improve method performance.
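The estimator underlying the list experiments described above is a simple difference in means between the treatment group (whose list includes the sensitive item) and the control group. A minimal sketch in Python, with fabricated counts purely for illustration (not data from the study):

```python
import statistics

def list_experiment_estimate(control_counts, treatment_counts):
    """Difference-in-means estimator for a list experiment.

    The treatment list contains one extra (sensitive) item, so the
    difference between the mean item counts of the two groups
    estimates the prevalence of the sensitive behavior.
    """
    return statistics.mean(treatment_counts) - statistics.mean(control_counts)

# Fabricated example: counts reported by six control and six treatment respondents
control = [1, 2, 2, 3, 1, 2]
treatment = [2, 2, 3, 3, 1, 3]
print(list_experiment_estimate(control, treatment))  # ≈ 0.5, i.e., 50% prevalence
```

In practice the estimate is usually computed with a regression and accompanied by a confidence interval, as in the study's reported 95% CIs; the sketch shows only the point estimate.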


2019 ◽  
Vol 27 (4) ◽  
pp. 540-555
Author(s):  
Yimeng Li

The analysis of list experiments depends on two assumptions, known as “no design effect” and “no liars”. The no liars assumption is strong and may fail in many list experiments. I relax the no liars assumption in this paper, and develop a method to provide bounds for the prevalence of sensitive behaviors or attitudes under a weaker behavioral assumption about respondents’ truthfulness toward the sensitive item. I apply the method to a list experiment on the anti-immigration attitudes of California residents and on a broad set of existing list experiment datasets. The prevalence of different items and the correlation structure among items on the list jointly determine the width of the bound estimates. In particular, the bounds tend to be narrower when the list consists of items of the same category, such as multiple groups or organizations, different corporate activities, and various considerations for politician decision-making. My paper illustrates when the full power of the no liars assumption is most needed to pin down the prevalence of the sensitive behavior or attitude, and facilitates estimation of the prevalence robust to violations of the no liars assumption for many list experiment applications.


2021 ◽  
pp. 1-22
Author(s):  
Patrick M. Kuhn ◽  
Nick Vivyan

Abstract: To reduce strategic misreporting on sensitive topics, survey researchers increasingly use list experiments rather than direct questions. However, the complexity of list experiments may increase nonstrategic misreporting. We provide the first empirical assessment of this trade-off between strategic and nonstrategic misreporting. We field list experiments on election turnout in two different countries, collecting measures of respondents’ true turnout. We detail and apply a partition validation method which uses true scores to distinguish true and false positives and negatives for list experiments, thus allowing detection of nonstrategic reporting errors. For both list experiments, partition validation reveals nonstrategic misreporting that is: undetected by standard diagnostics or validation; greater than assumed in extant simulation studies; and severe enough that direct turnout questions subject to strategic misreporting exhibit lower overall reporting error. We discuss how our results can inform the choice between list experiment and direct question for other topics and survey contexts.


2017 ◽  
Vol 8 (1) ◽  
Author(s):  
Alexander Coppock

Abstract: Explanations for the failure to predict Donald Trump’s win in the 2016 Presidential election sometimes include the “Shy Trump Supporter” hypothesis, according to which some Trump supporters succumb to social desirability bias and hide their vote preference from pollsters. I evaluate this hypothesis by comparing direct question and list experimental estimates of Trump support in a nationally representative survey of 5290 American adults fielded from September 2 to September 13, 2016. Of these, 32.5% report supporting Trump’s candidacy. A list experiment conducted on the same respondents yields an estimate of 29.6%, suggesting that Trump’s poll numbers were not artificially deflated by social desirability bias, as the list experiment estimate is actually lower than the direct question estimate. I further investigate differences across measurement modes for relevant demographic and political subgroups and find no evidence in support of the “Shy Trump Supporter” hypothesis.


2019 ◽  
Vol 83 (S1) ◽  
pp. 236-263 ◽  
Author(s):  
Eric Kramon ◽  
Keith Weghorst

Abstract: List experiments (LEs) are an increasingly popular survey research tool for measuring sensitive attitudes and behaviors. However, there is evidence that list experiments sometimes produce unreasonable estimates. Why do list experiments “fail,” and how can the performance of the list experiment be improved? Using evidence from Kenya, we hypothesize that the length and complexity of the LE format make it costlier for respondents to complete and thus prone to comprehension and reporting errors. First, we show that list experiments encounter difficulties with simple, nonsensitive lists about food consumption and daily activities: over 40 percent of respondents provide inconsistent responses between list experiment and direct question formats. These errors are concentrated among less numerate and less educated respondents, offering evidence that the errors are driven by the complexity and difficulty of list experiments. Second, we examine list experiments measuring attitudes about political violence. The standard list experiment reveals lower rates of support for political violence compared to simply asking directly about this sensitive attitude, which we interpret as list experiment breakdown. We evaluate two modifications to the list experiment designed to reduce its complexity: private tabulation and cartoon visual aids. Both modifications greatly enhance list experiment performance, especially among respondent subgroups where the standard procedure is most problematic. The paper makes two key contributions: (1) showing that techniques such as the list experiment, which have promise for reducing response bias, can introduce different forms of error associated with question complexity and difficulty; and (2) demonstrating the effectiveness of easy-to-implement solutions to the problem.


2020 ◽  
Vol 16 ◽  
pp. 174550652095335
Author(s):  
Sarah Huber-Krum ◽  
Duygu Karadon ◽  
Sebahat Kurutas ◽  
Julia Rohr ◽  
Simay Sevval Baykal ◽  
...  

Objectives: Abortions are difficult to measure, yet accurate estimates are critical for developing health programs. We implemented and tested the validity of a list experiment of lifetime abortion prevalence in Istanbul, Turkey. We complemented our findings with community perspectives gathered through in-depth interviews with key informants. Methods: We conducted a household survey between March and June 2018. In a random sample of 4040 married women aged 16–44 years, we implemented a double list experiment. We averaged the difference-in-means estimates across the two lists to obtain an estimated lifetime abortion prevalence. We conducted in-depth interviews with 16 key informants to provide insights into possible explanations for the quantitative results. Results: The abortion prevalence estimate from the list experiment was close to that of the direct question (3.25% vs 2.97%). Key informant narratives suggest that differing definitions of abortion, inaccessibility, provider bias, lack of knowledge of abortion laws and safety, and religious norms could contribute to under-reporting. Results from the qualitative study suggest that abortion is largely inaccessible and highly stigmatized. Conclusion: Measuring experiences of abortion is critical to understanding women’s needs and informing harm-reduction strategies; however, in highly stigmatized settings, researchers may face unique challenges in obtaining accurate reports.
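In a double list experiment, two control lists are used; each respondent sees the sensitive item added to one list and serves as a control for the other, and the two single-list estimates are averaged. A hypothetical sketch of that averaging step (function names and counts are illustrative, not data from the study):

```python
import statistics

def difference_in_means(treatment_counts, control_counts):
    # Single-list estimate: mean count with the sensitive item minus without
    return statistics.mean(treatment_counts) - statistics.mean(control_counts)

def double_list_estimate(a_treatment, a_control, b_treatment, b_control):
    """Average the difference-in-means estimates from lists A and B.

    Each respondent contributes a treatment response on one list and a
    control response on the other, so both lists yield an estimate of
    the same prevalence; averaging them reduces sampling variance.
    """
    est_a = difference_in_means(a_treatment, a_control)
    est_b = difference_in_means(b_treatment, b_control)
    return (est_a + est_b) / 2
```

For example, `double_list_estimate([2, 3, 1], [1, 2, 1], [3, 2, 2], [2, 2, 2])` averages a list-A estimate of 2/3 and a list-B estimate of 1/3 into an overall estimate of about 0.5.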


Author(s):  
S. Rinken ◽  
S. Pasadas-del-Amo ◽  
M. Rueda ◽  
B. Cobo

Abstract: Extant scholarship on attitudes toward immigration and immigrants relies mostly on direct survey items. Thus, little is known about the scope of social desirability bias, and even less about its covariates. In this paper, we use probability-based mixed-modes panel data collected in the Southern Spanish region of Andalusia to estimate anti-immigrant sentiment with both the item-count technique, also known as list experiment, and a direct question. Based on these measures, we gauge the size of social desirability bias, compute predictor models for both estimators of anti-immigrant sentiment, and pinpoint covariates of bias. For most respondent profiles, the item-count technique produces higher estimates of anti-immigrant sentiment than the direct question, suggesting that self-presentational concerns are far more ubiquitous than previously assumed. However, we also find evidence that among people keen to position themselves as all-out xenophiles, social desirability pressures persist in the list experiment: the full scope of anti-immigrant sentiment remains elusive even in non-obtrusive measurement.


2021 ◽  
pp. 004912412199552
Author(s):  
Rainer Schnell ◽  
Kathrin Thomas

This article provides a meta-analysis of studies using the crosswise model (CM) in estimating the prevalence of sensitive characteristics in different samples and populations. On a data set of 141 items published in 33 articles or books, we compare the difference (Δ) between estimates based on the CM and a direct question (DQ). The overall effect size of Δ is 4.88; 95% CI [4.56, 5.21]. The results of a meta-regression indicate that Δ is smaller when general populations and nonprobability samples are considered. The population effect suggests an education effect: differences between the CM and DQ estimates are more likely to occur when highly educated populations, such as students, are studied. Our findings raise concerns about the extent to which the CM is able to improve estimates of sensitive behavior in general population samples.
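For reference, the crosswise model estimates prevalence from the share of respondents who report that their answers to the sensitive item and an unrelated nonsensitive item (with known prevalence p) match. Since P(same) = πp + (1 − π)(1 − p), solving for π gives π = (λ̂ + p − 1)/(2p − 1). A minimal sketch with illustrative numbers only (not drawn from the meta-analysis):

```python
def crosswise_estimate(n_same, n_total, p_nonsensitive):
    """Crosswise model point estimator.

    Respondents report only whether their answers to the sensitive and
    nonsensitive items are the same. With lam = observed share of "same"
    answers and p = known prevalence of the nonsensitive item (p != 0.5):
        lam = pi * p + (1 - pi) * (1 - p)  =>  pi = (lam + p - 1) / (2p - 1)
    """
    lam = n_same / n_total
    return (lam + p_nonsensitive - 1) / (2 * p_nonsensitive - 1)

# Illustrative: 700 of 1000 report "same"; nonsensitive prevalence p = 0.25
print(crosswise_estimate(700, 1000, 0.25))  # ≈ 0.10
```

The design requires p ≠ 0.5, since at p = 0.5 the "same" share carries no information about π; p is typically taken from known population statistics such as birthday distributions.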

