item count technique: Recently Published Documents

TOTAL DOCUMENTS: 27 (five years: 7)
H-INDEX: 9 (five years: 1)

2022, pp. 26-41
Author(s): Beatriz Cobo, Elvira Pelle

In situations where estimating the proportion of a sensitive variable relies on real measurements that are difficult to obtain, indirect questioning techniques need to be combined with alternative sampling methods. The present work focuses on the item count technique under ranked set sampling, building on the idea proposed by Santiago et al., which combines Warner's randomized response technique with ranked set sampling. The authors carry out a simulation study comparing the item count technique under ranked set sampling and under simple random sampling without replacement.
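The variance advantage of ranked set sampling over simple random sampling can be illustrated with a minimal Monte Carlo sketch. This is not the authors' design: the randomized response layer is omitted, ranking is assumed perfect, and the count distribution is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def draw(size):
    """Reported list-experiment counts: Poisson(2) nonsensitive items
    plus a Bernoulli(0.3) sensitive indicator (all values assumed)."""
    return rng.poisson(2.0, size) + rng.binomial(1, 0.3, size)

def srs_mean(n):
    # simple random sample of n reported counts
    return draw(n).mean()

def rss_mean(set_size, cycles):
    # ranked set sample: for each rank r, sort a fresh set of draws and
    # keep its r-th order statistic; perfect ranking is assumed
    vals = []
    for r in range(set_size):
        sets = np.sort(draw((cycles, set_size)), axis=1)
        vals.append(sets[:, r])
    return np.concatenate(vals).mean()

m, k, reps = 3, 30, 2000                 # n = m * k = 90 per sample
srs = np.array([srs_mean(m * k) for _ in range(reps)])
rss = np.array([rss_mean(m, k) for _ in range(reps)])

print(srs.var(), rss.var())              # RSS variance is smaller under perfect ranking
```

Both estimators target the same population mean (here 2.3), but the RSS estimator's Monte Carlo variance is visibly lower, which is the gain the simulation study in the paper quantifies.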


Author(s): S. Rinken, S. Pasadas-del-Amo, M. Rueda, B. Cobo

Extant scholarship on attitudes toward immigration and immigrants relies mostly on direct survey items. Thus, little is known about the scope of social desirability bias, and even less about its covariates. In this paper, we use probability-based mixed-mode panel data collected in the Southern Spanish region of Andalusia to estimate anti-immigrant sentiment with both the item count technique, also known as the list experiment, and a direct question. Based on these measures, we gauge the size of social desirability bias, compute predictor models for both estimators of anti-immigrant sentiment, and pinpoint covariates of bias. For most respondent profiles, the item count technique produces higher estimates of anti-immigrant sentiment than the direct question, suggesting that self-presentational concerns are far more ubiquitous than previously assumed. However, we also find evidence that among people keen to position themselves as all-out xenophiles, social desirability pressures persist in the list experiment: the full scope of anti-immigrant sentiment remains elusive even in non-obtrusive measurement.
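The core of this design, the gap between the list (difference-in-means) estimate and the direct-question estimate, can be sketched as a toy simulation. The prevalence, honesty rate, and item distributions below are all assumed for illustration, not drawn from the authors' panel data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000                     # respondents per group (assumed)
true_pi = 0.30                 # true prevalence of the sensitive attitude (assumed)
honesty = 0.60                 # chance a holder admits it when asked directly (assumed)

def nonsensitive(size):
    # J = 4 innocuous items, each held with probability 0.5
    return rng.binomial(4, 0.5, size)

# Direct question: holders admit the attitude only with probability `honesty`
holders = rng.binomial(1, true_pi, n)
direct = holders * rng.binomial(1, honesty, n)
direct_est = direct.mean()

# List experiment: treatment-group counts include the sensitive item truthfully
control = nonsensitive(n)
treated = nonsensitive(n) + rng.binomial(1, true_pi, n)
list_est = treated.mean() - control.mean()       # difference in means

bias = list_est - direct_est                     # estimated social desirability bias
print(direct_est, list_est, bias)
```

The list estimate recovers the true prevalence while the direct question underestimates it, and the difference between the two is exactly the "size of social desirability bias" gauged in the paper.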


2020, Vol. 114 (4), pp. 1297-1315
Author(s): Graeme Blair, Alexander Coppock, Margaret Moor

Eliciting honest answers to sensitive questions is frustrated if subjects withhold the truth for fear that others will judge or punish them. The resulting bias is commonly referred to as social desirability bias, a subset of what we label sensitivity bias. We make three contributions. First, we propose a social reference theory of sensitivity bias to structure expectations about survey responses on sensitive topics. Second, we explore the bias-variance trade-off inherent in the choice between direct and indirect measurement technologies. Third, to estimate the extent of sensitivity bias, we meta-analyze the set of published and unpublished list experiments (also known as the item count technique) conducted to date and compare the results with direct questions. We find that sensitivity biases are typically smaller than 10 percentage points and in some domains are approximately zero.


2019, pp. 004912411988246
Author(s): Jiayuan Li, Wim Van den Noortgate

This article presents an updated meta-analysis of survey experiments comparing the performance of the item count technique (ICT) and the direct questioning method. After synthesizing 246 effect sizes from 54 studies, we find that the probability that a sensitive item will be selected is .089 higher when using ICT compared to direct questioning. In recognition of the heterogeneity across studies, we seek to explain this variation by means of moderator analyses. We find that the relative effectiveness of ICT is moderated by cultural orientation in the context in which ICT is conducted (collectivism vs. individualism), the valence of topics involved in the applications (socially desirable vs. socially undesirable), and the number of nonkey items. In the Discussion section, we elaborate on the methodological implications of the main findings.


Author(s): Chi-lin Tsai

In this article, I review recent developments of the item-count technique (also known as the unmatched-count or list-experiment technique) and introduce a new package, kict, for statistical analysis of the item-count data. This package contains four commands: kict deff performs a diagnostic test to detect the violation of an assumption underlying the item-count technique. kict ls and kict ml perform least-squares estimation and maximum likelihood estimation, respectively. Each encompasses a number of estimators, offering great flexibility for data analysis. kict pfci is a postestimation command for producing confidence intervals with better coverage based on profile likelihood. The development of the item-count technique is still ongoing. I will continue to update the kict package accordingly.
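kict is a Stata package, but the least-squares idea behind kict ls can be sketched in Python: regress the item count on an intercept and the treatment indicator, and the treatment coefficient estimates the sensitive proportion. This is an illustration of the general LS approach to item-count data, not the kict implementation, and all simulated parameters are assumed.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Simulated list-experiment data (all parameters assumed for illustration)
treat = rng.binomial(1, 0.5, n)                            # treatment indicator
count = rng.binomial(3, 0.5, n) + treat * rng.binomial(1, 0.25, n)

# Least-squares fit of count on an intercept and the treatment dummy;
# the treatment coefficient is the estimated sensitive proportion
X = np.column_stack([np.ones(n), treat])
beta, *_ = np.linalg.lstsq(X, count, rcond=None)
print(beta[1])    # close to the simulated proportion 0.25
```

With covariates added to X, the same regression framework extends to modeling who holds the sensitive trait, which is the flexibility the kict estimators offer.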


2019, Vol. 49 (6), pp. 1330-1356
Author(s): Tasos C. Christofides, Eleni Manoli


Author(s): Benjamin R. Knoll, Cammie Jo Bolin

This chapter asks whether it is reasonable to expect the data to reveal a fully accurate picture of the prevalence of support for female ordination in the United States. When asked by a telephone surveyor whether they favor allowing women to serve as clergy in their own congregation, respondents may feel social pressure to say "yes" when in actuality they are more hesitant. The chapter takes advantage of a survey tool called a "list experiment" (or "item count technique") to examine whether support for female ordination is over- or underreported in public opinion surveys. It finds that it is: support for female clergy is likely overreported among the survey respondents, especially among women, meaning that there are fewer supporters of female ordination than public opinion surveys would lead one to believe.


2018, Vol. 3 (335), pp. 35-47
Author(s): Michał Bernardelli, Barbara Kowalczyk

Indirect methods of questioning are of utmost importance when dealing with sensitive questions. This paper refers to the new indirect method introduced by Tian et al. (2014) and examines the optimal allocation of the sample to control and treatment groups. If the optimal allocation is determined from the variance formula for the method-of-moments (difference-in-means) estimator of the sensitive proportion, the solution is quite straightforward and was given in Tian et al. (2014). However, maximum likelihood (ML) estimation is known to have much better properties, so determining the optimal allocation based on ML estimators is of greater practical importance. This problem is nontrivial because in the Poisson item count technique the sensitive study variable is latent and not directly observable. ML estimation is therefore carried out using the expectation-maximisation (EM) algorithm, and no explicit analytical formula for the variance of the ML estimator of the sensitive proportion is available. To determine the optimal allocation of the sample based on ML estimation, comprehensive Monte Carlo simulations and the EM algorithm have been employed.
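Under a simplified reading of the Poisson item count model (control counts Y ~ Poisson(λ); treatment counts Y = Z + S with Z ~ Poisson(λ) and S ~ Bernoulli(π) for the latent sensitive indicator), both EM steps have closed forms. A minimal sketch with assumed parameter values, not the paper's simulation design:

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(7)
lam_true, pi_true = 2.0, 0.30            # assumed true values
n0 = n1 = 5_000                          # control / treatment sizes

ctrl = rng.poisson(lam_true, n0)                                 # Y = Z
trt = rng.poisson(lam_true, n1) + rng.binomial(1, pi_true, n1)   # Y = Z + S

lam, pi = 1.0, 0.5                       # crude starting values
for _ in range(200):
    # E-step: posterior probability that S = 1 given a treatment count
    num = pi * poisson.pmf(trt - 1, lam)          # pmf is 0 when trt - 1 < 0
    den = num + (1 - pi) * poisson.pmf(trt, lam)
    s = num / den
    # M-step: closed-form complete-data MLEs
    pi = s.mean()
    lam = (ctrl.sum() + trt.sum() - s.sum()) / (n0 + n1)

print(lam, pi)    # close to the assumed true values
```

Because S only enters through the E-step posterior, no closed-form variance for the ML estimator of π falls out of this procedure, which is why the paper resorts to Monte Carlo simulation to study the optimal split of n0 and n1.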


2017, Vol. 26 (1), pp. 34-53
Author(s): John S. Ahlquist

The maximum likelihood regression model for the item count technique (ICT-MLE) in survey list experiments depends on assumptions about responses at the extremes (choosing no items or all items on the list). Existing list experiment best practices aim to minimize strategic misrepresentation in ways that virtually guarantee that a tiny number of respondents appear in the extrema. Under such conditions, both the "no liars" identification assumption and the computational strategy used to estimate the ICT-MLE become difficult to sustain. I report the results of Monte Carlo experiments examining the sensitivity of the ICT-MLE and simple difference-in-means estimators to survey design choices and small amounts of non-strategic respondent error. I show that, compared to the difference in means, the performance of the ICT-MLE depends on list design. Both estimators are sensitive to measurement error, but the problems are more severe for the ICT-MLE as a direct consequence of the no liars assumption. These problems become extreme as the number of treatment-group respondents choosing all the items on the list decreases. I document that such problems can arise in real-world applications, provide guidance for applied work, and suggest directions for further research.
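The ceiling problem described here shows up even with the simple difference-in-means estimator: if treatment-group respondents who would otherwise report all items plus the sensitive one (and thereby reveal themselves) deflate their answer, violating "no liars", the estimate drops. A toy Monte Carlo sketch with all parameters assumed:

```python
import numpy as np

rng = np.random.default_rng(3)
n, J, pi = 50_000, 4, 0.30     # group size, list length, true prevalence (assumed)

ctrl = rng.binomial(J, 0.5, n)                          # control counts
trt = rng.binomial(J, 0.5, n) + rng.binomial(1, pi, n)  # treatment counts

honest = trt.mean() - ctrl.mean()                       # diff-in-means, no lying

# "No liars" violation: respondents at the ceiling (count J + 1) would be
# fully identified as holders, so they deflate their answer by one item
deflated = np.where(trt == J + 1, J, trt)
ceiling = deflated.mean() - ctrl.mean()

print(honest, ceiling)    # ceiling lying pulls the estimate below the truth
```

The downward shift equals the share of treated respondents at the ceiling, which is small here; the paper's point is that the ICT-MLE, unlike this estimator, relies on those extreme cells for identification and so degrades much faster.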

