Corrigendum to “Survey questions about party competence: Insights from cognitive interviews” [Elect. Stud. 34 (2014) 280–290]

2014 ◽  
Vol 35 ◽  
pp. 406 ◽  
Author(s):  
Markus Wagner ◽  
Eva Zeglovits


2021 ◽  
pp. 004912412110312
Author(s):  
Cornelia E. Neuert ◽  
Katharina Meitinger ◽  
Dorothée Behr

The method of web probing integrates cognitive interviewing techniques into web surveys and is increasingly used to evaluate survey questions. In a typical web probing scenario, probes are administered immediately after the question to be tested (concurrent probing), usually as open-ended questions. Probes can also be administered in a closed format, with the response categories for the closed probes developed in previously conducted qualitative cognitive interviews. Closed probes offer several benefits, such as lower cost and greater time efficiency, because they do not require manual coding of open-ended responses. In this article, we investigate whether the insights into item functioning gained from closed probes are comparable to those gained from open-ended probes, and whether closed probes are equally suitable for capturing the cognitive processes that open-ended probes are traditionally intended to reveal. The findings reveal statistically significant differences in the variety of themes, the patterns of interpretation, the number of themes per respondent, and nonresponse. No differences in the number of themes across formats were found by sex or educational level.
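To make the kind of format comparison described here concrete, the sketch below applies a Mann-Whitney U test to themes per respondent and a chi-square test to nonresponse counts. All numbers are invented for illustration and do not reproduce the study's actual data, sample sizes, or test choices.

```python
# Minimal sketch of an open- vs. closed-probe comparison; all counts are
# hypothetical and stand in for coded web-probing responses.
from scipy.stats import mannwhitneyu, chi2_contingency

# Hypothetical number of distinct themes each respondent mentioned.
themes_open = [1, 2, 1, 0, 3, 2, 1, 2, 0, 1]
themes_closed = [2, 3, 2, 2, 1, 3, 2, 2, 3, 2]

# Do respondents produce a different number of themes per format?
u_stat, p_themes = mannwhitneyu(themes_open, themes_closed,
                                alternative="two-sided")

# Hypothetical nonresponse table: rows = probe format,
# columns = (answered, skipped).
nonresponse = [[85, 15],   # open-ended probes
               [97, 3]]    # closed probes
chi2, p_nonresp, dof, _ = chi2_contingency(nonresponse)

print(f"themes per respondent: U = {u_stat:.1f}, p = {p_themes:.3f}")
print(f"nonresponse: chi2 = {chi2:.2f} (df = {dof}), p = {p_nonresp:.3f}")
```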


SAGE Open ◽  
2016 ◽  
Vol 6 (4) ◽  
pp. 215824401667177 ◽  
Author(s):  
Jennifer Edgar ◽  
Joe Murphy ◽  
Michael Keating

Cognitive interviewing is a common method used to evaluate survey questions. This study compares traditional cognitive interviewing methods with crowdsourcing, or “tapping into the collective intelligence of the public to complete a task.” Crowdsourcing may give researchers access to a diverse pool of potential participants in a timely and cost-efficient way. Exploratory work found that crowdsourcing participants completing self-administered protocols may be a viable alternative, or addition, to traditional pretesting methods. Using three crowdsourcing designs (TryMyUI, Amazon Mechanical Turk, and Facebook), we compared participant characteristics, costs, and the quantity and quality of data with traditional laboratory-based cognitive interviews. Results suggest that crowdsourcing with self-administered protocols may be a viable way to collect survey pretesting information, as participants were able to complete the tasks and provide useful information; however, complex tasks may require the skills of an interviewer to administer unscripted probes.
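The cost side of such a comparison is simple arithmetic: total spend divided by completed participants per recruitment channel. The sketch below illustrates this; every figure is made up and does not reproduce the article's actual costs or sample sizes.

```python
# Back-of-the-envelope cost-per-participant comparison across pretesting
# channels; all totals and participant counts are invented for illustration.
pretesting = {
    "Lab cognitive interviews": {"total_cost": 4000.0, "participants": 20},
    "TryMyUI": {"total_cost": 350.0, "participants": 10},
    "Amazon Mechanical Turk": {"total_cost": 60.0, "participants": 30},
    "Facebook": {"total_cost": 120.0, "participants": 15},
}

for channel, c in pretesting.items():
    per_participant = c["total_cost"] / c["participants"]
    print(f"{channel:26s} ${per_participant:8.2f} per participant")
```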


Author(s):  
Kerry Scott ◽  
Dipanwita Gharai ◽  
Manjula Sharma ◽  
Namrata Choudhury ◽  
Bibha Mishra ◽  
...  

Quantitative survey findings are important in measuring health-related phenomena, including sensitive topics such as respectful maternity care (RMC). But how well do survey results truly capture respondent experiences and opinions? Quantitative tool development and piloting often involve translating questions from other settings and assessing the mechanics of implementation, which fails to deeply explore how respondents understand survey questions and response options. To address this gap, we conducted cognitive interviews on survey questions (n = 88) adapted from validated RMC instruments used in Ethiopia, Kenya and elsewhere in India. Cognitive interviews with rural women (n = 21) in Madhya Pradesh, India, involved asking the respondent the survey question, recording her response, and then interviewing her about what the question and response options meant to her. We analysed the interviews to revise the tool and identify question failures, which we grouped into six areas: issues with sequencing, length and sensitivity; problematic response options; inappropriate vocabulary; temporal and spatial confusion; accessing different cognitive domains; and failure to resonate with the respondent’s worldview and reality. Although women tended to provide initial answers to the survey questions, cognitive interviews revealed widespread mismatch between respondent interpretation and question intent. Likert scale response options were generally incomprehensible, and questions involving hypothetical scenarios could be interpreted in unexpected ways. Many key terms and concepts from the international RMC literature did not translate well and showed low resonance with respondents, including consent and being involved in decisions about one’s care. This study highlights the threat to data quality and the validity of findings when translating quantitative surveys between languages and cultures, and it showcases the value of cognitive interviews in identifying question failures. While survey tool revision can address many of these issues, further critical discussion is needed on the use of standardized questions to assess the same domains across contexts.
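Once question failures are coded into areas like the six named above, summarizing them reduces to a tally. The sketch below counts hypothetical failure codes per area; only the category labels come from the abstract, and the coded assignments are invented.

```python
# Tally coded question failures by area; the six labels follow the
# abstract's grouping, while the assignments themselves are invented.
from collections import Counter

failure_codes = [
    "problematic response options", "inappropriate vocabulary",
    "temporal and spatial confusion", "problematic response options",
    "failure to resonate with worldview", "sequencing, length and sensitivity",
    "accessing different cognitive domains", "problematic response options",
]

for area, n in Counter(failure_codes).most_common():
    print(f"{n:2d}  {area}")
```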


2012 ◽  
Vol 15 (6) ◽  
pp. 467-483 ◽  
Author(s):  
Stephen Farrall ◽  
Camilla Priede ◽  
Elina Ruuskanen ◽  
Anniina Jokinen ◽  
Todor Galev ◽  
...  

2015 ◽  
Vol 46 (3) ◽  
pp. 540-564 ◽  
Author(s):  
Philip S. Brenner

The research literature provides ample evidence that the rates of normative behaviors produced by sample surveys are higher than actual behavior warrants. Less well understood is the source of this error. Twenty-five cognitive interviews were conducted to probe responses to a set of common, conventional survey questions about one such normative behavior: religious service attendance. Answers to the survey questions and cognitive probes are compared both quantitatively and qualitatively. Half of the respondents amended their answer during cognitive probing, with every amendment indicating a lower rate of attendance than originally reported, yielding a statistically significant reduction in reported attendance. Narrative responses shed light on the source of bias: respondents pragmatically interpreted the survey question to allow themselves to include other types of religious behavior, to report on a more religious past, and to discount current constraints on their religious behavior, in order to report aspirational or normative religious identities.
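Because every amendment lowered the reported attendance, a one-sided paired test is the natural check on such a reduction. The sketch below runs a Wilcoxon signed-rank test on invented before/after attendance frequencies; the study's actual data and analysis are not reproduced here.

```python
# Paired comparison of initial survey answers vs. answers amended under
# cognitive probing; attendance coded as services per month, all values
# invented for illustration.
from scipy.stats import wilcoxon

original = [4, 4, 2, 4, 1, 2, 4, 0, 2, 4, 1, 4]  # initial survey answers
probed   = [4, 2, 1, 4, 0, 2, 2, 0, 1, 2, 1, 4]  # after probing

# All amendments go downward, so test whether original > probed.
stat, p = wilcoxon(original, probed, alternative="greater")
print(f"Wilcoxon signed-rank: W = {stat:.1f}, one-sided p = {p:.4f}")
```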


Author(s):  
Laura H. Lippman ◽  
Kristin Anderson Moore ◽  
Lina Guzman ◽  
Renee Ryberg ◽  
Hugh McIntosh ◽  
...  

2021 ◽  
Vol 45 (1) ◽  
pp. 81-94
Author(s):  
Julie M. Maier ◽  
Kristen N. Jozkowski ◽  
Danny Valdez ◽  
Brandon L. Crawford ◽  
Ronna C. Turner ◽  
...  

Objectives: Salient belief elicitations (SBEs), informed by the Reasoned Action Approach (RAA), are used to identify 3 sets of beliefs (behavioral, control, and normative) that influence attitudes toward a health behavior. SBEs ask participants about their own beliefs through open-ended questions. We adapted an SBE by focusing on abortion, which is infrequently examined through SBEs; we also included a survey version that asked participants their views on what a hypothetical woman would do if contemplating an abortion. Given these deviations from traditional SBEs, the purpose of this study was to assess, through cognitive interviewing, whether the adapted SBE was understood by participants in English and Spanish. Methods: We examined participants' interpretations of SBE items about abortion to determine whether they aligned with the corresponding RAA construct. We administered SBE surveys and conducted cognitive interviews with US adults in both English and Spanish. Results: Participants comprehended the SBE questions as intended. Participants' interpretations of most questions were also in line with the respective RAA construct. Conclusions: SBE survey questions were comprehended well by participants. We discuss areas in which SBE questions can be modified to improve alignment with the underlying RAA construct to assess abortion beliefs.


Field Methods ◽  
2011 ◽  
Vol 23 (4) ◽  
pp. 379-396 ◽  
Author(s):  
Kristen Miller ◽  
Rory Fitzgerald ◽  
José-Luis Padilla ◽  
Stephanie Willson ◽  
Sally Widdop ◽  
...  

This article summarizes the work of the Comparative Cognitive Testing Workgroup, an international coalition of survey methodologists interested in developing an evidence-based methodology for examining the comparability of survey questions within cross-cultural or multinational contexts. To meet this objective, it was necessary to ensure that the cognitive interviewing (CI) method itself did not introduce method bias. Therefore, the workgroup first identified specific characteristics inherent in CI methodology that could undermine the comparability of CI evidence. The group then developed and implemented a protocol addressing those issues. In total, 135 cognitive interviews were conducted by the participating countries. Through this process, the group identified various interpretive patterns resulting from sociocultural and language-related differences among countries, as well as other patterns of error that would impede the comparability of survey data.

