Appendix I: CMES Codebook – Relevant Survey Questions

2021
pp. 231-246
Author(s): David M. Willumsen

The central argument of this book is that voting unity in European legislatures is not primarily the result of the ‘disciplining’ power of the leadership of parliamentary parties, but rather the result of a combination of ideological homogeneity through self-selection into political parties and the calculations of individual legislators about their own long-term benefits. Despite the central role of policy preferences in the subsequent behaviour of legislators, preferences at the level of the individual legislator have been almost entirely neglected in the study of parliaments and legislative behaviour. The book measures these using a hitherto under-utilized resource: parliamentary surveys. Building on these, the book develops measures of legislators’ policy incentives to dissent from their parliamentary parties, and shows that preference similarity amongst legislators explains a very substantial proportion of party unity, though it alone cannot explain all of it. Analysing legislators’ attitudes to the demands of party unity, and what drives these attitudes, the book argues that the observed unity (beyond what preference similarity would explain) reflects MPs’ conscious acceptance that the long-term benefits of belonging to a united party (such as increased influence on legislation, lower transaction costs, and better chances of gaining office) outweigh the short-term benefits of always voting for their ideal policy outcome. The book buttresses this argument through the analysis of both open-ended survey questions and survey questions on the costs and benefits of belonging to a political party in a legislature.
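As a purely illustrative sketch of this kind of measure (the column names and the specific distance metric below are assumptions, not the book's actual operationalization), a legislator's policy incentive to dissent can be expressed as the distance between their survey self-placement and their party's median position:

```python
# A minimal sketch, not the book's operationalization: quantify each MP's
# policy incentive to dissent as the absolute distance between their
# left-right survey self-placement and their party's median placement.
# All column names here are hypothetical.
import pandas as pd

def dissent_incentive(df: pd.DataFrame) -> pd.Series:
    """Distance of each MP from their parliamentary party's median
    self-placement; larger values imply weaker preference similarity
    and hence a stronger policy incentive to break unity."""
    party_median = df.groupby("party")["left_right"].transform("median")
    return (df["left_right"] - party_median).abs()

mps = pd.DataFrame({
    "mp_id": [1, 2, 3, 4],
    "party": ["A", "A", "B", "B"],
    "left_right": [2.0, 4.5, 7.0, 7.5],  # 0-10 self-placement scale
})
mps["incentive"] = dissent_incentive(mps)
print(mps)
```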


2019
Vol. 8 (4)
pp. 691-705
Author(s): Robert P. Agans, Quirina M. Vallejos, Thad S. Benefield

Past research has shown that commonly reported cultural group disparities in health-related indices may be attributable to culturally mediated differences in the interpretation of translated survey questions and response scales. This problem may be exacerbated when administering single-item survey questions, which typically lack the reliability seen in multi-item scales. We adapt the test-retest approach for single-item survey questions that have been translated from English into Spanish and demonstrate how to use this approach as a quick and efficient pilot test before fielding a major survey. Three retest conditions were implemented (English-Spanish, Spanish-English, and English-English) on a convenience sample (n = 109) of Latinos and non-Latinos, in which the translated items were compared against the English-English condition, which served as our control. Several items were flagged for investigation using this approach. Discussion centers on the utility of this approach for evaluating the Spanish translation of single-item questions in population-based surveys.
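The retest design lends itself to a simple pilot analysis. The sketch below assumes a hypothetical long-format dataset (columns respondent, item, condition, wave, response, with wave taking the values "test" and "retest"); the flagging rule, comparing translated-condition agreement against the English-English control, is an illustrative heuristic rather than the authors' exact criterion.

```python
# A minimal sketch of test-retest screening for translated single items.
# The data layout and flagging threshold are assumptions, not the
# authors' published procedure.
import pandas as pd

def retest_agreement(df: pd.DataFrame) -> pd.DataFrame:
    """Percent exact agreement between test and retest responses,
    computed per item and per retest condition."""
    wide = df.pivot_table(index=["respondent", "item", "condition"],
                          columns="wave", values="response",
                          aggfunc="first").reset_index()
    wide["agree"] = wide["test"] == wide["retest"]
    return wide.groupby(["item", "condition"])["agree"].mean().unstack()

def flag_items(agreement: pd.DataFrame, margin: float = 0.15) -> pd.Index:
    """Flag items whose worst translated-condition agreement trails the
    English-English control by more than `margin`."""
    control = agreement["English-English"]
    translated = agreement[["English-Spanish", "Spanish-English"]].min(axis=1)
    return agreement.index[translated < control - margin]
```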


2021
Vol. 8 (3)
pp. 205316802110317
Author(s): Rebecca A. Glazier, Amber E. Boydstun, Jessica T. Feezell

Open-ended survey questions can provide researchers with nuanced and rich data, but content analysis is subject to misinterpretation and can introduce bias into subsequent analysis. We present a simple method to improve the semantic validity of a codebook and test for bias: a “self-coding” method in which respondents first provide open-ended responses and then self-code those responses into categories. We demonstrate this method by comparing respondents’ self-coding to researcher-based coding using an established codebook. Our analysis shows significant disagreement between the codebook’s assigned categorizations of responses and respondents’ self-codes. Moreover, the technique uncovers instances where researcher-based coding disproportionately misrepresents the views of certain demographic groups. We propose using the self-coding method to iteratively improve codebooks, identify bad-faith respondents, and, perhaps, replace researcher-based content analysis.
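A minimal sketch of how the researcher-versus-self-code comparison could be run, assuming one row per respondent with hypothetical columns researcher_code, self_code, and a demographic grouping variable; Cohen's kappa is a standard choice for this kind of agreement check, not necessarily the authors' exact statistic.

```python
# A minimal sketch: chance-corrected agreement between researcher-based
# coding and respondent self-coding, overall and by demographic group.
# Column names are hypothetical.
import pandas as pd
from sklearn.metrics import cohen_kappa_score

def coding_agreement(df: pd.DataFrame) -> float:
    """Cohen's kappa between codebook-based researcher codes and the
    categories respondents chose for their own answers; low values
    signal semantic-validity problems with the codebook."""
    return cohen_kappa_score(df["researcher_code"], df["self_code"])

def agreement_by_group(df: pd.DataFrame, group: str) -> pd.Series:
    """Kappa within each demographic group; a markedly lower value for
    one group suggests researcher coding misrepresents that group."""
    return df.groupby(group).apply(coding_agreement)
```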


2021
pp. 004912412110312
Author(s): Cornelia E. Neuert, Katharina Meitinger, Dorothée Behr

The method of web probing integrates cognitive interviewing techniques into web surveys and is increasingly used to evaluate survey questions. In a typical web probing scenario, probes are administered immediately after the question to be tested (concurrent probing), usually as open-ended questions. Probes can also be administered in a closed format, whereby the response categories for the closed probes are developed in previously conducted qualitative cognitive interviews. Closed probes have several benefits, such as reduced costs and greater time efficiency, because they do not require manual coding of open-ended responses. In this article, we investigate whether the insights into item functioning gained from closed probes are comparable to those gained from open-ended probes, and whether closed probes are equally suitable for capturing the cognitive processes that open-ended probes are traditionally intended to capture. The findings reveal statistically significant differences with regard to the variety of themes, the patterns of interpretation, the number of themes per respondent, and nonresponse. No differences in the number of themes across formats by sex or educational level were found.
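The open-versus-closed comparison could be piloted along these lines; the sketch assumes one row per respondent with hypothetical columns format, n_themes, and nonresponse, and the tests shown are common choices for count and proportion comparisons, not necessarily the authors' analysis.

```python
# A minimal sketch comparing open vs. closed probe formats on themes
# per respondent and on probe nonresponse. Column names and test
# choices are assumptions.
import pandas as pd
from scipy.stats import mannwhitneyu, chi2_contingency

def compare_formats(df: pd.DataFrame) -> None:
    open_probes = df[df["format"] == "open"]
    closed_probes = df[df["format"] == "closed"]

    # Do respondents produce a different number of themes per format?
    u, p = mannwhitneyu(open_probes["n_themes"], closed_probes["n_themes"])
    print(f"themes per respondent: U={u:.1f}, p={p:.3f}")

    # Does probe nonresponse differ between formats?
    table = pd.crosstab(df["format"], df["nonresponse"])
    chi2, p_nr, _, _ = chi2_contingency(table)
    print(f"nonresponse: chi2={chi2:.2f}, p={p_nr:.3f}")
```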


2008
Vol. 15 (1)
pp. 79-87
Author(s): Thomas Niedomysl, Bo Malmberg

2010
Vol. 24 (5)
pp. 351-354
Author(s): Victoria P. Niederhauser, Deborah Mattheus

2015
Vol. 38
pp. 92-110
Author(s): Jacqualyn Blizzard, Leidy Klotz, Geoff Potvin, Zahra Hazari, Jennifer Cribbs, ...
