Reported Attitudes About Euthanasia Reflect Comprehension of Terms in Survey Questions

2004
Author(s): Maile O'Hara, Michael F. Schober
Author(s): David M. Willumsen

The central argument of this book is that voting unity in European legislatures is not primarily the result of the ‘disciplining’ power of the leadership of parliamentary parties, but rather the result of a combination of ideological homogeneity through self-selection into political parties and the calculations of individual legislators about their own long-term benefits. Despite the central role of policy preferences in the subsequent behaviour of legislators, preferences at the level of the individual legislator have been almost entirely neglected in the study of parliaments and legislative behaviour. The book measures these using a hitherto under-utilized resource: parliamentary surveys. Building on these, the book develops measures of legislators' policy incentives to dissent from their parliamentary parties, and shows that preference similarity amongst legislators explains a very substantial proportion of party unity, yet cannot alone explain all of it. Analysing the attitudes of legislators to the demands of party unity, and what drives these attitudes, the book argues that what explains the observed unity (beyond what preference similarity would explain) is the conscious acceptance by MPs that the long-term benefits of belonging to a united party (such as increased influence on legislation, lower transaction costs, and better chances of gaining office) outweigh the short-term benefits of always voting for their ideal policy outcome. The book buttresses this argument through analysis of both open-ended survey questions and survey questions on the costs and benefits of belonging to a political party in a legislature.
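
As a rough illustration of the kind of preference-based measure the book describes (not its actual operationalization), the Python sketch below computes each legislator's distance from their party's median left-right self-placement from survey responses, treating that distance as a proxy for the policy incentive to dissent. All data and names are hypothetical.

```python
# Minimal sketch, assuming left-right self-placements (0-10) from a
# parliamentary survey. A legislator's incentive to dissent is proxied by
# the absolute distance from their party's median placement.
import statistics

# Hypothetical responses: legislator -> (party, left-right self-placement)
responses = {
    "A": ("Party X", 3.0), "B": ("Party X", 3.5), "C": ("Party X", 6.0),
    "D": ("Party Y", 7.0), "E": ("Party Y", 7.5),
}

# Party position as the median of its members' self-placements
party_medians = {}
for party in {p for p, _ in responses.values()}:
    scores = [s for p, s in responses.values() if p == party]
    party_medians[party] = statistics.median(scores)

# Distance from the party median: larger values suggest a stronger
# short-term policy incentive to vote against the party line.
for legislator, (party, score) in responses.items():
    distance = abs(score - party_medians[party])
    print(f"{legislator} ({party}): distance from party median = {distance:.1f}")
```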


2019 ◽ Vol 8 (4) ◽ pp. 691-705
Author(s): Robert P Agans, Quirina M Vallejos, Thad S Benefield

Past research has shown that commonly reported cultural group disparities in health-related indices may be attributable to culturally mediated differences in the interpretation of translated survey questions and response scales. This problem may be exacerbated when administering single-item survey questions, which typically lack the reliability of multi-item scales. We adapt the test-retest approach for single-item survey questions that have been translated from English into Spanish and demonstrate how to use this approach as a quick and efficient pilot test before fielding a major survey. Three retest conditions were implemented (English-Spanish, Spanish-English, and English-English) on a convenience sample (n = 109) of Latinos and non-Latinos, in which translated items were compared against the English-English condition, which served as our control. Several items were flagged for investigation using this approach. Discussion centers on the utility of this approach for evaluating the Spanish translation of single-item questions in population-based surveys.
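
A minimal sketch of how such a test-retest pilot might be scored, assuming categorical single-item responses and Cohen's kappa as the agreement measure. The data, the flagging threshold, and the helper name retest_kappa are illustrative, not the authors' code.

```python
# Compare chance-corrected test-retest agreement for one item: a translated
# condition (e.g., English-Spanish) is flagged when its agreement falls
# notably below the English-English control condition.
from sklearn.metrics import cohen_kappa_score

def retest_kappa(test, retest):
    """Chance-corrected agreement between first and second administration."""
    return cohen_kappa_score(test, retest)

# Hypothetical responses for one item under two conditions
eng_eng = ([1, 2, 2, 3, 1, 2], [1, 2, 2, 3, 1, 2])   # control
eng_spa = ([1, 2, 2, 3, 1, 2], [1, 3, 2, 1, 1, 3])   # translated

control_k = retest_kappa(*eng_eng)
translated_k = retest_kappa(*eng_spa)
if control_k - translated_k > 0.2:   # illustrative flagging threshold
    print("Flag item for translation review:",
          f"control kappa={control_k:.2f}, translated kappa={translated_k:.2f}")
```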


2021 ◽ Vol 8 (3) ◽ pp. 205316802110317
Author(s): Rebecca A. Glazier, Amber E. Boydstun, Jessica T. Feezell

Open-ended survey questions can provide researchers with nuanced and rich data, but content analysis is subject to misinterpretation and can introduce bias into subsequent analysis. We present a simple method to improve the semantic validity of a codebook and test for bias: a “self-coding” method in which respondents first provide open-ended responses and then self-code those responses into categories. We demonstrate this method by comparing respondents’ self-coding to researcher-based coding using an established codebook. Our analysis shows significant disagreement between the codebook’s assigned categorizations of responses and respondents’ self-codes. Moreover, this technique uncovered instances where researcher-based coding disproportionately misrepresented the views of certain demographic groups. We propose using the self-coding method to iteratively improve codebooks, identify bad-faith respondents, and perhaps replace researcher-based content analysis altogether.
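
One plausible way to operationalize the comparison described above: compute chance-corrected agreement between researcher codes and self-codes, then break disagreement down by demographic group. The categories, groups, and data below are hypothetical, not drawn from the study.

```python
# Sketch: measure agreement between researcher-assigned codes and
# respondents' self-codes, overall and by demographic group.
from collections import defaultdict
from sklearn.metrics import cohen_kappa_score

respondents = [
    # (researcher_code, self_code, demographic_group)
    ("economy", "economy", "A"), ("religion", "morality", "B"),
    ("economy", "economy", "A"), ("religion", "religion", "A"),
    ("morality", "religion", "B"), ("economy", "welfare", "B"),
]

researcher = [r for r, _, _ in respondents]
self_codes = [s for _, s, _ in respondents]
print("Overall kappa:", cohen_kappa_score(researcher, self_codes))

# Disagreement rate by group: a large gap between groups suggests the
# codebook disproportionately misrepresents one group's responses.
disagree = defaultdict(list)
for r, s, g in respondents:
    disagree[g].append(r != s)
for g, flags in sorted(disagree.items()):
    print(f"Group {g}: disagreement rate = {sum(flags) / len(flags):.2f}")
```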


2021 ◽ pp. 004912412110312
Author(s): Cornelia E. Neuert, Katharina Meitinger, Dorothée Behr

The method of web probing integrates cognitive interviewing techniques into web surveys and is increasingly used to evaluate survey questions. In a typical web probing scenario, probes are administered immediately after the question to be tested (concurrent probing), usually as open-ended questions. Probes can also be administered in a closed format, whereby the response categories for the closed probes are developed during previously conducted qualitative cognitive interviews. Closed probes have several benefits, such as reduced cost and greater time efficiency, because they do not require manual coding of open-ended responses. In this article, we investigate whether the insights into item functioning gained from closed probes are comparable to those gained from open-ended probes, and whether closed probes are equally suitable for capturing the cognitive processes that open-ended probes are traditionally intended to capture. The findings reveal statistically significant differences with regard to the variety of themes, the patterns of interpretation, the number of themes per respondent, and nonresponse. No differences across formats in the number of themes by sex or educational level were found.
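
A hedged sketch of how the open-versus-closed comparison might be analyzed, assuming a count of themes per respondent in each format; a rank-based Mann-Whitney U test is used here because theme counts are unlikely to be normally distributed. All numbers are invented for illustration.

```python
# Compare themes-per-respondent between probe formats, plus nonresponse rates.
from scipy.stats import mannwhitneyu

open_themes = [2, 3, 1, 4, 2, 3, 2, 1]    # themes per respondent, open probes
closed_themes = [1, 1, 2, 1, 2, 1, 1, 2]  # themes per respondent, closed probes

stat, p = mannwhitneyu(open_themes, closed_themes, alternative="two-sided")
print(f"Mann-Whitney U = {stat}, p = {p:.3f}")

# Nonresponse comparison (share of skipped/empty probes per format)
open_nonresponse, closed_nonresponse = 12 / 100, 4 / 100
print(f"Nonresponse: open {open_nonresponse:.0%} vs closed {closed_nonresponse:.0%}")
```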


2008 ◽ Vol 15 (1) ◽ pp. 79-87
Author(s): Thomas Niedomysl, Bo Malmberg

2010 ◽ Vol 24 (5) ◽ pp. 351-354
Author(s): Victoria P. Niederhauser, Deborah Mattheus

2015 ◽ Vol 38 ◽ pp. 92-110
Author(s): Jacqualyn Blizzard, Leidy Klotz, Geoff Potvin, Zahra Hazari, Jennifer Cribbs, ...

2007 ◽ Vol 16 (4) ◽ pp. 439-446
Author(s): Henry J. Gardner, Michael A. Martin

Likert-scaled data, which are frequently collected in studies of interaction in virtual environments, demand specialized statistical tools for analysis. The routine use of statistical methods appropriate for continuous data in this context can lead to significant inferential flaws. Likert-scaled data are ordinal rather than interval scaled and need to be analyzed using rank-based statistical procedures, which are widely available. Likert scores are “lumpy” in the sense that they cluster around a small number of fixed values. This lumpiness is made worse by the tendency of subjects to cluster towards either the middle or the extremes of the scale. We suggest an ad hoc method for dealing with such data, which can involve further lumping of the results followed by the application of nonparametric statistics. Averaging Likert scores over several different survey questions, as is sometimes done in studies of interaction in virtual environments, produces a different sort of lumpiness. The lumped variables obtained in this manner can be quite murky and should be used with great caution, if at all, particularly if the number of questions over which the averaging is carried out is small.
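
A short example of the rank-based approach the abstract recommends, assuming two groups rated on a 5-point Likert item: the scores are optionally “lumped” into three categories and then compared with the Mann-Whitney U test rather than a t-test. The data are hypothetical.

```python
# Treat Likert responses as ordinal: no means, no t-tests; use a rank-based
# test instead, optionally after further lumping of the scale.
from scipy.stats import mannwhitneyu

group_a = [5, 4, 4, 5, 3, 4, 5, 2, 4, 5]   # ordinal 1-5 Likert scores
group_b = [3, 2, 3, 4, 2, 3, 1, 3, 2, 3]

# Optional further lumping: 1-2 -> disagree, 3 -> neutral, 4-5 -> agree
def lump(x):
    return 0 if x <= 2 else (1 if x == 3 else 2)

lumped_a = [lump(x) for x in group_a]
lumped_b = [lump(x) for x in group_b]

# Rank-based test is valid for ordinal data; no interval assumption is made.
stat, p = mannwhitneyu(lumped_a, lumped_b, alternative="two-sided")
print(f"Mann-Whitney U = {stat}, p = {p:.4f}")
```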

