When “Don’t Know” Indicates Nonignorable Missingness: Using the Estimation of Political Knowledge as an Example

2022
pp. 147892992110585
Author(s):  
Tsung-Han Tsai

The conventional procedure for measuring political knowledge is to treat nonresponses such as “don’t know” as incorrect and to count the number of “correct” responses. Recently, increasing attention has been paid to the partial knowledge hidden within incorrect responses and nonresponses. This article explores such partial knowledge and treats nonresponses as nonignorable missingness. We propose a model that combines the shared-parameter approach from the literature on missing-data mechanisms with the methods of item response theory. We show that the proposed model can determine whether respondents who give nonresponses should be treated as more or less knowledgeable, and can detect whether pooling nonresponses and incorrect responses into the same category is appropriate. Furthermore, we find partial knowledge hidden within women’s nonresponses, which suggests that the gender gap in political knowledge may be exaggerated.
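To make the shared-parameter idea concrete, here is a minimal numpy sketch (not the authors’ code; all names and values are hypothetical): a 2PL measurement model generates correct/incorrect answers, while a logistic selection equation sharing the same latent trait governs whether a respondent answers at all.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 1000, 10                       # respondents, knowledge items
theta = rng.normal(0, 1, n)           # latent political knowledge
a = rng.uniform(0.5, 2.0, k)          # item discriminations
b = rng.normal(0, 1, k)               # item difficulties

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Measurement part: 2PL probability of a correct answer.
p_correct = sigmoid(a * (theta[:, None] - b))

# Selection part: shares theta with the measurement part, which is
# what makes "don't know" nonignorable. With lambda_ > 0, less
# knowledgeable respondents are more likely to skip.
gamma, lambda_ = -1.0, 1.0
p_respond = sigmoid(gamma + lambda_ * theta)[:, None]

responded = rng.random((n, k)) < p_respond
correct = rng.random((n, k)) < p_correct
y = np.where(responded, correct.astype(float), np.nan)  # NaN = "don't know"
```

Estimating lambda_ jointly with the item parameters is what lets such a model judge whether nonrespondents look more or less knowledgeable than incorrect answerers; lambda_ = 0 would reduce the missingness to ignorable.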

2006
Vol 31 (1)
pp. 63-79
Author(s):  
Henry May

A new method is presented and implemented for deriving a scale of socioeconomic status (SES) from international survey data using a multilevel Bayesian item response theory (IRT) model. The proposed model incorporates both international anchor items and nation-specific items and is able to (a) produce student family SES scores that are internationally comparable, (b) reduce the influence of irrelevant national differences in culture on the SES scores, and (c) effectively and efficiently deal with the problem of missing data in a manner similar to Rubin’s (1987) multiple imputation approach. The results suggest that this model is superior to conventional models in terms of its fit to the data and its ability to use information collected via international surveys.
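The anchor/nation-specific structure can be pictured as a data-generating sketch, shown here with a Rasch-type (1PL) response function for brevity; the counts and names are illustrative, not the paper’s.

```python
import numpy as np

rng = np.random.default_rng(1)
n_country, n_per = 5, 200
country = np.repeat(np.arange(n_country), n_per)

# Level 2: country means; Level 1: student SES varies around them.
mu_c = rng.normal(0, 0.5, n_country)
ses = rng.normal(mu_c[country], 1.0)

# Six anchor items share one difficulty everywhere; four
# nation-specific items get their own difficulty in each country.
b_anchor = rng.normal(0, 1, 6)
b_specific = rng.normal(0, 1, (n_country, 4))
b = np.hstack([np.tile(b_anchor, (len(ses), 1)),   # (n, 6)
               b_specific[country]])               # (n, 4) -> (n, 10)

p = 1 / (1 + np.exp(-(ses[:, None] - b)))          # Rasch-type response
items = (rng.random(p.shape) < p).astype(int)      # possession indicators
```

The anchor items tie the scale together across countries, while the nation-specific difficulties absorb local differences in what a given possession signals about SES.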


Psico-USF
2019
Vol 24 (4)
pp. 673-684
Author(s):
Kaline da Silva Lima
Juliana Maria Vieira Tenório
Francisco Romário
Luã Medeiros Fernandes de Melo
Josemberg Moura de Andrade

The goal of this research was to adapt the Modern Homonegativity Scale (MHS) and obtain validity evidence for it. The MHS comprises two parallel 12-item forms, one referring to gay men (MHS-G) and the other to lesbians (MHS-L). In the first study, 418 heterosexuals between 18 and 58 years old (M = 24.9; SD = 7.23), mostly women (66.3%) and mostly living in João Pessoa-PB (50.5%), responded. Both scales proved unidimensional and showed a high degree of internal consistency. The second study included 273 heterosexuals between 18 and 55 years old (M = 23.7; SD = 6.33), mostly women (69%). Confirmatory factor analysis showed satisfactory fit indices for the proposed model, and Item Response Theory (IRT) analyses demonstrated good discrimination and adequate variation in the difficulty parameters. We therefore conclude that the MHS is psychometrically valid, easy to administer, and suitable for use in research contexts.
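For readers less familiar with the IRT parameters reported in the second study, the sketch below (illustrative values only, not the MHS estimates) shows how discrimination and difficulty shape a 2PL item characteristic curve.

```python
import numpy as np

def icc(theta, a, b):
    """2PL item characteristic curve: P(endorse | theta)."""
    return 1 / (1 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 7)
# A strongly discriminating item (a = 2.0) separates respondents near
# its difficulty (b = 0.5) far more sharply than a weak one (a = 0.6).
print(np.round(icc(theta, a=2.0, b=0.5), 2))
print(np.round(icc(theta, a=0.6, b=0.5), 2))
```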


2018
Vol 79 (3)
pp. 462-494
Author(s):  
Ken A. Fujimoto

Advancements in item response theory (IRT) have led to models for dual dependence, which control for cluster and method effects during a psychometric analysis. Currently, however, this class of models does not include one that accounts for method effects stemming from two sources in which one source functions differently across the levels of another (i.e., a nested method–source interaction). This study therefore proposes a Bayesian IRT model that accounts for such interaction among method sources while controlling for the clustering of individuals within the sample. The proposed model accomplishes these tasks by specifying a multilevel trifactor structure for the latent trait space. Reported simulations demonstrate that the model can identify when item response data represent a multilevel trifactor structure, even in samples as small as 250 cases nested within 50 clusters. The simulations also show that misleading estimates of the item discriminations can arise when the trifactor structure reflected in the data is not correctly accounted for. The utility of the model is further illustrated through the analysis of empirical data.
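A rough picture of a multilevel trifactor structure as a data-generating process, with all loadings fixed at 1 for brevity (a simplification; the actual model estimates them, and every name here is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
n_cluster, per_cluster = 50, 5
n = n_cluster * per_cluster                 # 250 cases in 50 clusters
cluster = np.repeat(np.arange(n_cluster), per_cluster)

k = 12
src1 = np.repeat([0, 1], 6)                 # method source 1, per item
src2 = np.tile([0, 1], 6)                   # method source 2, per item
cell = src1 * 2 + src2                      # nested source-1-by-source-2 cell

# Multilevel general trait plus two layers of method factors.
theta = rng.normal(0, 1, n) + rng.normal(0, 0.5, n_cluster)[cluster]
eta1 = rng.normal(0, 1, (n, 2))             # source-1 method factors
eta12 = rng.normal(0, 1, (n, 4))            # nested interaction factors

beta = rng.normal(0, 1, k)                  # item intercepts
lin = theta[:, None] + eta1[:, src1] + eta12[:, cell] - beta
y = (rng.random((n, k)) < 1 / (1 + np.exp(-lin))).astype(int)
```

The `eta12` factors are what carry the nested interaction: one latent variable per combination of the two method sources, so source 2 is free to behave differently within each level of source 1.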


2015
Vol 5 (4)
pp. 711-731
Author(s):  
Stephen A. Jessee

A prominent worry in the measurement of political knowledge is that respondents who say they don’t know the answer to a survey question may have partial knowledge about the topic: more than respondents who answer incorrectly, but less than those who answer correctly. It has also been argued that differences in respondents’ willingness to guess, driven strongly by personality, can bias traditional knowledge measures. Using a multinomial probit item response model, I show that, contrary to previous claims that “don’t know” responses to political knowledge questions conceal a good deal of “hidden knowledge,” these responses actually reflect less knowledge than both correct and incorrect answers. Furthermore, arguments that the meaning of “don’t know” responses varies strongly by respondent personality type are incorrect: the results hold for high- and low-trait respondents on each of the five most commonly used core personality measures.
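The mechanics of the multinomial setup can be sketched as a toy data-generating process (not the fitted probit model, and with entirely hypothetical values): each response option gets its own utility, and the options’ slopes on theta determine how knowledge sorts respondents across categories.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
theta = rng.normal(0, 1, n)              # latent political knowledge

# Utilities for one item's three response options: correct,
# incorrect, "don't know" (intercepts and slopes made up).
alpha = np.array([0.0, -0.2, -0.5])
beta = np.array([1.2, 0.1, -0.8])        # DK slope below the incorrect slope
util = alpha + np.outer(theta, beta) + rng.normal(0, 1, (n, 3))
choice = util.argmax(axis=1)

# Mean knowledge by response type reproduces the paper's ordering:
# correct > incorrect > don't know.
for c, label in enumerate(["correct", "incorrect", "don't know"]):
    print(label, round(theta[choice == c].mean(), 2))
```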


2017
Vol 25 (4)
pp. 483-504
Author(s):
Tsung-han Tsai
Chang-chih Lin

Given the crucial role of political knowledge in democratic participation, its measurement has been a major concern in political science. Common formats for political knowledge questions include multiple-choice items and open-ended identification questions. The conventional wisdom holds that multiple-choice items induce guessing behavior, which leads to underestimated item-difficulty parameters and biased estimates of political knowledge. This article examines guessing behavior on multiple-choice items and argues that a successful guess requires a certain level of knowledge, conditional on item difficulty. To address this issue, we propose a Bayesian IRT guessing model that accommodates the guessing components of item responses. Applying the model to survey data from Taiwan, we show that it appropriately describes guessing as a function of respondents’ political knowledge and item characteristics: in general, partially informed respondents are the most likely to guess successfully, because well-informed respondents do not need to guess and barely informed ones are easily drawn to attractive distractors. We also examine the gender gap in political knowledge and find that, even when the guessing effect is accounted for, men are more knowledgeable than women about political affairs, consistent with the literature.
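The core idea, that guessing success itself depends on knowledge, can be sketched as a know-or-guess mixture; the parameterization and values below are hypothetical, not the authors’ specification.

```python
import numpy as np

def p_correct(theta, a, b, g0, g1):
    """Know-or-guess mixture: a 2PL knowledge response, with a failed
    knowledge response possibly rescued by an ability-dependent guess."""
    know = 1 / (1 + np.exp(-a * (theta - b)))
    guess = 1 / (1 + np.exp(-(g0 + g1 * theta)))   # P(successful guess)
    return know + (1 - know) * guess

theta = np.linspace(-3, 3, 7)
base = 1 / (1 + np.exp(-1.5 * (theta - 0.0)))      # pure 2PL curve
full = p_correct(theta, a=1.5, b=0.0, g0=-1.2, g1=0.8)
# The gain from guessing, (full - base), peaks at middling theta:
# the well informed rarely need to guess, and the barely informed
# rarely guess successfully.
print(np.round(full - base, 2))
```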


2021
Vol 11
Author(s):
Hongyue Zhu
Wei Gao
Xue Zhang

Multilevel item response theory (MLIRT) models are widely used in educational and psychological research. These models have two or more levels: an item response theory model as the measurement part and a linear-regression model as the structural part, the aim being to investigate the relation between explanatory variables and latent variables. The linear-regression structural model, however, captures that relation only in terms of the average tendency. When the relationship between variables must be explored at various locations along the response distribution, quantile regression is more appropriate. To this end, a quantile-regression-type structural model, termed the quantile MLIRT (Q-MLIRT) model, is introduced within the MLIRT framework. The parameters of the proposed model are estimated using the Gibbs sampling algorithm, and a simulation study compares it with the original (i.e., linear-regression-type) MLIRT model. The results show that the parameters of the Q-MLIRT model are recovered well at different quantiles. Finally, a subset of data from PISA 2018 is analyzed to illustrate the application of the proposed model.
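The quantile-regression structural part replaces the least-squares criterion with the check loss. A minimal sketch follows, with hypothetical data and with the latent trait treated as observed to keep it short (in the real model, theta is latent and estimated jointly via Gibbs sampling).

```python
import numpy as np
from scipy.optimize import minimize

def check_loss(u, tau):
    """Quantile-regression check function: rho_tau(u) = u * (tau - I(u < 0))."""
    return u * (tau - (u < 0))

# Structural part of a Q-MLIRT-style model: instead of modeling
# E[theta | x] = x @ beta, model the tau-th conditional quantile
# Q_tau(theta | x) = x @ beta_tau.
rng = np.random.default_rng(4)
x = rng.normal(size=(500, 2))
theta = 0.5 * x[:, 0] - 0.3 * x[:, 1] + rng.standard_t(3, 500)

tau = 0.9
fit = minimize(lambda beta: check_loss(theta - x @ beta, tau).sum(),
               x0=np.zeros(2), method="Nelder-Mead")
print(np.round(fit.x, 2))   # coefficients of the 0.9 conditional quantile
```

With heavy-tailed residuals like the t(3) draw above, the coefficients can differ meaningfully across quantiles, which is exactly the information a mean-only structural model discards.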


2005
Author(s):
Yanyan Sheng

As item response theory (IRT) models gain popularity in large-scale educational and psychological testing, many studies have examined the development and application of unidimensional and multidimensional models. To date, however, no study has considered models within the IRT framework that posit an overall ability dimension underlying all test items together with several ability dimensions specific to each subtest. This study proposes such a model and compares it with conventional IRT models using Bayesian methodology. The results suggest that the proposed model better represents test situations not captured by existing models. The model’s specifications also carry implications for test developers in designing tests. In addition, the proposed IRT model can be applied in other areas, such as intelligence and psychological assessment.
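The proposed structure, an overall ability on every item plus subtest-specific abilities, is essentially additive. A numpy data-generating sketch, with hypothetical dimensions and names:

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 1000, 12
subtest = np.repeat([0, 1, 2], 4)        # three subtests of four items each

theta_g = rng.normal(0, 1, n)            # overall ability, on every item
theta_s = rng.normal(0, 1, (n, 3))       # subtest-specific abilities

a_g = rng.uniform(0.8, 1.5, k)           # loadings on the overall trait
a_s = rng.uniform(0.5, 1.2, k)           # loadings on the specific trait
b = rng.normal(0, 1, k)                  # item difficulties

lin = a_g * theta_g[:, None] + a_s * theta_s[:, subtest] - b
y = (rng.random((n, k)) < 1 / (1 + np.exp(-lin))).astype(int)
```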


2020
Vol 63 (6)
pp. 1916-1932
Author(s):
Haiying Yuan
Christine Dollaghan

Purpose: No diagnostic tools exist for identifying social (pragmatic) communication disorder (SPCD), a new Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition category for individuals with social communication deficits but not the repetitive, restricted behaviors and interests (RRBIs) that would qualify them for a diagnosis of autism spectrum disorder (ASD). We explored the value of items from a widely used screening measure of ASD for distinguishing SPCD from typical controls (TC; Aim 1) and from ASD (Aim 2).
Method: We applied item response theory (IRT) modeling to Social Communication Questionnaire–Lifetime (Rutter, Bailey, & Lord, 2003) records available in the National Database for Autism Research (NDAR). We defined records from putative SPCD (n = 54), ASD (n = 278), and TC (n = 274) groups retrospectively, based on NDAR classifications and Autism Diagnostic Interview–Revised responses. After assessing model assumptions, estimating model parameters, and measuring model fit, we identified items in the social communication and RRBI domains that were maximally informative in differentiating the groups.
Results: IRT modeling identified a set of seven social communication items that distinguished SPCD from TC with sensitivity and specificity > 80%. A set of five RRBI items was less successful in distinguishing SPCD from ASD (sensitivity and specificity < 70%).
Conclusion: The IRT modeling approach and the Social Communication Questionnaire–Lifetime item sets it identified may be useful in efforts to construct screening and diagnostic measures for SPCD.
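“Maximally informative” has a precise IRT meaning: item information. A short sketch of how items can be ranked on it (discriminations and difficulties made up for illustration, not the SCQ estimates):

```python
import numpy as np

def item_information(theta, a, b):
    """Fisher information of a 2PL item at theta: a^2 * P * (1 - P)."""
    p = 1 / (1 + np.exp(-a * (theta - b)))
    return a**2 * p * (1 - p)

# Rank hypothetical screening items by the information they carry near
# the region of theta where the groups are expected to differ.
a = np.array([1.8, 0.7, 1.2, 2.1, 0.9])   # discriminations
b = np.array([0.2, -1.0, 0.5, 0.1, 1.5])  # difficulties
info = item_information(0.0, a, b)
print(info.argsort()[::-1])               # most informative items first
```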

