A Lognormal Ipsative Model for Multidimensional Compositional Items

2021 ◽  
Vol 12 ◽  
Author(s):  
Chia-Wen Chen ◽  
Wen-Chung Wang ◽  
Magdalena Mo Ching Mok ◽  
Ronny Scherer

Compositional items – a form of forced-choice item – require respondents to allocate a fixed total number of points to a set of statements. The Thurstonian item response theory (IRT) model was developed to describe responses to these items. Despite its prominence, the model requires that the items' component statements yield a factor loading matrix of full rank. Without this requirement, the model cannot be identified, and the latent trait estimates would be seriously biased. Moreover, estimation of the Thurstonian IRT model often runs into convergence problems. To address these issues, this study developed a new version of the Thurstonian IRT model for analyzing compositional items – the lognormal ipsative model (LIM) – which suffices for tests in which all statements are positively phrased and factor loadings are equal. We developed an online value test following Schwartz’s values theory using compositional items and collected response data from N = 512 participants aged 13 to 51 years. The results showed that the LIM had an acceptable fit to the data and that reliabilities exceeded 0.85. A simulation study showed good parameter recovery, high convergence rates, and sufficient estimation precision across various conditions of trait covariance matrices, test lengths, and sample sizes. Overall, our results indicate that the proposed model can overcome the problems of the Thurstonian IRT model when all statements are positively phrased and factor loadings are similar.
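As a rough sketch of the ipsative mechanism this abstract describes (with made-up parameter names and a simplified lognormal allocation rule, not the authors' exact parameterization), a compositional response can be simulated as lognormal statement utilities renormalized to a fixed point total:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_compositional(theta, delta, total=10, sigma=0.5, rng=rng):
    """Illustrative lognormal ipsative mechanism: each statement's raw
    'utility' is lognormal around trait level + statement intercept, and
    the allocated points are the utility shares rescaled to `total`."""
    log_u = theta + delta + rng.normal(0.0, sigma, size=delta.shape)
    share = np.exp(log_u) / np.exp(log_u).sum()
    points = np.round(share * total).astype(int)
    # fix rounding so the allocation stays ipsative (sums to `total`)
    points[np.argmax(share)] += total - points.sum()
    return points

theta = np.array([1.0, 0.0, -0.5])  # hypothetical trait levels per statement's dimension
delta = np.zeros(3)                 # hypothetical statement intercepts
resp = simulate_compositional(theta, delta)
```

Because the shares always sum to one, every simulated response satisfies the ipsative constraint, which is precisely what makes a full-rank loading matrix unattainable for standard Thurstonian models.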

2018 ◽  
Vol 79 (3) ◽  
pp. 462-494 ◽  
Author(s):  
Ken A. Fujimoto

Advancements in item response theory (IRT) have led to models for dual dependence, which control for cluster and method effects during a psychometric analysis. Currently, however, this class of models does not include one that controls for method effects stemming from two method sources in which one source functions differently across the aspects of another source (i.e., a nested method–source interaction). This study therefore proposes a Bayesian IRT model that accounts for such interaction among method sources while controlling for the clustering of individuals within the sample. The proposed model accomplishes these tasks by specifying a multilevel trifactor structure for the latent trait space. Reported simulations demonstrate that this model can identify when item response data represent a multilevel trifactor structure, and it does so in data from samples as small as 250 cases nested within 50 clusters. The simulations also show that misleading estimates of the item discriminations can arise when the trifactor structure reflected in the data is not correctly accounted for. The utility of the model is further illustrated through the analysis of empirical data.
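As a minimal illustration of the trifactor idea (a sketch with hypothetical parameter names, not the paper's Bayesian specification), an item's endorsement probability combines a general trait with a method-source factor and a nested source-interaction factor:

```python
import numpy as np

def trifactor_prob(theta_g, theta_src, theta_nested,
                   a_g, a_src, a_nested, b):
    """Endorsement probability when an item loads on the general trait,
    one method-source factor, and one nested source-interaction factor
    (illustrative two-parameter-logistic-style parameterization)."""
    logit = a_g * theta_g + a_src * theta_src + a_nested * theta_nested - b
    return 1.0 / (1.0 + np.exp(-logit))
```

Ignoring the method-factor loadings (`a_src`, `a_nested`) collapses this to a unidimensional model, which is the misspecification the simulations show can distort the discrimination estimates.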


2020 ◽  
Author(s):  
Alexander P. Christensen ◽  
Hudson Golino

Recent research has demonstrated that the network measure node strength (the sum of a node’s connections) is roughly equivalent to confirmatory factor analysis (CFA) loadings. A key finding of this research is that node strength represents a combination of different latent causes. In the present research, we sought to circumvent this issue by formulating a network equivalent of factor loadings, which we call network loadings. In two simulations, we evaluated whether these network loadings could effectively (1) separate the effects of multiple latent causes and (2) estimate the simulated factor loading matrix of factor models. Our findings suggest that network loadings can do both. In addition, we leveraged the second simulation to derive effect size guidelines for network loadings. In a third simulation, we evaluated the similarities and differences between factor and network loadings when the data were generated from random, factor, and network models. We found sufficient differences between the loadings, which allowed us to develop an algorithm, called the Loadings Comparison Test (LCT), to predict the data-generating model. The LCT had high sensitivity and specificity when predicting the data-generating model. In sum, our results suggest that network loadings can provide similar information to factor loadings when the data are generated from a factor model and can therefore be used in similar ways (e.g., item selection, measurement invariance, factor scores).
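The core computation behind network loadings can be sketched as splitting each node's strength by the community of its neighbors (a hypothetical implementation of the idea, not the authors' exact algorithm), yielding a node-by-community matrix analogous to a factor-loading matrix:

```python
import numpy as np

def network_loadings(W, communities):
    """For each node, sum its absolute edge weights separately within
    each community, giving a node-by-community 'loading' matrix
    (illustrative sketch of the network-loading idea)."""
    W = np.abs(np.asarray(W, dtype=float))
    comms = sorted(set(communities))
    L = np.zeros((W.shape[0], len(comms)))
    for j, c in enumerate(comms):
        idx = [k for k, ck in enumerate(communities) if ck == c]
        L[:, j] = W[:, idx].sum(axis=1)
    return L

# toy symmetric partial-correlation network: nodes 0-1 and 2-3 cluster together
W = np.array([[0.0, 0.6, 0.1, 0.0],
              [0.6, 0.0, 0.0, 0.1],
              [0.1, 0.0, 0.0, 0.7],
              [0.0, 0.1, 0.7, 0.0]])
L = network_loadings(W, [0, 0, 1, 1])
```

Each row of `L` sums to the node's total strength; the split by community is what lets the measure separate multiple latent causes rather than conflating them.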


2021 ◽  
pp. 014662162110138
Author(s):  
Joseph A. Rios ◽  
James Soland

Suboptimal effort is a major threat to valid score-based inferences. While the effects of such behavior have been frequently examined in the context of mean group comparisons, minimal research has considered its effects on individual score use (e.g., identifying students for remediation). Focusing on the latter context, this study addressed two related questions via simulation and applied analyses. First, we investigated how much including noneffortful responses in scoring using a three-parameter logistic (3PL) model affects person parameter recovery and classification accuracy for noneffortful responders. Second, we explored whether improvements in these individual-level inferences were observed when employing the Effort Moderated IRT (EM-IRT) model under conditions in which its assumptions were met and violated. Results demonstrated that including 10% noneffortful responses in scoring led to average bias in ability estimates and misclassification rates by as much as 0.15 SDs and 7%, respectively. These results were mitigated when employing the EM-IRT model, particularly when model assumptions were met. However, once model assumptions were violated, the EM-IRT model’s performance deteriorated, though still outperforming the 3PL model. Thus, findings from this study show that (a) including noneffortful responses when using individual scores can lead to potential unfounded inferences and potential score misuse, and (b) the negative impact that noneffortful responding has on person ability estimates and classification accuracy can be mitigated by employing the EM-IRT model, particularly when its assumptions are met.
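The contrast between the two scoring models can be sketched as follows (an illustrative implementation with hypothetical parameter names; the EM-IRT model replaces the 3PL response function with chance-level guessing for flagged noneffortful responses):

```python
import numpy as np

def p_3pl(theta, a, b, c):
    """Three-parameter logistic (3PL) probability of a correct response."""
    return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

def p_em_irt(theta, a, b, c, effortful, k=4):
    """Effort-moderated sketch: effortful responses follow the 3PL;
    flagged noneffortful responses (e.g., rapid guesses) are modeled as
    random guessing among k options."""
    return p_3pl(theta, a, b, c) if effortful else 1.0 / k
```

Because noneffortful responses no longer contribute information about theta, flagged rapid guesses stop dragging down the ability estimate, which is the mechanism behind the reduced bias and misclassification reported here.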


2021 ◽  
Author(s):  
Masaki Uto

Abstract. Performance assessment, in which human raters assess examinee performance on a practical task, often involves a scoring rubric consisting of multiple evaluation items to increase the objectivity of evaluation. However, even when a rubric is used, assigned scores are known to depend on characteristics of the rubric’s evaluation items and of the raters, decreasing the accuracy of ability measurement. To resolve this problem, item response theory (IRT) models have been proposed that estimate examinee ability while accounting for the effects of these characteristics. These IRT models assume unidimensionality, meaning that a rubric measures one latent ability. In practice, however, this assumption might not be satisfied, because a rubric’s evaluation items are often designed to measure multiple sub-abilities that constitute a targeted ability. To address this issue, this study proposes a multidimensional IRT model for rubric-based performance assessment. Specifically, the proposed model is formulated as a multidimensional extension of a generalized many-facet Rasch model. Moreover, a No-U-Turn variant of Hamiltonian Monte Carlo, a Markov chain Monte Carlo algorithm, is adopted for parameter estimation. The proposed model is useful not only for improving the accuracy of ability measurement but also for detailed analysis of rubric quality and rubric construct validity. The study demonstrates the effectiveness of the proposed model through simulation experiments and application to real data.
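A multidimensional many-facet Rasch-type model of this flavor can be sketched as follows (an illustrative parameterization, not necessarily the paper's exact formulation): the examinee's ability on the dimension an evaluation item measures is offset by item difficulty, rater severity, and category thresholds:

```python
import numpy as np

def gmfrm_probs(theta, dim, alpha_r, beta_i, d_r, tau):
    """Score-category probabilities for one evaluation item rated by one
    rater (sketch): theta holds abilities per dimension, `dim` is the
    dimension this item measures, alpha_r is rater discrimination,
    beta_i item difficulty, d_r rater severity, tau the step thresholds."""
    steps = alpha_r * (theta[dim] - beta_i - d_r - np.asarray(tau))
    z = np.concatenate(([0.0], np.cumsum(steps)))   # adjacent-category logits
    ez = np.exp(z - z.max())                        # stabilized softmax
    return ez / ez.sum()

probs = gmfrm_probs(theta=np.array([1.0, -0.5]), dim=0,
                    alpha_r=1.0, beta_i=0.2, d_r=0.1, tau=[-0.6, 0.6])
```

Allowing `dim` (or, more generally, a loading vector) to vary across a rubric's evaluation items is what relaxes the unidimensionality assumption of earlier rater-effect IRT models.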


Author(s):  
Pallavi Mirajkar ◽  
Rupali Dahake

The novel coronavirus disease 2019 (COVID-19) pandemic caused by SARS-CoV-2 continues to pose a serious and critical threat to global health. The pandemic continues to test clinical systems worldwide in many respects, including sharp increases in demand for hospital beds and critical shortages of medical equipment, while many healthcare workers have themselves been infected. We propose an analytical model that predicts a positive SARS-CoV-2 infection by considering both common and severe symptoms in patients. The proposed model operates on response data indicating which COVID-19 symptoms each individual reports. Consequently, the proposed model can be used for effective screening and prioritization of testing for the infection in the general population.
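A symptom-based screening score of this kind can be sketched as a logistic model over reported symptoms (the weights and bias below are invented for illustration, not the authors' fitted coefficients; severe symptoms are simply weighted higher than common ones):

```python
import numpy as np

# Hypothetical symptom weights (illustrative only): severe symptoms such
# as breathlessness and loss of smell carry more weight than common ones.
WEIGHTS = {"fever": 1.0, "cough": 0.8, "fatigue": 0.5,
           "breathlessness": 1.8, "loss_of_smell": 2.0}
BIAS = -3.0

def screening_probability(symptoms):
    """Logistic probability of a positive result from yes/no symptom
    reports; unknown symptom names are ignored."""
    z = BIAS + sum(WEIGHTS[s] for s in symptoms if s in WEIGHTS)
    return 1.0 / (1.0 + np.exp(-z))
```

Ranking individuals by this probability is one simple way such a model could prioritize scarce tests, as the abstract suggests.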


Author(s):  
Ralph B. D'Agostino ◽  
Heidy K. Russell

2006 ◽  
Vol 31 (1) ◽  
pp. 63-79 ◽  
Author(s):  
Henry May

A new method is presented and implemented for deriving a scale of socioeconomic status (SES) from international survey data using a multilevel Bayesian item response theory (IRT) model. The proposed model incorporates both international anchor items and nation-specific items and is able to (a) produce student family SES scores that are internationally comparable, (b) reduce the influence of irrelevant national differences in culture on the SES scores, and (c) effectively and efficiently deal with the problem of missing data in a manner similar to Rubin’s (1987) multiple imputation approach. The results suggest that this model is superior to conventional models in terms of its fit to the data and its ability to use information collected via international surveys.
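The anchor/nation-specific item design can be sketched as follows (the item names are hypothetical, invented for illustration): anchor items are administered everywhere, while nation-specific items enter a student's likelihood only where they are meaningful and otherwise drop out, analogous to the model's imputation-style handling of missingness:

```python
# Hypothetical item design for an international SES scale: anchor items
# are common to all nations; nation-specific items apply only locally.
ITEMS = {"books_at_home": "anchor",
         "parent_education": "anchor",
         "owns_car": "nation_specific",
         "has_servant": "nation_specific"}

def observed_items(nation_items):
    """Items contributing to a student's SES estimate in a given nation:
    all anchor items plus that nation's own specific items; the rest are
    treated as missing and drop out of the likelihood."""
    return [name for name, kind in ITEMS.items()
            if kind == "anchor" or name in nation_items]
```

Because every nation shares the anchor items, the resulting SES scores stay on a common international scale even though the specific items differ.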


Author(s):  
James A. Mynderse ◽  
George T. C. Chiu

A dynamic mirror actuator utilizing antagonistic piezoelectric stack actuators is presented for use in laser printers. Exhibiting hysteresis and other nonlinearities in open-loop operation, the dynamic mirror actuator (DMA) requires a control structure to achieve accurate mirror positioning. A linear DMA model is developed for extending operational bandwidth under closed-loop control, employing explicit piezoelectric stack actuator (PESA) charging dynamics and incorporating two modes for single-input control of opposing PESA drives. Compared to constitutive models from the literature, the proposed model displays a comparable fit to experimental frequency response data while retaining a lower model order. As further validation, simulated step response data are shown to agree with experimental data.
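As a generic illustration of the kind of step-response comparison described above (a standard second-order linear model with invented parameters; the paper's DMA model additionally includes the PESA charging dynamics), the closed-form underdamped step response is:

```python
import numpy as np

def step_response(wn, zeta, t):
    """Closed-form unit step response of an underdamped second-order
    linear system with natural frequency wn (rad/s) and damping ratio
    zeta < 1 -- a common low-order stand-in for actuator dynamics."""
    wd = wn * np.sqrt(1.0 - zeta**2)                 # damped frequency
    return 1.0 - np.exp(-zeta * wn * t) * (
        np.cos(wd * t) + (zeta * wn / wd) * np.sin(wd * t))

t = np.linspace(0.0, 0.05, 500)
y = step_response(2 * np.pi * 500.0, 0.3, t)  # hypothetical 500 Hz mode
```

Overlaying such a simulated trace on measured step data is the validation step the abstract refers to; a low-order model that reproduces the measured overshoot and settling is preferable for controller design.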



2008 ◽  
Vol 24 (1) ◽  
pp. 49-56 ◽  
Author(s):  
Wolfgang A. Rauch ◽  
Karl Schweizer ◽  
Helfried Moosbrugger

Abstract. In this study the psychometric properties of the Personal Optimism scale of the POSO-E questionnaire (Schweizer & Koch, 2001) for the assessment of dispositional optimism are evaluated by applying Samejima's (1969) graded response model, a parametric item response theory (IRT) model for polytomous data. Model fit is extensively evaluated via fit checks on the lower-order margins of the contingency table of observed and expected responses and visual checks of fit plots comparing observed and expected category response functions. The model proves appropriate for the data; a small amount of misfit is interpreted in terms of previous research using other measures of optimism. Item parameters and information functions show that optimism can be measured accurately, especially at moderately low to middle levels of the latent trait scale, and particularly by the negatively worded items.
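The category response functions evaluated here come from the graded response model, whose standard form can be sketched directly (cumulative two-parameter logistics differenced into category probabilities; parameter values below are illustrative):

```python
import numpy as np

def grm_category_probs(theta, a, b):
    """Samejima-style graded response model: cumulative probabilities of
    responding in category k or above, differenced into category response
    probabilities. `b` must be an increasing vector of K thresholds,
    giving K + 1 response categories."""
    b = np.asarray(b, dtype=float)
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - b)))  # P(X >= k), k = 1..K
    cum = np.concatenate(([1.0], p_star, [0.0]))
    return cum[:-1] - cum[1:]

# illustrative item: discrimination 1.5, thresholds at -1, 0, 1
p = grm_category_probs(0.0, 1.5, [-1.0, 0.0, 1.0])
```

Plotting `grm_category_probs` across a grid of theta values produces exactly the expected category response functions the fit plots in this study compare against the observed ones.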

