person parameter
Recently Published Documents

TOTAL DOCUMENTS: 19 (FIVE YEARS: 5)
H-INDEX: 6 (FIVE YEARS: 1)

POETICA ◽ 2021 ◽ Vol 52 (3-4) ◽ pp. 361-386
Author(s): José A. Álvarez-Amorós

Abstract: Taking its cue from the critical treatment given to unreliable narration by Wayne C. Booth and his early followers, and in contrast to the claims often made in the field of authentication theory, this paper seeks to join the debate on “third-person” narrative unreliability by outlining an inclusive approach to this phenomenon in which the “person” parameter need not be a determining factor. To theorize and illustrate this approach, a methodological context is first developed by juxtaposing Genette’s revisionist stance on voice and perception with Booth’s 1961 dismissal of the vocal issue and his controversial assimilation of tellers and observers. Then Ryan’s dissenting views are addressed by identifying common ground between her idea of the impersonal narrator and the principles of inclusivity, which rest precisely on the impersonating potential of that figure. Finally, the inclusive conception of unreliability is shown at work in three Jamesian tales – “The Aspern Papers” (1888), “The Liar” (1888), and “The Beast in the Jungle” (1903) – whose different vocal options do not seem to immunize their narrators against charges of untrustworthiness.


2021 ◽ Vol 6
Author(s): Stephen Humphry ◽ Paul Montuoro

This article demonstrates that the Rasch model cannot reveal systematic differential item functioning (DIF) in single tests: the person total score is the sufficient statistic for the person parameter estimate, which eliminates the possibility of residuals at the test level. An alternative approach is to use subset DIF analysis to search for DIF in item subsets that form the components of the broader latent trait. In this methodology, person parameter estimates are initially calculated using all test items. Then, in separate analyses, these person estimates are compared with the observed means in each subset, and the residuals are assessed. As such, the methodology tests the assumption that the person locations in each factor group are invariant across subsets. The first objective is to demonstrate that, in single tests, differences between factor groups appear as differences in the mean person estimates and in the distributions of those estimates. The second objective is to demonstrate how subset DIF analysis reveals differences between person estimates and the observed means in subsets. Implications for practitioners are discussed.
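The subset procedure lends itself to a compact illustration. Below is a minimal sketch in Python (our own code, not the authors'); it assumes a dichotomous Rasch model with known item difficulties, and the function names, the clamping of extreme estimates, and the grouping variable are invented for the example:

```python
import numpy as np

def rasch_prob(theta, b):
    """Probability of a correct response under the dichotomous Rasch model."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def estimate_theta(responses, b, n_iter=20):
    """Newton-Raphson ML person estimate from ALL test items, given known
    item difficulties b (perfect/zero scores are crudely clamped here)."""
    theta = 0.0
    for _ in range(n_iter):
        p = rasch_prob(theta, b)
        theta += np.sum(responses - p) / np.sum(p * (1.0 - p))
        theta = float(np.clip(theta, -6.0, 6.0))
    return theta

def subset_dif_residuals(data, b, subset_idx, groups):
    """Mean (observed - expected) subset score per factor group.
    A systematic gap between groups suggests DIF concentrated in
    that subset of items."""
    thetas = np.array([estimate_theta(row, b) for row in data])
    expected = np.array([rasch_prob(t, b[subset_idx]).sum() for t in thetas])
    observed = data[:, subset_idx].sum(axis=1)
    residuals = observed - expected
    return {g: residuals[groups == g].mean() for g in np.unique(groups)}
```

Person estimates come from the full test, exactly as the abstract describes, so any group-level misfit must surface in the subset-level residuals rather than at the test level.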


2020 ◽ Vol 44 (4) ◽ pp. 327-328
Author(s): Pere J. Ferrando ◽ David Navarro-González

InDisc is an R package that implements procedures for estimating and fitting unidimensional Item Response Theory (IRT) Dual Models (DMs). DMs are intended for personality and attitude measures and are, essentially, extended standard IRT models with an extra person parameter that models the discriminating power of the individual. The package consists of a main function, which calls subfunctions for fitting binary, graded, and continuous responses. The program, a detailed user's guide, and an empirical example are available at no cost to the interested practitioner.
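In schematic form, the dual-model idea for the binary case can be written as a standard two-parameter model augmented with a person discrimination parameter. The notation below is our own sketch of that idea, not necessarily InDisc's exact parameterization:

```latex
% Standard 2PL: only the item discriminates (a_j)
P(X_{ij} = 1 \mid \theta_i) = \frac{1}{1 + \exp\{-a_j(\theta_i - b_j)\}}

% Dual-model sketch: an extra person parameter \phi_i > 0 scales how
% sharply person i's responses track the latent trait
P(X_{ij} = 1 \mid \theta_i, \phi_i) =
  \frac{1}{1 + \exp\{-\phi_i\, a_j(\theta_i - b_j)\}}
```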


2018 ◽ Vol 43 (3) ◽ pp. 226-240
Author(s): Philseok Lee ◽ Seang-Hwane Joo ◽ Stephen Stark ◽ Oleksandr S. Chernyshenko

Historically, multidimensional forced choice (MFC) measures have been criticized because conventional scoring methods can lead to ipsativity problems that render scores unsuitable for interindividual comparisons. However, with the recent advent of item response theory (IRT) scoring methods that yield normative information, MFC measures are surging in popularity and becoming important components in high-stakes evaluation settings. This article aims to add to burgeoning methodological advances in MFC measurement by focusing on statement and person parameter recovery for the GGUM-RANK (generalized graded unfolding-RANK) IRT model. A Markov chain Monte Carlo (MCMC) algorithm was developed for estimating GGUM-RANK statement and person parameters directly from MFC rank responses. Simulation studies examined how the psychometric properties of the statements composing MFC items, test length, and sample size influenced statement and person parameter estimation, and explored the benefits of measurement using MFC triplets relative to pairs. To demonstrate the methodology, an empirical validity study was then conducted using an MFC triplet personality measure. The results and implications of these studies for future research and practice are discussed.
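The core of any such MCMC scheme is a simple accept/reject update per parameter. The sketch below (Python, our own illustration) shows one random-walk Metropolis step for a single person parameter under a standard-normal prior; the GGUM-RANK rank-response log-likelihood itself is not reproduced and is represented by a placeholder callable:

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_step(theta, log_lik, step=0.5):
    """One random-walk Metropolis update for a person parameter.
    `log_lik(theta)` stands in for the GGUM-RANK rank-response
    log-likelihood of that person's responses."""
    proposal = theta + step * rng.standard_normal()
    log_ratio = (log_lik(proposal) - 0.5 * proposal**2) \
              - (log_lik(theta) - 0.5 * theta**2)
    if np.log(rng.uniform()) < log_ratio:
        return proposal   # accept the proposed move
    return theta          # reject and keep the current value
```

Iterating updates of this kind across persons, with analogous moves for statement parameters, yields draws from the joint posterior from which point estimates and credible intervals can be taken.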


Psihologija ◽ 2015 ◽ Vol 48 (4) ◽ pp. 345-360
Author(s): Jörg Müller ◽ Petra Hasselbach ◽ Adrian Loerbroks ◽ Manfred Amelang

Person-fit methodology is a promising technique for identifying subjects whose test scores have questionable validity. Less is known, however, about this technique's ability to predict survey participation longitudinally. This study presents theory-derived expectations relating social desirability, the tendency toward extreme responding, and traitedness to specific deviating answer patterns, together with an expected consistency of person-fit scores across 27 personality scales. Data from 5,114 subjects (Amelang, 1997) were reanalysed with a polytomous Rasch model to estimate scale scores and von Davier and Molenaar's (2003) person-fit statistics. The person-fit statistics of the 27 scales were examined together with the 27 person parameter scores in one common factor analysis. The person-fit scores served as indicators of the latent factor 'scalability', while the person-parameter scores were considered to index the bias introduced by social desirability. The signs of the factor loadings supported the consistency and validity of the tendencies toward social desirability and extreme responding. Moreover, the person-fit-based subject classification derived from the baseline data was able to predict subjects' participation at an 8.5-year follow-up. However, the nature of those associations was contrary to our predictions. The discussion addresses explanations and practical implications, but also the limitations pertaining to the identification and interpretation of person-fit scores.
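The study uses von Davier and Molenaar's polytomous person-fit statistics, which are not reproduced here; to convey the flavor of person-fit scoring, the sketch below (Python, our own illustration) computes the classical standardized log-likelihood statistic lz for a dichotomous response pattern:

```python
import numpy as np

def lz_person_fit(x, p):
    """Standardized log-likelihood person-fit statistic (lz) for a
    binary response pattern x, given model-implied probabilities p.
    Large negative values flag aberrant (poorly fitting) patterns."""
    x = np.asarray(x, dtype=float)
    p = np.asarray(p, dtype=float)
    l0 = np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))
    expected = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
    variance = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)
    return (l0 - expected) / np.sqrt(variance)
```

Centering and scaling the observed log-likelihood in this way is what lets fit scores from many scales be pooled, as in the common factor analysis described above.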


Author(s): John J. Barnard

This article briefly touches on how different measurement theories can be used to score responses on multiple choice questions (MCQs). How missing data are treated may have a profound effect on a person's score and is dealt with most elegantly in modern theories. The issue of guessing a correct answer has been a topic of discussion for many years. It is asserted that test takers almost never have no knowledge whatsoever of the content in an appropriate test and therefore tend to make educated guesses rather than random guesses. Problems related to the classical correction for guessing are highlighted, and the Rasch approach of using fit statistics to identify possible guessing is briefly discussed. The three-parameter 'logistic' item response theory (IRT) model includes a 'guessing item parameter' to indicate the chances that a test taker guessed the correct answer to an item. However, it is pointed out that it is a person that guesses, not an item, and therefore a guessing parameter should be a person parameter. Option probability theory (OPT) purports to overcome this problem by requiring an indication of the degree of certainty the test taker has that a particular option is the correct one. Realistic allocations of these probabilities indicate the degree of guessing and hence yield more precise measures of ability.
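For concreteness, the two classical devices the abstract contrasts can be written down directly. These are standard textbook formulas sketched in Python, not code from the article:

```python
import math

def corrected_score(n_right, n_wrong, k):
    """Classical correction for guessing: R - W/(k - 1),
    where k is the number of options per item."""
    return n_right - n_wrong / (k - 1)

def p_3pl(theta, a, b, c):
    """Three-parameter logistic (3PL) model: c is the item-level
    pseudo-guessing parameter that the article argues should
    instead attach to the person."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))
```

The correction formula penalizes wrong answers on the assumption of purely random guessing, which is exactly the assumption the article disputes; in the 3PL, c fixes a floor probability per item regardless of who is responding.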

