item response
Recently Published Documents


TOTAL DOCUMENTS: 3901 (FIVE YEARS: 1059)
H-INDEX: 97 (FIVE YEARS: 7)

2022 ◽  
Author(s):  
Achmad Shabir

The aim of this study was to describe the quality of the English testing instrument used in the Try Out National Exam conducted by 40 junior high schools in Makassar, Sulawesi Selatan, using Item Response Theory (IRT), in particular the one-parameter (1PL), two-parameter (2PL), and three-parameter (3PL) logistic models. The data consist of 1,267 students' answer sheets; the test comprises 50 multiple-choice items. Results showed that the test performs well on both item difficulty and item discrimination, as suggested by the 1PL and 2PL estimations. Under the 3PL estimation, however, the test was unable to discriminate students' ability, and 38% of the items were easy to guess.
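The 1PL, 2PL, and 3PL models named in the abstract form a nested hierarchy that is easy to sketch in code. The following is a minimal, generic illustration (not the study's own analysis code); `a`, `b`, and `c` are the standard discrimination, difficulty, and guessing parameters:

```python
import math

def icc_3pl(theta, a=1.0, b=0.0, c=0.0):
    """Item characteristic curve under the 3PL model: probability of a
    correct response at ability theta. Setting c=0 reduces the model to
    the 2PL; additionally fixing a=1 gives the 1PL (Rasch) model."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# At theta == b the 1PL/2PL probability is exactly 0.5, while a nonzero
# guessing parameter lifts the curve's lower asymptote to c: even very
# low-ability examinees answer an easy-to-guess item correctly roughly
# c of the time.
p_rasch = icc_3pl(0.0)                         # 1PL at theta = b
p_guess = icc_3pl(-6.0, a=1.2, b=0.0, c=0.25)  # dominated by guessing
```

In these terms, a 3PL fit flagging 38% of the items as "easy to guess" corresponds to many items receiving large estimated `c` values.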


Author(s):  
Maria Brucato ◽  
Andrea Frick ◽  
Stefan Pichelmann ◽  
Alina Nazareth ◽  
Nora S. Newcombe

2022 ◽  
Author(s):  
Neil Hester ◽  
Jordan Axt ◽  
Eric Hehman

Racial attitudes, beliefs, and motivations lie at the center of many of the most influential theories of prejudice and discrimination. The extent to which such theories can meaningfully explain behavior hinges on accurate measurement of these latent constructs. We evaluated the validity properties of 25 race-related scales in a sample of 1,031,207 respondents using modern approaches such as dynamic fit indices, Item Response Theory, and nomological nets. Despite showing adequate internal reliability, many scales demonstrated poor model fit and had latent score distributions showing clear floor or ceiling effects, results that illustrate deficiencies in measures’ ability to capture their intended construct. Nomological nets further suggested that the theoretical space of “racial prejudice” is crowded with scales that may not actually capture meaningfully distinct latent constructs. We provide concrete recommendations for scale selection and renovation and outline implications for overlooking measurement issues in the study of prejudice and discrimination.
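Two of the symptoms the abstract describes, adequate internal reliability coexisting with floor or ceiling effects in the score distribution, can be checked with very little code. This is a generic sketch on hypothetical data, not the authors' pipeline:

```python
import statistics

def cronbach_alpha(items):
    """Internal-consistency reliability. `items` is a list of per-item
    score lists, all over the same respondents in the same order."""
    k = len(items)
    item_var = sum(statistics.pvariance(col) for col in items)
    totals = [sum(resp) for resp in zip(*items)]  # total score per respondent
    return (k / (k - 1)) * (1.0 - item_var / statistics.pvariance(totals))

def floor_ceiling(scores, lo, hi):
    """Fractions of respondents at the scale minimum and maximum; large
    values signal floor or ceiling effects."""
    n = len(scores)
    return (sum(s == lo for s in scores) / n,
            sum(s == hi for s in scores) / n)
```

A high alpha does not rule out a floor or ceiling effect, which is exactly the dissociation the abstract reports for many of the 25 scales.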


2022 ◽  
Vol 12 ◽  
Author(s):  
Feifei Huang ◽  
Zhe Li ◽  
Ying Liu ◽  
Jingan Su ◽  
Li Yin ◽  
...  

Educational assessment tests are often constructed from testlets because of their flexibility in testing various aspects of cognitive activity and their broad content sampling. However, the violation of the local item independence assumption is inevitable when tests are built from testlet items. In this study, simulations are conducted to evaluate the performance of item response theory models and testlet response theory models for both dichotomous and polytomous items in the context of equating tests composed of testlets. We also examine the impact of the testlet effect, the length of testlet items, and sample size on the estimation of item and person parameters. The results show that testlet response theory models consistently outperformed item response theory models across the studies, which supports the benefits of using testlet response theory models when equating tests composed of testlets. The results further indicate that when the sample size is large, item response theory models performed similarly to testlet response theory models across all studies.
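The local dependence that motivates testlet models can be made concrete with a small generative sketch. The following is an illustrative Rasch-type testlet model (the function name and parameter values are ours, not the study's simulation design): each person draws a random effect `gamma` for every testlet, shared by all items within that testlet, which induces within-testlet dependence; `testlet_sd=0` recovers an ordinary, locally independent Rasch model.

```python
import math
import random

def simulate_testlet_responses(n_persons=500, n_testlets=5, items_per=4,
                               testlet_sd=1.0, seed=1):
    """Dichotomous responses from a Rasch-type model with a shared
    person-by-testlet effect (gamma) inducing local dependence among
    items in the same testlet. Returns one 0/1 row per person."""
    rng = random.Random(seed)
    # Fixed item difficulties, grouped by testlet.
    b = [[rng.uniform(-1.5, 1.5) for _ in range(items_per)]
         for _ in range(n_testlets)]
    data = []
    for _ in range(n_persons):
        theta = rng.gauss(0, 1)               # person ability
        row = []
        for t in range(n_testlets):
            gamma = rng.gauss(0, testlet_sd)  # shared within this testlet
            for i in range(items_per):
                p = 1.0 / (1.0 + math.exp(-(theta + gamma - b[t][i])))
                row.append(int(rng.random() < p))
        data.append(row)
    return data
```

Fitting a standard IRT model to data generated this way ignores `gamma`, which is the misspecification the simulation studies above quantify.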


2022 ◽  
Vol 3 (1) ◽  
pp. 01-19
Author(s):  
O. M. Adetutu ◽  
H. B. Lawal

A test is a tool meant to measure students' ability and how well they can recall the subject matter, but the items making up a test may be defective, and thereby unable to measure students' abilities or traits satisfactorily as intended, if proper attention is not paid to item properties such as the difficulty, discrimination, and pseudo-guessing indices of each item. This can be remedied by item analysis and moderation. It is well known that the absence or improper use of item analysis can undermine the integrity of assessment, certification, and placement in our educational institutions. The focus here is on the appropriateness and spread of item properties in assessing the distribution of students' abilities, and on the adequacy of the information provided by the dichotomously scored response items of a compulsory university undergraduate statistics course, analyzed with Stata 16 SE on Windows 7. To this end, three dichotomous Item Response Theory (IRT) measurement models were used in the context of their potential usefulness in an educational setting, such as determining these item properties. Ability, item discrimination, difficulty, and guessing parameters are unobservable characteristics quantified with a binary response test; the discrete item response is then an observable outcome variable associated with a student's ability level, and the two are linked by Item Characteristic Curves, each defined by a set of item parameters that models the probability of observing a given item response conditional on a specific ability level. These models were used to assess each of the three item properties together with students' abilities, and thereby to identify defective items to be discarded or moderated and non-defective items to be retained, as the case may be; some of the chosen items were discussed in light of the underlying models. Finally, the information provided by these items was also discussed.


2022 ◽  
pp. 147892992110585
Author(s):  
Tsung-Han Tsai

The conventional procedure for measuring political knowledge is to treat nonresponses such as “don’t know” as incorrect and to count the number of “correct” responses. Recently, increasing attention has been paid to the partial knowledge hidden within incorrect responses and nonresponses. This article explores the partial knowledge indicated by incorrect responses and nonresponses and treats nonresponses as nonignorable missingness. We propose a model that combines the shared-parameter approach from the literature on missing-data mechanisms with the methods of item response theory. We show that the proposed model can determine whether people with nonresponses should be treated as more or less knowledgeable, and can detect whether it is appropriate to pool nonresponses and incorrect responses into the same category. Furthermore, we find partial knowledge hidden within women’s nonresponses, which confirms the possibility that the gender gap in political knowledge has been exaggerated.
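The scoring choice at issue can be illustrated with a toy scorer. This only contrasts the conventional rule with a flat partial-credit rule for “don’t know”; it is not the shared-parameter IRT model the article proposes, and `dk_credit` is a hypothetical parameter of ours:

```python
def knowledge_scores(answers, key, dk_credit=0.0):
    """answers: one list per respondent; each entry is a chosen option or
    None for 'don't know'. dk_credit=0 reproduces the conventional
    treat-DK-as-incorrect rule; 0 < dk_credit < 1 awards a fixed amount
    of partial knowledge to every DK response."""
    scores = []
    for resp in answers:
        s = 0.0
        for ans, correct in zip(resp, key):
            if ans is None:
                s += dk_credit      # nonresponse: partial credit, if any
            elif ans == correct:
                s += 1.0            # correct response
        scores.append(s)
    return scores
```

If women say “don’t know” more often, any positive `dk_credit` narrows the measured gender gap relative to the conventional rule, which is the direction of the article's finding; the article's model instead infers the appropriate treatment of nonresponses from the data.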

