Item development process and analysis of 50 case-based items for implementation on the Korean Nursing Licensing Examination

Author(s):  
In Sook Park ◽  
Yeon Ok Suh ◽  
Hae Sook Park ◽  
So Young Kang ◽  
Kwang Sung Kim ◽  
...  

Purpose: The purpose of this study was to improve the quality of items on the Korean Nursing Licensing Examination by developing and evaluating case-based items that reflect integrated nursing knowledge.
Methods: We conducted a cross-sectional observational study to develop new case-based items. The item development methods included expert workshops, brainstorming, and verification of content validity. After administering a mock examination of the newly developed case-based items to undergraduate nursing students, we evaluated the appropriateness of the items using classical test theory and item response theory.
Results: A total of 50 case-based items were developed for the mock examination, and their content validity was evaluated. The items integrated 34 discrete elements of nursing knowledge. The mock examination was taken by 741 baccalaureate students in their fourth year of study at 13 universities. Their average score was 57.4, and the examination showed a reliability of 0.40. According to classical test theory, the average item difficulty (correct answer rate) was 57.4% (80%–100% for 12 items; 60%–80% for 13 items; and less than 60% for 25 items). The mean discrimination index was 0.19; it was above 0.30 for 11 items and 0.20 to 0.29 for 15 items. According to item response theory, the item discrimination parameter (in the logistic model) was none for 10 items (0.00), very low for 20 items (0.01 to 0.34), low for 12 items (0.35 to 0.64), moderate for 6 items (0.65 to 1.34), high for 1 item (1.35 to 1.69), and very high for 1 item (above 1.70). The item difficulty was very easy for 24 items (below −2.0), easy for 8 items (−2.0 to −0.5), medium for 6 items (−0.5 to 0.5), hard for 3 items (0.5 to 2.0), and very hard for 9 items (2.0 or above). The goodness-of-fit test under the 2-parameter item response model showed that 12 items within the range of 2.0 to 0.5 had an ideal correct answer rate.
Conclusion: We surmised that the low reliability of the mock examination was influenced by the timing of the test for the examinees and the inappropriate difficulty of the items. Our study suggests a methodology for developing future case-based items for the Korean Nursing Licensing Examination.
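The classical-test-theory statistics reported in this abstract (proportion-correct difficulty and an item discrimination index) can be sketched as follows. This is an illustrative sketch, not the study's actual analysis code; the function name and the corrected item-total correlation used for discrimination are assumptions.

```python
import numpy as np

def ctt_item_stats(responses):
    """Classical test theory item statistics for a 0/1 response matrix.

    responses: array of shape (n_examinees, n_items), 1 = correct.
    Returns per-item difficulty (proportion correct, higher = easier)
    and a corrected item-total discrimination index (correlation of
    each item with the total score excluding that item).
    """
    responses = np.asarray(responses, dtype=float)
    difficulty = responses.mean(axis=0)
    total = responses.sum(axis=1)
    discrimination = np.empty(responses.shape[1])
    for j in range(responses.shape[1]):
        rest = total - responses[:, j]  # exclude the item itself
        discrimination[j] = np.corrcoef(responses[:, j], rest)[0, 1]
    return difficulty, discrimination
```

Under the conventional cutoffs the abstract uses, discrimination above 0.30 is considered good and 0.20 to 0.29 marginal.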

Author(s):  
Geum-Hee Jeong ◽  
Mi Kyoung Yim

To test the applicability of item response theory (IRT) to the Korean Nurses' Licensing Examination (KNLE), item analysis was performed after testing unidimensionality and goodness-of-fit, and the results were compared with those based on classical test theory. The results of the 330-item KNLE administered to 12,024 examinees in January 2004 were analyzed. Unidimensionality was tested using DETECT, and goodness-of-fit was tested using WINSTEPS for the Rasch model and Bilog-MG for the two-parameter logistic model. Item analysis and ability estimation were done using WINSTEPS. Using DETECT, Dmax ranged from 0.1 to 0.23 for each subject. The mean square infit and outfit values of all items from WINSTEPS ranged from 0.1 to 1.5, except for one item in pediatric nursing, which scored 1.53. Of the 330 items, 218 (42.7%) were misfitting under the two-parameter logistic model of Bilog-MG. The correlation coefficients between the difficulty parameter from the Rasch model and the difficulty index from classical test theory ranged from 0.9039 to 0.9699, and the correlation between the ability parameter from the Rasch model and the total score from classical test theory ranged from 0.9776 to 0.9984. Therefore, the KNLE results satisfied unidimensionality and showed acceptable goodness-of-fit for the Rasch model. The KNLE is thus a suitable dataset for analysis under the IRT Rasch model, and further research using IRT is feasible.
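The two models compared in this abstract differ only in whether each item gets its own discrimination parameter. A minimal sketch of the two-parameter logistic response function, which reduces to the Rasch model when the discrimination a is fixed at 1, might look like this; it illustrates the model itself, not the WINSTEPS or Bilog-MG implementations:

```python
import math

def irt_prob(theta, b, a=1.0):
    """Probability of a correct response under the two-parameter
    logistic model; with a fixed at 1.0 this is the Rasch model.

    theta: examinee ability, b: item difficulty, a: discrimination.
    """
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))
```

When ability equals item difficulty (theta == b), the probability of a correct response is exactly 0.5, which is what makes b directly comparable to a classical proportion-correct difficulty index.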


2019 ◽  
Vol 9 (2) ◽  
pp. 133-146
Author(s):  
Yance Manoppo ◽  
Djemari Mardapi

This study aimed to reveal: (1) the characteristics of the items of the Chemistry Test in the National Examination, using classical test theory and item response theory; (2) the amount of cheating that occurred, using Angoff's B-index Method, the Pair 1 Method, the Pair 2 Method, the Modified Error Similarity Analysis (MESA) Method, and the G2 Method; and (3) which methods detect more cheating in the administration of the Chemistry Test in the National Examination for high schools in the 2011/2012 academic year in Maluku Province. The analysis with the classical test theory approach shows that 77.5% of the items have a well-functioning difficulty index, 55% of the items have qualified discrimination, and 70% of the items have distractors that work well, with a test reliability index of 0.772. The analysis using the item response theory approach shows that 14 items (35%) fit the model, the maximum information function is 11.4069 at θ = −1.6, and the measurement error is 2.296. The numbers of pairs suspected of cheating are as follows: 13 pairs according to Angoff's B-index Method, 212 pairs according to the Pair 1 Method, 444 pairs according to the Pair 2 Method, 7 pairs according to the MESA Method, and 102 pairs according to the G2 Method. Ranked by the number of cheating pairs detected, the methods were Pair 2, Pair 1, G2, Angoff's B-index, and MESA.


2019 ◽  
Vol 23 (4) ◽  
pp. 275-283
Author(s):  
Ling Wang ◽  
John W. Nelson

The aim of this study was to evaluate the psychometric properties of the Chinese version of the Caring Factor Survey-Caring of Manager (CFS-CM), using both classical test theory (CTT) and item response theory (IRT). The CTT analyses evaluated internal consistency reliability, test–retest reliability, and construct validity. The IRT analyses tested unidimensionality, item fit, item difficulty, reliability, and the rating scale structure. CTT showed good psychometric properties for the CFS-CM; however, IRT revealed some problems at the category level. Taking this issue into consideration, the CFS-CM could be further refined in the future.


2019 ◽  
Vol 13 (1) ◽  
pp. 1-16
Author(s):  
Muh Syahrul Sarea ◽  
Rosnia Ruslan

This research aimed to describe the characteristics of the UAS (final examination) items for Theme 1 at the fourth grade of primary school in Paramasan Bawah village, in terms of item difficulty and discrimination. The sample of this research was 37 students who took the final examination in the 2018/2019 academic year. The objects of this research were the question items and answer sheets of the final exam, obtained from 3 different schools in Paramasan Bawah village. The data were analyzed empirically with the Bilog and Iteman programs to determine the characteristics of the items based on item response theory and classical test theory. The results showed that, according to item response theory, 30 items had good discrimination and 33 items had good difficulty, while according to classical test theory, 15 items had good discrimination and 27 items had good difficulty.
Keywords: characteristics of items, item difficulty, discrimination


2015 ◽  
Vol 23 (88) ◽  
pp. 593-610
Author(s):  
Patrícia Costa ◽  
Maria Eugénia Ferrão

This study aims to provide statistical evidence of the complementarity between classical test theory and item response models for certain educational assessment purposes. Such complementarity might support, at a reduced cost, the future development of innovative procedures for item calibration in adaptive testing. Classical test theory and the generalized partial credit model are applied to tests comprising multiple-choice, short-answer, completion, and partially scored open-response items. The datasets are derived from tests administered to the Portuguese population of students enrolled in the 4th and 6th grades. The results show a very strong association between the difficulty estimates obtained from classical test theory and those from item response models, corroborating the statistical theory of mental testing.
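The "very strong association" reported here can be checked with an ordinary correlation between the two kinds of difficulty estimates. The numbers below are hypothetical, purely to illustrate the computation; note that because a CTT difficulty index is a proportion correct (higher = easier) while an IRT difficulty parameter grows with hardness, a strong association shows up as a correlation near −1:

```python
import numpy as np

# Hypothetical estimates for ten items:
# p = CTT proportion correct, b = IRT difficulty parameter.
p = np.array([0.95, 0.90, 0.82, 0.75, 0.70, 0.60, 0.55, 0.45, 0.35, 0.20])
b = np.array([-2.1, -1.6, -1.1, -0.7, -0.4, 0.1, 0.3, 0.8, 1.3, 2.0])

# Pearson correlation between the two difficulty scales.
r = np.corrcoef(p, b)[0, 1]  # strongly negative for monotone data
```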


2021 ◽  
Vol 11 (13) ◽  
pp. 6048
Author(s):  
Jaroslav Melesko ◽  
Simona Ramanauskaite

Feedback is a crucial component of effective, personalized learning, and is usually provided through formative assessment. Introducing formative assessment into a classroom can be challenging because of the complexity of test creation and the need to set aside time for assessment. The newly proposed formative assessment algorithm uses multivariate Elo rating and multi-armed bandit approaches to address these challenges. In a case study involving 106 students of a Cloud Computing course, the algorithm showed twice the learning-path recommendation precision of assessment methods based on classical test theory. Its precision approaches that of an item response theory benchmark with a greatly reduced quiz length and no need for item difficulty calibration.
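A single-dimension sketch of the Elo update at the heart of such an algorithm is shown below. The paper uses a multivariate variant; the function name and the update factor k here are illustrative assumptions:

```python
import math

def elo_update(ability, difficulty, correct, k=0.4):
    """One Elo rating step for a student-item interaction.

    The expected result is the logistic probability that the student
    answers correctly; ability and difficulty then move in opposite
    directions by k times the prediction error (correct - expected).
    """
    expected = 1.0 / (1.0 + math.exp(difficulty - ability))
    delta = k * (correct - expected)
    return ability + delta, difficulty - delta
```

Because items are calibrated on the fly by these updates, no separate item difficulty calibration pass is needed, which is the property the abstract highlights.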

