Summative Test Items Analysis Using Classical Test Theory (CTT) / Analisis Item Kertas Peperiksaan Sumatif Menggunakan Teori Ujian Klasik (TUK)

2020 ◽  
Vol 12 (2-2) ◽  
Author(s):  
Nor Aisyah Saat

Item analysis is the process of examining student responses to individual test items in order to get a clear picture of the quality of each item and of the overall test. Teachers are encouraged to perform item analysis for every administered test in order to determine which items should be retained, modified, or discarded. This study aims to analyse the items in two summative examination question papers using classical test theory (CTT). The instruments were two SPM Mathematics Trial Examination 2019 papers: the first, containing 40 objective questions, was administered to 50 Form 5 students, and the second, containing 25 subjective questions, was administered to 20 students. The data obtained were analysed in Microsoft Excel using the formulas for the item difficulty index and the discrimination index. This analysis can help teachers better understand the difficulty level of the items used. Finally, based on the item analysis, the items were classified as good, good but in need of improvement, marginal, or weak.
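The abstract does not reproduce the Excel formulas it refers to, but the two CTT statistics it names are standard. The sketch below shows one common way to compute them; the upper/lower 27% split and all variable names are assumptions for illustration, not details taken from the study.

```python
import numpy as np

def ctt_item_statistics(responses, group_fraction=0.27):
    """Classical test theory item statistics for a 0/1-scored response matrix.

    responses: 2-D array (students x items), 1 = correct, 0 = incorrect.
    Returns (difficulty, discrimination) arrays, one value per item.
    """
    responses = np.asarray(responses)
    n_students = responses.shape[0]

    # Difficulty index p: proportion of students answering the item correctly.
    difficulty = responses.mean(axis=0)

    # Discrimination index D: difference in proportion correct between the
    # upper and lower score groups (here the top and bottom 27%, a common choice).
    totals = responses.sum(axis=1)
    order = np.argsort(totals)
    k = max(1, int(round(group_fraction * n_students)))
    lower, upper = responses[order[:k]], responses[order[-k:]]
    discrimination = upper.mean(axis=0) - lower.mean(axis=0)

    return difficulty, discrimination

# Example with simulated answers from 50 students on 40 objective items.
rng = np.random.default_rng(0)
scores = (rng.random((50, 40)) > 0.4).astype(int)
p, d = ctt_item_statistics(scores)
print(p[:5], d[:5])
```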

2018 ◽  
Vol 7 (1) ◽  
pp. 29
Author(s):  
Ari Arifin Danuwijaya

Developing a test is a complex and iterative process, and items remain subject to revision even when they are written by skilful item writers. Many commercial test publishers therefore conduct test analysis rather than relying on the item writers' judgement alone; the quality of the items must be demonstrated statistically after a tryout. This study is part of a test development process and aims to analyse reading comprehension test items. One hundred multiple-choice questions were pilot tested on 50 postgraduate students at one university. The pilot testing was intended to investigate item quality so that the items could be further improved. The responses were then analysed under Classical Test Theory using the psychometric software Lertap. The results showed that item difficulty was mostly average. In terms of item discrimination, more than half of the items were categorized as marginal and required further modification. The study offers recommendations that can be useful for improving the quality of the developed items. Keywords: reading comprehension; item analysis; classical test theory; item difficulty; test development.
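The abstract reports items being "categorized marginal" but does not list the cut-offs used. The sketch below applies one widely cited Ebel-style set of discrimination bands purely as an illustration; the study's own criteria are not stated and these thresholds are an assumption.

```python
def classify_discrimination(d):
    """Classify a CTT discrimination index using commonly cited Ebel-style bands.

    The cut-offs below are illustrative rules of thumb, not the bands used in
    the study, which the abstract does not specify.
    """
    if d >= 0.40:
        return "very good"
    if d >= 0.30:
        return "reasonably good"
    if d >= 0.20:
        return "marginal - usually needs revision"
    return "poor - revise or discard"

print(classify_discrimination(0.25))  # -> marginal - usually needs revision
```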


2020 ◽  
Vol 9 (1) ◽  
pp. 5-34
Author(s):  
Wong Vincent ◽  
S.Kanageswari Suppiah Shanmugam

The purpose of this study is to describe the use of Classical Test Theory (CTT) to investigate the quality of test items in measuring students' English competence. The study adopts a mixed-methods research design. The results show that most items fall within the acceptable range of both indices, with the exception of the synonym items. Items that focus on vocabulary are more challenging. Surprisingly, the short-answer items have excellent item difficulty levels and item discrimination indices. The overall results of the item analysis also support the hypothesis that items with an ideal item difficulty value between 0.4 and 0.6 will have a similarly ideal item discrimination value. This paper reports part of a larger study on the quality of individual test items and of the overall tests.


2019 ◽  
Vol 23 (1) ◽  
pp. 124-153 ◽  
Author(s):  
Daniel R. Smith ◽  
Michael E. Hoffman ◽  
James M. LeBreton

This article provides a review of the approach that James used when conducting item analyses on his conditional reasoning test items. That approach was anchored in classical test theory. Our article extends this work in two important ways. First, we offer a set of test development protocols that are tailored to the unique nature of conditional reasoning tests. Second, we further extend James’s approach by integrating his early test validation protocols (based on classical test theory) with more recent protocols (based on item response theory). We then apply our integrated item analytic framework to data collected on James’s first test, the conditional reasoning test for relative motive strength. We illustrate how this integrated approach furnishes additional diagnostic information that may allow researchers to make more informed and targeted revisions to an initial set of items.


Author(s):  
Eun Young Lim ◽  
Jang Hee Park ◽  
ll Kwon ◽  
Gue Lim Song ◽  
Sun Huh

The results of the 64th and 65th Korean Medical Licensing Examination were analyzed according to classical test theory and item response theory in order to determine whether item response theory can be applied to item analysis and to assess its applicability to computerized adaptive testing. The correlation coefficients of the difficulty index, the discrimination index, and the ability parameter between the two kinds of analysis were obtained using computer programs such as Analyst 4.0, Bilog, and Xcalibre. The correlation coefficients for the difficulty index were equal to or greater than 0.75; those for the discrimination index were between -0.023 and 0.753; and those for the ability parameter were equal to or greater than 0.90. These results suggest that item analysis according to item response theory produces results comparable with those from classical test theory, except for the discrimination index. Since the ability parameter is most widely used in criterion-referenced testing, the high correlation between the ability parameter and the total score supports the validity of computerized adaptive testing based on item response theory.
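The comparison described above reduces to correlating the item (and person) estimates obtained from the two frameworks. A minimal sketch of that step is shown below; the arrays are placeholders, since the actual parameter estimates in the study came from Analyst 4.0, Bilog, and Xcalibre.

```python
import numpy as np

# Hypothetical estimates for the same five items from the two frameworks.
ctt_difficulty = np.array([0.82, 0.65, 0.48, 0.33, 0.21])   # proportion correct
irt_b_parameter = np.array([-1.6, -0.7, 0.1, 0.9, 1.8])     # IRT difficulty (b)

# Pearson correlation between the two sets of estimates.
r = np.corrcoef(ctt_difficulty, irt_b_parameter)[0, 1]
print(f"r = {r:.3f}")  # strongly negative: a high p-value (easy item) maps to a low b
```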


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Ummul Karimah ◽  
Heri Retnawati ◽  
Deni Hadiana ◽  
Pujiastuti Pujiastuti ◽  
Eri Yusron

This study aimed to describe the characteristics of the chemistry test items in the Nationally Standardized School Examination, or Ujian Sekolah Berstandar Nasional (USBN), in terms of the discrimination index, the difficulty index, and test reliability. The data were collected from the documented answers of 194 students. The research was descriptive exploratory with quantitative and qualitative approaches. The quantitative analysis was performed using the classical test theory approach and item response theory with one logistic parameter (1PL), while the qualitative analysis was conducted to describe the items categorized as difficult or bad. The results showed that, according to classical test theory, the USBN Chemistry test had an average difficulty level of 0.57, categorized as moderate. The average discrimination index obtained was 0.146, which belonged to the good category. Based on item response theory, the average difficulty index of -0.00086 was categorized as moderate. The estimated reliability of the test was 0.48, which falls in the moderate category. The qualitative analysis showed that, under classical test theory, the items categorized as difficult were those on salt hydrolysis, acid-base titration, the concept of periodicity, and the manufacture and use of chemical compounds. According to item response theory, the difficult items were those on acid-base titration, colligative properties of solutions, and the periodicity of the main-group metal elements.
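The 1PL (Rasch-type) model referred to above has a single item parameter, the difficulty b. The sketch below shows its item characteristic function, which such an analysis estimates for each item; the theta and b values are illustrative only, not estimates from the study.

```python
import numpy as np

def one_pl_probability(theta, b):
    """Probability of a correct response under the 1PL (Rasch-type) model:
    P(X = 1 | theta, b) = 1 / (1 + exp(-(theta - b)))."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Illustrative values: an item of moderate difficulty (b near 0, comparable to the
# reported average of about -0.001) answered by examinees of varying ability.
for theta in (-2.0, 0.0, 2.0):
    print(theta, round(one_pl_probability(theta, b=0.0), 3))
```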


2020 ◽  
Vol 17 (Number 2) ◽  
pp. 63-101
Author(s):  
S. Kanageswari Suppiah Shanmugam ◽  
Vincent Wong ◽  
Murugan Rajoo

Purpose - This study examined the quality of English test items using psychometric and linguistic characteristics among Grade Six pupils. Method - Contrary to the conventional approach of relying only on statistics when investigating item quality, this study adopted a mixed-method approach employing psychometric analysis and cognitive interviews. The former was conducted on 30 Grade Six pupils, with each item representing a different construct commonly found in English test papers. Qualitative input was obtained through cognitive interviews with five Grade Six pupils and expert judgements from three teachers. Findings - None of the items were found to be too easy or too difficult, and all items had positive discrimination indices. The item on idioms was the most ideal in terms of difficulty and discrimination. The difficult items were found to be vocabulary-based. Surprisingly, the higher-order-thinking subjective items proved to be excellent in difficulty, although improvements could be made to their ability to discriminate. The qualitative expert judgements agreed with the quantitative psychometric analysis. Certain results from the item analysis, however, contradicted past findings that items with an ideal item difficulty value between 0.4 and 0.6 would have an equally ideal item discrimination index. Significance - The findings of the study serve as a reminder of the value of Classical Test Theory, a non-complex psychometric approach, in assisting classroom teacher practitioners during the meticulous process of test design and in ensuring test item quality.


Assessment of learning involves deciding whether or not the content and objectives of instruction have been mastered, by administering quality tests. This study assesses the quality of a Chemistry achievement test and compares the item statistics generated using CTT and IRT methods. A descriptive survey design was adopted involving a sample of N = 530 students. The specialised XCALIBRE 4 and ITEMAN 4 software packages were used to conduct the item analysis. The results indicate that the two methods jointly identified 13 items (32.5%) as problematic and 27 (67.5%) as good. Similarly, significantly high correlations exist between the item statistics derived from the CTT and IRT models, r = -0.985 for item difficulty and r = 0.801 for item discrimination (p < 0.05); the difficulty correlation is negative because the CTT difficulty index is a proportion correct (higher means easier) whereas the IRT difficulty parameter b is higher for harder items. The study concludes, first, that the Chemistry achievement test used did not pass through the processes of standardisation and, second, that the CTT and IRT frameworks appear to be effective and reliable in assessing test items, as the two frameworks provide similar and comparable results. The study recommends that teacher-made Chemistry tests used to measure students' achievement be put through all the processes of standardisation, and that CTT and IRT approaches to item analysis be integrated into item development and analysis because of their strength in investigating reliability and minimising measurement error.


2020 ◽  
Vol 34 (1) ◽  
pp. 52-67 ◽  
Author(s):  
Igor Himelfarb ◽  
Margaret A. Seron ◽  
John K. Hyland ◽  
Andrew R. Gow ◽  
Nai-En Tang ◽  
...  

Objective: This article introduces changes made to the diagnostic imaging (DIM) domain of Part IV of the National Board of Chiropractic Examiners examination and evaluates the effects of these changes in terms of item functioning and examinee performance. Methods: To evaluate item function, classical test theory and item response theory (IRT) methods were employed. Classical statistics were used for the assessment of item difficulty and its relation to the total test score. Item difficulties along with item discrimination were calculated using IRT. We also studied the decision accuracy of the redesigned DIM domain. Results: The diagnostic item analysis revealed similarity in item function across test forms and across administrations. The IRT models showed a reasonable fit to the data. The averages of the IRT parameters were similar across test forms and across administrations. The classification of test takers into ability (theta) categories was consistent across groups (both norming and all examinees), across all test forms, and across administrations. Conclusion: This research signifies a first step in the evaluation of the transition to digital DIM high-stakes assessments. We hope that this study will spur further research into evaluations of the ability to interpret radiographic images. In addition, we hope that the results prove to be useful for chiropractic faculty, chiropractic students, and the users of Part IV scores.


2021 ◽  
Vol 5 (2) ◽  
pp. 210-221
Author(s):  
Anis Faridah

This research is a quantitative descriptive study. Its purpose is to describe the characteristics of the final semester examination items for the Grade XI History subject at SMA Negeri 1 Pangkalpinang using the classical test theory approach; the problem underlying the research is that the end-of-semester assessment items were developed without going through item analysis, so the quality of the items was unknown. The research subjects were 138 Grade XI students in the Social Sciences stream. The results show that the final examination questions for the Grade XI History subject at SMA Negeri 1 Pangkalpinang are fit for use. This is evidenced by the item validity: 39 items (97.5%) were proven empirically valid, with a reliability coefficient of 0.818. In addition, 27 items (67.5%) met the criteria for difficulty level, discriminating power, and distractor functioning, so they can be used directly to measure students' ability without correction, while 12 items (30%) need revision and 1 item (2.5%) was declared invalid and cannot be used to measure students' ability in the History subject.
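The distractor-functioning criterion mentioned above is typically checked by tabulating how often each option is chosen. The sketch below illustrates that tabulation; the option labels, the 5% "functioning distractor" rule of thumb, and the response data are assumptions for illustration, not details from the study.

```python
from collections import Counter

def distractor_analysis(choices, key, min_fraction=0.05):
    """Tabulate option choices for one multiple-choice item.

    choices: list of selected options (e.g. 'A'-'D'), one per student.
    key: the correct option.
    A distractor is often treated as functioning if it attracts at least
    min_fraction of examinees (a common rule of thumb, assumed here).
    """
    counts = Counter(choices)
    n = len(choices)
    report = {}
    for option, count in sorted(counts.items()):
        fraction = count / n
        role = "key" if option == key else (
            "functioning distractor" if fraction >= min_fraction else "weak distractor")
        report[option] = (count, round(fraction, 2), role)
    return report

# Hypothetical responses from 20 students to an item whose key is 'B'.
answers = list("BBBABCBBDBCBBABBBBCB")
print(distractor_analysis(answers, key="B"))
```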


Author(s):  
Geum-Hee Jeong ◽  
Mi Kyoung Yim

To test the applicability of item response theory (IRT) to the Korean Nurses' Licensing Examination (KNLE), item analysis was performed after testing unidimensionality and goodness-of-fit, and the results were compared with those based on classical test theory. The results of the 330-item KNLE administered to 12,024 examinees in January 2004 were analyzed. Unidimensionality was tested using DETECT, and goodness-of-fit was tested using WINSTEPS for the Rasch model and Bilog-MG for the two-parameter logistic model. Item analysis and ability estimation were done using WINSTEPS. Using DETECT, Dmax ranged from 0.1 to 0.23 for each subject. The infit and outfit mean square values of all items from WINSTEPS ranged from 0.1 to 1.5, except for one item in pediatric nursing, which scored 1.53. Of the 330 items, 218 (42.7%) were misfit under the two-parameter logistic model of Bilog-MG. The correlation coefficients between the difficulty parameter from the Rasch model and the difficulty index from classical test theory ranged from 0.9039 to 0.9699. The correlation between the ability parameter from the Rasch model and the total score from classical test theory ranged from 0.9776 to 0.9984. Therefore, the KNLE results satisfy unidimensionality and show good fit to the Rasch model. The KNLE should be a good sample for analysis under the IRT Rasch model, so further research using IRT is possible.
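The infit and outfit statistics cited above are mean squares of standardized residuals under the Rasch model. The sketch below shows how they are defined, with simulated data standing in for the WINSTEPS estimates; it is a minimal illustration, not the software's implementation.

```python
import numpy as np

def rasch_fit_statistics(x, theta, b):
    """Outfit and infit mean squares for one item under the Rasch model.

    x: 0/1 responses to the item (one per person).
    theta: person ability estimates; b: the item's difficulty estimate.
    Outfit is the unweighted mean of squared standardized residuals;
    infit is the information-weighted version.
    """
    p = 1.0 / (1.0 + np.exp(-(theta - b)))   # model-expected probabilities
    w = p * (1.0 - p)                        # response variances (information)
    z2 = (x - p) ** 2 / w                    # squared standardized residuals
    outfit = z2.mean()
    infit = np.sum(w * z2) / np.sum(w)       # = sum((x - p)^2) / sum(w)
    return outfit, infit

# Simulated example: 500 persons answering one item with difficulty b = 0.3.
rng = np.random.default_rng(1)
theta = rng.normal(size=500)
b = 0.3
x = (rng.random(500) < 1.0 / (1.0 + np.exp(-(theta - b)))).astype(float)
print(rasch_fit_statistics(x, theta, b))   # both values should be near 1.0
```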

