New CBT Technical Issues: Developing Items, Pretesting, Test Security, and Item Exposure

2005, pp. 205-216

2021
Author(s): Jumoke Oladele, Mdutshekelwa Ndlovu

<div><p>This study examines remote proctoring as an emerging practice for ascertaining the validity of off-site test administration with respect to test security. While Computer Adaptive Testing (CAT) has the potential for greater precision in determining examinees' ability levels, its gains can be jeopardized by off-site testing if test security is not ensured. This study simulated CAT assessment with a focus on item administration, varying the option of using pretest items and examining how that choice affects students' ability estimation and item exposure. Monte-Carlo simulation was employed to generate data for answering the research questions raised in the study. The study's findings revealed that CAT administration was more consistent with no pretest items once ability estimates were tightly controlled at the ±2 theta level, upon which recommendations were made. This finding is particularly germane as more institutions move their assessments online, a practice rapidly becoming the new normal in the aftermath of the Covid-19 pandemic.</p></div><div><br></div>The data for this study were generated from computer simulations using SimulCAT, a free software package designed by Dr. K. C. T. Han of the Graduate Management Admission Council.<p></p>
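The study itself used SimulCAT for its simulations; purely as an illustrative sketch (not the authors' actual procedure or parameters), the core loop of such a Monte-Carlo CAT simulation — maximum-information item selection under a 2PL model, a one-step Newton update toward the MLE of theta, and the ability estimate clamped to the ±2 theta level mentioned above — can be written as:

```python
import math
import random

def p_correct(theta, a, b):
    """2PL item response function: probability of a correct answer."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability level theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def simulate_cat(true_theta, bank, test_length=20, bound=2.0, rng=random):
    """Run one simulated examinee through a maximum-information CAT.

    bank is a list of (a, b) item parameters; the running ability
    estimate is clamped to [-bound, +bound] after every update
    (bound=2.0 mirrors the ±2 theta control described above).
    """
    theta = 0.0
    used, params, responses = set(), [], []
    for _ in range(test_length):
        # select the unused item that is most informative at the current theta
        best = max((i for i in range(len(bank)) if i not in used),
                   key=lambda i: item_information(theta, *bank[i]))
        used.add(best)
        a, b = bank[best]
        params.append((a, b))
        # simulate the examinee's response from their true ability
        responses.append(1 if rng.random() < p_correct(true_theta, a, b) else 0)
        # one Newton step toward the maximum-likelihood estimate of theta
        score = sum(ai * (ui - p_correct(theta, ai, bi))
                    for (ai, bi), ui in zip(params, responses))
        info = sum(item_information(theta, ai, bi) for ai, bi in params)
        if info > 0:
            theta += score / info
        theta = max(-bound, min(bound, theta))  # tight ±2 theta control
    return theta

random.seed(0)
# hypothetical 200-item bank with plausible discrimination/difficulty ranges
bank = [(random.uniform(0.8, 2.0), random.uniform(-2.5, 2.5)) for _ in range(200)]
estimate = simulate_cat(true_theta=1.0, bank=bank)
```

Repeating `simulate_cat` over many simulated examinees and tallying how often each item index is administered is what makes item-exposure comparisons like the one in the study possible.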



2020, Vol 6 (1)
Author(s): Iwan Suhardi

One ability-estimation method widely applied in Computerized Adaptive Testing (CAT) algorithms is Maximum Likelihood Estimation (MLE). The MLE method has the disadvantage of being unable to produce an ability estimate while a test taker's response pattern is still unmixed: when a test taker has a score of 0 or a perfect score, the likelihood has no finite maximum, so a step-size model is generally used to set the provisional ability estimate instead. However, the step-size model results in item exposure, the phenomenon in which certain items are administered far more often than others. This makes the test insecure, because frequently appearing items are also easier to recognize. This study proposes an alternative strategy that modifies the step-size model and then randomizes the computed values of the item information function. Based on the results of the study, this alternative item-selection strategy produced a more varied set of administered items and can thereby improve test security in CAT.

Keywords: item exposure, step-size, adaptive testing
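The two ideas in this abstract — a step-size fallback while the response pattern is unmixed, and randomizing over item-information values rather than always picking the single most informative item — can be sketched as follows. This is an assumed, minimal reading of the strategy, not the author's actual code; the step size, bounds, and candidate-pool size `k` are illustrative choices, and the randomized selection shown is the widely used "randomesque" variant of information-based selection.

```python
import math
import random

def p_correct(theta, a, b):
    """2PL item response function."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def step_size_theta(theta, responses, step=0.7, bound=3.5):
    """Step-size fallback for unmixed response patterns.

    While every answer so far is correct (or every answer is wrong),
    the MLE of theta has no finite maximum, so the provisional
    estimate is nudged by a fixed step instead.  Returns None once a
    mixed pattern exists and MLE can take over.  step and bound are
    illustrative values, not taken from the paper.
    """
    if responses and all(u == 1 for u in responses):
        return min(theta + step, bound)
    if responses and all(u == 0 for u in responses):
        return max(theta - step, -bound)
    return None

def randomesque_pick(theta, bank, used, k=5, rng=random):
    """Randomize over the k most informative unused items instead of
    always administering the single best one, spreading exposure
    across near-optimal items."""
    candidates = sorted((i for i in range(len(bank)) if i not in used),
                        key=lambda i: item_information(theta, *bank[i]),
                        reverse=True)
    return rng.choice(candidates[:k])

rng = random.Random(1)
# hypothetical 100-item bank
bank = [(rng.uniform(0.8, 2.0), rng.uniform(-2.5, 2.5)) for _ in range(100)]
# with k=1 the same first item is exposed every time; with k=5 it varies
first_picks_k1 = {randomesque_pick(0.0, bank, set(), k=1) for _ in range(50)}
first_picks_k5 = {randomesque_pick(0.0, bank, set(), k=5, rng=rng) for _ in range(50)}
```

The contrast between `first_picks_k1` (always the same item) and `first_picks_k5` (several different items) is exactly the exposure-spreading effect the abstract reports: randomizing among near-optimal items makes frequently administered items harder to predict and memorize.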


2006
Author(s): Bryce Sullivan, Jennifer M. Craft, Jameca W. Falconer
