The relationship between non-functioning distractors and item difficulty of multiple choice questions: A descriptive analysis

2014 ◽  
Vol 2 (4) ◽  
pp. 148 ◽  
Author(s):  
Hamza Mohammad Abdulghani ◽  
Farah Ahmad ◽  
Abdulmajeed Aldrees ◽  
Mahmoud S. Khalil ◽  
Gominda G. Ponnamperuma

2021 ◽  
pp. 9-10
Author(s):  
Bhoomika R. Chauhan ◽  
Jayesh Vaza ◽  
Girish R. Chauhan ◽  
Pradip R. Chauhan

Multiple choice questions are nowadays used in competitive examinations and formative assessment to assess students' eligibility and for certification. Item analysis is the process of collecting, summarizing, and using information from students' responses to assess the quality of test items. The goal of the study was to identify the relationship between the item difficulty index and the item discrimination index in medical student assessment. 400 final-year medical students from various medical colleges responded to 200 items constructed for the study. The responses were assessed and analysed for item difficulty index and item discriminating power, which were then compared using statistical methods to identify their correlation. The discriminating power of items with a difficulty index of 40%-50% was the highest. Summary and Conclusion: Items with a difficulty index in the range of 30%-70% are good discriminators.
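The two statistics this abstract refers to are classical item-analysis quantities. A minimal sketch, not the study's actual analysis: the difficulty index is the percentage of examinees answering an item correctly, and the discrimination index is commonly taken as the difference in correct-response rates between the top and bottom 27% of examinees ranked by total score (the 27% split is a conventional choice, assumed here).

```python
import numpy as np

def item_indices(responses: np.ndarray, group_frac: float = 0.27):
    """responses: 0/1 matrix of shape (n_students, n_items)."""
    n_students, _ = responses.shape
    totals = responses.sum(axis=1)            # total score per student
    order = np.argsort(totals)                # ascending by total score
    k = max(1, int(round(group_frac * n_students)))
    lower, upper = responses[order[:k]], responses[order[-k:]]

    difficulty = responses.mean(axis=0) * 100  # % correct per item
    discrimination = upper.mean(axis=0) - lower.mean(axis=0)
    return difficulty, discrimination

# Simulated data matching the study's design (400 students, 200 items)
rng = np.random.default_rng(0)
resp = (rng.random((400, 200)) < 0.55).astype(int)
p, d = item_indices(resp)
print(p[:5], d[:5])
```

On this definition, an item answered correctly by 40%-50% of examinees (difficulty index 40%-50%) is the band the study found to discriminate best.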


2010 ◽  
Vol 9 (4) ◽  
pp. 453-461 ◽  
Author(s):  
Jia Shi ◽  
William B. Wood ◽  
Jennifer M. Martin ◽  
Nancy A. Guild ◽  
Quentin Vicens ◽  
...  

We have developed and validated a tool for assessing understanding of a selection of fundamental concepts and basic knowledge in undergraduate introductory molecular and cell biology, focusing on areas in which students often have misconceptions. This multiple-choice Introductory Molecular and Cell Biology Assessment (IMCA) instrument is designed for use as a pre- and posttest to measure student learning gains. To develop the assessment, we first worked with faculty to create a set of learning goals that targeted important concepts in the field and seemed likely to be emphasized by most instructors teaching these subjects. We interviewed students using open-ended questions to identify commonly held misconceptions, formulated multiple-choice questions that included these ideas as distracters, and reinterviewed students to establish validity of the instrument. The assessment was then evaluated by 25 biology experts and modified based on their suggestions. The complete revised assessment was administered to more than 1300 students at three institutions. Analysis of statistical parameters including item difficulty, item discrimination, and reliability provides evidence that the IMCA is a valid and reliable instrument with several potential uses in gauging student learning of key concepts in molecular and cell biology.
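Among the statistical parameters mentioned, reliability for a dichotomously scored instrument like the IMCA is often estimated with KR-20; the abstract does not specify which coefficient was used, so the sketch below is illustrative only.

```python
import numpy as np

def kr20(responses: np.ndarray) -> float:
    """KR-20 reliability; responses: 0/1 matrix (n_students, n_items)."""
    k = responses.shape[1]
    p = responses.mean(axis=0)                  # per-item proportion correct
    q = 1.0 - p
    total_var = responses.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)
```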


2014 ◽  
Vol 2014 ◽  
pp. 1-8 ◽  
Author(s):  
Mariano Amo-Salas ◽  
María del Mar Arroyo-Jimenez ◽  
David Bustos-Escribano ◽  
Eva Fairén-Jiménez ◽  
Jesús López-Fidalgo

Multiple choice questions (MCQs) are one of the most popular tools to evaluate learning and knowledge in higher education. Nowadays, there are a few indices to measure the reliability and validity of these questions, for instance, to check the difficulty of a particular question (item) or its ability to discriminate between less and more knowledge. In this work two new indices have been constructed: (i) the no-answer index measures the relationship between the number of errors and the number of no answers; (ii) the homogeneity index measures the homogeneity of the wrong responses (distractors). The indices are based on the lack-of-fit statistic, whose distribution is approximated by a chi-square distribution for a large number of errors. An algorithm combining several traditional and new indices has been developed to continuously refine a database of MCQs. The final objective of this work is the classification of MCQs from a large database of items in order to produce an automated, supervised system for generating tests with specific characteristics, such as greater or lesser difficulty or capacity to discriminate knowledge of the topic.
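A minimal sketch of the idea behind the homogeneity index, under the assumption (consistent with the abstract, though the paper's exact formulas may differ) that it compares the observed distribution of errors across distractors against a uniform distribution, with the statistic referred to a chi-square law when the number of errors is large:

```python
from scipy.stats import chisquare

def homogeneity_index(distractor_counts):
    """distractor_counts: number of examinees choosing each wrong option."""
    # chisquare defaults to a uniform expected distribution
    stat, p_value = chisquare(distractor_counts)
    return stat, p_value

# e.g. an item whose 150 errors split as 120/20/10 across three distractors
stat, p = homogeneity_index([120, 20, 10])
print(f"chi2 = {stat:.1f}, p = {p:.3g}")  # small p => distractors not homogeneous
```

A large statistic (small p-value) flags an item whose errors concentrate on one distractor, i.e. whose wrong options are not working homogeneously.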


2021 ◽  
Vol 20 (2) ◽  
Author(s):  
Siti Khadijah Adam ◽  
Faridah Idris ◽  
Puteri Shanaz Jahn Kassim ◽  
Nor Fadhlina Zakaria ◽  
Rafidah Hod

Background: Multiple-choice questions (MCQs) are used to measure students' progress, and they should be analyzed properly to guarantee each item's appropriateness. The analysis usually determines three indices for an item: the difficulty or passing index (PI), the discrimination index (DI), and distractor efficiency (DE). Objectives: This study aimed to analyze multiple-choice questions with different numbers of options in the preclinical and clinical examinations of the medical program of Universiti Putra Malaysia. Methods: This is a cross-sectional study. Forty multiple-choice questions with four options from the preclinical examination and 80 multiple-choice questions with five options from the clinical examination in 2017 and 2018 were analyzed using an optical mark recognition machine and Microsoft Excel. The parameters included PI, DI, and DE. Results: The average difficulty levels of multiple-choice questions for the preclinical and clinical phase examinations were similar in 2017 and 2018, ranging from 0.55 to 0.60, which is considered 'acceptable' to 'ideal'. The average DIs were similar across all examinations and considered 'good' (ranging from 0.25 to 0.31), except in the 2018 clinical phase examination, which showed 'poor' items (DI = 0.20 ± 0.11). The preclinical phase questions showed an increase in the number of 'excellent' and 'good' items in 2018, from 37.5% to 70.0%. There was an increase of 10.0% for the preclinical phase, and 6.25% for the clinical phase, in the number of items with no non-functioning distractors in 2018. Among all examinations, the 2018 preclinical multiple-choice questions showed the highest mean DE (71.67%). Conclusions: Our findings suggest that the preclinical phase questions improved, while more training in question preparation and continuous feedback should be given to clinical phase teachers. A higher number of options did not affect the difficulty level of a question; however, the discrimination power and distractor efficiency might differ.
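A hedged sketch of distractor efficiency as it is commonly defined in item analysis (and consistent with this abstract): a distractor is "non-functioning" (NFD) if fewer than 5% of examinees select it, and DE is the percentage of an item's distractors that do function. The 5% threshold and the exact computation are conventional assumptions here; the paper's details may differ.

```python
def distractor_efficiency(option_counts, correct_index, threshold=0.05):
    """option_counts: responses per option; correct_index: position of the key."""
    n = sum(option_counts)
    distractors = [c for i, c in enumerate(option_counts) if i != correct_index]
    functioning = sum(1 for c in distractors if c / n >= threshold)
    nfd = len(distractors) - functioning       # non-functioning distractors
    de = 100.0 * functioning / len(distractors)
    return nfd, de

# Five-option item: key is option 0; option 4 attracts only 2% of examinees
nfd, de = distractor_efficiency([120, 40, 25, 11, 4], correct_index=0)
print(nfd, de)  # 1 NFD, DE = 75.0%
```

On this definition, an item with no NFDs has DE = 100%, which is why the abstract reports the share of items with no non-functioning distractors alongside mean DE.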


2009 ◽  
pp. 102-121 ◽  
Author(s):  
Martin Owen

This chapter explores mobile learning where location and game-like or playful activity adds to the context of learning. The relationship between space, play, and the development of context and learner identity is explored through an examination of the issues concerning the context of space, narratives, and engagement. There is a discussion of the meta-knowledge and specific learning attributes we would want to encounter in mobile game-like learning. These issues are further explored in three case studies of learning activities that have been designed such that the context of location and game-like or playful learning is significant. The examples include simple games based on multiple choice questions, a complex multi-role simulation, and an environmental tagging and hypermedia project. The case is made for the potential of the context of location and game-like learning in mobile learning.

