Quality assurance procedures in assessment - a descriptive study of medical colleges in Pakistan

Author(s):  
Nighat Murad ◽  
Syed Moyn Aly

Abstract Objective: The objective of the present study was to identify and describe the quality assurance (QA) procedures implemented in the assessment systems of medical colleges in Pakistan. Methods: A cross-sectional study was conducted from March 2015 to December 2017 in 49 medical colleges of Pakistan, using a mixed-methods technique. A semi-structured questionnaire was completed after informed consent. Data were analyzed using SPSS version 21 (IBM). Results: In this study, 35 (71.4%) of the institutions followed a written assessment policy provided by the affiliated university, while 9 (18%) never did so; 22 (44.8%) participants reported that content experts checked whether the questions matched the objectives, whereas 17 (34.7%) reported that content experts would never or rarely check that. A majority, 42 (85.7%), of institutions took strict steps to prevent cheating in exams, and 26 (53.1%) institutions analyzed theory exams statistically. Discrimination index, difficulty index, reliability, and point biserial were calculated in 14 (28.6%), 13 (26.5%), 12 (24.4%), and 7 (14.3%) of the medical colleges, respectively. Only 12 (24.5%) of the institutions provided written feedback on the results, and 15 (30.6%) institutions conducted an internal audit annually. Themes belonging to the assessment domain were identified, including training for assessment, barriers and challenges, feedback, and audit. Conclusion: General issues related to quality assurance procedures in assessments (e.g. overall awareness of the assessment policy) were in place in 60% of the colleges; however, a large proportion did not have them. QA in assessments during exams was ensured by almost all medical colleges, with only a few exceptions. After exams, QA was below average in terms of item analysis and feedback. Continuous...
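For reference, the point-biserial index mentioned above is simply the correlation between a single item's 0/1 score and examinees' total test scores. A minimal sketch of that computation, assuming a hypothetical 0/1 response matrix (none of this is the study's data or code):

```python
import numpy as np

def point_biserial(item: np.ndarray, totals: np.ndarray) -> float:
    """Point-biserial index: correlation between one item's 0/1 score
    and the examinees' total test scores."""
    p = item.mean()                          # proportion answering correctly
    q = 1.0 - p
    mean_correct = totals[item == 1].mean()  # mean total of those who got it right
    return (mean_correct - totals.mean()) / totals.std() * np.sqrt(p / q)

# Hypothetical 0/1 response matrix: rows = examinees, columns = items.
rng = np.random.default_rng(0)
responses = (rng.random((50, 10)) > 0.4).astype(int)
totals = responses.sum(axis=1)
print(round(point_biserial(responses[:, 0], totals), 3))
```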

Author(s):  
Ajeet Kumar Khilnani ◽  
Rekha Thaddanee ◽  
Gurudas Khilnani

Background: Multiple choice questions (MCQs) are routinely used for formative and summative assessment in medical education. Item analysis is a process of post-validation of MCQ tests, whereby items are analyzed for difficulty index, discrimination index and distractor efficiency, to obtain a range of items of varying difficulty and discrimination indices. This study was done to understand the process of item analysis and analyze an MCQ test so that a valid and reliable MCQ bank in otorhinolaryngology is developed.

Methods: 158 students of 7th semester were given an 8-item MCQ test. Based on the marks achieved, the high achievers (top 33%, 52 students) and low achievers (bottom 33%, 52 students) were included in the study. The responses were tabulated in a Microsoft Excel sheet and analyzed for difficulty index, discrimination index and distractor efficiency.

Results: The mean (SD) difficulty index (Diff-I) of the 8-item test was 61.41% (11.81%). 5 items had a very good difficulty index (41% to 60%), while 3 items were easy (Diff-I >60%). There was no item with Diff-I <30%, i.e. a difficult item, in this test. The mean (SD) discrimination index (DI) of the test was 0.48 (0.15), and all items had very good discrimination indices of more than 0.25. Out of 24 distractors, 6 (25%) were non-functional distractors (NFDs). The mean (SD) distractor efficiency (DE) of the test was 74.62% (23.79%).

Conclusions: Item analysis should be an integral and regular activity in each department so that a valid and reliable MCQ question bank is developed.
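A minimal sketch of the upper/lower-group computation described above, assuming hypothetical 0/1 response vectors for the two achiever groups (the study itself tabulated responses in Excel; the data below are invented for illustration):

```python
import numpy as np

def item_indices(high: np.ndarray, low: np.ndarray) -> tuple[float, float]:
    """Difficulty and discrimination indices from upper/lower achiever groups.

    high, low: 0/1 correct/incorrect vectors for the top and bottom
    groups (equal size n), as in the top/bottom-33% design above."""
    n = len(high)
    h, l = high.sum(), low.sum()          # correct counts in each group
    difficulty = (h + l) / (2 * n) * 100  # % of both groups answering correctly
    discrimination = (h - l) / n          # ranges from -1 to +1
    return difficulty, discrimination

# Hypothetical groups of 52 students each (top/bottom 33% of 158).
rng = np.random.default_rng(1)
high = (rng.random(52) > 0.25).astype(int)
low = (rng.random(52) > 0.55).astype(int)
print(item_indices(high, low))
```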


Author(s):  
Amit P. Date ◽  
Archana S. Borkar ◽  
Rupesh T. Badwaik ◽  
Riaz A. Siddiqui ◽  
Tanaji R. Shende ◽  
...  

Background: Multiple choice questions (MCQs) are a common method for formative and summative assessment of medical students. Item analysis enables identifying good MCQs based on difficulty index (DIF I), discrimination index (DI) and distractor efficiency (DE). The objective of this study was to assess the quality of MCQs currently in use in pharmacology by item analysis and to develop an MCQ bank with quality items. Methods: This cross-sectional study was conducted among 148 second-year MBBS students at NKP Salve Institute of Medical Sciences from January 2018 to August 2018. Forty MCQs, twenty from each of the two term examinations of pharmacology, were taken for item analysis. A correct response to an item was awarded one mark and each incorrect response was awarded zero. Each item was analyzed using a Microsoft Excel sheet for three parameters: DIF I, DI and DE. Results: In the present study, the mean±SD of the difficulty index (%), discrimination index and distractor efficiency (%) were 64.54±19.63, 0.26±0.16 and 66.54±34.59, respectively. Out of 40 items, a large number of MCQs had an acceptable level of difficulty (70%) and were good at discriminating between higher- and lower-ability students (DI: 77.5%). Distractor efficiency related to the presence of zero or one non-functional distractor (NFD) was 80%. Conclusions: The study showed that item analysis is a valid tool to identify quality items which, when incorporated regularly, can help to develop a very useful, valid and reliable question bank.
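Distractor efficiency is conventionally derived from the count of non-functional distractors, usually defined as distractors selected by fewer than 5% of examinees. A minimal sketch under that assumption, with invented option counts (not the study's data):

```python
def distractor_efficiency(option_counts: dict, key: str,
                          nfd_cutoff: float = 0.05) -> float:
    """DE (%) for one item: the share of distractors that are functional.
    A distractor chosen by fewer than nfd_cutoff of examinees is an NFD."""
    total = sum(option_counts.values())
    distractors = [c for opt, c in option_counts.items() if opt != key]
    nfds = sum(1 for c in distractors if c / total < nfd_cutoff)
    return (len(distractors) - nfds) / len(distractors) * 100

# Hypothetical 4-option item: "B" is the key; "D" (3/148, about 2%) is an NFD.
print(distractor_efficiency({"A": 30, "B": 80, "C": 35, "D": 3}, key="B"))
```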


2019 ◽  
Author(s):  
Assad Ali Rezigalla ◽  
Elwathiq Khalid Ibrahim ◽  
Amar Babiker ElHussein

Abstract Background: Distractor efficiency of multiple choice item responses is a component of item analysis used by examiners to evaluate the credibility and functionality of the distractors. Objective: To evaluate the impact of the functionality (efficiency) of the distractors on difficulty and discrimination indices. Methods: A cross-sectional study in which standard item analysis of an 80-item test consisting of A-type MCQs was performed. Correlation and significance of variance among the difficulty index (DIF), discrimination index (DI) and distractor efficiency (DE) were measured. Results: There was a significant moderate positive correlation between difficulty index and distractor efficiency, meaning that a high difficulty index tends to go with high distractor efficiency (and vice versa), and a weak positive correlation between distractor efficiency and discrimination index. Conclusions: Non-functional distractors can reduce the discriminating power of multiple choice questions. More training and effort in the construction of plausible options for MCQ items is essential for the validity and reliability of tests.
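Correlations like those reported above are ordinary Pearson coefficients computed across items. A hedged sketch with invented per-item statistics (not the study's 80-item data; the generating relationships below are chosen only to mimic a moderate and a weak correlation):

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-item statistics for an 80-item test.
rng = np.random.default_rng(2)
dif = rng.uniform(20, 90, 80)               # difficulty index (%)
de = 0.6 * dif + rng.normal(0, 12, 80)      # distractor efficiency (%)
di = 0.001 * de + rng.normal(0.3, 0.1, 80)  # discrimination index

r1, p1 = pearsonr(dif, de)  # expected: moderate positive
r2, p2 = pearsonr(de, di)   # expected: weak positive
print(f"DIF vs DE: r = {r1:.2f}, p = {p1:.3g}")
print(f"DE  vs DI: r = {r2:.2f}, p = {p2:.3g}")
```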


2017 ◽  
Author(s):  
Abdulaziz Alamri ◽  
Omer Abdelgadir Elfaki ◽  
Karimeldin A Salih ◽  
Suliman Al Humayed ◽  
Fatmah Mohammed Ahmad Althebat ◽  
...  

BACKGROUND Multiple choice questions represent one of the commonest methods of assessment in medical education. They are believed to be reliable and efficient, and their quality depends on good item construction. Item analysis is used to assess their quality by computing the difficulty index, discrimination index, distractor efficiency and test reliability. OBJECTIVE The aim of this study was to evaluate the quality of MCQs used in the College of Medicine, King Khalid University, Saudi Arabia. METHODS A cross-sectional study design was used. Item analysis data of 21 MCQ exams were collected. Values for the difficulty index, discrimination index, distractor efficiency and reliability coefficient were entered into MS Excel 2010, and descriptive statistical parameters were computed. RESULTS Twenty-one tests were analyzed. Overall, 7% of the items among all the tests were difficult, 35% were easy and 58% were acceptable. The mean difficulty of all the tests was in the acceptable range of 0.3-0.85. Items with an acceptable discrimination index among all tests ranged from 39% to 98%. Negatively discriminating items were identified in all tests except one. All distractors were functioning in 5%-48% of items. The mean number of functioning distractors ranged from 0.77 to 2.25. The KR-20 scores lay between 0.47 and 0.97. CONCLUSIONS Overall, the quality of the items and tests was found to be acceptable. Some items were identified as problematic and need to be revised. The quality of a few tests of specific courses was questionable; these tests need to be revised and steps taken to improve the situation.
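The KR-20 reliability quoted above is computed from per-item pass proportions and the variance of total scores. A minimal sketch, assuming a 0/1 response matrix (the simulated examinees below are illustrative only, generated so that items share a common ability factor):

```python
import numpy as np

def kr20(responses: np.ndarray) -> float:
    """Kuder-Richardson formula 20 for a 0/1 response matrix
    (rows = examinees, columns = items)."""
    k = responses.shape[1]                   # number of items
    p = responses.mean(axis=0)               # per-item proportion correct
    var_total = responses.sum(axis=1).var()  # variance of total scores
    return (k / (k - 1)) * (1 - (p * (1 - p)).sum() / var_total)

# Simulated examinees whose answers share a common ability factor.
rng = np.random.default_rng(3)
ability = rng.normal(size=(100, 1))
resp = (ability + rng.normal(size=(100, 21)) > 0).astype(int)
print(round(kr20(resp), 2))
```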


2018 ◽  
Vol 18 (1) ◽  
pp. 68 ◽  
Author(s):  
Deena Kheyami ◽  
Ahmed Jaradat ◽  
Tareq Al-Shibani ◽  
Fuad A. Ali

Objectives: The current study aimed to carry out a post-validation item analysis of multiple choice questions (MCQs) in medical examinations in order to evaluate correlations between item difficulty, item discrimination and distractor effectiveness so as to determine whether questions should be included, modified or discarded. In addition, the optimal number of options per MCQ was analysed. Methods: This cross-sectional study was performed in the Department of Paediatrics, Arabian Gulf University, Manama, Bahrain. A total of 800 MCQs and 4,000 distractors were analysed between November 2013 and June 2016. Results: The mean difficulty index ranged from 36.70% to 73.14%. The mean discrimination index ranged from 0.20 to 0.34. The mean distractor efficiency ranged from 66.50% to 90.00%. Of the items, 48.4%, 35.3%, 11.4%, 3.9% and 1.1% had zero, one, two, three and four nonfunctional distractors (NFDs), respectively. Using three or four rather than five options in each MCQ resulted in 95% or 83.6% of items having zero NFDs, respectively. The distractor efficiency was 91.87%, 85.83% and 64.13% for difficult, acceptable and easy items, respectively (P < 0.005). Distractor efficiency was 83.33%, 83.24% and 77.56% for items with excellent, acceptable and poor discrimination, respectively (P < 0.005). The average Kuder-Richardson formula 20 reliability coefficient was 0.76. Conclusion: A considerable number of the MCQ items were within acceptable ranges. However, some items needed to be discarded or revised. Using three or four rather than five options in MCQs is recommended to reduce the number of NFDs and improve the overall quality of the examination.
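The comparison of distractor efficiency across difficulty bands reported above amounts to a grouped mean over per-item statistics. A hedged sketch with invented data and assumed band cut-offs (the abstract does not state its exact thresholds):

```python
import numpy as np
import pandas as pd

# Hypothetical per-item difficulty index (%) and distractor efficiency (%).
rng = np.random.default_rng(4)
items = pd.DataFrame({
    "dif": rng.uniform(20, 95, 200),
    "de": rng.uniform(40, 100, 200),
})
# Assumed difficulty bands: a high difficulty index means an easier item.
items["band"] = pd.cut(items["dif"], bins=[0, 30, 70, 100],
                       labels=["difficult", "acceptable", "easy"])
print(items.groupby("band", observed=True)["de"].mean().round(2))
```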


Author(s):  
Assad Rezigalla ◽  
Ali Eleragi ◽  
Masoud Elkhalifa ◽  
Ammar Mohammed

Introduction: Students' perception of an examination reflects their feelings, while item analysis refers to a statistical analysis of students' responses to examination items. The study was conducted to compare students' perception of an examination with its item analysis. Material and methods: This is a cross-sectional study conducted at the college of medicine from January to April 2019. The study used a structured questionnaire and standardized item analysis of students' examinations. All students who had registered for semester two (2018-2019) were included in the study. Exclusion criteria were students who refused to participate in the study or did not complete the questionnaire. Results: The KR-20 of the examination was 0.906. The average difficulty index of the examination was 69.4. The response rate of the questionnaire was 88.9% (40/45). Students considered the examination to be easy (70.4%). A significant correlation was reported between students' perception of examination difficulty and the standard examination difficulty. Discussion: Student perceptions support the evidence of examination validity. Students were found able to estimate examination difficulty. Keywords: student perception, item analysis, assessment, validity, reliability.


2019 ◽  
Author(s):  
Assad Ali Rezigalla ◽  
Ali Mohammed Elhassan Eleragi ◽  
Masoud Ishag Elkhalifa ◽  
Ammar M A Mohammed

Abstract Introduction: Student perception of an exam reflects students' feelings towards the exam items, while item analysis is a statistical analysis of students' responses to exam items. The study was formulated to compare students' perception of the exam with the results of item analysis. Material and methods: This cross-sectional study was conducted in the college of medicine from January to April 2019. The study used a structured questionnaire and standardized item analysis of a students' exam. Participants were students registered for semester two (2018-2019). Exclusion criteria included all students who refused to participate in the study or did not fill in the questionnaire. Results: The response rate of the questionnaire was 88.9% (40/45). Students considered the exam easy (70.4%). The average difficulty index of the exam was acceptable. The KR-20 of the exam was 0.906. A significant correlation was reported between student perceptions of exam difficulty and standard exam difficulty. Discussion: Student perceptions support the evidence of exam validity. Students can estimate exam difficulty.


2010 ◽  
Vol 18 (1) ◽  
pp. 26-35 ◽  
Author(s):  
Victoria P. Niederhauser

There is a plethora of literature on barriers to immunizations; however, these studies lack standardization of measurement. The aim of this study was to develop and establish an initial psychometric evaluation of an instrument to measure parental barriers to childhood immunizations. This was a cross-sectional study design. Data analysis included descriptive statistics, reliability estimates, item analysis, and factor analysis. Six hundred and fifty-five participants completed the survey. The Searching for Hardships and Obstacles to Shots instrument was developed with 60 items and reduced to 23 items through multiple statistical computations; the best factor model was a three-factor solution (Access to Shots, Concerns About Shots, and Importance of Shots) with a total variance explained of 59.4%. The Cronbach's alpha reliability estimates ranged from .86 to .93, and temporal stability was adequate (r = .85). This study supports exceptional initial psychometric properties of an instrument to measure parental barriers to childhood immunizations.
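The Cronbach's alpha estimates reported above generalize KR-20 to non-dichotomous (e.g. Likert-scale) items. A minimal sketch with simulated survey data (not the actual instrument's responses; the latent-trait generator below is invented so the items correlate):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an item-score matrix
    (rows = respondents, columns = items)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Simulated 23-item Likert survey (1-5) driven by one latent trait.
rng = np.random.default_rng(5)
trait = rng.normal(size=(655, 1))
likert = np.clip(np.round(3 + trait + rng.normal(0, 0.8, (655, 23))), 1, 5)
print(round(cronbach_alpha(likert), 2))
```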


2018 ◽  
Vol 9 ◽
pp. 887-891 ◽  
Author(s):  
Hatem Alharbi ◽  
Abulaaziz Almalki ◽  
Fawaz Alabdan ◽  
Bander Hadad
