Étude Évaluative d'Examen Normalisé de Sciences de la Vie et de la Terre au Cycle Secondaire Collégial (Evaluative Study of a Standardized Life and Earth Sciences Exam in the Lower Secondary Cycle)

2016, Vol 12 (1), pp. 283
Author(s): Abdellah El Allaoui, Fouzia Rhazi Filali, El Mokhtar El Hadri, Khalid Fetteh, Malika Bouhadi

Generally, an exam, performance test, or any test used for summative or evaluative purposes must be prepared according to a definite plan. The present paper analyzes and evaluates a certificate exam; the study also examines how the questions represent Bloom's cognitive levels. The article presents the statistical and psychometric indices that characterize each of the 26 items making up a certification exam in the Sciences of Life and Earth at a Moroccan high school in Meknes. Two hundred test copies were analyzed using the SPSS and Anitem software. The average rate of goals achieved in knowledge, analysis, understanding, and evaluation activities is 61%, 52%, 37.5%, and 58% respectively, but only 19% for the "application" activity. The success rate on open-question items is only 38.5%, against 61.5% for multiple choice questions (MCQs) and closed questions. The internal homogeneity coefficient is greater than 0.8 (α = 0.84), so the homogeneity of the instrument is considered satisfactory. According to the difficulty index, of the 26 items two are extremely easy (P > 0.8), while two others are extremely difficult (P < 0.2). According to the discrimination index, eleven items meet the conditions of validity, three are not discriminating, and twelve should be reviewed. The correlation between the success rate on these items and their difficulty index is high (R = 0.98). In light of the findings, we recommend involving measurement and evaluation experts in the writing of exam questions, and we strongly recommend continuous training of teachers in the field of assessment and evaluation.
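The two headline statistics here, the per-item difficulty index and the internal homogeneity coefficient (Cronbach's alpha, which reduces to KR-20 for dichotomous items), are straightforward to reproduce outside SPSS or Anitem. Below is a minimal Python sketch, assuming the 200 scored copies are available as a 0/1 matrix with one row per student and one column per item; the variable and function names are illustrative, not taken from the study.

```python
# Minimal sketch: difficulty index and Cronbach's alpha (= KR-20 for 0/1 items).
# The simulated matrix below stands in for the study's 200 copies x 26 items.
import numpy as np

def difficulty_index(responses: np.ndarray) -> np.ndarray:
    """P per item: the proportion of students who answered correctly."""
    return responses.mean(axis=0)

def cronbach_alpha(responses: np.ndarray) -> float:
    """Internal-consistency coefficient over the whole instrument."""
    k = responses.shape[1]                         # number of items
    item_vars = responses.var(axis=0, ddof=1)      # variance of each item
    total_var = responses.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
scores = (rng.random((200, 26)) < 0.55).astype(int)  # placeholder 0/1 data
print(difficulty_index(scores).round(2))
print(round(cronbach_alpha(scores), 2))
```

With real scored data in place of the simulated matrix, the alpha value could be checked against the reported 0.84.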


2020, pp. 084653711989952
Author(s): Roxanne Labranche, Chantale Lapierre, Isabelle Trop

Objective: Radiology residents must fulfill a standardized curriculum to complete residency and pass a certification exam before they are granted a licence to practice. We sought to evaluate how well residency prepares trainees for practice, as perceived by recent graduates and their department chiefs. Subjects and Methods: Radiologists who graduated from the 4 Quebec radiology residency programs between 2005 and 2016 (n = 237) and Quebec radiology department chiefs (n = 98) were anonymously surveyed. Two electronic surveys were created, one for recent graduates (74 questions) and one for department chiefs (11 questions), with multiple-choice and open questions covering all fields of radiology. Surveys were administered between April and June 2016 using the Association des radiologistes du Québec database. Results: The response rate among recent graduates was 75 of 237 (31.6%), and 96% rated their training as excellent or good. Satisfaction with training in computed tomography and magnetic resonance imaging was high, while musculoskeletal (MSK) imaging, particularly MSK ultrasound (US), as well as pediatric, cardiac, and vascular imaging, were identified as needing more training. Thirty-nine (39.8%) of 98 department chiefs answered the survey and highlighted weaknesses in the interpretation of conventional radiography, obstetrical US, and invasive procedures, as well as limited leadership and administrative skills. Recent graduates and department chiefs both reported difficulty keeping up with the scheduled daily volume of examinations and with invasive procedure competency. Conclusion: This survey highlights areas of the radiology curriculum which may benefit from more emphasis during training. Adjustments to the residency program would ensure graduates succeed both in their certification exams and in clinical practice.



2021, pp. 9-10
Author(s): Bhoomika R. Chauhan, Jayesh Vaza, Girish R. Chauhan, Pradip R. Chauhan

Multiple choice questions are nowadays used in competitive examinations and formative assessment to assess students' eligibility and for certification. Item analysis is the process of collecting, summarizing, and using information from students' responses to assess the quality of test items. The goal of the study was to identify the relationship between the item difficulty index and the item discrimination index in the assessment of medical students. 400 final-year medical students from various medical colleges responded to 200 items constructed for the study. The responses were assessed and analyzed for item difficulty index and item discriminating power, and the two were correlated using statistical methods. The discriminating power of items with a difficulty index of 40%-50% was the highest. Summary and conclusion: items with a good difficulty index, in the range of 30%-70%, are good discriminators.
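The relationship this study reports can be computed directly once each student's responses are scored. The Python sketch below uses the common extreme-group definition of the discrimination index (the difference in proportion correct between the top and bottom score groups); the 27% group fraction and the simulated data are assumptions for illustration, since the abstract does not state the grouping used.

```python
# Sketch: extreme-group discrimination index and its correlation with difficulty.
import numpy as np

def discrimination_index(responses: np.ndarray, frac: float = 0.27) -> np.ndarray:
    """D per item: P(upper group) - P(lower group), groups ranked by total score."""
    order = responses.sum(axis=1).argsort()    # students sorted by total score
    n = max(1, int(len(order) * frac))         # size of each extreme group
    lower, upper = responses[order[:n]], responses[order[-n:]]
    return upper.mean(axis=0) - lower.mean(axis=0)

rng = np.random.default_rng(1)
answers = (rng.random((400, 200)) < 0.5).astype(int)  # 400 students, 200 items
p = answers.mean(axis=0)                              # difficulty index per item
d = discrimination_index(answers)
print(np.corrcoef(p, d)[0, 1])                        # Pearson r between P and D
```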



Author(s): Netravathi B. Angadi, Amitha Nagabhushana, Nayana K. Hashilkar

Background: Multiple choice questions (MCQs) are a common method of assessment of medical students. The quality of MCQs is determined by three parameters: difficulty index (DIF I), discrimination index (DI), and distractor efficiency (DE). Item analysis is a valuable yet relatively simple procedure, performed after the examination, that provides information regarding the reliability and validity of a test item. The objective of this study was to perform an item analysis of MCQs to test these validity parameters. Methods: 50 items comprising 150 distractors were selected from the formative exams. A correct response to an item was awarded one mark, with no negative marking for an incorrect response. Each item was analysed for the three parameters DIF I, DI, and DE. Results: A total of 50 items comprising 150 distractors were analysed. The DIF I of 31 (62%) items was in the acceptable range (DIF I = 30-70%), and 30 items had 'good to excellent' discrimination (DI > 0.25). 10 (20%) items were too easy and 9 (18%) items were too difficult (DIF I < 30%). There were 4 items with 6 non-functional distractors (NFDs), while the remaining 46 items had no NFDs. Conclusions: Item analysis is a valuable tool as it helps us retain valuable MCQs and discard or modify items which are not useful. It also helps in improving skills in test construction and identifies specific areas of course content which need greater emphasis or clarity.
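Distractor efficiency is the one parameter here that needs the raw option choices rather than 0/1 scores. A hedged Python sketch follows, using the usual convention that a distractor picked by fewer than 5% of examinees counts as non-functional (NFD); the option labels, cut-off, and sample responses are illustrative assumptions, not data from the study.

```python
# Sketch: count non-functional distractors (<5% uptake) and compute
# distractor efficiency as the share of distractors that do function.
from collections import Counter

def distractor_analysis(choices: list[str], key: str, options: str = "ABCD"):
    counts = Counter(choices)
    n = len(choices)
    distractors = [o for o in options if o != key]
    nfd = [o for o in distractors if counts.get(o, 0) / n < 0.05]
    efficiency = 100 * (len(distractors) - len(nfd)) / len(distractors)
    return nfd, efficiency

# One item answered by 20 students; the correct answer is "B".
responses = list("BBABBCBBBBABBBBBCBBB")   # option "D" is never chosen
print(distractor_analysis(responses, key="B"))
```

For the 20 simulated responses, option D is never chosen, so the item has one NFD and a distractor efficiency of about 67%.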



The EHRA Book of Interventional Electrophysiology is the second official textbook of European Heart Rhythm Association (EHRA). Using clinical cases to encourage practical learning, this book assists electrophysiologists and device specialists in tackling both common and unusual situations that they may encounter during daily practice. Covering electrophysiological procedures for supraventricular and ventricular arrhythmias, the book enables specialists to deepen their understanding of complex concepts and techniques. Tracings are presented with multiple choice questions to allow readers to hone their skills for interpreting challenging cases and to prepare for the EHRA certification exam in electrophysiology. Cases include orthodromic atrioventricular re-entrant tachycardia, pulmonary vein isolation, ventricular tachycardia ablation, and atypical left atrial flutter, to name a few.



2021, Vol 9
Author(s): Nathan T. Douthit, John Norcini, Keren Mazuz, Michael Alkan, Marie-Therese Feuerstein, et al.

Introduction: The standardization of global health education and assessment remains a significant issue among global health educators. This paper explores the role of multiple choice questions (MCQs) in global health education: whether MCQs are appropriate in the written assessment of what may be perceived as a broad curriculum packed with fewer facts than biomedical science curricula; what form the MCQs might take; what we want to test; how to select the most appropriate question format; the challenge of quality item-writing; and which aspects of the curriculum MCQs may be used to assess. Materials and Methods: The Medical School for International Health (MSIH) global health curriculum was blueprinted by content experts and course teachers. A 30-question, 1-hour examination was produced after exhaustive item writing and revision by teachers of the course. Reliability, difficulty index, and discrimination were calculated, and examination results were analyzed using SPSS software. Results: Twenty-nine students sat the 1-hour examination. All students passed (scores above 67%, in accordance with University criteria). Twenty-three (77%) questions were found to be easy, 4 (14%) of moderate difficulty, and 3 (9%) difficult (using the examinations department's difficulty index calculations). Eight questions (27%) were considered discriminatory and 20 (67%) non-discriminatory according to the examinations department's calculations and criteria. The reliability score was 0.27. Discussion: Our experience shows that there may be a role for single-best-option (SBO) MCQ assessment in global health education. MCQs may be written that cover the majority of the curriculum. Some aspects of the curriculum may be better addressed by non-SBO format MCQs. MCQ assessment might usefully complement other forms of assessment that assess skills, attitudes, and behavior. Preparation of effective MCQs is an exhaustive process, but high-quality MCQs in global health may serve as an important driver of learning.



Author(s): Ajeet Kumar Khilnani, Rekha Thaddanee, Gurudas Khilnani

Background: Multiple choice questions (MCQs) are routinely used for formative and summative assessment in medical education. Item analysis is a process of post-validation of MCQ tests, whereby items are analyzed for difficulty index, discrimination index, and distractor efficiency to obtain a range of items of varying difficulty and discrimination indices. This study was done to understand the process of item analysis and to analyze an MCQ test so that a valid and reliable MCQ bank in otorhinolaryngology is developed. Methods: 158 students of the 7th semester were given an 8-item MCQ test. Based on the marks achieved, the high achievers (top 33%, 52 students) and low achievers (bottom 33%, 52 students) were included in the study. The responses were tabulated in a Microsoft Excel sheet and analyzed for difficulty index, discrimination index, and distractor efficiency. Results: The mean (SD) difficulty index (Diff-I) of the 8-item test was 61.41% (11.81%). 5 items had a very good difficulty index (41% to 60%), while 3 items were easy (Diff-I >60%). There was no difficult item (Diff-I <30%) in this test. The mean (SD) discrimination index (DI) of the test was 0.48 (0.15), and all items had very good discrimination indices of more than 0.25. Out of 24 distractors, 6 (25%) were non-functional distractors (NFDs). The mean (SD) distractor efficiency (DE) of the test was 74.62% (23.79%). Conclusions: Item analysis should be an integral and regular activity in each department so that a valid and reliable MCQ question bank is developed.



2021, Vol 1 (2), pp. 91
Author(s): Anggit Prabowo, Puspa Puspa, Fariz Setyawan

This study aims to develop a test instrument to measure higher-order thinking skills on the material of systems of linear equations in two variables. This is a Research and Development study using the ADDIE development model, consisting of the steps of analysis, design, development, implementation, and evaluation. The test subjects were students of class VIII of SMP Negeri 3 Payung. The instruments used were validation sheets and documentation, with data collected through expert judgment and tests. The research succeeded in developing a test instrument for measuring higher-order thinking skills on this material. The developed test consists of 10 multiple-choice items and 5 essay items, all declared valid by expert judgment. In the trial, the difficulty index analysis placed 1 multiple-choice item in the easy category and 9 in the moderate category; for the essays, 2 items fell in the medium category and 3 in the difficult category. The discrimination index analysis placed 1 multiple-choice question in the bad category, 3 in the sufficient category, and 6 in the good category; for the essays, 2 questions were categorized as bad, 1 as sufficient, and 2 as good. The distractor analysis found that 1 question had distractors that were not functioning properly, while 9 had properly functioning distractors. Finally, the reliability analysis yielded 0.73 for the multiple-choice questions and 0.716 for the essay questions, both interpreted as high reliability.



Author(s): Manju K. Nair, Dawnji S. R.

Background: Carefully constructed, high-quality multiple choice questions can serve as effective tools to improve the standard of teaching. This item analysis was performed to find the difficulty index, discrimination index, and number of non-functional distractors in single best response type questions. Methods: 40 single best response type questions with four options, each carrying one mark for the correct response, were taken for item analysis. There was no negative marking, and the maximum mark was 40. Based on the scores, the evaluated answer scripts were arranged from the highest score to the lowest, and only the upper third and lower third were included. The response to each item was entered in Microsoft Excel 2010, and the difficulty index, discrimination index, and number of non-functional distractors per item were calculated. Results: 40 multiple choice questions and 120 distractors were analysed in this study. 72.5% of items were good, with a difficulty index between 30% and 70%; 25% of items were difficult and 2.5% were easy. 27.5% of items showed excellent discrimination between high-scoring and low-scoring students. One item had a negative discrimination index (-0.1). There were 9 items with non-functional distractors. Conclusions: This study emphasises the need for improving the quality of multiple choice questions. Repeated evaluation by item analysis and modification of non-functional distractors may be performed to enhance the standard of teaching in Pharmacology.



Author(s): Ismail Burud, Kavitha Nagandla, Puneet Agarwal

Background: Item analysis is a quality assurance process of examining the performance of individual test items that measures the validity and reliability of exams. This study was performed to evaluate the quality of test items with respect to their performance on difficulty index (DFI) and discrimination index (DI), and to assess functional and non-functional distractors (FDs and NFDs). Methods: The study was performed on a summative examination undertaken by 113 students. The analysis covered 120 one-best-answer items (OBAs) and 360 distractors. Results: Out of the 360 distractors, 85 were chosen by fewer than 5% of students, with a distractor efficiency of 23.6%. About 47 (13%) items had no NFDs, while 51 (14%), 30 (8.3%), and 4 (1.1%) items contained 1, 2, and 3 NFDs respectively. The majority of items showed an excellent difficulty index (50.4%, n=42) and fair discrimination (37%, n=33). Questions with excellent difficulty and discrimination indices showed a statistically significant association with 1 NFD and 2 NFDs (p=0.03). Conclusions: Post-exam evaluation of item performance is one of the quality assurance methods for identifying the best-performing items for a quality question bank. Distractor efficiency gives information on the overall quality of an item.
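Taken together, these abstracts converge on the same post-exam triage: keep items with acceptable difficulty and good discrimination, revise the rest, and discard items that discriminate negatively. Below is a minimal Python sketch of such a decision rule; the band edges (DFI 30-70%, DI >= 0.25) follow the ranges quoted in the abstracts above, but the keep/revise/discard labels and thresholds are assumptions to be adjusted to local policy.

```python
# Sketch: classify each item by difficulty index (DFI, in %) and
# discrimination index (DI) for question-bank triage.
def triage(dfi: float, di: float) -> str:
    if di < 0:
        return "discard"   # negative discrimination: flawed key or stem
    if 30 <= dfi <= 70 and di >= 0.25:
        return "keep"      # acceptable difficulty, good discrimination
    return "revise"        # too easy/hard or weakly discriminating

# Hypothetical items: (DFI %, DI)
for item, (dfi, di) in {"Q1": (55, 0.42), "Q2": (85, 0.10), "Q3": (48, -0.1)}.items():
    print(item, triage(dfi, di))
```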


