ANALYSIS OF PROGRESS TEST RESULTS IN MEDICAL FACULTY STUDENTS

Author(s):  
Ade Pryta Romanauli Simaremare

Background: Assessment of learning outcomes is an important source of evaluation data, showing how well the teaching and learning process has been carried out. It can be obtained through formative and summative assessment, after which students are given feedback on the results. One method of formative evaluation is the progress test. Since its introduction at the HKBP Nommensen University Faculty of Medicine, the results of the progress test had never been analyzed. This study analyzed the results of the progress test held in the even semester of the 2018/2019 academic year. Methods: This study used a descriptive observational design with a cross-sectional method. The sample comprised all 215 students of the Faculty of Medicine actively studying in the even semester of the 2018/2019 academic year. Item analysis was performed on the basic and clinical medicine question categories, examining the difficulty level and the discrimination index by students' study period. Results: The passing rate of students who attended the progress test was very low. However, the scores achieved increased with the length of the students' study period. Item analysis showed that most items were of medium difficulty and that most had a poor discrimination index, in both the basic and clinical medicine science categories. Conclusion: Progress testing can be used as a tool to help curriculum designers track the development of students' knowledge, both individually and at the population level.
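The cohort-wise comparison described in this abstract (scores rising with length of study) amounts to a group-by-mean over student records. A minimal sketch, with illustrative cohort labels and scores rather than the study's data:

```python
# Sketch: mean progress-test score per study-year cohort.
# Cohort labels and scores are illustrative, not taken from the study.
from collections import defaultdict

def mean_score_by_cohort(records):
    """records: iterable of (cohort_year, score) pairs."""
    sums = defaultdict(lambda: [0.0, 0])   # cohort -> [score sum, count]
    for cohort, score in records:
        sums[cohort][0] += score
        sums[cohort][1] += 1
    # Sorted by cohort so the year-on-year trend is easy to read off.
    return {c: s / n for c, (s, n) in sorted(sums.items())}

# e.g. first-year vs fourth-year students:
# mean_score_by_cohort([(1, 40.0), (1, 50.0), (4, 70.0)])
```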

2021, Vol 13 (2), pp. 1425-1431
Author(s):  
Andi Rahman

The current Covid-19 pandemic has affected human life globally in many ways, including the delivery of education. This study aimed to determine the impact of the Covid-19 pandemic on learning outcomes in higher education. The research method used was a cross-sectional study. Data were taken from test results at the end of lectures, observations, and interviews. The research was conducted at the University of Muhammadiyah Lampung, the IPDN Jatinangor Campus, and the Ahmad Dahlan Institute of Technology and Business, with 120 students participating. The data were analyzed using percentages and cross-tabulation. The study concluded that student learning outcomes declined in the 2020-2021 academic year compared to the 2019-2020 academic year. The decline covered knowledge, skills, and psychological aspects. This finding has implications for how education personnel understand online teaching and learning design during the Covid-19 pandemic.


Author(s):  
Ajeet Kumar Khilnani ◽  
Rekha Thaddanee ◽  
Gurudas Khilnani

<p class="abstract"><strong>Background:</strong> Multiple choice questions (MCQs) are routinely used for formative and summative assessment in medical education. Item analysis is a process of post-validation of MCQ tests, whereby items are analyzed for difficulty index, discrimination index, and distractor efficiency, to obtain a range of items of varying difficulty and discrimination indices. This study was done to understand the process of item analysis and to analyze an MCQ test so that a valid and reliable MCQ bank in otorhinolaryngology could be developed.</p><p class="abstract"><strong>Methods:</strong> 158 students of the 7<sup>th</sup> semester were given an 8-item MCQ test. Based on the marks achieved, the high achievers (top 33%, 52 students) and low achievers (bottom 33%, 52 students) were included in the study. The responses were tabulated in a Microsoft Excel sheet and analyzed for difficulty index, discrimination index, and distractor efficiency.</p><p class="abstract"><strong>Results:</strong> The mean (SD) difficulty index (Diff-I) of the 8-item test was 61.41% (11.81%). 5 items had a very good difficulty index (41% to 60%), while 3 items were easy (Diff-I &gt;60%). There was no difficult item (Diff-I &lt;30%) in this test. The mean (SD) discrimination index (DI) of the test was 0.48 (0.15), and all items had very good discrimination indices of more than 0.25. Out of 24 distractors, 6 (25%) were non-functional distractors (NFDs). The mean (SD) distractor efficiency (DE) of the test was 74.62% (23.79%).</p><p class="abstract"><strong>Conclusions:</strong> Item analysis should be an integral and regular activity in each department so that a valid and reliable MCQ question bank is developed.</p>
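The difficulty and discrimination indices used in this study follow the conventional high/low achiever split. A minimal sketch, assuming a 0/1 response matrix (rows = students, columns = items) and the usual 33% group definitions; names and data are illustrative, not from the study:

```python
# Sketch of difficulty index (Diff-I) and discrimination index (DI)
# using the conventional top/bottom 33% split described in the abstract.
def item_indices(scores):
    """scores: list of per-student lists of 0/1 item marks."""
    n = len(scores)
    k = max(1, round(n / 3))                    # size of each 33% group
    ranked = sorted(scores, key=sum, reverse=True)
    high, low = ranked[:k], ranked[-k:]
    results = []
    for item in range(len(scores[0])):
        h = sum(s[item] for s in high)          # correct answers in high group
        l = sum(s[item] for s in low)           # correct answers in low group
        diff = (h + l) / (2 * k) * 100          # difficulty index, percent
        disc = (h - l) / k                      # discrimination index
        results.append((diff, disc))
    return results
```

By these formulas an item answered correctly by every high achiever and no low achiever gets Diff-I 50% and DI 1.0, the ideal discriminator.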


Author(s):  
Amit P. Date ◽  
Archana S. Borkar ◽  
Rupesh T. Badwaik ◽  
Riaz A. Siddiqui ◽  
Tanaji R. Shende ◽  
...  

Background: Multiple choice questions (MCQs) are a common method for formative and summative assessment of medical students. Item analysis enables identification of good MCQs based on the difficulty index (DIF I), discrimination index (DI), and distractor efficiency (DE). The objective of this study was to assess the quality of MCQs currently in use in pharmacology by item analysis and to develop an MCQ bank with quality items. Methods: This cross-sectional study was conducted on 148 second-year MBBS students at NKP Salve Institute of Medical Sciences from January 2018 to August 2018. Forty MCQs, twenty from each of the two term examinations of pharmacology, were taken for item analysis. A correct response to an item was awarded one mark and an incorrect response zero. Each item was analyzed in a Microsoft Excel sheet for three parameters: DIF I, DI, and DE. Results: The mean ± standard deviation (SD) of the difficulty index (%), discrimination index, and distractor efficiency (%) were 64.54±19.63, 0.26±0.16, and 66.54±34.59, respectively. Of the 40 items, a large proportion had an acceptable level of difficulty (70%) and discriminated well between higher- and lower-ability students (DI, 77.5%). Distractor efficiency, in terms of items with zero or one non-functional distractor (NFD), was 80%. Conclusions: The study showed that item analysis is a valid tool for identifying quality items which, when incorporated regularly, can help to develop a very useful, valid, and reliable question bank.
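The distractor efficiency reported here can be sketched from per-option response counts, assuming the usual convention that a distractor is non-functional (NFD) when chosen by fewer than 5% of examinees and that DE is the share of functional distractors. The counts and option labels below are illustrative:

```python
# Sketch of non-functional distractor (NFD) count and distractor
# efficiency (DE) for one item, under the common <5% NFD convention.
def distractor_efficiency(option_counts, correct, threshold=0.05):
    """option_counts: dict mapping option label -> number of students
    choosing it; correct: label of the keyed answer."""
    total = sum(option_counts.values())
    distractors = {o: c for o, c in option_counts.items() if o != correct}
    nfd = sum(1 for c in distractors.values() if c / total < threshold)
    functional = len(distractors) - nfd
    de = functional / len(distractors) * 100     # percent of working distractors
    return nfd, round(de, 2)

# For a 4-option item, DE takes the values 100%, 66.67%, 33.33%, or 0%
# for 0, 1, 2, or 3 NFDs, matching the figures quoted in these abstracts.
```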


2020, Vol 19 (1)
Author(s):  
Surajit Kundu ◽  
Jaideo M Ughade ◽  
Anil R Sherke ◽  
Yogita Kanwar ◽  
Samta Tiwari ◽  
...  

Background: Multiple-choice questions (MCQs) are the most widely accepted tool for the evaluation of comprehension, knowledge, and application among medical students. Single-best-response MCQs (items) can assess higher orders of cognition. It is essential to develop valid and reliable MCQs, as flawed items interfere with unbiased assessment. The present paper discusses the art of framing well-structured items, drawing on the provided references, and describes a practice by which committed medical educators can improve the skill of writing quality MCQs through enhanced faculty development programs (FDPs). Objectives: The objective of the study was also to test the quality of MCQs by item analysis. Methods: In this study, 100 MCQs from set I or set II were distributed to 200 MBBS students of Late Shri Lakhiram Agrawal Memorial Govt. Medical College Raigarh (CG) for item analysis. Set I and set II comprised MCQs written by 60 medical faculty before and after the FDP, respectively. All MCQs had a single stem with one correct answer and three wrong answers. The data were entered in Microsoft Excel 2016 for analysis. The difficulty index (DIF I), discrimination index (DI), and distractor efficiency (DE) were the item-analysis parameters used to evaluate the impact of adhering to the guidelines for framing MCQs. Results: The mean difficulty index, discrimination index, and distractor efficiency were 56.54%, 0.26, and 89.93%, respectively. Among the 100 items, 14 were of higher difficulty (DIF I < 30%), 70 were of moderate difficulty, and 16 were easy (DIF I > 60%). A total of 10 items had very good DI (≥0.40), 32 had recommended values (0.30-0.39), and 25 were acceptable with changes (0.20-0.29). Of the 100 MCQs, 27 had a DE of 66.66% and 11 had a DE of 33.33%.
Conclusions: In this study, higher cognitive-domain MCQs increased after training, recurrent-type MCQs decreased, and MCQs with item-writing flaws were reduced; these improvements were statistically significant. Nine MCQs satisfied all the criteria of item analysis.


2021, Vol 4 (2), pp. 178-186
Author(s):  
Budi Mulyati

The purpose of this study was to analyze essay items given as the final exam in Introductory Accounting 1. The questions were given to nineteen first-semester students in the 2020-2021 academic year. The study used a descriptive method with a quantitative approach. For the analysis, the item-analysis technique was applied, consisting of an analysis of the difficulty level of the items and of their discriminating power. The difficulty-index results showed that 50% of the questions were easy and 50% were of average difficulty. Based on the discriminating-power index, 33.3% of the questions needed revision and 66.7% were not good.


2017
Author(s):  
Abdulaziz Alamri ◽  
Omer Abdelgadir Elfaki ◽  
Karimeldin A Salih ◽  
Suliman Al Humayed ◽  
Fatmah Mohammed Ahmad Althebat ◽  
...  

BACKGROUND Multiple choice questions represent one of the commonest methods of assessment in medical education. They are believed to be reliable and efficient, but their quality depends on good item construction. Item analysis is used to assess their quality by computing the difficulty index, discrimination index, distractor efficiency, and test reliability. OBJECTIVE The aim of this study was to evaluate the quality of MCQs used in the College of Medicine, King Khalid University, Saudi Arabia. METHODS This was a cross-sectional study. Item analysis data from 21 MCQ exams were collected. Values for the difficulty index, discrimination index, distractor efficiency, and reliability coefficient were entered in MS Excel 2010, and descriptive statistics were computed. RESULTS Twenty-one tests were analyzed. Overall, 7% of the items across all tests were difficult, 35% were easy, and 58% were acceptable. The mean difficulty of every test was in the acceptable range of 0.3-0.85. Items with an acceptable discrimination index made up 39%-98% of each test. Negatively discriminating items were identified in all tests except one. All distractors were functioning in 5%-48% of items. The mean number of functioning distractors ranged from 0.77 to 2.25. KR-20 scores lay between 0.47 and 0.97. CONCLUSIONS Overall, the quality of the items and tests was acceptable. Some items were identified as problematic and need revision. The quality of a few tests in specific courses was questionable; these tests need to be revised and steps taken to improve the situation.
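The KR-20 reliability coefficient quoted in this abstract has a standard closed form: KR-20 = (k/(k-1))·(1 - Σp·q / σ²), with k items, p the proportion answering each item correctly, q = 1 - p, and σ² the variance of total scores. A minimal sketch for a 0/1 response matrix, using the population variance (one common convention); the data are illustrative:

```python
# Sketch of the Kuder-Richardson 20 (KR-20) reliability coefficient
# for a 0/1 response matrix (rows = students, columns = items).
def kr20(scores):
    n = len(scores)
    k = len(scores[0])                               # number of items
    totals = [sum(s) for s in scores]
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n   # population variance
    pq = 0.0
    for item in range(k):
        p = sum(s[item] for s in scores) / n         # proportion correct
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var)
```

Values near 1 indicate a highly internally consistent test; the 0.47-0.97 range reported above spans weak to excellent reliability.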


Author(s):  
Mirfat Mirfat ◽  
Yuhernita Yuhernita

Background: The Faculty of Medicine, YARSI University, has undergone a paradigm shift in higher education from a content-based curriculum to a competency-based curriculum, applied since the 2007/2008 academic year using a problem-based learning approach. The progress test is an evaluation used to measure students' competency in the field of knowledge in its entirety. The Faculty of Medicine, YARSI University, had yet to apply the progress test; thus, this research aimed to run a progress test trial as a standard measurement of student capabilities in the field of knowledge, both individually and for the class as a whole. Method: This research used a qualitative method. The sample consisted of 200 medical students chosen at random from each class, from the class of 2013 (first-year students) to the class of 2010 (fourth-year students). Results: By year of entry, fourth-year students achieved higher scores on the progress test trial than first-, second-, and third-year students. Likewise, first-year students scored lower than students in their final (fourth) year. This indicates that the longer the period of study, the better the scores achieved on the progress test. Conclusion: The Faculty of Medicine, YARSI University, successfully ran the progress test trial in the first week of the third block of the odd semester of the 2013/2014 academic year. The progress test consisted of 200 questions and was taken by a randomized sample of 200 students.
The results revealed an increase in students' average examination grades from the first year to the fourth year, consistent with each year's level of understanding of the lectures. Item analysis of the test showed a good distribution of difficulty, with the majority of questions at a moderate level of difficulty.


Author(s):  
Novi Maulina ◽  
Rima Novirianthy

Background: Assessment and evaluation of students is an essential component of the teaching and learning process. Item analysis is the technique of collecting, summarizing, and using students' response data to assess the quality of a Multiple Choice Question (MCQ) test by measuring indices of difficulty and discrimination, as well as distractor efficiency. Peer-review practices improve the validity of assessments used to evaluate student performance. Method: We analyzed 150 students' responses to 100 MCQs in a block examination for difficulty index (p), discrimination index (D), and distractor efficiency (DE) using Microsoft Excel formulas. The correlation of p and D was analyzed using the Spearman correlation test in SPSS 23.0. The results were used to evaluate the peer-review strategy. Results: The median difficulty index (p) was 54%, within the excellent range (p 40-60%), and the mean discrimination index (D) was 0.24, which is reasonably good. There were 7 items with excellent p (40-60%) and excellent D (≥0.4). Nineteen items had an excellent discrimination index (D≥0.4). However, there were 9 items with a negative discrimination index and 30 items with a poor discrimination index, which should be fully revised. Forty-two items had 4 functioning distracters (DE 0%), suggesting that teachers should be more precise and careful in creating distracters. Conclusion: Based on item analysis, some items need to be fully revised. For better test quality, feedback and suggestions for item writers should also be provided as part of the peer-review process, on the basis of item analysis.
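The p-versus-D correlation in this study was computed with SPSS; the same Spearman rank correlation can be sketched in plain Python by ranking both index lists (average ranks for ties) and taking the Pearson correlation of the ranks. The values below are illustrative, not the study's data:

```python
# Sketch of the Spearman rank correlation between item difficulty (p)
# and discrimination (D), with average ranks assigned to tied values.
def _ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                             # extend the tie group
        avg = (i + j) / 2 + 1                  # average rank for the group
        for t in range(i, j + 1):
            ranks[order[t]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)                     # Pearson correlation of ranks
```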

