Progress Test in Medical School: A Systematic Review of the Literature

Author(s):  
Ademir Garcia Reberti ◽  
Nayme Hechem Monfredini ◽  
Olavo Franco Ferreira Filho ◽  
Dalton Francisco de Andrade ◽  
Carlos Eduardo Andrade Pinheiro ◽  
...  

Abstract: The Progress Test is an objective assessment, consisting of 60 to 150 multiple-choice questions, designed to assess the cognitive skills expected of students by the end of undergraduate medical education. The test is applied to all students on the same day, making it possible to compare results across class years and to track the growth of knowledge throughout the course. This study aimed to carry out a systematic review of the literature on the Progress Test in medical schools in Brazil and around the world, examining the benefits of its implementation for the learning of students, teachers, and institutions. The review was conducted from July 2018 to April 2019 and covered articles published from January 2002 to March 2019. The search terms were “Progress Test in Medical Schools” and “Item Response Theory in Medicine”, applied to the PubMed, SciELO, and LILACS platforms. There was no language restriction in article selection, but the searches were conducted in English. A total of 192,026 articles were identified; after advanced search filters were applied, 11 articles were included in the study. The Progress Test (PTMed) has been applied in medical schools, either individually or in consortia of partner schools, since the late 1990s. The test results build students’ performance curves, which make it possible to identify students’ weaknesses and strengths across the several areas of knowledge covered by the course. The Progress Test is not exclusively an instrument for assessing student performance; it is also an important tool for academic management, so it is crucial that institutions take an active role in preparing the test and analyzing its data. Assessments designed to test clinical competence in medical students need to be valid and reliable; for the method to remain valid, it must be continually reviewed and studied, aiming at improvements and adjustments in test performance.
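
The abstract's keyword “Item Response Theory in Medicine” points to the psychometric model family typically behind Progress Test scoring. As a reference point only (the abstract does not say which model the reviewed studies used), the standard three-parameter logistic (3PL) IRT model gives the probability that an examinee of ability θ answers item i correctly:

$$P_i(\theta) = c_i + (1 - c_i)\,\frac{1}{1 + e^{-a_i(\theta - b_i)}}$$

Here a_i is the item's discrimination, b_i its difficulty, and c_i its lower asymptote (guessing) parameter; estimating each student's θ at every administration is what produces the longitudinal performance curves described above.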


Author(s):  
Robyn Maree Slattery

Background: Extended matching questions (EMQs) were introduced as an objective assessment tool into third-year immunology undergraduate units at Monash University, Australia. Aim: The performance of students examined objectively by multiple choice questions (MCQs) was compared to their performance assessed by EMQs; there was a high correlation between the two methods. EMQs were then introduced, and the correlation of student performance between related units was measured as a function of the percentage of objective assessment. The correlation of student performance between units increased proportionally with objective assessment. Student performance in tasks assessed objectively and subjectively was then compared. The findings indicate that marker bias contributes to the poor correlation between marks awarded objectively and subjectively. Conclusion: EMQs are a valid method of objectively assessing students, and their increased inclusion in the assessment process increases the consistency of student marks. The subjective assessment of science communication skills introduces marker bias, indicating a need to identify, validate, and implement more objective methods for their assessment. Keywords: Extended matching question (EMQ); Objective assessment (OA); Subjective assessment (SA); Marker bias; Discipline-specific assessment; Science communication assessment
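
The reported agreement between MCQ and EMQ marks is an ordinary correlation between two mark vectors. A minimal sketch of how such a comparison is computed; the marks below are invented stand-ins, not the Monash cohort data:

```python
import numpy as np

# Hypothetical per-student marks (percent) on the two objective formats;
# the real cohort data are not reproduced in the abstract.
mcq_marks = np.array([72.0, 65.0, 88.0, 54.0, 79.0, 91.0, 60.0])
emq_marks = np.array([70.0, 68.0, 85.0, 50.0, 82.0, 89.0, 63.0])

# Pearson correlation coefficient between the two assessment methods.
r = np.corrcoef(mcq_marks, emq_marks)[0, 1]
print(f"Pearson r between MCQ and EMQ marks: {r:.2f}")
```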


2021 ◽  
pp. 160-171
Author(s):  
Iryna Lenchuk ◽  
Amer Ahmed

This article describes the results of Action Research conducted in an ESP classroom at Dhofar University in Oman. Following the call of Oman Vision 2040 to emphasize educational practices that promote the development of higher-order cognitive processes, this study raises the following question: can an online multiple choice question (MCQ) quiz tap into the higher-order cognitive skills of apply, analyze, and evaluate? The question was also critical during the COVID-19 pandemic, when Omani universities switched to online learning. The researchers administered an online MCQ quiz to 35 undergraduate students enrolled in an ESP course for Engineering and Sciences. The results showed that MCQ quizzes can be developed to tap into higher-order thinking skills when the stem of the MCQ is written as a task or a scenario. The study also revealed that students performed better on MCQs that tap into low-level cognitive skills. This result can be attributed to the prevalent practice in Oman of developing assessment tools that tap only into the level of Bloom's taxonomy involving the retrieval of memorized information. The significance of the study lies in its pedagogical applications: it calls for teaching and assessment practices that target the development of higher-order thinking skills, in line with the country's strategic direction as reflected in Oman Vision 2040.
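
The finding that students scored higher on low-level items is, mechanically, a comparison of mean proportion correct per Bloom level. A sketch of that tabulation; the item tags and scores below are invented for illustration, as the actual quiz items are not published in the abstract:

```python
from collections import defaultdict

# Hypothetical (item, Bloom level, proportion of the 35 students answering
# correctly) tuples standing in for the real quiz data.
items = [
    ("Q1", "remember", 0.91),
    ("Q2", "remember", 0.86),
    ("Q3", "apply",    0.63),
    ("Q4", "analyze",  0.57),
    ("Q5", "evaluate", 0.49),
]

# Average the proportion correct within each Bloom level.
by_level = defaultdict(list)
for _, level, p_correct in items:
    by_level[level].append(p_correct)

for level, scores in by_level.items():
    print(f"{level:>8}: mean proportion correct = {sum(scores) / len(scores):.2f}")
```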


2016 ◽  
Vol 40 (3) ◽  
pp. 304-312 ◽  
Author(s):  
Nicholas Cramer ◽  
Abdo Asmar ◽  
Laurel Gorman ◽  
Bernard Gros ◽  
David Harris ◽  
...  

Multiple-choice questions are a gold-standard tool in medical school for the assessment of knowledge and are the mainstay of licensing examinations. However, multiple-choice items can be criticized for lacking the ability to test higher-order learning or integrative thinking across multiple disciplines. Our objective was to develop a novel assessment that would address understanding of pathophysiology and pharmacology, evaluate learning at the levels of application, evaluation, and synthesis, and allow students to demonstrate clinical reasoning. The rubric assesses student write-ups of clinical case problems. The method is based on the physician's traditional post-encounter Subjective, Objective, Assessment, and Plan (SOAP) note. Students were required to correctly identify subjective and objective findings in authentic clinical case problems, to ascribe pathophysiological and pharmacological mechanisms to these findings, and to justify a list of differential diagnoses. A utility analysis was undertaken to evaluate the new assessment tool by appraising its reliability, validity, feasibility, cost effectiveness, acceptability, and educational impact using a mixed-methods approach. The SOAP assessment tool scored highly in terms of validity and educational impact and had acceptable levels of statistical reliability, but it was limited in terms of acceptance, feasibility, and cost effectiveness because of high time demands on expert graders and workload concerns from students. We conclude by making suggestions for improving the tool and recommend deploying the instrument for low-stakes summative or formative assessment.
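
The abstract reports “acceptable levels of statistical reliability” without naming the statistic. For a rubric scored by expert graders, inter-rater agreement is the usual concern; one common index (an assumption here, not a detail taken from the paper) is Cohen's kappa:

$$\kappa = \frac{p_o - p_e}{1 - p_e}$$

where p_o is the observed proportion of agreement between two graders and p_e the proportion of agreement expected by chance; κ = 1 indicates perfect agreement and κ = 0 agreement no better than chance.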


Author(s):  
Amitabha Ghosh

Dynamics is a pivotal class in a student's lifelong learning profile, since it builds on the logical extensions of the Statics and Strength of Materials classes and provides a framework on which Fluid Mechanics concepts can be developed for deformable media. This paper establishes the contextual reference of Dynamics in that framework. An earlier paper by the author discussed in detail how the design of proper multiple choice questions is critical for assessment in Statics and Fluid Mechanics; this paper provides a progress report on such evaluations in Dynamics. In addition, it explores the pedagogical issues involved in building a student's learning profile. In comparing test results from trailer sections of Dynamics with those from sections taught by faculty teams, some structural differences were discovered. This report completes the feedback loop used by faculty in our Engineering Sciences Core Curriculum to improve student performance over time; the process may be developed further by exploiting similarities and differences in the performance data.


2017 ◽  
Vol 12 (5) ◽  
pp. 331-336
Author(s):  
Lance Vincent Watkins

Purpose The purpose of this paper is to examine whether the current Royal College of Psychiatrists Membership (MRCPsych) written examination is a suitable assessment tool for distinguishing between candidates in a high-stakes examination. Design/methodology/approach Review of current educational theory and evidence on the use of multiple-choice questions (MCQs) as a form of assessment. Findings When MCQs are constructed correctly, they provide an efficient and objective assessment tool. However, when developing assessment tools for high-stakes scenarios, it is important that MCQs be used alongside other tests that scrutinize other aspects of competence. It may be argued that written assessment can satisfy only the first stage of Miller's pyramid; the evidence outlined demonstrates that this may not be the case, and that higher-order thinking and problem solving can be assessed with appropriately constructed questions. MCQs, like any other single assessment used alone, cannot demonstrate clinical competence or mastery. Originality/value Increasingly, the MRCPsych examination is used around the world to establish levels of competency and expertise in psychiatry. It is therefore essential that the Royal College of Psychiatrists lead the way in innovating assessment procedures linked to current educational theory. The author shows how the current MRCPsych examination may, at least in part, contain inherent biases unrelated to a candidate's competence.


2017 ◽  
Vol 32 (4) ◽  
pp. 1-17 ◽  
Author(s):  
Dianne Massoudi ◽  
SzeKee Koh ◽  
Phillip J. Hancock ◽  
Lucia Fung

ABSTRACT In this paper we investigate the effectiveness of an online learning resource for introductory financial accounting students, using a suite of online multiple choice questions (MCQs) for summative and formative purposes. We found that the availability and use of the online resource resulted in improved examination performance for those students who actively used it. Further, we found a positive relationship between formative MCQs and unit content covering challenging financial accounting concepts. However, better examination performance was also linked to other factors, such as prior academic performance, tutorial participation, and demographics, including gender and attending university as an international student. JEL Classifications: I20; M41.
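
A relationship between exam performance and MCQ use, net of controls such as prior performance, tutorial participation, gender, and international status, is the kind of effect usually estimated with a multiple regression. A minimal sketch under assumed variable names and invented data, not the authors' actual model:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical student-level dataset; all column names and values are
# assumptions for illustration, not variables taken from the paper.
df = pd.DataFrame({
    "exam_score":    [62, 71, 55, 80, 67, 74, 58, 85],
    "mcq_attempts":  [3, 8, 1, 12, 5, 9, 2, 14],
    "prior_gpa":     [2.8, 3.2, 2.5, 3.7, 3.0, 3.4, 2.6, 3.8],
    "tutorials":     [6, 9, 4, 11, 7, 10, 5, 12],
    "female":        [0, 1, 0, 1, 1, 0, 0, 1],
    "international": [0, 0, 1, 0, 1, 0, 1, 0],
})

# OLS of exam score on MCQ engagement plus the controls named in the abstract.
model = smf.ols(
    "exam_score ~ mcq_attempts + prior_gpa + tutorials + female + international",
    data=df,
).fit()
print(model.summary())
```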


2019 ◽  
Vol 97 (Supplement_1) ◽  
pp. 79-79
Author(s):  
Lauren R Thomas ◽  
Jeremy G Powell ◽  
Elizabeth B Kegley ◽  
Kathleen Jogan

Abstract In 2015, the University of Arkansas Department of Animal Science developed a strategy for assessing student learning outcomes within its undergraduate teaching program. The first recognized outcome states that students will demonstrate foundational scientific knowledge in the general animal science disciplines of physiology, genetics, nutrition, muscle foods, and production animal management. Subsequently, a 58-item assessment tool was developed for the direct assessment of student knowledge, focusing primarily on freshman and senior students. Over the past three academic years, 381 students (196 freshmen, 48 sophomores, 19 juniors, 113 seniors, 5 graduates) were assessed, either during an introduction to animal science course or by appointment with outgoing seniors majoring in animal science. Scores were categorized using demographic data collected at the beginning of the assessment tool. Comparison categories included academic class, major, and general student background (rural or urban). Data analyses were performed using the GLIMMIX procedure of SAS, with student serving as the experimental unit and significance set at P ≤ 0.05. Generally speaking, animal science majors performed better (P < 0.01) than students from other majors, and students from rural backgrounds performed better (P < 0.01) than their peers from urban backgrounds. Overall, senior assessment scores averaged 23 percentage points higher (P < 0.01) than freshman scores; the average scores for freshmen and seniors were 43% and 66%, respectively. Within each discipline, there was an average improvement of 24 percentage points between freshmen and seniors in all of the measured disciplines except muscle foods, which saw only a 10 percentage point improvement between the two classes. While the overall improvement in scores is indicative of increased student knowledge, the department would like to see greater improvement in all discipline scores for seniors majoring in animal science.
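
The analysis named in the abstract is SAS PROC GLIMMIX. As a rough Python analogue for the simplest reported contrast (freshman vs. senior means), here is a hedged sketch; only the 43% and 66% cohort means echo the abstract, and the individual scores are simulated:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated assessment scores (percent correct) centered on the cohort means
# reported in the abstract; individual scores and spreads are invented.
freshmen = rng.normal(loc=43, scale=10, size=196)
seniors = rng.normal(loc=66, scale=10, size=113)

# Two-sample Welch t-test as a simplified stand-in for the GLIMMIX contrast.
t_stat, p_value = stats.ttest_ind(seniors, freshmen, equal_var=False)
print(f"senior - freshman mean gap: {seniors.mean() - freshmen.mean():.1f} points")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.2e}")
```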


2015 ◽  
Vol 78 (5) ◽  
pp. 1008-1013 ◽  
Author(s):  
Markus Tyler Ziesmann ◽  
Jason Park ◽  
Bertram J. Unger ◽  
Andrew W. Kirkpatrick ◽  
Ashley Vergis ◽  
...  

Author(s):  
Umayya Musharrafieh ◽  
Khalil Ashkar ◽  
Dima Dandashi ◽  
Maya Romani ◽  
Rana Houry ◽  
...  

Introduction: The Objective Structured Clinical Examination (OSCE) is considered a useful method of assessing clinical skills, alongside multiple choice questions (MCQs) and clinical evaluations. Aim: To explore medical students' acceptance of this assessment tool in medical education, and to determine whether the assessment results of MCQs and faculty clinical evaluations agree with the respective OSCE scores of 4th-year medical students (Med IV). Methods: The performance of a total of 223 Med IV students, spread across the academic years 2006-2007, 2007-2008, and 2008-2009, was compared across the OSCE, MCQs, and faculty evaluations. Of the total, 93 students were randomly selected to complete a questionnaire about their attitudes toward and acceptance of this tool. The OSCE was conducted every two months for two different groups of medical students who had completed their family medicine rotation, while faculty evaluation, based on observation by assessors, was submitted monthly upon completion of the rotation. The final exam for the family medicine clerkship, consisting of MCQs, was held at the end of the 4th academic year. Results: Students highly commended the OSCE as an evaluation tool, as it provides a true measure of the required clinical and communication skills compared with MCQs and faculty evaluation. The study showed a significant positive correlation between the OSCE scores and the clinical evaluation scores, while there was no association between the OSCE scores and the final exam scores. Conclusion: Students showed high appreciation and acceptance of this type of clinical skills testing. Despite the fact that OSCEs make them more stressed than other assessment modalities, the OSCE remained their preferred one.

