Assessment practices in undergraduate clinical medicine training: What do we do and how can we improve?

Author(s):  
Hanneke Brits ◽  
Johan Bezuidenhout ◽  
Lynette J. Van der Merwe ◽  
Gina Joubert

Background: Assessment should form an integral part of curriculum design in higher education and should be robust enough to ensure clinical competence. Aim: This article reports on current assessment practices and makes recommendations to improve clinical assessment in the undergraduate medical programme at the University of the Free State. Methods: A descriptive cross-sectional study design was used. Qualitative and quantitative data were gathered by means of open- and closed-ended questions in a self-administered questionnaire, which was completed by teaching and learning coordinators in 13 disciplines. Results: All disciplines in the undergraduate medical programme were represented. They used different assessment methods to assess the competencies required of entry-level healthcare professionals. Workplace-based assessment was performed by 30.1% of disciplines, while multiple-choice questions (MCQs) (76.9%) and objective structured clinical examinations (OSCEs) (53.6%) were the main methods used during formative assessment. Not all assessors were well prepared for assessment, with 38.5% never having received any formal training on assessment. Few disciplines (15.4%) made use of post-assessment moderation as a standard practice, and few disciplines always gave feedback after assessments. Conclusion: The current assessment practices for clinical students in the undergraduate medical programme at the University of the Free State cover the spectrum necessary to assess all the different competencies required. Multiple-choice questions and OSCEs, which are valid and reliable assessment methods, are used frequently. Poor feedback and moderation practices should be addressed. More formative assessment, and less emphasis on summative assessment, should be considered. Workplace-based and continuous assessments may be good ways to assess clinical competence.

2012 ◽  
Vol 43 (1) ◽  
pp. 131-140
Author(s):  
Leena Vartiainen ◽  
Minna Kaipainen

Future teachers have an important role in education for sustainable development. This article describes textile craft teacher students' perceptions of sustainable textile craft. The data derive from a survey of craft teacher students at the University of Eastern Finland (N = 20). The questionnaire included open-ended and multiple choice questions about the sustainability of textile craft education and the relevance of sustainability in the students' lives. The study reveals textile craft teacher students' conceptions as consumers, craft makers and future textile craft teachers. The open-ended questions were analyzed by content analysis and the multiple choice questions with statistical methods. The results were interpreted in light of Victor Papanek's function complex. As consumers, students favour good-quality products and the recycling of textiles. They are concerned about workers' working conditions and are against child labour. Although values and perceptions related to sustainable consumerism are high, actual purchasing behaviour sometimes differs from these values because of the students' meagre budgets. As craft makers, the availability of locally produced materials and materials made of natural fibres is important to students. As future textile craft teachers, students think that craft is an excellent way to teach sustainability and sustainable craft. They consider it important to teach life-cycle thinking, but also craft culture and skills. Key words: clothing and textile design, sustainable craft, textile craft teachers.


Pythagoras ◽  
2009 ◽  
Vol 0 (69) ◽  
Author(s):  
Belinda Huntley ◽  
Johann Engelbrecht ◽  
Ansie Harding

In this study we propose a taxonomy for assessment in mathematics, which we call the assessment component taxonomy, to identify those components of mathematics that can be successfully assessed using alternative assessment formats. Based on the literature on assessment models and taxonomies in mathematics, this taxonomy consists of seven mathematics assessment components, hierarchically ordered by cognitive level, as well as the nature of the mathematical tasks associated with each component. Using a model that we developed earlier for measuring the quality of mathematics test items, we investigate which of the assessment components can be successfully assessed in the provided response question (PRQ) format, in particular multiple choice questions (MCQs), and which can be better assessed in the constructed response question (CRQ) format. The results of this study show that MCQs can be constructed to evaluate higher order levels of thinking and learning. The conclusion is that MCQs can be successfully used as an assessment format in undergraduate mathematics, more so in some assessment components than in others. The inclusion of the PRQ assessment format in all seven assessment components can reduce the large marking loads associated with continuous assessment practices in undergraduate mathematics, without compromising the validity of the assessment.


Author(s):  
Pilar Gandía Herrero ◽  
Agustín Romero Medina

The quality of academic performance and learning outcomes depends on various factors, both psychological and contextual. The academic context includes the training activities and the type of evaluation or examination, which also influences cognitive and motivational factors such as learning and study approaches and self-regulation. In our university context, the predominant type of exam is the multiple-choice test. The cognitive demand of these questions may vary: following Bloom's taxonomy, questions range from lower to higher cognitive demand across factual, conceptual and application knowledge. Teachers normally do not take these classifications into account when preparing this type of exam. We propose here an adapted model for classifying multiple-choice questions by cognitive demand (associative memorization, comprehension, application), and test it by analyzing an examination from a Psychology degree course, relating the results to measures of learning approaches (ASSIST and R-SPQ-2F questionnaires) and self-regulation in a sample of 87 subjects. The results show differential academic performance according to "cognitive" question type, as well as differences in approaches to learning and self-regulation. The value of taking these cognitive-demand factors into account when writing multiple-choice questions is underlined.


Author(s):  
Abatihun Alehegn Sewagegn ◽  
Boitumelo Molebogeng Diale

Authentic assessment plays a major role in enhancing students' learning and making them competent in their study area. Studies indicate that assessment is authentic when the tasks have real-life value and students perform real-world tasks. This chapter therefore examines how lecturers practise authentic assessment to enhance students' learning in a higher education institution. To achieve this, the authors used a phenomenological qualitative research design; interviews were used to collect data. The results indicated that lecturers are highly dependent upon traditional assessment methods, which make no significant contribution to the competency of students. The practice of authentic assessment methods as a tool to enhance students' learning is limited. The authors therefore conclude that enhancing students' learning using authentic assessment is untenable if lecturers continue to utilize their current assessment practices.


Author(s):  
Ajeet Kumar Khilnani ◽  
Rekha Thaddanee ◽  
Gurudas Khilnani

Background: Multiple choice questions (MCQs) are routinely used for formative and summative assessment in medical education. Item analysis is a process of post-validation of MCQ tests, whereby items are analyzed for difficulty index, discrimination index and distractor efficiency, to obtain a range of items of varying difficulty and discrimination indices. This study was done to understand the process of item analysis and to analyze an MCQ test so that a valid and reliable MCQ bank in otorhinolaryngology can be developed. Methods: 158 students of the 7th semester were given an 8-item MCQ test. Based on the marks achieved, the high achievers (top 33%, 52 students) and low achievers (bottom 33%, 52 students) were included in the study. The responses were tabulated in a Microsoft Excel sheet and analyzed for difficulty index, discrimination index and distractor efficiency. Results: The mean (SD) difficulty index (Diff-I) of the 8-item test was 61.41% (11.81%). Five items had a very good difficulty index (41% to 60%), while 3 items were easy (Diff-I >60%). There was no item with Diff-I <30%, i.e. a difficult item, in this test. The mean (SD) discrimination index (DI) of the test was 0.48 (0.15), and all items had very good discrimination indices of more than 0.25. Out of 24 distractors, 6 (25%) were non-functional distractors (NFDs). The mean (SD) distractor efficiency (DE) of the test was 74.62% (23.79%). Conclusions: Item analysis should be an integral and regular activity in each department so that a valid and reliable MCQ question bank is developed.
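The three indices used in this abstract follow standard item-analysis formulas. As a rough illustration only (not the authors' actual computation; the sample figures and the common 5% cut-off for a functional distractor are assumptions), they can be sketched as:

```python
# Illustrative item-analysis sketch using standard textbook formulas.
# All numeric inputs below are hypothetical, not the study's data.

def difficulty_index(correct_high, correct_low, n_high, n_low):
    """Percentage of high- plus low-achiever responses that are correct."""
    return (correct_high + correct_low) / (n_high + n_low) * 100

def discrimination_index(correct_high, correct_low, n_per_group):
    """How well the item separates high from low achievers (-1 to +1)."""
    return (correct_high - correct_low) / n_per_group

def distractor_efficiency(distractor_counts, n_students, threshold=0.05):
    """Percentage of distractors chosen by at least 5% of students
    (a distractor below the threshold is 'non-functional')."""
    functional = sum(1 for c in distractor_counts if c / n_students >= threshold)
    return functional / len(distractor_counts) * 100

# Hypothetical item: 40 of 52 high achievers and 25 of 52 low achievers correct.
print(difficulty_index(40, 25, 52, 52))   # 62.5 -> an 'easy' item (>60%)
print(discrimination_index(40, 25, 52))   # about 0.29 -> above the 0.25 cut-off
print(distractor_efficiency([10, 3, 1], 104))  # two NFDs out of three distractors
```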


2021 ◽  
Author(s):  
Clare Lloyd ◽  
Annika Herb ◽  
Michael Kilmister ◽  
Catharine Coleborne

There has been much written recently around the "digital revolution" of universities (Nascimento Cunha et al., 2020). Indeed, in 2020 the COVID-19 pandemic demonstrated the need for universities to adapt and adopt new technological tools for teaching and learning, as the world changed and students adapted to a continually evolving digital landscape. The BA Online is a new interdisciplinary online presence for the humanities and social sciences, with a focus on constructive alignment, innovative learning objects, and social learning. The semester-long courses were built as a supported social learning experience, purposefully constructed with a narrative. This article reveals how the BA Online project was realised through partnerships, particularly with the university learning designers who worked closely with both the online learning platform FutureLearn and academic staff on curriculum design and course transformation.


2012 ◽  
Vol 19 (05) ◽  
pp. 597-603
Author(s):  
Fatima MUKHTAR ◽  
NOREEN HASHMI ◽  
MUHAMMAD ALI RAUF ◽  
Amna Anzar ◽  
Khurram Islam Butt ◽  
...  

Objective: To determine preferences of medical students for modes of teaching, qualities of a good teacher and assessment techniques in medical education. Design: A descriptive cross-sectional study. Setting: Lahore Medical and Dental College, Lahore. Period: January 2011. Material & Methods: All students of third and fourth year MBBS classes were included in the study (n=127). A pre-tested questionnaire was used for data collection. A 7-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree) was used to determine students' preferences of teaching styles. The data were recorded using SPSS version 16.0. Descriptive statistics were computed. Results: The preferred teaching methods for basic science subjects were skills laboratory 88 (70%), followed by problem based learning 70 (55%) and interactive lectures 65 (51%). The same teaching methods, i.e. skills laboratory 101 (80%), problem based learning 89 (70%) and interactive lectures 79 (62%), were also popular for the teaching of clinical science subjects. The least preferred teaching method for both basic 51 (40%) and clinical 58 (46%) sciences was didactic lectures. The most desirable quality of a good teacher was reported as teaching skills 111 (87%), and the preferred assessment technique was found to be multiple choice questions 90 (71%). Conclusions: Students prefer student-centred teaching styles as opposed to the traditional approach. Good teaching skill is the most desirable quality of a teacher, and most students like to be assessed by multiple choice questions.


2020 ◽  
Vol 27 (01) ◽  
pp. 57-61
Author(s):  
Saeed Ahmed ◽  
Nabeel Qamar ◽  
Naveed Mansoori ◽  
Sajila Bano

Objectives: To find out students' perceptions of using multiple choice questions as a classroom assessment technique. Study Design: Cross-sectional study. Setting: University of Hail, Saudi Arabia. Period: 6 months, from January 2018 to June 2018. Material & Methods: At the end of each scheduled class, students were given four MCQs on the topic delivered in the lecture, with 3 minutes to solve them. Data were collected from 4th and 6th year MBBS students. A self-structured questionnaire on classroom assessment techniques was also administered after completion of the Surgery and Clinical Skills module. Data were analysed using SPSS version 20. Results: Of 80 MBBS students, 50% each were selected from 4th year and 6th year. Most students (59; 73.8%) responded positively regarding the effect of classroom assessment techniques on student discipline. A majority (70; 87.5%) reported positive effects on students' interest in the subject, and 64 (80%) reported positive effects on student assessment. Suggestions about participating in the classroom assessment technique were positive in 61 (76.2%). Conclusion: Students' perception of using multiple choice questions as a classroom assessment technique was found to be largely positive.


Author(s):  
Amit P. Date ◽  
Archana S. Borkar ◽  
Rupesh T. Badwaik ◽  
Riaz A. Siddiqui ◽  
Tanaji R. Shende ◽  
...  

Background: Multiple choice questions (MCQs) are a common method for formative and summative assessment of medical students. Item analysis enables identifying good MCQs based on difficulty index (DIF I), discrimination index (DI) and distractor efficiency (DE). The objective of this study was to assess the quality of MCQs currently in use in pharmacology by item analysis and to develop an MCQ bank with quality items. Methods: This cross-sectional study was conducted in 148 second year MBBS students at NKP Salve Institute of Medical Sciences from January 2018 to August 2018. Forty MCQs, twenty from each of the two term examinations of pharmacology, were taken for item analysis. A correct response to an item was awarded one mark and each incorrect response was awarded zero. Each item was analyzed using a Microsoft Excel sheet for three parameters: DIF I, DI and DE. Results: In the present study, the mean and standard deviation (SD) for difficulty index (%), discrimination index and distractor efficiency (%) were 64.54±19.63, 0.26±0.16 and 66.54±34.59 respectively. Out of 40 items, a large proportion of MCQs had an acceptable level of difficulty (70%) and were good at discriminating between higher- and lower-ability students (DI, 77.5%). Distractor efficiency related to the presence of zero or one non-functional distractor (NFD) was 80%. Conclusions: The study showed that item analysis is a valid tool to identify quality items which, when regularly incorporated, can help to develop a useful, valid and reliable question bank.
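Abstracts like this one sort items by cut-off bands for difficulty and discrimination. As a hedged sketch (the exact bands vary between studies; the <30% / 30–60% / >60% difficulty bands and the 0.25 discrimination cut-off below are assumptions drawn from commonly cited conventions, not this study's stated criteria), such a screening step could look like:

```python
# Hypothetical item-screening sketch: label each MCQ by conventional
# difficulty and discrimination cut-offs. Bands are assumptions, not
# this study's published criteria.

def classify_item(diff_index, disc_index):
    """Return (difficulty label, discrimination label) for one item."""
    if diff_index < 30:
        difficulty = "difficult"      # too few students answer correctly
    elif diff_index <= 60:
        difficulty = "acceptable"     # desirable middle band
    else:
        difficulty = "easy"           # most students answer correctly
    discrimination = "good" if disc_index > 0.25 else "poor"
    return difficulty, discrimination

# Hypothetical items: (difficulty index %, discrimination index)
items = [(55, 0.30), (75, 0.10), (20, 0.50)]
for diff, disc in items:
    print(classify_item(diff, disc))
```

Items labelled "acceptable"/"good" would be the natural candidates for a question bank; the rest would be revised or discarded.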

