Competency Assessment Tool (CAT). The evaluation of an innovative competency-based assessment experience in higher education

2016 ◽  
Vol 25 (5) ◽  
pp. 631-648 ◽  
Author(s):  
Georgeta Ion ◽  
Elena Cano ◽  
Nati Cabrera
Author(s):  
Mohammed Khalidi Idrissi ◽  
Meriem Hnida ◽  
Samir Bennani

Competency-based Assessment (CBA) is the measurement of a student's competency against a standard of performance. It is a process of collecting evidence to analyze a student's progress and achievement. In higher education, Competency-based Assessment focuses on learning outcomes in order to continuously improve academic programs and meet labor market demands. To date, competencies are described in natural language but rarely used in e-learning systems, and the underlying idea is that the way a competency is defined shapes the way it is conceptualized, implemented, and assessed. The main objective of this chapter is to introduce and discuss Competency-based Assessment from methodological and technical perspectives. More specifically, the objective is to highlight ongoing issues regarding competency assessment in higher education in the 21st century, to emphasize the benefits of its implementation, and finally to discuss some competency modeling and assessment techniques.
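The definition above, measuring a competency against a standard of performance through collected evidence, can be made concrete with a small sketch. The class, field names, and mastery rule below are hypothetical illustrations, not a model taken from the chapter:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a competency defined against a standard of
# performance, with assessment evidence collected toward that standard.
@dataclass
class Competency:
    name: str
    standard: float                      # required proficiency (0.0-1.0)
    evidence: list = field(default_factory=list)

    def record_evidence(self, score: float) -> None:
        """Store one assessed piece of evidence (0.0-1.0)."""
        self.evidence.append(score)

    def is_achieved(self) -> bool:
        """The competency is met when average evidence reaches the standard."""
        if not self.evidence:
            return False
        return sum(self.evidence) / len(self.evidence) >= self.standard

# Example: a student assessed three times against a 0.75 standard
c = Competency("data analysis", standard=0.75)
for s in (0.6, 0.8, 0.9):
    c.record_evidence(s)
print(c.is_achieved())  # average 0.766... >= 0.75, so True
```

The point of the sketch is only that an explicit, machine-readable definition (name, standard, evidence) is what makes a competency assessable in an e-learning system, as opposed to a natural-language description.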



Author(s):  
Rachel Han ◽  
Julia Keith ◽  
Elzbieta Slodkowska ◽  
Sharon Nofech-Mozes ◽  
Bojana Djordjevic ◽  
...  

Context.— Competency-based medical education relies on frequent formative in-service assessments to ascertain trainee progression. Currently at our institution, trainees receive a summative end-of-rotation In-Training Evaluation Report based on feedback collected from staff pathologists. There is no method of simulating report sign-out.

Objective.— To develop a formative in-service assessment tool that can simulate report sign-out and provide case-by-case feedback to trainees, and further, to compare time- versus competency-based assessment models.

Design.— Twenty-one pathology trainees were assessed over 20 months. Hot Seat Diagnoses by trainees and trainee assessments by pathologists were recorded in the Laboratory Information System. In the first iteration, trainees were assessed using a time-based assessment scale on their ability to diagnose, report, use ancillary testing, comment on clinical implications, provide intraoperative consultation, and/or gross cases. The second iteration used a competency-based assessment scale. Trainees and pathologists completed surveys on the effectiveness of the In-Training Evaluation Report versus the Hot Seat Diagnosis tool.

Results.— Scores from both iterations correlated significantly with other assessment tools, including the Resident In-Service Examination (r = 0.93, P = .04 and r = 0.87, P = .03). The competency-based model was better able than the time-based model to demonstrate improvement over time and to stratify junior versus senior trainees. Trainees and pathologists rated Hot Seat Diagnosis as significantly more objective, detailed, and timely than the In-Training Evaluation Report, and as effective at simulating report sign-out.

Conclusions.— Hot Seat Diagnosis is an effective tool for the formative in-service assessment of pathology trainees and for simulating report sign-out, with the competency-based model outperforming the time-based model.


CJEM ◽  
2020 ◽  
Vol 22 (S1) ◽  
pp. S48-S48
Author(s):  
T. Wawrykow ◽  
T. McColl ◽  
A. Velji ◽  
M. Chan

Introduction: The oral case presentation is recognized as a core educational and patient care activity but has not been well studied in the emergency setting. The objectives of this study were: 1) to develop a competency-based assessment tool to formally evaluate the emergency medicine oral case presentation (EM-OCP) competency of medical students and ‘transition to discipline’ residents, and 2) to develop, implement, and evaluate a curriculum to enhance oral case presentation (OCP) communication skills in the emergency medicine (EM) setting.

Methods: Using data from a literature review, a Canadian Association of Emergency Physicians national survey, and local focus groups, the authors designed an OCP framework, a blended learning curriculum, and an EM-OCP assessment tool. Ninety-six clerkship students were randomly assigned to receive either the control (the standard clerkship curriculum) or the intervention (the blended learning curriculum). At the beginning of their emergency medicine rotation, learners completed a pre-test using a standardized patient (SP) case to assess their baseline OCP skills. The intervention group then completed the EM-OCP curriculum. All students completed post-tests with a different SP at the end of the six-week EM rotation. Audio recordings of the pre- and post-tests were scored with the assessment tool by two blinded evaluators.

Results: Using the Kruskal-Wallis test, all students demonstrated improvement in EM-OCP skills between their pre-test and post-test; however, those who received the blended learning curriculum showed significantly greater improvement in synthesis of information (p = 0.044), management (p = 0.006), and overall entrustment decision score (p < 0.001).

Conclusion: Implementation of the novel EM-OCP curriculum resulted in more effective communication and higher entrustment scores. This curriculum could improve OCP performance not only in emergency medicine settings but also across specialties where medical students and residents must manage critical patients.
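The Kruskal-Wallis test named above compares score distributions across groups by pooling and ranking all observations. As a hand-rolled sketch of the H statistic (no tie correction, which the full test applies when ties occur), with illustrative pre-to-post gain scores rather than the study's data:

```python
from itertools import chain

def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic over k groups (no tie correction)."""
    pooled = sorted(chain.from_iterable(groups))
    # Assign each distinct value the average of its 1-based rank positions.
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    n = len(pooled)
    # H = 12 / (N(N+1)) * sum(R_i^2 / n_i) - 3(N+1)
    h = sum(sum(ranks[x] for x in g) ** 2 / len(g) for g in groups)
    return 12.0 / (n * (n + 1)) * h - 3 * (n + 1)

# Illustrative gain scores (post-test minus pre-test) for two groups.
control_gain = [1, 2, 2, 3]
intervention_gain = [4, 5, 6, 7]
h = kruskal_wallis_h(control_gain, intervention_gain)
print(round(h, 3))  # 5.333
```

With two groups and one degree of freedom, H above the chi-square critical value of 3.84 corresponds to p < 0.05, which is the kind of comparison behind the p-values the abstract reports.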


Author(s):  
Dinesh Chandra Agrawal ◽  
Hsing-Yu Hou ◽  
Tao-Ming Cheng

Teaching evaluation is an important part of the learning process in higher education. In addition to teaching evaluation on campus, feedback from alumni is also very important for improving instruction. University undergraduates and graduates are the main labor force in Taiwan; therefore, many higher education institutions pay attention to competency feedback. However, written questionnaires have limitations and lack sufficient evidence to improve curriculum planning and instructional activities. In the present study, a systematic survey in the ‘University Career and Competency Assessment Network’ was used, and data were collected from 1,080 participants. The results can be summarized as follows: (1) ‘Learn-Practice Fit’ was positively related to ‘Satisfaction’ in the workplace. (2) ‘Responsibility and discipline’ was significantly positively related to ‘Communication and Expression,’ ‘Interpersonal Interaction,’ and ‘Teamwork.’ (3) ‘Service’ and ‘Information technology writing’ need to be improved in the training of Information Management students.


2020 ◽  
Vol 78 (1) ◽  
pp. 61-79
Author(s):  
Katherina Gallardo

Rubrics are assessment guides for grading and giving feedback to students as they demonstrate acquired knowledge and skills. In recent decades, rubrics have become among the most widely used learning evaluation tools in higher education, and their use is closely tied to competency-based assessment purposes. Nevertheless, criticism of their design and application has also been raised in some reports. In order to understand rubrics’ evolution, practice, benefits, and trends, a systematic literature review on rubric design and use was conducted. Two databases were selected: Scopus and ProQuest Education. The review proceeded in two phases. The first phase identified articles related to rubric design and application over almost three decades; 584 articles were found, and the most cited among them served to give a scope of rubric evolution and trends. The second phase identified the design and use of performance-based evaluation rubrics between 2009 and 2019, using five terms and their Boolean combinations; a total of 259 articles were found. After analyzing abstracts and content, 11 open-access articles on performance-oriented rubric design and application were chosen. As a result, some facts and reflections on the complexity of rubric design are discussed in response to the contemporary demands of competency-based assessment: integrating human learning domains beyond cognition; authenticity and interdisciplinarity as features that characterize learning design situations; following students’ progression; preparing educators in assessment literacy to respond to CBA demands; and involving experts from the work field in determining essential evaluation indicators, as the main topics for the next decade. Keywords: competency-based assessment, competency-based education, higher education, learning taxonomy, rubric design.
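The screening step described above, applying search terms and their Boolean combinations to database records, amounts to a simple filter. The records below and the query ("rubric" AND ("design" OR "use")) are hypothetical, chosen only to illustrate the mechanics:

```python
# Hypothetical screening sketch: keep records whose abstract satisfies a
# Boolean combination of terms, e.g. "rubric" AND ("design" OR "use").
records = [
    {"title": "A", "abstract": "Rubric design in higher education courses"},
    {"title": "B", "abstract": "Competency-based assessment survey methods"},
    {"title": "C", "abstract": "Rubric use for performance evaluation"},
]

def matches(text: str) -> bool:
    t = text.lower()
    return "rubric" in t and ("design" in t or "use" in t)

hits = [r["title"] for r in records if matches(r["abstract"])]
print(hits)  # ['A', 'C']
```

In a real review the database's own query syntax would do this server-side; the sketch only shows why the choice of terms and their Boolean combination determines which articles survive each phase.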

