Assessment Tools: Multicultural Competency Assessment for Organizations

2001 ◽  
Vol 76 (Supplement) ◽  
pp. S80-S83 ◽  
Author(s):  
Ivy F. Oandasan ◽  
Niall Byrne ◽  
Dave Davis ◽  
M. Sharon Shafir ◽  
Rebecca Malik ◽  
...  

2020 ◽  
Vol 41 (04) ◽  
pp. 310-324
Author(s):  
Jerry K. Hoepner ◽  
Abby L. Hemmerich

Abstract A key element of competency-based education is assessment. Effective assessment requires access to a core set of expectations that match a learner's level of preparation. Miller's triangle provides a framework for establishing appropriate expectations that move learners from novice to entry-level clinicians. Formative assessment and feedback are a crucial part of facilitating learning in this context. A pilot investigation was conducted to examine the effects of a formative, video-based competency on performance in a summative, live competency. Rubrics were used to score performance on two competencies, an oral mechanism exam (OME) and a clinical bedside swallowing examination (CBSE). Performance on the OME was significantly improved in the summative competency compared with the formative, video-based competency. Performance on the CBSE did not change from the formative to the summative competency. Assessment in competency-based education is important as a measure of readiness for entry-level practice. Formative assessment and feedback can improve preparedness and performance on summative competencies. Detailed, criterion-referenced assessment tools are crucial to identifying performance. While the OME rubric used in this investigation appears to meet that standard, it is likely that the CBSE rubric was not specific enough to detect changes.


2021 ◽  
Vol 4 (Supplement_1) ◽  
pp. 71-73
Author(s):  
R Khan ◽  
E Zheng ◽  
S B Wani ◽  
M A Scaffidi ◽  
T Jeyalingam ◽  
...  

Abstract Background An increasing focus on quality and safety in colonoscopy has led to broader implementation of competency-based educational systems that enable documentation of trainees’ achievement of the knowledge, skills, and attitudes needed for independent practice. The meaningful assessment of competence in colonoscopy is critical to this process. While there are many published tools that assess competence in performing colonoscopy, the underlying validity evidence varies widely. Tools with strong evidence of validity are required to support feedback provision, optimize learner capabilities, and document competence. Aims We aimed to evaluate the strength of validity evidence supporting available colonoscopy direct observation assessment tools using the unified framework of validity. Methods We systematically searched five databases for studies investigating colonoscopy direct observation assessment tools from inception until April 8, 2020. We extracted data outlining validity evidence from the five sources (content, response process, internal structure, relations to other variables, and consequences) and graded the degree of evidence, with a maximum score of 15. We assessed educational utility using an Accreditation Council for Graduate Medical Education framework and methodological quality using the Medical Education Research Quality Instrument (MERSQI). Results From 10,841 records, we identified 27 studies representing 13 assessment tools (10 adult, 2 pediatric, 1 both). All tools assessed technical skills, while 10 assessed cognitive and integrative skills. Validity evidence scores ranged from 1–15. The Assessment of Competency in Endoscopy (ACE) tool, the Direct Observation of Procedural Skills (DOPS) tool, and the Gastrointestinal Endoscopy Competency Assessment Tool (GiECAT) had the strongest validity evidence, with scores of 13, 15, and 14, respectively. Most tools were easy to use and interpret and required minimal resources. MERSQI scores ranged from 9.5–11.5 (maximum score 14.5). Conclusions The ACE, DOPS, and GiECAT have strong validity evidence compared with other assessments. Future studies should identify barriers to widespread implementation and report on the use of these tools for credentialing purposes. Funding Agencies None
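The grading scheme in this abstract sums evidence across the five validity sources to a maximum of 15, which implies each source contributes up to 3 points. The arithmetic can be sketched as follows; the 0–3 per-source scale is an assumption inferred from the stated maximum, not detailed in the abstract:

```python
# Sketch of the validity-evidence total described above.
# Assumption: each of the five sources is graded 0-3, so 5 x 3 = 15 max.

SOURCES = (
    "content",
    "response_process",
    "internal_structure",
    "relations_to_other_variables",
    "consequences",
)

def total_validity_score(grades: dict) -> int:
    """Sum per-source grades (each assumed 0-3) into a 0-15 total."""
    for source in SOURCES:
        grade = grades[source]
        if not 0 <= grade <= 3:
            raise ValueError(f"{source}: grade {grade} outside 0-3")
    return sum(grades[s] for s in SOURCES)

# A tool graded 3 on every source reaches the ceiling of 15,
# consistent with the top DOPS score reported above.
print(total_validity_score({s: 3 for s in SOURCES}))  # 15
```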


2020 ◽  
Vol 17 ◽  
Author(s):  
Anthony Clement Smith ◽  
Ann Framp ◽  
Patrea Andersen

Introduction With the recent introduction of registration for paramedics, and an absence of assessment tools that align undergraduate paramedic student practice to competency standards, this pilot study set out to develop and evaluate a competency assessment tool designed to provide a standardised approach to student competency assessment. This paper reports the first part of a two-part enquiry evaluating the efficacy of the Australasian Paramedic Competency Assessment Tool (APCAT) in assessing the practice competency of undergraduate paramedic students. Methods With a focus on gathering professional opinion to evaluate the usability of the tool and inform its development, a mixed-methods design comprising a survey with open-ended questions was used to gather data from paramedic educators and on-road assessors in Australia and New Zealand. Data were analysed using descriptive statistics and content analysis. Results The outcome of the evaluation was positive: 81% agreed or strongly agreed that the tool was user-friendly; 71% believed that expectations of student performance and the grading system were clear; 70% found that year-level descriptors reflected practice expectations; and 66% believed that the resource manual provided adequate guidance. Conclusion The APCAT is simple and aligns student practice expectations with competency standards. Results indicate support for a consistent approach to assessing undergraduate paramedic student competence. Further research will be undertaken to determine the efficacy of using this tool to assess students in the clinical setting.


2018 ◽  
Vol 6 (3) ◽  
pp. 9 ◽  
Author(s):  
J. V. Smirnova ◽  
O. G. Krasikova

Introduction: the article is devoted to modern methods of assessing learning outcomes. As the authors show, the evaluation of learning outcomes has become inextricably linked with the quality of education provided by an educational institution, yet there is still no single interpretation of the concept "quality of education". With the introduction of a competence-based approach, graduates are expected to acquire not just a certain body of knowledge and skills but the ability to apply that experience in practice, so modern assessment must be fundamentally new and different from traditional methods. Materials and Methods: the article identifies the features of a competency assessment system. The basic provisions of the assessment methodology were established, and on their basis the authors identified the components of a model of automated assessment of professional competencies, along with principles for constructing a competency assessment model. Results: the article analyzes existing modern assessment tools. Among them, methods characteristic of the authentic approach are singled out, as are the advantages of using the electronic platform Moodle for the evaluation procedure. A new system for assessing the quality of the educational process, a point-rating system for evaluating the results of the trainee's work, is considered. Discussion and Conclusions: the article discusses existing tools and technologies for assessing learning outcomes; these have many advantages, and their combined use makes it possible to simplify the assessment process, making it more convenient and intensive. The proposed recommendations favor not only the correct assessment of educational results but also the ongoing monitoring of the development of professional competencies.


1998 ◽  
Vol 26 (1) ◽  
pp. 43-68 ◽  
Author(s):  
Joseph G. Ponterotto

This article presents an integrative reaction to the lead contributions by Kiselica, Lark and Paul, and Rooney, Flores, and Mercier. Following the narrative path set by these contributors, the author begins with some personal reflections regarding his own multicultural development. A theme analysis of the lead contributions, along with the author's own experiences, leads to the delineation of 31 characteristics of effective multicultural training organized in three sections: characteristics of effective trainers and mentors, characteristics of promising trainees, and characteristics of facilitative training environments. The second half of this article presents both a general and specific research agenda for multicultural counseling training in the coming decade. Building from the identified themes, research recommendations are presented in five areas: racial identity development, multicultural competency assessment, mentoring, model programs, and the role of program diversity in training effectiveness. The article closes with a general discussion of the current and evolving status of multicultural counseling research. Embedded in the proposed research agenda is a strong call for qualitative research methods.

