Clinical Performance Assessments

Author(s): Emil R. Petrusa

2016 ◽ Vol 43 (1) ◽ pp. 5-8
Author(s): Kenneth D. Royal, Kent G. Hecker

2001 ◽ Vol 95 (1) ◽ pp. 36-42
Author(s): J. Hugh Devitt, Matt M. Kurrek, Marsha M. Cohen, Doreen Cleave-Hogg

Background: The authors wished to determine whether a simulator-based evaluation technique assessing clinical performance could demonstrate construct validity, and to determine the subjects' perception of the realism of the evaluation process.

Methods: Research ethics board approval and informed consent were obtained. Subjects were 33 university-based anesthesiologists, 46 community-based anesthesiologists, 23 final-year anesthesiology residents, and 37 final-year medical students. The simulation involved patient evaluation, induction, and maintenance of anesthesia. Each problem was scored as follows: no response to the problem, score = 0; compensating intervention, score = 1; corrective treatment, score = 2. Examples of problems included atelectasis, coronary ischemia, and hypothermia. After the simulation, participants rated the realism of their experience on a 10-point visual analog scale (VAS).

Results: After testing for internal consistency, a seven-item scenario remained. The mean proportion of correct responses (out of 7) for each group was as follows: university-based anesthesiologists = 0.53, community-based anesthesiologists = 0.38, residents = 0.54, and medical students = 0.15. The overall group differences were significant (P < 0.0001). The overall realism VAS score was 7.8. There was no relation between the simulator score and the realism VAS (R = -0.07, P = 0.41).

Conclusions: The simulation-based evaluation method was able to discriminate between practice categories, demonstrating construct validity. Subjects rated the realism of the test scenario highly, suggesting that familiarity or comfort with the simulation environment had little or no effect on performance.
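The analysis this abstract describes reduces to a per-problem scoring rule, an overall group comparison, and a score-versus-realism correlation. Below is a minimal Python sketch of that pipeline, not the authors' code: the data are randomly generated, the one-way ANOVA is an assumed choice of overall test (the abstract reports only P < 0.0001), and reading "proportion of correct responses" as the fraction of the 7 problems scored 2 (corrective treatment) is an interpretation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-subject scores on the 7 retained problems
# (0 = no response, 1 = compensating intervention, 2 = corrective treatment).
groups = {
    "university-based": rng.integers(0, 3, size=(33, 7)),
    "community-based": rng.integers(0, 3, size=(46, 7)),
    "residents": rng.integers(0, 3, size=(23, 7)),
    "medical students": rng.integers(0, 3, size=(37, 7)),
}

# One plausible reading of "proportion of correct responses (out of 7)":
# the fraction of the 7 problems that received corrective treatment.
props = {name: (scores == 2).mean(axis=1) for name, scores in groups.items()}

for name, p in props.items():
    print(f"{name}: mean proportion correct = {p.mean():.2f}")

# Overall group difference, tested here with a one-way ANOVA (an assumption).
f_stat, p_val = stats.f_oneway(*props.values())
print(f"ANOVA: F = {f_stat:.2f}, P = {p_val:.4g}")

# Pearson correlation between simulator score and realism VAS
# (VAS ratings below are illustrative values on a 10-point scale).
all_scores = np.concatenate(list(props.values()))
vas = rng.uniform(5, 10, size=all_scores.size)
r, p = stats.pearsonr(all_scores, vas)
print(f"Simulator score vs. realism VAS: R = {r:.2f}, P = {p:.2f}")
```

With real data, the reported absence of a score-realism relation (R = -0.07, P = 0.41) would show up as a near-zero Pearson coefficient in the final step.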


2014 ◽ Vol 9 (3) ◽ pp. 135-141
Author(s): Gayle A. Thompson, Robert Moss, Brooks Applegate

Context: Validity arguments can be used to provide evidence that instructors are drawing accurate conclusions from the results of students' clinical performance assessments (PAs). Little research has been conducted in athletic training education to determine whether the evidence supports the use of current PAs. Measurement theories designed to provide this evidence can be confusing and unfamiliar to athletic training educators.

Objective: The purpose of this article is to present contemporary concepts of validity and to suggest approaches athletic training educators can use to gather evidence supporting the best assessment methods.

Background: Educators often use PAs to determine a student's competence for professional practice. Competence is a complex concept that is difficult to define clearly, making assessments of competent performance difficult as well. Most methods of PA used in athletic training education can be classified into 2 general approaches: behavioral and holistic. Athletic training educators, in an attempt to develop effective, appropriate, and user-friendly PAs to evaluate students, may be measuring skill without truly measuring competence.

Description: Modern validity concepts focus on the interpretations and meanings of assessment scores, not just on the characteristics of the test itself. Using an updated concept of validity can guide the development of competence PAs to determine whether educational outcomes are being met. A framework for developing a validity argument is presented.

Conclusions: Validity can be used to provide a simple but rational defense of what clinical educators do. Knowing the process of establishing validity evidence will help educators revise PAs and educational standards to further promote the profession.


1995 ◽ Vol 70 (6) ◽ pp. 517-22
Author(s): A L Hull, S Hodder, B Berger, D Ginsberg, N Lindheim, ...

2021 ◽ Vol 21 (1)
Author(s): Ji Hye Yu, Mi Jin Lee, Soon Sun Kim, Min Jae Yang, Hyo Jung Cho, ...

Abstract

Background: High-fidelity simulators are highly useful in assessing clinical competency, enabling reliable and valid evaluation. Recently, the importance of peer assessment has been highlighted in healthcare education, and studies in fields such as medicine, nursing, dentistry, and pharmacy have examined its value. This study aimed to analyze inter-rater reliability between peers and instructors, and to examine differences in their scores, in the assessment of high-fidelity-simulation-based clinical performance by medical students.

Methods: This study analyzed the results of two clinical performance assessments of 34 groups of fifth-year students at Ajou University School of Medicine in 2020. A modified Queen's Simulation Assessment Tool was used to measure four categories: primary assessment, diagnostic actions, therapeutic actions, and communication. To estimate inter-rater reliability, the intraclass correlation coefficient was calculated, and the Bland and Altman method was used to analyze agreement between raters. Differences in assessment scores between peers and instructors were analyzed using the independent t-test.

Results: Overall inter-rater reliability of the clinical performance assessments was high. In addition, there were no significant differences between peer and instructor scores in any of the four categories: primary assessment, diagnostic actions, therapeutic actions, and communication.

Conclusions: The results indicate that, when evaluating clinical competency using high-fidelity simulators, peer assessment can be as reliable as instructor assessment. Efforts should be made to enable medical students to participate actively as fellow assessors in high-fidelity-simulation-based assessment of clinical performance in situations resembling real clinical settings.
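The methods above name three standard analyses: the intraclass correlation coefficient (ICC) for inter-rater reliability, Bland-Altman limits of agreement, and an independent t-test on scores. The sketch below shows one way to run them in Python with pingouin and SciPy; it is not the study's code, and the simulated ratings, two-rater layout, and score scale are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import pingouin as pg
from scipy import stats

rng = np.random.default_rng(1)

n_groups = 34  # the study assessed 34 student groups
# Hypothetical category scores: peers rate the same groups as instructors,
# with some random disagreement added.
instructor = rng.uniform(2, 5, n_groups)
peer = instructor + rng.normal(0, 0.3, n_groups)

# Long format, one row per (group, rater) pair, as pingouin expects.
df = pd.DataFrame({
    "group": np.tile(np.arange(n_groups), 2),
    "rater": ["instructor"] * n_groups + ["peer"] * n_groups,
    "score": np.concatenate([instructor, peer]),
})

# Intraclass correlation coefficients (reports ICC1 through ICC3k).
icc = pg.intraclass_corr(data=df, targets="group", raters="rater",
                         ratings="score")
print(icc[["Type", "ICC", "CI95%"]])

# Bland-Altman agreement: mean difference (bias) and 95% limits of agreement.
diff = peer - instructor
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"Bias = {bias:.3f}, limits of agreement = "
      f"[{bias - loa:.3f}, {bias + loa:.3f}]")

# Independent t-test comparing peer and instructor scores, as in the abstract.
t, p = stats.ttest_ind(peer, instructor)
print(f"t = {t:.2f}, P = {p:.3f}")
```

A high ICC together with a near-zero Bland-Altman bias and a non-significant t-test is the pattern the study reports in support of peer assessment.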


2003 ◽ Vol 2 (1) ◽ pp. 125-126
Author(s): C Prontera, C Passino, A Iervasi, G Zucchelli, A Clerico, ...