Are You Thinking What I'm Thinking? Exploring Response Process Validity Evidence for a Workplace-based Assessment for Operative Feedback

Author(s):  
Nina W. Zhao ◽  
Lindsey M. Haddock ◽  
Bridget C. O'Brien


2018 ◽  
Vol 9 (3) ◽  
pp. e14-24
Author(s):  
Kathryn Hodwitz ◽  
William Tays ◽  
Rhoda Reardon

This paper describes the use of Kane's validity framework to redevelop a workplace-based assessment (WBA) program for practicing physicians administered by the College of Physicians and Surgeons of Ontario. The developmental process is presented according to the four inferences in Kane's model. Scoring was addressed through the creation of specialty-specific assessment criteria and global, narrative-focused reports. Generalization was addressed through standardized sampling protocols and through assessor training and consensus-building. Extrapolation was addressed through the use of real-world performance data and an external review of the scoring tools by practicing physicians. Implications were theoretically supported through adherence to formative assessment principles and will be assessed through an evaluation accompanying the implementation of the redeveloped program. Kane's framework was valuable for guiding the redevelopment process and for systematically collecting validity evidence throughout to support the use of the assessment for its intended purpose. As the use of WBA programs for physicians continues to increase, practical examples of how to develop and evaluate these programs using established frameworks are needed. The dissemination of comprehensive validity arguments is vital for sharing knowledge about the development and evaluation of WBA programs and for understanding the effects of these assessments on physician practice improvement.


2021 ◽  
Vol 13 (4) ◽  
pp. 490-499
Author(s):  
Paul Kukulski ◽  
James Ahn

ABSTRACT Background The standardized letter of evaluation (SLOE) is the application component that program directors value most when deciding which candidates to interview and rank for emergency medicine (EM) residency. Given its successful implementation, other specialties, including otolaryngology, dermatology, and orthopedics, have adopted SLOEs of their own, and more specialties are considering creating one. Unfortunately, for such a significant assessment tool, no study to date has comprehensively examined the validity evidence for the EM SLOE. Objective We summarized the published validity evidence for the EM SLOE using Messick's framework. Methods A scoping review of the validity evidence for the EM SLOE was performed in 2020. A scoping review was chosen to identify gaps and future directions, and because the heterogeneity of the literature makes a systematic review difficult. Included articles were assigned to an aspect of Messick's framework and judged to provide evidence for or against validity. Results Twenty-two articles relating to validity evidence for the EM SLOE have been published. There is evidence for content validity; however, there is a lack of evidence for internal structure, relations to other variables, and consequences. Additionally, the literature regarding response process provides evidence against validity. Conclusions Overall, there is little published evidence in support of validity for the EM SLOE. Stakeholders should consider changing the ranking system, improving the standardization of clerkships, and further studying relations to other variables to improve validity. This will be important across graduate medical education (GME) as more specialties adopt a standardized letter.


2021 ◽  
Author(s):  
Eric G. Meyer ◽  
Emily Harvey ◽  
Steven J. Durning ◽  
Sebastian Uijtdehaage

Abstract Background. Assessments of Entrustable Professional Activities (EPAs) measure learners' competence with an entrustment or supervisory scale. Although designed for workplace-based assessment, EPA assessments have also been proposed for undergraduate medical education (UME), where assessments frequently occur outside the workplace and may be less intuitive, raising validity concerns. This study explored how assessors make entrustment determinations in UME, including the impact of longitudinal student-assessor relationships. Methods. A qualitative approach using think-alouds was employed. Assessors assessed two students (one familiar, one unfamiliar) completing a history and physical exam using a supervisory scale and then thought aloud after each assessment. We conducted a thematic analysis of assessors' response processes and compared them based on their familiarity with a student. Results. Four themes and fifteen subthemes were identified. The most prevalent theme related to "student performance." The other three themes were "frame of reference," "assessor uncertainty," and "the patient." "Previous student performance" and "affective reactions" were subthemes more likely to inform scoring when faculty were familiar with a student, while unfamiliar faculty were more likely to reference "self" and "lack confidence in their ability to assess." Conclusions. Student performance appears to be assessors' main consideration for all students, providing some validity evidence for the response process in EPA assessments. Several problematic themes could be addressed with faculty development, while others appear to be inherent to entrustment and may be more challenging to mitigate. Differences based on assessor familiarity with a student merit further research on how trust develops over time.


2020 ◽  
Vol 20 (1) ◽  
Author(s):  
Christopher R. Stephenson ◽  
Sara L. Bonnes ◽  
Adam P. Sawatsky ◽  
Lukas W. Richards ◽  
Cathy D. Schleck ◽  
...  

Abstract Background Continuing medical education (CME) often uses passive educational models, including lectures. However, numerous studies have questioned the effectiveness of these less engaging educational strategies. Studies outside of CME suggest that engaged learning is associated with improved educational outcomes. However, measuring participants' engagement can be challenging. We developed and determined the validity evidence for a novel instrument to assess learner engagement in CME. Methods We conducted a cross-sectional validation study at a large, didactic-style CME conference. Content validity evidence was established through a review of the literature, previously published engagement scales, and conceptual frameworks on engagement, along with an iterative process involving experts in the field, to develop an eight-item Learner Engagement Instrument (LEI). Response process validity was established by vetting LEI items for clarity and perceived meaning prior to implementation, as well as by using a well-developed online platform with clear instructions. Internal structure validity evidence was based on factor analysis and internal consistency reliability. Relations to other variables validity evidence was determined by examining associations between LEI scores and scores on the previously validated CME Teaching Effectiveness (CMETE) instrument. Following each presentation, all participants were invited to complete the LEI and the CMETE. Results 51 of 206 participants completed the LEI and CMETE (response rate 25%). The correlation between LEI and CMETE overall scores was strong (r = 0.80). Internal consistency reliability for the LEI was excellent (Cronbach's alpha = 0.96). To support internal structure validity, a factor analysis was performed and revealed a two-dimensional instrument consisting of internal and external engagement domains. The internal consistency reliabilities were 0.96 for the internal engagement domain and 0.95 for the external engagement domain. Conclusion Engagement, as measured by the LEI, is strongly related to teaching effectiveness. The LEI is supported by robust validity evidence, including content, response process, internal structure, and relations to other variables. Given the relationship between learner engagement and teaching effectiveness, identifying more engaging and interactive methods for teaching in CME is recommended.
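The reliability and correlation statistics reported above (Cronbach's alpha, Pearson's r) are standard psychometric quantities. As a minimal, hypothetical sketch of how such values are computed, the Python/NumPy snippet below uses simulated item responses; the data, sample size, and variable names are illustrative assumptions, not the study's actual dataset or analysis code.

```python
# Illustrative sketch only: Cronbach's alpha for an eight-item instrument and
# its Pearson correlation with another score. Data are simulated, not the
# study's; rows = respondents, columns = items.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array of shape (n_respondents, n_items)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical data: 51 respondents answering 8 items on a 1-5 scale,
# plus a second score correlated with the item mean by construction.
rng = np.random.default_rng(0)
lei_items = rng.integers(1, 6, size=(51, 8)).astype(float)
cmete_scores = lei_items.mean(axis=1) + rng.normal(0, 0.3, size=51)

alpha = cronbach_alpha(lei_items)
r = np.corrcoef(lei_items.mean(axis=1), cmete_scores)[0, 1]
print(f"Cronbach's alpha = {alpha:.2f}, r = {r:.2f}")
```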


2020 ◽  
Author(s):  
Michael D. Wolcott ◽  
Nikki G. Lobczowski ◽  
Jacqueline M. Zeeman ◽  
Jacqueline E. McLaughlin

Abstract Background: Situational judgment tests (SJTs) are used in health sciences education to measure knowledge using case-based scenarios. Despite their popularity, there is a significant gap in the validity evidence and research on the response process demonstrating how SJTs measure their intended constructs. Models of the SJT response process have been proposed in the literature; however, few studies explore and expand these models beyond surface-level attributes. The purpose of this study was to describe the factors and strategies involved in the cognitive process examinees use as they respond to SJT items. Methods: Thirty participants—15 students and 15 experienced practitioners—completed a 12-item SJT designed to measure empathy. Each participant engaged in a think-aloud interview while completing the SJT, followed by a cognitive interview probing their decision-making processes. Interviews were transcribed and independently coded by three researchers to identify salient themes and factors that contributed to the response process. Results: Results suggested that the SJT response process involved the complex integration of comprehension, retrieval, judgment, and response selection. Each of these response process stages was influenced by attributes such as the perceived objective of the task, job-specific knowledge, assumptions about the scenario, and item setting. Conclusions: This study provides an evaluation of the SJT response process and contributes exploratory information to the validity evidence of SJTs; these findings can inform the design, interpretation, and utility of SJTs.


2020 ◽  
Author(s):  
Christopher R. Stephenson ◽  
Sara L. Bonnes ◽  
Adam P. Sawatsky ◽  
Lukas W. Richards ◽  
Cathy D. Schleck ◽  
...  

Abstract Background Continuing medical education (CME) often uses passive educational models, including lectures. However, numerous studies have questioned the effectiveness of these less engaging educational strategies. Studies outside of CME suggest that engaged learning is associated with improved educational outcomes. However, measuring participants' engagement can be challenging. We developed and determined the validity evidence for a novel instrument to assess learner engagement in CME. Methods We conducted a cross-sectional validation study at a large, didactic-style CME conference. Content validity evidence was established through a review of the literature, previously published engagement scales, and conceptual frameworks on engagement, along with an iterative process involving experts in the field, to develop an eight-item Learner Engagement Instrument (LEI). Response process validity was established by vetting LEI items for clarity and perceived meaning prior to implementation, as well as by using a well-developed online platform with clear instructions. All item responses were double-coded for statistical analysis by a dedicated survey research center. Internal structure validity evidence was based on factor analysis and internal consistency reliability. Relations to other variables validity evidence was determined by examining associations between LEI scores and scores on the previously validated CME Teaching Effectiveness (CMETE) instrument. Following each presentation, all participants were invited to complete the LEI and the CMETE. Results A total of 2486 LEI and CMETE surveys were submitted during the 5-day course. The correlation between LEI and CMETE overall scores was strong (r = 0.80). Internal consistency reliability for the LEI was excellent (Cronbach's alpha = 0.96). To support internal structure validity, a factor analysis was performed and revealed a two-dimensional instrument consisting of internal and external engagement domains. The internal consistency reliabilities were 0.96 for the internal engagement domain and 0.95 for the external engagement domain. Conclusion Engagement, as measured by the LEI, is strongly related to teaching effectiveness. The LEI is supported by robust validity evidence, including content, response process, internal structure, and relations to other variables. Given the relationship between learner engagement and teaching effectiveness, identifying more engaging and interactive methods for teaching in CME is recommended.


Methodology ◽  
2013 ◽  
Vol 9 (3) ◽  
pp. 113-122 ◽  
Author(s):  
José-Luis Padilla ◽  
Isabel Benítez ◽  
Miguel Castillo

The latest edition of the Standards for Educational and Psychological Testing (APA, 1999) promotes the analysis of respondents' response processes in order to obtain evidence about the fit between the intended construct and the response process produced. The aim of this paper was twofold: first, to assess whether cognitive interviewing can be used to gather such validity evidence, and second, to analyze the usefulness of the evidence provided for interpreting the results of traditional psychometric analysis. The usefulness of the Cognitive Interviewing Reporting Framework (CIRF; Boeije & Willis, 2013) for reporting the cognitive interviewing findings was also evaluated. As an empirical example, we tested the (Spanish-language) APGAR family function scale. A total of 21 pretest cognitive interviews were conducted, and psychometric analyses were conducted on data from 28,371 respondents who were administered the APGAR scale. Results and the utility of the CIRF as a reporting framework are discussed.

