Medical Simulation as a Competency-Based Assessment within Physician Assistant Education

Author(s):  
Michele Toussaint

Simulation-based practices are widely utilized in medical education and are known to be a safe and effective way to train and assess learners, improve provider confidence and competency, and improve patient safety. Competency-based initiatives are being more broadly utilized to assess learner proficiency in health professions education. The recent publication of competencies expected of new graduate physician assistants, together with updated accreditation requirements that include assessment of learner competencies in non-knowledge-based domains, led to the creation of this simulation-based summative assessment of learner competency in communication and patient care skills for Physician Assistant students. The purpose of this quantitative study was to determine whether the simulation assessment had appropriate construct validity and rater consistency, and whether learner performance on the simulation exam correlated with performance in required Supervised Clinical Training Experiences for measures of communication skills and patient care skills. While raters for the simulation assessment showed minimal variability, measures of internal consistency did not reach suitable thresholds for patient care skills. The communication skills assessment reached the minimum suitable threshold for internal consistency with minor revisions. No correlation was noted between exam performance for communication skills or patient care skills and clinical practice ratings. Several factors may explain these results, including a simulation exam rating scale built on checklists rather than global rating scales, faculty raters with broad and diverse clinical backgrounds, observation-related factors on the part of the student, and the highly complex, multidimensional nature of provider-patient interactions.
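To make the reported analyses concrete, here is a minimal sketch of how internal consistency (Cronbach's alpha) for a simulation checklist and a rank correlation with clinical ratings could be computed. The data, item counts, and variable names are hypothetical and are not drawn from the study.

```python
# Minimal sketch: internal consistency of a simulation checklist and its
# correlation with clinical ratings. All data and names are hypothetical.
import numpy as np
from scipy.stats import spearmanr

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = students, columns = checklist items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
ability = rng.normal(size=(40, 1))                                   # 40 students
checklist = (ability + rng.normal(size=(40, 12)) > 0).astype(float)  # 12 binary items
exam_total = checklist.sum(axis=1)                                   # simulation exam score
clinical_rating = rng.normal(4.0, 0.5, size=40)                      # preceptor ratings (1-5)

print(f"Cronbach's alpha: {cronbach_alpha(checklist):.2f}")
rho, p = spearmanr(exam_total, clinical_rating)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```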

2010
Vol 112 (4)
pp. 985-992
Author(s):
Heinz R. Bruppacher
Syed K. Alam
Vicki R. LeBlanc
David Latter
Viren N. Naik
...  

Background Simulation-based training is useful in improving physicians' skills. However, no randomized controlled trials have been able to demonstrate the effects of simulation teaching in real-life patient care. This study aimed to determine whether simulation-based training or an interactive seminar resulted in better patient care during weaning from cardiopulmonary bypass (CPB), a high-stakes clinical setting. Methods This study was conducted as a prospective, single-blinded, randomized controlled trial. After institutional research board approval, 20 anesthesiology trainees, postgraduate year 4 or higher, inexperienced in CPB weaning, and 60 patients scheduled for elective coronary artery bypass grafting were recruited. Each trainee received a teaching syllabus for CPB weaning 1 week before attempting to wean a patient from CPB (pretest). One week later, each trainee received a 2-h session of either high-fidelity simulation-based training or an interactive seminar. Each trainee then weaned patients from CPB within 2 weeks (posttest) and 5 weeks (retention test) of the intervention. Clinical performance was measured using the validated Anesthesiologists' Nontechnical Skills Global Rating Scale and a checklist of expected clinical actions. Results Pretest Global Rating Scale and checklist performances were similar. The simulation group scored significantly higher than the seminar group at both posttest (Global Rating Scale [mean ± standard error]: 14.3 ± 0.41 vs. 11.8 ± 0.41, P < 0.001; checklist: 89.9 ± 3.0% vs. 75.4 ± 3.0%, P = 0.003) and retention test (Global Rating Scale: 14.1 ± 0.41 vs. 11.7 ± 0.41, P < 0.001; checklist: 93.2 ± 2.4% vs. 77.0 ± 2.4%, P < 0.001). Conclusion Skills required to wean a patient from CPB can be acquired through simulation-based training. Compared with traditional interactive seminars, simulation-based training leads to improved performance in patient care by senior trainees in anesthesiology.


2016
Vol 13 (1)
pp. 60-68
Author(s):
Gerben E. Breimer
Faizal A. Haji
Giuseppe Cinalli
Eelco W. Hoving
James M. Drake

Abstract BACKGROUND: Growing demand for transparent and standardized methods for evaluating surgical competence prompted the construction of the Neuro-Endoscopic Ventriculostomy Assessment Tool (NEVAT). OBJECTIVE: To provide validity evidence of the NEVAT by reporting on the tool's internal structure and its relationship with surgical expertise during simulation-based training. METHODS: The NEVAT was used to assess performance of trainees and faculty at an international neuroendoscopy workshop. All participants performed an endoscopic third ventriculostomy (ETV) on a synthetic simulator. Participants were simultaneously scored by 2 raters using the NEVAT procedural checklist and global rating scale (GRS). Evidence of internal structure was collected by calculating interrater reliability and internal consistency of raters' scores. Evidence of relationships with other variables was collected by comparing the ETV performance of experts, experienced trainees, and novices using Jonckheere's test (evidence of construct validity). RESULTS: Thirteen experts, 11 experienced trainees, and 10 novices participated. The interrater reliability by the intraclass correlation coefficient for the checklist and GRS was 0.82 and 0.94, respectively. Internal consistency (Cronbach's α) for the checklist and the GRS was 0.74 and 0.97, respectively. Median scores with interquartile range on the checklist and GRS for novices, experienced trainees, and experts were 0.69 (0.58-0.86), 0.85 (0.63-0.89), and 0.85 (0.81-0.91) and 3.1 (2.5-3.8), 3.7 (2.2-4.3) and 4.6 (4.4-4.9), respectively. Jonckheere's test showed that the median checklist and GRS score increased with performer expertise (P = .04 and .002, respectively). CONCLUSION: This study provides validity evidence for the NEVAT to support its use as a standardized method of evaluating neuroendoscopic competence during simulation-based training.
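As an illustration of the construct-validity analysis described above, the following sketch implements a Jonckheere-Terpstra trend test across ordered expertise groups (novice < experienced trainee < expert). It uses the standard normal approximation without a ties correction; the scores are invented and this is not the authors' code.

```python
# Minimal sketch of a Jonckheere-Terpstra trend test for ordered expertise
# groups, as used to relate assessment scores to performer expertise.
import numpy as np
from scipy.stats import norm

def jonckheere_terpstra(groups):
    """groups: list of 1-D arrays, ordered by hypothesized increasing scores."""
    J = 0.0
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            for x in groups[i]:
                # count pairs across groups consistent with the ordering (ties count 0.5)
                J += np.sum(groups[j] > x) + 0.5 * np.sum(groups[j] == x)
    n = np.array([len(g) for g in groups])
    N = n.sum()
    mean_J = (N**2 - np.sum(n**2)) / 4.0
    var_J = (N**2 * (2 * N + 3) - np.sum(n**2 * (2 * n + 3))) / 72.0
    z = (J - mean_J) / np.sqrt(var_J)
    return J, 1 - norm.cdf(z)          # one-sided p-value for an increasing trend

novices  = np.array([3.1, 2.5, 3.8, 2.9, 3.0])   # invented GRS-style scores
trainees = np.array([3.7, 2.2, 4.3, 3.5, 3.9])
experts  = np.array([4.6, 4.4, 4.9, 4.5, 4.7])
J, p = jonckheere_terpstra([novices, trainees, experts])
print(f"J = {J:.1f}, one-sided p = {p:.3f}")
```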


CJEM
2008
Vol 10 (01)
pp. 44-50
Author(s):
Glen Bandiera
David Lendrum

ABSTRACT Objective: We sought to determine if a novel competency-based daily encounter card (DEC) that was designed to minimize leniency bias and maximize independent competency assessments could address the limitations of existing feedback mechanisms when applied to an emergency medicine rotation. Methods: Learners in 2 tertiary academic emergency departments (EDs) presented a DEC to their teachers after each shift. DECs included dichotomous categorical rating scales (i.e., "needs attention" or "area of strength") for each of the 7 CanMEDS roles or competencies and an overall global rating scale. Teachers were instructed to choose which of the 7 competencies they wished to evaluate on each shift. Results were analyzed using both staff and resident as the units of analysis. Results: Fifty-four learners submitted a total of 801 DECs that were completed by 43 different teachers over 28 months. Teachers' patterns of selecting CanMEDS competencies to assess did not differ between the 2 sites. Teachers selected an average of 3 roles per DEC (range 0–7). Only 1.3% of ratings were "needs further attention." The frequency with which each competency was selected ranged from 25% (Health Advocate) to 85% (Medical Expert). Conclusion: Teachers chose to direct feedback toward a breadth of competencies. They provided feedback on all 7 CanMEDS roles in the ED, yet demonstrated a marked leniency bias.


Author(s):  
Robins M. Kalapurackal
Shun Takai

Quality function deployment (QFD) is one of the most popular tools used in the product development process. It relates customer requirements to product requirements and enables engineers to determine which product requirements are more important than others in satisfying customers. Benefits of QFD include cost reduction, fewer design changes at the start of production, and improved communication among engineers. QFD applications use various approaches (i.e., worth calculation schemes and rating scales) to calculate the worth of requirements. The purpose of this paper is to study the change in the relative worth (normalized worth) of product requirements yielded by different rating scales and calculation schemes. We studied empirical and simulation-generated QFD matrices to determine how calculation schemes and rating scales influence the relative worth of requirements. Two representative scales and two calculation schemes were used to find the most and least sensitive cases and to examine the influence of the number of rows and columns on the relative worth of requirements. From the results, we identified the least sensitive and most sensitive combinations of calculation scheme and rating scale. We also found that QFD matrices become less sensitive to changes in rating scale and calculation scheme as the number of columns increases.
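A minimal sketch of the kind of worth calculation the paper examines: coded relationship strengths are mapped through a rating scale, weighted by customer importance, and normalized, so switching the rating scale changes the relative worth of product requirements. The matrix, weights, and scales below are illustrative assumptions only, not the paper's data.

```python
# Minimal QFD sketch: relative (normalized) worth of product requirements
# under two different rating scales. All values are made up for illustration.
import numpy as np

importance = np.array([5, 3, 4])               # customer requirement weights

# Relationship matrix: rows = customer reqs, cols = product reqs,
# coded 0 = none, 1 = weak, 2 = moderate, 3 = strong.
coded = np.array([[3, 1, 0],
                  [0, 2, 3],
                  [1, 3, 2]])

def relative_worth(coded, importance, scale):
    """Map coded strengths through a rating scale, weight by customer
    importance, and normalize so the worths of all product reqs sum to 1."""
    ratings = np.asarray(scale)[coded]          # e.g. {0,1,2,3} -> {0,1,3,9}
    worth = importance @ ratings                # simple weighted-sum scheme
    return worth / worth.sum()

print("1-3-9 scale:", relative_worth(coded, importance, [0, 1, 3, 9]).round(3))
print("1-2-3 scale:", relative_worth(coded, importance, [0, 1, 2, 3]).round(3))
```

Comparing the two printed vectors shows how much the ranking of product requirements shifts when only the rating scale changes, which is the sensitivity the paper quantifies.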


2019
Vol 6 (6)
pp. 339-343
Author(s):
Melinda Fleming
Michael McMullen
Theresa Beesley
Rylan Egan
Sean Field

Introduction Simulation training in anaesthesiology bridges the gap between theory and practice by allowing trainees to engage in high-stakes clinical training without jeopardising patient safety. However, implementing simulation-based assessments within an academic programme is highly resource intensive, and the optimal number of scenarios and faculty required for accurate competency-based assessment remains to be determined. Using a generalisability study methodology, we examine the structure of simulation-based assessment with regard to the minimal number of scenarios and faculty assessors required for optimal competency-based assessments. Methods Seventeen anaesthesiology residents each performed four simulations which were assessed by two expert raters. Generalisability analysis (G-analysis) was used to estimate the extent of variance attributable to (1) the scenarios, (2) the assessors and (3) the participants. The D-coefficient and the G-coefficient were used to determine accuracy targets and to predict the impact of adjusting the number of scenarios or faculty assessors. Results We showed that multivariate G-analysis can be used to estimate the number of simulations and raters required to optimise assessment. In this study, the optimal balance was obtained when four scenarios were assessed by two simulation experts. Conclusion Simulation-based assessment is becoming an increasingly important tool for assessing the competency of medical residents in conjunction with other assessment methods. G-analysis can be used to assist in planning for optimal resource use and cost-efficacy.
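For readers unfamiliar with generalisability theory, the sketch below shows a D-study projection for a fully crossed person x scenario x rater design: given estimated variance components, it predicts the relative G-coefficient for different numbers of scenarios and raters. The variance components are invented for illustration and do not come from this study.

```python
# Minimal D-study sketch for a crossed person x scenario x rater design.
# Variance components below are hypothetical, not the study's estimates.
def g_coefficient(var_p, var_ps, var_pr, var_psr_e, n_scenarios, n_raters):
    """Relative G-coefficient: person variance divided by person variance
    plus averaged person-by-facet interaction (relative error) variance."""
    rel_error = (var_ps / n_scenarios
                 + var_pr / n_raters
                 + var_psr_e / (n_scenarios * n_raters))
    return var_p / (var_p + rel_error)

components = dict(var_p=0.30, var_ps=0.20, var_pr=0.05, var_psr_e=0.45)

for n_s in (2, 4, 6):                      # candidate numbers of scenarios
    for n_r in (1, 2, 3):                  # candidate numbers of raters
        g = g_coefficient(**components, n_scenarios=n_s, n_raters=n_r)
        print(f"{n_s} scenarios x {n_r} raters: G = {g:.2f}")
```

Running the projection over a grid like this is how a programme can decide where adding scenarios or raters stops buying meaningful reliability.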


2017
Vol 32 (3)
pp. 253-260
Author(s):
Andrea B. Evans
Jennifer M. Hulme
Peter Nugus
Hilarie H. Cranmer
Melanie Coutu
...  

Abstract Methods The evaluation tool was derived from the Core Humanitarian Competency Framework of the former Consortium of British Humanitarian Agencies (CBHA; United Kingdom), now the "Start Network," and formatted in an electronic data capture tool that allowed for offline evaluation. During a 3-day humanitarian simulation event, participants in teams of eight to 10 were evaluated individually at multiple injects by trained evaluators. Participants were assessed on five competencies and a global rating scale. Participants evaluated both themselves and their team members using the same tool at the end of the simulation exercise (SimEx). Results All participants (63) were evaluated. A total of 1,008 individual evaluations were completed. There were 90 (9.0%) missing evaluations. All 63 participants also evaluated themselves and each of their teammates using the same tool. Self-evaluation scores were significantly lower than peer-evaluations, which were significantly lower than evaluators' assessments. Participants with a medical degree, and those with humanitarian work experience of one month or more, scored significantly higher on all competencies assessed by evaluators compared to other participants. Participants with prior humanitarian experience scored higher on competencies regarding operating safely and working effectively as a team member. Conclusion This study presents a novel electronic evaluation tool to assess individual performance in five of six globally recognized humanitarian competency domains in a 3-day humanitarian SimEx. The evaluation tool provides a standardized approach to the assessment of humanitarian competencies that cannot be evaluated through knowledge-based testing in a classroom setting. When combined with testing of knowledge-based competencies, this presents an approach to a comprehensive competency-based assessment that provides an objective measurement of competency with respect to the competencies listed in the Framework. There is an opportunity to advance the use of this tool in future humanitarian training exercises and potentially in real time, in the field. This could impact the efficiency and effectiveness of humanitarian operations. Evans AB, Hulme JM, Nugus P, Cranmer HH, Coutu M, Johnson K. An electronic competency-based evaluation tool for assessing humanitarian competencies in a simulated exercise. Prehosp Disaster Med. 2017;32(3):253–260.


2000
Vol 12 (S1)
pp. 279-280

Numerous rating scales are available to assess specific behavioral and psychological symptoms of dementia (BPSD) and BPSD in general. One of the most commonly used scales is the Brief Psychiatric Rating Scale (BPRS), which was developed nearly 40 years ago and has been partially validated for use in patients with dementia. Dr. Tariot commented that use of the BPRS requires extensive clinical training to conduct the semistructured interviews and to probe, gauge, and weigh data in a way that reflects clinical reasoning, thinking, and judgment. In contrast, a scale such as the Neuropsychiatric Inventory, which covers a broad spectrum of BPSD, can be administered by a healthcare professional without extensive clinical training; this is both a strength and a weakness. Dr. Sultzer recognized the value of both types of scales, those that require semistructured interviews and those that are largely observational, and suggested that using different combinations of scales in research studies may be valuable. Although the BPRS is an older scale that some researchers believe is inappropriate for use in patients with dementia, Dr. Mintzer noted that the BPRS and its subscales have been used successfully over the past few years to differentiate drug effects from placebo in this patient population.


2020
Vol 21 (3)
pp. 299-313
Author(s):
Belinda Goodenough
Jacqueline Watts
Sarah Bartlett

Abstract Objectives: To satisfy requirements for continuing professional education, workforce demand for access to large-scale continuing professional education and micro-credential-style online courses is increasing. This study examined the Knowledge Translation (KT) outcomes for a short (2 h) online course about support at night for people living with dementia (Bedtime to Breakfast), delivered at a national scale by Dementia Training Australia (DTA). Methods: A sample of the first cohort of course completers was re-contacted after 3 months to complete a KT follow-up feedback survey (n = 161). In addition to potential practice impacts in three domains (Conceptual, Instrumental, Persuasive), respondents rated the level of Perceived Improvement in Quality of Care (PIQOC), using a positively packed global rating scale. Results: Overall, 93.8% of the respondents agreed that the course had made a difference to the support they had provided for people with dementia since the completion of the course. In addition to anticipated Conceptual impacts (e.g., change in knowledge), a range of Instrumental and Persuasive impacts were also reported, including workplace guidelines development and knowledge transfer to other staff. Tally counts for discrete KT outcomes were high (median 7/10) and explained 23% of the variance in PIQOC ratings. Conclusions: Online short courses delivered at a national scale are capable of supporting a range of translation-to-practice impacts, within the constraints of retrospective insight into personal practice change. Topics around self-assessed knowledge-to-practice and the value of positively packed rating scales for increasing variance in respondent feedback are discussed.
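The "explained 23% of the variance" result corresponds to an R-squared from regressing PIQOC ratings on the KT outcome tally. A minimal sketch with simulated data (not the study's) is shown below.

```python
# Minimal sketch: simple linear regression of PIQOC ratings on the tally of
# discrete KT outcomes, with R^2 as the share of variance explained.
# Data are simulated for illustration only.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(1)
kt_tally = rng.integers(0, 11, size=161)                    # 0-10 outcomes ticked
piqoc = 5 + 0.3 * kt_tally + rng.normal(0, 1.2, size=161)   # global rating

result = linregress(kt_tally, piqoc)
print(f"R^2 = {result.rvalue**2:.2f}")   # proportion of variance explained
```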


2019
Vol 11 (2)
pp. 168-176
Author(s):
Zia Bismilla
Tehnaz Boyle
Karen Mangold
Wendy Van Ittersum
Marjorie Lee White
...  

ABSTRACT Background The Accreditation Council for Graduate Medical Education (ACGME) Milestone projects required each specialty to identify essential skills and develop means of assessment, with supporting validity evidence, for trainees. Several specialties rate trainees on a milestone subcompetency related to working in interprofessional teams. A tool to assess trainee competence in any role on an interprofessional team in a variety of scenarios would be valuable and suitable for simulation-based assessment. Objective We developed a tool for simulation settings that assesses interprofessional teamwork in trainees. Methods In 2015, existing tools that assess teamwork or interprofessionalism using direct observation were systematically reviewed for appropriateness, generalizability, adaptability, ease of use, and resources required. Items from these tools were entered into an iterative Delphi process with multidisciplinary pediatrics experts from June 2016 to January 2017 to develop an assessment tool. Results Thirty-one unique tools were identified. A 2-stage review narrowed this list to 5 tools, and 81 items were extracted. Twenty-two pediatrics experts participated in 4 rounds of Delphi surveys, with response rates ranging from 82% to 100%. Sixteen items reached consensus for inclusion in the final tool. A global 4-point rating scale from novice to proficient was developed. Conclusions A novel tool to assess interprofessional teamwork for individual trainees in a simulated setting was developed using a systematic review and Delphi methodology. This is the first step in establishing the validity evidence necessary to use this tool for competency-based assessment.
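A minimal sketch of the consensus step in a Delphi round: items are retained when the proportion of experts endorsing them meets a preset threshold. The threshold, item wording, and votes below are hypothetical and are not taken from the study.

```python
# Minimal sketch of tallying one Delphi round. All inputs are hypothetical.
import numpy as np

CONSENSUS = 0.80          # e.g., 80% agreement required for inclusion
items = {
    "shares information with team": [1, 1, 1, 1, 1, 1, 1, 1, 0, 1],
    "uses closed-loop communication": [1, 1, 1, 0, 1, 1, 1, 1, 1, 1],
    "documents in real time": [1, 0, 0, 1, 1, 0, 1, 0, 1, 0],
}

for name, votes in items.items():
    agreement = np.mean(votes)                         # proportion endorsing the item
    status = "include" if agreement >= CONSENSUS else "carry to next round"
    print(f"{name}: {agreement:.0%} -> {status}")
```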


Assessment
1996
Vol 3 (1)
pp. 27-36
Author(s):  
Bradley T. Erford

One hundred and nineteen teachers of 540 normal boys and girls, ages 5 to 10, were administered the Conners' Teacher Rating Scale-28 (CTRS-28). Their responses were analyzed to assess the instrument's internal consistency and its construct and criterion-related validity. Principal components analysis revealed a four-factor structure underlying the scale, rather than the three-factor structure originally reported. Internal consistency of the factors ranged from .79 to .95. Convergent validity with similar rating scales was primarily excellent. Norms for newly derived factors and critical analysis of the usefulness of the CTRS-28 were explored.
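As a rough illustration of how a factor structure like the one reported can be probed, the sketch below simulates item responses driven by a few latent dimensions and inspects the eigenvalues of the item correlation matrix using the Kaiser criterion. The simulated data and the criterion choice are assumptions for illustration, not the author's method.

```python
# Minimal sketch: eigenvalues of an item correlation matrix, with the Kaiser
# criterion (eigenvalue > 1) suggesting the number of underlying components.
# Responses are simulated; this is not the CTRS-28 data.
import numpy as np

rng = np.random.default_rng(2)
n_children, n_items = 540, 28
latent = rng.normal(size=(n_children, 4))               # 4 latent dimensions
loadings = rng.normal(scale=0.8, size=(4, n_items))
responses = latent @ loadings + rng.normal(size=(n_children, n_items))

corr = np.corrcoef(responses, rowvar=False)             # 28 x 28 item correlations
eigenvalues = np.linalg.eigvalsh(corr)[::-1]            # sorted, largest first
n_components = int(np.sum(eigenvalues > 1.0))           # Kaiser criterion

print("largest eigenvalues:", np.round(eigenvalues[:6], 2))
print("components suggested by Kaiser criterion:", n_components)
```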

