Reliability and Validity of a Simulation-based Acute Care Skills Assessment for Medical Students and Residents

2003 ◽  
Vol 99 (6) ◽  
pp. 1270-1280 ◽  
Author(s):  
John R. Boulet ◽  
David Murray ◽  
Joe Kras ◽  
Julie Woodhouse ◽  
John McAllister ◽  
...  

Background Medical students and residents are expected to be able to manage a variety of critical events after training, but many of these individuals have limited clinical experience in the diagnosis and treatment of these conditions. Life-sized mannequins that model critical events can be used to evaluate the skills required to manage and treat acute medical conditions. The purpose of this study was to develop and test simulation exercises and associated scoring methods that could be used to evaluate the acute care skills of final-year medical students and first-year residents. Methods The authors developed and tested 10 simulated acute care situations that clinical faculty at a major medical school expect graduating physicians to be able to recognize and treat at the conclusion of training. Forty medical students and residents participated in the evaluation of the exercises. Four faculty members scored the students/residents. Results The reliability of the simulation scores was moderate and was most strongly influenced by the choice and number of simulated encounters. The validity of the simulation scores was supported through comparisons of students'/residents' performances in relation to their clinical backgrounds and experience. Conclusion Acute care skills can be validly and reliably measured using simulation technology. However, multiple simulated encounters, covering a broad domain, are needed to estimate student/resident abilities in acute care settings effectively and accurately.


2020 ◽  
Vol 95 (11) ◽  
pp. 1707-1711 ◽  
Author(s):  
Thomas Bunning ◽  
Matthew Goodwin ◽  
Emily Barney ◽  
Aarti Thakkar ◽  
Alison S. Clay

2014 ◽  
Vol 1 ◽  
pp. JMECD.S20094 ◽  
Author(s):  
Steven Lillis ◽  
Jill Yielder ◽  
Vernon Mogol ◽  
Barbara O'Connor ◽  
Kira Bacal ◽  
...  

Background Progress testing is a method of assessing the longitudinal progress of students using a single-best-answer format pitched at the standard of a newly graduated doctor. Aim To evaluate the results of the first year of summative progress testing at the University of Auckland for Years 2 and 4 in 2013. Subjects Two cohorts of medical students from Years 2 and 4 of the Medical Program. Methods A survey was administered to all involved students. Open-text feedback was also sought. Psychometric data were collected on test performance, and indices of reliability and validity were calculated. Results The three tests showed increased mean scores over time. Reliability of the assessments was uniformly high, and there was good concurrent validity. Students believed that progress testing assists in integrating science with clinical knowledge and improves learning. Year 4 students reported improved knowledge retention and deeper understanding. Conclusion Progress testing has been successfully introduced into the Faculty for two separate year cohorts, and results have met expectations. Other year cohorts will be added incrementally. Recommendation Key success factors for introducing progress testing are partnership with an experienced university, multiple and iterative briefings with staff and students, and demonstrating the usefulness of progress testing by providing students with detailed feedback on performance.


2009 ◽  
Vol 33 (4) ◽  
pp. 670-675 ◽  
Author(s):  
Maria de Fátima Aveiro Colares ◽  
Margaret de Castro ◽  
Cristiane Martins Peres ◽  
Afonso Diniz Costa Passos ◽  
José Fernando de Castro Figueiredo ◽  
...  

Entering medical school can be associated with a number of difficulties that hinder students' performance. Mentoring programs are designed to help students circumvent these difficulties and improve their learning and personal development. The current study aimed to evaluate the perceptions of both students and mentors regarding a recently introduced, group-based mentoring program designed to support first-year students. After one year of regular meetings, students' and mentors' perceptions of the program were assessed by means of structured questionnaires. Response content categories were identified through multiple readings. Both regular attendees and non-participating students had positive opinions of the program. Mentors were highly satisfied at having participated and acknowledged that the program had been useful not only for assisting students but also for fostering their own personal and professional development. In conclusion, the group-based mentoring program is feasible and can elicit positive views from both mentors and students. In addition, faculty members' participation as mentors can also be beneficial, since the program appears to contribute to their own personal and professional development.


2019 ◽  
Vol 19 (1) ◽  
Author(s):  
Bunmi S. Malau-Aduli ◽  
Simone Ross ◽  
Mary D. Adu

Abstract Background This study sought to examine the awareness and perception of intercultural competence and institutional intercultural inclusiveness among first-year students at an Australian medical school over four consecutive years (2014–2017), to identify existing gaps in the curriculum, and to offer recommendations. Methods The study employed an adapted 20-item questionnaire for data collection. The reliability and interrelations of the survey items were examined. Descriptive statistics were used to examine students' perceptions, while Mann-Whitney U and Kruskal-Wallis tests were used to assess item scores in relation to participant characteristics. Results Over the 4 years of the study, there were 520 respondents, with response rates of 53% to 69% per year. Cronbach's alpha for the instrument was 0.88, and factor analysis showed all items loading strongly on two components. Participants' mean scores on self-reported intercultural competence levels ranged from 3.8 to 4.6 out of 5, indicating relatively high awareness, valuing, and understanding of cultural differences among this group of students. However, their mean scores (3.4–4.2) for institutional intercultural inclusiveness were slightly lower. Conclusion The instrument used in this study is effective in assessing the level of intercultural competence among medical students. However, the results highlight the need for increased institutional support and professional development for faculty members to foster institutional intercultural inclusiveness.
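The internal-consistency statistic reported above (Cronbach's alpha = 0.88) can be computed for any respondents-by-items score matrix. A minimal sketch in plain Python using only the standard library; the Likert-style sample data below are invented for illustration and are not taken from the study:

```python
from statistics import variance

def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents-by-items matrix of scores.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    where k is the number of items; sample (n-1) variances throughout.
    """
    k = len(scores[0])                          # number of items
    items = list(zip(*scores))                  # transpose: items x respondents
    item_var = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical 5-point Likert responses (4 respondents, 3 items):
responses = [
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 5],
    [2, 3, 2],
]
print(round(cronbach_alpha(responses), 2))  # → 0.98
```

Alpha rises when items covary strongly relative to their individual variances, which is why a 20-item instrument with items "loading strongly on two components" can still reach 0.88.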


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Elizabeth Sinz ◽  
Arna Banerjee ◽  
Randolph Steadman ◽  
Matthew S. Shotwell ◽  
Jason Slagle ◽  
...  

Abstract Introduction Even physicians who routinely work in complex, dynamic practices may be unprepared to optimally manage challenging critical events. High-fidelity simulation can realistically mimic critical, clinically relevant events; however, the reliability and validity of simulation-based assessment scores for practicing physicians have not been established. Methods Standardised complex simulation scenarios were developed and administered to board-certified, practicing anesthesiologists who volunteered to participate in an assessment study during formative maintenance-of-certification activities. A subset of the study population agreed to participate as the primary responder in a second scenario for this study. The physicians were assessed independently by trained raters on both teamwork/behavioural and technical performance measures. Generalisability and Decision study analyses were completed for the two scenarios with two raters. Results The behavioural score was not more reliable than the technical score. With two raters, more than 20 scenarios would be required to achieve a reliability estimate of 0.7. Increasing the number of raters for a given scenario would have little effect on reliability. Conclusions The performance of practicing physicians on simulated critical events may be highly context-specific. Realistic simulation-based assessment for practicing physicians is resource-intensive and may be best suited to individualized formative feedback. More importantly, aggregate data from a population of participants may have an even greater impact if used to identify skill or knowledge gaps to be addressed by training programs and to inform continuing education improvements across the profession.
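A Decision study projects how score reliability grows as scenarios are added; the classical Spearman-Brown prophecy formula gives the same kind of projection. A minimal sketch of that arithmetic; the single-scenario reliability of 0.10 below is an assumed value chosen purely for illustration, not a figure reported by the study:

```python
def scenarios_needed(single_reliability, target_reliability):
    """Spearman-Brown prophecy: how many parallel scenarios must be
    aggregated for the mean score to reach the target reliability."""
    return (target_reliability * (1 - single_reliability)) / (
        single_reliability * (1 - target_reliability)
    )

# If one scenario (scored by two raters) had reliability ~0.10,
# reaching a reliability of 0.7 would require:
print(round(scenarios_needed(0.10, 0.7), 1))  # → 21.0, i.e. more than 20
```

The formula also shows why adding raters helps little here: rater disagreement is a small variance component relative to scenario-to-scenario (context) variance, so only adding scenarios moves the projection meaningfully.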


2020 ◽  
Vol 20 (1) ◽  
Author(s):  
Lori Meyers ◽  
Bryan Mahoney ◽  
Troy Schaffernocker ◽  
David Way ◽  
Scott Winfield ◽  
...  

Abstract Background Simulation-based education (SBE) with high-fidelity simulation (HFS) offers medical students early exposure to the clinical environment, allowing them to work through clinical scenarios and their management. We hypothesized that supplementing the standard pulmonary physiology curriculum with HFS would improve the performance of first-year medical students on written tests of pulmonary physiology. Methods This observational pilot study included SBE with three HFS patient care scenarios that highlighted basic pulmonary physiology. First-year medical students' test scores on their cardiopulmonary curriculum were compared between students who participated in the SBE and those who received only lecture-based education (LBE). A survey was administered to the SBE group to assess their perception of the HFS. Results From a class of 188 first-year medical students, 89 (47%) participated in the SBE, and the remaining 99 constituted the LBE group. On their cardiopulmonary curriculum test, the SBE group had a median score of 106 [IQR: 97, 110] versus 99 [IQR: 89, 105] for the LBE group (p < 0.001). For the pulmonary physiology subsection, scores were also significantly different between groups (p < 0.001). Conclusions Supplemental SBE may be an effective way to enhance learning and overall satisfaction in preclinical medical students.


2009 ◽  
Vol 33 (4) ◽  
pp. 329-334 ◽  
Author(s):  
Rashmi Vyas ◽  
Elizabeth Tharion ◽  
Solomon Sathishkumar

In compliance with the Medical Council of India, preclinical medical students maintain a record of their laboratory work in physiology. The physiology record books also contain a set of questions to be answered by the students. Faculty members and students had indicated that responding to these questions did not serve the intended purpose of being an effective learning tool. The purpose of this study was to obtain the views of the medical students and faculty members at our institution concerning the usefulness of responding to the questions and to gather suggestions for possible improvement. Data were collected through focus groups and questionnaires administered to first-year medical students and faculty members in physiology and were analyzed using qualitative and quantitative methods. The students and faculty members viewed the physiology record books as a potentially useful learning aid, but lack of time led the students to write the answers without understanding the topic rather than generate their own responses to the questions. Faculty members and students recommended that the students write the responses to the questions on site during the practical classes, using relevant on-site resources and interacting with faculty members. The findings of the present study may be of value to other medical colleges in India and elsewhere, with modifications based on their specific needs, to improve the effectiveness of physiology record books as a learning tool.

