Comprehensive Assessment of Struggling Learners Referred to a Graduate Medical Education Remediation Program

2017 · Vol 9 (6) · pp. 763-767
Author(s): Karen M. Warburton, Eric Goren, C. Jessica Dine

ABSTRACT Background  Implementation of the Next Accreditation System has provided a standardized framework for identifying learners not meeting milestones, but there is as yet no corresponding framework for remediation. Objective  We developed a comprehensive assessment process that allows correct diagnosis of a struggling learner's deficit(s) to promote successful remediation. Methods  At the University of Pennsylvania, resident learners within the Department of Medicine who are not meeting milestones are referred to the Early Intervention Remediation Committee (EIRC). The EIRC, composed of 14 faculty members with expertise in remediation, uses a standardized process to assess learners' deficits. These faculty members categorize primary deficits as follows: medical knowledge, clinical reasoning, organization and efficiency, professionalism, and communication skills. The standardized process of assessment includes an analysis of the learner's file, direct communication with evaluators, an interview focused on learner perception of the problem, screening for underlying medical or psychosocial issues, and a review of systems for deficits in the 6 core competencies. Participants were surveyed after participating in this process. Results  Over a 2-year period, the EIRC assessed and developed remediation plans for 4% of learners (14 of a total 342). Following remediation and reassessment, the identified problems were satisfactorily resolved in all cases with no disciplinary action. While the process was time intensive, an average of 45 hours per learner, the majority of faculty and residents rated it as positive and beneficial. Conclusions  This structured assessment process identifies targeted areas for remediation and adds to the tools available to Clinical Competency Committees.

Author(s): Susan Elaine Mackintosh, Emmanuel Katsaros

The goal of allopathic and osteopathic medical education is to develop the medical student into a competent and caring physician. As the healthcare system continues to evolve and intersect with an ever-greater breadth and depth of medical knowledge and an aging, more complex patient population, the emerging physician must rely increasingly on a team-based approach to patient-centered healthcare. Integrating interprofessional competencies across the span of medical education and its assessment process, via the core competencies and the Entrustable Professional Activities, has the potential not only to instill the knowledge and skills required to practice as a member of an interprofessional healthcare team, but also to normalize the culture, and thus the expectation, of practicing collaboratively with all members of the health team toward the goal of improved patient outcomes.


2011 · Vol 115 (4) · pp. 862-878
Author(s): Keith Baker

Background Valid and reliable (dependable) assessment of resident clinical skills is essential for learning, promotion, and remediation. Competency is defined as what a physician can do, whereas performance is what a physician does in everyday practice. There is an ongoing need for valid and reliable measures of resident clinical performance. Methods Anesthesia residents were evaluated confidentially on a weekly basis by faculty members who supervised them. The electronic evaluation form had five sections, including a rating section for absolute and relative-to-peers performance under each of the six Accreditation Council for Graduate Medical Education core competencies, clinical competency committee questions, rater confidence in having the resident perform cases of increasing difficulty, and comment sections. Residents and their faculty mentors were provided with the resident's formative comments on a biweekly basis. Results From July 2008 to June 2010, 140 faculty members returned 14,469 evaluations on 108 residents. Faculty scores were pervasively positively biased and affected by idiosyncratic score range usage. These effects were eliminated by normalizing each performance score to the unique scoring characteristics of each faculty member (Z-scores). Individual Z-scores had low amounts of performance information, but signal averaging allowed determination of reliable performance scores. Average Z-scores were stable over time, related to external measures of medical knowledge, identified residents referred to the clinical competency committee, and increased when performance improved because of an intervention. Conclusions This study demonstrates a reliable and valid clinical performance assessment system for residents at all levels of training.
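The per-rater normalization and signal averaging described in this abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation; the record layout (rater, resident, raw score tuples) is a hypothetical stand-in for the weekly evaluation data.

```python
# Minimal sketch of per-rater Z-score normalization followed by signal
# averaging. The (rater_id, resident_id, raw_score) layout is assumed for
# illustration only.
from collections import defaultdict
from statistics import mean, stdev

def average_z_scores(evaluations):
    """evaluations: iterable of (rater_id, resident_id, raw_score) tuples."""
    # Estimate each rater's own mean and spread from all scores they gave.
    by_rater = defaultdict(list)
    for rater, _, score in evaluations:
        by_rater[rater].append(score)
    rater_stats = {r: (mean(s), stdev(s))
                   for r, s in by_rater.items() if len(s) > 1}

    # Re-express every score as a Z-score relative to that rater's own
    # scoring habits, removing positive bias and idiosyncratic range usage.
    by_resident = defaultdict(list)
    for rater, resident, score in evaluations:
        if rater in rater_stats and rater_stats[rater][1] > 0:
            mu, sd = rater_stats[rater]
            by_resident[resident].append((score - mu) / sd)

    # Signal averaging: many noisy single-evaluation Z-scores are averaged
    # into one more reliable performance estimate per resident.
    return {res: mean(zs) for res, zs in by_resident.items()}
```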


2014 · Vol 6 (3) · pp. 526-531
Author(s): Allen F. Shaughnessy, Katherine T. Chang, Jennifer Sparks, Molly Cohen-Osher, Joseph Gravel

Abstract Background Development of cognitive skills for competent medical practice is a goal of residency education. Cognitive skills must be developed for many different clinical situations. Innovation We developed the Resident Cognitive Skills Documentation (CogDoc) as a method for capturing faculty members' real-time assessment of residents' cognitive performance while precepting them in a family medicine office. The tool captures 3 dimensions of cognitive skill: medical knowledge, understanding, and application. This article describes CogDoc development, our experience with its use, and its reliability and feasibility. Methods After development and pilot-testing, we introduced the CogDoc at a single training site, collecting all completed forms for 14 months to determine completion rate, competence development over time, consistency among preceptors, and resident use of the data. Results Thirty-eight faculty members completed 5021 CogDoc forms, documenting 29% of all patient visits by 33 residents. Competency was documented in all entrustable professional activities. Competence differed significantly among residents in different years of training for all 3 dimensions and progressively increased within all residency classes over time. Reliability scores were high: 0.9204 for the medical knowledge domain, 0.9405 for understanding, and 0.9414 for application. Almost every resident reported accessing the individual forms or summaries documenting their performance. Conclusions The CogDoc approach allows for ongoing assessment and documentation of resident competence and, when compiled over time, depicts a comprehensive picture of residents' cognitive development and ability to make decisions in ambulatory medicine. This approach meets criteria for an acceptable tool for assessing cognitive skills.


Author(s): C. R. Saju, Jose Vincent, Vidhu M. Joshy

Background: Globally there is a move to reorient medical education to suit the needs of developing nations. The Medical Council of India has made it mandatory for all faculty to attend a basic course in Medical Education Technologies (MET) to improve teaching effectiveness. Despite these efforts, many faculty members are still unaware of this initiative, and those who have already attended the course do not practice it effectively. This study aimed to assess the level of awareness and practice of medical education technologies among the teaching faculty. Methods: Data were collected from the faculty by personal interviews using a validated semi-structured questionnaire and analysed using SPSS. Results: 219 faculty members from 26 departments participated in the study. The mean age of the faculty was 40.98 years (SD 12.36); 57.1% were male and 42.9% were female. The level of awareness among study participants about learning-process-related medical education technologies ranged from 57% (for the psychomotor domain) to 74% (for setting educational objectives). Awareness and practice of teaching-process- and assessment-process-related medical education technologies remained low. No statistically significant association was found between awareness and practice of SLO, microteaching, and MiniCEX. Conclusions: The majority of teachers remained untrained in medical education technologies at the time of the study, although a greater proportion of teachers in the non-clinical stream had been trained than in the clinical stream. Overall, awareness and practice of medical education technologies remained low among the study participants.


2021 · Vol 13 (3) · pp. 377-384
Author(s): Taylor Sawyer, Megan Gray, Shilpi Chabra, Lindsay C. Johnston, Melissa M. Carbajal, ...

ABSTRACT Background A vital element of the Next Accreditation System is measuring and reporting educational Milestones. Little is known about changes in Milestone levels during the transition from residency to fellowship training. Objective To evaluate the Accreditation Council for Graduate Medical Education (ACGME) Milestones' ability to provide a linear trajectory of professional development from general pediatrics residency to neonatal-perinatal medicine (NPM) fellowship training. Methods We identified 11 subcompetencies that were the same for general pediatrics residency and NPM fellowship. We then extracted the last residency Milestone level and the first fellowship Milestone level for each subcompetency from the ACGME's Accreditation Data System for 89 subjects who started fellowship training between 2014 and 2018 at 6 NPM fellowship programs. Mixed-effects models were used to examine intra-individual changes in Milestone scores between residency and fellowship after adjusting for the effects of the individual programs. Results A total of 1905 subcompetency Milestone levels were analyzed. The average first fellowship Milestone levels were significantly lower than the last residency Milestone levels (residency, mean 3.99 [SD = 0.48] vs fellowship 2.51 [SD = 0.56]; P < .001). Milestone levels decreased by an average of 1.49 (SD = 0.65) levels from the last residency to the first fellowship evaluation. Significant differences in Milestone levels were seen in both context-dependent subcompetencies (patient care and medical knowledge) and context-independent subcompetencies (professionalism). Conclusions Contrary to providing a linear trajectory of professional development, we found that Milestone levels were reset when trainees transitioned from general pediatrics residency to NPM fellowship.
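A hedged sketch of the kind of mixed-effects comparison described in the Methods is shown below. The column names, CSV file, and the simplification of entering program as a fixed covariate are assumptions for illustration, not the authors' analysis code.

```python
# Sketch: mixed-effects comparison of last-residency vs first-fellowship
# Milestone levels. One row per subcompetency evaluation is assumed, with
# hypothetical columns: subject, program, stage, milestone.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("milestones.csv")  # hypothetical file

model = smf.mixedlm(
    "milestone ~ C(stage, Treatment(reference='residency')) + C(program)",
    data=df,
    groups=df["subject"],  # random intercept per trainee captures intra-individual change
)
result = model.fit()
print(result.summary())  # the fellowship-stage coefficient estimates the drop at transition
```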


PEDIATRICS · 1970 · Vol 46 (4) · pp. 653-658

MEDICAL SCIENCE COURSE: University of Pennsylvania School of Medicine announces a correlated basic medical science course for the fall of 1970. The program, of one semester's duration and designed to provide a background in the basic sciences, lasts 15 weeks and includes 3 hours per day of formal teaching by senior faculty members and 4 hours per day of clinical teaching by discipline (medicine, surgery, physical medicine, pediatrics, cardiology, gastroenterology, and dermatology). This course affords a unique opportunity to participate in an informative and stimulating full-time, full-semester program directed to the clinical correlation of the basic sciences in a medical practice setting.


2016 · Vol 8 (2) · pp. 191-196
Author(s): Christina M. Yuan, Robert Nee, Kevin C. Abbott, James D. Oliver

ABSTRACT Background From 2010 to 2011, more than 70% of the clinical rotation competency evaluations for nephrology fellows in our program were rated "superior" using a 9-point Likert scale, suggesting some degree of "grade inflation." Objective We sought to assess the efficacy of a 5-point centered rotation evaluation in reducing grade inflation. Methods In this retrospective cohort study, the impact of faculty education and a 5-point rotation evaluation on grade inflation was measured by the frequency of superior item ratings and the proportion of evaluations without superior ratings. The 5-point evaluation centered performance at the level expected for the stage of training. Faculty education began in 2011–2012. The 5-point centered evaluation was introduced in 2012–2013 and used exclusively thereafter. A total of 68 evaluations using the 9-point Likert scale and 63 evaluations using the 5-point centered scale were performed after first-year fellow clinical rotations. Nine to 12 faculty members participated yearly. Results Faculty education alone was associated with fewer superior ratings from 2010–2011 to 2011–2012 (70.5% versus 48.3%, P = .001), which declined further with introduction of the 5-point centered scale (2012–2013; 48.3% versus 35.6%; P = .012). Superior ratings declined with the 5-point centered scale versus the 9-point Likert scale (37.3% versus 59.3%, P = .001), specifically for medical knowledge, patient care, practice-based learning and improvement, and professionalism. On logistic regression, evaluations without superior scores were more likely with the 5-point centered scale than with the 9-point Likert scale (adjusted odds ratio [aOR] = 8.26; 95% CI 1.53–44.64; P = .014) and were associated with faculty identifier (aOR = 1.18; 95% CI 1.03–1.35; P = .013), but not with fellow identifier or training-year quarter. Conclusions Grade inflation was reduced with faculty education and the 5-point centered evaluation scale.
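The adjusted odds ratios in the Results come from a logistic regression; a minimal sketch of that kind of model follows. The data frame columns and file name are assumptions, and entering the identifiers as numeric covariates is only one plausible coding, shown here for illustration rather than as the authors' method.

```python
# Sketch of a logistic regression for "no superior rating on the evaluation",
# adjusted for scale type, faculty and fellow identifiers, and quarter.
# Column names and the CSV file are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("evaluations.csv")  # one row per rotation evaluation

model = smf.logit(
    "no_superior ~ C(scale, Treatment(reference='9-point')) "
    "+ faculty_id + fellow_id + quarter",
    data=df,
)
result = model.fit()

# Exponentiated coefficients are adjusted odds ratios with 95% CIs.
print(np.exp(result.params))
print(np.exp(result.conf_int()))
```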


2010 · Vol 30 (7) · pp. 1115-1134
Author(s): David Challis, Michele Abendstern, Paul Clarkson, Jane Hughes, Caroline Sutcliffe

ABSTRACT The quality of assessment of older people with health and social care needs has for some time been a concern of policy makers, practitioners, older people and carers in the United Kingdom and internationally. This article seeks to address a key aspect of these concerns, namely whether sufficient expertise is deployed when, as a basis for a care plan and service allocation, an older person's eligibility for local authority adult social-care services requires a comprehensive needs assessment of their usually complex and multiple problems. Is an adequate range of professionals engaged, and is a multi-disciplinary approach applied? The Single Assessment Process (SAP) was introduced in England in 2004 to promote a multi-disciplinary model of service delivery. After its introduction, a survey was conducted in 2005–06 to establish the prevalence and patterns of comprehensive assessment practice across England. The reported arrangements for multi-disciplinary working among local authority areas in England were categorised and reviewed. The findings suggest, first, that the provision of comprehensive assessments of older people that require the expertise of multiple professionals is limited, except where the possibility arose of placement in a care-home-with-nursing, and second, that by and large a systematic multi-disciplinary approach was absent. Policy initiatives to address the difficulties in assessment need to be more prescriptive if they are to produce the intended outcomes.


2012 · Vol 4 (4) · pp. 445-453
Author(s): Su-Ting T. Li, Daniel J. Tancredi, Ann E. Burke, Ann Guillot, Susan Guralnick, ...

Abstract Background Self-assessment and self-directed learning are essential to becoming an effective physician. Objective To identify factors associated with resident self-assessment on the competencies, and to determine whether residents chose areas of self-assessed relative weakness as areas for improvement in their Individualized Learning Plan (ILP). Methods We performed a cross-sectional analysis of the American Academy of Pediatrics' PediaLink ILP database. Pediatrics residents self-assessed their competency in the 6 Accreditation Council for Graduate Medical Education competencies using a color-coded slider scale with end anchors "novice" and "proficient" (0–100), and then chose at least 1 competency to improve. Multivariate regression explored the relationship between overall confidence in the core competencies and sex, level of training, and degree (MD or DO) status. Correlation analysis examined whether residents chose to improve competencies in which they had rated themselves lower. Results A total of 4167 residents completed an ILP in academic year 2009–2010, with residents' ratings improving from advanced beginner (48 on a 0–100 scale) among postgraduate year-1 residents (PGY-1s) to competent (75) among PGY-3s. Residents rated themselves as most competent in professionalism (mean, 75.3) and least competent in medical knowledge (mean, 55.8) and systems-based practice (mean, 55.2). In the adjusted regression model, residents' competency ratings increased with level of training and were higher among men, although among PGY-3s there was no difference between men and women. Residents selected areas for improvement that correlated with competencies in which they had rated themselves lower (P < .01). Conclusion Residents' self-assessment of their competencies increased with level of training, although residents rated themselves as least competent in medical knowledge and systems-based practice, even as PGY-3s. Residents tended to choose the subcompetencies they had rated lower as areas to focus on improving.
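One simple way to test whether residents pick their lower-rated competencies to improve is a point-biserial correlation between a "chosen for improvement" indicator and the self-rating across resident-competency pairs. The sketch below assumes that long data layout and is illustrative only, not the authors' analysis.

```python
# Sketch: correlate whether a competency was chosen for improvement (0/1)
# with the resident's self-rating of that competency (0-100). A negative
# correlation indicates lower-rated competencies are chosen more often.
# The CSV layout (one row per resident-competency pair) is hypothetical.
import pandas as pd
from scipy.stats import pointbiserialr

df = pd.read_csv("ilp_ratings.csv")  # columns: resident_id, competency, rating, chosen

r, p_value = pointbiserialr(df["chosen"], df["rating"])
print(f"point-biserial r = {r:.2f}, p = {p_value:.3f}")
```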

