A Systematic Review of the Use of Google Glass in Graduate Medical Education

2019 · Vol 11 (6) · pp. 637-648
Author(s): Joseph F. Carrera, Connor C. Wang, William Clark, Andrew M. Southerland

ABSTRACT
Background: Graduate medical education (GME) has emphasized the assessment of trainee competencies and milestones; however, opportunities for sufficient in-person assessment are often constrained. Mobile hands-free devices such as Google Glass (GG), used for telemedicine, allow remote supervision, education, and assessment of residents.
Objective: We reviewed the available literature on the use of GG in GME in the clinical learning environment, its use for resident supervision and education, and its clinical utility and technical limitations.
Methods: We conducted a systematic review in accordance with the 2009 PRISMA guidelines. Applicable studies were identified through a review of the PubMed, MEDLINE, and Web of Science databases for articles published from January 2013 to August 2018. Two reviewers independently screened titles, abstracts, and full-text articles that reported using GG in GME and assessed the quality of the studies. The included studies were then appraised for descriptions of GG's utility in GME.
Results: Following our search and review process, 37 studies were included. The majority evaluated GG in surgical specialties (n = 23) for surgical/procedural skills training or supervision. GG was used predominantly for video teleconferencing and for photo and video capture. Frequently highlighted strengths included point-of-view broadcasting and the capacity for 2-way communication; the most commonly cited drawbacks were suboptimal battery life and HIPAA concerns.
Conclusions: GG shows some promise as a device capable of enhancing GME. Studies evaluating GG in GME are limited by small sample sizes and limited quantitative data, but overall reported experience with GG in GME is generally positive.

2013 · Vol 5 (2) · pp. 211-218
Author(s): Kenneth A. Locke, Carol K. Bates, Reena Karani, Shobhina G. Chheda

Abstract
Background: A rapidly evolving body of literature in medical education can impact the practice of clinical educators in graduate medical education.
Objective: To aggregate studies published in the medical education literature in 2011 and provide teachers in general internal medicine with an overview of the current, relevant medical education literature.
Review: We systematically searched major medical education journals and the general clinical literature for medical education studies with sound design and relevance to the educational practice of graduate medical education teachers. Using a consensus method, we chose 12 studies, grouped them into themes, and critiqued them.
Results: Four themes emerged: (1) learner assessment, (2) duty hour limits and teaching in the inpatient setting, (3) innovations in teaching, and (4) learner distress. For each article we also present recommendations for how readers may use it as a resource to update their clinical teaching. Although we sought to identify the studies with the highest quality and greatest relevance to educators, limitations of the selected studies include their single-site designs, small samples, and frequent lack of objective outcome measures. These limitations are shared with the larger body of medical education literature.
Conclusions: The themes, and the recommendations for incorporating this information into clinical teaching, have the potential to inform the educational practice of general internist educators as well as teachers in other specialties.


2021 · Vol 13 (4) · pp. 553-560
Author(s): Deborah Simpson, Matthew McDiarmid, Tricia La Fratta, Nicole Salvo, Jacob L. Bidwell, ...

ABSTRACT
Background: The clinical learning environment (CLE) is a priority focus in medical education. The recent addition of teaming and health care systems to the Accreditation Council for Graduate Medical Education's Clinical Learning Environment Review (CLER) obligates educators to monitor these areas. Tools to evaluate the CLE would ideally be: (1) appropriate for all health care team members on a specific unit/project; (2) informed by contemporary learning environment frameworks; and (3) feasible and quick to complete. No existing CLE evaluation tool meets these criteria.
Objective: This report describes the creation of, and preliminary validity evidence for, a Clinical Learning Environment Quick Survey (CLEQS).
Methods: Survey items were identified from the literature and other data sources, sorted into 1 of 4 learning environment domains (personal, social, organizational, material), and reviewed by multiple stakeholders and experts. Leaders from 6 interprofessional graduate medical education quality improvement/patient safety teams distributed this voluntary survey to their clinical team members (November 2019–mid-January 2021) in electronic or paper formats. Validity evidence for the instrument was based on content, response process, internal structure, reliability, relations to other variables, and consequences.
Results: Two hundred one CLEQS responses were obtained, taking 1.5 minutes on average to complete, with good overall reliability (Cronbach's α ≥ 0.83). Cronbach's alpha for each CLE domain with the overall item ranged from 0.50 (personal) to 0.79 (social). There were strong associations with other measures and clarity about improvement targets.
Conclusions: CLEQS meets the 3 criteria for evaluating CLEs. Reliability data support its internal consistency, and initial validity evidence is promising.
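For context, the Cronbach's alpha values reported above are the standard internal-consistency statistic for multi-item survey scales; the formula below is the conventional definition, not a calculation reproduced from the study itself:

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)

where k is the number of items in the scale (or domain), \sigma^{2}_{Y_i} is the variance of item i, and \sigma^{2}_{X} is the variance of the total score. By common convention, values of roughly 0.7 or higher are read as acceptable internal consistency, which is consistent with the overall α ≥ 0.83 reported for CLEQS.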


2019 · Vol 43 (4) · pp. 386-395
Author(s): Anne L. Walsh, Susan Lehmann, Jeffrey Zabinski, Maria Truskey, Taylor Purvis, ...

2019 · Vol 15 (1) · pp. 1-11
Author(s): Ryan Zimmerman, Richard Alweis, Alexandra Short, Tom Wasser, Anthony Donato
