Human factors in the development of complications of airway management: preliminary evaluation of an interview tool

Anaesthesia, 2013, Vol 68(8), pp. 817-825
Author(s): R. Flin, E. Fioratou, C. Frerk, C. Trotter, T. M. Cook

2020, Vol 163(5), pp. 1000-1002
Author(s): Ahmad K. Abou-Foul

On December 14, 1799, three prominent physicians, Craik, Brown, and Dick, gathered to examine America’s first president, George Washington. He was complaining of severe throat symptoms and was being treated with bloodletting, blistering, and enemas. Dick advised performing an immediate tracheotomy to secure the airway, but Craik and Brown were not keen on attempting the procedure and overruled the proposal; Washington himself was not involved in the decision. He most likely had acute epiglottitis, which ultimately proved fatal. Had Dick prevailed, a tracheotomy could have saved Washington’s life. Human factors analysis of these events shows that his physicians were fixated on repeating futile treatments and could not comprehend the need for a radical alternative such as tracheotomy. This was aggravated by impaired situational awareness and significant resistance to change. The leadership model was also based on hierarchy rather than competency, which may have further contributed to Washington’s death.


2009, Vol 91(4), pp. 321-325
Author(s): Victoria Mason, Selina Balloo, Dominic Upton, Kamal Heer, Phil Higton, ...

INTRODUCTION A range of human factors have been shown to affect surgical performance, yet little is known about how training changes surgeons' views of these factors or how receptive surgeons are to such training.

SUBJECTS AND METHODS This was an observational pilot study using a short questionnaire designed to elicit surgeons' views of a range of human factors before, and immediately after, a course designed to address human factors in surgical performance. Focus groups were also conducted before and immediately after the course.

RESULTS Of all the human factors assessed, decision-making was rated on a visual analogue scale as having the biggest impact on performance both before and after the course. In general, views of human factors changed following the course, most notably an increase in the extent to which work stress, interpersonal difficulties and personality were believed to affect performance. Three themes emerged from the focus groups: (i) personal professional development; (ii) the relationship between trainer and trainee; and (iii) the changing perspective.

CONCLUSIONS Surgeons from a range of specialties are receptive to training on the impact of human factors on performance, and this study has shown that their views may change following a course designed to address this. Further training to address the theory-practice gap is warranted, as is an evaluation of its effectiveness.
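The abstract does not report the underlying ratings or analysis. Purely as an illustrative sketch, the Python snippet below shows one simple way pre- and post-course visual analogue scale (VAS) ratings could be summarised as a mean change per human factor; the factor names and all numbers are invented and do not come from the study.

from statistics import mean

# Invented pre- and post-course VAS ratings (0-100) for three human factors.
pre_course = {
    "decision-making": [82, 79, 85],
    "work stress": [55, 48, 60],
    "personality": [40, 45, 38],
}
post_course = {
    "decision-making": [84, 81, 86],
    "work stress": [70, 66, 72],
    "personality": [58, 61, 55],
}

for factor, pre_ratings in pre_course.items():
    # Positive change = factor believed to affect performance more after the course.
    change = mean(post_course[factor]) - mean(pre_ratings)
    print(f"{factor}: mean VAS change = {change:+.1f}")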


Anaesthesia, 2018, Vol 73(8), pp. 980-989
Author(s): R. Schnittker, S. Marshall, T. Horberry, K. L. Young

1983, Vol 27(11), pp. 892-895
Author(s): David M. Gilfoil, J. Thomas Murray, John Van Praag

A voluminous body of human factors literature pertains to various aspects of human/computer interface design, and it is frequently reviewed and cited as source documentation by human factors professionals. Traditional “hard copy” methods of storing and retrieving this information are inefficient because of people, resource, and location constraints. The Ergonomics department at Exxon Office Systems has developed a preliminary version of a computerized information storage and retrieval system, the Ergonomic Design Guidelines and Rules (EDGAR) system. Using EDGAR, department members develop and maintain a closer working knowledge of the human factors research literature and are able to quickly and accurately retrieve and apply guidelines to a variety of human/computer interface design situations. The design objectives of the EDGAR system, details of the system itself, and a preliminary evaluation are presented in this paper.
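The abstract does not describe EDGAR's internal design, so the following Python sketch only illustrates the general kind of keyword-indexed guideline storage and retrieval such a system implies. The Guideline fields, the GuidelineStore API, and the sample guideline are all invented for illustration, not taken from the paper.

from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class Guideline:
    """One ergonomic design guideline record (hypothetical schema)."""
    guideline_id: str
    topic: str                                        # e.g. "screen legibility"
    text: str                                         # the guideline itself
    sources: list[str] = field(default_factory=list)  # citations to the literature


class GuidelineStore:
    """Minimal keyword-indexed storage and retrieval of design guidelines."""

    def __init__(self) -> None:
        self._by_id: dict[str, Guideline] = {}
        self._index: dict[str, set[str]] = {}  # keyword -> guideline ids

    def add(self, guideline: Guideline) -> None:
        self._by_id[guideline.guideline_id] = guideline
        for word in (guideline.topic + " " + guideline.text).lower().split():
            self._index.setdefault(word.strip(".,;"), set()).add(guideline.guideline_id)

    def search(self, *keywords: str) -> list[Guideline]:
        """Return guidelines matching every keyword (simple AND query)."""
        ids: set[str] | None = None
        for kw in keywords:
            matches = self._index.get(kw.lower(), set())
            ids = matches if ids is None else ids & matches
        return [self._by_id[i] for i in sorted(ids or set())]


# Invented usage: store one guideline and retrieve it by keyword.
store = GuidelineStore()
store.add(Guideline("G-001", "screen legibility",
                    "Character height should be at least 16 minutes of arc "
                    "at the design viewing distance."))
for g in store.search("character", "height"):
    print(g.guideline_id, "-", g.text)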


If at first you don't succeed, try again. Often the first or second design alternatives are modified or discarded on the basis of preliminary evaluation, yet there is much to be learned from design ideas that did not work as planned. Unfortunately, the lessons learned from these inadequate designs are not always shared within the human factors community. This panel provides a forum to share the lessons learned by panelists who are creating and evaluating designs in the field, representing a balance of perspectives from academia, industry, and government. Attendees should come away from the session with concrete examples of inadequate interface designs, how they were improved, and an understanding of why each design did not work for its particular application. A successful panel will create a forum to share lessons learned and perhaps prevent practitioners from repeating work that has already been done.


2020, pp. 305-315
Author(s): Mikael Rewers, Nicholas Chrimes

2020
Author(s): Wu Yi Zheng, Bethany Van Dort, Romaric Marcilly, Richard Day, Rosemary Burke, ...

BACKGROUND It is well known that recommendations from electronic medication alerts are seldom accepted or acted on by users. Key factors affecting the effectiveness of medication alerts include system usability and alert design. In response, human factors principles, which apply knowledge of human capabilities and limitations, are increasingly used in the design of health technology to improve system usability.

OBJECTIVE We set out to develop an evidence-based tool that allows valid and reliable assessment of computerised medication alerting systems. The tool is intended to be used by hospital staff with detailed knowledge of their hospital's computerised provider order entry (CPOE) system and alerts to identify and address potential system deficiencies.

METHODS The Tool for Evaluating Medication Alerting Systems (TEMAS) was developed from human factors design principles and consists of 66 items. Eighteen staff members from six hospitals used the TEMAS to assess their medication alerting system. Data collected from participant assessments were used to evaluate the validity, reliability, and usability of the TEMAS. Validity was assessed by comparing TEMAS results with prior in-house evaluations; reliability was measured using Krippendorff's alpha to determine agreement between assessors; and a short survey was used to assess usability.

RESULTS Participants reported mostly negative (n=8/17) or neutral (n=7/17) perceptions of the alerts in their medication alerting system. However, the validity of the TEMAS could not be directly tested because participants were unaware of the results of prior in-house evaluations. Reliability, as measured by Krippendorff's alpha, was low to moderate (range: .26-.46), and participant feedback suggested that an individual's knowledge of the system varied with professional background. In terms of usability, the TEMAS items were generally easy to understand (61%), but participants suggested revisions to 22 items to improve clarity.

CONCLUSIONS This preliminary evaluation of the TEMAS identified components of the tool that required modification to improve its usability and usefulness. It also showed that, to facilitate a comprehensive evaluation of a medication alerting system, the TEMAS should be completed by a multidisciplinary team of hospital staff from both clinical and technical backgrounds in order to maximise knowledge of the systems.
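The reliability analysis reported above uses Krippendorff's alpha. As a self-contained illustration, not the authors' analysis code, the Python sketch below computes the nominal form of the coefficient for ratings with missing values; the two assessors and their item ratings are invented for the example.

from collections import Counter
from itertools import permutations


def krippendorff_alpha_nominal(reliability_data):
    """Krippendorff's alpha for nominal ratings with missing values.

    reliability_data: one list per assessor, aligned by item index,
    with None marking an item that assessor did not rate.
    """
    n_items = len(reliability_data[0])
    coincidences = Counter()  # (value_a, value_b) -> weighted pair count
    for i in range(n_items):
        values = [coder[i] for coder in reliability_data if coder[i] is not None]
        m = len(values)
        if m < 2:
            continue  # items rated by fewer than two assessors are unpairable
        for a, b in permutations(values, 2):
            coincidences[(a, b)] += 1.0 / (m - 1)

    totals = Counter()  # marginal frequency of each rating value
    for (a, _b), w in coincidences.items():
        totals[a] += w
    n = sum(totals.values())
    if n <= 1:
        return float("nan")

    # Nominal metric: any pair of unequal values counts as a full disagreement.
    observed = sum(w for (a, b), w in coincidences.items() if a != b)
    expected = sum(totals[a] * totals[b]
                   for a in totals for b in totals if a != b) / (n - 1)
    return 1.0 if expected == 0 else 1.0 - observed / expected


# Invented example: two assessors scoring five items as met / not met,
# with one missing rating; prints a single agreement coefficient (~0.53).
assessor_1 = ["met", "met", "not met", None, "met"]
assessor_2 = ["met", "not met", "not met", "met", "met"]
print(round(krippendorff_alpha_nominal([assessor_1, assessor_2]), 2))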

