Crash Cart Usability: A Method in Simulation & Iterative Design

Author(s):  
Matthew N. Jesso ◽  
Yuhao Peng ◽  
Amanda Anderson

Prior to implementation, front-line clinical staff assessed a new crash cart in a simulated environment. The contents of the cart were assessed during a simulated code on several metrics: time on task, number of physical interactions, and errors, along with qualitative feedback from participants. These metrics helped researchers redesign the cart contents to improve the visibility and organization of supplies. Using an iterative design cycle, the redesigned cart and contents showed a reduction in time on task, physical interactions, and errors for most scenarios.
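The three quantitative metrics named above (time on task, physical interactions, errors) can be derived from a simple event log of a simulated code. The sketch below is illustrative only, using a hypothetical event format; it is not the study's actual instrumentation.

```python
from dataclasses import dataclass

# Hypothetical event record from a simulated code scenario;
# the field names and event kinds are illustrative, not from the study.
@dataclass
class CartEvent:
    t: float    # seconds since scenario start
    kind: str   # e.g. "open_drawer", "grab_item", "error"

def scenario_metrics(events):
    """Summarise one scenario: time on task, physical interactions, errors."""
    physical = {"open_drawer", "close_drawer", "grab_item"}
    return {
        "time_on_task": max(e.t for e in events) - min(e.t for e in events),
        "interactions": sum(1 for e in events if e.kind in physical),
        "errors": sum(1 for e in events if e.kind == "error"),
    }

baseline = [CartEvent(0.0, "open_drawer"), CartEvent(4.2, "error"),
            CartEvent(9.5, "grab_item")]
print(scenario_metrics(baseline))  # compare against the redesigned cart's log
```

Comparing these per-scenario summaries before and after a redesign is one way to quantify the improvement the abstract describes.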

Author(s):  
Pamela A. Savage-Knepshield

The Army's acquisition process is transforming to meet the needs of a force that must be agile, adaptive, and responsive to asymmetric threats. Advanced capabilities and technologies, which are urgently needed to enable rapid response to evolving military needs, are being developed and pushed out to troops at unprecedented rates. As a result, not all systems have undergone an iterative design process, received usability feedback from their target users, or had design support from human factors engineers to ensure that unit and Soldier considerations have been addressed. Consequently, these systems may possess characteristics that induce high cognitive workload, fatigue, or detectability, or that trigger events leading to fratricide. When human factors engineers encounter a system that has not derived these benefits, they too must become more agile, adaptive, and responsive to ensure that Soldier feedback is collected and that serious issues are identified and resolved before the system makes its way to the battlefield. Lessons learned while participating in advanced technology and experimentation programs include techniques that facilitate working with small Ns, institutional review boards, rapid survey instrument development, and the collection of qualitative feedback, as well as the importance of having a “usability tool kit” available to facilitate data collection efforts in an operational field environment.


2008 ◽  
Vol 31 (1) ◽  
pp. 83 ◽  
Author(s):  
Jody Condit Fagan ◽  
Meris A. Mandernach ◽  
Carl S. Nelson ◽  
Jonathan R. Paulo ◽  
Grover Saunders

Discovery tools are emerging in libraries. These tools offer library patrons the ability to concurrently search the library catalog and journal articles. While vendors rush to provide feature-rich interfaces and access to as much content as possible, librarians wonder about the usefulness of these tools to library patrons. In order to learn about both the utility and usability of EBSCO Discovery Service, James Madison University conducted a usability test with eight students and two faculty members. The test consisted of nine tasks focused on common patron requests or related to the utility of specific discovery tool features. Software recorded participants’ actions and time on task, human observers judged the success of each task, and a post-survey questionnaire gathered qualitative feedback and comments from the participants. Overall, participants were successful at most tasks, but specific usability problems suggested some interface changes for both EBSCO Discovery Service and JMU’s customizations of the tool. The study also raised several questions for libraries above and beyond any specific discovery tool interface, including the scope and purpose of a discovery tool versus other library systems, working with the large result sets made possible by discovery tools, and navigation between the tool and other library services and resources. This article will be of interest to those who are investigating discovery tools, selecting products, integrating discovery tools into a library web presence, or performing evaluations of similar systems.


1989 ◽  
Vol 33 (5) ◽  
pp. 259-263 ◽  
Author(s):  
Thomas T. Hewett

Increasingly, the design of interactive computing systems appears to be a process of iterative design and re-design. One important factor in successful iterative design is iterative evaluation: evaluation as part of each design cycle. This paper argues that different evaluation-design cycles may require different types of methodologies and different types of questions or measures to fully satisfy differing evaluation goals. Furthermore, evaluation procedures and measures themselves need to be designed and re-designed, a process more easily accomplished during system development. Examples based upon design projects illustrate some of the ways in which the nature and uses of evaluation procedures and information may change in different cycles of iterative evaluation.


2021 ◽  
Author(s):  
George Kachergis ◽  
Samaher Radwan ◽  
Bria Long ◽  
Judith Fan ◽  
Michael Lingelbach ◽  
...  

Curiosity is a fundamental driver of human behavior, and yet because of its open-ended nature and the wide variety of behaviors it inspires in different contexts, it is remarkably difficult to study in a laboratory context. A promising approach to developing and testing theories of curiosity is to instantiate them in artificial agents that are able to act and explore in a simulated environment, and then compare the behavior of these agents to humans exploring the same stimuli. Here we propose a new experimental paradigm for examining children’s – and AI agents’ – curiosity about objects’ physical interactions: participants choose which object to drop another object onto in order to create the most interesting effect. We compared adults’ (N=155) and children’s (N=66; 3- to 7-year-olds) choices and found that both children and adults show a strong preference for target objects that could potentially contain the dropped object. Adults alone also make choices consistent with achieving support relations. We contextualize our results using heuristic computational models based on 3D physical simulations of the same scenarios judged by participants.
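The containment preference described above can be approximated by a simple geometric heuristic: a target could contain a dropped object if the object's footprint fits within the target's opening and the target has an interior cavity. The sketch below is an assumed, simplified stand-in for such a heuristic, not the authors' 3D physics simulation.

```python
# Illustrative containment heuristic (not the study's simulation code):
# a dropped object can potentially be contained if its footprint fits
# within the target's opening and the target has nonzero interior depth.
def could_contain(target, dropped):
    fits_x = dropped["width"] <= target["opening_width"]
    fits_y = dropped["depth"] <= target["opening_depth"]
    return fits_x and fits_y and target["cavity_height"] > 0

# Hypothetical object dimensions in metres.
bowl = {"opening_width": 0.3, "opening_depth": 0.3, "cavity_height": 0.12}
plate = {"opening_width": 0.25, "opening_depth": 0.25, "cavity_height": 0.0}
ball = {"width": 0.1, "depth": 0.1}

print(could_contain(bowl, ball))   # True: ball fits inside the bowl
print(could_contain(plate, ball))  # False: a plate has no cavity
```

A heuristic of this shape could score each candidate target and be compared against the choice distributions of children, adults, and agents.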


Author(s):  
Richard Joyce ◽  
Stephen K. Robinson

The use of virtual reality (VR) to provide a higher-fidelity simulation environment earlier in the design cycle of a new cockpit has benefits in development cost and time, but practitioners may have concerns that use of virtual environments may change feedback. In this work, we aimed to test our VR environment against a non-VR simulator in a mock design study to evaluate if and how subject feedback and performance changed. Two separate groups of subjects evaluated the same two designs, one group using VR and the other a touchscreen desktop simulator. The results indicate that both groups provided similar qualitative feedback on the two designs. Some quantitative performance measures changed between groups, but the conclusions drawn from comparing designs within each group were consistent. We describe our findings on which quantitative measures are best for evaluation in a virtual environment.


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
E. S. Anderson ◽  
T. R. L. Griffiths ◽  
T. Forey ◽  
F. Wobi ◽  
R. I. Norman ◽  
...  

Abstract Background Aviation has used a real-time observation method to advance anonymised feedback to the front-line and improve safe practice. Using an experiential learning method, this pilot study aimed to develop an observation-based real-time learning tool for final-year medical students with potential wider use in clinical practice. Methods Using participatory action research, we collected data on medical students’ observations of real-time clinical practice. The observation data was analysed thematically and shared with a steering group of experts to agree a framework for recording observations. A sample of students (observers) and front-line clinical staff (observed) completed one-to-one interviews on their experiences. The interviews were analysed using thematic analysis. Results Thirty-seven medical students identified 917 issues in wards, theatres and clinics in an acute hospital trust. These issues were grouped into the themes of human influences, work environment and systems. Aviation approaches were adapted to develop an app capable of recording real-time positive and negative clinical incidents. Five students and eleven clinical staff were interviewed and shared their views on the value of a process that helped them learn and has the potential to advance the quality of practice. Concerns were shared about how the observational process is managed. Conclusion The study developed an app (Healthcare Team Observations for Patient Safety—HTOPS), for recording good and poor clinical individual and team behaviour in acute-care practice. The process advanced medical student learning about patient safety. The tool can identify the totality of patient safety practice and illuminate strengths and weaknesses. HTOPS offers the opportunity for collective ownership of safety concerns without blame and has been positively received by all stakeholders. The next steps will further refine the app for use in all clinical areas for capturing light noise.
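To make the recording concept concrete, here is a minimal sketch of the kind of observation record an app like the one described might store. The schema, field names, and validation are assumptions based only on the abstract; only the three theme labels come from the study.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Theme labels taken from the abstract; everything else here is assumed.
THEMES = {"human influences", "work environment", "systems"}

@dataclass
class Observation:
    theme: str
    positive: bool   # good practice vs. safety concern
    note: str
    timestamp: datetime = field(default_factory=datetime.now)

    def __post_init__(self):
        # Constrain records to the study's three observation themes.
        if self.theme not in THEMES:
            raise ValueError(f"unknown theme: {self.theme}")

log = [Observation("systems", True, "checklist completed before induction"),
       Observation("work environment", False, "trolley blocking fire exit")]
concerns = [o for o in log if not o.positive]
print(len(concerns))  # 1
```

Recording both positive and negative incidents in one structure mirrors the abstract's aim of capturing the totality of safety practice, not just failures.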


2020 ◽  
Author(s):  
Elizabeth Susan Anderson ◽  
Leyshon Griffiths ◽  
Teri Forey ◽  
Fatima Wobi ◽  
Robert Norman ◽  
...  

Abstract Background Aviation has used a real-time observation method to advance anonymised feedback to the front-line and improve safe practice. Seeking a patient-safety experiential learning method for final-year medical students, we embarked on a pilot study for similarly observing real-time clinical practice in context. Methods Using participatory action research, we collected data on medical students’ observations of real-time clinical practice. The observation data was analysed thematically and shared with a steering group of experts to agree a framework for recording observations. A sample of students (observers) and front-line clinical staff (observed) completed one-to-one interviews on their experiences. The interviews were analysed using thematic analysis. Results Thirty-seven medical students identified 917 issues in wards, theatres and clinics in an acute hospital trust. These issues formed the themes of human factors, systems and environment. Aviation approaches were adapted to form an app capable of recording real-time positive and negative clinical incidents. Five students and eleven clinical staff were interviewed and agreed on the value of a process that helped them learn and has the potential to advance the quality of practice. Concerns were shared about how the observational process is managed. Conclusion The final product, an app called Healthcare Team Observations for Patient Safety (HTOPS), used by observing senior medical students, can record both positive and negative clinical individual and team behaviours in acute-care practice. Early findings are promising: they highlight the totality of patient safety practice and can illuminate good and poor practice. The findings have been positively received and welcomed by the observed hospital trust. HTOPS offers the opportunity for collective ownership of safety concerns without blame and has been positively received by staff and medical students. The next steps will further refine the app for use in all clinical areas for capturing light noise.


2020 ◽  
Author(s):  
Steven Warburton ◽  
Mark Perry

This paper presents an overview of the development of a pattern language for designing online learning and teaching. The aim of these patterns is to help teachers design their online teaching environment using a scaffolded methodology that maintains and confirms their individual design agency. Design patterns capture expert knowledge in ways that can be reused to solve problems, but never in the same way twice. Here, the authors provide an overview of the patterns and their interrelationships, and reflect on the value of design patterns and their use, highlighting how they can be deployed to enhance online teaching in an iterative design cycle approach.

