Exploring simple visual languages for real-time human-computer interaction

Author(s):  
P. Ayala ◽  
I. Barandiaran ◽  
D. Vicente ◽  
M. Graña

2018 ◽
Vol 09 (04) ◽  
pp. 841-848
Author(s):  
Kevin King ◽  
John Quarles ◽  
Vaishnavi Ravi ◽  
Tanvir Chowdhury ◽  
Donia Friday ◽  
...  

Background: Through the Health Information Technology for Economic and Clinical Health Act of 2009, the federal government invested $26 billion in electronic health records (EHRs) to improve physician performance and patient safety; however, these systems have not met expectations. One of the cited issues with EHRs is the human–computer interaction, as exhibited by the excessive number of interactions with the interface, which reduces clinician efficiency. In contrast, real-time location systems (RTLS), technologies that can track the location of people and objects, have been shown to increase clinician efficiency. RTLS can improve patient flow in part through the optimization of patient verification activities. However, the data collected by RTLS have not been effectively applied to optimize interaction with EHR systems.

Objectives: We conducted a pilot study with the intention of improving the human–computer interaction of EHR systems by incorporating an RTLS. The aim of this study was to determine the impact of RTLS on process metrics (i.e., provider time, number of rooms searched to find a patient, and number of interactions with the computer interface) and on the outcome metric of patient identification accuracy.

Methods: A pilot study was conducted in a simulated emergency department using a locally developed, camera-based, RTLS-equipped EHR that detected the proximity of subjects to simulated patients and displayed patient information when subjects entered the exam rooms. Ten volunteers participated in 10 patient encounters with the RTLS activated (RTLS-A) and then deactivated (RTLS-D). Each volunteer was monitored, and actions were recorded by trained observers. We sought a 50% improvement in time to locate patients, number of rooms searched to locate patients, and number of mouse clicks necessary to perform those tasks.

Results: The time required to locate patients (RTLS-A = 11.9 ± 2.0 seconds vs. RTLS-D = 36.0 ± 5.7 seconds, p < 0.001), rooms searched to find patients (RTLS-A = 1.0 ± 1.06 vs. RTLS-D = 3.8 ± 0.5, p < 0.001), and number of clicks to access patient data (RTLS-A = 1.0 ± 0.06 vs. RTLS-D = 4.1 ± 0.13, p < 0.001) were all significantly reduced with RTLS-A relative to RTLS-D. There was no significant difference between RTLS-A and RTLS-D in patient identification accuracy.

Conclusion: This pilot demonstrated in simulation that an EHR equipped with real-time location services improved performance in locating patients and reduced error compared with an EHR without RTLS. Furthermore, RTLS decreased the number of mouse clicks required to access information. This study suggests that EHRs equipped with real-time location services that automate patient location and other repetitive tasks may improve physician efficiency and, ultimately, patient safety.
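
To make the interaction pattern concrete, here is a minimal Python sketch of the proximity-triggered chart display described above. It is an assumption-laden illustration, not the study's implementation: all names (Room, open_patient_chart, on_room_entry) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Room:
    room_id: str
    patient_id: str  # patient currently assigned to this exam room

def open_patient_chart(patient_id: str) -> None:
    # Stand-in for the single EHR action that replaces manual searching.
    print(f"Displaying chart for patient {patient_id}")

def on_room_entry(room: Room) -> None:
    # Fired when the camera-based RTLS detects that a clinician's
    # position falls inside a room's boundary: one entry event opens
    # one chart, which is how clicks drop from ~4 to ~1 in the results.
    open_patient_chart(room.patient_id)

exam_3 = Room(room_id="exam-3", patient_id="P-0042")
on_room_entry(exam_3)  # simulate the RTLS detecting room entry
```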


Author(s):  
Paolo Bottoni ◽  
Maria Francesca Costabile ◽  
Stefano Levialdi

This chapter introduces an approach to the theory of visual languages based on the notion of a visual sentence, defined by the integration of pictures and descriptions. The chapter first traces the history of the ideas that stemmed from the initial IEEE workshop held in Hiroshima, Japan, in 1984, and then progresses toward the formalisms that build up the theory of visual languages. The theory of visual sentences allows a coherent view of both the static and dynamic aspects of human-computer interaction, as well as of the relations between the user and the machine during the interaction.
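
As a toy rendering only (the chapter's formalism is richer), a visual sentence can be sketched as a picture paired with a description plus the two mappings that integrate them; every name below is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class VisualSentence:
    """A picture and a description, linked by two mappings:
    interpretation (picture elements -> description terms) and
    materialization (description terms -> picture elements)."""
    picture: str
    description: str
    interpretation: dict
    materialization: dict

vs = VisualSentence(
    picture="diagram-1",
    description="node A connected to node B",
    interpretation={"circle-1": "node A", "circle-2": "node B"},
    materialization={"node A": "circle-1", "node B": "circle-2"},
)
print(vs.interpretation["circle-1"])  # -> "node A"
```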


Photonics ◽  
2019 ◽  
Vol 6 (3) ◽  
pp. 90 ◽  
Author(s):  
Bosworth ◽  
Russell ◽  
Jacob

Over the past decade, the Human–Computer Interaction (HCI) Lab at Tufts University has been developing real-time, implicit Brain–Computer Interfaces (BCIs) using functional near-infrared spectroscopy (fNIRS). This paper reviews the work of the lab; we explore how we have used fNIRS to develop BCIs based on a variety of human states, including cognitive workload, multitasking, musical learning applications, and preference detection. Our work indicates that fNIRS is a robust tool for real-time classification of brain states, and that it can provide programmers with useful information for developing interfaces that are more intuitive and beneficial for the user than is currently possible with today's human-input devices (e.g., mouse and keyboard).
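
As a rough illustration of the real-time classification loop the review describes (not the lab's actual pipeline), the sketch below extracts simple per-channel features from a window of fNIRS samples and classifies workload with a linear discriminant; the feature set, classifier choice, and synthetic data are all assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def window_features(window: np.ndarray) -> np.ndarray:
    """window: (channels, samples) -> per-channel mean and linear slope."""
    t = np.arange(window.shape[1])
    means = window.mean(axis=1)
    slopes = np.polyfit(t, window.T, 1)[0]  # slope per channel
    return np.concatenate([means, slopes])

rng = np.random.default_rng(0)
# Stand-in training data: 40 labeled windows of 8 channels x 100 samples.
X = np.array([window_features(rng.normal(size=(8, 100))) for _ in range(40)])
y = rng.integers(0, 2, size=40)  # 0 = low workload, 1 = high workload

clf = LinearDiscriminantAnalysis().fit(X, y)

live = window_features(rng.normal(size=(8, 100)))  # newest sliding window
if clf.predict(live.reshape(1, -1))[0] == 1:
    print("High workload detected: simplify the interface")
else:
    print("Low workload: keep the full interface")
```

In an implicit BCI of this kind, the prediction would adjust the interface in the background rather than act as an explicit command channel.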


Author(s):  
Xiangyang Li ◽  
Zhili Zhang ◽  
Feng Liang ◽  
Qinhe Gao ◽  
Lilong Tan

To meet the human–computer interaction control (HCIC) requirements of multiple operators in collaborative virtual maintenance (CVM), real-time motion capture and simulation driving of multiple operators with an optical human motion capture system (HMCS) are proposed. The realization process of real-time motion capture and data-driven animation of virtual operators in the CVM environment is presented in detail to enable natural, online interactive operations. To ensure that virtual operators interact cooperatively and in an orderly manner with the input operations of the actual operators, a collaborative HCIC model is established according to the planning, allocation, and decision-making of different maintenance tasks, as well as the human–computer interaction and collaborative maintenance operation features among multiple maintenance trainees in the CVM process. Finally, an experimental implementation validates the effectiveness and practicability of the proposed methods, models, strategies, and mechanisms.
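
The sketch below illustrates one plausible shape for the per-frame data drive described above; the stream format and every name in it (JOINTS, VirtualOperator, mocap_frames) are assumptions rather than the paper's implementation.

```python
JOINTS = ("pelvis", "spine", "head", "l_hand", "r_hand")

class VirtualOperator:
    """A virtual maintenance trainee driven by streamed joint rotations."""

    def __init__(self, name: str):
        self.name = name
        self.pose = {j: (0.0, 0.0, 0.0) for j in JOINTS}  # Euler angles (deg)

    def apply_frame(self, frame: dict) -> None:
        # Drive only the joints present in this frame; joints missing
        # (e.g., briefly occluded markers) keep their last pose, so the
        # avatar's motion stays continuous.
        for joint, rotation in frame.items():
            if joint in self.pose:
                self.pose[joint] = rotation

def mocap_frames():
    # Stand-in for the real-time stream from the optical HMCS.
    yield {"r_hand": (10.0, 0.0, 45.0), "head": (0.0, 5.0, 0.0)}
    yield {"r_hand": (12.0, 0.0, 50.0)}  # head occluded this frame

trainee = VirtualOperator("operator-1")
for frame in mocap_frames():
    trainee.apply_frame(frame)  # one update per capture tick
    print(trainee.pose["r_hand"])
```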

