Continuous Distant Measurement of the User’s Heart Rate in Human-Computer Interaction Applications

Sensors ◽  
2019 ◽  
Vol 19 (19) ◽  
pp. 4205 ◽  
Author(s):  
Przybyło

In real-world scenarios, the task of estimating heart rate (HR) using video plethysmography (VPG) methods is difficult because many factors can contaminate the pulse signal (e.g., a subject's movements, illumination changes). This article presents the evaluation of a VPG system designed for continuous monitoring of the user's heart rate during typical human-computer interaction scenarios. The impact of human activities while working at the computer (e.g., reading and writing text, playing a game) on the accuracy of VPG HR measurements was examined. Three commonly used signal extraction methods were evaluated: green (G), green-red difference (GRD), and blind source separation via independent component analysis (ICA). A new method based on an excess green (ExG) image representation was proposed. Three algorithms for estimating pulse rate were used: power spectral density (PSD), autoregressive modeling (AR), and time-domain analysis (TIME). In summary, depending on the scenario being studied, different combinations of signal extraction methods and pulse estimation algorithms yield optimal heart rate detection results. The best results were obtained for the ICA method: average RMSE = 6.1 bpm (beats per minute). The proposed ExG signal representation outperforms the other methods except ICA (RMSE = 11.2 bpm compared to 14.4 bpm for G and 13.0 bpm for GRD). ExG is also the best method in terms of the proposed success rate metric (sRate).
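As a concrete illustration of the processing chain the abstract describes, the sketch below assumes the standard excess-green index (ExG = 2G − R − B) as the image representation and a Welch-periodogram peak pick for PSD-based pulse-rate estimation; the face ROI, sampling rate, filter order, and frequency band are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

def exg_trace(frames, roi):
    """Average excess-green value (ExG = 2G - R - B) inside a face ROI, per frame.

    frames: iterable of HxWx3 RGB arrays; roi: (y0, y1, x0, x1) in pixels.
    """
    y0, y1, x0, x1 = roi
    trace = []
    for f in frames:
        patch = f[y0:y1, x0:x1].astype(np.float64)
        r, g, b = patch[..., 0], patch[..., 1], patch[..., 2]
        trace.append(np.mean(2.0 * g - r - b))
    return np.asarray(trace)

def estimate_hr_psd(signal, fs, band=(0.7, 4.0)):
    """Estimate heart rate (bpm) as the dominant PSD peak within a plausible HR band."""
    # Band-pass to suppress motion and illumination drift outside the HR range.
    nyq = fs / 2.0
    b, a = butter(3, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, signal - np.mean(signal))
    # Welch periodogram; pick the peak frequency inside the band.
    freqs, psd = welch(filtered, fs=fs, nperseg=min(len(filtered), 256))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    peak_freq = freqs[mask][np.argmax(psd[mask])]
    return 60.0 * peak_freq  # Hz -> beats per minute
```

A typical call, assuming a 30 fps webcam stream and a previously detected face region, would be `estimate_hr_psd(exg_trace(frames, roi), fs=30.0)`.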

2018 ◽  
Vol 09 (04) ◽  
pp. 841-848
Author(s):  
Kevin King ◽  
John Quarles ◽  
Vaishnavi Ravi ◽  
Tanvir Chowdhury ◽  
Donia Friday ◽  
...  

Background: Through the Health Information Technology for Economic and Clinical Health Act of 2009, the federal government invested $26 billion in electronic health records (EHRs) to improve physician performance and patient safety; however, these systems have not met expectations. One of the cited issues with EHRs is the human–computer interaction, as exhibited by the excessive number of interactions with the interface, which reduces clinician efficiency. In contrast, real-time location systems (RTLS), technologies that can track the location of people and objects, have been shown to increase clinician efficiency. RTLS can improve patient flow in part through the optimization of patient verification activities. However, the data collected by RTLS have not been effectively applied to optimize interaction with EHR systems. Objectives: We conducted a pilot study with the intention of improving the human–computer interaction of EHR systems by incorporating an RTLS. The aim of this study is to determine the impact of RTLS on process metrics (i.e., provider time, number of rooms searched to find a patient, and number of interactions with the computer interface) and on the outcome metric of patient identification accuracy. Methods: A pilot study was conducted in a simulated emergency department using a locally developed camera-based RTLS-equipped EHR that detected the proximity of subjects to simulated patients and displayed patient information when subjects entered the exam rooms. Ten volunteers participated in 10 patient encounters with the RTLS activated (RTLS-A) and then deactivated (RTLS-D). Each volunteer was monitored and their actions recorded by trained observers. We sought a 50% improvement in the time to locate patients, the number of rooms searched to locate patients, and the number of mouse clicks necessary to perform those tasks. Results: The time required to locate patients (RTLS-A = 11.9 ± 2.0 seconds vs. RTLS-D = 36.0 ± 5.7 seconds, p < 0.001), the number of rooms searched to find a patient (RTLS-A = 1.0 ± 1.06 vs. RTLS-D = 3.8 ± 0.5, p < 0.001), and the number of clicks to access patient data (RTLS-A = 1.0 ± 0.06 vs. RTLS-D = 4.1 ± 0.13, p < 0.001) were all significantly reduced with RTLS-A relative to RTLS-D. There was no significant difference between RTLS-A and RTLS-D in patient identification accuracy. Conclusion: This pilot demonstrated in simulation that an EHR equipped with real-time location services improved performance in locating patients and reduced error compared with an EHR without RTLS. Furthermore, RTLS decreased the number of mouse clicks required to access information. This study suggests that EHRs equipped with real-time location services that automate patient location and other repetitive tasks may improve physician efficiency and, ultimately, patient safety.
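A hypothetical sketch of the proximity-triggered behaviour the Methods describe, i.e., opening a patient's record when the provider enters that patient's exam room: the room-to-patient mapping, identifiers, and data structures below are illustrative assumptions, not the study's actual implementation.

```python
from dataclasses import dataclass
from typing import Dict, Optional

# Hypothetical mapping of exam rooms to the patients assigned to them.
ROOM_ASSIGNMENTS: Dict[str, str] = {
    "exam_room_1": "patient_001",
    "exam_room_2": "patient_002",
}

@dataclass
class LocationEvent:
    """One RTLS observation: which exam room a tracked provider is currently in."""
    provider_id: str
    room_id: Optional[str]  # None when the provider is not inside any exam room

def patient_for_event(event: LocationEvent) -> Optional[str]:
    """Return the patient whose record should be auto-opened, if any."""
    if event.room_id is None:
        return None
    return ROOM_ASSIGNMENTS.get(event.room_id)

# Example: when the RTLS reports the provider entering exam room 1, the EHR
# front end would open patient_001 without any additional mouse clicks.
event = LocationEvent(provider_id="dr_smith", room_id="exam_room_1")
print(patient_for_event(event))  # -> "patient_001"
```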


Facial images have always been used for various analytical and research purposes, as they contain abundant information about personal characteristics, including identity, emotional expression, gender, age, etc. A human facial image is often defined as a complex signal composed of many facial attributes, such as skin colour and geometric facial features. Nowadays, the real-world applications of facial images have brought in a new dawn in the field of biometrics, security and surveillance, and these attributes play a crucial role in them. Unrestricted and unintended access by minors to certain resources and information has a history of physical and psychological implications, which makes age particularly significant among these attributes. Consider a scenario where users may require an age-specific human-computer interaction system that can estimate age for secure system access control or intelligence gathering. Automatic human age estimation using facial image analysis addresses this need, with potential applications in age-specific human-computer interaction and multimedia communication. Here, we aim to classify input images into two main categories, adults and minors; this classification would act as an access controller to the desired resources or information. MATLAB was used to identify the younger and older images. Initially, we built databases of features extracted from the input images using different feature extraction methods. We then compared the trained databases to obtain a characteristic range for younger and older images, and this range became the basis for identifying the young and the old.
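A minimal sketch of the range-based adult/minor decision the abstract describes: the feature, its normalised cut-offs, and the placeholder extraction step are all hypothetical, since the study does not specify which features or ranges it derived.

```python
import numpy as np

# Hypothetical ranges derived from the "trained databases" the abstract mentions;
# the actual features and cut-offs used in the study are not specified.
MINOR_RANGE = (0.00, 0.45)   # illustrative normalised feature interval for minors
ADULT_RANGE = (0.45, 1.00)   # illustrative interval for adults

def extract_age_feature(face_image: np.ndarray) -> float:
    """Stand-in for the study's feature extraction (e.g., a geometric facial ratio).

    A placeholder statistic is used here so the sketch runs end to end.
    """
    return float(np.clip(face_image.mean() / 255.0, 0.0, 1.0))

def classify_age_group(face_image: np.ndarray) -> str:
    """Assign an input face to 'minor' or 'adult' based on the trained ranges."""
    value = extract_age_feature(face_image)
    if MINOR_RANGE[0] <= value < MINOR_RANGE[1]:
        return "minor"
    return "adult"

# Example: a synthetic image stands in for a detected face crop.
fake_face = np.full((128, 128, 3), 90, dtype=np.uint8)
print(classify_age_group(fake_face))  # -> "minor" for this placeholder input
```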


2012 ◽  
Vol 1 ◽  
pp. 101-122 ◽  
Author(s):  
Sharon O'Brien

This paper seeks to characterise translation as a form of human–computer interaction. The evolution of translator–computer interaction is explored, and the challenges and benefits are enunciated. The concept of cognitive ergonomics is drawn on to argue for a more caring and inclusive approach towards the translator by developers of translation technology. A case is also made for wider acceptance by the translation community of the benefits of the technology at their disposal and for more humanistic research on the impact of technology on the translator, the translation profession, and the translation process.


2015 ◽  
Vol 1 (1) ◽  
pp. 12 ◽  
Author(s):  
Stuart Reeves

Human-computer interaction (HCI) has had a long and troublesome relationship with the role of ‘science’. HCI’s status as an academic object, in terms of coherence and adequacy, is often in question, leading to desires for establishing a true scientific discipline. In this paper I explore formative cognitive science influences on HCI, through the impact of early work on the design of input devices. The paper discusses a core idea that I argue has animated much HCI research since: the notion of scientific design spaces. In evaluating this concept, I disassemble the broader ‘picture of science’ in HCI and its role in constructing a disciplinary order for the increasingly diverse and overlapping research communities that contribute in some way to what we call ‘HCI’. In concluding, I explore notions of rigour and debates around how we might reassess HCI’s disciplinarity.


2009 ◽  
pp. 80-94
Author(s):  
Chris Baber

In this chapter, the evaluation of human-computer interaction (HCI) with mobile technologies is considered. The ISO 9241 notion of ‘context of use’ helps to define evaluation in terms of the ‘fitness-for-purpose’ of a given device to perform given tasks by given users in given environments. It is suggested that conventional notions of usability can be useful for considering some aspects of the design of displays and interaction devices, but that additional approaches are needed to fully understand the use of mobile technologies. These additional approaches involve dual-task studies, in which the device is used whilst performing some other activity, and subjective evaluation of the technology's impact on the person.
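To make the dual-task idea concrete, the small sketch below computes a simple dual-task decrement score from single-task and dual-task performance measures; the metric name and the example figures are illustrative assumptions, not values taken from the chapter.

```python
def dual_task_decrement(single_task_score: float, dual_task_score: float) -> float:
    """Percentage drop in primary-task performance when the mobile device is used
    while performing some other activity (higher = greater interference)."""
    if single_task_score == 0:
        raise ValueError("single-task score must be non-zero")
    return 100.0 * (single_task_score - dual_task_score) / single_task_score

# Illustrative figures: e.g., words transcribed per minute while seated vs. walking.
print(dual_task_decrement(single_task_score=40.0, dual_task_score=28.0))  # -> 30.0
```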

