Benefits of Subliminal Feedback Loops in Human-Computer Interaction

2011 ◽  
Vol 2011 ◽  
pp. 1-11 ◽  
Author(s):  
Walter Ritter

Much effort has been directed toward enriching human-computer interaction to make the user experience more pleasing or efficient. In this paper, we briefly present work in the fields of subliminal perception and affective computing before outlining a new approach that adds analog communication channels to the human-computer interaction experience. In this approach, in addition to symbolic predefined mappings of input to output, a subliminal feedback loop is used that provides feedback in evolutionary subliminal steps. In two studies involving concentration-intensive games, we investigated the impact of this approach: in the first study, evolutionary feedback loops adjusted the user interface of a memory game, whereas in the second study the lighting of the test room was adjusted dynamically. The results show that in settings with an evolutionary feedback loop, test participants reached significantly higher scores than with the static counterparts. Finally, we discuss the impact that such subliminally working applications might have on user acceptance.
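The abstract does not give implementation details, but the idea of adjusting an interface in evolutionary, below-threshold steps can be sketched as follows. This is a minimal illustration assuming a (1+1)-style strategy; the parameter, the subliminal step size, and the score() callback are hypothetical and not taken from the paper.

import random

# Minimal sketch of an evolutionary subliminal feedback loop (assumed design):
# one setting (e.g., background brightness or room lighting level) is mutated
# in steps small enough to stay below conscious perception, and a mutation is
# kept only if the user's performance improves.

SUBLIMINAL_STEP = 0.02  # assumed per-adjustment bound below the noticeable difference


def evolve_setting(start_value, score, rounds=20, low=0.0, high=1.0):
    """Adjust `start_value` in subliminal steps, keeping changes that raise `score`.

    `score(value)` is a caller-supplied callback returning the user's
    performance (e.g., memory-game score) under the given setting.
    """
    best_value = start_value
    best_score = score(best_value)
    for _ in range(rounds):
        # Mutate by at most one subliminal step in either direction, clamped to bounds.
        candidate = min(high, max(low, best_value + random.uniform(-SUBLIMINAL_STEP, SUBLIMINAL_STEP)))
        candidate_score = score(candidate)
        if candidate_score > best_score:  # keep only improvements
            best_value, best_score = candidate, candidate_score
    return best_value, best_score


if __name__ == "__main__":
    # Toy stand-in for a user whose performance peaks at a setting of 0.6.
    simulated_user = lambda v: 100 - 400 * (v - 0.6) ** 2 + random.uniform(-1, 1)
    print(evolve_setting(0.5, simulated_user))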

2018 ◽  
Vol 09 (04) ◽  
pp. 841-848
Author(s):  
Kevin King ◽  
John Quarles ◽  
Vaishnavi Ravi ◽  
Tanvir Chowdhury ◽  
Donia Friday ◽  
...  

Background Through the Health Information Technology for Economic and Clinical Health Act of 2009, the federal government invested $26 billion in electronic health records (EHRs) to improve physician performance and patient safety; however, these systems have not met expectations. One of the cited issues with EHRs is the human–computer interaction, as exhibited by the excessive number of interactions with the interface, which reduces clinician efficiency. In contrast, real-time location systems (RTLS)—technologies that can track the location of people and objects—have been shown to increase clinician efficiency. RTLS can improve patient flow in part through the optimization of patient verification activities. However, the data collected by RTLS have not been effectively applied to optimize interaction with EHR systems.
Objectives We conducted a pilot study with the intention of improving the human–computer interaction of EHR systems by incorporating an RTLS. The aim of this study is to determine the impact of RTLS on process metrics (i.e., provider time, number of rooms searched to find a patient, and the number of interactions with the computer interface) and the outcome metric of patient identification accuracy.
Methods A pilot study was conducted in a simulated emergency department using a locally developed camera-based RTLS-equipped EHR that detected the proximity of subjects to simulated patients and displayed patient information when subjects entered the exam rooms. Ten volunteers participated in 10 patient encounters with the RTLS activated (RTLS-A) and then deactivated (RTLS-D). Each volunteer was monitored and actions were recorded by trained observers. We sought a 50% improvement in time to locate patients, number of rooms searched to locate patients, and the number of mouse clicks necessary to perform those tasks.
Results The time required to locate patients (RTLS-A = 11.9 ± 2.0 seconds vs. RTLS-D = 36.0 ± 5.7 seconds, p < 0.001), rooms searched to find the patient (RTLS-A = 1.0 ± 1.06 vs. RTLS-D = 3.8 ± 0.5, p < 0.001), and number of clicks to access patient data (RTLS-A = 1.0 ± 0.06 vs. RTLS-D = 4.1 ± 0.13, p < 0.001) were significantly reduced with RTLS-A relative to RTLS-D. There was no significant difference between RTLS-A and RTLS-D for patient identification accuracy.
Conclusion This pilot demonstrated in simulation that an EHR equipped with real-time location services improved performance in locating patients and reduced error compared with an EHR without RTLS. Furthermore, RTLS decreased the number of mouse clicks required to access information. This study suggests that EHRs equipped with real-time location services that automate patient location and other repetitive tasks may improve physician efficiency and, ultimately, patient safety.
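The proximity-triggered behaviour described in the Methods can be summarized in a short sketch. The data model (RoomEvent, PATIENT_BY_ROOM) and the display_record() hook below are assumptions for illustration only; the study's system was a locally developed, camera-based RTLS-equipped EHR whose internals are not described in the abstract.

from dataclasses import dataclass

@dataclass
class RoomEvent:
    """An (assumed) RTLS observation: which provider entered which exam room."""
    provider_id: str
    room: str


# Hypothetical mapping of exam rooms to the patients assigned to them.
PATIENT_BY_ROOM = {"ED-1": "patient-0412", "ED-3": "patient-0907"}


def display_record(provider_id: str, patient_id: str) -> None:
    # Stand-in for the EHR action that would otherwise take several mouse clicks.
    print(f"Opening chart {patient_id} for provider {provider_id}")


def on_rtls_event(event: RoomEvent) -> None:
    """Auto-open the chart of the patient assigned to the room just entered."""
    patient_id = PATIENT_BY_ROOM.get(event.room)
    if patient_id is None:
        return  # empty or unmapped room: nothing to surface
    display_record(event.provider_id, patient_id)


if __name__ == "__main__":
    on_rtls_event(RoomEvent(provider_id="provider-07", room="ED-3"))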


Author(s):  
Lesley Axelrod ◽  
Kate Hone

In a culture that places increasing emphasis on happiness and wellbeing, multimedia technologies incorporate emotional design to improve their commercial edge. This chapter explores affective computing and illustrates how innovative technologies are capable of emotional recognition and display. Research in this domain has emphasised solving the technical difficulties involved through the design of ever more complex recognition algorithms, but fundamental questions about the use of such technology remain neglected. Can it really improve human-computer interaction? For which types of application is it suitable? How is it best implemented? What ethical considerations are there? We review this field and discuss the need for user-centred design. We describe and give evidence from a study that explores some of the user issues in affective computing.


Sensors ◽  
2020 ◽  
Vol 20 (8) ◽  
pp. 2308 ◽  
Author(s):  
Dilana Hazer-Rau ◽  
Sascha Meudt ◽  
Andreas Daucher ◽  
Jennifer Spohrs ◽  
Holger Hoffmann ◽  
...  

In this paper, we present a multimodal dataset for affective computing research acquired in a human-computer interaction (HCI) setting. An experimental mobile and interactive scenario was designed and implemented based on a gamified generic paradigm for the induction of dialog-based, HCI-relevant emotional and cognitive load states. It consists of six experimental sequences, inducing Interest, Overload, Normal, Easy, Underload, and Frustration. Each sequence is followed by subjective feedback to validate the induction, a respiration baseline to level off the physiological reactions, and a summary of results. Further, prior to the experiment, three questionnaires related to emotion regulation (ERQ), emotional control (TEIQue-SF), and personality traits (TIPI) were collected from each subject to evaluate the stability of the induction paradigm. Based on this HCI scenario, the University of Ulm Multimodal Affective Corpus (uulmMAC), consisting of two homogeneous samples of 60 participants and 100 recording sessions, was generated. We recorded 16 sensor modalities, including 4 × video, 3 × audio, and 7 × biophysiological, depth, and pose streams. Additional labels and annotations were also collected. After recording, all data were post-processed and checked for technical and signal quality, resulting in the final uulmMAC dataset of 57 subjects and 95 recording sessions. The evaluation of the reported subjective feedback shows significant differences between the sequences, consistent with the induced states, and the analysis of the questionnaires shows stable results. In summary, our uulmMAC database is a valuable contribution to the field of affective computing and multimodal data analysis: acquired in a mobile interactive scenario close to real HCI, it comprises a large number of subjects and allows transtemporal investigations. Validated via subjective feedback and checked for quality issues, it can be used for affective computing and machine learning applications.
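To make the corpus structure concrete, the sketch below indexes one recording session using the modality counts stated in the abstract (4 video, 3 audio, 7 biophysiological, plus depth and pose, i.e., 16 streams). The file layout, stream names, and the Session class are hypothetical conveniences, not the actual uulmMAC distribution format.

from dataclasses import dataclass, field
from typing import Dict

# Assumed sequence and modality names matching the counts reported in the abstract.
SEQUENCES = ["Interest", "Overload", "Normal", "Easy", "Underload", "Frustration"]
MODALITIES = (
    [f"video_{i}" for i in range(1, 5)]
    + [f"audio_{i}" for i in range(1, 4)]
    + [f"bio_{i}" for i in range(1, 8)]
    + ["depth", "pose"]
)  # 16 streams in total


@dataclass
class Session:
    subject_id: int
    streams: Dict[str, str] = field(default_factory=dict)   # modality -> file path
    feedback: Dict[str, int] = field(default_factory=dict)  # sequence -> subjective rating


def index_session(subject_id: int, root: str = "uulmMAC") -> Session:
    """Build an (assumed) index of stream files for one recording session."""
    streams = {m: f"{root}/subject_{subject_id:03d}/{m}.dat" for m in MODALITIES}
    return Session(subject_id=subject_id, streams=streams)


if __name__ == "__main__":
    session = index_session(7)
    print(len(session.streams), "streams indexed for subject", session.subject_id)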


RENOTE ◽  
2009 ◽  
Vol 7 (3) ◽  
pp. 390-400
Author(s):  
Maria Augusta Silveira Netto Nunes

This paper describes how human psychological aspects have been used in lifelike synthetic agents in order to provide believability during human-computer interaction. We present a brief survey of applications in which affective computing scientists have applied psychological aspects such as Emotion and Personality. Based on those aspects, we describe the effort made by affective computing scientists to create a Markup Language to express and standardize Emotions. Because they have not yet concentrated their efforts on Personality, we propose here a starting point for a Markup Language to express Personality.
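The paper's proposal concerns a markup language rather than code, but a small sketch can illustrate what a personality markup fragment might look like. The element and attribute names below, and the use of Big Five (OCEAN) trait scores, are hypothetical illustrations and not the markup proposed in the paper.

import xml.etree.ElementTree as ET

def personality_markup(agent_id: str, traits: dict) -> str:
    """Serialize trait scores in [0, 1] to a small XML fragment (assumed schema)."""
    root = ET.Element("personality", attrib={"agent": agent_id, "model": "big-five"})
    for name, score in traits.items():
        ET.SubElement(root, "trait", attrib={"name": name, "score": f"{score:.2f}"})
    return ET.tostring(root, encoding="unicode")


if __name__ == "__main__":
    ocean = {"openness": 0.72, "conscientiousness": 0.55, "extraversion": 0.31,
             "agreeableness": 0.64, "neuroticism": 0.28}
    print(personality_markup("agent-01", ocean))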


2017 ◽  
Vol 59 (6) ◽  
Author(s):  
Anna Luusua ◽  
Johanna Ylipulli ◽  
Emilia Rönkkö

While the smart city agenda is critiqued for its focus on technology- and business-led solutions, a new approach to design has been introduced: nonanthropocentric design aims to decenter the human as the focus of design. We build on relevant works in Human-Computer Interaction (HCI) by discussing and comparing relevant theories in the social sciences and by analyzing design examples. This approach to HCI is necessary if humanity is to meet the challenges of the Anthropocene, the era in which human activity affects the Earth on a geological scale.


2012 ◽  
Vol 1 ◽  
pp. 101-122 ◽  
Author(s):  
Sharon O'Brien

This paper seeks to characterise translation as a form of human–computer interaction. The evolution of translator–computer interaction is explored, and the challenges and benefits are enunciated. The concept of cognitive ergonomics is drawn on to argue for a more caring and inclusive approach towards the translator by developers of translation technology. A case is also made for wider acceptance by the translation community of the benefits of the technology at their disposal and for more humanistic research on the impact of technology on the translator, the translation profession, and the translation process.


2015 ◽  
Vol 1 (1) ◽  
pp. 12 ◽  
Author(s):  
Stuart Reeves

<div class="page" title="Page 1"><div class="layoutArea"><div class="column"><p><span>The human-computer interaction (HCI) has had a long and troublesome relationship to the role of ‘science’. HCI’s status as an academic object in terms of coherence and adequacy is often in question—leading to desires for establishing a true scientific discipline. In this paper I explore formative cognitive science influences on HCI, through the impact of early work on the design of input devices. The paper discusses a core idea that I argue has animated much HCI research since: the notion of scientific design spaces. In evaluating this concept, I disassemble the broader ‘picture of science’ in HCI and its role in constructing a disciplinary order for the increasingly diverse and overlapping research communities that contribute in some way to what we call ‘HCI’. In concluding I explore notions of rigour and debates around how we might reassess HCI’s disciplinarity.</span></p></div></div></div>


2021 ◽  
Author(s):  
Michael J Lyons

Twenty-five years ago, my colleagues Miyuki Kamachi and Jiro Gyoba and I designed and photographed JAFFE, a set of facial expression images intended for use in a study of face perception. In 2019, without seeking permission or informing us, Kate Crawford and Trevor Paglen exhibited JAFFE in two widely publicized art shows. In addition, they published a nonfactual account of the images in the essay “Excavating AI: The Politics of Images in Machine Learning Training Sets.” The present article recounts the creation of the JAFFE dataset and unravels each of Crawford and Paglen’s fallacious statements. I also discuss JAFFE more broadly in connection with research on facial expression, affective computing, and human-computer interaction.


2009 ◽  
pp. 80-94
Author(s):  
Chris Baber

In this chapter, the evaluation of human-computer interaction (HCI) with mobile technologies is considered. The ISO 9241 notion of ‘context of use’ helps to define evaluation in terms of the ‘fitness-for-purpose’ of a given device to perform given tasks by given users in given environments. It is suggested that conventional notions of usability can be useful for considering some aspects of the design of displays and interaction devices, but that additional approaches are needed to fully understand the use of mobile technologies. These additional approaches involve dual-task studies, in which the device is used whilst performing some other activity, and subjective evaluation of the impact of the technology on the person.
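One common way to report the outcome of such a dual-task study is the relative performance decrement when the device is used alongside another activity. The metric below is a general convention offered for illustration; the chapter does not prescribe this exact formula.

def dual_task_decrement(single_task_score: float, dual_task_score: float) -> float:
    """Return the proportional drop in performance under dual-task conditions."""
    if single_task_score == 0:
        raise ValueError("single-task score must be non-zero")
    return (single_task_score - dual_task_score) / single_task_score


if __name__ == "__main__":
    # e.g., 42 targets found while seated vs. 31 while walking -> ~26% decrement
    print(f"{dual_task_decrement(42, 31):.0%}")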

