computer interactions
Recently Published Documents


TOTAL DOCUMENTS: 204 (FIVE YEARS: 42)

H-INDEX: 14 (FIVE YEARS: 3)

2022 ◽  
Vol 18 (1) ◽  
pp. 0-0

It is essential to democracy that voters trust voting systems enough to participate in elections and use these systems. Unfortunately, voter trust has been found to be low in many situations, which could detrimentally impact human-computer interactions in voting. It is therefore important to understand the degree to which voters trust any specific voting method. Voting researchers have developed and used measures of overall trust in technology, yet researchers have long argued that trust in systems is domain-specific, implying that system-specific measures should be used instead. To address this point, this paper describes the development of a psychometrically reliable and validated instrument, the Trust in Voting Systems (TVS) measure. The TVS not only allows researchers to understand group mean differences in trust across voting systems; it also allows them to understand individual differences in trust within systems, all of which serves to inform and improve voting systems.
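The abstract does not specify the TVS items or scoring, but the internal-consistency reliability of a Likert-type instrument like this one is commonly summarized with Cronbach's alpha. A minimal sketch with invented data (the item battery and responses below are hypothetical, not from the paper):

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Internal-consistency reliability for a set of Likert items.

    item_scores: shape (n_respondents, n_items), e.g. 1-7 ratings.
    """
    item_scores = np.asarray(item_scores, dtype=float)
    n_items = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of summed scale
    return (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 6 voters rating 4 trust items for one voting method.
responses = np.array([
    [6, 5, 6, 5],
    [2, 3, 2, 2],
    [7, 7, 6, 7],
    [4, 4, 5, 4],
    [5, 6, 5, 6],
    [3, 2, 3, 3],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```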


2021 ◽  
Vol 2 ◽  
Author(s):  
Rachel Proffitt ◽  
Stephanie Glegg ◽  
Tal Krasovsky ◽  
Belinda Lange ◽  
Danielle Levac ◽  
...  

2021 ◽  
Vol 25 (4) ◽  
pp. 1031-1045
Author(s):  
Helang Lai ◽  
Keke Wu ◽  
Lingli Li

Emotion recognition in conversations is crucial given the urgent need to improve the overall experience of human-computer interactions. A promising direction in this field is to develop a model that can effectively extract adequate context for a test utterance. We introduce a novel model, termed hierarchical memory networks (HMN), to address utterance-level emotion recognition. HMN divides the context into different aspects and employs different step lengths to represent the weights of these aspects. To model self-dependencies, HMN uses independent local memory networks for these aspects; to capture interpersonal dependencies, it employs global memory networks that integrate the local outputs into global storages. These storages generate contextual summaries and help find the emotionally dependent utterance that is most relevant to the test utterance. With an attention-based multi-hop scheme, the storages are then merged with the test utterance using an addition operation across iterations. Experiments on the IEMOCAP dataset show that our model outperforms the compared methods in accuracy.
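The abstract does not give the exact architecture, but the core mechanism it describes, multi-hop attention over memory storages merged into the query by addition, can be sketched as below. This is an illustrative reconstruction, not the authors' code; the embedding dimension, hop count, and six-class output head are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHopMemory(nn.Module):
    """Minimal multi-hop attention over a bank of context utterances.

    Each hop attends over the memory with the current query, then adds the
    attention readout back into the query (the "addition operation" in the
    abstract). The local and global banks of HMN would be separate instances.
    """
    def __init__(self, dim: int, hops: int = 3):
        super().__init__()
        self.hops = hops
        self.proj = nn.Linear(dim, dim)

    def forward(self, query: torch.Tensor, memory: torch.Tensor) -> torch.Tensor:
        # query: (batch, dim); memory: (batch, n_context, dim)
        for _ in range(self.hops):
            scores = torch.bmm(memory, query.unsqueeze(-1)).squeeze(-1)  # (batch, n_context)
            attn = F.softmax(scores, dim=-1)
            readout = torch.bmm(attn.unsqueeze(1), memory).squeeze(1)    # (batch, dim)
            query = query + self.proj(readout)  # merge by addition, then next hop
        return query

# Toy usage: 2 conversations, 5 context utterances each, 64-dim embeddings.
mem = MultiHopMemory(dim=64)
context = torch.randn(2, 5, 64)
test_utt = torch.randn(2, 64)
emotion_logits = nn.Linear(64, 6)(mem(test_utt, context))  # e.g. 6 IEMOCAP classes
```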


10.2196/25218 ◽  
2021 ◽  
Vol 23 (5) ◽  
pp. e25218
Author(s):  
Samar Helou ◽  
Victoria Abou-Khalil ◽  
Riccardo Iacobucci ◽  
Elie El Helou ◽  
Ken Kiyono

Background: The study of doctor-patient-computer interactions is a key research area for examining doctor-patient relationships; however, studying these interactions is costly and obtrusive, as researchers usually set up complex recording mechanisms or intrude on consultations to collect data and then analyze it manually.

Objective: We aimed to facilitate human-computer and human-human interaction research in clinics by providing a computational ethnography tool: an unobtrusive automatic classifier of screen gaze and dialogue combinations in doctor-patient-computer interactions.

Methods: The classifier's input is video taken by doctors using their computers' internal camera and microphone. By estimating the key points of the doctor's face and the presence of voice activity, we estimate the type of interaction taking place. The classification output for each video segment is one of four interaction classes: (1) screen gaze and dialogue, wherein the doctor is gazing at the computer screen while conversing with the patient; (2) dialogue, wherein the doctor is gazing away from the screen while conversing with the patient; (3) screen gaze, wherein the doctor is gazing at the screen without conversing with the patient; and (4) other, wherein neither screen gaze nor dialogue is detected. We evaluated the classifier using 30 minutes of video provided by 5 doctors simulating consultations in their clinics in both semi-inclusive and fully inclusive layouts.

Results: The classifier achieved an overall accuracy of 0.83, performance similar to that of a human coder. Like the human coder, the classifier was more accurate in fully inclusive layouts than in semi-inclusive layouts.

Conclusions: The proposed classifier can be used by researchers, care providers, designers, medical educators, and others interested in exploring and answering questions related to screen gaze and dialogue in doctor-patient-computer interactions.
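Given the two upstream estimates the Methods describe (gaze at the screen from facial key points, dialogue from voice activity), the four classes reduce to a simple decision over two booleans. A minimal sketch of that mapping; the function and enum names are illustrative, not from the paper:

```python
from enum import Enum

class Interaction(Enum):
    SCREEN_GAZE_AND_DIALOGUE = 1
    DIALOGUE = 2
    SCREEN_GAZE = 3
    OTHER = 4

def classify_segment(gazing_at_screen: bool, voice_active: bool) -> Interaction:
    """Map the two per-segment estimates onto the paper's four classes."""
    if gazing_at_screen and voice_active:
        return Interaction.SCREEN_GAZE_AND_DIALOGUE
    if voice_active:
        return Interaction.DIALOGUE
    if gazing_at_screen:
        return Interaction.SCREEN_GAZE
    return Interaction.OTHER

# Upstream would supply: gaze from facial key points, voice activity from audio.
print(classify_segment(gazing_at_screen=True, voice_active=False))  # Interaction.SCREEN_GAZE
```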


2021 ◽  
Vol 5 (3) ◽  
pp. 13
Author(s):  
Heting Wang ◽  
Vidya Gaddy ◽  
James Ross Beveridge ◽  
Francisco R. Ortega

The role of affect has long been studied in human–computer interactions. Unlike previous studies that focused on the seven basic emotions, this work introduces an avatar, Diana, that expresses a higher level of emotional intelligence. To adapt to users' varying affects during interaction, Diana simulates emotions with dynamic facial expressions. When two people collaborated to build blocks, their affects were recognized and labeled using the Affdex SDK, and a descriptive analysis was provided. When participants instead collaborated with Diana, their subjective responses were collected and their task completion times were recorded. Three modes of Diana were compared: a flat-faced Diana, a Diana that used mimicry facial expressions, and a Diana that used emotionally responsive facial expressions. Twenty-one responses were collected through a five-point Likert scale questionnaire and the NASA TLX. Questionnaire results did not differ statistically across modes; however, the emotionally responsive Diana received more positive responses, and participants spent the longest time with the mimicry Diana. In post-study comments, most participants perceived the expressions on Diana's face as natural, while four mentioned discomfort caused by the Uncanny Valley effect.
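The abstract names three expression modes but not their implementation. A minimal sketch of how such a mode switch might look, assuming an upstream recognizer (such as the Affdex SDK) yields a discrete affect label; the labels and response policy below are invented for illustration:

```python
from typing import Callable

NEUTRAL = "neutral"  # hypothetical affect label from the upstream recognizer

def flat(user_affect: str) -> str:
    return NEUTRAL                  # flat-faced mode: expression never changes

def mimicry(user_affect: str) -> str:
    return user_affect              # mirror the user's recognized affect

def responsive(user_affect: str, policy: dict[str, str]) -> str:
    # Map the user's affect to a chosen reaction, e.g. frustration -> warmth.
    return policy.get(user_affect, NEUTRAL)

POLICY = {"joy": "joy", "frustration": "warm_smile", "confusion": "concern"}

modes: dict[str, Callable[[str], str]] = {
    "flat": flat,
    "mimicry": mimicry,
    "responsive": lambda affect: responsive(affect, POLICY),
}

print(modes["responsive"]("frustration"))  # -> warm_smile
```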

