An Overview of Human Performance Models in Human-Computer Interactions

Author(s): Li Pi ◽ Jibin Yin
2019 ◽ Vol 7 (3) ◽ pp. 919-925
Author(s): Kelvin Wambani Siovi ◽ Cheruiyot Willison Kipruto ◽ Agnes Mindila
2021 ◽ Vol 25 (4) ◽ pp. 1031-1045
Author(s): Helang Lai ◽ Keke Wu ◽ Lingli Li

Emotion recognition in conversations is crucial, as there is an urgent need to improve the overall experience of human-computer interactions. A promising direction in this field is to develop a model that can effectively extract adequate context for a test utterance. We introduce a novel model, termed hierarchical memory networks (HMN), to address the problem of recognizing utterance-level emotions. HMN divides the contexts into different aspects and employs different step lengths to represent the weights of these aspects. To model self-dependencies, HMN uses independent local memory networks for these aspects. Further, to capture interpersonal dependencies, HMN employs global memory networks to integrate the local outputs into global storages. Such storages can generate contextual summaries and help to find the emotionally dependent utterance that is most relevant to the test utterance. With an attention-based multi-hop scheme, these storages are then merged with the test utterance by an addition operation at each iteration. Experiments on the IEMOCAP dataset show that our model outperforms the compared methods in accuracy.
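To illustrate the attention-based multi-hop merging described in this abstract, the sketch below implements a single shared memory with multi-hop attention in PyTorch. The class name, dimensions, and the use of one memory bank are assumptions made for brevity; the full HMN uses separate local and global memory networks over IEMOCAP utterance features.

```python
# A minimal sketch of the multi-hop memory-attention idea behind HMN.
# Names, dimensions, and the single shared memory are illustrative assumptions.
import torch
import torch.nn as nn


class MultiHopMemoryAttention(nn.Module):
    """Attend over context-utterance memories and merge summaries by addition."""

    def __init__(self, dim: int, hops: int = 3):
        super().__init__()
        self.hops = hops
        self.query_proj = nn.Linear(dim, dim)

    def forward(self, query: torch.Tensor, memory: torch.Tensor) -> torch.Tensor:
        # query:  (batch, dim)       -- representation of the test utterance
        # memory: (batch, ctx, dim)  -- representations of context utterances
        for _ in range(self.hops):
            # Attention weights over the context memories.
            scores = torch.bmm(memory, query.unsqueeze(2)).squeeze(2)   # (batch, ctx)
            weights = torch.softmax(scores, dim=1)
            # Contextual summary of the memories.
            summary = torch.bmm(weights.unsqueeze(1), memory).squeeze(1)  # (batch, dim)
            # Merge the summary with the query by addition, then re-project.
            query = self.query_proj(query + summary)
        return query


if __name__ == "__main__":
    attn = MultiHopMemoryAttention(dim=128, hops=3)
    utterance = torch.randn(4, 128)         # 4 test utterances
    contexts = torch.randn(4, 10, 128)      # 10 context utterances each
    print(attn(utterance, contexts).shape)  # torch.Size([4, 128])
```

The additive merge at each hop mirrors the "addition operation in the iterations" mentioned in the abstract; how the local and global storages are combined before this step is specific to the paper and not reproduced here.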


2021 ◽ Vol 5 (3) ◽ pp. 13
Author(s): Heting Wang ◽ Vidya Gaddy ◽ James Ross Beveridge ◽ Francisco R. Ortega

The role of affect has long been studied in human–computer interactions. Unlike previous studies that focused on seven basic emotions, an avatar named Diana was introduced that expresses a higher level of emotional intelligence. To adapt to users' various affects during interaction, Diana simulates emotions with dynamic facial expressions. When two people collaborated to build blocks, their affects were recognized and labeled using the Affdex SDK, and a descriptive analysis was provided. When participants turned to collaborate with Diana, their subjective responses were collected and the time to completion was recorded. Three modes of Diana were involved: a flat-faced Diana, a Diana that used mimicry facial expressions, and a Diana that used emotionally responsive facial expressions. Twenty-one responses were collected through a five-point Likert scale questionnaire and the NASA TLX. Results from the questionnaires were not statistically different. However, the emotionally responsive Diana obtained more positive responses, and people spent the longest time with the mimicry Diana. In post-study comments, most participants perceived the facial expressions on Diana's face as natural, while four mentioned discomfort caused by the Uncanny Valley effect.
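The abstract reports that questionnaire results were not statistically different across the three Diana modes but does not name the test used. The sketch below shows one plausible way such a comparison of ordinal Likert data could be run; the Kruskal-Wallis test, group sizes, and rating values are illustrative assumptions only and are not taken from the study.

```python
# A hedged sketch of comparing Likert ratings across three interface modes.
# The test choice and all data values below are hypothetical.
from scipy.stats import kruskal

# Hypothetical 5-point Likert ratings, one group per Diana mode.
flat_faced = [3, 4, 3, 2, 4, 3, 3]
mimicry    = [4, 3, 4, 4, 3, 5, 4]
responsive = [4, 5, 4, 5, 4, 4, 5]

statistic, p_value = kruskal(flat_faced, mimicry, responsive)
print(f"H = {statistic:.2f}, p = {p_value:.3f}")
if p_value >= 0.05:
    print("No statistically significant difference across the three modes.")
```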


Author(s): Richard Steinberg (Raytheon Company) ◽ Alice Diggs (Raytheon Company) ◽ Jade Driggs

Verification and validation (V&V) for human performance models (HPMs) can be likened to building a house with no bricks, since it is difficult to obtain metrics to validate a model when the system is still in development. HPMs are effective for performing trade-offs among human-system design factors, including the number of operators needed, the division between automated tasks and operator tasks, and the task responsibilities required to operate a system. On a recent government contract, our team used a human performance model to provide additional analysis beyond traditional trade studies. Our team verified the contractually mandated staff size for operating the system. This task demanded that the model have sufficient fidelity to support high-confidence staffing decisions. It required a method for verifying and validating the model and its results to ensure that it accurately reflected the real world. The situation posed a dilemma because there was no actual system from which to gather real data to validate the model. Validating human performance models is challenging because they support design decisions before the system exists. For example, crew models typically inform the design, staffing needs, and the requirements for each operator's user interface prior to development. This paper discusses a successful case study of how our team met these V&V challenges with the US Air Force model accreditation authority and accredited our human performance model with enough fidelity for requirements testing on an Air Force Command and Control program.
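To make the kind of staffing trade-off mentioned above concrete, the sketch below runs a toy Monte Carlo task-queue model that estimates operator utilization for candidate crew sizes. The arrival rate, task durations, and 80% workload threshold are illustrative assumptions and are not drawn from the Air Force program or the model described in this abstract.

```python
# A toy staffing trade study: estimate operator utilization for several crew
# sizes under an assumed task load. All parameters are illustrative.
import random


def simulate_shift(num_operators: int, tasks_per_hour: float,
                   mean_task_minutes: float, shift_minutes: int = 480) -> float:
    """Return the average operator utilization (busy fraction) over one shift."""
    busy_minutes = 0.0
    for _ in range(int(tasks_per_hour * shift_minutes / 60)):
        # Exponentially distributed task duration, shared across the crew.
        busy_minutes += random.expovariate(1.0 / mean_task_minutes)
    return busy_minutes / (num_operators * shift_minutes)


if __name__ == "__main__":
    random.seed(0)
    for crew in (2, 3, 4, 5):
        runs = [simulate_shift(crew, tasks_per_hour=20, mean_task_minutes=6)
                for _ in range(1000)]
        utilization = sum(runs) / len(runs)
        flag = "over threshold" if utilization > 0.8 else "acceptable"
        print(f"crew={crew}: mean utilization {utilization:.2f} ({flag})")
```

A production HPM would model individual operator roles, task interleaving, and interface-specific timings rather than a single pooled queue; this sketch only shows the shape of the crew-size comparison.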

