Towards an Engagement-Aware Attentive Artificial Listener for Multi-Party Interactions

2021 ◽  
Vol 8 ◽  
Author(s):  
Catharine Oertel ◽  
Patrik Jonell ◽  
Dimosthenis Kontogiorgos ◽  
Kenneth Funes Mora ◽  
Jean-Marc Odobez ◽  
...  

Listening to one another is essential to human-human interaction. In fact, we humans spend a substantial part of our day listening to other people, in private as well as in work settings. Attentive listening serves the function of gathering information for oneself, but at the same time it also signals to the speaker that they are being heard. To deduce whether our interlocutor is listening to us, we rely on reading their nonverbal cues, much as we use nonverbal cues to signal our own attention. Such signaling becomes more complex when we move from dyadic to multi-party interactions. Understanding how humans use nonverbal cues in a multi-party listening context not only increases our understanding of human-human communication but also aids the development of successful human-robot interactions. This paper aims to bring together previous analyses of listener behavior in human-human multi-party interaction and to provide novel insights into the gaze patterns between listeners in particular. We investigate whether the gaze patterns and feedback behavior observed in human-human dialogue are also beneficial for the perception of a robot in multi-party human-robot interaction. To answer this question, we implement an attentive listening system that generates multimodal listening behavior based on our human-human analysis. We compare our system to a baseline system that does not differentiate between listener types in its behavior generation, and we evaluate it in terms of participants' perception of the robot, their behavior, and the perception of third-party observers.
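A system of this kind can be pictured as a rule-based mapping from the robot's assigned listener role and the speaker's observed cues to multimodal behavior commands. The sketch below is purely illustrative: the listener-type labels, cue names, and triggering rules are assumptions for exposition, not the authors' implementation.

```python
# Hypothetical sketch of a rule-based multimodal listening-behavior generator.
# Listener types, cue names, and rules are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ListenerState:
    listener_type: str            # e.g., "attentive" or "side-participant" (assumed labels)
    speaker_gazing_at_robot: bool # whether the speaker's gaze target is the robot
    pause_detected: bool          # end-of-utterance pause in the speaker's speech

def generate_behavior(state: ListenerState) -> list[str]:
    """Return an ordered list of behavior commands for the robot."""
    behaviors = []
    if state.listener_type == "attentive":
        behaviors.append("gaze:speaker")             # sustain gaze at the speaker
        if state.pause_detected:
            behaviors.append("feedback:nod")         # backchannel at speech pauses
            if state.speaker_gazing_at_robot:
                behaviors.append("feedback:verbal")  # e.g., an "mm-hmm" token
    else:
        # Side participants glance between the speaker and the other listeners.
        behaviors.append("gaze:alternate")
    return behaviors
```

In this sketch the baseline condition described in the abstract would correspond to ignoring `listener_type` and emitting the same behaviors for every listener role.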

Author(s):  
Jinseok Woo ◽  
Naoyuki Kubota

Recently, robot architectures with various structures have been developed to improve human quality of life. Such robots need capabilities such as learning, inference, and prediction for human interaction, and these capabilities are interconnected with each other as a whole system. Human-robot interaction plays an important role in the development of a socially embedded robot partner; therefore, we must take the human communication system into account. Human cognition, emotion, and behavior should be considered in the development process, and if these factors are fully reflected in the robot partner, it can serve as a socially friendly partner. This book chapter is organized as follows: First, we describe the hardware and software structures. Next, we discuss the cognitive model of the robot partners. Third, we discuss interaction content design for various services. Finally, we discuss societal implementation and the applicability of robots for social utilization.


2020 ◽  
Author(s):  
Agnieszka Wykowska ◽  
Jairo Pérez-Osorio ◽  
Stefan Kopp

This booklet is a collection of the position statements accepted for the HRI’20 conference workshop “Social Cognition for HRI: Exploring the relationship between mindreading and social attunement in human-robot interaction” (Wykowska, Perez-Osorio & Kopp, 2020). Unfortunately, due to the rapid unfolding of the novel coronavirus pandemic at the beginning of the year, the conference, and consequently our workshop, were canceled. In light of these events, we decided to put together the position statements accepted for the workshop. The contributions collected in these pages highlight the role of the attribution of mental states to artificial agents in human-robot interaction, and specifically the quality and presence of the social attunement mechanisms that are known to make human interaction smooth, efficient, and robust. These papers also accentuate the importance of a multidisciplinary approach to advancing our understanding of the factors and consequences of social interactions with artificial agents.


2021 ◽  
Vol 9 (2) ◽  
pp. 232596712098700
Author(s):  
Jordan L. Liles ◽  
Richard Danilkowicz ◽  
Jeffrey R. Dugas ◽  
Marc Safran ◽  
Dean Taylor ◽  
...  

Background: The COVID-19 (SARS-COV-2) pandemic has brought unprecedented challenges to the health care system and education models. The reduction in case volume, transition to remote learning, lack of sports coverage opportunities, and decreased clinical interactions have had an immediate effect on orthopaedic sports medicine fellowship programs. Purpose/Hypothesis: Our purpose was to gauge the response to the pandemic from a sports medicine fellowship education perspective. We hypothesized that (1) the COVID-19 pandemic has caused a significant change in training programs, (2) in-person surgical skills training and didactic learning would be substituted with virtual learning, and (3) hands-on surgical training and case numbers would decrease and the percentage of fellows graduating with skill levels commensurate with graduation would decrease. Study Design: Cross-sectional study. Methods: In May 2020, a survey was sent to the fellowship directors of all 90 orthopaedic sports medicine fellowships accredited by the Accreditation Council for Graduate Medical Education; it included questions on program characteristics, educational lectures, and surgical skills. A total of 37 completed surveys (41%) were returned, all of which were deidentified. Responses were compiled and saved on a closed, protected institutional server. Results: In a majority of responding programs (89%), fellows continued to participate in the operating room. Fellows continued with in-person clinical visits in 65% of programs, while 51% had their fellows participate in telehealth visits. Fellows were “redeployed” to help triage and assist with off-service needs in 21% of programs compared with 65% of resident programs having residents rotate off service. Regarding virtual education, 78% of programs have used or are planning to use platforms offered by medical societies, and 49% have used or are planning to use third-party independent education platforms. 
Of the 37 programs, 30 reported no in-person lectures or meetings, and there was a sharp decline in the number of programs participating in cadaver laboratories (n = 10; 27%) and industry courses (n = 6; 16%). Conclusion: Virtual didactic and surgical education and training, as well as telehealth, will play a larger role in the coming year than in the past. The pandemic has also affected fellows’ exposure to sports coverage and employment opportunities. The biggest challenge will be maintaining the element of human interaction and connecting with patients and trainees at a time when social distancing is needed to curb the spread of COVID-19.


Author(s):  
Jamie Axelrod ◽  
Adam Meyer ◽  
Julie Alexander ◽  
Enjie Hall ◽  
Kristie Orr

Institutions of higher education and their disability offices have been challenged with determining how to apply the 2008 Americans with Disabilities Act Amendments Act (ADAAA) in present-day work settings. Prior to the amendments, third-party documentation was considered essential, almost to the point of being non-negotiable, for most disability offices to facilitate accommodations for disabled students (the authors have made an intentional choice to use identity-first language to challenge negative connotations associated with the term disability and to highlight the role that inaccessible systems and environments play in disabling people). The ADAAA questioned this mindset. Students with disabilities often found (and still find) themselves burdened financially and procedurally by disability offices requiring documentation to the point where students may not receive the access they truly need. Furthermore, college campuses are increasingly focusing on the limitations of the environment rather than of the person. As a result of this evolution, the Association on Higher Education and Disability (AHEAD) offered a new framework in 2012 describing how to define documentation. For professionals in the higher education disability field, and for those invested in this work, it is critical to grasp the evolving understanding of what constitutes documentation and the information necessary to make disability accommodation decisions. Otherwise, disabled students may be further excluded from access to higher education.


Author(s):  
Nik Thompson ◽  
Tanya Jane McGill

This chapter discusses the domain of affective computing and reviews the area of affective tutoring systems: e-learning applications that possess the ability to detect and appropriately respond to the affective state of the learner. A significant proportion of human communication is non-verbal or implicit, and the communication of affective state provides valuable context and insights. Computers are for all intents and purposes blind to this form of communication, creating what has been described as an “affective gap.” Affective computing aims to eliminate this gap and to foster the development of a new generation of computer interfaces that emulate a more natural human-human interaction paradigm. The domain of learning is considered to be of particular note due to the complex interplay between emotions and learning. This is discussed in this chapter along with the need for new theories of learning that incorporate affect. Next, the most commonly applied means of inferring affective state are identified and discussed. These can be broadly categorized into methods that involve the user’s input and methods that acquire the information independent of any user input. This latter category is of interest as these approaches have the potential for more natural and unobtrusive implementation, and it includes techniques such as analysis of vocal patterns, facial expressions, and physiological state. The chapter concludes with a review of prominent affective tutoring systems in current research and proposes future directions for e-learning that capitalize on the strengths of affective computing.


Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6438
Author(s):  
Chiara Filippini ◽  
David Perpetuini ◽  
Daniela Cardone ◽  
Arcangelo Merla

An intriguing challenge in the human–robot interaction field is the prospect of endowing robots with emotional intelligence to make the interaction more genuine, intuitive, and natural. A crucial aspect in achieving this goal is the robot’s capability to infer and interpret human emotions. Thanks to its design and open programming platform, the NAO humanoid robot is one of the most widely used agents for human interaction. As in person-to-person communication, facial expressions are the privileged channel for recognizing the interlocutor’s emotional expressions. Although NAO is equipped with a facial expression recognition module, specific use cases may require additional features and affective computing capabilities that are not currently available. This study proposes a highly accurate convolutional-neural-network-based facial expression recognition model that is able to further enhance the NAO robot’s awareness of human facial expressions and provide the robot with an interlocutor’s arousal level detection capability. Indeed, the model tested during human–robot interactions was 91% and 90% accurate in recognizing happy and sad facial expressions, respectively; 75% accurate in recognizing surprised and scared expressions; and less accurate in recognizing neutral and angry expressions. Finally, the model was successfully integrated into the NAO SDK, thus allowing for high-performing facial expression classification with an inference time of 0.34 ± 0.04 s.
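The final inference step of such a pipeline can be sketched as mapping the network's raw outputs to a discrete emotion plus an arousal estimate. In the sketch below, the class list matches the six expressions reported in the abstract, but the per-class arousal values and the stub logits standing in for the trained CNN's output are illustrative assumptions only.

```python
# Minimal sketch of mapping classifier outputs to (emotion, confidence, arousal).
# The arousal table and the stub logits are assumptions; the real system uses
# a trained CNN integrated with the NAO SDK.
import numpy as np

CLASSES = ["happy", "sad", "surprised", "scared", "neutral", "angry"]
# Assumed per-class arousal levels on a 0..1 scale (illustrative only).
AROUSAL = {"happy": 0.6, "sad": 0.3, "surprised": 0.9,
           "scared": 0.9, "neutral": 0.1, "angry": 0.8}

def softmax(z: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over a logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(logits: np.ndarray):
    """Map raw network outputs to (emotion label, confidence, arousal level)."""
    probs = softmax(logits)
    idx = int(np.argmax(probs))
    emotion = CLASSES[idx]
    return emotion, float(probs[idx]), AROUSAL[emotion]

# Example with stub logits standing in for the CNN's output for one face crop:
emotion, confidence, arousal = classify(np.array([2.0, 0.1, 0.3, 0.2, 0.5, 0.4]))
# emotion == "happy"
```

On the robot, `classify` would run once per detected face crop, and the arousal value could then modulate the robot's response selection.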


2021 ◽  
pp. 027836492110536
Author(s):  
Niels Dehio ◽  
Joshua Smith ◽  
Dennis L. Wigand ◽  
Pouya Mohammadi ◽  
Michael Mistry ◽  
...  

Robotics research into multi-robot systems so far has concentrated on implementing intelligent swarm behavior and contact-less human interaction. Studies of haptic or physical human-robot interaction, by contrast, have primarily focused on the assistance offered by a single robot. Consequently, our understanding of the physical interaction and the implicit communication through contact forces between a human and a team of multiple collaborative robots is limited. We here introduce the term Physical Human Multi-Robot Collaboration (PHMRC) to describe this more complex situation, which we consider highly relevant in future service robotics. The scenario discussed in this article covers multiple manipulators in close proximity and coupled through physical contacts. We represent this set of robots as fingers of an up-scaled agile robot hand. This perspective enables us to employ model-based grasping theory to deal with multi-contact situations. Our torque-control approach integrates dexterous multi-manipulator grasping skills, optimization of contact forces, compensation of object dynamics, and advanced impedance regulation into a coherent compliant control scheme. To achieve this, we contribute fundamental theoretical improvements. Finally, experiments with up to four collaborative KUKA LWR IV+ manipulators performed both in simulation and real world validate the model-based control approach. As a side effect, we note that our multi-manipulator control framework applies identically to multi-legged systems, and we also execute it on the quadruped ANYmal subject to non-coplanar contacts and human interaction.
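The grasp-matrix view underlying this approach can be illustrated with a toy planar case: contact forces f that balance an external object wrench w satisfy G f = -w, and a minimum-norm distribution is given by the pseudo-inverse, f = -G⁺ w. The two-contact planar setup below is an illustrative assumption, not the paper's multi-manipulator controller.

```python
# Toy planar grasp-matrix example (illustrative assumption): two point
# contacts share the load of a 1 kg object, with contact forces distributed
# via the pseudo-inverse of the grasp matrix G.
import numpy as np

def grasp_matrix(contacts):
    """Build G (3 x 2n) mapping planar contact forces to the object wrench
    (fx, fy, torque). Each contact contributes an x-force and a y-force column."""
    cols = []
    for (rx, ry) in contacts:
        cols.append([1.0, 0.0, -ry])  # unit x-force: torque contribution -ry
        cols.append([0.0, 1.0, rx])   # unit y-force: torque contribution +rx
    return np.array(cols).T

contacts = [(-0.1, 0.0), (0.1, 0.0)]   # two contacts, 0.2 m apart
G = grasp_matrix(contacts)
w_ext = np.array([0.0, -9.81, 0.0])    # gravity wrench on a 1 kg object
f = -np.linalg.pinv(G) @ w_ext         # minimum-norm balancing contact forces
# By symmetry each contact supports half the weight: f ≈ [0, 4.905, 0, 4.905]
```

The paper's controller layers contact-force optimization, object-dynamics compensation, and impedance regulation on top of this basic force-distribution idea.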


Author(s):  
J. Lindblom ◽  
B. Alenljung

A fundamental challenge of human interaction with socially interactive robots, compared to other interactive products, comes from them being embodied. The embodied nature of social robots raises the questions of to what degree humans can interact ‘naturally’ with robots, and what impact the interaction quality has on the user experience (UX). UX is fundamentally about emotions that arise and form in humans through the use of technology in a particular situation. This chapter aims to contribute to the field of human-robot interaction (HRI) by addressing, in further detail, the role and relevance of embodied cognition for human social interaction, and consequently what role embodiment can play in HRI, especially for socially interactive robots. Furthermore, some challenges for socially embodied interaction between humans and socially interactive robots are outlined and possible directions for future research are presented. It is concluded that the body is of crucial importance in understanding emotion and cognition in general, and, in particular, for a positive user experience to emerge when interacting with socially interactive robots.


2020 ◽  
Vol 10 (22) ◽  
pp. 7992
Author(s):  
Jinseok Woo ◽  
Yasuhiro Ohyama ◽  
Naoyuki Kubota

This paper presents a robot partner development platform based on smart devices. Humans communicate with others based on the basic motivations of human cooperation and have communicative motives based on social attributes. Understanding and applying these communicative motives is important in the development of socially-embedded robot partners. Therefore, it is becoming more important to develop robots that can be adapted to different needs while taking these elements of human communication into consideration. The role of robot partners is growing not only in the industrial sector but also in households; however, widespread dissemination of such robots will take time. In the field of service robots, developing robots according to various needs is important, and the system integration of hardware and software becomes crucial. Therefore, in this paper, we propose a robot partner development platform for human-robot interaction. Firstly, we propose a modularized architecture of robot partners using a smart device to realize flexible updates based on the re-usability of hardware and software modules. In addition, we show examples of implementing a robot system using the proposed architecture. Next, we focus on the development of various robots using the modular robot partner system. Finally, we discuss the effectiveness of the proposed robot partner system through social implementation and experiments.
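The re-usability idea behind such a modular architecture can be sketched as a platform that registers interchangeable hardware and software capabilities under named slots, so a module can be swapped without touching the rest of the system. The slot names and interfaces below are assumptions for illustration, not the authors' actual API.

```python
# Illustrative sketch of a modular robot-partner platform: capabilities are
# registered as interchangeable modules. Slot names and interfaces are
# hypothetical, chosen only to show the re-usability idea.
class RobotPartner:
    def __init__(self, name: str):
        self.name = name
        self.modules = {}  # slot name -> module callable

    def register(self, slot: str, module):
        """Register (or hot-swap) a hardware/software module for a named slot."""
        self.modules[slot] = module

    def run(self, slot: str, *args):
        """Dispatch a request to the module currently filling the slot."""
        if slot not in self.modules:
            raise KeyError(f"no module registered for slot '{slot}'")
        return self.modules[slot](*args)

# Example: the same platform body with two different speech modules.
robot = RobotPartner("partner-1")
robot.register("speech", lambda text: f"[onboard-tts] {text}")
robot.run("speech", "hello")                                   # onboard module
robot.register("speech", lambda text: f"[cloud-tts] {text}")   # hot-swap
robot.run("speech", "hello")                                   # cloud module
```

Because modules are addressed only by slot name, updating a smart-device app or exchanging a sensor amounts to a single `register` call, which is the kind of flexible update the abstract describes.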

