The Role of Affective Touch in Human-Robot Interaction: Human Intent and Expectations in Touching the Haptic Creature

2011 ◽  
Vol 4 (2) ◽  
pp. 163-180 ◽  
Author(s):  
Steve Yohanan ◽  
Karon E. MacLean


2020 ◽  
Author(s):  
Agnieszka Wykowska ◽  
Jairo Pérez-Osorio ◽  
Stefan Kopp

This booklet is a collection of the position statements accepted for the HRI’20 conference workshop “Social Cognition for HRI: Exploring the relationship between mindreading and social attunement in human-robot interaction” (Wykowska, Perez-Osorio & Kopp, 2020). Unfortunately, due to the rapid spread of the novel coronavirus at the beginning of the year, the conference, and consequently our workshop, were canceled. In light of these events, we decided to put together the position statements accepted for the workshop. The contributions collected in these pages highlight the role of the attribution of mental states to artificial agents in human-robot interaction, and specifically the quality and presence of the social attunement mechanisms that are known to make human interaction smooth, efficient, and robust. These papers also accentuate the importance of a multidisciplinary approach to advancing our understanding of the factors and consequences of social interactions with artificial agents.


Author(s):  
Ruth Stock-Homburg

Knowledge production within the interdisciplinary field of human–robot interaction (HRI) with social robots has accelerated, despite the continued fragmentation of the research domain. Together, these features make it hard to remain at the forefront of research or assess the collective evidence pertaining to specific areas, such as the role of emotions in HRI. This systematic review of state-of-the-art research into humans’ recognition of and responses to artificial emotions of social robots during HRI encompasses the years 2000–2020. In accordance with a stimulus–organism–response framework, the review advances robotic psychology by revealing current knowledge about (1) the generation of artificial robotic emotions (stimulus), (2) human recognition of robotic artificial emotions (organism), and (3) human responses to robotic emotions (response), as well as (4) other contingencies that affect emotions as moderators.


Philosophies ◽  
2019 ◽  
Vol 4 (1) ◽  
pp. 11 ◽  
Author(s):  
Frank Förster

In this article, I assess an existing language acquisition architecture, which was deployed in linguistically unconstrained human–robot interaction, together with experimental design decisions, with regard to their enactivist credentials. Despite initial scepticism with respect to enactivism’s applicability to the social domain, the introduction of the notion of participatory sense-making in the more recent enactive literature extends the framework’s reach to encompass this domain. With some exceptions, both our architecture and form of experimentation appear to be largely compatible with enactivist tenets. I analyse the architecture and design decisions along the five enactivist core themes of autonomy, embodiment, emergence, sense-making, and experience, and discuss the role of affect, given its central place within our acquisition experiments. In conclusion, I join some enactivists in demanding that interaction be taken seriously as an irreducible and independent subject of scientific investigation, and go further by hypothesising its potential value to machine learning.


Author(s):  
Karoline Malchus ◽  
Prisca Stenneken ◽  
Petra Jaecks ◽  
Carolin Meyer ◽  
Oliver Damm ◽  
...  

2008 ◽  
Vol 9 (3) ◽  
pp. 519-550 ◽  
Author(s):  
Nuno Otero ◽  
Chrystopher L. Nehaniv ◽  
Dag Sverre Syrdal ◽  
Kerstin Dautenhahn

This paper describes our general framework for investigating how human gestures can be used to facilitate interaction and communication between humans and robots. Two studies were carried out to reveal which “naturally occurring” gestures can be observed in a scenario where users had to explain to a robot how to perform a home task. Both studies followed a within-subjects design: participants had to demonstrate to a robot how to lay a table using two different methods, gestures only or gestures combined with speech. The first study enabled the validation of the COGNIRON coding scheme for human gestures in Human–Robot Interaction (HRI). Based on the data collected in both studies, an annotated video corpus was produced, and characteristics such as frequency and duration of the different gestural classes were gathered to help capture requirements for designers of HRI systems. The results from the first study regarding the frequencies of the gestural types suggest an interaction between the order of presentation of the two methods and the actual type of gestures produced. However, the analysis of the speech produced along with the gestures did not reveal differences due to the ordering of the experimental conditions. The second study expands the issues addressed by the first: we aimed to extend the role of the interaction partner (the robot) by introducing some positive acknowledgement of the participants’ activity. In contrast to the first study, the results show no significant differences in the distribution of gestures (frequency and duration) between the two explanation methods. Implications for HRI are discussed, focusing on issues relevant to the design of the robot’s communication skills to support the interaction loop with humans in home scenarios.


Electronics ◽  
2020 ◽  
Vol 9 (6) ◽  
pp. 971
Author(s):  
Selene Goenaga ◽  
Loraine Navarro ◽  
Christian G. Quintero M. ◽  
Mauricio Pardo

This paper proposes an intelligent system in which a NAO robot conducts an interview, playing the role of vocational tutor. To that end, twenty behaviors within five personality profiles are classified and categorized on the NAO. Five basic emotions are considered: anger, boredom, interest, surprise, and joy. The selected behaviors are grouped according to these five emotions. Common behaviors (e.g., movements or body postures) used by the robot during vocational guidance sessions are based on a theory of personality traits called the “Five-Factor Model”. In this context, the robot asks a predefined set of questions (following a theoretical model called the “Orientation Model”) about the person’s vocational preferences, so that NAO can react as appropriately as possible during the interview, according to the score of the person’s answer to each question and their personality type. Additionally, based on the answers to these questions, a vocational profile is established, and the robot can provide a recommendation about the person’s vocation. The results show how intelligent behavior selection can be successfully achieved through the proposed approach, making the human–robot interaction friendlier.
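The selection mechanism the abstract describes, behaviors grouped under five emotions and chosen from the score of each answer, can be sketched roughly as follows. The behavior names, score thresholds, and the exact grouping are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of emotion-driven behavior selection: behaviors are
# grouped under five emotions, and one is picked based on the score of
# the interviewee's answer. All names and thresholds are assumptions.
import random

EMOTIONS = ["anger", "boredom", "interest", "surprise", "joy"]

# Hypothetical grouping: four behaviors per emotion (20 in total),
# matching the paper's count of twenty behaviors.
BEHAVIORS = {
    emotion: [f"{emotion}_pose_{i}" for i in range(1, 5)]
    for emotion in EMOTIONS
}


def emotion_for_score(score):
    """Map an answer score in [0, 1] to an emotion (assumed thresholds)."""
    if score < 0.2:
        return "anger"
    if score < 0.4:
        return "boredom"
    if score < 0.6:
        return "interest"
    if score < 0.8:
        return "surprise"
    return "joy"


def select_behavior(score, rng=random):
    """Pick a behavior from the group matching the score's emotion."""
    emotion = emotion_for_score(score)
    return rng.choice(BEHAVIORS[emotion])
```

In a real deployment the selected behavior name would presumably be handed to the NAO's behavior-execution layer; here the design choice is simply to keep the score-to-emotion mapping separate from the emotion-to-behavior lookup, so either can be tuned independently.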

