Context-Awareness for Social Robots

Author(s):
Helena A. Frijns, Oliver Schürer

In the present work, we provide a short literature review of three different ways of approaching the topics of context and context-awareness and relate these to developments in Human-Robot Interaction (HRI) and social robotics. We distinguish an engineering approach to context-awareness, the study of social context in human-centred design, and a view of context as a cognitive component in interaction. We propose a revised definition of context to capture these three views and discuss implications.

2012, Vol. 09 (04), pp. 1250028
Author(s):
Elena Torta, Raymond H. Cuijpers, James F. Juola, David van der Pol

Humanoid robots that share the same space with humans need to be socially acceptable and effective as they interact with people. In this paper we focus our attention on the definition of a behavior-based robotic architecture that (1) allows the robot to navigate safely in a cluttered and dynamically changing domestic environment and (2) encodes embodied non-verbal interactions: the robot respects the user's personal space (PS) by choosing the appropriate distance and direction of approach. The model of the PS is derived from human–robot interaction tests, and it is described in a convenient mathematical form. The robot's target location is dynamically inferred through the solution of a Bayesian filtering problem. The validation of the overall behavioral architecture shows that the robot is able to exhibit appropriate proxemic behavior.
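The abstract above couples a parametric personal-space model with Bayesian filtering of the user's location. The sketch below (Python/NumPy) illustrates how such a pipeline could be wired together; the Gaussian form of the acceptability function, all parameter values, and the grid-based Bayes update are illustrative assumptions, not the model the authors fitted to their human–robot interaction tests.

```python
import numpy as np


def approach_acceptability(distance, angle, d_pref=1.2, sigma_d=0.3, sigma_a=np.pi / 4):
    """Social acceptability of an approach pose, peaked at an assumed preferred
    distance (m) and at a frontal approach direction (angle = 0 rad)."""
    radial = ((distance - d_pref) / sigma_d) ** 2
    angular = (angle / sigma_a) ** 2
    return np.exp(-0.5 * (radial + angular))


def bayes_update(prior, candidates, observation, obs_sigma=0.3):
    """One discrete Bayes update of the belief over candidate user positions,
    assuming a Gaussian likelihood for a noisy position observation."""
    dists = np.linalg.norm(candidates - observation, axis=1)
    likelihood = np.exp(-0.5 * (dists / obs_sigma) ** 2)
    posterior = prior * likelihood
    return posterior / posterior.sum()


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_person = np.array([2.0, 1.0])  # ground-truth user position (m)

    # Uniform belief over a coarse grid of candidate positions in a 4 m x 4 m room.
    xs, ys = np.meshgrid(np.linspace(0.0, 4.0, 41), np.linspace(0.0, 4.0, 41))
    candidates = np.column_stack([xs.ravel(), ys.ravel()])
    belief = np.full(len(candidates), 1.0 / len(candidates))

    # Fuse a few noisy detections of the person.
    for _ in range(5):
        z = true_person + rng.normal(scale=0.3, size=2)
        belief = bayes_update(belief, candidates, z)
    person_estimate = candidates[np.argmax(belief)]

    # Pick the approach distance/direction that maximizes social acceptability.
    dd, aa = np.meshgrid(np.linspace(0.5, 2.5, 50), np.linspace(-np.pi, np.pi, 90))
    scores = approach_acceptability(dd, aa)
    best = np.unravel_index(np.argmax(scores), scores.shape)
    print("estimated person position:", person_estimate)
    print(f"approach at distance {dd[best]:.2f} m, angle {aa[best]:.2f} rad")
```

In this toy setup the belief over the person's position sharpens with each noisy detection, and the robot then selects the approach distance and direction that score highest under the assumed personal-space model.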


2021, Vol. 3
Author(s):
Alberto Martinetti, Peter K. Chemweno, Kostas Nizamis, Eduard Fosch-Villaronga

Policymakers need to consider the impacts that robots and artificial intelligence (AI) technologies have on humans beyond physical safety. Traditionally, the definition of safety has been interpreted to apply exclusively to risks with a physical impact on persons' safety, such as mechanical or chemical risks. However, the integration of AI in cyber-physical systems such as robots increases their interconnectivity with other devices and cloud services and intensifies human-robot interaction, which challenges this narrow conceptualisation of safety. Addressing safety comprehensively therefore demands a broader understanding that extends beyond physical interaction to aspects such as cybersecurity and mental health. Moreover, as robots embed more AI features, the expanding use of machine learning techniques will demand safety mechanisms that evolve with the substantial modifications these systems undergo over time. In this sense, our contribution brings forward the different dimensions of the concept of safety, including interaction (physical and social), psychosocial, cybersecurity, temporal, and societal dimensions. These dimensions aim to help policy and standard makers redefine the concept of safety in light of robots and AI's increasing capabilities, including human-robot interactions, cybersecurity, and machine learning.


Electronics, 2020, Vol. 9 (2), pp. 267
Author(s):
Fernando Alonso Martin, María Malfaz, Álvaro Castro-González, José Carlos Castillo, Miguel Ángel Salichs

The success of social robotics is directly linked to the robots' ability to interact with people. Humans possess both verbal and non-verbal communication skills, and both are therefore essential for social robots to achieve natural human–robot interaction. This work focuses on the former, since the majority of social robots implement an interaction system endowed with verbal capacities. This requires equipping social robots with an artificial voice system. In robotics, a Text to Speech (TTS) system is the most common speech synthesis technique. The performance of a speech synthesizer is mainly evaluated by its similarity to the human voice in terms of intelligibility and expressiveness. In this paper, we present a comparative study of eight off-the-shelf TTS systems used in social robots. To carry out the study, 125 participants evaluated the performance of the following TTS systems: Google, Microsoft, Ivona, Loquendo, Espeak, Pico, AT&T, and Nuance. The evaluation was performed after observing videos in which a social robot communicates verbally using one TTS system. The participants completed a questionnaire to rate each TTS system on four features: intelligibility, expressiveness, artificiality, and suitability. Four research questions were posed to determine whether it is possible to rank the TTS systems on each evaluated feature or whether, on the contrary, there are no significant differences between them. Our study shows that participants found differences between the evaluated TTS systems in terms of intelligibility, expressiveness, and artificiality. The experiments also indicated a relationship between the physical appearance of the robots (embodiment) and the suitability of the TTS systems.
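To make the evaluation procedure concrete, the sketch below (Python, using synthetic ratings rather than the study's questionnaire data) ranks the eight TTS systems on each feature by mean rating and applies a Kruskal-Wallis test to check whether the rating distributions differ at all; the 1–5 rating scale and the choice of this particular test are assumptions, not details reported in the abstract.

```python
import numpy as np
from scipy import stats

TTS_SYSTEMS = ["Google", "Microsoft", "Ivona", "Loquendo", "Espeak", "Pico", "AT&T", "Nuance"]
FEATURES = ["intelligibility", "expressiveness", "artificiality", "suitability"]

rng = np.random.default_rng(42)
n_participants = 125

# Synthetic 1-5 Likert ratings: one array of participant scores per system, per feature.
ratings = {
    feature: {tts: rng.integers(1, 6, size=n_participants) for tts in TTS_SYSTEMS}
    for feature in FEATURES
}

for feature in FEATURES:
    per_system = ratings[feature]
    # Rank systems by mean rating for this feature (highest first).
    ranking = sorted(per_system, key=lambda tts: per_system[tts].mean(), reverse=True)
    # Kruskal-Wallis test: are the rating distributions distinguishable at all?
    h_stat, p_value = stats.kruskal(*per_system.values())
    print(f"{feature}: ranking={ranking}, H={h_stat:.2f}, p={p_value:.3f}")
```

A nonparametric test is used here because Likert-style ratings are ordinal; pairwise post-hoc comparisons would additionally be needed before presenting a full per-feature ranking as significant.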


Author(s):  
Peter Remmers

Effects of anthropomorphism or zoomorphism in social robotics motivate two opposing tendencies in the philosophy and ethics of robots: a ‘rational’ tendency that discourages excessive anthropomorphism because it is based on an illusion, and a ‘visionary’ tendency that promotes the relational reality of human-robot interaction. I argue for two claims: First, the opposition between these tendencies cannot be resolved and leads to a kind of technological antinomy. Second, we can deal with this antinomy by way of an analogy between our treatment of robots as social interactors and the perception of objects in pictures according to a phenomenological theory of image perception. Following this analogy, human- or animal-likeness in social robots is interpreted neither as a psychological illusion, nor as a relational reality. Instead, robots belong to a special ontological category shaped by perception and interaction, similar to objects in images.


Author(s):  
Joanna K. Malinowska

Given that empathy allows people to form and maintain satisfying social relationships with other subjects, it is no surprise that this is one of the most studied phenomena in the area of human–robot interaction (HRI). But the fact that the term ‘empathy’ has strong social connotations raises a question: can it be applied to robots? Can we actually use social terms and explanations in relation to these inanimate machines? In this article, I analyse the range of uses of the term empathy in the field of HRI studies and social robotics, and consider the substantial, functional and relational positions on this issue. I focus on the relational (cooperational) perspective presented by Luisa Damiano and Paul Dumouchel, who interpret emotions (together with empathy) as being the result of affective coordination. I also reflect on the criteria that should be used to determine when, in such relations, we are dealing with actual empathy.


2019, Vol. 374 (1771), pp. 20180037
Author(s):
Joshua Skewes, David M. Amodio, Johanna Seibt

The field of social robotics offers an unprecedented opportunity to probe the process of impression formation and the effects of identity-based stereotypes (e.g. about gender or race) on social judgements and interactions. We present the concept of fair proxy communication—a form of robot-mediated communication that proceeds in the absence of potentially biasing identity cues—and describe how this application of social robotics may be used to illuminate implicit bias in social cognition and inform novel interventions to reduce bias. We discuss key questions and challenges for the use of robots in research on the social cognition of bias and offer some practical recommendations. We conclude by discussing boundary conditions of this new form of interaction and by raising some ethical concerns about the inclusion of social robots in psychological research and interventions. This article is part of the theme issue ‘From social brains to social robots: applying neurocognitive insights to human–robot interaction’.

