Cooperative gazing behaviors in human multi-robot interaction

2013 ◽  
Vol 14 (3) ◽  
pp. 390-418 ◽  
Author(s):  
Tian Xu ◽  
Hui Zhang ◽  
Chen Yu

When humans address multiple robots with informative speech acts (Clark & Carlson 1982), their cognitive resources are shared among all the participating robot agents. At each moment, the user’s behavior is determined not only by the actions of the robot they are directly gazing at, but is also shaped by the behaviors of all the other robots in the shared environment. We define cooperative behavior as the actions performed by the robots that are not capturing the user’s direct attention. In this paper, we are interested in how human participants adjust and coordinate their own behavioral cues when the robot agents perform different cooperative gaze behaviors. A novel gaze-contingent platform was designed and implemented, in which the robots’ behaviors were triggered by the participant’s attentional shifts in real time. Results showed that the human participants were highly sensitive to the different cooperative gazing behaviors performed by the robot agents.

Keywords: human-robot interaction; multi-robot interaction; multiparty interaction; eye gaze cue; embodied conversational agent
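To make the gaze-contingent idea concrete, here is a minimal sketch of the kind of real-time trigger loop such a platform needs: detect which robot the participant is fixating, and have the remaining robots perform the cooperative gaze behavior under test. The `eye_tracker` and robot interfaces, and the dwell threshold, are assumptions for illustration, not the authors' actual platform.

```python
import time

FIXATION_THRESHOLD_S = 0.3  # assumed dwell time that counts as an attentional shift

def gaze_contingent_loop(eye_tracker, robots, condition):
    """Trigger cooperative gaze behaviors on the robots the user is NOT looking at."""
    current_target, fixation_start, triggered = None, None, False
    while True:
        gazed_robot = eye_tracker.robot_under_gaze()   # hypothetical call; None if gaze falls elsewhere
        if gazed_robot != current_target:
            # Gaze moved: restart the dwell timer on the new target.
            current_target, fixation_start, triggered = gazed_robot, time.monotonic(), False
        elif (gazed_robot is not None and not triggered
              and time.monotonic() - fixation_start >= FIXATION_THRESHOLD_S):
            # Attentional shift confirmed: the directly attended robot responds,
            # while every other robot performs the cooperative behavior under test.
            gazed_robot.respond_to_user()
            for robot in robots:
                if robot is not gazed_robot:
                    robot.perform_cooperative_gaze(condition)
            triggered = True
        time.sleep(0.02)  # poll the eye tracker at roughly 50 Hz
```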

Symmetry ◽  
2018 ◽  
Vol 10 (12) ◽  
pp. 680
Author(s):  
Ethan Jones ◽  
Winyu Chinthammit ◽  
Weidong Huang ◽  
Ulrich Engelke ◽  
Christopher Lueg

Control of robot arms is often required in engineering and can be performed using different methods. This study examined and symmetrically compared the use of a hand controller, an eye gaze tracker, and a combination of the two in a multimodal setup for control of a robot arm. Tasks of different complexities were defined, and twenty participants completed an experiment using these interaction modalities to solve them. More specifically, there were three tasks: the first was to navigate a chess piece from one square to another pre-specified square; the second was the same as the first but required more moves to complete; and the third was to move multiple pieces to reach a pre-defined arrangement. While gaze control has the potential to be more intuitive than a hand controller, it suffers from limitations in spatial accuracy and target selection. The multimodal setup aimed to mitigate these weaknesses of the eye gaze tracker, creating a superior system without simply relying on the controller. The experiment shows that the multimodal setup improves performance over the eye gaze tracker alone (p < 0.05) and was competitive with the controller-only setup, although it did not outperform it (p > 0.05).
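As an illustration of how such gaze/controller fusion can work, the sketch below uses gaze for coarse target selection (snapping to the nearest chess square) and the hand controller for fine correction and confirmation. All interfaces here (`controller.read_dpad`, `square.neighbour`, etc.) are hypothetical stand-ins, not the study's implementation.

```python
def select_square(gaze_point, squares, controller):
    """Pick the intended chess square by combining gaze (coarse) and controller (fine) input."""
    # Coarse stage: snap the noisy gaze point to the nearest square centre.
    candidate = min(
        squares,
        key=lambda sq: (sq.cx - gaze_point.x) ** 2 + (sq.cy - gaze_point.y) ** 2,
    )
    # Fine stage: the controller steps the highlight between neighbouring squares,
    # compensating for limited gaze accuracy, then confirms the final selection.
    while not controller.confirm_pressed():
        step = controller.read_dpad()        # e.g. (0, 1) for "one square up", (0, 0) when idle
        if step != (0, 0):
            candidate = candidate.neighbour(*step) or candidate
    return candidate
```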


2019 ◽  
Vol 374 (1771) ◽  
pp. 20180026 ◽  
Author(s):  
Hatice Gunes ◽  
Oya Celiktutan ◽  
Evangelos Sariyanidi

Communication with humans is a multi-faceted phenomenon in which emotions, personality and non-verbal behaviours, as well as verbal behaviours, play a significant role, and human–robot interaction (HRI) technologies should respect this complexity to achieve efficient and seamless communication. In this paper, we describe the design and execution of five public demonstrations made with two HRI systems that automatically sensed and analysed human participants’ non-verbal behaviour and predicted their facial action units, facial expressions and personality in real time while they interacted with a small humanoid robot. We provide an overview of the challenges faced, together with the lessons learned from these demonstrations, to better inform the science and engineering fields in designing and building robots with more purposeful interaction capabilities. This article is part of the theme issue ‘From social brains to social robots: applying neurocognitive insights to human–robot interaction’.
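The per-frame pipeline such systems run can be summarised roughly as below. The model objects (`face_detector`, `au_model`, `expression_model`, `personality_model`) are generic stand-ins, not the authors' components; the point is only the ordering: action units feed expression prediction per frame, while personality is estimated over accumulated evidence.

```python
def analyse_frame(frame, face_detector, au_model, expression_model, personality_model, history):
    """Run one camera frame through a non-verbal behaviour analysis pipeline (illustrative)."""
    face = face_detector.detect(frame)
    if face is None:
        return None                                    # no participant in view this frame
    action_units = au_model.predict(face)              # facial action unit intensities
    expression = expression_model.predict(action_units)
    history.append(action_units)                       # personality needs evidence over time,
    personality = personality_model.predict(history)   # not a single frame
    return {"aus": action_units, "expression": expression, "personality": personality}
```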


2017 ◽  
Vol 6 (1) ◽  
pp. 25 ◽  
Author(s):  
Henny Admoni ◽  
Brian Scassellati

Author(s):  
Lue-Feng Chen ◽  
Zhen-Tao Liu ◽  
Min Wu ◽  
Fangyan Dong ◽  
...  

A multi-robot behavior adaptation mechanism that adapts to human intention is proposed for human-robot interaction (HRI), in which information-driven fuzzy friend-Q learning (IDFFQ) is used to generate an optimal behavior-selection policy, and intention is understood mainly from human emotions. The mechanism aims to endow robots with human-oriented interaction capabilities so that they can understand human intentions and adapt their behaviors accordingly. It also decreases the robots’ response time (RT) by embedding human identification information, such as religion, into behavior selection, and increases human satisfaction by considering deep-level information, including intention and emotion, so that interactions run smoothly. Experiments were performed in a drinking-at-a-bar scenario. Results show that the proposed method requires 51 fewer learning steps than fuzzy production rule based friend-Q learning (FPRFQ), and that the robots’ RT is about 25% of the time consumed by FPRFQ. Emotion recognition and intention understanding achieved accuracies of 80.36% and 85.71%, respectively, and a subjective questionnaire evaluation of customers yielded a rating of “satisfied.” Based on these preliminary experiments, the proposal is being extended to service robots that adapt their behavior to customers’ intention to drink at a bar.
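For orientation, the sketch below shows a plain friend-Q update, the learning rule that IDFFQ builds on: each robot keeps a Q-table over joint actions and values a state by assuming the other robot cooperates on the best joint action. It is a generic friend-Q rule only; the paper's fuzzy, information-driven extensions and the intention/emotion inputs are not modelled here, and the parameter values are assumptions.

```python
from collections import defaultdict
import itertools

class FriendQ:
    """Generic friend-Q learner for two cooperating robots (illustrative sketch)."""

    def __init__(self, actions, alpha=0.1, gamma=0.9):
        self.Q = defaultdict(float)          # keyed by (state, own_action, other_action)
        self.actions = actions
        self.alpha, self.gamma = alpha, gamma

    def state_value(self, state):
        # "Friend" assumption: both robots coordinate on the best joint action.
        return max(self.Q[(state, a1, a2)]
                   for a1, a2 in itertools.product(self.actions, repeat=2))

    def update(self, state, own_action, other_action, reward, next_state):
        # Standard temporal-difference update toward reward + discounted joint value.
        key = (state, own_action, other_action)
        target = reward + self.gamma * self.state_value(next_state)
        self.Q[key] += self.alpha * (target - self.Q[key])
```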


2013 ◽  
pp. 281-301
Author(s):  
Mohan Sridharan

Developments in sensor technology and sensory input processing algorithms have enabled the use of mobile robots in real-world domains. As they are increasingly deployed to interact with humans in our homes and offices, robots need the ability to operate autonomously based on sensory cues and high-level feedback from non-expert human participants. Towards this objective, this chapter describes an integrated framework that jointly addresses the learning, adaptation, and interaction challenges associated with robust human-robot interaction in real-world application domains. The novel probabilistic framework consists of: (a) a bootstrap learning algorithm that enables a robot to learn layered graphical models of environmental objects and adapt to unforeseen dynamic changes; (b) a hierarchical planning algorithm based on partially observable Markov decision processes (POMDPs) that enables the robot to reliably and efficiently tailor learning, sensing, and processing to the task at hand; and (c) an augmented reinforcement learning algorithm that enables the robot to acquire limited high-level feedback from non-expert human participants, and merge human feedback with the information extracted from sensory cues. Instances of these algorithms are implemented and fully evaluated on mobile robots and in simulated domains using vision as the primary source of information in conjunction with range data and simplistic verbal inputs. Furthermore, a strategy is outlined to integrate these components to achieve robust human-robot interaction in real-world application domains.
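As a rough illustration of the third component, the sketch below blends sparse high-level human feedback into an otherwise sensor-driven tabular Q-learning update. The weighting scheme and parameter names are assumptions for illustration, not the chapter's exact formulation.

```python
def augmented_q_update(Q, state, action, next_state, sensor_reward,
                       human_feedback=None, alpha=0.1, gamma=0.95, feedback_weight=2.0):
    """One tabular Q-learning step whose reward merges sensory cues with human feedback."""
    reward = sensor_reward
    if human_feedback is not None:             # e.g. +1 "good" / -1 "bad" from a non-expert
        reward += feedback_weight * human_feedback
    best_next = max(Q[next_state].values()) if Q.get(next_state) else 0.0
    Q.setdefault(state, {}).setdefault(action, 0.0)
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
```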

