How Can Physiological Computing Benefit Human-Robot Interaction?

Robotics ◽  
2020 ◽  
Vol 9 (4) ◽  
pp. 100
Author(s):  
Raphaëlle N. Roy ◽  
Nicolas Drougard ◽  
Thibault Gateau ◽  
Frédéric Dehais ◽  
Caroline P. C. Chanel

As systems become more automated, the human operator is all too often overlooked. Although human-robot interaction (HRI) can be quite demanding in terms of cognitive resources, the mental states (MS) of operators are not yet taken into account by existing systems. Since human operators are not infallible, this omission can lead to hazardous situations. The growing number of neurophysiology and machine learning tools now allows for efficient monitoring of operators' MS. Sending feedback on MS in a closed-loop solution is therefore within reach, and involving a consistent automated planning technique to handle such a process could be a significant asset. This perspective article provides the reader with a synthesis of the relevant literature with a view to implementing systems that adapt to the operator's MS to improve the safety and performance of human-robot operations. First, the need for this approach is detailed with respect to remote operation, an example of HRI. Then, several MS identified as crucial for this type of HRI are defined, along with relevant electrophysiological markers. A focus is placed on primary degraded MS linked to time-on-task and task demands, as well as collateral MS linked to system outputs (i.e., feedback and alarms). Lastly, the principle of symbiotic HRI is detailed, and a solution is proposed that includes the operator state vector in the system, using a mixed-initiative decisional framework to drive the interaction.
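The closed-loop idea sketched in this abstract lends itself to a simple illustration. Below is a minimal sketch, not the authors' framework: a toy policy that maps an operator state vector (here, hypothetical `engagement` and `workload` scores assumed to come from neurophysiological markers) to a level of robot autonomy; all names and thresholds are illustrative.

```python
# Toy closed-loop autonomy selection driven by an operator state vector.
# NOT the authors' framework: feature names and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class OperatorState:
    engagement: float  # hypothetical score in [0, 1], e.g., from EEG markers
    workload: float    # hypothetical score in [0, 1], e.g., from cardiac/ocular data

def choose_autonomy(state: OperatorState) -> str:
    """Map the estimated mental state to a level of robot initiative."""
    if state.workload > 0.8 or state.engagement < 0.3:
        return "autonomous"  # operator overloaded or disengaged: robot takes over
    if state.workload > 0.5:
        return "shared"      # split task allocation between agents
    return "manual"          # operator retains full authority

if __name__ == "__main__":
    for s in (OperatorState(0.9, 0.2), OperatorState(0.2, 0.9), OperatorState(0.7, 0.6)):
        print(s, "->", choose_autonomy(s))
```

In the article's proposal, a mixed-initiative decisional framework would replace these fixed thresholds with automated planning over the operator state vector.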

2020 ◽  
Author(s):  
Agnieszka Wykowska ◽  
Jairo Pérez-Osorio ◽  
Stefan Kopp

This booklet is a collection of the position statements accepted for the HRI'20 conference workshop "Social Cognition for HRI: Exploring the relationship between mindreading and social attunement in human-robot interaction" (Wykowska, Perez-Osorio & Kopp, 2020). Unfortunately, due to the rapid unfolding of the novel coronavirus at the beginning of 2020, the conference, and consequently our workshop, were canceled. In light of these events, we decided to put together the position statements accepted for the workshop. The contributions collected in these pages highlight the role of the attribution of mental states to artificial agents in human-robot interaction, and specifically the quality and presence of the social attunement mechanisms that are known to make human interaction smooth, efficient, and robust. These papers also accentuate the importance of a multidisciplinary approach to advancing our understanding of the factors and consequences of social interactions with artificial agents.


Sensors ◽  
2020 ◽  
Vol 20 (1) ◽  
pp. 296 ◽  
Author(s):  
Caroline P. C. Chanel ◽  
Raphaëlle N. Roy ◽  
Frédéric Dehais ◽  
Nicolas Drougard

The design of human–robot interaction is a key challenge for optimizing operational performance. A promising approach is to consider mixed-initiative interactions in which the tasks and authority of the human and artificial agents are dynamically defined according to their current abilities. An important issue for the implementation of mixed-initiative systems is to monitor human performance so as to dynamically drive task allocation between the human and artificial agents (i.e., robots). We therefore designed an experimental scenario involving missions in which participants had to cooperate with a robot to fight fires while facing hazards. Two levels of robot automation (manual vs. autonomous) were randomly manipulated to assess their impact on the participants' performance across missions. Cardiac activity, eye-tracking data, and participants' actions on the user interface were collected. The participants performed differently enough that we could identify high- and low-score mission groups, which also exhibited different behavioral, cardiac, and ocular patterns. More specifically, our findings indicated that the higher level of automation could be beneficial to low-scoring participants but detrimental to high-scoring ones, and vice versa. In addition, inter-subject single-trial classification results showed that the studied behavioral and physiological features were relevant for predicting mission performance. The highest average balanced accuracy (74%) was reached using the features extracted from all input devices. These results suggest that an adaptive HRI driving system aimed at maximizing performance could analyze such physiological and behavioral markers online to change the level of automation when relevant for the mission purpose.
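The inter-subject single-trial classification procedure described above can be illustrated with a leave-one-participant-out evaluation scored by balanced accuracy. The sketch below uses synthetic data and a generic linear classifier; the feature set, model, and dataset sizes are assumptions, not the authors' pipeline.

```python
# Illustrative leave-one-subject-out evaluation with balanced accuracy.
# Data are synthetic; the paper's features came from cardiac, ocular,
# and user-interface measures.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_subjects, trials_per_subject, n_features = 12, 20, 8
X = rng.normal(size=(n_subjects * trials_per_subject, n_features))
y = rng.integers(0, 2, size=len(X))   # high- vs. low-score mission label
X[y == 1] += 0.5                      # inject a weak class signal
groups = np.repeat(np.arange(n_subjects), trials_per_subject)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = []
for train, test in LeaveOneGroupOut().split(X, y, groups):
    clf.fit(X[train], y[train])       # fit on all subjects but one
    scores.append(balanced_accuracy_score(y[test], clf.predict(X[test])))
print(f"mean balanced accuracy: {np.mean(scores):.2f}")
```

The paper reports its best average balanced accuracy (74%) when combining features from all input devices; the synthetic setup above only demonstrates the evaluation protocol.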


2011 ◽  
Vol 5 (1) ◽  
pp. 83-105 ◽  
Author(s):  
Jessie Y. C. Chen

A military vehicle crew station environment was simulated, and a series of three experiments was conducted to examine the workload and performance of the combined position of gunner and robotics operator in a multitasking environment. The study also evaluated whether aided target recognition (AiTR) capabilities for the gunnery task (delivered through tactile and/or visual cuing) might benefit the concurrent robotics and communication tasks, and how concurrent task performance might be affected when the AiTR was unreliable (i.e., false-alarm-prone or miss-prone). Participants' spatial ability was consistently found to be a reliable predictor of their targeting task performance as well as of their modality preference for the AiTR display. Participants' attentional control was found to significantly affect the way they interacted with unreliable automated systems.


Author(s):  
Antonio Bicchi ◽  
Michele Bavaro ◽  
Gianluca Boccadamo ◽  
Davide De Carli ◽  
Roberto Filippini ◽  
...  

Complexity ◽  
2017 ◽  
Vol 2017 ◽  
pp. 1-14 ◽  
Author(s):  
Yiming Jiang ◽  
Chenguang Yang ◽  
Jing Na ◽  
Guang Li ◽  
Yanan Li ◽  
...  

As an imitation of biological nervous systems, neural networks (NNs), which have been characterized as powerful learning tools, are employed in a wide range of applications, such as the control of complex nonlinear systems, optimization, system identification, and pattern recognition. This article aims to provide a brief review of state-of-the-art NNs for complex nonlinear systems by summarizing recent progress in NN theory and practical applications. Specifically, the survey also reviews a number of NN-based robot control algorithms, including NN-based manipulator control, NN-based human-robot interaction, and NN-based cognitive control.
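As a concrete instance of the NN-based control algorithms such surveys cover, here is a minimal sketch, under illustrative assumptions, of a radial basis function (RBF) network compensating an unknown nonlinearity online in a one-degree-of-freedom regulation loop; the plant, gains, and adaptation rate are invented for the example.

```python
# Toy adaptive RBF-NN compensation for a 1-DoF plant x' = f(x) + u.
# The "unknown" dynamics, gains, and RBF layout are illustrative only.
import numpy as np

centers = np.linspace(-2.0, 2.0, 15)   # RBF centers over the state range
width = 0.4
w = np.zeros_like(centers)             # network weights (learned online)

def phi(x):                            # Gaussian RBF feature vector
    return np.exp(-(x - centers) ** 2 / (2 * width ** 2))

def f_unknown(x):                      # unmodeled dynamics the NN must learn
    return 0.8 * np.sin(2.0 * x)

dt, k, eta = 0.01, 5.0, 2.0            # step size, feedback gain, learning rate
x, x_ref = 1.5, 0.0                    # initial state and regulation target
for _ in range(2000):
    e = x - x_ref
    f_hat = w @ phi(x)                 # NN estimate of f_unknown(x)
    u = -k * e - f_hat                 # feedback plus NN compensation
    x += (f_unknown(x) + u) * dt       # plant integration (explicit Euler)
    w += eta * e * phi(x) * dt         # gradient-style adaptive weight update
print(f"final regulation error: {abs(x - x_ref):.4f}")
```

Adaptive laws of this general form, together with their stability analyses, are representative of the manipulator-control results this kind of survey reviews.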


2018 ◽  
Vol 15 (4) ◽  
pp. 172988141877319 ◽  
Author(s):  
S M Mizanoor Rahman ◽  
Ryojun Ikeura

In the first step, a one-degree-of-freedom power-assist robotic system is developed for lifting lightweight objects. Dynamics for human–robot co-manipulation are derived that include human cognition, for example weight perception. A novel admittance control scheme is derived using the weight-perception-based dynamics. Human subjects lift a small, lightweight object with the power-assist robotic system, and the human–robot interaction and system characteristics are analyzed. A comprehensive scheme is developed to evaluate the human–robot interaction and performance, and a constrained optimization algorithm is developed to determine the optimum human–robot interaction and performance. The results show that including weight perception in the control helps achieve optimum human–robot interaction and performance for a set of hard constraints. In the second step, the same optimization algorithm and control scheme are used for lifting a heavy object with a multi-degree-of-freedom power-assist robotic system. The results show that the human–robot interaction and performance for lifting the heavy object are not as good as those for lifting the lightweight object. Then, weight-perception-based intelligent controls, in the forms of model predictive control and vision-based variable admittance control, are applied for lifting the heavy object. The results show that the intelligent controls enhance human–robot interaction and performance, help achieve optimum human–robot interaction and performance for a set of soft constraints, and produce human–robot interaction and performance similar to those obtained for lifting the lightweight object. The human–robot interaction and performance for lifting the heavy object with power assist are treated as intuitive and natural because they are calibrated against those for lifting the lightweight object. The results also show that the variable admittance control outperforms the model predictive control. We also propose a method to adjust the variable admittance control for three-degree-of-freedom translational manipulation of heavy objects based on human intent recognition. The results are useful for developing controls of human-friendly, high-performance power-assist robotic systems for heavy object manipulation in industry.
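The admittance control idea at the core of this work can be illustrated with a toy one-degree-of-freedom simulation. The sketch below renders a "perceived" mass lighter than the real object, in the spirit of weight-perception-based assistance; all masses, gains, and forces are illustrative, not the paper's identified parameters.

```python
# Toy 1-DoF power-assist lift under a weight-perception-based admittance law.
# All parameter values are illustrative, not the paper's identified values.

m_real, m_perceived = 10.0, 1.5   # actual vs. rendered object mass (kg)
c_virtual = 8.0                   # virtual damping (N*s/m)
g, dt = 9.81, 0.001

x, v = 0.0, 0.0                   # lift height (m) and velocity (m/s)
f_human = 20.0                    # operator's upward force (N), constant here
for _ in range(1000):             # simulate 1 s of lifting
    # Admittance law: m_perceived*a + c_virtual*v = f_human - m_perceived*g,
    # so the operator feels a 1.5 kg object instead of a 10 kg one.
    a = (f_human - m_perceived * g - c_virtual * v) / m_perceived
    v += a * dt
    x += v * dt
# The robot supplies whatever the real object needs beyond the human's force:
f_assist = m_real * (a + g) - f_human
print(f"lift height after 1 s: {x:.2f} m, robot assist force: {f_assist:.1f} N")
```

A variable admittance scheme of the kind the paper finds superior to model predictive control would additionally adapt m_perceived and c_virtual online, for example from the recognized human intent.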


2013 ◽  
Vol 14 (3) ◽  
pp. 390-418 ◽  
Author(s):  
Tian Xu ◽  
Hui Zhang ◽  
Chen Yu

When humans address multiple robots with informative speech acts (Clark & Carlson 1982), their cognitive resources are shared among all the participating robot agents. At each moment, the user's behavior is not only determined by the actions of the robot they are directly gazing at, but is also shaped by the behaviors of all the other robots in the shared environment. We define cooperative behavior as the actions performed by the robots that are not capturing the user's direct attention. In this paper, we are interested in how human participants adjust and coordinate their own behavioral cues when the robot agents perform different cooperative gaze behaviors. A novel gaze-contingent platform was designed and implemented; the robots' behaviors were triggered by the participant's attentional shifts in real time. Results showed that the human participants were highly sensitive to the robot agents' different cooperative gazing behaviors.
Keywords: human-robot interaction; multi-robot interaction; multiparty interaction; eye gaze cue; embodied conversational agent
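A gaze-contingent platform of this kind can be outlined as a simple loop: read the participant's fixation target, have the fixated robot respond directly, and have the remaining robots execute a cooperative gaze behavior. The sketch below simulates gaze input; the robot names and behavior labels are hypothetical stand-ins for the paper's platform.

```python
# Minimal gaze-contingent loop: the fixated robot responds directly while
# the other robots perform a cooperative gaze behavior. Gaze is simulated.
import random

ROBOTS = ["robot_A", "robot_B", "robot_C"]

def read_gaze_target() -> str:
    """Stand-in for a real-time eye-tracker callback; returns fixated robot."""
    return random.choice(ROBOTS)

def update_robots(fixated: str) -> dict:
    actions = {}
    for r in ROBOTS:
        if r == fixated:
            actions[r] = "mutual_gaze"         # respond to direct attention
        else:
            actions[r] = "look_at_" + fixated  # cooperative gaze behavior
    return actions

if __name__ == "__main__":
    random.seed(1)
    for frame in range(3):                     # three simulated attention shifts
        target = read_gaze_target()
        print(frame, target, update_robots(target))
```

In the actual platform, the fixation target came from a real-time eye tracker rather than a simulated sample, so the robots' behaviors tracked the participant's attentional shifts as they occurred.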

