Effect of brain stimulation on mechanisms of social cognition is modulated by individual preferences for human versus robot agents

Author(s):  
Abdulaziz Abubshait, Eva Wiese

When we interact with others, we use nonverbal behavior such as changes in gaze direction to make inferences about what people think or what they want to do next – a process called mentalizing. Previous studies have shown that how we react to others’ gaze signals depends on how much “mind” we ascribe to the gazer, and that this process of mind perception is related to activation in brain areas that process social information (i.e., the social brain). Although brain stimulation studies have identified prefrontal structures like the ventromedial prefrontal cortex (vmPFC) as the potential neural substrate through which mind perception modulates social-cognitive processes like attentional orienting to gaze cues (i.e., gaze following), little is known about whether and how individual differences in preferences for human versus robot agents modulate this relationship. To address this question, the current study examines how transcranial direct current stimulation (tDCS) of left prefrontal versus left temporo-parietal areas affects attentional orienting to gaze signals as a function of participants’ baseline preferences (prior to brain stimulation) for human (human gaze followers, HGFs) versus robot (robot gaze followers, RGFs) agents. Results show that prefrontal (but not temporo-parietal) stimulation positively affected attentional orienting to gaze signals for HGFs with the human but not the robot gazer; RGFs showed no effect of brain stimulation in either stimulation condition. These findings inform how preferences for human versus nonhuman agent types can influence subsequent interactions and communication in human-robot interaction.
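As a rough illustration of the baseline measure described above, the sketch below computes a gaze-cueing effect (mean reaction time on invalidly cued trials minus validly cued trials) and labels a participant as HGF or RGF by which agent's gaze they followed more strongly before stimulation. The function names, RT values, and classification rule are illustrative assumptions, not the authors' pipeline.

```python
# Hypothetical sketch: computing gaze-cueing effects and classifying
# participants as HGF or RGF from baseline reaction times (RTs).

def cueing_effect(rt_invalid_ms, rt_valid_ms):
    """Gaze-cueing effect: mean RT on invalidly cued trials minus
    mean RT on validly cued trials. Positive values = gaze following."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(rt_invalid_ms) - mean(rt_valid_ms)

def classify_participant(human_effect_ms, robot_effect_ms):
    """Label a participant by which agent's gaze they followed more
    strongly at baseline (before stimulation). Illustrative rule only."""
    return "HGF" if human_effect_ms > robot_effect_ms else "RGF"

# Example with made-up RTs (ms):
human_eff = cueing_effect([520, 540, 515], [480, 470, 490])  # 45 ms
robot_eff = cueing_effect([500, 505, 498], [495, 502, 499])  # ~2 ms
print(classify_participant(human_eff, robot_eff))  # -> "HGF"
```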

2019, Vol. 374 (1771), pp. 20180430
Author(s):  
Eva Wiese, Abdulaziz Abubshait, Bobby Azarian, Eric J. Blumberg

In social interactions, we rely on non-verbal cues like gaze direction to understand the behaviour of others. How we react to these cues is determined by the degree to which we believe that they originate from an entity with a mind capable of having internal states and showing intentional behaviour, a process called mind perception. While prior work has established a set of neural regions linked to mind perception, research has only begun to examine how mind perception affects social-cognitive mechanisms like gaze processing at the neuronal level. In the current experiment, participants performed a social attention task (i.e. attentional orienting to gaze cues) with either a human or a robot agent (i.e. a manipulation of mind perception) while transcranial direct current stimulation (tDCS) was applied to prefrontal and temporo-parietal brain areas. The results show that temporo-parietal stimulation did not modulate mechanisms of social attention in response to either the human or the robot agent, whereas prefrontal stimulation enhanced attentional orienting in response to human gaze cues and attenuated attentional orienting in response to robot gaze cues. The findings suggest that mind perception modulates low-level mechanisms of social cognition via prefrontal structures, and that a certain degree of mind perception is essential in order for prefrontal stimulation to affect mechanisms of social attention. This article is part of the theme issue ‘From social brains to social robots: applying neurocognitive insights to human–robot interaction’.
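For readers unfamiliar with how such stimulation effects are typically quantified, here is a minimal analysis sketch: per-participant cueing effects compared across a sham session and a prefrontal stimulation session with a paired t-test. The numbers and the sham/anodal design are invented for illustration; the paper's actual statistical model may differ.

```python
# Illustrative analysis sketch (not the authors' code): testing whether
# prefrontal tDCS changes the gaze-cueing effect for a given agent.
from scipy import stats

# Hypothetical per-participant cueing effects (ms), human-agent trials
sham       = [22, 18, 25, 30, 15, 20, 27, 19]  # sham session
prefrontal = [35, 28, 33, 41, 24, 31, 38, 27]  # anodal prefrontal tDCS

t, p = stats.ttest_rel(prefrontal, sham)
print(f"t = {t:.2f}, p = {p:.4f}")  # enhancement if t > 0 and p is small
```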


Author(s):  
Jacquelyn L. Schreck, Olivia B. Newton, Jihye Song, Stephen M. Fiore

This study examined how human-robot interaction is influenced by individual differences in theory of mind ability. Participants engaged in a hallway navigation task with a robot over a number of trials. The display on the robot and its proxemics behavior were manipulated, and participants made mental state attributions across trials. Participants’ theory of mind ability was also assessed. Results show that proxemics behavior and robotic display characteristics differentially influence the degree to which individuals perceive the robot when making mental state attributions about self or other. Additionally, theory of mind ability interacted with proxemics and display characteristics. The findings illustrate the importance of understanding individual differences in higher-level cognition. As robots become more social, the need to understand social-cognitive processes in human-robot interaction increases. Results are discussed in the context of how individual differences and social signals theory inform research in human-robot interaction.


Author(s):  
Samantha F. Warta, Katelynn A. Kapalo, Andrew Best, Stephen M. Fiore

Robotic teammates are becoming prevalent in increasingly complex and dynamic operational and social settings. For this reason, the perception of robots operating in such environments has transitioned from robots as tools that extend human capabilities to robots as teammates that collaborate with humans and display complex social-cognitive processes. The goal of this paper is to introduce a discussion of an integrated set of robotic design elements and to support the idea that human-robot interaction requires a clearer understanding of social-cognitive constructs to optimize human-robot collaboration. We develop a set of research questions addressing these constructs with the goal of improving the engineering of artificial cognitive systems reliant on natural human-robot interaction.


Electronics, 2021, Vol. 10 (18), pp. 2216
Author(s):  
Syed Tanweer Shah Bukhari, Wajahat Mahmood Qazi

The challenge in human–robot interaction is to build an agent that can act upon implicit human statements, i.e., execute tasks without explicit instruction. Understanding what to do in such scenarios requires the agent to perform object grounding and affordance learning from acquired knowledge. Affordance has been the driving force for agents to construct relationships between objects, their effects, and actions, whereas grounding is effective for understanding the spatial map of objects present in the environment. The main contribution of this paper is a methodology for extending object affordance and grounding, a Bloom-based cognitive cycle, and a formulation of perceptual semantics for context-based human–robot interaction. In this study, we implemented YOLOv3 for visual perception and an LSTM to identify the level of the cognitive cycle, since cognitive processes are synchronized within the cycle. In addition, we used semantic networks and conceptual graphs to represent knowledge along the various dimensions of the cognitive cycle. The visual perception module achieved an average precision of 0.78, an average recall of 0.87, and an average F1 score of 0.80, indicating an improvement in the generation of semantic networks and conceptual graphs. The similarity index used for the lingual and visual association showed promising results and improves the overall experience of human–robot interaction.
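A small sketch of how the reported detection metrics could be derived, assuming per-class true-positive (TP), false-positive (FP), and false-negative (FN) counts. Averaging per-class F1 scores also explains why the average F1 (0.80) need not equal the F1 computed from the average precision and recall. All counts and class names below are invented.

```python
# Minimal sketch (invented data): per-class precision/recall/F1 and
# their macro-averages, as in standard object-detection evaluation.

def prf(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall    = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

counts = {"cup": (45, 10, 5), "book": (38, 14, 7), "ball": (50, 12, 4)}
scores = [prf(*c) for c in counts.values()]
avg = [sum(col) / len(scores) for col in zip(*scores)]
print(f"avg precision={avg[0]:.2f}, recall={avg[1]:.2f}, F1={avg[2]:.2f}")
```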


Author(s):  
Ali Momen, Eva Wiese

Social robots with expressive gaze have positive effects on human-robot interaction. In particular, research suggests that when robots are programmed to express introverted or extroverted gaze behavior, individuals enjoy interacting more with robots that match their personality. However, how this affects social-cognitive performance during human-robot interaction has not yet been thoroughly examined. In the current paper, we examine whether a perceived match between human and robot personality positively affects the degree to which the robot’s gaze is followed (i.e., gaze cueing, as a proxy for more complex social-cognitive behavior). While social attention has been examined extensively outside of human-robot interaction, recent research shows that a robot’s gaze is attended to in a similar way to a human’s gaze. While our results did not support the hypothesis that gaze cueing would be strongest when the participant’s personality matched the robot’s personality, we did find evidence that participants followed the gaze of introverted robots more strongly than the gaze of extroverted robots. This finding suggests that agents displaying extroverted gaze behavior may hurt performance in human-robot interaction.


Author(s):  
Anshu Saxena Arora, Amit Arora

Research on human-robot interaction (HRI) is growing; however, the congruent socio-behavioral HRI research fields of social cognition, socio-behavioral intentions, and codes of ethics have received little attention. Humans possess an inherent ability to integrate perception, cognition, and action, while robots may be limited: they may not recognize an object or a being, navigate a terrain, or comprehend written or verbal language and instructions. This HRI research focuses on issues and challenges for both humans and robots from social, behavioral, technical, and ethical perspectives. The human ability to anthropomorphize robots and the adoption of an ‘intentional mindset’ toward robots through xenocentrism have added new dimensions to HRI. Robotic anthropomorphism plays a significant role in how humans can be successful companions of robots. This research explores social cognitive intelligence versus artificial intelligence, with a focus on privacy protections and the ethical implications of HRI, while designing robots that are ethical, cognitively and artificially intelligent, and social, human-like agents.


2021, Vol. 11 (16), pp. 7426
Author(s):  
Furong Deng, Yu Zhou, Sifan Song, Zijian Jiang, Lifu Chen, ...

Gaze-following is an effective way to understand intention in human–robot interaction: the robot follows a person’s gaze to estimate which object is being observed. Most existing methods require the person and the objects to appear in the same image, so, given the camera’s limited field of view, these methods are not applicable in practice. To address this problem, we propose a gaze-following method that utilizes a geometric map for better estimation. With the help of the map, the method is competitive for cross-frame estimation. On the basis of this method, we propose a novel gaze-based image caption system, which is studied here for the first time. Our experiments demonstrate that the system follows gaze and describes objects accurately. We believe this system is well suited to rehabilitation training for autistic children, elder-care service robots, and other applications.
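A minimal sketch of the map-based idea, assuming a 2D map of object positions: cast a ray from the person's head along the estimated gaze direction and return the nearest object lying close to that ray. The map format, threshold, and function names are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical map-based gaze following: the attended object is the
# closest mapped object near the gaze ray, even if it is outside the
# current camera frame.
import numpy as np

def follow_gaze(head_xy, gaze_dir, object_map, max_perp_dist=0.3):
    d = np.asarray(gaze_dir, float)
    d /= np.linalg.norm(d)                    # unit gaze direction
    best, best_along = None, np.inf
    for name, pos in object_map.items():
        v = np.asarray(pos, float) - head_xy
        along = v @ d                         # distance along the ray
        perp = np.linalg.norm(v - along * d)  # distance off the ray
        if along > 0 and perp < max_perp_dist and along < best_along:
            best, best_along = name, along
    return best

# Toy map with object positions in metres:
objects = {"mug": (1.8, 0.4), "plant": (2.5, -1.0), "laptop": (1.2, 1.5)}
print(follow_gaze(np.array([0.0, 0.0]), (1.0, 0.2), objects))  # -> "mug"
```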


2021
Author(s):  
Nicolas Spatola, Thierry Chaminade

Human-human interaction (HHI) and human-robot interaction (HRI) are often compared, with the overarching question being how the cognitive processes they engage differ and what explains these differences. However, research addressing this topic, especially in neuroimaging, uses extremely artificial interaction settings and neglects a crucial parameter of human social cognition: interaction is an adaptive (rather than fixed) process. Building upon the first fMRI paradigm requiring participants to interact online with both a human and a robot in a dyadic setting, we investigate the differences and changes in brain activity during the two types of interaction in a whole-brain analysis. Our results show that, starting from a common default level, activity in specific neural regions associated with social cognition (e.g. the posterior cingulate cortex) increases in HHI while remaining stable in HRI. We discuss these results with regard to the iterative process of deepening social engagement when facing humans but not robots.


Author(s):  
Levern Q. Currie, Eva Wiese

Robotic agents are becoming increasingly pervasive in society and have already begun advancing fields such as healthcare, education, and industry. However, despite their potential to do good for society, many people still feel unease when imagining a future where robots and humans work and live together in shared environments, partly because robots are not generally trusted or ascribed human-like socio-emotional skills such as mentalizing and empathizing. In addition, performing tasks conjointly with robots can be frustrating and ineffective, partially because the neuronal networks involved in action understanding and execution (i.e., the action-perception network; APN) are underactivated in human-robot interaction (HRI). While a number of studies have linked underactivation in the APN to a reduced ability to predict a robot’s actions, little is known about how performing a competitive task together with a robot affects one’s own ability to execute or suppress an action. In the current experiment, we use a Go/No-Go task that requires participants to give a response on Go trials and suppress a response on No-Go trials to examine whether the performance of human players is impacted by whether they play the game against a robot believed to be controlled by a human as opposed to being pre-programmed. Preliminary data show higher false alarm rates on No-Go trials, higher hit rates on Go trials, longer reaction times on Go trials, and higher inverse efficiency scores in the human-controlled versus the pre-programmed condition. The results show that mind perception (here: perceiving actions as human-controlled) significantly impacted action execution of human players in a competitive human-robot interaction game.
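As a quick illustration of the performance measures reported above, the sketch below computes hit rate, false alarm rate, mean Go RT, and an inverse efficiency score (commonly mean correct RT divided by proportion correct) from invented trial counts; it is not the authors' analysis code.

```python
# Go/No-Go performance measures from hypothetical trial data.

def go_nogo_scores(go_rt_ms, go_hits, go_total, nogo_fas, nogo_total):
    hit_rate = go_hits / go_total                 # responses on Go trials
    fa_rate = nogo_fas / nogo_total               # responses on No-Go trials
    mean_rt = sum(go_rt_ms) / len(go_rt_ms)       # mean correct Go RT
    accuracy = (go_hits + (nogo_total - nogo_fas)) / (go_total + nogo_total)
    ies = mean_rt / accuracy                      # higher = less efficient
    return hit_rate, fa_rate, mean_rt, ies

# Hypothetical participant in the "human-controlled" condition:
hr, fa, rt, ies = go_nogo_scores([420, 450, 430, 460], 38, 40, 9, 20)
print(f"hits={hr:.2f}, false alarms={fa:.2f}, RT={rt:.0f} ms, IES={ies:.0f}")
```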

