Brain stimulation to left prefrontal cortex modulates mechanisms of social attention

Author(s):  
Eva Wiese ◽  
Aziz Abubshait ◽  
Bobby Azarian ◽  
Eric J. Blumberg

In social interactions, we rely on nonverbal cues like gaze direction to understand the behavior of others. How we react to these cues is determined by the degree to which we believe that they originate from an entity with a mind capable of having internal states and showing intentional behavior, a process called mind perception. While prior work has established a set of neural regions linked to mind perception, research has just begun to examine how mind perception affects social-cognitive mechanisms like gaze processing on a neuronal level. In the current experiment, participants performed a social attention task (i.e., attentional orienting to gaze cues) with either a human or a robot agent (i.e., variation of mind perception), while transcranial direct current stimulation (tDCS) was applied either to prefrontal or temporo-parietal areas, both regions that have been linked to mind perception in previous studies. The results show that stimulation to temporo-parietal areas did not modulate social attention in response to either the human or the robot agent. In contrast, stimulation to prefrontal areas enhanced attentional orienting in response to human gaze cues and attenuated attentional orienting in response to robot gaze cues. Post-hoc analyses revealed that prefrontal stimulation particularly affected those participants who followed human gaze more strongly than robot gaze at baseline. These findings suggest that mind perception modulates low-level mechanisms of social cognition via prefrontal structures, and that a certain degree of mind perception is essential in order to benefit from active stimulation to prefrontal areas.

2019 ◽  
Vol 374 (1771) ◽  
pp. 20180430 ◽  
Author(s):  
Eva Wiese ◽  
Abdulaziz Abubshait ◽  
Bobby Azarian ◽  
Eric J. Blumberg

In social interactions, we rely on non-verbal cues like gaze direction to understand the behaviour of others. How we react to these cues is determined by the degree to which we believe that they originate from an entity with a mind capable of having internal states and showing intentional behaviour, a process called mind perception. While prior work has established a set of neural regions linked to mind perception, research has just begun to examine how mind perception affects social-cognitive mechanisms like gaze processing on a neuronal level. In the current experiment, participants performed a social attention task (i.e. attentional orienting to gaze cues) with either a human or a robot agent (i.e. manipulation of mind perception) while transcranial direct current stimulation (tDCS) was applied to prefrontal and temporo-parietal brain areas. The results show that temporo-parietal stimulation did not modulate mechanisms of social attention in response to either the human or the robot agent, whereas prefrontal stimulation enhanced attentional orienting in response to human gaze cues and attenuated attentional orienting in response to robot gaze cues. The findings suggest that mind perception modulates low-level mechanisms of social cognition via prefrontal structures, and that a certain degree of mind perception is essential in order for prefrontal stimulation to affect mechanisms of social attention. This article is part of the theme issue ‘From social brains to social robots: applying neurocognitive insights to human–robot interaction’.
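A note on the measure used in these studies: attentional orienting to gaze cues is conventionally quantified as a gaze-cueing effect, the difference in mean reaction time between invalid trials (target appears opposite the gazed-at location) and valid trials (target appears at the gazed-at location), computed separately per agent and stimulation condition. The sketch below is a generic illustration of that computation under this assumption; it is not the authors' analysis code, and the trial fields are hypothetical.

```python
# Minimal sketch (not the authors' analysis code): quantifying attentional
# orienting as a gaze-cueing effect from trial-level reaction times labelled
# by cue validity, gazer agent, and tDCS condition.
from statistics import mean

def cueing_effect(trials):
    """Cueing effect = mean RT on invalid trials - mean RT on valid trials.
    Larger positive values indicate stronger orienting to the gaze cue."""
    valid = [t["rt_ms"] for t in trials if t["validity"] == "valid"]
    invalid = [t["rt_ms"] for t in trials if t["validity"] == "invalid"]
    return mean(invalid) - mean(valid)

def effects_by_condition(trials):
    """Group trials by (agent, stimulation) and compute one effect per cell."""
    cells = {}
    for t in trials:
        cells.setdefault((t["agent"], t["stimulation"]), []).append(t)
    return {cell: cueing_effect(ts) for cell, ts in cells.items()}

# Hypothetical usage with a handful of illustrative trials:
trials = [
    {"agent": "human", "stimulation": "prefrontal", "validity": "valid",   "rt_ms": 412},
    {"agent": "human", "stimulation": "prefrontal", "validity": "invalid", "rt_ms": 455},
    {"agent": "robot", "stimulation": "prefrontal", "validity": "valid",   "rt_ms": 430},
    {"agent": "robot", "stimulation": "prefrontal", "validity": "invalid", "rt_ms": 441},
]
print(effects_by_condition(trials))
```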


2020 ◽  
Author(s):  
Abdulaziz Abubshait ◽  
Ali Momen ◽  
Eva Wiese

Understanding and reacting to others’ nonverbal social signals, such as changes in gaze direction (i.e., gaze cues), is essential for social interactions, as it is important for processes such as joint attention and mentalizing. Although attentional orienting in response to gaze cues has a strong reflexive component, accumulating evidence shows that it can be top-down controlled by context information regarding the signals’ social relevance. For example, when a gazer is believed to be an entity “with a mind” (i.e., mind perception), people exert more top-down control on attentional orienting. Although increasing an agent’s physical human-likeness can enhance mind perception, it could have negative consequences for top-down control of social attention when a gazer’s physical appearance is categorically ambiguous (i.e., difficult to categorize as human or nonhuman), as resolving this ambiguity would require cognitive resources that could otherwise be used to top-down control attentional orienting. To examine this question, we used mouse-tracking to explore whether categorically ambiguous agents are associated with increased processing costs (Experiment 1), whether categorically ambiguous stimuli negatively impact top-down control of social attention (Experiment 2), and whether resolving the conflict related to the agent’s categorical ambiguity (through exposure) would restore top-down control of attentional orienting (Experiment 3). The findings suggest that categorically ambiguous stimuli are associated with cognitive conflict, which negatively impacts the ability to exert top-down control on attentional orienting in a counterpredictive gaze-cueing paradigm; this negative impact, however, is attenuated when participants are pre-exposed to the stimuli prior to the gaze-cueing task. Taken together, these findings suggest that manipulating physical human-likeness is a powerful way to affect mind perception in human-robot interaction but has diminishing returns for social attention when the agent is categorically ambiguous, due to drainage of cognitive resources and impairment of top-down control.
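For readers unfamiliar with the mouse-tracking measure of processing cost mentioned above: a common index of cognitive conflict is the maximum deviation of the cursor trajectory from the straight line connecting its start and end points. The sketch below illustrates that index under this assumption; it is a generic example, not the authors' pipeline, and the trajectories are hypothetical.

```python
# Generic sketch of a standard mouse-tracking conflict index (maximum
# deviation, MD): the largest perpendicular distance of the cursor path
# from the straight line connecting its start and end points.
import math

def max_deviation(path):
    """path: list of (x, y) cursor samples from trial start to response."""
    (x0, y0), (x1, y1) = path[0], path[-1]
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy)
    if length == 0:
        return 0.0
    # Perpendicular distance of each sample from the start-end line.
    return max(abs(dy * (x - x0) - dx * (y - y0)) / length for x, y in path)

# Hypothetical trials: a path that bows toward the competing response
# before settling on an answer signals more conflict than a direct path.
ambiguous_trial = [(0, 0), (30, 80), (90, 120), (160, 130), (200, 200)]
clear_trial = [(0, 0), (50, 55), (100, 105), (150, 150), (200, 200)]
print(max_deviation(ambiguous_trial), max_deviation(clear_trial))
```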


2020 ◽  
Author(s):  
Abdulaziz Abubshait ◽  
Patrick P. Weis ◽  
Eva Wiese

Social signals, such as changes in gaze direction, are essential cues to predict others’ mental states and behaviors (i.e., mentalizing). Studies show that humans can mentalize with non-human agents when they perceive a mind in them (i.e., mind perception). Robots that physically and/or behaviorally resemble humans likely trigger mind perception, which enhances the relevance of social cues and improves social-cognitive performance. The current experiments examine whether the effect of physical and behavioral influencers of mind perception on social-cognitive processing is modulated by the lifelikeness of a social interaction. Participants interacted with robots of varying degrees of physical (human-like vs. robot-like) and behavioral (reliable vs. random) human-likeness while the lifelikeness of a social attention task was manipulated across five experiments. The first four experiments manipulated lifelikeness via the physical realism of the robot images (Studies 1 and 2), the biological plausibility of the social signals (Study 3), and the plausibility of the social context (Study 4). They showed that human-like behavior affected social attention, whereas appearance affected mind perception ratings. However, when the lifelikeness of the interaction was increased by using videos of a human and a robot sending the social cues in a realistic environment (Study 5), social attention mechanisms were affected by both physical appearance and behavioral features, while mind perception ratings were mainly affected by physical appearance. This indicates that in order to understand the effect of physical and behavioral features on social cognition, paradigms should be used that adequately simulate the lifelikeness of social interactions.


Author(s):  
Abdulaziz Abubshait ◽  
Eva Wiese

When we interact with others, we use nonverbal behavior such as changes in gaze direction to make inferences about what people think or what they want to do next – a process called mentalizing. Previous studies have shown that how we react to others’ gaze signals depends on how much “mind” we ascribe to the gazer, and that this process of mind perception is related to activation in brain areas that process social information (i.e., the social brain). Although brain stimulation studies have identified prefrontal structures like the ventromedial prefrontal cortex (vmPFC) as the potential neural substrate through which mind perception modulates social-cognitive processes like attentional orienting to gaze cues (i.e., gaze following), little is known about whether and how individual differences in preferences for human versus robot agents modulate this relationship. To address this question, the current study examines how transcranial direct current stimulation (tDCS) of left prefrontal versus left temporo-parietal areas affects attentional orienting to gaze signals as a function of the participants’ preferences for human (Human Gaze Followers, HGFs) versus robot (Robot Gaze Followers, RGFs) agents at baseline (prior to brain stimulation). Results show that prefrontal (but not temporo-parietal) stimulation positively affected attentional orienting to gaze signals for HGFs for the human but not the robot gazer; RGFs showed no effect of brain stimulation in either of the stimulation conditions. These findings inform how preferences for human versus nonhuman agent types can influence subsequent interactions and communications in human-robot interaction.
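The baseline split described above groups participants by which agent they followed more strongly before stimulation. One plausible way to operationalize such a split, assuming a per-agent gaze-cueing effect like the one sketched earlier, is shown below; this is an illustrative criterion, not necessarily the authors' exact one.

```python
# Minimal sketch (one plausible criterion, not necessarily the authors'):
# classify each participant by whichever agent elicited the larger baseline
# gaze-cueing effect.
def classify_gaze_follower(baseline_effects):
    """baseline_effects: dict of pre-stimulation cueing effects in ms,
    e.g. {"human": 28.0, "robot": 12.5}."""
    if baseline_effects["human"] > baseline_effects["robot"]:
        return "HGF"   # Human Gaze Follower
    if baseline_effects["robot"] > baseline_effects["human"]:
        return "RGF"   # Robot Gaze Follower
    return "tie"

print(classify_gaze_follower({"human": 28.0, "robot": 12.5}))  # -> "HGF"
```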


2018 ◽  
Author(s):  
Eva Wiese ◽  
George Buzzell ◽  
Aziz Abubshait ◽  
Paul Beatty

In social interactions, we rely on nonverbal cues like gaze direction to understand the behavior of others. How we react to these cues is affected by whether they are believed to originate from an entity with a mind, capable of having internal states (i.e., mind perception). While prior work has established a set of neural regions linked to social-cognitive processes like mind perception, the degree to which activation within this network relates to performance in subsequent social-cognitive tasks remains unclear. In the current study, participants performed a mind perception task (i.e., judging the likelihood that faces, varying in physical human-likeness, have internal states) while event-related fMRI was collected. Afterwards, participants performed a social-cognitive task outside the scanner, during which they were cued by the gaze of the same faces that they previously judged within the mind perception task. Parametric analyses of the fMRI data revealed activity within ventromedial prefrontal cortex (vmPFC) to be related to both mind ratings inside the scanner and gaze-cueing performance outside the scanner. In addition, other social brain regions were related to gaze-cueing performance, including frontal areas like the left insula, dorsolateral prefrontal cortex, and inferior frontal gyrus, as well as temporal areas like the left temporo-parietal junction and bilateral temporal gyri. The findings suggest that functions subserved by the vmPFC are relevant to both mind perception and social attention, implicating a role of vmPFC in the top-down modulation of low-level social-cognitive processes.


2007 ◽  
Author(s):  
F. Lucidi ◽  
A. Zelli ◽  
L. Mallia ◽  
C. Grano ◽  
C. Violani

2019 ◽  
Author(s):  
Mahsa Barzy ◽  
Heather Jane Ferguson ◽  
David Williams

Socio-communication is profoundly impaired among autistic individuals. Difficulties representing others’ mental states have been linked to modulations of gaze and speech, which have also been shown to be impaired in autism. Despite these observed impairments in ‘real-world’ communicative settings, research has mostly focused on lab-based experiments, where the language is highly structured. In a pre-registered experiment, we recorded eye movements and verbal responses while adults (N=50) engaged in a real-life conversation. The conversation topic related either to the self, a familiar other, or an unfamiliar other (e.g., “Tell me who is your/your mother’s/Marina’s favourite celebrity and why?”). Results replicated previous work, showing reduced attention to socially relevant information among autistic participants (i.e., less time looking at the experimenter’s face, and more time looking around the background) compared to typically developing controls. Importantly, perspective modulated social attention in both groups; talking about an unfamiliar other reduced attention to potentially distracting or resource-demanding social information and increased looks to the non-social background. Social attention did not differ between the self and familiar-other contexts, reflecting greater shared knowledge for familiar/similar others. Autistic participants spent more time looking at the background when talking about an unfamiliar other vs. themselves. Future research should investigate the cognitive mechanisms underlying this effect.
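The eye-movement results above are typically summarized as the proportion of gaze samples (or dwell time) falling within each area of interest, such as the experimenter's face versus the background. The sketch below illustrates that summary under this assumption; the AOI labels are hypothetical and this is not the authors' analysis code.

```python
# Generic sketch: summarizing social attention as the proportion of
# eye-tracking samples falling in each area of interest (AOI).
from collections import Counter

def aoi_proportions(samples):
    """samples: list of AOI labels (e.g. 'face', 'body', 'background'),
    one per eye-tracking sample; returns the proportion of samples per AOI."""
    counts = Counter(samples)
    total = sum(counts.values())
    return {aoi: n / total for aoi, n in counts.items()}

# Hypothetical sample stream from one conversational turn:
samples = ["face"] * 120 + ["background"] * 60 + ["body"] * 20
print(aoi_proportions(samples))  # {'face': 0.6, 'background': 0.3, 'body': 0.1}
```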


2019 ◽  
Author(s):  
Mark K Ho ◽  
Fiery Andrews Cushman ◽  
Michael L. Littman ◽  
Joseph L. Austerweil

Theory of mind enables an observer to interpret others' behavior in terms of unobservable beliefs, desires, intentions, feelings, and expectations about the world. This also empowers the person whose behavior is being observed: By intelligently modifying her actions, she can influence the mental representations that an observer ascribes to her, and by extension, what the observer comes to believe about the world. That is, she can engage in intentionally communicative demonstrations. Here, we develop a computational account of generating and interpreting communicative demonstrations by explicitly distinguishing between two interacting types of planning. Typically, instrumental planning aims to control states of the physical environment, whereas belief-directed planning aims to influence an observer's mental representations. Our framework (1) extends existing formal models of pragmatics and pedagogy to the setting of value-guided decision-making, (2) captures how people modify their intentional behavior to show what they know about the reward or causal structure of an environment, and (3) helps explain data on infant and child imitation in terms of literal versus pragmatic interpretation of adult demonstrators' actions. Additionally, our analysis of belief-directed intentionality and mentalizing sheds light on the socio-cognitive mechanisms that underlie distinctly human forms of communication, culture, and sociality.
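To make the distinction between the two planning types concrete, the toy sketch below shows a demonstrator choosing between acting purely instrumentally and acting to shift a simple Bayesian observer's belief toward the true state of the world. The states, actions, rewards, and observer model are illustrative assumptions, not elements of the paper's formal framework.

```python
# Toy sketch (not the paper's formal model): a demonstrator that trades off
# instrumental value against how much an action shifts a naive Bayesian
# observer's belief toward the true state of the world.
STATES = ["lever_opens_door", "button_opens_door"]
ACTIONS = ["pull_lever", "press_button"]

def reward(state, action):
    """Instrumental reward: the action matching the true mechanism pays off."""
    matches = (state, action) in {("lever_opens_door", "pull_lever"),
                                  ("button_opens_door", "press_button")}
    return 1.0 if matches else 0.0

def observer_posterior(action, prior=None):
    """The observer assumes the actor mostly acts instrumentally, so actions
    are likelier in states where they are rewarded."""
    prior = prior or {s: 1 / len(STATES) for s in STATES}
    likelihood = {s: 0.8 if reward(s, action) > 0 else 0.2 for s in STATES}
    unnorm = {s: prior[s] * likelihood[s] for s in STATES}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

def choose_demonstration(true_state, weight_belief=0.5):
    """Pick the action maximizing a blend of instrumental reward and the
    observer's resulting belief in the true state (belief-directed value)."""
    def value(action):
        instrumental = reward(true_state, action)
        communicative = observer_posterior(action)[true_state]
        return (1 - weight_belief) * instrumental + weight_belief * communicative
    return max(ACTIONS, key=value)

print(choose_demonstration("lever_opens_door"))  # -> "pull_lever"
```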


Author(s):  
Rhyse Bendell ◽  
Jessica Williams ◽  
Stephen M. Fiore ◽  
Florian Jentsch

Artificial intelligence has been developed to perform all manner of tasks but has not gained capabilities to support social cognition. We suggest that teams composed of both humans and artificially intelligent agents cannot achieve optimal team performance unless all teammates have the capacity to employ social-cognitive mechanisms. These form the foundation for generating inferences about their counterparts and enable execution of informed, appropriate behaviors. Social intelligence and its utilization are known to be vital components of human-human teaming processes due to their importance in guiding the recognition, interpretation, and use of the signals that humans naturally use to shape their exchanges. Although modern sensors and algorithms could allow AI to observe most social cues, signals, and other indicators, the approximation of human-to-human social interaction, based upon aggregation and modeling of such cues, is currently beyond the capacity of potential AI teammates. This is partially because humans are notoriously variable. We describe an approach for measuring social-cognitive features to produce the raw information needed to create human agent profiles that can be operated upon by artificial intelligences.
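One way to picture the "human agent profile" idea is as a structured record of measured social-cue observations that an artificial teammate could query. The sketch below is purely illustrative; the field names and the cue-rate summary are assumptions, not the authors' schema.

```python
# Illustrative sketch only: a possible structure for a "human agent profile"
# built from measured social-cue observations. Field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class SocialCueObservation:
    cue_type: str        # e.g. "gaze_shift", "gesture", "prosody_change"
    timestamp_s: float   # time of the observation in seconds
    confidence: float    # sensor/classifier confidence in [0, 1]

@dataclass
class HumanAgentProfile:
    teammate_id: str
    observations: list[SocialCueObservation] = field(default_factory=list)

    def cue_rate(self, cue_type: str, window_s: float) -> float:
        """Cues of a given type per second over the most recent time window."""
        if not self.observations:
            return 0.0
        latest = max(o.timestamp_s for o in self.observations)
        recent = [o for o in self.observations
                  if o.cue_type == cue_type and o.timestamp_s >= latest - window_s]
        return len(recent) / window_s
```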


Author(s):  
Yingxu Wang

Consciousness is the sense of self and the sign of life in natural intelligence. One of the profound myths in cognitive informatics, psychology, brain science, and computational intelligence is how consciousness is generated by physiological organs and neural networks in the brain. This paper presents a formal model and a cognitive process of consciousness in order to explain how abstract consciousness is generated and what its cognitive mechanisms are. The hierarchical levels of consciousness are explored from the facets of neurology, physiology, and computational intelligence. A rigorous mathematical model of consciousness is created that elaborates the nature of consciousness. The cognitive process of consciousness is formally described using denotational mathematics. It is recognized that consciousness is a set of real-time mental information about the bodily and emotional status of an individual stored in the cerebellum, known as the Conscious Status Memory (CSM), and is processed/interpreted by the thalamus. The abstract intelligence model of consciousness can be applied in cognitive informatics, cognitive computing, and computational intelligence toward the mimicry and simulation of human perception and awareness of the internal states, external environment, and their interactions in reflexive, perceptive, cognitive, and instructive intelligence.

