Neurocomputational mechanisms of prosocial learning and links to empathy

2016, Vol 113 (35), pp. 9763-9768
Author(s): Patricia L. Lockwood, Matthew A. J. Apps, Vincent Valton, Essi Viding, Jonathan P. Roiser

Reinforcement learning theory powerfully characterizes how we learn to benefit ourselves. In this theory, prediction errors—the difference between a predicted and actual outcome of a choice—drive learning. However, we do not operate in a social vacuum. To behave prosocially, we must learn the consequences of our actions for other people. Empathy, the ability to vicariously experience and understand the affect of others, is hypothesized to be a critical facilitator of prosocial behaviors, but the link between empathy and prosocial behavior is still unclear. During functional magnetic resonance imaging (fMRI), participants chose between different stimuli that were probabilistically associated with rewards for themselves (self), another person (prosocial), or no one (control). Using computational modeling, we show that people can learn to obtain rewards for others but do so more slowly than when learning to obtain rewards for themselves. fMRI revealed that activity in a posterior portion of the subgenual anterior cingulate cortex/basal forebrain (sgACC) drives learning only when we are acting in a prosocial context and signals a prosocial prediction error conforming to classical principles of reinforcement learning theory. However, there is also substantial variability in the neural and behavioral efficiency of prosocial learning, which is predicted by trait empathy. More empathic people learn more quickly when benefitting others, and their sgACC response is the most selective for prosocial learning. We thus reveal a computational mechanism driving prosocial learning in humans. This framework could provide insights into atypical prosocial behavior in those with disorders of social cognition.
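The computational model described here is a variant of standard reinforcement learning with a separate learning rate for each recipient condition. Below is a minimal sketch of that idea in Python; the function, parameter values, and task layout are illustrative assumptions, not the authors' code.

```python
import numpy as np

def simulate_condition(reward_probs, alpha, beta=3.0, n_trials=48, seed=0):
    """Simulate learning for one recipient condition (self / prosocial / no one).

    reward_probs : reward probability of each of two stimuli
    alpha        : learning rate for this recipient condition
    beta         : softmax inverse temperature
    """
    rng = np.random.default_rng(seed)
    q = np.zeros(2)                                    # learned stimulus values
    correct = 0
    for _ in range(n_trials):
        p = np.exp(beta * q) / np.exp(beta * q).sum()  # softmax choice rule
        choice = rng.choice(2, p=p)
        reward = float(rng.random() < reward_probs[choice])
        pe = reward - q[choice]                        # prediction error
        q[choice] += alpha * pe                        # value update
        correct += int(choice == int(np.argmax(reward_probs)))
    return correct / n_trials

# Slower prosocial learning can be captured by a lower learning rate for "other".
print(simulate_condition([0.75, 0.25], alpha=0.40))   # self
print(simulate_condition([0.75, 0.25], alpha=0.15))   # prosocial
```

Fitting a lower alpha for the prosocial condition than for the self condition is one way such a model can express the slower learning for others reported in the abstract.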

eLife, 2017, Vol 6
Author(s): Geert-Jan Will, Robb B Rutledge, Michael Moutoussis, Raymond J Dolan

Self-esteem is shaped by the appraisals we receive from others. Here, we characterize neural and computational mechanisms underlying this form of social influence. We introduce a computational model that captures fluctuations in self-esteem engendered by prediction errors that quantify the difference between expected and received social feedback. Using functional MRI, we show these social prediction errors correlated with activity in ventral striatum/subgenual anterior cingulate cortex, while updates in self-esteem resulting from these errors co-varied with activity in ventromedial prefrontal cortex (vmPFC). We linked computational parameters to psychiatric symptoms using canonical correlation analysis to identify an ‘interpersonal vulnerability’ dimension. Vulnerability modulated the expression of prediction error responses in anterior insula and insula-vmPFC connectivity during self-esteem updates. Our findings indicate that updating of self-evaluative beliefs relies on learning mechanisms akin to those used in learning about others. Enhanced insula-vmPFC connectivity during updating of those beliefs may represent a marker for psychiatric vulnerability.
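The model sketched in this abstract ties momentary self-esteem to social prediction errors. A toy version of that logic might look like the following; the parameter names and values are illustrative assumptions rather than the authors' fitted model.

```python
import numpy as np

def self_esteem_trace(feedback, alpha=0.3, weight=0.6, decay=0.9):
    """Track expected approval and momentary self-esteem across social feedback.

    feedback : sequence of 1 (approval) or 0 (disapproval) from raters
    """
    expected_approval = 0.5
    esteem = 0.0
    trace = []
    for f in feedback:
        spe = f - expected_approval             # social prediction error
        expected_approval += alpha * spe        # update expectation of approval
        esteem = decay * esteem + weight * spe  # self-esteem integrates recent SPEs
        trace.append(esteem)
    return np.array(trace)

# Self-esteem rises with a run of unexpected approval, then falls when feedback turns negative.
print(self_esteem_trace([1, 1, 1, 1, 0, 0, 0, 0]))
```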


2021
Author(s): Daniel Martins, Patricia Lockwood, Jo Cutler, Rosalyn J. Moran, Yannis Paloyelis

Humans often act in the best interests of others. However, how we learn which actions result in good outcomes for other people and the neurochemical systems that support this "prosocial learning" remain poorly understood. Using computational models of reinforcement learning, functional magnetic resonance imaging and dynamic causal modelling, we examined how different doses of intranasal oxytocin, a neuropeptide linked to social cognition, impact how people learn to benefit others (prosocial learning) and whether this influence could be dissociated from how we learn to benefit ourselves (self-oriented learning). We show that a low dose of oxytocin prevented decreases in prosocial performance over time, despite no impact on self-oriented learning. Critically, oxytocin produced dose-dependent changes in the encoding of prediction errors (PE) in the midbrain-subgenual anterior cingulate cortex (sgACC) pathway specifically during prosocial learning. Our findings reveal a new role of oxytocin in prosocial learning by modulating computations of PEs in the midbrain-sgACC pathway.


2019
Author(s): Erdem Pulcu

We are living in a dynamic world in which stochastic relationships between cues and outcome events create different sources of uncertainty [1] (e.g. the fact that not all grey clouds bring rain). Living in an uncertain world continuously probes learning systems in the brain, guiding agents to make better decisions. This is a type of value-based decision-making which is very important for survival in the wild and long-term evolutionary fitness. Consequently, reinforcement learning (RL) models describing cognitive/computational processes underlying learning-based adaptations have been pivotal in behavioural [2,3] and neural sciences [4–6], as well as machine learning [7,8]. This paper demonstrates the suitability of novel update rules for RL, based on a nonlinear relationship between prediction errors (i.e. difference between the agent’s expectation and the actual outcome) and learning rates (i.e. a coefficient with which agents update their beliefs about the environment), that can account for learning-based adaptations in the face of environmental uncertainty. These models illustrate how learners can flexibly adapt to dynamically changing environments.
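One way to express the kind of update rule described here, where the learning rate is a nonlinear function of the unsigned prediction error, is sketched below; the sigmoidal form and its parameters are illustrative assumptions rather than the paper's exact equations.

```python
import numpy as np

def nonlinear_learning_rate(pe, k=8.0, midpoint=0.3, lo=0.05, hi=0.6):
    """Map the magnitude of a prediction error onto a learning rate via a sigmoid."""
    return lo + (hi - lo) / (1.0 + np.exp(-k * (abs(pe) - midpoint)))

def track_outcomes(outcomes):
    """Estimate a drifting reward probability with a PE-dependent learning rate."""
    belief = 0.5
    trace = []
    for o in outcomes:
        pe = o - belief
        belief += nonlinear_learning_rate(pe) * pe   # bigger surprises, faster updating
        trace.append(belief)
    return np.array(trace)

# Underlying reward probability flips from 0.8 to 0.2 halfway through the block.
rng = np.random.default_rng(1)
outcomes = np.concatenate([rng.random(40) < 0.8, rng.random(40) < 0.2]).astype(float)
print(track_outcomes(outcomes)[[39, 79]])   # belief before and after adapting to the reversal
```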


2019
Author(s): Patricia L. Lockwood, Miriam Klein-Flügge, Ayat Abdurahman, Molly J. Crockett

Moral behaviour requires learning how our actions help or harm others. Theoretical accounts of learning propose a key division between ‘model-free’ algorithms that efficiently cache outcome values in actions and ‘model-based’ algorithms that prospectively map actions to outcomes, a distinction that may be critical for moral learning. Here, we tested the engagement of these learning mechanisms and their neural basis as participants learned to avoid painful electric shocks for themselves and a stranger. We found that model-free learning was prioritized when avoiding harm to others compared to oneself. Model-free prediction errors for others relative to self were tracked in the thalamus/caudate at the time of the outcome. At the time of choice, a signature of model-free moral learning was associated with responses in subgenual anterior cingulate cortex (sgACC), and resisting this model-free influence was predicted by stronger connectivity between sgACC and dorsolateral prefrontal cortex. Finally, multiple behavioural and neural correlates of model-free moral learning varied with individual differences in moral judgment. Our findings suggest moral learning favours efficiency over flexibility and is underpinned by specific neural mechanisms.
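As a rough illustration of the ‘model-free’ versus ‘model-based’ distinction in an avoidance setting like this one, the two agents below learn from shock outcomes in different ways; the class structure and parameters are illustrative assumptions, not the paper's fitted hybrid model.

```python
import numpy as np

class ModelFreeAvoider:
    """Caches action values directly from experienced shock outcomes."""
    def __init__(self, n_actions=2, alpha=0.3):
        self.q = np.zeros(n_actions)
        self.alpha = alpha

    def update(self, action, shock):
        outcome = -1.0 if shock else 0.0
        self.q[action] += self.alpha * (outcome - self.q[action])  # cached-value update

    def choose(self):
        return int(np.argmax(self.q))


class ModelBasedAvoider:
    """Learns a map from actions to shock probabilities and evaluates it prospectively."""
    def __init__(self, n_actions=2, alpha=0.3):
        self.p_shock = np.full(n_actions, 0.5)
        self.alpha = alpha

    def update(self, action, shock):
        self.p_shock[action] += self.alpha * (float(shock) - self.p_shock[action])

    def choose(self):
        return int(np.argmin(self.p_shock))  # plan: pick the action with the lowest shock risk
```

The practical difference appears when outcome contingencies change: the model-based agent can replan as soon as its p_shock estimates are revised, whereas the model-free agent must re-experience outcomes to overwrite its cached values.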


2020, Vol 117 (44), pp. 27719-27730
Author(s): Patricia L. Lockwood, Miriam C. Klein-Flügge, Ayat Abdurahman, Molly J. Crockett

Moral behavior requires learning how our actions help or harm others. Theoretical accounts of learning propose a key division between “model-free” algorithms that cache outcome values in actions and “model-based” algorithms that map actions to outcomes. Here, we tested the engagement of these mechanisms and their neural basis as participants learned to avoid painful electric shocks for themselves and a stranger. We found that model-free decision making was prioritized when learning to avoid harming others compared to oneself. Model-free prediction errors for others relative to self were tracked in the thalamus/caudate. At the time of choice, neural activity consistent with model-free moral learning was observed in subgenual anterior cingulate cortex (sgACC), and switching after harming others was associated with stronger connectivity between sgACC and dorsolateral prefrontal cortex. Finally, model-free moral learning varied with individual differences in moral judgment. Our findings suggest moral learning favors efficiency over flexibility and is underpinned by specific neural mechanisms.


2020
Author(s): Moritz Moeller, Jan Grohn, Sanjay Manohar, Rafal Bogacz

Reinforcement learning theories propose that humans choose based on the estimated values of available options, and that they learn from rewards by reducing the difference between the experienced and expected value. In the brain, such prediction errors are broadcasted by dopamine. However, choices are not only influenced by expected value, but also by risk. Like reinforcement learning, risk preferences are modulated by dopamine: enhanced dopamine levels induce risk-seeking. Learning and risk preferences have so far been studied independently, even though it is commonly assumed that they are (partly) regulated by the same neurotransmitter. Here, we use a novel learning task to look for prediction-error-induced risk seeking in human behavior and pupil responses. We find that prediction errors are positively correlated with risk preferences in imminent choices. Physiologically, this effect is indexed by pupil dilation: only participants whose pupil response indicates that they experienced the prediction error also show the behavioral effect.
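The central behavioural claim, that a prediction error on one trial nudges risk preference on the next, can be written down as a simple choice rule like the one below; the utility form and the coupling parameter are illustrative assumptions.

```python
import numpy as np

def p_choose_risky(last_pe, safe_value=0.5, risky_mean=0.5, risky_spread=0.5,
                   risk_weight=0.4, beta=5.0):
    """Probability of gambling, with the most recent prediction error shifting risk appetite."""
    risk_bonus = risk_weight * last_pe * risky_spread   # positive PEs inflate the risky option
    u_safe = safe_value
    u_risky = risky_mean + risk_bonus
    return 1.0 / (1.0 + np.exp(-beta * (u_risky - u_safe)))

print(p_choose_risky(last_pe=+0.8))   # more likely to gamble after a positive prediction error
print(p_choose_risky(last_pe=-0.8))   # less likely after a negative one
```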


2018
Author(s): J. Haarsma, P.C. Fletcher, H. Ziauddeen, T.J. Spencer, K.M.J. Diederen, ...

The predictive coding framework construes the brain as performing a specific form of hierarchical Bayesian inference. In this framework, the precision of cortical unsigned prediction error (surprise) signals is proposed to play a key role in learning and decision-making, and to be controlled by dopamine. To test this hypothesis, we re-analysed an existing dataset from healthy individuals who received a dopamine agonist, antagonist or placebo and who performed an associative learning task under different levels of outcome precision. Computational reinforcement-learning modelling of behaviour provided support for precision-weighting of unsigned prediction errors. Functional MRI revealed coding of unsigned prediction errors relative to their precision in bilateral superior frontal gyri and dorsal anterior cingulate. Cortical precision-weighting was (i) perturbed by the dopamine antagonist sulpiride, and (ii) associated with task performance. These findings have important implications for understanding the role of dopamine in reinforcement learning and predictive coding in health and illness.
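The precision-weighting idea being tested can be sketched as follows, with the unsigned prediction error scaled by an estimate of outcome precision before it drives learning; the particular weighting scheme is an illustrative assumption rather than the model fitted in the paper.

```python
import numpy as np

def precision_weighted_learning(outcomes, kappa=0.3, var_lr=0.1):
    """Track an outcome mean, scaling updates by the estimated precision of outcomes."""
    mean, variance = 0.0, 1.0
    surprise_signals = []
    for o in outcomes:
        pe = o - mean
        precision = 1.0 / variance
        surprise_signals.append(abs(pe) * precision)   # precision-weighted unsigned PE
        lr = kappa * precision / (precision + 1.0)     # more precise outcomes, larger updates
        mean += lr * pe
        variance += var_lr * (pe ** 2 - variance)      # running estimate of outcome noise
    return mean, surprise_signals

rng = np.random.default_rng(0)
mean, surprise = precision_weighted_learning(rng.normal(2.0, 0.5, size=40))
print(round(mean, 2))   # converges towards the true outcome mean of 2.0
```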


eLife, 2018, Vol 7
Author(s): Ida Momennejad, A Ross Otto, Nathaniel D Daw, Kenneth A Norman

Making decisions in sequentially structured tasks requires integrating distally acquired information. The extensive computational cost of such integration challenges planning methods that integrate online, at decision time. Furthermore, it remains unclear whether ‘offline’ integration during replay supports planning, and if so which memories should be replayed. Inspired by machine learning, we propose that (a) offline replay of trajectories facilitates integrating representations that guide decisions, and (b) unsigned prediction errors (uncertainty) trigger such integrative replay. We designed a 2-step revaluation task for fMRI, whereby participants needed to integrate changes in rewards with past knowledge to optimally replan decisions. As predicted, we found that (a) multi-voxel pattern evidence for off-task replay predicts subsequent replanning; (b) neural sensitivity to uncertainty predicts subsequent replay and replanning; (c) off-task hippocampus and anterior cingulate activity increase when revaluation is required. These findings elucidate how the brain leverages offline mechanisms in planning and goal-directed behavior under uncertainty.
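The proposal that offline replay, triggered by unsigned prediction errors, supports replanning has a close analogue in Dyna-style reinforcement learning, where remembered transitions are replayed to update cached values between decisions. The sketch below is only an illustration of that analogy; the task structure, replay-gating rule, and parameters are illustrative assumptions.

```python
import random

def offline_replay(q, memory, replay_steps, alpha=0.5, gamma=0.9):
    """Replay remembered transitions offline and update cached action values."""
    for _ in range(replay_steps):
        (s, a), (r, s_next) = random.choice(list(memory.items()))
        target = r if s_next is None else r + gamma * max(q[s_next].values())
        q[s][a] += alpha * (target - q[s][a])
    return q

random.seed(0)

# Two-step task: from state 0, action 0 leads to state 1 and action 1 leads to state 2.
q = {0: {0: 0.0, 1: 0.0}, 1: {0: 0.0}, 2: {0: 0.0}}
memory = {(0, 0): (0.0, 1), (0, 1): (0.0, 2), (1, 0): (1.0, None), (2, 0): (0.0, None)}

# Revaluation: state 2 now pays more than state 1; the unsigned PE gates how much to replay.
memory[(2, 0)] = (2.0, None)
unsigned_pe = abs(2.0 - q[2][0])
q = offline_replay(q, memory, replay_steps=int(30 * (1 + unsigned_pe)))
print(max(q[0], key=q[0].get))   # after enough replay, the first-step choice is replanned to 1
```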


2019
Author(s): Roberto Viviani, Lisa Dommes, Julia Bosch, Michael Steffens, Anna Paul, ...

Theoretical models of dopamine function stemming from reinforcement learning theory have emphasized the importance of prediction errors, which signal changes in the expectation of impending rewards. Much less is known about the effects of mean reward rates, which may be of motivational significance due to their role in computing the optimal effort put into exploiting reward opportunities. Here, we used a reinforcement learning model to design three functional neuroimaging studies and disentangle the effects of changes in reward expectations and mean reward rates, showing recruitment of specific regions in the brainstem regardless of prediction errors. While changes in reward expectations activated ventral striatal areas as in previous studies, mean reward rates preferentially modulated the substantia nigra/ventral tegmental area, deep layers of the superior colliculi, and a posterior pontomesencephalic region. These brainstem structures may work together to set motivation and attentional effort levels according to perceived reward opportunities.
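To make the distinction concrete, the sketch below separates the two regressors such designs try to disentangle: a trial-wise prediction error tied to the current expectation, and a slowly evolving mean reward rate computed as an exponential moving average. The time constants are illustrative assumptions, not the values used in the studies.

```python
import numpy as np

def decompose_reward_signals(rewards, alpha_value=0.3, alpha_rate=0.05):
    """Separate fast, cue-specific prediction errors from a slow mean reward rate."""
    expectation, reward_rate = 0.0, 0.0
    prediction_errors, reward_rates = [], []
    for r in rewards:
        pe = r - expectation
        expectation += alpha_value * pe                 # fast expectation about the current cue
        reward_rate += alpha_rate * (r - reward_rate)   # slowly integrated average reward rate
        prediction_errors.append(pe)
        reward_rates.append(reward_rate)
    return np.array(prediction_errors), np.array(reward_rates)

rng = np.random.default_rng(2)
pes, rates = decompose_reward_signals((rng.random(200) < 0.7).astype(float))
print(round(rates[-1], 2))   # approaches the programmed mean reward rate of ~0.7
```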


Corpora, 2019, Vol 14 (3), pp. 351-378
Author(s): Isabel Durán-Muñoz

This paper attempts to shed some light on the importance of adjectives in the linguistic characterisation of tourism discourse in English in general, and of adventure tourism in particular, and to demonstrate how significantly their usage differs from the general language. It seeks to understand the role that adjectives play in this specific subdomain and to contribute to the linguistic characterisation of tourism discourse in this respect. It also aims to confirm or reject previous assumptions regarding the use, and frequency of use, of adjectives and adjectival patterns in this specialised domain and, in general, to promote the study of adjectivisation in domain-specific discourses. To do so, it proposes a corpus-based study that measures the keyness of adjectives in promotional texts of the adventure tourism domain in English by comparing their usage in the compiled corpus to the two most relevant reference corpora of English (COCA and the BNC).
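Keyness in corpus studies of this kind is commonly computed with Dunning's log-likelihood statistic, comparing a word's frequency in the study corpus against a reference corpus such as the BNC or COCA. The sketch below shows that calculation; the frequency figures are invented purely for illustration and are not drawn from the paper's corpora.

```python
import math

def log_likelihood_keyness(freq_study, size_study, freq_ref, size_ref):
    """Dunning log-likelihood keyness of a word in a study corpus versus a reference corpus."""
    total_freq = freq_study + freq_ref
    total_size = size_study + size_ref
    expected_study = size_study * total_freq / total_size
    expected_ref = size_ref * total_freq / total_size
    ll = 0.0
    if freq_study > 0:
        ll += freq_study * math.log(freq_study / expected_study)
    if freq_ref > 0:
        ll += freq_ref * math.log(freq_ref / expected_ref)
    return 2.0 * ll

# Hypothetical counts for an adjective in a 1M-word adventure-tourism corpus vs a 100M-word reference.
print(round(log_likelihood_keyness(250, 1_000_000, 900, 100_000_000), 1))
```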

