Human endogenous oxytocin and its neural correlates show adaptive responses to social touch based on recent social context

2021 · Author(s): Linda Handlin, Giovanni Novembre, Helene Lindholm, Robin Kampe, India Morrison

Both oxytocin (OT) and touch are key mediators of social attachment. In rodents, tactile stimulation elicits endogenous release of OT, potentially facilitating attachment and other forms of prosocial behavior, yet the relationship between endogenous OT and neural modulation remains unexplored in humans. Using serial sampling of plasma hormone levels during functional neuroimaging, we show that the contextual circumstances of social touch not only facilitate or inhibit current hormonal and brain responses but also calibrate later responses. Namely, touch from a romantic partner enhanced subsequent OT release during touch from an unfamiliar stranger, yet OT responses to partner touch were dampened following stranger touch. Hypothalamus and dorsal raphe activation reflected plasma OT changes during the initial interaction. In the subsequent social interaction, OT modulation depended on the previous interaction and was mediated by precuneus and parietal-temporal cortex pathways, including a region of medial prefrontal cortex that also covaried with plasma cortisol. These findings demonstrate that hormonal neuromodulation during successive human social interactions is adaptive to social context, and they point to mechanisms that flexibly calibrate receptivity in social encounters.

2020 · Author(s): Tianyu Gao, Yue Pu, Jingyi Zhou, Guo Zheng, Yuqing Zhou, ...

Abstract: Death awareness influences multiple aspects of human lives, but its psychological constructs and underlying brain mechanisms remain unclear. We address these questions by measuring behavioral and brain responses to images of human skulls. We show that skulls, relative to control stimuli, delay responses to life-related words but speed responses to death-related words. Skulls compared to the control stimuli induce early deactivations in the posterior ventral temporal cortex, followed by activations in the posterior and anterior ventral temporal cortices. The early and late neural modulations by perceived skulls respectively predict skull-induced changes of behavioral responses to life- and death-related words, and the early neural modulation further predicts death anxiety. Our findings decompose skull-induced death awareness into two-stage neural processes of a lifeless state of a former life.

One-sentence summary: Behavioral and brain imaging findings decompose skull-induced death awareness into two-stage neural processes of a lifeless state of a former life.


2010 · Vol 24 (2) · pp. 76-82 · Author(s): Martin M. Monti, Adrian M. Owen

Recent evidence has suggested that functional neuroimaging may play a crucial role in assessing residual cognition and awareness in brain injury survivors. In particular, brain insults that compromise the patient’s ability to produce motor output may render standard clinical testing ineffective. Indeed, if patients were aware but unable to signal their awareness via motor behavior, they would be impossible to distinguish, at the bedside, from vegetative patients. Considering the alarming rate with which minimally conscious patients are misdiagnosed as vegetative, and the severe medical, legal, and ethical implications of such decisions, novel tools are urgently required to complement current clinical-assessment protocols. Functional neuroimaging may be particularly suited to this aim because it provides a window on brain function without requiring patients to produce any motor output. Specifically, the possibility of detecting signs of willful behavior by directly observing brain activity (i.e., “brain behavior”), rather than motor output, allows this approach to reach beyond what is observable at the bedside with standard clinical assessments. In addition, several studies have already established neuroimaging protocols that can distinguish automatic brain responses from willful brain activity, making it possible to employ willful brain activations as an index of awareness. Certainly, neuroimaging in patient populations faces some theoretical and experimental difficulties, but willful, task-dependent brain activation may be the only way to discriminate the conscious, but immobile, patient from the unconscious one.


2020 · Vol 124 (6) · pp. 1560-1570 · Author(s): Paolo Papale, Andrea Leo, Giacomo Handjaras, Luca Cecchetti, Pietro Pietrini, ...

There are several possible ways of characterizing the shape of an object. Which shape description best captures our brain responses while we passively perceive objects? Here, we employed three competing shape models to explain brain representations when viewing real objects. We found that object shape is encoded in a multidimensional fashion and is thus defined by the interaction of multiple features.
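As an illustration of this kind of model comparison, the sketch below uses representational similarity analysis to ask which of several candidate shape descriptions best matches a set of brain responses. The feature matrices, model names, and brain patterns are random placeholders rather than the study's data, and RSA is assumed here only as one plausible comparison method, not necessarily the authors' pipeline.

```python
# A minimal RSA sketch: compare candidate shape-model RDMs to a neural RDM.
# All data below are hypothetical stand-ins.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_objects = 40

# Hypothetical object-by-feature matrices for three candidate shape models
# and an object-by-voxel matrix of brain responses.
shape_models = {
    "silhouette": rng.normal(size=(n_objects, 100)),
    "medial_axis": rng.normal(size=(n_objects, 50)),
    "curvature": rng.normal(size=(n_objects, 20)),
}
brain_patterns = rng.normal(size=(n_objects, 500))

# Representational dissimilarity matrices (condensed form).
neural_rdm = pdist(brain_patterns, metric="correlation")

for name, features in shape_models.items():
    model_rdm = pdist(features, metric="correlation")
    rho, _ = spearmanr(model_rdm, neural_rdm)
    print(f"{name}: Spearman rho = {rho:.3f}")
```

With real data, the model whose RDM correlates most strongly with the neural RDM would be the shape description that best explains the brain representation; combining models would probe the multidimensional encoding the abstract describes.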


2003 · Vol 9 (6) · pp. 446-455 · Author(s): Simon Surguladze, Paul Keedwell, Mary Phillips

Three main approaches are used to explore the neural correlates of mood disorder: neuropsychological studies, neuroimaging studies and post-mortem investigations. Lesion studies implicate disturbances in the frontal lobe, basal ganglia, striatum and anterior temporal cortex. Early neurocognitive and neuropathological investigations led to a ‘hypofrontality’ hypothesis of unipolar and bipolar depression, but functional neuroimaging has revealed a more complex picture. Thus, increased metabolism may occur in the subgenual anterior cingulate gyrus in resting-state studies of depression and during sad-mood induction, and antidepressants may reduce this activity. Amygdala hyperactivation is also associated with affective disorders. Task-related studies reveal abnormal biases in memory, the experience of pleasure and the perception of emotional facial expressions. It remains unclear whether these abnormalities in brain activation represent state or trait characteristics of affective disorders.


2018 · Author(s): Anthony Stigliani, Brianna Jeska, Kalanit Grill-Spector

ABSTRACT: How do high-level visual regions process the temporal aspects of our visual experience? While the temporal sensitivity of early visual cortex has been studied with fMRI in humans, temporal processing in high-level visual cortex is largely unknown. By modeling neural responses with millisecond precision in separate sustained and transient channels, and introducing a flexible encoding framework that captures differences in neural temporal integration time windows and response nonlinearities, we predict fMRI responses across visual cortex for stimuli ranging from 33 ms to 20 s. Using this innovative approach, we discovered that lateral category-selective regions respond to visual transients associated with stimulus onsets and offsets but not to sustained visual information. Thus, lateral category-selective regions compute moment-to-moment visual transitions, but not stable features of the visual input. In contrast, ventral category-selective regions respond to both sustained and transient components of the visual input. Responses to sustained stimuli exhibit adaptation, whereas responses to transient stimuli are surprisingly larger for stimulus offsets than onsets. This large offset transient response may reflect a memory trace of the stimulus when it is no longer visible, whereas the onset transient response may reflect rapid processing of new items. Together, these findings reveal previously unconsidered, fundamental temporal mechanisms that distinguish visual streams in the human brain. Importantly, our results underscore the promise of modeling brain responses with millisecond precision to understand the underlying neural computations.

AUTHOR SUMMARY: How does the brain encode the timing of our visual experience? Using functional magnetic resonance imaging (fMRI) and a temporal encoding model with millisecond resolution, we discovered that visual regions in the lateral and ventral processing streams fundamentally differ in their temporal processing of the visual input. Regions in lateral temporal cortex process visual transients associated with stimulus onsets and offsets but not the unchanging aspects of the visual input; that is, they compute moment-to-moment changes in the visual input. In contrast, regions in ventral temporal cortex process both stable and transient components, with the former exhibiting adaptation. Surprisingly, in these ventral regions responses to stimulus offsets were larger than to onsets. We suggest that the former may reflect a memory trace of the stimulus when it is no longer visible, and the latter may reflect rapid processing of new items at stimulus onset. Together, these findings (i) reveal a fundamental temporal mechanism that distinguishes visual streams and (ii) highlight both the importance and utility of modeling brain responses with millisecond precision to understand the temporal dynamics of neural computations in the human brain.
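The following is a minimal sketch, under simplifying assumptions, of a two-channel (sustained plus transient) temporal encoding model of the general kind described above: neural predictors are built at millisecond resolution, convolved with a hemodynamic response function, downsampled to the fMRI sampling rate, and fit with linear regression. The stimulus timing, HRF shape, channel definitions, and simulated voxel data are illustrative, not the published model.

```python
# Two-channel temporal encoding sketch: sustained vs. transient predictors
# built at 1 ms resolution, convolved with an HRF, sampled at the TR, and fit
# with ordinary least squares. All quantities are hypothetical.
import numpy as np
from scipy.signal import fftconvolve
from scipy.stats import gamma

dt = 0.001                                  # 1 ms resolution
run_dur = 200.0                             # seconds (hypothetical run length)
t = np.arange(0, run_dur, dt)

# Stimulus boxcar: hypothetical 2 s presentations every 10 s.
boxcar = np.zeros_like(t)
for onset in np.arange(5.0, run_dur - 5.0, 10.0):
    boxcar[(t >= onset) & (t < onset + 2.0)] = 1.0

# Channel 1: sustained response follows the stimulus boxcar.
sustained = boxcar
# Channel 2: transient response at onsets and offsets (rectified derivative).
transient = np.abs(np.diff(boxcar, prepend=0.0))

# Simple double-gamma HRF sampled at ms resolution.
h_t = np.arange(0, 30, dt)
hrf = gamma.pdf(h_t, 6) - 0.35 * gamma.pdf(h_t, 12)
hrf /= hrf.sum()

def to_bold(channel, tr=1.0):
    """Convolve a ms-resolution channel with the HRF, rescale, and sample at TRs."""
    bold = fftconvolve(channel, hrf)[: len(t)]
    bold /= bold.max()                      # put both channels on a common scale
    return bold[:: int(round(tr / dt))]

X = np.column_stack([to_bold(sustained), to_bold(transient), np.ones(int(run_dur))])

# Hypothetical voxel time series; in practice this is the measured fMRI signal.
rng = np.random.default_rng(1)
y = X @ np.array([1.0, 2.5, 0.0]) + rng.normal(scale=0.2, size=len(X))

betas, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"sustained beta: {betas[0]:.2f}, transient beta: {betas[1]:.2f}")
```

In this framing, a region whose transient beta dominates would resemble the lateral category-selective profile, whereas comparable sustained and transient betas would resemble the ventral profile; the published framework additionally models integration windows and response nonlinearities, which are omitted here for brevity.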


2019 · Author(s): Kamila M. Jozwik, Michael Lee, Tiago Marques, Martin Schrimpf, Pouya Bashivan

Image features computed by specific convolutional artificial neural networks (ANNs) can be used to make state-of-the-art predictions of primate ventral stream responses to visual stimuli. However, in addition to selecting the specific ANN and layer that is used, the modeler makes other choices in preprocessing the stimulus image and generating brain predictions from ANN features. The effect of these choices on brain predictivity is currently underexplored. Here, we directly evaluated many of these choices by performing a grid search over network architectures, layers, image preprocessing strategies, feature pooling mechanisms, and the use of dimensionality reduction. Our goal was to identify model configurations that produce responses to visual stimuli that are most similar to human neural representations, as measured by human fMRI and MEG responses. In total, we evaluated more than 140,338 model configurations. We found that specific configurations of CORnet-S best predicted fMRI responses in early visual cortex, whereas CORnet-R and SqueezeNet models best predicted fMRI responses in inferior temporal cortex. We also found specific configurations of VGG-16 and CORnet-S models that best predicted the MEG responses. In addition, we observed that downsizing input images to ~50-75% of the input tensor size led to better-performing models compared to no downsizing (the default choice in most brain models for vision). Taken together, we present evidence that brain predictivity is sensitive not only to which ANN architecture and layer is used, but also to choices in image preprocessing and feature postprocessing, and these choices should be further explored.
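A minimal sketch of this grid-search logic follows: every combination of architecture, layer, input size, pooling, and dimensionality reduction is scored by how well a cross-validated linear map from its features predicts brain responses. The `extract_features` function is a hypothetical placeholder for running images through the named ANN layer, and the brain data are random stand-ins; this is not the authors' code.

```python
# Grid search over hypothetical model configurations, scored by cross-validated
# ridge-regression predictivity of (stand-in) brain responses.
import itertools
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_images, n_voxels = 200, 100
brain_data = rng.normal(size=(n_images, n_voxels))   # stand-in for fMRI/MEG data

def extract_features(architecture, layer, image_size, pooling):
    """Hypothetical stand-in for running images through an ANN layer
    (arguments are ignored in this stub)."""
    n_features = 512 if pooling == "avg" else 2048
    return rng.normal(size=(n_images, n_features))

grid = itertools.product(
    ["CORnet-S", "VGG-16", "SqueezeNet"],     # architecture
    ["early", "middle", "late"],              # layer
    [0.5, 0.75, 1.0],                         # input downsizing factor
    ["avg", "none"],                          # feature pooling
    [True, False],                            # PCA dimensionality reduction
)

results = {}
for arch, layer, size, pooling, use_pca in grid:
    feats = extract_features(arch, layer, size, pooling)
    if use_pca:
        feats = PCA(n_components=50).fit_transform(feats)
    # Predictivity: mean cross-validated R^2 of a ridge map onto one voxel
    # (averaging over all voxels is omitted to keep the sketch short).
    score = cross_val_score(Ridge(alpha=1.0), feats, brain_data[:, 0], cv=5).mean()
    results[(arch, layer, size, pooling, use_pca)] = score

best = max(results, key=results.get)
print("best configuration:", best, "score:", round(results[best], 3))
```

With real images, features, and recordings, the same loop structure would expose how much predictivity hinges on preprocessing and postprocessing choices rather than on the architecture and layer alone.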


2020 · Author(s): Blake Johnson, Qinqing Meng, Ioanna Anastasopoulou, Louise Ratko, Tunde Szalay, ...

Abstract: Articulography and functional neuroimaging are two major tools for studying the neurobiology of speech production. Until now, however, it has generally not been possible to use both in the same experimental setup because of technical incompatibilities between the two methodologies. Here we describe results from a novel articulography system dubbed Magneto-articulography for the Assessment of Speech Kinematics (MASK), used for the first time to obtain kinematic profiles of oro-facial movements during speech together with concurrent magnetoencephalographic (MEG) measurements of neuromotor brain activity. MASK was used to characterise speech kinematics in a healthy adult, and the results were compared to measurements from the same participant with a conventional electromagnetic articulography (EMA) setup. We also characterised speech movement kinematics with MASK in a group of ten typically developing children, aged 8-12 years. Analyses targeted the gestural landmarks of the utterances /ida/, /ila/ and reiterated productions of /pataka/. These results demonstrate that the MASK technique can be used to reliably characterise movement profiles and kinematic parameters that reflect the development of speech motor control, together with MEG measurements of brain responses from speech sensorimotor cortex. This new capability sets the stage for cross-disciplinary efforts to understand the developmental neurobiology of human speech production.
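The sketch below illustrates, with simulated data, how kinematic parameters of the kind mentioned above (movement onset, offset, peak velocity, amplitude, duration) can be extracted from a single articulatory position trace using a velocity-threshold criterion. The sampling rate, trajectory, and 20%-of-peak threshold are illustrative assumptions, not MASK's actual processing pipeline.

```python
# Gestural landmark extraction from a simulated articulatory position trace.
import numpy as np

fs = 250.0                                   # Hz, hypothetical sampling rate
t = np.arange(0, 1.0, 1 / fs)

# Simulated opening-closing gesture: a smooth 10 mm excursion.
position = 10.0 * np.exp(-((t - 0.5) ** 2) / (2 * 0.08 ** 2))

velocity = np.gradient(position, 1 / fs)     # mm/s
speed = np.abs(velocity)

# Landmarks via a 20%-of-peak-velocity threshold (an assumed convention here).
threshold = 0.2 * speed.max()
moving = speed > threshold
onset_idx = np.argmax(moving)                           # first supra-threshold sample
offset_idx = len(moving) - np.argmax(moving[::-1]) - 1  # last supra-threshold sample

print(f"movement onset:  {t[onset_idx]:.3f} s")
print(f"movement offset: {t[offset_idx]:.3f} s")
print(f"peak velocity:   {speed.max():.1f} mm/s")
print(f"amplitude:       {position.max() - position[onset_idx]:.1f} mm")
print(f"duration:        {t[offset_idx] - t[onset_idx]:.3f} s")
```

Parameters like these, computed per gesture and per participant, are the kind of kinematic profiles that can then be related to concurrently recorded MEG responses from speech sensorimotor cortex.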


2013 · Vol 25 (11) · pp. 1824-1850 · Author(s): Krist A. Noonan, Elizabeth Jefferies, Maya Visser, Matthew A. Lambon Ralph

Semantic cognition requires a combination of semantic representations and executive control processes to direct activation in a task- and time-appropriate fashion [Jefferies, E., & Lambon Ralph, M. A. Semantic impairment in stroke aphasia versus semantic dementia: A case-series comparison. Brain, 129, 2132–2147, 2006]. We undertook a formal meta-analysis to investigate which regions within the large-scale semantic network are specifically associated with the executive component of semantic cognition. Previous studies have described in detail the role of left ventral pFC in semantic regulation. We examined 53 studies that contrasted semantic tasks with high > low executive requirements to determine whether cortical regions beyond the left pFC show the same response profile to executive semantic demands. Our findings revealed that right pFC, posterior middle temporal gyrus (pMTG) and dorsal angular gyrus (bordering intraparietal sulcus) were also consistently recruited by executively demanding semantic tasks, demonstrating patterns of activation that were highly similar to the left ventral pFC. These regions overlap with the lesions in aphasic patients who exhibit multimodal semantic impairment because of impaired regulatory control (semantic aphasia)—providing important convergence between functional neuroimaging and neuropsychological studies of semantic cognition. Activation in dorsal angular gyrus and left ventral pFC was consistent across all types of executive semantic manipulation, regardless of whether the task was receptive or expressive, whereas pMTG activation was only observed for manipulation of control demands within receptive tasks. Second, we contrasted executively demanding tasks tapping semantics and phonology. Our findings revealed substantial overlap between the two sets of contrasts within left ventral pFC, suggesting this region underpins domain-general control mechanisms. In contrast, we observed relative specialization for semantic control within pMTG as well as the most ventral aspects of left pFC (BA 47), consistent with our proposal of a distributed network underpinning semantic control.


2001 · Vol 13 (6) · pp. 754-765 · Author(s): A. L. Giraud, C. J. Price

Several previous functional imaging experiments have demonstrated that auditory presentation of speech, relative to tones or scrambled speech, activates the superior temporal sulci (STS) bilaterally. In this study, we attempted to segregate the neural responses to phonological, lexical, and semantic input by contrasting activation elicited by heard words, meaningless syllables, and environmental sounds. Inevitable differences between the duration and amplitude of each stimulus type were controlled with auditory noise bursts matched to each activation stimulus. Half the subjects were instructed to say “okay” in response to presentation of all stimuli. The other half repeated back the words and syllables, named the source of the sounds, and said “okay” to the control stimuli (noise bursts). We looked for stimulus effects that were consistent across tasks. The results revealed that central regions in the STS were equally responsive to speech (words and syllables) and familiar sounds, whereas the posterior and anterior regions of the left superior temporal gyrus were more active for speech. The effect of semantic input was small but revealed more activation in the inferior temporal cortex for words and familiar sounds than for syllables and noise. In addition, words (relative to syllables, sounds, and noise) enhanced activation in temporo-parietal areas that have previously been linked to modality-independent semantic processing. Thus, in cognitive terms, we dissociate phonological (speech) and semantic responses and propose that word specificity arises from functional integration among shared phonological and semantic areas.

