When gut feelings teach the brain to fear pain: Context-dependent activation of the central fear network in a novel interoceptive conditioning paradigm

NeuroImage ◽  
2021 ◽  
pp. 118229
Author(s):  
Adriane Icenhour ◽  
Liubov Petrakova ◽  
Nelly Hazzan ◽  
Nina Theysohn ◽  
Christian J. Merz ◽  
...  
2021 ◽  
pp. 1-12
Author(s):  
Joonkoo Park ◽  
Sonia Godbole ◽  
Marty G. Woldorff ◽  
Elizabeth M. Brannon

Abstract Whether and how the brain encodes discrete numerical magnitude differently from continuous nonnumerical magnitude is hotly debated. In a previous set of studies, we orthogonally varied numerical (numerosity) and nonnumerical (size and spacing) dimensions of dot arrays and demonstrated a strong modulation of early visual evoked potentials (VEPs) by numerosity and not by nonnumerical dimensions. Although very little is known about the brain's response to systematic changes in continuous dimensions of a dot array, some authors intuit that the visual processing stream must be more sensitive to continuous magnitude information than to numerosity. To address this possibility, we measured VEPs of participants viewing dot arrays that changed exclusively in one nonnumerical magnitude dimension at a time (size or spacing) while holding numerosity constant and compared this to a condition where numerosity was changed while holding size and spacing constant. We found reliable but small neural sensitivity to exclusive changes in size and spacing; however, changing numerosity elicited a much more robust modulation of the VEPs. Together with previous work, these findings suggest that sensitivity to magnitude dimensions in early visual cortex is context dependent: The brain is moderately sensitive to changes in size and spacing when numerosity is held constant, but sensitivity to these continuous variables diminishes to a negligible level when numerosity is allowed to vary at the same time. Neurophysiological explanations for the encoding and context dependency of numerical and nonnumerical magnitudes are proposed within the framework of neuronal normalization.
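The neuronal-normalization account invoked above can be illustrated with a minimal sketch. The Python snippet below uses hypothetical drive values (none of the numbers come from the study) to show how divisive normalization makes a unit's sensitivity to one magnitude dimension depend on how much drive the other dimensions contribute, i.e. on context.

```python
import numpy as np

def normalized_response(drives, sigma=1.0):
    """Divisive normalization: each channel's drive is divided by the pooled drive.

    Toy instance of the normalization framework: how strongly one dimension
    modulates the output depends on the total drive contributed by the pool.
    """
    drives = np.asarray(drives, dtype=float)
    return drives / (sigma + drives.sum())

# Hypothetical feedforward drives for [numerosity, size, spacing] channels.
baseline = normalized_response([1.0, 1.0, 1.0])
size_alone = normalized_response([1.0, 2.0, 1.0])      # size varies, numerosity fixed
size_with_num = normalized_response([3.0, 2.0, 1.0])    # size varies while numerosity adds drive

# The same size increment yields a smaller normalized change when the
# numerosity channel adds drive to the normalization pool.
print(size_alone[1] - baseline[1])
print(size_with_num[1] - normalized_response([3.0, 1.0, 1.0])[1])
```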


Rhetorik ◽  
2018 ◽  
Vol 37 (1) ◽  
pp. 68-93
Author(s):  
Markus H. Woerner ◽  
Ricca Edmondson

Abstract Using an understanding of rhetoric as a method of communicative reasoning capable of providing grounds for conviction in those to whom it is addressed, this article argues that the formation of medical diagnoses shares a structure with Aristotle’s account of the rhetorical syllogism (the enthymeme). Here the argument itself (logos) is welded together with characterological elements (ethos) and emotions (pathos), so that each affects the operation of the others. In the initial three sections of the paper, we contend, first, that diagnoses, as verdictive performatives, differ from scientific claims in being irreducibly personal and context-dependent; secondly, that they fit the structure of voluntary action as analysed by Aristotle and Aquinas; thirdly, that as practical syllogisms they differ from theoretical syllogisms, for example in taking effect in action, being ›addressed‹, and being intrinsically embedded in wider contexts of medical communication and practices. In the remaining sections we apply this account to textual evidence about diagnosis, drawing on work by the brain surgeon Henry Marsh. A rhetorical analysis of his observations on the formation of diagnostic opinions in situ illuminates how moral, social and emotional features are fused with the cognitive aspects of medical judgement, making or marring how diagnoses and treatment are enacted. In other words, a philosophical-rhetorical account of diagnosis can help us to appreciate how medical diagnosis takes effect. We briefly conclude with some implications of our work for how diagnostic processes could in practice be better supported.


2021 ◽  
Author(s):  
Javier G. Orlandi ◽  
Mohammad Abdolrahmani ◽  
Ryo Aoki ◽  
Dmitry R. Lyamzin ◽  
Andrea Benucci

Abstract Choice information appears in the brain as distributed signals with top-down and bottom-up components that together support decision-making computations. In sensory and associative cortical regions, the presence of choice signals, their strength, and area specificity are known to be elusive and changeable, limiting a cohesive understanding of their computational significance. In this study, examining the mesoscale activity in mouse posterior cortex during a complex visual discrimination task, we found that broadly distributed choice signals defined a decision variable in a low-dimensional embedding space of multi-area activations, particularly along the ventral visual stream. The subspace they defined was near-orthogonal to concurrently represented sensory and motor-related activations, and it was modulated by task difficulty and contextually by the animals’ attention state. To mechanistically relate choice representations to decision-making computations, we trained recurrent neural networks with the animals’ choices and found an equivalent decision variable whose context-dependent dynamics agreed with that of the neural data. In conclusion, our results demonstrated an independent decision variable broadly represented in the posterior cortex, controlled by task features and cognitive demands. Its dynamics reflected decision computations, possibly linked to context-dependent feedback signals used for probabilistic-inference computations in variable animal-environment interactions.
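As a rough illustration of the decision-variable idea, the following sketch (simulated data and a generic logistic decoder, not the authors' pipeline) estimates a choice axis from population activity and reads out a trial-by-trial decision variable as the projection onto that axis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_trials, n_units = 400, 60

choice = rng.integers(0, 2, n_trials)          # hypothetical left/right choices
choice_pattern = rng.normal(size=n_units)       # population pattern carrying choice information
# Simulated trial-by-unit activity: a choice-dependent pattern plus noise.
X = np.outer(choice - 0.5, choice_pattern) + rng.normal(scale=1.0, size=(n_trials, n_units))

# Decode choice from activity; the decoder's weight vector serves as the choice axis.
clf = LogisticRegression(max_iter=1000).fit(X, choice)
choice_axis = clf.coef_.ravel()
choice_axis /= np.linalg.norm(choice_axis)

# Decision variable: signed distance of each trial's activity along the choice axis.
dv = X @ choice_axis
print("mean DV by choice:", dv[choice == 0].mean(), dv[choice == 1].mean())
```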


2015 ◽  
Vol 10 (1) ◽  
pp. 91-101 ◽  
Author(s):  
Yuwen Chung-Davidson ◽  
Huiyong Wang ◽  
Anne M. Scott ◽  
Weiming Li

2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Eugenio Manassero ◽  
Ludovica Mana ◽  
Giulia Concina ◽  
Annamaria Renna ◽  
Benedetto Sacchetti

Abstract One strategy for addressing new potential dangers is to generate defensive responses to stimuli that resemble learned threats, a phenomenon called fear generalization. During a threatening experience, the brain encodes implicit and explicit memory traces. Nevertheless, studies comparing implicit and explicit response patterns to novel stimuli are lacking. Here, by adopting a discriminative threat conditioning paradigm and a two-alternative forced-choice recognition task, we found that implicit reactions were selectively elicited by the learned threat and not by a novel stimulus that was similar but perceptually discriminable. Conversely, subjects explicitly misidentified the same novel stimulus as the learned threat. This generalization response was not due to stress-related interference with learning but was instead related to the stimulus's embedded threatening value. We therefore suggest a dissociation between implicit and explicit threat recognition profiles and propose that the generalization of explicit responses stems from a flexible cognitive mechanism dedicated to the prediction of danger.


2008 ◽  
Vol 276 (1655) ◽  
pp. 279-289 ◽  
Author(s):  
Erina Hara ◽  
Lubica Kubikova ◽  
Neal A Hessler ◽  
Erich D Jarvis

Social context has been shown to have a profound influence on brain activation in a wide range of vertebrate species. Best studied in songbirds, when males sing undirected song, the level of neural activity and expression of immediate early genes (IEGs) in several song nuclei is dramatically higher or lower than when they sing directed song to other birds, particularly females. This differential social context-dependent activation is independent of auditory input and is not simply dependent on the motor act of singing. These findings suggested that the critical sensory modality driving social context-dependent differences in the brain could be visual cues. Here, we tested this hypothesis by examining IEG activation in song nuclei in hemispheres to which visual input was normal or blocked. We found that covering one eye blocked visually induced IEG expression throughout both contralateral visual pathways of the brain, and reduced activation of the contralateral ventral tegmental area, a non-visual midbrain motivation-related area affected by social context. However, blocking visual input had no effect on the social context-dependent activation of the contralateral song nuclei during female-directed singing. Our findings suggest that individual sensory modalities are not direct driving forces for the social context differences in song nuclei during singing. Rather, these social context differences in brain activation appear to depend more on the general sense that another individual is present.


2020 ◽  
Author(s):  
Doris Voina ◽  
Stefano Recanatesi ◽  
Brian Hu ◽  
Eric Shea-Brown ◽  
Stefan Mihalas

Abstract As animals adapt to their environments, their brains are tasked with processing stimuli in different sensory contexts. Whether these computations are context dependent or independent, they are all implemented in the same neural tissue. A crucial question is what neural architectures can respond flexibly to a range of stimulus conditions and switch between them. This is a particular case of flexible architecture that permits multiple related computations within a single circuit. Here, we address this question in the specific case of the visual system circuitry, focusing on context integration, defined as the integration of feedforward and surround information across visual space. We show that a biologically inspired microcircuit with multiple inhibitory cell types can switch between visual processing of the static context and the moving context. In our model, the VIP population acts as the switch and modulates the visual circuit through a disinhibitory motif. Moreover, the VIP population is efficient, requiring only a relatively small number of neurons to switch contexts. This circuit eliminates noise in videos by using appropriate lateral connections for contextual spatio-temporal surround modulation, achieving superior denoising performance compared to circuits in which only one context is learned. Our findings shed light on a minimally complex architecture that is capable of switching between two naturalistic contexts using few switching units.

Author Summary The brain processes information at all times, and much of that information is context-dependent. The visual system presents an important example: processing is ongoing, but the context changes dramatically when an animal is still vs. running. How is context-dependent information processing achieved? We take inspiration from recent neurophysiology studies on the role of distinct cell types in primary visual cortex (V1). We find that relatively few “switching units” (akin to the VIP neuron type in V1 in that they turn on and off in the running vs. still context and have connections to and from the main population) are sufficient to drive context-dependent image processing. We demonstrate this in a model of feature integration and in a test of image denoising. The underlying circuit architecture illustrates a concrete computational role for the multiple cell types under increasing study across the brain, and may inspire more flexible neurally inspired computing architectures.
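A minimal sketch of the switching motif, with made-up weights and a binary VIP-like gate (not the authors' trained model), is shown below: the gate selects which set of lateral weights shapes the population response, implementing a context switch with a single control signal.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8                                              # excitatory units
W_static = rng.normal(scale=0.1, size=(n, n))      # lateral weights for the static context
W_moving = rng.normal(scale=0.1, size=(n, n))      # lateral weights for the moving context

def circuit_response(feedforward, vip_on, steps=20):
    """Relax a small rectified-linear rate model to its steady response.

    vip_on plays the role of the switch: when the VIP-like units are active,
    the moving-context lateral weights are used; otherwise the static-context
    weights shape the surround modulation.
    """
    W = W_moving if vip_on else W_static
    r = np.zeros(n)
    for _ in range(steps):
        r = np.maximum(0.0, feedforward + W @ r)   # rectified-linear dynamics
    return r

stimulus = rng.random(n)
print("still context  :", circuit_response(stimulus, vip_on=False))
print("running context:", circuit_response(stimulus, vip_on=True))
```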


2019 ◽  
Vol 29 (11) ◽  
pp. 4850-4862 ◽  
Author(s):  
Sebastian Weissengruber ◽  
Sang Wan Lee ◽  
John P O’Doherty ◽  
Christian C Ruff

Abstract While it is established that humans use model-based (MB) and model-free (MF) reinforcement learning in a complementary fashion, much less is known about how the brain determines which of these systems should control behavior at any given moment. Here we provide causal evidence for a neural mechanism that acts as a context-dependent arbitrator between both systems. We applied excitatory and inhibitory transcranial direct current stimulation over a region of the left ventrolateral prefrontal cortex previously found to encode the reliability of both learning systems. The opposing neural interventions resulted in a bidirectional shift of control between MB and MF learning. Stimulation also affected the sensitivity of the arbitration mechanism itself, as it changed how often subjects switched between the dominant system over time. Both of these effects depended on varying task contexts that either favored MB or MF control, indicating that this arbitration mechanism is not context-invariant but flexibly incorporates information about current environmental demands.
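The arbitration idea can be sketched with a simple reliability comparison. The snippet below is illustrative only; the bias term is a stand-in for the effect of excitatory or inhibitory stimulation, not a fitted parameter, and the reliability values are hypothetical.

```python
import numpy as np

def p_model_based(rel_mb, rel_mf, bias=0.0, temperature=1.0):
    """Soft arbitration weight in favor of the model-based system.

    A sigmoid of the reliability difference assigns control probabilistically;
    the bias term shifts the balance, mimicking a stimulation-induced shift
    of control between MB and MF learning.
    """
    return 1.0 / (1.0 + np.exp(-((rel_mb - rel_mf) + bias) / temperature))

rel_mb, rel_mf = 0.7, 0.5                     # hypothetical reliability estimates
print("baseline  :", p_model_based(rel_mb, rel_mf))
print("excitatory:", p_model_based(rel_mb, rel_mf, bias=+0.5))   # shifts control toward MB
print("inhibitory:", p_model_based(rel_mb, rel_mf, bias=-0.5))   # shifts control toward MF
```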



