A somatosensory computation that unifies bodies and tools

2021 ◽  
Author(s):  
Luke Miller ◽  
Cecile Fabio ◽  
Frederique de Vignemont ◽  
Alice Roy ◽  
W. Pieter Medendorp ◽  
...  

It is often claimed that tools are embodied by the user, but whether the brain actually repurposes its body-based computations to perform similar tasks with tools is not known. A fundamental body-based computation used by the somatosensory system is trilateration: the location of touch on a limb is computed by integrating estimates of the distance between the sensory input and the limb's boundaries (e.g., the elbow and wrist of the forearm). As evidence of this computational mechanism, tactile localization on a limb is most precise near its boundaries and least precise in the middle. If the brain repurposes trilateration to localize touch on a tool, we should observe this computational signature in behavior. In a large sample of participants, we indeed found that localizing touch on a tool produced the signature of trilateration, with the highest precision close to the base and tip of the tool. A computational model of trilateration provided a good fit to the observed localization behavior. Importantly, model selection demonstrated that trilateration explained each participant's behavior better than an alternative model of localization. These results have important implications for how trilateration may be implemented by somatosensory neural populations. In sum, the present study suggests that tools are indeed embodied at a computational level, repurposing a fundamental spatial computation.
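As a rough illustration of the trilateration scheme described above, the following Python sketch fuses two distance estimates, one anchored at each boundary, whose noise is assumed to grow linearly with distance; the inverse-variance-weighted fusion is most variable in the middle of the surface and most precise near the ends. The parameter values and the linear noise model are illustrative assumptions, not the authors' fitted model.

```python
import numpy as np

def localization_variability(true_pos, length=1.0, noise_slope=0.1, n_trials=10000, rng=None):
    """Estimate touch location on a 1-D surface (limb or tool) by fusing two
    distance estimates, one from each boundary (e.g., elbow/base and wrist/tip).
    Noise in each estimate is assumed to grow linearly with distance."""
    rng = np.random.default_rng() if rng is None else rng
    d_base = true_pos              # distance from the proximal boundary
    d_tip = length - true_pos      # distance from the distal boundary
    sd_base = noise_slope * d_base + 1e-6   # distance-dependent noise (assumption)
    sd_tip = noise_slope * d_tip + 1e-6
    est_from_base = rng.normal(d_base, sd_base, n_trials)
    est_from_tip = length - rng.normal(d_tip, sd_tip, n_trials)
    # Inverse-variance (maximum-likelihood) fusion of the two estimates
    w_base, w_tip = 1 / sd_base**2, 1 / sd_tip**2
    fused = (w_base * est_from_base + w_tip * est_from_tip) / (w_base + w_tip)
    return fused.std()             # localization variability at this position

# Variability is smallest near the boundaries and largest in the middle
for pos in [0.1, 0.3, 0.5, 0.7, 0.9]:
    print(pos, round(localization_variability(pos), 4))
```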

2015 ◽  
Vol 113 (9) ◽  
pp. 3159-3171 ◽  
Author(s):  
Caroline D. B. Luft ◽  
Alan Meeson ◽  
Andrew E. Welchman ◽  
Zoe Kourtzi

Learning the structure of the environment is critical for interpreting the current scene and predicting upcoming events. However, the brain mechanisms that support our ability to translate knowledge about scene statistics to sensory predictions remain largely unknown. Here we provide evidence that learning of temporal regularities shapes representations in early visual cortex that relate to our ability to predict sensory events. We tested the participants' ability to predict the orientation of a test stimulus after exposure to sequences of leftward- or rightward-oriented gratings. Using fMRI decoding, we identified brain patterns related to the observers' visual predictions rather than stimulus-driven activity. Decoding of predicted orientations following structured sequences was enhanced after training, while decoding of cued orientations following exposure to random sequences did not change. These predictive representations appear to be driven by the same large-scale neural populations that encode actual stimulus orientation and to be specific to the learned sequence structure. Thus our findings provide evidence that learning temporal structures supports our ability to predict future events by reactivating selective sensory representations as early as in primary visual cortex.
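A schematic sketch of the cross-decoding logic described above (not the authors' fMRI pipeline): a linear classifier is trained on stimulus-driven voxel patterns and tested on prediction-period patterns that are assumed to weakly reactivate the same population code. All data here are simulated placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_voxels = 50
pattern = rng.normal(0, 1, n_voxels)   # hypothetical voxel pattern coding orientation

def make_trials(n, sign, snr):
    """Synthetic voxel patterns for rightward (+1) vs leftward (-1) gratings."""
    return sign * snr * pattern + rng.normal(0, 1, (n, n_voxels))

# Training data: stimulus-driven activity (strong orientation signal)
X_train = np.vstack([make_trials(40, +1, 0.5), make_trials(40, -1, 0.5)])
y_train = np.array([1] * 40 + [0] * 40)

# Test data: prediction-period activity, assumed to reactivate the same pattern weakly
X_test = np.vstack([make_trials(40, +1, 0.2), make_trials(40, -1, 0.2)])
y_test = np.array([1] * 40 + [0] * 40)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("cross-decoding accuracy:", clf.score(X_test, y_test))
```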


2004 ◽  
Vol 27 (3) ◽  
pp. 377-396 ◽  
Author(s):  
Rick Grush

The emulation theory of representation is developed and explored as a framework that can revealingly synthesize a wide variety of representational functions of the brain. The framework is based on constructs from control theory (forward models) and signal processing (Kalman filters). The idea is that in addition to simply engaging with the body and environment, the brain constructs neural circuits that act as models of the body and environment. During overt sensorimotor engagement, these models are driven by efference copies in parallel with the body and environment, in order to provide expectations of the sensory feedback, and to enhance and process sensory information. These models can also be run off-line in order to produce imagery, estimate outcomes of different actions, and evaluate and develop motor plans. The framework is initially developed within the context of motor control, where it has been shown that inner models running in parallel with the body can reduce the effects of feedback delay problems. The same mechanisms can account for motor imagery as the off-line driving of the emulator via efference copies. The framework is extended to account for visual imagery as the off-line driving of an emulator of the motor-visual loop. I also show how such systems can provide for amodal spatial imagery. Perception, including visual perception, results from such models being used to form expectations of, and to interpret, sensory input. I close by briefly outlining other cognitive functions that might also be synthesized within this framework, including reasoning, theory of mind phenomena, and language.
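A minimal scalar Kalman-filter emulator in the spirit of this framework: the forward model is driven by an efference copy of each motor command, predicts the expected sensory feedback, and corrects its state estimate when noisy feedback arrives. The plant, noise levels, and command signal are illustrative assumptions, not a specific model from the article.

```python
import numpy as np

# 1-D "body": position driven by motor commands, observed with noise
dt, q, r = 0.1, 0.01, 0.05          # time step, process noise, sensor noise (assumed values)
A, B, H = 1.0, dt, 1.0              # simple kinematic plant: x' = x + dt * u

rng = np.random.default_rng(1)
x_true, x_hat, P = 0.0, 0.0, 1.0    # true state, emulator estimate, estimate variance

for t in range(50):
    u = np.sin(0.2 * t)             # motor command (efference copy sent to the emulator)
    # Body and environment evolve
    x_true = A * x_true + B * u + rng.normal(0, np.sqrt(q))
    z = H * x_true + rng.normal(0, np.sqrt(r))      # noisy sensory feedback
    # Emulator: predict the expected feedback from the efference copy (forward model)
    x_hat = A * x_hat + B * u
    P = A * P * A + q
    # Kalman update: combine the prediction with the actual sensory input
    K = P * H / (H * P * H + r)
    x_hat = x_hat + K * (z - H * x_hat)
    P = (1 - K * H) * P

print("final true vs estimated position:", round(x_true, 3), round(x_hat, 3))
```

Running the same loop with the Kalman gain forced to zero (ignoring feedback) corresponds to the off-line mode described above, in which the emulator alone generates imagery-like predictions.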


1967 ◽  
Vol 12 (2) ◽  
pp. 105-124
Author(s):  
Peter Brawley ◽  
Robert Pos

To summarize briefly: converging data from many disciplines (psychology, psychiatry, social theory, biochemistry, neuropharmacology, neurophysiology) point to the sensory input regulating mechanism of the central nervous system as a critical factor in the production of hallucinoses and psychotic experience. There is good evidence that what we have called the informational underload model holds considerable promise for improving our understanding of many clinical and non-clinical phenomena of interest to psychiatry. The evidence suggests that a neurophysiological, internal informational underload syndrome may be a final common pathway of psychotic experience. The question of where such a syndrome might occur in the brain, together with the question of whether such an informational underload syndrome might be due to toxins, genetic factors, conditioning processes, anxiety or dissociation, or other causes, has to be left open. What is needed now is research directed at two questions: (1) does such an internal informational underload syndrome occur in the brain, and (2) when, where, and under what circumstances does it occur?


2021 ◽  
Vol 44 (1) ◽  
Author(s):  
Rava Azeredo da Silveira ◽  
Fred Rieke

Neurons in the brain represent information in their collective activity. The fidelity of this neural population code depends on whether and how variability in the response of one neuron is shared with other neurons. Two decades of studies have investigated the influence of these noise correlations on the properties of neural coding. We provide an overview of the theoretical developments on the topic. Using simple, qualitative, and general arguments, we discuss, categorize, and relate the various published results. We emphasize the relevance of the fine structure of noise correlations, and we present a new approach to the issue. Throughout this review, we adopt a geometrical picture of how noise correlations impact the neural code.
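To make the geometrical picture concrete, the sketch below computes a Fisher-information-style linear discriminability (d'^2) for a two-neuron population whose mean responses differ along the diagonal. Noise correlations that align the shared variability with this signal direction (positive rho here) reduce discriminability, while correlations orthogonal to it (negative rho here) increase it. The response means and variances are arbitrary illustrative values.

```python
import numpy as np

# Two neurons, two stimuli: mean responses differ along the "signal direction"
mu_a = np.array([10.0, 10.0])
mu_b = np.array([12.0, 12.0])

def discriminability(rho, var=4.0):
    """Linear discriminability (d'^2) for a given pairwise noise correlation rho."""
    cov = var * np.array([[1.0, rho], [rho, 1.0]])
    diff = mu_b - mu_a
    return diff @ np.linalg.inv(cov) @ diff

for rho in [-0.5, 0.0, 0.5, 0.9]:
    print(f"rho = {rho:+.1f}  d'^2 = {discriminability(rho):.2f}")
# Noise aligned with the signal direction degrades the code; noise orthogonal
# to it can even improve discriminability, despite identical single-neuron variance.
```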


2020 ◽  
Author(s):  
Matthias Loidolt ◽  
Lucas Rudelt ◽  
Viola Priesemann

How does spontaneous activity during development prepare cortico-cortical connections for sensory input? Here we analyse the development of sequence memory, an intrinsic feature of recurrent networks that supports temporal perception. We use a recurrent neural network model with homeostatic and spike-timing-dependent plasticity (STDP). This model has been shown to learn specific sequences from structured input. We show that development even under unstructured input increases unspecific sequence memory. Moreover, networks "pre-shaped" by such unstructured input subsequently learn specific sequences faster. The key structural substrate is the emergence of strong and directed synapses due to STDP and synaptic competition. These synapses construct self-amplifying preferential paths of activity, which can quickly encode new input sequences. Our results suggest that memory traces are not printed on a tabula rasa, but instead harness building blocks already present in the brain.
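A minimal pair-based STDP rule, with assumed time constants and learning rates rather than the authors' parameters, illustrates how repeated pre-before-post timing strengthens a synapse in one direction while weakening the reverse one, the kind of strong, directed connection invoked above.

```python
import numpy as np

def stdp_update(w, dt_spike, a_plus=0.01, a_minus=0.012, tau=20.0, w_max=1.0):
    """Pair-based STDP: potentiate if pre fires before post (dt_spike > 0, in ms),
    depress if post fires before pre (dt_spike < 0)."""
    if dt_spike > 0:
        w += a_plus * np.exp(-dt_spike / tau)
    else:
        w -= a_minus * np.exp(dt_spike / tau)
    return float(np.clip(w, 0.0, w_max))

# Repeated pre-before-post pairings strengthen the forward synapse and weaken the
# reverse one, carving a directed path of activity through a recurrent network.
w_fwd, w_rev = 0.5, 0.5
for _ in range(200):
    w_fwd = stdp_update(w_fwd, +5.0)   # pre leads post by 5 ms
    w_rev = stdp_update(w_rev, -5.0)   # post leads pre by 5 ms
print("forward:", round(w_fwd, 3), "reverse:", round(w_rev, 3))
```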


2018 ◽  
Author(s):  
Laura Barca ◽  
Giovanni Pezzulo

Eating disorders, and in particular anorexia nervosa (AN), are widespread in the Western world. Despite an extensive body of research, the mechanisms underlying anorexia nervosa and its striking eating restriction are still elusive. Here we propose an innovative account of anorexia that elaborates on recent theories of the brain as a predictive machine (Friston, 2010; Friston et al., 2017; Pezzulo, Barca, & Friston, 2015), and we use this (active) interoceptive inference account to explain the starvation behavior characterizing restrictive anorexia. This novel perspective aims at merging computational-level constructs of active inference and altered interoceptive processing with psychological-level theories of cognitive control and self-coherence.


2019 ◽  
Author(s):  
Charlotte Héricé ◽  
Shuzo Sakata

Sleep is a fundamental homeostatic process within the animal kingdom. Although various brain areas and cell types are involved in the regulation of the sleep-wake cycle, it is still unclear how different pathways between neural populations contribute to its regulation. Here we address this issue by investigating the behavior of a simplified network model upon synaptic weight manipulations. Our model consists of three neural populations connected by excitatory and inhibitory synapses. Activity in each population is described by a firing-rate model, which determines the state of the network, namely wakefulness, rapid eye movement (REM) sleep, or non-REM (NREM) sleep. By systematically manipulating the synaptic weight of every pathway, we show that even this simplified model exhibits non-trivial behaviors: for example, the wake-promoting population contributes not just to the induction and maintenance of wakefulness, but also to sleep induction. Although a recurrent excitatory connection of the REM-promoting population is essential for REM sleep genesis, this recurrent connection does not necessarily contribute to the maintenance of REM sleep. The duration of NREM sleep can be shortened or extended by changes in the synaptic strength of the pathways from the NREM-promoting population. In some cases, there is an optimal range of synaptic strengths that affects a particular state, implying that the magnitude of a manipulation, not just its direction (i.e., activation or inactivation), needs to be taken into account. These results demonstrate pathway-dependent regulation of sleep dynamics and highlight the importance of systems-level quantitative approaches for sleep-wake regulatory circuits.

Author Summary: Sleep is essential and ubiquitous across animal species. Over the past half-century, various brain areas, cell types, neurotransmitters, and neuropeptides have been identified as part of a sleep-wake regulating circuitry in the brain. However, it is less explored how individual neural pathways contribute to the sleep-wake cycle. In the present study, we investigate the behavior of a computational model by altering the strength of connections between neuronal populations. This computational model is comprised of a simple network in which three neuronal populations are connected together, and the activity of each population determines the current state of the model, that is, wakefulness, rapid-eye-movement (REM) sleep, or non-REM (NREM) sleep. When we alter the connection strength of each pathway, we observe that the effect of such alterations on the sleep-wake cycle is highly pathway-dependent. Our results provide further insights into the mechanisms of sleep-wake regulation, and our computational approach can complement future biological experiments.
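The sketch below is a toy threshold-linear firing-rate model of three mutually connected populations, integrated with Euler steps; all weights and tonic drives are placeholder values rather than the authors' parameters. It illustrates the kind of synaptic-weight manipulation described above: weakening a single pathway shifts which state the network settles into.

```python
import numpy as np

def simulate(W, I_ext, r0, steps=5000, dt=0.1, tau=10.0):
    """Euler-integrate a threshold-linear firing-rate model of three populations."""
    r = np.array(r0, dtype=float)
    for _ in range(steps):
        drive = W @ r + I_ext
        r += dt / tau * (-r + np.maximum(drive, 0.0))
    return r

# Populations: 0 = wake-promoting, 1 = NREM-promoting, 2 = REM-promoting
W = np.array([[ 0.0, -2.0, -1.0],    # inputs to the wake population
              [-2.0,  0.0, -1.0],    # inputs to the NREM population
              [-1.0, -1.0,  0.5]])   # inputs to the REM population (weak self-excitation)
I_ext = np.array([1.5, 1.2, 0.8])    # tonic drives (placeholder values)
labels = ["wake", "NREM", "REM"]

r = simulate(W, I_ext, [1.0, 0.1, 0.1])
print("baseline weights ->", labels[int(np.argmax(r))], np.round(r, 2))

# Manipulate one pathway: weaken the inhibition from wake- onto NREM-promoting neurons
W_mod = W.copy()
W_mod[1, 0] = -0.5
r = simulate(W_mod, I_ext, [1.0, 0.1, 0.1])
print("weakened wake->NREM inhibition ->", labels[int(np.argmax(r))], np.round(r, 2))
```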


eLife ◽  
2020 ◽  
Vol 9 ◽  
Author(s):  
Arthur-Ervin Avramiea ◽  
Richard Hardstone ◽  
Jan-Matthis Lueckmann ◽  
Jan Bím ◽  
Huibert D Mansvelder ◽  
...  

Understanding why identical stimuli give differing neuronal responses and percepts is a central challenge in research on attention and consciousness. Ongoing oscillations reflect functional states that bias processing of incoming signals through amplitude and phase. It is not known, however, whether the effect of phase or amplitude on stimulus processing depends on the long-term global dynamics of the networks generating the oscillations. Here, we show, using a computational model, that the ability of networks to regulate stimulus response based on pre-stimulus activity requires near-critical dynamics—a dynamical state that emerges from networks with balanced excitation and inhibition, and that is characterized by scale-free fluctuations. We also find that networks exhibiting critical oscillations produce differing responses to the largest range of stimulus intensities. Thus, the brain may bring its dynamics close to the critical state whenever such network versatility is required.
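As a stand-in for the full excitation-inhibition network, the sketch below uses a simple branching process to illustrate the scale-free fluctuations mentioned above: as the branching parameter sigma approaches 1 (the critical point), activity cascades develop a heavy-tailed size distribution. Collapsing E/I balance into a single branching parameter is an assumption of this toy model, not the authors' implementation.

```python
import numpy as np

def avalanche_sizes(sigma, n_avalanches=20000, max_steps=10000, rng=None):
    """Sizes of activity cascades in a branching process with branching ratio sigma."""
    rng = np.random.default_rng(0) if rng is None else rng
    sizes = np.empty(n_avalanches)
    for i in range(n_avalanches):
        active, total = 1, 1
        for _ in range(max_steps):
            if active == 0:
                break
            active = rng.poisson(sigma * active)   # each active unit triggers ~sigma others
            total += active
        sizes[i] = total
    return sizes

for sigma in [0.7, 0.9, 1.0]:
    s = avalanche_sizes(sigma)
    print(f"sigma = {sigma}: mean size = {s.mean():8.1f}, P(size > 100) = {np.mean(s > 100):.4f}")
# Subcritical networks (sigma < 1) produce only small cascades; near sigma = 1 the
# size distribution becomes heavy-tailed (scale-free), the hallmark of criticality.
```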


2019 ◽  
pp. 145-180
Author(s):  
Vittorio Gallese ◽  
Michele Guerra

This chapter discusses close-ups of the face and body in relation to film and neuroscience. The subheadings are "Touching in the mirror," which introduces and discusses the opening scenes of Ingmar Bergman's Persona; "The somatosensory system and multimodality," which addresses the notion of multimodality and explains how the brain processes touch and pain; "The social perception of touch," which provides an overview of how the brain processes the vision of touch; "Feeling the film," in which scenes from Jean-Luc Godard's Une Femme Mariée are analyzed and a suggestion is provided for approaching the notion of "haptic vision," discussed by film theorists, from a neuroscientific perspective; and "Animations," in which the authors propose that their model of embodied simulation can be used to explain the sense of presence generated by animation films, analyzing Jan Švankmajer's films and Pixar's Toy Story.


2019 ◽  
Vol 31 (9) ◽  
pp. 1329-1342
Author(s):  
Alessandro Grillini ◽  
Remco J. Renken ◽  
Frans W. Cornelissen

Two prominent strategies that the human visual system uses to reduce incoming information are spatial integration and selective attention. Whereas spatial integration summarizes and combines information over the visual field, selective attention can single it out for scrutiny. The way in which these well-known mechanisms—with rather opposing effects—interact remains largely unknown. To address this, we had observers perform a gaze-contingent search task that nudged them to deploy either spatial or feature-based attention to maximize performance. We found that, depending on the type of attention employed, visual spatial integration strength changed in either a strong and localized manner or a more modest and global manner compared with a baseline condition. Population code modeling revealed that a single mechanism can account for both observations: attention acts beyond the neuronal encoding stage to tune the spatial integration weights of neural populations. Our study shows how attention and integration interact to optimize the information flow through the brain.
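One possible way to express the idea that attention tunes spatial integration weights (a hypothetical toy, not the authors' population code model): a unit pools local contrast with normalized Gaussian weights, and attention multiplicatively re-weights that profile, so a narrow attentional window concentrates integration locally while a broad, weak one changes it more modestly across the field.

```python
import numpy as np

def pooled_response(contrast_map, center, sigma, attention_gain=0.0, attention_sigma=1.0):
    """Response of a unit that integrates local contrast with normalized Gaussian weights.
    Attention (gain > 0) multiplicatively re-weights the integration profile."""
    x = np.arange(contrast_map.size)
    weights = np.exp(-(x - center) ** 2 / (2 * sigma ** 2))
    weights *= 1 + attention_gain * np.exp(-(x - center) ** 2 / (2 * attention_sigma ** 2))
    weights /= weights.sum()
    return float(weights @ contrast_map)

contrast = np.random.default_rng(2).random(100)   # placeholder 1-D "contrast map"
print("baseline pooling:      ", round(pooled_response(contrast, 50, sigma=10), 3))
print("narrow attention:      ", round(pooled_response(contrast, 50, sigma=10,
                                       attention_gain=2.0, attention_sigma=3.0), 3))
print("broad, weak attention: ", round(pooled_response(contrast, 50, sigma=10,
                                       attention_gain=0.3, attention_sigma=30.0), 3))
```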

