neural codes
Recently Published Documents

TOTAL DOCUMENTS: 175 (last five years: 44)
H-INDEX: 28 (last five years: 2)

Author(s): Brianna Gambacini, R. Amzi Jeffs, Sam Macdonald, Anne Shiu


2021
Author(s): B. W. Corrigan, R. A. Gulli, G. Doucet, M. Roussy, R. Luna, …

The primate hippocampus (HPC) and lateral prefrontal cortex (LPFC) are two brain structures deemed essential to long- and short-term memory functions, respectively. Here we hypothesize that although both structures may encode similar information about the environment, the neural codes mediating neuronal communication in HPC and LPFC have differentially evolved to serve their corresponding memory functions. We used a virtual reality task in which animals navigated through a maze using a joystick and selected one of two targets in the arms of the maze according to a learned context-color rule. We found that neurons and neuronal populations in both regions encode similar information about the different task periods. Moreover, using statistical analyses and linear classifiers, we demonstrated that many HPC neurons concentrate spikes temporally into bursts, whereas most LPFC neurons sparsely distribute spikes over time. When integrating spike rates over short intervals, HPC neuronal ensembles reached maximum decoded information with fewer neurons than LPFC ensembles. We propose that HPC principal cells have evolved intrinsic properties that enable burst firing and temporal summation of synaptic potentials, ultimately facilitating synaptic plasticity and long-term memory formation. LPFC pyramidal cells, on the other hand, have intrinsic properties that allow spikes to be sparsely distributed over time, enabling the encoding of short-term memories via persistent firing without necessarily triggering rapid synaptic changes.
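The burst-versus-sparse firing contrast described here can be quantified with a simple interspike-interval (ISI) statistic. A minimal sketch on synthetic spike trains; the 10 ms burst threshold and all data below are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def burst_fraction(spike_times, burst_isi=0.010):
    """Fraction of interspike intervals shorter than burst_isi (seconds)."""
    isis = np.diff(np.sort(spike_times))
    return (isis < burst_isi).mean() if isis.size else 0.0

# Synthetic illustration: a bursty train (clusters of 4 spikes, 4 ms apart)
# vs. a Poisson-like train with the same overall spike count over 10 s.
burst_starts = rng.uniform(0, 10, 20)
bursty = np.concatenate([s + np.arange(4) * 0.004 for s in burst_starts])
poisson = np.cumsum(rng.exponential(10 / bursty.size, bursty.size))

bf_bursty = burst_fraction(bursty)    # high: most ISIs fall inside bursts
bf_poisson = burst_fraction(poisson)  # low: ISIs rarely below 10 ms
```

A burst-concentrating HPC-like train scores much higher on this index than a sparsely firing LPFC-like train of equal rate, which is the kind of contrast a classifier could then exploit.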


2021
pp. 519-531
Author(s): Vidit Kumar, Vikas Tripathi, Bhaskar Pant

Author(s): Davide Albertini, Marco Lanzilotto, Monica Maranesi, Luca Bonini

The neural processing of others' observed actions recruits a large network of brain regions (the action observation network, AON), in which frontal motor areas are thought to play a crucial role. Since the discovery of mirror neurons (MNs) in the ventral premotor cortex, it has been assumed that their activation was conditional upon the presentation of biological rather than nonbiological motion stimuli, supporting a form of direct visuomotor matching. Nonetheless, nonbiological observed movements have rarely been used as control stimuli to evaluate visual specificity, thereby leaving unresolved the issue of similarity among the neural codes for executed actions and for biological or nonbiological observed movements. Here, we addressed this issue by recording from two nodes of the AON that are attracting increasing interest, namely the ventro-rostral part of the dorsal premotor area F2 and the mesial pre-supplementary motor area F6 of macaques while they (1) executed a reaching-grasping task, (2) observed an experimenter performing the task, and (3) observed a nonbiological effector moving in the same context. Our findings revealed stronger neuronal responses to the observation of biological than nonbiological movement, but biological and nonbiological visual stimuli produced highly similar neural dynamics and relied on largely shared neural codes, which in turn remarkably differed from those associated with executed actions. These results indicate that, in highly familiar contexts, visuomotor remapping processes in premotor areas hosting MNs are more complex and flexible than predicted by a direct visuomotor matching hypothesis.
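The shared-versus-distinct code comparison can be sketched as a cross-condition decoding analysis on toy population data. The axes, labels, and noise model below are our assumptions, not the recorded F2/F6 responses:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_units, n_trials = 40, 200

# Toy model: the two observation conditions share one discriminative
# population axis, while execution uses a different, independent axis.
axis_obs, axis_exec = rng.normal(size=(2, n_units))
epoch = rng.integers(0, 2, n_trials)      # e.g. a binary task-epoch label
sign = 2 * epoch - 1

def simulate(axis):
    """Trials = signed condition axis plus Gaussian noise per unit."""
    return np.outer(sign, axis) + rng.normal(scale=2.0, size=(n_trials, n_units))

X_bio = simulate(axis_obs)      # biological observation
X_nonbio = simulate(axis_obs)   # nonbiological observation (same code)
X_exec = simulate(axis_exec)    # execution (distinct code)

clf = LogisticRegression(max_iter=1000).fit(X_bio, epoch)
acc_nonbio = clf.score(X_nonbio, epoch)   # shared observation code: transfers
acc_exec = clf.score(X_exec, epoch)       # distinct execution code: poor transfer
```

A decoder trained on biological-observation trials generalizing to nonbiological ones, but not to execution, is the signature of shared observation codes that differ from motor codes.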


2021
Vol 118 (31), pp. e2020410118
Author(s): Giulia Gennari, Sébastien Marti, Marie Palu, Ana Fló, Ghislaine Dehaene-Lambertz

Creating invariant representations from an ever-changing speech signal is a major challenge for the human brain. Such an ability is particularly crucial for preverbal infants who must discover the phonological, lexical, and syntactic regularities of an extremely inconsistent signal in order to acquire language. Within the visual domain, an efficient neural solution to overcome variability consists in factorizing the input into a reduced set of orthogonal components. Here, we asked whether a similar decomposition strategy is used in early speech perception. Using a 256-channel electroencephalographic system, we recorded the neural responses of 3-month-old infants to 120 natural consonant–vowel syllables with varying acoustic and phonetic profiles. Using multivariate pattern analyses, we show that syllables are factorized into distinct and orthogonal neural codes for consonants and vowels. Concerning consonants, we further demonstrate the existence of two stages of processing. A first phase is characterized by orthogonal and context-invariant neural codes for the dimensions of manner and place of articulation. Within the second stage, manner and place codes are integrated to recover the identity of the phoneme. We conclude that, despite the paucity of articulatory motor plans and speech production skills, pre-babbling infants are already equipped with a structured combinatorial code for speech analysis, which might account for the rapid pace of language acquisition during the first year.
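The factorization claim can be illustrated with a cross-context decoding sketch on simulated channel data. The additive consonant and vowel axes below are a toy assumption, not the infant EEG recordings:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_ch, n_trials = 64, 400

# Toy factorized code: consonant and vowel identity each add their own
# random channel pattern to the response, plus Gaussian noise.
c_axis, v_axis = rng.normal(size=(2, n_ch))
cons = rng.integers(0, 2, n_trials)   # binary consonant label
vow = rng.integers(0, 2, n_trials)    # binary vowel label
X = (np.outer(2 * cons - 1, c_axis) + np.outer(2 * vow - 1, v_axis)
     + rng.normal(scale=2.0, size=(n_trials, n_ch)))

# Context invariance: train each decoder in one context, test in the other.
clf_c = LogisticRegression(max_iter=1000).fit(X[vow == 0], cons[vow == 0])
clf_v = LogisticRegression(max_iter=1000).fit(X[cons == 0], vow[cons == 0])
acc_consonant = clf_c.score(X[vow == 1], cons[vow == 1])
acc_vowel = clf_v.score(X[cons == 1], vow[cons == 1])
```

If the two codes are orthogonal, each decoder generalizes across changes in the other dimension, which is the operational test of factorization used in this kind of analysis.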


2021
Author(s): David J Maisson, Justin M Fine, Seng Bum Michael Yoo, Tyler Daniel Cash-Padgett, Maya Zhe Wang, …

Our ability to effectively choose between dissimilar options implies that information about the options' values must be available, either explicitly or implicitly, in the brain. Explicit realizations of value involve single neurons whose responses depend on value and not on the specific features that determine it. Implicit realizations, by contrast, come from the coordinated action of neurons that encode specific features. One signature of implicit value coding is that population responses to offers with the same value but different features should occupy semi- or fully orthogonal neural subspaces that are nonetheless linked. Here, we examined responses of neurons in six core value-coding areas in a choice task with risky and safe options. Using stricter criteria than some past studies, we found, surprisingly, no evidence for abstract value neurons (i.e., neurons that respond identically to equally valued risky and safe options) in any of these regions. Moreover, population codes for value resided in orthogonal subspaces that were nonetheless linked through a linear transform. These results suggest that in all six regions, neurons embed value implicitly in a distributed population code.
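The idea of orthogonal-but-linked value subspaces can be sketched numerically: construct two orthogonal value axes, then fit a linear map between the resulting response patterns. All data here are synthetic; nothing is fit to the paper's recordings:

```python
import numpy as np

rng = np.random.default_rng(3)
n_neurons, n_values = 60, 40
values = np.linspace(0.0, 1.0, n_values)

# Toy model: value runs along one random population axis for risky offers
# and an explicitly orthogonalized axis for safe offers.
a = rng.normal(size=n_neurons)
b = rng.normal(size=n_neurons)
b -= a * (a @ b) / (a @ a)          # force the safe axis orthogonal to risky

R = np.outer(values, a) + 0.05 * rng.normal(size=(n_values, n_neurons))
S = np.outer(values, b) + 0.05 * rng.normal(size=(n_values, n_neurons))

# The two codes occupy orthogonal directions...
cos = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# ...yet are linked: a linear map fit on half the values predicts the rest
# (rcond truncates noise directions, keeping the dominant value axis).
train = np.arange(0, n_values, 2)
test = np.arange(1, n_values, 2)
W, *_ = np.linalg.lstsq(R[train], S[train], rcond=0.1)
pred = R[test] @ W
r = np.corrcoef(pred.ravel(), S[test].ravel())[0, 1]
```

Near-zero cosine plus high held-out prediction accuracy is exactly the "orthogonal subspaces linked by a linear transform" signature the abstract describes.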


2021
Author(s): Oscar Woolnough, Cristian Donos, Aidan Curtis, Patrick S Rollo, Zachary J Roccaforte, …

Reading words aloud is a foundational aspect of the acquisition of literacy. The rapid rate at which multiple distributed neural substrates are engaged in this process can only be probed via techniques with high spatiotemporal resolution. We used direct intracranial recordings in a large cohort to create a holistic yet fine-grained map of word processing, enabling us to derive the spatiotemporal neural codes of multiple word attributes critical to reading: lexicality, word frequency, and orthographic neighborhood. We found that lexicality is encoded by early activity in mid-fusiform (mFus) cortex and the precentral sulcus. Word frequency is also first represented in mFus, followed by later engagement of the inferior frontal gyrus (IFG) and intraparietal sulcus (IPS), while orthographic neighborhood is encoded solely in the IPS. A lexicality decoder revealed high weightings for electrodes in the mFus, IPS, anterior IFG, and precentral sulcus. These results elaborate the neural codes underpinning extant dual-route models of reading, with parallel processing via the lexical route, progressing from mFus to IFG, and the sub-lexical route, progressing from IPS to anterior IFG.
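A lexicality decoder of the kind described can be sketched as a logistic regression over per-electrode activity, with the learned weights identifying informative sites. Electrode indices and effect sizes below are hypothetical placeholders, not the study's data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n_trials, n_elec = 300, 30
informative = np.array([0, 1, 2, 3])   # hypothetical stand-ins for mFus/IPS/IFG/precentral sites

is_word = rng.integers(0, 2, n_trials)          # word vs. pseudoword trials
X = rng.normal(size=(n_trials, n_elec))         # baseline electrode activity
X[:, informative] += 1.5 * (2 * is_word - 1)[:, None]   # lexicality signal

clf = LogisticRegression(max_iter=1000).fit(X, is_word)
weights = np.abs(clf.coef_.ravel())
accuracy = clf.score(X, is_word)       # in-sample, for illustration only
```

In this toy setup the decoder both classifies lexicality well and concentrates its weight on the signal-carrying electrodes, mirroring how high decoder weightings localized lexicality coding in the recordings.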


2021
Author(s): Jonathan Schaffner, Philippe Tobler, Todd Hare, Rafael Polania

It has generally been presumed that sensory information encoded by a nervous system should be as accurate as its biological limitations allow. However, perhaps counterintuitively, accurate representations of sensory signals do not necessarily maximize the organism's chances of survival. To test this hypothesis, we developed a unified normative framework for fitness-maximizing encoding by combining theoretical insights from neuroscience, computer science, and economics. We first applied the predictions of this model to neural responses from large monopolar cells (LMCs) in the blowfly retina and found that neural codes that maximize reward expectation, rather than accurate sensory representations, account for retinal LMC activity. We then conducted experiments in humans and found that early sensory areas flexibly adopt neural codes that promote fitness maximization in a retinotopically specific manner, which in turn impacted decision behavior. Thus, our results provide evidence that fitness-maximizing rules imposed by the environment are applied at the earliest stages of sensory processing.
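The contrast between accuracy-maximizing and reward-maximizing codes can be illustrated with a one-bit encoder under an asymmetric payoff. The uniform stimulus prior and the 0.8 payoff cutoff are toy assumptions chosen for the sketch:

```python
import numpy as np

rng = np.random.default_rng(5)
s = rng.uniform(0, 1, 100_000)            # stimuli drawn from the environment
reward = np.where(s > 0.8, 1.0, 0.0)      # asymmetric payoff: only high s pays

def reward_per_response(th):
    """Expected reward obtained when acting only on trials where the
    one-bit code fires (stimulus above threshold)."""
    fired = s > th
    return reward[fired].mean() if fired.any() else 0.0

# Reward-maximizing code: pick the threshold whose responses best predict payoff.
grid = np.linspace(0.05, 0.95, 91)
best = grid[np.argmax([reward_per_response(t) for t in grid])]

# An information-maximizing one-bit code under a uniform prior splits the
# stimulus range at its median (0.5), which captures far less reward per response.
gain_best = reward_per_response(best)
gain_median = reward_per_response(0.5)
```

The reward-optimal threshold sits near the payoff boundary rather than the stimulus median, which is the basic sense in which an encoder tuned to reward expectation departs from an accurate, information-maximizing one.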


2021
Author(s): Giulia Gennari, Sebastien Marti, Marie Palu, Ana Flo, Ghislaine Dehaene-Lambertz

Creating invariant representations from an ever-changing speech signal is a major challenge for the human brain. Such an ability is particularly crucial for preverbal infants who must discover the phonological, lexical, and syntactic regularities of an extremely inconsistent signal in order to acquire language. Within visual perception, an efficient neural solution to overcome signal variability consists in factorizing the input into orthogonal and relevant low-dimensional components. In this study, we asked whether a similar neural strategy grounded in phonetic features is recruited in speech perception. Using a 256-channel electroencephalographic system, we recorded the neural responses of 3-month-old infants to 120 natural consonant-vowel syllables with varying acoustic and phonetic profiles. To characterize the specificity and granularity of the elicited representations, we employed a hierarchical generalization approach based on multivariate pattern analyses. We identified two stages of processing. At first, the features of manner and place of articulation were decodable as stable and independent dimensions of neural responsivity. Subsequently, phonetic features were integrated into phoneme-identity (i.e., consonant) neural codes. The latter remained distinct from the representation of the vowel, accounting for the different weights attributed to consonants and vowels in lexical and syntactic computations. This study reveals that, despite the paucity of articulatory motor plans and productive skills, the preverbal brain is already equipped with a structured phonetic space which provides a combinatorial code for speech analysis. The early availability of a stable and orthogonal neural code for phonetic features might account for the rapid pace of language acquisition during the first year.
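The two-stage result rests on generalization-across-time decoding, in which a classifier trained at one time point is tested at every other. A sketch on simulated data with an early and a late code (a toy model, not the infant EEG; a real analysis would also cross-validate):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n_trials, n_ch, n_times = 200, 32, 6
feature = rng.integers(0, 2, n_trials)     # e.g. a binary manner-of-articulation label

# Toy dynamics: one channel pattern carries the feature at early time
# points, a different pattern at late time points (two processing stages).
early, late = rng.normal(size=(2, n_ch))
X = rng.normal(scale=2.0, size=(n_trials, n_times, n_ch))
sgn = (2 * feature - 1)[:, None, None]
X[:, :3, :] += sgn * early
X[:, 3:, :] += sgn * late

# Temporal generalization matrix: train at t_tr, test at t_te
# (scores on the diagonal are in-sample, for illustration only).
G = np.zeros((n_times, n_times))
for t_tr in range(n_times):
    clf = LogisticRegression(max_iter=1000).fit(X[:, t_tr, :], feature)
    for t_te in range(n_times):
        G[t_tr, t_te] = clf.score(X[:, t_te, :], feature)
```

A block-diagonal generalization matrix of this shape, with high transfer within each stage and weak transfer between stages, is the pattern that identifies successive, distinct neural codes.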

