Neural Coding: Time Contraction and Dilation in the Striatum

2015 ◽  
Vol 25 (9) ◽  
pp. R374-R376 ◽  
Author(s):  
Helen Motanis ◽  
Dean V. Buonomano

2019 ◽  
Vol 42 ◽  
Author(s):  
Giulia Frezza ◽  
Pierluigi Zoccolotti

Abstract The convincing argument Brette makes that the neural coding metaphor imposes a single view of brain function can be further explained through discourse analysis. Rather than a unified view, we argue that the coding metaphor's plasticity, versatility, and robustness over time explain its success and its conventionalization, to the point that its rhetoric became overlooked.


2018 ◽  
Vol 3 (6) ◽  
pp. 61-76
Author(s):  
Leslie D. Grush ◽  
Frederick J. Gallun ◽  
Curtis J. Billings

NeuroImage ◽  
2021 ◽  
pp. 118230
Author(s):  
Zhiyao Gao ◽  
Li Zheng ◽  
Rocco Chiou ◽  
André Gouws ◽  
Katya Krieger-Redwood ◽  
...  

2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Sidney R. Lehky ◽  
Keiji Tanaka ◽  
Anne B. Sereno

Abstract When measuring sparseness in neural populations as an indicator of efficient coding, an implicit assumption is that each stimulus activates a different random set of neurons. In other words, population responses to different stimuli are, on average, uncorrelated. Here we examine neurophysiological data from four lobes of macaque monkey cortex, including V1, V2, MT, anterior inferotemporal cortex, lateral intraparietal cortex, the frontal eye fields, and perirhinal cortex, to determine how correlated population responses are. We call the mean correlation the pseudosparseness index, because high pseudosparseness can mimic statistical properties of sparseness without being authentically sparse. In every data set we find high levels of pseudosparseness, ranging from 0.59 to 0.98, substantially greater than the value of 0.00 for authentic sparseness. This was true for synthetic and natural stimuli, as well as for single-electrode and multielectrode data. A model indicates that a key variable producing high pseudosparseness is the standard deviation of spontaneous activity across the population. Consistently high values of pseudosparseness in the data demand reconsideration of the sparse coding literature, as well as consideration of the degree to which authentic sparseness provides a useful framework for understanding neural coding in the cortex.
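The pseudosparseness index described in the abstract, the mean pairwise correlation between population response vectors to different stimuli, can be sketched in a few lines. The function name and the synthetic Gamma-distributed spontaneous-rate baseline below are illustrative assumptions, not the authors' code; the baseline is included only to show how a shared spontaneous-activity component drives the correlations up.

```python
import numpy as np

def pseudosparseness_index(responses: np.ndarray) -> float:
    """Mean pairwise Pearson correlation between population response
    vectors (rows = stimuli, columns = neurons).

    A value near 0 is consistent with authentic sparseness, where each
    stimulus activates an uncorrelated set of neurons; values near 1
    indicate highly overlapping population responses."""
    # np.corrcoef treats each row as one variable, so entry (i, j) is
    # the correlation between population responses to stimuli i and j.
    corr = np.corrcoef(responses)
    n = corr.shape[0]
    # Average only the off-diagonal entries, excluding each stimulus's
    # trivial self-correlation of 1.0.
    return float(corr[~np.eye(n, dtype=bool)].mean())

# Toy demonstration (assumed numbers): responses that share a neuron-wise
# spontaneous baseline are highly correlated across stimuli, even though
# the stimulus-driven component is independent noise per stimulus.
rng = np.random.default_rng(0)
baseline = rng.gamma(2.0, 5.0, size=100)                   # spontaneous rate per neuron
responses = baseline + rng.normal(0.0, 1.0, size=(20, 100))  # 20 stimuli x 100 neurons
print(round(pseudosparseness_index(responses), 2))
```

Because the baseline's variance across neurons dwarfs the per-stimulus variability here, the index comes out close to 1, illustrating the paper's point that the spread of spontaneous activity across the population can produce high pseudosparseness on its own.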


2020 ◽  
Vol 1 (4) ◽  
pp. 381-401
Author(s):  
Ryan Staples ◽  
William W. Graves

Determining how the cognitive components of reading (orthographic, phonological, and semantic representations) are instantiated in the brain has been a long-standing goal of psychology and human cognitive neuroscience. The two most prominent computational models of reading instantiate different cognitive processes, implying different neural processes. Artificial neural network (ANN) models of reading posit nonsymbolic, distributed representations. The dual-route cascaded (DRC) model instead suggests two routes of processing, one representing symbolic rules of spelling-to-sound correspondence, the other representing orthographic and phonological lexicons. These models are not adjudicated by behavioral data and have never before been directly compared in terms of neural plausibility. We used representational similarity analysis to compare the predictions of these models to neural data from participants reading aloud. Both the ANN and DRC model representations corresponded to neural activity. However, the ANN model representations correlated with more reading-relevant areas of cortex. When contributions from the DRC model were statistically controlled, partial correlations revealed that the ANN model accounted for significant variance in the neural data. The opposite analysis, examining the variance explained by the DRC model with contributions from the ANN model factored out, revealed no correspondence to neural activity. Our results suggest that ANNs trained using distributed representations provide a better correspondence between cognitive and neural coding. Additionally, this framework provides a principled approach for comparing computational models of cognitive function to gain insight into neural representations.
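The comparison this abstract describes, correlating a model's representational dissimilarity matrix (RDM) with a neural RDM while statistically controlling the competing model, can be sketched with a rank-based partial correlation. Everything below is an assumed illustration: the RDMs are synthetic, and the helper names (`upper_tri`, `partial_spearman`) are invented for this sketch, not taken from the authors' RSA pipeline.

```python
import numpy as np
from scipy.stats import rankdata, pearsonr

def upper_tri(rdm: np.ndarray) -> np.ndarray:
    # Vectorize the upper triangle of an RDM, excluding the diagonal,
    # so each stimulus pair contributes one dissimilarity value.
    i, j = np.triu_indices(rdm.shape[0], k=1)
    return rdm[i, j]

def partial_spearman(x, y, z) -> float:
    """Spearman correlation between x and y with z partialled out:
    rank-transform all three, regress z out of x and y by least
    squares, then correlate the residuals."""
    rx, ry, rz = (rankdata(v) for v in (x, y, z))
    def resid(a, b):
        design = np.column_stack([b, np.ones_like(b)])  # slope + intercept
        coef, *_ = np.linalg.lstsq(design, a, rcond=None)
        return a - design @ coef
    r, _ = pearsonr(resid(rx, rz), resid(ry, rz))
    return float(r)

# Synthetic stand-ins for the two model RDMs and a neural RDM that,
# by construction, tracks the "ANN" RDM more closely.
rng = np.random.default_rng(1)
n = 12  # stimuli
ann_rdm = rng.random((n, n)); ann_rdm = (ann_rdm + ann_rdm.T) / 2
drc_rdm = rng.random((n, n)); drc_rdm = (drc_rdm + drc_rdm.T) / 2
neural_rdm = 0.8 * ann_rdm + 0.2 * rng.random((n, n))

x, y, z = (upper_tri(m) for m in (neural_rdm, ann_rdm, drc_rdm))
print(partial_spearman(x, y, z))  # ANN-neural correlation, DRC controlled
```

Swapping the roles of `y` and `z` gives the opposite analysis mentioned in the abstract, the DRC-neural correlation with the ANN contribution factored out.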

