Understanding neural code intelligence through program simplification
2021
Author(s): Md Rafiqul Islam Rabin, Vincent J. Hellendoorn, Mohammad Amin Alipour
2021 · Vol 118 (46) · pp. e2104779118
Author(s): T. Hannagan, A. Agrawal, L. Cohen, S. Dehaene

The visual word form area (VWFA) is a region of human inferotemporal cortex that emerges at a fixed location in the occipitotemporal cortex during reading acquisition and systematically responds to written words in literate individuals. According to the neuronal recycling hypothesis, this region arises through the repurposing, for letter recognition, of a subpart of the ventral visual pathway initially involved in face and object recognition. Furthermore, according to the biased connectivity hypothesis, its reproducible localization is due to preexisting connections from this subregion to areas involved in spoken-language processing. Here, we evaluate those hypotheses in an explicit computational model. We trained a deep convolutional neural network of the ventral visual pathway, first to categorize pictures and then to recognize written words invariantly for case, font, and size. We show that the model can account for many properties of the VWFA, particularly when a subset of units possesses a biased connectivity to word output units. The network develops a sparse, invariant representation of written words, based on a restricted set of reading-selective units. Their activation mimics several properties of the VWFA, and their lesioning causes a reading-specific deficit. The model predicts that, in literate brains, written words are encoded by a compositional neural code with neurons tuned either to individual letters and their ordinal position relative to word start or word ending or to pairs of letters (bigrams).
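To make the training scheme in this abstract concrete, the following is a minimal sketch, not the authors' model: a small convolutional "ventral pathway" is first trained to categorize pictures, and a word-recognition readout is then attached in which only a fixed subset of the dense units (the "biased connectivity") is allowed to project to word output units. All layer sizes, the mask fraction, the category and word counts, and the two-stage training outline are illustrative assumptions.

```python
# Minimal sketch (assumed architecture, not the published model).
import torch
import torch.nn as nn

class VentralPathway(nn.Module):
    """Small CNN standing in for the ventral visual pathway."""
    def __init__(self, n_dense=512):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.dense = nn.Linear(64 * 4 * 4, n_dense)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return torch.relu(self.dense(h))

class BiasedWordReadout(nn.Module):
    """Word output units that receive input only from a fixed subset of dense units."""
    def __init__(self, n_dense=512, n_words=1000, biased_fraction=0.1):
        super().__init__()
        self.linear = nn.Linear(n_dense, n_words)
        # Binary mask selecting the "biased connectivity" subset of dense units.
        mask = torch.zeros(n_dense)
        mask[torch.randperm(n_dense)[: int(biased_fraction * n_dense)]] = 1.0
        self.register_buffer("mask", mask)

    def forward(self, dense_activity):
        return self.linear(dense_activity * self.mask)

# Stage 1 (assumed): train the pathway with a picture-category head on images.
pathway = VentralPathway()
picture_head = nn.Linear(512, 100)          # e.g. 100 object categories
# ... optimize pathway and picture_head on a picture dataset ...

# Stage 2 (assumed): keep the pathway and train the word readout on written words
# rendered in varying case, font, and size.
word_head = BiasedWordReadout()
x = torch.randn(8, 3, 64, 64)               # dummy batch of rendered word images
logits = word_head(pathway(x))              # shape: (8, 1000) word scores
```

Restricting the readout to a fixed subset of units is one simple way to express the biased-connectivity idea; in this setup, reading-selective responses can only emerge within that subset.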


2015 · Vol 12 (2) · pp. 026004
Author(s): Daniela Sabrina Andres, Daniel Cerquetti, Marcelo Merello

2018
Author(s): Niru Maheswaranathan, Lane T. McIntosh, Hidenori Tanaka, Satchel Grant, David B. Kastner, et al.

Understanding how the visual system encodes natural scenes is a fundamental goal of sensory neuroscience. We show here that a three-layer network model predicts the retinal response to natural scenes with an accuracy nearing the fundamental limits of predictability. The model's internal structure is interpretable, in that model units are highly correlated with interneurons recorded separately and not used to fit the model. We further show the ethological relevance to natural visual processing of a diverse set of phenomena, including complex motion encoding, adaptation, and predictive coding. Our analysis uncovers a fast timescale of visual processing that is inaccessible directly from experimental data, showing unexpectedly that ganglion cells signal in distinct modes by rapidly (< 0.1 s) switching their selectivity for direction of motion, orientation, location, and the sign of intensity. A new approach that decomposes ganglion cell responses into the contributions of interneurons reveals how the latent effects of parallel retinal circuits generate the response to any possible stimulus. These results reveal extremely flexible and rapid dynamics of the retinal code for natural visual stimuli, explaining the need for a large set of interneuron pathways to generate the dynamic neural code for natural scenes.
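For orientation, here is a minimal sketch of a three-layer CNN of the kind described in this abstract: two convolutional stages (whose units can be compared to recorded interneurons) followed by a dense readout with one output per ganglion cell and a softplus nonlinearity to keep predicted firing rates non-negative. The input shape, channel counts, cell count, and the suggested Poisson objective are assumptions, not the authors' exact architecture.

```python
# Minimal sketch (assumed layer sizes, not the published model).
import torch
import torch.nn as nn

class ThreeLayerRetinaModel(nn.Module):
    def __init__(self, n_lags=40, n_cells=20):
        super().__init__()
        # Layer 1: spatiotemporal filters over a stack of n_lags stimulus frames.
        self.conv1 = nn.Conv2d(n_lags, 8, kernel_size=15)
        # Layer 2: second convolutional stage; its units play the role of model
        # "interneurons" that can be correlated with separately recorded cells.
        self.conv2 = nn.Conv2d(8, 8, kernel_size=9)
        self.readout = nn.LazyLinear(n_cells)   # dense layer, one unit per ganglion cell
        self.out_nonlin = nn.Softplus()          # keeps predicted rates non-negative

    def forward(self, movie_clips):
        h = torch.relu(self.conv1(movie_clips))
        h = torch.relu(self.conv2(h))
        return self.out_nonlin(self.readout(h.flatten(1)))  # predicted firing rates

model = ThreeLayerRetinaModel()
clips = torch.randn(4, 40, 50, 50)   # 4 clips, 40 time lags, 50x50-pixel frames
rates = model(clips)                 # shape: (4, 20) non-negative rate predictions
# Training (assumed): minimize a Poisson loss between predicted and recorded rates,
# e.g. torch.nn.PoissonNLLLoss(log_input=False).
```

Decomposing a fitted model's outputs into the contributions of its intermediate (conv2) units is what allows the interneuron-level analysis the abstract describes.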


2000 · Vol 69 (1-2) · pp. 87-96
Author(s): Patricia M Di Lorenzo

2008 · Vol 11 (11) · pp. 1352-1360
Author(s): Yukako Yamane, Eric T Carlson, Katherine C Bowman, Zhihong Wang, Charles E Connor
