Neural mechanisms underlying the income-achievement gap: the role of the ventral visual stream

2020 ◽  
Author(s):  
Maya L. Rosen ◽  
Lucy A. Lurie ◽  
Kelly A. Sambrook ◽  
Andrew N. Meltzoff ◽  
Katie A. McLaughlin

Children from low-socioeconomic status (SES) households have lower academic achievement than their higher-SES peers. Growing evidence suggests that SES-related differences in brain regions supporting higher-order cognitive abilities may contribute to these differences in achievement. We investigate a novel hypothesis that differences in earlier-developing sensory networks—specifically the ventral visual stream (VVS), which is involved in processing visual stimuli—contribute to SES-related disparities in attention, executive functions, and academic outcomes. In a sample of children (6-8 years, n = 62), we use fMRI to investigate SES-related differences in neural function during two attentional tasks associated with academic achievement and involving interaction between visual processing and top-down control: (i) cued attention—the ability to use an external visual cue to direct spatial attention, and (ii) memory-guided attention—the ability to use past experience to direct spatial attention. SES-related differences emerged in recruitment of the anterior insula, inferior frontal gyrus, and VVS during cued attention. Critically, recruitment of the VVS during both tasks was associated with executive functions and academic achievement. VVS activation during cued attention mediated SES-related differences in academic achievement. Further, the link between VVS activation during both attention tasks and academic achievement was mediated by differences in executive functioning. These findings extend previous work by highlighting that (i) early-developing visual processing regions play an important role in supporting complex attentional processes, (ii) childhood SES is associated with VVS development, and (iii) individual differences in VVS function may be a neural mechanism in the emergence of the income-achievement gap.
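A minimal sketch of the mediation logic described here (SES → VVS activation → achievement), using OLS regressions and a bootstrapped indirect effect. All data below are simulated placeholders; this is a generic illustration of the method, not the authors' analysis pipeline.

```python
# Minimal mediation sketch: SES -> VVS activation -> academic achievement.
# Simulated placeholder data; generic illustration, not the authors' code.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 62  # sample size reported in the abstract

ses = rng.normal(size=n)                      # household SES composite (placeholder)
vvs = 0.5 * ses + rng.normal(size=n)          # VVS activation during cued attention
achievement = 0.4 * vvs + 0.2 * ses + rng.normal(size=n)

def indirect_effect(x, m, y):
    """Estimate a*b: the x -> m path times the m -> y path (controlling for x)."""
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]
    return a * b

# Bootstrap a confidence interval for the indirect (mediated) effect.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(ses[idx], vvs[idx], achievement[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"bootstrapped indirect effect, 95% CI: [{lo:.3f}, {hi:.3f}]")
```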

2020 ◽  
Author(s):  
Franziska Geiger ◽  
Martin Schrimpf ◽  
Tiago Marques ◽  
James J. DiCarlo

After training on large datasets, certain deep neural networks are surprisingly good models of the neural mechanisms of adult primate visual object recognition. Nevertheless, these models are poor models of the development of the visual system because they posit millions of sequential, precisely coordinated synaptic updates, each based on a labeled image. While ongoing research is pursuing the use of unsupervised proxies for labels, we here explore a complementary strategy of reducing the required number of supervised synaptic updates to produce an adult-like ventral visual stream (as judged by the match to V1, V2, V4, IT, and behavior). Such models might require less precise machinery and energy expenditure to coordinate these updates and would thus move us closer to viable neuroscientific hypotheses about how the visual system wires itself up. Relative to the current leading model of the adult ventral stream, we here demonstrate that the total number of supervised weight updates can be substantially reduced using three complementary strategies: First, we find that only 2% of supervised updates (epochs and images) are needed to achieve ~80% of the match to the adult ventral stream. Second, by improving the random distribution of synaptic connectivity, we find that 54% of the brain match can already be achieved “at birth” (i.e. no training at all). Third, we find that, by training only ~5% of model synapses, we can still achieve nearly 80% of the match to the ventral stream. When these three strategies are applied in combination, we find that these new models achieve ~80% of a fully trained model’s match to the brain, while using two orders of magnitude fewer supervised synaptic updates. These results reflect first steps in modeling not just primate adult visual processing during inference, but also how the ventral visual stream might be “wired up” by evolution (a model’s “birth” state) and by developmental learning (a model’s updates based on visual experience).
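As a concrete illustration of the third strategy, the sketch below (generic PyTorch, not the authors' code) freezes every weight and then re-enables a small subset so that only that subset receives supervised updates. Which subset to train, the choice of batch-norm parameters plus the readout, and ResNet-50 as a stand-in for a ventral-stream model are all illustrative assumptions.

```python
# Sketch of the "train only a small fraction of synapses" strategy:
# freeze everything, then selectively re-enable a small parameter subset.
# ResNet-50 and the batch-norm/readout choice are illustrative assumptions.
import torch
import torchvision.models as models

model = models.resnet50(weights=None)

# Freeze all weights, then re-enable a small subset.
for p in model.parameters():
    p.requires_grad = False
for m in model.modules():
    if isinstance(m, torch.nn.BatchNorm2d):
        for p in m.parameters():
            p.requires_grad = True
for p in model.fc.parameters():  # final readout layer
    p.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"supervised updates touch {trainable / total:.1%} of weights")

# Only the unfrozen parameters receive supervised updates.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=0.01
)
```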


2018 ◽  
Author(s):  
Jonas Kubilius ◽  
Martin Schrimpf ◽  
Aran Nayebi ◽  
Daniel Bear ◽  
Daniel L. K. Yamins ◽  
...  

Deep artificial neural networks with spatially repeated processing (a.k.a., deep convolutional ANNs) have been established as the best class of candidate models of visual processing in the primate ventral visual processing stream. Over the past five years, these ANNs have evolved from a simple feedforward eight-layer architecture in AlexNet to extremely deep and branching NAS-Net architectures, demonstrating increasingly better object categorization performance and increasingly better explanatory power of both neural and behavioral responses. However, from the neuroscientist’s point of view, the relationship between such very deep architectures and the ventral visual pathway is incomplete in at least two ways. On the one hand, current state-of-the-art ANNs appear to be too complex (e.g., now over 100 levels) compared with the relatively shallow cortical hierarchy (4-8 levels), which makes it difficult to map their elements to those in the ventral visual stream and to understand what they are doing. On the other hand, current state-of-the-art ANNs appear to be not complex enough in that they lack recurrent connections and the resulting neural response dynamics that are commonplace in the ventral visual stream. Here we describe our ongoing efforts to resolve both of these issues by developing a “CORnet” family of deep neural network architectures. Rather than just seeking high object recognition performance (as the state-of-the-art ANNs above), we instead try to reduce the model family to its most important elements and then gradually build new ANNs with recurrent and skip connections while monitoring both performance and the match between each new CORnet model and a large body of primate brain and behavioral data. We report here that our current best ANN model derived from this approach (CORnet-S) is among the top models on Brain-Score, a composite benchmark for comparing models to the brain, but is simpler than other deep ANNs in terms of the number of convolutions performed along the longest path of information processing in the model. All CORnet models are available at github.com/dicarlolab/CORnet, and we plan to update this manuscript and the available models in this family as they are produced.
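The following is a minimal PyTorch sketch of the kind of recurrent convolutional block the CORnet family is built from: the same convolution applied repeatedly over time steps, with a skip connection from the block input. The real CORnet-S block differs in detail (e.g., separate normalization per time step); see github.com/dicarlolab/CORnet for the reference implementation.

```python
# Minimal recurrent convolutional block in the spirit of CORnet:
# one shared convolution unrolled over time, with an input skip connection.
import torch
import torch.nn as nn

class RecurrentBlock(nn.Module):
    def __init__(self, in_ch, out_ch, times=2):
        super().__init__()
        self.times = times
        self.input_conv = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.conv = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)
        self.norm = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.input_conv(x)
        state = x
        for _ in range(self.times):                          # unrolled recurrence
            state = self.relu(self.norm(self.conv(state) + x))  # skip from input
        return state

block = RecurrentBlock(3, 64, times=2)
out = block(torch.randn(1, 3, 56, 56))
print(out.shape)  # torch.Size([1, 64, 56, 56])
```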


2008 ◽  
Vol 18 (10) ◽  
pp. 2402-2409 ◽  
Author(s):  
Deepak Sarpal ◽  
Bradley R. Buchsbaum ◽  
Philip D. Kohn ◽  
J. Shane Kippenhan ◽  
Carolyn B. Mervis ◽  
...  

2021 ◽  
Author(s):  
Nicholas J Sexton ◽  
Bradley C Love

One reason the mammalian visual system is viewed as hierarchical, such that successive stages of processing contain ever higher-level information, is its functional correspondence with deep convolutional neural networks (DCNNs). However, these correspondences between brain and model activity involve shared, not task-relevant, variance. We propose a stricter test of correspondence: if a DCNN layer corresponds to a brain region, then replacing model activity with brain activity should successfully drive the DCNN's object recognition decision. Using this approach on three datasets, we found that all regions along the ventral visual stream best corresponded with later model layers, indicating that all stages of processing contained higher-level information about object category. Time course analyses suggest that long-range recurrent connections transmit object class information from late to early visual areas.
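A minimal sketch of the substitution test's logic, assuming a fitted linear mapping (e.g., sklearn ridge regression) from voxel responses to a chosen DCNN layer's flattened activations. Names such as `predict_from_brain` and `act_shape` are hypothetical illustrations, not the authors' API.

```python
# Hedged sketch: inject brain-derived activity at DCNN layer k and let the
# remaining layers produce the recognition decision. Hypothetical names.
import numpy as np
import torch

def predict_from_brain(layers, k, voxels, mapping, act_shape):
    """Drive a DCNN's decision with brain activity injected at layer k.

    layers:    ordered list of modules forming the network's forward pass
    k:         index of the layer whose activity is replaced
    voxels:    (n_voxels,) fMRI response pattern for one stimulus
    mapping:   fitted regressor, voxel space -> flattened layer-k activations
    act_shape: shape of layer k's activations, e.g. (256, 6, 6)
    """
    act = torch.from_numpy(
        mapping.predict(voxels[None, :]).astype(np.float32)
    ).reshape(1, *act_shape)
    with torch.no_grad():
        for layer in layers[k + 1:]:   # finish the forward pass
            act = layer(act)
    return act.argmax(dim=1)           # the brain-driven recognition decision
```

In this scheme the mapping would be fit on paired (voxel pattern, layer-k activation) training stimuli and evaluated on held-out stimuli; if classification still succeeds, the brain region carries the task-relevant variance that layer needs.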


2019 ◽  
Author(s):  
Sushrut Thorat

A mediolateral gradation in neural responses for images spanning animals to artificial objects is observed in the ventral temporal cortex (VTC). Which information streams drive this organisation is an ongoing debate. Recently, in Proklova et al. (2016), the visual shape and category (“animacy”) dimensions in a set of stimuli were dissociated using a behavioural measure of visual feature information. fMRI responses revealed a neural cluster (extra-visual animacy cluster, xVAC) which encoded category information unexplained by visual feature information, suggesting extra-visual contributions to the organisation in the ventral visual stream. We reassess these findings using Convolutional Neural Networks (CNNs) as models for the ventral visual stream. The visual features developed in the CNN layers can categorise the shape-matched stimuli from Proklova et al. (2016), in contrast to the behavioural measures used in the study. The category organisations in xVAC and VTC are explained to a large degree by the CNN visual feature differences, casting doubt on the suggestion that visual feature differences cannot account for the animacy organisation. To inform the debate further, we designed a set of stimuli with animal images to dissociate the animacy organisation driven by the CNN visual features from the degree of familiarity and agency (thoughtfulness and feelings). Preliminary results from a new fMRI experiment designed to understand the contribution of these non-visual features are presented.
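A minimal sketch of this kind of analysis: extract features for each stimulus from a CNN layer and test a cross-validated linear readout of animacy. The network, layer, and placeholder stimuli/labels below are illustrative assumptions, not the study's materials.

```python
# Sketch: CNN-layer features + cross-validated linear animacy decoding.
# Network, layer, and placeholder stimuli are illustrative assumptions.
import numpy as np
import torch
import torchvision.models as models
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

cnn = models.alexnet(weights="IMAGENET1K_V1").eval()

# Placeholder stimuli: in practice, the preprocessed experiment images.
stimuli = torch.randn(40, 3, 224, 224)
y = np.repeat([0, 1], 20)  # 0 = artificial object, 1 = animal (placeholder)

with torch.no_grad():
    X = cnn.features(stimuli).flatten(1).numpy()  # convolutional-layer features

scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"cross-validated animacy decoding accuracy: {scores.mean():.2f}")
```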


Author(s):  
Alberto Quílez-Robres ◽  
Nieves Moyano ◽  
Alejandra Cortés-Pascual

Academic achievement has been linked to executive functions. However, it is necessary to clarify the differential predictive role of executive functions for general and specific academic achievement, and to determine which executive factor is most predictive of achievement. This study analyses the relationship and predictive role of executive functions and their components (initiative, working memory, task monitoring, organization of materials, flexibility, emotional control, inhibition, self-monitoring) with respect to academic achievement, both globally and specifically in the areas of Language Arts and Mathematics, in 133 students from 6 to 9 years of age. Pearson correlations did not differ substantially between overall achievement (r = 0.392) and specific achievement (r = 0.361, r = 0.361), but task monitoring (r = 0.531, r = 0.455, r = 0.446) and working memory (r = 0.512, r = 0.475, r = 0.505) showed the strongest relationships with general and specific achievement. Finally, regression analyses based on the correlation results indicate that executive functions predict general academic performance (14.7%) and specific performance in Language Arts and Mathematics (12.3% and 12.2%, respectively). Furthermore, working memory and task monitoring together account for 32.5% of the variance in general academic performance, 25.5% in Language Arts, and 27.1% in Mathematics. In conclusion, this study yields exploratory data on the executive functions (task monitoring and working memory) most likely responsible for good general and specific academic achievement in Mathematics and Language Arts.
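An illustrative sketch of the reported analysis pattern: Pearson correlations per executive function, then a multiple regression with working memory and task monitoring as predictors. Data below are simulated placeholders, not the study's measures.

```python
# Sketch of the analysis pattern: per-predictor Pearson correlations,
# then multiple regression. Simulated placeholder data.
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 133  # sample size reported in the abstract

working_memory = rng.normal(size=n)
task_monitoring = rng.normal(size=n)
achievement = 0.5 * working_memory + 0.4 * task_monitoring + rng.normal(size=n)

for name, ef in [("working memory", working_memory),
                 ("task monitoring", task_monitoring)]:
    r, p = stats.pearsonr(ef, achievement)
    print(f"{name}: r = {r:.3f}, p = {p:.4f}")

X = sm.add_constant(np.column_stack([working_memory, task_monitoring]))
fit = sm.OLS(achievement, X).fit()
print(f"variance explained (R^2): {fit.rsquared:.1%}")
```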


Author(s):  
Sigrid Hegna Ingvaldsen ◽  
Tora Sund Morken ◽  
Dordi Austeng ◽  
Olaf Dammann

Research on retinopathy of prematurity (ROP) focuses mainly on the abnormal vascularization patterns that are directly visible to ophthalmologists. However, recent findings indicate that children born prematurely also exhibit changes in the retinal cellular architecture and along the dorsal visual stream, such as structural changes between and within cortical areas. Moreover, perinatal sustained systemic inflammation (SSI) is associated with an increased risk for ROP and the visual deficits that follow. In this paper, we propose that ROP might just be the tip of an iceberg we call visuopathy of prematurity (VOP). The VOP paradigm comprises abnormal vascularization of the retina, alterations in retinal cellular architecture, choroidal degeneration, and abnormalities in the visual pathway, including cortical areas. Furthermore, VOP itself might influence the developmental trajectories of cerebral structures and functions deemed responsible for visual processing, thereby explaining visual deficits among children born preterm.

