A Genetic Model for Understanding Higher Order Visual Processing: Functional Interactions of the Ventral Visual Stream in Williams Syndrome

2008
Vol 18 (10)
pp. 2402-2409
Author(s):
Deepak Sarpal
Bradley R. Buchsbaum
Philip D. Kohn
J. Shane Kippenhan
Carolyn B. Mervis
...  
2020
Author(s):
Franziska Geiger
Martin Schrimpf
Tiago Marques
James J. DiCarlo

Abstract: After training on large datasets, certain deep neural networks are surprisingly good models of the neural mechanisms of adult primate visual object recognition. Nevertheless, these models are poor models of the development of the visual system because they posit millions of sequential, precisely coordinated synaptic updates, each based on a labeled image. While ongoing research is pursuing the use of unsupervised proxies for labels, we here explore a complementary strategy of reducing the required number of supervised synaptic updates to produce an adult-like ventral visual stream (as judged by the match to V1, V2, V4, IT, and behavior). Such models might require less precise machinery and energy expenditure to coordinate these updates and would thus move us closer to viable neuroscientific hypotheses about how the visual system wires itself up. Relative to the current leading model of the adult ventral stream, we here demonstrate that the total number of supervised weight updates can be substantially reduced using three complementary strategies: First, we find that only 2% of supervised updates (epochs and images) are needed to achieve ~80% of the match to the adult ventral stream. Second, by improving the random distribution of synaptic connectivity, we find that 54% of the brain match can already be achieved “at birth” (i.e., no training at all). Third, we find that, by training only ~5% of model synapses, we can still achieve nearly 80% of the match to the ventral stream. When these three strategies are applied in combination, we find that these new models achieve ~80% of a fully trained model’s match to the brain, while using two orders of magnitude fewer supervised synaptic updates. These results reflect first steps in modeling not just primate adult visual processing during inference, but also how the ventral visual stream might be “wired up” by evolution (a model’s “birth” state) and by developmental learning (a model’s updates based on visual experience).
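The three strategies above reduce the supervised update budget along different axes (fewer epochs and images, a better "birth" state, and fewer trainable synapses). Below is a minimal sketch of the third ingredient, assuming PyTorch/torchvision and a ResNet-18 stand-in rather than the authors' actual ventral-stream model or code: all weights are frozen except a small, illustratively chosen subset, and training runs for only a couple of epochs on a small labeled set.

```python
# Minimal sketch (assumptions: PyTorch/torchvision, a ResNet-18 stand-in, random
# surrogate data) of reducing supervised synaptic updates by (1) training for only
# a few epochs on a small labeled set and (2) freezing most synapses so that only
# a few percent of parameters ever receive updates.
import torch
import torch.nn as nn
import torchvision

model = torchvision.models.resnet18(weights=None)  # stand-in for a ventral-stream model

# Freeze everything, then unfreeze a small, illustratively chosen parameter subset
for p in model.parameters():
    p.requires_grad = False
trainable = []
for name, p in model.named_parameters():
    if "bn" in name or "fc" in name:  # illustrative choice of a small subset
        p.requires_grad = True
        trainable.append(p)

n_total = sum(p.numel() for p in model.parameters())
n_train = sum(p.numel() for p in trainable)
print(f"training {n_train / n_total:.1%} of synapses")

optimizer = torch.optim.SGD(trainable, lr=0.1, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# Tiny supervised budget: two epochs over a small (here: random surrogate) labeled set
images = torch.randn(64, 3, 224, 224)
labels = torch.randint(0, 1000, (64,))
for epoch in range(2):  # a small fraction of a typical training schedule
    for i in range(0, len(images), 16):
        x, y = images[i:i + 16], labels[i:i + 16]
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```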


2018
Author(s):
Jonas Kubilius
Martin Schrimpf
Aran Nayebi
Daniel Bear
Daniel L. K. Yamins
...  

Abstract: Deep artificial neural networks with spatially repeated processing (a.k.a. deep convolutional ANNs) have been established as the best class of candidate models of visual processing in the primate ventral visual processing stream. Over the past five years, these ANNs have evolved from a simple feedforward eight-layer architecture in AlexNet to extremely deep and branching NAS-Net architectures, demonstrating increasingly better object categorization performance and increasingly better explanatory power of both neural and behavioral responses. However, from the neuroscientist’s point of view, the relationship between such very deep architectures and the ventral visual pathway is incomplete in at least two ways. On the one hand, current state-of-the-art ANNs appear to be too complex (e.g., now over 100 levels) compared with the relatively shallow cortical hierarchy (4-8 levels), which makes it difficult to map their elements to those in the ventral visual stream and to understand what they are doing. On the other hand, current state-of-the-art ANNs appear to be not complex enough in that they lack recurrent connections and the resulting neural response dynamics that are commonplace in the ventral visual stream. Here we describe our ongoing efforts to resolve both of these issues by developing a “CORnet” family of deep neural network architectures. Rather than just seeking high object recognition performance (as the state-of-the-art ANNs above), we instead try to reduce the model family to its most important elements and then gradually build new ANNs with recurrent and skip connections while monitoring both performance and the match between each new CORnet model and a large body of primate brain and behavioral data. We report here that our current best ANN model derived from this approach (CORnet-S) is among the top models on Brain-Score, a composite benchmark for comparing models to the brain, but is simpler than other deep ANNs in terms of the number of convolutions performed along the longest path of information processing in the model. All CORnet models are available at github.com/dicarlolab/CORnet, and we plan to update this manuscript and the available models in this family as they are produced.
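To make the architectural idea concrete, the sketch below shows a toy recurrent "area" block in PyTorch: a single convolutional stage is unrolled over a few time steps so that recurrence, rather than extra feedforward depth, carries the additional processing. This is an illustrative simplification under assumed layer sizes, not the actual CORnet-S implementation (which is available at github.com/dicarlolab/CORnet).

```python
# Toy recurrent "area" block: recurrence over time steps stands in for extra depth.
# Layer sizes and the four-area hierarchy below are illustrative assumptions.
import torch
import torch.nn as nn

class RecurrentArea(nn.Module):
    """A single cortical-area-like block unrolled over a few time steps."""
    def __init__(self, channels: int, steps: int = 4):
        super().__init__()
        self.steps = steps
        self.conv_input = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv_lateral = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.norm = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        state = torch.zeros_like(x)
        for _ in range(self.steps):  # recurrence instead of additional feedforward layers
            state = self.relu(self.norm(self.conv_input(x) + self.conv_lateral(state)))
        return state

# Toy four-area hierarchy (loosely V1 -> V2 -> V4 -> IT), each area recurrent
model = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3),
    RecurrentArea(64), nn.MaxPool2d(2),
    RecurrentArea(64), nn.MaxPool2d(2),
    RecurrentArea(64), nn.MaxPool2d(2),
    RecurrentArea(64), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(64, 1000),
)
print(model(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 1000])
```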


2020
Author(s):
Maya L. Rosen
Lucy A. Lurie
Kelly Sambrook
Andrew N. Meltzoff
Katie A. McLaughlin

Children from low-socioeconomic status (SES) households have lower academic achievement than their higher-SES peers. Growing evidence suggests that SES-related differences in brain regions supporting higher-order cognitive abilities may contribute to these differences in achievement. We investigate a novel hypothesis that differences in earlier-developing sensory networks—specifically the ventral visual stream (VVS), which is involved in processing visual stimuli—contribute to SES-related disparities in attention, executive functions, and academic outcomes. In a sample of children (6-8 years, n = 62), we use fMRI to investigate SES-related differences in neural function during two attentional tasks associated with academic achievement and involving interaction between visual processing and top-down control: (i) cued attention—the ability to use an external visual cue to direct spatial attention, and (ii) memory-guided attention—the ability to use past experience to direct spatial attention. SES-related differences emerged in recruitment of the anterior insula, inferior frontal gyrus, and VVS during cued attention. Critically, recruitment of the VVS during both tasks was associated with executive functions and academic achievement. VVS activation during cued attention mediated SES-related differences in academic achievement. Further, the link between VVS activation during both attention tasks and academic achievement was mediated by differences in executive functioning. These findings extend previous work by highlighting that (i) early-developing visual processing regions play an important role in supporting complex attentional processes, (ii) childhood SES is associated with VVS development, and (iii) individual differences in VVS function may be a neural mechanism in the emergence of the income-achievement gap.
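For readers unfamiliar with the mediation logic used here, the sketch below illustrates a generic regression-based mediation test with a bootstrapped indirect effect. The simulated data and the variable names (ses, vvs, achievement) are assumptions for demonstration; this is not the authors' analysis pipeline.

```python
# Illustrative sketch of a regression-based mediation test: does VVS activation (M)
# mediate the association between SES (X) and academic achievement (Y)?
# Data are simulated; variable names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 62
ses = rng.normal(size=n)                                   # X: SES (standardized)
vvs = 0.5 * ses + rng.normal(size=n)                       # M: VVS activation
achievement = 0.4 * vvs + 0.2 * ses + rng.normal(size=n)   # Y: academic achievement

def ols_slope(x, y):
    """Slope of y regressed on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def indirect_effect(x, m, y):
    a = ols_slope(x, m)                                    # path a: X -> M
    X = np.column_stack([np.ones_like(x), x, m])           # path b: M -> Y controlling for X
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]
    return a * b

# Bootstrap confidence interval for the indirect (mediated) effect
boot = []
for _ in range(5000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(ses[idx], vvs[idx], achievement[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b = {indirect_effect(ses, vvs, achievement):.3f}, "
      f"95% CI [{lo:.3f}, {hi:.3f}]")
```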


2021
Author(s):
Nicholas J Sexton
Bradley C Love

One reason the mammalian visual system is viewed as hierarchical, such that successive stages of processing contain ever higher-level information, is its functional correspondence with deep convolutional neural networks (DCNNs). However, these correspondences between brain and model activity reflect shared variance, which is not necessarily task-relevant. We propose a stricter test of correspondence: if a DCNN layer corresponds to a brain region, then replacing model activity with brain activity should successfully drive the DCNN's object recognition decision. Using this approach on three datasets, we found that all regions along the ventral visual stream corresponded best with later model layers, indicating that all stages of processing contained higher-level information about object category. Time course analyses suggest that long-range recurrent connections transmit object class information from late to early visual areas.
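A rough sketch of this stricter test is given below, under assumptions (an AlexNet stand-in split at an arbitrary layer, simulated voxel patterns, and a ridge mapping from brain to model activity; none of this is the authors' code): brain activity is linearly mapped into a chosen DCNN layer, substituted for the model's own activations, and the remaining layers are run to read out the object decision.

```python
# Hedged sketch (not the authors' code): learn a linear mapping from brain activity
# to a chosen DCNN layer, substitute the predicted activations into the network at
# that layer, and read out the object decision from the remaining layers.
# The AlexNet stand-in, the layer split, the simulated voxel patterns, and the ridge
# mapping are all illustrative assumptions.
import torch
import torchvision
from sklearn.linear_model import Ridge

model = torchvision.models.alexnet(weights=None).eval()

# Split the network at an intermediate layer
front = torch.nn.Sequential(*list(model.features.children())[:6])   # up to the chosen layer
back = torch.nn.Sequential(*list(model.features.children())[6:],
                           model.avgpool, torch.nn.Flatten(), model.classifier)

with torch.no_grad():
    images = torch.randn(20, 3, 224, 224)   # stand-in stimuli
    feats = front(images)                   # the chosen layer's activations
    voxels = torch.randn(20, 100)           # stand-in fMRI voxel patterns for a region

# Fit a ridge-regularized mapping from voxel patterns to layer activations
mapping = Ridge(alpha=1.0).fit(voxels.numpy(), feats.flatten(1).numpy())

# Inject the brain-derived activations and let them drive the decision
pred = torch.tensor(mapping.predict(voxels.numpy()), dtype=torch.float32).reshape(feats.shape)
with torch.no_grad():
    decisions = back(pred).argmax(dim=1)    # object decision driven by "brain" activity
print(decisions)
```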


2019
Author(s):  
Sushrut Thorat

A mediolateral gradation in neural responses for images spanning animals to artificial objects is observed in the ventral temporal cortex (VTC). Which information streams drive this organisation remains an ongoing debate. Recently, Proklova et al. (2016) dissociated the visual shape and category (“animacy”) dimensions in a set of stimuli using a behavioural measure of visual feature information. fMRI responses revealed a neural cluster (the extra-visual animacy cluster, xVAC) which encoded category information unexplained by visual feature information, suggesting extra-visual contributions to the organisation of the ventral visual stream. We reassess these findings using convolutional neural networks (CNNs) as models of the ventral visual stream. Unlike the behavioural measures used in that study, the visual features developed in the CNN layers can categorise the shape-matched stimuli from Proklova et al. (2016). The category organisations in xVAC and VTC are explained to a large degree by the CNN visual feature differences, casting doubt on the suggestion that visual feature differences cannot account for the animacy organisation. To inform the debate further, we designed a set of stimuli with animal images to dissociate the animacy organisation driven by the CNN visual features from the degree of familiarity and agency (thoughtfulness and feelings). Preliminary results from a new fMRI experiment designed to understand the contribution of these non-visual features are presented.
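The core test, whether CNN visual features alone can recover the category distinction, can be sketched as follows, with an untrained VGG-16 stand-in, simulated stimuli, and an assumed animate/inanimate labeling; the real analysis would use a trained CNN and the actual shape-matched stimulus set.

```python
# Illustrative sketch (untrained VGG-16 stand-in, simulated stimuli, assumed labels):
# extract features from a CNN layer for a stimulus set and test whether a
# cross-validated linear classifier can recover the animacy category from those
# visual features alone.
import torch
import torchvision
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

cnn = torchvision.models.vgg16(weights=None).eval()  # stand-in for the CNN model

# Stand-in stimuli: 24 images, half labeled "animate", half "inanimate"
images = torch.randn(24, 3, 224, 224)
labels = [1] * 12 + [0] * 12

with torch.no_grad():
    features = cnn.features(images).flatten(1).numpy()  # one layer's visual features

# Cross-validated animacy decoding from the CNN features
scores = cross_val_score(LinearSVC(max_iter=10000), features, labels, cv=4)
print(f"animacy decoding accuracy from CNN features: {scores.mean():.2f}")
```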


2021
pp. 1-14
Author(s):
Jie Huang
Paul Beach
Andrea Bozoki
David C. Zhu

Background: Postmortem studies of brains with Alzheimer’s disease (AD) not only find amyloid-beta (Aβ) and neurofibrillary tangles (NFT) in the visual cortex, but also reveal temporally sequential changes in AD pathology from higher-order association areas to lower-order areas and finally the primary visual area (V1) as the disease progresses. Objective: This study investigated the effect of AD severity on the visual functional network. Methods: Eight severe AD (SAD) patients, 11 mild/moderate AD (MAD) patients, and 26 healthy senior (HS) controls underwent a resting-state fMRI (rs-fMRI) scan and a task fMRI scan while viewing face photos. A resting-state visual functional connectivity (FC) network and a face-evoked visual-processing network were identified for each group. Results: For the HS group, the identified group-mean face-evoked visual-processing network in the ventral pathway started from V1 and ended within the fusiform gyrus. In contrast, the resting-state visual FC network was mainly confined within the visual cortex. AD disrupted these two functional networks in a similar, severity-dependent manner: the more severe the cognitive impairment, the greater the reduction in network connectivity. For the face-evoked visual-processing network, MAD disrupted and reduced activation mainly in the higher-order visual association areas, with SAD further disrupting and reducing activation in the lower-order areas. Conclusion: These findings provide a functional corollary to the canonical view of the temporally sequential advancement of AD pathology through visual cortical areas. The association of the disruption of these functional networks, especially the face-evoked visual-processing network, with AD severity suggests a potential predictor or biomarker of AD progression.
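As a rough illustration of how a resting-state visual FC network of this kind can be defined, the sketch below computes a seed-based connectivity map from simulated rs-fMRI time series, using a hypothetical V1 seed and an arbitrary correlation threshold; the study's actual pipeline is not reproduced here.

```python
# Simplified sketch (simulated time series, hypothetical V1 seed, arbitrary threshold;
# not the study's pipeline) of seed-based resting-state functional connectivity:
# correlate a seed region's rs-fMRI time course with every voxel's time course to
# define a visual FC network.
import numpy as np

rng = np.random.default_rng(42)
n_timepoints, n_voxels = 200, 5000
data = rng.normal(size=(n_timepoints, n_voxels))  # stand-in rs-fMRI voxel time series
seed = data[:, :50].mean(axis=1)                  # stand-in V1 seed time course

# Pearson correlation of the seed with each voxel
data_z = (data - data.mean(axis=0)) / data.std(axis=0)
seed_z = (seed - seed.mean()) / seed.std()
fc_map = data_z.T @ seed_z / n_timepoints         # one r value per voxel

network_mask = fc_map > 0.3                       # illustrative threshold for the FC network
print(f"{network_mask.sum()} voxels in the visual FC network")
```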


Author(s):  
Sigrid Hegna Ingvaldsen
Tora Sund Morken
Dordi Austeng
Olaf Dammann

Abstract: Research on retinopathy of prematurity (ROP) focuses mainly on the abnormal vascularization patterns that are directly visible to ophthalmologists. However, recent findings indicate that children born prematurely also exhibit changes in the retinal cellular architecture and along the dorsal visual stream, such as structural changes between and within cortical areas. Moreover, perinatal sustained systemic inflammation (SSI) is associated with an increased risk for ROP and the visual deficits that follow. In this paper, we propose that ROP might just be the tip of an iceberg we call visuopathy of prematurity (VOP). The VOP paradigm comprises abnormal vascularization of the retina, alterations in retinal cellular architecture, choroidal degeneration, and abnormalities in the visual pathway, including cortical areas. Furthermore, VOP itself might influence the developmental trajectories of cerebral structures and functions deemed responsible for visual processing, thereby explaining visual deficits among children born preterm.


NeuroImage
2016
Vol 128
pp. 316-327
Author(s):
Marianna Boros
Jean-Luc Anton
Catherine Pech-Georgel
Jonathan Grainger
Marcin Szwed
...  

2018
Author(s):
Simona Monaco
Giulia Malfatti
Alessandro Zendron
Elisa Pellencin
Luca Turella

Abstract: Predictions of upcoming movements are based on several types of neural signals that span the visual, somatosensory, motor and cognitive systems. Thus far, pre-movement signals have been investigated while participants viewed the object to be acted upon. Here, we studied the contribution of information other than vision to the classification of preparatory signals for action, even in the absence of online visual information. We used functional magnetic resonance imaging (fMRI) and multivoxel pattern analysis (MVPA) to test whether the neural signals evoked by visual, memory-based and somato-motor information can be reliably used to predict upcoming actions in areas of the dorsal and ventral visual stream during the preparatory phase preceding the action, while participants were lying still. Nineteen human participants (nine women) performed one of two actions towards an object with their eyes open or closed. Despite the well-known role of ventral stream areas in visual recognition tasks and the specialization of dorsal stream areas in somato-motor processes, we decoded action intention in areas of both streams based on visual, memory-based and somato-motor signals. Interestingly, we could reliably decode action intention in the absence of visual information based on neural activity evoked when visual information was available, and vice versa. Our results show a similar visual, memory and somato-motor representation of action planning in dorsal and ventral visual stream areas that allows action intention to be predicted across domains, regardless of the availability of visual information.
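The cross-domain decoding logic (train the classifier in one condition, test it in the other) can be sketched as follows, with simulated ROI patterns, trial counts, and labels standing in for the real fMRI data; this illustrates the general MVPA approach, not the authors' implementation.

```python
# Hedged sketch of cross-domain MVPA: train an action-intention classifier on
# preparatory-phase patterns from the eyes-open condition and test it on the
# eyes-closed condition. ROI data, trial counts, and labels are simulated assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n_trials, n_voxels = 40, 300                  # per condition, per ROI (stand-in sizes)
actions = np.repeat([0, 1], n_trials // 2)    # two actions (e.g., grasp vs. reach)

# Simulated ROI patterns with a shared action-related signal across conditions
signal = rng.normal(size=(2, n_voxels))
eyes_open = signal[actions] + rng.normal(scale=3.0, size=(n_trials, n_voxels))
eyes_closed = signal[actions] + rng.normal(scale=3.0, size=(n_trials, n_voxels))

# Cross-condition decoding: train with vision available, test without vision
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(eyes_open, actions)
accuracy = clf.score(eyes_closed, actions)
print(f"cross-domain decoding accuracy: {accuracy:.2f}")
```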

