Evidence from imaging resilience genetics for a protective mechanism against schizophrenia in the ventral visual pathway

Author(s):  
Meike Dorothee Hettwer ◽  
Thomas M. Lancaster ◽  
Eva Raspor ◽  
Peter K. Hahn ◽  
Nina Roth Mota ◽  
...  

Recently, the first genetic variants conferring resilience to schizophrenia have been identified. However, the neurobiological mechanisms underlying their protective effect remain unknown. Current models implicate adaptive neuroplastic changes in the visual system and their pro-cognitive effects in schizophrenia resilience. Here, we test the hypothesis that comparable changes can emerge from schizophrenia resilience genes. To this end, we used structural magnetic resonance imaging to investigate the effects of a schizophrenia polygenic resilience score (PRSResilience) on cortical morphology (discovery sample: n=101; UK Biobank replication sample: n=33,224). We observed positive correlations between PRSResilience and cortical volume in the fusiform gyrus, a central hub within the ventral visual pathway. Our findings indicate that resilience to schizophrenia arises partly from genetically mediated enhancements of visual processing capacities for social and non-social object information. This implies an important role of visual information processing for mitigating schizophrenia risk, which might also be exploitable for early intervention studies.
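At its core, the association tested here reduces to regressing regional cortical volume on a polygenic score (in practice with covariates such as age, sex, and total intracranial volume, which are omitted below). A minimal sketch on synthetic data; the names `prs` and `fusiform_vol` are illustrative placeholders, not the study's variables:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # synthetic participants

# Simulate a polygenic resilience score and a regional volume that
# carries a small positive contribution from the score plus noise.
prs = rng.standard_normal(n)
fusiform_vol = 0.2 * prs + rng.standard_normal(n)

# Pearson correlation between score and regional volume.
r = np.corrcoef(prs, fusiform_vol)[0, 1]

# Equivalent slope from ordinary least squares: vol ~ intercept + beta * prs.
X = np.column_stack([np.ones(n), prs])
beta = np.linalg.lstsq(X, fusiform_vol, rcond=None)[0][1]
```

A real analysis would additionally partial out nuisance covariates and correct for multiple comparisons across cortical regions.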

2019 ◽  
Vol 31 (6) ◽  
pp. 821-836 ◽  
Author(s):  
Elliot Collins ◽  
Erez Freud ◽  
Jana M. Kainerstorfer ◽  
Jiaming Cao ◽  
Marlene Behrmann

Although shape perception is primarily considered a function of the ventral visual pathway, previous research has shown that both dorsal and ventral pathways represent shape information. Here, we examine whether the shape-selective electrophysiological signals observed in dorsal cortex are a product of the connectivity to ventral cortex or are independently computed. We conducted multiple EEG studies in which we manipulated the input parameters of the stimuli so as to bias processing to either the dorsal or ventral visual pathway. Participants viewed displays of common objects with shape information parametrically degraded across five levels. We measured shape sensitivity by regressing the amplitude of the evoked signal against the degree of stimulus scrambling. Experiment 1, which included grayscale versions of the stimuli, served as a benchmark establishing the temporal pattern of shape processing during typical object perception. These stimuli evoked broad and sustained patterns of shape sensitivity beginning as early as 50 msec after stimulus onset. In Experiments 2 and 3, we calibrated the stimuli such that visual information was delivered primarily through parvocellular inputs, which mainly project to the ventral pathway, or through koniocellular inputs, which mainly project to the dorsal pathway. In the second and third experiments, shape sensitivity was observed, but in distinct spatio-temporal configurations from each other and from that elicited by grayscale inputs. Of particular interest, in the koniocellular condition, shape selectivity emerged earlier than in the parvocellular condition. These findings support the conclusion of distinct dorsal pathway computations of object shape, independent from the ventral pathway.
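The shape-sensitivity measure described above, regressing evoked amplitude on scrambling level at each timepoint, can be sketched as follows (synthetic single-channel data; the response window and effect sizes are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n_levels, n_times = 5, 100  # five scrambling levels, 100 time samples

# Synthetic evoked amplitudes: a mid-latency window (samples 40-60) whose
# amplitude falls linearly as shape information is scrambled away.
scramble = np.arange(n_levels)                    # 0 = intact ... 4 = fully scrambled
evoked = rng.standard_normal((n_levels, n_times)) * 0.1
evoked[:, 40:60] += (-0.5 * scramble)[:, None]    # shape-sensitive window

# Shape sensitivity at each timepoint = least-squares slope of amplitude
# on scrambling level (centered predictor).
x = scramble - scramble.mean()
slopes = (x @ evoked) / (x @ x)                   # shape: (n_times,)
```

Timepoints where `slopes` deviates reliably from zero mark shape-sensitive intervals; the spatio-temporal pattern of such slopes is what differed across the parvocellular- and koniocellular-biased conditions.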


2020 ◽  
Author(s):  
Haider Al-Tahan ◽  
Yalda Mohsenzadeh

While vision evokes a dense network of feedforward and feedback neural processes in the brain, visual processes are primarily modeled with feedforward hierarchical neural networks, leaving the computational role of feedback processes poorly understood. Here, we developed a generative autoencoder neural network model and adversarially trained it on a categorically diverse data set of images. We hypothesized that the feedback processes in the ventral visual pathway can be represented by reconstruction of the visual information performed by the generative model. We compared representational similarity of the activity patterns in the proposed model with temporal (magnetoencephalography) and spatial (functional magnetic resonance imaging) visual brain responses. The proposed generative model identified two segregated neural dynamics in the visual brain: a temporal hierarchy of processes transforming low-level visual information into high-level semantics in the feedforward sweep, and temporally later dynamics of inverse processes reconstructing low-level visual information from a high-level latent representation in the feedback sweep. Our results add to previous studies on neural feedback processes by presenting new insight into the algorithmic function of, and the information carried by, the feedback processes in the ventral visual pathway.

Author summary: It has been shown that the ventral visual cortex consists of a dense network of regions with feedforward and feedback connections. The feedforward path processes visual inputs along a hierarchy of cortical areas that starts in early visual cortex (an area tuned to low-level features, e.g., edges and corners) and ends in inferior temporal cortex (an area that responds to higher-level categorical contents, e.g., faces and objects). The feedback connections, in turn, modulate neuronal responses in this hierarchy by broadcasting information from higher to lower areas.
In recent years, deep neural network models trained on object recognition tasks have achieved human-level performance and shown activation patterns similar to those of the visual brain. In this work, we developed a generative neural network model that consists of encoding and decoding sub-networks. By comparing this computational model with human temporal (magnetoencephalography) and spatial (functional magnetic resonance imaging) brain response patterns, we found that the encoder resembles the brain's feedforward processing dynamics and the decoder shares similarity with the brain's feedback processing dynamics. These results provide an algorithmic insight into the spatiotemporal dynamics of feedforward and feedback processes in biological vision.
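The representational-similarity comparison between model activations and brain responses can be sketched roughly as follows (fully synthetic activations; a NumPy-only Spearman rank correlation stands in for a statistics library):

```python
import numpy as np

rng = np.random.default_rng(2)
n_stim = 20

# Synthetic activations: a "model layer" and a "brain region" driven by
# partly shared stimulus structure plus independent noise.
shared = rng.standard_normal((n_stim, 10))
model_act = shared @ rng.standard_normal((10, 50)) + 0.5 * rng.standard_normal((n_stim, 50))
brain_act = shared @ rng.standard_normal((10, 30)) + 0.5 * rng.standard_normal((n_stim, 30))

def rdm(act):
    # Representational dissimilarity matrix: 1 - Pearson r between
    # the response patterns of every stimulus pair.
    return 1.0 - np.corrcoef(act)

def spearman(a, b):
    # Spearman rank correlation via NumPy rank transforms (no ties expected).
    ra = a.argsort().argsort().astype(float)
    rb = b.argsort().argsort().astype(float)
    return np.corrcoef(ra, rb)[0, 1]

# Compare the two RDMs over the lower triangle (excluding the diagonal).
idx = np.tril_indices(n_stim, k=-1)
similarity = spearman(rdm(model_act)[idx], rdm(brain_act)[idx])
```

Repeating this comparison per MEG timepoint or per fMRI region yields the spatiotemporal similarity profiles the study reports.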


2019 ◽  
Vol 19 (10) ◽  
pp. 114
Author(s):  
J. Brendan W. Ritchie ◽  
Joyce Bosmans ◽  
Shuo Sun ◽  
Kirsten Verhaegen ◽  
Astrid Zeman ◽  
...  

2020 ◽  
Author(s):  
Yaoda Xu ◽  
Maryam Vaziri-Pashkam

Any given visual object input is characterized by multiple visual features, such as identity, position, and size. Despite the usefulness of identity and nonidentity features in vision and their joint coding throughout the primate ventral visual processing pathway, they have so far been studied relatively independently. Here we document the relative coding strength of object identity and nonidentity features in a brain region and how this may change across the human ventral visual pathway. We examined a total of four nonidentity features, including two Euclidean features (position and size) and two non-Euclidean features (image statistics and spatial frequency content of an image). Overall, identity representation increased and nonidentity feature representation decreased along the ventral visual pathway, with identity outweighing the non-Euclidean features, but not the Euclidean ones, at higher levels of visual processing. A similar analysis was performed in 14 convolutional neural networks (CNNs) pretrained to perform object categorization, varying in architecture, depth, and the presence or absence of recurrent processing. While the relative coding strength of object identity and nonidentity features in lower CNN layers matched well with that in early human visual areas, the match between higher CNN layers and higher human visual regions was limited. Similar results were obtained regardless of whether a CNN was trained with real-world or stylized object images that emphasized shape representation.
Together, by measuring the relative coding strength of object identity and nonidentity features, our approach provides a new tool to characterize feature coding in the human brain and the correspondence between the brain and CNNs.

Significance statement: This study documented the relative coding strength of object identity compared to four types of nonidentity features along the human ventral visual processing pathway and compared brain responses with those of 14 CNNs pretrained to perform object categorization. Overall, identity representation increased and nonidentity feature representation decreased along the ventral visual pathway, with the coding strength of the different nonidentity features differing at higher levels of visual processing. While feature coding in lower CNN layers matched well with that of early human visual areas, the match between higher CNN layers and higher human visual regions was limited. Our approach provides a new tool to characterize feature coding in the human brain and the correspondence between the brain and CNNs.
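One simple way to operationalize "relative coding strength" of identity versus a nonidentity feature such as position (a schematic stand-in, not necessarily the paper's exact estimator) is to compare the response variance explained by each feature's marginal means across a synthetic neural population:

```python
import numpy as np

rng = np.random.default_rng(3)
n_ident, n_pos, n_rep, n_units = 4, 4, 10, 60

# Synthetic population responses jointly tuned to object identity and
# position; identity is given the larger signal, as in higher-level areas.
ident_sig = rng.standard_normal((n_ident, n_units)) * 1.0
pos_sig = rng.standard_normal((n_pos, n_units)) * 0.4
resp = (ident_sig[:, None, None, :] + pos_sig[None, :, None, :]
        + 0.5 * rng.standard_normal((n_ident, n_pos, n_rep, n_units)))

# Coding strength of a feature = variance of the population response
# explained by that feature's marginal means (relative to the grand mean).
grand = resp.mean(axis=(0, 1, 2))
ident_means = resp.mean(axis=(1, 2))          # (n_ident, n_units)
pos_means = resp.mean(axis=(0, 2))            # (n_pos, n_units)
var_ident = ((ident_means - grand) ** 2).mean()
var_pos = ((pos_means - grand) ** 2).mean()
relative_identity = var_ident / (var_ident + var_pos)
```

A `relative_identity` above 0.5 indicates identity dominates position coding; tracking this quantity across regions (or CNN layers) gives the kind of profile the study compares.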


2018 ◽  
Vol 30 (11) ◽  
pp. 1590-1605 ◽  
Author(s):  
Alex Clarke ◽  
Barry J. Devereux ◽  
Lorraine K. Tyler

Object recognition requires dynamic transformations of low-level visual inputs into complex semantic representations. Although this process depends on the ventral visual pathway, we lack an incremental account of the dynamics that carry low-level inputs through to semantic representations, and of their mechanistic details. Here we combine computational models of vision and semantics and test the output of the incremental model against patterns of neural oscillations recorded with magnetoencephalography in humans. Representational similarity analysis showed that visual information was represented in low-frequency activity throughout the ventral visual pathway, and that semantic information was represented in theta activity. Furthermore, directed connectivity showed that visual information travels through feedforward connections, whereas visual information is transformed into semantic representations through feedforward and feedback activity centered on the anterior temporal lobe. Our research highlights that the complex transformations between visual and semantic information are driven by feedforward and recurrent dynamics, resulting in object-specific semantics.


2020 ◽  
Author(s):  
Karola Schlegelmilch ◽  
Annie E. Wertz

Visual processing of a natural environment occurs quickly and effortlessly. Yet, little is known about how young children visually categorize naturalistic structures, since their perceptual abilities are still developing. We addressed this question by asking 76 children (age: 4.1-6.1 years) and 72 adults (age: 18-50 years) first to sort cards with greyscale images depicting vegetation, man-made artifacts, and non-living natural elements (e.g., stones) into groups according to visual similarity. They were then asked to choose the images' superordinate categories. We analyzed the relevance of different visual properties to the decisions of the participant groups. Children proved well able to interpret complex visual structures. However, children relied on fewer visual properties than adults and, in general, were less likely to base their categorization decisions on properties that afforded analysis of detailed visual information, suggesting that immaturities of the still-developing visual system affected categorization. Moreover, when sorting according to visual similarity, both groups attended to the images' assumed superordinate categories, in particular to vegetation, in addition to visual properties. When controlling for overall performance differences, children showed a higher relative sensitivity to vegetation than adults in the classification task. Taken together, these findings add to the sparse literature on the role of developing perceptual abilities in processing naturalistic visual input.


2000 ◽  
Vol 17 (1) ◽  
pp. 77-89 ◽  
Author(s):  
ROSARIO M. BALBOA ◽  
NORBERTO M. GRZYWACZ

Lateral inhibition is one of the first and most important stages of visual processing. The literature contains at least four information-theoretic accounts of the role of early retinal lateral inhibition. They are based on the spatial redundancy of natural images and the advantage of removing this redundancy from the visual code. Here, we contrast these theories with data from the retina's outer plexiform layer. The spatial extent of the horizontal cells' lateral inhibition displays a bell-shaped dependence on background luminance, whereas all the theories predict a fall as luminance increases. It is remarkable that different theories predict the same luminance behavior, explaining "half" of the biological data. We argue that the main reason is how these theories deal with photon-absorption noise. At dim light levels, for which this noise is relatively large, large receptive fields would increase the signal-to-noise ratio through averaging. Unfortunately, such an increase at low luminance levels may smooth out basic visual information in natural images. To explain the biological behavior, we describe an alternate hypothesis, which proposes that the role of early visual lateral inhibition is to deal with noise without missing relevant clues from the visual world, most prominently the occlusion boundaries between objects.
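The signal-to-noise argument is easy to verify numerically: with Poisson photon noise, averaging over a pool of N receptors scales SNR by roughly sqrt(N). A toy sketch (the rates and pool sizes are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(4)

def snr_of_pooled_response(mean_rate, pool_size, n_trials=20000):
    # Photon absorptions are Poisson; a receptive field pooling `pool_size`
    # receptors averages their counts. For Poisson noise, the pooled
    # SNR (mean / std) grows as sqrt(pool_size * mean_rate).
    counts = rng.poisson(mean_rate, size=(n_trials, pool_size))
    pooled = counts.mean(axis=1)
    return pooled.mean() / pooled.std()

# Dim light (low mean rate): a single receptor vs. a 25-receptor pool.
dim_small = snr_of_pooled_response(mean_rate=2.0, pool_size=1)
dim_large = snr_of_pooled_response(mean_rate=2.0, pool_size=25)
```

The larger pool wins on SNR, which is exactly the trade-off the abstract highlights: the same spatial averaging that suppresses photon noise also blurs fine structure such as occlusion boundaries.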


2017 ◽  
Vol 34 ◽  
Author(s):  
ELIZABETH Y. LITVINA ◽  
CHINFEI CHEN

The thalamocortical (TC) relay neuron of the dorsal lateral geniculate nucleus (dLGN) has borne its imprecise label for many decades in spite of strong evidence that its role in visual processing transcends the implied simplicity of the term "relay". The retinogeniculate synapse is the site of communication between a retinal ganglion cell and a TC neuron of the dLGN. Activation of retinal fibers in the optic tract causes reliable, rapid, and robust postsynaptic potentials that drive postsynaptic spikes in a TC neuron. Cortical and subcortical modulatory systems have been known for decades to regulate retinogeniculate transmission. The dynamic properties that the retinogeniculate synapse itself exhibits during and after developmental refinement further enrich the role of the dLGN in the transmission of the retinal signal. Here we consider the structural and functional substrates for retinogeniculate synaptic transmission and plasticity, and reflect on how the complexity of the retinogeniculate synapse imparts a novel dynamic and influential capacity to subcortical processing of visual information.


Scientifica ◽  
2016 ◽  
Vol 2016 ◽  
pp. 1-9 ◽  
Author(s):  
Ruchi Kothari ◽  
Pradeep Bokariya ◽  
Smita Singh ◽  
Ramji Singh

Visual information is fundamental to how we appreciate our environment and interact with others. The visual evoked potential (VEP) is a bioelectric signal generated in the striate and extrastriate cortex when the retina is stimulated with light; it can be recorded from scalp electrodes. In the current paper, we provide an overview of the various modalities, techniques, and methodologies that have been employed for visual evoked potentials over the years. In the first part of the paper, we cast a cursory glance at the historical aspects of evoked potentials. The growing clinical significance and advantages of VEPs in clinical disorders are then briefly described, followed by a discussion of earlier and currently available methods for VEPs based on past and recent studies. Next, we mention the standards and protocols laid down by the authorized agencies, and summarize recently developed techniques for VEP. In the concluding section, we lay out prospective research directions related to fundamental and applied aspects of VEPs, offering perspectives to stimulate further inquiry into the role of visual evoked potentials in disorders involving visual processing impairment.


2018 ◽  
Vol 4 (1) ◽  
pp. 311-336 ◽  
Author(s):  
Yaoda Xu

Visual information processing must satisfy two opposing needs: the need to comprehend the richness of the visual world, and the need to extract only the visual information pertinent to thoughts and behavior at a given moment. I argue that these two aspects of visual processing are mediated by two complementary visual systems in the primate brain: specifically, the occipitotemporal cortex (OTC) and the posterior parietal cortex (PPC). The role of OTC in visual processing has been documented extensively by decades of neuroscience research. I review here recent evidence from human imaging and monkey neurophysiology studies to highlight the role of PPC in adaptive visual processing. I first document the diverse array of visual representations found in PPC. I then describe the adaptive nature of visual representation in PPC by contrasting visual processing in OTC and PPC and by showing that visual representations in PPC largely originate from OTC.

