A New Discovery on Visual Information Dynamic Changes from Retina to V2

Author(s):  
Haixin Zhong ◽  
Rubin Wang

The information processing mechanisms of the visual nervous system remain unsolved scientific issues in neuroscience, owing to the lack of a unified and widely accepted explanatory theory. It has been well documented that approximately 80% of the rich and complicated perceptual information from the real world is transmitted to the visual cortex, yet only a small fraction of that visual information reaches the primary visual cortex (V1). This, nevertheless, does not affect our visual perception. Furthermore, how neurons in the secondary visual cortex (V2) encode such a small amount of visual information has yet to be addressed. To this end, the current paper establishes a visual network model for the retina-lateral geniculate nucleus (LGN)-V1-V2 pathway and quantitatively accounts for the scarcity of visual information and its encoding rules, based on the principle of neural mapping from V1 to V2. The results demonstrate that visual information undergoes a small degree of dynamic degradation when it is mapped from V1 to V2, during which a convolution calculation occurs. Dynamic degradation of visual information therefore manifests itself mainly along the pathway from the retina to V1, rather than from V1 to V2. The slight changes in the visual information are attributable to the fact that the receptive fields (RFs) of V2 cannot further extract the image features. Meanwhile, despite the scarcity of visual information mapped from the retina, the RFs of V2 can still accurately respond to and encode “corner” information, owing to the effects of synaptic plasticity, a function that does not exist in V1. This is a new finding that has not been reported before. In sum, the coding of the “contour” feature (edge and corner) is achieved along the retina-LGN-V1-V2 pathway.

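The abstract names two quantitative ingredients: a convolution calculation in the V1-to-V2 mapping and a measure of information degradation. A minimal sketch of how such degradation could be quantified, assuming Shannon entropy as the information measure and a small smoothing kernel as the V2 receptive field (both choices are illustrative stand-ins, not the authors' model):

```python
import numpy as np

def shannon_entropy(x, bins=64):
    """Shannon entropy (bits) of a response map, via histogram binning."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
v1 = rng.random((128, 128))  # stand-in for a V1 response map

# Hypothetical V2 receptive field: a 3x3 smoothing kernel applied across
# the V1 map, standing in for the abstract's "convolution calculation".
kernel = np.ones((3, 3)) / 9.0
padded = np.pad(v1, 1, mode="edge")
v2 = sum(kernel[i, j] * padded[i:i + 128, j:j + 128]
         for i in range(3) for j in range(3))

print(f"V1 entropy: {shannon_entropy(v1):.2f} bits")
print(f"V2 entropy: {shannon_entropy(v2):.2f} bits")  # typically slightly lower
```

Comparing the two entropies illustrates the paper's claim of only a small degradation across the V1-to-V2 mapping, in contrast to the large loss along the retina-to-V1 pathway.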


2021 ◽  
Vol 13 (22) ◽  
pp. 4528
Author(s):  
Xin Yang ◽  
Lei Hu ◽  
Yongmei Zhang ◽  
Yunqing Li

Remote sensing image change detection (CD) is an important task in remote sensing image analysis and is essential for an accurate understanding of changes on the Earth’s surface. Deep learning (DL) is becoming increasingly popular for solving CD tasks in remote sensing images. Most existing DL-based CD methods use ordinary convolutional blocks to extract and compare remote sensing image features, which cannot fully extract the rich features of high-resolution (HR) remote sensing images. In addition, most existing methods lack robustness in handling pseudochange information. To overcome these problems, we propose a new method, MRA-SNet, for CD in remote sensing images. Taking the UNet network as its backbone, the method uses a Siamese network to extract the features of bitemporal images separately in the encoder and performs difference connections to better generate difference maps. Meanwhile, we replace ordinary convolution blocks with Multi-Res blocks to extract spatial and spectral features of different scales in remote sensing images, and residual connections are used to extract additional detailed features. To better highlight change-region features and suppress irrelevant-region features, we introduce Attention Gates modules before the skip connections between the encoder and the decoder. Experimental results on a public remote sensing image CD dataset show that the proposed method outperforms other state-of-the-art (SOTA) CD methods in terms of evaluation metrics and performance.
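
Two of the ideas named above, a weight-sharing (Siamese) encoder with a difference connection and attention gating of skip features, can be sketched in a few lines of PyTorch. The block internals below are simplified placeholders, not the authors' exact MRA-SNet:

```python
import torch
import torch.nn as nn

class MultiResBlock(nn.Module):
    """Simplified stand-in for a Multi-Res block: two chained 3x3 paths
    whose outputs are concatenated, plus a 1x1 residual shortcut."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        c = out_ch // 2
        self.conv1 = nn.Sequential(nn.Conv2d(in_ch, c, 3, padding=1), nn.ReLU())
        self.conv2 = nn.Sequential(nn.Conv2d(c, out_ch - c, 3, padding=1), nn.ReLU())
        self.shortcut = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        a = self.conv1(x)
        b = self.conv2(a)
        return torch.relu(torch.cat([a, b], dim=1) + self.shortcut(x))

class AttentionGate(nn.Module):
    """Attention gate: a gating signal g weights the skip features x,
    suppressing irrelevant (pseudochange) regions before the decoder."""
    def __init__(self, ch):
        super().__init__()
        self.wg = nn.Conv2d(ch, ch, 1)
        self.wx = nn.Conv2d(ch, ch, 1)
        self.psi = nn.Conv2d(ch, 1, 1)

    def forward(self, g, x):
        alpha = torch.sigmoid(self.psi(torch.relu(self.wg(g) + self.wx(x))))
        return x * alpha

# Siamese use: one shared encoder applied to both dates, then a
# difference connection between the two feature streams.
enc = MultiResBlock(3, 32)
t1, t2 = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
f1, f2 = enc(t1), enc(t2)        # shared weights = Siamese encoding
diff = torch.abs(f1 - f2)        # difference connection -> change features
gate = AttentionGate(32)
print(gate(diff, f1).shape)      # torch.Size([1, 32, 64, 64])
```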


1995 ◽  
Vol 74 (3) ◽  
pp. 1083-1094 ◽  
Author(s):  
V. J. Brown ◽  
R. Desimone ◽  
M. Mishkin

1. The tail of the caudate nucleus and the adjacent ventral putamen (ventrocaudal neostriatum) are major projection sites of the extrastriate visual cortex. Visual information is then relayed, directly or indirectly, to a variety of structures with motor functions. To test for a role of the ventrocaudal neostriatum in stimulus-response association learning, or habit formation, neuronal responses were recorded while monkeys performed a visual discrimination task. Additional data were collected from cells in cortical area TF, which served as a comparison and control for the caudate data.

2. Two monkeys were trained to perform an asymmetrically reinforced go-no go visual discrimination. The stimuli were complex colored patterns, randomly assigned to be either positive or negative. The monkey was rewarded with juice for releasing a bar when a positive stimulus was presented, whereas a negative stimulus signaled that no reward was available and that the monkey should withhold its response. Neuronal responses were recorded both while the monkey performed the task with previously learned stimuli and while it learned the task with new stimuli. In some cases, responses were recorded during reversal learning.

3. There was no evidence that cells in the ventrocaudal neostriatum were influenced by the reward contingencies of the task. Cells did not fire preferentially to the onset of either positive or negative stimuli; neither did cells fire in response to the reward itself or in association with the motor response of the monkey. Only visual responses were apparent.

4. The visual properties of cells in these structures resembled those of cells in some of the cortical areas projecting to them. Most cells responded selectively to different visual stimuli. The degree of stimulus selectivity was assessed with discriminant analysis and was found to be quantitatively similar to that of inferior temporal cells tested with similar stimuli. Like inferior temporal cells, many cells in the ventrocaudal neostriatum had large, bilateral receptive fields. Some cells had "doughnut"-shaped receptive fields, with stronger responses in the periphery of both visual fields than at the fovea, similar to the fields of some cells in the superior temporal polysensory area. Although the absence of task-specific responses argues that ventrocaudal neostriatal cells are not themselves the mediators of visual learning in the task employed, their cortical-like visual properties suggest that they might relay visual information important for visuomotor plasticity in other structures. (ABSTRACT TRUNCATED AT 400 WORDS)


2010 ◽  
Vol 104 (5) ◽  
pp. 2624-2633 ◽  
Author(s):  
Catherine A. Dunn ◽  
Carol L. Colby

Our eyes are constantly moving, allowing us to attend to different visual objects in the environment. With each eye movement, a given object activates an entirely new set of visual neurons, yet we perceive a stable scene. One neural mechanism that may contribute to visual stability is remapping. Neurons in several brain regions respond to visual stimuli presented outside the receptive field when an eye movement brings the stimulated location into the receptive field. The stored representation of a visual stimulus is remapped, or updated, in conjunction with the saccade. Remapping depends on neurons being able to receive visual information from outside the classic receptive field. In previous studies, we asked whether remapping across hemifields depends on the forebrain commissures. We found that, when the forebrain commissures are transected, behavior dependent on accurate spatial updating is initially impaired but recovers over time. Moreover, neurons in lateral intraparietal cortex (LIP) continue to remap information across hemifields in the absence of the forebrain commissures. One possible explanation for the preserved across-hemifield remapping in split-brain animals is that neurons in a single hemisphere could represent visual information from both visual fields. In the present study, we measured receptive fields of LIP neurons in split-brain monkeys and compared them with receptive fields in intact monkeys. We found a small number of neurons with bilateral receptive fields in the intact monkeys. In contrast, we found no such neurons in the split-brain animals. We conclude that bilateral representations in area LIP following forebrain commissure transection cannot account for remapping across hemifields.


Perception ◽  
1997 ◽  
Vol 26 (1 Suppl) ◽  
pp. 59-59
Author(s):  
J M Zanker ◽  
M P Davey

Visual information processing in primate cortex is based on a highly ordered representation of the surrounding world. In addition to the retinotopic mapping of the visual field, systematic variations in the orientation tuning of neurons have been described electrophysiologically for the first stages of the visual stream. To understand the relation between position and orientation representation, and to give an adequate account of cortical architecture, an essential step is to define the minimum spatial requirements for the detection of orientation. We addressed this basic question by comparing computer simulations of simple orientation filters with psychophysical experiments in which the orientation of small lines had to be detected at various positions in the visual field. At sufficiently high contrast levels, the minimum physical length of a line whose orientation can just be resolved is not constant when the line is presented at various eccentricities, but covaries inversely with the cortical magnification factor. A line needs to span less than 0.2 mm on the cortical surface in order to be recognised as oriented, independently of the eccentricity at which the stimulus is presented. This seems to indicate that human performance in this task approaches the physical limits, requiring hardly more than approximately three input elements to be activated in order to detect the orientation of a highly visible line segment. Combined with estimates of the receptive field sizes of orientation-selective filters derived from computer simulations, this experimental result may nourish speculation about how the rather local elementary process underlying orientation detection in the human visual system can be assembled to form the much larger receptive fields of the orientation-sensitive neurons known to exist in the primate visual system.
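
The reported invariance can be restated as simple arithmetic: threshold line length in degrees times the cortical magnification factor is roughly constant at 0.2 mm. A minimal sketch under an assumed standard approximation of human V1 magnification (Horton and Hoyt's 17.3/(E + 0.75) mm/deg; these parameter values are a textbook assumption, not fitted by this study):

```python
def magnification(ecc_deg: float) -> float:
    """Approximate human V1 linear magnification, in mm of cortex per
    degree of visual angle (Horton & Hoyt, 1991 approximation)."""
    return 17.3 / (ecc_deg + 0.75)

CORTICAL_CONSTANT_MM = 0.2  # threshold cortical extent, from the abstract

for ecc in (0, 2, 5, 10, 20):
    min_len_deg = CORTICAL_CONSTANT_MM / magnification(ecc)
    print(f"eccentricity {ecc:>2} deg -> minimum line length "
          f"{min_len_deg:.3f} deg ({min_len_deg * 60:.1f} arcmin)")
```

Under these assumptions the threshold grows from well under an arcminute at the fovea to several arcminutes at 10 to 20 degrees eccentricity, matching the abstract's claim that the physical threshold covaries inversely with magnification.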


1991 ◽  
Vol 66 (3) ◽  
pp. 777-793 ◽  
Author(s):  
J. W. McClurkin ◽  
T. J. Gawne ◽  
B. J. Richmond ◽  
L. M. Optican ◽  
D. L. Robinson

1. Using behaving monkeys, we studied the visual responses of single neurons in the parvocellular layers of the lateral geniculate nucleus (LGN) to a set of two-dimensional black-and-white patterns. We found that monkeys could be trained to make sufficiently reliable and stable fixations to enable us to plot and characterize the receptive fields of individual neurons. A qualitative examination of rasters and a statistical analysis of the data revealed that the responses of neurons were related to the stimuli.

2. The data from 5 of the 13 "X-like" neurons in our sample indicated the presence of antagonistic center and surround mechanisms and linear summation of luminance within the center and surround mechanisms. We attribute the lack of evidence for surround antagonism in the eight neurons that failed to exhibit center-surround antagonism either to a mismatch between the size of the pixels in the stimuli and the size of the receptive field or to the lack of a surround mechanism (i.e., the type II neurons of Wiesel and Hubel).

3. The data from five other neurons confirm and extend previous reports indicating that the surround regions of X-like neurons can have nonlinearities. The responses of these neurons were not modulated when a contrast-reversing, bipartite stimulus was centered on the receptive field, which suggests linear summation within the center and surround mechanisms. However, it was frequently the case for these neurons that stimuli of identical pattern but opposite contrast elicited responses of similar polarity, which indicates nonlinear behavior.

4. We found a wide variety of temporal patterns in the responses of individual LGN neurons, including differences in the magnitude, width, and number of peaks of the initial on-transient and in the magnitude of the later sustained component. These different temporal patterns were repeatable and clearly different for different visual patterns. These results suggest that visual information may be carried in the shape as well as in the amplitude of the response waveform.
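
The antagonistic center-surround organization with linear summation described for the X-like neurons is conventionally modeled as a difference of Gaussians. A minimal sketch (all parameter values are illustrative, not fitted to these recordings):

```python
import numpy as np

def dog_rf(size=21, sigma_c=1.5, sigma_s=4.5, k=0.9):
    """Difference-of-Gaussians receptive field: a narrow excitatory
    center minus a broad antagonistic surround (gain k < 1)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    center = np.exp(-r2 / (2 * sigma_c**2)) / (2 * np.pi * sigma_c**2)
    surround = np.exp(-r2 / (2 * sigma_s**2)) / (2 * np.pi * sigma_s**2)
    return center - k * surround

rf = dog_rf()

# Linear summation: the response is the pointwise product of the
# stimulus and the RF, summed over space.
uniform = np.ones((21, 21))                        # full-field luminance
spot = np.zeros((21, 21)); spot[8:13, 8:13] = 1.0  # center-sized spot

print(f"uniform field: {np.sum(uniform * rf):+.3f}")  # weak: center and surround nearly cancel
print(f"small spot:    {np.sum(spot * rf):+.3f}")     # strong: center dominates
```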


2015 ◽  
Vol 114 (2) ◽  
pp. 1321-1330 ◽  
Author(s):  
Christopher A. Procyk ◽  
Cyril G. Eleftheriou ◽  
Riccardo Storchi ◽  
Annette E. Allen ◽  
Nina Milosavljevic ◽  
...  

In advanced retinal degeneration, loss of rods and cones leaves melanopsin-expressing intrinsically photosensitive retinal ganglion cells (ipRGCs) as the only source of visual information. ipRGCs drive non-image-forming responses (e.g., circadian photoentrainment) under such conditions but, despite projecting to the primary visual thalamus [dorsal lateral geniculate nucleus (dLGN)], do not support form vision. We wished to determine what precludes ipRGCs from supporting spatial discrimination after photoreceptor loss, using a mouse model (rd/rd cl) lacking rods and cones. Using multielectrode arrays, we found that both RGCs and neurons in the dLGN of this animal have clearly delineated spatial receptive fields. In the retina, these are typically symmetrical, lack inhibitory surrounds, and have diameters in the range of 10–30° of visual space. Receptive fields in the dLGN were larger (diameters typically 30–70°) but matched the retinotopic map of the mouse dLGN. Injections of a neuroanatomical tracer (cholera toxin β-subunit) into the dLGN confirmed that the retinotopic order of ganglion cell projections to the dLGN and of thalamic projections to the cortex is at least superficially intact in rd/rd cl mice. However, as previously reported for deafferented ipRGCs, the onset and offset of light responses have long latencies in the rd/rd cl retina and dLGN. Accordingly, dLGN neurons failed to track dynamic changes in light intensity in this animal. Our data reveal that ipRGCs can convey spatial information in advanced retinal degeneration and identify their poor temporal fidelity as the major limitation on their ability to provide information about spatial patterns under natural viewing conditions.


eLife ◽  
2017 ◽  
Vol 6 ◽  
Author(s):  
Jen-Chun Hsiang ◽  
Keith P Johnson ◽  
Linda Madisen ◽  
Hongkui Zeng ◽  
Daniel Kerschensteiner

Neurons receive synaptic inputs on extensive neurite arbors. How information is organized across arbors and how local processing in neurites contributes to circuit function are mostly unknown. Here, we used two-photon Ca2+ imaging to study visual processing in VGluT3-expressing amacrine cells (VG3-ACs) in the mouse retina. Contrast preferences (ON vs. OFF) varied across VG3-AC arbors depending on the laminar position of the neurites, with ON responses preferring larger stimuli than OFF responses. Although the arbors of neighboring cells overlap extensively, imaging population activity revealed continuous topographic maps of visual space in the VG3-AC plexus. All VG3-AC neurites responded strongly to object motion but remained silent during global image motion. Thus, VG3-AC arbors limit vertical and lateral integration of contrast and location information, respectively. We propose that this local processing enables the dense VG3-AC plexus to contribute precise object motion signals to diverse targets without distorting target-specific contrast preferences and spatial receptive fields.


Sensors ◽  
2018 ◽  
Vol 18 (8) ◽  
pp. 2559 ◽  
Author(s):  
Shuai Li ◽  
Yuelei Xu ◽  
Wei Cong ◽  
Shiping Ma ◽  
Mingming Zhu ◽  
...  

Contour is a very important feature in biological visual cognition and has been extensively investigated as a fundamental vision problem. To address the limitations of conventional models in detecting image contours in complex scenes, a hierarchical image contour extraction method is proposed based on the biological vision mechanism, drawing on the perceptual characteristics of early vision for features such as edges, shapes, and colours. By simulating the information processing mechanisms of the cells’ receptive fields in the early stages of the biological visual system, we put forward a computational model that combines feedforward, lateral, and feedback neural connections to decode and obtain image contours. Our model simulations and their results show that the established hierarchical contour detection model adequately fits the characteristics of the biological experiments, quickly and effectively detects salient contours in complex scenes, and better suppresses unwanted textures.
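
The feedforward and lateral stages named above can be sketched with oriented Gabor filters (a standard model of early-vision edge receptive fields) and subtractive surround suppression of filter energy, which damps dense texture while sparing isolated contours. The feedback stage is omitted and all parameters are illustrative, so this is a sketch of the general mechanism, not the authors' full model:

```python
import numpy as np

def gabor_kernel(theta, size=15, sigma=3.0, lam=6.0):
    """Odd-symmetric Gabor: a standard simple-cell RF model."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    xr = xx * np.cos(theta) + yy * np.sin(theta)
    yr = -xx * np.sin(theta) + yy * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.sin(2 * np.pi * xr / lam)

def filter2d(img, k):
    """Plain 'same'-size 2-D filtering (cross-correlation), no SciPy needed."""
    s = k.shape[0] // 2
    pad = np.pad(img, s, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(k.shape[0]):
        for j in range(k.shape[1]):
            out += k[i, j] * pad[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def contour_response(img, n_orient=8, supp=0.5):
    """Feedforward: max over oriented Gabor energies. Lateral: subtract a
    local average of that energy (surround suppression of texture)."""
    energy = np.max([np.abs(filter2d(img, gabor_kernel(t)))
                     for t in np.linspace(0, np.pi, n_orient, endpoint=False)],
                    axis=0)
    surround = filter2d(energy, np.ones((15, 15)) / 225.0)
    return np.clip(energy - supp * surround, 0, None)

img = np.zeros((40, 40)); img[:, 20:] = 1.0       # vertical luminance edge
resp = contour_response(img)
print(resp[:, 19:21].mean() > resp[:, 5].mean())  # edge columns win: True
```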


2018 ◽  
Author(s):  
Adam P. Morris ◽  
Bart Krekelberg

Humans and other primates rely on eye movements to explore visual scenes and to track moving objects. As a result, the image that is projected onto the retina, and propagated throughout the visual cortical hierarchy, is almost constantly changing and makes little sense without taking into account the momentary direction of gaze. How is this achieved in the visual system? Here we show that in primary visual cortex (V1), the earliest stage of cortical vision, neural representations carry an embedded “eye tracker” that signals the direction of gaze associated with each image. Using chronically implanted multi-electrode arrays, we recorded the activity of neurons in V1 during tasks requiring fast (exploratory) and slow (pursuit) eye movements. Neurons were stimulated with flickering, full-field luminance noise at all times. As in previous studies [1-4], we observed neurons that were sensitive to gaze direction during fixation, despite comparable stimulation of their receptive fields. We trained a decoder to translate neural activity into metric estimates of (stationary) gaze direction. This decoded signal not only tracked the eye accurately during fixation, but also during fast and slow eye movements, even though the decoder had not been exposed to data from these behavioural states. Moreover, this signal lagged the real eye by approximately the time it takes new visual information to travel from the retina to cortex. Using simulations, we show that this V1 eye-position signal could be used to take into account the sensory consequences of eye movements and to map the fleeting positions of objects on the retina onto their stable positions in the world.
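
The summary does not specify the decoder. A common choice for translating firing rates into metric gaze estimates is linear (ridge) regression; below is a minimal sketch on simulated gain-field data, where every number and name is an assumption for illustration, not the recorded dataset:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated training data: firing rates of 50 "V1" neurons whose rates
# depend linearly on 2-D gaze direction (an eye-position-gain
# assumption), plus noise. Real data would come from the arrays.
n_neurons, n_trials = 50, 400
gaze = rng.uniform(-15, 15, size=(n_trials, 2))   # gaze (x, y) in deg
gains = rng.normal(size=(2, n_neurons))
rates = gaze @ gains + rng.normal(scale=2.0, size=(n_trials, n_neurons))

# Ridge-regression decoder: firing rates -> metric gaze estimate.
lam = 1.0
w = np.linalg.solve(rates.T @ rates + lam * np.eye(n_neurons),
                    rates.T @ gaze)

# Decode a held-out trial: the estimate should land near (5, -8) deg.
test_gaze = np.array([[5.0, -8.0]])
test_rates = test_gaze @ gains + rng.normal(scale=2.0, size=(1, n_neurons))
print("decoded gaze (deg):", test_rates @ w)
```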

