Visual processing: Systematic variation in light–dark bias across visual space

2021 ◽  
Vol 31 (18) ◽  
pp. R1095-R1097
Author(s):  
Haleigh N. Mulholland ◽  
Gordon B. Smith


1978 ◽  
Vol 22 (1) ◽  
pp. 74-77
Author(s):  
Robert Fox

Virtually all of the extensive research on inhibitory interactions among adjacent visual stimuli, seen in such phenomena as simultaneous contrast and visual masking, has employed situations in which the interacting stimulus elements occupy the same depth plane, i.e., have the same z-axis values, in deference to the implicit assumption that depth information is processed only after the visual processing of contour information is completed. But there are theoretical reasons, and some data, suggesting that interactions among contours depend critically upon their relative positions in depth: interactions may not occur if the stimulus elements occupy different depth positions. The extent to which the metacontrast form of visual masking depends upon depth position was investigated in a series of experiments that used stereoscopic contours formed from random-element stereograms as test and mask stimuli. The random-element stereogram generation system permitted large variations in depth without introducing confounding changes in proximal stimulation. The main results are that 1) separating test and mask stimuli in depth substantially reduces masking, and 2) when more than one stimulus is present in visual space, the stimulus that appears first or appears closer to the observer receives preferential processing by the visual system.
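To make the stimulus logic concrete, the following minimal Python sketch generates a random-element (random-dot) stereogram pair in which a central square is defined purely by horizontal disparity, so its depth can be varied without changing the monocular (proximal) stimulation. The sizes, disparity value, and function name are illustrative assumptions, not parameters from the original experiments.

```python
import numpy as np

def random_dot_stereogram(size=128, region=48, disparity_px=4, seed=0):
    """Left/right random-dot images whose central square differs only in
    horizontal disparity, so it is visible only when the pair is fused."""
    rng = np.random.default_rng(seed)
    left = rng.integers(0, 2, size=(size, size))  # binary random elements
    right = left.copy()
    r0 = (size - region) // 2
    patch = left[r0:r0 + region, r0:r0 + region]
    # Shift the central region horizontally in the right eye's image ...
    right[r0:r0 + region, r0 + disparity_px:r0 + region + disparity_px] = patch
    # ... and refill the vacated strip with fresh random elements, so each
    # image remains uniform random noise when viewed monocularly.
    right[r0:r0 + region, r0:r0 + disparity_px] = rng.integers(
        0, 2, size=(region, disparity_px))
    return left, right

left_img, right_img = random_dot_stereogram(disparity_px=6)
```

Varying `disparity_px` moves the square through depth while the monocular statistics of each image stay constant, which is exactly the property the abstract relies on to manipulate depth without confounding changes in proximal stimulation.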


Author(s):  
Daniel Tomsic ◽  
Julieta Sztarker

Decapod crustaceans, in particular semiterrestrial crabs, are highly visual animals that rely heavily on visual information. Their responsiveness to moving visual stimuli, with behavioral displays that can be easily and reliably elicited in the laboratory, together with their sturdiness under experimental manipulation and the accessibility of their nervous system for intracellular electrophysiological recordings in the intact animal, makes decapod crustaceans excellent experimental subjects for investigating the neurobiology of visually guided behaviors. Investigations of crustaceans have elucidated the general structure of their eyes and some of their specializations, the anatomical organization of the main brain areas involved in visual processing and their retinotopic mapping of visual space, and the morphology, physiology, and stimulus feature preferences of a number of well-identified classes of neurons, with emphasis on motion-sensitive elements. This anatomical and physiological knowledge, in connection with the results of behavioral experiments in the laboratory and the field, is revealing the neural circuits and computations involved in important visual behaviors, as well as the substrate and mechanisms underlying visual memories in decapod crustaceans.


2020 ◽  
Vol 287 (1930) ◽  
pp. 20200825
Author(s):  
Zixuan Wang ◽  
Yuki Murai ◽  
David Whitney

Perceiving the positions of objects is a prerequisite for most other visual and visuomotor functions, but human perception of object position varies from one individual to the next. The source of these individual differences in perceived position and their perceptual consequences are unknown. Here, we tested whether idiosyncratic biases in the underlying representation of visual space propagate across different levels of visual processing. In Experiment 1, using a position matching task, we found stable, observer-specific compressions and expansions within local regions throughout the visual field. We then measured Vernier acuity (Experiment 2) and perceived size of objects (Experiment 3) across the visual field and found that individualized spatial distortions were closely associated with variations in both visual acuity and apparent object size. Our results reveal idiosyncratic biases in perceived position and size, originating from a heterogeneous spatial resolution that carries across the visual hierarchy.
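As a rough illustration of the analysis logic (estimating each observer's localization bias across the visual field, then relating its magnitude to acuity measured at the same locations), here is a minimal Python sketch. The simulated data, the location grid, and the use of a rank correlation are assumptions for illustration only, not the authors' pipeline.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical position-matching data: 24 visual-field locations x 20 repeats,
# 2D positions in degrees of visual angle.
rng = np.random.default_rng(1)
true_xy = np.tile(rng.uniform(-10, 10, size=(24, 1, 2)), (1, 20, 1))
reported_xy = true_xy + rng.normal(0.0, 0.5, size=true_xy.shape)

errors = reported_xy - true_xy            # per-trial localization error
bias_map = errors.mean(axis=1)            # mean error vector per location
bias_magnitude = np.linalg.norm(bias_map, axis=1)

# Relate spatial distortion to acuity at the same locations (Vernier
# thresholds would come from Experiment 2; simulated here for the demo).
vernier_thresholds = 0.2 + 0.1 * bias_magnitude + rng.normal(0, 0.02, 24)
rho, p = spearmanr(bias_magnitude, vernier_thresholds)
print(f"bias magnitude vs. Vernier threshold: rho={rho:.2f}, p={p:.3f}")
```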


eLife ◽  
2017 ◽  
Vol 6 ◽  
Author(s):  
Jen-Chun Hsiang ◽  
Keith P Johnson ◽  
Linda Madisen ◽  
Hongkui Zeng ◽  
Daniel Kerschensteiner

Neurons receive synaptic inputs on extensive neurite arbors. How information is organized across arbors, and how local processing in neurites contributes to circuit function, are mostly unknown. Here, we used two-photon Ca2+ imaging to study visual processing in VGluT3-expressing amacrine cells (VG3-ACs) in the mouse retina. Contrast preferences (ON vs. OFF) varied across VG3-AC arbors depending on the laminar position of neurites, with ON responses preferring larger stimuli than OFF responses. Although the arbors of neighboring cells overlap extensively, imaging population activity revealed continuous topographic maps of visual space in the VG3-AC plexus. All VG3-AC neurites responded strongly to object motion but remained silent during global image motion. Thus, VG3-AC arbors limit vertical and lateral integration of contrast and location information, respectively. We propose that this local processing enables the dense VG3-AC plexus to contribute precise object motion signals to diverse targets without distorting target-specific contrast preferences and spatial receptive fields.
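The ON-vs.-OFF preference reported along VG3-AC arbors is commonly summarized with a polarity index. The sketch below shows one standard form of such an index applied per imaged ROI; it is a generic convention in retinal physiology, not necessarily the exact metric used in the paper.

```python
import numpy as np

def contrast_preference_index(resp_on, resp_off):
    """(ON - OFF) / (ON + OFF) per ROI: +1 means a pure ON response,
    -1 a pure OFF response. Inputs are peak dF/F values, shape (n_rois,)."""
    resp_on = np.clip(np.asarray(resp_on, dtype=float), 0.0, None)
    resp_off = np.clip(np.asarray(resp_off, dtype=float), 0.0, None)
    denom = resp_on + resp_off
    cpi = np.zeros_like(denom)
    np.divide(resp_on - resp_off, denom, out=cpi, where=denom > 0)
    return cpi

# Example: three ROIs grading from ON-preferring to OFF-preferring.
print(contrast_preference_index([0.8, 0.4, 0.1], [0.1, 0.4, 0.9]))
```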


2020 ◽  
Author(s):  
Doris Voina ◽  
Stefano Recanatesi ◽  
Brian Hu ◽  
Eric Shea-Brown ◽  
Stefan Mihalas

Abstract As animals adapt to their environments, their brains are tasked with processing stimuli in different sensory contexts. Whether these computations are context dependent or independent, they are all implemented in the same neural tissue. A crucial question is what neural architectures can respond flexibly to a range of stimulus conditions and switch between them. This is a particular case of flexible architecture that permits multiple related computations within a single circuit.

Here, we address this question in the specific case of the visual system circuitry, focusing on context integration, defined as the integration of feedforward and surround information across visual space. We show that a biologically inspired microcircuit with multiple inhibitory cell types can switch between visual processing of the static context and the moving context. In our model, the VIP population acts as the switch and modulates the visual circuit through a disinhibitory motif. Moreover, the VIP population is efficient, requiring only a relatively small number of neurons to switch contexts. This circuit eliminates noise in videos by using appropriate lateral connections for contextual spatio-temporal surround modulation, and it has superior denoising performance compared to circuits in which only one context is learned. Our findings shed light on a minimally complex architecture that is capable of switching between two naturalistic contexts using few switching units.

Author Summary
The brain processes information at all times, and much of that information is context dependent. The visual system presents an important example: processing is ongoing, but the context changes dramatically when an animal is still vs. running. How is context-dependent information processing achieved? We take inspiration from recent neurophysiology studies on the role of distinct cell types in primary visual cortex (V1). We find that relatively few “switching units” (akin to the VIP neuron type in V1 in that they turn on and off in the running vs. still context and have connections to and from the main population) are sufficient to drive context-dependent image processing. We demonstrate this in a model of feature integration and in a test of image denoising. The underlying circuit architecture illustrates a concrete computational role for the multiple cell types under increasing study across the brain, and may inspire more flexible neurally inspired computing architectures.


2022 ◽  
pp. 1-54
Author(s):  
Doris Voina ◽  
Stefano Recanatesi ◽  
Brian Hu ◽  
Eric Shea-Brown ◽  
Stefan Mihalas

Abstract As animals adapt to their environments, their brains are tasked with processing stimuli in different sensory contexts. Whether these computations are context dependent or independent, they are all implemented in the same neural tissue. A crucial question is what neural architectures can respond flexibly to a range of stimulus conditions and switch between them. This is a particular case of flexible architecture that permits multiple related computations within a single circuit. Here, we address this question in the specific case of the visual system circuitry, focusing on context integration, defined as the integration of feedforward and surround information across visual space. We show that a biologically inspired microcircuit with multiple inhibitory cell types can switch between visual processing of the static context and the moving context. In our model, the VIP population acts as the switch and modulates the visual circuit through a disinhibitory motif. Moreover, the VIP population is efficient, requiring only a relatively small number of neurons to switch contexts. This circuit eliminates noise in videos by using appropriate lateral connections for contextual spatiotemporal surround modulation, having superior denoising performance compared to circuits where only one context is learned. Our findings shed light on a minimally complex architecture that is capable of switching between two naturalistic contexts using few switching units.
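The disinhibitory switch described in both versions of this work can be caricatured in a few lines: a VIP-like unit inhibits a SOM-like unit, which in turn inhibits the excitatory population, so driving VIP releases the surround inhibition. The rate model below is a toy sketch with illustrative weights, not the paper's trained circuit.

```python
import numpy as np

def simulate_switch(ff_input, vip_drive, steps=200, dt=0.1):
    """Rate model of a disinhibitory motif: VIP -| SOM -| E.
    High vip_drive (the 'moving' context) suppresses SOM and releases
    the surround inhibition acting on the excitatory population E."""
    relu = lambda x: np.maximum(x, 0.0)
    e = som = vip = 0.0
    for _ in range(steps):
        vip += dt * (-vip + relu(vip_drive))
        som += dt * (-som + relu(1.0 - 1.5 * vip))       # VIP inhibits SOM
        e   += dt * (-e   + relu(ff_input - 1.2 * som))  # SOM inhibits E
    return e

print(simulate_switch(ff_input=1.0, vip_drive=0.0))  # static: surround engaged, E ~ 0
print(simulate_switch(ff_input=1.0, vip_drive=1.0))  # moving: disinhibited, E ~ 1
```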


2018 ◽  
Author(s):  
Marie E. Bellet ◽  
Joachim Bellet ◽  
Hendrikje Nienborg ◽  
Ziad M. Hafed ◽  
Philipp Berens

Saccades are ballistic eye movements that rapidly shift gaze from one location of visual space to another. Detecting saccades in eye movement recordings is important not only for studying the neural mechanisms underlying sensory, motor, and cognitive processes, but also as a clinical and diagnostic tool. However, automatically detecting saccades can be difficult, particularly when they are generated in coordination with other tracking eye movements, such as smooth pursuit, or when the saccade amplitude is close to eye-tracker noise levels, as with microsaccades. In such cases, labeling by human experts is required, but this is a tedious task prone to variability and error. We developed a convolutional neural network (CNN) that automatically detects saccades with human-level accuracy. Our algorithm surpasses the state of the art according to common performance metrics and will facilitate studies of the neurophysiological processes underlying saccade generation and visual processing.
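A per-sample classifier of this kind can be sketched with a small 1D convolutional network over eye-velocity traces. The architecture, kernel sizes, and input format below are illustrative assumptions, not the network published by the authors.

```python
import torch
import torch.nn as nn

class SaccadeCNN(nn.Module):
    """Minimal 1D CNN labelling each time sample as saccade / non-saccade."""
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(2, channels, kernel_size=9, padding=4),   # x/y velocity in
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(channels, 1, kernel_size=1),              # per-sample logit
        )

    def forward(self, gaze_xy):
        # gaze_xy: (batch, 2, time) eye positions; differentiate to velocity
        vel = torch.diff(gaze_xy, dim=-1)
        return torch.sigmoid(self.net(vel))  # (batch, 1, time - 1)

model = SaccadeCNN()
probs = model(torch.randn(1, 2, 1000))      # dummy 1 kHz gaze trace
saccade_samples = probs.squeeze() > 0.5     # binary per-sample labels
```

Training such a network against expert labels (e.g. with binary cross-entropy) is what would let it approach human-level labelling; that step is omitted here.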


2019 ◽  
Author(s):  
Kimberly B. Weldon ◽  
Alexandra Woolgar ◽  
Anina N. Rich ◽  
Mark A. Williams

Abstract Evidence from neuroimaging and brain stimulation studies suggests that visual information about objects in the periphery is fed back to foveal retinotopic cortex in a separate representation that is essential for peripheral perception. The characteristics of this phenomenon have important theoretical implications for the role fovea-specific feedback might play in perception. In this work, we employed a recently developed behavioral paradigm to explore whether late disruption to central visual space impaired perception of color. First, participants performed a shape discrimination task on colored novel objects in the periphery while fixating centrally. Consistent with results from previous work, a visual distractor presented at fixation ~100 ms after presentation of the peripheral stimuli impaired sensitivity to differences in peripheral shapes more than a visual distractor presented at other stimulus onset asynchronies. In a second experiment, participants performed a color discrimination task on the same colored objects. In a third experiment, we further tested for the foveal distractor effect with stimuli restricted to a low-level feature by using homogeneous color patches. These two latter experiments yielded a similar pattern of behavior: a central distractor presented at the critical stimulus onset asynchrony impaired sensitivity to peripheral color differences, but, importantly, the magnitude of the effect depended on whether the peripheral objects contained complex shape information. Taken together, these results suggest that feedback to the foveal confluence is a component of visual processing supporting perception of both object form and color.
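The sensitivity comparisons in these experiments are naturally expressed as d' per stimulus onset asynchrony. Here is a minimal sketch of that computation; the trial counts are hypothetical, and the log-linear correction is one standard choice, not necessarily the authors'.

```python
import numpy as np
from scipy.stats import norm

def dprime(hits, misses, fas, crs):
    """Sensitivity d' from yes/no counts, with a log-linear correction
    so that hit/false-alarm rates of 0 or 1 stay finite after z-scoring."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (fas + 0.5) / (fas + crs + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts per distractor SOA (ms); the foveal-feedback account
# predicts the largest sensitivity drop near ~100 ms.
for soa, counts in {50: (70, 30, 20, 80), 100: (55, 45, 30, 70), 250: (72, 28, 18, 82)}.items():
    print(f"SOA {soa:>3} ms: d' = {dprime(*counts):.2f}")
```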


1998 ◽  
Vol 353 (1373) ◽  
pp. 1341-1351 ◽  
Author(s):  
Glyn W. Humphreys

I present evidence on the nature of object coding in the brain and discuss the implications of this coding for models of visual selective attention. Neuropsychological studies of task-based constraints on (i) visual neglect and (ii) reading and counting reveal the existence of parallel forms of spatial representation for objects: within-object representations, where elements are coded as parts of objects, and between-object representations, where elements are coded as independent objects. Aside from these spatial codes for objects, however, the coding of visual space is limited. We are extremely poor at remembering small spatial displacements across eye movements, indicating (at best) impoverished coding of spatial position per se. Also, effects of element separation on spatial extinction can be eliminated by filling the space with an occluding object, indicating that spatial effects on visual selection are moderated by object coding. Overall, there are separate limits on visual processing, reflecting: (i) the competition to code parts within objects; (ii) the small number of independent objects that can be coded in parallel; and (iii) task-based selection of whether within- or between-object codes determine behaviour. Between-object coding may be linked to the dorsal visual system, while parallel coding of parts within objects takes place in the ventral system, although there may additionally be some dorsal involvement either when attention must be shifted within objects or when explicit spatial coding of parts is necessary for object identification.

