Mice and primates use distinct strategies for visual segmentation

2021 ◽  
Author(s):  
Francisco J. Luongo ◽  
Lu Liu ◽  
Chun Lum Andy Ho ◽  
Janis K. Hesse ◽  
Joseph B. Wekselblatt ◽ 
...  

The rodent visual system has attracted great interest in recent years due to its experimental tractability, but the fundamental mechanisms used by the mouse to represent the visual world remain unclear. In the primate, researchers have argued from both behavioral and neural evidence that a key step in visual representation is "figure-ground segmentation," the delineation of figures as distinct from backgrounds [1-4]. To determine if mice also show behavioral and neural signatures of figure-ground segmentation, we trained mice on a figure-ground segmentation task where figures were defined by gratings and naturalistic textures moving counterphase to the background. Unlike primates, mice were severely limited in their ability to segment figure from ground using the opponent motion cue, with segmentation behavior strongly dependent on the specific carrier pattern. Remarkably, when mice were forced to localize naturalistic patterns defined by opponent motion, they adopted a strategy of brute force memorization of texture patterns. In contrast, primates, including humans, macaques, and mouse lemurs, could readily segment figures independent of carrier pattern using the opponent motion cue. Consistent with mouse behavior, neural responses to the same stimuli recorded in mouse visual areas V1, RL, and LM also did not support texture-invariant segmentation of figures using opponent motion. Modeling revealed that the texture dependence of both the mouse's behavior and neural responses could be explained by a feedforward neural network lacking explicit segmentation capabilities. These findings reveal a fundamental limitation in the ability of mice to segment visual objects compared to primates.
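To make the opponent-motion cue concrete, here is a minimal sketch (Python with NumPy; the carrier, figure size, and speeds are illustrative assumptions, not the paper's stimulus parameters) of a central figure drifting counterphase to its background:

```python
# Minimal sketch of an opponent-motion figure-ground stimulus: figure and
# ground share the same grating carrier but drift in opposite directions.
# All parameters are illustrative, not those used in the study.
import numpy as np

def opponent_motion_frames(size=128, fig_size=48, cycles=8, n_frames=60, speed=0.1):
    """Return a (n_frames, size, size) array of stimulus frames."""
    x = np.linspace(0, 2 * np.pi * cycles, size)
    xx = np.tile(x, (size, 1))                      # horizontal phase map
    lo, hi = (size - fig_size) // 2, (size + fig_size) // 2
    fig_mask = np.zeros((size, size), dtype=bool)
    fig_mask[lo:hi, lo:hi] = True                   # central square figure
    frames = np.empty((n_frames, size, size))
    for t in range(n_frames):
        ground = np.sin(xx + t * speed)             # background drifts one way
        figure = np.sin(xx - t * speed)             # figure drifts the other way
        frames[t] = np.where(fig_mask, figure, ground)
    return frames

frames = opponent_motion_frames()
print(frames.shape)  # (60, 128, 128)
```

In any single static frame the figure is invisible; only the opponent motion across frames defines it, which is why texture-dependent performance implicates a feedforward, pattern-memorization strategy rather than true segmentation.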

2013 ◽  
Vol 25 (2) ◽  
pp. 175-187 ◽  
Author(s):  
Jihoon Oh ◽  
Jae Hyung Kwon ◽  
Po Song Yang ◽  
Jaeseung Jeong

Neural responses in early sensory areas are influenced by top–down processing. In the visual system, early visual areas have been shown to actively participate in top–down processing based on their topographical properties. Although it has been suggested that the auditory cortex is involved in top–down control, functional evidence of topographic modulation is still lacking. Here, we show that mental auditory imagery for familiar melodies induces significant activation in the frequency-responsive areas of the primary auditory cortex (PAC). This activation is related to the characteristics of the imagery: when subjects were asked to imagine high-frequency melodies, we observed increased activation in the high- versus low-frequency response area; when the subjects were asked to imagine low-frequency melodies, the opposite was observed. Furthermore, among the tonotopic subfields of the PAC, area A1 was more closely related to the observed frequency-specific modulation than area R. Our findings suggest that top–down processing in the auditory cortex relies on a mechanism similar to that used in the perception of external auditory stimuli, comparable to what has been shown in early visual areas.
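The reported tonotopic effect amounts to a sign-flipping ROI contrast; a minimal sketch with simulated beta estimates (all array names, values, and ROI definitions are hypothetical):

```python
# Hypothetical data: one beta per voxel per imagery condition
# (columns: [imagine_high, imagine_low]) in two tonotopic ROIs.
import numpy as np

rng = np.random.default_rng(0)
betas_high_freq_roi = rng.normal([1.0, 0.4], 0.3, size=(50, 2))
betas_low_freq_roi = rng.normal([0.4, 1.0], 0.3, size=(50, 2))

def modulation_index(betas):
    """Mean (imagine_high - imagine_low) difference across voxels in an ROI."""
    return (betas[:, 0] - betas[:, 1]).mean()

# A positive index in the high-frequency ROI and a negative index in the
# low-frequency ROI reproduces the frequency-specific pattern reported above.
print(modulation_index(betas_high_freq_roi))   # > 0 expected
print(modulation_index(betas_low_freq_roi))    # < 0 expected
```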


2015 ◽  
Vol 27 (4) ◽  
pp. 832-841 ◽  
Author(s):  
Amanda K. Robinson ◽  
Judith Reinhard ◽  
Jason B. Mattingley

Sensory information is initially registered within anatomically and functionally segregated brain networks but is also integrated across modalities in higher cortical areas. Although considerable research has focused on uncovering the neural correlates of multisensory integration for the modalities of vision, audition, and touch, much less attention has been devoted to understanding interactions between vision and olfaction in humans. In this study, we asked how odors affect neural activity evoked by images of familiar visual objects associated with characteristic smells. We employed scalp-recorded EEG to measure visual ERPs evoked by briefly presented pictures of familiar objects, such as an orange, mint leaves, or a rose. During presentation of each visual stimulus, participants inhaled either a matching odor, a nonmatching odor, or plain air. The N1 component of the visual ERP was significantly enhanced for matching odors in women, but not in men. This is consistent with evidence that women are superior in detecting, discriminating, and identifying odors, and that they have a higher gray matter concentration in olfactory areas of the orbitofrontal cortex (OFC). We conclude that early visual processing is influenced by olfactory cues because of associations between odors and the objects that emit them, and that these associations are stronger in women than in men.
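A standard way to quantify the N1 effect is to average the evoked response over a fixed post-stimulus window; the sketch below uses simulated epochs (sampling rate, window, and array shapes are assumptions, not the authors' pipeline):

```python
# Hypothetical epoched EEG: trials x channels x timepoints at 500 Hz,
# stimulus onset at sample 100; N1 taken as the mean over ~140-200 ms.
import numpy as np

def n1_amplitude(epochs, sfreq=500.0, t0_sample=100, window=(0.140, 0.200)):
    start = t0_sample + int(window[0] * sfreq)
    stop = t0_sample + int(window[1] * sfreq)
    return epochs[:, :, start:stop].mean(axis=(1, 2))   # one value per trial

rng = np.random.default_rng(1)
epochs_match = rng.normal(0, 1, (80, 32, 400))      # matching-odor trials
epochs_nonmatch = rng.normal(0, 1, (80, 32, 400))   # nonmatching-odor trials
# The reported effect: a larger (more negative) N1 for matching odors in women.
print(n1_amplitude(epochs_match).mean() - n1_amplitude(epochs_nonmatch).mean())
```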


2020 ◽  
Vol 117 (23) ◽  
pp. 13145-13150 ◽  
Author(s):  
Insub Kim ◽  
Sang Wook Hong ◽  
Steven K. Shevell ◽  
Won Mok Shim

Color is a perceptual construct that arises from neural processing in hierarchically organized cortical visual areas. Previous research, however, often failed to distinguish between neural responses driven by stimulus chromaticity versus perceptual color experience. An unsolved question is whether the neural responses at each stage of cortical processing represent a physical stimulus or a color we see. The present study dissociated the perceptual domain of color experience from the physical domain of chromatic stimulation at each stage of cortical processing by using a switch rivalry paradigm that caused the color percept to vary over time without changing the retinal stimulation. Using functional MRI (fMRI) and a model-based encoding approach, we found that neural representations in higher visual areas, such as V4 and VO1, corresponded to the perceived color, whereas responses in early visual areas V1 and V2 were modulated by the chromatic light stimulus rather than color perception. Our findings support a transition in the ascending human ventral visual pathway, from a representation of the chromatic stimulus at the retina in early visual areas to responses that correspond to perceptually experienced colors in higher visual areas.
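A model-based encoding approach of this general kind can be sketched as a hue-channel basis plus least-squares weight estimation; the channel count, basis shape, and simulated data below are assumptions, not the authors' exact model:

```python
# Sketch of a channel-based encoding model over hue: fit channel-to-voxel
# weights on training stimuli, then invert them on held-out responses to
# read out the represented hue. All parameters are illustrative.
import numpy as np

def channel_basis(hues_deg, n_channels=6, exponent=5):
    centers = np.arange(n_channels) * (360 / n_channels)
    d = np.deg2rad(hues_deg[:, None] - centers[None, :])
    return np.maximum(np.cos(d), 0) ** exponent          # trials x channels

rng = np.random.default_rng(2)
train_hues = rng.uniform(0, 360, 200)
C = channel_basis(train_hues)                            # design matrix
W = rng.normal(size=(C.shape[1], 100))                   # simulated channel-to-voxel weights
B = C @ W + rng.normal(0, 0.5, (200, 100))               # simulated voxel responses

W_hat, *_ = np.linalg.lstsq(C, B, rcond=None)            # fit encoding weights
test_resp = channel_basis(np.array([120.0])) @ W         # held-out response to a 120-degree hue
C_hat = test_resp @ np.linalg.pinv(W_hat)                # reconstructed channel profile
print(np.argmax(C_hat))                                  # index of the channel centered at 120 degrees
```

Applied per visual area, the question becomes whether the reconstructed channel profile tracks the retinal chromaticity (as in V1/V2) or the perceived color during rivalry (as in V4/VO1).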


2013 ◽  
Vol 26 (1-2) ◽  
pp. 99 ◽ 
Author(s):  
Amanda Robinson ◽  
Judith Reinhard ◽  
Jason Mattingley

2007 ◽  
Vol 19 (9) ◽  
pp. 1488-1497 ◽  
Author(s):  
J. J. Fahrenfort ◽  
H. S. Scholte ◽  
V. A. F. Lamme

In masking, a stimulus is rendered invisible through the presentation of a second stimulus shortly after the first. Over the years, authors have typically explained masking by postulating some early disruption process. In these feedforward-type explanations, the mask somehow “catches up” with the target stimulus, disrupting its processing through either lateral or interchannel inhibition. However, studies from recent years indicate that visual perception—and most notably visual awareness itself—may depend strongly on cortico-cortical feedback connections from higher to lower visual areas. This has led some researchers to propose that masking derives its effectiveness from selectively interrupting these reentrant processes. In this experiment, we used electroencephalogram measurements to determine what happens in the human visual cortex during detection of a texture-defined square under nonmasked (seen) and masked (unseen) conditions. Electroencephalogram derivatives that are typically associated with reentrant processing turn out to be absent in the masked condition. Moreover, extrastriate visual areas are still activated early on by both seen and unseen stimuli, as shown by scalp surface Laplacian current source-density maps. This conclusively shows that feedforward processing is preserved, even when subject performance is at chance as determined by objective measures. From these results, we conclude that masking derives its effectiveness, at least partly, from disrupting reentrant processing, thereby interfering with the neural mechanisms of figure-ground segmentation and visual awareness itself.
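For intuition about the current source-density maps mentioned here, the surface Laplacian can be approximated in its simplest nearest-neighbor (Hjorth) form; published CSD analyses typically use spherical-spline estimators, so treat this only as a sketch:

```python
# Hjorth-style surface Laplacian: re-reference each channel to the mean of
# its immediate neighbors, emphasizing focal cortical sources. The montage
# below is a toy assumption.
import numpy as np

def hjorth_laplacian(eeg, neighbors):
    """eeg: channels x timepoints; neighbors: dict channel -> neighbor indices."""
    csd = np.empty_like(eeg)
    for ch, nbrs in neighbors.items():
        csd[ch] = eeg[ch] - eeg[nbrs].mean(axis=0)
    return csd

rng = np.random.default_rng(3)
eeg = rng.normal(size=(4, 1000))
neighbors = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(hjorth_laplacian(eeg, neighbors).shape)   # (4, 1000)
```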


2003 ◽  
Vol 20 (1) ◽  
pp. 77-84 ◽  
Author(s):  
An Cao ◽ 
Peter H. Schiller

Relative motion information, especially relative speed between different input patterns, is required for solving many complex tasks of the visual system, such as depth perception by motion parallax and motion-induced figure/ground segmentation. However, little is known about the neural substrate for processing relative speed information. To explore the neural mechanisms for relative speed, we recorded single-unit responses to relative motion in the primary visual cortex (area V1) of rhesus monkeys while presenting sets of random-dot arrays moving at different speeds. We found that most V1 neurons were sensitive to the existence of a discontinuity in speed, that is, they showed higher responses when relative motion was presented compared to homogeneous field motion. Seventy percent of the neurons in our sample responded predominantly to relative rather than to absolute speed. Relative speed tuning curves were similar at different center–surround velocity combinations. These relative motion-sensitive neurons in macaque area V1 probably contribute to figure/ground segmentation and motion discontinuity detection.
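The center-surround comparison reported here can be expressed as a simple relative-motion index; the firing rates below are hypothetical:

```python
# Contrast a neuron's response to a speed discontinuity (center and surround
# moving at different speeds) against homogeneous full-field motion.
import numpy as np

def relative_motion_index(r_discontinuity, r_homogeneous):
    """(R_rel - R_hom) / (R_rel + R_hom), from mean firing rates."""
    r_rel, r_hom = np.mean(r_discontinuity), np.mean(r_homogeneous)
    return (r_rel - r_hom) / (r_rel + r_hom)

rates_discontinuity = np.array([42.0, 38.5, 45.1])   # spikes/s, center != surround
rates_homogeneous = np.array([20.3, 24.8, 22.6])     # spikes/s, uniform motion
print(relative_motion_index(rates_discontinuity, rates_homogeneous))   # > 0
```

A positive index for most cells, stable across center–surround velocity combinations, is the signature of relative (rather than absolute) speed coding described above.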


2017 ◽  
Author(s):  
M-P. Schallmo ◽  
A.M. Kale ◽  
R. Millin ◽  
A.V. Flevaris ◽  
Z. Brkanac ◽  
...  

Efficient neural processing depends on regulating responses through suppression and facilitation of neural activity. Utilizing a well-known visual motion paradigm that evokes behavioral suppression and facilitation, and combining five different methodologies (behavioral psychophysics, computational modeling, functional MRI, pharmacology, and magnetic resonance spectroscopy), we provide evidence that challenges commonly held assumptions about the neural processes underlying suppression and facilitation. We show that: 1) both suppression and facilitation can emerge from a single computational principle, divisive normalization; there is no need to invoke separate neural mechanisms; 2) neural suppression and facilitation in the motion-selective area MT mirror perception, but strong suppression also occurs in earlier visual areas; and 3) suppression is not driven by GABA-mediated inhibition. Thus, while commonly used spatial suppression paradigms may provide insight into neural response magnitudes in visual areas, they cannot be used to infer neural inhibition.
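Divisive normalization has a standard canonical form; the sketch below uses illustrative parameters rather than the values fitted in the study:

```python
# Canonical divisive normalization: each unit's driven input is divided by
# a pooled sum over a normalization pool. Depending on pool strength, the
# same equation produces suppression or facilitation, with no separate
# mechanisms needed. Parameters here are illustrative.
import numpy as np

def divisive_normalization(drive, pool_weights, sigma=1.0, n=2.0):
    """R_i = drive_i**n / (sigma**n + sum_j w_ij * drive_j**n)."""
    d = drive ** n
    return d / (sigma ** n + pool_weights @ d)

drive = np.array([1.0, 2.0, 4.0])     # driven inputs at three stimulus sizes
pool = np.full((3, 3), 0.5)           # uniform normalization pool
print(divisive_normalization(drive, pool))
```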


2021 ◽  
Vol 15 ◽  
Author(s):  
Chi Zhang ◽  
Xiao-Han Duan ◽  
Lin-Yuan Wang ◽  
Yong-Li Li ◽  
Bin Yan ◽  
...  

Despite the remarkable similarities between convolutional neural networks (CNNs) and the human brain, CNNs still fall behind humans in many visual tasks, indicating that considerable differences remain between the two systems. Here, we leverage adversarial noise (AN) and adversarial interference (AI) images to quantify the consistency between neural representations and perceptual outcomes in the two systems. Humans can successfully recognize AI images as the same categories as their corresponding regular images but perceive AN images as meaningless noise. In contrast, CNNs can recognize AN images as the same categories as their corresponding regular images but classify AI images into wrong categories with surprisingly high confidence. We use functional magnetic resonance imaging to measure brain activity evoked by regular and adversarial images in the human brain and compare it to the activity of artificial neurons in a prototypical CNN, AlexNet. In the human brain, we find that the representational similarity between regular and adversarial images largely echoes their perceptual similarity in all early visual areas. In AlexNet, however, the neural representations of adversarial images are inconsistent with network outputs in all intermediate processing layers, providing no neural foundation for the similarities at the perceptual level. Furthermore, we show that voxel-encoding models trained on regular images can successfully generalize to the neural responses to AI images but not to AN images. These marked differences between the human brain and AlexNet in representation-perception association suggest that future CNNs should emulate both the behavior and the internal neural representations of the human brain.
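The representational-similarity comparison can be sketched as a mean correlation between matched activation patterns; the arrays below are simulated stand-ins for fMRI voxel patterns or CNN layer activations:

```python
# Mean Pearson correlation between matched rows (image pairs) of two
# activation matrices: high if the two image sets are represented alike.
import numpy as np

def representational_similarity(acts_a, acts_b):
    a = acts_a - acts_a.mean(axis=1, keepdims=True)
    b = acts_b - acts_b.mean(axis=1, keepdims=True)
    num = (a * b).sum(axis=1)
    den = np.sqrt((a ** 2).sum(axis=1) * (b ** 2).sum(axis=1))
    return (num / den).mean()

rng = np.random.default_rng(4)
regular = rng.normal(size=(20, 512))                    # 20 images x 512 features
ai_images = regular + rng.normal(0, 0.1, (20, 512))     # AI: small perturbations
an_images = rng.normal(size=(20, 512))                  # AN: unrelated patterns
print(representational_similarity(regular, ai_images))  # high
print(representational_similarity(regular, an_images))  # near zero
```

In the abstract's terms, human visual areas show high representational similarity only where perceptual similarity is high, whereas AlexNet's intermediate layers fail to mirror its own (often wrong) classification outputs.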


Author(s):  
Matthew J Davidson ◽  
Will Mithen ◽  
Hinze Hogendoorn ◽  
Jeroen J.A. van Boxtel ◽  
Naotsugu Tsuchiya

Although visual awareness of an object typically increases neural responses, we identify a neural response that increases prior to perceptual disappearances, and that scales with the amount of invisibility reported during perceptual filling-in. These findings challenge long-held assumptions regarding the neural correlates of consciousness and entrained visually evoked potentials, by showing that the strength of stimulus-specific neural activity can encode the conscious absence of a stimulus.

Significance Statement: The focus of attention and the contents of consciousness frequently overlap. Yet what happens if this common correlation is broken? To test this, we asked human participants to attend and report on the invisibility of four visual objects which seemed to disappear, yet actually remained on screen. We found that neural activity increased, rather than decreased, when targets became invisible. This coincided with measures of attention that also increased when stimuli disappeared. Together, our data support recent suggestions that attention and conscious perception are distinct and separable. In our experiment, neural measures more strongly follow attention.
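The stimulus-specific measure here is an entrained (frequency-tagged) visually evoked response; a minimal sketch of extracting its amplitude at an assumed tag frequency:

```python
# Amplitude of the Fourier component at a stimulus's tag frequency.
# Sampling rate, duration, and tag frequency are illustrative assumptions.
import numpy as np

def tag_amplitude(signal, sfreq, tag_freq):
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sfreq)
    return spectrum[np.argmin(np.abs(freqs - tag_freq))]

sfreq, dur, tag = 500.0, 4.0, 15.0                 # Hz, s, Hz
t = np.arange(0, dur, 1.0 / sfreq)
eeg = np.sin(2 * np.pi * tag * t) + np.random.default_rng(5).normal(0, 1, t.size)
print(tag_amplitude(eeg, sfreq, tag))
# The key finding above: this stimulus-specific amplitude increased when the
# still-present stimulus perceptually disappeared.
```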


2020 ◽  
Vol 123 (1) ◽  
pp. 224-233 ◽  
Author(s):  
Matthias Fritsche ◽  
Samuel J. D. Lawrence ◽  
Floris P. de Lange

The visual system adapts to its recent history. A phenomenon related to this is repetition suppression (RS), a reduction in neural responses to repeated compared with nonrepeated visual input. An intriguing hypothesis is that the timescale over which RS occurs across the visual hierarchy is tuned to the temporal statistics of visual input features, which change rapidly in low-level areas but are more stable in higher-level areas. Here, we tested this hypothesis by studying the influence of the temporal lag between successive visual stimuli on RS throughout the visual system using functional MRI (fMRI). Twelve human volunteers engaged in four fMRI sessions in which we characterized the blood oxygen level-dependent response to pairs of repeated and nonrepeated natural images with interstimulus intervals (ISIs) ranging from 50 to 1,000 ms to quantify the temporal tuning of RS along the posterior-anterior axis of the visual system. As expected, RS was maximal for short ISIs and decayed with increasing ISI. Crucially, however, and against our hypothesis, RS decayed at a similar rate in early and late visual areas. This finding challenges the prevailing view that the timescale of RS increases along the posterior-anterior axis of the visual system and suggests that RS is not tuned to temporal input regularities.

NEW & NOTEWORTHY: Visual areas show reduced neural responses to repeated compared with nonrepeated visual input, a phenomenon termed repetition suppression (RS). Here we show that RS decays at a similar rate in low- and high-level visual areas, suggesting that the short-term decay of RS across the visual hierarchy is not tuned to temporal input regularities. This may limit the specificity with which the mechanisms underlying RS could optimize the processing of input features across the visual hierarchy.
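The decay of RS over ISI can be quantified by fitting an exponential per area and comparing time constants; the RS values below are illustrative, and the SciPy fit is an assumed analysis, not necessarily the authors':

```python
# Fit RS (nonrepeated minus repeated response) as an exponential decay over
# ISI; a similar tau in early and late areas is the paper's (null) result.
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(isi_ms, amplitude, tau_ms):
    return amplitude * np.exp(-isi_ms / tau_ms)

isi = np.array([50.0, 100.0, 200.0, 500.0, 1000.0])   # ms, range as in the study
rs_early = np.array([0.42, 0.35, 0.26, 0.13, 0.06])   # illustrative values
rs_late = np.array([0.30, 0.25, 0.18, 0.09, 0.04])

(amp_e, tau_e), _ = curve_fit(exp_decay, isi, rs_early, p0=(0.4, 300.0))
(amp_l, tau_l), _ = curve_fit(exp_decay, isi, rs_late, p0=(0.3, 300.0))
print(tau_e, tau_l)   # comparable time constants across the hierarchy
```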

