The influence of object-colour knowledge on emerging object representations in the brain

2019
Author(s):
Lina Teichmann
Genevieve L. Quek
Amanda K. Robinson
Tijl Grootswagers
Thomas A. Carlson
...  

Abstract
The ability to rapidly and accurately recognise complex objects is a crucial function of the human visual system. To recognise an object, we need to bind incoming visual features such as colour and form together into cohesive neural representations and integrate these with our pre-existing knowledge about the world. For some objects, typical colour is a central feature for recognition; for example, a banana is typically yellow. Here, we applied multivariate pattern analysis to time-resolved neuroimaging (magnetoencephalography) data to examine how object-colour knowledge affects emerging object representations over time. Our results from 20 participants (11 female) show that the typicality of object-colour combinations influences object representations, although not at the initial stages of object and colour processing. We find evidence that colour decoding peaks later for atypical object-colour combinations than for typical ones, illustrating the interplay between the processing of incoming object features and stored object knowledge. Taken together, these results provide new insights into the integration of incoming visual information with existing conceptual object knowledge.

Significance Statement
To recognise objects, we have to be able to bind object features such as colour and shape into one coherent representation and compare it to stored object knowledge. The magnetoencephalography data presented here provide novel insights into the integration of incoming visual information with our knowledge about the world. Using colour as a model to understand the interaction between seeing and knowing, we show that there is a unique pattern of brain activity for congruently coloured objects (e.g., a yellow banana) relative to incongruently coloured objects (e.g., a red banana). This effect of object-colour knowledge only occurs after single object features are processed, demonstrating that conceptual knowledge is accessed relatively late in the visual processing hierarchy.
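
For readers unfamiliar with the method, the sketch below shows the general shape of such a time-resolved decoding analysis: a classifier is trained and cross-validated independently at each timepoint, and the resulting accuracy curve indicates when condition information (e.g., colour typicality) becomes decodable. This is a minimal illustration on simulated data, not the authors' pipeline; the array shapes and condition labels are assumptions.

```python
# Minimal time-resolved MVPA sketch (simulated data, not the study's pipeline).
# `data` stands in for MEG epochs: trials x channels x timepoints.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 160, 120
data = rng.standard_normal((n_trials, n_channels, n_times))
labels = rng.integers(0, 2, n_trials)  # e.g., typical vs. atypical object colour

# Cross-validate a classifier independently at each timepoint; the accuracy
# time course shows when the two conditions become decodable.
accuracy = np.array([
    cross_val_score(LinearDiscriminantAnalysis(), data[:, :, t], labels, cv=5).mean()
    for t in range(n_times)
])
print("peak decoding at timepoint", int(accuracy.argmax()))
```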

2021
Vol 7 (30)
pp. eabf2218
Author(s):
Richard Schweitzer
Martin Rolfs

Rapid eye movements (saccades) incessantly shift objects across the retina. To establish object correspondence, the visual system is thought to match surface features of objects across saccades. Here, we show that an object’s intrasaccadic retinal trace—a signal previously considered unavailable to visual processing—facilitates this match-making. Human observers made saccades to a cued target in a circular stimulus array. Using high-speed visual projection, we swiftly rotated this array during the eyes’ flight, displaying continuous intrasaccadic target motion. Observers’ saccades landed between the target and a distractor, prompting secondary saccades. Independently of the availability of object features, which we controlled tightly, target motion increased the rate and reduced the latency of gaze-correcting saccades to the initial presaccadic target, in particular when the target’s stimulus features incidentally gave rise to efficient motion streaks. These results suggest that intrasaccadic visual information informs the establishment of object correspondence and jump-starts gaze correction.
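
As a concrete illustration of the dependent measures here (rate and latency of gaze-correcting saccades), the sketch below shows the standard velocity-threshold approach to detecting saccades in an eye-position trace. The sampling rate and velocity threshold are generic assumptions, not the values used in the study.

```python
# Generic velocity-threshold saccade detection (illustrative; thresholds and
# sampling rate are assumptions, not the study's parameters).
import numpy as np

def detect_saccades(x, y, fs=1000.0, vel_thresh=30.0):
    """Return (onset, offset) sample indices of intervals where gaze speed
    (deg/s) exceeds vel_thresh; x and y are gaze positions in degrees."""
    speed = np.hypot(np.gradient(x) * fs, np.gradient(y) * fs)
    fast = np.concatenate(([False], speed > vel_thresh, [False]))
    edges = np.diff(fast.astype(int))
    onsets = np.flatnonzero(edges == 1)    # first above-threshold sample
    offsets = np.flatnonzero(edges == -1)  # first sample back below threshold
    return list(zip(onsets, offsets))

# Secondary-saccade latency is then the time from the offset of the primary
# saccade to the onset of the next detected saccade.
t = np.arange(0, 0.5, 0.001)     # 500 ms of samples at 1 kHz
x = np.where(t > 0.2, 8.0, 0.0)  # a position step mimicking a saccade
y = np.zeros_like(t)
print(detect_saccades(x, y))     # one interval around sample 200
```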


2014
Vol 369 (1634)
pp. 20120391
Author(s):
Gert Westermann
Denis Mareschal

From at least two months of age onwards, infants can form perceptual categories. During the first year of life, object knowledge develops from the ability to represent individual object features, to representing correlations between attributes, to integrating information from different sources. At the end of the first year, these representations are shaped by labels, opening the way to conceptual knowledge. Here, we review the development of object knowledge and object categorization over the first year of life. We then present an artificial neural network model of the transition from early perceptual categorization to categories mediated by labels. The model informs a current debate on the role of labels in object categorization by suggesting that although labels do not act as object features, they nevertheless affect the perceived similarity of perceptually distinct objects sharing the same label. The model is a first step towards an integrated account from early perceptual categorization to language-based concept learning.
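
The authors' own network is described in the paper; as a toy illustration of the core claim (labels are not input features, yet they warp internal similarity), the sketch below trains a small classifier to map perceptual features to labels and then compares hidden-layer distances. All numbers and cluster names are invented for the demonstration.

```python
# Toy illustration (not the authors' model): labels never appear as input
# features, yet items that share a label tend to end up with more similar
# internal representations after learning.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
a = rng.normal(0.0, 0.3, (50, 8))   # three perceptually distinct clusters,
b = rng.normal(2.0, 0.3, (50, 8))   # equally spaced in the input space
c = rng.normal(4.0, 0.3, (50, 8))
X = np.vstack([a, b, c])
y = np.array([0] * 100 + [1] * 50)  # clusters a and b share one label

net = MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000, random_state=0).fit(X, y)

def hidden(x):
    # Hidden-layer activations of the trained network (relu by default).
    return np.maximum(0, x @ net.coefs_[0] + net.intercepts_[0])

ha, hb, hc = hidden(a).mean(0), hidden(b).mean(0), hidden(c).mean(0)
# Typically a-b (same label) comes out smaller than b-c (different labels),
# even though a-b and b-c are equally far apart in the input space.
print("a-b:", np.linalg.norm(ha - hb), " b-c:", np.linalg.norm(hb - hc))
```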


2011
Vol 106 (3)
pp. 1389-1398
Author(s):
Jason Fischer
David Whitney

Natural visual scenes are cluttered. In such scenes, many objects in the periphery can be crowded, that is, blocked from identification simply because of the dense surrounding clutter. Outside of the fovea, crowding constitutes the fundamental limitation on object recognition and is thought to arise from the limited resolution of the neural mechanisms that select and bind visual features into coherent objects. Thus it is widely believed that in the visual processing stream, a crowded object is reduced to a collection of dismantled features with no surviving holistic properties. Here, we show that this is not so: an entire face can survive crowding and contribute its holistic attributes to the perceived average of the set, despite being blocked from recognition. Our results show that crowding does not dismantle high-level object representations into their component features.


2020
Author(s):
Florence Campana
Jacob G. Martin
Levan Bokeria
Simon Thorpe
Xiong Jiang
...  

Abstract
The commonly accepted “simple-to-complex” model of visual processing in the brain posits that visual tasks on complex objects such as faces are based on representations in high-level visual areas. Yet recent experimental data showing the visual system’s ability to localize faces in natural images within 100 ms (Crouzet et al., 2010) challenge the prevalent hierarchical description of the visual system and instead suggest the hypothesis of face selectivity in early visual areas. In the present study, we tested this hypothesis with human participants in two eye-tracking experiments, an fMRI experiment, and an EEG experiment. We found converging evidence for neural representations selective for upright faces in V1/V2, with latencies starting around 40 ms post-stimulus onset. Our findings suggest a revision of the standard “simple-to-complex” model of hierarchical visual processing.

Significance Statement
Visual processing in the brain is classically described as a series of stages with increasingly complex object representations: early visual areas encode simple visual features (such as oriented bars), and high-level visual areas encode representations of complex objects (such as faces). In the present study, we provide behavioral, fMRI, and EEG evidence for representations of complex objects, namely faces, in early visual areas. Our results challenge the standard “simple-to-complex” model of visual processing, suggesting that it needs to be revised to include neural representations of faces at the lowest levels of the visual hierarchy. Such early object representations would permit the rapid and precise localization of complex objects, as has previously been reported for faces.
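
A latency like the ~40 ms reported here is typically read off a decoding time course as the first sustained above-chance excursion. The sketch below shows that logic on simulated data; the threshold and run-length criterion are illustrative assumptions, not the authors' statistics.

```python
# Illustrative onset-latency estimate from a decoding time course: the first
# timepoint whose accuracy stays above threshold for `run` consecutive samples.
import numpy as np

def onset_latency(accuracy, times, thresh=0.55, run=5):
    above = accuracy > thresh
    for i in range(len(above) - run + 1):
        if above[i:i + run].all():
            return times[i]
    return None

times = np.arange(-100, 300, 4)                     # ms from stimulus onset
acc = 0.5 + 0.1 / (1 + np.exp(-(times - 40) / 10))  # toy sigmoidal rise
print(onset_latency(acc, times), "ms")              # ~40 ms in this toy case
```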


2018
Author(s):
Roberto Bottini
Stefania Ferraro
Anna Nigri
Valeria Cuccarini
Maria Grazia Bruzzone
...  

Abstract
We investigated the experiential bases of knowledge by asking whether people who perceive the world in a different way also show a different neurobiology of concepts. We characterized the brain activity of early-blind and sighted individuals during a conceptual retrieval task in which participants rated the perceptual similarity between color and action concepts evoked by spoken words. Adaptation analysis showed that word pairs referring to perceptually similar colors (e.g., red-orange) or actions (e.g., run-jump) led to repetition suppression in occipital visual regions in the sighted, regions that are known to encode visual features of objects and events independently of their category. Early-blind participants instead showed adaptation for similar concepts in language-related regions, but not in occipital cortices. Further analysis contrasting the two categories (color and action), independently of item similarity, activated category-sensitive regions in the pMTG (for actions) and the precuneus (for color) in both sighted and blind participants. These two regions, however, showed a different connectivity profile as a function of visual deprivation, increasing task-dependent connectivity with reorganized occipital regions in the early blind. Overall, our results show that visual deprivation changes the neural bases of conceptual retrieval, which is partially grounded in sensorimotor experience.

Significance Statement
Do people with different sensory experience conceive the world differently? We tested whether conceptual knowledge builds on sensory experience by looking at the neurobiology of concepts in early-blind individuals. Participants in fMRI heard pairs of words referring to colors (e.g., green-blue) or actions (e.g., jump-run) and rated their perceptual similarity. Perceptual similarity of colors and actions was represented in occipital visual regions in the sighted, but in language-related regions in the blind. Occipital regions in the blind, albeit not encoding perceptual similarity, were nevertheless recruited during conceptual retrieval, working in concert with classic semantic hubs such as the precuneus and the lpMTG. Overall, visual deprivation changes the neural bases of conceptual processing, which is partially grounded in sensorimotor experience.
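
The logic of the adaptation analysis can be summarised as a correlation, across word pairs, between rated similarity and repetition suppression. The sketch below runs that test on simulated numbers; the variable names and effect size are placeholders, not the study's data.

```python
# Simulated version of the adaptation logic: if a region encodes perceptual
# similarity, pairs rated as more similar should show stronger repetition
# suppression. All values here are placeholders.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_pairs = 40
similarity = rng.uniform(0, 1, n_pairs)                       # ratings per word pair
suppression = 0.8 * similarity + rng.normal(0, 0.3, n_pairs)  # simulated BOLD effect

r, p = pearsonr(similarity, suppression)
print(f"similarity-suppression correlation: r = {r:.2f}, p = {p:.4f}")
```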


2020
Vol 1 (1)
Author(s):
Michelle Marneweck
Scott T. Grafton

Abstract
Humans are seamless in their ability to efficiently and reliably generate fingertip forces to gracefully interact with objects. Such interactions rarely end in awkward outcomes like spilling, crushing, or tilting, given advanced motor planning. Here we combine multiband imaging with deconvolution- and Bayesian pattern component modeling of functional magnetic resonance imaging data and in-scanner kinematics, revealing compelling evidence that the human brain differentially represents preparatory information for skillful object interactions depending on the saliency of visual cues. Earlier patterned activity was particularly evident in the ventral visual processing stream, but also selectively in the dorsal visual processing stream and cerebellum, in conditions of heightened uncertainty when an object’s superficial shape was incompatible rather than compatible with a key underlying object feature.


2021
pp. 096372142199033
Author(s):
Katherine R. Storrs
Roland W. Fleming

One of the deepest insights in neuroscience is that sensory encoding should take advantage of statistical regularities. Humans’ visual experience contains many redundancies: Scenes mostly stay the same from moment to moment, and nearby image locations usually have similar colors. A visual system that knows which regularities shape natural images can exploit them to encode scenes compactly or guess what will happen next. Although these principles have been appreciated for more than 60 years, until recently it has been possible to convert them into explicit models only for the earliest stages of visual processing. But recent advances in unsupervised deep learning have changed that. Neural networks can be taught to compress images or make predictions in space or time. In the process, they learn the statistical regularities that structure images, which in turn often reflect physical objects and processes in the outside world. The astonishing accomplishments of unsupervised deep learning reaffirm the importance of learning statistical regularities for sensory coding and provide a coherent framework for how knowledge of the outside world gets into visual cortex.
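
As a concrete (and heavily simplified) instance of the idea, the sketch below trains a tiny autoencoder on signals whose neighbouring values are redundant; squeezing them through a bottleneck forces the network to learn that regularity. It is a generic PyTorch toy, not a model from the article.

```python
# Tiny autoencoder toy (PyTorch): compressing redundant signals through a
# bottleneck forces the network to learn their statistical regularities.
import torch
import torch.nn as nn

torch.manual_seed(0)
signals = torch.cumsum(torch.randn(512, 32), dim=1)  # smooth, so neighbours are redundant

model = nn.Sequential(
    nn.Linear(32, 4), nn.ReLU(),  # encoder: 32 values -> 4-unit bottleneck
    nn.Linear(4, 32),             # decoder: reconstruct the 32 values
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(500):
    loss = nn.functional.mse_loss(model(signals), signals)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final reconstruction error:", float(loss))
```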


1983
Vol 27 (5)
pp. 354-354
Author(s):
Bruce W. Hamill
Robert A. Virzi

This investigation addresses the problem of attention in the processing of symbolic information from visual displays. Its scope includes the nature of attentive processes, the structural properties of stimuli that influence visual information processing mechanisms, and the manner in which these factors interact in perception. Our purpose is to determine the effects of configural feature structure on visual information processing. It is known that for stimuli comprising separable features, one can distinguish between conditions in which only one relevant feature differs among stimuli in the array being searched and conditions in which conjunctions of two (or more) features differ. Because the visual process of conjoining separable features is additive, this difference is reflected in search time as a function of array size: feature conditions yield flat curves associated with parallel search (no increase in search time across array sizes), whereas conjunction conditions yield linearly increasing curves associated with serial search. We studied configural-feature stimuli within this framework to determine the nature of visual processing for such stimuli as a function of their feature structure.

Response times of subjects searching for particular targets among structured arrays of distractors were measured in a speeded visual search task. Two different sets of stimulus materials were studied in array sizes of up to 32 stimuli, using both tachistoscope and microcomputer-based CRT presentation for each.

Our results with configural stimuli indicate serial search in all of the conditions, with the slope of the response-time-by-array-size function being steeper for conjunction conditions than for feature conditions. However, for each of the two sets of stimuli we studied, one configuration stood apart from the others in its set: it yielded significantly faster response times, and conjunction conditions involving these particular stimuli tended to cluster with the feature conditions rather than with the other conjunction conditions. In addition to these major effects of particular targets, context effects also appeared in our results as effects of the various distractor sets used; certain of these context effects appear to be reversible. The effects of distractor sets on target search were studied in considerable detail.

We have found interesting differences in visual processing between stimuli comprising separable features and those comprising configural features. We have also been able to characterize the effects found with configural-feature stimuli as being related to the specific feature structure of the target stimulus in the context of the specific feature structure of the distractor stimuli. These findings have strong implications for the design of symbology that can enhance visual performance in the use of automated displays.
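
The parallel-versus-serial contrast described above is conventionally quantified as the slope of the response-time-by-array-size function. The sketch below fits such slopes to simulated data; the millisecond values are invented for illustration.

```python
# Fitting response-time-by-array-size slopes (simulated data): a near-zero
# slope suggests parallel search, a steep positive slope serial search.
import numpy as np

rng = np.random.default_rng(0)
array_sizes = np.array([4, 8, 16, 32])
rt_feature = 450 + 0.5 * array_sizes + rng.normal(0, 5, 4)       # flat curve
rt_conjunction = 450 + 25.0 * array_sizes + rng.normal(0, 5, 4)  # steep curve

for name, rt in (("feature", rt_feature), ("conjunction", rt_conjunction)):
    slope, intercept = np.polyfit(array_sizes, rt, 1)
    print(f"{name}: {slope:.1f} ms/item (intercept {intercept:.0f} ms)")
```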


1999
Vol 11 (3)
pp. 300-311
Author(s):
Edmund T. Rolls
Martin J. Tovée
Stefano Panzeri

Backward masking can potentially provide evidence of the time needed for visual processing, a fundamental constraint that must be incorporated into computational models of vision. Although backward masking has been used extensively in psychophysics, there is little direct evidence for the effects of visual masking on neuronal responses. Investigating the effects of a backward masking paradigm on the responses of neurons in the temporal visual cortex, we have previously shown that the response of the neurons is interrupted by the mask: under conditions in which humans can just identify the stimulus, with stimulus onset asynchronies (SOAs) of 20 msec, neurons in macaques respond to their best stimulus for approximately 30 msec. Here, we quantify the information that is available from the responses of single neurons under backward masking conditions when two to six faces were shown. We show that the information available is greatly decreased as the mask is brought closer to the stimulus. The decrease is more marked than the decrease in firing rate, both because it is the selective part of the firing that is especially attenuated by the mask, not the spontaneous firing, and because the neuronal response is more variable at short SOAs. However, even at the shortest SOA of 20 msec, the information available is on average 0.1 bits. This compares to 0.3 bits with only the 16-msec target stimulus shown and a typical value for such neurons of 0.4 to 0.5 bits with a 500-msec stimulus. The results thus show that considerable information is available from neuronal responses even under backward masking conditions that restrict the neurons' main response to about 30 msec. This provides evidence for how rapid the processing of visual information is in a cortical area and a fundamental constraint for understanding how cortical information processing operates.
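
The bit values quoted above come from estimating the mutual information between the stimulus shown and the neuron's response. The sketch below computes a plug-in estimate on simulated spike counts; real analyses, including this one, add bias corrections that are omitted here.

```python
# Plug-in mutual-information estimate (bits) between stimulus identity and
# binned spike counts; simulated data, no bias correction.
import numpy as np

def mutual_information(stimuli, responses):
    joint = np.zeros((stimuli.max() + 1, responses.max() + 1))
    np.add.at(joint, (stimuli, responses), 1)   # joint histogram of (s, r)
    joint /= joint.sum()
    ps = joint.sum(axis=1, keepdims=True)       # P(stimulus)
    pr = joint.sum(axis=0, keepdims=True)       # P(response)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz])).sum())

rng = np.random.default_rng(0)
stimuli = rng.integers(0, 4, 2000)        # four face stimuli across trials
rates = np.array([2, 5, 9, 14])           # stimulus-dependent mean firing
responses = rng.poisson(rates[stimuli])   # spike counts per trial
print(f"{mutual_information(stimuli, responses):.2f} bits")
```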

