Object-level visual information gets through the bottleneck of crowding

2011, Vol 106 (3), pp. 1389-1398
Author(s): Jason Fischer, David Whitney

Natural visual scenes are cluttered. In such scenes, many objects in the periphery can be crowded, blocked from identification simply because of the dense array of surrounding clutter. Outside of the fovea, crowding constitutes the fundamental limitation on object recognition and is thought to arise from the limited resolution of the neural mechanisms that select and bind visual features into coherent objects. It is therefore widely believed that, in the visual processing stream, a crowded object is reduced to a collection of dismantled features with no surviving holistic properties. Here, we show that this is not so: an entire face can survive crowding and contribute its holistic attributes to the perceived average of a set of faces, despite itself being blocked from recognition. Our results show that crowding does not dismantle high-level object representations into their component features.

2021
Author(s): Marek A. Pedziwiatr, Elisabeth von dem Hagen, Christoph Teufel

Humans constantly move their eyes to explore the environment and obtain information. Competing theories of gaze guidance consider the factors driving eye movements within a dichotomy between low-level visual features and high-level object representations. However, recent developments in object perception indicate a complex and intricate relationship between features and objects. Specifically, image-independent object-knowledge can generate objecthood by dynamically reconfiguring how feature space is carved up by the visual system. Here, we adopt this emerging perspective of object perception, moving away from the simplifying dichotomy between features and objects in explanations of gaze guidance. We recorded eye movements in response to stimuli that appear as meaningless patches on initial viewing but are experienced as coherent objects once relevant object-knowledge has been acquired. We demonstrate that gaze guidance differs substantially depending on whether observers experienced the same stimuli as meaningless patches or organised them into object representations. In particular, fixations on identical images became object-centred, less dispersed, and more consistent across observers once observers had been exposed to relevant prior object-knowledge. Observers' gaze behaviour also indicated a shift from exploratory information-sampling to a strategy of extracting information mainly from selected, object-related image areas. These effects were evident from the first fixations on the image. Importantly, however, eye movements were not fully determined by object representations but were best explained by a simple model that integrates image-computable features and high-level, knowledge-dependent object representations. Overall, the results show how information sampling via eye movements in humans is guided by a dynamic interaction between image-computable features and knowledge-driven perceptual organisation.
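
The final point can be made concrete with a toy model. The sketch below, written against assumed synthetic inputs rather than the authors' actual model or data, combines a generic image-computable feature (saliency) map with a binary, knowledge-dependent object map into a single weighted priority map and scores it against fixation locations using Normalised Scanpath Saliency; every map, coordinate, and function name is an illustrative stand-in.

```python
import numpy as np

def zscore(m):
    """Normalise a map to zero mean and unit variance."""
    return (m - m.mean()) / (m.std() + 1e-12)

def combine_priority_maps(feature_map, object_map, w_object=0.5):
    """Weighted mix of an image-computable feature map and a knowledge-dependent object map."""
    return (1 - w_object) * zscore(feature_map) + w_object * zscore(object_map)

def nss(priority_map, fixations):
    """Normalised Scanpath Saliency: mean z-scored priority value at fixated pixels."""
    rows, cols = zip(*fixations)
    return zscore(priority_map)[list(rows), list(cols)].mean()

rng = np.random.default_rng(0)
height, width = 60, 80
feature_map = rng.random((height, width))       # stand-in for a saliency-model output
object_map = np.zeros((height, width))
object_map[20:40, 30:60] = 1.0                  # stand-in for a knowledge-defined object region
fixations = [(25, 35), (30, 50), (10, 10)]      # (row, col) fixation coordinates

for w_obj in (0.0, 0.5, 1.0):
    priority = combine_priority_maps(feature_map, object_map, w_obj)
    print(f"w_object={w_obj:.1f}  NSS={nss(priority, fixations):.2f}")
```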


2021
Author(s): Ning Mei, Roberto Santana, David Soto

Despite advances in the neuroscience of visual consciousness over the last decades, we still lack a framework for understanding the scope of unconscious processing and how it relates to conscious experience. Previous research observed brain signatures of unconscious contents in visual cortex, but these have not been identified in a reliable manner: low trial numbers and signal-detection-theoretic constraints did not allow conscious perception to be decisively ruled out. Critically, the extent to which unconscious content is represented in high-level processing stages along the ventral visual stream and linked prefrontal areas remains unknown. Using a within-subject, high-precision, highly sampled fMRI approach, we show that unconscious contents, even those associated with null sensitivity, can be reliably decoded from multivoxel patterns that are highly distributed along the ventral visual pathway and that also involve prefrontal substrates. Notably, the neural representation in these areas generalised across conscious and unconscious visual processing states, placing constraints on prior findings that fronto-parietal substrates support the representation of conscious contents and suggesting revisions to models of consciousness such as the neuronal global workspace. We then provide a computational simulation of visual information processing and representation in the absence of perceptual sensitivity, using feedforward convolutional neural networks trained to perform a visual task similar to that of the human observers. The work provides a novel framework for pinpointing the neural representation of unconscious knowledge across different task domains.
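
The cross-state decoding logic can be sketched with standard tools. The snippet below, which uses synthetic multivoxel patterns and scikit-learn rather than the authors' data or pipeline, trains a classifier on "conscious" trials and tests it on "unconscious" trials; above-chance transfer performance is the signature of a representation that generalises across awareness states.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_trials, n_voxels = 200, 500

# Synthetic multivoxel patterns: two stimulus classes sharing a weak voxel-wise signal
signal = rng.normal(size=n_voxels)

def make_patterns(n, snr):
    """Simulate n trials with class labels y and signal strength snr."""
    y = rng.integers(0, 2, n)
    X = rng.normal(size=(n, n_voxels)) + snr * np.outer(y * 2 - 1, signal)
    return X, y

X_conscious, y_conscious = make_patterns(n_trials, snr=0.15)      # "seen" trials
X_unconscious, y_unconscious = make_patterns(n_trials, snr=0.05)  # "unseen" trials

# Train on conscious trials, test on unconscious trials: above-chance AUC would
# indicate a stimulus representation shared across awareness states.
clf = make_pipeline(StandardScaler(), LogisticRegression(C=1.0, max_iter=1000))
clf.fit(X_conscious, y_conscious)
auc = roc_auc_score(y_unconscious, clf.predict_proba(X_unconscious)[:, 1])
print(f"cross-state decoding AUC: {auc:.2f}")
```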


F1000Research, 2013, Vol 2, pp. 58
Author(s): J Daniel McCarthy, Colin Kupitz, Gideon P Caplovitz

Our perception of an object's size arises from the integration of multiple sources of visual information, including retinal size, perceived distance, and size relative to other objects in the visual field. This constructive process is revealed by a number of classic size illusions, such as the Delboeuf Illusion, the Ebbinghaus Illusion, and others illustrating size constancy. Here we present a novel variant of the Delboeuf and Ebbinghaus size illusions that we have named the Binding Ring Illusion. In this illusion, the perceived size of a circular array of elements is underestimated when a circular contour – a binding ring – is superimposed on the array, and overestimated when the binding ring slightly exceeds the overall size of the array. We characterize the stimulus conditions that lead to the illusion and the perceptual principles that underlie it. Our findings indicate that the perceived size of an array is susceptible to assimilation toward an explicitly defined superimposed contour. Our results also indicate that the assimilation process takes place at a relatively high level in the visual processing stream, after different spatial frequencies have been integrated and global shape has been constructed. We hypothesize that the Binding Ring Illusion arises because the size of an array of elements is not explicitly defined and can therefore be influenced, through a process of assimilation, by the presence of a superimposed object that does have an explicit size.
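
For readers who want to see the display, the matplotlib sketch below draws a toy version of the stimulus: a circular array of dots with a superimposed binding ring, once at the array's radius and once slightly larger. All geometry and sizes here are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np
import matplotlib.pyplot as plt

def binding_ring_stimulus(ax, array_radius=1.0, ring_radius=1.0,
                          n_elements=12, element_size=200):
    """Draw a circular array of dots with a superimposed 'binding ring'."""
    theta = np.linspace(0, 2 * np.pi, n_elements, endpoint=False)
    ax.scatter(array_radius * np.cos(theta), array_radius * np.sin(theta),
               s=element_size, color="black")
    ring = plt.Circle((0, 0), ring_radius, fill=False, linewidth=2, color="gray")
    ax.add_patch(ring)
    ax.set_aspect("equal")
    ax.set_xlim(-2, 2)
    ax.set_ylim(-2, 2)
    ax.axis("off")

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
binding_ring_stimulus(axes[0], ring_radius=1.0)   # ring superimposed on the array
binding_ring_stimulus(axes[1], ring_radius=1.3)   # ring slightly exceeding the array
plt.show()
```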


eLife, 2017, Vol 6
Author(s): Ivan Larderet, Pauline MJ Fritsch, Nanae Gendre, G Larisa Neagu-Maier, Richard D Fetter, et al.

Visual systems transduce, process, and transmit light-dependent environmental cues. The computation of visual features depends on the photoreceptor neuron (PR) types present, the organization of the eye, and the wiring of the underlying neural circuit. Here, we describe the circuit architecture of the visual system of Drosophila larvae by mapping the synaptic wiring diagram and neurotransmitters. By contacting different targets, the two larval PR subtypes create two converging pathways, potentially underlying the computation of ambient light intensity and temporal light changes already within this first visual processing center. Locally processed visual information then signals via dedicated projection interneurons to higher brain areas, including the lateral horn and mushroom body. The stratified structure of the larval optic neuropil (LON) suggests common organizational principles with the adult fly and vertebrate visual systems. The complete synaptic wiring diagram of the LON paves the way to understanding how circuits with reduced numerical complexity control wide ranges of behaviors.


2019
Author(s): Ali Pournaghdali, Bennett L Schwartz

Studies utilizing continuous flash suppression (CFS) provide valuable information regarding conscious and nonconscious perception. There are, however, crucial unanswered questions regarding the mechanisms of suppression and the level of visual processing that occurs in the absence of consciousness under CFS. Research suggests that the answers to these questions depend on the experimental configuration and on how consciousness is assessed in these studies. The aim of this review is to evaluate the impact of different experimental configurations and assessments of consciousness on the results of previous CFS studies. We review studies that evaluated the influence of different experimental configurations on the depth of suppression with CFS and discuss how different assessments of consciousness may affect the results of CFS studies. Finally, we review behavioral and brain-recording studies of CFS. In conclusion, previous studies provide evidence for the survival of low-level visual information and the complete impairment of high-level visual information under CFS. That is, studies suggest that nonconscious perception of lower-level visual information occurs with CFS, but there is no evidence for nonconscious high-level recognition with CFS.


2018, Vol 29 (10), pp. 4452-4461
Author(s): Sue-Hyun Lee, Dwight J Kravitz, Chris I Baker

Memory retrieval is thought to depend on interactions between the hippocampus and cortex, but the nature of the representations in these regions and their relationship remains unclear. Here, we performed an ultra-high-field (7T) fMRI experiment comprising perception, learning, and retrieval sessions. We observed a fundamental difference between representations in the hippocampus and high-level visual cortex during perception and retrieval. First, while object-selective posterior fusiform cortex showed consistent responses that allowed us to decode object identity across both perception and retrieval one day after learning, object decoding in the hippocampus was much stronger during retrieval than during perception. Second, in visual cortex but not the hippocampus, there was consistency in response patterns between perception and retrieval, suggesting that substantial neural populations are shared between perception and retrieval. Finally, the decoding in the hippocampus during retrieval was not observed when retrieval was tested on the same day as learning, suggesting that the retrieval process itself is not sufficient to elicit decodable object representations. Collectively, these findings suggest that while cortical representations are stable between perception and retrieval, hippocampal representations are much stronger during retrieval, implying some form of reorganization of the representations between perception and retrieval.
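
The "consistency in response patterns" finding can be illustrated with a simple pattern-similarity sketch. The code below, which uses entirely synthetic voxel patterns for two hypothetical ROIs rather than the authors' data or analysis, correlates per-object patterns across perception and retrieval and compares matched (same object) with mismatched (different object) correlations.

```python
import numpy as np

def cross_state_pattern_similarity(perception, retrieval):
    """Pearson correlation between perception and retrieval voxel patterns:
    mean for matched objects vs. the mismatched (different-object) baseline."""
    n_objects = perception.shape[0]
    corr = np.corrcoef(perception, retrieval)[:n_objects, n_objects:]
    matched = np.diag(corr).mean()
    mismatched = (corr.sum() - np.trace(corr)) / (n_objects**2 - n_objects)
    return matched, mismatched

rng = np.random.default_rng(2)
n_objects, n_voxels = 24, 300
shared = rng.normal(size=(n_objects, n_voxels))  # object-specific pattern shared across states

# Hypothetical ROIs: cortex shares patterns across states, hippocampus does not
cortex_perception = shared + 0.8 * rng.normal(size=shared.shape)
cortex_retrieval = shared + 0.8 * rng.normal(size=shared.shape)
hippocampus_perception = rng.normal(size=shared.shape)
hippocampus_retrieval = rng.normal(size=shared.shape)

rois = {"fusiform": (cortex_perception, cortex_retrieval),
        "hippocampus": (hippocampus_perception, hippocampus_retrieval)}
for name, (perc, retr) in rois.items():
    matched, mismatched = cross_state_pattern_similarity(perc, retr)
    print(f"{name}: matched r={matched:.2f}, mismatched r={mismatched:.2f}")
```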


2019
Author(s): Nadine Dijkstra, Luca Ambrogioni, Marcel A.J. van Gerven

After the presentation of a visual stimulus, cortical visual processing cascades from low-level sensory features in primary visual areas to increasingly abstract representations in higher-level areas. It is often hypothesized that the reverse process underpins the human ability to generate mental images. Under this hypothesis, visual information feeds back from high-level areas as abstract representations are used to construct the sensory representation in primary visual cortices. Such reversals of information flow are also hypothesized to play a central role in later stages of perception. According to predictive processing theories, ambiguous sensory information is resolved using abstract representations coming from high-level areas through oscillatory rebounds between different levels of the visual hierarchy. However, despite the elegance of these theoretical models, there is to date no direct experimental evidence for a reversal of visual information flow during mental imagery and perception. In the first part of this paper, we provide direct evidence in humans for a reverse order of activation of the visual hierarchy during imagery. Specifically, we show that machine-learning classifiers trained on brain data from different time points during the early feedforward phase of perception are reactivated in reverse order during mental imagery. In the second part of the paper, we report an 11 Hz oscillatory pattern of feedforward and reversed visual processing phases during perception. Together, these results are in line with the idea that during perception, the high-level cause of the sensory input is inferred through recurrent hypothesis updating, whereas during imagery, this learned forward mapping is reversed to generate sensory signals from abstract representations.
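
The reverse-order claim maps naturally onto a temporal-generalisation analysis: train a classifier at each perception time point, test it at each imagery time point, and look for above-chance accuracy along the anti-diagonal of the resulting matrix. The simulation below uses synthetic sensor data and scikit-learn's linear discriminant analysis purely as a stand-in for the authors' decoding approach; all data and parameters are invented.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
n_trials, n_sensors, n_times = 100, 30, 20
y = rng.integers(0, 2, n_trials)

# Class-specific sensor patterns that drift over time during perception and
# re-appear in reverse temporal order during imagery.
patterns = rng.normal(size=(n_times, n_sensors))

def simulate(order):
    """Simulate trials x times x sensors data with a given pattern sequence."""
    X = rng.normal(size=(n_trials, n_times, n_sensors))
    for t, p in zip(range(n_times), order):
        X[:, t, :] += 0.8 * np.outer(y * 2 - 1, patterns[p])
    return X

perception = simulate(range(n_times))
imagery = simulate(range(n_times - 1, -1, -1))   # reversed pattern sequence

# Temporal generalisation: train at each perception time, test at each imagery time.
tg = np.zeros((n_times, n_times))
for t_train in range(n_times):
    clf = LinearDiscriminantAnalysis().fit(perception[:, t_train, :], y)
    for t_test in range(n_times):
        tg[t_train, t_test] = clf.score(imagery[:, t_test, :], y)

# High accuracy on the anti-diagonal, relative to the matrix mean, indicates
# reverse-order reactivation.
print(f"anti-diagonal accuracy: {np.diag(np.fliplr(tg)).mean():.2f}, "
      f"matrix mean: {tg.mean():.2f}")
```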


2019
Author(s): Lina Teichmann, Genevieve L. Quek, Amanda K. Robinson, Tijl Grootswagers, Thomas A. Carlson, et al.

The ability to rapidly and accurately recognise complex objects is a crucial function of the human visual system. To recognise an object, we need to bind incoming visual features such as colour and form together into cohesive neural representations and integrate these with our pre-existing knowledge about the world. For some objects, typical colour is a central feature for recognition; for example, a banana is typically yellow. Here, we applied multivariate pattern analysis to time-resolved neuroimaging (magnetoencephalography) data to examine how object-colour knowledge affects emerging object representations over time. Our results from 20 participants (11 female) show that the typicality of object-colour combinations influences object representations, although not at the initial stages of object and colour processing. We find evidence that colour decoding peaks later for atypical than for typical object-colour combinations, illustrating the interplay between the processing of incoming object features and stored object-knowledge. Taken together, these results provide new insights into the integration of incoming visual information with existing conceptual object knowledge.

Significance Statement: To recognise objects, we have to be able to bind object features such as colour and shape into one coherent representation and compare it to stored object knowledge. The magnetoencephalography data presented here provide novel insights into the integration of incoming visual information with our knowledge about the world. Using colour as a model to understand the interaction between seeing and knowing, we show that there is a unique pattern of brain activity for congruently coloured objects (e.g., a yellow banana) relative to incongruently coloured objects (e.g., a red banana). This effect of object-colour knowledge only occurs after single object features are processed, demonstrating that conceptual knowledge is accessed relatively late in the visual processing hierarchy.
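
To make the latency comparison concrete, the sketch below runs a cross-validated, time-resolved decoding analysis on synthetic MEG-like data in which the colour signal peaks earlier for "typical" than for "atypical" trials. It is a toy stand-in for the authors' MVPA; the data, classifier choice, and parameters are all assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials, n_sensors, n_times = 120, 40, 30
colour = rng.integers(0, 2, n_trials)        # e.g. red vs. yellow trials

def simulate(peak_time):
    """Simulate MEG-like data in which colour information peaks at a given time point."""
    X = rng.normal(size=(n_trials, n_times, n_sensors))
    pattern = rng.normal(size=n_sensors)
    strength = np.exp(-0.5 * ((np.arange(n_times) - peak_time) / 3.0) ** 2)
    X += strength[None, :, None] * np.outer(colour * 2 - 1, pattern)[:, None, :]
    return X

typical = simulate(peak_time=10)    # congruent object-colour combinations
atypical = simulate(peak_time=15)   # incongruent combinations: later colour peak

def decoding_time_course(X, y):
    """Cross-validated colour-decoding accuracy at each time point."""
    clf = LogisticRegression(max_iter=1000)
    return np.array([cross_val_score(clf, X[:, t, :], y, cv=5).mean()
                     for t in range(n_times)])

for label, X in (("typical", typical), ("atypical", atypical)):
    accuracy = decoding_time_course(X, colour)
    print(f"{label}: peak colour decoding at time point {accuracy.argmax()}")
```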


Neuroforum, 2021, Vol 0 (0)
Author(s): Klaudia P. Szatko, Katrin Franke

To provide a compact and efficient input to the brain, sensory systems separate the incoming information into parallel feature channels. In the visual system, parallel processing starts in the retina. Here, the image is decomposed into multiple retinal output channels, each selective for a specific set of visual features like motion, contrast, or edges. In this article, we summarize recent findings on the functional organization of the retinal output, the neural mechanisms underlying its diversity, and how single visual features, like color, are extracted by the retinal network. Unraveling how the retina – as the first stage of the visual system – filters the visual input is an important step toward understanding how visual information processing guides behavior.
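
As a toy illustration of a single feature channel, the difference-of-Gaussians filter below approximates a retinal centre-surround receptive field and splits its output into ON and OFF contrast channels. It captures only the contrast/edge decomposition, not motion or colour, and none of the parameters are taken from the article.

```python
import numpy as np
from scipy.ndimage import gaussian_filter  # requires scipy

def center_surround(image, sigma_center=1.0, sigma_surround=3.0):
    """Difference-of-Gaussians approximation of a retinal centre-surround filter."""
    return gaussian_filter(image, sigma_center) - gaussian_filter(image, sigma_surround)

rng = np.random.default_rng(5)
image = rng.random((64, 64))
image[20:44, 20:44] += 1.0                  # a bright square on a noisy background

response = center_surround(image)
on_channel = np.clip(response, 0, None)     # ON pathway: responds to local brightening
off_channel = np.clip(-response, 0, None)   # OFF pathway: responds to local darkening
print(f"max ON response: {on_channel.max():.2f}, max OFF response: {off_channel.max():.2f}")
```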


2019, Vol 15 (1), pp. 26-36
Author(s): Sergio Chieffi

Background: Patients with schizophrenia show not only cognitive but also perceptual deficits. Perceptual deficits may affect different sensory modalities. Among these, impaired visual information processing is of particular relevance, as demonstrated by the high incidence of visual disturbances. In recent years, the study of the neurophysiological mechanisms that underlie visuo-perceptual, -spatial, and -motor disorders in schizophrenia has increasingly attracted the interest of researchers.
Objective: This study reviews the existing literature on impairment of the magnocellular/dorsal (occipitoparietal) visual processing stream in schizophrenia. Impairment of relatively early stages of visual information processing was examined using experimental paradigms such as backward masking, contrast sensitivity, contour detection, and perceptual closure. Deficits at late processing stages were detected by examining visuo-spatial and -motor abilities.
Results: Neurophysiological and behavioral studies support the existence of deficits in the processing of visual information along the magnocellular/dorsal pathway. These deficits appear to affect both early and late stages of visual information processing.
Conclusion: The existence of disturbances in the early processing of visual information along the magnocellular/dorsal pathway is strongly supported by neurophysiological and behavioral observations. Early magnocellular dysfunction may provide a substrate for late dorsal processing impairment as well as for higher-level cognitive deficits.

