Informative neural representations of unseen objects during higher-order processing in human brains and deep artificial networks

2021
Author(s):
Ning Mei
Roberto Santana
David Soto

Abstract Despite advances in the neuroscience of visual consciousness over the last decades, we still lack a framework for understanding the scope of unconscious processing and how it relates to conscious experience. Previous research observed brain signatures of unconscious contents in visual cortex, but these have not been identified reliably: low trial numbers and signal-detection-theoretic constraints did not allow conscious perception to be decisively ruled out. Critically, the extent to which unconscious content is represented in high-level processing stages along the ventral visual stream and linked prefrontal areas remains unknown. Using a within-subject, high-precision, highly sampled fMRI approach, we show that unconscious contents, even those associated with null sensitivity, can be reliably decoded from multivoxel patterns that are highly distributed along the ventral visual pathway and also involve prefrontal substrates. Notably, the neural representation in these areas generalised across conscious and unconscious visual processing states, placing constraints on prior findings that fronto-parietal substrates support the representation of conscious contents and suggesting revisions to models of consciousness such as the neuronal global workspace. We then provide a computational simulation of visual information processing and representation in the absence of perceptual sensitivity, using feedforward convolutional neural networks trained to perform a visual task similar to that of the human observers. The work provides a novel framework for pinpointing the neural representation of unconscious knowledge across different task domains.
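The core analysis described above, training a multivoxel decoder in one awareness state and testing its generalisation to another, can be caricatured in a few lines. Everything below (trial counts, effect sizes, the nearest-centroid decoder) is an illustrative assumption, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for cross-state multivoxel decoding: simulate trial-by-voxel
# patterns for two stimulus classes, with a strong class signal in the
# "conscious" condition and a weak one in the "unconscious" condition.
# All numbers and the decoder itself are illustrative assumptions.
n_trials, n_voxels = 50, 20
signal = rng.standard_normal(n_voxels)            # shared class direction

def simulate(strength):
    a = rng.standard_normal((n_trials, n_voxels)) + strength * signal
    b = rng.standard_normal((n_trials, n_voxels)) - strength * signal
    X = np.vstack([a, b])
    y = np.array([0] * n_trials + [1] * n_trials)
    return X, y

X_consc, y_consc = simulate(strength=1.0)         # clearly seen trials
X_unconsc, y_unconsc = simulate(strength=0.4)     # weak, "unseen" trials

# Train on the conscious state: one centroid per class.
centroids = np.stack([X_consc[y_consc == c].mean(axis=0) for c in (0, 1)])

def decode(X):
    """Assign each pattern to the nearest class centroid."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# Generalisation across awareness states: test on unconscious trials.
acc = (decode(X_unconsc) == y_unconsc).mean()
print(f"cross-state decoding accuracy: {acc:.2f}")  # above the 0.5 chance level
```

Above-chance accuracy here reflects only that the simulated classes share a signal direction across states; in the study this role is played by real fMRI patterns.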

F1000Research
2013
Vol 2
pp. 58
Author(s):
J Daniel McCarthy
Colin Kupitz
Gideon P Caplovitz

Our perception of an object’s size arises from the integration of multiple sources of visual information, including retinal size, perceived distance and size relative to other objects in the visual field. This constructive process is revealed through a number of classic size illusions, such as the Delboeuf Illusion, the Ebbinghaus Illusion and others illustrating size constancy. Here we present a novel variant of the Delboeuf and Ebbinghaus size illusions that we have named the Binding Ring Illusion. In this illusion, the perceived size of a circular array of elements is underestimated when a circular contour – a binding ring – is superimposed on the array, and overestimated when the binding ring slightly exceeds the overall size of the array. Here we characterize the stimulus conditions that lead to the illusion and the perceptual principles that underlie it. Our findings indicate that the perceived size of an array is susceptible to assimilation of an explicitly defined superimposed contour. Our results also indicate that the assimilation process takes place at a relatively high level in the visual processing stream, after different spatial frequencies have been integrated and global shape has been constructed. We hypothesize that the Binding Ring Illusion arises because the size of an array of elements is not explicitly defined and can therefore be influenced, through a process of assimilation, by the presence of a superimposed object that does have an explicit size.


2015
Vol 45 (10)
pp. 2111-2122
Author(s):
W. Li
T. M. Lai
C. Bohon
S. K. Loo
D. McCurdy
...

Background: Anorexia nervosa (AN) and body dysmorphic disorder (BDD) are characterized by distorted body image and are frequently co-morbid with each other, although their relationship remains little studied. While there is evidence of abnormalities in visual and visuospatial processing in both disorders, no study has directly compared the two. We used two complementary modalities – event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI) – to test for abnormal activity associated with early visual signaling.
Method: We acquired fMRI and ERP data in separate sessions from 15 unmedicated individuals in each of three groups (weight-restored AN, BDD, and healthy controls) while they viewed images of faces and houses of different spatial frequencies. We used joint independent component analyses to compare activity in visual systems.
Results: AN and BDD groups demonstrated similar hypoactivity in early secondary visual processing regions and the dorsal visual stream when viewing low spatial frequency faces, linked to the N170 component, as well as in early secondary visual processing regions when viewing low spatial frequency houses, linked to the P100 component. Additionally, the BDD group exhibited hyperactivity in fusiform cortex when viewing high spatial frequency houses, linked to the N170 component. Greater activity in this component was associated with lower attractiveness ratings of faces.
Conclusions: Results provide preliminary evidence of similar abnormal spatiotemporal activation in AN and BDD for configural/holistic information for appearance- and non-appearance-related stimuli. This suggests a common phenotype of abnormal early visual system functioning, which may contribute to perceptual distortions.


2019
Author(s):
Ali Pournaghdali
Bennett L Schwartz

Studies utilizing continuous flash suppression (CFS) provide valuable information regarding conscious and nonconscious perception. There are, however, crucial unanswered questions regarding the mechanisms of suppression and the level of visual processing reached in the absence of consciousness under CFS. Research suggests that the answers to these questions depend on the experimental configuration and on how consciousness is assessed in these studies. The aim of this review is to evaluate the impact of different experimental configurations and of the assessment of consciousness on the results of previous CFS studies. We review studies that evaluated the influence of different experimental configurations on the depth of suppression with CFS, and discuss how different assessments of consciousness may affect the results of CFS studies. Finally, we review behavioral and brain-recording studies of CFS. In conclusion, previous studies provide evidence for the survival of low-level visual information and the complete impairment of high-level visual information under CFS. That is, studies suggest that nonconscious perception of low-level visual information occurs with CFS, but there is no evidence for nonconscious high-level recognition with CFS.


2019
Author(s):
Nadine Dijkstra
Luca Ambrogioni
Marcel A.J. van Gerven

After the presentation of a visual stimulus, cortical visual processing cascades from low-level sensory features in primary visual areas to increasingly abstract representations in higher-level areas. It is often hypothesized that the reverse process underpins the human ability to generate mental images: visual information feeds back from high-level areas as abstract representations are used to construct the sensory representation in primary visual cortices. Such reversals of information flow are also hypothesized to play a central role in later stages of perception. According to predictive processing theories, ambiguous sensory information is resolved using abstract representations coming from high-level areas, through oscillatory rebounds between different levels of the visual hierarchy. However, despite the elegance of these theoretical models, to this day there is no direct experimental evidence of this reversal of visual information flow during mental imagery and perception. In the first part of this paper, we provide direct evidence in humans for a reverse order of activation of the visual hierarchy during imagery. Specifically, we show that classification machine learning models trained on brain data at different time points during the early feedforward phase of perception are reactivated in reverse order during mental imagery. In the second part of the paper, we report an 11 Hz oscillatory pattern of feedforward and reversed visual processing phases during perception. Together, these results are in line with the idea that during perception, the high-level cause of sensory input is inferred through recurrent hypothesis updating, whereas during imagery, this learned forward mapping is reversed to generate sensory signals from abstract representations.
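The reverse-reactivation logic can be sketched with a toy simulation: one template pattern per perception time point stands in for a classifier trained at that time point, and simulated imagery traverses the same patterns in reverse order. All data and the template-correlation decoder below are illustrative stand-ins, not the study's time-resolved classifiers:

```python
import numpy as np

rng = np.random.default_rng(1)

# One template pattern per perception time point (a proxy for a
# classifier trained on brain data at that time point).
n_time, n_sensors = 5, 30
templates = rng.standard_normal((n_time, n_sensors))  # perception stages

# Simulated imagery: the perception stages replayed backwards,
# plus measurement noise.
imagery = templates[::-1] + 0.3 * rng.standard_normal((n_time, n_sensors))

def best_match(x):
    """Index of the perception template most correlated with pattern x."""
    r = [np.corrcoef(x, t)[0, 1] for t in templates]
    return int(np.argmax(r))

decoded = [best_match(x) for x in imagery]
print(decoded)  # the perception time points appear in reverse order
```

The reversed sequence of decoded indices is the toy analogue of the paper's finding that perception-trained decoders reactivate in reverse order during imagery.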


2017
Author(s):
D. Pascucci
G. Mancuso
E. Santandrea
C. Della Libera
G. Plomp
...

Abstract Every instant of perception depends on a cascade of brain processes calibrated to the history of sensory and decisional events. In the present work, we show that human visual perception is constantly shaped by two contrasting forces, exerted by sensory adaptation and by past decisions. In a series of experiments, we used multilevel modelling and cross-validation approaches to investigate the impact of previous stimuli and responses on current errors in adjustment tasks. Our results revealed that each perceptual report is permeated by opposite biases from a hierarchy of serially dependent processes: low-level adaptation repels perception away from previous stimuli; high-level decisional traces attract perceptual reports toward previous responses. Contrary to recent claims, we demonstrated that positive serial dependence does not result from continuity fields operating at the level of early visual processing, but arises from the inertia of decisional templates. This finding is consistent with a Two-process model of serial dependence in which the persistence of read-out weights in a decision unit compensates for sensory adaptation, leading to attractive biases in sequential responses. We propose the first unified account of serial dependence in which functionally distinct mechanisms, operating at different stages, promote the differentiation and integration of visual information over time.
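A minimal simulation can illustrate the Two-process account sketched above. The effect sizes and noise level below are assumed for illustration, and the fit is ordinary least squares rather than the authors' multilevel models:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-process toy model: each response error is repelled from the previous
# stimulus (sensory adaptation, negative weight) and attracted toward the
# previous response (decisional inertia, positive weight). Real adjustment
# data (e.g. orientation) is circular; this linear sketch ignores wrap-around.
n = 5000
stim = rng.uniform(-90, 90, n)          # stimulus values in degrees
repulsion, attraction = -0.05, 0.10     # illustrative bias weights
resp = np.empty(n)
resp[0] = stim[0]
for t in range(1, n):
    e = (repulsion * (stim[t - 1] - stim[t])
         + attraction * (resp[t - 1] - stim[t])
         + rng.normal(0.0, 2.0))        # response noise
    resp[t] = stim[t] + e

# Recover the two opposing biases by regressing the current error on the
# previous-stimulus and previous-response distances.
err = resp[1:] - stim[1:]
X = np.column_stack([stim[:-1] - stim[1:], resp[:-1] - stim[1:]])
coef, *_ = np.linalg.lstsq(X, err, rcond=None)
print(coef)  # first weight negative (repulsive), second positive (attractive)
```

The opposite signs of the recovered weights mirror the paper's central claim: repulsion tied to the previous stimulus coexists with attraction toward the previous response.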


2019
Author(s):
Amarender R. Bogadhi
Leor N. Katz
Anil Bollimunta
David A. Leopold
Richard J. Krauzlis

Abstract The evolution of the primate brain is marked by a dramatic increase in the number of neocortical areas that process visual information [1]. This cortical expansion supports two hallmarks of high-level primate vision – the ability to selectively attend to particular visual features [2] and the ability to recognize a seemingly limitless number of complex visual objects [3]. Given their prominent roles in high-level vision for primates, it is commonly assumed that these cortical processes supersede the earlier versions of these functions accomplished by the evolutionarily older brain structures that lie beneath the cortex. Contrary to this view, here we show that the superior colliculus (SC), a midbrain structure conserved across all vertebrates [4], is necessary for the normal expression of attention-related modulation and object selectivity in a newly identified region of macaque temporal cortex. Using a combination of psychophysics, causal perturbations and fMRI, we identified a localized region in the temporal cortex that is functionally dependent on the SC. Targeted electrophysiological recordings in this cortical region revealed neurons with strong attention-related modulation that was markedly reduced during attention deficits caused by SC inactivation. Many of these neurons also exhibited selectivity for particular visual objects, and this selectivity was also reduced during SC inactivation. Thus, the SC exerts a causal influence on high-level visual processing in cortex at a surprisingly late stage where attention and object selectivity converge, perhaps determined by the elemental forms of perceptual processing the SC has supported since before there was a neocortex.


2011
Vol 23 (11)
pp. 3410-3418
Author(s):
Greg L. West
Adam A. K. Anderson
Susanne Ferber
Jay Pratt

When multiple stimuli are concurrently displayed in the visual field, they must compete for neural representation at the processing expense of their contemporaries. This biased competition is thought to begin as early as primary visual cortex, and can be driven by salient low-level stimulus features. Stimuli important for an organism's survival, such as facial expressions signaling environmental threat, might be similarly prioritized at this early stage of visual processing. In the present study, we used ERP recordings from striate cortex to examine whether fear expressions can bias the competition for neural representation at the earliest stage of retinotopic visuo-cortical processing when in direct competition with concurrently presented visual information of neutral valence. We found that within 50 msec after stimulus onset, information processing in primary visual cortex is biased in favor of perceptual representations of fear at the expense of competing visual information (Experiment 1). Additional experiments confirmed that the facial display's emotional content rather than low-level features is responsible for this prioritization in V1 (Experiment 2), and that this competition is reliant on a face's upright canonical orientation (Experiment 3). These results suggest that complex stimuli important for an organism's survival can indeed be prioritized at the earliest stage of cortical processing at the expense of competing information, with competition possibly beginning before encoding in V1.


2011
Vol 106 (3)
pp. 1389-1398
Author(s):
Jason Fischer
David Whitney

Natural visual scenes are cluttered. In such scenes, many objects in the periphery can be crowded, blocked from identification, simply because of the dense array of clutter. Outside of the fovea, crowding constitutes the fundamental limitation on object recognition and is thought to arise from the limited resolution of the neural mechanisms that select and bind visual features into coherent objects. Thus it is widely believed that in the visual processing stream, a crowded object is reduced to a collection of dismantled features with no surviving holistic properties. Here, we show that this is not so: an entire face can survive crowding and contribute its holistic attributes to the perceived average of the set, despite being blocked from recognition. Our results show that crowding does not dismantle high-level object representations to their component features.


2020
Vol 1 (1)
Author(s):
Runnan Cao
Xin Li
Alexander Todorov
Shuo Wang

Abstract An important question in human face perception research is whether the neural representation of faces is dynamically modulated by context. In particular, although a plethora of neuroimaging studies has probed the neural representation of faces, few have investigated which low-level structural and textural facial features parametrically drive neural responses to faces, and whether the representation of these features is modulated by the task. To answer these questions, we employed two task instructions while participants viewed the same faces. We first identified brain regions that parametrically encoded high-level social traits such as perceived facial trustworthiness and dominance, and we showed that these brain regions were modulated by task instructions. We then employed a data-driven computational face model with parametrically generated faces and identified brain regions that encoded the low-level variation in the faces (shape and skin texture) that drove neural responses. We further analyzed the evolution of the neural feature vectors along the visual processing stream and visualized and explained these feature vectors. Together, our results show a flexible neural representation of faces, for both low-level features and high-level social traits, in the human brain.


Author(s):
Maddalena Boccia
Valentina Sulpizio
Federica Bencivenga
Cecilia Guariglia
Gaspare Galati

Abstract It is commonly acknowledged that visual imagery and perception rely on the same content-dependent brain areas in the high-level visual cortex (HVC). However, the way in which our brain processes and organizes previously acquired knowledge to allow the generation of mental images is still a matter of debate. Here, we performed a representational similarity analysis of three previous fMRI experiments conducted in our laboratory to characterize the neural representation underlying imagery and perception of objects, buildings and faces, and to disclose possible dissimilarities in the neural structure of such representations. To this aim, we built representational dissimilarity matrices (RDMs) by computing multivariate distances between the activity patterns associated with each pair of stimuli in the content-dependent areas of the HVC and HC. We found that spatial information is widely coded in the HVC during perception (i.e. RSC, PPA and OPA) and imagery (OPA and PPA). Also, visual information seems to be coded in both preferred and non-preferred regions of the HVC, supporting a distributed view of encoding. Overall, the present results shed light upon the spatial coding of imagined and perceived exemplars in the HVC.
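The RDM construction described above can be sketched in a few lines. The stimulus labels, pattern values, and the choice of correlation distance are illustrative assumptions, not the study's data or exact distance measure:

```python
import numpy as np

rng = np.random.default_rng(3)

# One activity pattern (voxel vector) per stimulus; pairwise dissimilarity
# is taken here as 1 - Pearson correlation between patterns.
stimuli = ["face_1", "face_2", "building_1", "building_2"]
patterns = rng.standard_normal((len(stimuli), 100))  # stimuli x voxels

rdm = 1.0 - np.corrcoef(patterns)  # representational dissimilarity matrix

print(rdm.shape)                       # (4, 4): one entry per stimulus pair
print(np.allclose(np.diag(rdm), 0.0))  # each pattern is identical to itself
```

An RDM built this way is symmetric with a zero diagonal, and RDMs from different regions or conditions can then be compared to each other to test whether they share representational structure.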

