Functional Dissociations within the Ventral Object Processing Pathway: Cognitive Modules or a Hierarchical Continuum?

2010 ◽  
Vol 22 (11) ◽  
pp. 2460-2479 ◽  
Author(s):  
Rosemary A. Cowell ◽  
Timothy J. Bussey ◽  
Lisa M. Saksida

We examined the organization and function of the ventral object processing pathway. The prevailing theoretical approach in this field holds that the ventral object processing stream has a modular organization, in which visual perception is carried out in posterior regions and visual memory is carried out, independently, in the anterior temporal lobe. In contrast, recent work has argued against this modular framework, favoring instead a continuous, hierarchical account of cognitive processing in these regions. We join the latter group and illustrate our view with simulations from a computational model that extends the perceptual-mnemonic feature-conjunction model of visual discrimination proposed by Bussey and Saksida [Bussey, T. J., & Saksida, L. M. The organization of visual object representations: A connectionist model of effects of lesions in perirhinal cortex. European Journal of Neuroscience, 15, 355–364, 2002]. We use the extended model to revisit early data from Iwai and Mishkin [Iwai, E., & Mishkin, M. Two visual foci in the temporal lobe of monkeys. In N. Yoshii & N. Buchwald (Eds.), Neurophysiological basis of learning and behavior (pp. 1–11). Japan: Osaka University Press, 1968]; this seminal study was interpreted as evidence for the modularity of visual perception and visual memory. The model accounts for a double dissociation in monkeys' visual discrimination performance following lesions to different regions of the ventral visual stream. This double dissociation is frequently cited as evidence for separate systems for perception and memory. However, the model provides a parsimonious, mechanistic, single-system account of the double dissociation data. We propose that the effects of lesions in the ventral visual stream on visual discrimination are due to compromised representations within a hierarchical representational continuum rather than to impairment of a specific type of learning, memory, or perception. We argue that consideration of the nature of stimulus representations and their processing in cortex is a more fruitful approach than attempting to map cognition onto functional modules.

Neuron ◽  
2007 ◽  
Vol 55 (1) ◽  
pp. 157-167 ◽  
Author(s):  
Ulrike Bingel ◽  
Michael Rose ◽  
Jan Gläscher ◽  
Christian Büchel

2018 ◽  
Author(s):  
Simona Monaco ◽  
Giulia Malfatti ◽  
Alessandro Zendron ◽  
Elisa Pellencin ◽  
Luca Turella

Predictions of upcoming movements are based on several types of neural signals that span the visual, somatosensory, motor and cognitive systems. Thus far, pre-movement signals have been investigated while participants viewed the object to be acted upon. Here, we studied the contribution of information other than vision to the classification of preparatory signals for action, even in the absence of online visual information. We used functional magnetic resonance imaging (fMRI) and multivoxel pattern analysis (MVPA) to test whether the neural signals evoked by visual, memory-based and somato-motor information can be reliably used to predict upcoming actions in areas of the dorsal and ventral visual stream during the preparatory phase preceding the action, while participants were lying still. Nineteen human participants (nine women) performed one of two actions towards an object with their eyes open or closed. Despite the well-known role of ventral stream areas in visual recognition tasks and the specialization of dorsal stream areas in somato-motor processes, we decoded action intention in areas of both streams based on visual, memory-based and somato-motor signals. Interestingly, we could reliably decode action intention in the absence of visual information based on neural activity evoked when visual information was available, and vice versa. Our results show a similar visual, memory and somato-motor representation of action planning in dorsal and ventral visual stream areas that allows predicting action intention across domains, regardless of the availability of visual information.
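The cross-domain decoding idea in this abstract (train a classifier on patterns from one condition, test it on another) can be illustrated with a deliberately simplified sketch. This is not the authors' pipeline: the data are synthetic, the action labels ("grasp", "reach") and domain shifts are invented, and a nearest-centroid decoder stands in for whatever MVPA method was actually used. Decoding transfers across domains only because the simulated patterns share an action-related signal.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical cross-domain MVPA sketch: train a nearest-centroid
# decoder on "eyes-open" voxel patterns, test it on "eyes-closed"
# patterns. Each trial = action signal + domain offset + noise.
n_voxels, n_trials = 50, 40
base = {a: rng.normal(size=n_voxels) for a in ("grasp", "reach")}

def simulate(action, domain_shift):
    """Noisy trial patterns sharing a common action signal across domains."""
    noise = rng.normal(scale=0.8, size=(n_trials, n_voxels))
    return base[action] + domain_shift + noise

open_shift = rng.normal(scale=0.3, size=n_voxels)    # eyes-open offset
closed_shift = rng.normal(scale=0.3, size=n_voxels)  # eyes-closed offset

train = {a: simulate(a, open_shift) for a in base}   # training domain
test = {a: simulate(a, closed_shift) for a in base}  # held-out domain

# One centroid per action, estimated in the training domain only.
centroids = {a: patterns.mean(axis=0) for a, patterns in train.items()}

def decode(pattern):
    """Assign the action whose training centroid is nearest."""
    return min(centroids, key=lambda a: np.linalg.norm(pattern - centroids[a]))

# Cross-domain accuracy: decode eyes-closed trials with eyes-open centroids.
accuracy = np.mean([decode(p) == a for a in test for p in test[a]])
```

Because the shared action signal dominates both the domain offset and the trial noise in this toy setup, cross-domain accuracy lands well above the 50% chance level, mirroring the qualitative result reported in the abstract.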


2021 ◽  
Author(s):  
Kayla M. Ferko ◽  
Anna Blumenthal ◽  
Chris B. Martin ◽  
Daria Proklova ◽  
Lisa M. Saksida ◽  
...  

Observers perceive their visual environment in unique ways. How ventral visual stream (VVS) regions represent subjectively perceived object characteristics remains poorly understood. We hypothesized that the visual similarity between objects that observers perceive is reflected with highest fidelity in neural activity patterns in perirhinal and anterolateral entorhinal cortex at the apex of the VVS object-processing hierarchy. To address this issue with fMRI, we administered a task that required discrimination between images of exemplars from real-world categories. Further, we obtained ratings of perceived visual similarities. We found that perceived visual similarities predicted discrimination performance in an observer-specific manner. As anticipated, activity patterns in perirhinal and anterolateral entorhinal cortex predicted perceived similarity structure, including those aspects that are observer-specific, with higher fidelity than any other region examined. Our findings provide new evidence that representations of the visual world at the apex of the VVS differ across observers in ways that influence behaviour.


2021 ◽  
Author(s):  
Hayley E Pickering ◽  
Jessica L Peters ◽  
Sheila Crewther

Literature examining the role of visual memory in vocabulary development during childhood is limited, despite it being well known that preverbal infants rely on their visual abilities to form memories and learn new words. Hence, this systematic review and meta-analysis utilised a cognitive neuroscience perspective to examine the association between visual memory and vocabulary development, including moderators such as age and task selection, in neurotypical children aged 2 to 12 years. Visual memory tasks were classified as spatio-temporal span tasks, visuo-perceptual or spatial concurrent array tasks, and executive judgment tasks. Visuo-perceptual concurrent array tasks expected to rely on ventral visual stream processing showed a moderate association with vocabulary, while tasks measuring spatio-temporal spans expected to be associated with dorsal visual stream processing, and executive judgments (central executive), showed only weak correlations with vocabulary. These findings have important implications for all health professionals and researchers interested in language, as they can support the development of more targeted language learning interventions that require ventral visual stream processing.


Neuroreport ◽  
2003 ◽  
Vol 14 (11) ◽  
pp. 1489-1492 ◽  
Author(s):  
M. Vannucci ◽  
Th. Dietl ◽  
N. Pezer ◽  
M. P. Viggiano ◽  
C. Helmstaedter ◽  
...  

2016 ◽  
Vol 115 (3) ◽  
pp. 1542-1555 ◽  
Author(s):  
Maria C. Romero ◽  
Peter Janssen

Visual object information is necessary for grasping. In primates, the anterior intraparietal area (AIP) plays an essential role in visually guided grasping. Neurons in AIP encode features of objects, but no study has systematically investigated the receptive field (RF) of AIP neurons. We mapped the RF of posterior AIP (pAIP) neurons in the central visual field, using images of objects and small line fragments that evoked robust responses, together with less effective stimuli. The RF sizes we measured varied between 3 deg² and 90 deg², with the highest response either at the fixation point or at parafoveal positions. A large fraction of pAIP neurons showed nonuniform RFs, with multiple local maxima in both ipsilateral and contralateral hemifields. Moreover, the RF profile could depend strongly on the stimulus used to map the RF. Highly similar results were obtained with the smallest stimulus that evoked reliable responses (line fragments measuring 1–2°). The nonuniformity and dependence of the RF profile on the stimulus in pAIP were comparable to previous observations in the anterior part of the lateral intraparietal area (aLIP), but the average RF of pAIP neurons was located at the fovea, whereas the average RF of aLIP neurons was located parafoveally. Thus nonuniformity and stimulus dependence of the RF may represent general RF properties of neurons in the dorsal visual stream involved in object analysis, which contrast markedly with those of neurons in the ventral visual stream.


2020 ◽  
Author(s):  
Franziska Geiger ◽  
Martin Schrimpf ◽  
Tiago Marques ◽  
James J. DiCarlo

After training on large datasets, certain deep neural networks are surprisingly good models of the neural mechanisms of adult primate visual object recognition. Nevertheless, these models are poor models of the development of the visual system because they posit millions of sequential, precisely coordinated synaptic updates, each based on a labeled image. While ongoing research is pursuing the use of unsupervised proxies for labels, we here explore a complementary strategy of reducing the required number of supervised synaptic updates to produce an adult-like ventral visual stream (as judged by the match to V1, V2, V4, IT, and behavior). Such models might require less precise machinery and energy expenditure to coordinate these updates and would thus move us closer to viable neuroscientific hypotheses about how the visual system wires itself up. Relative to the current leading model of the adult ventral stream, we here demonstrate that the total number of supervised weight updates can be substantially reduced using three complementary strategies: First, we find that only 2% of supervised updates (epochs and images) are needed to achieve ~80% of the match to adult ventral stream. Second, by improving the random distribution of synaptic connectivity, we find that 54% of the brain match can already be achieved “at birth” (i.e. no training at all). Third, we find that, by training only ~5% of model synapses, we can still achieve nearly 80% of the match to the ventral stream. When these three strategies are applied in combination, we find that these new models achieve ~80% of a fully trained model’s match to the brain, while using two orders of magnitude fewer supervised synaptic updates. These results reflect first steps in modeling not just primate adult visual processing during inference, but also how the ventral visual stream might be “wired up” by evolution (a model’s “birth” state) and by developmental learning (a model’s updates based on visual experience).
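The third strategy in this abstract (updating only ~5% of model synapses) can be illustrated with a toy sketch that is unrelated to the authors' actual networks: a linear model trained by gradient descent in which a small, randomly chosen subset of weights is trainable and the rest stay frozen at their random "birth" values. All sizes, seeds, and names here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model y = W x, trained on mean-squared error.
# Only 5% of the weights ("synapses") are ever updated; the other
# 95% keep their random initial values, loosely mirroring the
# paper's third strategy. Purely illustrative, not the authors' code.
n_in, n_out = 20, 5
W = rng.normal(scale=0.1, size=(n_out, n_in))
W0 = W.copy()                                   # remember the "birth" state

# Deterministically mark exactly 5% of weights as trainable.
trainable = np.zeros(W.size, dtype=bool)
trainable[rng.choice(W.size, size=W.size // 20, replace=False)] = True
trainable = trainable.reshape(W.shape)

W_target = rng.normal(size=(n_out, n_in))       # ground-truth mapping
X = rng.normal(size=(200, n_in))
Y = X @ W_target.T

lr = 0.05
for _ in range(500):
    grad = 2 * (X @ W.T - Y).T @ X / len(X)     # gradient of MSE w.r.t. W
    W -= lr * grad * trainable                  # mask keeps 95% of W frozen
```

After training, the frozen weights are bit-identical to their initial values, yet the loss still drops, since gradient descent restricted to the trainable subspace can only improve the fit. The interesting empirical question the paper addresses is how much brain match survives this kind of restriction.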


PeerJ ◽  
2017 ◽  
Vol 5 ◽  
pp. e3466 ◽  
Author(s):  
Vanja Ković ◽  
Jelena Sučević ◽  
Suzy J. Styles

The aim of the present paper is to experimentally test whether sound symbolism has selective effects on labels with different ranges-of-reference within a simple noun-hierarchy. In two experiments, adult participants learned the make-up of two categories of unfamiliar objects (‘alien life forms’), and were passively exposed to either category-labels or item-labels, in a learning-by-guessing categorization task. Following category training, participants were tested on their visual discrimination of object pairs. For different groups of participants, the labels were either congruent or incongruent with the objects. In Experiment 1, when trained on items with individual labels, participants were worse (made more errors) at detecting visual object mismatches when trained labels were incongruent. In Experiment 2, when participants were trained on items in labelled categories, participants were faster at detecting a match if the trained labels were congruent, and faster at detecting a mismatch if the trained labels were incongruent. This pattern of results suggests that sound symbolism in category labels facilitates later similarity judgements when congruent, and discrimination when incongruent, whereas for item labels incongruence generates error in judgements of visual object differences. These findings reveal that sound symbolic congruence has a different outcome at different levels of labelling within a noun hierarchy. These effects emerged in the absence of the label itself, indicating subtle but pervasive effects on visual object processing.


2018 ◽  
Vol 30 (2) ◽  
pp. 131-143 ◽  
Author(s):  
Johannes Rennig ◽  
Sonja Cornelsen ◽  
Helmut Wilhelm ◽  
Marc Himmelbach ◽  
Hans-Otto Karnath

We examined a stroke patient (HWS) with a unilateral lesion of the right medial ventral visual stream, involving the right fusiform and parahippocampal gyri. In a number of object recognition tests with lateralized presentations of target stimuli, HWS showed significant symptoms of hemiagnosia with contralesional recognition deficits for everyday objects. We further explored the patient's capacities of visual expertise that were acquired before the current perceptual impairment became effective. We confronted him with objects for which he had already been an expert before stroke onset and compared this performance with the recognition of familiar everyday objects. HWS was able to identify significantly more of the specific (“expert”) than of the everyday objects on the affected contralesional side. This observation of better expert object recognition in visual hemiagnosia allows for several interpretations. The results may be caused by enhanced information processing for expert objects in the ventral system in the affected or the intact hemisphere. Expert knowledge could trigger top–down mechanisms supporting object recognition despite impaired basic functions of object processing. More importantly, the current work demonstrates that top–down mechanisms of visual expertise influence object recognition at an early stage, probably before visual object information propagates to modules of higher object recognition. Because HWS showed a lesion to the fusiform gyrus and spared capacities of expert object recognition, the current study emphasizes possible contributions of areas outside the ventral stream to visual expertise.

