The Evolution of the Pulvinar Complex in Primates and Its Role in the Dorsal and Ventral Streams of Cortical Processing

Vision ◽  
2019 ◽  
Vol 4 (1) ◽  
pp. 3 ◽  
Author(s):  
Jon H. Kaas ◽  
Mary K. L. Baldwin

Current evidence supports the view that the visual pulvinar of primates consists of at least five nuclei, with two large nuclei, lateral pulvinar ventrolateral (PLvl) and central lateral nucleus of the inferior pulvinar (PIcl), contributing mainly to the ventral stream of cortical processing for perception, and three smaller nuclei, posterior nucleus of the inferior pulvinar (PIp), medial nucleus of the inferior pulvinar (PIm), and central medial nucleus of the inferior pulvinar (PIcm), projecting to dorsal stream visual areas for visually directed actions. In primates, both cortical streams are highly dependent on visual information distributed from primary visual cortex (V1). This area is so vital to vision that patients with V1 lesions are considered “cortically blind”. When the V1 inputs to the dorsal stream middle temporal visual area (MT) are absent, other dorsal stream areas receive visual information relayed from the superior colliculus via PIp and PIcm, thereby preserving some dorsal stream functions, a phenomenon called “blindsight”. Non-primate mammals do not have a dorsal stream area MT with V1 inputs, but superior colliculus inputs to temporal cortex can be more significant, and more visual functions are preserved when V1 input is disrupted. This review discusses how the different visual streams, especially the dorsal stream, have changed during primate evolution and proposes which features are retained from the common ancestor of primates and their close relatives.
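
To make the mapping described in this abstract easier to scan, the sketch below restates it as a plain Python dictionary. The nucleus names, stream assignments, and the note on which nuclei relay superior colliculus (SC) input are taken from the abstracts in this listing; the data structure itself is only an illustrative convenience, not anything from the article.

```python
# Illustrative summary of the pulvinar connectivity described above.
# Stream assignments follow the abstract; the "relays_SC_input" flags follow
# the tracing study summarized later in this listing (SC -> PIp/PIcm, not PIm).
PULVINAR_NUCLEI = {
    "PLvl": {"stream": "ventral", "role": "perception"},
    "PIcl": {"stream": "ventral", "role": "perception"},
    "PIp":  {"stream": "dorsal",  "role": "visually directed action",
             "relays_SC_input": True},   # supports residual dorsal function (blindsight)
    "PIm":  {"stream": "dorsal",  "role": "visually directed action",
             "relays_SC_input": False},
    "PIcm": {"stream": "dorsal",  "role": "visually directed action",
             "relays_SC_input": True},
}

# Example query: which nuclei can pass superior colliculus input to the
# dorsal stream when V1 input to MT is lost?
sc_relays = [name for name, props in PULVINAR_NUCLEI.items()
             if props.get("relays_SC_input")]
print(sc_relays)  # ['PIp', 'PIcm']
```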

Author(s):  
Jon H. Kaas ◽  
Hui-Xin Qi ◽  
Iwona Stepniewska

Early mammals were small and nocturnal. Their visual systems had regressed and they had poor vision. After the extinction of the dinosaurs 66 mya, some but not all mammals escaped the ‘nocturnal bottleneck’ by recovering high-acuity vision. By contrast, early primates escaped the bottleneck within the age of dinosaurs by having large forward-facing eyes and acute vision while remaining nocturnal. We propose that these primates differed from other mammals by changing the balance between two sources of visual information to cortex. Thus, cortical processing became less dependent on a relay of information from the superior colliculus (SC) to temporal cortex and more dependent on information distributed from primary visual cortex (V1). In addition, the two major classes of visual information from the retina became highly segregated into magnocellular (M cell) projections from V1 to the primate-specific middle temporal visual area (MT), and parvocellular (P cell)-dominated projections to the dorsolateral visual area (DL or V4). The greatly expanded P cell inputs from V1 informed the ventral stream of cortical processing involving temporal and frontal cortex. The M cell pathways from V1 and the SC informed the dorsal stream of cortical processing involving MT, surrounding temporal cortex, and parietal–frontal sensorimotor domains. This article is part of the theme issue ‘Systems neuroscience through the lens of evolutionary theory’.


2000 ◽  
Vol 17 (4) ◽  
pp. 529-549 ◽  
Author(s):  
IWONA STEPNIEWSKA ◽  
HUI-XIN QI ◽  
JON H. KAAS

Patterns of terminals labeled after WGA-HRP injections in the superior colliculus (SC) in squirrel monkeys and macaque monkeys, and after DiI application in marmosets, were related to the architecture of the pulvinar and dorsal lateral geniculate nucleus (LGN). In all studied species, the SC projects densely to two architectonic subdivisions of the inferior pulvinar, the posterior inferior pulvinar nucleus (PIp) and the central medial inferior pulvinar nucleus (PIcm). These projection zones express substance P; thus, sections processed for substance P reveal the SC termination zones in the inferior pulvinar. The medial subdivision of the inferior pulvinar, PIm, which is known to project to visual area MT, does not receive a significant collicular input. Injections in MT of a squirrel monkey revealed no overlap between SC terminals and neurons projecting to area MT. Thus, PIm is not a significant relay of visual input from the SC to MT. The SC also sends an input to the LGN; however, this projection is sparser than that directed to the pulvinar.


2018 ◽  
Author(s):  
Simona Monaco ◽  
Ying Chen ◽  
Nicholas Menghi ◽  
J Douglas Crawford

Sensorimotor integration involves feedforward and reentrant processing of sensory input. Grasp-related motor activity precedes and is thought to influence visual object processing. Yet, while the importance of reentrant feedback is well established in perception, the top-down modulations for action and the neural circuits involved in this process have received less attention. Do action-specific intentions influence the processing of visual information in the human cortex? Using a cue-separation fMRI paradigm, we found that an action-specific instruction (manual alignment vs. grasp) influences the cortical processing of object orientation several seconds after the object has been viewed. This influence occurred as early as in the primary visual cortex and extended to ventral and dorsal visual stream areas. Importantly, this modulation was unrelated to non-specific action planning. Further, the primary visual cortex showed stronger functional connectivity with fronto-parietal areas and the inferior temporal cortex during the delay following orientation processing for align than for grasp movements, strengthening the idea of reentrant feedback from dorsal visual stream areas involved in action. To our knowledge, this is the first demonstration that intended manual actions have such an early, pervasive, and differential influence on the cortical processing of vision.


2021 ◽  
Vol 1 (1) ◽  
Author(s):  
Melvyn A. Goodale

The visual guidance of goal-directed movements requires transformations of incoming visual information that are different from those required for visual perception. For us to grasp an object successfully, our brain must use just-in-time computations of the object’s real-world size and shape, and its orientation and disposition with respect to our hand. These requirements have led to the emergence of dedicated visuomotor modules in the posterior parietal cortex of the human brain (the dorsal visual stream) that are functionally distinct from networks in the occipito-temporal cortex (the ventral visual stream) that mediate our conscious perception of the world. Although the identification and selection of goal objects and an appropriate course of action depend on the perceptual machinery of the ventral stream and associated cognitive modules, the execution of the subsequent goal-directed action is mediated by dedicated online control systems in the dorsal stream and associated motor areas. The dorsal stream allows an observer to reach out and grasp objects with exquisite ease, but by itself it deals only with objects that are visible at the moment the action is being programmed. The ventral stream, however, allows an observer to escape the present and bring to bear information from the past, including information about the function of objects, their intrinsic properties, and their location with reference to other objects in the world. Ultimately, then, both streams contribute to the production of goal-directed actions. The principles underlying this division of labour between the dorsal and ventral streams are relevant to the design and implementation of autonomous robotic systems.
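
Because the abstract closes by pointing to autonomous robotic systems, the minimal Python sketch below illustrates how that division of labour might be mirrored in a grasping pipeline: a "ventral" routine selects the goal object using stored knowledge, while a "dorsal" routine computes just-in-time grasp parameters from the currently visible object. All class names, fields, and numbers are hypothetical illustrations of the principle, not anything taken from the article.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SceneObject:
    name: str
    width_mm: float          # real-world size, recovered at execution time
    orientation_deg: float   # orientation relative to the hand/gripper
    position: Tuple[float, float, float]
    affordance_score: float  # goal relevance from stored knowledge (ventral/cognitive)

def ventral_select(objects: List[SceneObject]) -> SceneObject:
    """Perception and selection: choose the goal object using stored knowledge
    about identity and function (reduced here to a single relevance score)."""
    return max(objects, key=lambda o: o.affordance_score)

def dorsal_grasp_parameters(obj: SceneObject) -> dict:
    """Online visuomotor control: just-in-time metrical computations on the
    currently visible object, in effector-centred coordinates."""
    return {
        "aperture_mm": obj.width_mm + 20.0,   # open slightly wider than the object
        "wrist_angle_deg": obj.orientation_deg,
        "approach_xyz": obj.position,
    }

if __name__ == "__main__":
    scene = [
        SceneObject("mug", 85.0, 30.0, (0.42, -0.10, 0.05), affordance_score=0.9),
        SceneObject("pen", 9.0, 75.0, (0.30, 0.12, 0.02), affordance_score=0.2),
    ]
    target = ventral_select(scene)           # what to act on, and why
    plan = dorsal_grasp_parameters(target)   # how to act, computed at execution time
    print(target.name, plan)
```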


2021 ◽  
Vol 11 (8) ◽  
pp. 3397 ◽  
Author(s):  
Gustavo Assunção ◽  
Nuno Gonçalves ◽  
Paulo Menezes

Human beings have developed fantastic abilities to integrate information from various sensory sources, exploiting their inherent complementarity. Perceptual capabilities are therefore heightened, enabling, for instance, the well-known "cocktail party" and McGurk effects, i.e., speech disambiguation from a panoply of sound signals. This fusion ability is also key in refining the perception of sound source location, as in distinguishing whose voice is being heard in a group conversation. Furthermore, neuroscience has identified the superior colliculus region of the brain as the one responsible for this modality fusion, and a handful of biological models have been proposed to approximate its underlying neurophysiological process. Deriving inspiration from one of these models, this paper presents a methodology for effectively fusing correlated auditory and visual information for active speaker detection. Such an ability can have a wide range of applications, from teleconferencing systems to social robotics. The detection approach initially routes auditory and visual information through two specialized neural network structures. The resulting embeddings are fused via a novel layer based on the superior colliculus, whose topological structure emulates the spatial cross-mapping of unimodal perceptual fields onto shared neurons. The validation process employed two publicly available datasets, with the achieved results confirming and greatly surpassing initial expectations.
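
To give a concrete flavour of the fusion step described in this abstract, here is a minimal PyTorch sketch: two unimodal embeddings are projected onto a shared map and combined multiplicatively before a speaking/not-speaking readout. The encoder dimensions, layer sizes, and the exact form of the cross-mapping layer are assumptions for illustration, not the authors' architecture.

```python
# Hypothetical sketch of superior-colliculus-inspired audiovisual fusion;
# the real encoders and fusion layer in the paper may differ substantially.
import torch
import torch.nn as nn

class CrossMappedFusion(nn.Module):
    """Fuse unimodal embeddings by projecting each onto a shared 'spatial map'
    and combining them multiplicatively, loosely emulating cross-mapped
    unimodal receptive fields converging on the same neurons."""
    def __init__(self, audio_dim=128, visual_dim=256, map_units=64):
        super().__init__()
        self.audio_map = nn.Linear(audio_dim, map_units)
        self.visual_map = nn.Linear(visual_dim, map_units)
        self.readout = nn.Linear(map_units, 1)  # active-speaker score

    def forward(self, audio_emb, visual_emb):
        a = torch.sigmoid(self.audio_map(audio_emb))    # audio drive on the shared map
        v = torch.sigmoid(self.visual_map(visual_emb))  # visual drive on the shared map
        fused = a * v                                   # multiplicative cross-modal interaction
        return torch.sigmoid(self.readout(fused))       # probability the candidate is speaking

# Usage: embeddings would come from two specialized unimodal encoders
# (e.g. an audio network and a face-crop network), one pair per candidate speaker.
audio_emb = torch.randn(8, 128)   # batch of 8 candidate windows
visual_emb = torch.randn(8, 256)
scores = CrossMappedFusion()(audio_emb, visual_emb)
print(scores.shape)  # torch.Size([8, 1])
```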

