Categorical representation of visual motion direction in posterior parietal cortex area LIP

2010 ◽  
Vol 6 (6) ◽  
pp. 110-110
Author(s):  
D. J. Freedman ◽  
J. A. Assad
eLife ◽  
2017 ◽  
Vol 6 ◽  
Author(s):  
Guilhem Ibos ◽  
David J Freedman

Decisions about the behavioral significance of sensory stimuli often require comparing a sensory inference of what we are looking at to an internal model of what we are looking for. Here, we test how neuronal selectivity for visual features is transformed into decision-related signals in posterior parietal cortex (area LIP). Monkeys performed a visual matching task that required them to detect target stimuli defined by conjunctions of color and motion direction. Neuronal recordings from area LIP revealed two main findings. First, the sequential processing of visual features and the selection of target stimuli suggest that LIP is involved in transforming sensory information into decision-related signals. Second, the patterns of color and motion selectivity and their impact on decision-related encoding suggest that LIP plays a role in detecting target stimuli by comparing bottom-up sensory inputs (what the monkeys were looking at) with top-down cognitive inputs (what the monkeys were looking for).


1997 ◽  
Vol 352 (1360) ◽  
pp. 1429-1436 ◽  
Author(s):  
Michael A. Arbib

This paper explores the hypothesis that various subregions (but by no means all) of the posterior parietal cortex are specialized to process visual information to extract a variety of affordances for behaviour. Two biologically based models of regions of the posterior parietal cortex of the monkey are introduced. The model of the lateral intraparietal area (LIP) emphasizes its roles in dynamic remapping of the representation of targets during a double-saccade task, and in combining stored, updated input with current visual input. The model of the anterior intraparietal area (AIP) addresses parietal–premotor interactions involved in grasping, and analyses the interaction between the AIP and premotor area F5. The model represents the role of other intraparietal areas working in concert with the inferotemporal cortex, as well as with corollary discharge from F5, to provide and augment the affordance information in the AIP, and suggests how various constraints may resolve the action opportunities provided by multiple affordances. Finally, a systems-level model of hippocampo–parietal interactions underlying rat navigation is developed, motivated by the monkey data used in developing the above two models as well as by data on neurons in the posterior parietal cortex of the monkey that are sensitive to visual motion. The formal similarity between dynamic remapping (primate saccades) and path integration (rat navigation) is noted, and certain available data on rat posterior parietal cortex are explained in terms of affordances for locomotion. The utility of further modelling, linking the World Graph model of cognitive maps for motivated behaviour with hippocampal–parietal interactions involved in navigation, is also suggested. These models demonstrate that the posterior parietal cortex is not only itself a network of interacting subsystems, but functions through cooperative computation with many other brain regions.


1997 ◽  
Vol 352 (1360) ◽  
pp. 1421-1428 ◽  
Author(s):  
Richard A. Andersen

The posterior parietal cortex has long been considered an ‘association’ area that combines information from different sensory modalities to form a cognitive representation of space. However, until recently little has been known about the neural mechanisms responsible for this important cognitive process. Recent experiments from the author's laboratory indicate that visual, somatosensory, auditory and vestibular signals are combined in areas LIP and 7a of the posterior parietal cortex. The integration of these signals can represent the locations of stimuli with respect to the observer and within the environment. Area MSTd combines visual motion signals, similar to those generated during an observer's movement through the environment, with eye-movement and vestibular signals. This integration appears to play a role in specifying the path on which the observer is moving. All three cortical areas combine different modalities into common spatial frames by using a gain-field mechanism. The spatial representations in areas LIP and 7a appear to be important for specifying the locations of targets for actions such as eye movements or reaching; the spatial representation within area MSTd appears to be important for navigation and the perceptual stability of motion signals.


2009 ◽  
Author(s):  
Philip Tseng ◽  
Cassidy Sterling ◽  
Adam Cooper ◽  
Bruce Bridgeman ◽  
Neil G. Muggleton ◽  
...  
