Grasping Execution and Grasping Observation Activity of Single Neurons in the Macaque Anterior Intraparietal Area

2014 · Vol 26 (10) · pp. 2342-2355
Author(s): Pierpaolo Pani, Tom Theys, Maria C. Romero, Peter Janssen

Primates use vision to guide their actions in everyday life. Visually guided object grasping is known to rely on a network of cortical areas located in the parietal and premotor cortex. We recorded in the anterior intraparietal area (AIP), an area in the dorsal visual stream that is critical for object grasping and densely connected with the premotor cortex, while monkeys grasped objects under visual guidance and during passive fixation of videos of grasping actions filmed from the first-person perspective. All AIP neurons in this study responded during grasping execution in the light (i.e., became more active after the hand had started to move toward the object) and during grasping in the dark. More than half of these AIP neurons also responded during observation of a video of the same grasping actions on a display. Furthermore, these AIP neurons responded as strongly during passive fixation of movements of a hand on a scrambled background, and to a lesser extent to a shape appearing in the visual field near the object. Therefore, AIP neurons that respond during grasping execution also respond during passive observation of grasping actions, and most of them respond even during passive observation of movements of a simple shape in the visual field.

Author(s): Sigrid Hegna Ingvaldsen, Tora Sund Morken, Dordi Austeng, Olaf Dammann

Research on retinopathy of prematurity (ROP) focuses mainly on the abnormal vascularization patterns that are directly visible to ophthalmologists. However, recent findings indicate that children born prematurely also exhibit changes in the retinal cellular architecture and along the dorsal visual stream, such as structural changes between and within cortical areas. Moreover, perinatal sustained systemic inflammation (SSI) is associated with an increased risk for ROP and the visual deficits that follow. In this paper, we propose that ROP might just be the tip of an iceberg we call visuopathy of prematurity (VOP). The VOP paradigm comprises abnormal vascularization of the retina, alterations in retinal cellular architecture, choroidal degeneration, and abnormalities in the visual pathway, including cortical areas. Furthermore, VOP itself might influence the developmental trajectories of cerebral structures and functions deemed responsible for visual processing, thereby explaining visual deficits among children born preterm.


2011 · Vol 11 (11) · p. 952
Author(s): S. Rossit, T. McAdam, A. Mclean, M. Goodale, J. Culham

1999 · Vol 81 (4) · pp. 1927-1938
Author(s): Kiyoshi Kurata, Eiji Hoshi

Reacquisition deficits in prism adaptation after muscimol microinjection into the ventral premotor cortex of monkeys

A small amount of muscimol (1 μl; concentration, 5 μg/μl) was injected into the ventral and dorsal premotor cortex (PMv and PMd, respectively) of monkeys, which were then required to perform a visually guided reaching task. For the task, the monkeys had to reach for a target soon after it was presented on a screen. While performing the task, the monkeys' eyes were covered with left 10°, right 10°, or no wedge prisms for blocks of 50–100 trials. Without the prisms, the monkeys reached the targets accurately. With the prisms in place, the monkeys initially misreached the targets because the prisms displaced the visual field. Before the muscimol injection, the monkeys adapted to the prisms within 10–20 trials, as judged from the horizontal distance between the target location and the point where the monkey touched the screen. After muscimol injection into the PMv, the monkeys lost the ability to readapt and touched the screen closer to the location of the targets as seen through the prisms. This deficit was observed only for selected target locations, namely when the targets were shifted contralateral to the injected hemisphere. When muscimol was injected into the PMd, no such deficits were observed. Muscimol injections into either area produced no changes in reaction or movement times. The results suggest that the PMv plays an important role in motor learning, specifically in recalibrating visual and motor coordinates.
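As a rough illustration of the adaptation measure described above (the horizontal distance between the target location and the touch point), here is a minimal Python sketch that estimates how many trials a block takes to reach a stable criterion; the criterion value, block length, and data are hypothetical and not taken from the study.

```python
# Minimal sketch (not the authors' analysis): quantify prism adaptation as the
# horizontal error between touch point and target, and estimate how many trials
# it takes for that error to stay below a criterion.
import numpy as np

def trials_to_adapt(touch_x, target_x, criterion_deg=2.0, run_length=5):
    """Return the first trial index after which the absolute horizontal error
    stays below `criterion_deg` for `run_length` consecutive trials."""
    error = np.abs(np.asarray(touch_x) - np.asarray(target_x))
    below = error < criterion_deg
    for t in range(len(below) - run_length + 1):
        if below[t:t + run_length].all():
            return t
    return None  # no stable adaptation within the block

# Hypothetical 50-trial block with a 10 deg prism: the error starts near 10 deg
# and shrinks as the monkey adapts.
rng = np.random.default_rng(0)
target_x = np.zeros(50)
touch_x = 10.0 * np.exp(-np.arange(50) / 6.0) + rng.normal(0, 0.5, 50)
print("trials to adapt:", trials_to_adapt(touch_x, target_x))
```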


2016 · Vol 115 (3) · pp. 1542-1555
Author(s): Maria C. Romero, Peter Janssen

Visual object information is necessary for grasping. In primates, the anterior intraparietal area (AIP) plays an essential role in visually guided grasping. Neurons in AIP encode features of objects, but no study has systematically investigated the receptive field (RF) of AIP neurons. We mapped the RF of posterior AIP (pAIP) neurons in the central visual field, using images of objects and small line fragments that evoked robust responses, together with less effective stimuli. The RF sizes we measured varied between 3°² and 90°², with the highest response either at the fixation point or at parafoveal positions. A large fraction of pAIP neurons showed nonuniform RFs, with multiple local maxima in both the ipsilateral and contralateral hemifields. Moreover, the RF profile could depend strongly on the stimulus used to map the RF. Highly similar results were obtained with the smallest stimulus that evoked reliable responses (line fragments measuring 1–2°). The nonuniformity and stimulus dependence of the RF profile in pAIP were comparable to previous observations in the anterior part of the lateral intraparietal area (aLIP), but the average RF of pAIP neurons was centered at the fovea, whereas the average RF of aLIP neurons was located parafoveally. Thus nonuniformity and stimulus dependence of the RF may represent general RF properties of neurons in the dorsal visual stream involved in object analysis, which contrast markedly with those of neurons in the ventral visual stream.
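The RF measures reported above (RF area and multiple local maxima) can be illustrated with a short sketch; the grid size, spacing, firing rates, and half-maximum criterion below are assumptions chosen for illustration, not the published analysis.

```python
# Minimal sketch (assumed method): estimate RF size and detect local maxima
# from mean firing rates measured on a grid of stimulus positions.
import numpy as np

def rf_size_and_peaks(rates, grid_spacing_deg, threshold_frac=0.5):
    """`rates`: 2D array of baseline-subtracted firing rates, one per position.
    RF size = area (deg^2) of positions above a fraction of the peak response;
    peaks = positions at least as large as all of their immediate neighbors."""
    rates = np.asarray(rates, dtype=float)
    above = rates >= threshold_frac * rates.max()
    rf_size = above.sum() * grid_spacing_deg ** 2
    peaks = []
    rows, cols = rates.shape
    for r in range(rows):
        for c in range(cols):
            neigh = rates[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            if above[r, c] and rates[r, c] == neigh.max():
                peaks.append((r, c))
    return rf_size, peaks

# Hypothetical nonuniform RF on a 5 x 5 grid (3 deg spacing): a foveal peak
# plus a weaker second hot spot in one hemifield.
rates = np.array([[ 2,  4,  6,  4,  2],
                  [ 4, 10, 20, 10, 12],
                  [ 6, 20, 40, 20, 25],
                  [ 4, 10, 20, 10, 12],
                  [ 2,  4,  6,  4,  2]], dtype=float)
size, peaks = rf_size_and_peaks(rates, grid_spacing_deg=3.0)
print(f"RF size ~ {size:.0f} deg^2, local maxima at grid positions {peaks}")
```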


2010 · Vol 103 (2) · pp. 817-826
Author(s): Hui Meng, Dora E. Angelaki

Multisensory neurons tuned to both vestibular and visual motion (optic flow) signals are found in several cortical areas of the dorsal visual stream. Here we examine whether such convergence also occurs subcortically, in the macaque thalamus. We searched the ventral posterior nuclei, including the anterior pulvinar, as well as the ventrolateral and ventral posterior lateral nuclei, areas that receive vestibular signals from the brain stem and deep cerebellar nuclei. Approximately a quarter of the cells responded to three-dimensional (3D) translational and/or rotational motion. More than half of the responsive cells were convergent, responding during both rotation and translation. The preferred axes of translation/rotation were distributed throughout 3D space. The majority of the neurons were excited, but some were inhibited, during rotation/translation in darkness. Only a couple of neurons were multisensory, tuned to both vestibular and optic flow stimuli. We conclude that multisensory vestibular/optic flow neurons, which are commonly found in cortical visual and visuomotor areas, are rare in the ventral posterior thalamus.
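One way to summarize 3D tuning of the kind described above is to estimate each cell's preferred axis as a response-weighted vector sum of the sampled motion directions; the sketch below is an assumed, generic version of such an analysis with made-up directions and firing rates.

```python
# Minimal sketch (assumption, not the authors' analysis): estimate a preferred
# 3D motion axis as the response-weighted vector sum of sampled directions.
import numpy as np

def preferred_axis(directions, rates, baseline=0.0):
    """`directions`: (n, 3) unit vectors of sampled motion directions;
    `rates`: corresponding mean firing rates. Returns a unit vector."""
    w = np.clip(np.asarray(rates, dtype=float) - baseline, 0, None)  # excitatory drive only
    v = (w[:, None] * np.asarray(directions, dtype=float)).sum(axis=0)
    return v / np.linalg.norm(v)

# Hypothetical cell tested along the six cardinal axes (spikes/s, made up),
# responding most strongly to motion along +x.
directions = np.array([[1, 0, 0], [-1, 0, 0],
                       [0, 1, 0], [0, -1, 0],
                       [0, 0, 1], [0, 0, -1]], dtype=float)
rates = np.array([35, 8, 20, 12, 15, 14], dtype=float)
print("preferred axis:", preferred_axis(directions, rates, baseline=10.0))
```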


2009 · Vol 101 (1) · pp. 289-305
Author(s): Alessandra Fanini, John A. Assad

The lateral intraparietal area (LIP) of the macaque is believed to play a role in the allocation of attention and in planning saccadic eye movements. Many studies have shown that LIP neurons generally encode the static spatial location demarcated by the receptive field (RF). LIP neurons might also provide information about the features of visual stimuli within the RF. For example, LIP receives input from cortical areas in the dorsal visual pathway that contain many direction-selective neurons. Here we examine the direction selectivity of LIP neurons. Animals were only required to fixate while motion stimuli appeared in the RF. To avoid spatial confounds, the motion stimuli were patches of randomly arrayed dots that moved with 100% coherence in eight different directions. We found that the majority (61%) of LIP neurons were direction selective. The direction tuning was fairly broad, with a median direction-tuning bandwidth of 136°. The average strength of direction selectivity was weaker in LIP than in other areas of the dorsal visual stream, but that difference may reflect the tonic offset in firing that LIP neurons showed whenever a visual stimulus was in the RF, independent of direction. Direction-selective neurons do not seem to constitute a functionally distinct subdivision within LIP, because those neurons had robust, sustained delay-period activity during a memory delayed-saccade task. Nor could the direction selectivity be explained by asymmetries in the spatial RF, as might occur if the animals attended to slightly different locations depending on the direction of motion in the RF. Our results show that direction selectivity is a distinct attribute of LIP neurons in addition to spatial encoding.
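A direction-tuning bandwidth like the one quoted above (a full width at half maximum) can be estimated by fitting a smooth tuning function to the eight-direction responses; the von Mises-style fit and the firing rates in the sketch below are assumptions, not the paper's exact procedure.

```python
# Minimal sketch (assumed method): fit a von Mises-like tuning curve to
# responses in 8 motion directions and report the full width at half maximum.
import numpy as np
from scipy.optimize import curve_fit

def von_mises(theta, baseline, amp, pref, kappa):
    return baseline + amp * np.exp(kappa * (np.cos(theta - pref) - 1.0))

def tuning_bandwidth_deg(directions_deg, rates):
    theta = np.deg2rad(np.asarray(directions_deg, dtype=float))
    rates = np.asarray(rates, dtype=float)
    p0 = [rates.min(), rates.max() - rates.min(), theta[np.argmax(rates)], 2.0]
    popt, _ = curve_fit(von_mises, theta, rates, p0=p0, maxfev=10000)
    kappa = popt[3]
    if kappa <= 0:
        return 360.0            # no meaningful peak
    # Half maximum above baseline: exp(kappa*(cos(dx)-1)) = 0.5
    arg = 1.0 + np.log(0.5) / kappa
    if arg <= -1.0:
        return 360.0            # tuning too broad to fall to half maximum
    return np.rad2deg(2.0 * np.arccos(arg))

# Hypothetical tuning curve (spikes/s) over 8 directions, peaked at 90 deg.
directions = np.arange(0, 360, 45)
rates = [12.0, 20.0, 35.0, 22.0, 14.0, 10.0, 9.0, 10.0]
print(f"direction-tuning bandwidth ~ {tuning_bandwidth_deg(directions, rates):.0f} deg")
```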


2004 · Vol 16 (5) · pp. 817-827
Author(s): K. Vogeley, M. May, A. Ritzl, P. Falkai, K. Zilles, ...

Taking the first-person perspective (1PP) centered upon one's own body, as opposed to the third-person perspective (3PP), which enables us to take the viewpoint of someone else, is constitutive for human self-consciousness. At the underlying representational or cognitive level, these operations are processed in an egocentric reference frame, in which locations are represented centered on one's own perspective (1PP) or another person's perspective (3PP). To study 3PP and 1PP, both operating in egocentric frames, a virtual scene with an avatar and red balls in a room was presented from different camera viewpoints to normal volunteers (n = 11) in a functional magnetic resonance imaging (fMRI) experiment. The task for the subjects was to count the objects as seen either from the avatar's perspective (3PP) or from their own perspective (1PP). The scene was presented from either a ground view (GV) or an aerial view (AV) to investigate the effect of view on perspective taking. The factors perspective (3PP vs. 1PP) and view (GV vs. AV) were arranged in a two-factorial design. Reaction times were longer and accuracy was lower for 3PP than for 1PP. To identify the neural mechanisms associated with perspective taking, the fMRI data were analyzed using SPM'99 in each subject and with nonparametric statistics at the group level. Activations common to 3PP and 1PP (relative to baseline) were observed in a network of occipital, parietal, and prefrontal areas. Deactivations common to 3PP and 1PP (relative to baseline) were observed predominantly in mesial (i.e., parasagittal) cortical areas and in lateral superior temporal areas bilaterally. Differential increases of neural activity were found in mesial superior parietal cortex and right premotor cortex during 3PP (relative to 1PP), whereas differential increases during 1PP (relative to 3PP) were found in mesial prefrontal cortex, posterior cingulate cortex, and superior temporal cortex bilaterally. The data suggest that, in addition to joint neural mechanisms, for example due to visuospatial processing and decision making, 3PP and 1PP rely on differential neural processes. Mesial cortical areas are involved in decisional processes when the spatial task is solved from one's own viewpoint, whereas egocentric operations from another person's perspective differentially draw upon cortical areas known to be involved in spatial cognition.
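The 2 x 2 within-subject design described above (perspective x view) reduces to two main-effect contrasts and one interaction contrast on the condition means; the sketch below illustrates these contrasts on hypothetical per-subject reaction times and is not the study's SPM'99 analysis.

```python
# Minimal sketch (illustration only): main effects and interaction for a
# 2 x 2 within-subject design on hypothetical mean reaction times.
import numpy as np

# rt[subject, perspective, view]: perspective 0 = 1PP, 1 = 3PP; view 0 = GV, 1 = AV.
rng = np.random.default_rng(1)
rt = 900 + rng.normal(0, 50, size=(11, 2, 2))
rt[:, 1, :] += 120                      # build in slower responses for 3PP

cell_means = rt.mean(axis=0)            # 2 x 2 table of condition means
main_perspective = cell_means[1].mean() - cell_means[0].mean()   # 3PP - 1PP
main_view = cell_means[:, 1].mean() - cell_means[:, 0].mean()    # AV - GV
interaction = ((cell_means[1, 1] - cell_means[1, 0])
               - (cell_means[0, 1] - cell_means[0, 0]))

print(f"3PP - 1PP: {main_perspective:+.0f} ms, AV - GV: {main_view:+.0f} ms, "
      f"interaction: {interaction:+.0f} ms")
```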


2021 · Vol 11 (4) · p. 521
Author(s): Jonathan Erez, Marie-Eve Gagnon, Adrian M. Owen

Investigating human consciousness based on brain activity alone is a key challenge in cognitive neuroscience. One of its central facets, the ability to form autobiographical memories, has been investigated in several fMRI studies, which have revealed a pattern of activity across a network of frontal, parietal, and medial temporal lobe regions when participants view personal photographs, as opposed to photographs from someone else's life. Here, we attempted to decode when participants were re-experiencing an entire event, captured on video from a first-person perspective, relative to a very similar event experienced by someone else. Participants were asked to sit passively in a wheelchair while a researcher pushed them around a local mall. A small wearable camera was mounted on each participant in order to capture autobiographical videos of the visit from a first-person perspective. One week later, participants were scanned while they passively viewed different categories of videos; some were autobiographical, while others were not. A machine-learning model classified the video categories above chance, both within and across participants, suggesting that a shared mechanism differentiates autobiographical experiences from non-autobiographical ones. Moreover, the classifier brain maps revealed that the fronto-parietal network, mid-temporal regions, and extrastriate cortex were critical for differentiating between autobiographical and non-autobiographical memories. We argue that this novel paradigm captures the true nature of autobiographical memories and is well suited to patients (e.g., with brain injuries) who may be unable to respond reliably to traditional experimental stimuli.
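A generic version of the cross-participant decoding described above is a leave-one-participant-out classifier trained on voxel patterns; the simulated data, linear SVM, and pipeline in the sketch below are assumptions, not the authors' exact model.

```python
# Minimal sketch (generic MVPA pipeline): leave-one-participant-out
# classification of autobiographical vs. non-autobiographical video trials.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(2)
n_subjects, trials_per_subject, n_voxels = 10, 20, 500   # hypothetical sizes

# Simulated data: y = 1 for autobiographical trials, 0 otherwise; a weak signal
# is added to a subset of "voxels" so the classifier has something to learn.
X = rng.normal(0, 1, (n_subjects * trials_per_subject, n_voxels))
y = np.tile(np.repeat([0, 1], trials_per_subject // 2), n_subjects)
groups = np.repeat(np.arange(n_subjects), trials_per_subject)
X[y == 1, :50] += 0.4

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"across-participant accuracy: {scores.mean():.2f} (chance = 0.50)")
```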

