Coding of peripersonal space in inferior premotor cortex (area F4)

1996 ◽  
Vol 76 (1) ◽  
pp. 141-157 ◽  
Author(s):  
L. Fogassi ◽  
V. Gallese ◽  
L. Fadiga ◽  
G. Luppino ◽  
M. Matelli ◽  
...  

1. We studied the functional properties of neurons in the caudal part of inferior area 6 (area F4) in awake monkeys. In agreement with previous reports, we found that the large majority (87%) of neurons responded to sensory stimuli. The responsive neurons fell into three categories: somatosensory neurons (30%); visual neurons (14%); and bimodal, visual and somatosensory neurons (56%). Both somatosensory and bimodal neurons typically responded to light touch of the skin. Their receptive fields (RFs) were located on the face, neck, trunk, and arms. Approaching objects were the most effective visual stimuli. Visual RFs were mostly located in the space near the monkey (peripersonal space). Typically they extended in the space adjacent to the tactile RFs. 2. The coordinate system in which visual RFs were coded was studied in 110 neurons. In 94 neurons the RF location was independent of eye position, remaining in the same position in the peripersonal space regardless of eye deviation. The RF location with respect to the monkey was not modified by changing the monkey's position in the recording room. In 10 neurons the RF location followed the eye movements, remaining in the same retinal position (retinocentric RFs). For the remaining six neurons the RF organization was not clear. We will refer to F4 neurons with RFs independent of eye position as somatocentered neurons. 3. In most somatocentered neurons (43 of 60 neurons) the background level of activity and the response to visual stimuli were not modified by changes in eye position, whereas they were modulated in the remaining 17. It is important to note that eye deviations were constantly accompanied by a synergic increase in the activity of the ipsilateral neck muscles. It is therefore not clear whether the modulation of neuron discharge depended on eye position or was a consequence of changes in neck muscle activity. 4. The effect of stimulus velocity (20-80 cm/s) on neuron response intensity and RF extent in depth was studied in 34 somatocentered neurons. The results showed that in most neurons an increase in stimulus velocity produced an expansion in depth of the RF. 5. We conclude that space is coded differently in areas that control somatic and eye movements. We suggest that space coding in different cortical areas depends on the computational requirements of the effectors they control.
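
As an illustration of the coordinate-frame test described above, the following minimal Python sketch (not the authors' analysis code; the tolerance value and example RF centres are assumptions) classifies a visual RF as somatocentered or retinocentric from how its body-centred position changes with eye deviation.

```python
# Hypothetical sketch: decide whether a visual receptive field (RF) is somatocentered
# (anchored to the body) or retinocentric (anchored to the retina) by regressing the
# body-centred RF centre against eye position.
import numpy as np

def classify_rf_frame(eye_positions_deg, rf_centers_deg, tol=0.2):
    """eye_positions_deg: horizontal eye deviations (deg) at which the RF was mapped.
    rf_centers_deg: RF centre azimuth in body-centred coordinates at each eye position.
    Returns 'somatocentered', 'retinocentric', or 'unclear'."""
    eye = np.asarray(eye_positions_deg, dtype=float)
    rf = np.asarray(rf_centers_deg, dtype=float)
    # Slope of RF centre vs. eye position: ~0 if the RF stays anchored to the body,
    # ~1 if it shifts one-for-one with gaze (i.e., stays fixed on the retina).
    slope = np.polyfit(eye, rf, 1)[0]
    if abs(slope) < tol:
        return "somatocentered"
    if abs(slope - 1.0) < tol:
        return "retinocentric"
    return "unclear"

# Example: the RF centre stays near +20 deg azimuth while the eyes deviate by +/-15 deg.
print(classify_rf_frame([-15, 0, 15], [21.0, 20.0, 19.5]))   # -> somatocentered
```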

1991 ◽  
Vol 66 (6) ◽  
pp. 2125-2140 ◽  
Author(s):  
A. M. Pastor ◽  
B. Torres ◽  
J. M. Delgado-Garcia ◽  
R. Baker

1. The discharge of antidromically identified medial rectus and abducens motoneurons was recorded in restrained, unanesthetized goldfish during spontaneous eye movements and in response to vestibular and optokinetic stimulation. 2. All medial rectus and abducens motoneurons exhibited a similar discharge pattern. A burst of spikes accompanied spontaneous saccades and fast phases during vestibular and optokinetic nystagmus in the ON-direction. Firing rate decreased for the same eye movements in the OFF-direction. All units showed a steady firing rate proportional to eye position beyond their recruitment threshold. 3. Motoneuronal position (ks) and velocity (rs) sensitivity for spontaneous eye movements were calculated from the slope of the rate-position and rate-velocity linear regression lines, respectively. The averaged ks and rs values of medial rectus motoneurons were higher than those of abducens motoneurons. The differences in motoneuronal sensitivity, coupled with structural variations in the lateral versus the medial rectus muscle, suggest that symmetric nasal and temporal eye movements are preserved by different motor unit composition. Although the abducens nucleus consists of distinct rostral and caudal subgroups, mean ks and rs values were not significantly different between the two populations. 4. Every abducens and medial rectus motoneuron fired an intense burst of spikes during its corresponding temporal or nasal activation phase of the "eye blink." This eye movement consisted of a sequential, rather than a synergic, contraction of both vertical and horizontal extraocular muscles. The eye blink could act neither as a protective reflex nor as a goal-directed eye movement because it could not be evoked in response to sensory stimuli. We propose a role for the blink in recentering eye position. 5. Motoneuronal firing rate after ON-directed saccades decreased exponentially before reaching the sustained discharge proportional to the new eye position. Time constants of the exponential decay ranged from 50 to 300 ms. Longer time constants after the saccade were associated with backward drifts of eye position and shorter time constants with onward drifts. These postsaccadic slide signals are suggested to encode the transition of eye position to the new steady level. 6. Motoneurons modulated sinusoidally in response to sinusoidal head rotation in the dark, but for a part of the cycle they went into cutoff, dependent on their eye position recruitment threshold. Eye position (kv) and velocity (rv) sensitivity during vestibular stimulation were measured at frequencies between 1/16 and 2 Hz. Motoneuronal time constants (tau v = rv/kv) decreased on average by 25% with the frequency of vestibular stimulation. (ABSTRACT TRUNCATED AT 400 WORDS)
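
The rate-position and rate-velocity sensitivities described above can be illustrated with a short Python sketch. Note it is a simplified assumption of the analysis, not the authors' code: it fits position and velocity sensitivity in a single multiple regression on synthetic data, whereas the abstract describes separate rate-position and rate-velocity regressions.

```python
# Minimal sketch: estimate a motoneuron's eye-position sensitivity (ks) and
# eye-velocity sensitivity (rs) by least-squares regression of firing rate on
# eye position and eye velocity. All numbers are synthetic.
import numpy as np

def position_velocity_sensitivity(rate_hz, eye_pos_deg, eye_vel_deg_s):
    """Least-squares fit of rate ~ F0 + ks*position + rs*velocity."""
    X = np.column_stack([np.ones_like(eye_pos_deg), eye_pos_deg, eye_vel_deg_s])
    (f0, ks, rs), *_ = np.linalg.lstsq(X, rate_hz, rcond=None)
    return f0, ks, rs

# Synthetic samples generated as rate = 40 + 4*position + 0.9*velocity, plus noise.
rng = np.random.default_rng(0)
pos = rng.uniform(-20, 20, 500)          # eye position (deg)
vel = rng.normal(0, 30, 500)             # eye velocity (deg/s)
rate = 40 + 4.0 * pos + 0.9 * vel + rng.normal(0, 2, 500)

f0, ks, rs = position_velocity_sensitivity(rate, pos, vel)
print(f"ks = {ks:.2f} spikes/s per deg, rs = {rs:.2f} spikes/s per deg/s")
print(f"rs/ks = {rs / ks * 1000:.0f} ms (a ratio analogous to tau_v = rv/kv in the abstract)")
```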


2019 ◽  
Vol 116 (29) ◽  
pp. 14749-14754 ◽  
Author(s):  
Warren W. Pettine ◽  
Nicholas A. Steinmetz ◽  
Tirin Moore

Neurons in sensory areas of the neocortex are known to represent information both about sensory stimuli and behavioral state, but how these 2 disparate signals are integrated across cortical layers is poorly understood. To study this issue, we measured the coding of visual stimulus orientation and of behavioral state by neurons within superficial and deep layers of area V4 in monkeys while they covertly attended or prepared eye movements to visual stimuli. We show that whereas single neurons and neuronal populations in the superficial layers conveyed more information about the orientation of visual stimuli than neurons in deep layers, the opposite was true of information about the behavioral relevance of those stimuli. In particular, deep layer neurons encoded greater information about the direction of planned eye movements than superficial neurons. These results suggest a division of labor between cortical layers in the coding of visual input and visually guided behavior.
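
To make the layer comparison concrete, here is an illustrative Python sketch on synthetic data. The nearest-class-mean decoder, population sizes, and tuning strengths are assumptions, not the study's methods; the sketch only shows how one might compare orientation information carried by a "superficial" versus a "deep" population.

```python
# Illustrative sketch: decode stimulus orientation from two synthetic populations,
# one with stronger orientation tuning ("superficial") and one with weaker tuning
# ("deep"), using a simple nearest-class-mean decoder.
import numpy as np

rng = np.random.default_rng(1)
orientations = np.arange(0, 180, 22.5)          # 8 orientation classes (deg)

def simulate_population(n_neurons, tuning_gain, n_trials=40):
    prefs = rng.uniform(0, 180, n_neurons)      # preferred orientations
    X, y = [], []
    for label, ori in enumerate(orientations):
        for _ in range(n_trials):
            # von Mises-like orientation tuning (period 180 deg) plus Poisson noise
            drive = 5 + tuning_gain * np.exp(2 * (np.cos(np.deg2rad(2 * (ori - prefs))) - 1))
            X.append(rng.poisson(drive))
            y.append(label)
    return np.array(X, float), np.array(y)

def nearest_mean_accuracy(X, y):
    # split trials in half: fit class means on one half, decode the other half
    train = np.arange(len(y)) % 2 == 0
    means = np.stack([X[train & (y == c)].mean(axis=0) for c in np.unique(y)])
    pred = np.argmin(((X[~train][:, None, :] - means[None]) ** 2).sum(-1), axis=1)
    return (pred == y[~train]).mean()

X_sup, y_sup = simulate_population(n_neurons=60, tuning_gain=20)    # stronger tuning
X_deep, y_deep = simulate_population(n_neurons=60, tuning_gain=6)   # weaker tuning
print("superficial decode accuracy:", nearest_mean_accuracy(X_sup, y_sup))
print("deep decode accuracy:", nearest_mean_accuracy(X_deep, y_deep))
```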


2018 ◽  
Author(s):  
Warren W. Pettine ◽  
Nicholas A. Steinmetz ◽  
Tirin Moore

Summary: Neurons in sensory areas of the neocortex are known to represent information both about sensory stimuli and behavioral state, but how these two disparate signals are integrated across cortical layers is poorly understood. To study this issue, we measured the coding of visual stimulus orientation and of behavioral state by neurons within superficial and deep layers of area V4 in monkeys while they covertly attended or prepared eye movements to visual stimuli. We show that single neurons and neuronal populations in superficial layers convey more information about the orientation of visual stimuli, whereas single neurons and neuronal populations in deep layers convey greater information about the behavioral relevance of those stimuli. In particular, deep layer neurons encode greater information about the direction of prepared eye movements. These results reveal a division of labor between laminae in the coding of visual input and visually guided behavior.


1990 ◽  
Vol 64 (1) ◽  
pp. 77-90 ◽  
Author(s):  
M. J. Mustari ◽  
A. F. Fuchs

1. To determine the possible role of the primate pretectal nucleus of the optic tract (NOT) in the generation of optokinetic and smooth-pursuit eye movements, we recorded the activity of 155 single units in four behaving rhesus macaques. The monkeys were trained to fixate a stationary target spot during visual testing and to track a small moving spot in a variety of visual environments. 2. The majority (82%) of NOT neurons responded only to visual stimuli. Most units responded vigorously to large-field (70 x 50 degrees) moving visual stimuli and responded less, if at all, during smooth-pursuit eye movements in the dark; many of these units had large receptive fields (greater than 10 x 10 degrees) that included the fovea. The remaining visual units responded more vigorously during smooth-pursuit eye movements in the dark than during movement of large-field visual stimuli; all but one had small receptive fields (less than 10 x 10 degrees) that included the fovea. For all visual units that responded during smooth pursuit, extinguishing the small moving target so briefly that pursuit continued caused the firing rates to drop to resting levels, confirming that the discharge was due to visual stimulation of receptive fields with foveal and perifoveal movement sensitivity and not to smooth-pursuit eye movements per se. 3. Eighteen percent of all NOT units ceased their tonic discharge in association with all saccades, including the quick phases accompanying optokinetic or vestibular nystagmus. The pause in firing began after saccade onset, was unrelated to saccade duration, and occurred even in complete darkness. 4. Most (90%) of the visual NOT units were direction selective. They exhibited an increase in firing above resting level during horizontal (ipsilateral) background movement and/or during smooth pursuit of a moving spot, and a decrease in firing during contralateral movement. 5. The firing rates of NOT units were highly dependent on stimulus velocity. All had velocity thresholds of less than 1 degree/s and exhibited a monotonic increase in firing rate with visual stimulus velocity over part (90%) or all (10%) of the tested range (i.e., 1–200 degrees/s). Most NOT units exhibited velocity tuning, with an average preferred velocity of 64 degrees/s. (ABSTRACT TRUNCATED AT 400 WORDS)
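
The velocity-tuning summary measures mentioned above can be sketched in a few lines of Python. The firing rates below are made up for illustration and are not the recorded data.

```python
# Hedged sketch: summarize one NOT unit's velocity tuning by finding the preferred
# velocity (peak firing rate) and the portion of the tested range over which the
# rate still increases monotonically with stimulus velocity.
import numpy as np

velocities = np.array([1, 2, 5, 10, 20, 40, 80, 160, 200], float)  # deg/s, tested range
rates = np.array([12, 20, 33, 48, 58, 66, 71, 55, 46], float)      # assumed spikes/s

preferred = velocities[np.argmax(rates)]             # velocity giving the peak response
rising = np.diff(rates) > 0                          # where rate still increases with velocity
monotonic_up_to = velocities[1:][rising][-1] if rising.any() else velocities[0]

print(f"preferred velocity: {preferred:.0f} deg/s")              # 80 deg/s for this synthetic unit
print(f"rate increases monotonically up to ~{monotonic_up_to:.0f} deg/s")
```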


2016 ◽  
Author(s):  
Roy Salomon ◽  
Jean-Paul Noel ◽  
Marta Łukowska ◽  
Nathan Faivre ◽  
Thomas Metzinger ◽  
...  

Abstract: Recent studies have highlighted the role of multisensory integration as a key mechanism of self-consciousness. In particular, integration of bodily signals within the peripersonal space (PPS) underlies the experience of the self in a body we own (self-identification) and that is experienced as occupying a specific location in space (self-location), two main components of bodily self-consciousness (BSC). Experiments investigating the effects of multisensory integration on BSC have typically employed supra-threshold sensory stimuli, neglecting the role of unconscious sensory signals in BSC, as tested in other consciousness research. Here, we used psychophysical techniques to test whether multisensory integration of bodily stimuli underlying BSC may also occur for multisensory inputs presented below the threshold of conscious perception. Our results indicate that visual stimuli rendered invisible (through continuous flash suppression) boost processing of tactile stimuli on the body (Exp. 1), and enhance the perception of near-threshold tactile stimuli (Exp. 2), only once they entered peripersonal space. We then employed unconscious multisensory mechanisms to manipulate BSC. Participants were presented with tactile stimulation on their body and with visual stimuli on a virtual body, seen at a distance, which were either visible or rendered invisible. We report that if visuo-tactile stimulation was synchronized, participants self-identified with the virtual body (Exp. 3), and shifted their self-location toward the virtual body (Exp. 4), even if visual stimuli were fully invisible. Our results indicate that multisensory inputs, even outside of awareness, are integrated and affect the phenomenological content of self-consciousness, grounding BSC firmly in the field of psychophysical consciousness studies.


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1394
Author(s):  
Asad Ali ◽  
Sanaul Hoque ◽  
Farzin Deravi

Presentation attack artefacts can be used to subvert the operation of biometric systems by being presented to the sensors of such systems. In this work, we propose the use of visual stimuli with randomised trajectories to stimulate eye movements for the detection of such spoofing attacks. The presentation of a moving visual challenge is used to ensure that some pupillary motion is stimulated and then captured with a camera. Various types of challenge trajectories are explored on different planar geometries representing prospective devices where the challenge could be presented to users. To evaluate the system, photo, 2D mask and 3D mask attack artefacts were used and pupillary movement data were captured from 80 volunteers performing genuine and spoofing attempts. The results support the potential of the proposed features for the detection of biometric presentation attacks.
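
The challenge-response idea behind this detector can be sketched as follows. This is a simplified Python illustration, not the paper's feature set or classifier: it scores liveness by how well recorded pupil-centre positions track a randomised target trajectory, with the correlation-over-lags scoring being an assumption of this sketch.

```python
# Simplified sketch of challenge-response presentation attack detection:
# present a target moving along a randomised trajectory, record pupil-centre
# positions, and score liveness by how well the response tracks the challenge.
import numpy as np

rng = np.random.default_rng(7)

def random_challenge(n_points=120, step=0.05):
    """Random-walk target trajectory in normalised screen coordinates [0, 1]."""
    steps = rng.uniform(-step, step, size=(n_points, 2))
    return np.clip(0.5 + np.cumsum(steps, axis=0), 0, 1)

def tracking_score(challenge, gaze, max_lag=15):
    """Best correlation between challenge and response over a range of latencies (samples)."""
    best = -1.0
    for lag in range(max_lag + 1):
        c, g = challenge[:len(challenge) - lag], gaze[lag:]
        r = np.corrcoef(c.ravel(), g.ravel())[0, 1]
        best = max(best, r)
    return best

challenge = random_challenge()
live_gaze = np.roll(challenge, 8, axis=0) + rng.normal(0, 0.02, challenge.shape)  # follows with latency
photo_gaze = rng.uniform(0, 1, challenge.shape)                                   # static artefact: no tracking

print("live score:", round(tracking_score(challenge, live_gaze), 2))     # high
print("photo score:", round(tracking_score(challenge, photo_gaze), 2))   # low
```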


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5178
Author(s):  
Sangbong Yoo ◽  
Seongmin Jeong ◽  
Seokyeon Kim ◽  
Yun Jang

Gaze movements and visual stimuli have been utilized to analyze human visual attention intuitively. Gaze behavior studies mainly present statistical analyses of eye movements and human visual attention. In these analyses, the eye movement data and the saliency map are presented to analysts either as separate views or as merged views. However, analysts become frustrated when they must keep track of all of the separate views, or when the eye movements obscure the saliency map in the merged views. It is therefore not easy to analyze how visual stimuli affect gaze movements, since existing techniques focus excessively on the eye movement data. In this paper, we propose a novel visualization technique for analyzing gaze behavior that uses saliency features as visual clues to express the visual attention of an observer. The visual clues that represent visual attention are analyzed to reveal which saliency features are prominent for the visual stimulus analysis. We visualize the gaze data together with the saliency features to interpret visual attention, and we analyze gaze behavior with the proposed visualization to evaluate whether embedding saliency features in the visualization helps an analyst understand the visual attention of an observer.
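
The core step of annotating gaze samples with saliency-feature values can be illustrated with a short Python sketch. The feature maps below (intensity and a crude contrast stand-in) and the random gaze samples are toy assumptions, not the paper's saliency features or visualization system.

```python
# Illustrative sketch: sample saliency-feature values at gaze positions so each
# gaze sample carries a "visual clue" about what may have drawn attention.
import numpy as np

rng = np.random.default_rng(3)
image = rng.random((240, 320))                       # toy grayscale stimulus, values in [0, 1]

# toy feature maps the gaze samples will be annotated with
feature_maps = {
    "intensity": image,
    "contrast": np.abs(image - image.mean()),        # crude local-contrast stand-in
}

gaze_xy = np.column_stack([rng.integers(0, 320, 50),   # x (column)
                           rng.integers(0, 240, 50)])  # y (row)

def clues_at_gaze(gaze_xy, feature_maps):
    """Return, for each gaze sample, the value of every saliency feature at that pixel."""
    return {name: fmap[gaze_xy[:, 1], gaze_xy[:, 0]]    # index as [row, col]
            for name, fmap in feature_maps.items()}

clues = clues_at_gaze(gaze_xy, feature_maps)
# Which feature is elevated at the attended locations relative to the whole image?
for name, values in clues.items():
    print(f"{name}: mean at gaze = {values.mean():.2f}, image mean = {feature_maps[name].mean():.2f}")
```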


1996 ◽  
Vol 76 (3) ◽  
pp. 1439-1456 ◽  
Author(s):  
P. Mazzoni ◽  
R. M. Bracewell ◽  
S. Barash ◽  
R. A. Andersen

1. The lateral intraparietal area (area LIP) of the monkey's posterior parietal cortex (PPC) contains neurons that are active during saccadic eye movements. These neurons' activity includes visual and saccade-related components. These responses are spatially tuned and the location of a neuron's visual receptive field (RF) relative to the fovea generally overlaps its preferred saccade amplitude and direction (i.e., its motor field, MF). When a delay is imposed between the presentation of a visual stimulus and a saccade made to its location (memory saccade task), many LIP neurons maintain elevated activity during the delay (memory activity, M), which appears to encode the metrics of the next intended saccadic eye movements. Recent studies have alternatively suggested that LIP neurons encode the locations of visual stimuli regardless of where the animal intends to look. We examined whether the M activity of LIP neurons specifically encodes movement intention or the locations of recent visual stimuli, or a combination of both. In the accompanying study, we investigated whether the intended-movement activity reflects changes in motor plan. 2. We trained monkeys (Macaca mulatta) to memorize the locations of two visual stimuli and plan a sequence of two saccades, one to each remembered target, as we recorded the activity of single LIP neurons. Two targets were flashed briefly while the monkey maintained fixation; after a delay the fixation point was extinguished, and the monkey made two saccades in sequence to each target's remembered location, in the order in which the targets were presented. This "delayed double saccade" (DDS) paradigm allowed us to dissociate the location of visual stimulation from the direction of the planned saccade and thus distinguish neuronal activity related to the target's location from activity related to the saccade plan. By imposing a delay, we eliminated the confounding effect of any phasic responses coincident with the appearance of the stimulus and with the saccade. 3. We arranged the two visual stimuli so that in one set of conditions at least the first one was in the neuron's visual RF, and thus the first saccade was in the neuron's motor field (MF). M activity should be high in these conditions according to both the sensory memory and motor plan hypotheses. In another set of conditions, the second stimulus appeared in the RF but the first one was presented outside the RF, instructing the monkey to plan the first saccade away from the neuron's MF. If the M activity encodes the motor plan, it should be low in these conditions, reflecting the plan for the first saccade (away from the MF). If it is a sensory trace of the stimulus' location, it should be high, reflecting stimulation of the RF by the second target. 4. We tested 49 LIP neurons (in 3 hemispheres of 2 monkeys) with M activity on the DDS task. Of these, 38 (77%) had M activity related to the next intended saccade. They were active in the delay period, as expected, if the first saccade was in their preferred direction. They were less active or silent if the next saccade was not in their preferred direction, even when the second stimulus appeared in their RF. 5. The M activity of 8 (16%) of the remaining neurons specifically encoded the location of the most recent visual stimulus. Their firing rate during the delay reflected stimulation of the RF independently of the saccade being planned. 
The remaining 3 neurons had M activity that did not consistently encode either the next saccade or the stimulus' location. 6. We also recorded the activity of a subset of neurons (n = 38) in a condition in which no stimulus appeared in a neuron's RF, but the second saccade was in the neuron's MF. In this case the majority of neurons tested (23/38, 60%) became active in the period between the first and second saccade, even if neither stimulus had appeared in their RF. Moreover, this activity appeared only after the first saccade had started in all but two of
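
The logic of the delayed-double-saccade test described above can be written out as a small Python sketch. This is not the authors' analysis; the firing rates and the 2x-baseline threshold are illustrative assumptions. Condition A places the first stimulus in the RF (first saccade into the motor field); condition B places the second stimulus in the RF while the first saccade is planned away from it. Motor-plan coding predicts high delay activity only in A; a sensory trace predicts high delay activity in both.

```python
# Hedged sketch of the delayed-double-saccade (DDS) logic for classifying
# memory-period (M) activity as intended-movement coding vs. a sensory trace.
def classify_memory_activity(delay_rate_A, delay_rate_B, baseline, factor=2.0):
    """Crude rule: 'high' means at least `factor` times the baseline rate (spikes/s)."""
    high_A = delay_rate_A >= factor * baseline   # first saccade into the motor field
    high_B = delay_rate_B >= factor * baseline   # RF stimulated, but first saccade away
    if high_A and not high_B:
        return "intended-movement (next-saccade) coding"
    if high_A and high_B:
        return "sensory memory of the stimulus location"
    return "inconsistent / neither"

# Example values for one hypothetical neuron with a 5 spikes/s baseline:
print(classify_memory_activity(delay_rate_A=28.0, delay_rate_B=6.0, baseline=5.0))
```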

