Temporal Dynamics of 2D Motion Integration for Ocular Following in Macaque Monkeys

2010 ◽  
Vol 103 (3) ◽  
pp. 1275-1282 ◽  
Author(s):  
Frédéric V. Barthélemy ◽  
Jérome Fleuriet ◽  
Guillaume S. Masson

Several recent studies have shown that extracting pattern motion direction is a dynamical process in which edge motion is extracted first and pattern-related information is encoded with a small time lag by MT neurons. Similar dynamics were found for human reflexive and voluntary tracking. Here, we bring an essential, but still missing, piece of information by documenting macaque ocular following responses to gratings, unikinetic plaids, and barber-poles. We found that ocular tracking was always initiated first in the grating motion direction with ultra-short latencies (∼55 ms). A second component was driven only 10–15 ms later, rotating tracking toward the pattern motion direction. At the end of the open-loop period, tracking direction was aligned with the pattern motion direction (plaids) or the average of the line-ending motion directions (barber-poles). We characterized each component's dependency on contrast. Both the timing and direction of ocular following were quantitatively very consistent with the dynamics of neuronal responses reported by others. Overall, we found a remarkable consistency between neuronal dynamics and monkey behavior, advocating a direct link between the neuronal solution of the aperture problem and primate perception and action.
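The two-component dynamics described above lends itself to a simple descriptive illustration: tracking direction as the vector sum of an early, transient grating-driven signal and a later, sustained pattern-driven signal. The sketch below is not the authors' model; the latencies follow the abstract (~55 ms, then 10–15 ms later), but the gain time constants are assumed placeholders.

```python
import numpy as np

def gain(t, lat, tau_rise, tau_decay=None):
    """Gain of one motion component: switches on at latency `lat` (ms),
    rises with time constant tau_rise, optionally decays (transient)."""
    dt = np.clip(t - lat, 0.0, None)
    g = 1.0 - np.exp(-dt / tau_rise)
    if tau_decay is not None:
        g *= np.exp(-dt / tau_decay)          # make the component transient
    return np.where(t > lat, g, 0.0)

def tracking_direction(t_ms, grating_dir=90.0, pattern_dir=45.0):
    """Tracking direction (deg) as the vector sum of an early, transient
    grating-driven component (~55 ms latency) and a later, sustained
    pattern-driven component (~70 ms). Time constants are illustrative."""
    g_edge = gain(t_ms, lat=55.0, tau_rise=10.0, tau_decay=40.0)
    g_patt = gain(t_ms, lat=70.0, tau_rise=20.0)
    v = (g_edge * np.exp(1j * np.deg2rad(grating_dir)) +
         g_patt * np.exp(1j * np.deg2rad(pattern_dir)))
    return np.rad2deg(np.angle(v))

t = np.arange(56.0, 200.0, 12.0)
print(np.round(tracking_direction(t), 1))  # rotates from ~90 deg toward 45 deg
```

With these placeholder gains, the vector sum starts at the grating direction and settles at the pattern direction, mirroring the rotation of tracking over the open-loop period.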

Perception ◽  
2018 ◽  
Vol 47 (7) ◽  
pp. 735-750 ◽  
Author(s):  
Lindsey M. Shain ◽  
J. Farley Norman

An experiment required younger and older adults to estimate coherent visual motion direction from multiple motion signals, each of which was locally ambiguous with respect to the true direction of pattern motion. Accurate performance therefore required integrating motion signals across space, i.e., solving the aperture problem. The observers viewed arrays of either 64 or 9 moving line segments; because these lines moved behind apertures, their individual local motions were ambiguous with respect to direction. Following 2.4 seconds of pattern motion on each trial (true motion directions spanned the full 360° range in the fronto-parallel plane), the observers estimated the coherent direction of motion. There was an effect of direction, such that cardinal directions of pattern motion were judged with less error than oblique directions. In addition, a large effect of aging occurred: the average absolute errors of the older observers were 46% and 30.4% higher in magnitude than those exhibited by the younger observers for the 64 and 9 aperture conditions, respectively. Finally, the observers' precision deteriorated markedly as the number of apertures was reduced from 64 to 9.
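The integration this task demands can be illustrated with the classic intersection-of-constraints computation: each line seen through an aperture constrains only the velocity component along the line's normal, and the global pattern velocity is the least-squares intersection of the stacked constraints. A minimal sketch (an illustration of the principle, not the study's stimulus or analysis code):

```python
import numpy as np

def ioc_velocity(normals, normal_speeds):
    """Recover the global 2D velocity from locally ambiguous measurements.

    normals:       (n, 2) unit normal of each line segment
    normal_speeds: (n,)   speed measured along each normal
    Each aperture constrains only n_i . v = s_i; the pattern velocity is
    the least-squares intersection of those constraint lines.
    """
    normals = np.asarray(normals, dtype=float)
    v, *_ = np.linalg.lstsq(normals, np.asarray(normal_speeds), rcond=None)
    return v

# Example: true pattern motion 2 deg/s toward 30 deg, seen through 64 apertures.
true_v = 2.0 * np.array([np.cos(np.deg2rad(30)), np.sin(np.deg2rad(30))])
rng = np.random.default_rng(0)
angles = rng.uniform(0, np.pi, 64)                 # 64 line orientations
normals = np.column_stack([np.cos(angles), np.sin(angles)])
speeds = normals @ true_v                          # locally measured speeds
print(ioc_velocity(normals, speeds))               # ~ [1.73, 1.00]
```

With fewer apertures (e.g., 9) and noisy local measurements, the constraint intersection is less tightly determined, consistent with the drop in precision reported above.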


2021 ◽  
Author(s):  
Christian Quaia ◽  
Incheol Kang ◽  
Bruce G Cumming

Direction-selective neurons in primary visual cortex (area V1) are affected by the aperture problem; i.e., they are only sensitive to motion orthogonal to their preferred orientation. A solution to this problem first emerges in the middle temporal (MT) area, where a subset of neurons (called pattern cells) combine motion information across multiple orientations and directions, becoming sensitive to pattern motion direction. These cells are expected to play a prominent role in subsequent neural processing, but they are intermixed with cells that behave like V1 cells (component cells) and others that do not clearly fall into either group. The picture is further complicated by the finding that cells behaving like pattern cells with one type of pattern might behave like component cells with another. We recorded from macaque MT neurons using multi-contact electrodes while presenting both type I and unikinetic plaids, in which the components were 1D noise patterns. We found that the indices used in the past to classify neurons as pattern or component cells work poorly when the properties of the stimulus are not optimized for the cell being recorded, as is always the case with multi-contact arrays. We thus propose alternative measures, which considerably ameliorate the problem and allow us to gain insight into the signals carried by individual MT neurons. We conclude that arranging cells along a component-to-pattern continuum is an oversimplification and that the signals carried by individual cells only make sense when embodied in larger populations.
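For readers unfamiliar with the classification the authors critique: the conventional indices are Fisher-transformed partial correlations between a cell's plaid tuning curve and two predictions, the component prediction (sum of the responses to each grating alone) and the pattern prediction (the grating tuning shifted to the pattern direction). A minimal sketch of that standard computation (the conventional index, not the alternative measures this paper proposes):

```python
import numpy as np

def fisher_z(r, n):
    """Fisher-transformed correlation, scaled by sqrt(n - 3) as is conventional."""
    return np.arctanh(r) * np.sqrt(n - 3)

def pattern_component_z(plaid, pred_pattern, pred_component):
    """Z-scored partial correlations used to classify MT cells.

    plaid:          measured plaid direction tuning curve
    pred_pattern:   predicted tuning if the cell signals pattern motion
    pred_component: predicted tuning if the cell signals component motion
    """
    n = len(plaid)
    r_p = np.corrcoef(plaid, pred_pattern)[0, 1]
    r_c = np.corrcoef(plaid, pred_component)[0, 1]
    r_pc = np.corrcoef(pred_pattern, pred_component)[0, 1]
    # Partial correlations remove the variance shared by the two predictions.
    R_p = (r_p - r_c * r_pc) / np.sqrt((1 - r_c**2) * (1 - r_pc**2))
    R_c = (r_c - r_p * r_pc) / np.sqrt((1 - r_p**2) * (1 - r_pc**2))
    return fisher_z(R_p, n), fisher_z(R_c, n)

# Conventionally, a cell is called "pattern" if Z_p - Z_c > 1.28 and Z_p > 1.28,
# "component" for the mirror-image criterion, and "unclassed" otherwise.
```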


2010 ◽  
Vol 103 (1) ◽  
pp. 230-243 ◽  
Author(s):  
Ryusuke Hayashi ◽  
Yuko Sugita ◽  
Shin'ya Nishida ◽  
Kenji Kawano

Visual motion signals, which are initially extracted in parallel at multiple spatial frequencies, are subsequently integrated into a unified motion percept. Cross-frequency integration plays a crucial role when directional information conflicts across frequencies due to factors such as occlusion. We investigated human observers' open-loop oculomotor tracking responses (ocular following responses, or OFRs) and perceived motion direction in an idealized occlusion situation, multiple-slits viewing (MSV), in which a moving pattern is visible only through an array of slits. We also tested a more challenging viewing condition, contrast-alternating MSV (CA-MSV), in which the contrast polarity of the moving pattern alternates as it passes the slits. We found that changes in the distribution of the spectral content of the slit stimuli, introduced by varying both the interval between the slits and the frame rate of the image stream, modulated the OFR and the reported motion direction in a rather complex manner. We show that those complex modulations could be explained by the weighted sum of the motion signal (motion contrast) at each spatiotemporal frequency. The estimated distributions of frequency weights (tuning maps) indicate that the cross-frequency integration of supra-threshold motion signals gives strong weight to low-spatial-frequency components (<0.25 cpd) for both the OFR and motion perception. However, the tuning maps estimated with the MSV stimuli differed significantly from those estimated with the CA-MSV (and from those measured more directly using grating stimuli), suggesting that inter-frequency interactions (e.g., interactions producing speed-dependent tuning) were involved.
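The weighted-sum account can be written compactly as R ≈ Σ w(fs, ft) · C(fs, ft), where C is the motion contrast in each spatiotemporal frequency bin and the weights w form the tuning map. A schematic sketch of the estimation step (array shapes and the least-squares fit are assumptions for illustration, not the authors' analysis code):

```python
import numpy as np

def estimate_tuning_map(motion_contrast, responses, shape):
    """Least-squares estimate of spatiotemporal frequency weights.

    motion_contrast: (n_stimuli, n_sf * n_tf) motion contrast per frequency bin
    responses:       (n_stimuli,) measured OFR amplitude (or perceptual report)
    shape:           (n_sf, n_tf) layout of the frequency grid
    Model: response ~ weighted sum of motion contrast across frequency bins.
    """
    w, *_ = np.linalg.lstsq(motion_contrast, responses, rcond=None)
    return w.reshape(shape)      # the "tuning map" over (sf, tf)

# Toy usage: 40 stimuli on an 8 x 5 (sf x tf) frequency grid.
rng = np.random.default_rng(1)
X = rng.random((40, 40))
true_w = rng.normal(size=40)
y = X @ true_w
print(np.allclose(estimate_tuning_map(X, y, (8, 5)).ravel(), true_w))  # True
```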


Author(s):  
Filippo Ghin ◽  
Louise O’Hare ◽  
Andrea Pavan

There is evidence that high-frequency transcranial random noise stimulation (hf-tRNS) is effective in improving behavioural performance in several visual tasks. However, so far there has been limited research into the spatial and temporal characteristics of hf-tRNS-induced facilitatory effects. In the present study, electroencephalography (EEG) was used to investigate the spatial and temporal dynamics of cortical activity modulated by offline hf-tRNS during a motion direction discrimination task. We used EEG to measure the amplitude of motion-related VEPs over the parieto-occipital cortex, as well as oscillatory power spectral density (PSD) at rest. A time-frequency decomposition analysis was also performed to investigate the shift in event-related spectral perturbation (ERSP) in response to the motion stimuli between the pre- and post-stimulation periods. The results showed that accuracy on the motion direction discrimination task was not modulated by offline hf-tRNS. Although the motion task elicited motion-dependent VEP components (P1, N2, and P2), none of them showed any significant change between pre- and post-stimulation. We also found a time-dependent increase of the PSD in the alpha and beta bands regardless of the stimulation protocol. Finally, the time-frequency analysis showed a modulation of ERSP power in the hf-tRNS condition for gamma activity when compared with the pre-stimulation period and sham stimulation. Overall, these results show that offline hf-tRNS may induce moderate aftereffects in brain oscillatory activity.
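As a point of reference for the ERSP analysis mentioned above: event-related spectral perturbation is trial-averaged time-frequency power expressed in dB relative to a pre-stimulus baseline. A self-contained Morlet-wavelet sketch of that computation (illustrative only; the study's actual pipeline and parameters are not specified here):

```python
import numpy as np

def ersp(epochs, fs, freqs, baseline, n_cycles=7):
    """Event-related spectral perturbation via Morlet wavelet convolution.

    epochs:   (n_trials, n_samples) stimulus-locked EEG epochs
    fs:       sampling rate (Hz)
    freqs:    frequencies of interest (Hz)
    baseline: (start, stop) sample indices of the pre-stimulus window
    Returns power in dB relative to the baseline mean, (n_freqs, n_samples).
    """
    n_trials, n_samples = epochs.shape
    power = np.zeros((len(freqs), n_samples))
    for i, f in enumerate(freqs):
        sigma_t = n_cycles / (2 * np.pi * f)
        t = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma_t**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))   # unit-energy norm
        for trial in epochs:
            conv = np.convolve(trial, wavelet, mode="same")
            power[i] += np.abs(conv) ** 2
    power /= n_trials
    base = power[:, baseline[0]:baseline[1]].mean(axis=1, keepdims=True)
    return 10 * np.log10(power / base)

# Toy usage: 30 trials of 1-s epochs at 250 Hz, baseline = first 50 samples.
rng = np.random.default_rng(3)
epochs = rng.normal(size=(30, 250))
print(ersp(epochs, fs=250, freqs=np.array([10., 20., 40.]), baseline=(0, 50)).shape)
```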


2009 ◽  
Vol 102 (1) ◽  
pp. 513-522 ◽  
Author(s):  
Anand C. Joshi ◽  
Matthew J. Thurtell ◽  
Mark F. Walker ◽  
Alessandro Serra ◽  
R. John Leigh

The human ocular following response (OFR) is a preattentive, short-latency visual-field-holding mechanism, which is enhanced if the moving stimulus is applied in the wake of a saccade. Since most natural gaze shifts incorporate both saccadic and vergence components, we asked whether the OFR is also enhanced during vergence. Ten subjects viewed vertically moving sine-wave gratings on a video monitor at 45 cm; the gratings had a temporal frequency of 16.7 Hz, a contrast of 32%, and a spatial frequency of 0.17, 0.27, or 0.44 cycle/deg. In Fixation/OFR experiments, subjects fixated a white central dot on the video monitor, which disappeared at the beginning of each trial, just as the sinusoidal grating started moving up or down. We measured the change in eye position in the 70- to 150-ms open-loop interval following stimulus onset. Group mean downward responses were larger (0.14°) and made at shorter latency (85 ms) than upward responses (0.10° and 96 ms). The direction of eye drifts during control trials, when gratings remained stationary, was unrelated to the prior response. During Vergence/OFR experiments, subjects switched their fixation point between the white dot at 45 cm and a red spot at 15 cm, cued by the disappearance of one target and the appearance of the other. When horizontal vergence velocity exceeded 15°/s, motion of the sinusoidal gratings commenced and elicited the vertical OFR. Subjects showed significantly (P < 0.001) larger OFRs when the moving stimulus was presented during convergence (group mean increase of 46%) or divergence (group mean increase of 36%) compared with following fixation. Since gaze shifts between near and far targets are common during natural activities, we postulate that the enhancement of the OFR during vergence movements reflects enhancement of early cortical motion processing, which serves to stabilize the visual field as the eyes approach their new fixation point.
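The dependent measure here is the change in eye position over the 70- to 150-ms open-loop window after motion onset, before visual feedback can influence the response. A minimal sketch of that measurement (the trace layout and sampling rate are assumptions for illustration):

```python
import numpy as np

def ofr_amplitude(eye_pos, fs, onset_idx, window_ms=(70.0, 150.0)):
    """Open-loop OFR amplitude: change in eye position (deg) within the
    70-150 ms window after stimulus motion onset.

    eye_pos:   (n_samples,) vertical eye position trace in degrees
    fs:        sampling rate (Hz)
    onset_idx: sample index of stimulus motion onset
    """
    i0 = onset_idx + int(window_ms[0] * fs / 1000.0)
    i1 = onset_idx + int(window_ms[1] * fs / 1000.0)
    return eye_pos[i1] - eye_pos[i0]

# Toy trace: 1 kHz sampling, drift beginning ~85 ms after onset at sample 500.
fs, onset = 1000, 500
t = np.arange(1500) / fs
trace = 2.0 * np.clip(t - (onset / fs + 0.085), 0.0, None)       # deg
print(round(ofr_amplitude(trace, fs, onset), 3))                 # ~0.13 deg
```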


1991 ◽  
Vol 157 (1) ◽  
pp. 461-481 ◽  
Author(s):  
R. PREISS ◽  
M. GEWECKE

1. The visual control of translatory movements in the desert locust Schistocerca gregaria was investigated under open-loop conditions. When locusts were flown tethered in a wind tunnel, wind drift, visually simulated by ground pattern motion either in line with or transverse to the long body axis, induced a modulation of yaw-torque, thrust and lift correlated with the reversal of the direction of motion.
2. Yaw-torque and thrust responses were independent of each other. Spontaneous modulation of amplitude and differences in the time course of these responses indicate that a gain control mechanism is involved in the conversion of the visual stimuli into a behavioural response.
3. Two opposing types of response were observed for each flight parameter, and they were found equally often. They were elicited by either transverse or longitudinal pattern motion. The polarity of the yaw-torque, thrust or lift responses was thus either positively or negatively correlated with the direction of pattern motion, and was either preserved throughout an experiment or reversed repeatedly.
4. The yaw responses revealed a tendency for locusts to orient either upwind or downwind under the same stimulus situation. Modulations of thrust and lift confirm that locusts compensate for deviations of the retinal image flow from a preferred value by adjusting both air speed and altitude in free flight: they either speed up or slow down, and either increase or decrease flight altitude, under the same stimulus situation.
5. The visually induced turning tendency often interacts with a variable internal turning tendency. The internal turning tendency might be responsible for the menotactic orientation to wind seen in the field.
6. The threshold of optomotor responses in the visual control of translation is below 0.15° s−1 for both transverse and longitudinal pattern motion, indicating that wind-related orientation can occur at altitudes of several hundred metres.
7. The orientation behaviour of locusts subjected to visually simulated wind drift depended on the transverse and longitudinal components of pattern motion and on internal factors. The observed variability of response is assumed to result from the locust's ability to modulate independently the gain and sign of the optomotor responses for yaw-torque, thrust and lift.


2019 ◽  
Vol 6 (3) ◽  
pp. 190114
Author(s):  
William Curran ◽  
Lee Beattie ◽  
Delfina Bilello ◽  
Laura A. Coulter ◽  
Jade A. Currie ◽  
...  

Prior experience influences visual perception. For example, extended viewing of a moving stimulus results in the misperception of a subsequent stimulus's motion direction: the direction after-effect (DAE). There has been an ongoing debate regarding the locus of the neural mechanisms underlying the DAE. We know the mechanisms are cortical, but there is uncertainty about where in the visual cortex they are located: at relatively early, local motion processing stages, or at later global motion stages. We used a unikinetic plaid as an adapting stimulus, then measured the DAE experienced with a drifting random-dot test stimulus. A unikinetic plaid comprises a static grating superimposed on a drifting grating of a different orientation. Observers cannot see the true motion direction of the moving component; instead, they see pattern motion running parallel to the static component. The pattern motion of unikinetic plaids is encoded at the global processing level, specifically in cortical areas MT and MST, whereas the local motion component is encoded earlier. We measured the direction after-effect as a function of the plaid's local and pattern motion directions. The DAE was induced by the plaid's pattern motion, but not by its component motion. This points to the neural mechanisms underlying the DAE being located at the global motion processing level, and no earlier than area MT.



2020 ◽  
Vol 123 (2) ◽  
pp. 682-694 ◽  
Author(s):  
Jacob L. Yates ◽  
Leor N. Katz ◽  
Aaron J. Levi ◽  
Jonathan W. Pillow ◽  
Alexander C. Huk

Motion discrimination is a well-established model system for investigating how sensory signals are used to form perceptual decisions. Classic studies relating single-neuron activity in the middle temporal area (MT) to perceptual decisions have suggested that a simple linear readout could underlie motion discrimination behavior. A theoretically optimal readout, in contrast, would take into account the correlations between neurons and the sensitivity of individual neurons at each time point. However, it remains unknown how sophisticated the readout needs to be to support actual motion-discrimination behavior or to approach optimal performance. In this study, we evaluated the performance of various neurally plausible decoders trained to discriminate motion direction from small ensembles of simultaneously recorded MT neurons. We found that decoding the stimulus without knowledge of the interneuronal correlations was sufficient to match an optimal (correlation-aware) decoder. Additionally, a decoder could match the psychophysical performance of the animals with flat integration over up to half the stimulus duration and with temporal dynamics inherited from the time-varying MT responses. These results demonstrate that simple linear decoders operating on small ensembles of neurons can match both psychophysical performance and optimal sensitivity without taking correlations into account, and that such simple readout mechanisms can exhibit complex temporal properties inherited from the sensory dynamics themselves. NEW & NOTEWORTHY Motion perception depends on the ability to decode the activity of neurons in the middle temporal area. Theoretically optimal decoding requires knowledge of the sensitivity of neurons and of interneuronal correlations. We report that a simple correlation-blind decoder performs as well as the optimal decoder for coarse motion discrimination. Additionally, the decoder could match psychophysical performance with moderate temporal integration and dynamics inherited from sensory responses.
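The comparison at the heart of this study can be reproduced in miniature: simulate a correlated Gaussian population responding to two motion directions, then compare a correlation-blind linear readout (weights using only the diagonal of the covariance) against the optimal readout w = Σ⁻¹Δμ. This toy simulation is an assumption-laden sketch, not the recorded data or the paper's decoders:

```python
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_trials = 20, 5000

# Mean responses to the two motion directions, plus correlated Gaussian noise.
mu_a = rng.uniform(5, 15, n_neurons)
mu_b = mu_a + rng.normal(0, 1.5, n_neurons)       # direction-dependent shift
A = rng.normal(0, 1, (n_neurons, n_neurons))
cov = A @ A.T / n_neurons + np.eye(n_neurons)     # positive-definite covariance

resp_a = rng.multivariate_normal(mu_a, cov, n_trials)
resp_b = rng.multivariate_normal(mu_b, cov, n_trials)

dmu = mu_b - mu_a
w_opt = np.linalg.solve(cov, dmu)                 # correlation-aware readout
w_blind = dmu / np.diag(cov)                      # ignores off-diagonal terms

def accuracy(w):
    thresh = w @ (mu_a + mu_b) / 2
    correct = np.concatenate([resp_a @ w < thresh, resp_b @ w > thresh])
    return correct.mean()

print(f"optimal: {accuracy(w_opt):.3f}, blind: {accuracy(w_blind):.3f}")
```

Depending on the covariance structure, the gap between the two readouts can be large or negligible; the paper's empirical finding is that for small MT ensembles and coarse discrimination it is negligible.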


Vision ◽  
2019 ◽  
Vol 3 (4) ◽  
pp. 64
Author(s):  
Martin Lages ◽  
Suzanne Heron

Like many predators, humans have forward-facing eyes set a short distance apart, so that an extensive region of the visual field is seen from two different points of view. The human visual system can establish a three-dimensional (3D) percept from the projection of images into the left and right eyes. How the visual system integrates local motion and binocular depth to accomplish 3D motion perception is still under investigation. Here, we propose a geometric-statistical model that combines noisy velocity constraints with a spherical motion prior in order to solve the aperture problem in 3D. In two psychophysical experiments, we show that instantiations of this model can explain how human observers disambiguate the direction of 3D line motion behind a circular aperture. We discuss the implications of our results for the processing of motion and dynamic depth in the visual system.
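A flavor of the geometric-statistical approach, reduced to 2D: a single aperture yields a noisy velocity constraint nᵀv = c, and a zero-mean Gaussian prior over velocity regularizes the otherwise ill-posed estimate, giving the MAP solution of (nnᵀ/σ² + I/σp²)v = nc/σ². The sketch below is a simplified 2D analogue with a slow-motion prior, not the authors' spherical prior in 3D:

```python
import numpy as np

def map_velocity(normals, speeds, sigma_obs=0.1, sigma_prior=2.0):
    """MAP 2D velocity from noisy aperture constraints plus a Gaussian prior.

    Minimizes sum_i (n_i . v - c_i)^2 / sigma_obs^2 + |v|^2 / sigma_prior^2.
    With a single constraint, the prior selects the slowest compatible motion,
    resolving the aperture ambiguity.
    """
    N = np.asarray(normals, float).reshape(-1, 2)
    c = np.asarray(speeds, float)
    A = N.T @ N / sigma_obs**2 + np.eye(2) / sigma_prior**2
    b = N.T @ c / sigma_obs**2
    return np.linalg.solve(A, b)

# One aperture: normal at 45 deg, measured normal speed 1 deg/s.
n = np.array([[np.cos(np.pi / 4), np.sin(np.pi / 4)]])
print(map_velocity(n, [1.0]))   # close to 1 * n, i.e. the slowest solution
```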

