stationary stimulus
Recently Published Documents


TOTAL DOCUMENTS: 17 (FIVE YEARS: 3)

H-INDEX: 5 (FIVE YEARS: 1)

2021 ◽ Author(s): Audrey Morrow, Jason Samaha

Theories of perception based on discrete sampling posit that visual consciousness is reconstructed from snapshot-like perceptual moments, as opposed to being updated continuously. According to a model proposed by Schneider (2018), discrete sampling can explain both the flash-lag and the Fröhlich illusion, whereby a lag in the conscious updating of a moving stimulus alters its perceived spatial location relative to a stationary stimulus. The alpha-band frequency, which is associated with phasic modulation of stimulus detection and with the temporal resolution of perception, has been proposed to reflect the duration of perceptual moments. The goal of this study was to determine whether a single oscillator (e.g., alpha) underlies the duration of perceptual moments, which would predict that the points of subjective equality (PSEs) in the flash-lag and Fröhlich illusions are positively correlated across individuals. Although our displays induced robust flash-lag and Fröhlich effects, virtually zero correlation was found between the PSEs in the two illusions, indicating that the illusion magnitudes are unrelated across observers. These findings suggest that, if discrete sampling theory is true, these illusory percepts either rely on different oscillatory frequencies or do not rely on oscillations at all. Alternatively, discrete sampling may not be the mechanism underlying these two motion illusions, or our methods were ill-suited to test the theory.
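As an illustration of the across-observer correlation test described above, here is a minimal Python sketch; the per-observer PSE values are invented for illustration and are not data from the study:

```python
import numpy as np
from scipy import stats

# Hypothetical per-observer points of subjective equality (PSE), e.g. in
# degrees of visual angle, for the two illusions; real values would come
# from psychometric-function fits for each observer.
pse_flash_lag = np.array([0.42, 0.55, 0.31, 0.60, 0.48, 0.37, 0.52, 0.45])
pse_frohlich  = np.array([0.29, 0.51, 0.44, 0.33, 0.47, 0.39, 0.58, 0.36])

# A single shared oscillator (e.g., alpha) would predict a positive
# correlation between the two illusion magnitudes across observers.
r, p = stats.pearsonr(pse_flash_lag, pse_frohlich)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```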


2020 ◽ Vol 13 (5) ◽ Author(s): Shirin Sadeghpour, Jorge Otero-Millan

While many studies have characterized eye movements during visual fixation, including microsaccades, in most cases only the horizontal and vertical components have been recorded and analyzed. Thus, little is known about the torsional component of microsaccades. We took advantage of newly developed software and hardware to record eye movements around all three axes of rotation during fixation and during a torsional optokinetic stimulus. We found that the average amplitude of the torsional component of microsaccades during fixation was 0.34 ± 0.07 degrees, with velocities following a main sequence whose slope was comparable to that of the horizontal and vertical components. We also found that the size of the torsional displacement during microsaccades was correlated with the horizontal but not the vertical component. In the presence of an optokinetic stimulus, a nystagmus was induced, producing more frequent and larger torsional quick phases than the microsaccades produced during fixation with a stationary stimulus. The torsional and vertical vergence components of quick phases grew larger with higher velocities. Additionally, our results validate and show the feasibility of recording torsional eye movements using video eye tracking in a desktop-mounted setup.
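As a rough sketch of the main-sequence analysis mentioned above (peak velocity as a function of amplitude, summarized by a slope that can be compared across components), the snippet below uses simulated microsaccade data; the amplitudes, velocities, and the log-log fitting choice are assumptions for illustration, not values or methods taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated microsaccade amplitudes (deg) and peak velocities (deg/s);
# real data would come from the detected torsional (or horizontal/vertical)
# components of each microsaccade.
amplitude = rng.uniform(0.1, 1.0, size=200)
peak_velocity = 50.0 * amplitude**0.9 * np.exp(rng.normal(0.0, 0.1, size=200))

# The main sequence is commonly summarized by a linear fit in log-log
# coordinates: log(peak velocity) = slope * log(amplitude) + intercept.
slope, intercept = np.polyfit(np.log(amplitude), np.log(peak_velocity), 1)
print(f"main-sequence slope (log-log): {slope:.2f}")
```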


2019 ◽ Vol 116 (20) ◽ pp. 10081-10086 ◽ Author(s): Elizabeth Huber, Fang Jiang, Ione Fine

Previous studies report that human middle temporal complex (hMT+) is sensitive to auditory motion in early-blind individuals. Here, we show that hMT+ also develops selectivity for auditory frequency after early blindness, and that this selectivity is maintained after sight recovery in adulthood. Frequency selectivity was assessed using both moving band-pass and stationary pure-tone stimuli. As expected, within primary auditory cortex, both moving and stationary stimuli successfully elicited frequency-selective responses, organized in a tonotopic map, for all subjects. In early-blind and sight-recovery subjects, we saw evidence for frequency selectivity within hMT+ for the auditory stimulus that contained motion. We did not find frequency-tuned responses within hMT+ when using the stationary stimulus in either early-blind or sight-recovery subjects. We saw no evidence for auditory frequency selectivity in hMT+ in sighted subjects using either stimulus. Thus, after early blindness, hMT+ can exhibit selectivity for auditory frequency. Remarkably, this auditory frequency tuning persists in two adult sight-recovery subjects, showing that, in these subjects, auditory frequency-tuned responses can coexist with visually driven responses in hMT+.


2016 ◽ Vol 16 (12) ◽ pp. 1085 ◽ Author(s): Ikuya Murakami, Shunsuke Aoki, Akitoshi Kawano, Masahiko Terao

2006 ◽ Vol 273 (1600) ◽ pp. 2507-2512 ◽ Author(s): Barrie W Roulston, Matt W Self, Semir Zeki

The mechanism of positional localization has recently been debated due to interest in the flash-lag effect, which occurs when a briefly flashed stationary stimulus is perceived to lag behind a spatially aligned moving stimulus. Here we report positional localization effects observed at motion offsets as well as at onsets. In the ‘flash-lead’ effect, a moving object is perceived to be behind a spatially concurrent stationary flash before the two disappear. With ‘reverse-repmo’, subjects mislocalize the final position of a moving bar in the direction opposite to the trajectory of motion. Finally, we demonstrate that simultaneous onset and offset effects lead to a perceived compression of visual space. By characterizing illusory effects observed at motion offsets as well as at onsets, we provide evidence that the perceived position of a moving object is the result of an averaging process over a short time period, weighted towards the most recent positions. Our account explains a variety of motion illusions, including the compression of moving shapes when viewed through apertures.
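A minimal sketch of the kind of averaging account proposed here, with perceived position computed as a recency-weighted average of recent position samples; the exponential weighting, time constant, and sampling interval are illustrative assumptions rather than parameters reported by the authors:

```python
import numpy as np

def perceived_position(positions, dt=0.01, tau=0.05):
    """Recency-weighted average of recent position samples.

    positions : 1-D sequence of sampled positions, most recent last
    dt        : sampling interval in seconds (assumed)
    tau       : time constant of the recency weighting in seconds (assumed)
    """
    positions = np.asarray(positions, dtype=float)
    ages = dt * np.arange(len(positions) - 1, -1, -1)  # most recent sample has age 0
    weights = np.exp(-ages / tau)                      # heavier weight on recent samples
    return np.sum(weights * positions) / np.sum(weights)

# A bar moving at 10 deg/s, sampled every 10 ms for the 100 ms before it vanishes:
trajectory = 10.0 * 0.01 * np.arange(10)   # positions 0.0 ... 0.9 deg
print(perceived_position(trajectory))      # falls short of the final position (0.9 deg),
                                           # i.e. a mislocalization opposite to the motion
```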


Perception ◽ 10.1068/p5415 ◽ 2005 ◽ Vol 34 (6) ◽ pp. 687-698 ◽ Author(s): Katsumi Watanabe

When subjects localize a flash relative to another stationary stimulus, the flash appears displaced in the direction of nearby motion signals (position capture; Whitney and Cavanagh, 2000 Nature Neuroscience 3 954–959). Our previous study suggested that position capture is larger for a flash presented ahead of a moving stimulus than for a flash behind it (Watanabe et al, 2003 Perception 32 545–559). In the present study, I investigated the spatial asymmetry of position capture. Experiment 1 demonstrated that asymmetric position capture occurs primarily in a moving-object-centered coordinate frame. Experiment 2 provided evidence that asymmetric position capture operates after the individuation of single visual objects. Finally, experiment 3 demonstrated that, when attention was reduced with a dual-task procedure, the asymmetric position capture increased. These results suggest that the spatial asymmetry of position capture arises without attention but that the spatial bias can be reduced by attention. Therefore, the mechanism underlying the asymmetric spatial bias may differ from attentive tracking (Cavanagh, 1992 Science 257 1563–1565) and from mislocalization during smooth pursuit (Brenner et al, 2001 Vision Research 41 2253–2259).


2003 ◽ Vol 43 (23) ◽ pp. 2387-2392 ◽ Author(s): Anna Brooks, Rick van der Zwan, John Holden

1993 ◽ Vol 77 (2) ◽ pp. 595-608 ◽ Author(s): Gary C. Galbraith, Wolfram Schultz

Spatial-temporal visuomotor rearrangement caused pursuit eye movements to counteract the vestibulo-ocular reflex (VOR). Vertical head nodding produced horizontal oscillations of a light spot delayed by 1 or 150 msec. Adaptation resulted in apparent complementary motion of a stationary stimulus during nodding. Sixteen subjects adapted and tested at 150 msec showed apparent motion of 10% magnitude. Following normal vision, while the electroencephalogram (EEG) was recorded, subjects were readapted at 150 msec but tested at 1 msec (to measure temporal generalization). Individual performance was correlated with EEG alpha. Adaptation correlated negatively with Oz and Fz intensity and positively with Oz frequency. Temporal generalization correlated positively with Oz intensity and negatively with Oz–Fz phase angle. These results suggest that visuomotor adaptability is related to electrocortical activity.


Perception ◽ 1993 ◽ Vol 22 (9) ◽ pp. 1111-1119 ◽ Author(s): Nicholas J Wade, Michael T Swanston

Visual motion of a physically stationary stimulus can be induced by the movement of adjacent stimuli. The frequencies of motion reports and the angular separations required to induce motion were determined for a number of stimulus configurations. A stationary stimulus was fixated in the centre of the display, and the point at which induced motion was first reported was measured. In the first experiment, either one or two stationary stimuli were presented in the centre of the display and either one or two similar stimuli moved horizontally towards them. The percentage of trials on which motion was induced varied with the display configuration, being greatest with two moving stimuli and one stationary stimulus. The angular separations at which motion was reported were about 2 deg for all conditions. In the second experiment, the binocular interaction of such induced motion was examined. A single static fixation stimulus was presented binocularly and a range of monocular and dichoptic conditions was examined: a single moving stimulus presented to one eye, two moving stimuli presented to one eye, or two moving stimuli presented dichoptically. Induced motion was reported on about 90% of trials for both the monocular and the dichoptic conditions with two moving stimuli. Motion was first induced at similar angular separations by two moving stimuli, whether presented monocularly or dichoptically. Binocular interaction was further examined with a display that induced motion in the stimulus presented to one eye but not in that presented to the other; this resulted in apparent motion in depth of the binocularly fixated stimulus.

