Two Stages for Depth Integration of Motion Parallax

Perception ◽  
1996 ◽  
Vol 25 (1_suppl) ◽  
pp. 163-163
Author(s):  
H Ujike ◽  
S Saida

Motion parallax has been shown to be a principal cue for depth perception under monocular viewing. The simulated depth of stimuli in previous studies has been constant in both magnitude and direction. In the present study we addressed the question of how the visual system detects parallactic depth change. To answer this we investigated the temporal characteristics of parallactic depth change and the effect of a motion signal on them. The stimulus consisted of four bands of 15-cycle sinusoidal gratings, and parallactic depth was simulated between the bands. In experiment 1, we measured the amount of perceived depth change at different frequencies (0.125 to 10 Hz) of simulated depth change and at different head-movement velocities (2.5 to 40 cm s⁻¹). The results showed that perceived depth change decreased with the frequency of depth change and, at a constant frequency, increased with head velocity. In experiment 2, we measured the motion threshold at different head-movement velocities. The threshold was constant across head velocities. In experiment 3, we measured the amount of perceived depth using apparent-motion stimuli while the head was moving. Perceived depth decreased with the SOA of the apparent-motion stimuli, but head velocity had no effect. The results of these three experiments indicate that parallactic depth change is determined by the duration of simulated depth, which corresponds to the integration time of motion, as well as by the extent of head movement. We conclude that parallactic depth is integrated in two stages: first, integration of motion and, second, integration of motion parallax.
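The two-stage account is easiest to read against the standard geometry of motion parallax. As a minimal sketch (the usual small-angle approximation, our gloss rather than anything stated in the abstract): for a lateral head velocity $v$, two points separated in depth by $\Delta d$ at viewing distance $D$ produce a relative retinal velocity of approximately

$$\omega \;\approx\; \frac{v\,\Delta d}{D^{2}},$$

so recovering $\Delta d$ requires both integrating the retinal motion signal $\omega$ (the first stage) and scaling it by the registered head movement (the second stage).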

Perception ◽  
10.1068/p5221 ◽  
2005 ◽  
Vol 34 (4) ◽  
pp. 477-490 ◽  
Author(s):  
Hiroshi Ono ◽  
Hiroyasu Ujike

Yoking the movement of the stimulus on the screen to the movement of the head, we examined visual stability and depth perception as a function of head-movement velocity and parallax. In experiment 1, for different head velocities, observers adjusted the parallax to find (a) the depth threshold and (b) the concomitant-motion threshold. Between these thresholds, depth was seen with no perceived motion. In experiment 2, for different head velocities, observers adjusted the parallax to produce the same perceived depth. A slower head movement required a greater parallax to produce the same perceived depth as a faster head movement. In experiment 3, observers reported the perceived depth for different parallax magnitudes. Perceived depth covaried with parallax at smaller magnitudes, without motion being perceived; at larger magnitudes it began to decrease and concomitant motion was seen. With the largest parallax, only motion was seen.
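A compact way to state the finding of experiment 2 (our gloss, using the equivalent-disparity convention of Rogers and Graham, not the authors' notation): when the on-screen displacement $x$ is yoked to head position $h$ with gain $g = dx/dh$, the geometry alone fixes the simulated depth regardless of how fast the head moves, predicting $d_{\text{perceived}} = f(g)$. The matching data instead behave like

$$d_{\text{perceived}} = F(g, v),$$

with $F$ increasing in head velocity $v$ at fixed gain $g$, since a slower head movement had to be compensated by a larger parallax.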


i-Perception ◽  
10.1068/ic393 ◽  
2011 ◽  
Vol 2 (4) ◽  
pp. 393-393
Author(s):  
Masahiro Ishii ◽  
Masashi Fujita ◽  
Masayuki Sato

Perception ◽  
10.1068/p5232 ◽  
2005 ◽  
Vol 34 (10) ◽  
pp. 1263-1273 ◽  
Author(s):  
Hiroshi Ono ◽  
Nicholas J Wade

Motion parallax was described as a cue to depth over 300 years ago and as producing apparent motion over 150 years ago. In recent years, experimental interest in motion parallax has increased, following the rediscovery of the idea that stimulus motion can be yoked to head movement. We compare the historical descriptions with some contemporary research, which indicates how depth and motion perception are dependent on the conditions of stimulation.


2000 ◽  
Vol 9 (6) ◽  
pp. 638-647 ◽  
Author(s):  
Hanfeng Yuan ◽  
W. L. Sachtler ◽  
Nat Durlach ◽  
Barbara Shinn-Cunningham

Experiments were conducted to determine how the ability to detect and discriminate head-motion parallax depth cues is degraded by time delays between head movement and image update. The stimuli consisted of random-dot patterns that were programmed to appear as one cycle of a sinusoidal grating when the subject's head moved. The results show that time delay between head movement and image update has essentially no effect on the ability to discriminate between two such gratings with different depth characteristics when the delay is less than or equal to roughly 265 ms.
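As a back-of-the-envelope illustration of what such a delay does to the stimulus (a hedged sketch under assumed parameters, not the authors' simulation code): for sinusoidal head motion at frequency f, a fixed update latency τ leaves the parallax gain intact and appears only as a phase lag of 2πfτ between head and image.

```python
import numpy as np

# Sketch (assumed parameters, not from the paper): effect of a fixed
# head-tracking latency tau on a head-yoked parallax display.
f = 0.5        # head-oscillation frequency in Hz (hypothetical value)
tau = 0.265    # image-update delay in s (the ~265 ms threshold reported above)
gain = 1.0     # parallax gain: image offset per unit head displacement

t = np.linspace(0.0, 4.0, 1000)                    # 4 s of simulated movement
head = np.sin(2 * np.pi * f * t)                   # normalized head position
image = gain * np.sin(2 * np.pi * f * (t - tau))   # image follows with delay

# The delay shows up as a pure phase lag, not a change in gain.
phase_lag_deg = 360.0 * f * tau
peak_error = np.max(np.abs(gain * head - image))   # worst-case image offset
print(f"phase lag: {phase_lag_deg:.1f} deg, peak position error: {peak_error:.2f}")
```

Under these assumed values the lag is nearly 48 deg, which underlines how tolerant the reported depth discrimination is to delay.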


2020 ◽  
Vol 27 (2) ◽  
pp. 206-225 ◽  
Author(s):  
Sirisilp Kongsilp ◽  
Matthew N. Dailey

Since one of the most important aspects of a Fish Tank Virtual Reality (FTVR) system is how well it provides the illusion of depth to users, we present a study that evaluates users' depth perception in FTVR systems using three tasks. The tasks are based on psychological research on human vision and on depth judgments common in VR applications. We find that participants perform worse with motion parallax cues alone than with stereo cues alone or with a combination of both kinds of cues. Measurements of participants' head movements during each task proved valuable in explaining the experimental findings. We conclude that FTVR users rely more on stereopsis than on motion parallax for depth perception in FTVR environments, especially in tasks requiring depth acuity.


1999 ◽  
Vol 58 (3) ◽  
pp. 170-179 ◽  
Author(s):  
Barbara S. Muller ◽  
Pierre Bovet

Twelve blindfolded subjects localized two different pure tones, played in random order from eight sound sources in the horizontal plane. Subjects either could or could not use the information supplied by their pinnae (external ears) and by their head movements. We found that both the pinnae and head movements had a marked influence on auditory localization performance with this type of sound. The effects of pinnae and head movements appeared to be additive; the absence of either factor produced the same loss of localization accuracy and much the same error pattern. Head-movement analysis showed that subjects turned their face towards the emitting sound source, except for sources exactly in front or exactly behind, which were identified by turning the head to both sides. Head-movement amplitude increased smoothly as the sound source moved from the anterior to the posterior quadrant.


2003 ◽  
Vol 89 (5) ◽  
pp. 2516-2527 ◽  
Author(s):  
Laurent Petit ◽  
Michael S. Beauchamp

We used event-related fMRI to measure brain activity while subjects performed saccadic eye, head, and gaze movements to visually presented targets. Two distinct patterns of response were observed. One set of areas was equally active during eye, head, and gaze movements and consisted of the superior and inferior subdivisions of the frontal eye fields, the supplementary eye field, the intraparietal sulcus, the precuneus, area MT in the lateral occipital sulcus, and, subcortically, the basal ganglia, thalamus, and superior colliculus. These areas have been observed in previous functional imaging studies of human eye movements, suggesting that a common set of brain areas subserves both oculomotor and head-movement control in humans, consistent with data from single-unit recording and microstimulation studies in nonhuman primates that have described overlapping eye- and head-movement representations in oculomotor control areas. A second set of areas was active during head and gaze movements but not during eye movements. This set included the posterior part of the planum temporale and the cortex at the temporoparietal junction, known as the parieto-insular vestibular cortex (PIVC). Activity in PIVC has been observed during imaging studies of invasive vestibular stimulation, and we confirm its role in processing the vestibular cues accompanying natural head movements. Our findings demonstrate that fMRI can be used to study the neural basis of head movements and show that areas that control eye movements also control head movements. In addition, we provide the first evidence for brain activity associated with vestibular input produced by natural head movements, as opposed to invasive caloric or galvanic vestibular stimulation.


2021 ◽  
Vol 11 (10) ◽  
pp. 4505
Author(s):  
Takafumi Asao ◽  
Takeru Kobayashi ◽  
Kentaro Kotani ◽  
Satoshi Suzuki ◽  
Kazutaka Obama ◽  
...  

The purpose of this study is to construct a hands-free endoscopic surgical communication support system that can draw lines in space corresponding to head movements using AR technology, and to evaluate whether head-movement-based drawing conforms to the steering law, one of the standard HCI models, for potential use during endoscopic surgery. In the experiment, participants steered a cursor through a pathway using head movements; movement time (MT), the number of errors, and subjective ratings of task difficulty were recorded. The results showed that head-movement-based line drawing was significantly affected by the tracking direction and by the task difficulty, expressed as the Index of Difficulty (ID). There was high linearity between ID and MT, with a coefficient of determination R² of 0.9991. The Index of Performance was higher in the horizontal and vertical directions than in the diagonal directions. Although the weight and biocompatibility of the AR glasses must be addressed before the current prototype can become a viable tool for supporting communication in the operating room, the prototype has the potential to promote the development of a computer-supported collaborative work environment for endoscopic surgery.
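For reference, the steering law cited above has the standard Accot–Zhai form. For a straight tunnel of length $A$ and width $W$ (the general law integrates $1/W(s)$ along the path):

$$\mathrm{ID} = \frac{A}{W}, \qquad \mathrm{MT} = a + b\,\mathrm{ID}, \qquad \mathrm{IP} = \frac{\mathrm{ID}}{\mathrm{MT}},$$

where $a$ and $b$ are empirical constants. The reported R² of 0.9991 refers to the linear MT-versus-ID fit, and a lower Index of Performance on diagonal paths corresponds roughly to a steeper slope $b$ in those directions.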

