Motion in Depth
Recently Published Documents


TOTAL DOCUMENTS: 228 (five years: 6)

H-INDEX: 30 (five years: 0)

2021 ◽  
Vol 189 ◽  
pp. 93-103
Author(s):  
Rebecca A. Champion ◽  
Lucy Evans ◽  
Paul A. Warren


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Joan López-Moliner ◽  
Cristina de la Malla

Abstract
We often need to interact with targets that move along arbitrary trajectories in the 3D scene. In these situations, information about parameters such as speed, time-to-contact, or motion direction is required to solve a broad class of timing tasks (e.g., shooting or interception). There is a large body of literature addressing how we estimate these parameters when objects move in the fronto-parallel plane and in depth. However, we do not know to what extent the timing of interceptive actions is affected when motion-in-depth (MID) is involved. Unlike previous studies, which examined the timing of interceptive actions using constant distances and fronto-parallel motion, we here use immersive virtual reality to examine how differences in the above-mentioned variables influence timing errors in a shooting task performed in a 3D environment. Participants had to shoot at targets that approached the observer at different angles, firing when the targets reached designated shooting locations. We recorded the shooting time, the temporal and spatial errors, and the head’s position and orientation in two conditions that differed in the interval between the shot and the interception of the target’s path. Results show a consistent change in the temporal error across approach angles: the larger the angle, the earlier the error. Interestingly, we also found different error patterns within a given angle depending on whether participants tracked the target’s whole trajectory or only its end-point. These differences had a larger impact when the target moved in depth and are consistent with underestimation of motion-in-depth in the periphery. We conclude that the strategy participants use to track the target’s trajectory interacts with MID and affects timing performance.
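The timing problem described in this abstract can be illustrated with first-order time-to-contact (remaining distance over closing speed). This is a generic sketch of that geometry, not the paper's model: the angle convention and numbers below are illustrative assumptions.

```python
import math

def closing_speed(speed_mps: float, approach_angle_deg: float) -> float:
    """Component of the target's speed along the observer's line of sight.
    Convention (illustrative): 0 deg is pure motion in depth toward the
    observer; 90 deg is pure fronto-parallel motion."""
    return speed_mps * math.cos(math.radians(approach_angle_deg))

def time_to_contact(distance_m: float, closing_speed_mps: float) -> float:
    """First-order time-to-contact: remaining distance over closing speed."""
    return distance_m / closing_speed_mps

# For a target 2 m away moving at 1 m/s, a more fronto-parallel trajectory
# means a smaller closing speed and a longer time-to-contact, so a timing
# response calibrated for one approach angle will err at another.
for angle in (0, 45, 80):
    v_los = closing_speed(1.0, angle)
    ttc = time_to_contact(2.0, v_los) if v_los > 0 else math.inf
    print(f"{angle:2d} deg -> TTC {ttc:.2f} s")
```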






2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Wilson Luu ◽  
Barbara Zangerl ◽  
Michael Kalloniatis ◽  
Juno Kim

Abstract
Stereopsis provides critical information for the spatial visual perception of object form and motion. We used virtual reality as a tool to understand the role of global stereopsis in the visual perception of self-motion and spatial presence in virtual environments experienced through head-mounted displays (HMDs). Participants viewed radially expanding optic flow simulating different speeds of self-motion in depth, which generated the illusion of self-motion in depth (i.e., linear vection). Displays were viewed with the head either stationary (passive radial flow) or swaying laterally to the beat of a metronome (active conditions). Multisensory conflict was imposed in the active conditions by presenting displays that either: (i) compensated for head movement (active compensation condition), or (ii) presented pure radial flow with no compensation during head movement (active no-compensation condition). In Experiment 1, impairing stereopsis by anisometropic suppression in healthy participants reduced reported vection strength, spatial presence, and the severity of cybersickness. In Experiment 2, vection and presence ratings were compared between participants with and without clinically defined global stereopsis. Participants without global stereopsis showed impairments in vection and presence similar to those observed in Experiment 1 in participants with induced stereopsis impairment. We find that reducing global stereopsis can have the benefit of reducing cybersickness, but has adverse effects on aspects of self-motion perception in HMD VR.
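The radially expanding optic flow used here has a simple standard form: under pure forward translation in a pinhole-camera model, image velocity points away from the focus of expansion and scales with speed over depth. A minimal sketch of that relationship (unit focal length assumed; not code from the study):

```python
def radial_flow(x: float, y: float, depth_z: float, forward_speed: float):
    """Image-plane velocity of a scene point under pure forward translation
    (pinhole model, focal length 1): flow points radially away from the
    focus of expansion at the image centre and scales with speed/depth."""
    scale = forward_speed / depth_z
    return (x * scale, y * scale)

# Points further from the image centre, or nearer in depth, move faster;
# this expanding pattern is what drives the illusion of linear vection.
print(radial_flow(0.1, 0.0, 2.0, 1.0))   # small flow near the centre
print(radial_flow(0.4, 0.3, 2.0, 1.0))   # larger flow in the periphery
```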



2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Arvind Chandna ◽  
Jeremy Badler ◽  
Devashish Singh ◽  
Scott Watamaniuk ◽  
Stephen Heinen

Abstract
To clearly view approaching objects, the eyes rotate inward (vergence) and the intraocular lenses focus (accommodation). Current ocular control models assume that both eyes are driven by unitary vergence and unitary accommodation commands that causally interact. These models typically describe discrete gaze shifts to non-accommodative targets performed under laboratory conditions. We probed these unitary signals using a physical stimulus moving in depth along the midline while recording vergence and accommodation simultaneously from both eyes in normal observers. Under monocular viewing, retinal disparity is removed, leaving only monocular cues for interpreting the object’s motion in depth. The viewing eye always followed the target’s motion. However, the occluded eye did not follow the target and, surprisingly, rotated out of phase with it. In contrast, accommodation in both eyes was synchronized with the target under monocular viewing. The results challenge existing theories of a unitary vergence command and of a causal accommodation-vergence linkage.
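The vergence demand this abstract refers to follows from simple geometry: the angle between the two lines of sight for a midline target depends on interocular distance and viewing distance. A sketch of that relationship (the 63 mm interocular distance is an illustrative average, not a value from the study):

```python
import math

def vergence_angle_deg(target_distance_m: float, ipd_m: float = 0.063) -> float:
    """Vergence angle required to binocularly fixate a target on the midline:
    the angle between the two lines of sight, from the interocular distance
    (IPD) and the viewing distance."""
    return 2.0 * math.degrees(math.atan((ipd_m / 2.0) / target_distance_m))

# As a midline target approaches, the required vergence angle grows steeply
# at near distances, which is why motion in depth drives vergence responses.
for d in (2.0, 1.0, 0.5, 0.25):
    print(f"{d:.2f} m -> {vergence_angle_deg(d):.2f} deg")
```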





2020 ◽  
Vol 14 ◽  
Author(s):  
Marc M. Himmelberg ◽  
Federico G. Segala ◽  
Ryan T. Maloney ◽  
Julie M. Harris ◽  
Alex R. Wade

Two stereoscopic cues that underlie the perception of motion-in-depth (MID) are changes in retinal disparity over time (CD) and interocular velocity differences (IOVD). These cues have independent spatiotemporal sensitivity profiles, depend upon different low-level stimulus properties, and are potentially processed along separate cortical pathways. Here, we ask whether these MID cues code for different motion directions: do they give rise to discriminable patterns of neural signals, and is there evidence for their convergence onto a single “motion-in-depth” pathway? To answer this, we use a decoding algorithm to test whether, and when, patterns of electroencephalogram (EEG) signals measured across the full scalp, generated in response to CD- and IOVD-isolating stimuli moving toward or away in depth, can be distinguished. We find that both MID cue type and 3D motion direction can be decoded at different points in the EEG timecourse, and that direction decoding cannot be accounted for by static disparity information. Remarkably, we find evidence for late processing convergence: IOVD motion direction can be decoded relatively late in the timecourse by a decoder trained on CD stimuli, and vice versa. We conclude that early CD and IOVD direction decoding performance depends upon fundamentally different low-level stimulus features, but that later decoding performance may be driven by a central, shared pathway that is agnostic to these features. Overall, these data are the first to show that neural responses to CD and IOVD cues moving toward and away in depth can be decoded from EEG signals, and that different aspects of MID cues contribute to decoding performance at different points along the EEG timecourse.
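The cross-cue logic (train a direction decoder on CD trials, test it on IOVD trials) can be sketched on synthetic data. The abstract does not specify the decoder, so the nearest-centroid classifier, the simulated channel patterns, and all numbers below are illustrative assumptions, not the paper's method or data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_chan = 60, 16
direction_pattern = rng.normal(size=n_chan)            # shared "toward vs away" code
cue_offset = {"CD": 0.5 * rng.normal(size=n_chan),     # cue-specific baseline shifts
              "IOVD": 0.5 * rng.normal(size=n_chan)}

def simulate(cue: str, direction: int) -> np.ndarray:
    """Synthetic single-trial scalp patterns: a direction signal shared across
    cues, plus a cue-specific offset and noise. Purely illustrative."""
    mean = cue_offset[cue] + direction * direction_pattern
    return mean + rng.normal(scale=1.0, size=(n_trials, n_chan))

def decode(c_toward: np.ndarray, c_away: np.ndarray, X: np.ndarray) -> np.ndarray:
    """Nearest-centroid direction decoding: label each trial by the closer of
    the two class centroids (a minimal stand-in for the unspecified decoder)."""
    d_t = np.linalg.norm(X - c_toward, axis=1)
    d_a = np.linalg.norm(X - c_away, axis=1)
    return np.where(d_t < d_a, 1, -1)

# Train centroids on CD-isolating trials, then test on IOVD-isolating trials:
# above-chance accuracy indicates a direction code shared across MID cues.
c_t = simulate("CD", 1).mean(axis=0)
c_a = simulate("CD", -1).mean(axis=0)
preds_toward = decode(c_t, c_a, simulate("IOVD", 1))
preds_away = decode(c_t, c_a, simulate("IOVD", -1))
acc = ((preds_toward == 1).mean() + (preds_away == -1).mean()) / 2
print(f"cross-cue decoding accuracy: {acc:.2f}")
```

Because the simulated direction signal is shared while only the baseline offsets differ between cues, the cross-trained decoder generalizes, which is the signature of convergence the study reports.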





2020 ◽  
Vol 20 (11) ◽  
pp. 391
Author(s):  
Jacqueline M. Fulvio ◽  
Bas Rokers


2020 ◽  
Vol 124 (2) ◽  
pp. 623-633
Author(s):  
Veronica Choi ◽  
Nicholas J. Priebe

The visual system integrates signals from the left and right eyes to generate a representation of the world in depth. This binocular integration of signals can be observed in the coordinated vergence eye movements elicited by object motion in depth. We explored the circuits and signals responsible for these vergence eye movements in rodents and find that they are generated by a comparison of motion signals, not spatial visual signals.


