The effect of feedback training on distance estimation in virtual environments

2005 ◽  
Vol 19 (8) ◽  
pp. 1089-1108 ◽  
Author(s):  
Adam R. Richardson ◽  
David Waller
2004 ◽  
Author(s):  
Jodie M. Plumert ◽  
Joseph K. Kearney ◽  
James F. Cremer

2010 ◽  
Vol 7 (4) ◽  
pp. 1-18 ◽  
Author(s):  
Timofey Y. Grechkin ◽  
Tien Dat Nguyen ◽  
Jodie M. Plumert ◽  
James F. Cremer ◽  
Joseph K. Kearney

2011 ◽  
Vol 20 (3) ◽  
pp. 254-272 ◽  
Author(s):  
Abdeldjallil Naceri ◽  
Ryad Chellali ◽  
Thierry Hoinville

In this paper, we address depth perception in the peripersonal space within three virtual environments: a poor environment (dark room), a reduced-cues environment (wireframe room), and a rich-cues environment (a lit, textured room). Observers binocularly viewed virtual scenes through a head-mounted display and evaluated the egocentric distance to spheres using visually open-loop pointing tasks. We conducted two different experiments within all three virtual environments. The apparent size of the sphere was held constant in the first experiment and covaried with distance in the second one. The results of the first experiment revealed that observers estimated depth more accurately in the rich virtual environment than in the visually poor and wireframe environments. Specifically, observers' pointing errors were small at distances up to 55 cm and increased with distance once the sphere was farther than 55 cm. Individual differences were found in the second experiment. Our results suggest that the quality of virtual environments has an impact on distance estimation within reaching space. Also, manipulating the targets' size cue led to individual differences in depth judgments. Finally, our findings confirm the use of vergence as an absolute distance cue in virtual environments within the arm's reaching space.


2021 ◽  
Vol 2 ◽  
Author(s):  
Daisuke Mine ◽  
Sakurako Kimoto ◽  
Kazuhiko Yokosawa

Distance perception in humans can be affected by oculomotor and optical cues and a person’s action capability in a given environment, known as action-specific effects. For example, a previous study has demonstrated that egocentric distance estimation to a target is affected by the width of a transparent barrier placed in the intermediate space between a participant and a target. However, the characteristics of a barrier’s width that affect distance perception remain unknown. Therefore, we investigated whether visual and tactile inputs and actions related to a barrier affect distance estimation to a target behind the barrier. The results confirmed previous studies by demonstrating that visual and tactile presentations of the barrier’s width affected distance estimation to the target. However, this effect of the barrier’s width was not observed when the barrier was touchable but invisible nor when the barrier was visible but penetrable. These findings indicate the complexity of action-specific effects and the difficulty of identifying necessary information for inducing these effects.


1998 ◽  
Vol 7 (2) ◽  
pp. 144-167 ◽  
Author(s):  
Bob G. Witmer ◽  
Paul B. Kline

The ability to accurately estimate distance is an essential component of navigating large-scale spaces. Although the factors that influence distance estimation have been a topic of research in real-world environments for decades and are well known, research on distance estimation in virtual environments (VEs) has only just begun. Initial investigations of distance estimation in VEs suggest that observers are less accurate in estimating distance in VEs than in the real world (Lampton et al., 1995). Factors influencing distance estimates may be divided into those affecting perceived distance (visual cues only) and those affecting traversed distance, which include visual, cognitive, and proprioceptive cues. To assess the contribution of the various distance cues in VEs, two experiments were conducted. The first required a static observer to estimate the distance to a cylinder placed at various points along a 130-foot hallway. This experiment examined the effects of floor texture, floor pattern, and object size on distance estimates in a VE. The second experiment required a moving observer to estimate route segment distances and total route distances along four routes, each totaling 1210 feet. This experiment assessed the effects of movement method, movement speed, compensatory cues, and wall texture density. Results indicate that observers underestimate distances both in VEs and in the real world, but the underestimates are more extreme in VEs. Texture did not reliably affect the distance estimates, providing no compensation for the gross underestimates of distance in VEs. Traversing a distance improves the ability to estimate that distance, but more natural means of moving via a treadmill do not necessarily improve distance estimates over traditional methods of moving in VEs (e.g., using a joystick). The addition of compensatory cues (a tone every 10 feet traversed on alternate route segments) improves VE distance estimation to almost perfect performance.


Author(s):  
Robert C. Allen ◽  
Daniel P. McDonald ◽  
Michael J. Singer

The current paper describes our classification of errors participants made when estimating direction and distances in a large-scale (2000 m × 2000 m) Virtual Environment (VE). Two VE configuration groups (Low or High Interactivity) traversed a 400 m route through one of two virtual terrains (Distinctive or Non-Distinctive; Terrain 1 and 2, respectively) in 100 m increments. The High VE group used a treadmill to move through the VE with head-tracked visual displays; the Low VE group used a joystick for movement and visual display control. Results indicate that as experience within either terrain increased, participants demonstrated an improved ability to directionally locate landmarks. Experience in the environment did not affect distance estimation accuracy. Terrain 1 participants were more accurate in locating proximal, as opposed to distal, landmarks. They also overestimated distances to near landmarks and underestimated distances to far landmarks. In Terrain 2, the Low VE group gave more accurate distance estimations. We believe this result can be explained in terms of increased task demands placed on the High VE group.


2010 ◽  
Vol 19 (1) ◽  
pp. 71-81 ◽  
Author(s):  
Francisco Velasco-Álvarez ◽  
Ricardo Ron-Angevin ◽  
Maria José Blanca-Mena

In this paper, an asynchronous brain–computer interface is presented that enables the control of a wheelchair in virtual environments using only one motor imagery task. The control is achieved through a graphical intentional control interface with three navigation commands (move forward, turn right, and turn left), which are displayed surrounding a circle. A bar rotates in the center of the circle, pointing successively to the three possible commands. The user can, by motor imagery, extend the bar's length to select the command at which it is pointing. Once a command is selected, the virtual wheelchair moves in a continuous way, so the user controls the length of the advance or the amplitude of the turns. Users can voluntarily switch from this interface to a noncontrol interface (and vice versa) when they do not want to generate any command. After performing cue-based feedback training, three subjects carried out an experiment in which they had to navigate the same fixed path to reach an objective. The results obtained support the viability of the system.
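The rotating-bar selection mechanism described above can be sketched in a few lines. This is an illustrative simulation, not the authors' implementation: the command names, the dwell time per command, and the selection threshold are assumptions, and the motor-imagery classifier output is reduced to a stream of per-step bar-length increments.

```python
# Sketch of the asynchronous rotating-bar interface: a bar cycles
# through three navigation commands; a sustained motor-imagery signal
# extends the bar, and crossing the threshold selects whichever
# command the bar is currently pointing at.

COMMANDS = ["move_forward", "turn_right", "turn_left"]
SELECT_THRESHOLD = 1.0   # bar length needed to trigger a selection (assumed)

def run_selection(imagery_signal, dwell_steps=3):
    """Step the interface through a stream of bar-length increments.

    imagery_signal: iterable of floats, e.g. classifier output for
    the single motor imagery task at each time step.
    The bar points at one command for `dwell_steps` steps, then
    rotates to the next command and resets its length.
    Returns the selected command, or None (noncontrol state).
    """
    bar_length = 0.0
    pointer = 0            # index of the command the bar points at
    steps_on_command = 0
    for increment in imagery_signal:
        bar_length += increment
        if bar_length >= SELECT_THRESHOLD:
            return COMMANDS[pointer]
        steps_on_command += 1
        if steps_on_command == dwell_steps:
            # bar rotates to the next command; length resets
            pointer = (pointer + 1) % len(COMMANDS)
            steps_on_command = 0
            bar_length = 0.0
    return None

# A weak signal while the bar points at "move_forward", then a strong
# one once it has rotated to "turn_right":
print(run_selection([0.1, 0.1, 0.1, 0.6, 0.6]))  # → turn_right
```

In the actual system a selected command would then stream continuously (advance length or turn amplitude) until the user releases it, which this sketch omits.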

