The Transfer of Spatial Knowledge in Virtual Environment Training

1998 ◽  
Vol 7 (2) ◽  
pp. 129-143 ◽  
Author(s):  
David Waller ◽  
Earl Hunt ◽  
David Knapp

Many training applications of virtual environments (VEs) require people to be able to transfer spatial knowledge acquired in a VE to a real-world situation. Using the concept of fidelity, we examine the variables that mediate the transfer of spatial knowledge and discuss the form and development of spatial representations in VE training. We report the results of an experiment in which groups were trained in six different environments (no training, real world, map, VE desktop, VE immersive, and VE long immersive) and then were asked to apply route and configurational knowledge in a real-world maze environment. Short periods of VE training were no more effective than map training; however, with sufficient exposure to the virtual training environment, VE training eventually surpassed real-world training. Robust gender differences in the training effectiveness of VEs were also found.

Author(s):  
Natália Souza Soares ◽  
João Marcelo Xavier Natário Teixeira ◽  
Veronica Teichrieb

In this work, we propose a framework to train a robot in a virtual environment using Reinforcement Learning (RL) techniques, thus facilitating the use of this type of approach in robotics. With our integrated solution for virtual training, it is possible to programmatically change the environment parameters, making it easy to implement domain randomization techniques on-the-fly. We conducted experiments with a TurtleBot 2i in an indoor navigation task with static obstacle avoidance using an RL algorithm called Proximal Policy Optimization (PPO). Our results show that even though the training did not use any real data, the trained model was able to generalize to different virtual environments and real-world scenes.
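The on-the-fly domain randomization described above can be sketched in a few lines: before each training episode, the environment parameters are re-sampled programmatically. This is a minimal illustrative sketch only; the parameter names, ranges, and function names are assumptions, not the framework's actual API, and the PPO update itself is stubbed out.

```python
import random

# Illustrative (assumed) environment parameters and their sampling ranges.
PARAM_RANGES = {
    "obstacle_count": (0, 8),       # number of static obstacles (int)
    "floor_friction": (0.4, 1.0),   # surface friction coefficient
    "light_intensity": (0.2, 1.5),  # scene lighting multiplier
}

def randomize_domain(rng=random):
    """Sample a fresh set of environment parameters for one episode."""
    params = {}
    for name, (lo, hi) in PARAM_RANGES.items():
        if isinstance(lo, int) and isinstance(hi, int):
            params[name] = rng.randint(lo, hi)   # integer-valued parameter
        else:
            params[name] = rng.uniform(lo, hi)   # continuous parameter
    return params

def train(num_episodes, apply_params, run_ppo_episode):
    """Training loop: re-randomize the domain, then run one PPO episode.

    `apply_params` would push the sampled values to the simulator;
    `run_ppo_episode` would collect rollouts and update the policy.
    Both are caller-supplied stubs in this sketch.
    """
    for _ in range(num_episodes):
        apply_params(randomize_domain())
        run_ppo_episode()
```

Because the policy never sees the same environment configuration twice, it is pushed to learn behavior that is robust to the variation, which is the mechanism domain randomization relies on for sim-to-real transfer.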


Author(s):  
S. Sadasivan ◽  
R. Rele ◽  
J. S. Greenstein ◽  
A. K. Gramopadhye ◽  
J. Masters ◽  
...  

The human inspector performing visual inspection of an aircraft is the backbone of the aircraft inspection process, a vital element in assuring the safety and reliability of an air transportation system. Training is an effective strategy for improving inspectors' performance. A drawback of present-day on-the-job training (OJT) provided to aircraft inspectors is the limited exposure to different defect types. Previous studies have shown offline feedback training using virtual reality (VR) simulators to be effective in improving visual inspection performance. This research aims to combine the advantages of VR technology, which include exposure to a wide variety of defects, with the one-on-one tutoring approach of OJT by implementing a collaborative virtual training environment. In an immersive collaborative virtual environment (CVE), avatars are used to represent the co-participants. In a CVE, information about where the trainer is pointing can be provided to a trainee as a visual deictic reference (VDR). This study evaluates the effectiveness of simulating on-the-job training in a CVE for aircraft inspection training, providing VDR slaved to a 3D mouse used by the trainer for pointing. The results of this study show that the training was effective in improving inspection performance.


Author(s):  
John H. Bailey ◽  
Bob G. Witmer

Two experiments were conducted to investigate route and configurational knowledge acquisition in a virtual environment (VE). The results indicate that route knowledge can be acquired in a VE and that it transfers to the real world. Furthermore, although it was not explicitly trained, participants acquired some configurational knowledge. Higher levels of interactive exposure to the VE resulted in better route knowledge than less interactive exposure. There was some evidence that more reported presence was correlated with better performance on spatial knowledge tests, while more reported simulator sickness was correlated with worse performance. Finally, performance during VE rehearsals was a strong, consistent correlate of performance on spatial knowledge tests.


1996 ◽  
Vol 5 (2) ◽  
pp. 163-172 ◽  
Author(s):  
Andrew Liu ◽  
Alex P. Pentland

This paper describes a set of experiments investigating the interaction between the location of eye fixations and the detection of unexpected motion while driving. Both psychophysical and real-world observations indicate that there are differences between the upper and lower visual fields with respect to driving. We began with psychophysical experiments to test whether the detection of unexpected motion is inherently different in the upper and lower visual fields. No difference was found. However, when texture was added to the driving surface, a large difference was found, possibly due to optokinetic nystagmus stimulated by the texture. These results were confirmed in a driving simulator, and their implications for head-up displays (HUDs) were explored. We found that the same upper/lower field asymmetry could be found with digital HUDs but not with analog HUDs. These experiments illustrate how virtual environment technology can connect knowledge from psychophysical experimentation to more realistic situations.


1993 ◽  
Vol 2 (4) ◽  
pp. 297-313 ◽  
Author(s):  
Martin R. Stytz ◽  
Elizabeth Block ◽  
Brian Soltz

As virtual environments grow in complexity, size, and scope, users will be increasingly challenged in assessing the situation in them. This will occur because of the difficulty in determining where to focus attention and in assimilating and assessing the information as it floods in. One technique for providing this type of assistance is to provide the user with a first-person, immersive, synthetic environment observation post, an observatory, that permits unobtrusive observation of the environment without interfering with the activity in the environment. However, for large, complex synthetic environments this type of support is not sufficient because the mere portrayal of raw, unanalyzed data about the objects in the virtual space can overwhelm the user with information. To address this problem, which exists in both real and virtual environments, we are investigating the forms of situation awareness assistance needed by users of large-scale virtual environments and the ways in which a virtual environment can be used to improve situation awareness of real-world environments. A technique that we have developed is to allow a user to place analysis modules throughout the virtual environment. Each module provides summary information concerning the importance of the activity in its portion of the virtual environment to the user. Our prototype system, called the Sentinel, is embedded within a virtual environment observatory and provides situation awareness assistance for users within a large virtual environment.
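The analysis-module idea above can be sketched as a small class: each module is anchored at a position, watches a region of the environment, and reduces the raw activity in that region to a summary importance score. This is a hypothetical sketch of the pattern, not the Sentinel's actual design; the class, fields, and the event-count scoring rule are all assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class AnalysisModule:
    """A user-placed module that summarizes activity in one region
    of the virtual environment (illustrative, not the Sentinel API)."""
    x: float
    y: float
    radius: float  # extent of the region this module watches

    def covers(self, px, py):
        """True if a point lies inside this module's region."""
        return math.hypot(px - self.x, py - self.y) <= self.radius

    def summarize(self, events):
        """Reduce raw activity to one importance score; here, simply
        the count of events occurring inside the module's region."""
        return sum(1 for (px, py) in events if self.covers(px, py))
```

An observatory view would then rank the placed modules by their scores and draw the user's attention to the busiest regions, rather than streaming every raw event to the user.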


1999 ◽  
Vol 8 (6) ◽  
pp. 671-685 ◽  
Author(s):  
Jui Lin Chen ◽  
Kay M. Stanney

This paper proposes a theoretical model of wayfinding that can be used to guide the design of navigational aiding in virtual environments. Based on an evaluation of wayfinding studies in natural environments, this model divides the wayfinding process into three main subprocesses: cognitive mapping, wayfinding plan development, and physical movement or navigation through an environment. While this general subdivision has been proposed before, the current model further delineates the wayfinding process, including the distinct influences of spatial information, spatial orientation, and spatial knowledge. The influences of experience, abilities, search strategies, motivation, and environmental layout on the wayfinding process are also considered. With this specification of the wayfinding process, a taxonomy of navigational tools is then proposed that can be used to systematically aid the specified wayfinding subprocesses. If effectively applied to the design of a virtual environment, the use of such tools should lead to reduced disorientation and enhanced wayfinding in large-scale virtual spaces. It is also suggested that, in some cases, this enhanced wayfinding performance may be at the expense of the acquisition of an accurate cognitive map of the virtual environment being traversed.


2005 ◽  
Vol 32 (5) ◽  
pp. 777-785 ◽  
Author(s):  
Ebru Cubukcu ◽  
Jack L Nasar

Discrepancies between perceived and actual distance may affect people's spatial behavior. In a previous study Nasar, using self-report of behavior, found that segmentation (measured through the number of buildings) along the route affected choice of parking garage and path from the parking garage to a destination. We recreated that same environment in a three-dimensional virtual environment and conducted a test to see whether the same factors emerged under these more controlled conditions and to see whether spatial behavior in the virtual environment accurately reflected behavior in the real environment. The results confirmed similar patterns of response in the virtual and real environments. This supports the use of virtual reality as a tool for predicting behavior in the real world and confirms that increases in segmentation are related to increases in perceived distance.


Author(s):  
Hannah M. Solini ◽  
Ayush Bhargava ◽  
Christopher C. Pagano

It is often questioned whether task performance attained in a virtual environment can be transferred appropriately and accurately to the same task in the real world. With advancements in virtual reality (VR) technology, recent research has focused on individuals' abilities to transfer calibration achieved in a virtual environment to a real-world environment. Little research, however, has shown whether transfer of calibration from a virtual environment to the real world is similar to transfer of calibration from a virtual environment to another virtual environment. As such, the present study investigated differences in calibration transfer to real-world and virtual environments. In either a real-world or virtual environment, participants completed blind walking estimates before and after experiencing perturbed virtual optic flow via a head-mounted display (HMD). Results showed that individuals calibrated to perturbed virtual optic flow and that this calibration carried over to both real-world and virtual environments in a similar manner.


2000 ◽  
Vol 9 (5) ◽  
pp. 435-447 ◽  
Author(s):  
Craig D. Murray ◽  
John M. Bowers ◽  
Adrian J. West ◽  
Steve Pettifer ◽  
Simon Gibson

We report a qualitative study of navigation, wayfinding, and place experience within a virtual city. “Cityscape” is a virtual environment (VE), partially algorithmically generated and intended to be redolent of the aggregate forms of real cities. In the present study, we observed and interviewed participants during and following exploration of a desktop implementation of Cityscape. A number of emergent themes were identified and are presented and discussed. Observing the interaction with the virtual city suggested a continuous relationship between real and virtual worlds. Participants were seen to attribute real-world properties and expectations to the contents of the virtual world. The implications of these themes for the construction of virtual environments modeled on real-world forms are considered.


1997 ◽  
Vol 6 (1) ◽  
pp. 127-132 ◽  
Author(s):  
Max M. North ◽  
Sarah M. North ◽  
Joseph R. Coble

Current computer and display technology allows the creation of virtual environment scenes that can be utilized for treating a variety of psychological disorders. This case study demonstrates the effectiveness of virtual environment desensitization (VED) in the treatment of a subject who suffered from fear of flying, a disorder that affects a large number of people. The subject, accompanied by a virtual therapist, was placed in the cockpit of a virtual helicopter and flown over a simulated city for five sessions. The VED treatment resulted in both a significant reduction of anxiety symptoms and the ability to face the phobic situations in the real world.

