A framework for haptic rendering of large-scale virtual environments

Author(s):  
Mashhuda Glencross ◽  
Roger Hubbold


2009 ◽
Vol 18 (5) ◽  
pp. 340-360 ◽  
Author(s):  
Jong-Phil Kim ◽  
Beom-Chan Lee ◽  
Hyungon Kim ◽  
Jaeha Kim ◽  
Jeha Ryu

This paper proposes a novel, accurate, and efficient hybrid CPU/GPU-based 3-DOF haptic rendering algorithm for highly complex and large-scale virtual environments (VEs) that may simultaneously contain several different types of object data representation. In a slower rendering process on the GPU, the local geometry near the haptic interaction point (HIP) is captured as six directional depth maps rendered from virtual cameras adaptively placed around the object to be touched. In a faster rendering process on the CPU, collision detection and response are computed directly from these directional depth maps, without any complex data hierarchy over the virtual objects or conversion between data formats. To find the ideal HIP (IHIP) efficiently, the proposed algorithm uses a new “abstract” local occupancy map instance (LOMI) and a nearest-neighbor search algorithm, which avoids allocating physical memory for voxel types during online voxelization and reduces the search time by a factor of about 10. Finally, in order to achieve accurate haptic interaction, sub-voxelization of the LOMI voxels is proposed. The effectiveness of the proposed algorithm is subsequently demonstrated with several benchmark examples.
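
To make the occupancy-map idea concrete, the sketch below illustrates just the nearest-neighbor IHIP lookup over a small local grid standing in for the LOMI. It is a hypothetical, brute-force illustration, not the paper's implementation: the grid contents, voxel size, and indices are invented, and the paper's roughly 10x-faster search and sub-voxelization refinement are not reproduced.

```python
import numpy as np

# Hypothetical sketch of the IHIP search step only: a brute-force
# nearest-neighbour lookup over a small local occupancy grid standing
# in for the paper's LOMI. Grid contents and indices are invented.

def nearest_free_voxel(occupancy, hip_idx):
    """Return the index of the empty voxel closest to the penetrated HIP.
    occupancy: 3D bool array, True = voxel lies inside the object."""
    free = np.argwhere(~occupancy)                 # all empty voxels
    if free.size == 0:
        return None                                # HIP fully enclosed
    d2 = np.sum((free - np.asarray(hip_idx)) ** 2, axis=1)
    best = free[np.argmin(d2)]                     # candidate IHIP voxel
    return tuple(int(i) for i in best)

# Toy 8x8x8 local map: the lower half of the grid is "inside" the object.
grid = np.zeros((8, 8, 8), dtype=bool)
grid[:, :4, :] = True                              # object occupies y < 4
print(nearest_free_voxel(grid, hip_idx=(4, 2, 4)))  # -> (4, 4, 4)
```

The lookup returns a voxel-resolution answer; in the paper, the sub-voxelization step then refines the contact point below voxel resolution to keep the rendered interaction accurate.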


1997 ◽  
Vol 6 (5) ◽  
pp. 547-564 ◽  
Author(s):  
David R. Pratt ◽  
Shirley M. Pratt ◽  
Paul T. Barham ◽  
Randall E. Barker ◽  
Marianne S. Waldrop ◽  
...  

This paper examines the representation of humans in large-scale, networked virtual environments. Previous work in this field is summarized, and the remaining problems of rendering, articulating, and networking numerous human figures in real time are explained. We have developed a system that integrates several well-known solutions with new ideas. Models with multiple levels of detail, body-tracking technology and animation libraries for specifying joint angles, efficient group representations for describing multiple humans, and hierarchical network protocols have been successfully employed to increase the number of humans represented, system performance, and user interactivity. The resulting system immerses participants effectively and has numerous useful applications.
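
As an illustration of one of the well-known techniques such a system integrates, the sketch below shows a simple distance-based level-of-detail selection for crowds of human figures. The thresholds and representation names are invented for illustration and are not taken from the paper.

```python
# Hypothetical distance-based level-of-detail (LOD) selection for human
# figures; thresholds and representation names are invented.

LOD_TABLE = [
    (10.0, "articulated body, full joint set"),   # near: fully animated
    (50.0, "reduced-polygon body, key joints"),   # mid: cheaper model
    (200.0, "billboard sprite"),                  # far: 2D stand-in
]

def pick_lod(distance_to_viewer: float):
    """Return the cheapest representation adequate at this distance."""
    for max_distance, representation in LOD_TABLE:
        if distance_to_viewer <= max_distance:
            return representation
    return None  # beyond draw distance: cull the figure entirely

print(pick_lod(35.0))   # -> "reduced-polygon body, key joints"
```

Swapping cheaper representations in at greater distances is what lets the renderer scale to many simultaneous figures without a visible loss of fidelity up close.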


Author(s):  
Jerry Jen-Hung Tsai ◽  
Jeff WT Kan ◽  
Xiangyu Wang ◽  
Yingsiu Huang

This chapter presents a study on the impact of design scale on collaboration in 3D virtual environments. Different domains require designers to work at different scales; for instance, urban design and electronic circuit design operate at very different scales. However, the understanding of the effects of scale on collaboration in virtual environments is limited. In this chapter, the authors use the protocol analysis method to examine the differences between two design collaboration projects in virtual environments, one large-scale and one small-scale, within a similar domain. The study shows that the difference in scale had the greatest impact on communication control and social communication.


2010 ◽  
pp. 180-193 ◽  
Author(s):  
F. Steinicke ◽  
G. Bruder ◽  
J. Jerald ◽  
H. Frenz

In recent years virtual environments (VEs) have become increasingly popular and widespread due to the requirements of numerous application areas, in particular the 3D city visualization domain. Virtual reality (VR) systems, which make use of tracking technologies and stereoscopic projections of three-dimensional synthetic worlds, support better exploration of complex datasets. However, due to the limited interaction space usually provided by the range of the tracking sensors, users can explore only a portion of the virtual environment (VE). Redirected walking allows users to walk through large-scale immersive virtual environments (IVEs), such as virtual city models, while physically remaining in a reasonably small workspace, by intentionally injecting scene motion into the IVE. With redirected walking, users are guided along physical paths that may differ from the paths they perceive in the virtual world. The authors have conducted experiments to quantify how much humans can be redirected without noticing. In this chapter they present the results of this study and their implications for virtual locomotion user interfaces that allow users to view arbitrary real-world locations before actually traveling there in a natural environment.
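
A common way to inject the scene motion the chapter describes is a rotation gain that scales virtual yaw relative to tracked physical yaw. The sketch below is a minimal illustration of that idea only; the gain value is invented, and the detection thresholds the study quantifies are not modeled here.

```python
# Minimal sketch of scene-motion injection for redirected walking:
# a rotation gain scales virtual yaw relative to tracked physical yaw,
# bending the user's physical path. The gain value is illustrative only.

ROTATION_GAIN = 1.2   # hypothetical: virtual turn is 20% larger

def apply_rotation_gain(virtual_yaw_deg, physical_yaw_delta_deg):
    """Advance the virtual camera yaw from one tracker update."""
    return virtual_yaw_deg + ROTATION_GAIN * physical_yaw_delta_deg

yaw = 0.0
for delta in (5.0, 5.0, 5.0):          # degrees per tracker frame
    yaw = apply_rotation_gain(yaw, delta)
print(f"physical 15.0 deg -> virtual {yaw:.1f} deg")   # 18.0 deg
```

Because the user visually compensates for the extra virtual rotation, the physical path curves back into the workspace; keeping such gains below the point where users notice is precisely what the reported experiments measure.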


Author(s):  
Filipe Gaspar ◽  
Rafael Bastos ◽  
Miguel Sales

In large-scale immersive virtual reality (VR) environments, such as a CAVE, one of the most common problems is tracking the position of the user’s head while he or she is immersed in the environment, so that perspective changes can be reflected in the synthetic stereoscopic images. In this paper, the authors describe the theoretical foundations and engineering approach adopted in the development of an infrared-optical tracking system designed for large-scale immersive Virtual Environments (VE) or Augmented Reality (AR) settings. The system is capable of tracking independent retro-reflective markers arranged in a 3D structure in real time, recovering all six degrees of freedom (6DOF). These artefacts can be fitted to the user’s stereo glasses to track his or her head while immersed, or used as a 3D input device for rich human-computer interaction (HCI). The hardware configuration consists of four shutter-synchronized cameras fitted with band-pass infrared filters, with illumination provided by infrared array emitters. Pilot lab results have shown a latency of 40 ms when simultaneously tracking the pose of two artefacts with 4 infrared markers, achieving a frame rate of 24.80 fps and showing a mean accuracy of 0.93 mm/0.51° and a mean precision of 0.19 mm/0.04°, respectively, in overall translation/rotation, fulfilling the requirements initially defined.
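
Recovering a 6DOF pose from a rigid marker constellation reduces to a rigid-body alignment between the known marker layout and its triangulated world positions. The sketch below shows the standard Kabsch/Horn absolute-orientation solution one could use for that step; the paper's exact method may differ, and the marker coordinates are invented for illustration.

```python
import numpy as np

# Sketch of the rigid-body pose step such a tracker needs: recover the
# rotation R and translation t mapping a known marker constellation onto
# its triangulated world positions (standard Kabsch/Horn solution).
# Marker coordinates are invented for illustration.

def rigid_pose(model_pts, world_pts):
    """Least-squares R, t such that world ~= R @ model + t."""
    cm, cw = model_pts.mean(axis=0), world_pts.mean(axis=0)
    H = (model_pts - cm).T @ (world_pts - cw)      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cw - R @ cm

# Four markers (mm) in the artefact's own frame, as on a rigid 3D target.
model = np.array([[0, 0, 0], [60, 0, 0], [0, 40, 0], [0, 0, 30]], float)
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
world = model @ R_true.T + np.array([100.0, 50.0, 20.0])
R, t = rigid_pose(model, world)
print(np.allclose(R, R_true), np.round(t, 2))      # True [100.  50.  20.]
```

In a real pipeline the world positions would come from triangulating the markers across the four synchronized camera views, with marker identity resolved from the asymmetric geometry of each artefact.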

