Virtual object manipulation using dynamically selected constraints with real-time collision detection

Author(s):  
Yoshifumi Kitamura ◽  
Amy Yee ◽  
Fumio Kishino
1998 ◽  
Vol 7 (5) ◽  
pp. 460-477 ◽  

A natural and intuitive method is proposed to help a user manipulate an object in a virtual environment. The method does not need to assign special properties to object faces in advance and does not require special hardware. Instead, it uses only the visual constraints of motion among object faces that are dynamically selected by a real-time collision detection method while the user manipulates the object. By constraining more than two faces during the user's manipulation, the proposed method provides an efficient tool for complicated manipulation tasks. First, the manipulation-aid method is described. Then several experiments demonstrate the effectiveness of this method, particularly when the user is asked to place a virtual object precisely in a certain location. Finally, as an application of the proposed manipulation aid, an experiment is conducted to compare the performance of a task (constructing a simple toy) in a real versus a virtual environment. Results show that the distance accuracy and completion time of the virtual task with the manipulation aid are close to those of the real task.
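The abstract gives no code, but the core idea of constraining motion to the faces currently in contact can be sketched roughly as below. The function name, its arguments, and the use of contact-plane normals as the constraint representation are assumptions made for illustration, not the paper's actual formulation.

    import numpy as np

    def constrain_translation(desired, contact_normals):
        """Project a desired translation onto the motions allowed by the
        currently contacting faces (hypothetical helper, not the paper's code).

        desired         : (3,) translation implied by the user's hand motion
        contact_normals : list of (3,) unit normals of the faces the moved
                          object is touching, pointing toward the moved object
        """
        motion = np.asarray(desired, dtype=float)
        for n in contact_normals:
            n = np.asarray(n, dtype=float)
            penetration = motion.dot(n)
            if penetration < 0.0:            # moving into a contacted face
                motion -= penetration * n    # remove the penetrating component
        return motion

    # Example: an object resting on a table (normal +z) and touching a wall
    # (normal +x); a diagonal pull is reduced to sliding along y only.
    normals = [np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])]
    print(constrain_translation([-0.2, 0.5, -0.3], normals))  # ~[0.0, 0.5, 0.0]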


Author(s):  
Kevin Lesniak ◽  
Conrad S. Tucker

The method presented in this work reduces the frequency of virtual objects incorrectly occluding real-world objects in Augmented Reality (AR) applications. Current AR rendering methods cannot properly represent occlusion between real and virtual objects because the objects are not represented in a common coordinate system. These occlusion errors can give users an incorrect perception of their surroundings: a user may fail to notice a real-world object because a virtual object incorrectly occludes it, or may misjudge depth and distance because of the incorrect occlusions. The authors present a method that brings both real-world and virtual objects into a common coordinate system so that distant virtual objects do not obscure nearby real-world objects in an AR application. The method captures and processes RGB-D data in real time, allowing it to be used in a variety of environments and scenarios. A case study shows the effectiveness and usability of the proposed method in correctly occluding real-world and virtual objects and providing a more realistic representation of the combined real and virtual environments in an AR application. The results show that the proposed method can detect at least 20 real-world objects at risk of being incorrectly occluded while processing and fixing occlusion errors at least 5 times per second.
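As a rough illustration only: once real and virtual content share one coordinate system, a per-pixel depth comparison is one plausible way to resolve such occlusions. The function and array names below are hypothetical placeholders for data the described AR pipeline would already provide (the live camera image, the RGB-D depth map, and the renderer's color and depth buffers).

    import numpy as np

    def composite_with_occlusion(camera_rgb, sensor_depth, virtual_rgb, virtual_depth):
        """Per-pixel occlusion test between real and virtual content.

        camera_rgb    : (H, W, 3) live camera image
        sensor_depth  : (H, W)    real-world depth from the RGB-D sensor (meters)
        virtual_rgb   : (H, W, 3) rendered virtual objects
        virtual_depth : (H, W)    depth buffer of the virtual render,
                                  np.inf where no virtual object was drawn
        """
        # Where the real surface is closer than the virtual one, the real
        # pixel wins, so nearby real objects are no longer hidden.
        real_in_front = sensor_depth < virtual_depth
        return np.where(real_in_front[..., None], camera_rgb, virtual_rgb)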


2014 ◽  
Vol 536-537 ◽  
pp. 603-606
Author(s):  
Yu Mei Liu ◽  
Yu Dan Dong ◽  
Jing Wu

Based on the characteristics and requirements of a virtual scenic-area roaming system, appropriate modeling techniques are selected. Entity models of the scenic objects are built on the modeling platform to construct the virtual tourist attractions, and a hierarchical collision detection method is proposed. While still meeting the accuracy requirements, this method greatly reduces the number and complexity of collision detection tests and effectively improves the real-time performance of the system.
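A minimal sketch of a two-level hierarchical collision test, in the spirit of the hierarchical method described above; the data layout, function names, and the choice of axis-aligned bounding boxes are assumptions, standing in for whatever bounding volumes the scenic-object models actually use.

    def aabb_overlap(a, b):
        """a and b are ((min_x, min_y, min_z), (max_x, max_y, max_z))."""
        (amin, amax), (bmin, bmax) = a, b
        return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

    def hierarchical_collision(obj_a, obj_b):
        """Coarse whole-object boxes first, finer per-part boxes only when
        the coarse test passes. Objects are hypothetical dicts of the form
        {"box": aabb, "parts": [aabb, ...]}."""
        if not aabb_overlap(obj_a["box"], obj_b["box"]):
            return False                    # most object pairs rejected cheaply
        for pa in obj_a["parts"]:
            for pb in obj_b["parts"]:
                if aabb_overlap(pa, pb):
                    return True             # refine only the surviving pairs
        return False

Rejecting most object pairs at the coarse level is what keeps the per-frame cost low enough for real-time roaming.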

