Evaluating Rules of Interaction for Object Manipulation in Cluttered Virtual Environments

2002 ◽  
Vol 11 (6) ◽  
pp. 591-609 ◽  
Author(s):  
Roy A. Ruddle ◽  
Justin C. D. Savage ◽  
Dylan M. Jones

A set of rules is presented for the design of interfaces that allow virtual objects to be manipulated in 3D virtual environments (VEs). The rules differ from other interaction techniques because they focus on the problems of manipulating objects in cluttered spaces rather than open spaces. Two experiments are described that were used to evaluate the effect of different interaction rules on participants' performance in a task known as "the piano mover's problem." This task required participants to move a virtual human through parts of a virtual building while simultaneously manipulating a large virtual object held in the virtual human's hands, resembling a simulation of manual materials handling in a VE for ergonomic design. Throughout, participants viewed the VE on a large monitor from an "over-the-shoulder" perspective. In the most cluttered VEs, the time participants took to complete the task varied by up to 76% across different combinations of rules, indicating the need for flexible forms of interaction in such environments.
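As a rough illustration of how switchable interaction rules of this kind might be organized in software, the following Python sketch applies one manipulation step under two hypothetical rules (stop-on-collision and axis-wise sliding). The rule names, the `collides` predicate, and the example obstacle are illustrative assumptions, not the specific rule set evaluated in the paper.

```python
# Minimal sketch of switchable manipulation rules for a cluttered VE.
# The rule names and the collision test are hypothetical illustrations,
# not the rule set evaluated in the paper.
from dataclasses import dataclass

import numpy as np


@dataclass
class ManipRules:
    stop_on_collision: bool = True   # halt the object at the first contact
    allow_sliding: bool = True       # otherwise, cancel only the blocked axes


def apply_move(position, delta, collides, rules):
    """Return the new object position after one manipulation step.

    `collides(p)` is a caller-supplied predicate testing the held object
    against the cluttered scene at position `p`.
    """
    candidate = position + delta
    if not collides(candidate):
        return candidate
    if rules.stop_on_collision and not rules.allow_sliding:
        return position                      # movement is simply blocked
    if rules.allow_sliding:
        # try each axis separately so the object can slide along clutter
        for axis in range(3):
            partial = position.copy()
            partial[axis] += delta[axis]
            if not collides(partial):
                position = partial
        return position
    return position


# Example: a unit-cube obstacle centred at the origin
obstacle = lambda p: bool(np.all(np.abs(p) < 0.5))
pos = np.array([1.0, 0.0, 0.0])
pos = apply_move(pos, np.array([-0.6, 0.1, 0.0]), obstacle, ManipRules())
print(pos)   # the x component is blocked, the y component slides through
```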

2019 ◽  
Vol 9 (9) ◽  
pp. 1797 ◽
Author(s):  
Chen ◽  
Lin

Augmented reality (AR) is an emerging technology that allows users to interact with simulated environments, including those emulating scenes in the real world. Most current AR technologies involve the placement of virtual objects within these scenes. However, difficulties in modeling real-world objects greatly limit the scope of the simulation, and thus the depth of the user experience. In this study, we developed a process for realizing virtual environments that are based entirely on scenes in the real world. In modeling the real world, the proposed scheme divides scenes into discrete objects, which are then replaced with virtual objects, enabling users to interact in and with virtual environments without limitation. An RGB-D camera is used in conjunction with simultaneous localization and mapping (SLAM) to obtain the movement trajectory of the user and derive information about the real environment. In modeling the environment, graph-based segmentation is used to segment point clouds into objects, enabling the subsequent replacement of each object with an equivalent virtual entity. Superquadrics are used to derive shape parameters and location information from the segmentation results, ensuring that the scale of the virtual objects matches that of the original objects in the real world. Only after the objects have been replaced with their virtual counterparts is the real environment converted into a virtual scene. Experiments involving the emulation of real-world locations demonstrated the feasibility of the proposed rendering scheme. Finally, a rock-climbing application scenario is presented to illustrate the potential use of the proposed system in AR applications.
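As a rough sketch of the object-replacement step, the following Python code fits superquadric size and shape parameters to a segmented point cluster using a standard inside-outside cost. The axis-aligned, centred formulation, the cost weighting, and the optimizer settings are simplifying assumptions rather than the authors' actual implementation.

```python
# Minimal sketch of recovering superquadric parameters from a segmented
# point cluster, in the spirit of the object-replacement step above.
import numpy as np
from scipy.optimize import minimize


def superquadric_cost(params, pts):
    """Inside-outside error for an axis-aligned, centred superquadric."""
    a1, a2, a3, e1, e2 = params
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    f = ((np.abs(x / a1) ** (2 / e2) + np.abs(y / a2) ** (2 / e2)) ** (e2 / e1)
         + np.abs(z / a3) ** (2 / e1))
    # F == 1 on the surface; weighting by volume favours tight fits
    return np.sqrt(a1 * a2 * a3) * np.mean((f ** e1 - 1.0) ** 2)


def fit_superquadric(points):
    """Fit scale (a1, a2, a3) and shape (e1, e2) to one object's points."""
    centred = points - points.mean(axis=0)
    x0 = np.concatenate([np.abs(centred).max(axis=0), [1.0, 1.0]])
    bounds = [(1e-3, None)] * 3 + [(0.1, 2.0)] * 2
    res = minimize(superquadric_cost, x0, args=(centred,), bounds=bounds)
    return res.x   # parameters used to size the replacement virtual object


# Example: a synthetic box-like cluster standing in for one segmented object
rng = np.random.default_rng(0)
box = rng.uniform([-0.3, -0.2, -0.1], [0.3, 0.2, 0.1], size=(500, 3))
print(fit_superquadric(box))
```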


2018 ◽  
Vol 38 (1) ◽  
pp. 21-45 ◽  
Author(s):  
D. Mendes ◽  
F. M. Caputo ◽  
A. Giachetti ◽  
A. Ferreira ◽  
J. Jorge

1999 ◽  
Vol 4 (1) ◽  
pp. 8-17 ◽  
Author(s):  
G Jansson ◽  
H Petrie ◽  
C Colwell ◽  
D. Kornbrot ◽  
J. Fänger ◽  
...  

This paper is a fusion of two independent studies investigating related problems concerning the use of haptic virtual environments for blind people: a study in Sweden using a PHANToM 1.5 A and one in the U.K. using an Impulse Engine 3000. In general, the use of such devices is a highly interesting option for providing blind people with information about representations of the 3D world, but the restriction at any moment to only one point of contact between observer and virtual object might decrease their effectiveness. The studies investigated the perception of virtual textures, the identification of virtual objects, and the perception of their size and angles. Both sighted (blindfolded in one study) and blind people served as participants. It was found (1) that the PHANToM can effectively render textures in the form of sandpapers and simple 3D geometric forms and (2) that the Impulse Engine can effectively render textures consisting of grooved surfaces, as well as 3D objects, whose properties were, however, judged with some over- or underestimation. When the performance of blind and sighted participants was compared, differences were found that deserve further attention. In general, the haptic devices studied have demonstrated the great potential of force feedback devices for rendering relatively simple environments, in spite of the restricted ways in which they allow the virtual world to be explored. The results strongly motivate further studies of their effectiveness, especially in more complex contexts.
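For readers unfamiliar with single-point force feedback, the sketch below shows a minimal penalty-based rendering step for a grooved, textured plane of the kind such devices can display: the device reports one probe position and receives one force vector. The spring stiffness and texture parameters are illustrative values, not those used in either study.

```python
# Minimal sketch of single-point haptic rendering with a penalty (spring)
# model. Parameter values are illustrative, not taken from the paper.
import numpy as np

STIFFNESS = 800.0       # N/m, a typical order of magnitude for desktop devices
TEXTURE_AMPL = 0.0005   # m, groove depth of a simple sinusoidal texture
TEXTURE_PERIOD = 0.002  # m, groove spacing


def plane_force(probe, normal=np.array([0.0, 1.0, 0.0]), plane_y=0.0):
    """Spring force pushing the probe out of a textured horizontal plane."""
    # The texture modulates surface height along x, giving a grooved surface
    surface_y = plane_y + TEXTURE_AMPL * np.sin(2 * np.pi * probe[0] / TEXTURE_PERIOD)
    penetration = surface_y - probe[1]
    if penetration <= 0.0:
        return np.zeros(3)            # probe is above the surface: no force
    return STIFFNESS * penetration * normal


# One frame of the (typically 1 kHz) haptic loop
print(plane_force(np.array([0.0031, -0.0004, 0.0])))
```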


2011 ◽  
Vol 10 (4) ◽  
pp. 1-10 ◽  
Author(s):  
Paulo Gallotti Rodrigues ◽  
Alberto Barbosa Raposo ◽  
Luciano Pereira Soares

Traditional interaction devices such as computer mice and keyboards do not adapt well to immersive environments, since they were not designed for users who may be standing or in movement. Moreover, in the current interaction model for immersive environments, based on wands and 3D mice, a change of context is necessary in order to execute non-immersive tasks. These constant context changes from immersive environments to 2D desktops introduce a rupture in user interaction with the application. The objective of this work is to study how interaction techniques from touch-surface-based systems can be adapted to 3D virtual environments in order to reduce this physical rupture between the fully immersive mode and the desktop paradigm. To this end, a wireless glove (v-Glove) that maps to a touch interface in an immersive virtual reality environment was developed, enabling interaction with 3D applications. The glove has two main functionalities: tracking the position of the user's index finger and vibrating the fingertip when it reaches an area mapped in the interaction space, to simulate the feeling of touch. Quantitative and qualitative analyses were performed with users to evaluate the v-Glove, comparing it with a gyroscopic 3D mouse.
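A minimal sketch of the touch-mapping behaviour described above is given below: the tracked fingertip height is tested against a virtual touch surface, with hysteresis to avoid vibration chatter. The plane position, threshold, and function names are hypothetical and stand in for the glove's tracking and vibration hardware.

```python
# Minimal sketch of fingertip-to-touch-surface mapping with a vibration pulse
# on contact. Constants and function names are illustrative assumptions.
TOUCH_PLANE_Z = 0.0   # z coordinate of the virtual touch surface (m)
HYSTERESIS = 0.005    # m, avoids vibration chatter right at the surface


def update_touch(fingertip_z, currently_touching):
    """Return (touching, vibrate) for one tracking frame."""
    if not currently_touching and fingertip_z <= TOUCH_PLANE_Z:
        return True, True              # crossed into the surface: pulse the motor
    if currently_touching and fingertip_z > TOUCH_PLANE_Z + HYSTERESIS:
        return False, False            # clearly left the surface
    return currently_touching, False   # no state change, no new pulse


# Example: fingertip approaching and then withdrawing from the virtual surface
touching = False
for z in [0.03, 0.01, -0.002, -0.004, 0.002, 0.01]:
    touching, vibrate = update_touch(z, touching)
    print(f"z={z:+.3f}  touching={touching}  vibrate={vibrate}")
```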


Author(s):  
Abdeldjallil Naceri ◽  
Thierry Hoinville ◽  
Ryad Chellali ◽  
Jesus Ortiz ◽  
Shannon Hennig

The main objective of this paper is to investigate whether observers are able to perceive the depth of virtual objects within virtual environments during reaching tasks. In other words, we tackled the question of observer immersion in a displayed virtual environment. For this purpose, eight observers were asked to reach for virtual objects displayed within their peripersonal space under two conditions: in the first, a small virtual sphere was displayed beyond the subject's index finger as an extension of the hand; in the second, no visual feedback was provided. In addition, audio feedback was provided in both conditions when contact with the virtual object was made. Although observers slightly overestimated depth within the peripersonal space, the kinematic analysis showed that they aimed accurately for the virtual objects. Furthermore, no significant difference in movement was found between conditions for all observers. Observers targeted the virtual point accurately in both time and space. This suggests that the virtual environment sufficiently simulated the information normally present in the central nervous system.
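The kind of kinematic measures such an analysis relies on can be sketched as follows: movement time derived from a velocity threshold and the signed depth error of the reach endpoint relative to the target. The threshold value and the synthetic trajectory are illustrative assumptions, not the authors' protocol.

```python
# Minimal sketch of reach-kinematics measures: movement time and signed
# depth error of the endpoint. Thresholds and data are illustrative only.
import numpy as np


def reach_metrics(t, fingertip, target, vel_threshold=0.02):
    """t: (N,) timestamps in s; fingertip: (N,3) positions in m; target: (3,)."""
    vel = np.linalg.norm(np.diff(fingertip, axis=0), axis=1) / np.diff(t)
    moving = vel > vel_threshold                       # m/s threshold for onset/offset
    onset = np.argmax(moving)
    offset = len(moving) - 1 - np.argmax(moving[::-1])
    endpoint = fingertip[offset + 1]
    depth_error = endpoint[2] - target[2]              # positive means depth overshoot
    return {"movement_time": t[offset + 1] - t[onset],
            "depth_error": depth_error}


# Example with a synthetic 0.8 s reach towards a target 0.40 m in depth
t = np.linspace(0, 1, 101)
traj = np.zeros((101, 3))
traj[:, 2] = 0.42 * np.minimum(t / 0.8, 1.0)           # slight depth overshoot
print(reach_metrics(t, traj, target=np.array([0.0, 0.0, 0.40])))
```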


2019 ◽  
Vol 9 (13) ◽  
pp. 2597 ◽  
Author(s):  
Koukeng Yang ◽  
Thomas Brown ◽  
Kelvin Sung

Recently released, depth-sensing-capable, and moderately priced handheld devices support the implementation of augmented reality (AR) applications without the requirement of tracking visually distinct markers. This relaxed constraint allows for applications with significantly increased augmentation space dimensions, virtual object size, and user movement freedom. Because these devices are relatively new, there is currently a lack of study of issues concerning direct virtual object manipulation in AR applications running on them. This paper presents the results of a survey of existing object manipulation methods designed for traditional handheld devices and identifies those potentially viable for the newer, depth-sensing-capable devices. The paper then describes a test suite that implements the identified methods, test cases designed specifically for the characteristics offered by the new devices, the user testing process, and the corresponding results. Based on the study, this paper concludes that AR applications on newer, depth-sensing-capable handheld devices should manipulate small-scale virtual objects by mapping directly to device movements and large-scale virtual objects by supporting separate translation and rotation modes. Our work and results are a first step toward better understanding the requirements for supporting direct virtual object manipulation in AR applications running on a new generation of depth-sensing-capable handheld devices.
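A minimal sketch of the recommended manipulation policy is given below: small-scale objects follow device movements directly, while large-scale objects are edited in explicit translation or rotation modes. The size cut-off and the mode names are assumptions for illustration only, not values prescribed by the paper.

```python
# Minimal sketch of scale-dependent manipulation: direct pose mapping for
# small objects, separate translation/rotation modes for large objects.
import numpy as np

SMALL_OBJECT_MAX_SIZE = 0.5   # m; assumed threshold between "small" and "large"


def manipulate(obj_size, obj_pos, obj_yaw, device_delta_pos, device_delta_yaw,
               mode="translate"):
    """Return the updated (position, yaw) of the virtual object."""
    if obj_size <= SMALL_OBJECT_MAX_SIZE:
        # Small object: map device motion to the object one-to-one
        return obj_pos + device_delta_pos, obj_yaw + device_delta_yaw
    if mode == "translate":
        return obj_pos + device_delta_pos, obj_yaw           # rotation locked
    if mode == "rotate":
        return obj_pos, obj_yaw + device_delta_yaw            # translation locked
    return obj_pos, obj_yaw


# Example: a 2 m object only translates while the user is in "translate" mode
pos, yaw = manipulate(2.0, np.zeros(3), 0.0,
                      np.array([0.1, 0.0, 0.0]), np.radians(15), mode="translate")
print(pos, np.degrees(yaw))
```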


Author(s):  
Germánico González Badillo ◽  
Hugo I. Medellín Castillo ◽  
Theodore Lim ◽  
Víctor E. Espinoza López

Virtual environments (VEs) are becoming a popular way to interact with virtual objects in applications such as design, training, and planning. Physics simulation engines (PSEs) from game development can increase realism in VEs by endowing virtual objects with dynamic behavior and collision detection. Several PSEs are available for integration with VEs, and each uses different model representation methods to create collision shapes and compute the dynamic behavior of virtual objects. The performance of physics-based VEs is directly related to a PSE's capabilities and its method of representing virtual objects. This paper analyzes freely available PSEs, namely Bullet and the two latest versions of PhysX (v2.8 and v3.1), based on their model representation algorithms, and evaluates them by performing various assembly tasks of differing geometric complexity. The evaluation is based on collision detection performance and its influence on the haptic virtual assembly process. The results have allowed the identification of the strengths and weaknesses of each PSE according to its representation method.
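As one plausible harness for the kind of benchmark described, the sketch below times collision detection in Bullet through the pybullet Python bindings while a simple peg is dropped onto a base part. The geometry, step count, and use of pybullet are assumptions for illustration; they do not reproduce the paper's assembly tasks or its haptic coupling.

```python
# Minimal sketch of timing collision detection in Bullet via pybullet while
# two parts come into contact. Shapes and step counts are illustrative only.
import time

import pybullet as p

p.connect(p.DIRECT)                       # headless physics server
p.setGravity(0, 0, -9.81)

# A static "base" box and a dynamic "peg" box dropped onto it
base = p.createMultiBody(0, p.createCollisionShape(p.GEOM_BOX, halfExtents=[0.5, 0.5, 0.05]))
peg = p.createMultiBody(1.0, p.createCollisionShape(p.GEOM_BOX, halfExtents=[0.05, 0.05, 0.2]),
                        basePosition=[0, 0, 0.5])

start = time.perf_counter()
contacts = 0
for _ in range(1000):                     # simulate 1000 steps and count contacts
    p.stepSimulation()
    contacts += len(p.getContactPoints(base, peg))
elapsed = time.perf_counter() - start

print(f"1000 steps in {elapsed * 1000:.1f} ms, {contacts} contact points reported")
p.disconnect()
```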


2003 ◽  
Author(s):  
Carlo Galimberti ◽  
Gloria Belloni ◽  
Maddalena Grassi ◽  
Alberto Cattaneo ◽  
Valentina Manias ◽  
...  
