Interacting With a Large Virtual Environment by Combining a Ground-Based Haptic Device and a Mobile Robot Base

Author(s):  
Ryan A. Pavlik ◽  
Judy M. Vance ◽  
Greg R. Luecke

Ground-based haptic devices can add force feedback to virtual environments; however, their physical workspace is very limited because of the fixed base. By mounting a haptic device on a mobile robot rather than a fixed stand, the reachable volume can be extended to cover full-scale virtual environments. This work presents the hardware, software, and integration developed to use such a mobile base with a Haption Virtuose™ 6D35-45. A mobile robot with a Mecanum-style omni-directional drive base and an Arduino-compatible microcontroller development board communicates with software on a host computer to provide a VRPN-based control and data acquisition interface. The position of the mobile robot in the physical space is tracked with an optical tracking system. The SPARTA virtual assembly software was extended to (1) apply transformations to the haptic device data based on the tracked base position, and (2) capture the error between the haptic device's end effector and the center of its workspace, commanding the robot over VRPN to minimize this error. The completed system allows the haptic device to be used in a wide-area projection screen or head-mounted display virtual environment, providing smooth free-space motion and stiff display of forces throughout the entire space. The availability of haptics in large immersive environments can contribute to future advances in virtual assembly planning, factory simulation, and other operations where haptics is an essential part of the simulation experience.
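
The recentering behavior described in point (2) can be illustrated with a short control-loop sketch. The workspace center, gain, deadband, and the send_velocity transport below are illustrative assumptions, not the paper's actual SPARTA/VRPN interface; only the proportional recentering logic is shown.

```python
# Hedged sketch: turn the end effector's offset from its workspace center
# into a planar velocity command for the holonomic (Mecanum) base.
import numpy as np

WORKSPACE_CENTER = np.zeros(3)  # assumed center of the Virtuose workspace (device frame)
KP = 0.8                        # illustrative proportional gain: offset (m) -> velocity (m/s)
DEADBAND = 0.05                 # ignore offsets under 5 cm so the base holds still during fine work

def recentering_step(end_effector_pos, base_yaw, send_velocity):
    """One control tick: drive the base so the end effector drifts back to center."""
    error = end_effector_pos - WORKSPACE_CENTER
    if np.linalg.norm(error[:2]) < DEADBAND:
        send_velocity(0.0, 0.0)            # near the center: command zero velocity
        return
    # Rotate the planar error into the robot frame; a Mecanum base is
    # holonomic, so vx and vy can be commanded independently.
    c, s = np.cos(-base_yaw), np.sin(-base_yaw)
    send_velocity(KP * (c * error[0] - s * error[1]),
                  KP * (s * error[0] + c * error[1]))
```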

2019 ◽  
Vol 39 (5) ◽  
pp. 931-943
Author(s):  
Samir Garbaya ◽  
Daniela M. Romano ◽  
Gunjeet Hattar

Purpose: The purpose of this paper is to study the effect of the gamification of virtual assembly planning on user performance, user experience and engagement.
Design/methodology/approach: A multi-touch table was used to manipulate virtual parts, and gamification features were integrated into the virtual assembly environment. An experiment was conducted in two conditions: a gamified and a non-gamified virtual environment. Subjects had to assemble a virtual pump. User performance was evaluated in terms of the number of errors, the feasibility of the generated assembly sequence and the user feedback.
Findings: The gamification reduced the number of errors and increased the score representing the number of right decisions. The results of the subjective and objective analysis showed that the number of errors decreased with engagement in the gamified assembly. The increase in the overall user experience reduced the number of errors. The subjective evaluation showed a significant difference between the gamified and the non-gamified assembly in terms of the level of engagement, the learning usability and the overall experience.
Research limitations/implications: The effective learning retention after training has not been tested, and longitudinal studies are necessary. The effect of the gamification elements has been evaluated as a whole; further work could isolate the most beneficial features and add other elements that might be more beneficial for learning.
Originality/value: The research reported in this paper provides valuable insights into the gamification of virtual assembly using a low-cost multi-touch interface. The results are promising for training operators to assemble a product at the design stage.


Author(s):  
Filipe Gaspar ◽  
Rafael Bastos ◽  
Miguel Sales

In large-scale immersive virtual reality (VR) environments, such as a CAVE, one of the most common problems is tracking the position of the user's head while he or she is immersed, so that perspective changes can be reflected in the synthetic stereoscopic images. In this paper, the authors describe the theoretical foundations and engineering approach adopted in the development of an infrared-optical tracking system designed for large-scale immersive virtual environments (VE) or augmented reality (AR) settings. The system is capable of tracking independent retro-reflective markers arranged in a 3D structure in real time, recovering all six degrees of freedom (6DOF). These artefacts can be attached to the user's stereo glasses to track his or her head while immersed, or used as a 3D input device for rich human-computer interaction (HCI). The hardware configuration consists of four shutter-synchronized cameras fitted with band-pass infrared filters and illuminated by infrared array emitters. Pilot lab results have shown a latency of 40 ms when simultaneously tracking the pose of two artefacts with four infrared markers each, achieving a frame rate of 24.80 fps with a mean accuracy of 0.93 mm/0.51° and a mean precision of 0.19 mm/0.04° in overall translation/rotation, fulfilling the requirements initially defined.
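
One standard way to recover the 6DOF pose of such a rigid marker artefact from triangulated 3D marker positions is least-squares rigid alignment via SVD (the Kabsch algorithm). The sketch below shows that approach under the assumption that model-to-observation correspondences are already known; it is not necessarily the authors' exact pipeline.

```python
# Hedged sketch: 6DOF pose of a rigid artefact by Kabsch/SVD alignment.
# `model` holds the artefact's marker layout in its own frame; `observed`
# holds the triangulated marker positions from the camera system.
import numpy as np

def rigid_pose(model, observed):
    """Least-squares rotation R and translation t with observed ~ R @ model + t."""
    cm, co = model.mean(axis=0), observed.mean(axis=0)
    H = (model - cm).T @ (observed - co)          # 3x3 cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = co - R @ cm
    return R, t
```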


Author(s):  
N. Pretto ◽  
F. Poiesi

We present a virtual reality (VR) setup that enables multiple users to participate in collaborative virtual environments and interact via gestures. A collaborative VR session is established through a network of users composed of a server and a set of clients. The server manages the communication amongst clients and is created by one of the users. Each user's VR setup consists of a head-mounted display (HMD) for immersive visualisation, a hand-tracking system to interact with virtual objects and a single-hand joypad to move in the virtual environment. We use Google Cardboard as an HMD for the VR experience and a Leap Motion for hand tracking, thus making our solution low cost. We evaluate our VR setup through a forensics use case, where real-world objects pertaining to a simulated crime scene are included in a VR environment, acquired using a smartphone-based 3D reconstruction pipeline. Users can interact using virtual gesture-based tools such as pointers and rulers.
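
The server's role as a relay among clients can be sketched as follows. UDP and the implicit message format are assumptions made for illustration; the paper does not specify its transport or protocol.

```python
# Hedged sketch: a relay server that rebroadcasts each client's state update
# (e.g. head pose, hand gesture) to every other client in the session.
import socket

def run_relay(host="0.0.0.0", port=9000):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    clients = set()                       # client addresses seen so far
    while True:
        data, addr = sock.recvfrom(2048)  # one serialized state update
        clients.add(addr)                 # first packet registers the client
        for peer in clients:
            if peer != addr:              # echo to everyone but the sender
                sock.sendto(data, peer)
```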


Author(s):  
Uma Jayaram ◽  
Roglenda Repp

A major disadvantage of some tracking systems used in virtual reality environments is the degradation in accuracy caused by the presence of metals and other sources of electromagnetic distortion in the environment. Calibration of the virtual environment to account for these distortions is essential for VR applications in engineering, where correlation between the virtual environment and the physical world is important. The goal of the calibration process is to map the distorted tracker space to the physical space as accurately as possible for real-time applications. In this paper, the authors present an integrated calibration system used with an electromagnetic tracking system. The components of this system are described in detail, including data collection, grid refinement, interpolation, and evaluation. The paper describes different alternatives for measuring the systematic errors of magnetic trackers in a room, and for automatically correcting them by interpolating between sets of measurements to obtain error estimates and corrections. Several key techniques and algorithms are presented in detail and evaluated in terms of accuracy and execution time over a range of cell densities. Among the interpolation methods considered are inverse distance weighting and affine transformation mappings, as well as variations and combinations of these. The calibration system, called COVE (Calibration of Virtual Environments), has been successful in enabling accurate tracking for several engineering applications.
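
The inverse-distance-weighting correction named above can be sketched as follows: each calibration sample pairs a distorted tracker reading with a surveyed true position, and a new reading is corrected by a distance-weighted average of the stored offsets. Function and variable names are illustrative assumptions, not COVE's API.

```python
# Hedged sketch: inverse distance weighting over calibration samples.
import numpy as np

def idw_correct(reading, distorted_pts, true_pts, power=2.0, eps=1e-9):
    """Map one distorted tracker reading to an estimated physical position."""
    d = np.linalg.norm(distorted_pts - reading, axis=1)
    if d.min() < eps:                       # reading coincides with a sample point
        return true_pts[d.argmin()]
    w = 1.0 / d**power                      # closer samples dominate the estimate
    offsets = true_pts - distorted_pts      # per-sample correction vectors
    return reading + (w[:, None] * offsets).sum(axis=0) / w.sum()
```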


1999 ◽  
Vol 8 (4) ◽  
pp. 469-473 ◽  
Author(s):  
Jeffrey S. Pierce ◽  
Randy Pausch ◽  
Christopher B. Sturgill ◽  
Kevin D. Christiansen

For entertainment applications, a successful virtual experience based on a head-mounted display (HMD) needs to overcome some or all of the following problems: entering a virtual world is a jarring experience, people do not naturally turn their heads or talk to each other while wearing an HMD, putting on the equipment is hard, and people do not realize when the experience is over. In the Electric Garden at SIGGRAPH 97, we presented the Mad Hatter's Tea Party, a shared virtual environment experienced by more than 1,500 SIGGRAPH attendees. We addressed these HMD-related problems with a combination of back story, see-through HMDs, virtual characters, continuity of real and virtual objects, and the layout of the physical and virtual environments.


2004 ◽  
Vol 4 (2) ◽  
pp. 83-90 ◽  
Author(s):  
Chang E. Kim ◽  
Judy M. Vance

Realistic part interaction is an important component of an effective virtual assembly application. Both collision detection and part interaction modeling are needed to simulate part-to-part and hand-to-part interactions. This paper examines several polygon-based collision detection packages and compares their usage for virtual assembly applications with the Voxmap PointShell (VPS) software developed by the Boeing Company. VPS is a software developer's toolkit for real-time collision and proximity detection, swept-volume generation, dynamic animation, and six-degree-of-freedom haptics, based on volumetric collision detection and physically based modeling. VPS works by detecting interactions between two parts: a dynamic object moving in the virtual environment, and a static object defined as the collection of all other objects in the environment. The method was found to provide realistic collision detection and physically based modeling interaction, with good performance at the expense of contact accuracy. Results from several performance tests of VPS are presented. The paper concludes by describing how VPS has been implemented to handle multiple dynamic part collisions and two-handed assembly using the 5DT dataglove in a projection-screen virtual environment.
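
The voxmap/point-shell idea can be illustrated with a simplified collision query: the static environment is voxelized once into an occupancy grid, the dynamic part carries a shell of surface points, and each query is one grid lookup per posed point. This is a sketch of the general technique, not Boeing's VPS implementation.

```python
# Hedged sketch: point-shell vs. voxmap collision query.
# `voxmap` is a 3D boolean occupancy grid of the static environment;
# `points` are the dynamic part's surface points in its own frame.
import numpy as np

def colliding(points, pose_R, pose_t, voxmap, origin, voxel_size):
    """True if any transformed point-shell point lands in an occupied voxel."""
    world = points @ pose_R.T + pose_t               # pose the dynamic part
    idx = np.floor((world - origin) / voxel_size).astype(int)
    inside = np.all((idx >= 0) & (idx < voxmap.shape), axis=1)
    # Points outside the grid are treated as free space in this sketch.
    return bool(voxmap[tuple(idx[inside].T)].any())  # one lookup per point
```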


Author(s):  
Doug A. Bowman ◽  
Ameya Datey ◽  
Young Sam Ryu ◽  
Umer Farooq ◽  
Omar Vasnaik

Although a wide range of display devices is used in virtual environment (VE) systems, no guidelines exist to choose an appropriate display for a particular VE application. Our goal in this research is to develop such guidelines on the basis of empirical results. In this paper, we present a preliminary experiment comparing human behavior and performance between a head-mounted display (HMD) and a four-sided spatially immersive display (SID). In particular, we studied users' preferences for real vs. virtual turns in the VE. The results indicate that subjects have a significant preference for real turns in the HMD and for virtual turns in the SID. The experiment also found that females are more likely to choose real turns than males. We suggest that HMDs are an appropriate choice when users perform frequent turns and require spatial orientation.


Author(s):  
Matthew A. Mandiak ◽  
Thenkurussi Kesavadas

The concept of virtual assembly offers endless possibilities to a designer in a manufacturing setting. Various assembly measures, such as clearance, assembly planning, force, and alternate designs, can be examined without wasting resources on manufacturing. This paper investigates the possibility of using force feedback through haptics in a virtual environment as a way of understanding the intricacies of simple assembly. This allows a designer to better understand what will, and will not, work during the design phase. Certain characteristics, including lubrication, are also examined to see what impact they have on the ease of assembly. A user, or designer, can then get an actual feel for what characteristics change the likelihood of a proper fit. In addition, the creation of a virtual part run is examined to fully understand what constitutes a defect. Statistical output is provided so a user can analyze a run of parts for quality control and pinpoint possible causes of error in a process. Virtual parts can then be assembled from the run to see how each differs from the others and to emphasize the undesirability of a defect for assembly purposes.
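
The virtual part run with statistical output can be sketched as a simple tolerance simulation: sample a run of part dimensions from an assumed process distribution and flag out-of-tolerance parts as defects. The nominal size, tolerance, and spread below are invented for illustration and do not come from the paper.

```python
# Hedged sketch: simulate a part run and report defect statistics.
import numpy as np

rng = np.random.default_rng(1)
NOMINAL, TOL, SIGMA, N = 25.00, 0.05, 0.02, 500   # mm; illustrative values only

diameters = rng.normal(NOMINAL, SIGMA, N)          # simulated run of shaft diameters
defects = np.abs(diameters - NOMINAL) > TOL        # outside the +/- tolerance band
print(f"defect rate: {defects.mean():.1%}, "
      f"mean {diameters.mean():.3f} mm, std {diameters.std():.3f} mm")
```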

