Experiments Using Multimodal Virtual Environments in Design for Assembly Analysis

1997 ◽  
Vol 6 (3) ◽  
pp. 318-338 ◽  
Author(s):  
Rakesh Gupta ◽  
Thomas Sheridan ◽  
Daniel Whitney

The goal of this work is to investigate whether estimates of ease of part handling and part insertion can be provided by multimodal simulation using virtual environment (VE) technology. The long-term goal is to use this data to extend computer-aided design (CAD) systems in order to evaluate and compare alternate designs using design for assembly analysis. A unified, physically-based model has been developed for modeling dynamic interactions and has been built into a multimodal VE system called the Virtual Environment for Design for Assembly (VEDA). The designer sees a visual representation of objects, hears collision sounds when objects hit each other, and can feel and manipulate the objects through haptic interface devices with force feedback. Currently these models are 2D in order to preserve interactive update rates. Experiments were conducted with human subjects using a two-dimensional peg-in-hole apparatus and a VEDA simulation of the same apparatus. The simulation duplicated as well as possible the weight, shape, size, peg-hole clearance, and frictional characteristics of the physical apparatus. The experiments showed that the multimodal VE is able to replicate experimental results in which increased task completion times correlated with increasing task difficulty (measured as increased friction, increased handling distance, and decreased peg-hole clearance). However, task completion times in the multimodal VE were approximately twice those obtained with the physical apparatus. A number of possible contributing factors have been identified, but their effects have not been quantified.
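The difficulty manipulation described above (longer handling distance, tighter peg-hole clearance) is often summarized with a Fitts-style index of difficulty. The formula and the example dimensions below are a common modeling choice for illustration, not values taken from the paper:

```python
import math

def index_of_difficulty(distance_mm: float, clearance_mm: float) -> float:
    """Fitts-style index of difficulty (bits): a larger handling distance
    and a smaller peg-hole clearance both increase task difficulty."""
    return math.log2(2.0 * distance_mm / clearance_mm)

# Tightening the clearance at the same handling distance raises difficulty,
# which in the experiments correlated with longer completion times.
easy = index_of_difficulty(200.0, 2.0)   # loose fit (assumed dimensions)
hard = index_of_difficulty(200.0, 0.5)   # tight fit (assumed dimensions)
```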

Author(s):  
Rakesh Gupta ◽  
David Zeltzer

This work investigates whether estimates of ease of part handling and part insertion can be provided by multimodal simulation using virtual environment (VE) technology, rather than by conventional table-based methods such as Boothroyd and Dewhurst charts. To do this, a unified physically based model has been developed for modeling dynamic interactions among virtual objects and haptic interactions between the human designer and the virtual objects. This model is augmented with auditory events in a multimodal VE system called the “Virtual Environment for Design for Assembly” (VEDA). Currently these models are 2D in order to preserve interactive update rates, but we expect these results to generalize to 3D models. VEDA has been used to evaluate the feasibility and advantages of using multimodal virtual environments as a design tool for manual assembly. The designer sees a visual representation of the objects, can interactively sense and manipulate them through haptic interface devices with force feedback, and hears sounds when objects collide. Objects can be interactively grasped and assembled with other parts of the assembly to prototype new designs and perform Design for Assembly analysis. Experiments have been conducted with human subjects to investigate whether multimodal virtual environments are able to replicate experiments linking increases in assembly time with increases in task difficulty. In particular, the effects of clearance, friction, chamfers, and distance of travel on handling and insertion times have been compared in real and virtual environments for a peg-in-hole assembly task. In addition, the effects of degrading or removing the different modes (visual, auditory, and haptic) on different phases of manual assembly have been examined.


Author(s):  
Goktug A. Dazkir ◽  
Hakan Gurocak

Most haptic gloves are complicated interfaces with many actuators. If gloves were more compact and simpler, they would greatly increase our ability to interact with virtual worlds in a more natural way. This research explored the design of force feedback gloves with a new finger mechanism. The mechanism enabled the application of distributed forces along the bottom surface of the fingers while reducing the number of actuators; most glove designs in the literature apply a reaction force only to the fingertips. Two prototype gloves were built using (1) DC servo motors and (2) brakes filled with magnetorheological (MR) fluid. The glove with MR brakes is lighter and simpler than the one with motors. However, the glove with motors enabled much faster task completion times.


Author(s):  
Audrey K. Bell ◽  
Caroline G.L. Cao

The use of haptic devices to provide force feedback in teleoperation has been shown to enhance performance. An experiment was conducted to examine whether artificial force feedback is utilized in the same manner as real force feedback in a simulated laparoscopic tissue-probing task. Forces in probing a double-layer silicone gel mass were replicated and exaggerated in a virtual environment using a haptic device. Ten subjects performed the probing task in four conditions: 1) realistic force feedback, 2) exaggerated force feedback, 3) disproportionately exaggerated forces, and 4) reversed and disproportionately exaggerated forces. Results showed significantly higher maximum force, detection time, and error rate in virtual probing than in real probing. Time to task completion differed significantly between the realistic and exaggerated force feedback conditions in the virtual environment. These results suggest that artificial force information may be processed differently than real haptic information, leading to higher force application, inefficiency, and reduced accuracy in tissue-probing tasks.
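The four feedback conditions can be pictured as mappings from the real probing force to the displayed force. The gain value and the quadratic "disproportionate" form below are illustrative assumptions, not the experiment's actual force profiles:

```python
def displayed_force(f: float, condition: str, gain: float = 2.0) -> float:
    """Map a real probing force f (N) to the haptic display force.

    The gain and the quadratic 'disproportionate' form are illustrative
    assumptions, not the study's actual mappings."""
    if condition == "realistic":
        return f                      # replicate the real force
    if condition == "exaggerated":
        return gain * f               # proportionally amplified
    if condition == "disproportionate":
        return gain * f * abs(f)      # grows faster than the input
    if condition == "reversed":
        return -gain * f * abs(f)     # sign-flipped as well
    raise ValueError(f"unknown condition: {condition}")
```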


Author(s):  
Göran A. V. Christiansson

Haptic feedback is known to improve teleoperation task performance for a number of tasks, and one important question is which haptic cues matter most for each specific task. This research quantifies human performance in an assembly task for two types of haptic cues: low-frequency (LF) force feedback and high-frequency (HF) force feedback. A human subjects study was performed with two main factors: LF force feedback on/off and HF force (acceleration) feedback on/off. All experiments were performed using a three degree-of-freedom teleoperator in which the slave device has a low intrinsic stiffness, while the master device is stiff. The results show that the LF haptic feedback reduces impact forces but does not influence low-frequency contact forces or task completion time. The HF information did not improve task performance, but it did reduce the operator's mental load, though only in combination with the LF feedback.
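The LF/HF separation can be pictured as splitting the measured force signal with a low-pass filter and taking the residual as the high-frequency channel. The first-order filter and its smoothing factor below are illustrative choices, not the teleoperator's actual signal path:

```python
def split_force_channels(samples, alpha=0.1):
    """Split a sampled force signal into low-frequency (LF) and
    high-frequency (HF) channels.

    alpha is an assumed smoothing factor for a first-order low-pass
    (exponential moving average); the HF channel is the residual."""
    lf, state = [], samples[0]
    for s in samples:
        state += alpha * (s - state)   # low-pass: slow contact forces
        lf.append(state)
    hf = [s - l for s, l in zip(samples, lf)]  # residual: impacts, vibration
    return lf, hf
```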


Author(s):  
S. Jayaram ◽  
H. Joshi ◽  
U. Jayaram ◽  
Y. Kim ◽  
H. Kate ◽  
...  

This paper describes recent work completed to provide haptics-enabled virtual tools in a native CAD environment, such as CATIA V5™. This was a collaborative effort between Washington State University, Sandia National Laboratories, and Immersion Technologies. The intent was to start by utilizing Immersion’s Haptic Workstation™ hardware and supporting CATIA V5™ software at Sandia and leverage the existing work on virtual assembly done by the VRCIM laboratory at Washington State University (WSU). The key contribution of this paper is a unique capability to perform interactive assembly and disassembly simulations in a native Computer Aided Design (CAD) environment using tools such as Allen and box-end wrenches with force feedback provided by a CyberForce™ and CyberGrasp™. Equally important, it also contributes to the new trend of integrating commercial-off-the-shelf (COTS) systems with specific user-driven systems and solutions using component-based software design concepts. We discuss some of the key approaches and concepts, including: different approaches to integrating the native CAD assembly data with the virtual environment constraints data; integration of the native CAD kinematics capability with the immersive environment; algorithms to dynamically organize the assembly constraints for use in manipulation with a virtual hand for assembly and disassembly simulations; and an event-callback mechanism in which different events and callback functions were designed and implemented to simulate different situations in the virtual environment. This integrated capability of haptic tools in a native CAD environment provides functionality beyond extracting data from a CAD model and using it in a virtual environment.
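The event-callback mechanism described above can be sketched as a small event bus in which virtual-environment events trigger registered handlers. The event names and API here are hypothetical, not the actual VEDA/CATIA V5™ integration code:

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal event-callback registry: virtual-environment events
    (e.g. collision, grasp, release) trigger registered handlers."""
    def __init__(self) -> None:
        self._handlers = defaultdict(list)

    def on(self, event: str, callback: Callable[..., None]) -> None:
        """Register a callback for a named event."""
        self._handlers[event].append(callback)

    def emit(self, event: str, **data) -> None:
        """Fire all callbacks registered for the event."""
        for cb in list(self._handlers[event]):
            cb(**data)

# Hypothetical usage: log a simulated assembly event
bus = EventBus()
log = []
bus.on("collision", lambda part: log.append(f"collision:{part}"))
bus.emit("collision", part="box-end wrench")
```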


2021 ◽  
Author(s):  
Koki Watanabe ◽  
Fumihiko Nakamura ◽  
Kuniharu Sakurada ◽  
Theophilus Teo ◽  
Maki Sugimoto

Author(s):  
Holland M. Vasquez ◽  
Justin G. Hollands ◽  
Greg A. Jamieson

Some previous research using a new augmented reality map display called Mirror-in-the-Sky (MitS) showed that performance was worse and mental workload (MWL) greater with MitS than with a track-up map for navigation and wayfinding tasks. The purpose of the current study was to determine, for both MitS and the track-up map, how much performance improves and MWL decreases with practice in a simple navigation task. We conducted a three-session experiment in which twenty participants completed a route-following task in a virtual environment. Task completion times and collisions decreased, subjective MWL decreased, and secondary task performance improved with practice. The NASA-TLX global ratings and Detection Response Task hit rates showed a larger decrease in MWL with MitS than with the track-up map. Additionally, means for performance and workload measures showed that differences between MitS and the track-up map decreased within the first session; in later sessions the differences were negligible. As such, with practice, performance and MWL with MitS may become comparable to those with a traditional track-up map.
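The practice effect reported above is often modeled with the power law of practice. The parameter values below are assumed for illustration, not fitted to this study's data:

```python
def predicted_completion_time(trial: int, a: float = 60.0, b: float = 0.4) -> float:
    """Power law of practice: time on trial n is a * n**(-b).

    a (first-trial time, seconds) and b (learning rate) are assumed
    illustrative values, not estimates from the experiment."""
    return a * trial ** (-b)

# Completion time shrinks with practice, with diminishing returns,
# mirroring the session-over-session improvements reported above.
times = [predicted_completion_time(n) for n in (1, 2, 4, 8)]
```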


2016 ◽  
Vol 13 (122) ◽  
pp. 20160414 ◽  
Author(s):  
Mehdi Moussaïd ◽  
Mubbasir Kapadia ◽  
Tyler Thrash ◽  
Robert W. Sumner ◽  
Markus Gross ◽  
...  

Understanding the collective dynamics of crowd movements during stressful emergency situations is central to reducing the risk of deadly crowd disasters. Yet, their systematic experimental study remains a challenging open problem due to ethical and methodological constraints. In this paper, we demonstrate the viability of shared three-dimensional virtual environments as an experimental platform for conducting crowd experiments with real people. In particular, we show that crowds of real human subjects moving and interacting in an immersive three-dimensional virtual environment exhibit typical patterns of real crowds as observed in real-life crowded situations. These include the manifestation of social conventions and the emergence of self-organized patterns during egress scenarios. High-stress evacuation experiments conducted in this virtual environment reveal movements characterized by mass herding and dangerous overcrowding as they occur in crowd disasters. We describe the behavioural mechanisms at play under such extreme conditions and identify critical zones where overcrowding may occur. Furthermore, we show that herding spontaneously emerges from a density effect without the need to assume an increase of the individual tendency to imitate peers. Our experiments reveal the promise of immersive virtual environments as an ethical, cost-efficient, yet accurate platform for exploring crowd behaviour in high-risk situations with real human subjects.
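The "critical zone" analysis mentioned above can be sketched as a density-threshold scan over a discretized floor plan. The grid values and the danger threshold are assumed for illustration, not the study's measurements:

```python
def overcrowded_cells(density_grid, threshold=5.0):
    """Return (row, col) indices of floor-plan cells whose pedestrian
    density (people per m^2) meets or exceeds an assumed danger
    threshold -- the kind of critical zone the evacuation experiments
    aim to identify."""
    return [(i, j)
            for i, row in enumerate(density_grid)
            for j, d in enumerate(row)
            if d >= threshold]

# Hypothetical floor plan: cells near a bottleneck exceed the threshold
grid = [[1.2, 2.8, 6.3],
        [0.9, 5.4, 3.1]]
```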

