Prototyping and Design for Assembly Analysis Using Multimodal Virtual Environments

Author(s):  
Rakesh Gupta ◽  
David Zeltzer

Abstract

This work investigates whether estimates of ease of part handling and part insertion can be provided by multimodal simulation using virtual environment (VE) technology, rather than by conventional table-based methods such as Boothroyd and Dewhurst charts. To do this, a unified physically based model has been developed for modeling dynamic interactions among virtual objects and haptic interactions between the human designer and the virtual objects. This model is augmented with auditory events in a multimodal VE system called the “Virtual Environment for Design for Assembly” (VEDA). Currently these models are 2D in order to preserve interactive update rates, but we expect that the results will generalize to 3D models. VEDA has been used to evaluate the feasibility and advantages of multimodal virtual environments as a design tool for manual assembly. The designer sees a visual representation of the objects and can interactively sense and manipulate them through haptic interface devices with force feedback, feeling the objects and hearing sounds when they collide. Objects can be interactively grasped and assembled with other parts of the assembly to prototype new designs and perform Design for Assembly analysis. Experiments have been conducted with human subjects to investigate whether multimodal virtual environments can replicate experiments linking increases in assembly time with increases in task difficulty. In particular, the effects of clearance, friction, chamfers, and distance of travel on handling and insertion time have been compared in real and virtual environments for a peg-in-hole assembly task. In addition, the effects of degrading or removing the different modes (visual, auditory, and haptic) on different phases of manual assembly have been examined.
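The table-based methods mentioned above estimate assembly time by looking up handling and insertion penalties for each part's attributes. The sketch below illustrates this idea in the spirit of Boothroyd–Dewhurst charts; the baseline times and penalty values are hypothetical placeholders chosen for illustration, not the published chart values.

```python
# Illustrative sketch of table-based Design for Assembly (DFA) time
# estimation. All numeric values are hypothetical, not the actual
# Boothroyd-Dewhurst chart entries.

BASE_HANDLING_S = 1.5   # hypothetical baseline handling time (seconds)
BASE_INSERTION_S = 1.5  # hypothetical baseline insertion time (seconds)

def handling_time(symmetric: bool, tangles: bool) -> float:
    """Penalize parts that are hard to orient or that tangle in bulk."""
    t = BASE_HANDLING_S
    if not symmetric:
        t += 0.5   # asymmetric parts take longer to orient
    if tangles:
        t += 1.0   # nesting/tangling parts are harder to separate
    return t

def insertion_time(clearance_mm: float, chamfered: bool) -> float:
    """Penalize tight fits and missing chamfers."""
    t = BASE_INSERTION_S
    if clearance_mm < 0.1:
        t += 1.5   # tight clearance slows insertion
    if not chamfered:
        t += 0.5   # no chamfer to guide alignment
    return t

def assembly_time(parts) -> float:
    """Total estimated manual assembly time for a list of parts."""
    return sum(handling_time(p["symmetric"], p["tangles"])
               + insertion_time(p["clearance_mm"], p["chamfered"])
               for p in parts)

# A tight-clearance, unchamfered peg, as in the peg-in-hole task.
peg = {"symmetric": True, "tangles": False,
       "clearance_mm": 0.05, "chamfered": False}
print(assembly_time([peg]))  # 5.0
```

The contribution of VEDA is to replace these static lookups with times measured directly from a designer performing the task in the simulated environment.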

1997 ◽  
Vol 6 (3) ◽  
pp. 318-338 ◽  
Author(s):  
Rakesh Gupta ◽  
Thomas Sheridan ◽  
Daniel Whitney

The goal of this work is to investigate whether estimates of ease of part handling and part insertion can be provided by multimodal simulation using virtual environment (VE) technology. The long-term goal is to use this data to extend computer-aided design (CAD) systems in order to evaluate and compare alternative designs using design for assembly analysis. A unified, physically based model has been developed for modeling dynamic interactions and has been built into a multimodal VE system called the Virtual Environment for Design for Assembly (VEDA). The designer sees a visual representation of objects, hears collision sounds when objects hit each other, and can feel and manipulate the objects through haptic interface devices with force feedback. Currently these models are 2D in order to preserve interactive update rates. Experiments were conducted with human subjects using a two-dimensional peg-in-hole apparatus and a VEDA simulation of the same apparatus. The simulation duplicated as closely as possible the weight, shape, size, peg-hole clearance, and frictional characteristics of the physical apparatus. The experiments showed that the multimodal VE is able to replicate experimental results in which task completion times increased with task difficulty (measured as increased friction, increased handling distance, and decreased peg-hole clearance). However, task completion times in the multimodal VE were approximately twice those obtained with the physical apparatus. A number of possible contributing factors have been identified, but their effects have not been quantified.


2016 ◽  
Vol 13 (122) ◽  
pp. 20160414 ◽  
Author(s):  
Mehdi Moussaïd ◽  
Mubbasir Kapadia ◽  
Tyler Thrash ◽  
Robert W. Sumner ◽  
Markus Gross ◽  
...  

Understanding the collective dynamics of crowd movements during stressful emergency situations is central to reducing the risk of deadly crowd disasters. Yet, their systematic experimental study remains a challenging open problem due to ethical and methodological constraints. In this paper, we demonstrate the viability of shared three-dimensional virtual environments as an experimental platform for conducting crowd experiments with real people. In particular, we show that crowds of real human subjects moving and interacting in an immersive three-dimensional virtual environment exhibit typical patterns of real crowds as observed in real-life crowded situations. These include the manifestation of social conventions and the emergence of self-organized patterns during egress scenarios. High-stress evacuation experiments conducted in this virtual environment reveal movements characterized by mass herding and dangerous overcrowding as they occur in crowd disasters. We describe the behavioural mechanisms at play under such extreme conditions and identify critical zones where overcrowding may occur. Furthermore, we show that herding spontaneously emerges from a density effect without the need to assume an increase of the individual tendency to imitate peers. Our experiments reveal the promise of immersive virtual environments as an ethical, cost-efficient, yet accurate platform for exploring crowd behaviour in high-risk situations with real human subjects.


Author(s):  
Casper G. Wickman ◽  
Rikard Söderberg

In the automotive industry today, virtual geometry verification activities are conducted with nominal models in the early design phases. Later in the design process, when the first physical test series are built, concepts are verified in a non-nominal manner. Errors detected at this stage can result in expensive post-conceptual changes. By combining Computer Aided Tolerance (CAT) simulation tools with Virtual Reality (VR) tools, virtual environments for non-nominal geometry verification can be utilized. This paper presents the results of a study, conducted at Volvo Cars, that investigates the perceptual aspects related to verification of quality appearance using non-nominal virtual models. Even when a realistic non-nominal model is created, the interpretation, i.e. how the model is perceived, must be clarified; this represents a validation of the model from a perceptual point of view. Since visualizing the effect of geometric variation is a specific application with high demands on realistic and detailed representation, perceptual studies are needed to ensure that VR and other virtual representations can be used for this kind of application. The question is whether it is possible to evaluate aspects like flush, gap, and see-through in virtual environments. In this paper, two environments are compared: a physical environment and a corresponding virtual one. Three adjusted physical vehicles were mapped to the virtual environment and compared using non-immersive desktop VR in a visualization clinic with test subjects from the automotive industry. The study indicates that virtual objects are judged as less good-looking than physical objects, and that there is a higher degree of uncertainty when judging virtual objects.


Author(s):  
Rasul Fesharakifard ◽  
Maryam Khalili ◽  
Laure Leroy ◽  
Alexis Paljic ◽  
Philippe Fuchs

A grasp exoskeleton actuated by a string-based platform is proposed to provide force feedback to a user’s hand in human-scale virtual environments. The user of this interface has access to seven active degrees of freedom in interaction with virtual objects: three degrees of translation, three degrees of rotation, and one degree of grasping. The exoskeleton has a light, ergonomic structure and supports the grasp gesture for five fingers. The exoskeleton is actuated by eight strings that form the parallel arms of the platform. Each string is connected to a block comprising a motor, rotary encoder, and force sensor, with a novel design that provides the force and precision the interface requires. A hybrid control method based on the string tension measured by the force sensor is developed to resolve the common problems of string-based interfaces. The blocks can be moved on a cubic frame around the virtual environment. Finally, the results of preliminary experiments with the interface are presented to show its practical characteristics. The interface has also been mounted on an automotive model to demonstrate its industrial adaptability.
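Tension-based control of a string-driven platform typically regulates each string's measured tension toward a desired value, keeping strings taut without over-tensioning. The sketch below shows one plausible per-string loop; the PI gains, time step, and interface are hypothetical, and a real controller for this device would additionally solve for eight coordinated tensions producing the commanded wrench on the hand.

```python
# Hypothetical sketch of a per-string tension regulator for a
# string-driven haptic interface. Gains and the sensor/motor interface
# are illustrative assumptions, not the authors' published controller.

class TensionController:
    def __init__(self, kp: float = 2.0, ki: float = 0.5, dt: float = 0.001):
        self.kp = kp          # proportional gain on tension error
        self.ki = ki          # integral gain to remove steady-state error
        self.dt = dt          # control period (seconds)
        self.integral = 0.0   # accumulated tension error

    def update(self, desired_n: float, measured_n: float) -> float:
        """Return a motor command from the tension error (newtons)."""
        error = desired_n - measured_n
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

ctrl = TensionController()
# One control tick: command 5 N while the force sensor reads 4 N.
cmd = ctrl.update(desired_n=5.0, measured_n=4.0)
print(round(cmd, 4))  # 2.0005
```

Closing the loop on measured tension rather than motor position is what lets such a scheme tolerate string elasticity and friction in the transmission, which are among the ordinary problems of string-based interfaces the abstract alludes to.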


1997 ◽  
Vol 29 (8) ◽  
pp. 585-597 ◽  
Author(s):  
Rakesh Gupta ◽  
Daniel Whitney ◽  
David Zeltzer

1999 ◽  
Vol 4 (1) ◽  
pp. 8-17 ◽  
Author(s):  
G Jansson ◽  
H Petrie ◽  
C Colwell ◽  
D. Kornbrot ◽  
J. Fänger ◽  
...  

This paper is a fusion of two independent studies investigating related problems concerning the use of haptic virtual environments by blind people: a study in Sweden using a PHANToM 1.5 A and one in the U.K. using an Impulse Engine 3000. In general, such devices are a most interesting option for providing blind people with information about representations of the 3D world, but the restriction at each moment to only one point of contact between observer and virtual object might decrease their effectiveness. The studies investigated the perception of virtual textures, the identification of virtual objects, and the perception of their size and angles. Both sighted (blindfolded in one study) and blind people served as participants. It was found (1) that the PHANToM can effectively render textures in the form of sandpapers and simple 3D geometric forms and (2) that the Impulse Engine can effectively render textures consisting of grooved surfaces, as well as 3D objects, whose properties were, however, judged with some over- or underestimation. When blind and sighted participants' performance was compared, differences were found that deserve further attention. In general, the haptic devices studied have demonstrated the great potential of force feedback devices in rendering relatively simple environments, in spite of the restricted ways they allow for exploring the virtual world. The results highly motivate further studies of their effectiveness, especially in more complex contexts.


Author(s):  
Hugo I. Medellín-Castillo ◽  
Germánico González-Badillo ◽  
Eder Govea ◽  
Raquel Espinosa-Castañeda ◽  
Enrique Gallegos

Technological growth in recent years has led to the development of virtual reality (VR) systems able to immerse the user in a three-dimensional (3D) virtual environment where the user can interact in real time with virtual objects. This interaction is mainly based on visualizing the virtual environment and its objects. However, with the recent advent of haptic systems, interaction with the virtual world has been extended to feeling, touching, and manipulating virtual objects. Virtual reality has been successfully used to develop applications in scientific areas ranging from basic and social sciences to education and entertainment. Likewise, the use of haptics has increased in the last decade in domains from science and engineering to art and entertainment. Despite many developments, there is still relatively little knowledge about the confluence of software, enabling hardware, and visual and haptic representations needed to create the conditions that best provide an immersive sensory environment for conveying information about a particular subject domain. In this paper, the state of the art of the research work on virtual reality and haptic technologies carried out by the authors in recent years is presented. The aim is to demonstrate the potential of these technologies for developing usable systems for analysis and simulation in different areas of knowledge. The development of three systems in the areas of engineering, medicine, and art is presented. In the area of engineering, a system for the planning, evaluation, and training of assembly and manufacturing tasks has been developed. The system, named HAMS (Haptic Assembly and Manufacturing System), is able to simulate assembly tasks of complex components with force feedback provided by the haptic device. In the area of medicine, a surgical simulator for planning and training orthognathic surgeries has been developed. The system, named VOSS (Virtual Osteotomy Simulator System), allows the realization of virtual osteotomies with force feedback. Finally, in the area of art, an interactive cinema system for blind people has been developed. The system plays a 3D virtual movie that the blind user can listen to and touch by means of the haptic device. The development of these applications and the results obtained are presented and discussed in this paper.


Author(s):  
Conrad Bullion ◽  
Goktug A. Dazkir ◽  
Hakan Gurocak

In this paper we present details of a finger mechanism designed as part of ongoing research on a force feedback glove. The glove will be used in virtual reality applications, where it will provide force feedback to the user as he grasps virtual objects. Haptic (touch and force) feedback is an essential component in making the simulated environment feel more realistic to the user. The design employs an innovative mechanism that wraps around each finger, each controlled by a single cable. By controlling the tension on the cable and its displacement, we can control the amount of force applied to the user’s finger at any position of the mechanism. The glove can provide distributed forces on the bottom surface of each finger while reducing the number of actuators and sensors. First, kinematic and force analyses of the mechanism, along with experimental verification, are presented. Following a description of an experiment to determine grasping forces, we conclude with an overview of the next steps in this research.


Author(s):  
Eder Govea ◽  
Hugo I. Medellín-Castillo

Virtual Reality (VR) is one of the areas of knowledge that has taken advantage of developments in computer technology and scientific visualization. It has been used in applications such as engineering, medicine, education, entertainment, astronomy, archaeology, and the arts. A main issue of VR and computer-assisted applications is the design and development of the virtual environment, which comprises the virtual objects. Designing a virtual environment thus requires modelling the virtual scene and virtual objects, including their geometry and surface characteristics such as colours and textures. This research work presents a new methodology to develop low-cost, high-quality virtual environments and scenarios for biomechanics, biomedical, and engineering applications. The proposed methodology is based on open-source software. Four case studies, two applications in medicine and two in engineering, are presented. The results show that the virtual environments developed for these applications are realistic and similar to the real environments. When these virtual reality scenarios are compared with pictures of the actual devices, the appearance of the virtual scenarios is very good. In particular, the use of textures greatly helps in conveying specific features such as the appearance of bone or metal. Thus, the usability of the proposed methodology for developing virtual reality applications in biomedicine and engineering is demonstrated. It is important to mention that the quality of the virtual environment will also depend on the 3D modelling skills of the VR designer.


Author(s):  
Abdeldjallil Naceri ◽  
Thierry Hoinville ◽  
Ryad Chellali ◽  
Jesus Ortiz ◽  
Shannon Hennig

The main objective of this paper is to investigate whether observers are able to perceive the depth of virtual objects within virtual environments during reaching tasks. In other words, we address the question of observer immersion in a displayed virtual environment. For this purpose, eight observers were asked to reach for virtual objects displayed within their peripersonal space under two conditions: in the first, a small virtual sphere was displayed beyond the subject's index finger as an extension of the hand; in the second, no visual feedback was provided. In addition, audio feedback was provided in both conditions when contact with the virtual object was made. Although observers slightly overestimated depth within the peripersonal space, the kinematic analysis shows that they aimed accurately for the virtual objects. Furthermore, no significant difference in movement was found between conditions for any observer. Observers targeted the virtual point correctly in both time and space, suggesting that the virtual environment sufficiently simulated the information normally available to the central nervous system.

