Semi-Immersive Virtual Turbine Engine Simulation System

2018 ◽  
Vol 35 (2) ◽  
pp. 149-160 ◽  
Author(s):  
Mustufa H. Abidi ◽  
Abdulrahman M. Al-Ahmari ◽  
Ali Ahmad ◽  
Saber Darmoul ◽  
Wadea Ameen

The design and verification of assembly operations are essential for planning product production operations. Recently, virtual prototyping has witnessed tremendous progress and has reached a stage where current environments enable rich, multi-modal interaction between designers and models through stereoscopic visuals, surround sound, and haptic feedback. This paper discusses the benefits of building and using Virtual Reality (VR) models in assembly process verification and presents the virtual assembly (VA) of an aircraft turbine engine. The assembly parts and sequences are explained using a virtual reality design system. The system enables stereoscopic visuals, surround sound, and rich, intuitive interaction with the developed models. A dedicated software architecture is proposed to describe the assembly parts and assembly sequence in VR. A collision detection mechanism provides visual feedback to check for interference between components. The system is tested on virtual prototyping and assembly sequencing of a turbine engine. We show that the developed system is comprehensive in terms of VR feedback mechanisms, which include visual, auditory, tactile, and force feedback. The system is shown to be effective and efficient for validating assembly design, part design, and operations planning.
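The interference check described above can be sketched in a few lines. The following is a minimal illustration of pairwise collision detection feeding visual feedback, assuming axis-aligned bounding boxes and illustrative component names; it is not the paper's actual implementation.

```python
# Minimal sketch of an interference check for assembly verification.
# The AABB representation and part names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AABB:
    """Axis-aligned bounding box for a component (min/max corners)."""
    min_pt: tuple  # (x, y, z)
    max_pt: tuple  # (x, y, z)

def overlaps(a: AABB, b: AABB) -> bool:
    """True if the two boxes interpenetrate on all three axes."""
    return all(a.min_pt[i] <= b.max_pt[i] and b.min_pt[i] <= a.max_pt[i]
               for i in range(3))

def check_interference(components: dict[str, AABB]) -> list[tuple[str, str]]:
    """Return every pair of components whose bounds collide, so the
    renderer can highlight them (the visual-feedback step)."""
    names = list(components)
    collisions = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if overlaps(components[a], components[b]):
                collisions.append((a, b))
    return collisions

# Example: a turbine blade clipping into the hub would be flagged here.
parts = {
    "blade": AABB((0.0, 0.0, 0.0), (1.0, 1.0, 1.0)),
    "hub":   AABB((0.5, 0.5, 0.5), (2.0, 2.0, 2.0)),
}
print(check_interference(parts))  # [('blade', 'hub')]
```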

2019 ◽  
Vol 121 (4) ◽  
pp. 1398-1409 ◽  
Author(s):  
Vonne van Polanen ◽  
Robert Tibold ◽  
Atsuo Nuruki ◽  
Marco Davare

Lifting an object requires precise scaling of fingertip forces based on a prediction of object weight. At object contact, a series of tactile and visual events arise that need to be rapidly processed online to fine-tune the planned motor commands for lifting the object. The brain mechanisms underlying serial multisensory integration at transient sensorimotor events, a general feature of actions requiring hand-object interactions, are not yet understood. In this study we tested the relative weighting between haptic and visual signals when they are integrated online into the motor command. We used a new virtual reality setup to desynchronize visual feedback from haptics, which allowed us to probe the relative contribution of haptics and vision in driving participants’ movements when they grasped virtual objects simulated by two force-feedback robots. We found that visual delay changed the profile of fingertip force generation and led participants to perceive objects as heavier than when lifts were performed without visual delay. We further modeled the effect of vision on motor output by manipulating the extent to which delayed visual events could bias the force profile, which allowed us to determine the specific weighting the brain assigns to haptics and vision. Our results show for the first time how visuo-haptic integration is processed at discrete sensorimotor events for controlling object-lifting dynamics, and further highlight how multisensory signals are organized online for controlling action and perception.

NEW & NOTEWORTHY: Dexterous hand movements require rapid integration of information from different senses, in particular touch and vision, at key time points as movement unfolds. The relative weighting between vision and haptics for object manipulation is unknown. We used object lifting in virtual reality to desynchronize visual and haptic feedback and determine their relative weightings. Our findings shed light on how rapid multisensory integration is processed over a series of discrete sensorimotor control points.
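The cue-weighting idea can be illustrated with a minimal sketch: a haptic estimate and a (possibly delayed) visual estimate of a sensorimotor event are fused with weights that sum to one. The 0.7/0.3 split below is an illustrative assumption, not a value estimated in the study.

```python
# Minimal visuo-haptic cue-weighting sketch. The 0.7 haptic weight is
# a placeholder assumption, not a weighting reported by the study.
def integrate_cues(haptic_est: float, visual_est: float,
                   w_haptic: float = 0.7) -> float:
    """Weighted fusion of haptic and visual event estimates;
    w_haptic is the relative weighting assigned to touch."""
    return w_haptic * haptic_est + (1.0 - w_haptic) * visual_est

# With vision delayed by 100 ms, the fused contact-time estimate is
# biased toward the later visual event, mimicking the reported shift
# in force profiles and heaviness perception.
haptic_contact_ms = 0.0
visual_contact_ms = 100.0  # desynchronized visual feedback
print(integrate_cues(haptic_contact_ms, visual_contact_ms))  # ~30.0 ms
```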


Author(s):  
Daniela Faas

Experience with current Virtual Reality (VR) systems that simulate low-clearance assembly operations with haptic feedback indicates that such systems are highly desirable tools for evaluating preliminary designs, as well as for virtual training and maintenance processes. The purpose of this research is to develop methods to support manual low-clearance assembly using haptic (force) feedback in a virtual environment. The results of this research will be used in an engineering framework for assembly simulation, training, and maintenance. The proposed method combines voxel-based collision detection and boundary representation to support both force feedback and constraint recognition. The key to this approach is developing the data structure and logic needed to move seamlessly between the two representations while supporting smooth haptic feedback. Collision forces and constraint-guided forces are blended to provide support for low-clearance haptic assembly. This paper describes the development of the method.
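The blending of collision and constraint-guided forces can be sketched as a simple convex combination. The spring constants and blend weight below are illustrative assumptions, not the paper's tuned values.

```python
# Sketch of force blending: a penalty force from the voxel
# representation is mixed with a constraint-guided force from the
# boundary representation. Gains and blend weight are assumptions.
import numpy as np

def blended_force(penetration: np.ndarray, constraint_dir: np.ndarray,
                  k_collision: float = 500.0, k_constraint: float = 200.0,
                  blend: float = 0.5) -> np.ndarray:
    """Blend a penalty force (opposing voxel penetration) with a force
    guiding the part along the recognized assembly constraint."""
    f_collision = -k_collision * penetration      # push out of contact
    f_constraint = k_constraint * constraint_dir  # pull along mate axis
    return blend * f_collision + (1.0 - blend) * f_constraint

# Example: part penetrating 1 mm in z while a cylindrical mate pulls in x.
f = blended_force(np.array([0.0, 0.0, 0.001]), np.array([1.0, 0.0, 0.0]))
print(f)  # blended haptic force sent to the device each servo tick
```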


2019 ◽  
Vol 9 (18) ◽  
pp. 3692 ◽  
Author(s):  
Seonghoon Ban ◽  
Kyung Hoon Hyun

In recent years, consumer-level virtual-reality (VR) devices and content have become widely available. Establishing a sense of presence is a key objective of VR, and immersive interfaces with haptic feedback for VR applications have long been in development. Despite state-of-the-art research on force feedback, directional feedback based on force concentration has not yet been studied. Therefore, we developed directional force feedback (DFF), a device that generates directional sensations for VR applications via mechanical force concentration. DFF uses the rotation of motors to concentrate force and deliver directional sensations to the user. To achieve this, we developed a novel method of force concentration for directional sensation: by considering both rotational rebound and gravity, the optimum motor speeds and rotation angles were identified. Additionally, we validated the impact of DFF in a virtual environment, showing that users’ presence and immersion within VR were higher with DFF than without it. The results of the user studies demonstrated that the device significantly improves the immersiveness of virtual applications.
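As a rough illustration only, a direction-to-motor-command mapping might look like the sketch below. The rebound pre-compensation offset, the gravity boost, and the choice of downward half-plane are hypothetical placeholders; the paper identifies the optimum speeds and angles empirically.

```python
# Hypothetical sketch of mapping a desired cue direction to a motor
# command. The 15-degree rebound offset and 1.2x gravity boost are
# placeholder assumptions, not values derived in the paper.
def motor_command(direction_deg: float, rebound_offset_deg: float = 15.0,
                  base_speed_rpm: float = 3000.0) -> tuple[float, float]:
    """Return (rotation angle, motor speed) for a desired sensation
    direction. The angle is advanced to pre-compensate rotational
    rebound; downward cues get extra speed to act against gravity."""
    angle = (direction_deg - rebound_offset_deg) % 360.0
    # Treat 90..270 degrees (by assumption) as the downward half-plane.
    gravity_boost = 1.2 if 90.0 < direction_deg < 270.0 else 1.0
    return angle, base_speed_rpm * gravity_boost

print(motor_command(180.0))  # (165.0, 3600.0)
```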


Author(s):  
Daniela Faas ◽  
Judy M. Vance

This paper investigates the effect of pointshell shrinking and feature size on manual assembly operations in a virtual environment with haptic force feedback. Specific emphasis is placed on exploring methods to improve voxel-based modeling to support manual assembly of low-clearance parts. CAD parts were created, voxelized, and tested for assembly. The results showed that pointshell shrinking allows the engineer to assemble parts with a lower clearance than is possible without it. Further results showed that assemblability depends on feature size, particularly part diameter and clearance. In a pin-and-hole assembly, for a given percentage clearance, assembling low-clearance features becomes more difficult as the pin diameter increases. An empirical equation is developed to guide the designer in selecting an appropriate voxel size based on feature size. These results advance the effort to improve manual assembly operations via haptic feedback in the virtual environment.
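The empirical equation itself is not reproduced in the abstract; the sketch below only illustrates the kind of designer-facing helper it enables, with a placeholder proportionality constant.

```python
# Hypothetical voxel-size helper. The constant k is a placeholder
# assumption standing in for the paper's empirical equation.
def suggest_voxel_size(pin_diameter_mm: float, clearance_pct: float,
                       k: float = 0.25) -> float:
    """Suggest a voxel size from feature size: the absolute clearance is
    diameter * clearance fraction, and the voxel edge is kept to a
    fraction k of that gap so the pin can pass the hole in simulation."""
    absolute_clearance = pin_diameter_mm * clearance_pct / 100.0
    return k * absolute_clearance

# A 20 mm pin at 1% clearance leaves a 0.2 mm gap -> 0.05 mm voxels.
print(suggest_voxel_size(20.0, 1.0))  # 0.05
```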


Author(s):  
Benjamin Williams ◽  
Alexandra E. Garton ◽  
Christopher J. Headleand
