Assessment of Pointshell Shrinking and Feature Size on Virtual Manual Assembly

Author(s):  
Daniela Faas ◽  
Judy M. Vance

This paper investigates the effect of pointshell shrinking and feature size on manual assembly operations in a virtual environment with haptic force feedback, with specific emphasis on methods to improve voxel-based modeling to support manual assembly of low-clearance parts. CAD parts were created, voxelized, and tested for assembly. The results show that pointshell shrinking allows the engineer to assemble parts with lower clearance than is possible without it. Further results show that assemblability depends on feature size, particularly part diameter and clearance: in a pin-and-hole assembly, for a given percent clearance, assembling low-clearance features becomes more difficult as the pin diameter increases. An empirical equation is developed to guide the designer in selecting an appropriate voxel size based on feature size. These results advance the effort to improve manual assembly operations via haptic feedback in the virtual environment.
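
A minimal sketch of the shrinking idea (hypothetical code, not the authors' implementation): each point of the voxelized pointshell is pulled inward along its outward surface normal by a fraction of the voxel size, thinning the artificial skin that voxelization adds so that lower-clearance features can mate.

```python
import numpy as np

def shrink_pointshell(points, normals, voxel_size, shrink_factor=0.5):
    """Pull each pointshell point inward along its outward unit normal.

    points:  (N, 3) array of pointshell coordinates
    normals: (N, 3) array of outward unit normals at those points
    Shrinking by a fraction of the voxel size thins the artificial surface
    layer introduced by voxelization, so tighter clearances can assemble.
    """
    return points - shrink_factor * voxel_size * normals

# Illustrative use: a 1 mm voxel grid, points pulled in by half a voxel.
pts = np.array([[10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
nrm = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
shrunk = shrink_pointshell(pts, nrm, voxel_size=1.0)
```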

2018 ◽  
Vol 35 (2) ◽  
pp. 149-160 ◽  
Author(s):  
Mustufa H. Abidi ◽  
Abdulrahman M. Al-Ahmari ◽  
Ali Ahmad ◽  
Saber Darmoul ◽  
Wadea Ameen

Abstract The design and verification of assembly operations is essential for planning product production operations. Recently, virtual prototyping has witnessed tremendous progress and has reached a stage where current environments enable rich, multi-modal interaction between designers and models through stereoscopic visuals, surround sound, and haptic feedback. This paper discusses the benefits of building and using Virtual Reality (VR) models in assembly process verification and presents the virtual assembly (VA) of an aircraft turbine engine. The assembly parts and sequences are explained using a virtual reality design system. The system enables stereoscopic visuals, surround sound, and ample and intuitive interaction with the developed models. A special software architecture is suggested to describe the assembly parts and assembly sequence in VR. A collision detection mechanism provides visual feedback to check for interference between components. The system is tested for virtual prototyping and assembly sequencing of a turbine engine. We show that the developed system is comprehensive in terms of VR feedback mechanisms, including visual, auditory, tactile, and force feedback, and that it is effective and efficient for validating assembly design, part design, and operations planning.
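
As an illustration of the kind of interference check that can drive such visual feedback, the sketch below tests axis-aligned bounding boxes of a moving part against already-placed parts and returns a highlight cue; the paper does not specify its actual collision detection mechanism, so this is a simplified assumption.

```python
from dataclasses import dataclass

@dataclass
class AABB:
    # Axis-aligned bounding box: min and max corners (x, y, z).
    lo: tuple
    hi: tuple

def intersects(a: AABB, b: AABB) -> bool:
    """Broad-phase interference test between two parts."""
    return all(a.lo[i] <= b.hi[i] and b.lo[i] <= a.hi[i] for i in range(3))

def assembly_step_feedback(moving: AABB, placed: list) -> str:
    """Return a visual cue: highlight the moving part red on interference."""
    return "highlight_red" if any(intersects(moving, p) for p in placed) else "highlight_green"
```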


Author(s):  
Daniela Faas

Experience with current Virtual Reality (VR) systems that simulate low-clearance assembly operations with haptic feedback indicates that such systems are highly desirable tools in the evaluation of preliminary designs, as well as in virtual training and maintenance processes. The purpose of this research is to develop methods to support manual low-clearance assembly using haptic (force) feedback in a virtual environment. The results of this research will be used in an engineering framework for assembly simulation, training, and maintenance. The proposed method combines voxel-based collision detection and boundary representation to support both force feedback and constraint recognition. The key to this approach is developing the data structure and logic needed to move seamlessly between the two representations while supporting smooth haptic feedback. Collision forces and constraint-guided forces are blended to provide support for low-clearance haptic assembly. This paper describes the development of the method.
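
The blending of collision and constraint-guided forces can be pictured with a short sketch (illustrative names and weighting, not the paper's data structures): a scalar weight ramps from pure voxel-based collision response toward constraint-guided forces as a mating constraint is recognized, keeping the rendered haptic force continuous.

```python
import numpy as np

def blended_force(f_collision, f_constraint, blend):
    """Blend voxel-based collision forces with constraint-guided forces.

    blend in [0, 1]: 0 = pure collision response (free motion),
    1 = fully constraint-guided (part captured on the mating axis).
    A smooth ramp avoids force discontinuities felt through the haptic device.
    """
    blend = float(np.clip(blend, 0.0, 1.0))
    return (1.0 - blend) * np.asarray(f_collision) + blend * np.asarray(f_constraint)
```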


2019 ◽  
Vol 9 (18) ◽  
pp. 3692 ◽  
Author(s):  
Seonghoon Ban ◽  
Kyung Hoon Hyun

In recent years, consumer-level virtual-reality (VR) devices and content have become widely available. Establishing a sense of presence is a key objective of VR, and immersive interfaces with haptic feedback for VR applications have long been in development. Despite state-of-the-art force-feedback research, directional feedback based on force concentration has not yet been studied. We therefore developed directional force feedback (DFF), a device that generates directional sensations for VR applications via mechanical force concentration. DFF uses the rotation of motors to concentrate force and deliver directional sensations to the user. To achieve this, we developed a novel method of force concentration for directional sensation; by considering both rotational rebound and gravity, the optimum rotational motor speeds and rotation angles were identified. Additionally, we validated the impact of DFF in a virtual environment, showing that users' presence and immersion within VR were higher with DFF than without it. The results of the user studies demonstrate that the device significantly improves the immersiveness of virtual applications.
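
A toy sketch of the underlying idea, under strong simplifying assumptions (planar rotation, a centripetal-style force model, and illustrative candidate values rather than the authors' identified speeds and angles): search candidate motor speed/angle pairs for the one whose net force, including gravity on the handle, best aligns with the desired direction.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # m/s^2

def best_rotation_setting(target_dir, candidates, mass=0.05):
    """Pick the (omega, theta) candidate whose net force best aligns with target_dir.

    candidates: iterable of (omega [rad/s], theta [rad], radius [m]) tuples.
    Each candidate contributes a centripetal-style force of magnitude
    m * omega^2 * radius directed by theta in the device plane; gravity on
    the handle mass is added so the choice accounts for the handle's weight.
    """
    target = np.asarray(target_dir, dtype=float)
    target = target / np.linalg.norm(target)
    best, best_score = None, -np.inf
    for omega, theta, radius in candidates:
        f_rot = mass * omega**2 * radius * np.array([np.cos(theta), np.sin(theta), 0.0])
        f_net = f_rot + mass * GRAVITY
        score = float(np.dot(f_net / np.linalg.norm(f_net), target))
        if score > best_score:
            best, best_score = (omega, theta), score
    return best

# Illustrative call: aim the sensation roughly forward (+y).
setting = best_rotation_setting([0.0, 1.0, 0.0],
                                [(50.0, 0.0, 0.02), (80.0, np.pi / 2, 0.02)])
```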


Author(s):  
Rakesh Gupta ◽  
David Zeltzer

Abstract This work investigates whether estimates of ease of part handling and part insertion can be provided by multimodal simulation using virtual environment (VE) technology, rather than by conventional table-based methods such as Boothroyd and Dewhurst charts. To do this, a unified physically based model has been developed for modeling dynamic interactions among virtual objects and haptic interactions between the human designer and the virtual objects. This model is augmented with auditory events in a multimodal VE system called the “Virtual Environment for Design for Assembly” (VEDA). Currently these models are 2D in order to preserve interactive update rates, but we expect the results to generalize to 3D models. VEDA has been used to evaluate the feasibility and advantages of using multimodal virtual environments as a design tool for manual assembly. The designer sees a visual representation of the objects and can interactively sense and manipulate virtual objects through haptic interface devices with force feedback. He/she can feel these objects and hear sounds when there are collisions among the objects. Objects can be interactively grasped and assembled with other parts of the assembly to prototype new designs and perform Design for Assembly analysis. Experiments have been conducted with human subjects to investigate whether multimodal virtual environments can replicate experiments linking increases in assembly time to increases in task difficulty. In particular, the effects of clearance, friction, chamfers, and distance of travel on handling and insertion times have been compared in real and virtual environments for a peg-in-hole assembly task. In addition, the effects of degrading or removing the different modes (visual, auditory, and haptic) on different phases of manual assembly have been examined.
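
For context, contact forces in such haptic assembly simulations are often rendered with a spring-damper penalty model evaluated at the haptic servo rate; the sketch below shows that generic technique and is not claimed to be VEDA's actual formulation.

```python
def penalty_contact_force(penetration_depth, approach_velocity,
                          stiffness=2000.0, damping=5.0):
    """Spring-damper penalty force for a single contact, in newtons.

    penetration_depth: overlap between peg and hole wall (m), >= 0
    approach_velocity: contact-normal closing velocity (m/s)
    The force pushes the parts apart and is sent to the haptic device each
    servo cycle (typically ~1 kHz) so the contact feels stiff and stable.
    """
    if penetration_depth <= 0.0:
        return 0.0
    return stiffness * penetration_depth + damping * approach_velocity
```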


2014 ◽  
Vol 23 (3) ◽  
pp. 320-338 ◽  
Author(s):  
Clemens Schuwerk ◽  
Giulia Paggetti ◽  
Rahul Chaudhari ◽  
Eckehard Steinbach

Shared Haptic Virtual Environments (SHVEs) are often realized using a client–server communication architecture. In this case, a centralized physics engine running on the server simulates the object states in the virtual environment (VE). At the clients, a copy of the VE is maintained and used to render the interaction forces locally, which are then displayed to the human through a haptic device. While this architecture ensures stability in the coupling between the haptic device and the virtual environment, it requires a high number of object-state update packets to be transmitted from the server to the clients to achieve satisfactory force-feedback quality. In this paper, we propose a perception-based traffic control scheme that reduces the number of object-state update packets by allowing a variable but imperceptible object-state error at the client. To balance packet-rate reduction against force-rendering fidelity, our approach uses different error thresholds for the visual and haptic modalities, where the haptic thresholds are determined by psychophysical experiments in this paper. Force-feedback quality is evaluated with subjective tests for a variety of traffic control parameter settings. The results show that the proposed scheme reduces the packet rate by up to 97% compared to communication approaches that work without data reduction, while not significantly degrading haptic feedback quality. Finally, it outperforms the well-known dead-reckoning scheme commonly used in visual-only distributed applications.
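
The core of such a perception-based scheme is a deadband test: an object-state update is transmitted only when the error the client would otherwise accumulate exceeds a modality-dependent threshold. The sketch below illustrates that general idea with made-up threshold values; the paper's actual thresholds come from its psychophysical experiments.

```python
import numpy as np

class DeadbandUpdater:
    """Transmit an object-state update only when the client-side error would
    exceed a perceptual threshold; the haptic threshold (applied while the
    client is in contact with the object) differs from the visual one.
    Threshold values here are illustrative, not the paper's measured ones."""

    def __init__(self, haptic_threshold=0.010, visual_threshold=0.002):
        self.haptic_threshold = haptic_threshold
        self.visual_threshold = visual_threshold
        self.last_sent = None

    def maybe_send(self, position, client_in_contact):
        position = np.asarray(position, dtype=float)
        threshold = self.haptic_threshold if client_in_contact else self.visual_threshold
        if self.last_sent is None or np.linalg.norm(position - self.last_sent) > threshold:
            self.last_sent = position.copy()
            return True    # send an object-state update packet
        return False       # suppress the packet; the client keeps its last state
```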


2021 ◽  
Author(s):  
Koki Watanabe ◽  
Fumihiko Nakamura ◽  
Kuniharu Sakurada ◽  
Theophilus Teo ◽  
Maki Sugimoto

2000 ◽  
Author(s):  
Michael L. Turner ◽  
Ryan P. Findley ◽  
Weston B. Griffin ◽  
Mark R. Cutkosky ◽  
Daniel H. Gomez

Abstract This paper describes the development of a system for dexterous telemanipulation and presents the results of tests involving simple manipulation tasks. The user wears an instrumented glove augmented with an arm-grounded haptic feedback apparatus. A linkage attached to the user's wrist measures gross motions of the arm. The movements of the user are transferred to a two-fingered dexterous robot hand mounted on the end of a 4-DOF industrial robot arm. Forces measured at the robot fingers can be transmitted back to the user via the haptic feedback apparatus. The results obtained in block-stacking and object-rolling experiments indicate that the addition of force feedback did not improve the speed of task execution; in fact, in some cases the presence of incomplete force information was detrimental to performance speed compared to no force information. There are indications, however, that the presence of force feedback did aid task learning.
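
A minimal sketch of the force-reflection step in such a system, with illustrative scaling and safety limits rather than the values used by the authors: forces measured at the robot fingers are scaled and clamped before being commanded to the arm-grounded haptic feedback apparatus.

```python
def reflect_force(measured_force_n, scale=0.6, max_force_n=8.0):
    """Map a force measured at a robot finger (N) to a haptic display command.

    The measured force is scaled down and clamped to a safe maximum before
    being sent to the user's arm-grounded feedback apparatus. The scale and
    limit here are illustrative placeholders.
    """
    commanded = scale * measured_force_n
    return max(-max_force_n, min(max_force_n, commanded))
```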


2005 ◽  
Vol 128 (2) ◽  
pp. 216-226 ◽  
Author(s):  
M. A. Vitrani ◽  
J. Nikitczuk ◽  
G. Morel ◽  
C. Mavroidis ◽  
B. Weinberg

Force-feedback mechanisms have been designed to simplify and enhance the human-vehicle interface. The increase in secondary controls within vehicle cockpits has created a desire for a simpler, more efficient human-vehicle interface. By consolidating various controls into a single haptic feedback control device, information can be transmitted to the operator without requiring the driver's visual attention. In this paper, experimental closed-loop torque control of resistive actuators based on electro-rheological fluids (ERFs) is performed for haptic applications. ERFs are liquids that respond mechanically to electric fields by electroactively changing their properties, such as viscosity and shear stress. Using the electrically controlled rheological properties of ERFs, we developed resistive actuators for haptic devices that can resist human operator forces in a controlled and tunable fashion. In this study, the ERF resistive-actuator analytical model is derived and experimentally verified, and accurate closed-loop torque control is achieved using a non-linear proportional-integral controller with a feedforward loop.
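
A minimal discrete-time sketch of closed-loop torque control with a PI term plus feedforward, using illustrative gains and a placeholder feedforward map rather than the identified ERF model and non-linear controller from the paper.

```python
class TorquePIWithFeedforward:
    """Discrete-time PI torque controller with a feedforward term.

    The feedforward maps the desired torque to an actuation command (for an
    ERF actuator this would come from the fluid's torque-versus-field model);
    the PI term corrects the residual error. Gains and the feedforward map
    are illustrative placeholders, not the identified values from the paper."""

    def __init__(self, kp=1.5, ki=40.0, dt=0.001, feedforward=lambda tau: 0.2 * tau):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.feedforward = feedforward
        self.integral = 0.0

    def step(self, desired_torque, measured_torque):
        error = desired_torque - measured_torque
        self.integral += error * self.dt
        return self.feedforward(desired_torque) + self.kp * error + self.ki * self.integral
```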

