Virtual Reality: A Tool for Assembly?

2000 ◽  
Vol 9 (5) ◽  
pp. 486-496 ◽  
Author(s):  
A. C. Boud ◽  
C. Baber ◽  
S. J. Steiner

This paper reports on an investigation into the usability of virtual reality (VR) for a manufacturing application such as the assembly of a number of component parts into a final product. Before the assembly task itself is considered, the investigation explores the use of VR for training human assembly operators and compares the findings with conventionally adopted techniques for parts assembly. The investigation highlighted several limitations of VR technology, the most significant being the lack of haptic feedback provided by current input devices for virtual environments. To address this, an instrumented object (IO) was employed, which the user could pick up and manipulate as the representation of a component of a product to be assembled. The reported findings indicate that object manipulation times are superior when IOs are employed as the interaction device, and that IO devices could therefore be adopted in virtual environments (VEs) to provide haptic feedback for diverse applications and, in particular, for assembly task planning.

2020 ◽  
Vol 11 (1) ◽  
pp. 99-106
Author(s):  
Marián Hudák ◽  
Štefan Korečko ◽  
Branislav Sobota

Recent advances in the field of web technologies, including increasing support for virtual reality hardware, have allowed for shared virtual environments, reachable by simply entering a URL in a browser. One contemporary solution that provides such a shared virtual reality is LIRKIS Global Collaborative Virtual Environments (LIRKIS G-CVE). It is a web-based software system, built on top of the A-Frame and Networked-Aframe frameworks. This paper describes LIRKIS G-CVE and introduces its two original components. The first is the Smart-Client Interface, which turns smart devices, such as smartphones and tablets, into input devices. The advantage of this component over the standard means of user input is demonstrated by a series of experiments. The second is the Enhanced Client Access layer, which provides access to the positions and orientations of clients that share a virtual environment. The layer also stores a history of connected clients and provides limited control over them. The paper also outlines an ongoing experiment aimed at evaluating LIRKIS G-CVE in the area of virtual prototype testing.
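The idea behind a smart-client interface of this kind is that the phone streams its orientation and touch state to the server, which maps them onto a controller entity in the shared scene. A minimal sketch of the server-side decoding step, in Python with a purely hypothetical message format (the field names and the `decode_smart_client_message` helper are assumptions, not the actual LIRKIS G-CVE protocol):

```python
import json

def decode_smart_client_message(raw: str) -> dict:
    """Decode one hypothetical smart-client input message.

    The phone reports its orientation (alpha/beta/gamma, in degrees,
    as exposed by the browser's deviceorientation event) plus a touch
    flag; the server would map these onto a controller entity in the
    shared scene.
    """
    msg = json.loads(raw)
    return {
        "client_id": msg["id"],
        "orientation": (msg["alpha"], msg["beta"], msg["gamma"]),
        "touching": bool(msg.get("touch", 0)),
    }

sample = '{"id": "phone-42", "alpha": 90.0, "beta": 10.5, "gamma": -3.0, "touch": 1}'
state = decode_smart_client_message(sample)
```

In a browser-based system such as this one, the actual client code would run in JavaScript; the sketch only illustrates the shape of the data that turns a smartphone into an input device.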


2020 ◽  
Vol 6 (3) ◽  
pp. 127-130
Author(s):  
Max B. Schäfer ◽  
Kent W. Stewart ◽  
Nico Lösch ◽  
Peter P. Pott

Access to systems for robot-assisted surgery is limited due to high costs. To enable widespread use, numerous issues have to be addressed to improve and/or simplify their components. Current systems commonly use universal linkage-based input devices, and only a few application-oriented and specialized designs are in use. A versatile virtual reality controller is proposed as an alternative input device for the control of a seven-degree-of-freedom articulated robotic arm. The real-time capabilities of the setup, which replicates a system for robot-assisted teleoperated surgery, are investigated to assess suitability. Image-based assessment showed a considerable system latency of 81.7 ± 27.7 ms. However, due to its versatility, the virtual reality controller is a promising alternative to current input devices for research on medical telemanipulation systems.
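An image-based latency figure of the kind reported here (mean ± standard deviation) comes from timing the gap between each input command and the first video frame in which the robot visibly moves. A minimal sketch of that per-trial computation, with hypothetical timestamps (the function name and data are illustrative, not the authors' pipeline):

```python
from statistics import mean, stdev

def latency_stats(command_times_ms, first_motion_times_ms):
    """Per-trial latency: elapsed time between issuing an input command
    and the first video frame in which the robot arm visibly moves.
    Returns (mean, sample standard deviation) in milliseconds."""
    latencies = [m - c for c, m in zip(command_times_ms, first_motion_times_ms)]
    return mean(latencies), stdev(latencies)

# Hypothetical per-trial timestamps (milliseconds)
cmd = [0.0, 1000.0, 2000.0, 3000.0]
motion = [80.0, 1085.0, 2060.0, 3095.0]
avg, spread = latency_stats(cmd, motion)  # per-trial latencies: 80, 85, 60, 95
```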


Author(s):  
Shujie Deng ◽  
Julie A. Kirkby ◽  
Jian Chang ◽  
Jian Jun Zhang

The goal of this review is to illustrate the emerging use of multimodal virtual reality that can benefit learning-based games. The review begins with an introduction to multimodal virtual reality in serious games, and we provide a brief discussion of why the cognitive processes involved in learning and training are enhanced in immersive virtual environments. We first outline studies that have used eye tracking and haptic feedback independently in serious games, and then review innovative applications that have combined eye tracking and haptic devices to provide applicable multimodal frameworks for learning-based games. Finally, some general conclusions are identified and clarified in order to advance current understanding of multimodal serious game production and to explore possible areas for new applications.


2009 ◽  
Vol 18 (1) ◽  
pp. 39-53 ◽  
Author(s):  
Anatole Lécuyer

This paper presents a survey of the main results obtained in the field of “pseudo-haptic feedback”: a technique meant to simulate haptic sensations in virtual environments using visual feedback and properties of human visuo-haptic perception. Pseudo-haptic feedback uses vision to distort haptic perception and verges on haptic illusion. It has been used to simulate various haptic properties such as the stiffness of a virtual spring, the texture of an image, or the mass of a virtual object. This paper describes several experiments in which these haptic properties were simulated. It assesses the definition and properties of pseudo-haptic feedback, and also describes several virtual reality applications in which pseudo-haptic feedback has been successfully implemented, such as a virtual environment for vocational training of milling machine operations, or a medical simulator for training in regional anesthesia procedures.
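The core mechanism behind pseudo-haptic stiffness is a manipulation of the control/display ratio: for the same physical input, a stiffer virtual spring is rendered with less on-screen deformation, which users tend to perceive as greater resistance. A minimal sketch of this idea, assuming a simple linear scaling (the function and parameter names are illustrative):

```python
def displayed_compression(input_mm: float, virtual_stiffness: float,
                          reference_stiffness: float = 1.0) -> float:
    """Pseudo-haptic spring: scale on-screen deformation by a
    control/display ratio. A stiffer virtual spring shows less visual
    compression for the same physical input, which vision then feeds
    back into the user's haptic percept."""
    cd_ratio = reference_stiffness / virtual_stiffness
    return input_mm * cd_ratio

soft = displayed_compression(10.0, 0.5)   # soft spring: motion exaggerated on screen
stiff = displayed_compression(10.0, 2.0)  # stiff spring: motion damped on screen
```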


2007 ◽  
Vol 16 (3) ◽  
pp. 293-306 ◽  
Author(s):  
Gregorij Kurillo ◽  
Matjaž Mihelj ◽  
Marko Munih ◽  
Tadej Bajd

In this article we present a new isometric input device for multi-fingered grasping in virtual environments. The device was designed to simultaneously assess forces applied by the thumb, index, and middle finger. A mathematical model of grasping, adopted from the analysis of multi-fingered robot hands, was applied to achieve multi-fingered interaction with virtual objects. We used the concept of visual haptic feedback, where the user was presented with visual cues to acquire haptic information from the virtual environment. The virtual object responded dynamically to the forces and torques applied by the three fingers. The application of the isometric finger device for multi-fingered interaction is demonstrated in four tasks aimed at the rehabilitation of hand function in stroke patients. The tasks include opening the combination lock on a safe, filling and pouring water from a glass, muscle strength training with an elastic torus, and a force tracking task. The training tasks were designed to train patients' grip force coordination and increase muscle strength through repetitive exercises. The presented virtual reality system was evaluated in a group of healthy subjects and two post-stroke patients (one early post-stroke and one chronic) to obtain overall performance results. The healthy subjects demonstrated consistent performance with the finger device after the first few trials. The two post-stroke patients completed all four tasks, albeit with much lower performance scores than the healthy subjects. The results of the preliminary assessment suggest that the patients could further improve their performance through virtual reality training.
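In a grasp model of this kind, the virtual object's motion is driven by the net wrench (resultant force and torque) of the three fingertip forces about the object's centre. A minimal sketch under that assumption (contact points and forces are hypothetical; the real model from multi-fingered robot hand analysis also handles contact constraints and friction):

```python
def cross(a, b):
    """3-D cross product, used to compute the torque r x F of each
    fingertip force about the object's centre."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def net_wrench(contact_points, finger_forces):
    """Sum the fingertip forces and their torques to obtain the net
    wrench that drives the virtual object's dynamics."""
    force = [0.0, 0.0, 0.0]
    torque = [0.0, 0.0, 0.0]
    for p, f in zip(contact_points, finger_forces):
        for k in range(3):
            force[k] += f[k]
        tau = cross(p, f)
        for k in range(3):
            torque[k] += tau[k]
    return force, torque

# Thumb opposing index and middle finger: a balanced three-finger grasp
contacts = [(-0.03, 0.0, 0.0), (0.03, 0.01, 0.0), (0.03, -0.01, 0.0)]
forces = [(2.0, 0.0, 0.0), (-1.0, 0.0, 0.0), (-1.0, 0.0, 0.0)]
f_net, t_net = net_wrench(contacts, forces)  # both sums cancel: object is in equilibrium
```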


2018 ◽  
Vol 8 (1) ◽  
pp. 1-9 ◽  
Author(s):  
Marián Hudák ◽  
Štefan Korečko ◽  
Branislav Sobota

LIRKIS CAVE is a unique immersive virtual reality installation with a compact cylinder-based construction and high-quality stereoscopic video output rendered on twenty 55” Full HD LCD panels. While the video output of the CAVE provides a high level of immersion in a virtual world, its original peripheral-support implementation suffered from a limited number of supported devices and certain performance issues. In this paper we describe a new, distributed implementation of peripheral device support for the LIRKIS CAVE, which solves the performance issues and eases the integration of new input devices into the CAVE. We also present the successful integration of a special input device, the Myo armband, which allows natural and unobtrusive gesture-based control of virtual environments. The integration includes a newly developed control and monitoring application for the Myo, called MLCCS, whose use is not limited to CAVE systems or virtual reality applications.
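Gesture-based control of this kind typically reduces to a dispatch table from recognised armband poses to virtual-environment commands. A minimal sketch, assuming a purely hypothetical mapping (the pose names follow the Myo's standard gesture set, but the commands and the `handle_gesture` helper are illustrative, not the MLCCS scheme):

```python
# Hypothetical gesture-to-command mapping; not the actual MLCCS scheme.
GESTURE_COMMANDS = {
    "fist": "grab",
    "fingers_spread": "release",
    "wave_in": "rotate_left",
    "wave_out": "rotate_right",
    "double_tap": "toggle_menu",
}

def handle_gesture(gesture: str) -> str:
    """Map a recognised armband pose to a virtual-environment command;
    unrecognised poses (including 'rest') leave the scene unchanged."""
    return GESTURE_COMMANDS.get(gesture, "idle")
```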


2020 ◽  
Author(s):  
Madhur Mangalam ◽  
Mathew Yarossi ◽  
Mariusz P. Furmanek ◽  
Eugene Tunik

Virtual reality (VR) has garnered much interest as a training environment for motor skill acquisition, including for neurological rehabilitation of the upper extremities. While the focus has been on gross upper limb motion, VR applications that involve reaching for, and interacting with, virtual objects are growing. The absence of true haptics in VR when it comes to hand-object interactions raises a fundamentally important question: can haptic-free immersive virtual environments (hf-VEs) support naturalistic coordination of reach-to-grasp movements? This issue has been grossly understudied, yet it is of significant importance to the development and application of VR across a number of sectors. In a previous study (Furmanek et al. 2019), we reported that reach-to-grasp movements are similarly coordinated in the physical environment (PE) and in hf-VE. The most noteworthy difference was that the closure phase—which begins at maximum aperture and lasts through the end of the movement—was longer in hf-VE than in PE, suggesting that different control laws might govern the initiation of closure in the two environments. To investigate this, we reanalyzed data from Furmanek et al. (2019), in which the participants reached to grasp three differently sized physical objects, and matching 3D virtual object renderings, placed at three different locations. Our analysis revealed two key findings pertaining to the initiation of closure in PE and hf-VE. First, the respective control laws governing the initiation of aperture closure in PE and hf-VE both included state estimates of transport velocity and acceleration, supporting a general unified control scheme for implementing reach-to-grasp across physical and virtual environments. Second, aperture was less informative to the control law in hf-VE.
We suggest that the latter was likely because transport velocity at closure onset and aperture at closure onset were less independent in hf-VE than in PE, ultimately resulting in aperture at closure onset having a weaker influence on the initiation of closure. In this way, the excess time and muscular effort needed to actively bring the fingers to a stop at the interface of a virtual object was factored into the control law governing the initiation of closure in hf-VE. Crucially, this control law remained applicable, albeit with different weights in hf-VE, despite the absence of terminal haptic feedback and potential perceptual differences.
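A control law of the kind described, one that triggers aperture closure once a weighted combination of state estimates crosses a threshold, can be sketched as follows. All weights and the threshold are illustrative assumptions, chosen only to mirror the qualitative finding that aperture carries less weight in hf-VE than in PE:

```python
def closure_triggered(velocity, acceleration, aperture, weights, threshold=1.0):
    """Evaluate a linear control law over state estimates at one time
    step; aperture closure begins once the weighted sum crosses the
    threshold. Weights and threshold are purely illustrative."""
    w_vel, w_acc, w_ap = weights
    score = w_vel * velocity + w_acc * acceleration + w_ap * aperture
    return score >= threshold

# Illustrative weights: aperture contributes more in PE than in hf-VE
PE_WEIGHTS = (1.0, 0.5, 4.0)
HFVE_WEIGHTS = (1.2, 0.6, 1.0)

state = (0.6, 0.2, 0.08)  # transport velocity, acceleration, grip aperture
pe_closes = closure_triggered(*state, PE_WEIGHTS)
hfve_closes = closure_triggered(*state, HFVE_WEIGHTS)
```

With these hypothetical numbers the same hand state initiates closure in PE but not yet in hf-VE, consistent with the longer closure phase observed without terminal haptic feedback.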


2021 ◽  
Vol 6 (1) ◽  
Author(s):  
Felix Heimann ◽  
Giulio Barteselli ◽  
André Brand ◽  
Andreas Dingeldey ◽  
Laszlo Godard ◽  
...  

We present a summary of the development and clinical use of two custom-designed high-fidelity virtual-reality simulator training platforms. This simulator development program began in 2016 to support the phase III clinical trial Archway (ClinicalTrials.gov identifier NCT03677934), intended to evaluate the Port Delivery System (PDS) developed by Genentech Inc., and has also been used to support additional clinical trials. The two simulators address two specific ophthalmic surgical procedures required for the successful use of the PDS and provide state-of-the-art physical simulation models and graphics. The simulators incorporate customized active haptic feedback input devices that approximate different hand pieces, including a custom hand piece specifically designed for PDS implantation. We further describe the specific challenges of the procedure and the development of corresponding training strategies realized within the simulation platform.


Author(s):  
Sarah Beadle ◽  
Randall Spain ◽  
Benjamin Goldberg ◽  
Mahdi Ebnali ◽  
Shannon Bailey ◽  
...  

Virtual environments and immersive technologies are growing in popularity for human factors purposes. Whether it is training in a low-risk environment or using simulated environments to test future automated vehicles, virtual environments show promise for the future of our field. The purpose of this session is to have current human factors practitioners and researchers demonstrate their immersive technologies. This is the eighth iteration of the “Me and My VE” interactive session. Presenters in this session will provide a brief introduction of their virtual reality, augmented reality, or virtual environment work before engaging with attendees in an interactive demonstration period. During this period, the presenters will each have a multimedia display of their immersive technology and will discuss their work and development efforts. The selected demonstrations cover issues of designing immersive interfaces, military and medical training, and using simulation to better understand complex tasks, and include a mix of government, industry, and academic work. Attendees will be virtually immersed in the technologies and research presented, allowing for interaction with the work being done in this field.


Author(s):  
Silvia Francesca Maria Pizzoli ◽  
Dario Monzani ◽  
Laura Vergani ◽  
Virginia Sanchini ◽  
Ketti Mazzocco

In recent years, virtual reality (VR) has been effectively employed in several settings, ranging from health care to leisure and gaming activities. A new application of virtual stimuli has appeared in social media: in the documentary ‘I met you’ from the South Korean broadcaster Munhwa Broadcasting, a mother experienced interacting with an avatar of her seven-year-old daughter, who had died four years earlier. We think that this new application of virtual stimuli should open a debate on its possible implications: it represents content related to grief, a dramatic and yet natural experience, and can have deep psychological impacts on fragile subjects placed in virtual environments. In the present work, possible side effects, as well as hypothetical therapeutic applications of VR for the treatment of mourning, are discussed.

