Designing Haptics: Improving a Virtual Reality Glove with Respect to Realism, Performance, and Comfort

2019 ◽  
Vol 13 (4) ◽  
pp. 453-463
Author(s):  
Daniel Shor ◽  
Bryan Zaaijer ◽  
Laura Ahsmann ◽  
Max Weetzel ◽  
...  

This design paper describes the development of a custom-built interface between a force-replicating virtual reality (VR) haptic interface glove and its user. The ability to convey haptic information – both kinematic and tactile – is a critical barrier in creating comprehensive simulations. Haptic interface gloves can convey haptic information, but often the haptic “signal” is diluted by sensory “noise,” miscuing the user’s brain. Our goal is to convey compelling interactions – such as grasping, squeezing, and pressing – with virtual objects by improving one such haptic interface glove, the SenseGlove, through a redesign of the user-glove interface, the soft glove. The redesign revolves around three critical design factors – comfort, realism, and performance – and three critical design areas – thimble/fingertip, palm, and haptic feedback. This paper introduces the redesign method and compares the two designs in a quantitative user study. The benefit of the improved soft glove is shown by a significant improvement in the design factors, quantified through the QUESI, NASA-TLX, and comfort questionnaires.

2005 ◽  
Vol 14 (3) ◽  
pp. 345-365 ◽  
Author(s):  
Sangyoon Lee ◽  
Gaurav Sukhatme ◽  
Gerard Jounghyun Kim ◽  
Chan-Mo Park

The problem of teleoperating a mobile robot using shared autonomy is addressed: An onboard controller performs close-range obstacle avoidance while the operator uses the manipulandum of a haptic probe to designate the desired speed and rate of turn. Sensors on the robot are used to measure obstacle-range information. A strategy is described for converting such range information into forces, which are reflected to the operator's hand via the haptic probe. This haptic information provides feedback to the operator in addition to imagery from a front-facing camera mounted on the mobile robot. Extensive experiments with a user population, both in virtual and in real environments, show that this added haptic feedback significantly improves operator performance, as well as presence, in several ways (reduced collisions, increased minimum distance between the robot and obstacles, etc.) without a significant increase in navigation time.
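The abstract does not spell out how range readings become reflected forces; a common choice in such systems is a repulsive force that grows as obstacles get closer. Below is a minimal sketch of one plausible inverse-distance scheme, not the authors' actual strategy; the function name, constants, and force profile are all illustrative assumptions.

```python
import numpy as np

def range_to_force(ranges, bearings, d_max=2.0, k=1.0):
    """Map obstacle range readings to a 2D repulsive force on the haptic probe.

    Each reading closer than d_max contributes a force pushing away from the
    obstacle, growing as the obstacle gets closer (inverse-distance profile).
    ranges: distances to obstacles; bearings: obstacle directions in radians.
    """
    force = np.zeros(2)
    for d, theta in zip(ranges, bearings):
        if d < d_max:
            magnitude = k * (1.0 / d - 1.0 / d_max)  # zero at d_max, grows as d -> 0
            # The force points opposite the obstacle direction (away from it).
            force -= magnitude * np.array([np.cos(theta), np.sin(theta)])
    return force
```

An obstacle dead ahead thus produces a force pushing the operator's hand backward, and obstacles beyond the cutoff distance contribute nothing.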


2007 ◽  
Vol 16 (3) ◽  
pp. 293-306 ◽  
Author(s):  
Gregorij Kurillo ◽  
Matjaž Mihelj ◽  
Marko Munih ◽  
Tadej Bajd

In this article we present a new isometric input device for multi-fingered grasping in virtual environments. The device was designed to simultaneously assess forces applied by the thumb, index, and middle finger. A mathematical model of grasping, adopted from the analysis of multi-fingered robot hands, was applied to achieve multi-fingered interaction with virtual objects. We used the concept of visual haptic feedback, where the user was presented with visual cues to acquire haptic information from the virtual environment. The virtual object responded dynamically to the forces and torques applied by the three fingers. The application of the isometric finger device to multi-fingered interaction is demonstrated in four tasks aimed at the rehabilitation of hand function in stroke patients. The tasks include opening the combination lock on a safe, filling and pouring water from a glass, muscle strength training with an elastic torus, and a force tracking task. The training tasks were designed to train patients' grip force coordination and increase muscle strength through repetitive exercises. The presented virtual reality system was evaluated in a group of healthy subjects and two post-stroke patients (early post-stroke and chronic) to obtain overall performance results. The healthy subjects demonstrated consistent performance with the finger device after the first few trials. The two post-stroke patients completed all four tasks, albeit with much lower performance scores than the healthy subjects. The results of the preliminary assessment suggest that the patients could further improve their performance through virtual reality training.
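In grasp models adopted from multi-fingered robot-hand analysis, the object's dynamic response is driven by the net force and torque (the wrench) that the fingertip contacts exert about the object's centre of mass. A minimal sketch of that computation follows; the function name and frames are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def net_wrench(contact_points, contact_forces):
    """Net force and torque exerted on a rigid virtual object by fingertip contacts.

    contact_points: (n, 3) fingertip positions relative to the object's centre of mass.
    contact_forces: (n, 3) forces measured at each fingertip.
    Returns the resultant force and the resultant torque (sum of p x f).
    """
    p = np.asarray(contact_points, dtype=float)
    f = np.asarray(contact_forces, dtype=float)
    force = f.sum(axis=0)                 # translational effect on the object
    torque = np.cross(p, f).sum(axis=0)   # rotational effect about the centre of mass
    return force, torque
```

A thumb and index finger squeezing with equal and opposite forces along the same line yields a zero wrench, i.e. a stable static grasp; any imbalance moves or rotates the virtual object.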


2019 ◽  
Author(s):  
David Harris ◽  
Gavin Buckingham ◽  
Mark Wilson ◽  
Samuel James Vine

Virtual reality (VR) is a promising tool for expanding the possibilities of psychological experimentation and implementing immersive training applications. Despite a recent surge in interest, there remains an inadequate understanding of how VR impacts basic cognitive processes. Due to the artificial presentation of egocentric distance cues in virtual environments, a number of cues to depth in the optic array are impaired or placed in conflict with each other. Moreover, realistic haptic information is all but absent from current VR systems. The resulting conflicts could impact not only the execution of motor skills in VR but raises deeper concerns about basic visual processing, and the extent to which virtual objects elicit neural and behavioural responses representative of real objects. In this brief review we outline how the novel perceptual environment of VR may affect vision for action, by shifting users away from a dorsal mode of control. Fewer binocular cues to depth, conflicting depth information and limited haptic feedback may all impair the specialised, efficient, online control of action characteristic of the dorsal stream. A shift from dorsal to ventral control of action may create a fundamental disparity between virtual and real-world skills that has important consequences for how we understand perception and action in the virtual world.


2021 ◽  
Vol 2 ◽  
Author(s):  
Pornthep Preechayasomboon ◽  
Eric Rombokas

We introduce Haplets, wearable, low-encumbrance, finger-worn, wireless haptic devices that provide vibrotactile feedback for hand tracking applications in virtual and augmented reality. Haplets are small enough to fit on the back of the fingers and fingernails while leaving the fingertips free for interacting with real-world objects. Through robust physically-simulated hands and low-latency wireless communication, Haplets can render haptic feedback in the form of impacts and textures, and supplement the experience with pseudo-haptic illusions. When used in conjunction with handheld tools, such as a pen, Haplets provide haptic feedback for otherwise passive tools in virtual reality, for example emulating friction and pressure-sensitivity. We present the design and engineering of the Haplets hardware, as well as the software framework for haptic rendering. As an example use case, we present a user study in which Haplets are used to improve the line width accuracy of a pressure-sensitive pen in a virtual reality drawing task. We also demonstrate Haplets used during manipulation of objects and during a painting and sculpting scenario in virtual reality. Haplets, at the very least, can be used as a prototyping platform for haptic feedback in virtual reality.
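Two of the mappings described, pressure-sensitivity for a passive pen and velocity-scaled impact rendering, can be sketched as simple clamped linear transfer functions. The functions and constants below are illustrative assumptions, not the Haplets software framework itself.

```python
def line_width_from_pressure(pressure, w_min=0.5, w_max=4.0):
    """Map normalised pen pressure in [0, 1] to a stroke width, clamping out-of-range input."""
    p = min(max(pressure, 0.0), 1.0)
    return w_min + p * (w_max - w_min)

def impact_amplitude(velocity, v_max=2.0):
    """Scale vibrotactile impact amplitude in [0, 1] with contact velocity (m/s),
    so harder virtual impacts produce stronger pulses, saturating at v_max."""
    return min(abs(velocity) / v_max, 1.0)
```

With such a mapping, the vibrotactile pulse accompanying each pen stroke gives the user a cue correlated with the pressure-to-width transfer, which is the kind of feedback the drawing study measures.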


2012 ◽  
Author(s):  
R. A. Grier ◽  
H. Thiruvengada ◽  
S. R. Ellis ◽  
P. Havig ◽  
K. S. Hale ◽  
...  

Author(s):  
Robin Horst ◽  
Ramtin Naraghi-Taghi-Off ◽  
Linda Rau ◽  
Ralf Dörner

Every Virtual Reality (VR) experience has to end at some point. While concepts already exist for designing transitions that bring users into a virtual world, their return to the physical world should be considered as well, as it is part of the overall VR experience. We call the latter outro-transitions. In contrast to the offboarding of VR experiences, which takes place after taking off VR hardware (e.g., HMDs), outro-transitions are still part of the immersive experience. Such transitions occur more frequently when VR is experienced periodically and only for short times. One example where transition techniques are necessary is an auditorium where the audience has individual VR headsets available, for example, in a presentation using PowerPoint slides together with brief VR experiences sprinkled between the slides. The audience must put on and take off HMDs frequently, every time they switch from common presentation media to VR and back. In such a one-to-many VR scenario, it is challenging for presenters to manage the process of multiple people coming back from the virtual to the physical world at once. Direct communication may be constrained while VR users are wearing an HMD. Presenters need a tool to signal to them that they should stop the VR session and switch back to the slide presentation. Virtual visual cues can help presenters or other external entities (e.g., automated/scripted events) request that VR users end a VR session. Such transitions become part of the overall experience of the audience and thus must be considered. This paper explores visual cues as outro-transitions from a virtual world back to the physical world and their utility in enabling presenters to request that VR users end a VR session. We propose and investigate eight transition techniques. We focus on their usage in short consecutive VR experiences and include both established and novel techniques.
The transition techniques are evaluated within a user study to draw conclusions on the effects of outro-transitions on the overall experience and presence of participants. We also take into account how long an outro-transition may take and how comfortable our participants perceived the proposed techniques to be. The study points out that participants preferred non-interactive outro-transitions over interactive ones, except for a transition that allowed VR users to communicate with presenters. Furthermore, we explore the presenter-VR user relation within a presentation scenario that uses short VR experiences. The study indicates that involving presenters who can stop a VR session was not only acceptable but preferred by our participants.


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 26
Author(s):  
David González-Ortega ◽  
Francisco Javier Díaz-Pernas ◽  
Mario Martínez-Zarzuela ◽  
Míriam Antón-Rodríguez

Driver’s gaze information can be crucial in driving research because of its relation to driver attention. In particular, the inclusion of gaze data in driving simulators broadens the scope of research studies, as drivers’ gaze patterns can be related to their features and performance. In this paper, we present two gaze region estimation modules integrated in a driving simulator. One uses the 3D Kinect device and the other uses the virtual reality Oculus Rift device. The modules detect the region, out of the seven into which the driving scene was divided, at which the driver is gazing in every processed frame of the route. Four methods for gaze estimation were implemented and compared, each learning the relation between gaze displacement and head movement. Two are simpler, point-based methods that try to capture this relation directly, and two are based on classifiers such as an MLP and an SVM. Experiments were carried out with 12 users who drove the same scenario twice, each time with a different visualization display: first with a big screen and later with the Oculus Rift. On the whole, the Oculus Rift outperformed the Kinect as hardware for gaze estimation. The Oculus-based gaze region estimation method with the highest performance achieved an accuracy of 97.94%. The information provided by the Oculus Rift module enriches the driving simulator data and makes possible a multimodal analysis of driving performance, beyond the immersion and realism obtained with the virtual reality experience provided by the Oculus.
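The abstract's simpler point-based methods learn a representative head-pose point per gaze region and then assign each frame to the nearest one. The class below is a hypothetical nearest-centroid sketch of that idea, not the authors' code; the feature layout (e.g. head yaw/pitch per frame) and naming are assumptions.

```python
import numpy as np

class NearestRegionClassifier:
    """Point-based gaze-region estimator: store a mean head-pose vector per
    region from calibration frames, then assign each new frame to the region
    whose stored point is nearest in Euclidean distance."""

    def fit(self, head_poses, regions):
        X = np.asarray(head_poses, dtype=float)
        y = np.asarray(regions)
        self.labels_ = np.unique(y)
        # One centroid per gaze region, averaged over its calibration frames.
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.labels_])
        return self

    def predict(self, head_poses):
        X = np.atleast_2d(np.asarray(head_poses, dtype=float))
        # Pairwise distances from each frame to each region centroid.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.labels_[d.argmin(axis=1)]
```

The MLP- and SVM-based variants mentioned in the abstract replace this nearest-point rule with a trained decision boundary over the same head-movement features.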


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3673
Author(s):  
Stefan Grushko ◽  
Aleš Vysocký ◽  
Petr Oščádal ◽  
Michal Vocetka ◽  
Petr Novák ◽  
...  

In a collaborative scenario, the communication between humans and robots is a fundamental aspect of achieving good efficiency and ergonomics in task execution. Much research has been conducted on enabling a robot system to understand and predict human behaviour, allowing the robot to adapt its motion to avoid collisions with human workers. When the production task has a high degree of variability, the robot’s movements can be difficult to predict, leading to a feeling of anxiety in the worker when the robot changes its trajectory and approaches, since the worker has no information about the planned movement of the robot. Additionally, without information about the robot’s movement, the human worker cannot effectively plan their own activity without forcing the robot to constantly replan its movement. We propose a novel approach to communicating the robot’s intentions to a human worker. The improvement to the collaboration is achieved by introducing haptic feedback devices, whose task is to notify the human worker about the currently planned trajectory of the robot and changes in its status. To verify the effectiveness of the developed human-machine interface under the conditions of a shared collaborative workspace, a user study was designed and conducted among 16 participants, whose objective was to accurately recognise the goal position of the robot during its movement. Data collected during the experiment included both objective and subjective parameters. Statistically significant results indicated that all the participants improved their task completion time by over 45% and were generally more subjectively satisfied when completing the task with the haptic feedback devices equipped. The results also suggest the usefulness of the developed notification system, since it improved users’ awareness of the robot’s motion plan.
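One simple way to encode "which goal the robot is heading for" and "what its status is" on a wearable haptic device is to dedicate one vibration motor per goal and a distinct pulse pattern per status. The sketch below is a hypothetical encoding under those assumptions; the motor layout, status names, and timings are illustrative, not the interface developed in the paper.

```python
def notification_pattern(goal_id, robot_state, n_motors=3):
    """Select which vibration motor to drive and its pulse pattern.

    goal_id selects the motor mapped to that goal position (wrapping if there
    are more goals than motors); robot_state picks a distinguishable pulse
    pattern as (on_ms, off_ms): slow pulses while moving, rapid pulses while
    replanning, silence when stopped.
    """
    patterns = {
        'moving': (200, 800),      # slow, calm pulsing
        'replanning': (100, 100),  # rapid pulsing signals a trajectory change
        'stopped': (0, 0),         # no vibration
    }
    motor = goal_id % n_motors
    return motor, patterns[robot_state]
```

Distinguishable patterns matter here: the study's recognition task requires participants to identify the goal position from the haptic cue alone while the robot is still moving.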

