Effect of Haptic Feedback on the Perceived Size of a Virtual Object

IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 83673-83681 ◽  
Author(s):  
Jaeyoung Park ◽  
Ilhwan Han ◽  
Woochan Lee


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Gyuwon Kim ◽  
Donghyun Hwang ◽  
Jaeyoung Park

As touch screen technologies have advanced, the digital stylus has become one of the essential accessories for a smart device. However, most digital styluses so far provide only limited tactile feedback to the user. We therefore focused on this limitation and noted the potential of a digital stylus to offer the sensation of realistic interaction with virtual environments on a touch screen using a 2.5D haptic system. We developed a haptic stylus with a shape memory alloy (SMA) actuator and a 2.5D haptic rendering algorithm that provides lateral skin-stretch feedback mimicking the interaction force between the fingertip and a stylus probing over a bumpy surface. We conducted two psychophysical experiments to evaluate the effect of 2.5D haptic feedback on the perception of virtual object geometry. Experiment 1 investigated the human perception of virtual bump size felt via the proposed lateral skin-stretch stylus, with a vibrotactile stylus as reference. Experiment 2 tested the participants' ability to count the number of virtual bumps rendered via the two types of haptic styluses. The results of Experiment 1 indicate that participants perceived the size of virtual bumps significantly more sensitively with the lateral skin-stretch stylus than with the vibrotactile stylus. Similarly, participants counted the number of virtual bumps significantly more accurately with the lateral skin-stretch stylus than with the vibrotactile stylus. A common result of the two experiments is a significantly longer mean trial time for the skin-stretch stylus than for the vibrotactile stylus.
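The 2.5D rendering idea described above (lateral skin stretch cueing surface geometry) can be sketched as commanding stretch proportional to the local surface slope under the stylus tip. The Gaussian bump profile, gain, and units below are illustrative assumptions, not the paper's actual rendering model:

```python
import math

def bump_height(x, center=0.0, width=10.0, amplitude=2.0):
    """Height (mm) of a Gaussian virtual bump; illustrative profile, not the paper's."""
    return amplitude * math.exp(-((x - center) / width) ** 2)

def lateral_stretch_command(x, dx=1e-3, gain=1.0):
    """Command lateral skin stretch proportional to the surface slope dh/dx
    at the stylus position. The sign encodes direction: stretch one way
    while probing uphill, the other way going downhill."""
    slope = (bump_height(x + dx) - bump_height(x - dx)) / (2 * dx)
    return gain * slope
```

A vibrotactile rendering, by contrast, would typically map the same geometry to vibration amplitude, discarding the directional component that skin stretch preserves.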


2011 ◽  
Vol 2 (2) ◽  
pp. 1
Author(s):  
Frank Steinicke

The mission of the Immersive Media Group (IMG) is to develop virtual locomotion user interfaces that allow humans to experience arbitrary 3D environments by means of the natural walking metaphor. Traveling through immersive virtual environments (IVEs) by real walking is an important way to increase the naturalness of virtual reality (VR)-based interaction. However, the size of the virtual world often differs from the size of the tracked lab space, so a straightforward implementation of omni-directional and unlimited walking is not possible. Redirected walking is one concept to address this issue by inconspicuously guiding the user along a physical path that may differ from the path the user perceives in the virtual world. For example, intentionally rotating the virtual camera to one side causes the user to unknowingly compensate by walking on a circular arc in the opposite direction. In the scope of the LOCUI project, which is funded by the German Research Foundation, we analyze how gains of locomotor speed, turns, and curvature can gradually alter the physical trajectory with respect to the path perceived in the virtual world without users noticing any discrepancy. Users can thus be guided to avoid collisions with physical obstacles (e.g., lab walls), or guided to arbitrary locations in the physical space. For example, if the user approaches a virtual object, she can be guided to a real proxy prop that is registered to and aligned with its virtual counterpart. Hence, the user can interact with a virtual object by touching the corresponding real-world proxy prop, which provides haptic feedback. Based on the results of psychophysical experiments, we plan to develop such a user interface, with which it becomes possible to intuitively interact with any virtual object by touching registered real-world props.
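The curvature-gain mechanism described above can be sketched as a per-frame yaw injection: rotating the scene slightly each frame makes the user compensate and walk a physical arc while perceiving a straight virtual path. The 22 m radius is a commonly cited detection-threshold value used here only for illustration, not a result from this article:

```python
def redirect_yaw(user_speed, dt, curvature_radius=22.0):
    """Per-frame camera yaw offset (radians) for redirected walking.
    The user, compensating for the injected rotation, ends up walking a
    physical arc of the given radius while the virtual path stays straight."""
    arc_length = user_speed * dt          # distance walked this frame
    return arc_length / curvature_radius  # angle subtended on the arc

# Walking at 1 m/s for one second at 90 Hz accumulates 1/22 rad of redirection:
total_yaw = sum(redirect_yaw(1.0, 1.0 / 90.0) for _ in range(90))
```

Keeping the per-frame offset below the user's detection threshold is what makes the manipulation inconspicuous; the gain analysis in LOCUI concerns exactly where that threshold lies.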


2009 ◽  
Vol 18 (1) ◽  
pp. 39-53 ◽  
Author(s):  
Anatole Lécuyer

This paper presents a survey of the main results obtained in the field of “pseudo-haptic feedback”: a technique meant to simulate haptic sensations in virtual environments using visual feedback and properties of human visuo-haptic perception. Pseudo-haptic feedback uses vision to distort haptic perception and verges on haptic illusions. It has been used to simulate various haptic properties such as the stiffness of a virtual spring, the texture of an image, or the mass of a virtual object. This paper describes several experiments in which these haptic properties were simulated. It assesses the definition and the properties of pseudo-haptic feedback. It also describes several virtual reality applications in which pseudo-haptic feedback has been successfully implemented, such as a virtual environment for vocational training of milling machine operations, or a medical simulator for training in regional anesthesia procedures.
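The virtual-spring example above relies on manipulating the control/display ratio: the on-screen compression is scaled relative to the physical input, and users tend to read a smaller displayed motion as greater resistance. A minimal sketch, with a linear gain chosen for illustration rather than taken from the survey:

```python
def displayed_compression(device_displacement, virtual_stiffness,
                          reference_stiffness=1.0):
    """Pseudo-haptic spring: scale the drawn compression of a virtual spring
    relative to the physical input displacement. A stiffer virtual spring
    compresses less on screen for the same input, which is typically
    perceived as higher resistance even without force feedback."""
    gain = reference_stiffness / virtual_stiffness
    return device_displacement * gain
```

For example, pressing a passive input device 10 mm against a spring twice the reference stiffness would be drawn as only 5 mm of compression.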


2007 ◽  
Vol 16 (3) ◽  
pp. 293-306 ◽  
Author(s):  
Gregorij Kurillo ◽  
Matjaž Mihelj ◽  
Marko Munih ◽  
Tadej Bajd

In this article we present a new isometric input device for multi-fingered grasping in virtual environments. The device was designed to simultaneously assess forces applied by the thumb, index, and middle finger. A mathematical model of grasping, adopted from the analysis of multi-fingered robot hands, was applied to achieve multi-fingered interaction with virtual objects. We used the concept of visual haptic feedback, where the user was presented with visual cues to acquire haptic information from the virtual environment. The virtual object responded dynamically to the forces and torques applied by the three fingers. The application of the isometric finger device for multi-fingered interaction is demonstrated in four tasks aimed at the rehabilitation of hand function in stroke patients. The tasks include opening the combination lock on a safe, filling and pouring water from a glass, muscle strength training with an elastic torus, and a force tracking task. The training tasks were designed to train patients' grip force coordination and increase muscle strength through repetitive exercises. The presented virtual reality system was evaluated in a group of healthy subjects and two post-stroke patients (early post-stroke and chronic) to obtain overall performance results. The healthy subjects demonstrated consistent performance with the finger device after the first few trials. The two post-stroke patients completed all four tasks, albeit with much lower performance scores than the healthy subjects. The results of the preliminary assessment suggest that the patients could further improve their performance through virtual reality training.
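The grasp model borrowed from multi-fingered robot-hand analysis boils down to summing the contact wrenches: the virtual object moves according to the resultant force and torque of the three fingertip forces. A 2D sketch (the article's model is 3D; points, forces, and the planar simplification here are illustrative):

```python
def net_wrench(contact_points, finger_forces):
    """Resultant planar force and torque on a virtual object from fingertip
    contacts. contact_points and finger_forces are lists of (x, y) tuples."""
    fx = sum(f[0] for f in finger_forces)
    fy = sum(f[1] for f in finger_forces)
    # torque about the origin: scalar 2D cross product r x f per contact
    tau = sum(p[0] * f[1] - p[1] * f[0]
              for p, f in zip(contact_points, finger_forces))
    return (fx, fy), tau

# A balanced pinch: the thumb opposes the index and middle fingers,
# so the object neither translates nor rotates.
points = [(-1.0, 0.0), (1.0, 0.5), (1.0, -0.5)]
forces = [(1.0, 0.0), (-0.5, 0.0), (-0.5, 0.0)]
```

An unbalanced wrench would instead be integrated into object motion, which is how the visual feedback conveys the effect of poorly coordinated grip forces.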


1996 ◽  
Vol 5 (1) ◽  
pp. 95-108 ◽  
Author(s):  
Paul Richard ◽  
Georges Birebent ◽  
Philippe Coiffet ◽  
Grigore Burdea ◽  
Daniel Gomez ◽  
...  

Research on virtual environments (VE) produced significant advances in computer hardware (graphics boards and I/O tools) and software (real-time distributed simulations). However, fundamental questions remain about how user performance is affected by such factors as graphics refresh rate, resolution, control latencies, and multimodal feedback. This article reports on two experiments performed to examine dextrous manipulation of virtual objects. The first experiment studies the effect of graphics frame rate and viewing mode (monoscopic vs. stereoscopic) on the time required to grasp a moving target. The second experiment studies the effect of direct force feedback, pseudoforce feedback, and redundant force feedback on grasping force regulation. The trials were performed using a partially immersive environment (graphics workstation and LCD glasses), a DataGlove, and the Rutgers Master with force feedback. Results of the first experiment indicate that stereoscopic viewing is beneficial for low refresh rates (it reduced task completion time by about 50% vs. monoscopic graphics). Results of the second experiment indicate that haptic feedback increases performance and reduces error rates, as compared to the open-loop case (with no force feedback). The best performance was obtained when both direct haptic and redundant auditory feedback were provided to the user. The large number of subjects participating in these experiments (over 160 male and female) lends good statistical power to the above results.


2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Lei Huang ◽  
Zengxuan Hou

A novel variable-stiffness 3D virtual brush model and a haptic decoration technique for the surfaces of three-dimensional objects in the automobile industry are introduced, based on a real-time haptic feedback mechanism using a 6-DOF input device; the haptic behavior of an expressive virtual 3D brush with variable stiffness is studied in detail for the first time. First, the intrinsic relationship between the deformation of a real hair brush and the applied external forces (such as the bending moment) is analyzed in detail by introducing a bending spring to express the basic mechanical behavior of the 3D hair brush. Based on this brush model, many important painting features can be simulated, such as a softer brush tip, brush flattening, and bristle spreading. An algorithm (the weighted-average distance) for collision checking between the two objects (the 3D clay and the 3D brush) is presented: once the brush head comes within a tolerance range of the 3D object, a tactile feedback force is computed and the interactive painting process takes place on the outer surface of the virtual object. We then compute a bounding ball for the deformed 3D brush using a fast ball-expanding search algorithm to determine the virtual projection plane. Based on the real-time deformation of the virtual brush head at a sampling point, the 2D painting footprint produced between the brush head and the virtual projection plane is calculated and rendered. The 3D painting footprint is then produced by mapping the 2D footprint onto the surface of the 3D model in real time. Finally, 3D painting strokes are formed by controlling the exerted force and overlapping virtual 3D painting footprints of different shapes and sizes along the moving direction of the 3D brush. Experimental results show that the method effectively enhances the sense of realism for users, with high performance.
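The tolerance-based contact step can be sketched as a proximity test in the spirit of the weighted-average distance check: average the bristle tips' clearance above the surface and, once within tolerance, emit a spring-like force that grows as the brush presses in. The exact weighting, constants, and surface representation are assumptions, not the paper's algorithm:

```python
def brush_contact_force(bristle_tips, surface_height, tolerance=0.5, k=3.0):
    """Average the bristle tips' clearance above a height-field surface and
    return a haptic force once the brush is within tolerance.
    bristle_tips: iterable of (x, y, z) points; surface_height: (x, y) -> z."""
    clearances = [z - surface_height(x, y) for x, y, z in bristle_tips]
    mean_clearance = sum(clearances) / len(clearances)
    if mean_clearance >= tolerance:
        return 0.0  # brush is clear of the object: no haptic force
    return k * (tolerance - mean_clearance)  # grows as the brush presses in

flat = lambda x, y: 0.0  # a flat clay surface for illustration
```

Averaging over all tips rather than testing the closest one alone gives a smoother force onset as the deformable brush head approaches the clay.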


2021 ◽  
Vol 2 ◽  
Author(s):  
Mariusz P. Furmanek ◽  
Madhur Mangalam ◽  
Kyle Lockwood ◽  
Andrea Smith ◽  
Mathew Yarossi ◽  
...  

Technological advancements and increased access have prompted the adoption of head-mounted display based virtual reality (VR) for neuroscientific research, manual skill training, and neurological rehabilitation. Applications that focus on manual interaction within the virtual environment (VE), especially haptic-free VR, critically depend on virtual hand-object collision detection. Knowledge about how multisensory integration related to hand-object collisions affects perception-action dynamics and reach-to-grasp coordination is needed to enhance the immersiveness of interactive VR. Here, we explored whether and to what extent sensory substitution for haptic feedback of hand-object collision (visual, audio, or audiovisual) and collider size (size of spherical pointers representing the fingertips) influences reach-to-grasp kinematics. In Study 1, visual, auditory, or combined feedback were compared as sensory substitutes to indicate the successful grasp of a virtual object during reach-to-grasp actions. In Study 2, participants reached to grasp virtual objects using spherical colliders of different diameters to test if virtual collider size impacts reach-to-grasp. Our data indicate that collider size but not sensory feedback modality significantly affected the kinematics of grasping. Larger colliders led to a smaller size-normalized peak aperture. We discuss this finding in the context of a possible influence of spherical collider size on the perception of the virtual object’s size and hence effects on motor planning of reach-to-grasp. Critically, reach-to-grasp spatiotemporal coordination patterns were robust to manipulations of sensory feedback modality and spherical collider size, suggesting that the nervous system adjusted the reach (transport) component commensurately to the changes in the grasp (aperture) component. These results have important implications for research, commercial, industrial, and clinical applications of VR.
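The collider-size effect follows directly from how grasp detection works in haptic-free VR: contact fires when the fingertip sphere colliders overlap the object, so larger colliders register the grasp at a wider aperture. A sphere-sphere sketch (geometry and names are illustrative, not the study's implementation):

```python
def grasp_registered(thumb_tip, index_tip, obj_center, obj_radius,
                     collider_radius):
    """Detect a grasp when both fingertip sphere colliders overlap the
    virtual object (sphere-sphere intersection). Points are (x, y, z)."""
    def overlaps(tip):
        d = sum((a - b) ** 2 for a, b in zip(tip, obj_center)) ** 0.5
        return d <= obj_radius + collider_radius
    return overlaps(thumb_tip) and overlaps(index_tip)
```

With an object of radius 3 and fingertips 3.5 units from its center, a collider of radius 1 registers the grasp while a collider of radius 0.2 does not; that earlier trigger is one plausible route by which collider size shapes the aperture kinematics reported above.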


2020 ◽  
Author(s):  
Madhur Mangalam ◽  
Mathew Yarossi ◽  
Mariusz P. Furmanek ◽  
Eugene Tunik

Virtual reality (VR) has garnered much interest as a training environment for motor skill acquisition, including for neurological rehabilitation of upper extremities. While the focus has been on gross upper limb motion, VR applications that involve reaching for, and interacting with, virtual objects are growing. The absence of true haptics in VR when it comes to hand-object interactions raises a fundamentally important question: can haptic-free immersive virtual environments (hf-VEs) support naturalistic coordination of reach-to-grasp movements? This issue has been grossly understudied, and yet is of significant importance in the development and application of VR across a number of sectors. In a previous study (Furmanek et al. 2019), we reported that reach-to-grasp movements are similarly coordinated in both the physical environment (PE) and hf-VE. The most noteworthy difference was that the closure phase, which begins at maximum aperture and lasts through the end of the movement, was longer in hf-VE than in PE, suggesting that different control laws might govern the initiation of closure between the two environments. To investigate this, we reanalyzed data from Furmanek et al. (2019), in which the participants reached to grasp three differently sized physical objects, and matching 3D virtual object renderings, placed at three different locations. Our analysis revealed two key findings pertaining to the initiation of closure in PE and hf-VE. First, the respective control laws governing the initiation of aperture closure in PE and hf-VE both included state estimates of transport velocity and acceleration, supporting a general unified control scheme for implementing reach-to-grasp across physical and virtual environments. Second, aperture was less informative to the control law in hf-VE. We suggest that the latter was likely because transport velocity at closure onset and aperture at closure onset were less independent in hf-VE than in PE, ultimately resulting in aperture at closure onset having a weaker influence on the initiation of closure. In this way, the excess time and muscular effort needed to actively bring the fingers to a stop at the interface of a virtual object was factored into the control law governing the initiation of closure in hf-VE. Crucially, this control law remained applicable, albeit with different weights in hf-VE, despite the absence of terminal haptic feedback and potential perceptual differences.
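A control law of the kind described, triggering closure from state estimates of transport velocity and acceleration, can be sketched as a time-to-contact rule: begin closing the hand when the estimated time remaining in the transport falls below the time the hand needs to close. The 0.35 s closure time and the constant-acceleration estimate are assumptions for illustration, not the study's fitted model:

```python
def initiate_closure(distance_to_target, velocity, acceleration,
                     closure_time=0.35):
    """Begin aperture closure when the estimated time to reach the target
    (from current transport velocity and acceleration) drops below the
    time needed to close the hand. Distances in m, velocity in m/s."""
    if velocity <= 0.0:
        return False  # hand not yet moving toward the target
    # average transport velocity over the closure window, assuming
    # constant acceleration
    v_avg = velocity + 0.5 * acceleration * closure_time
    eta = distance_to_target / max(v_avg, 1e-9)
    return eta <= closure_time
```

Reweighting such a rule, e.g. discounting the aperture term, is one way the same control scheme could remain applicable in hf-VE with different weights, as the reanalysis reports.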


2009 ◽  
Vol 628-629 ◽  
pp. 155-160 ◽  
Author(s):  
F.X. Yan ◽  
Z.X. Hou ◽  
Ding Hua Zhang ◽  
Wen Ke Kang

This paper describes an innovative free-form modeling system, the Virtual Clay Modeling System (VCMS), in which users can directly manipulate the shape of a virtual object like a clay model in the real world. With this system, some disadvantages of interaction with computer-aided industrial design (CAID) systems can be resolved. In order to enhance the feeling of immersion and improve VCMS's controls for cutting, pasting, and compensating, we use a Spaceball 5000 and a PHANTOM Desktop to assign the set of interaction tasks. In realizing 6 degree-of-freedom (DOF) haptic feedback modeling control, we developed the device interfaces with Open Inventor and the Qt application framework. VCMS provides good immersion, allowing for effective modeling in a virtual world.

