Visual, Haptic, and Bimodal Perception of Size and Stiffness in Virtual Environments

1999 ◽  
Author(s):  
Wan-Chen Wu ◽  
Cagatay Basdogan ◽  
Mandayam A. Srinivasan

Abstract: Human psychophysical experiments were designed and conducted to investigate the effect of 3D perspective visual images on the visual and haptic perception of size and stiffness in multimodal virtual environments (VEs). Virtual slots of varying length and buttons of varying stiffness were displayed to the subjects, who were then asked to discriminate their size and stiffness, respectively, using visual and/or haptic cues. The results of the size experiments show that, under vision alone, farther objects are perceived to be smaller due to perspective cues, and that the addition of haptic feedback reduces this visual bias. Similarly, the results of the stiffness experiments show that compliant objects that are farther away are perceived to be softer when there is only haptic feedback, and that the addition of visual feedback reduces this haptic bias. Hence, we conclude that our visual and haptic systems compensate for each other, such that the sensory information coming from the visual and haptic channels is fused in an optimal manner.
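
The "optimal manner" of fusion referred to above is commonly modeled as minimum-variance (maximum-likelihood) cue combination. The sketch below illustrates that general model, not the authors' own analysis; the noise values and names are assumptions for illustration.

    # Minimal sketch of minimum-variance (maximum-likelihood) cue combination,
    # often used to model visual-haptic fusion. The variances below are
    # illustrative assumptions, not values from the study.

    def fuse_estimates(visual_est, visual_var, haptic_est, haptic_var):
        """Combine two noisy size estimates, weighting each by its reliability
        (inverse variance). The fused variance is never larger than either input."""
        w_visual = (1 / visual_var) / (1 / visual_var + 1 / haptic_var)
        w_haptic = 1 - w_visual
        fused_est = w_visual * visual_est + w_haptic * haptic_est
        fused_var = 1 / (1 / visual_var + 1 / haptic_var)
        return fused_est, fused_var

    # Example: a far slot looks 45 mm long (biased small by perspective) but
    # feels 50 mm long; vision is assumed noisier here, so touch dominates.
    print(fuse_estimates(visual_est=45.0, visual_var=9.0,
                         haptic_est=50.0, haptic_var=4.0))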

2009 ◽  
Vol 18 (1) ◽  
pp. 39-53 ◽  
Author(s):  
Anatole Lécuyer

This paper presents a survey of the main results obtained in the field of “pseudo-haptic feedback”: a technique meant to simulate haptic sensations in virtual environments using visual feedback and the properties of human visuo-haptic perception. Pseudo-haptic feedback uses vision to distort haptic perception and verges on haptic illusions. It has been used to simulate various haptic properties, such as the stiffness of a virtual spring, the texture of an image, or the mass of a virtual object. This paper describes several experiments in which these haptic properties were simulated. It assesses the definition and properties of pseudo-haptic feedback. It also describes several virtual reality applications in which pseudo-haptic feedback has been successfully implemented, such as a virtual environment for vocational training in milling machine operation or a medical simulator for training in regional anesthesia procedures.
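
A common realization of pseudo-haptic stiffness is to scale the visual displacement of a virtual object relative to the user's physical input (a control/display ratio). The sketch below is a generic illustration of that idea; the function, parameters, and values are assumptions and do not come from the surveyed systems.

    # Minimal sketch of pseudo-haptic stiffness via a control/display ratio:
    # the same physical input produces less visual displacement for a "stiff"
    # virtual spring, which tends to make it feel harder. All parameters are
    # illustrative assumptions.

    def visual_displacement(physical_displacement_mm, simulated_stiffness,
                            reference_stiffness=1.0):
        """Scale the displayed displacement down as simulated stiffness goes up."""
        cd_ratio = reference_stiffness / simulated_stiffness  # control/display ratio
        return physical_displacement_mm * cd_ratio

    # The user presses 10 mm on the input device; the "soft" spring is drawn
    # compressing 10 mm, the "stiff" one only 2.5 mm.
    print(visual_displacement(10.0, simulated_stiffness=1.0))  # 10.0 mm displayed
    print(visual_displacement(10.0, simulated_stiffness=4.0))  # 2.5 mm displayed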


2021 ◽  
Vol 2 ◽  
Author(s):  
Jun Rong Jeffrey Neo ◽  
Andrea Stevenson Won ◽  
Mardelle McCuskey Shepley

What strategies should guide the design of immersive virtual environments (IVEs) used to study how environments influence behavior? To answer this question, we conducted a systematic review of peer-reviewed publications and conference proceedings reporting experimental and proof-of-concept studies that described the design, manipulation, and setup of IVEs used to examine behaviors influenced by the environment. Eighteen articles met the inclusion criteria. Our review identified key categories and proposed strategies to consider when deciding how much detail to include in IVE prototypes for human behavior research: 1) the appropriate level of (primarily visual) detail in the environment: commonly found environmental accessories, realistic textures, the computational cost of added detail, and the minimization of unnecessary detail; 2) context: contextual elements and cues; 3) social cues: including computer-controlled agent-avatars when necessary and animating social interactions; 4) self-avatars, navigation concerns, and changes in participants’ head directions; and 5) nonvisual sensory information: haptic feedback, audio, and olfactory cues.


1998 ◽  
Vol 25 (1) ◽  
pp. 64-67 ◽  
Author(s):  
René Verry

Susan Lederman (SL) is an invited member of the International Council of Research Fellows for the Braille Research Center and a Fellow of the Canadian Psychological Association. She was also an Associate of the Canadian Institute for Advanced Research in the Robotics and Artificial Intelligence Programme for 8 years. A Professor in the Departments of Psychology and Computing & Information Science at Queen's University at Kingston (Ontario, Canada), she has written and coauthored numerous articles on tactile psychophysics, haptic perception and cognition, motor control, and haptic applications in robotics, teleoperation, and virtual environments. She is currently the co-organizer of the Annual Symposium on Haptic Interfaces for Teleoperation and Virtual Environment Systems. René Verry (RV) is a psychology professor at Millikin University (Decatur, IL), where she teaches a variety of courses in the experimental core, including Sensation and Perception. She chose the often-subordinated somatic senses as the focus of her interview and recruited Susan Lederman as our research specialist.


2008 ◽  
Vol 49 (1) ◽  
Author(s):  
Faieza Abdul Aziz ◽  
D. T. Pham ◽  
Shamsuddin Sulaiman ◽  
Napsiah Ismail ◽  
Mohd Khairol Anuar Ariffin ◽  
...  

2019 ◽  
Vol 121 (4) ◽  
pp. 1543-1560 ◽  
Author(s):  
Robert W. Nickl ◽  
M. Mert Ankarali ◽  
Noah J. Cowan

Volitional rhythmic motor behaviors such as limb cycling and locomotion exhibit spatial and timing regularity. Such rhythmic movements are executed in the presence of exogenous visual and nonvisual cues, and previous studies have shown the pivotal role that vision plays in guiding spatial and temporal regulation. However, the influence of nonvisual information conveyed through auditory or touch sensory pathways, and its effect on control, remains poorly understood. To characterize the function of nonvisual feedback in rhythmic arm control, we designed a paddle juggling task in which volunteers bounced a ball off a rigid elastic surface to a target height in virtual reality by moving a physical handle with the right hand. Feedback was delivered at two key phases of movement: visual feedback at ball peaks only and simultaneous audio and haptic feedback at ball-paddle collisions. In contrast to previous work, we limited visual feedback to the minimum required for jugglers to assess spatial accuracy, and we independently perturbed the spatial dimensions and the timing of feedback. By separately perturbing this information, we evoked dissociable effects on spatial accuracy and timing, confirming that juggling, and potentially other rhythmic tasks, involves two complementary processes with distinct dynamics: spatial error correction and feedback timing synchronization. Moreover, we show evidence that audio and haptic feedback provide sufficient information for the brain to control the timing synchronization process by acting as a metronome-like cue that triggers hand movement. NEW & NOTEWORTHY Vision contains rich information for control of rhythmic arm movements; less is known, however, about the role of nonvisual feedback (touch and sound). Using a virtual ball bouncing task allowing independent real-time manipulation of spatial location and timing of cues, we show their dissociable roles in regulating motor behavior. We confirm that visual feedback is used to correct spatial error and provide new evidence that nonvisual event cues act to reset the timing of arm movements.
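
One simple way to picture the two processes the authors dissociate, spatial error correction driven by observed ball peaks and timing synchronization driven by collision cues, is a toy controller that adjusts hit speed from peak-height error and treats each collision event as the trigger for the next swing. The sketch below is such a toy model under ideal ballistic assumptions; it is not the authors' task implementation, and the target height and gain are made up.

    # Toy model (not the authors' implementation) of the two processes the study
    # dissociates: spatial error correction from observed ball peaks, and timing
    # synchronization from collision (audio/haptic) events. Ballistics are ideal.

    G = 9.81           # m/s^2
    TARGET_PEAK = 0.8  # m, assumed target height above the paddle

    def peak_height(launch_speed):
        """Peak of an ideal ballistic ball launched upward at launch_speed."""
        return launch_speed ** 2 / (2 * G)

    def corrected_launch_speed(previous_speed, observed_peak, gain=0.5):
        """Spatial correction: nudge the next hit speed by the peak-height error."""
        error = TARGET_PEAK - observed_peak
        return previous_speed + gain * error

    def simulate(n_hits=5, speed=3.5):
        for hit in range(n_hits):
            # Timing process: in the real task the collision cue (sound + force
            # pulse) triggers the next upswing; here each loop iteration stands
            # in for one collision event.
            peak = peak_height(speed)
            print(f"hit {hit}: launch {speed:.2f} m/s -> peak {peak:.3f} m")
            speed = corrected_launch_speed(speed, peak)

    simulate()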


2015 ◽  
Vol 1 (1) ◽  
pp. 160-163 ◽  
Author(s):  
Carsten Neupert ◽  
Sebastian Matich ◽  
Peter P. Pott ◽  
Christian Hatzfeld ◽  
Roland Werthschützky

Abstract: Pseudo-haptic feedback is a haptic illusion based on a mismatch between haptic and visual perception. It is well known from applications in virtual environments. In this work, we discuss the usability of the principle of pseudo-haptic feedback for teleoperation. Using pseudo-haptic feedback can ease the design of haptic medical teleoperation systems. In this approach, the user's grasping force at an isometric user interface is used to control the closing angle of the end effector of a surgical robot. To provide realistic haptic feedback, the coupling characteristic between grasping force and end-effector closing angle is changed depending on the interaction forces acting at the end effector. In an experiment, we show the usability of pseudo-haptic feedback for discriminating compliances comparable to the mechanical characteristics of relaxed and contracted muscle. The results are based on data from 10 subjects and 300 trials.
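
The coupling described above, grasp force at an isometric interface mapped to end-effector closing angle with the mapping softened as interaction forces grow, might look roughly like the sketch below. The linear form, gains, and angle range are assumptions for illustration, not the authors' controller.

    # Illustrative sketch (assumed form and gains) of the pseudo-haptic coupling
    # described above: the user's grasp force on an isometric handle commands the
    # end-effector closing angle, and the gain is reduced when the end effector
    # meets resistance, so a gripped object "feels" stiffer.

    MAX_ANGLE_DEG = 60.0   # fully open -> fully closed, assumed range

    def closing_angle(grasp_force_n, interaction_force_n,
                      base_gain=12.0, softening=0.8):
        """Map grasp force (N) to closing angle (deg); higher interaction force
        lowers the effective gain, mimicking a stiffer compliance."""
        gain = base_gain / (1.0 + softening * interaction_force_n)
        return min(MAX_ANGLE_DEG, gain * grasp_force_n)

    # Free closing vs. squeezing a stiff sample at the same grasp force:
    print(closing_angle(3.0, interaction_force_n=0.0))  # ~36 deg, feels compliant
    print(closing_angle(3.0, interaction_force_n=5.0))  # ~7.2 deg, feels stiff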


2011 ◽  
Vol 105 (2) ◽  
pp. 846-859 ◽  
Author(s):  
Lore Thaler ◽  
Melvyn A. Goodale

Studies that have investigated how sensory feedback about the moving hand is used to control hand movements have relied on paradigms such as pointing or reaching that require subjects to acquire target locations. In the context of these target-directed tasks, it has been found repeatedly that the human sensory-motor system relies heavily on visual feedback to control the ongoing movement. This finding has been formalized within the framework of statistical optimality, according to which different sources of sensory feedback are combined so as to minimize variance in sensory information during movement control. Importantly, however, many hand movements that people perform every day are not target-directed, but based on allocentric (object-centered) visual information. Examples of allocentric movements are gesture imitation, drawing, or copying. Here we tested whether visual feedback about the moving hand is used in the same way to control target-directed and allocentric hand movements. The results show that visual feedback is used significantly more to reduce movement scatter in the target-directed than in the allocentric movement task. Furthermore, we found that differences in the use of visual feedback between target-directed and allocentric hand movements cannot be explained by differences in uncertainty about the movement goal. We conclude that the role played by visual feedback for movement control is fundamentally different for target-directed and allocentric movements. The results suggest that current computational and neural models of sensorimotor control that are based entirely on data derived from target-directed paradigms have to be modified to accommodate performance in the allocentric tasks used in our experiments. More broadly, they cast doubt on the idea that models of sensorimotor control developed exclusively from data obtained in target-directed paradigms are also valid for allocentric tasks, such as drawing, copying, or imitative gesturing, that characterize much of human behavior.


1999 ◽  
Vol 8 (4) ◽  
pp. 394-411 ◽  
Author(s):  
Pierre E. Dupont ◽  
Capt. Timothy M. Schulteis ◽  
Paul A. Millman ◽  
Robert D. Howe

Many applications can be imagined for a system that processes sensory information collected during telemanipulation tasks in order to automatically identify properties of the remote environment. These applications include generating model-based simulations for training operators in critical procedures and improving real-time performance in unstructured environments or when time delays are large. This paper explores the research issues involved in developing such an identification system, focusing on properties that can be identified from remote manipulator motion and force data. As a case study, a simple block-stacking task, performed with a teleoperated two-fingered planar hand, is considered. An algorithm is presented that automatically segments the data collected during the task, given only a general description of the temporal sequence of task events. Using the segmented data, the algorithm then successfully estimates the weight, width, height, and coefficient of friction of the two blocks handled during the task. This data is used to calibrate a virtual model incorporating visual and haptic feedback. This case study highlights the broader research issues that must be addressed in automatic property identification.
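
As a rough illustration of the kind of property estimates the paper extracts from segmented motion and force data, a block's weight can be read from the vertical load carried by the fingers while the block is held in free space, and its coefficient of friction from the tangential-to-normal force ratio reached at the onset of slip. The sketch below is a generic, idealized version of those two estimates, not the paper's identification algorithm; the segment boundaries are assumed to be known.

    # Generic sketch (idealized, not the paper's algorithm) of estimating two of
    # the block properties mentioned above from segmented force data.

    from statistics import mean

    def estimate_weight(vertical_load_forces_n):
        """Weight ~ average vertical load carried by the fingers while the block
        is held stationary in free space (segment identified beforehand)."""
        return mean(vertical_load_forces_n)

    def estimate_friction_coefficient(tangential_forces_n, normal_forces_n):
        """Coulomb friction: mu ~ maximum ratio of tangential to normal force
        reached just before slip during a controlled sliding segment."""
        return max(t / n for t, n in zip(tangential_forces_n, normal_forces_n) if n > 0)

    # Toy force samples from two hypothetical task segments:
    print(estimate_weight([1.9, 2.1, 2.0, 2.0]))              # ~2.0 N block
    print(estimate_friction_coefficient([0.2, 0.5, 0.9],
                                        [2.0, 2.0, 2.0]))     # mu ~ 0.45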

