A Study of Perceptual Performance in Haptic Virtual Environments

2006 ◽  
Vol 18 (4) ◽  
pp. 467-475 ◽  
Author(s):  
Marcia K. O’Malley ◽  
Gina Upperman

The performance levels of human subjects in size identification and size discrimination experiments in both real and virtual environments are presented. The virtual environments are displayed with a PHANToM desktop three degree-of-freedom haptic interface. Results indicate that performance of the size identification and size discrimination tasks in the virtual environment is comparable to that in the real environment, implying that the haptic device does a good job of simulating reality for these tasks. Additionally, performance in the virtual environment was measured at below maximum machine performance levels for two machine parameters. The tabulated scores for the perception tasks in a sub-optimal virtual environment were found to be comparable to those in the real environment, supporting previous claims that haptic interface hardware may be able to convey, for these perceptual tasks, sufficient perceptual information to the user with relatively low levels of machine quality in terms of the following parameters: maximum endpoint force and maximum virtual surface stiffness. Results are comparable to those found for similar experiments conducted with other haptic interface hardware, further supporting this claim. Finally, it was found that varying maximum output force and virtual surface stiffness simultaneously does not have a compounding effect that significantly affects performance for size discrimination tasks.

2008 ◽  
Vol 17 (2) ◽  
pp. 176-198 ◽  
Author(s):  
Victoria Interrante ◽  
Brian Ries ◽  
Jason Lindquist ◽  
Michael Kaeding ◽  
Lee Anderson

Ensuring veridical spatial perception in immersive virtual environments (IVEs) is an important yet elusive goal. In this paper, we present the results of two experiments that seek further insight into this problem. In the first of these experiments, initially reported in Interrante, Ries, Lindquist, and Anderson (2007), we seek to disambiguate two alternative hypotheses that could explain our recent finding (Interrante, Anderson, and Ries, 2006a) that participants appear not to significantly underestimate egocentric distances in HMD-based IVEs, relative to the real world, in the special case that they unambiguously know, through first-hand observation, that the presented virtual environment is a high-fidelity 3D model of their concurrently occupied real environment. Specifically, we seek to determine whether people are able to make similarly veridical judgments of egocentric distances in these matched real and virtual environments because (1) they are able to use metric information gleaned from their exposure to the real environment to calibrate their judgments of sizes and distances in the matched virtual environment, or because (2) their prior exposure to the real environment enabled them to achieve a heightened sense of presence in the matched virtual environment, which leads them to act on the visual stimulus provided through the HMD as if they were interpreting it as a computer-mediated view of an actual real environment, rather than just as a computer-generated picture, with all of the uncertainties that would imply. In our second experiment, we seek to investigate the extent to which augmenting a virtual environment model with faithfully-modeled replicas of familiar objects might enhance people's ability to make accurate judgments of egocentric distances in that environment.


2016 ◽  
Vol 13 (122) ◽  
pp. 20160414 ◽  
Author(s):  
Mehdi Moussaïd ◽  
Mubbasir Kapadia ◽  
Tyler Thrash ◽  
Robert W. Sumner ◽  
Markus Gross ◽  
...  

Understanding the collective dynamics of crowd movements during stressful emergency situations is central to reducing the risk of deadly crowd disasters. Yet, their systematic experimental study remains a challenging open problem due to ethical and methodological constraints. In this paper, we demonstrate the viability of shared three-dimensional virtual environments as an experimental platform for conducting crowd experiments with real people. In particular, we show that crowds of real human subjects moving and interacting in an immersive three-dimensional virtual environment exhibit typical patterns of real crowds as observed in real-life crowded situations. These include the manifestation of social conventions and the emergence of self-organized patterns during egress scenarios. High-stress evacuation experiments conducted in this virtual environment reveal movements characterized by mass herding and dangerous overcrowding as they occur in crowd disasters. We describe the behavioural mechanisms at play under such extreme conditions and identify critical zones where overcrowding may occur. Furthermore, we show that herding spontaneously emerges from a density effect without the need to assume an increase of the individual tendency to imitate peers. Our experiments reveal the promise of immersive virtual environments as an ethical, cost-efficient, yet accurate platform for exploring crowd behaviour in high-risk situations with real human subjects.


Author(s):  
A.I. Zagranichny

The article presents the results of a study of different types of activity as a function of how frequently social activity is transferred from the real environment to the virtual environment and vice versa. The study identified the following types of activity: play activity, educational activity, work, and communicative activity. The 214 respondents, aged 15 to 24 (52% of them women), came from the cities of Balakovo, Saratov, and Moscow, and held the following social statuses: "pupil", "student", "young specialist". The correlations between these types of activity and the frequency of transferring social activity from one environment to the other were analyzed and interpreted. The study yielded the following results: the frequency of transferring social activity from the real environment to the virtual environment has a direct positive link with play activity (r = 0.221; p < 0.01), educational activity (r = 0.228; p < 0.01), and communicative activity (r = 0.346; p < 0.01). The frequency of transferring social activity from the virtual environment to the real one has a direct positive link with only two types of activity: educational activity (r = 0.188; p < 0.05) and communicative activity (r = 0.331; p < 0.01).
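The reported links are Pearson product-moment correlations. A minimal sketch of how such an r is computed, using hypothetical ratings (not the study's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-respondent scores: transfer frequency vs. communicative activity
transfer = [1, 2, 2, 3, 4, 5, 5]
communicative = [2, 1, 3, 3, 4, 4, 5]
r = pearson_r(transfer, communicative)
```

The significance levels (p < 0.01, p < 0.05) would additionally require a t-test on r given the sample size, which statistical packages compute alongside the coefficient.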


Author(s):  
Christophe Duret

This chapter will propose an ontology of virtual environments that calls into question the dichotomy between the real and the virtual. It will draw on the concepts of trajectivity and 'médiance' in order to describe the way virtual environments, with their technological and symbolic features, take part in the construction of human environments. This theoretical proposition will be illustrated with an analysis of Arcadia, a virtual environment built in Second Life. Finally, mesocriticism will be proposed as a new approach for the study of virtual environments.


2019 ◽  
Vol 9 (9) ◽  
pp. 1797 ◽  
Author(s):  
Chen ◽  
Lin

Augmented reality (AR) is an emerging technology that allows users to interact with simulated environments, including those emulating scenes in the real world. Most current AR technologies involve the placement of virtual objects within these scenes. However, difficulties in modeling real-world objects greatly limit the scope of the simulation, and thus the depth of the user experience. In this study, we developed a process by which to realize virtual environments that are based entirely on scenes in the real world. In modeling the real world, the proposed scheme divides scenes into discrete objects, which are then replaced with virtual objects. This enables users to interact in and with virtual environments without limitations. An RGB-D camera is used in conjunction with simultaneous localization and mapping (SLAM) to obtain the movement trajectory of the user and derive information related to the real environment. In modeling the environment, graph-based segmentation is used to segment point clouds and perform object segmentation to enable the subsequent replacement of objects with equivalent virtual entities. Superquadrics are used to derive shape parameters and location information from the segmentation results in order to ensure that the scale of the virtual objects matches the original objects in the real world. Only after the objects have been replaced with their virtual counterparts is the real environment converted into a virtual scene. Experiments involving the emulation of real-world locations demonstrated the feasibility of the proposed rendering scheme. Finally, a rock-climbing application scenario is presented to illustrate the potential use of the proposed system in AR applications.
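Superquadric fitting recovers a shape by adjusting the size parameters (a1, a2, a3) and shape exponents (e1, e2) of the superquadric inside-outside function until it best encloses the segmented points. A minimal sketch of that standard function (illustrative only, not the authors' implementation):

```python
def superquadric_F(x, y, z, a1, a2, a3, e1, e2):
    """Superquadric inside-outside function:
    F < 1 inside the surface, F == 1 on it, F > 1 outside.
    a1, a2, a3 are the extents along x, y, z; e1, e2 control shape
    (e1 = e2 = 1 gives an ellipsoid, smaller values give boxier shapes)."""
    xy = (abs(x / a1) ** (2.0 / e2) + abs(y / a2) ** (2.0 / e2)) ** (e2 / e1)
    return xy + abs(z / a3) ** (2.0 / e1)
```

A fitting routine would minimize the deviation of F from 1 over all points in a segmented cluster, yielding the scale and shape needed to place a matching virtual object.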


2005 ◽  
Vol 14 (3) ◽  
pp. 366-376 ◽  
Author(s):  
Marcia K. O'Malley ◽  
Michael Goldfarb

The ability of human subjects to identify and discriminate between different-sized real objects was compared with their ability to identify and discriminate between different-sized simulated objects generated by a haptic interface. This comparison was additionally performed for cases of limited force and limited stiffness output from the haptic device, which in effect decrease the fidelity of the haptic simulation. Results indicate that performance of size-identification tasks with haptic-interface hardware capable of a minimum of 3 N of maximum force output can approach performance in real environments, but falls short when virtual surface stiffness is limited. For size-discrimination tasks, performance in simulated environments was consistently lower than performance in a comparable real environment. Interestingly, significant variations in the fidelity of the haptic simulation do not appear to significantly alter the ability of a subject to identify or discriminate between the types of simulated objects described herein.


1996 ◽  
Vol 5 (1) ◽  
pp. 122-135 ◽  
Author(s):  
Takashi Oishi ◽  
Susumu Tachi

See-through head-mounted displays (STHMDs), which superimpose a virtual environment generated by computer graphics (CG) on the real world, are expected to vividly display various simulations and designs by using both the real environment and the virtual environment around us. However, we must ensure that the virtual environment is superimposed exactly on the real environment, because both environments are visible. Mismatches in location and size between real and virtual objects are likely to occur between the world coordinates of the real environment, where the STHMD user actually exists, and those of the virtual environment, described as CG parameters. This disagreement directly displaces the locations where virtual objects are superimposed, so the STHMD must be calibrated to superimpose the virtual environment properly. Among the causes of such errors, we focus on systematic errors in the projection transformation parameters introduced during manufacturing and on differences between the actual and assumed location of the user's eye relative to the STHMD in use, and we propose a calibration method to eliminate these effects. In this method, a virtual cursor drawn in the virtual environment is fitted directly onto targets in the real environment. Based on the fitting results, the least-squares method identifies the parameter values that minimize the differences between the locations of the virtual cursor in the virtual environment and the targets in the real environment. After describing the calibration method, we also report the results of applying it to the STHMD that we built. The results are accurate enough to demonstrate the effectiveness of the calibration method.
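The calibration step is standard least-squares parameter identification: per display axis, it reduces to finding the gain and offset that best map recorded cursor positions onto their corresponding targets. A minimal one-axis sketch (illustrative; the actual STHMD calibration identifies full projection-transformation parameters):

```python
def fit_axis(cursor, target):
    """Closed-form least-squares fit of target ≈ gain * cursor + offset
    for one axis, given paired cursor/target position samples."""
    n = len(cursor)
    mc, mt = sum(cursor) / n, sum(target) / n
    var = sum((c - mc) ** 2 for c in cursor)
    cov = sum((c - mc) * (t - mt) for c, t in zip(cursor, target))
    gain = cov / var
    offset = mt - gain * mc
    return gain, offset

# Hypothetical samples: cursor positions vs. where the targets actually appeared
gain, offset = fit_axis([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

The same normal-equations approach generalizes to the full set of projection parameters by stacking all cursor-target residuals into one overdetermined linear system.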


Robotica ◽  
2009 ◽  
Vol 28 (1) ◽  
pp. 47-56 ◽  
Author(s):  
M. Karkoub ◽  
M.-G. Her ◽  
J.-M. Chen

SUMMARY

In this paper, an interactive virtual reality motion simulator is designed and analyzed. The main components of the system include a bilateral control interface, networking, a virtual environment, and a motion simulator. The virtual reality entertainment system uses a virtual environment that enables the operator to feel actual feedback through a haptic interface, as well as distorted motion from the virtual environment, just as s/he would in the real environment. The control scheme for the simulator uses the changes in velocity and acceleration that the operator imposes on the joystick, the environmental changes imposed on the motion simulator, and the haptic feedback to the operator to maneuver the simulator in the real environment. The stability of the closed-loop system is analyzed based on the Nyquist stability criterion. It is shown that the proposed design for the simulator system works well, and the theoretical findings are validated experimentally.


2005 ◽  
Vol 32 (5) ◽  
pp. 777-785 ◽  
Author(s):  
Ebru Cubukcu ◽  
Jack L Nasar

Discrepancies between perceived and actual distance may affect people's spatial behavior. In a previous study, Nasar, using self-report of behavior, found that segmentation (measured through the number of buildings) along a route affected the choice of parking garage and the path from the parking garage to a destination. We recreated that same environment in a three-dimensional virtual environment and conducted a test to see whether the same factors emerged under these more controlled conditions and whether spatial behavior in the virtual environment accurately reflected behavior in the real environment. The results confirmed similar patterns of response in the virtual and real environments. This supports the use of virtual reality as a tool for predicting behavior in the real world and confirms that increases in segmentation are related to increases in perceived distance.


Author(s):  
Hannah M. Solini ◽  
Ayush Bhargava ◽  
Christopher C. Pagano

It is often questioned whether task performance attained in a virtual environment can be transferred appropriately and accurately to the same task in the real world. With advancements in virtual reality (VR) technology, recent research has focused on individuals' abilities to transfer calibration achieved in a virtual environment to a real-world environment. Little research, however, has shown whether transfer of calibration from a virtual environment to the real world is similar to transfer of calibration from a virtual environment to another virtual environment. As such, the present study investigated differences in calibration transfer to real-world and virtual environments. In either a real-world or virtual environment, participants completed blind walking estimates before and after experiencing perturbed virtual optic flow via a head-mounted display (HMD). Results showed that individuals calibrated to perturbed virtual optic flow and that this calibration carried over to both real-world and virtual environments in a like manner.

