Towards Quantifying Depth and Size Perception in Virtual Environments

1995 ◽  
Vol 4 (1) ◽  
pp. 24-49 ◽  
Author(s):  
Jannick P. Rolland ◽  
William Gibson ◽  
Dan Ariely

With the rapid advance of real-time computer graphics, head-mounted displays (HMDs) have become popular tools for 3D visualization. One of the most promising and challenging future uses of HMDs, however, is in applications where virtual environments enhance rather than replace real environments. In such applications, a virtual image is superimposed on a real image. The unique problem raised by this superimposition is the difficulty that the human visual system may have in integrating information from these two environments. As a starting point for studying the problem of information integration in see-through environments, we investigate the quantification of depth and size perception of virtual objects relative to real objects in combined real and virtual environments. This starting point leads directly to the important issue of system calibration, which must be completed before perceived depths and sizes can be measured. Finally, preliminary experimental results on the perceived depth of spatially nonoverlapping real and virtual objects are presented.
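
As background for the calibration issue this abstract raises, below is a minimal sketch of the standard stereo geometry relating rendered screen disparity to the theoretical depth of a virtual object in a stereoscopic HMD. All parameter names and values are illustrative assumptions, not the authors' calibration procedure.

```python
# Illustrative sketch (not from the paper): similar-triangle relation
# between on-screen binocular disparity and the depth at which a
# stereoscopic virtual point is fused.

def virtual_depth_from_disparity(ipd_m: float,
                                 screen_dist_m: float,
                                 disparity_m: float) -> float:
    """Depth (m) of the fused point: z = ipd * d / (ipd - disparity).

    ipd_m         -- interpupillary distance, e.g. 0.063 m
    screen_dist_m -- distance from the eyes to the (virtual) image plane
    disparity_m   -- signed horizontal separation of the left/right images
                     (positive = uncrossed disparity, behind the plane)
    """
    return ipd_m * screen_dist_m / (ipd_m - disparity_m)

if __name__ == "__main__":
    # A point drawn with 10 mm uncrossed disparity on an image plane 1 m away
    print(virtual_depth_from_disparity(0.063, 1.0, 0.010))  # ~1.19 m
```

Any systematic error in these display parameters shifts the theoretical depth, which is why the abstract insists calibration precede perceptual measurement.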

2003 ◽  
Vol 12 (6) ◽  
pp. 615-628 ◽  
Author(s):  
Benjamin Lok ◽  
Samir Naik ◽  
Mary Whitton ◽  
Frederick P. Brooks

Immersive virtual environments (VEs) provide participants with computer-generated environments filled with virtual objects to assist in learning, training, and practicing dangerous and/or expensive tasks. But does making every object virtual inhibit interactivity and the level of immersion? If participants spend most of their time and cognitive load on learning and adapting to interacting with virtual objects, does this reduce the effectiveness of the VE? We conducted a study that investigated how handling real objects and self-avatar visual fidelity affect performance and sense of presence in a spatial cognitive manual task. We compared participants' performance of a block arrangement task in a real-space environment and in several virtual and hybrid environments. The results showed that manipulating real objects in a VE brings task performance closer to that of real space than manipulating virtual objects does. There was no significant difference in reported sense of presence, regardless of the self-avatar's visual fidelity or the presence of real objects.


2013 ◽  
Vol 22 (3) ◽  
pp. 255-270 ◽  
Author(s):  
Yuki Ban ◽  
Takuji Narumi ◽  
Tomohiro Tanikawa ◽  
Michitaka Hirose

In this study, we aim to construct a perception-based shape display system that provides users with the sensation of touching virtual objects of varying shapes using only a simple mechanism. We have previously shown that the perceived curvature of a surface or angle of an edge can be modified by displacing the visual representation of the user's hand. However, this method cannot emulate multi-finger touch because of spatial inconsistency. To solve this problem, we focus on modifying the identification of shapes grasped with two fingers by deforming the visual representation of the user's hand. We devised a video see-through system that visually changes the perceived shape of the object a user is touching: the visual representation of the user's hand is deformed as if the user were handling the displayed object, while the user is actually handling an object of a different shape. Using this system, we conducted two experiments to investigate the effects of visuo-haptic interaction and evaluate its effectiveness. The first examined the modification of size perception when the fingers did not stroke the shape but only touched it statically; the second examined the modification of shape perception when the fingers dynamically stroked the surface of the shape. The results show that the perceived size of an object handled with the thumb and other finger(s) can be modified if the difference between the physical and visual stimuli lies in the −40% to 35% range. In addition, we found that the algorithm can modify shape perception when users stroke the shape with multiple fingers.
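
A minimal sketch of the kind of mapping such a system needs, assuming a hypothetical interface: the rendered thumb-finger gap is the physical gap scaled by a target ratio, clamped to the roughly −40% to +35% effective range the abstract reports. This is not the authors' code.

```python
# Hedged sketch: compute the thumb-finger gap to draw for the deformed
# visual hand so the handled object is perceived larger or smaller.
# The clamp range comes from the effective range reported in the abstract.

MIN_RATIO, MAX_RATIO = 0.60, 1.35  # roughly -40% .. +35%

def rendered_finger_gap(physical_gap_mm: float, desired_ratio: float) -> float:
    """Gap (mm) to render; desired_ratio = visual size / physical size.

    Ratios outside the reported effective range are clamped so the
    visuo-haptic mismatch stays within what users accept.
    """
    ratio = max(MIN_RATIO, min(MAX_RATIO, desired_ratio))
    return physical_gap_mm * ratio

# Example: the user grips a 50 mm object that should feel 20% larger.
print(rendered_finger_gap(50.0, 1.20))  # 60.0 mm rendered gap
```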


2018 ◽  
Vol 2018 ◽  
pp. 1-11 ◽  
Author(s):  
Yea Som Lee ◽  
Bong-Soo Sohn

3D maps such as Google Earth and Apple Maps (3D mode), in which users can view and navigate 3D models of the real world, are widely available in current mobile and desktop environments. Users typically view these maps on a monitor and interact through a keyboard and mouse. Head-mounted displays (HMDs) are currently attracting great attention from industry and consumers because they can provide an immersive virtual reality (VR) experience at an affordable cost. However, conventional keyboard and mouse interfaces reduce the level of immersion because the manipulation does not resemble the corresponding real-world action, which often makes traditional interfaces inappropriate for navigating 3D maps in virtual environments. Motivated by this, we design immersive gesture interfaces for the navigation of 3D maps that are suitable for HMD-based virtual environments. We also describe a simple algorithm to capture and recognize the gestures in real time using a Kinect depth camera. We evaluated the usability of the proposed gesture interfaces and compared them with conventional keyboard and mouse interfaces. The results of the user study indicate that our gesture interfaces are preferable for achieving a high level of immersion and fun in HMD-based virtual environments.
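
The abstract does not detail the gesture set or recognition algorithm, so the following is only a hedged sketch of a simple skeleton-based classifier of the sort a Kinect pipeline might use; the joint names, thresholds, and navigation commands are assumptions.

```python
# Hedged sketch: map one skeleton frame (Kinect camera space, meters,
# z increasing away from the camera) to a 3D-map navigation command.
# Gesture vocabulary and thresholds are illustrative, not the paper's.

from dataclasses import dataclass

@dataclass
class Joint:
    x: float
    y: float
    z: float

def classify_navigation_gesture(l_hand: Joint, r_hand: Joint,
                                l_shoulder: Joint, r_shoulder: Joint) -> str:
    """Classify a single frame into a navigation command."""
    FORWARD = 0.45   # hand at least this far in front of its shoulder (m)
    RAISED  = 0.25   # hand at least this far above its shoulder (m)

    r_fwd = (r_shoulder.z - r_hand.z) > FORWARD  # hand closer to camera
    l_fwd = (l_shoulder.z - l_hand.z) > FORWARD

    if r_fwd and l_fwd:
        return "zoom_in"        # both hands pushed forward
    if r_fwd:
        return "move_forward"   # right hand pushed forward
    if r_hand.y - r_shoulder.y > RAISED:
        return "ascend"         # right hand raised
    if l_hand.y - l_shoulder.y > RAISED:
        return "rotate_left"    # left hand raised
    return "idle"
```

A real-time loop would run this per frame and smooth the output (e.g., require a few consecutive identical labels) to avoid jitter.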


2008 ◽  
Vol 02 (02) ◽  
pp. 207-233 ◽
Author(s):  
Satoru Mega ◽ 
Younes Fadil ◽ 
Arata Horie ◽ 
Kuniaki Uehara

Human-computer interaction systems for training have advanced in recent years. These systems use multimedia techniques to create mixed-reality environments in which users can train themselves. Although most of these systems are strongly interactive and take users' states into account, they still do not consider users' preferences when assisting them. In this paper, we introduce an Action Support System for Interactive Self-Training (ASSIST) in cooking. ASSIST focuses on recognizing users' cooking actions as well as the real objects related to those actions, in order to provide accurate and useful assistance. Before the recognition and instruction processes, it takes users' cooking preferences and, by collaborative filtering, suggests one or more recipes likely to satisfy those preferences. When the cooking process starts, ASSIST recognizes the user's hand movements using a similarity measure algorithm called AMSS. When a recognized cooking action is correct, ASSIST instructs the user on the next cooking procedure through virtual objects. When a cooking action is incorrect, the cause of the failure is analyzed, and ASSIST provides the user with support information tailored to that cause so the user can correct the action. Furthermore, we construct parallel transition models from cooking recipes for more flexible instruction, enabling users to perform the necessary cooking actions in any order they want and allowing more flexible learning.
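
As an illustration of the recipe-suggestion step, here is a minimal user-based collaborative-filtering sketch. The data layout and function names are assumptions; ASSIST's actual implementation (and the AMSS action-matching algorithm) is not reproduced here.

```python
# Hedged sketch: suggest unseen recipes to a target user by weighting
# other users' ratings with rating-vector similarity.

from math import sqrt

def cosine_sim(a: dict, b: dict) -> float:
    """Cosine similarity over the recipes two users both rated."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[r] * b[r] for r in common)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def suggest_recipes(target: dict, others: list[dict], top_n: int = 3) -> list[str]:
    """Rank recipes the target has not tried by similarity-weighted ratings."""
    scores: dict[str, float] = {}
    for other in others:
        w = cosine_sim(target, other)
        for recipe, rating in other.items():
            if recipe not in target:
                scores[recipe] = scores.get(recipe, 0.0) + w * rating
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Example: a user who liked curry and stir-fry gets suggestions pulled
# up by peers with overlapping tastes.
me = {"curry": 5, "stir_fry": 4}
peers = [{"curry": 5, "pad_thai": 5}, {"stir_fry": 4, "omelette": 2}]
print(suggest_recipes(me, peers))
```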


2013 ◽  
Vol 12 (1) ◽  
pp. 30-43 ◽
Author(s):  
Bruno Eduardo Madeira ◽  
Luiz Velho

We describe a new architecture composed of software and hardware for displaying stereoscopic images over a horizontal surface. It works as a "Virtual Table and Teleporter," in the sense that virtual objects depicted over a table have the appearance of real objects. This system can be used for visualization and interaction. We propose two basic configurations: the Virtual Table, consisting of a single display surface, and the Virtual Teleporter, consisting of a pair of tables for image capture and display. The Virtual Table displays either 3D computer-generated images or previously captured stereoscopic video and can be used for interactive applications. The Virtual Teleporter captures and transmits stereoscopic video from one table to the other and can be used for telepresence applications. In both configurations the images are properly deformed and displayed for horizontal 3D stereo. In the Virtual Teleporter, two cameras are pointed at the first table, capturing a stereoscopic image pair. These images are shown on the second table, which is in fact a stereoscopic display positioned horizontally. Many applications, such as virtual reality, games, teleconferencing, and distance learning, can benefit from this technology. We present some interactive applications that we developed using this architecture.
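
A minimal sketch of the image deformation such a table requires, assuming a simple homography between the tabletop quadrilateral seen by each camera and the horizontal display; the corner coordinates and resolution below are placeholders, not the system's calibration.

```python
# Hedged sketch: warp one eye's captured frame so the tabletop quad
# maps onto the horizontal stereo display. Uses standard OpenCV calls.

import cv2
import numpy as np

# Tabletop corners as seen in the captured image (pixels) ...
src = np.float32([[102, 80], [530, 95], [560, 400], [75, 410]])
# ... and where they must land on the horizontal display (pixels).
dst = np.float32([[0, 0], [1920, 0], [1920, 1080], [0, 1080]])

H = cv2.getPerspectiveTransform(src, dst)

def warp_for_table(frame: np.ndarray) -> np.ndarray:
    """Deform one captured frame for the horizontal display surface."""
    return cv2.warpPerspective(frame, H, (1920, 1080))
```

The same warp is applied per eye, so the stereo pair stays consistent on the horizontal surface.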


Author(s):  
Stephen R. Ellis ◽  
Urs J. Bucher

The influence of physically presented background stimuli on distance judgments to optically overlaid, stereoscopic virtual images has been studied using head-mounted, stereoscopic, virtual-image displays. Positioning an opaque physical object either at the perceived depth of the virtual image or substantially in front of it has been observed to make the virtual image apparently move closer to the observer. When the physical object is positioned substantially in front of the virtual image, subjects often perceive the opaque object as transparent. Evidence is presented that the apparent change of position caused by interposition of the physical object is not influenced by the strengthening of occlusion cues but is influenced by motion of the physical objects, which would attract the subjects' ocular vergence. The observed effect appears to be associated with the relative conspicuousness of the overlaid virtual image and the background. It may be related to Foley's models of open-loop stereoscopic pointing errors, which attribute stereoscopic distance errors to misjudgment of a reference point used to interpret retinal disparities. Some implications for the design of see-through displays for manufacturing are also discussed briefly.
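
For context, the textbook small-angle relation between relative retinal disparity and depth (standard geometry, not Foley's specific model) shows why misjudging the reference distance biases every disparity-derived depth estimate:

```latex
% Interocular distance $I$, target at distance $z$, reference at $z_0$.
% The relative disparity between them is approximately
\[
  \eta \;\approx\; I\left(\frac{1}{z_0} - \frac{1}{z}\right)
       \;\approx\; \frac{I\,(z - z_0)}{z_0^{2}} ,
\]
% so an error in the assumed reference distance $z_0$ systematically
% shifts all depths recovered from retinal disparities.
```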


2020 ◽  
Vol 10 (16) ◽  
pp. 5436 ◽  
Author(s):  
Dong-Hyun Kim ◽  
Yong-Guk Go ◽  
Soo-Mi Choi

A drone must be able to fly without colliding with obstacles, both to preserve its surroundings and for its own safety. It must also offer numerous features of interest to drone users. In this paper, an aerial mixed-reality environment for first-person-view drone flying is proposed that provides an immersive experience and a safe environment for drone users by creating additional virtual obstacles when flying a drone in an open area. The proposed system is effective in conveying the depth of obstacles and enables bidirectional interaction between the real and virtual worlds using a drone equipped with a stereo camera modeled on human binocular vision. In addition, it synchronizes the parameters of the real and virtual cameras to create virtual objects in real space effectively and naturally. User studies with both general and expert users confirm that the proposed system successfully creates a mixed-reality environment with a flying drone, quickly recognizing real objects and stably combining them with virtual objects.
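
A hedged sketch of the camera-synchronization idea: deriving the virtual camera's field of view and aspect ratio from the real stereo camera's intrinsics so rendered obstacles register with the video feed. The parameter names and values are illustrative, not the paper's.

```python
# Hedged sketch: a virtual camera whose projection matches the real
# camera needs the same vertical FOV and aspect ratio, which follow
# from the pinhole intrinsics of the real camera.

import math

def vertical_fov_deg(focal_px: float, image_height_px: int) -> float:
    """Vertical FOV implied by a pinhole camera's focal length in pixels."""
    return math.degrees(2.0 * math.atan(image_height_px / (2.0 * focal_px)))

# Example: a 1280x720 stereo camera with fy = 700 px per eye.
fov = vertical_fov_deg(700.0, 720)   # ~54.5 degrees
aspect = 1280 / 720
print(f"set virtual camera: fov={fov:.1f} deg, aspect={aspect:.3f}")
```

With matching projections, a virtual obstacle composited at a given depth occupies the same screen area as a real object at that depth, which is what makes the mixed view read as one space.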


2015 ◽  
Vol 82 (5) ◽  
Author(s):  
Max-Gerd Retzlaff ◽  
Josua Stabenow ◽  
Jürgen Beyerer ◽  
Carsten Dachsbacher

When designing or improving systems for automated optical inspection (AOI), systematic evaluation is an important but costly necessity for achieving and ensuring high quality. Computer graphics methods can be used to quickly create large virtual sets of sample test objects and to simulate image-acquisition setups. We use procedural modeling techniques to generate virtual objects with varying appearance and properties, mimicking real objects and sample sets. Rigid-body physical simulation is deployed to simulate the placement of the virtual objects, and physically based rendering techniques produce synthetic images. These are used as input to an AOI system instead of physically acquired images, enabling the development, optimization, and evaluation of the image-processing and classification steps of an AOI system independently of a physical realization. We demonstrate this approach for shards of glass, since sorting glass is one challenging practical application of AOI.
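
As an illustration of the procedural-generation step, here is a small sketch (assumed, not the authors' pipeline) that produces random polygonal "glass shard" outlines, which could then be extruded, dropped via rigid-body simulation, and rendered into synthetic inspection images.

```python
# Hedged sketch: random simple-polygon shard outlines, built from
# jittered radii at sorted random angles (sorting keeps the outline
# non-self-intersecting).

import math
import random

def random_shard(n_vertices: int = 7,
                 radius_mm: float = 20.0) -> list[tuple[float, float]]:
    """One shard outline as a list of (x, y) vertices in millimeters."""
    angles = sorted(random.uniform(0.0, 2.0 * math.pi)
                    for _ in range(n_vertices))
    points = []
    for a in angles:
        r = radius_mm * random.uniform(0.4, 1.0)  # jitter per vertex
        points.append((r * math.cos(a), r * math.sin(a)))
    return points

# A virtual sample set is then many such outlines, varied further by
# parameters such as tint, thickness, and edge roughness before rendering.
shards = [random_shard() for _ in range(100)]
```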

