Visual Distance Estimation in Static Compared to Moving Virtual Scenes

2006 ◽  
Vol 9 (2) ◽  
pp. 321-331 ◽  
Author(s):  
Harald Frenz ◽  
Markus Lappe

Visual motion is used to control the direction and speed of self-motion and the time-to-contact with an obstacle. In earlier work, we found that human subjects can discriminate between the distances of different visually simulated self-motions in a virtual scene. Distance indication by means of an exocentric interval-adjustment task, however, revealed a linear correlation between perceived and indicated distances, but with a profound distance underestimation. One possible explanation for this underestimation is the perception of visual space in virtual environments. Humans perceive visual space in natural scenes as curved, and distances are increasingly underestimated with increasing distance from the observer. Such spatial compression may also exist in our virtual environment. We therefore surveyed perceived visual space in a static virtual scene. We asked observers to compare two horizontal depth intervals, similar to experiments performed in natural space. Subjects had to indicate the size of one depth interval relative to a second interval. Our observers perceived visual space in the virtual environment as compressed, similar to the perception found in natural scenes. However, the nonlinear depth function we found cannot explain the observed distance underestimation of visually simulated self-motions in the same environment.
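Perceived depth compression of the kind reported here is commonly approximated by a compressive power function of physical distance. The sketch below is purely illustrative (the power-law form and exponent are assumptions, not the function fitted in this study); it shows why two physically equal depth intervals are perceived as unequal when one lies farther from the observer.

```python
# Illustrative sketch of compressive depth perception (NOT the fitted model):
# perceived distance grows as a power of physical distance with exponent < 1.
def perceived_distance(d, a=1.0, b=0.8):
    """Map physical distance d (assumed metres) to perceived distance."""
    return a * d ** b

def perceived_interval(near, far, a=1.0, b=0.8):
    """Perceived size of the physical depth interval [near, far]."""
    return perceived_distance(far, a, b) - perceived_distance(near, a, b)

# Two physically equal 5 m intervals, one near and one far from the observer:
near_interval = perceived_interval(5, 10)    # ~2.69 perceived units
far_interval = perceived_interval(20, 25)    # ~2.15 perceived units
print(far_interval / near_interval)          # ~0.80: the far interval looks smaller
```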

2010 ◽  
Author(s):  
Tamer Soliman ◽  
Alison E. Gibson ◽  
Arthur M. Glenberg

2013 ◽  
Vol 483 ◽  
pp. 229-233
Author(s):  
Yi Liu ◽  
Shi Qi Li ◽  
Jun Feng Wang

This paper presents a feasible approach to modeling and locating assembly/disassembly tools in a virtual scene. First, a novel point-vector model for tools is presented by abstracting the locating constraints of tools; then, the mapping relationship for locating constraints between tools and parts is detailed; finally, a best-matching-constraints algorithm is proposed on the basis of the point-vector model, which can compute the locating constraints against the triangle model of a part in real time. The proposed method has been integrated into a virtual assembly system to solve practical assembly problems.
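A minimal sketch of what a point-vector locating constraint and a best-matching search might look like is given below. The class and function names, the distance-plus-direction score, and the NumPy formulation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch: a tool's locating feature abstracted as an anchor
# point plus a unit direction vector, matched against candidate constraints
# derived from a part's triangle model.
class PointVectorConstraint:
    def __init__(self, point, direction):
        self.point = np.asarray(point, dtype=float)
        d = np.asarray(direction, dtype=float)
        self.direction = d / np.linalg.norm(d)

def match_score(tool_c, part_c, w_pos=1.0, w_dir=1.0):
    """Lower score = better match between a tool constraint and a part constraint."""
    pos_err = np.linalg.norm(tool_c.point - part_c.point)
    dir_err = 1.0 - abs(np.dot(tool_c.direction, part_c.direction))
    return w_pos * pos_err + w_dir * dir_err

def best_matching_constraint(tool_c, part_constraints):
    """Pick the part constraint that best matches the tool's locating constraint."""
    return min(part_constraints, key=lambda pc: match_score(tool_c, pc))
```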


1997 ◽  
Vol 4 (4) ◽  
pp. 318-327 ◽  
Author(s):  
R Wolf ◽  
M Heisenberg

2016 ◽  
Vol 13 (122) ◽  
pp. 20160414 ◽  
Author(s):  
Mehdi Moussaïd ◽  
Mubbasir Kapadia ◽  
Tyler Thrash ◽  
Robert W. Sumner ◽  
Markus Gross ◽  
...  

Understanding the collective dynamics of crowd movements during stressful emergency situations is central to reducing the risk of deadly crowd disasters. Yet, their systematic experimental study remains a challenging open problem due to ethical and methodological constraints. In this paper, we demonstrate the viability of shared three-dimensional virtual environments as an experimental platform for conducting crowd experiments with real people. In particular, we show that crowds of real human subjects moving and interacting in an immersive three-dimensional virtual environment exhibit typical patterns of real crowds as observed in real-life crowded situations. These include the manifestation of social conventions and the emergence of self-organized patterns during egress scenarios. High-stress evacuation experiments conducted in this virtual environment reveal movements characterized by mass herding and dangerous overcrowding as they occur in crowd disasters. We describe the behavioural mechanisms at play under such extreme conditions and identify critical zones where overcrowding may occur. Furthermore, we show that herding spontaneously emerges from a density effect without the need to assume an increase of the individual tendency to imitate peers. Our experiments reveal the promise of immersive virtual environments as an ethical, cost-efficient, yet accurate platform for exploring crowd behaviour in high-risk situations with real human subjects.


2014 ◽  
Vol 23 (1) ◽  
pp. 33-50 ◽  
Author(s):  
Gabor Aranyi ◽  
Sid Kouider ◽  
Alan Lindsay ◽  
Hielke Prins ◽  
Imtiaj Ahmed ◽  
...  

The performance of current graphics engines makes it possible to incorporate subliminal cues within virtual environments (VEs), providing an additional channel of communication, fully integrated with the exploration of a virtual scene. In order to advance the application of subliminal information in this area, it is necessary to explore how techniques previously reported in the psychological literature as rendering information subliminal can be successfully implemented in VEs. Previous literature has also described the effects of subliminal cues as quantitatively modest, which raises the issue of their inclusion in practical tasks. We used a 3D rendering engine (Unity3D) to implement a masking paradigm within the context of a realistic scene and a familiar (kitchen) environment. We report significant effects of subliminal cueing on the selection of objects in a virtual scene, demonstrating the feasibility of subliminal cueing in VEs. Furthermore, we show that multiple iterations of masked objects within a trial, as well as speeded selection choices, can substantially reinforce the impact of subliminal cues. This is consistent with previous findings suggesting that the effect of subliminal stimuli fades rapidly. We conclude by proposing, as part of further work, possible mechanisms for including subliminal cueing in intelligent interfaces so as to maximize its effects.
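At its core, the masking paradigm described above is a matter of frame-accurate timing: the cue is rendered for only a few frames and is immediately replaced by a pattern mask. The engine-agnostic sketch below illustrates that trial structure; the frame counts and the `show_frame` callback are assumptions (the study itself implemented the paradigm in Unity3D).

```python
# Illustrative timing sketch for a masked (subliminal) cue.
FRAMES_PER_SECOND = 60

def masked_cue_trial(show_frame, cue_frames=2, mask_frames=12):
    """Flash a cue briefly, then immediately overwrite it with a mask.

    With cue_frames=2 at 60 Hz the cue is visible for roughly 33 ms,
    which, followed by a pattern mask, typically keeps it below the
    threshold of conscious report.
    """
    for _ in range(cue_frames):
        show_frame("cue")    # e.g. a briefly highlighted target object
    for _ in range(mask_frames):
        show_frame("mask")   # pattern mask rendered over the cue location
    show_frame("scene")      # resume the unmodified virtual scene
```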


2008 ◽  
Vol 99 (5) ◽  
pp. 2558-2576
Author(s):  
Mario Ruiz-Ruiz ◽  
Julio C. Martinez-Trujillo

Previous studies have demonstrated that human subjects update the location of visual targets for saccades after head and body movements and in the absence of visual feedback. This phenomenon is known as spatial updating. Here we investigated whether a similar mechanism exists for the perception of motion direction. We recorded eye positions in three dimensions and behavioral responses in seven subjects during a motion task in two different conditions: when the subject's head remained stationary and when subjects rotated their heads around an anteroposterior axis (head tilt). We demonstrated that (1) after head tilt, subjects updated the direction of saccades made in the perceived stimulus direction (direction of motion updating); (2) the amount of updating varied across subjects and stimulus directions; (3) the amount of motion direction updating was highly correlated with the amount of spatial updating during a memory-guided saccade task; (4) subjects updated the stimulus direction during a two-alternative forced-choice direction discrimination task in the absence of saccadic eye movements (perceptual updating); (5) perceptual updating was more accurate than motion direction updating involving saccades; and (6) subjects updated motion direction similarly during active and passive head rotation. These results demonstrate the existence of an updating mechanism for the perception of motion direction in the human brain that operates during active and passive head rotations and that resembles that of spatial updating. Such a mechanism operates during different tasks involving different motor and perceptual skills (saccades and motion direction discrimination) with different degrees of accuracy.
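As a rough geometric illustration of what "updating" means here: after a head roll, a motion direction stored in retinal coordinates corresponds to a different world direction unless it is shifted by the amount of head rotation. The sketch below uses a simplified rotation-only formulation with a gain term for partial updating; it is an assumption made for illustration, not the authors' analysis.

```python
# Illustrative only: treat head roll as a pure rotation about the line of
# sight. Under this convention a retinally stored direction maps to a world
# direction shifted by the roll angle; "updating" adds that shift back.
# gain = 1.0 means complete updating, gain = 0.0 means none.
def reported_world_direction(retinal_deg, head_roll_deg, gain=1.0):
    return (retinal_deg + gain * head_roll_deg) % 360.0

print(reported_world_direction(90.0, 30.0))        # 120.0: full compensation
print(reported_world_direction(90.0, 30.0, 0.0))   # 90.0: no updating
```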


2013 ◽  
Vol 373-375 ◽  
pp. 888-891
Author(s):  
Fang Liu ◽  
Wei Tong ◽  
Zhi Jun Qian ◽  
Yu Hong Dong

This paper introduces a laboratory model of a real-time monitoring system based on 3D visualization for a heating (calefaction) furnace and describes the process of building the model. We created a virtual environment and transmit real-time data collected on site to the virtual scene, so that the real environment can be monitored in real time. In laboratory simulations the system produced a realistic visual effect and achieved the goal of better monitoring with good real-time performance.
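A system of this kind implies a simple acquisition-to-scene loop: poll the field data, then push the values onto the corresponding objects in the virtual furnace model. The sketch below is hypothetical; the sensor interface, the one-second period, and the scene-update API are assumptions, not the authors' implementation.

```python
import time

def monitoring_loop(read_sensors, update_scene, period_s=1.0):
    """Poll live furnace data and map it onto the virtual scene.

    `read_sensors` returns measurements, e.g. {"zone1_temp_c": 873.0};
    `update_scene` applies them to the 3D scene (recolouring furnace
    zones, updating gauges, raising alarms, and so on).
    """
    while True:
        started = time.monotonic()
        update_scene(read_sensors())
        # Hold the update period roughly constant for real-time behaviour.
        time.sleep(max(0.0, period_s - (time.monotonic() - started)))
```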


2007 ◽  
Vol 180 (1) ◽  
pp. 35-48 ◽  
Author(s):  
Markus Lappe ◽  
Michael Jenkin ◽  
Laurence R. Harris

2002 ◽  
Vol 25 (2) ◽  
pp. 203-204 ◽  
Author(s):  
Romi Nijhawan ◽  
Beena Khurana

In the imagery debate, a key question concerns the inherent spatial nature of mental images. What do we mean by spatial representation? We explore a new idea that suggests that motion is instrumental in the coding of visual space. How is the imagery debate informed by the representation of space being determined by visual motion?


2009 ◽  
Vol 9 (2) ◽  
pp. 83-97 ◽  
Author(s):  
Timothy Cribbin

Previous work has shown that distance-similarity visualisation or ‘spatialisation’ can provide a potentially useful context in which to browse the results of a query search, enabling the user to adopt a simple local foraging or ‘cluster growing’ strategy to navigate through the retrieved document set. However, faithfully mapping feature-space models to visual space can be problematic owing to their inherent high dimensionality and non-linearity. Conventional linear approaches to dimension reduction tend to fail at this kind of task, sacrificing local structure in order to preserve a globally optimal mapping. In this paper the clustering performance of a recently proposed algorithm called isometric feature mapping (Isomap), which deals with non-linearity by transforming dissimilarities into geodesic distances, is compared to that of non-metric multidimensional scaling (MDS). Various graph pruning methods for geodesic distance estimation are also compared. Results show that Isomap is significantly better at preserving local structural detail than MDS, suggesting it is better suited to cluster growing and other semantic navigation tasks. Moreover, it is shown that applying a minimum-cost graph pruning criterion can provide a parameter-free alternative to the traditional K-neighbour method, resulting in spatial clustering that is equivalent to or better than that achieved using an optimal-K criterion.
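The comparison described (Isomap over a k-nearest-neighbour graph versus non-metric MDS) can be sketched with scikit-learn as below. The random document-feature matrix and the neighbour-overlap measure are placeholders for illustration; they are not the paper's corpus or its clustering evaluation.

```python
import numpy as np
from sklearn.manifold import Isomap, MDS
from sklearn.metrics.pairwise import cosine_distances
from sklearn.neighbors import NearestNeighbors

# Placeholder document-feature matrix (rows = documents, columns = terms).
rng = np.random.default_rng(0)
X = rng.random((200, 500))

# Isomap: geodesic distances over a k-nearest-neighbour graph, then a 2-D embedding.
iso_coords = Isomap(n_neighbors=10, n_components=2).fit_transform(X)

# Non-metric MDS on the pairwise cosine dissimilarities.
D = cosine_distances(X)
mds_coords = MDS(n_components=2, metric=False, dissimilarity="precomputed",
                 random_state=0).fit_transform(D)

def neighbour_overlap(high, low, k=10):
    """Crude proxy for local-structure preservation: the fraction of each
    item's k nearest neighbours in feature space that remain among its
    k nearest neighbours in the 2-D layout."""
    idx_h = NearestNeighbors(n_neighbors=k).fit(high).kneighbors(return_distance=False)
    idx_l = NearestNeighbors(n_neighbors=k).fit(low).kneighbors(return_distance=False)
    return float(np.mean([len(set(a) & set(b)) / k for a, b in zip(idx_h, idx_l)]))

print("Isomap:", neighbour_overlap(X, iso_coords))
print("MDS:   ", neighbour_overlap(X, mds_coords))
```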

