Managing Level of Detail in Virtual Environments: A Perceptual Framework

1997 ◽  
Vol 6 (6) ◽  
pp. 658-666 ◽  
Author(s):  
Martin Reddy ◽  
Benjamin Watson ◽  
Neff Walker ◽  
Larry F. Hodges

In the companion paper (Watson et al., 1997), we demonstrated the effectiveness of using perceptual criteria to select the amount of detail displayed in an immersive virtual reality (VR) system. Building on that result, we now develop a principled, perceptually oriented framework that automatically selects the appropriate level of detail (LOD) for each object in a scene, taking into consideration the limitations of the human visual system. We apply knowledge and theories from the domain of visual perception to the field of VR, optimizing the visual information presented to the user based upon established metrics of human vision. Through a series of contrast grating experiments, a user's visual acuity can be assessed in terms of spatial frequency (c/deg) and contrast, and the results can be modeled mathematically using a contrast sensitivity function (CSF). The CSF therefore lets us estimate how much visual detail the user can perceive in an object at any instant. If we can also describe the object in terms of its spatial frequencies, we can select the lowest LOD available without the user being able to perceive any visual change.
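A CSF of the kind described can be given an analytic form. The sketch below uses the Mannos and Sakrison (1974) approximation as a stand-in; the specific CSF fit, the assumed peak sensitivity `S_MAX`, and the threshold rule are illustrative assumptions, not Reddy et al.'s actual model.

```python
import math

S_MAX = 250.0  # assumed peak contrast sensitivity; real values vary per observer

def sensitivity(f):
    """Contrast sensitivity at spatial frequency f (cycles/degree), using
    the Mannos & Sakrison analytic CSF shape scaled by S_MAX. Sensitivity
    peaks near 8 c/deg and falls off steeply at higher frequencies."""
    return S_MAX * 2.6 * (0.0192 + 0.114 * f) * math.exp(-(0.114 * f) ** 1.1)

def highest_perceptible_frequency(contrast, f_max=60.0, step=0.1):
    """Scan downward from f_max for the highest spatial frequency at which
    a grating of the given contrast is still above threshold (threshold
    contrast = 1 / sensitivity). Detail in an object that lies entirely
    above this frequency should be imperceptible, so a coarser LOD could
    be substituted without visible change."""
    f = f_max
    while f > 0:
        if contrast >= 1.0 / sensitivity(f):
            return f
        f -= step
    return 0.0
```

Under this sketch, an LOD selector would compare the spatial-frequency content of each object's candidate representations against the cut-off frequency for the object's on-screen contrast.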

Author(s):  
Florian Hruby ◽  
Irma Castellanos ◽  
Rainer Ressl

Scale has been a defining criterion of mapmaking for centuries. However, this criterion is fundamentally questioned by highly immersive virtual reality (VR) systems, which can represent geographic environments at a high level of detail and thus provide the user with a feeling of being present in VR space. In this paper, we use the concept of scale as a vehicle for discussing some of the main differences between immersive VR and non-immersive geovisualization products. Based on a short review of the diverging meanings of scale, we propose possible approaches to the issue of both spatial and temporal scale in immersive VR. Our considerations should encourage a more detailed treatment of the specific characteristics of immersive geovisualization and facilitate a deeper conceptual integration of immersive and non-immersive visualization in the realm of cartography.


2018 ◽  
Vol 2018 ◽  
pp. 1-11 ◽  
Author(s):  
Yea Som Lee ◽  
Bong-Soo Sohn

3D maps such as Google Earth and Apple Maps (3D mode), in which users can view and navigate 3D models of the real world, are widely available in current mobile and desktop environments. Users typically view these maps on a monitor and interact through a keyboard and mouse. Head-mounted displays (HMDs) are currently attracting great attention from industry and consumers because they can provide an immersive virtual reality (VR) experience at an affordable cost. However, conventional keyboard and mouse interfaces reduce the level of immersion because their manipulation methods do not resemble the corresponding actions in reality, which often makes them inappropriate for navigating 3D maps in virtual environments. Motivated by this, we design immersive gesture interfaces for the navigation of 3D maps that are suitable for HMD-based virtual environments. We also describe a simple algorithm to capture and recognize these gestures in real time using a Kinect depth camera. We evaluated the usability of the proposed gesture interfaces and compared them with conventional keyboard- and mouse-based interfaces. Results of the user study indicate that our gesture interfaces are preferable for achieving a high level of immersion and fun in HMD-based virtual environments.
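A threshold-based classifier over Kinect skeleton joints is one simple way to recognize such navigation gestures. The sketch below is hypothetical: the gesture set, joint pair, axis conventions, and thresholds are illustrative assumptions, not the interface described in the paper.

```python
def classify_gesture(right_hand, right_shoulder, threshold=0.25):
    """Classify a navigation gesture from the 3-D offset (metres) of the
    right hand relative to the right shoulder, as reported by a skeleton
    tracker. Axes assumed: +x right, +y up, +z toward the camera.
    Returns one of: 'pan+', 'pan-', 'tilt+', 'tilt-', 'zoom+', 'zoom-',
    or 'idle' when no axis exceeds the threshold."""
    dx = right_hand[0] - right_shoulder[0]
    dy = right_hand[1] - right_shoulder[1]
    dz = right_hand[2] - right_shoulder[2]
    # Pick the dominant displacement axis, if any exceeds the dead zone.
    axes = {"pan": dx, "tilt": dy, "zoom": dz}
    name, value = max(axes.items(), key=lambda kv: abs(kv[1]))
    if abs(value) < threshold:
        return "idle"
    return name + ("+" if value > 0 else "-")
```

In a real-time loop this would be called once per tracked frame, with the returned label mapped onto the 3D-map camera controls.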


F1000Research ◽  
2013 ◽  
Vol 2 ◽  
pp. 58 ◽  
Author(s):  
J Daniel McCarthy ◽  
Colin Kupitz ◽  
Gideon P Caplovitz

Our perception of an object’s size arises from the integration of multiple sources of visual information, including retinal size, perceived distance, and the object’s size relative to other objects in the visual field. This constructive process is revealed through a number of classic size illusions, such as the Delboeuf Illusion, the Ebbinghaus Illusion, and others illustrating size constancy. Here we present a novel variant of the Delboeuf and Ebbinghaus size illusions that we have named the Binding Ring Illusion. In this illusion, the perceived size of a circular array of elements is underestimated when a circular contour – a binding ring – is superimposed on the array, and overestimated when the binding ring slightly exceeds the overall size of the array. Here we characterize the stimulus conditions that lead to the illusion, and the perceptual principles that underlie it. Our findings indicate that the perceived size of an array is susceptible to assimilation toward an explicitly defined superimposed contour. Our results also indicate that the assimilation process takes place at a relatively high level in the visual processing stream, after different spatial frequencies have been integrated and global shape has been constructed. We hypothesize that the Binding Ring Illusion arises because the size of an array of elements is not explicitly defined and can therefore be influenced (through a process of assimilation) by the presence of a superimposed object that does have an explicit size.


2018 ◽  
pp. 1176-1199 ◽
Author(s):  
Diane Gromala ◽  
Xin Tong ◽  
Chris Shaw ◽  
Weina Jin

In the 1990s, when immersive virtual reality (VR) was first popular, researchers found it to be an effective intervention for reducing acute pain. Since that time, VR technologies have been used to treat acute pain. Although the exact mechanism is unclear, VR is thought to be an especially effective form of pain distraction. While pain-related virtual environments have built upon pain distraction, a handful of researchers have focused on a more difficult challenge: VR for long-term chronic pain. Because the nature of chronic pain is complex, pharmacological analgesics are often insufficient or unsustainable as an ideal long-term treatment. In this chapter, the authors explore how VR can be used as a non-pharmacological adjuvant for chronic pain. Two paradigms for virtual environments built to address chronic pain have emerged – Pain Distraction and what we term Pain Self-modulation. We discuss the evidence validating VR for mitigating pain in patients with acute pain, in those with chronic pain, and during “breakthrough” periods of higher pain in patients with chronic pain.


Author(s):  
Elizabeth Thorpe Davis ◽  
Larry F. Hodges

Two fundamental purposes of human spatial perception, in either a real or virtual 3D environment, are to determine where objects are located in the environment and to distinguish one object from another. Although various sensory inputs, such as haptic and auditory inputs, can provide this spatial information, vision usually provides the most accurate, salient, and useful information (Welch and Warren, 1986). Moreover, of the visual cues available to humans, stereopsis provides an enhanced perception of depth and of three-dimensionality for a visual scene (Yeh and Silverstein, 1992). (Stereopsis or stereoscopic vision results from the fusion of the two slightly different views of the external world that our laterally displaced eyes receive (Schor, 1987; Tyler, 1983).) In fact, users often prefer using 3D stereoscopic displays (Spain and Holzhausen, 1991) and find that such displays provide more fun and excitement than do simpler monoscopic displays (Wichanski, 1991). Thus, in creating 3D virtual environments or 3D simulated displays, much attention has recently been devoted to visual 3D stereoscopic displays. Yet, given the costs and technical requirements of such displays, we should consider several issues. First, we should consider in what conditions and situations these stereoscopic displays enhance perception and performance. Second, we should consider how binocular geometry and various spatial factors can affect human stereoscopic vision and, thus, constrain the design and use of stereoscopic displays. Finally, we should consider the modeling geometry of the software, the display geometry of the hardware, and some technological limitations that constrain the design and use of stereoscopic displays by humans. In the following section we consider when 3D stereoscopic displays are useful and why they are useful in some conditions but not others. In the section after that we review some basic concepts about human stereopsis and fusion that are of interest to those who design or use 3D stereoscopic displays. Also in that section we point out some spatial factors that limit stereopsis and fusion in human vision, as well as some potential problems that should be considered in designing and using 3D stereoscopic displays. Following that we discuss some software and hardware issues, such as modeling geometry and display geometry, as well as geometric distortions and other artifacts that can affect human perception.
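The binocular geometry that constrains such displays follows from similar triangles between the two eyes, the screen plane, and the virtual point. A minimal sketch, assuming an illustrative 0.6 m viewing distance and 65 mm interpupillary distance (the chapter's own treatment of modeling and display geometry is more complete):

```python
import math

def screen_parallax(z, viewing_distance=0.6, ipd=0.065):
    """Horizontal screen parallax (metres) that places a point at distance
    z (metres) from the viewer, for a screen at viewing_distance.
    Positive = uncrossed disparity (point behind the screen), negative =
    crossed (in front), zero at the screen plane; the parallax approaches
    the full IPD as z goes to infinity."""
    return ipd * (z - viewing_distance) / z

def convergence_angle_deg(z, ipd=0.065):
    """Convergence angle (degrees) for a point fixated at distance z,
    using the small-angle approximation ipd / z."""
    return math.degrees(ipd / z)
```

One practical consequence visible in the formula: requesting a point at infinity demands a parallax equal to the full IPD, so display resolution and comfortable disparity limits bound the usable depth range.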


Perception ◽  
1973 ◽  
Vol 2 (1) ◽  
pp. 53-60 ◽  
Author(s):  
J A Movshon ◽  
C Blakemore

An adaptation method is used to determine the orientation specificity of channels sensitive to different spatial frequencies in the human visual system. Comparison between different frequencies is made possible by a data transformation in which orientational effects are expressed in terms of equivalent contrast (the contrast of a vertical grating producing the same adaptational effect as a high-contrast grating of a given orientation). It is shown that, despite great variance in the range of orientations affected by adaptation at different spatial frequencies (±10° to ±50°), the half-width at half-amplitude of the orientation channels does not vary systematically as a function of spatial frequency over the range tested (2·5 to 20 cycles deg−1). Two subjects were used and they showed significantly different orientation tuning across the range of spatial frequencies. The results are discussed with reference to previous determinations of orientation specificity, and to related psychophysical and neurophysiological phenomena.


Perception ◽  
1996 ◽  
Vol 25 (1_suppl) ◽  
pp. 162-162 ◽  
Author(s):  
T Troscianko ◽  
C A Parraga ◽  
G Brelstaff ◽  
D Carr ◽  
K Nelson

A common assumption in the study of the relationship between human vision and the visual environment is that human vision has developed in order to encode the incident information in an optimal manner. Such arguments have been used to support the 1/f dependence of scene content as a function of spatial frequency. In keeping with this assumption, we ask whether there are any important differences between the luminance and (r/g) chrominance Fourier spectra of natural scenes, the simple expectation being that the chrominance spectrum should be relatively richer in low spatial frequencies than the luminance spectrum, to correspond with the different shape of luminance and chrominance contrast sensitivity functions. We analysed a data set of 29 images of natural scenes (predominantly of vegetation at different distances) which were obtained with a hyper-spectral camera (measuring the scene through a set of 31 wavelength bands in the range 400 – 700 nm). The images were transformed to the three Smith-Pokorny cone fundamentals, and further transformed into ‘luminance’ (r+g) and ‘chrominance’ (r-g) images, with various assumptions being made about the relative weighting of the r and g components, and the form of the chrominance response. We then analysed the Fourier spectra of these images using logarithmic intervals in spatial frequency space. This allowed a determination of the total energy within each Fourier band for each of the luminance and chrominance representations. The results strongly indicate that, for the set of scenes studied here, there was no evidence of a predominance of low-spatial-frequency chrominance information. Two classes of explanation are possible: (a) that raw Fourier content may not be the main organising principle determining visual encoding of colour, and/or (b) that our scenes were atypical of what may have driven visual evolution. We present arguments in favour of both of these propositions.
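The per-band energy computation described can be sketched as below. The number of bands, the DC handling, and the cycles-per-pixel frequency units are illustrative choices, not necessarily the authors' exact procedure.

```python
import numpy as np

def log_band_energy(img, n_bands=6):
    """Total Fourier power of a 2-D image within logarithmically spaced
    radial spatial-frequency bands (DC excluded). Frequencies are in
    cycles/pixel, from the lowest resolvable frequency up to the Nyquist
    limit of 0.5. Returns (band_edges, energies)."""
    h, w = img.shape
    # Remove the mean so the DC term does not leak into the lowest band.
    F = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(F) ** 2
    fy = np.fft.fftshift(np.fft.fftfreq(h))
    fx = np.fft.fftshift(np.fft.fftfreq(w))
    radius = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    edges = np.logspace(np.log10(1.0 / max(h, w)), np.log10(0.5), n_bands + 1)
    energies = np.array([power[(radius >= lo) & (radius < hi)].sum()
                         for lo, hi in zip(edges[:-1], edges[1:])])
    return edges, energies
```

Running this separately on the luminance (r+g) and chrominance (r-g) images and comparing the low-frequency bands would reproduce the kind of comparison the abstract describes.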


2019 ◽  
Vol 9 (10) ◽  
pp. 2020 ◽  
Author(s):  
Roi Méndez ◽  
Enrique Castelló ◽  
José Ramón Ríos Viqueira ◽  
Julián Flores

A virtual TV set combines actors and objects with computer-generated virtual environments in real time. Nowadays, this technology is widely used in television broadcasts and cinema productions. A virtual TV set consists of three main elements: the stage, the computer system, and the chroma-keyer. The stage is composed of a monochrome cyclorama (the background) in front of which actors and objects are located (the foreground). The computer system generates the virtual elements that will form the virtual environment. The chroma-keyer combines the elements in the foreground with the computer-generated environments by erasing the monochrome background and insetting the synthetic elements using the chroma-keying technique. To ease the background removal, the cyclorama illumination must be diffuse and homogeneous, avoiding the hue differences introduced by shadows, shines, and over-lit areas. The analysis of this illumination is usually performed manually by an expert using a photometer, which makes the process slow, tedious, and dependent on the experience of the operator. In this paper, we present a new calibration process that lets non-experts check and improve the homogeneity of a cyclorama’s illumination using custom software that provides both visual information and statistical data. This calibration process segments a cyclorama image into regions of similar luminance and calculates the centroid of each region. The statistical study of the variation in the size of the regions and the position of the centroids is the key tool used to determine the homogeneity of the cyclorama lighting.
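The segment-and-centroid step can be sketched as follows. The equal-width luminance quantization and the band count are assumptions for illustration; the paper's actual segmentation method is not specified in this abstract.

```python
import numpy as np

def luminance_regions(luma, n_levels=4):
    """Segment a luminance image into n_levels equal-width luminance bands
    between its min and max, and return per-band statistics as a list of
    (pixel_count, centroid_row, centroid_col). On a well-lit, homogeneous
    cyclorama nearly all pixels should fall into a single band whose
    centroid lies near the image centre; spread across bands, or centroids
    drifting toward one side, signals uneven lighting."""
    lo, hi = float(luma.min()), float(luma.max())
    span = (hi - lo) or 1.0  # guard against a perfectly flat image
    labels = np.minimum(((luma - lo) / span * n_levels).astype(int),
                        n_levels - 1)
    rows, cols = np.indices(luma.shape)
    stats = []
    for k in range(n_levels):
        mask = labels == k
        n = int(mask.sum())
        if n == 0:
            stats.append((0, None, None))
        else:
            stats.append((n, rows[mask].mean(), cols[mask].mean()))
    return stats
```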

