Virtual Environments and Advanced Interface Design
Latest Publications

Total documents: 18 (five years: 0)
H-index: 2 (five years: 0)
Published by: Oxford University Press
ISBN: 9780195075557, 9780197560310

Author(s): Christopher D. Wickens and Polly Baker

Virtual reality involves the creation of a multisensory experience of an environment (its space and events) through artificial, electronic means; that environment nevertheless incorporates enough features of the non-artificial world to be experienced as "reality." The cognitive issues of virtual reality are those involved in knowing and understanding the virtual environment (cognitive: to perceive and to know). The knowledge we are concerned with in this chapter is both short term (Where am I in the environment? What do I see? Where do I go and how do I get there?) and long term (What can and do I learn about the environment as I see and explore it?). Given the recent interest in virtual reality as a concept (Rheingold, 1991; Wexelblat, 1993; Durlach and Mavor, 1994), it is important to recognize that virtual reality is not, in fact, a unified thing, but can be broken down into a set of five features, any one of which can be present or absent to create a greater sense of reality:

1. Three-dimensional (perspective and/or stereoscopic) viewing vs. two-dimensional planar viewing (Sedgwick, 1986; Wickens et al., 1989). Thus, the geography student who views a 3D representation of the environment has a more realistic view than one who views a 2D contour map.

2. Dynamic vs. static display. A video or movie is more real than a series of static images of the same material.

3. Closed-loop (interactive or learner-centered) vs. open-loop interaction. In the more realistic closed-loop mode, the learner has control over which aspect of the learning "world" is viewed or visited; that is, the learner is an active navigator as well as an observer.

4. Inside-out (ego-referenced) vs. outside-in (world-referenced) frame of reference. In the more realistic inside-out frame of reference, the image of the world on the display is viewed from the perspective of the user's own point of ego-reference (the point being manipulated by the control), a property often characterized as "immersion." Thus, the explorer of a virtual undersea environment will view that world from a perspective akin to that of a camera placed on the explorer's head;
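Feature 1 has a simple geometric reading. The following is a minimal sketch (not from the chapter; it assumes an idealized pinhole camera with focal length f) showing that perspective projection scales a point's image by its distance, while planar viewing discards depth entirely:

```python
# Hedged illustration of feature 1: perspective (3D) vs. planar (2D) viewing.
# Assumptions: idealized pinhole camera at the origin looking down +z with
# focal length f; none of this comes from the chapter itself.

def perspective_project(x, y, z, f=1.0):
    """Project a 3D point onto the image plane: farther points shrink inward."""
    if z <= 0:
        raise ValueError("point must lie in front of the camera (z > 0)")
    return (f * x / z, f * y / z)

def planar_project(x, y, z):
    """2D planar viewing: depth is simply discarded."""
    return (x, y)

# The same point at two depths: only the perspective image conveys distance.
for z in (2.0, 10.0):
    print(f"z = {z:5.1f}:  perspective {perspective_project(1.0, 1.0, z)},"
          f"  planar {planar_project(1.0, 1.0, z)}")
```

The geography student's contour map corresponds to the planar case; the 3D rendering adds the depth-dependent scaling that makes the view read as "realistic."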


Author(s): Elizabeth Thorpe Davis and Larry F. Hodges

Two fundamental purposes of human spatial perception, in either a real or a virtual 3D environment, are to determine where objects are located in the environment and to distinguish one object from another. Although various sensory inputs, such as haptic and auditory inputs, can provide this spatial information, vision usually provides the most accurate, salient, and useful information (Welch and Warren, 1986). Moreover, of the visual cues available to humans, stereopsis provides an enhanced perception of depth and of three-dimensionality for a visual scene (Yeh and Silverstein, 1992). (Stereopsis, or stereoscopic vision, results from the fusion of the two slightly different views of the external world that our laterally displaced eyes receive (Schor, 1987; Tyler, 1983).) In fact, users often prefer using 3D stereoscopic displays (Spain and Holzhausen, 1991) and find that such displays provide more fun and excitement than do simpler monoscopic displays (Wichanski, 1991). Thus, in creating 3D virtual environments or 3D simulated displays, much attention has recently been devoted to visual 3D stereoscopic displays. Yet, given the costs and technical requirements of such displays, we should consider several issues. First, we should consider in what conditions and situations these stereoscopic displays enhance perception and performance. Second, we should consider how binocular geometry and various spatial factors can affect human stereoscopic vision and, thus, constrain the design and use of stereoscopic displays. Finally, we should consider the modeling geometry of the software, the display geometry of the hardware, and some technological limitations that constrain the design and use of stereoscopic displays by humans.

In the following section we consider when 3D stereoscopic displays are useful and why they are useful in some conditions but not others. In the section after that we review some basic concepts about human stereopsis and fusion that are of interest to those who design or use 3D stereoscopic displays; we also point out some spatial factors that limit stereopsis and fusion in human vision, as well as some potential problems that should be considered in designing and using 3D stereoscopic displays. Following that, we discuss some software and hardware issues, such as modeling geometry and display geometry, as well as geometric distortions and other artifacts that can affect human perception.
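The binocular geometry the authors refer to can be made concrete with a short calculation. Below is a minimal sketch, assuming a desktop display viewed at 0.7 m and an interpupillary distance of 65 mm (illustrative values, not the chapter's), of the horizontal screen parallax that places a virtual point at a chosen depth:

```python
# Hedged sketch of standard stereo-display geometry (not code from the
# chapter). Assumed symbols: ipd = interpupillary distance, v = viewer-to-
# screen distance, d = intended distance of the virtual point (all meters).

def screen_parallax(d, v=0.7, ipd=0.065):
    """Horizontal parallax on the screen plane for a point at distance d.

    p = 0  -> point appears on the screen plane
    p > 0  -> uncrossed parallax (point appears behind the screen)
    p < 0  -> crossed parallax (point appears in front of the screen)
    """
    return ipd * (d - v) / d

for d in (0.35, 0.7, 2.0, 100.0):
    print(f"d = {d:6.2f} m  ->  parallax = {1000 * screen_parallax(d):7.1f} mm")
```

Because parallax approaches the interpupillary distance as simulated depth grows, a renderer that lets parallax exceed it asks the eyes to diverge, one instance of the fusion limits the chapter goes on to discuss.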


Author(s): Robert J. K. Jacob

The problem of human-computer interaction can be viewed as two powerful information processors (human and computer) attempting to communicate with each other via a narrow-bandwidth, highly constrained interface (Tufte, 1989). To address it, we seek faster, more natural, and more convenient means for users and computers to exchange information. The user's side is constrained by the nature of human communication organs and abilities; the computer's is constrained only by the input/output devices and interaction techniques that we can invent. Current technology has been stronger in the computer-to-user direction than in the user-to-computer direction; hence today's user-computer dialogues are rather one-sided, with the bandwidth from the computer to the user far greater than that from user to computer. Using eye movements as a user-to-computer communication medium can help redress this imbalance. This chapter describes the relevant characteristics of the human eye, eye-tracking technology, how to design interaction techniques that incorporate eye movements into the user-computer dialogue in a convenient and natural way, and the relationship between eye-movement interfaces and virtual environments.

As with other areas of research and design in human-computer interaction, it is helpful to build on the equipment and skills humans have acquired through evolution and experience and to search for ways to apply them to communicating with a computer. Direct manipulation interfaces have enjoyed great success largely because they draw on analogies to existing human skills (pointing, grabbing, moving objects in space), rather than on trained behaviors. Similarly, we try to make use of natural eye movements in designing interaction techniques for the eye. Because eye movements are so different from conventional computer inputs, our overall approach in designing interaction techniques is, wherever possible, to obtain information from a user's natural eye movements while viewing the screen, rather than requiring the user to make specific trained eye movements to actuate the system. This requires careful attention to issues of human design, as does any successful work in virtual environments. The goal is for human-computer interaction to start with studies of the characteristics of human communication channels and skills and then to develop devices, interaction techniques, and interfaces that communicate effectively to and from those channels.
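One concrete way to obtain information from natural eye movements rather than demand trained ones is dwell-time selection: an object is selected only after the gaze rests on it long enough to signal interest. A minimal sketch, with an assumed 0.5 s threshold and a hypothetical stream of (time, fixated-object) samples:

```python
# Hedged sketch of dwell-time gaze selection (an illustration, not the
# chapter's own implementation). Assumed input: (timestamp_s, target_id)
# samples, where target_id names the object under the current fixation,
# or None if the gaze is on empty screen.

DWELL_THRESHOLD_S = 0.5   # assumed value; real systems tune this per task

def dwell_select(gaze_samples, threshold=DWELL_THRESHOLD_S):
    """Yield a target each time gaze has dwelt on it for `threshold` seconds."""
    current, start = None, None
    for t, target in gaze_samples:
        if target != current:             # gaze moved to a new object: reset
            current, start = target, t
        elif target is not None and t - start >= threshold:
            yield t, target               # dwelt long enough: fire a selection
            start = float("inf")          # suppress re-firing until gaze moves

samples = [(0.00, "button_a"), (0.25, "button_a"), (0.55, "button_a"),
           (0.80, None), (1.00, "button_b")]
print(list(dwell_select(samples)))        # -> [(0.55, 'button_a')]
```

The threshold is the main design lever: too short and the interface selects everything the user merely looks at (the "Midas touch" problem); too long and it feels unresponsive.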


Author(s): Michael Cohen and Elizabeth M. Wenzel

Early computer terminals allowed only textual I/O. Because the user read and wrote vectors of character strings, this mode of I/O (the character-based user interface, or "CUI") could be thought of as one-dimensional (1D). As terminal technology improved, users could manipulate graphical objects (via a graphical user interface, or "GUI") in 2D. Although the I/O was no longer unidimensional, it was still limited to the planar dimensionality of a CRT or tablet. Now there exist 3D spatial pointers and 3D graphics devices; this latest phase of I/O devices (Blattner, 1992; Blattner and Dannenberg, 1992; Robinett, 1992) approaches the way that people deal with "the real world." 3D audio (in which the sound has a spatial attribute, originating, virtually or actually, from an arbitrary point with respect to the listener) and more exotic spatial I/O modalities are under development. The evolution of I/O devices can be roughly grouped into generations that also correspond to the number of dimensions. Representative instances of each technology are shown in Table 8-1. This chapter focuses on the italicized entries in the third-generation aural sector.

Audio alarms and signals of various types have been with us since long before there were computers, but even though music and the visual arts are considered sibling muses, a disparity exists between the exploitation of sound and of graphics in interfaces. (Most people think that it would be easier to be hearing- than sight-impaired, even though the incidence of disability-related cultural isolation is higher among the deaf than among the blind.) For whatever reasons, the development of user interfaces has historically focused more on visual modes than on aural ones. This imbalance is especially striking in view of the increasing availability of sound in current technology platforms. Sound is frequently included in personal computers and utilized to the limits of its availability or affordability. However, computer-aided exploitation of the audio bandwidth is only beginning to rival that of graphics. General sound capability is slowly being woven into the fabric of applications. Indeed, some of these programs are inherently dependent on sound (voicemail, voice annotation to electronic mail, teleconferencing, audio archiving), while other applications use sound to complement their underlying functionality.
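A flavor of what a "spatial attribute" of sound means computationally: one classical cue for placing a virtual source is the interaural time difference (ITD), the extra time sound takes to reach the far ear. The sketch below uses the Woodworth spherical-head approximation with assumed values for head radius and the speed of sound; it is an illustration, not the chapter's own model:

```python
import math

# Hedged sketch of one ingredient of 3D audio: the interaural time difference
# predicted by the classic Woodworth spherical-head model. Assumptions:
# head radius a ~= 0.0875 m, speed of sound c ~= 343 m/s, azimuth theta in
# radians measured from straight ahead (valid for 0 <= theta <= pi/2).

def itd_woodworth(theta, a=0.0875, c=343.0):
    """Extra travel time of sound to the far ear, in seconds."""
    return (a / c) * (math.sin(theta) + theta)

for deg in (0, 30, 60, 90):
    t = itd_woodworth(math.radians(deg))
    print(f"azimuth {deg:3d} deg -> ITD = {t * 1e6:6.1f} microseconds")
```

A 3D audio system applies a delay of this order (together with level and spectral differences) to each ear's signal so that the source appears to originate from the chosen azimuth.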


Author(s): Stephen R. Ellis

Virtual environments created through computer graphics are communications media (Licklider et al., 1978). Like other media, they have both physical and abstract components. Paper, for example, is a medium for communication: the paper is itself one possible physical embodiment of the abstraction of a two-dimensional surface onto which marks may be made. The corresponding abstraction for head-coupled, virtual-image, stereoscopic displays that synthesize a coordinated sensory experience is an environment. These so-called "virtual reality" media have only recently caught the international public imagination (Pollack, 1989; D'Arcy, 1990; Stewart, 1991; Brehde, 1991), but they have arisen from continuous development in several technical and non-technical areas during the past 25 years (Brooks Jr., 1988; Ellis, 1990; Ellis et al., 1991, 1993; Kalawsky, 1993).

A well-designed computer interface affords the user an efficient and effortless flow of information to and from the device with which he interacts. When users are given sufficient control over the pattern of this interaction, they themselves can evolve efficient interaction strategies that match the coding of their communications to the characteristics of their communication channel (Zipf, 1949; Mandelbrot, 1982; Ellis and Hitchcock, 1986; Grudin and Norman, 1991). But successful interface design should strive to reduce this adaptation period by analysis of the user's task and performance limitations. This analysis requires understanding of the operative design metaphor for the interface in question. The dominant interaction metaphor for the computer interface changed in the 1980s. Modern graphical interfaces, like those first developed at Xerox PARC (Smith et al., 1982) and used for the Apple Macintosh, transformed the "conversational" interaction from one in which users "talked" to their computers to one in which they "acted out" their commands in a "desk-top" display. This so-called desk-top metaphor provides users with the illusion of an environment in which they enact their wishes by manipulating symbols on a computer screen. Virtual environment displays represent a three-dimensional generalization of the two-dimensional desk-top metaphor. These synthetic environments may be experienced from either egocentric or exocentric viewpoints; that is to say, users may appear to actually be in the environment, or may see themselves represented as a "You are here" symbol (Levine, 1984) which they can control.
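The closing distinction can be stated in rendering terms: the same scene is drawn either from the user's own eye point (egocentric, immersive) or from an external camera that keeps the user's "You are here" symbol in view (exocentric). A minimal sketch with hypothetical position vectors and an assumed fixed camera offset:

```python
# Hedged sketch (assumed representation: positions as 3-tuples) of the
# egocentric/exocentric distinction; illustration only, not from the chapter.

def egocentric_camera(user_pos, gaze_point):
    """Camera sits at the user's eye and looks where the user looks."""
    return {"eye": user_pos, "look_at": gaze_point}

def exocentric_camera(user_pos, offset=(0.0, 3.0, -5.0)):
    """Camera sits at a fixed offset and looks back at the user's avatar."""
    eye = tuple(u + o for u, o in zip(user_pos, offset))
    return {"eye": eye, "look_at": user_pos}

user = (1.0, 1.7, 4.0)                       # hypothetical tracked head position
print(egocentric_camera(user, (1.0, 1.7, 10.0)))
print(exocentric_camera(user))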


Author(s): Woodrow Barfield and Craig Rosenberg

Recent technological advancements in virtual environment equipment have led to the development of augmented-reality displays for applications in medicine, manufacturing, and scientific visualization (Bajura et al., 1992; Janin et al., 1993; Milgram et al., 1991; Lion et al., 1993). However, even with these advances, augmented-reality displays are still at an early stage of development, primarily demonstrating the possibilities, the uses, and the technical realization of the concept. The purpose of this chapter is to review the literature on the design and use of augmented-reality displays, to suggest applications for this technology, and to suggest new techniques for creating these displays. In addition, the chapter discusses the technological issues associated with creating augmented realities, such as image registration, update rate, and the range and sensitivity of position sensors. Furthermore, the chapter discusses human-factors issues and visual requirements that should be considered when creating augmented-reality displays.

Essentially, an augmented-reality display allows a designer to combine part or all of a real-world visual scene with synthetic imagery. Typically, the real-world visual scene in an augmented-reality display is captured by video or viewed directly. In terms of descriptions of augmented reality found in the literature, Janin et al. (1993) used the term "augmented reality" to signify a see-through head-mounted display (HMD) which allowed the user to view his surroundings with computer graphics overlaid on the real-world scene. Similarly, Robinett (1992) suggested the term "augmented reality" for a real image that was being enhanced with synthetic parts; he called the result a "merged representation". Finally, Fuchs and Neuman (1993) observed that an augmented-reality display combines a simulated environment with direct perception of the world, along with the capability to interactively manipulate the real or virtual object(s). Based on the above descriptions, most current augmented-reality displays are designed using see-through HMDs, which allow the observer to view the real world directly with the naked eye. However, if video is used to capture the real world, one may use either an opaque HMD or a screen-based system to view the scene (Lion et al., 1993).
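Image registration, the first technical issue listed, amounts to re-projecting each world-anchored virtual point into display coordinates every time the head tracker reports a new pose. The sketch below is a deliberately reduced illustration (a 2D world, a hypothetical tracker reporting (x, y, heading), and an assumed focal length in pixels), not any of the cited systems:

```python
import math

# Hedged sketch of the registration step in a see-through display.
# Assumptions: 2D world; heading = 0 means gazing along +y; f is an
# assumed focal length in pixels. Illustration only, not from the chapter.

def world_to_display(point, head_pose, f=500.0):
    """Horizontal pixel offset at which to draw a world-anchored virtual point."""
    px, py = point
    hx, hy, heading = head_pose
    dx, dy = px - hx, py - hy
    right = dx * math.cos(heading) - dy * math.sin(heading)  # lateral offset
    ahead = dx * math.sin(heading) + dy * math.cos(heading)  # depth along gaze
    if ahead <= 0:
        return None                       # behind the viewer: nothing to draw
    return f * right / ahead              # perspective projection to pixels

# As the head turns, the overlay must shift so the annotation stays "attached".
for heading in (0.0, 0.1, 0.2):           # radians
    print(world_to_display((0.0, 5.0), (0.0, 0.0, heading)))
```

Registration error appears when the tracker's pose is late or wrong: the computed pixel offset no longer matches where the real object sits in the see-through view, which is why update rate and sensor accuracy appear alongside registration in the chapter's list of issues.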


Author(s): Woodrow Barfield and David Zeltzer

Recent developments in display technology, specifically head-mounted displays slaved to the user's head position, techniques to spatialize sound, and computer-generated tactile and kinesthetic feedback, allow humans to experience impressive visual, auditory, and tactile simulations of virtual environments. However, while technological advancements in the equipment used to produce virtual environments have been quite impressive, what is currently lacking is a conceptual and analytical framework to guide research in this developing area. Also lacking is a set of metrics that can be used to measure performance within virtual environments and to quantify the level of presence experienced by participants in virtual worlds. Given the importance of achieving presence in virtual environments, it is interesting to note that we currently have no theory of presence, let alone a theory of virtual presence (feeling as if you are present in the environment generated by the computer) or telepresence (feeling as if you are actually "there" at the remote site of operation). This is despite the fact that students of literature, the graphic arts, the theater arts, film, and TV have long been concerned with the observer's sense of presence. In fact, one might ask: what do the new technological interfaces in the virtual environment domain add, and how do they affect this sense, beyond the ways in which our imaginations (mental models) have been stimulated by authors and artists for centuries?

Not only is it necessary to develop a theory of presence for virtual environments; it is also necessary to develop a basic research program to investigate the relationship between presence and performance in virtual environments. To develop a basic research program focusing on presence, several important questions need to be addressed. The first is: how do we measure the level of presence experienced by an operator within a virtual environment? We need to develop an operational, reliable, useful, and robust measure of presence in order to evaluate the various techniques used to produce virtual environments. Second, we need to determine when, and under what conditions, presence can be a benefit or a detriment to performance.
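To make the measurement question concrete, one common operationalization (offered here as an illustration, not as the authors' proposal) is a post-trial questionnaire whose Likert-scale items are averaged into a single presence score per participant:

```python
# Hypothetical presence-questionnaire scoring; an assumed operationalization,
# not a measure proposed in the chapter.

def presence_score(ratings, scale_max=7):
    """Mean of 1..scale_max Likert ratings, normalized to the range 0-1."""
    if not all(1 <= r <= scale_max for r in ratings):
        raise ValueError("ratings must lie on the questionnaire scale")
    return (sum(ratings) / len(ratings) - 1) / (scale_max - 1)

# Hypothetical responses from one participant across six items.
print(round(presence_score([6, 5, 7, 4, 6, 5]), 2))   # -> 0.75
```

The chapter's criteria (operational, reliable, useful, robust) are exactly what such a score must be validated against before it can be used to compare display techniques.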


Author(s): Blake Hannaford and Steven Venema

Humans perceive their surrounding environment through five sensory channels, popularly labeled "sight," "sound," "taste," "smell," and "touch." All of these modalities are fused together in our brains into an apparently seamless perception of our world. While we typically place the most importance on our visual sense, it is our sense of touch that provides us with much of the information necessary to modify and manipulate the world around us. This sense can be divided into two categories: the kinesthetic sense, through which we sense movement or force in muscles and joints, and the tactile sense, through which we sense shapes and textures. This chapter will focus on the use of the kinesthetic sense in realistic teleoperation and virtual environment simulations.

Artificial kinesthetic feedback techniques were first developed in the field of teleoperation: robot manipulators remotely controlled by human operators. In teleoperation, the perceptions of a physically remote environment must be conveyed to the human operator in a realistic manner. This differs from virtual reality, in which the perceptions of a simulated environment are conveyed to the user. Thus, the teleoperation and virtual-environment communities share many of the same user-interface issues, but in teleoperation the need for detailed world modeling is less central. The earliest remote manipulation systems were operated by direct mechanical linkages, and the operator viewed the workspace directly through windows (Goertz, 1964). Perhaps because of their relative simplicity and high performance, little was learned about sensory requirements for remote manipulation from these early devices. When remote manipulation was developed for long distances and mobile platforms, electronic links became mandatory. The earliest attempts drove the remote manipulator with a position signal only, and no information was returned to the operator about contact force. In the original mechanical designs, force information was intrinsically available because the linkages (actually metal-tape-and-pulley transmissions) were relatively stiff, low-mass connections between the operator and the environment. With the shift to electronic links, the loss of kinesthetic information was immediately apparent to the operators. The first artificial kinesthetic displays arose to provide improved functionality for remote manipulators.
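What the early electronic links lost can be restored in software. Below is a minimal sketch of one standard scheme, position-error force reflection, shown as a generic illustration rather than the chapter's own design, with stiffness and damping gains chosen arbitrarily: the operator feels a spring-damper force whenever the master handle and the remote slave positions disagree, as they do during contact.

```python
# Hedged sketch of position-error bilateral force reflection (a standard
# scheme, not the chapter's design). Assumed gains: stiffness k (N/m) and
# damping b (N*s/m), chosen for illustration only.

def reflected_force(x_master, x_slave, v_master, k=200.0, b=5.0):
    """Force (N) displayed to the operator: the slave lagging feels like a spring."""
    return -k * (x_master - x_slave) - b * v_master

# Free motion (slave keeps up) vs. contact (slave blocked by a wall at 0.10 m):
print(reflected_force(x_master=0.10, x_slave=0.10, v_master=0.0))  # 0.0 N
print(reflected_force(x_master=0.12, x_slave=0.10, v_master=0.0))  # -4.0 N, pushes back
```

In free space the positions track and the operator feels nothing; against a rigid surface the slave stops, the error grows, and the reflected force rises, restoring the contact cue that the stiff mechanical tape-and-pulley linkages provided intrinsically.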

