Efficient navigation in virtual environments: A comparative study of two interaction techniques: The Magic Wand vs. the Human Joystick

Author(s):
Vassilis-Javed Khan
Marije Pekelharing
Nils Desle

Author(s):
Arda Tezcan
Debbie Richards

Multi-User Virtual Environments (MUVEs) have been found to be engaging and to provide an environment in which the elements of discovery, exploration and concept testing, fundamental to the field of science, can be experienced. Furthermore, MUVEs accommodate lifelike experiences with the benefit of the situated and distributed nature of cognition; they also provide virtual worlds that simulate conditions which are not feasible or practicable in the real world, making them highly relevant to many other fields of study, such as history, geography and foreign language learning. However, constructing MUVEs can be expensive and time-consuming, depending on the platform chosen. Providing the most appropriate platform, one that requires minimal effort, cost and time, would therefore make MUVE deployment in the classroom faster and more viable. In this chapter, the authors provide a comparative study of prominent existing MUVE platforms that can be used to identify the right balance of functionality, flexibility, effort and cost for a given educational and technical context. A number of metrics are identified, described and used to enable the comparison. Platforms are assessed against four main metric groups: communication and interaction, characters, features, and education. Communication and interaction metrics assess how communication and interaction are carried out within the examined platform. Character metrics measure avatar and agent affordances. Feature metrics compare what the platform offers in terms of technology. Lastly, education metrics identify the value of the platform for educational purposes.
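The four metric groups described above could be operationalized as a weighted score per platform, letting a given educational context emphasize the groups it cares about most. The sketch below is purely illustrative: the platform names, scores and weights are hypothetical examples, not values from the chapter.

```python
# Illustrative weighted scoring of MUVE platforms across the four
# metric groups (communication and interaction, characters, features,
# education). All names, scores (0-5) and weights are hypothetical.

GROUPS = ["communication", "characters", "features", "education"]

def weighted_score(scores, weights):
    """Combine per-group scores (0-5) into one weighted total."""
    total_weight = sum(weights[g] for g in GROUPS)
    return sum(scores[g] * weights[g] for g in GROUPS) / total_weight

# A classroom deployment might weight education and communication highest.
weights = {"communication": 3, "characters": 1, "features": 2, "education": 4}

platforms = {
    "PlatformA": {"communication": 4, "characters": 3, "features": 5, "education": 2},
    "PlatformB": {"communication": 3, "characters": 4, "features": 3, "education": 5},
}

# Rank platforms by weighted score, best first.
ranked = sorted(platforms,
                key=lambda p: weighted_score(platforms[p], weights),
                reverse=True)
```

Under these example weights, the education-strong platform ranks first even though the feature-rich one scores higher on technology alone.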


1993
Vol 17 (6)
pp. 655-661
Author(s):
Mauro Figueiredo
Klaus Böhm
José Teixeira

2012
Vol 21 (3)
pp. 321-337
Author(s):
Paul Richard
Mickael Naud
Francois-Xavier Inglese
Emmanuelle Richard

Virtual reality (VR) is a technology covering a large field of applications, among which are sports and video games. In both gaming and sporting VR applications, interaction techniques involve specific gestures such as catching or striking. However, such dynamic gestures are not currently recognized as elementary task primitives, and have therefore not been investigated as such. In this paper, we propose a framework for the analysis of interaction in dynamic virtual environments (DVEs). This framework is based on three dynamic interaction primitives (DIPs) that are common to many sporting activities: catching, throwing, and striking. For each of these primitives, an original modeling approach is proposed. Furthermore, we introduce and formalize the concept of dynamic virtual fixtures (DVFs). These fixtures aim to assist the user in tasks involving interaction with moving objects or with objects to be set in motion. Two experiments were carried out to investigate the influence of different DVFs on human performance in the context of ball catching and archery. The results reveal a significant positive effect of the DVFs and show that DVFs can be classified as either “performance-assisted” or “learning-assisted.”
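A dynamic virtual fixture of the "performance-assisted" kind can be pictured as a guidance force that draws the user's virtual hand toward the predicted interception point of a moving ball. The sketch below is a hypothetical illustration of that idea, not the paper's formulation: the ballistic prediction ignores drag, and the attraction gain is an invented parameter.

```python
# Hypothetical sketch of a "performance-assisted" dynamic virtual
# fixture (DVF) for ball catching: the virtual hand is gently pulled
# toward the ball's predicted interception point. The prediction model
# (drag-free ballistics) and the gain value are illustrative only.

def predict_ball_position(pos, vel, t, g=-9.81):
    """Drag-free ballistic prediction of ball position after t seconds."""
    x, y, z = pos
    vx, vy, vz = vel
    return (x + vx * t, y + vy * t + 0.5 * g * t * t, z + vz * t)

def apply_dvf(hand_pos, target_pos, gain=0.3):
    """Per-frame fixture update: move the hand a fraction of the way
    toward the fixture target (gain = 0 disables assistance)."""
    return tuple(h + gain * (t - h) for h, t in zip(hand_pos, target_pos))
```

Varying the gain between 0 and 1 would span the range from unassisted catching to a fully guided hand, which is the kind of manipulation an experiment on assistance levels could exploit.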


Author(s):  
Robert J. K. Jacob

The problem of human-computer interaction can be viewed as two powerful information processors (human and computer) attempting to communicate with each other via a narrow-bandwidth, highly constrained interface (Tufte, 1989). To address it, we seek faster, more natural, and more convenient means for users and computers to exchange information. The user’s side is constrained by the nature of human communication organs and abilities; the computer’s is constrained only by input/output devices and interaction techniques that we can invent. Current technology has been stronger in the computer-to-user direction than the user-to-computer, hence today’s user-computer dialogues are rather one-sided, with the bandwidth from the computer to the user far greater than that from user to computer. Using eye movements as a user-to-computer communication medium can help redress this imbalance. This chapter describes the relevant characteristics of the human eye, eye-tracking technology, how to design interaction techniques that incorporate eye movements into the user-computer dialogue in a convenient and natural way, and the relationship between eye-movement interfaces and virtual environments. As with other areas of research and design in human-computer interaction, it is helpful to build on the equipment and skills humans have acquired through evolution and experience and search for ways to apply them to communicating with a computer. Direct manipulation interfaces have enjoyed great success largely because they draw on analogies to existing human skills (pointing, grabbing, moving objects in space), rather than trained behaviors. Similarly, we try to make use of natural eye movements in designing interaction techniques for the eye. 
Because eye movements are so different from conventional computer inputs, our overall approach in designing interaction techniques is, wherever possible, to obtain information from a user’s natural eye movements while viewing the screen, rather than requiring the user to make specific trained eye movements to actuate the system. This requires careful attention to issues of human design, as will any successful work in virtual environments. The goal is for human-computer interaction to start with studies of the characteristics of human communication channels and skills and then develop devices, interaction techniques, and interfaces that communicate effectively to and from those channels.
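One classic technique built on natural eye movements, discussed in this line of work, is dwell-time selection: an object is selected once the gaze has rested on it continuously for a threshold duration, so no trained eye gesture is required. A minimal sketch, in which the threshold value and the gaze-sample format are illustrative assumptions:

```python
# Minimal sketch of dwell-time selection from a stream of gaze samples.
# Each sample is (timestamp_ms, object_id or None); the 150 ms threshold
# and this event format are illustrative, not from the chapter.

DWELL_THRESHOLD_MS = 150

def dwell_select(gaze_samples):
    """Return the first object fixated continuously for at least
    DWELL_THRESHOLD_MS, or None if no dwell completes."""
    current = None      # object currently under the gaze
    start_time = None   # when the gaze landed on it
    for t, obj in gaze_samples:
        if obj is not None and obj == current:
            if t - start_time >= DWELL_THRESHOLD_MS:
                return obj
        else:
            # Gaze moved to a new object (or off all objects): restart timing.
            current, start_time = obj, t
    return None
```

The threshold embodies the design tension the chapter describes: too short and every glance triggers a command (the "Midas touch" problem), too long and the interface feels sluggish.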


2008
Vol 4 (8)
pp. 15-26
Author(s):
Jose Mondejar-Jimenez
Juan-Antonio Mondejar-Jimenez
Manuel Vargas-Vargas
Maria-Leticia Meseguer-Santamaria

Castilla-La Mancha University has decided to implement two tools, WebCT and Moodle, from which its Virtual Campus has emerged: www.campusvirtual.ulcm.es. This paper is dedicated to the analysis of said tool as a primary mode of e-learning expansion in the university environment. It can be used to carry out standard university educational activities in accordance with the guidelines set out by the new European Space for Higher Education. New needs continue to present themselves, concerning not only the exchange of information and documents but also the complete and integrated management of teaching carried out using virtual environments and the Internet: e-learning.


Author(s):
Florian Klompmaker
Alexander Dridger
Karsten Nebe

Since 2010, when the Microsoft Kinect with its integrated depth-sensing camera appeared on the market, completely new kinds of interaction techniques have been integrated into console games. They require no instrumentation and no complicated calibration or time-consuming setup. Despite these benefits, some drawbacks remain. Most games let the user perform only very simple gestures such as waving, jumping or stooping, which does not match the user's natural behavior. In addition, depth-sensing technology lacks haptic feedback. We cannot solve the lack of haptic feedback, but we aim to improve whole-body interaction. This work focuses on whole-body interaction in immersive virtual environments. We present 3D interaction techniques that give the user a maximum of freedom and enable her to operate precisely and immersively in virtual environments. Furthermore, we present a user study in which we analyzed how navigation and manipulation techniques can be performed through users' body interaction, using a depth-sensing camera and a large projection screen. To this end, three alternative approaches were developed and tested: classical gamepad interaction, an indirect pointer-based interaction, and a more direct whole-body interaction technique. We compared their effectiveness and precision. It turned out that users act faster with the gamepad but at the same time generate significantly more errors. With the depth-sensing-based whole-body interaction techniques, it became apparent that the interaction is much more immersive, natural and intuitive, even if slower. We show the advantages of our approach and how it can serve users in various domains more effectively and efficiently.
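A direct whole-body navigation technique of the kind compared in such studies can be sketched as leaning-based travel: the torso position reported by the depth camera's skeleton tracking is compared against a calibrated neutral stance, and the horizontal offset drives travel velocity, with a dead zone to suppress sensor jitter. The dead-zone size and speed gain below are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of lean-based whole-body navigation: horizontal
# torso offset (x, z) from a calibrated neutral position is mapped to
# travel velocity. DEAD_ZONE and SPEED_GAIN are illustrative values.

DEAD_ZONE = 0.05   # metres of lean ignored as tracking noise
SPEED_GAIN = 2.0   # metres/second of travel per metre of lean

def lean_velocity(torso_xz, neutral_xz):
    """Map the torso's horizontal offset from the neutral stance to a
    per-axis travel velocity, zeroing offsets inside the dead zone."""
    out = []
    for t, n in zip(torso_xz, neutral_xz):
        offset = t - n
        out.append(0.0 if abs(offset) < DEAD_ZONE else SPEED_GAIN * offset)
    return tuple(out)
```

The dead zone matters precisely because of the error pattern the study reports: without it, the jitter of markerless skeleton tracking would translate into constant unintended drift, trading the gamepad's error profile for a different one.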

