Design and Evaluation of Interaction Techniques Dedicated to Integrate Encountered-Type Haptic Displays in Virtual Environments

Author(s):  
Victor Mercado ◽  
Maud Marchal ◽  
Anatole Lécuyer
1993 ◽  
Vol 17 (6) ◽  
pp. 655-661 ◽  
Author(s):  
Mauro Figueiredo ◽  
Klaus Böhm ◽  
José Teixeira

2012 ◽  
Vol 21 (3) ◽  
pp. 321-337 ◽  
Author(s):  
Paul Richard ◽  
Mickael Naud ◽  
Francois-Xavier Inglese ◽  
Emmanuelle Richard

Virtual reality (VR) is a technology covering a large field of applications, among which are sports and video games. In both gaming and sporting VR applications, interaction techniques involve specific gestures such as catching or striking. However, such dynamic gestures are not currently recognized as elementary task primitives and have therefore not been investigated as such. In this paper, we propose a framework for the analysis of interaction in dynamic virtual environments (DVEs). This framework is based on three dynamic interaction primitives (DIPs) that are common to many sporting activities: catching, throwing, and striking. For each of these primitives, an original modeling approach is proposed. Furthermore, we introduce and formalize the concept of dynamic virtual fixtures (DVFs), which aim to assist the user in tasks involving interaction with moving objects or with objects to be set in motion. Two experiments were carried out to investigate the influence of different DVFs on human performance in the context of ball catching and archery. The results reveal a significant positive effect of the DVFs and show that DVFs can be classified as either “performance-assisted” or “learning-assisted.”
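The core idea of a “performance-assisted” fixture acting on a moving object can be illustrated with a small sketch. The Python fragment below is only an illustrative example, not the authors' model: it assumes a simple ballistic ball prediction and a spring-like guidance force, and the prediction horizon, gain, and function names are hypothetical.

```python
# Illustrative sketch only: a "performance-assisting" dynamic virtual fixture for
# ball catching, assuming a ballistic ball model and a spring-like guidance force.
# The prediction horizon, gain, and function names are hypothetical, not the paper's model.
import numpy as np

GRAVITY = np.array([0.0, -9.81, 0.0])

def predict_ball_position(pos, vel, t):
    """Ballistic prediction of where the ball will be t seconds from now."""
    return pos + vel * t + 0.5 * GRAVITY * t ** 2

def guidance_force(hand_pos, ball_pos, ball_vel, horizon=0.3, gain=40.0):
    """Spring-like force pulling the virtual hand toward the predicted
    interception point, i.e. a fixture acting on a moving object."""
    target = predict_ball_position(ball_pos, ball_vel, horizon)
    return gain * (target - hand_pos)

# Example: the hand is slightly below and behind the predicted catch point
hand = np.array([0.0, 1.2, 0.5])
ball = np.array([0.0, 1.8, 2.0])
vel = np.array([0.0, 1.0, -5.0])
print(guidance_force(hand, ball, vel))   # force vector nudging the hand toward the ball
```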


Author(s):  
Robert J. K. Jacob

The problem of human-computer interaction can be viewed as two powerful information processors (human and computer) attempting to communicate with each other via a narrow-bandwidth, highly constrained interface (Tufte, 1989). To address it, we seek faster, more natural, and more convenient means for users and computers to exchange information. The user’s side is constrained by the nature of human communication organs and abilities; the computer’s is constrained only by input/output devices and interaction techniques that we can invent. Current technology has been stronger in the computer-to-user direction than the user-to-computer, hence today’s user-computer dialogues are rather one-sided, with the bandwidth from the computer to the user far greater than that from user to computer. Using eye movements as a user-to-computer communication medium can help redress this imbalance. This chapter describes the relevant characteristics of the human eye, eye-tracking technology, how to design interaction techniques that incorporate eye movements into the user-computer dialogue in a convenient and natural way, and the relationship between eye-movement interfaces and virtual environments. As with other areas of research and design in human-computer interaction, it is helpful to build on the equipment and skills humans have acquired through evolution and experience and search for ways to apply them to communicating with a computer. Direct manipulation interfaces have enjoyed great success largely because they draw on analogies to existing human skills (pointing, grabbing, moving objects in space), rather than trained behaviors. Similarly, we try to make use of natural eye movements in designing interaction techniques for the eye. Because eye movements are so different from conventional computer inputs, our overall approach in designing interaction techniques is, wherever possible, to obtain information from a user’s natural eye movements while viewing the screen, rather than requiring the user to make specific trained eye movements to actuate the system. This requires careful attention to issues of human design, as will any successful work in virtual environments. The goal is for human-computer interaction to start with studies of the characteristics of human communication channels and skills and then develop devices, interaction techniques, and interfaces that communicate effectively to and from those channels.
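One common way to turn natural eye movements into a selection command without requiring trained eye gestures is dwell-time selection: a target is chosen after the gaze rests on it for a short interval. The sketch below is a minimal, hypothetical illustration under assumed values, not the chapter's actual system; the 150 ms dwell threshold, the jitter radius, and the GazeSample/DwellSelector names are assumptions.

```python
# Minimal, hypothetical sketch of dwell-based gaze selection.
# Dwell time, jitter radius, and class names are assumptions.
from dataclasses import dataclass

DWELL_TIME = 0.15      # seconds of stable fixation before a target is selected (assumed)
JITTER_RADIUS = 30.0   # pixels of tolerance around a target to absorb eye jitter (assumed)

@dataclass
class GazeSample:
    x: float   # gaze point on screen, pixels
    y: float
    t: float   # timestamp, seconds

class DwellSelector:
    def __init__(self, targets):
        self.targets = targets   # {name: (x, y)} screen positions of selectable items
        self.current = None      # target currently under the gaze
        self.since = None        # time the current fixation started

    def update(self, s: GazeSample):
        """Feed one gaze sample; return a target name when a dwell completes."""
        hit = next((name for name, (tx, ty) in self.targets.items()
                    if (s.x - tx) ** 2 + (s.y - ty) ** 2 <= JITTER_RADIUS ** 2), None)
        if hit != self.current:              # gaze moved to a new target (or away)
            self.current, self.since = hit, s.t
            return None
        if hit is not None and s.t - self.since >= DWELL_TIME:
            self.since = s.t                 # restart the dwell so a continued fixation
            return hit                       # only re-selects after another full dwell
        return None

# Example: two on-screen targets and a short fixation near "menu"
selector = DwellSelector({"menu": (100, 100), "close": (500, 100)})
for t in (0.00, 0.05, 0.10, 0.16):
    result = selector.update(GazeSample(102, 98, t))
    if result:
        print("selected:", result)   # fires once the fixation has lasted DWELL_TIME
```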


Author(s):  
Florian Klompmaker ◽  
Alexander Dridger ◽  
Karsten Nebe

Since 2010, when the Microsoft Kinect with its integrated depth-sensing camera appeared on the market, completely new kinds of interaction techniques have been integrated into console games. They require no instrumentation of the user and no complicated calibration or time-consuming setup. Despite these benefits, some drawbacks remain: most games only let the user perform very simple gestures such as waving, jumping, or stooping, which does not correspond to the user's natural behavior, and depth-sensing technology provides no haptic feedback. We cannot solve the lack of haptic feedback, but we want to improve whole-body interaction. This work therefore focuses on whole-body interaction in immersive virtual environments. We present 3D interaction techniques that give the user a maximum of freedom and enable precise and immersive operation in virtual environments. Furthermore, we present a user study in which we analyzed how navigation and manipulation techniques can be performed through users' body movements, using a depth-sensing camera and a large projection screen. Three alternative approaches were developed and tested: classical gamepad interaction, an indirect pointer-based interaction, and a more direct whole-body interaction technique. We compared their effectiveness and precision. It turned out that users act faster with the gamepad but at the same time make significantly more errors. With the depth-sensing-based whole-body interaction techniques, the interaction proved to be much more immersive, natural, and intuitive, even if slower. We show the advantages of our approach and how it can be used in various domains more effectively and efficiently for their users.
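As an illustration of the kind of indirect, pointer-based mapping a depth camera enables, the following sketch scales a tracked hand joint, relative to the shoulder, onto a large projection screen. It is a hypothetical example under assumed joint layout, screen size, and reach values, not the study's implementation.

```python
# Hypothetical sketch of an indirect, pointer-based mapping from a depth camera:
# the hand position relative to the shoulder is scaled onto a large projection
# screen. Joint layout, screen size, and reach values are assumptions, not the
# study's implementation.
def hand_to_cursor(hand, shoulder, screen_w=4096, screen_h=2160,
                   reach_x=0.6, reach_y=0.4):
    """Map a 3D hand joint (metres, camera space) to 2D screen coordinates.

    hand, shoulder: (x, y, z) tuples from the skeleton tracker.
    reach_x, reach_y: comfortable arm range (metres) mapped onto the full screen.
    """
    dx = (hand[0] - shoulder[0]) / reach_x          # -1 .. 1 across the horizontal reach
    dy = (hand[1] - shoulder[1]) / reach_y          # -1 .. 1 across the vertical reach
    u = min(max((dx + 1.0) / 2.0, 0.0), 1.0)        # clamp to the screen
    v = min(max(1.0 - (dy + 1.0) / 2.0, 0.0), 1.0)  # screen y grows downwards
    return u * screen_w, v * screen_h

# Example: right hand 30 cm to the right of and level with the shoulder
print(hand_to_cursor((0.5, 1.3, 2.0), (0.2, 1.3, 2.1)))   # cursor right of centre, mid-height
```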


2016 ◽  
Vol 78 (12-3) ◽  
Author(s):  
Arief Hydayat ◽  
Haslina Arshad ◽  
Nazlena Mohamad Ali ◽  
Lam Meng Chun

In a 3D user interface, interaction plays an important role in helping users manipulate 3D objects in virtual environments. 3D devices, such as data gloves and motion trackers, can give users the opportunity to manipulate 3D objects in virtual reality environments, for example checking, zooming, translating, rotating, merging, and splitting 3D objects, in a more natural and easy manner through hand gestures. Hand gestures are often applied in 3D interaction techniques for switching the manipulation mode. This paper discusses an interaction technique in a virtual environment that combines push-and-pull navigation with a rotation technique. The unimanual use of these 3D interaction techniques can improve the effectiveness of users' interaction with and manipulation of 3D objects. This study enhances the capability of the unimanual 3D interaction technique in terms of 3D interaction feedback in virtual environments.
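To make the combination of push-and-pull navigation and rotation concrete, the sketch below maps per-frame changes in a tracked hand's depth and yaw to viewpoint translation and object rotation. The HandPose fields, dead zone, and gains are illustrative assumptions rather than the technique described in the paper.

```python
# Hypothetical sketch of unimanual push-and-pull navigation combined with a
# rotation gesture, assuming one tracked hand pose (depth + yaw) per frame.
# The HandPose fields, dead zone, and gains are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class HandPose:
    z: float      # hand depth along the viewing axis, metres
    yaw: float    # hand rotation about the vertical axis, radians

DEAD_ZONE = 0.05   # metres of hand drift to ignore before navigating
PUSH_GAIN = 2.0    # camera metres per metre of hand displacement
ROT_GAIN = 1.5     # object radians per radian of hand rotation

def update_view_and_object(prev: HandPose, cur: HandPose,
                           camera_z: float, object_yaw: float):
    """Push the hand forward to move the viewpoint in, pull it back to move out;
    twist the hand to rotate the selected object."""
    dz = cur.z - prev.z
    if abs(dz) > DEAD_ZONE:                          # push/pull navigation
        camera_z += PUSH_GAIN * dz
    object_yaw += ROT_GAIN * (cur.yaw - prev.yaw)    # rotation technique
    return camera_z, object_yaw

# Example: the hand is pulled 10 cm back and twisted 0.2 rad in one frame
print(update_view_and_object(HandPose(0.60, 0.0), HandPose(0.50, 0.2), 0.0, 0.0))
```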

