Force and Contact Location Shading Methods for Use Within Two- and Three-Dimensional Polygonal Environments

2011 ◽ Vol 20 (6) ◽ pp. 505-528
Author(s):  
Andrew J. Doxon ◽  
David E. Johnson ◽  
Hong Z. Tan ◽  
William R. Provancher

Current state-of-the-art haptic interfaces provide only kinesthetic (force) feedback, yet studies have shown that providing tactile feedback in concert with kinesthetic information can dramatically improve a person's ability to dexterously interact with and explore virtual environments. In this research, tactile feedback was provided by a device called a contact location display (CLD), which renders the center of contact to the user. The chief goal of the present work was to develop algorithms that allow the CLD to be used with polygonal geometric models without the resulting contact location feedback being overwhelmed by the perception of polygonal edges and vertices. Two haptic shading algorithms were developed to address this issue and successfully extend the use of the CLD to 2D and 3D polygonal environments. Two experiments were run to evaluate these haptic shading algorithms. The first measured perception thresholds for rendering faceted objects as smooth objects. It was found that the addition of contact location feedback significantly increased user sensitivity to edges, and that the shading algorithms significantly reduced the number of polygons needed for objects to feel smooth. The second experiment explored the CLD device's ability to facilitate exploration and shape recognition within a 3D environment. While this experiment validated our 3D algorithm, as participants were able to identify the rendered objects with reasonable accuracy, it also underscored the need for improvements in the CLD device design before it can be used effectively in general 3D environments.
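The shading algorithms themselves are specific to the CLD, but the underlying idea parallels haptic force shading: interpolating vertex normals across each facet so the rendered contact direction varies smoothly instead of jumping at polygon edges. A minimal sketch of that interpolation (not the authors' algorithm; the mesh data and normals below are illustrative) in Python:

```python
# Sketch of normal interpolation for haptic force shading: the normal used for
# rendering at a contact point is the barycentric blend of the triangle's
# vertex normals, so the force/contact direction varies smoothly across facets.
import numpy as np

def barycentric_coords(p, a, b, c):
    """Barycentric coordinates of point p in triangle (a, b, c)."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return 1.0 - v - w, v, w

def shaded_normal(p, verts, vertex_normals):
    """Smoothly interpolated surface normal at contact point p on a triangle."""
    u, v, w = barycentric_coords(p, *verts)
    n = u * vertex_normals[0] + v * vertex_normals[1] + w * vertex_normals[2]
    return n / np.linalg.norm(n)

# Example: contact at the centroid of a single triangle with slightly tilted
# vertex normals (illustrative values only).
verts = [np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
normals = [np.array([0.0, 0.0, 1.0]), np.array([0.3, 0.0, 1.0]), np.array([0.0, 0.3, 1.0])]
normals = [n / np.linalg.norm(n) for n in normals]
p = sum(verts) / 3.0
print(shaded_normal(p, verts, normals))
```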

2008 ◽ pp. 530-554
Author(s):  
Christos Bouras ◽  
Eleftheria Giannaka ◽  
Maria Nani ◽  
Alexandros Panagopoulos ◽  
Thrasyvoulos Tsiatsos

In this chapter, we present the design and implementation of an integrated platform for Educational Virtual Environments. This platform aims to support an educational community, synchronous online courses in multi-user three-dimensional (3D) environments, and the creation and access of asynchronous courses through a learning content management system. In order to offer synchronous courses, we have implemented a system called EVE-II, which supports stable event sharing for multi-user 3D places, easy creation of multi-user 3D places, H.323-based voice-over-IP services fully integrated in a 3D space, as well as many concurrent 3D multi-user spaces.
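The abstract does not detail EVE-II's event-sharing mechanism, but a minimal sketch of the general fan-out pattern behind shared events in a multi-user 3D place (names and data here are illustrative, not EVE-II's API) might look like:

```python
# Each client's event (e.g. an avatar moving) is timestamped and rebroadcast to
# every other client connected to the same place, so all views stay consistent.
import time
from collections import defaultdict

places = defaultdict(list)          # place id -> list of connected client queues

def join(place_id, client_queue):
    places[place_id].append(client_queue)

def share_event(place_id, sender_queue, event):
    stamped = {"t": time.time(), **event}
    for q in places[place_id]:
        if q is not sender_queue:   # the sender already applied it locally
            q.append(stamped)

alice, bob = [], []
join("lecture-hall", alice)
join("lecture-hall", bob)
share_event("lecture-hall", alice, {"avatar": "alice", "pos": (1.0, 0.0, 2.0)})
print(bob)                          # bob's queue now holds alice's move event
```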


Author(s):  
Adam S. Coutee ◽  
Bert Bras

Virtual reality allows users to visualize and interact with a three-dimensional world in a computer-generated environment. Haptic technology has enhanced these environments by adding the sense of touch through force and tactile feedback devices. In the engineering domain, these devices have been applied in many areas, including product design. We have developed a real-time simulation test bed to assess the usefulness of haptic technology for assembly and disassembly planning. In this paper, we present a study conducted to characterize the perception of weight in this virtual environment. Specifically, the experiments test a user's ability to distinguish weight differences between two objects in real and virtual environments. This paper describes the experiments conducted and an analysis of the results.
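As a rough illustration of how weight can be conveyed in such a test bed (not the authors' implementation; the HapticDevice class below is a stand-in for a real device driver), the haptic servo loop simply commands a downward force proportional to the grasped object's mass:

```python
# Conveying an object's weight through a force-feedback device: while the
# object is grasped, command a constant downward force equal to m * g.
GRAVITY = 9.81  # m/s^2

class HapticDevice:
    """Stand-in for a real force-feedback device driver."""
    def set_force(self, fx, fy, fz):
        print(f"commanded force: ({fx:.2f}, {fy:.2f}, {fz:.2f}) N")

def render_weight(device, mass_kg, grasped):
    """One servo-loop iteration (typically run at ~1 kHz)."""
    fy = -mass_kg * GRAVITY if grasped else 0.0   # weight acts straight down
    device.set_force(0.0, fy, 0.0)

device = HapticDevice()
render_weight(device, mass_kg=0.5, grasped=True)   # ~4.9 N downward
render_weight(device, mass_kg=0.5, grasped=False)  # no force when released
```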


2016 ◽ Vol 213 (3) ◽ pp. 355-369
Author(s):  
Paulina S. Mrozowska ◽  
Mitsunori Fukuda

MDCK II cells, a widely used model of polarized epithelia, develop into different structures depending on culture conditions: two-dimensional (2D) monolayers when grown on synthetic supports or three-dimensional (3D) cysts when surrounded by an extracellular matrix. The establishment of epithelial polarity is accompanied by transcytosis of the apical marker podocalyxin from the outer plasma membrane to the newly formed apical domain, but its exact route and regulation remain poorly understood. Here, through comprehensive colocalization and knockdown screenings, we identified the Rab GTPases mediating podocalyxin transcytosis and showed that different sets of Rabs coordinate its transport during cell polarization in 2D and 3D structures. Moreover, we demonstrated that different Rab35 effectors regulate podocalyxin trafficking in 2D and 3D environments; trafficking is mediated by OCRL in 2D monolayers and ACAP2 in 3D cysts. Our results give substantial insight into regulation of the transcytosis of this apical marker and highlight differences between trafficking mechanisms in 2D and 3D cell cultures.


2006 ◽ Vol 5-6 ◽ pp. 55-62
Author(s):  
I.A. Jones ◽  
A.A. Becker ◽  
A.T. Glover ◽  
P. Wang ◽  
S.D. Benford ◽  
...  

Boundary element (BE) analysis is well known as a tool for assessing the stiffness and strength of engineering components. Along with finite element (FE) techniques, it is also finding new applications as a means of simulating the behaviour of deformable objects within virtual reality simulations, since it exploits precisely the same surface-only definition used for visual rendering of three-dimensional solid objects. This paper briefly reviews existing applications of BE and FE within virtual reality and describes recent work on the BE-based simulation of aspects of surgical operations on the brain, making use of commercial hand-held force-feedback interfaces (haptic devices) to measure the positions of the virtual surgical tools and provide tactile feedback to the user. The paper presents an overview of the project and then concentrates on recent developments, including the incorporation of simulated tumours in the virtual brain.
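A heavily simplified sketch of the kind of coupling involved (not the project's solver): a precomputed boundary-element compliance matrix maps contact forces at surface nodes to displacements of those nodes, while a penalty term converts tool penetration into the reaction force fed back through the haptic device. All values below are illustrative:

```python
import numpy as np

# Precomputed compliance matrix (m/N) for 3 surface nodes; illustrative values.
C = np.array([[2.0, 0.5, 0.2],
              [0.5, 1.8, 0.4],
              [0.2, 0.4, 2.2]]) * 1e-4

def deform(contact_node, force_newtons):
    """Nodal displacements produced by a point force at one surface node (u = C f)."""
    f = np.zeros(C.shape[0])
    f[contact_node] = force_newtons
    return C @ f

def tool_force(penetration_m, stiffness=800.0):
    """Penalty-style coupling: force fed back to the haptic tool grows with penetration."""
    return stiffness * max(penetration_m, 0.0)

f = tool_force(0.002)            # 2 mm penetration of the virtual tool
print("tool force (N):", f)
print("node displacements (m):", deform(contact_node=1, force_newtons=f))
```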


Author(s):  
Christos Bouras ◽  
Eleftheria Giannaka ◽  
Maria Nani ◽  
Alexandros Panagopoulos

In this chapter, we present the design and implementation of an integrated platform for Educational Virtual Environments. This platform aims to support an educational community, synchronous online courses in multi-user three-dimensional (3D) environments, and the creation and access of asynchronous courses through a learning content management system. In order to offer synchronous courses, we have implemented a system called EVE-II, which supports stable event sharing for multi-user 3D places, easy creation of multi-user 3D places, H.323-based voice-over-IP services fully integrated in a 3D space, as well as many concurrent 3D multi-user spaces.


Author(s):  
Thomas A. Furness III ◽  
Woodrow Barfield

We understand from the anthropologists that almost from the beginning of our species we have been tool builders. Most of these tools have been associated with the manipulation of matter. With these tools we have learned to organize or reorganize and arrange the elements for our comfort, safety, and entertainment. More recently, the advent of the computer has given us a new kind of tool. Instead of manipulating matter, the computer allows us to manipulate symbols. Typically, these symbols represent language or other abstractions such as mathematics, physics, or graphical images. These symbols allow us to operate at a different conscious level, providing a mechanism to communicate ideas as well as to organize and plan the manipulation of matter that will be accomplished by other tools. However, a problem with the current technology that we use to manipulate symbols is the interface between the human and the computer. That is, the means by which we interact with the computer and receive feedback that our actions, thoughts, and desires are recognized and acted upon. Another problem with current computing systems is the format with which they display information. Typically, the computer, via a display monitor, only allows a limited two-dimensional view of the three-dimensional world we live in. For example, when using a computer to design a three-dimensional building, what we see and interact with is often only a two-dimensional representation of the building, or at most a so-called 2½D perspective view. Furthermore, unlike the sounds in the real world, which stimulate us from all directions and distances, the sounds emanating from a computer originate from a stationary speaker. When it comes to touch, with the exception of a touch screen or the tactile feedback provided by pressing a key or mouse button (limited haptic feedback, to be sure), the tools we use to manipulate symbols are primitive at best. This book is about a new and better way to interact with and manipulate symbols. These are the technologies associated with virtual environments and what we term advanced interfaces. In fact, the development of virtual environment technologies for interacting with and manipulating symbols may represent the next step in the evolution of tools.


Author(s):  
Christos Bouras ◽  
Eri Giannaka ◽  
Thrasyvoulos Tsiatsos

The main goal of this chapter is to facilitate educational designers and developers by providing a point of reference for deciding how to incorporate 3D environments into the applications they develop, as well as for extending their capabilities by integrating more functionality. Therefore, this chapter presents the design principles for virtual spaces that aim to support multi-user communication in web-based learning communities. In addition, the implementation of these principles is presented using the EVE Training Area as a point of reference. This environment constitutes a three-dimensional space where participants, represented by 3D humanoid avatars, can use a variety of 3D e-collaboration tools for learning together. Furthermore, this chapter presents how these principles could be used as criteria for validating and extending ready-made Web 2.0 immersive worlds to support collaborative e-learning. Finally, collaborative e-learning usage scenarios that could be realized by exploiting collaborative virtual environments are described.


Author(s):  
Andreas M. Kunz ◽  
Adrian Burri

Abstract Virtual Reality is becoming more and more important within the product development process. It enables the engineer to recognize constraints or mistakes in the product design at a very early stage by viewing the digital geometric prototype. Besides viewing the design of a product, additional functionalities such as assembly simulation, the physically correct behavior of a machine, or the machine control are coming into the focus of interest. The interaction modality of haptic feedback is therefore gaining importance for simulation tasks in virtual environments. However, there are only a few portable haptic interfaces with which the user can experience the sensation of force feedback in a natural way. The scope of this paper is to present a new passive haptic interface that is lightweight and easy to use. Furthermore, it has no workspace constraints and applies high forces to the user's fingertips by blocking the natural grasping motion.
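A minimal sketch of the control rule that such a passive, brake-based grasping interface implies (not the authors' controller; names and thresholds are illustrative): the brake cannot push back, so it simply engages when the fingertips reach the virtual object's surface and are still closing, and releases otherwise:

```python
# Passive haptic grasping: the brake only resists finger closure once the
# virtual object's surface has been reached; it never generates active force.
def brake_command(finger_gap_m, object_width_m, closing_velocity_m_s):
    """Return True if the brake should engage on this control cycle."""
    at_surface = finger_gap_m <= object_width_m
    closing = closing_velocity_m_s < 0.0       # gap is decreasing
    return at_surface and closing              # resist only further closing

print(brake_command(0.030, 0.032, -0.01))  # touching and closing -> engage
print(brake_command(0.030, 0.032, +0.01))  # opening again -> release
```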


Author(s):  
Sergei A. Volkov ◽  
Judy M. Vance

Abstract Virtual Reality techniques provide a unique new way to interact with three-dimensional digital objects. Virtual prototyping refers to the use of virtual reality to obtain evaluations of designs while they are still in digital form before physical prototypes are built. While the current state-of-the-art in virtual reality relies mainly on the use of stereo viewing and auditory feedback, commercial haptic devices have recently become available that can be integrated into the virtual environment to provide force feedback to the user. This paper outlines a study that was performed to determine whether the addition of force feedback to the virtual prototyping task improved the ability of the participants to make design decisions. The specific task involved comparing the location and movement of two virtual parking brakes located in the virtual cockpit of an automobile. The paper describes the purpose, methods and results of the study.


2019 ◽ Vol 6 (1) ◽ pp. 93-116
Author(s):  
Laurence Hanes ◽  
Robert Stone

Virtual environments are an important aspect of serious games for heritage. However, navigable three-dimensional (3D) environments can be costly and resource-intensive to create and for users to run. In this paper, we propose an alternative approach using “constrained virtual environments”, which present an environment through a series of reduced-fidelity two-dimensional (2D) scenes without exhaustive detail. We describe the development of a constrained virtual environment that replicates a 3D environment from a serious game concerning ancient Mesopotamian history. An exploratory experiment found that participants experienced a similar sense of presence in the constrained environment to that of the 3D environment and rated the two games to be of similar quality. Participants were equally likely to pursue further information on the subject matter afterwards and collected more information tokens from within the constrained environment. A subsequent interview with a museum expert explored opportunities for such games to be implemented in museum displays, and, based on the experiences and issues encountered, a preliminary set of guidelines was compiled for implementing future constrained virtual environments within serious games for heritage.

