ASME 2010 World Conference on Innovative Virtual Reality
Latest Publications


TOTAL DOCUMENTS: 39 (FIVE YEARS 0)

H-INDEX: 4 (FIVE YEARS 0)

Published By ASMEDC

ISBN: 9780791849088, 9780791838693

Author(s): Jonathan Becker, Aveek Purohit, Zheng Sun

The USARSim group at NIST developed a simulated robot that operated in the Unreal Tournament 3 (UT3) gaming environment. They used a software PID controller to control the robot in UT3 worlds. Unfortunately, the PID controller did not work well, so NIST asked us to develop a better controller using machine learning techniques. In the process, we characterized the software PID controller and the robot’s behavior in UT3 worlds. Using data collected from our simulations, we compared different machine learning techniques, including linear regression and reinforcement learning (RL). Finally, we implemented an RL-based controller in Matlab and ran it in the UT3 environment via a TCP/IP link between Matlab and UT3.
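The abstract does not give the paper's RL formulation, so the following is only a minimal sketch of the kind of controller it describes: a tabular Q-learning update for a steering task, with a hypothetical discretization of heading error into states and steering commands into actions. All names and numbers here are illustrative assumptions, not the authors' implementation.

```python
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# Hypothetical discretization: heading error binned into three states,
# three steering actions available in each state.
states = ["left", "center", "right"]
actions = ["steer_left", "steer_straight", "steer_right"]
Q = {s: {a: 0.0 for a in actions} for s in states}

# One illustrative update: robot had drifted left, steering right
# brought it back to center and earned reward +1.
q_update(Q, "left", "steer_right", reward=1.0, next_state="center")
print(Q["left"]["steer_right"])  # 0.1 after a single step from a zero-initialized table
```

In the paper's setting, the state and reward would be read from UT3 over the TCP/IP link each control cycle, and the chosen action sent back the same way.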


Author(s): Bin Chen, John Moreland

Magnetic resonance diffusion tensor imaging (DTI) is sensitive to the anisotropic diffusion of water exerted by its macromolecular environment and has been shown useful in characterizing structures of ordered tissues such as brain white matter, myocardium, and cartilage. Water diffusivity inside biological tissues is characterized by the diffusion tensor, a rank-2 symmetric 3×3 matrix with six independent elements. The diffusion tensor carries rich information about diffusion anisotropy. However, it is difficult to perceive the characteristics of diffusion tensors by looking at the tensor elements, even with the aid of traditional three-dimensional visualization techniques. There is a need to explore the important characteristics of diffusion tensors in a straightforward and quantitative way. In this study, a virtual reality (VR) based MR DTI visualization is proposed that combines high-resolution anatomical image segmentation and registration, ROI definition, neuronal white matter fiber tractography visualization, and fMRI activation map integration. The VR application will utilize brain image visualization techniques including surface, volume, streamline, and streamtube rendering, and will use head tracking and a wand for navigation and interaction. The application will allow the user to switch between different modalities and visualization techniques, as well as make point-and-choose queries. The main purpose of the application is to support basic research and clinical applications with quantitative and accurate measurements of the diffusivity or the degree of anisotropy derived from the diffusion tensor.
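A common scalar anisotropy measure derived from the diffusion tensor the abstract mentions is fractional anisotropy (FA), computed from the tensor's three eigenvalues. As an illustration (not the paper's own code), a minimal implementation of the standard FA formula:

```python
import math

def fractional_anisotropy(l1, l2, l3):
    """FA from the eigenvalues of the diffusion tensor:
    FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||, in [0, 1]."""
    mean = (l1 + l2 + l3) / 3.0
    num = math.sqrt((l1 - mean) ** 2 + (l2 - mean) ** 2 + (l3 - mean) ** 2)
    den = math.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    if den == 0.0:
        return 0.0
    return math.sqrt(1.5) * num / den

print(fractional_anisotropy(1.0, 1.0, 1.0))             # 0.0: isotropic diffusion
print(fractional_anisotropy(1.7e-3, 0.2e-3, 0.2e-3))    # near 1: strongly directional, fiber-like
```

FA ranges from 0 (isotropic, e.g. free water) to 1 (diffusion along a single direction), which is why it is widely used to color white matter tractography of the kind the application renders.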


Author(s): Eliab Z. Opiyo

Flat-screen displays such as CRT displays, liquid crystal displays, and plasma displays are predominantly used for visualization of product models in computer-aided design (CAD) processes. However, future platforms for product model visualization are expected to include 3D displays as well. It can be expected that different types of display systems, each offering different visualization capabilities, will complement the traditional flat-screen visual display units. Among the 3D display systems with the greatest potential for product model visualization are holographic volumetric displays. One of the most appealing characteristics of these displays is that they generate images with a spatial representation that appear to pop out of the flat screen. This allows multiple viewers to see 3D images or scenes from different perspectives. One of the main shortcomings of these displays, however, is that they lack suitable interfaces for interactive visualization. The work reported in this paper focused on this problem and is part of a larger research effort whose aim is to develop suitable interfaces for interactive viewing of holographic virtual models. The emphasis in this work was specifically on the exploration of possible interaction styles and the creation of a suitable interaction framework. The proposed framework consists of three interface methods: an intermediary graphical user interface (IGUI), designed to be used via a flat-screen display with standard input devices; a gestural/hand-motion interface; and a haptic interface. Preliminary tests have shown that the IGUI helps viewers rotate, scale, and navigate virtual models in 3D scenes quickly and conveniently. On the other hand, these tests have shown that tasks such as selecting or moving virtual models in 3D scenes are not sufficiently supported by the IGUI, and that complementary interfaces may enable viewers to interact with models more effectively and intuitively.


Author(s): Adam J. Faeth, Chris Harding

This research describes a theoretical framework for designing multimodal feedback for 3D buttons in a virtual environment. Virtual button implementations often suffer from inadequate feedback compared to their mechanical, real-world counterparts. This lack of feedback can lead to accidental button actuations and reduce the user’s ability to discover how to interact with the virtual button. We propose a framework for more expressive virtual button feedback that communicates visual, audio, and haptic feedback to the user. We apply the theoretical framework by implementing a software library prototype to support multimodal feedback from virtual buttons in a 3D virtual reality workspace.
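The paper's library is not published in this abstract, so the sketch below only illustrates the general pattern such a framework implies, under assumed names: a virtual button tracks its press state from finger penetration depth and broadcasts each state transition to visual, audio, and haptic feedback channels registered with it.

```python
class VirtualButton:
    """Toy multimodal-feedback button (hypothetical API, not the paper's library)."""

    def __init__(self, press_depth=0.5):
        self.press_depth = press_depth   # penetration depth at which the button actuates
        self.pressed = False
        self.channels = []               # feedback callbacks: visual, audio, haptic

    def add_feedback(self, channel):
        self.channels.append(channel)

    def update(self, finger_depth):
        """Call each frame with the finger's penetration depth into the button."""
        if not self.pressed and finger_depth >= self.press_depth:
            self.pressed = True
            self._emit("press")
        elif self.pressed and finger_depth < self.press_depth:
            self.pressed = False
            self._emit("release")

    def _emit(self, event):
        for channel in self.channels:
            channel(event)

log = []
button = VirtualButton()
button.add_feedback(lambda e: log.append(("visual", e)))   # e.g. highlight the button
button.add_feedback(lambda e: log.append(("audio", e)))    # e.g. play a click sound
button.add_feedback(lambda e: log.append(("haptic", e)))   # e.g. render a force pulse
button.update(0.6)   # finger pushes past the actuation depth -> "press" on all channels
button.update(0.1)   # finger withdraws -> "release" on all channels
print(log)
```

Routing every state transition through all registered modalities is one way to deliver the redundant press/release cues that mechanical buttons give for free.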


Author(s): Eve S. Wurtele, Diane C. Bassham, Julie Dickerson, David J. Kabala, William Schneller, ...

Knowledge of cellular structure and function has increased dramatically with the advent of modern molecular and computational technologies. Helping students to understand cellular dynamics is a major challenge for educators. To address this challenge, we have developed the Kabala Engine, an open source engine based on OpenSG (http://www.opensg.org) and VRJuggler (http://www.vrjuggler.org). This engine is designed to enable biologists, and indeed any domain expert — chemists, artists, psychologists — to create virtual interactive worlds for teaching or research. As a proof of concept, we have used this engine to create Meta!Blast, a virtual plant cell containing a prototype chloroplast that students can enter to activate the light reactions, including electron excitation, and create molecular oxygen and ATP.


Author(s): Matthew Swanson, Eric Johnson, Alexander Stoytchev

This paper describes a method for non-destructive evaluation of weld quality from 3D point data. The method uses a stereo camera system to capture high-resolution 3D images of deposited welds, which are then processed to extract key parameters of the welds. These parameters (the weld angle and the radius of the weld at the weld toe) can in turn be used to estimate the stress concentration factor of the weld and thus to infer its quality. The method is intended for quality control applications in manufacturing environments and aims to supplement, and eventually replace, the manual inspections that are currently the predominant inspection method. Experimental results for T-fillet welds are reported.
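The abstract does not give the authors' fitting procedure, but extracting a toe radius from point data is commonly done by least-squares circle fitting on a 2D cross-section of the profile. As an illustration only, a self-contained Kåsa circle fit (the linearized least-squares formulation), solved with Cramer's rule:

```python
import math

def fit_circle(points):
    """Least-squares (Kasa) circle fit: minimize sum of
    (x^2 + y^2 - 2*a*x - 2*b*y - c)^2 over (a, b, c);
    center is (a, b), radius is sqrt(c + a^2 + b^2)."""
    # Normal equations A^T A w = A^T z for rows [2x, 2y, 1] and target z = x^2 + y^2.
    sxx = sxy = syy = sx = sy = szx = szy = sz = 0.0
    n = len(points)
    for x, y in points:
        z = x * x + y * y
        sxx += 4 * x * x; sxy += 4 * x * y; syy += 4 * y * y
        sx += 2 * x; sy += 2 * y
        szx += 2 * x * z; szy += 2 * y * z; sz += z
    M = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    v = [szx, szy, sz]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(M)
    sol = []
    for i in range(3):          # Cramer's rule: replace column i with v
        Mi = [row[:] for row in M]
        for r in range(3):
            Mi[r][i] = v[r]
        sol.append(det3(Mi) / d)
    a, b, c = sol
    return (a, b), math.sqrt(c + a * a + b * b)

# Synthetic toe profile: points on an arc of a circle with center (1, 2), radius 3.
pts = [(1 + 3 * math.cos(t / 10), 2 + 3 * math.sin(t / 10)) for t in range(11)]
center, radius = fit_circle(pts)
print(center, radius)  # recovers center (1, 2) and radius 3 up to rounding
```

A fitted radius like this, together with the measured weld angle, is the kind of input that parametric stress-concentration-factor formulas for fillet welds consume.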


Author(s): Anup M. Vader, Abhinav Chadda, Wenjuan Zhu, Ming C. Leu, Xiaoqing F. Liu, ...

This paper presents the integration and evaluation of two popular camera calibration techniques for developing multi-camera vision systems for motion capture. An integrated calibration technique for multi-camera vision systems has been developed. To demonstrate and evaluate this technique, multiple Wii Remotes (Wiimotes) from Nintendo were used to form a vision system that performs 3D motion capture in real time. The integrated technique is a two-step process: it first calibrates the intrinsic parameters of each camera using Zhang’s algorithm [5] and then calibrates the extrinsic parameters of the cameras together using Svoboda’s algorithm [9]. Computer software has been developed to implement the integrated technique, and experiments using it to perform motion capture with Wiimotes show a significant improvement in measurement accuracy over existing calibration techniques.
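To make the two-step split concrete, the standard pinhole model separates exactly the quantities the two steps recover: the intrinsic matrix K (per camera, as in Zhang's method) and the extrinsic rotation and translation [R|t] (jointly across cameras, as in Svoboda's method). A minimal sketch with made-up numbers, not the paper's calibration code:

```python
def project(K, R, t, X):
    """Pinhole projection: x = K (R X + t), then divide by the third coordinate."""
    # Extrinsics: transform the world point into the camera frame.
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    # Intrinsics: map camera-frame coordinates to pixel coordinates.
    x = [sum(K[i][j] * Xc[j] for j in range(3)) for i in range(3)]
    return (x[0] / x[2], x[1] / x[2])

# Hypothetical intrinsics: focal length 800 px, principal point (320, 240).
K = [[800, 0, 320], [0, 800, 240], [0, 0, 1]]
# Identity rotation and zero translation: camera at the world origin.
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [0, 0, 0]
print(project(K, R, t, [0.1, -0.05, 2.0]))  # (360.0, 220.0)
```

Once every Wiimote's K and [R|t] are known, a marker's 3D position can be triangulated from its pixel coordinates in two or more cameras, which is what real-time motion capture requires.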


Author(s): Rafael Radkowski, Helene Waßmann

This paper presents a virtual experimental environment for testing virtual prototypes of intelligent mechatronic systems. A virtual prototype is a computer-internal model of a real product. Virtual environments are used to verify the functionality of these virtual prototypes during the product development process. Normally, however, virtual environments are composed manually: engineers model the set of virtual prototypes and the relations between them by hand. Furthermore, there is a lack of formal methods for testing virtual prototypes of mechatronic systems. This paper presents software agents that automatically detect relations between virtual prototypes in a virtual environment. The concept of the agent-supported virtual environment is presented, as well as the data the agents need to identify relations between the virtual prototypes. The concept has been tested; one of the examples is described.
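The abstract does not specify how the agents detect relations, so the following is only a toy sketch of one plausible mechanism with hypothetical names: an agent scans pairs of prototype objects and reports a proximity relation whenever their bounding spheres come within a threshold of touching.

```python
import math

def detect_proximity(prototypes, margin=0.1):
    """Return (name_a, name_b) pairs whose bounding spheres are within `margin`.
    `prototypes` maps a name to (position, bounding_radius)."""
    relations = []
    items = list(prototypes.items())
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            (na, (pa, ra)), (nb, (pb, rb)) = items[i], items[j]
            dist = math.dist(pa, pb)               # Euclidean distance between centers
            if dist <= ra + rb + margin:
                relations.append((na, nb))
    return relations

scene = {
    "shuttle":  ((0.0, 0.0, 0.0), 1.0),    # (position, bounding radius)
    "track":    ((1.5, 0.0, 0.0), 0.5),    # touching the shuttle: relation expected
    "conveyor": ((10.0, 0.0, 0.0), 1.0),   # far away: no relation
}
print(detect_proximity(scene))  # [('shuttle', 'track')]
```

An agent running a check like this each frame could maintain the relation set automatically as prototypes move, instead of an engineer wiring the relations up by hand.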


Author(s): Vladimir Ortega-González, Samir Garbaya, Frédéric Merienne

In this paper we describe a proposal based on the use of 3D sound metaphors for providing precise spatial cueing in virtual environments. A 3D sound metaphor is a combination of audio spatialization and audio cueing techniques. The 3D sound metaphors are intended to improve user performance and perception. The interest of this kind of stimulation mechanism is that it could enable efficient 3D interaction for tasks such as selection, manipulation, and navigation, among others. We describe the main related concepts, the most relevant related work, the current theoretical and technical problems, our approach, our scientific objectives, our methodology, and our research perspectives.
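Two standard building blocks of the audio spatialization the abstract refers to are amplitude panning and the interaural time difference (ITD). As an illustration of textbook formulas, not the authors' system: constant-power stereo panning and Woodworth's spherical-head ITD model.

```python
import math

def constant_power_pan(azimuth_deg):
    """Map azimuth in [-45, 45] deg to left/right gains with gl^2 + gr^2 = 1."""
    theta = (azimuth_deg + 45.0) / 90.0 * (math.pi / 2.0)
    return math.cos(theta), math.sin(theta)

def itd_seconds(azimuth_deg, head_radius=0.0875, c=343.0):
    """Woodworth's interaural time difference model: ITD = (a/c) (sin t + t),
    with head radius a in meters and speed of sound c in m/s."""
    t = math.radians(azimuth_deg)
    return head_radius / c * (math.sin(t) + t)

gl, gr = constant_power_pan(0.0)    # centered source: equal gains on both ears
print(round(gl, 4), round(gr, 4))   # 0.7071 0.7071
print(itd_seconds(90.0))            # ~0.00066 s: roughly the maximum ITD for a human head
```

A "sound metaphor" in the paper's sense layers cueing signals (beacons, earcons, and the like) on top of spatialization cues such as these to point the user at targets for selection or navigation.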


Author(s): Jay Roltgen, Stephen Gilbert

In this paper we investigate whether a multitouch interface allows users of a supervisory control system to perform tasks more effectively than is possible with a mouse-based interface. Supervisory control interfaces are an active field of research but so far have generally utilized mouse-based interaction. Additionally, most such interfaces require a skilled operator due to their intrinsic complexity. We present an interface for controlling multiple unmanned ground vehicles that is conducive to multitouch as well as mouse-based interaction, which allows us to evaluate novice users’ performance in several areas. Results suggest that a multitouch interface can be used as effectively as a mouse-based interface for certain tasks that are relevant in a supervisory control environment.

