ASME 2010 World Conference on Innovative Virtual Reality


Total documents: 39
H-index: 4
Published by: ASMEDC
ISBN: 9780791849088, 9780791838693

Author(s): Bin Chen, John Moreland

Magnetic resonance diffusion tensor imaging (DTI) is sensitive to the anisotropic diffusion of water imposed by its macromolecular environment and has been shown to be useful in characterizing the structure of ordered tissues such as brain white matter, myocardium, and cartilage. Water diffusivity inside biological tissue is characterized by the diffusion tensor, a symmetric rank-2 3×3 matrix with six independent elements, which encodes rich information about diffusion anisotropy. However, it is difficult to perceive the characteristics of diffusion tensors by inspecting the tensor elements, even with the aid of traditional three-dimensional visualization techniques, and there is a need to explore these characteristics in a straightforward, quantitative way. In this study, we propose a virtual reality (VR) based MR DTI visualization that combines high-resolution anatomical image segmentation and registration, ROI definition, neuronal white matter fiber tractography, and fMRI activation map integration. The VR application employs brain image visualization techniques including surface, volume, streamline, and streamtube rendering, and uses head tracking and a wand for navigation and interaction. It allows the user to switch between different modalities and visualization techniques, as well as to make point-and-choose queries. The main purpose of the application is to support basic research and clinical use with quantitative, accurate measurements of the diffusivity and the degree of anisotropy derived from the diffusion tensor.
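
The abstract's scalar measures of diffusivity and anisotropy are conventionally derived from the eigenvalues of the tensor. A minimal sketch, assuming the standard definitions of mean diffusivity (MD) and fractional anisotropy (FA); the function name and the example tensor values are illustrative, not taken from the paper:

```python
# Sketch: scalar measures from a symmetric 3x3 diffusion tensor.
# The example eigenvalues below are illustrative, not from the paper.
import numpy as np

def anisotropy_measures(D):
    """Return mean diffusivity (MD) and fractional anisotropy (FA)."""
    evals = np.linalg.eigvalsh(D)             # three real eigenvalues
    md = evals.mean()                          # mean diffusivity
    num = np.sqrt(((evals - md) ** 2).sum())
    den = np.sqrt((evals ** 2).sum())
    fa = np.sqrt(1.5) * num / den if den > 0 else 0.0
    return md, fa

# A prolate tensor (strong diffusion along one axis), typical of a
# coherent white-matter fiber bundle.
D = np.diag([1.7e-3, 0.3e-3, 0.3e-3])         # mm^2/s, illustrative
md, fa = anisotropy_measures(D)
print(f"MD = {md:.2e} mm^2/s, FA = {fa:.2f}")  # FA close to 0.8
```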


Author(s): Jonathan Becker, Aveek Purohit, Zheng Sun

The USARSim group at NIST developed a simulated robot that operates in the Unreal Tournament 3 (UT3) gaming environment. They used a software PID controller to control the robot in UT3 worlds. Unfortunately, the PID controller did not work well, so NIST asked us to develop a better controller using machine learning techniques. In the process, we characterized the software PID controller and the robot's behavior in UT3 worlds. Using data collected from our simulations, we compared different machine learning techniques, including linear regression and reinforcement learning (RL). Finally, we implemented an RL-based controller in Matlab and ran it in the UT3 environment via a TCP/IP link between Matlab and UT3.
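
The abstract does not specify the RL formulation, so as a rough illustration of the kind of controller it describes, here is a minimal tabular Q-learning sketch for a discretized steering task; the state and action discretization, reward, and all names are hypothetical, and this is Python rather than the Matlab used in the paper:

```python
# Hypothetical sketch of a tabular Q-learning steering controller;
# discretization, reward, and hyperparameters are illustrative assumptions.
import numpy as np

N_STATES = 21          # discretized heading error, e.g. -10..+10 degrees
N_ACTIONS = 3          # steer left, go straight, steer right
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

Q = np.zeros((N_STATES, N_ACTIONS))
rng = np.random.default_rng(0)

def choose_action(state):
    """Epsilon-greedy action selection."""
    if rng.random() < EPS:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(Q[state]))

def update(state, action, reward, next_state):
    """One-step Q-learning backup."""
    td_target = reward + GAMMA * Q[next_state].max()
    Q[state, action] += ALPHA * (td_target - Q[state, action])
```

In the setup the abstract describes, each transition (state, action, reward, next state) would be produced by the simulated robot and exchanged over the TCP/IP link between the controller and UT3.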


Author(s): Adam J. Faeth, Chris Harding

This research describes a theoretical framework for designing multimodal feedback for 3D buttons in a virtual environment. Virtual button implementations often suffer from inadequate feedback compared to their mechanical, real-world counterparts. This lack of feedback can lead to accidental button actuations and reduce the user's ability to discover how to interact with the virtual button. We propose a framework for more expressive virtual button feedback that communicates visual, audio, and haptic feedback to the user. We apply the theoretical framework by implementing a software library prototype that supports multimodal feedback from virtual buttons in a 3D virtual reality workspace.
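
As an illustration of the kind of feedback coordination the framework calls for, here is a minimal sketch of a virtual button that fans each state change out to visual, audio, and haptic channels; the class, state, and event names are assumptions, not the authors' library API:

```python
# Hypothetical sketch: one logical button event drives all three
# feedback modalities so visual, audio, and haptic cues stay in sync.
from enum import Enum, auto

class ButtonState(Enum):
    IDLE = auto()
    HOVER = auto()        # fingertip near the button face
    PRESSED = auto()

class VirtualButton:
    def __init__(self, channels):
        self.state = ButtonState.IDLE
        self.channels = channels          # visual, audio, haptic renderers

    def _emit(self, event):
        for channel in self.channels:     # same event to every modality
            channel.render(event)

    def on_contact(self, depth, threshold=0.005):
        """Called each frame the cursor touches the button face."""
        if self.state is ButtonState.IDLE:
            self.state = ButtonState.HOVER
            self._emit("hover")           # e.g. highlight plus a soft tick
        if depth > threshold and self.state is not ButtonState.PRESSED:
            self.state = ButtonState.PRESSED
            self._emit("actuate")         # unambiguous press confirmation

    def on_release(self):
        if self.state is ButtonState.PRESSED:
            self._emit("release")
        self.state = ButtonState.IDLE
```

The explicit hover state is what provides the discoverability that the abstract says mechanical buttons offer and virtual ones often lack.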


Author(s): Eliab Z. Opiyo

Flat-screen displays such as CRT displays, liquid crystal displays, and plasma displays are predominantly used for visualization of product models in computer-aided design (CAD) processes. However, future platforms for product model visualization are expected to include 3D displays as well, and different types of display systems, each offering different visualization capabilities, can be expected to complement the traditional flat-screen visual display units. Among the 3D display systems with the greatest potential for product model visualization are holographic volumetric displays. One of their most appealing features is that they generate images with true spatial representation that appear to pop out of the flat screen, allowing multiple viewers to see 3D images or scenes from different perspectives. One of their main shortcomings, however, is that they lack suitable interfaces for interactive visualization. The work reported in this paper focuses on this problem and is part of a larger research effort whose aim is to develop suitable interfaces for interactive viewing of holographic virtual models. The emphasis in this work was specifically on the exploration of possible interaction styles and the creation of a suitable interaction framework. The proposed framework consists of three interface methods: an intermediary graphical user interface (IGUI), designed to be used via a flat-screen display with standard input devices; a gestural/hand-motion interface; and a haptic interface. Preliminary tests have shown that the IGUI helps viewers rotate, scale, and navigate virtual models in 3D scenes quickly and conveniently. On the other hand, these tests have shown that tasks such as selecting or moving virtual models in 3D scenes are not sufficiently supported by the IGUI, and that complementary interfaces may enable viewers to interact with models more effectively and intuitively.
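
A minimal sketch of how the three interface methods might sit behind a common abstraction, so the framework can route commands from whichever interface the viewer is using; all class and method names are hypothetical, not from the paper:

```python
# Hypothetical sketch of the three-part interaction framework; the
# concrete input handling is elided, only the structure is shown.
from abc import ABC, abstractmethod

class InteractionInterface(ABC):
    """A complementary interface for viewing holographic models."""
    @abstractmethod
    def poll(self):
        """Return the next interaction command, or None."""

class IGUIInterface(InteractionInterface):
    """Flat-screen GUI with standard input: rotate, scale, navigate."""
    def poll(self):
        ...

class GestureInterface(InteractionInterface):
    """Hand motions, better suited to selecting and moving models."""
    def poll(self):
        ...

class HapticInterface(InteractionInterface):
    """Force-feedback device for touch-based manipulation."""
    def poll(self):
        ...

def dispatch(interfaces, scene):
    # Treat all interfaces uniformly: any of them may drive the scene.
    for interface in interfaces:
        command = interface.poll()
        if command is not None:
            scene.apply(command)
```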


Author(s): Eve S. Wurtele, Diane C. Bassham, Julie Dickerson, David J. Kabala, William Schneller, ...

Knowledge of cellular structure and function has increased dramatically with the advent of modern molecular and computational technologies. Helping students understand cellular dynamics is a major challenge for educators. To address this challenge, we have developed the Kabala Engine, an open-source engine based on OpenSG (http://www.opensg.org) and VRJuggler (http://www.vrjuggler.org). The engine is designed to enable biologists, and indeed any domain expert (chemists, artists, psychologists), to create interactive virtual worlds for teaching or research. As a proof of concept, we have used the engine to create Meta!Blast, a virtual plant cell containing a prototype chloroplast that students can enter to activate the light reactions, including electron excitation, and produce molecular oxygen and ATP.


Author(s): Matthew Swanson, Eric Johnson, Alexander Stoytchev

This paper describes a method for non-destructive evaluation of weld quality from 3D point data. The method uses a stereo camera system to capture high-resolution 3D images of deposited welds, which are then processed to extract key parameters of the welds. These parameters (the weld angle and the radius of the weld at the weld toe) can in turn be used to estimate the stress concentration factor of the weld and thus to infer its quality. The method is intended for quality control applications in manufacturing environments and aims to supplement, and eventually eliminate, the manual inspections that are currently the predominant inspection method. Experimental results for T-fillet welds are reported.
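
One concrete sub-step the method implies is estimating the toe radius from a 2D cross-section of the captured point data. A minimal sketch using an algebraic (Kasa) least-squares circle fit; the variable names and the synthetic example are mine, not the authors':

```python
# Sketch: estimate a weld-toe radius by fitting a circle to cross-section
# points. Solves x^2 + y^2 = 2*a*x + 2*b*y + c linearly; the center is
# (a, b) and the radius is sqrt(c + a^2 + b^2).
import numpy as np

def fit_circle_radius(points):
    """points: (N, 2) cross-section points near the weld toe."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    radius = np.sqrt(c + a ** 2 + b ** 2)
    return (a, b), radius

# Synthetic example: noisy samples from a 2.0 mm arc.
theta = np.linspace(0, np.pi / 2, 50)
pts = 2.0 * np.column_stack([np.cos(theta), np.sin(theta)])
pts += np.random.default_rng(1).normal(scale=0.01, size=pts.shape)
_, r = fit_circle_radius(pts)
print(f"estimated toe radius: {r:.3f} mm")
```

The weld angle could be recovered analogously, for example from the directions of lines fitted to the plate and weld surfaces on either side of the toe.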


Author(s): Anup M. Vader, Abhinav Chadda, Wenjuan Zhu, Ming C. Leu, Xiaoqing F. Liu, ...

This paper presents the integration and evaluation of two popular camera calibration techniques for developing multi-camera vision systems for motion capture. To demonstrate and evaluate the integrated calibration technique, multiple Nintendo Wii Remotes (Wiimotes) were used to form a vision system performing 3D motion capture in real time. The integrated technique is a two-step process: it first calibrates the intrinsic parameters of each camera using Zhang's algorithm [5] and then calibrates the extrinsic parameters of the cameras together using Svoboda's algorithm [9]. Computer software has been developed to implement the integrated technique, and experiments using it to perform motion capture with Wiimotes show a significant improvement in measurement accuracy over existing calibration techniques.
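
For readers who want the flavor of the two-step process, here is a sketch using OpenCV, whose calibrateCamera routine implements Zhang's planar-target method. For brevity the second step below uses stereoCalibrate on a two-camera rig with the intrinsics held fixed; this is a simplified stand-in, not the Svoboda multi-camera self-calibration used in the paper, which calibrates many cameras jointly:

```python
# Sketch of the two-step idea: per-camera intrinsics first (Zhang),
# then extrinsics relating the cameras with intrinsics held fixed.
# Svoboda's joint multi-camera self-calibration is replaced here by a
# simple two-camera stereo calibration for illustration only.
import cv2

def calibrate_intrinsics(object_pts, image_pts, image_size):
    """Step 1: intrinsic matrix K and distortion for one camera."""
    rms, K, dist, _, _ = cv2.calibrateCamera(
        object_pts, image_pts, image_size, None, None)
    return K, dist

def calibrate_extrinsics(object_pts, img_pts_a, img_pts_b,
                         K_a, dist_a, K_b, dist_b, image_size):
    """Step 2: rotation R and translation t from camera A to camera B."""
    _, _, _, _, _, R, t, _, _ = cv2.stereoCalibrate(
        object_pts, img_pts_a, img_pts_b,
        K_a, dist_a, K_b, dist_b, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return R, t
```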


Author(s): Jay Roltgen, Stephen Gilbert

In this paper we investigate whether a multitouch interface allows users of a supervisory control system to perform tasks more effectively than is possible with a mouse-based interface. Supervisory control interfaces are an active field of research, but have so far generally relied on mouse-based interaction. Additionally, most such interfaces require a skilled operator due to their intrinsic complexity. We present an interface for controlling multiple unmanned ground vehicles that is conducive to both multitouch and mouse-based interaction, which allows us to evaluate novice users' performance in several areas. Results suggest that a multitouch interface can be used as effectively as a mouse-based interface for certain tasks relevant to a supervisory control environment.


Author(s): Mario Covarrubias, Michele Antolini, Monica Bordegoni, Umberto Cugini

This paper describes a multimodal system whose aim is to replicate, in a virtual reality environment, typical operations that professional designers perform with real splines laid over the surface of a physical prototype of an aesthetic product, in order to better evaluate the characteristics of the shape they are creating. The system can not only haptically render continuous contact along a curve by means of a servo-controlled haptic strip, but also allows the user to modify the shape by applying force directly to the haptic device. The haptic strip can bend and twist to better approximate the portion of the virtual object's surface over which the strip is lying. The device is 600 mm long and is controlled by 11 digital servos for the control of the shape (6 for bending and 5 for twisting), and by two MOOG-FCS HapticMaster devices plus two additional digital servos for 6-DOF positioning. We have developed additional input devices integrated with the haptic strip: two force-sensitive handles positioned at the extremities of the strip, a capacitive linear touch sensor placed along the surface of the strip, and four buttons. These devices are used to interact with the system, to select menu options, and to apply deformations to the virtual object. The paper describes the interaction modalities and the developed user interface, the applied methodologies, the achieved results, and the conclusions drawn from the user tests.
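
One sub-problem in driving such a strip is turning a target curve into per-servo commands. A minimal sketch, assuming the strip is commanded by the discrete turning angle at each bending servo; the sampling scheme and all names are illustrative, not the authors' control code:

```python
# Hypothetical sketch: resample a target curve at the servo locations and
# compute the discrete bend angle each bending actuator must realize.
import numpy as np

def resample(curve, n):
    """Resample an (N, 3) polyline at n points equally spaced in arc length."""
    seg = np.linalg.norm(np.diff(curve, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    targets = np.linspace(0.0, s[-1], n)
    return np.column_stack([np.interp(targets, s, curve[:, k]) for k in range(3)])

def bend_angles(curve, n_servos=6):
    """Turning angle between consecutive segments, one per bending servo."""
    p = resample(curve, n_servos + 2)        # servo samples plus endpoints
    v = np.diff(p, axis=0)
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    cos_t = np.clip((v[:-1] * v[1:]).sum(axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cos_t))

# Example: a planar arc the strip should approximate (units: mm).
t = np.linspace(0, np.pi / 3, 100)
arc = np.column_stack([300 * np.sin(t), 300 * (1 - np.cos(t)), np.zeros_like(t)])
print(bend_angles(arc))                      # nearly equal angles along an arc
```

Commands for the five twisting servos would follow analogously from the change in the surface orientation along the curve.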


Author(s): Lauren Cairco, Amy C. Ulinski, Jerome McClendon, Toni Bloodworth, James Matheison, ...

There is a direct need in industry to improve the in-production vehicle inspection process and to support mobility of inspection stations. In this paper we present a novel interface design, implemented on three multi-modal prototype systems, whose design was based on the results of an initial field study we conducted. The design of these systems incorporates two main objectives: 1) enforce a systematic check of each item on the list to reduce missed items, and 2) facilitate mobility, so that the tools used to assist in inspection can be installed in one area and later easily moved to another. Our graphical software interface aims to enforce systematic checks through a system-directed delivery of the checklist items, with options for error correction and support for dynamic inspection, where the items identified for inspection may differ among checkpoints. We have designed three hardware configurations that support our interface, with the aims of achieving mobility from one inspection area to another, leaving both hands free for inspection, and providing a more convenient way to refer to the list while conducting an inspection. This paper additionally presents preliminary feedback and suggestions for improvement from a pilot study of our interface on the three hardware configurations. In future work we plan to incorporate the suggestions from the pilot study and to conduct a more formal evaluation of our multi-modal systems.
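
As a rough illustration of the system-directed checklist flow described above, here is a minimal sketch in which items are delivered one at a time, each must be explicitly resolved before advancing, and earlier answers can be corrected; the names and structure are hypothetical, not the authors' implementation:

```python
# Hypothetical sketch of system-directed checklist delivery with error
# correction; the item set can differ per checkpoint ("dynamic inspection").
from dataclasses import dataclass, field

@dataclass
class ChecklistSession:
    items: list                      # items selected for this checkpoint
    results: dict = field(default_factory=dict)
    index: int = 0

    def current(self):
        """The single item the system is directing attention to."""
        return self.items[self.index] if self.index < len(self.items) else None

    def record(self, status):
        """Resolve the current item (e.g. 'pass'/'fail') and advance.
        The operator cannot skip ahead, enforcing a systematic check."""
        item = self.current()
        if item is None:
            raise RuntimeError("checklist already complete")
        self.results[item] = status
        self.index += 1

    def correct(self, item, status):
        """Error correction: revise an earlier answer without losing place."""
        if item not in self.results:
            raise KeyError(f"{item} has not been inspected yet")
        self.results[item] = status
```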

