The Effects of 3D Audio on Hologram Localization in Augmented Reality Environments

Author(s):  
Terek Arce ◽  
Henry Fuchs ◽  
Kyla McMullen

Currently available augmented reality systems have a narrow field of view, giving users only a small window to look through to find holograms in the environment. The challenge for developers is to direct users’ attention to holograms outside this window. To alleviate this field of view constraint, most research has focused on hardware improvements to the head-mounted display. However, incorporating 3D audio cues into programs could also aid users in this localization task. This paper investigates the effectiveness of 3D audio on hologram localization. A comparison of 3D audio, visual, and mixed-mode stimuli shows that users are able to localize holograms significantly faster under conditions that include 3D audio. To our knowledge, this is the first study to explore the use of 3D audio in localization tasks using augmented reality systems. The results provide a basis for the incorporation of 3D audio in augmented reality applications.

Author(s):  
Eugene Hayden ◽  
Kang Wang ◽  
Chengjie Wu ◽  
Shi Cao

This study explores the design, implementation, and evaluation of an augmented reality (AR) prototype that assists novice operators in performing procedural tasks in simulator environments. The prototype uses an optical see-through head-mounted display (OST HMD) in conjunction with a simulator display to add sequences of interactive visual and attention-guiding cues to the operator’s field of view. We used a 2x2 within-subject design to test two conditions (with and without AR cues), each including a voice assistant, across two procedural tasks (preflight and landing). An experiment examined twenty-six novice operators. The results demonstrated that augmented reality improved situation awareness and accuracy; however, it yielded longer task completion times, creating a speed–accuracy trade-off in favour of accuracy. No significant effect on mental workload was found. The results suggest that augmented reality systems have the potential to be used by a wider audience of operators.


Author(s):  
Pei-Hsuan Tsai ◽  
Yu-Hsuan Huang ◽  
Yu-Ju Tsai ◽  
Hao-Yu Chang ◽  
Masatoshi Chang-Ogimoto ◽  
...  

Author(s):  
Tanasha Taylor ◽  
Shana Smith ◽  
David Suh

This research presents a prototype of a computer-generated harp that simulates physical string vibrations with haptic feedback in an augmented reality environment. Individuals, immersed in the environment through a head-mounted display, play the virtual harp with a Phantom Omni haptic device and receive realistic interaction forces from the harp’s strings. Most previous research on virtual musical instruments provides feedback only in the form of visual and audio cues, not haptic cues. The proposed project is designed to provide all three forms of cues for interacting with a computer-generated harp. The harp is modeled on a realistic instrument and includes string-vibration physics to give individuals a traditional, instrument-like interaction. This prototype will be applied toward interactive musical experiences and the development of skills during music therapy for individuals with disabilities.


2021 ◽  
Author(s):  
Nina Rohrbach ◽  
Joachim Hermsdörfer ◽  
Lisa-Marie Huber ◽  
Annika Thierfelder ◽  
Gavin Buckingham

Augmented reality, whereby computer-generated images are overlaid onto the physical environment, is becoming a significant part of the world of education and training. Little is known, however, about how these external images are treated by the sensorimotor system of the user – are they fully integrated into the external environmental cues, or largely ignored by low-level perceptual and motor processes? Here, we examined this question in the context of the size–weight illusion (SWI). Thirty-two participants repeatedly lifted and reported the heaviness of two cubes of unequal volume but equal mass in alternation. Half of the participants saw semi-transparent, equally sized holographic cubes superimposed onto the physical cubes through a head-mounted display. Fingertip force rates were measured prior to lift-off to determine how the holograms influenced sensorimotor prediction, while verbal reports of heaviness after each lift indicated how the holographic size cues influenced the SWI. As expected, participants who lifted without augmented visual cues lifted the large object at a higher rate of force than the small object on early lifts and experienced a robust SWI across all trials. In contrast, participants who lifted the (apparently equal-sized) augmented cubes used similar force rates for each object. Furthermore, they experienced no SWI during the first lifts of the objects, with an SWI developing over repeated trials. These results indicate that holographic cues initially dominate physical cues and cognitive knowledge, but are discounted when they conflict with cues from other senses.


2017 ◽  
Vol 26 (1) ◽  
pp. 16-41 ◽  
Author(s):  
Jonny Collins ◽  
Holger Regenbrecht ◽  
Tobias Langlotz

Virtual and augmented reality, and other forms of mixed reality (MR), have become a focus of attention for companies and researchers. Before they can become successful in the market and in society, MR systems must be able to deliver a convincing, novel experience for their users. By definition, the experience of mixed reality relies on the perceptually successful blending of reality and virtuality. Any MR system has to provide a sensory, and in particular visually, coherent set of stimuli. Therefore, issues with visual coherence, that is, discontinuities in the experience of an MR environment, must be avoided. While it is very easy for a user to detect issues with visual coherence, it is very difficult to design and implement a system for coherence. This article presents a framework and exemplary implementation of a systematic enquiry into issues with visual coherence and possible solutions to address those issues. The focus is on head-mounted display-based systems, notwithstanding the framework’s applicability to other types of MR systems. Our framework, together with a systematic discussion of tangible issues and solutions for visual coherence, aims at guiding developers of mixed reality systems toward better and more effective user experiences.


2006 ◽  
Vol 5 (3) ◽  
pp. 33-39 ◽  
Author(s):  
Seokhee Jeon ◽  
Hyeongseop Shim ◽  
Gerard J. Kim

In this paper, we investigate the comparative usability of three different viewing configurations for an augmented reality (AR) system that uses a desktop monitor instead of a head-mounted display. In many cases, the use of head-mounted displays may not be viable for operational or cost reasons. Such a configuration is bound to cause usability problems because of mismatches in the user's proprioception, scale, and hand-eye coordination, and because of reduced 3D depth perception. We asked a pool of subjects to carry out an object manipulation task in three different desktop AR setups, measured their task performance, and surveyed their perceived usability and preference. Our results indicate that placing a fixed camera behind the user was the best option for convenience, while attaching a camera to the user's head was best for task performance. The results should provide a valuable guide for designing desktop augmented reality systems without head-mounted displays.
