A framework for designing head-related transfer function distance metrics that capture localization perception

2021 ◽ Vol 1 (4) ◽ pp. 044401
Author(s): Ishwarya Ananthabhotla ◽ Vamsi Krishna Ithapu ◽ W. Owen Brimijoin

2020 ◽ Vol 10 (15) ◽ pp. 5257
Author(s): Nathan Berwick ◽ Hyunkook Lee

This study examined whether the spatial unmasking effect operates on speech reception thresholds (SRTs) in the median plane. SRTs were measured using an adaptive staircase procedure, with target speech sentences and speech-shaped noise maskers presented via loudspeakers at −30°, 0°, 30°, 60° and 90°. Results indicated a significant median plane spatial unmasking effect, with the largest SRT gain obtained for the −30° elevation of the masker. Head-related transfer function analysis suggests that the result is associated with the energy weighting of the ear-input signal of the masker at upper-mid frequencies relative to the maskee.
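The abstract above mentions that SRTs were measured with an adaptive staircase procedure. As a rough illustration of how such a procedure converges on a threshold, here is a minimal sketch of a simple 1-up/1-down staircase with a simulated listener; the step size, reversal rule, and the hypothetical listener model are illustrative assumptions, not the study's actual protocol.

```python
# Hedged sketch of a 1-up/1-down adaptive staircase for estimating a
# speech reception threshold (SRT). All parameters are assumptions
# for illustration, not values from the study.

def simulate_listener(snr_db, true_srt_db=-6.0):
    """Hypothetical deterministic listener: responds correctly
    whenever the presented SNR is at or above its true SRT."""
    return snr_db >= true_srt_db

def staircase_srt(start_snr_db=0.0, step_db=2.0, n_reversals=8):
    """Lower the SNR after a correct response, raise it after an
    incorrect one; estimate the SRT as the mean SNR at reversals."""
    snr = start_snr_db
    last_direction = None
    reversals = []
    while len(reversals) < n_reversals:
        direction = -1 if simulate_listener(snr) else +1
        if last_direction is not None and direction != last_direction:
            reversals.append(snr)  # track direction changes
        last_direction = direction
        snr += direction * step_db
    return sum(reversals) / len(reversals)

print(staircase_srt())
```

With these toy settings the staircase oscillates around the simulated listener's threshold, and the reversal average lands near it; real procedures add randomized sentences, shrinking step sizes, and a probabilistic psychometric function.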


2007 ◽ Vol 50 (3) ◽ pp. 267-280
Author(s): BoSun Xie ◽ XiaoLi Zhong ◽ Dan Rao ◽ ZhiQiang Liang

2016 ◽ Vol 41 (3) ◽ pp. 437-447
Author(s): Dominik Storek ◽ Frantisek Rund ◽ Petr Marsalek

Abstract This paper analyses the performance of the Differential Head-Related Transfer Function (DHRTF), an alternative transfer function for headphone-based virtual sound source positioning within the horizontal plane. This experimental one-channel function is used to reduce processing and avoid timbre alteration while preserving the signal features important for sound localisation. A positioning algorithm employing the DHRTF is compared to two other common positioning methods: amplitude panning and HRTF processing. Results of a theoretical comparison and a quality assessment of the methods by subjective listening tests are presented. The tests focus on distinctive aspects of the positioning methods: spatial impression, timbre alteration, and loudness fluctuations. The results show that the DHRTF positioning method is applicable with very promising performance; it avoids the perceptible channel coloration that occurs with the HRTF method, and it delivers spatial impression more successfully than simple amplitude panning.
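The abstract does not reproduce the DHRTF's definition, but a natural reading of a "differential" one-channel transfer function is the spectral ratio of the two ear HRTFs, which retains only the interaural difference. The sketch below illustrates that formulation on toy impulse responses; the function name, the toy HRIRs, and the regularisation constant are all assumptions, not the paper's implementation.

```python
import numpy as np

# Hedged sketch: one plausible formulation of a differential HRTF as
# the complex ratio H_L(f) / H_R(f), collapsing a two-ear HRTF pair
# into a single channel that carries only the interaural difference.
# The impulse responses below are toy placeholders, not measured data.

def differential_hrtf(hrir_left, hrir_right, n_fft=256, eps=1e-8):
    """Return H_L(f) / H_R(f) for the two ear impulse responses.
    eps is a small regularisation term (an assumption) that guards
    against division by spectral nulls."""
    H_left = np.fft.rfft(hrir_left, n_fft)
    H_right = np.fft.rfft(hrir_right, n_fft)
    return H_left / (H_right + eps)

# Toy HRIRs: the right-ear signal is delayed by 3 samples and
# attenuated by half relative to the left ear.
hrir_l = np.zeros(64)
hrir_l[0] = 1.0
hrir_r = np.zeros(64)
hrir_r[3] = 0.5

D = differential_hrtf(hrir_l, hrir_r)
print(D.shape)
```

For this toy pair the ratio has magnitude 2 at every frequency (the interaural level difference) and a linear phase slope (the interaural time difference), which is exactly the localisation-relevant information a one-channel differential function would preserve.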


Author(s):  
David Murphy ◽  
Flaithrí Neff

In this chapter, we discuss spatial sound within the context of Virtual Reality and other synthetic environments such as computer games. We review current audio technologies, sound constraints within immersive multi-modal spaces, and future trends. The review process takes into consideration the widely varying levels of audio sophistication in the gaming and VR industries, ranging from standard stereo output to Head-Related Transfer Function implementation. The level of sophistication is determined mostly by hardware/system constraints (such as mobile devices or network limitations); however, audio practitioners are developing novel and diverse methods to overcome many of these challenges. No matter what approach is employed, the primary objectives are very similar—the enhancement of the virtual scene and the enrichment of the user experience. We discuss how successful various audio technologies are in achieving these objectives, how they fall short, and how they are aligned to overcome these shortfalls in future implementations.

