spatial audio
Recently Published Documents


TOTAL DOCUMENTS: 335 (last five years: 68)
H-INDEX: 16 (last five years: 3)

2022, pp. 1-1
Author(s): Yulin Wu, Ruimin Hu, Xiaochen Wang, Chenhao Hu, Shanfa Ke

2021, Vol. 11 (23), pp. 11242
Author(s): Bosun Xie, Guangzheng Yu

One purpose of spatial audio is to create perceived virtual sources at various spatial positions, in terms of both direction and distance, with respect to the listener. The psychoacoustic principles of spatial auditory perception are therefore essential to creating such virtual sources. The technical means by which current spatial audio techniques recreate virtual sources in different directions are relatively mature; controlling perceived distance, however, remains challenging. This article reviews the psychoacoustic principles, methods, and open problems of perceived distance control and compares them with the principles and methods of directional localization control in spatial audio, showing that the effectiveness of a given distance-control method depends on the underlying spatial audio principle and technique. Improving perceived distance control will require further research into the detailed psychoacoustic mechanisms of auditory distance perception.
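The distance cues discussed in such reviews can be illustrated with a short sketch. The following Python snippet is an illustrative sketch only, not the article's method: it models two classic monaural distance cues for a virtual point source, inverse-distance level attenuation and the resulting direct-to-reverberant ratio. The reference distance and reverberant level are assumed placeholder values.

```python
# Illustrative sketch of two classic auditory distance cues (not the article's method).
import numpy as np

def distance_gain(distance_m, ref_distance_m=1.0):
    """Inverse-distance (1/r) level attenuation relative to a reference distance."""
    return ref_distance_m / max(distance_m, 1e-3)

def direct_to_reverberant_db(distance_m, reverb_rms=0.05, ref_distance_m=1.0):
    """Direct-to-reverberant ratio in dB. The reverberant level is assumed roughly
    constant across the room, so the ratio falls by about 6 dB per doubling of
    source distance, a well-known distance cue."""
    direct_rms = distance_gain(distance_m, ref_distance_m)
    return 20.0 * np.log10(direct_rms / reverb_rms)

def render_at_distance(mono_signal, distance_m):
    """Apply the 1/r gain to a dry mono signal as a crude distance cue."""
    return np.asarray(mono_signal, dtype=float) * distance_gain(distance_m)

if __name__ == "__main__":
    for d in (1.0, 2.0, 4.0, 8.0):
        print(f"{d:4.1f} m: gain {distance_gain(d):.3f}, "
              f"D/R {direct_to_reverberant_db(d):+5.1f} dB")
```

In practice, a renderer would combine such level and reverberation cues with binaural or loudspeaker-based directional rendering rather than use them in isolation.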


2021, Vol. 69 (7/8), pp. 557-575
Author(s): Johannes M. Arend, Sebastià V. Amengual Garí, Carl Schissler, Florian Klein, Philip W. Robinson

2021
Author(s): Hansung Kim, Luca Remaggi, Aloisio Dourado, Teofilo de Campos, Philip J. B. Jackson, ...

As personalised immersive display systems have been intensively explored in virtual reality (VR), plausible 3D audio corresponding to the visual content is required to provide more realistic experiences to users. It is well known that spatial audio synchronised with visual information improves the sense of immersion, yet limited research progress has been made in immersive audio-visual content production and reproduction. In this paper, we propose an end-to-end pipeline that simultaneously reconstructs the 3D geometry and acoustic properties of an environment from a pair of omnidirectional panoramic images. A semantic scene reconstruction and completion method based on a deep convolutional neural network estimates the complete semantic scene geometry, which is then used to adapt spatial audio reproduction to the scene. Experiments provide objective and subjective evaluations of the proposed pipeline for plausible audio-visual VR reproduction of real scenes.
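To make the geometry-to-acoustics step concrete, here is a minimal, self-contained sketch, not the authors' pipeline: assuming a semantic reconstruction has produced room dimensions and per-surface material labels, a reverberation time can be estimated with Sabine's formula and handed to a spatial audio renderer. The material table and room dimensions below are assumed placeholder values.

```python
# Illustrative sketch: derive a room acoustic parameter (RT60 via Sabine's formula)
# from a hypothetical semantic scene reconstruction. Absorption coefficients and
# room dimensions are assumed placeholder values, not data from the paper.
ABSORPTION = {           # rough broadband absorption coefficients per material label
    "concrete": 0.02,
    "carpet":   0.30,
    "curtain":  0.50,
    "glass":    0.05,
}

def sabine_rt60(volume_m3, surfaces):
    """Sabine's formula: RT60 = 0.161 * V / sum(S_i * alpha_i), in SI units."""
    absorption_area = sum(area_m2 * ABSORPTION[material]
                          for area_m2, material in surfaces)
    return 0.161 * volume_m3 / absorption_area

if __name__ == "__main__":
    # Hypothetical output of the semantic reconstruction: a 6 m x 4 m x 3 m room.
    w, d, h = 6.0, 4.0, 3.0
    surfaces = [
        (w * d, "carpet"),       # floor
        (w * d, "concrete"),     # ceiling
        (2 * d * h, "concrete"), # side walls
        (w * h, "glass"),        # front wall
        (w * h, "curtain"),      # back wall
    ]
    rt60 = sabine_rt60(w * d * h, surfaces)
    print(f"Estimated RT60: {rt60:.2f} s")  # parameter for a reverberator/renderer
```

A full renderer would also use the reconstructed geometry for early reflections, but even a coarse estimate like this lets the reproduced audio track the visible room.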


Author(s): Pranay Manocha, Anurag Kumar, Buye Xu, Anjali Menon, Israel D. Gebru, ...

2021
Author(s): Yasuhide Hyodo, Chihiro Sugai, Junya Suzuki, Masafumi Takahashi, Masahiko Koizumi, ...
