audio rendering
Recently Published Documents

TOTAL DOCUMENTS: 61 (five years: 16)
H-INDEX: 8 (five years: 1)

Acta Acustica, 2021, Vol. 5, p. 20
Author(s): Matthias Blochberger, Franz Zotter

Six-Degree-of-Freedom (6DoF) audio rendering interactively synthesizes spatial audio signals for a variable listener perspective, based on surround recordings taken at multiple perspectives distributed across the listening area in the acoustic scene. Methods that rely on recording-implicit directional information and interpolate the listener perspective without attempting to localize and extract sounds often yield high audio quality, but are limited in spatial definition. Methods that perform sound localization, extraction, and rendering typically operate in the time-frequency domain and risk introducing artifacts such as musical noise. We propose to take advantage of the rich spatial information recorded in the broadband time-domain signals of the multitude of distributed first-order (B-format) recording perspectives. Broadband time-variant signal extraction, which retrieves direct signals and leaves residuals to approximate diffuse and spacious sounds, poses less of a quality risk, as does the broadband re-encoding that enhances the spatial definition of both signal types. To detect and track direct sound objects in this process, we combine the directional data recorded at the individual perspectives into a volumetric multi-perspective activity map for particle-filter tracking. Our technical and perceptual evaluation confirms that this kind of processing enhances the otherwise limited spatial definition of direct-sound objects in other broadband but signal-independent interpolation approaches, such as virtual loudspeaker object (VLO) and Vector-Based Intensity Panning (VBIP).
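The tracking stage described in the abstract, evaluating a volumetric activity map and following its peaks with a particle filter, can be illustrated with a minimal bootstrap particle filter. Note this is a hedged sketch, not the authors' pipeline: the activity map here is a synthetic Gaussian peak around a known moving source, and the particle count, motion-noise level, and map width are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def activity(points, source, sigma=0.3):
    """Synthetic volumetric activity map: a Gaussian peak at the source.

    In the paper's setting this value would instead be accumulated from the
    directional data of the distributed B-format recording perspectives.
    """
    d2 = np.sum((points - source) ** 2, axis=1)
    return np.exp(-d2 / (2 * sigma ** 2))

def particle_filter_track(source_path, n_particles=2000, noise=0.05):
    """Track a moving direct-sound object with a bootstrap particle filter."""
    # Initialize particles uniformly over a cube covering the listening area.
    particles = rng.uniform(-1.0, 1.0, size=(n_particles, 3))
    estimates = []
    for source in source_path:
        # Predict: random-walk motion model for the sound object.
        particles += rng.normal(0.0, noise, size=particles.shape)
        # Update: weight each particle by the activity map at its position.
        w = activity(particles, source) + 1e-12
        w /= w.sum()
        # Estimate: posterior mean of the particle cloud.
        estimates.append(particles.T @ w)
        # Resample: draw particles proportionally to their weights.
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx]
    return np.array(estimates)

# A source drifting through the listening area over 30 frames.
path = np.linspace([-0.5, 0.0, 0.0], [0.5, 0.2, 0.0], 30)
est = particle_filter_track(path)
```

After a few frames the particle cloud concentrates around the activity peak, so the posterior mean follows the moving source; the random-walk prediction step is what lets the filter keep up with slow source motion between frames.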


2020, Vol. 27 (3), pp. 287–308
Author(s): Nawel Khenak, Jeanne Vézien, David Théry, Patrick Bourdot

This article presents a user experiment that assesses the feeling of spatial presence, defined as the sense of “being there,” in both a real and a remote environment (the so-called “natural presence” and “telepresence,” respectively). Twenty-eight participants performed a 3D-pointing task while either physically located in a real office or remotely transported by a teleoperation system. The evaluation also examined the effect of combining audio and visual rendering. Spatial presence and its components were assessed using the ITC-SOPI questionnaire (Lessiter, Freeman, Keogh, & Davidoff, 2001). In addition, objective metrics based on user performance and behavioral indicators were logged. Results indicate that participants experienced a higher sense of spatial presence in the remote environment (hyper-presence), along with higher ecological validity. In contrast, the objective metrics were higher in the real environment, which highlights the absence of correlation between spatial presence and the objective metrics used in the experiment. Moreover, the results show the benefit of adding audio rendering in both environments: it increased participants' sense of spatial presence, their performance, and their engagement during the task.


2020, Vol. 26 (5), pp. 1991–2001
Author(s): Zhenyu Tang, Nicholas J. Bryan, Dingzeyu Li, Timothy R. Langlois, Dinesh Manocha
