Localizing nearby sound sources in a classroom: Binaural room impulse responses

2005 · Vol 117 (5) · pp. 3100-3115 · Author(s): Barbara G. Shinn-Cunningham, Norbert Kopco, Tara J. Martin


2020 · Vol 27 (3) · pp. 235-252 · Author(s): Dario D’Orazio, Giulia Fratoni, Anna Rovigatti, Massimo Garai

Italian Historical Opera Houses are private or public spaces built around a cavea, with tiers of boxes on the surrounding walls. In the early period, from the 16th to the 18th century, the boxes were private property of the wealthiest class, which typically financed the whole building. The stalls hosted the middle class, whose gradually rising social position led to the wooden benches being progressively replaced by chairs, while the gallery was reserved for the lower classes. Does this social division correspond to differences in acoustic comfort? The present work addresses this question using subjective preference models proposed by scholars. To this end, the room criteria defined by different authors at different times are aligned with the ISO 3382 standard and analysed with respect to the acoustic peculiarities of an Italian Historical Opera House selected as a case study. Calibrated impulse responses were processed through numerical simulations of a whole orchestra of virtual sound sources in the pit.
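
For context on how such ISO 3382 criteria are obtained from impulse responses, the sketch below (not code from the paper) computes EDT, T30 and C80 from a mono impulse response using Schroeder backward integration; the function names, sampling rate and synthetic test response are illustrative assumptions.

```python
import numpy as np

def schroeder_curve(ir):
    """Schroeder backward-integrated energy decay curve (EDC) in dB."""
    energy = np.cumsum(ir[::-1] ** 2)[::-1]           # backward integration
    energy = energy / energy[0]                       # normalise to 0 dB at t = 0
    return 10.0 * np.log10(np.maximum(energy, 1e-12))

def decay_time(edc_db, fs, hi_db, lo_db):
    """Extrapolated 60 dB decay time from a linear fit between two EDC levels."""
    idx = np.where((edc_db <= hi_db) & (edc_db >= lo_db))[0]
    t = idx / fs
    slope, _ = np.polyfit(t, edc_db[idx], 1)          # dB per second (negative)
    return -60.0 / slope

def clarity_c80(ir, fs):
    """Early-to-late energy ratio with an 80 ms split, in dB (ISO 3382 C80)."""
    split = int(0.080 * fs)
    early = np.sum(ir[:split] ** 2)
    late = np.sum(ir[split:] ** 2)
    return 10.0 * np.log10(early / late)

# Crude synthetic impulse response (exponentially decaying noise) as a stand-in
fs = 48_000
t = np.arange(int(1.5 * fs)) / fs
ir = np.random.randn(t.size) * np.exp(-t / 0.35)

edc = schroeder_curve(ir)
print("EDT:", decay_time(edc, fs, 0.0, -10.0), "s")
print("T30:", decay_time(edc, fs, -5.0, -35.0), "s")
print("C80:", clarity_c80(ir, fs), "dB")
```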


2019 · Vol 9 (3) · pp. 460 · Author(s): Song Li, Roman Schlieper, Jürgen Peissig

Several studies have shown that reverberation and the spectral detail of the direct sound are two essential cues for the perceived externalization of virtual sound sources in reverberant environments. The present study investigated the role of these two cues in the contralateral and ipsilateral ear signals on the perceived externalization of headphone-reproduced binaural sound images at different azimuth angles. For this purpose, seven pairs of non-individual binaural room impulse responses (BRIRs) were measured at azimuth angles of −90°, −60°, −30°, 0°, 30°, 60°, and 90° in a listening room. The magnitude spectra of the direct parts were smoothed, and the reverberation was removed, in either the left- or the right-ear BRIRs. The modified BRIRs were convolved with a speech signal, and the resulting binaural sounds were presented over headphones. Subjects were asked to rate the degree of perceived externalization of the presented stimuli. The results of the subjective listening experiment revealed that the magnitude spectra of the direct parts in the ipsilateral ear signals and the reverberation in the contralateral ear signals are important for the perceived externalization of virtual lateral sound sources.
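
A minimal sketch of this kind of BRIR manipulation, assuming a simple time-domain split and omitting the spectral-smoothing step, is shown below; the 2.5 ms split window, the function names and the rendering logic are illustrative assumptions, not the processing chain used in the study.

```python
import numpy as np
from scipy.signal import fftconvolve

def split_brir(brir, fs, direct_ms=2.5):
    """Split a single-ear BRIR into direct and reverberant parts.

    The split point is taken a few milliseconds after the strongest peak;
    the 2.5 ms window is an assumption, not a value from the study.
    """
    peak = np.argmax(np.abs(brir))
    split = peak + int(direct_ms * 1e-3 * fs)
    direct, reverb = np.zeros_like(brir), np.zeros_like(brir)
    direct[:split] = brir[:split]
    reverb[split:] = brir[split:]
    return direct, reverb

def render(speech, brir_left, brir_right, fs, keep_reverb=(True, True)):
    """Convolve speech with per-ear BRIRs, optionally dropping reverberation."""
    out = []
    for brir, keep in zip((brir_left, brir_right), keep_reverb):
        direct, reverb = split_brir(brir, fs)
        ir = direct + reverb if keep else direct
        out.append(fftconvolve(speech, ir))
    return np.stack(out, axis=-1)          # (samples, 2) for headphone playback

# Toy usage with noise standing in for measured BRIRs and speech
fs = 44_100
rng = np.random.default_rng(1)
speech = rng.standard_normal(fs)                                   # 1 s placeholder
n = int(0.3 * fs)
env = np.exp(-np.arange(n) / (0.05 * fs))
brir_l = rng.standard_normal(n) * env
brir_r = rng.standard_normal(n) * env
binaural = render(speech, brir_l, brir_r, fs, keep_reverb=(True, False))  # dry right ear
print(binaural.shape)
```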


10.14311/1444 · 2011 · Vol 51 (5) · Author(s): M. Kunkemoeller, P. Dietrich, M. Pollow

Every acoustic source, e.g. a speaker, a musical instrument or a loudspeaker, generally has a frequency-dependent characteristic radiation pattern, which is particularly pronounced at higher frequencies. Room acoustic measurements nowadays typically account only for omnidirectional source characteristics. This motivates a measurement method capable of obtaining room impulse responses for specific radiation patterns by superposing several measurements made with technically well-defined sound sources. We propose a method based on measurements with a 12-channel, independently driven dodecahedron loudspeaker array rotated by an automatically controlled turntable. Radiation patterns can be efficiently described using a spherical harmonics representation. We propose a method that uses this representation both for the spherical loudspeaker array used in the measurements and for the target radiation pattern to be synthesized. We show validating results for a deterministic test sound source in a small lecture hall.
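
The superposition step can be illustrated as a least-squares problem: given spherical harmonic coefficients for each driver/orientation of the array and for the target radiation pattern, solve for weights and apply the same weights to the measured room impulse responses. The sketch below uses random placeholder data and illustrative names; it is not the authors' implementation.

```python
import numpy as np

def synthesis_weights(C, target):
    """Least-squares weights w such that C @ w approximates the target SH coefficients.

    C      : (n_sh, n_measurements) SH coefficients of each driver/orientation
             (these would come from directivity measurements of the array).
    target : (n_sh,) SH coefficients of the desired radiation pattern.
    """
    w, *_ = np.linalg.lstsq(C, target, rcond=None)
    return w

def synthesize_rir(rirs, w):
    """Superpose measured room impulse responses with the synthesis weights.

    rirs : (n_measurements, n_samples) RIRs, one per driver/orientation.
    """
    return w @ rirs

# Toy example with random data standing in for measured quantities
rng = np.random.default_rng(0)
n_sh, n_meas, n_samples = 16, 12 * 8, 4800      # e.g. 12 drivers x 8 turntable steps
C = rng.standard_normal((n_sh, n_meas))
target = rng.standard_normal(n_sh)
rirs = rng.standard_normal((n_meas, n_samples))

w = synthesis_weights(C, target)
rir_directional = synthesize_rir(rirs, w)
print(rir_directional.shape)                     # (4800,)
```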


2015 · Vol 40 (4) · pp. 575-584 · Author(s): Piotr Kleczkowski, Aleksandra Król, Paweł Małecki

In virtual acoustics or artificial reverberation, impulse responses can be split so that the direct and reflected components of the sound field are reproduced via separate loudspeakers. The authors previously investigated the perceptual effect of angular separation of these components in the commonly used 5.0 and 7.0 multichannel systems, with one and three sound sources respectively (Kleczkowski et al., 2015, J. Audio Eng. Soc. 63, 428-443). In that work, each of the front channels of the 7.0 system was fed with only one sound source. Here a similar experiment is reported, but with phantom sound sources between the front loudspeakers. The perceptual advantage of separation was found to be more consistent than in the condition with discrete sound sources. The results were analysed both for pooled listeners and in three groups according to experience; the advantage of separation was highest in the group of experienced listeners.
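
Phantom sources between two loudspeakers are commonly produced by amplitude panning; the sketch below shows a generic constant-power panning law as background only, not the panning actually used in the experiment.

```python
import numpy as np

def constant_power_pan(signal, pan):
    """Constant-power amplitude panning between two loudspeakers.

    pan = 0.0 places the phantom source at the left loudspeaker,
    pan = 1.0 at the right loudspeaker, 0.5 midway between them.
    """
    theta = pan * np.pi / 2.0
    gain_l, gain_r = np.cos(theta), np.sin(theta)
    return gain_l * signal, gain_r * signal

# A phantom source halfway between two front loudspeakers
fs = 48_000
t = np.arange(fs) / fs
source = np.sin(2 * np.pi * 440.0 * t)
left, right = constant_power_pan(source, 0.5)
print(np.sqrt(np.mean(left**2) + np.mean(right**2)))  # summed power stays constant
```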


2016 · Vol 113 (48) · pp. E7856-E7865 · Author(s): James Traer, Josh H. McDermott

In everyday listening, sound reaches our ears directly from a source as well as indirectly via reflections known as reverberation. Reverberation profoundly distorts the sound from a source, yet humans can both identify sound sources and distinguish environments from the resulting sound, via mechanisms that remain unclear. The core computational challenge is that the acoustic signatures of the source and environment are combined in a single signal received by the ear. Here we ask whether our recognition of sound sources and spaces reflects an ability to separate their effects and whether any such separation is enabled by statistical regularities of real-world reverberation. To first determine whether such statistical regularities exist, we measured impulse responses (IRs) of 271 spaces sampled from the distribution encountered by humans during daily life. The sampled spaces were diverse, but their IRs were tightly constrained, exhibiting exponential decay at frequency-dependent rates: Mid frequencies reverberated longest whereas higher and lower frequencies decayed more rapidly, presumably due to absorptive properties of materials and air. To test whether humans leverage these regularities, we manipulated IR decay characteristics in simulated reverberant audio. Listeners could discriminate sound sources and environments from these signals, but their abilities degraded when reverberation characteristics deviated from those of real-world environments. Subjectively, atypical IRs were mistaken for sound sources. The results suggest the brain separates sound into contributions from the source and the environment, constrained by a prior on natural reverberation. This separation process may contribute to robust recognition while providing information about spaces around us.
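
The regularity described here, exponential decay at frequency-dependent rates, suggests a simple generative model for synthetic IRs: subband noise shaped by band-specific decay constants. The sketch below is a rough illustration of that idea with placeholder reverberation times; it is not the authors' stimulus-generation code.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def synth_ir(fs=48_000, dur=1.5,
             bands=((125, 0.8), (500, 1.2), (2000, 1.4), (8000, 0.7))):
    """Synthetic IR: octave-band noise decaying exponentially at band-specific rates.

    `bands` pairs a centre frequency (Hz) with an RT60 (s); the values here are
    placeholders, not measured statistics from the survey of 271 spaces.
    """
    n = int(dur * fs)
    t = np.arange(n) / fs
    ir = np.zeros(n)
    for fc, rt60 in bands:
        lo, hi = fc / np.sqrt(2), fc * np.sqrt(2)            # octave band edges
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        noise = sosfilt(sos, np.random.randn(n))
        tau = rt60 / np.log(1000.0)                           # amplitude decay constant
        ir += noise * np.exp(-t / tau)
    return ir / np.max(np.abs(ir))

ir = synth_ir()
print(ir.shape)
```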


1999 · Vol 58 (3) · pp. 170-179 · Author(s): Barbara S. Muller, Pierre Bovet

Twelve blindfolded subjects localized two different pure tones played in random order by eight sound sources in the horizontal plane. Subjects either had access to the information supplied by their pinnae (external ears) and their head movements, or they did not. We found that both the pinnae and head movements had a marked influence on auditory localization performance with this type of sound. The effects of pinnae and head movements appeared to be additive; the absence of either factor produced the same loss of localization accuracy and much the same error pattern. Head movement analysis showed that subjects turned their faces towards the emitting sound source, except for sources exactly in front or exactly behind, which were identified by turning the head to both sides. Head movement amplitude increased smoothly as the sound source moved from the anterior to the posterior quadrant.

