Spatial Unmasking
Recently Published Documents

Total documents: 32 (five years: 5)
H-index: 13 (five years: 0)

2021
Author(s): Ravinderjit Singh, Hari Bharadwaj

The auditory system has exquisite temporal coding in the periphery, which is transformed into a rate-based code in central auditory structures such as auditory cortex. However, the cortex is still able to synchronize, albeit at lower modulation rates, to acoustic fluctuations. The perceptual significance of this cortical synchronization is unknown. We estimated physiological synchronization limits of cortex (in humans, with electroencephalography) and of brainstem neurons (in chinchillas) to dynamic binaural cues using a novel system-identification technique, along with parallel perceptual measurements. We find that cortex can synchronize to dynamic binaural cues up to approximately 10 Hz, which aligns well with our measured limits of perceiving dynamic spatial information and of utilizing dynamic binaural cues for spatial unmasking, i.e., measures of binaural sluggishness. We also find that the tracking limit for frequency modulation (FM) is similar to the limit for spatial tracking, demonstrating that this sluggish tracking is a more general perceptual limit that can be accounted for by cortical temporal integration limits.
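The abstract does not describe the system-identification technique itself; as a generic, minimal sketch of the kind of stimulus against which tracking of a dynamic binaural cue can be probed, the snippet below generates a tone whose interaural time difference (ITD) is sinusoidally modulated at a chosen rate. All parameter values (carrier frequency, peak ITD, modulation rate) are illustrative assumptions, not the study's actual settings.

```python
import numpy as np

def dynamic_itd_tone(fc=500.0, fm=4.0, itd_max=500e-6, dur=2.0, fs=48000):
    """Generate a stereo tone whose ITD is sinusoidally modulated.

    fc      : carrier frequency in Hz (illustrative value)
    fm      : binaural modulation rate in Hz (the rate whose tracking is probed)
    itd_max : peak interaural time difference in seconds
    """
    t = np.arange(int(dur * fs)) / fs
    # Time-varying ITD, expressed as an interaural phase difference of the carrier.
    itd = itd_max * np.sin(2 * np.pi * fm * t)
    ipd = 2 * np.pi * fc * itd
    left = np.sin(2 * np.pi * fc * t)
    right = np.sin(2 * np.pi * fc * t + ipd)
    return np.stack([left, right], axis=1)

# Example: a 500-Hz tone whose ITD oscillates at 8 Hz, near the reported ~10-Hz limit.
stimulus = dynamic_itd_tone(fm=8.0)
```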


2020
Vol. 10 (15), pp. 5257
Author(s): Nathan Berwick, Hyunkook Lee

This study examined whether the spatial unmasking effect operates on speech reception thresholds (SRTs) in the median plane. SRTs were measured using an adaptive staircase procedure, with target speech sentences and speech-shaped noise maskers presented from loudspeakers at elevations of −30°, 0°, 30°, 60°, and 90°. Results indicated a significant median-plane spatial unmasking effect, with the largest SRT gain obtained for the −30° masker elevation. Head-related transfer function analysis suggests that the result is associated with the energy weighting of the ear-input signal of the masker at upper-mid frequencies relative to the maskee.
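The specific staircase rule used in the study is not given here; a minimal sketch of one common variant, a 1-up/1-down track that converges on roughly 50% intelligibility with the SRT taken as the mean SNR at the final reversals, is shown below. The `present_trial` callback, step size, and reversal count are hypothetical placeholders.

```python
def measure_srt(present_trial, start_snr=0.0, step_db=2.0, n_reversals=8):
    """Simple 1-up/1-down adaptive staircase converging on ~50% correct.

    present_trial(snr_db) -> bool : plays one sentence at the given SNR and
                                    returns True if it was repeated correctly.
    The SRT estimate is the mean SNR at the last n_reversals reversal points.
    """
    snr = start_snr
    last_correct = None
    reversals = []
    while len(reversals) < n_reversals:
        correct = present_trial(snr)
        if last_correct is not None and correct != last_correct:
            reversals.append(snr)                    # direction changed: record a reversal
        last_correct = correct
        snr += -step_db if correct else step_db      # harder after a hit, easier after a miss
    return sum(reversals) / len(reversals)
```

The spatial unmasking effect is then simply the SRT difference between the co-located reference configuration and a spatially separated one.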


2020
Vol. 24, pp. 233121652094698
Author(s): Sara M. Misurelli, Matthew J. Goupell, Emily A. Burg, Rachael Jocewicz, Alan Kan, ...

The ability to attend to target speech in background noise is an important skill, particularly for children, who spend many hours in noisy environments. For normal-hearing children, intelligibility improves as a result of spatial or binaural unmasking in the free field; however, children who use bilateral cochlear implants (BiCIs) demonstrate little benefit in similar situations. It was hypothesized that poor auditory attention abilities might explain the lack of unmasking observed in children with BiCIs. Target and interferer speech stimuli were presented to either or both ears of BiCI participants via their clinical processors. Speech reception thresholds remained low when the target and interferer were in opposite ears, but participants showed no binaural unmasking when the interferer was presented to both ears and the target to only one ear. These results demonstrate that, in the most extreme cases of stimulus separation, children with BiCIs can ignore an interferer and attend to target speech, but binaural unmasking is weak or absent. It appears that children with BiCIs mostly experience poor encoding of binaural cues rather than deficits in the ability to selectively attend to target speech.
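Unmasking benefit in studies of this kind is usually expressed as the difference between the speech reception threshold in a reference configuration and in a separated one. The sketch below illustrates only that bookkeeping; the condition names and SRT values are made up for illustration and are not the study's data.

```python
# Hypothetical SRTs (dB SNR) per listening configuration; values are illustrative only.
srt_db = {
    "target and interferer in the same ear": 2.0,     # reference (no separation)
    "target and interferer in opposite ears": -8.0,
    "interferer in both ears, target in one": 1.5,
}

reference = srt_db["target and interferer in the same ear"]
for condition, srt in srt_db.items():
    benefit = reference - srt          # positive dB = unmasking relative to the reference
    print(f"{condition}: SRT = {srt:+.1f} dB, unmasking = {benefit:+.1f} dB")
```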


DOI: 10.29007/r6r4
2019
Author(s): Alexander Vilkaitis, Bruce Wiggins

This paper discusses ambisonic sound design for a theatrical production of King Lear. In recent years, sound and its use in theatre have taken a back seat to the development of other theatre technologies such as lighting, projection, and automation. Spatial audio implementations in theatre give the sound designer and the artistic team much greater scope for creativity, along with improvements in source separation and intelligibility due to spatial unmasking. A 360-degree video was also recorded, with first- and third-order ambisonic binaural reproductions of the sound design stitched onto the video to create a virtual reality experience. The project was successful, while also highlighting some practical and perceptual limitations of spatial audio for theatre.
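The paper's particular ambisonic pipeline is not described here; as a generic illustration, first-order (B-format) encoding of a mono source at a given azimuth and elevation uses the standard W/X/Y/Z panning equations sketched below. The FuMa-style 1/√2 weight on W is an assumption; other normalization conventions (e.g., SN3D) differ.

```python
import numpy as np

def encode_foa(mono, azimuth_deg, elevation_deg):
    """Encode a mono signal into first-order ambisonics (B-format W, X, Y, Z).

    Classic FuMa-style equations:
        W = s / sqrt(2)
        X = s * cos(az) * cos(el)
        Y = s * sin(az) * cos(el)
        Z = s * sin(el)
    """
    az = np.deg2rad(azimuth_deg)
    el = np.deg2rad(elevation_deg)
    s = np.asarray(mono, dtype=float)
    w = s / np.sqrt(2.0)
    x = s * np.cos(az) * np.cos(el)
    y = s * np.sin(az) * np.cos(el)
    z = s * np.sin(el)
    return np.stack([w, x, y, z], axis=-1)

# Example: place one second of noise 90 degrees to the left, level with the listener.
signal = np.random.randn(48000)
bformat = encode_foa(signal, azimuth_deg=90, elevation_deg=0)
```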


2019
Vol. 23, pp. 233121651984828
Author(s): Lars Bramsløw, Marianna Vatti, Rikke Rossing, Gaurav Naithani, Niels Henrik Pontoppidan

People with hearing impairment find competing-voices scenarios challenging, both in switching attention from one talker to the other and in maintaining attention on a single talker. The Danish competing voices test (CVT) presented here assesses these dual-attention skills. The CVT provides sentences spoken by three male and three female talkers, played in sentence pairs; the listener's task is to repeat the target sentence from the pair, cued either before or after playback. One potential way of assisting segregation of two talkers is to take advantage of spatial unmasking by presenting one talker per ear after applying time-frequency masks to separate the mixture. Using the CVT, this study evaluated four spatial conditions in 14 listeners with moderate-to-severe hearing impairment, to establish benchmark results for this type of algorithm. The four spatial conditions were: summed (diotic), separate, the ideal ratio mask, and the ideal binary mask. The results show that the test is sensitive to the change in spatial condition. The temporal position of the cue has a large impact: cueing the target talker before playback focuses attention on the target, whereas cueing after playback requires equal attention to both talkers, which is more difficult. Furthermore, both ideal masks yield test scores very close to those of the ideal separate condition, suggesting that this technique will be useful for future separation algorithms that use estimated rather than ideal masks.
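The ideal ratio mask and ideal binary mask are standard time-frequency masks computed from the known clean sources; a minimal sketch of their usual definitions is below. The magnitude-spectrogram inputs and the 0-dB IBM threshold are assumptions for illustration, not the study's exact parameters.

```python
import numpy as np

def ideal_masks(target_mag, interferer_mag, ibm_threshold_db=0.0):
    """Compute the ideal ratio mask (IRM) and ideal binary mask (IBM).

    target_mag, interferer_mag : magnitude spectrograms of the known clean sources.
    IRM: target energy divided by total energy in each time-frequency bin.
    IBM: 1 where the local SNR exceeds the threshold, 0 elsewhere.
    """
    t2 = target_mag ** 2
    i2 = interferer_mag ** 2
    irm = t2 / (t2 + i2 + 1e-12)                                # soft mask in [0, 1]
    local_snr_db = 10 * np.log10((t2 + 1e-12) / (i2 + 1e-12))
    ibm = (local_snr_db > ibm_threshold_db).astype(float)
    return irm, ibm

# Applying either mask to the mixture spectrogram and resynthesising each talker
# separately is the kind of per-ear presentation described in the abstract above.
```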


2014
Vol. 135 (4), pp. 2160-2160
Author(s): Thibaud Leclére, Mathieu Lavandier, Mickael L. Deroche
