informational masking
Recently Published Documents

TOTAL DOCUMENTS: 239 (five years: 40)

H-INDEX: 33 (five years: 2)

2021, Vol 11 (1)
Author(s): Min Zhang, Rachel N Denison, Denis G Pelli, Thuy Tien C Le, Antje Ihlefeld

Abstract: Sensory cortical mechanisms combine auditory or visual features into perceived objects, a task that becomes difficult in noisy or cluttered environments. Because individuals vary greatly in their susceptibility to clutter, we asked whether an individual's auditory and visual susceptibilities to clutter are related. In auditory masking, background sound makes spoken words unrecognizable; when the masking arises from interference at central auditory processing stages, beyond the cochlea, it is called informational masking. A strikingly similar phenomenon in vision, visual crowding, occurs when nearby clutter makes a target object unrecognizable even though it is resolved at the retina. Here we compare susceptibilities to auditory informational masking and visual crowding in the same participants. Surprisingly, across participants we find a negative correlation (R = –0.7) between susceptibility to informational masking and to crowding: participants with low susceptibility to auditory clutter tend to have high susceptibility to visual clutter, and vice versa. This reveals a tradeoff in the brain between auditory and visual processing.
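The headline result is a single across-participant Pearson correlation between the two susceptibility measures. As a minimal illustrative sketch (the arrays, variable names, and values below are hypothetical placeholders, not data or code from the study), such a correlation could be computed like this:

```python
# Hedged sketch: across-participant correlation between auditory
# informational-masking susceptibility and visual crowding susceptibility.
# The numbers are invented placeholders; the paper reports R = -0.7.
import numpy as np

# One value per participant (e.g., a masked speech threshold and a
# critical crowding spacing) -- hypothetical units and values.
auditory_im_susceptibility = np.array([2.1, 4.3, 3.0, 5.2, 1.8, 4.9])
visual_crowding_susceptibility = np.array([0.42, 0.25, 0.33, 0.21, 0.45, 0.23])

# Pearson correlation across participants.
r = np.corrcoef(auditory_im_susceptibility, visual_crowding_susceptibility)[0, 1]
print(f"Pearson correlation across participants: r = {r:.2f}")
```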


2021, Vol 64 (10), pp. 4014-4029
Author(s): Kathy R. Vander Werff, Christopher E. Niemczak, Kenneth Morse

Purpose: Background noise has been categorized as energetic masking, due to spectrotemporal overlap of the target and masker at the auditory periphery, or informational masking, due to cognitive-level interference from relevant content such as speech. The effects of masking on cortical and sensory auditory processing can be studied objectively with the cortical auditory evoked potential (CAEP). However, whether effects on neural response morphology are due to energetic spectrotemporal differences or to informational content is not fully understood. This multi-experiment series was designed to assess the effects of speech versus nonspeech maskers on the neural encoding of speech information in the central auditory system, specifically the effects of speech babble maskers varying in talker number.
Method: CAEPs were recorded from normal-hearing young adults in response to speech syllables in the presence of energetic maskers (white or speech-shaped noise) and varying amounts of informational masking (speech babble maskers). The primary manipulation of informational masking was the number of talkers in the speech babble, and CAEP results were compared with those for nonspeech maskers with different temporal and spectral characteristics.
Results: Even when nonspeech noise maskers were spectrally shaped and temporally modulated to match the speech babble maskers, notable changes in the typical morphology of the CAEP in response to speech stimuli were identified in the presence of both primarily energetic maskers and speech babble maskers with varying numbers of talkers.
Conclusions: Although differences in CAEP outcomes did not reach significance by number of talkers, neural components were significantly affected by speech babble maskers compared with nonspeech maskers. These results suggest an informational masking influence on the neural encoding of speech information at the sensory cortical level of auditory processing, even without active participation by the listener.


2021, Vol 150 (4), pp. A144-A144
Author(s): Sarah Villard, Ayesha Alam, Tyler K. Perrachione, Gerald Kidd

Author(s): Verena Müller, Ruth Lang-Roth

Purpose: The aim of the study was to assess susceptibility to energetic and informational masking in patients with single-sided deafness (SSD) who have one normal-hearing (NH) ear and a cochlear implant (CI) in the contralateral ear, to understand the effect on speech recognition of spatially separating noise and speech maskers, and to investigate the influence of the CI in situations with energetic and informational masking.
Method: Speech recognition was measured in 11 SSD-CI listeners in the presence of either a modulated speech-shaped noise or one of two competing speech maskers. The speech maskers were manipulated in fundamental frequency to examine the effect of different voices. Measurements were conducted in the unaided (NH) and aided (NHCI) conditions. Spatial release from masking (SRM) was calculated for each masker type and both listening conditions (NH and NHCI) by subtracting scores in the colocated target-and-masker condition (S0N0) from scores in the spatially separated target-and-masker conditions (S0N≠0); see the formula after this abstract.
Results: Speech recognition was highly variable depending on the type of masker. SRM occurred in both the unaided (NH) and aided (NHCI) conditions when the speech masker had the same gender as the target talker. Adding the CI improved speech recognition when this speech masker was ipsilateral to the NH ear.
Conclusions: The amount of informational masking is substantial in SSD-CI listeners with both colocated and spatially separated target and masker signals. The contribution of SRM to better speech recognition depends largely on the masker and is considerable when there are no voice differences between the target and the competing talker. Adding the CI yields only a slight improvement in speech recognition.
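Restating the SRM computation from the Method as a formula, in the abstract's own condition notation (whether "Score" denotes a percent-correct value or a speech reception threshold is not specified in the abstract):

\[
\mathrm{SRM} = \mathrm{Score}\left(S_0 N_{\neq 0}\right) - \mathrm{Score}\left(S_0 N_0\right)
\]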


2021, Vol 15
Author(s): Min Zhang, Nima Alamatsaz, Antje Ihlefeld

Suppressing unwanted background sound is crucial for aural communication. A particularly disruptive type of background sound, informational masking (IM), often interferes in social settings, yet IM mechanisms are incompletely understood. At present, IM is identified operationally: a target should be audible based on suprathreshold target/masker energy ratios, yet it cannot be heard because target-like background sound interferes. Here we confirm that speech identification thresholds differ dramatically between low- and high-IM background sound, whereas speech detection thresholds are comparable across the two conditions. Moreover, functional near-infrared spectroscopy recordings show that task-evoked blood oxygenation changes near the superior temporal gyrus (STG) covary with behavioral speech detection performance for high-IM but not low-IM background sound, suggesting that the STG is part of an IM-dependent network. In addition, listeners who are more vulnerable to IM show increased hemodynamic recruitment near the STG, an effect that cannot be explained by differences in task difficulty between the low- and high-IM conditions. In contrast, task-evoked responses near another auditory region of cortex, the caudal inferior frontal sulcus (cIFS), do not predict behavioral sensitivity, suggesting that the cIFS belongs to an IM-independent network. The results are consistent with the idea that cortical gating shapes individual vulnerability to IM.
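The operational definition of IM given above is qualitative. One common way to quantify it, offered here only as an illustration and not as a formula from this paper, is the elevation of the measured masked threshold beyond what target/masker energy ratios (energetic masking) alone would predict:

\[
\mathrm{IM}\;[\mathrm{dB}] \approx T_{\text{observed}} - T_{\text{energetic}}
\]

where \(T_{\text{observed}}\) is the measured masked threshold and \(T_{\text{energetic}}\) is the threshold expected from energetic masking alone.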


Author(s): Megan C. Fitzhugh, Arianna N. LaCroix, Corianne Rogalsky

Purpose: Sentence comprehension deficits are common following a left hemisphere stroke and have primarily been investigated under optimal listening conditions. However, ample work in neurotypical controls indicates that background noise affects sentence comprehension and the cognitive resources it engages. The purpose of this study was to examine how background noise affects sentence comprehension poststroke, using both energetic and informational maskers. We further sought to identify whether sentence comprehension-in-noise abilities are related to poststroke cognitive abilities, specifically working memory and/or attentional control.
Method: Twenty persons with chronic left hemisphere stroke completed a sentence–picture matching task in which they listened to sentences presented in three masker conditions: multispeakers, broadband noise, and silence (control). Working memory, attentional control, and hearing thresholds were also assessed.
Results: A repeated-measures analysis of variance showed that participants had the greatest difficulty in the multispeakers condition, followed by broadband noise and then silence. After controlling for age and hearing ability, regression analyses identified working memory as a significant predictor of listening engagement (i.e., mean reaction time) in broadband noise and multispeakers, and attentional control as a significant predictor of the informational masking effect (computed as a reaction time difference score in which broadband noise is subtracted from multispeakers; see the formula after this abstract).
Conclusions: The results indicate that background noise impacts sentence comprehension abilities poststroke and that these difficulties may arise from deficits in the cognitive resources supporting sentence comprehension rather than from other factors such as age or hearing. The findings also highlight a relationship between working memory abilities and sentence comprehension in background noise. We further suggest that attentional control abilities contribute to sentence comprehension by supporting the additional demands associated with informational masking.
Supplemental Material: https://doi.org/10.23641/asha.14984511
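The informational masking effect used in the regression analyses is a simple reaction time difference score; restated as a formula (the symbols are mine, not the authors'):

\[
\Delta \mathrm{RT}_{\mathrm{IM}} = \overline{\mathrm{RT}}_{\text{multispeakers}} - \overline{\mathrm{RT}}_{\text{broadband noise}}
\]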


2021, Vol 149 (5), pp. 3665-3673
Author(s): Christopher Conroy, Gerald Kidd

2021, Vol 149 (4), pp. 2353-2366
Author(s): Niek J. Versfeld, Sisi Lie, Sophia E. Kramer, Adriana A. Zekveld

2020, Vol 10 (1)
Author(s): Yang Wenyi Liu, Bing Wang, Bing Chen, John J. Galvin, Qian-Jie Fu

Abstract: Many tinnitus patients report difficulties understanding speech in noise or with competing talkers, despite having "normal" hearing in terms of audiometric thresholds. The interference caused by tinnitus is therefore more likely central in origin. Release from informational masking (also central in origin) produced by competing speech may further illuminate central interference due to tinnitus. In the present study, masked speech understanding was measured in normal-hearing listeners with or without tinnitus. Speech recognition thresholds were measured for target speech in the presence of multi-talker babble or competing speech. For competing speech, speech recognition thresholds were measured under different cue conditions (i.e., with and without target-masker sex differences and/or with and without spatial cues). The present data suggest that tinnitus negatively affected masked speech recognition even in individuals with no measurable hearing loss. Tinnitus severity appeared especially to limit listeners' ability to segregate competing speech using talker sex differences. The data suggest that increased informational masking via lexical interference may tax tinnitus patients' central auditory processing resources.

