Orienting Auditory Attention through Vision: the Impact of Monaural Listening

2021
pp. 1-28
Author(s):  
Silvia Turri ◽  
Mehdi Rizvi ◽  
Giuseppe Rabini ◽  
Alessandra Melonio ◽  
Rosella Gennari ◽  
...  

Abstract: The understanding of linguistic messages can be made extremely complex by the simultaneous presence of interfering sounds, especially when those sounds are also linguistic in nature. In two experiments, we tested whether visual cues directing attention to spatial or temporal components of speech in noise can improve its identification. The hearing-in-noise task required identification of a five-digit sequence (target) embedded in a stream of time-reversed speech. Using a custom-built device located in front of the participant, we delivered visual cues to orient attention to the location of the target sounds and/or to their temporal window. In Exp. 1, we validated this visual-to-auditory cueing method in normal-hearing listeners tested under typical binaural listening conditions. In Exp. 2, we assessed the efficacy of the same visual cues in normal-hearing listeners wearing a monaural ear plug, to study the effects of simulated monaural and conductive hearing loss on visual-to-auditory attention orienting. While Exp. 1 revealed a benefit of both spatial and temporal visual cues for hearing in noise, Exp. 2 showed that only the temporal visual cues remained effective during monaural listening. These findings indicate that when the acoustic experience is altered, visual-to-auditory attention orienting is more robust for temporal than for spatial attributes of the auditory stimuli. These findings have implications for the relation between spatial and temporal attributes of sound objects, and for the design of devices that orient audiovisual attention in people with hearing loss.
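
As an illustration of the kind of stimulus this task requires, here is a minimal Python sketch (not the authors' code; the signal names and parameter values are assumptions) of embedding a target in a time-reversed speech masker at a chosen signal-to-noise ratio and temporal position:

import numpy as np

def rms(x):
    # Root-mean-square level of a signal.
    return np.sqrt(np.mean(x ** 2))

def embed_target(target, masker, snr_db, onset_sample):
    # Time-reverse the masker, then mix in the target at snr_db,
    # starting at onset_sample (the cued temporal window).
    mix = masker[::-1].copy()  # time-reversed speech stream
    # Scale the target so 20*log10(rms(target)/rms(masker)) == snr_db.
    gain = rms(mix) * 10 ** (snr_db / 20) / rms(target)
    mix[onset_sample:onset_sample + len(target)] += gain * target
    return mix

# Example: a 1 s "target" placed 2 s into a 5 s masker at 0 dB SNR.
fs = 16000
rng = np.random.default_rng(0)
masker = rng.standard_normal(5 * fs)  # stand-in for recorded speech
target = rng.standard_normal(1 * fs)  # stand-in for the digit sequence
trial = embed_target(target, masker, snr_db=0, onset_sample=2 * fs)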

2021
Author(s):  
Marlies Gillis ◽  
Lien Decruy ◽  
Jonas Vanthornhout ◽  
Tom Francart

Abstract: We investigated the impact of hearing loss on the neural processing of speech. Using a forward modelling approach, we compared the neural responses to continuous speech of 14 adults with sensorineural hearing loss with those of age-matched normal-hearing peers. Compared to their normal-hearing peers, hearing-impaired listeners had increased neural tracking and delayed neural responses to continuous speech in quiet. The latency also increased with the degree of hearing loss. As speech understanding decreased, neural tracking decreased in both populations; however, a significantly different trend was observed for the latency of the neural responses. For normal-hearing listeners, the latency increased with increasing background noise level; for hearing-impaired listeners, this increase was not observed. Our results support the idea that the neural response latency indicates the efficiency of neural speech processing. Hearing-impaired listeners process speech in silence less efficiently than normal-hearing listeners. Our results suggest that this reduction in neural speech processing efficiency is a gradual effect that occurs as hearing deteriorates. Moreover, the efficiency of neural speech processing in hearing-impaired listeners is already at its lowest level when listening to speech in quiet, while normal-hearing listeners show a further decrease in efficiency when the noise level increases. From our results, it is apparent that sound amplification does not solve hearing loss: even when intelligibility is apparently perfect, hearing-impaired listeners process speech less efficiently.
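
For readers unfamiliar with forward modelling of continuous speech, the sketch below shows the common temporal response function (TRF) approach: ridge regression from time-lagged copies of the speech envelope to the EEG, with "neural tracking" quantified as the correlation between predicted and measured EEG, and response latency read from the peak of the TRF weights. This is a generic, single-channel illustration with lags in samples; the authors' actual pipeline may differ.

import numpy as np

def lagged_design(envelope, min_lag, max_lag):
    # Design matrix of time-lagged copies of the speech envelope.
    n = len(envelope)
    lags = range(min_lag, max_lag + 1)
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = envelope[:n - lag]
        else:
            X[:n + lag, j] = envelope[-lag:]
    return X

def fit_trf(envelope, eeg, min_lag, max_lag, ridge=1.0):
    # Ridge-regression TRF for one EEG channel; returns weights over lags.
    X = lagged_design(envelope, min_lag, max_lag)
    XtX = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ eeg)

def neural_tracking(envelope, eeg, trf, min_lag, max_lag):
    # Neural tracking: correlation of predicted with measured EEG.
    pred = lagged_design(envelope, min_lag, max_lag) @ trf
    return np.corrcoef(pred, eeg)[0, 1]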


2010
Vol 21 (08)
pp. 493-511
Author(s):  
Amanda J. Ortmann ◽  
Catherine V. Palmer ◽  
Sheila R. Pratt

Background: A possible voicing cue used to differentiate voiced and voiceless cognate pairs is envelope onset asynchrony (EOA). EOA is the time between the onsets of two frequency bands of energy (in this study one band was high-pass filtered at 3000 Hz, the other low-pass filtered at 350 Hz). This study assessed the perceptual impact of manipulating EOA on voicing perception of initial stop consonants, and whether normal-hearing and hearing-impaired listeners were sensitive to changes in EOA as a cue for voicing. Purpose: The purpose of this study was to examine the effect of spectrally asynchronous auditory delay on the perception of voicing associated with initial stop consonants by normal-hearing and hearing-impaired listeners. Research Design: Prospective experimental study comparing the perceptual differences of manipulating the EOA cues for two groups of listeners. Study Sample: Thirty adults between the ages of 21 and 60 yr completed the study: 17 listeners with normal hearing and 13 listeners with mild-moderate sensorineural hearing loss. Data Collection and Analysis: The participants listened to voiced and voiceless stop consonants within a consonant-vowel syllable structure. The EOA of each syllable was varied along a continuum, and identification and discrimination tasks were used to determine if the EOA manipulation resulted in categorical shifts in voicing perception. In the identification task the participants identified the consonants as belonging to one of two categories (voiced or voiceless cognate). They also completed a same-different discrimination task with the same set of stimuli. Categorical perception was confirmed with a d-prime sensitivity measure by examining how accurately the results from the identification task predicted the discrimination results. The influence of EOA manipulations on the perception of voicing was determined from shifts in the identification functions and discrimination peaks along the EOA continuum. The two participant groups were compared in order to determine the impact of EOA on voicing perception as a function of syllable and hearing status. Results: Both groups of listeners demonstrated a categorical shift in voicing perception with manipulation of EOA for some of the syllables used in this study. That is, as the temporal onset asynchrony between low- and high-frequency bands of speech was manipulated, the listeners' perception of consonant voicing changed between voiced and voiceless categories. No significant differences were found between listeners with normal hearing and listeners with hearing loss as a result of the EOA manipulation. Conclusions: The results of this study suggested that both normal-hearing and hearing-impaired listeners likely use spectrally asynchronous delays found in natural speech as a cue for voicing distinctions. While delays in modern hearing aids are less than those used in this study, possible implications are that additional asynchronous delays from digital signal processing or open-fitting amplification schemes might cause listeners with hearing loss to misperceive voicing cues.
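
The EOA manipulation itself is a simple band-splitting operation. The sketch below is an assumption-laden illustration, not the study's stimulus code: the 350 Hz low-pass and 3000 Hz high-pass cutoffs mirror the abstract, while the filter order, sampling rate, and function names are assumed.

import numpy as np
from scipy.signal import butter, sosfiltfilt

def eoa_stimulus(syllable, fs, eoa_ms):
    # Positive eoa_ms delays the high band relative to the low band.
    sos_lo = butter(4, 350, btype="lowpass", fs=fs, output="sos")
    sos_hi = butter(4, 3000, btype="highpass", fs=fs, output="sos")
    low = sosfiltfilt(sos_lo, syllable)
    high = sosfiltfilt(sos_hi, syllable)
    shift = int(round(eoa_ms * fs / 1000))
    delayed = np.zeros_like(high)
    if shift >= 0:
        delayed[shift:] = high[:len(high) - shift]
    else:
        delayed[:len(high) + shift] = high[-shift:]
    return low + delayed

# Example: a continuum of EOA steps for identification/discrimination.
fs = 22050
syllable = np.random.default_rng(1).standard_normal(int(0.4 * fs))
continuum = [eoa_stimulus(syllable, fs, ms) for ms in range(-40, 41, 10)]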


Author(s):  
Ying Yang ◽  
Yanan Xiao ◽  
Yulu Liu ◽  
Qiong Li ◽  
Changshuo Shan ◽  
...  

Background: This study compares the mental health and psychological responses of students with and without hearing loss during the recurrence of the COVID-19 pandemic in Beijing, the capital of China. It explores the relevant factors affecting mental health and provides evidence-driven strategies to reduce adverse psychological impacts during the COVID-19 pandemic. Methods: We used the Chinese version of the Depression, Anxiety, and Stress Scale-21 (DASS-21) to assess mental health and the Impact of Event Scale-Revised (IES-R) to assess the psychological impact of COVID-19. Results: The students with hearing loss are frustrated by their disability and particularly vulnerable to stress symptoms, but they show strong endurance in mitigating this negative impact as they cope with their well-being and responsibilities. They are also more resilient psychologically, but less resistant mentally, to the pandemic's impacts than the students with normal hearing. Their mental and psychological response to the pandemic is associated with more related factors and variables than that of the students with normal hearing. Conclusions: To safeguard the welfare of society, timely information on the pandemic, essential services for communication disorders, and additional assistance and support in mental counseling should be provided to vulnerable persons with hearing loss, who are more susceptible to a public health emergency.
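
As context for the DASS-21 scores reported here, the scale is conventionally scored by summing each 7-item subscale (items rated 0-3) and doubling the sum for comparison with DASS-42 norms. The sketch below illustrates that convention; the item keys follow the published scale, but this is not the authors' analysis code.

# Conventional DASS-21 scoring (illustrative, not the paper's code).
SUBSCALES = {
    "depression": [3, 5, 10, 13, 16, 17, 21],
    "anxiety":    [2, 4, 7, 9, 15, 19, 20],
    "stress":     [1, 6, 8, 11, 12, 14, 18],
}

def score_dass21(responses):
    # responses: dict mapping item number (1-21) to a 0-3 rating.
    # Each subscale sum is doubled to align with DASS-42 norms.
    return {name: 2 * sum(responses[i] for i in items)
            for name, items in SUBSCALES.items()}

# Example with an all-ones response pattern: each subscale scores 14.
print(score_dass21({i: 1 for i in range(1, 22)}))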


2018
Vol 115 (14)
pp. E3286-E3295
Author(s):  
Lengshi Dai ◽  
Virginia Best ◽  
Barbara G. Shinn-Cunningham

Listeners with sensorineural hearing loss often have trouble understanding speech amid other voices. While poor spatial hearing is often implicated, direct evidence is weak; moreover, studies suggest that reduced audibility and degraded spectrotemporal coding may explain such problems. We hypothesized that poor spatial acuity leads to difficulty deploying selective attention, which normally filters out distracting sounds. In listeners with normal hearing, selective attention causes changes in the neural responses evoked by competing sounds, which can be used to quantify the effectiveness of attentional control. Here, we used behavior and electroencephalography to explore whether control of selective auditory attention is degraded in hearing-impaired (HI) listeners. Normal-hearing (NH) and HI listeners identified a simple melody presented simultaneously with two competing melodies, each simulated from different lateral angles. We quantified performance and attentional modulation of cortical responses evoked by these competing streams. Compared with NH listeners, HI listeners had poorer sensitivity to spatial cues, performed more poorly on the selective attention task, and showed less robust attentional modulation of cortical responses. Moreover, across NH and HI individuals, these measures were correlated. While both groups showed cortical suppression of distracting streams, this modulation was weaker in HI listeners, especially when attending to a target at midline, surrounded by competing streams. These findings suggest that hearing loss interferes with the ability to filter out sound sources based on location, contributing to communication difficulties in social situations. These findings also have implications for technologies aiming to use neural signals to guide hearing aid processing.
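
One common way to quantify the attentional modulation described here is a normalized contrast between the cortical response evoked by a stream when it is attended versus when it is ignored. The sketch below is a generic illustration of such an index; the study's exact metric may differ.

import numpy as np

def attentional_modulation_index(attended_erp, ignored_erp):
    # Normalized contrast of response magnitudes (RMS over the epoch).
    # 0 means no modulation; values near 1 mean strong suppression of
    # the ignored stream's evoked response.
    a = np.sqrt(np.mean(attended_erp ** 2))
    u = np.sqrt(np.mean(ignored_erp ** 2))
    return (a - u) / (a + u)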


2015
Vol 25 (2)
pp. 60-69
Author(s):  
Peter M. Vila ◽  
Judith E. C. Lieu

Unilateral hearing loss (UHL) in children is only recently beginning to be widely appreciated as having a negative impact. We now understand that simply having one normal-hearing ear may not be sufficient for typical child development, and that UHL leads to impairments in speech and language outcomes. Unfortunately, UHL is not a rare problem among children in the United States: it is present in more than 1 out of every 10 adolescents in this country. How UHL specifically affects development of speech and language, however, is currently not well understood. While we know that children with UHL are more likely than their normal-hearing siblings to have speech therapy and individualized education plans at school, we do not yet understand the mechanism through which UHL causes speech and language problems. The objective of this review is to describe what is currently known about the impact of UHL on speech and language development in children. Furthermore, we discuss some of the potential pathways through which the impact of UHL on speech and language might be mediated.


2014
Vol 23 (4)
pp. 385-393
Author(s):  
Heekyung J. Han ◽  
Robert S. Schlauch ◽  
Aparna Rao

Purpose During routine clinical speech assessment, if the person being tested were to write down what he or she heard, it would not always match what the audiologist heard while scoring the listener's vocal responses (Nelson & Chaiklin, 1970). This study demonstrated a method to assess examiner accuracy and whether speechreading cues reduce writedown–talkback errors. Method Examiners were divided into 3 categories: normal-hearing native speakers of English, normal-hearing nonnative speakers of English, and native speakers with hearing loss. Each examiner assessed 4 normal-hearing listeners. Two NU-6 lists were presented to each listener; one was scored without visual cues and one with visual cues. Lists were presented at 50 dB HL in the presence of speech noise at a 0 dB signal-to-noise ratio (SNR). Results Results, analyzed by percentage of correct phonemes and words, revealed fewer writedown–talkback discrepancies for all 3 examiner groups when visual cues were added, with a substantial improvement for the examiners with hearing loss. Conclusion The finding of discrepancies between talkback and writedown scoring for all of the examiners, even with visual cues, suggests a need for modification of the clinical word-recognition procedure for applications that potentially affect diagnosis, rehabilitation choices, or financial compensation.


1994
Vol 37 (3)
pp. 510-521
Author(s):  
Maureen B. Higgins ◽  
Arlene E. Carney ◽  
Laura Schulte

The purpose of this investigation was to study the impact of hearing loss on phonatory, velopharyngeal, and articulatory functioning using a comprehensive physiological approach. Electroglottograph (EGG), nasal/oral air flow, and intraoral air pressure signals were recorded simultaneously from adults with impaired and normal hearing as they produced syllables and words of varying physiological difficulty. The individuals with moderate-to-profound hearing loss had good to excellent oral communication skills. Intraoral pressure, nasal air flow, durations of lip, velum, and vocal fold articulations, estimated subglottal pressure, mean phonatory air flow, fundamental frequency, and EGG abduction quotient were compared between the two subject groups. Data from the subjects with hearing loss also were compared across aided and unaided conditions to investigate the influence of auditory feedback on speech motor control. The speakers with hearing loss had significantly higher intraoral pressures, subglottal pressures, laryngeal resistances, and fundamental frequencies than those with normal hearing. There was notable between-subject variability. All of the individuals with profound hearing loss had at least one speech/voice physiology measure that fell outside of the normal range, and most of the subjects demonstrated unique clusters of abnormal behaviors. Abnormal behaviors were more evident in the phonatory than articulatory or velopharyngeal systems and were generally consistent with vocal fold hyperconstriction. There was evidence from individual data that vocal fold posturing influenced articulatory timing. The results did not support the idea that the speech production skills of adults with moderate-to-profound hearing loss who are good oral communicators deteriorate when there are increased motoric demands on the velopharyngeal and phonatory mechanism. Although no significant differences were found between the aided and unaided conditions, 7 of 10 subjects showed the same direction of change for subglottal pressure, intraoral pressure, nasal air flow, and the duration of lip and vocal fold articulations. We conclude that physiological assessments provide important information about the speech/voice production abilities of individuals with moderate-to-profound hearing loss and are a valuable addition to standard assessment batteries.
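
Laryngeal resistance, one of the measures compared between groups, is conventionally estimated aerodynamically (after Smitheran & Hixon, 1981) as estimated subglottal pressure divided by mean phonatory airflow. Below is a minimal sketch of that computation with illustrative values, not the authors' data or code.

def laryngeal_resistance(peak_intraoral_pressure_cmH2O, mean_flow_L_per_s):
    # Estimated subglottal pressure (peak intraoral pressure during the
    # /p/ occlusion, in cm H2O) divided by mean phonatory airflow (L/s)
    # during the adjacent vowel; units: cm H2O / (L/s).
    return peak_intraoral_pressure_cmH2O / mean_flow_L_per_s

# Example: 8 cm H2O and 0.16 L/s give a resistance of 50 cm H2O/(L/s).
print(laryngeal_resistance(8.0, 0.16))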


2017
Vol 26 (3)
pp. 318-327
Author(s):  
Andrea L. Pittman ◽  
Elizabeth C. Stewart ◽  
Ian S. Odgear ◽  
Amanda P. Willman

Purpose Lexical acquisition was examined in children and adults to determine if the skills needed to detect and learn new words are retained in the adult years. In addition to advancing age, the effects of hearing loss were also examined. Method Measures of word recognition, detection of nonsense words within sentences, and novel word learning were obtained in quiet for 20 children with normal hearing and 21 with hearing loss (8–12 years) as well as for 15 adults with normal hearing and 17 with hearing loss (58–79 years). Listeners with hearing loss were tested with and without high-frequency acoustic energy to identify the type of amplification (narrowband, wideband, or frequency lowering) that yielded optimal performance. Results No differences were observed between the adults and children with normal hearing except for the adults' better nonsense word detection. The poorest performance was observed for the listeners with hearing loss in the unaided condition. Performance improved significantly with amplification to levels at or near that of their counterparts with normal hearing. With amplification, the adults performed as well as the children on all tasks except for word recognition. Conclusions Adults retain the skills necessary for lexical acquisition regardless of hearing status. However, uncorrected hearing loss nearly eliminates these skills.


1990
Vol 55 (3)
pp. 439-453
Author(s):  
J. L. Stouffer ◽  
Richard S. Tyler

A questionnaire was administered to 528 tinnitus patients to obtain data on their reactions to tinnitus. Results include a discussion of: (a) population characteristics, (b) perceptual characteristics, (c) the impact of tinnitus on daily life, and (d) etiology. Significant gender differences are also discussed. Tinnitus was not an occasional phenomenon, but was present for more than 26 days per month in 74% of the patients. Other important findings about tinnitus include: (a) Hearing levels at 1000 and 4000 Hz were ≤ 25 dB HL for 18% of the tinnitus patients, which suggests that some patients had normal hearing or mild hearing losses; (b) the prevalence of tinnitus in patients with noise-induced hearing loss (NIHL) was 30% for males and only 3% for females; (c) about 25% of the patients reported tinnitus severity had increased since tinnitus onset; (d) the effects of tinnitus were more severe in patients who reported tinnitus as their primary complaint and in patients diagnosed as having Meniere's syndrome tinnitus; and (e) some patients reported that noise exacerbated their tinnitus, whereas others reported that a quiet background exacerbated their tinnitus.


2021
Author(s):  
Hye Yoon Seol ◽  
Soojin Kang ◽  
Ji Hyun Lim ◽  
Sung Hwa Hong ◽  
Il Joon Moon

It has been noted in the literature that there is a gap between clinical assessment and real-world performance. Real-world conversations entail visual information, and yet current audiological assessment tools do not include visual information. Virtual reality (VR) technology has been applied to various areas, including audiology, but its use in speech-in-noise perception has not yet been investigated. The purpose of this study was to investigate the impact of a virtual space (VS) on speech performance and its feasibility as a speech test instrument. Thirty individuals with normal hearing and twenty-five individuals with hearing loss completed pure-tone audiometry and the Korean version of the Hearing in Noise Test (K-HINT) in three conditions: conventional K-HINT, VS on a PC monitor, and VS on a head-mounted display, at -10, -5, 0, and +5 dB signal-to-noise ratios. Participants listened to target speech and repeated it back to the tester in all conditions. Hearing aid users in the hearing loss group completed testing in unaided and aided conditions, and a questionnaire was administered after testing. Provision of visual information had a significant impact on speech performance in both the normal-hearing and hearing-impaired groups, and hearing aid use led to better integration of audio and visual cues. Statistical significance was observed for some conditions within each group and between hearing aid and non-hearing-aid users. Participants reported positive responses across almost all questionnaire items and preferred a test method with visual imagery, although they found the headset heavy. These findings are in line with previous literature showing that visual cues are beneficial for communication. This is the first study to include hearing aid users with a more naturalistic stimulus and a relatively "simple" test environment, suggesting the feasibility of VR audiological testing in clinical practice.

