Speech Understanding
Recently Published Documents


TOTAL DOCUMENTS: 666 (five years: 108)

H-INDEX: 41 (five years: 4)

2022 · Vol. Publish Ahead of Print
Author(s): Rasmus Sönnichsen, Gerard Llorach Tó, Sabine Hochmuth, Volker Hohmann, Andreas Radeloff

Author(s): E. Artukarslan, F. Matin, F. Donnerstag, L. Gärtner, T. Lenarz, ...

Abstract
Introduction: Superficial hemosiderosis is a sub-form of hemosiderosis in which deposits of hemosiderin in the central nervous system damage nerve cells. This form of siderosis is caused by chronic cerebral hemorrhages, especially subarachnoid hemorrhages. The symptoms vary with the particular damage to the brain, but in most cases the condition presents as incipient unilateral or bilateral hearing loss, ataxia, and pyramidal tract signs. We investigate whether cochlear implantation is a treatment option for patients with superficial hemosiderosis and which diagnostic workup should be carried out preoperatively.
Materials and methods: In a tertiary hospital between 2009 and 2018, we examined five patients (N = 5) with radiologically confirmed central hemosiderosis who suffered from profound hearing loss to deafness and were treated with a cochlear implant (CI). We compared pre- and postoperative speech comprehension (Freiburg speech intelligibility test for monosyllables and HSM sentence test).
Results: Compared to preoperative speech understanding with optimized hearing aids, speech understanding improved on average by 20% in quiet (Freiburg monosyllable test) and by 40% in noise (HSM sentence test).
Discussion: The results show that patients with superficial siderosis benefit from a CI through better speech understanding, although their results lie below the average for all postlingually deaf CI patients. Superficial siderosis causes neural damage, which explains the reduced speech understanding as a form of central hearing loss. It is important to weigh the patient's expectations correctly preoperatively and to involve neurologists in the course of therapy.
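The comparison above is a small paired design (N = 5). As a minimal sketch of how such a pre/post comparison can be summarized, assuming per-patient percent-correct scores (the values below are illustrative placeholders, not study data):

```python
# Minimal sketch: paired pre/post speech-score comparison for a small cohort.
# Scores are hypothetical percent-correct values, not data from the study.
import numpy as np
from scipy import stats

pre = np.array([10, 20, 15, 5, 25])    # preoperative monosyllable scores (%)
post = np.array([35, 40, 30, 25, 45])  # postoperative scores with CI (%)

diff = post - pre
print(f"Mean improvement: {diff.mean():.1f} percentage points")

# With only five patients, a non-parametric paired test is the cautious choice.
res = stats.wilcoxon(pre, post)
print(f"Wilcoxon signed-rank: statistic={res.statistic}, p={res.pvalue:.3f}")
```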


2021 · pp. 1-12
Author(s): Manaal Faruqui, Dilek Hakkani-Tür

Abstract: As more users across the world interact with dialog agents in their daily lives, there is a need for better speech understanding, which calls for renewed attention to the dynamics between research in automatic speech recognition (ASR) and natural language understanding (NLU). We briefly review these research areas and lay out the current relationship between them. In light of the observations we make in this paper, we argue that (1) NLU should be cognizant of the ASR models used upstream in a dialog system's pipeline, (2) ASR should be able to learn from errors found in NLU, (3) there is a need for end-to-end datasets that provide semantic annotations on spoken input, and (4) there should be stronger collaboration between the ASR and NLU research communities.
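As one hedged illustration of point (1): instead of classifying only the single best transcript, an ASR-aware NLU component can marginalize its intent prediction over an ASR n-best list. The toy classifier and all names below are hypothetical, not from the paper:

```python
# Illustrative sketch of an ASR-aware NLU step: intent classification
# marginalized over an n-best list of ASR hypotheses with confidences.
from dataclasses import dataclass

@dataclass
class AsrHypothesis:
    text: str
    confidence: float  # ASR posterior, assumed normalized over the n-best list

def classify_intent(text: str) -> dict:
    """Stand-in for a real NLU model: returns P(intent | text)."""
    if "weather" in text:
        return {"get_weather": 0.9, "other": 0.1}
    return {"get_weather": 0.2, "other": 0.8}

def nbest_aware_intent(nbest: list) -> dict:
    """Marginalize over ASR hypotheses: P(intent) = sum_h P(intent | h) * P(h)."""
    scores = {}
    for hyp in nbest:
        for intent, p in classify_intent(hyp.text).items():
            scores[intent] = scores.get(intent, 0.0) + p * hyp.confidence
    return scores

nbest = [
    AsrHypothesis("what's the weather today", 0.6),
    AsrHypothesis("what's the whether today", 0.3),  # a typical ASR confusion
    AsrHypothesis("once the weather today", 0.1),
]
print(nbest_aware_intent(nbest))
```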


PLoS ONE · 2021 · Vol. 16 (12) · e0261295
Author(s): Florian Langner, Julie G. Arenberg, Andreas Büchner, Waldo Nogueira

Objectives: The relationship between electrode-nerve interface (ENI) estimates and inter-subject differences in speech performance with sequential and simultaneous channel stimulation in adult cochlear implant listeners was explored. We investigated the hypothesis that individuals with good ENIs would perform better with simultaneous compared to sequential channel stimulation speech processing strategies than those estimated to have poor ENIs.
Methods: Fourteen postlingually deafened cochlear implant users participated in the study. Speech understanding was assessed with a sentence test at signal-to-noise ratios that resulted in 50% performance for each user with the baseline strategy, F120 Sequential. Two simultaneous stimulation strategies with either two (Paired) or three (Triplet) sets of virtual channels were tested at the same signal-to-noise ratio. ENI measures were estimated through (I) voltage spread with electrical field imaging, (II) behavioral detection thresholds with focused stimulation, and (III) the slopes (IPG slope effect) and 50%-point differences (dB offset effect) of amplitude growth functions from electrically evoked compound action potentials (eCAPs) measured with two interphase gaps (IPGs).
Results: A significant effect of strategy on speech understanding performance was found, with Triplets showing a trend towards worse speech understanding than sequential stimulation. Focused thresholds correlated positively with the difference in the current required to reach the most comfortable level (MCL) between the Sequential and Triplet strategies, an indirect measure of channel interaction. A significant offset effect (the difference in dB between the 50%-points of the eCAP growth functions for the two IPGs) was observed. No significant correlation was observed between the slopes for the two IPGs tested. None of the measures used in this study correlated with the differences in speech understanding scores between strategies.
Conclusions: The ENI measure based on behavioral focused thresholds could explain some of the difference in MCLs, but none of the ENI measures could explain the decrease in speech understanding with an increasing number of simultaneously stimulated electrodes in the processing strategies.
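For measure (III), the slope and 50%-point come from fitted eCAP amplitude growth functions. A minimal sketch of one way to extract them, assuming a sigmoid growth shape and synthetic data (this is not the authors' analysis code):

```python
# Illustrative sketch: extract the slope and 50%-point of an eCAP amplitude
# growth function (AGF) by fitting a sigmoid to amplitude-vs-level data for
# each interphase gap (IPG). The "offset effect" is then the dB difference
# between the two 50%-points.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(level_db, amp_max, slope, midpoint_db):
    """Saturating AGF: eCAP amplitude as a function of stimulation level."""
    return amp_max / (1.0 + np.exp(-slope * (level_db - midpoint_db)))

def fit_agf(levels_db, amplitudes_uv):
    """Return (slope, 50%-point in dB) of a fitted AGF."""
    p0 = [amplitudes_uv.max(), 1.0, np.median(levels_db)]
    (amp_max, slope, midpoint), _ = curve_fit(sigmoid, levels_db, amplitudes_uv, p0=p0)
    return slope, midpoint

# Hypothetical data for one electrode at two IPGs (levels in dB, amplitudes in µV).
levels = np.linspace(30, 50, 11)
rng = np.random.default_rng(0)
agf_short_ipg = sigmoid(levels, 500, 0.8, 42) + rng.normal(0, 10, levels.size)
agf_long_ipg = sigmoid(levels, 550, 0.9, 40) + rng.normal(0, 10, levels.size)

slope_s, p50_s = fit_agf(levels, agf_short_ipg)
slope_l, p50_l = fit_agf(levels, agf_long_ipg)
print(f"offset effect: {p50_s - p50_l:.2f} dB; slopes: {slope_s:.2f} vs {slope_l:.2f}")
```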


2021
Author(s): William Marslen-Wilson

Human listeners understand spoken language literally as they hear it, reflecting a perceptually seamless process of real-time comprehension of what the speaker is saying. This remarkable experience of immediacy is rooted in the exceptional earliness with which information carried by successive words is integrated into the interpretation of the current utterance. But despite 50 years of research, there has been no accepted mechanistic neurobiological account of the brain systems that support this process. Only recently have scientific tools emerged that allow us to probe the real-time activity of these brain systems, telling us where and when such activity can be detected and what its neurocomputational content might be. The resulting research enables us, first, to reject the historically dominant account of early speech interpretation as a linguistically stratified computational hierarchy, centered on the notion of the phoneme and based on sequential transitions between successive representational states.


2021 · Vol. 12
Author(s): Alexandra Annemarie Ludwig, Sylvia Meuret, Rolf-Dieter Battmer, Marc Schönwiesner, Michael Fuchs, ...

Spatial hearing is crucial in real life but deteriorates in individuals with severe sensorineural hearing loss or single-sided deafness. This ability can potentially be improved with a unilateral cochlear implant (CI). The present study investigated sound localization in participants with single-sided deafness provided with a CI. Sound localization was measured separately at eight loudspeaker positions (4°, 30°, 60°, and 90° on the CI side and on the normal-hearing side). Low- and high-frequency noise bursts were used to investigate possible differences in the processing of interaural time and level differences. Data were compared to those of normal-hearing adults aged between 20 and 83 years. In addition, the benefit of the CI for speech understanding in noise was compared to the localization ability. Fifteen of the 18 participants were able to localize signals on both the CI side and the normal-hearing side, although performance varied widely across participants. Three participants always pointed to the normal-hearing side, irrespective of the location of the signal. The comparison with control data showed that participants had particular difficulty localizing sounds at frontal locations and on the CI side. In contrast to most previous results, participants were able to localize low-frequency signals, although they localized high-frequency signals more accurately. Speech understanding in noise was better with than without the CI, but only at a position where the CI also improved sound localization. Our data suggest that a CI can, to a large extent, restore localization in participants with single-sided deafness; difficulties may remain at frontal locations and on the CI side. However, speech understanding in noise improves when wearing the CI. Treatment with a CI in these participants might provide real-world benefits, such as improved orientation in traffic and speech understanding in difficult listening situations.
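Localization performance in a setup like this is commonly summarized as an error score per side. A minimal sketch, assuming pointing responses at the eight azimuths above, with negative azimuths standing in for the CI side (the response values are hypothetical, not study data):

```python
# Illustrative sketch: per-side root-mean-square (RMS) localization error
# from pointing responses, with targets at 4°, 30°, 60°, and 90° on each side.
import numpy as np

targets = np.array([-90, -60, -30, -4, 4, 30, 60, 90])  # degrees azimuth
responses = np.array([-55, -40, -25, 5, 10, 35, 70, 85])  # hypothetical responses

errors = responses - targets
ci_side = targets < 0  # convention here: negative azimuth = CI side
rms_ci = np.sqrt(np.mean(errors[ci_side] ** 2))
rms_nh = np.sqrt(np.mean(errors[~ci_side] ** 2))
print(f"RMS error, CI side: {rms_ci:.1f}°; normal-hearing side: {rms_nh:.1f}°")
```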


2021
Author(s): Arefeh Sherafati, Noel Dwyer, Aahana Bajracharya, Mahlega S. Hassanpour, Adam T. Eggebrecht, ...

Cochlear implants are neuroprosthetic devices that can restore hearing in individuals with severe to profound hearing loss by electrically stimulating the auditory nerve. Because of physical limitations on the precision of this stimulation, the acoustic information delivered by a cochlear implant does not convey the same level of spectral detail as that conveyed by normal hearing. As a result, speech understanding in listeners with cochlear implants is typically poorer and more effortful than in listeners with normal hearing. The brain networks supporting speech understanding in listeners with cochlear implants are not well understood, partly due to difficulties obtaining functional neuroimaging data in this population. In the current study, we assessed the brain regions supporting spoken word understanding in adult listeners with right unilateral cochlear implants (n = 20) and matched controls (n = 18) using high-density diffuse optical tomography (HD-DOT), a quiet and non-invasive imaging modality with spatial resolution comparable to that of functional MRI. We found that while listening to spoken words in quiet, listeners with cochlear implants showed greater activity in the left dorsolateral prefrontal cortex, overlapping with domain-general regions functionally defined by a spatial working memory task. These results suggest that listeners with cochlear implants require greater cognitive processing during speech understanding than listeners with normal hearing, supported by compensatory recruitment of the left dorsolateral prefrontal cortex.


2021 · pp. 1-14
Author(s): Sarah M. Theodoroff, Frederick J. Gallun, Garnett P. McMillan, Michelle Molis, Nirmal Srinivasan, ...

Purpose: Type 2 diabetes mellitus (DM2) is associated with impaired hearing. However, the evidence is less clear on whether DM2 can lead to difficulty understanding speech in complex acoustic environments, independently of age and hearing loss effects. The purpose of this study was to estimate the magnitude of DM2-related effects on speech understanding in the presence of competing speech after adjusting for age and hearing.
Method: A cross-sectional study design was used to investigate the relationship between DM2 and speech understanding in 190 Veterans (M age = 47 years, range: 25–76). Participants were classified as having no diabetes (n = 74), prediabetes (n = 19), or DM2 that was well controlled (n = 24) or poorly controlled (n = 73). A test of spatial release from masking (SRM) was presented in a virtual acoustic simulation over insert earphones with multiple talkers, using sentences from the coordinate response measure corpus, to determine the target-to-masker ratio (TMR) required for 50% correct identification of target speech. A linear mixed model of the TMR results was used to estimate SRM and separate the effects of diabetes group, age, low-frequency pure-tone average (PTA-low), and high-frequency pure-tone average. A separate model estimated the effects of DM2 on PTA-low.
Results: After adjusting for hearing and age, diabetes-related effects remained among those whose DM2 was well controlled, showing an SRM loss of approximately 0.5 dB. Results also showed effects of hearing loss and age, consistent with the literature on people without DM2. Low-frequency hearing loss was greater among those with DM2.
Conclusions: In a large cohort of Veterans, low-frequency hearing loss and older age negatively impact speech understanding. Compared with nondiabetics, individuals with controlled DM2 have additional auditory deficits beyond those associated with hearing loss or aging. These results provide a potential explanation for why individuals who have diabetes and/or are older often report difficulty understanding speech in real-world listening environments.
Supplemental Material: https://doi.org/10.23641/asha.16746475
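SRM here is the improvement in the 50%-correct TMR when maskers are spatially separated from the target rather than colocated, and group effects are estimated with a linear mixed model. A minimal sketch under those assumptions, with synthetic data and hypothetical effect sizes (not the study's data or code):

```python
# Illustrative sketch: compute spatial release from masking (SRM) per subject
# and fit a linear mixed model of TMR with a random intercept per subject.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 40  # hypothetical participants, not the study's 190 Veterans
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n), 2),
    "condition": np.tile(["colocated", "separated"], n),
    "group": np.repeat(rng.choice(["none", "dm2_controlled"], n), 2),
    "age": np.repeat(rng.uniform(25, 76, n), 2),
})
# Synthetic TMR (dB): the separated condition is easier (lower TMR), so SRM > 0.
df["tmr_db"] = (2.0 - 6.0 * (df["condition"] == "separated")
                + 0.5 * (df["group"] == "dm2_controlled")
                + 0.05 * (df["age"] - 50) + rng.normal(0, 1.5, 2 * n))

# SRM per subject: TMR(colocated) - TMR(separated).
srm = (df.pivot(index="subject", columns="condition", values="tmr_db")
         .eval("colocated - separated"))
print(f"mean SRM: {srm.mean():.2f} dB")

# Mixed model with a random intercept per subject, as in the abstract's design.
model = smf.mixedlm("tmr_db ~ condition * group + age", df, groups=df["subject"])
print(model.fit().summary())
```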


2021 · Vol. 150 (4) · pp. A311-A311
Author(s): Toni Smith, Yi Shen, Gary R. Kidd, Anusha Mamidipaka, J Devin McAuley

Author(s): Shayna P. Cooperman, Ksenia A. Aaron, Ayman Fouad, Emma Tran, Nikolas H. Blevins, ...
