The Clear-Speech Benefit for School-Age Children: Speech-in-Noise and Speech-in-Speech Recognition

2020 ◽  
Vol 63 (12) ◽  
pp. 4265-4276
Author(s):  
Lauren Calandruccio ◽  
Heather L. Porter ◽  
Lori J. Leibold ◽  
Emily Buss

Purpose: Talkers often modify their speech when communicating with individuals who struggle to understand speech, such as listeners with hearing loss. This study evaluated the benefit of clear speech for speech-in-noise and speech-in-speech recognition in school-age children and adults with normal hearing.

Method: Masked sentence recognition thresholds were estimated for school-age children and adults using an adaptive procedure. In Experiment 1, the target and masker were summed and presented over a loudspeaker located directly in front of the listener. The masker was either speech-shaped noise or two-talker speech, and target sentences were produced using a clear or conversational speaking style. In Experiment 2, stimuli were presented over headphones. The two-talker speech masker was diotic (M0). Clear and conversational target sentences were presented either in-phase (T0) or out-of-phase (Tπ) between the two ears. The M0Tπ condition introduces a segregation cue that was expected to improve performance.

Results: For speech presented over a single loudspeaker (Experiment 1), the clear-speech benefit was independent of age for the noise masker but increased with age for the two-talker masker. Similar age effects for the two-talker speech masker were seen under headphones with diotic presentation (M0T0), but a comparable clear-speech benefit across ages was observed when a binaural cue facilitated segregation (M0Tπ).

Conclusions: Consistent with prior research, children showed a robust clear-speech benefit for speech-in-noise recognition. Immaturity in the ability to segregate target from masker speech may limit young children's ability to benefit from clear-speech modifications for speech-in-speech recognition under some conditions. When provided with a cue that facilitates segregation, children as young as 4–7 years of age derived a clear-speech benefit in a two-talker masker similar to that experienced by adults.
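The clear-speech benefit reported above is simply the difference between recognition thresholds for the two speaking styles. A minimal sketch of that computation, using hypothetical SRT values (the function name and numbers are illustrative, not taken from the study):

```python
# Sketch: the clear-speech benefit as the difference between masked sentence
# recognition thresholds (SRTs, in dB SNR) for conversational vs. clear speech.
# Lower SRTs indicate better performance, so a positive difference is a benefit.
# All values are hypothetical.

def clear_speech_benefit(srt_conversational, srt_clear):
    """Benefit in dB: conversational SRT minus clear-speech SRT."""
    return srt_conversational - srt_clear

# Hypothetical thresholds for one listener in a two-talker masker
benefit = clear_speech_benefit(srt_conversational=-2.0, srt_clear=-5.5)
print(benefit)  # 3.5 (dB of clear-speech benefit)
```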

2018 ◽  
Vol 61 (2) ◽  
pp. 420-427 ◽  
Author(s):  
Carla L. Youngdahl ◽  
Eric W. Healy ◽  
Sarah E. Yoho ◽  
Frédéric Apoux ◽  
Rachael Frush Holt

Purpose: Psychoacoustic data indicate that infants and children are less likely than adults to focus on a spectral region containing an anticipated signal and are more susceptible to remote masking of a signal. These detection tasks suggest that infants and children, unlike adults, do not listen selectively. However, less is known about children's ability to listen selectively during speech recognition. Accordingly, the current study examines remote masking during speech recognition in children and adults.

Method: Adults and 7- and 5-year-old children performed sentence recognition in the presence of various spectrally remote maskers. Intelligibility was determined for each remote-masker condition, and performance was compared across age groups.

Results: Speech recognition for 5-year-olds was reduced in the presence of spectrally remote noise, whereas the maskers had no effect on the 7-year-olds or adults. Maskers of different bandwidth and remoteness had similar effects.

Conclusions: In accord with psychoacoustic data, young children do not appear to focus on a spectral region of interest and ignore other regions during speech recognition. This tendency may help account for their typically poorer speech perception in noise. This study also appears to capture an important developmental stage, during which a substantial refinement in spectral listening occurs.


2018 ◽  
Author(s):  
Tim Schoof ◽  
Pamela Souza

Objective: Older hearing-impaired adults typically experience difficulty understanding speech in noise. Most hearing aids address this issue using digital noise reduction. While noise reduction does not necessarily improve speech recognition, it may reduce the resources required to process the speech signal. Those freed resources may, in turn, aid the ability to perform another task while listening to speech (i.e., multitasking). This study examined to what extent changing the strength of digital noise reduction in hearing aids affects the ability to multitask.

Design: Multitasking was measured using a dual-task paradigm combining a speech recognition task and a visual monitoring task. The speech recognition task involved sentence recognition in the presence of six-talker babble at signal-to-noise ratios (SNRs) of 2 and 7 dB. Participants were fit with commercially available hearing aids programmed with three noise reduction settings: off, mild, and strong.

Study sample: 18 hearing-impaired older adults.

Results: Noise reduction setting had no effect on the ability to multitask or on the ability to recognize speech in noise.

Conclusions: Adjusting noise reduction settings in the clinic may not improve performance on tasks such as these.


2017 ◽  
Vol 21 ◽  
pp. 233121651668678 ◽  
Author(s):  
Tina M. Grieco-Calub ◽  
Kristina M. Ward ◽  
Laurel Brehm

2008 ◽  
Vol 19 (02) ◽  
pp. 135-146 ◽  
Author(s):  
Andrew Stuart

Sentence recognition in noise was employed to investigate the development of temporal resolution in school-age children. Eighty children aged 6 to 15 years and 16 young adults participated. Reception thresholds for sentences (RTSs) were determined in quiet and in backgrounds of competing continuous and interrupted noise. In the noise conditions, RTSs were determined with a fixed noise level. RTSs in quiet were higher for six- to seven-year-old children (p = .006). Performance was better in the interrupted noise, evidenced by lower RTS signal-to-noise ratios (S/Ns) relative to continuous noise (p < .0001). An effect of age was found in noise (p < .0001), with RTS S/Ns decreasing with increasing age; specifically, children under 14 years performed worse than adults. "Release from masking" was computed by subtracting each participant's RTS S/N in interrupted noise from that in continuous noise. There was no significant difference in RTS S/N difference scores as a function of age (p = .057). Children were more adversely affected by noise and needed greater S/Ns to perform as well as adults. Since there was no effect of age on the amount of release from masking, school-age children may have inherently poorer processing efficiency rather than poorer temporal resolution.
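The release-from-masking measure described above is a per-participant difference score. A minimal sketch of the computation, using hypothetical RTS S/N values rather than data from the study:

```python
# Sketch of the "release from masking" computation: each participant's RTS S/N
# in interrupted noise is subtracted from their RTS S/N in continuous noise.
# A larger positive value means greater benefit from the gaps in the noise.
# All values are hypothetical.

def release_from_masking(rts_sn_continuous, rts_sn_interrupted):
    """Difference score in dB: continuous-noise S/N minus interrupted-noise S/N."""
    return rts_sn_continuous - rts_sn_interrupted

# Hypothetical per-participant thresholds (dB S/N)
participants = [
    {"continuous": -3.0, "interrupted": -12.0},
    {"continuous": -1.5, "interrupted": -9.5},
]
releases = [release_from_masking(p["continuous"], p["interrupted"])
            for p in participants]
print(releases)  # [9.0, 8.0]
```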


Author(s):  
Brandi Jett ◽  
Emily Buss ◽  
Virginia Best ◽  
Jacob Oleson ◽  
Lauren Calandruccio

Purpose: Three experiments were conducted to better understand the role of between-word coarticulation in masked speech recognition. Specifically, we explored whether naturally coarticulated sentences supported better masked speech recognition as compared to sentences derived from individually spoken concatenated words. We hypothesized that sentence recognition thresholds (SRTs) would be similar for coarticulated and concatenated sentences in a noise masker but would be better for coarticulated sentences in a speech masker.

Method: Sixty young adults participated (n = 20 per experiment). An adaptive tracking procedure was used to estimate SRTs in the presence of noise or two-talker speech maskers. Targets in Experiments 1 and 2 were matrix-style sentences, while targets in Experiment 3 were semantically meaningful sentences. All experiments included coarticulated and concatenated targets; Experiments 2 and 3 included a third target type, concatenated keyword-intensity-matched (KIM) sentences, in which the words were concatenated but individually scaled to replicate the intensity contours of the coarticulated sentences.

Results: Regression analyses evaluated the main effects of target type, masker type, and their interaction. Across all three experiments, effects of target type were small (< 2 dB). In Experiment 1, SRTs were slightly poorer for coarticulated than concatenated sentences. In Experiment 2, coarticulation facilitated speech recognition compared to the concatenated KIM condition. When listeners had access to semantic context (Experiment 3), a coarticulation benefit was observed in noise but not in the speech masker.

Conclusions: Overall, differences between SRTs for sentences with and without between-word coarticulation were small. Beneficial effects of coarticulation were observed only relative to the concatenated KIM targets; for unscaled concatenated targets, it appeared that consistent audibility across the sentence offset any benefit of coarticulation. Contrary to our hypothesis, effects of coarticulation generally were not more pronounced in speech maskers than in noise maskers.
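The keyword-intensity matching (KIM) manipulation described above scales each concatenated word to the intensity of the corresponding word in the coarticulated sentence. A minimal sketch of one such scaling step, assuming RMS level as the intensity measure; the function names and waveform arrays are hypothetical, not from the study:

```python
# Sketch of keyword-intensity matching: scale a concatenated word so its RMS
# level equals that of the corresponding word in the coarticulated sentence.
# Waveform segments here are tiny hypothetical arrays for illustration.
import numpy as np

def rms(x):
    """Root-mean-square level of a waveform segment."""
    return np.sqrt(np.mean(np.square(x)))

def match_intensity(word, target_word):
    """Scale `word` so its RMS level equals that of `target_word`."""
    return word * (rms(target_word) / rms(word))

# Hypothetical waveform segments for one keyword
concatenated_word = np.array([0.1, -0.2, 0.3])
coarticulated_word = np.array([0.2, -0.4, 0.6])
scaled = match_intensity(concatenated_word, coarticulated_word)
```

Applied word by word across a sentence, this reproduces the intensity contour of the coarticulated production while leaving the concatenated words' spectral and temporal properties otherwise unchanged.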


2014 ◽  
Vol 57 (5) ◽  
pp. 1908-1918 ◽  
Author(s):  
Kristin J. Van Engen ◽  
Jasmine E. B. Phelps ◽  
Rajka Smiljanic ◽  
Bharath Chandrasekaran

Purpose: The authors sought to investigate interactions among intelligibility-enhancing speech cues (i.e., semantic context, clearly produced speech, and visual information) across a range of masking conditions.

Method: Sentence recognition in noise was assessed for 29 normal-hearing listeners. Testing included semantically normal and anomalous sentences, conversational and clear speaking styles, auditory-only (AO) and audiovisual (AV) presentation modalities, and 4 different maskers (2-talker babble, 4-talker babble, 8-talker babble, and speech-shaped noise).

Results: Semantic context, clear speech, and visual input all improved intelligibility but also interacted with one another and with masking condition. Semantic context was beneficial across all maskers in AV conditions but only in speech-shaped noise in AO conditions. Clear speech provided the most benefit for AV speech with semantically anomalous targets. Finally, listeners were better able to take advantage of visual information for meaningful versus anomalous sentences and for clear versus conversational speech.

Conclusion: Because intelligibility-enhancing cues influence each other and depend on masking condition, multiple maskers and enhancement cues should be used to accurately assess individuals' speech-in-noise perception.


2013 ◽  
Vol 0 (0) ◽  
pp. 1-5 ◽  
Author(s):  
Michal Kedmy ◽  
Topaz Topper ◽  
Ravit Cohen-Mimran ◽  
Karen Banai
