Multitasking During Degraded Speech Recognition in School-Age Children

2017 · Vol 21 · pp. 233121651668678
Author(s): Tina M. Grieco-Calub, Kristina M. Ward, Laurel Brehm

2020 · Vol 63 (12) · pp. 4265-4276
Author(s): Lauren Calandruccio, Heather L. Porter, Lori J. Leibold, Emily Buss

Purpose: Talkers often modify their speech when communicating with individuals who struggle to understand speech, such as listeners with hearing loss. This study evaluated the benefit of clear speech in school-age children and adults with normal hearing for speech-in-noise and speech-in-speech recognition.

Method: Masked sentence recognition thresholds were estimated for school-age children and adults using an adaptive procedure. In Experiment 1, the target and masker were summed and presented over a loudspeaker located directly in front of the listener. The masker was either speech-shaped noise or two-talker speech, and target sentences were produced in a clear or conversational speaking style. In Experiment 2, stimuli were presented over headphones. The two-talker speech masker was diotic (M₀). Clear and conversational target sentences were presented either in phase (T₀) or out of phase (Tπ) between the two ears. The M₀Tπ condition introduces a segregation cue that was expected to improve performance.

Results: For speech presented over a single loudspeaker (Experiment 1), the clear-speech benefit was independent of age for the noise masker but increased with age for the two-talker masker. Similar age effects for the two-talker masker were seen under headphones with diotic presentation (M₀T₀), but a clear-speech benefit comparable across age was observed when a binaural cue facilitated segregation (M₀Tπ).

Conclusions: Consistent with prior research, children showed a robust clear-speech benefit for speech-in-noise recognition. Immaturity in the ability to segregate target from masker speech may limit young children's ability to benefit from clear-speech modifications for speech-in-speech recognition under some conditions. When provided with a cue that facilitates segregation, children as young as 4–7 years of age derived a clear-speech benefit in a two-talker masker similar to that experienced by adults.
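The subscript notation is compact: M₀ means the masker is identical in both ears, T₀ means the target is also in phase across ears, and Tπ means the target is phase-inverted in one ear. As a rough illustration only, here is a minimal NumPy sketch of how M₀T₀ and M₀Tπ stimuli could be assembled; the placeholder signals, sample rate, and absence of level calibration are assumptions, not the study's actual stimulus generation.

```python
# Illustrative sketch of diotic vs. phase-inverted target presentation.
# NOT the authors' stimulus code: signals, sample rate, and level
# handling below are placeholder assumptions.
import numpy as np

fs = 44100  # assumed sample rate (Hz)

def make_binaural(target, masker, invert_target):
    """Return an (n, 2) stereo array: masker diotic (M0) in both ears;
    target in phase (T0) or inverted in one ear (Tpi)."""
    n = min(len(target), len(masker))
    target, masker = target[:n], masker[:n]
    left = target + masker
    right = (-target if invert_target else target) + masker
    return np.column_stack([left, right])

# Placeholder 1-s signals standing in for a sentence and a two-talker masker:
t = np.random.randn(fs)
m = np.random.randn(fs)
m0_t0 = make_binaural(t, m, invert_target=False)   # M0T0 condition
m0_tpi = make_binaural(t, m, invert_target=True)   # M0Tpi condition
```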


2018 · Vol 61 (2) · pp. 420-427
Author(s): Carla L. Youngdahl, Eric W. Healy, Sarah E. Yoho, Frédéric Apoux, Rachael Frush Holt

Purpose: Psychoacoustic data indicate that infants and children are less likely than adults to focus on a spectral region containing an anticipated signal and are more susceptible to remote masking of a signal. These detection tasks suggest that infants and children, unlike adults, do not listen selectively. However, less is known about children's ability to listen selectively during speech recognition. Accordingly, the current study examines remote masking during speech recognition in children and adults.

Method: Adults and 7- and 5-year-old children performed sentence recognition in the presence of various spectrally remote maskers. Intelligibility was determined for each remote-masker condition, and performance was compared across age groups.

Results: Speech recognition for 5-year-olds was reduced in the presence of spectrally remote noise, whereas the maskers had no effect on the 7-year-olds or adults. Maskers of different bandwidth and remoteness had similar effects.

Conclusions: In accord with psychoacoustic data, young children do not appear to focus on a spectral region of interest and ignore other regions during speech recognition. This tendency may help account for their typically poorer speech perception in noise. This study also appears to capture an important developmental stage, during which a substantial refinement in spectral listening occurs.
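To make "spectrally remote" concrete: the masker occupies a frequency band that does not overlap the band carrying the speech, so any masking it produces cannot be explained by spectral overlap. Below is a hedged sketch, with assumed band edges and sample rate, of band-limiting a signal and adding noise in a non-overlapping band; it does not reproduce the published stimuli.

```python
# Illustrative only: band-limited "target" plus noise in a spectrally
# remote band. Band edges and sample rate are assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 16000  # assumed sample rate (Hz)

def bandpass(x, lo_hz, hi_hz, fs, order=4):
    sos = butter(order, [lo_hz, hi_hz], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

speech = np.random.randn(fs)                            # stand-in for a sentence
target = bandpass(speech, 500, 2500, fs)                # assumed speech band
masker = bandpass(np.random.randn(fs), 4000, 6000, fs)  # remote noise band
stimulus = target + masker                              # bands do not overlap
```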


Author(s): Carly B. Fox, Megan Israelsen-Augenstein, Sharad Jones, Sandra Laing Gillam

Purpose: This study examined the accuracy and potential clinical utility of two expedited transcription methods for narrative language samples elicited from school-age children (7;5–11;10 [years;months]) with developmental language disorder. Transcription methods included real-time transcription produced by speech-language pathologists (SLPs) and trained transcribers (TTs), as well as Google Cloud Speech automatic speech recognition.

Method: The accuracy of each transcription method was evaluated against a gold-standard reference corpus. Clinical utility was examined by determining the reliability of scores calculated from the transcripts produced by each method on several language sample analysis (LSA) measures. Participants included seven certified SLPs and seven TTs. Each participant was asked to produce a set of six transcripts in real time, out of a total of 42 language samples. The same 42 samples were transcribed using Google Cloud Speech. Transcription accuracy was evaluated through word error rate. Reliability of LSA scores was determined using correlation analysis.

Results: Google Cloud Speech was significantly more accurate than real-time transcription in transcribing narrative samples and was not affected by the speech rate of the narrator. In contrast, SLP and TT transcription accuracy decreased as a function of increasing speech rate. LSA metrics generated from Google Cloud Speech transcripts were also more reliably calculated.

Conclusions: Automatic speech recognition showed greater accuracy and clinical utility as an expedited transcription method than real-time transcription. Though there is room for improvement in the accuracy of speech recognition for the purpose of clinical transcription, it produced highly reliable scores on several commonly used LSA metrics.

Supplemental Material: https://doi.org/10.23641/asha.15167355
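Word error rate is the word-level edit distance (substitutions + insertions + deletions) between a hypothesis transcript and the reference, divided by the number of reference words. A minimal, generic Python implementation follows; it is a standard textbook version, not the scoring pipeline used in the study.

```python
# Generic word error rate via word-level Levenshtein distance.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edits needed to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i          # deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j          # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("the boy ran home", "the boy run home"))  # 0.25 (1 substitution / 4 words)
```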

