Individuals With Congenital Amusia Show Degraded Speech Perception but Preserved Statistical Learning for Tone Languages

Author(s):  
Jiaqiang Zhu ◽  
Xiaoxiang Chen ◽  
Fei Chen ◽  
Seth Wiener

Purpose: Individuals with congenital amusia exhibit degraded speech perception. This study examined whether Mandarin-speaking adults with amusia were still able to extract the statistical regularities of Mandarin speech sounds despite their degraded speech perception. Method: Using the gating paradigm with monosyllabic syllable–tone words, we tested 19 Mandarin-speaking amusics and 19 musically intact controls. Listeners heard increasingly longer fragments of the acoustic signal across eight duration-blocked gates. The stimuli varied in syllable token frequency and syllable–tone co-occurrence probability. Rates of correct syllable–tone word, correct syllable-only, correct tone-only, and correct syllable–incorrect tone responses were compared between the two groups using mixed-effects models. Results: Amusics were less accurate than controls in terms of correct word, correct syllable-only, and correct tone-only responses. Amusics, however, showed intact top-down processing: like the controls, they responded more accurately to high-frequency syllables and high-probability tones, and their tone errors patterned similarly to those of the control listeners. Conclusions: Amusics are able to learn syllable and tone statistical regularities from the language input. This extends previous work by showing that amusics can track phonological segment and pitch cues despite their degraded speech perception. The observed speech deficits in amusics are therefore not due to an abnormal statistical learning mechanism. These results support rehabilitation programs aimed at improving amusics' sensitivity to pitch.
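
As a rough illustration of the kind of mixed-effects comparison described above, the sketch below fits a model of trial-level accuracy with group, gate, and their interaction as fixed effects and a random intercept per listener. All data and column names are fabricated placeholders, not the study's materials; the authors' actual models (one per response type, plausibly logistic) may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_sub, n_gate = 38, 8  # 19 amusics + 19 controls, eight gates

trials = pd.DataFrame({
    "subject": np.repeat(np.arange(n_sub), n_gate),
    "group": np.repeat(["amusic"] * 19 + ["control"] * 19, n_gate),
    "gate": np.tile(np.arange(1, n_gate + 1), n_sub),
})
# Placeholder accuracy: rises with gate, with a small control-group advantage.
trials["accuracy"] = (
    0.3
    + 0.05 * trials["gate"]
    + 0.10 * (trials["group"] == "control")
    + rng.normal(0, 0.05, len(trials))
)

# Mixed model: group x gate fixed effects, random intercept per listener.
model = smf.mixedlm("accuracy ~ group * gate", trials, groups=trials["subject"])
print(model.fit().summary())
```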

2020 ◽  
Author(s):  
Stephen Charles Van Hedger ◽  
Ingrid Johnsrude ◽  
Laura Batterink

Listeners are adept at extracting regularities from the environment, a process known as statistical learning (SL). SL has generally been assumed to be a form of “context-free” learning that occurs independently of prior knowledge, and SL experiments typically involve exposing participants to presumably novel regularities, such as repeating nonsense words. However, recent work has called this assumption into question, demonstrating that learners’ previous language experience can considerably influence SL performance. In the present experiment, we tested whether previous knowledge also shapes SL in a nonlinguistic domain, using a paradigm that involves extracting regularities from tone sequences. Participants learned novel tone sequences consisting of pitch intervals not typically found in Western music. For one group of participants, the tone sequences used artificial, computerized instrument sounds; for the other group, the same tone sequences used familiar instrument sounds (piano or violin). Knowledge of the statistical regularities was assessed using both trained sounds (measuring specific learning) and sounds that differed in pitch range and/or instrument (measuring transfer of learning). In a follow-up experiment, two additional testing sessions were administered to gauge retention of learning (one day and approximately one week post-training). Compared with training on artificial instruments, training on sequences played by familiar instruments resulted in reduced correlations among test items, reflecting more idiosyncratic performance. Across all three testing sessions, learning of novel regularities presented with familiar instruments was worse than with unfamiliar instruments, suggesting that prior exposure to music produced by familiar instruments interfered with new sequence learning. Overall, these results demonstrate that real-world experience influences SL in a nonlinguistic domain, supporting the view that SL involves the continuous updating of existing representations rather than the establishment of entirely novel ones.
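
One way to read the “reduced correlations among test items” result: if each participant's item-level accuracies are correlated with every other participant's, a lower mean pairwise correlation indicates more idiosyncratic performance. The sketch below illustrates that computation on placeholder data; the matrix shape and scoring are assumptions, not the study's procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
# Placeholder: 20 participants x 36 test items, accuracy in [0, 1].
scores = rng.random((20, 36))

corr = np.corrcoef(scores)  # participant-by-participant correlation matrix
upper = corr[np.triu_indices_from(corr, k=1)]  # unique pairs only
print(f"mean inter-participant correlation: {upper.mean():.3f}")
```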


2018 ◽  
Vol 4 (1) ◽  
Author(s):  
Jona Sassenhagen ◽  
Ryan Blything ◽  
Elena V. M. Lieven ◽  
Ben Ambridge

How are verb-argument structure preferences acquired? Children typically receive very little negative evidence, raising the question of how they come to understand the restrictions on grammatical constructions. Statistical learning theories propose that stochastic patterns in the input contain sufficient clues. For example, if a verb is very common but never observed in transitive constructions, this would indicate that transitive usage of that verb is ungrammatical. Ambridge et al. (2008) showed that in offline grammaticality judgments of intransitive verbs used in transitive constructions, low-frequency verbs elicit higher acceptability ratings than high-frequency verbs, as predicted if relative frequency is a cue during statistical learning. Here, we investigate whether the same pattern also emerges in online processing of English sentences. EEG was recorded while healthy adults listened to sentences featuring transitive uses of semantically matched verb pairs of differing frequencies. We replicate the finding of higher acceptability ratings for transitive uses of low- vs. high-frequency intransitive verbs. Event-related potentials indicate a similar result: early electrophysiological signals distinguish between misuse of high- vs. low-frequency verbs. This indicates that online processing shows a sensitivity to frequency similar to that of offline judgments, consistent with a parser that reflects the original acquisition of grammatical constructions via statistical cues. However, the observed neural responses were not of the expected, or an easily interpretable, form, motivating further work on the neural correlates of online processing of syntactic constructions.
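
A minimal sketch of the kind of early-window ERP contrast the abstract describes: mean amplitude in an assumed 100-300 ms window, compared across transitive misuses of high- vs. low-frequency verbs with a paired test. The sampling rate, window, and arrays are illustrative placeholders, not the study's analysis pipeline.

```python
import numpy as np
from scipy import stats

sfreq = 250  # Hz, assumed sampling rate
times = np.arange(-0.2, 0.8, 1 / sfreq)
window = (times >= 0.10) & (times <= 0.30)  # assumed early time window

rng = np.random.default_rng(2)
# Placeholder subject-averaged ERPs at one channel: subjects x time points.
high_freq = rng.normal(size=(24, times.size))
low_freq = rng.normal(size=(24, times.size))

# Paired comparison of mean window amplitude across the two verb conditions.
t, p = stats.ttest_rel(high_freq[:, window].mean(axis=1),
                       low_freq[:, window].mean(axis=1))
print(f"t = {t:.2f}, p = {p:.3f}")
```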


2021 ◽  
Vol 37 (1) ◽  
Author(s):  
Mai M. El Ghazaly ◽  
Mona I. Mourad ◽  
Nesrine H. Hamouda ◽  
Mohamed A. Talaat

Background: Speech perception in cochlear implant (CI) users is affected by frequency resolution, exposure time, and working memory. Frequency discrimination is especially difficult with a CI. Working memory is important for speech and language development and is expected to contribute to the vast variability in CI speech perception and production outcomes. The aim of this study was to evaluate CI patients' discrimination of consonants that vary in voicing, manner, and place of articulation, and thereby in pitch, time, and intensity cues, and to evaluate working memory status and its possible effect on consonant discrimination. Results: Fifty-five CI patients were included in this study. Their aided thresholds were less than 40 dB HL. Consonant discrimination was assessed using Arabic consonant discrimination words. Working memory was assessed using the Test of Memory and Learning, Second Edition (TOMAL-2). Subjects were divided according to the onset of hearing loss into prelingual children and postlingual adults and teenagers. The consonant classes studied were fricatives, stops, nasals, and laterals. Performance on the high-frequency CVC words was 64.23% ± 17.41 for prelinguals and 61.70% ± 14.47 for postlinguals. These scores were significantly lower than scores on the phonetically balanced word list (PBWL) of 79.94% ± 12.69 for prelinguals and 80.80% ± 11.36 for postlinguals. The lowest scores were for the fricatives. Working memory scores were strongly and positively correlated with speech discrimination scores. Conclusions: Consonant discrimination using high-frequency weighted words can provide a realistic tool for assessing CI speech perception. Working memory skills showed a strong positive relationship with speech discrimination abilities in CI users.
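
The reported association can be illustrated with a simple Pearson correlation between working memory and discrimination scores, as sketched below on fabricated values; the study's actual statistics may have been computed differently.

```python
from scipy import stats

# Fabricated example values: TOMAL-2 working memory index and consonant
# discrimination (% correct) for eight listeners.
working_memory = [95, 102, 88, 110, 97, 105, 90, 99]
discrimination = [62.0, 71.5, 55.0, 80.0, 64.0, 75.5, 58.0, 68.0]

r, p = stats.pearsonr(working_memory, discrimination)
print(f"r = {r:.2f}, p = {p:.3f}")
```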


2012 ◽  
Vol 1252 (1) ◽  
pp. 361-366 ◽  
Author(s):  
Isabelle Peretz ◽  
Jenny Saffran ◽  
Daniele Schön ◽  
Nathalie Gosselin

1991 ◽  
Vol 89 (4B) ◽  
pp. 1866-1866
Author(s):  
E. A. Strickland ◽  
N. F. Viemeister ◽  
D. J. van Tasell

2017 ◽  
Vol 28 (10) ◽  
pp. 913-919 ◽  
Author(s):  
Margaret A. Meredith ◽  
Jay T. Rubinstein ◽  
Kathleen C. Y. Sie ◽  
Susan J. Norton

Background: Children with steeply sloping sensorineural hearing loss (SNHL) lack access to critical high-frequency cues despite the use of advanced hearing aid technology. In addition, their auditory-only aided speech perception abilities often meet Food and Drug Administration criteria for cochlear implantation. Purpose: The objective of this study was to describe hearing preservation and speech perception outcomes in a group of young children with steeply sloping SNHL who received a cochlear implant (CI). Research Design: Retrospective case series. Study Sample: Eight children with steeply sloping postlingual progressive SNHL who received a unilateral traditional CI at Seattle Children’s Hospital between 2009 and 2013 and had follow-up data available up to 24 mo postimplant were included. Data Collection and Analysis: A retrospective chart review was completed. Medical records were reviewed for demographic information, preoperative and postoperative behavioral hearing thresholds, and speech perception scores. Paired t tests were used to analyze speech perception data. Hearing preservation results are reported. Results: Rapid improvement of speech perception scores was observed within the first month postimplant for all participants. Mean monosyllabic word scores were 76% and mean phoneme scores were 86.7% at 1 mo postactivation, compared with mean preimplant scores of 19.5% and 31.0%, respectively. Hearing preservation was observed in five participants out to 24 mo postactivation. Two participants lost hearing in both the implanted and unimplanted ears and received a sequential CI in the contralateral ear after progression of the hearing loss. One participant had a total loss of hearing in the implanted ear only. Results reported in this article are from the ear implanted first; bilateral outcomes are not reported. Conclusions: CIs provided benefit for children with steeply sloping bilateral hearing loss for whom hearing aids did not provide adequate auditory access. In our cohort, significant improvements in speech understanding occurred rapidly postactivation. Preservation of residual hearing in children with a traditional CI electrode is possible.
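
A minimal sketch of the paired comparison reported above: preimplant versus 1-mo postactivation monosyllabic word scores for the same children, tested with a paired t test. The values are placeholders, not the study's data.

```python
from scipy import stats

# Fabricated scores (% correct) for the same eight children, pre vs. post.
pre_implant = [18, 22, 15, 25, 20, 17, 21, 18]
post_activation = [74, 80, 70, 82, 75, 73, 79, 75]

t, p = stats.ttest_rel(post_activation, pre_implant)
print(f"paired t = {t:.2f}, p = {p:.4f}")
```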


Author(s):  
Sehchang Hah

The objective of this experiment was to quantify and localize the effects of wearing the nuclear, biological, and chemical (NBC) M40 protective mask and hood on speech production and perception. A designated speaker's vocalizations of 192 monosyllables while wearing an M40 mask with hood were digitized and used as speech stimuli. A second set of stimuli was produced by recording the same individual vocalizing the same monosyllables without the mask and hood. Participants listened to one set of stimuli during each of two sessions: one session while wearing an M40 mask with hood and another without the mask and hood. The results showed that wearing the mask with hood had the most detrimental acoustic effect on the sustention dimension for both speech perception and production. Wearing it was also detrimental to producing and hearing fricatives and unvoiced stops. These results may be due to the muffling effect of the voicemitter on speech production and the filtering effects of the voicemitter and the hood material on high-frequency components during both speech production and perception. This information will be useful for designing better masks and hoods, and the methodology can also be used to evaluate other speech communication systems.
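
If the analysis above were reproduced from trial-level responses, one plausible first step is tabulating error rates by consonant class and mask condition, as in the sketch below; the data frame and its columns are assumptions for illustration only.

```python
import pandas as pd

# Fabricated trial-level intelligibility responses (1 = heard correctly).
responses = pd.DataFrame({
    "condition": ["mask", "mask", "no_mask", "no_mask"] * 3,
    "consonant_class": ["fricative", "unvoiced_stop", "fricative",
                        "unvoiced_stop", "nasal", "nasal"] * 2,
    "correct": [0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1],
})

# Error rate by condition and consonant class.
error_rates = 1 - responses.groupby(["condition", "consonant_class"])["correct"].mean()
print(error_rates)
```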


Author(s):  
Dirk Kerzel ◽  
Stanislas Huynh Cong

Visual search may be disrupted by the presentation of salient but irrelevant stimuli. To reduce the impact of salient distractors, attention may suppress their processing below baseline level. While there are many studies on the attentional suppression of distractors with features distinct from the target (e.g., a color distractor with a shape target), there is little and inconsistent evidence for attentional suppression of distractors sharing the target feature. In this study, distractor and target were temporally separated in a cue–target paradigm, where the cue was shown briefly before the target display. With target-matching cues, reaction times (RTs) were shorter when the cue appeared at the target location (valid cues) than when it appeared at a nontarget location (invalid cues). To induce attentional suppression, we presented the cue more frequently at one of the four possible target positions. We found that invalid cues appearing at the high-frequency cue position produced less interference than invalid cues appearing at a low-frequency cue position. Crucially, target processing was also impaired at the high-frequency cue position, providing strong evidence for attentional suppression of the cued location. Overall, attentional suppression of the frequent distractor location could be established through feature-based attention, suggesting that feature-based attention may guide attentional suppression just as it guides attentional enhancement.
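
The cueing logic can be illustrated with a simple tabulation: mean RTs for valid versus invalid cues, split by whether the cue appeared at the high-frequency or a low-frequency position. A smaller invalid-minus-valid cost at the high-frequency position is the suppression signature. All values below are fabricated placeholders.

```python
import pandas as pd

# Fabricated mean RTs (ms) per cell of the design.
rts = pd.DataFrame({
    "validity": ["valid", "invalid"] * 4,
    "cue_position": ["high_freq"] * 4 + ["low_freq"] * 4,
    "rt_ms": [452, 470, 455, 468, 440, 495, 443, 498],
})

cell_means = rts.groupby(["cue_position", "validity"])["rt_ms"].mean()
print(cell_means)
# Invalid-minus-valid cost per position; a smaller cost at high_freq
# is consistent with suppression of that location.
print(cell_means[("high_freq", "invalid")] - cell_means[("high_freq", "valid")])
print(cell_means[("low_freq", "invalid")] - cell_means[("low_freq", "valid")])
```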


2017 ◽  
Vol 28 (09) ◽  
pp. 810-822 ◽  
Author(s):  
Benjamin J. Kirby ◽  
Judy G. Kopun ◽  
Meredith Spratford ◽  
Clairissa M. Mollak ◽  
Marc A. Brennan ◽  
...  

Background: Sloping hearing loss imposes limits on audibility for high-frequency sounds in many hearing aid users. Signal processing algorithms that shift high-frequency sounds to lower frequencies have been introduced in hearing aids to address this challenge by improving the audibility of high-frequency sounds. Purpose: This study examined speech perception performance, listening effort, and subjective sound quality ratings with conventional hearing aid processing and a new frequency-lowering signal processing strategy called frequency composition (FC) in adults and children. Research Design: Participants wore the study hearing aids in two signal processing conditions (conventional processing versus FC) at an initial laboratory visit and subsequently at home during two approximately six-week-long trials, with the order of conditions counterbalanced across individuals in a double-blind paradigm. Study Sample: Children (N = 12, 7 females, mean age in years = 12.0, SD = 3.0) and adults (N = 12, 6 females, mean age in years = 56.2, SD = 17.6) with bilateral sensorineural hearing loss who were full-time hearing aid users. Data Collection and Analysis: Individual performance with each type of processing was assessed using speech perception tasks, a measure of listening effort, and subjective sound quality surveys at an initial visit. At the conclusion of each subsequent at-home trial, participants were retested in the laboratory. Linear mixed-effects analyses were completed for each outcome measure with signal processing condition, age group, visit (prehome versus posthome trial), and measures of aided audibility as predictors. Results: Overall, there were few significant differences in speech perception, listening effort, or subjective sound quality between FC and conventional processing, few effects of listener age, and few longitudinal changes in performance. Listeners preferred FC to conventional processing on one of six subjective sound quality metrics. Better speech perception performance was consistently related to higher aided audibility. Conclusions: These results indicate that when high-frequency speech sounds are made audible with conventional processing, speech recognition ability and listening effort are similar between conventional processing and FC. Despite the lack of benefit to speech perception, some listeners still preferred FC, suggesting that qualitative measures should be considered when evaluating candidacy for this signal processing strategy.
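
A hedged sketch of the analysis named above: one linear mixed-effects model per outcome, with signal processing condition, visit, age group, and aided audibility as fixed effects and a random intercept per listener. The synthetic data frame and variable names are assumptions, not the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 24  # 12 children + 12 adults

frame = pd.DataFrame({
    "listener": np.repeat(np.arange(n), 4),
    "condition": np.tile(["conventional", "FC"], n * 2),
    "visit": np.repeat(np.tile(["prehome", "posthome"], n), 2),
    "age_group": np.repeat(["child"] * 12 + ["adult"] * 12, 4),
    "audibility": np.repeat(rng.uniform(0.4, 0.9, n), 4),
})
# Placeholder outcome driven mainly by aided audibility, as reported.
frame["speech_score"] = 60 + 20 * frame["audibility"] + rng.normal(0, 5, len(frame))

model = smf.mixedlm(
    "speech_score ~ condition * visit + age_group + audibility",
    frame, groups=frame["listener"],
)
print(model.fit().summary())
```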

