Effects of musical ear training on lexical tone perception

2016 · Vol 1 · pp. 4
Author(s): Evan D Bradley, Janet G Van Hell

The effect of short-term musical experience on lexical tone perception was examined by administering four hours of daily musical ear training to non-tone-language speakers. After training, participants showed some improvement in a tone labeling task but not in a tone discrimination task; however, this improvement did not differ reliably from that of controls. Short-term musical training thus does not yet replicate the language effects observed among lifelong musicians, although some linguistic differences between musicians and nonmusicians may nevertheless be due to experience rather than to individual differences or other factors.

2019 · Vol 42 (1) · pp. 33-59
Author(s): Ricky KW Chan, Janny HC Leung

Abstract L2 sounds present different kinds of challenges to learners at the phonetic, phonological, and lexical levels, but previous studies on L2 tone learning have mostly focused on the phonetic and lexical levels. The present study employs an innovative technique to examine the role of prior tonal experience and musical training in forming novel abstract syllable-level tone categories. Eighty Cantonese and English musicians and nonmusicians completed two tasks: (a) AX tone discrimination and (b) incidental learning of artificial tone-segment connections (e.g., words beginning with an aspirated stop always carry a rising tone) with synthesized stimuli modeled on Thai. Although the four participant groups distinguished the target tones similarly well, Cantonese speakers showed abstract and implicit knowledge of the target tone-segment mappings after training, whereas English speakers did not, regardless of musical experience. This suggests that tone language experience, but not musical experience, is crucial for forming novel abstract syllable-level tone categories.
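Performance on an AX discrimination task like the one above is conventionally summarized with the signal-detection sensitivity index d′. A minimal sketch, using invented hit and false-alarm rates rather than the study's data:

```python
# Sketch: sensitivity (d') for an AX discrimination task, computed from
# hit and false-alarm rates via standard signal detection theory.
from scipy.stats import norm

def d_prime(hit_rate, fa_rate):
    """d' = z(hit rate) - z(false-alarm rate)."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# e.g., a listener who responds "different" on 80% of different-tone (AX)
# trials and on 20% of same-tone (AA) trials:
print(round(d_prime(0.80, 0.20), 2))  # 1.68
```

In practice, hit and false-alarm rates of exactly 0 or 1 are usually nudged (e.g., by a half-trial correction) before the z-transform, since `norm.ppf` is infinite at those extremes.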


2020 · Vol 63 (2) · pp. 487-498
Author(s): Puisan Wong, Man Wai Cheng

Purpose Theoretical models and substantial research have proposed that general auditory sensitivity is a developmental foundation for speech perception and language acquisition. Nonetheless, controversies exist about the effectiveness of general auditory training in improving speech and language skills. This research investigated the relationships among general auditory sensitivity, phonemic speech perception, and word-level speech perception via the examination of pitch and lexical tone perception in children.

Method Forty-eight typically developing 4- to 6-year-old Cantonese-speaking children were tested on the discrimination of the pitch patterns of lexical tones in synthetic stimuli, the discrimination of naturally produced lexical tones, and the identification of lexical tones in familiar words.

Results The findings revealed that accurate lexical tone discrimination and identification did not necessarily entail accurate discrimination of nonlinguistic stimuli that followed the pitch levels and pitch shapes of lexical tones. Although pitch discrimination and tone discrimination abilities were strongly correlated, accuracy in pitch discrimination was lower than in tone discrimination, and nonspeech pitch discrimination did not precede linguistic tone discrimination in the developmental trajectory.

Conclusions Contradicting the theoretical models, these findings suggest that general auditory sensitivity and speech perception may not be causally or hierarchically related. That pitch discrimination was less accurate than tone discrimination suggests that comparable nonlinguistic auditory perceptual ability may not be necessary for accurate speech perception and language learning. The results cast doubt on the use of nonlinguistic auditory perceptual training to improve children's speech, language, and literacy abilities.


2018 · Vol 18 (1-2) · pp. 104-123
Author(s): Robert E. Graham, Usha Lakshmanan

Abstract A debate is underway regarding the perceptual and cognitive benefits of bilingualism and musical experience. This study contributes to the debate by investigating auditory inhibitory control in English-speaking monolingual musicians, non-musicians, tone language bilinguals, and non-tone language bilinguals. We predicted that musicians and tone language bilinguals would demonstrate enhanced processing relative to monolinguals and other bilinguals. Groups of monolinguals (N = 22), monolingual musicians (N = 19), non-tone language bilinguals (N = 20) and tone language bilinguals (N = 18) were compared on auditory Stroop tasks to assess domain-transferable processing benefits (e.g. auditory inhibitory control) resulting from potentially shared underlying cognitive mechanisms (Patel, 2003; Bialystok & DePape, 2009). In one task, participants heard the words “high” and “low” presented in high or low pitches, and responded regarding the pitch of the stimuli as quickly as possible. In another task, participants heard the words “rise” or “fall” presented in rising or falling pitch contours, and responded regarding the contour of the stimuli as quickly as possible. Results suggest transferable auditory inhibitory control benefits for musicians across pitch and contour processing, but any possible enhanced processing for speakers of tone languages may be task-dependent, as lexical tone activation may interfere with pitch contour processing.


2019
Author(s): Aeron Laffere, Fred Dick, Adam Tierney

Abstract How does the brain follow a sound that is mixed with others in a noisy environment? A possible strategy is to allocate attention to task-relevant time intervals while suppressing irrelevant intervals, a strategy that could be implemented by aligning neural modulations with critical moments in time. Here we tested whether selective attention to non-verbal sound streams is linked to shifts in the timing of attentional modulations of EEG activity, and investigated whether this neural mechanism can be enhanced by short-term training and musical experience. Participants performed a 1-back memory task on a target auditory stream presented at 4 Hz while ignoring a distractor auditory stream, also presented at 4 Hz but with a 180-degree shift in phase. The two attention conditions were linked to a roughly 180-degree shift in the phase of the EEG signal at 4 Hz. Moreover, there was a strong relationship between performance on the 1-back task and the timing of the EEG modulation with respect to the attended stream. EEG modulation timing was also enhanced after several days of training on the selective attention task, and was enhanced in experienced musicians. These results support the hypothesis that modulation of neural timing facilitates attention to particular moments in time, and indicate that phase timing is a robust and reliable marker of individual differences in auditory attention. Moreover, they suggest that nonverbal selective attention can be enhanced in the short term by only a few hours of practice, and in the long term by years of musical training.
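The 180-degree phase shift at the 4 Hz stimulation rate can be read directly off the Fourier transform of the recorded signal. A minimal sketch with synthetic antiphase sinusoids standing in for EEG data; the sampling rate and one-second window are illustrative assumptions, not the study's pipeline:

```python
# Sketch: reading out 4 Hz phase via the FFT and checking that two
# streams presented in antiphase differ by ~180 degrees.
import numpy as np

fs = 256                          # sampling rate (Hz), illustrative
t = np.arange(fs) / fs            # 1 s of samples -> 1 Hz frequency resolution
target = np.sin(2 * np.pi * 4 * t)                  # stream at 4 Hz
distractor = np.sin(2 * np.pi * 4 * t + np.pi)      # 180-degree phase shift

def phase_at_4hz(x, fs):
    spectrum = np.fft.rfft(x)
    bin_4hz = int(4 * len(x) / fs)   # with 1 Hz resolution this is bin 4
    return np.angle(spectrum[bin_4hz])

diff = phase_at_4hz(distractor, fs) - phase_at_4hz(target, fs)
print(np.degrees(diff) % 360)        # ~180
```

With real EEG, the phase estimate would typically be averaged across trials and electrodes before comparing attention conditions.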


2019 · Vol 62 (1) · pp. 190-205
Author(s): Jing Shao, Rebecca Yick Man Lau, Phyllis Oi Ching Tang, Caicai Zhang

Purpose Congenital amusia is an inborn neurogenetic disorder of fine-grained pitch processing. This study attempted to pinpoint the impairment mechanism of speech processing in tone language speakers with amusia. We designed a series of perception tasks that selectively probe low-level pitch processing and relatively high-level phonological processing of lexical tones, with the aim of illuminating the deficit mechanism underlying tone perception in amusia.

Method Sixteen Cantonese-speaking amusics and 16 matched controls were tested on the effects of acoustic (talker/syllable) variation on the identification and discrimination of Cantonese tones in two conditions. In the low-variation condition, tones were always associated with the same talker or syllable; in the high-variation condition, tones were associated with different talkers (with the syllable controlled) or different syllables (with the talker controlled).

Results Largely similar results were obtained in the talker and syllable variation conditions. Amusics exhibited overall poorer performance than controls in tone identification. Although amusics also performed more poorly in tone discrimination, the group difference was more pronounced in the low-variation conditions, where more acoustic constancy was provided. In addition, controls showed a greater increase in discrimination sensitivity from the high- to the low-variation conditions, implying a stronger benefit from acoustic constancy.

Conclusions The findings suggest that amusics' lexical tone perception, in terms of both low-level pitch processing and high-level phonological processing as measured in the low- and high-variation conditions, is impaired. Importantly, amusics benefited less from low-acoustic-variation contexts and thus sharpened their perception of tones less efficiently when perceptual anchors in talker/syllable were provided, suggesting a possible “anchoring deficit” in congenital amusia.
Supplemental Material https://doi.org/10.23641/asha.7616555
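The acoustic-constancy benefit described above can be quantified per participant as the gain in discrimination sensitivity (d′) from the high-variation to the low-variation condition, then compared across groups. A minimal sketch with invented d′ scores, not the study's data:

```python
# Sketch: the "anchoring benefit" as the per-participant gain in d'
# from the high-variation to the low-variation condition.
import numpy as np

# hypothetical per-participant d' scores:
# rows = participants, columns = (low-variation, high-variation)
controls = np.array([[2.4, 1.6], [2.1, 1.5], [2.6, 1.8]])
amusics  = np.array([[1.5, 1.3], [1.4, 1.2], [1.6, 1.3]])

benefit_controls = controls[:, 0] - controls[:, 1]   # gain from constancy
benefit_amusics  = amusics[:, 0] - amusics[:, 1]

# a reduced mean benefit in the amusic group would suggest an anchoring deficit
print(benefit_controls.mean(), benefit_amusics.mean())
```

In the actual analysis the group contrast would of course be tested statistically (e.g., a group-by-condition interaction) rather than eyeballed from means.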


2014 · Vol 43 (6) · pp. 881-897
Author(s): Denis Burnham, Ron Brooker, Amanda Reid

2015 · Vol 37 (2) · pp. 335-357
Author(s): Catherine L. Caldwell-Harris, Alia Lancaster, D. Robert Ladd, Dan Dediu, Morten H. Christiansen

This study examined whether musical training, ethnicity, and experience with a natural tone language influenced sensitivity to tone while listening to an artificial tone language. The language was designed with three tones, modeled after level-tone African languages. Participants listened to a 15-min random concatenation of six 3-syllable words. Sensitivity to tone was assessed using minimal pairs differing only in one syllable (nonword task: e.g., to-kà-su compared to ca-fí-to) or only in tone (tone task: e.g., to-kà-su compared to to-ká-su). Proficiency in an East Asian heritage language was the strongest predictor of success on the tone task. Asians without tone language experience were no better than other ethnic groups. We conclude by considering implications for research on second language learning, especially as approached through artificial language learning.


2020 · pp. 136216882097179
Author(s): Seth Wiener, Evan D. Bradley

Lexical tone languages like Mandarin Chinese require listeners to discriminate among different pitch patterns. A syllable spoken with a rising pitch (e.g. bí ‘nose’) carries a different meaning than the same syllable spoken with a falling pitch (e.g. bì ‘arm’). For native speakers (L1) of a non-tonal language, accurate perception of tones in a second language (L2) is notoriously difficult. Musicians, however, have typically shown an aptitude for lexical tone learning due to the unique perceptual demands of music. This study tested whether musical effects can be exploited to improve linguistic abilities in the general population. A pre-test, 8-week training, post-test design was used to measure L1 English participants’ sensitivity to tone. Individual Differences Scaling was used to measure participants’ weighting of pitch height and movement cues. Participants took part in classroom Mandarin learning only (+L2), musical ear training only (+Music), or classroom learning combined with musical training (+L2+Music). An L1 Mandarin group served as a baseline. At pre-test, mean sensitivity to tone and multidimensional scaling results were similar across all three L1 English groups. After training, all three L1 English groups improved in mean sensitivity, though only the +L2+Music group improved significantly. Multidimensional scaling revealed that all groups increased their weighting of the more informative pitch movement cue at roughly equal rates. Short-term musical training thus effected a change in the cue weighting of linguistic pitch comparable to that occurring after a semester of L2 classroom learning. When combined with classroom learning, short-term musical training resulted in even greater sensitivity to pitch movement cues. These results contribute to models of music-language interaction and suggest that focused application of non-linguistic acoustic training can improve phonetic perception in ways that are relevant to language learning.
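Individual Differences Scaling fits subject-specific weights on shared perceptual dimensions to dissimilarity data. As a greatly simplified stand-in, the relative weighting of pitch-height versus pitch-movement cues can be sketched by regressing a listener's dissimilarity ratings on per-pair cue differences; all data and the regression shortcut are invented for illustration:

```python
# Sketch: recovering cue weights (height vs. movement) from dissimilarity
# ratings with ordinary least squares, as a simplified stand-in for INDSCAL.
import numpy as np

# per tone-pair absolute cue differences: [height_diff, movement_diff]
cue_diffs = np.array([
    [1.0, 0.0],
    [0.0, 1.0],
    [1.0, 1.0],
    [0.5, 2.0],
    [2.0, 0.5],
])
# hypothetical ratings from a listener who relies mainly on pitch movement
# (true weights 0.3 and 1.2, plus a little response noise):
ratings = cue_diffs @ np.array([0.3, 1.2]) + 0.01 * np.array([1, -1, 1, -1, 1])

weights, *_ = np.linalg.lstsq(cue_diffs, ratings, rcond=None)
height_w, movement_w = weights
print(movement_w > height_w)  # the movement cue dominates for this listener
```

Tracking how such weights shift from pre-test to post-test is one way to express the cue-reweighting result reported above.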

