Relationships between music training, speech processing, and word learning: a network perspective

2018 ◽  
Vol 1423 (1) ◽  
pp. 10-18 ◽  
Author(s):  
Stefan Elmer ◽  
Lutz Jäncke ◽  
Eva Dittinger ◽  
Mireille Besson

This chapter takes a new look at an old riddle by discussing putative parallels between speech and music processing, as well as music-to-speech transfer effects. Interestingly, both speech and music rely on comparable acoustic information (i.e., timing, pitch, and timbre) and share, at least partially, perceptual, cognitive, motivational, and motor components. Importantly, in this context, the chapter introduces a cortical framework of speech processing, as well as the perceptual and cognitive operations underlying speech learning mechanisms. It then reviews previous literature pinpointing the influence of music training on segmental and suprasegmental aspects of speech processing, as well as on cognitive functions. Two complementary hypotheses underlying such transfer effects are proposed: cascade effects and multidimensional processes. The issue of shared versus distinct neural networks for music and speech processing is also discussed. Finally, the chapter integrates the often-observed perceptual and cognitive advantages of musicians into a holistic framework by considering their interrelations during an ecologically valid word-learning task.


2019 ◽  
Vol 46 (3) ◽  
pp. 606-616
Author(s):  
Marinella Majorano ◽  
Tamara Bastianello ◽  
Marika Morelli ◽  
Manuela Lavelli ◽  
Marilyn M. Vihman

Previous studies have demonstrated an effect of early vocal production on infants' speech processing and later vocabulary. This study focuses on the relationship between vocal production and new word learning. Thirty monolingual Italian-learning infants were recorded at about 11 months to establish the extent of their consonant production. In parallel, the infants were trained on novel word–object pairs: two containing early-learned consonants (ELC) and two containing late-learned consonants (LLC). Word learning was assessed through preferential looking. The results suggest that vocal production supports word learning: only children with higher, consistent consonant production attended more to the trained ELC images.
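A preferential-looking measure of this kind is typically scored as the proportion of looking time directed at the trained (target) image rather than the distractor on each test trial, compared across conditions. The Python sketch below illustrates that scoring; the data layout, field names, and example numbers are illustrative assumptions, not the authors' actual coding pipeline.

```python
# Illustrative scoring of a preferential-looking test (hypothetical data layout,
# not the authors' actual pipeline). Each row is one test trial with the total
# looking time (in ms) to the trained target image and to the distractor.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Trial:
    child_id: str
    condition: str        # "ELC" (early-learned consonants) or "LLC" (late-learned)
    target_ms: float      # looking time to the trained image
    distractor_ms: float  # looking time to the untrained image

def proportion_target_looking(trial: Trial) -> float:
    """Proportion of looking time on the trained image (0.5 = no preference)."""
    total = trial.target_ms + trial.distractor_ms
    return trial.target_ms / total if total > 0 else 0.5

def condition_means(trials: list[Trial]) -> dict[str, float]:
    """Mean proportion of target looking per condition (ELC vs. LLC)."""
    by_condition: dict[str, list[float]] = {}
    for t in trials:
        by_condition.setdefault(t.condition, []).append(proportion_target_looking(t))
    return {cond: mean(props) for cond, props in by_condition.items()}

# Example with made-up numbers: a child who attends more to trained ELC images.
trials = [
    Trial("p01", "ELC", target_ms=3200, distractor_ms=1800),
    Trial("p01", "ELC", target_ms=2900, distractor_ms=2100),
    Trial("p01", "LLC", target_ms=2400, distractor_ms=2600),
    Trial("p01", "LLC", target_ms=2500, distractor_ms=2500),
]
print(condition_means(trials))  # {'ELC': 0.61, 'LLC': 0.49}
```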


2021 ◽  
Vol 33 (1) ◽  
pp. 8-27
Author(s):  
Mylène Barbaroux ◽  
Arnaud Norena ◽  
Maud Rasamimanana ◽  
Eric Castet ◽  
Mireille Besson

Musical expertise has been shown to positively influence high-level speech abilities such as novel word learning. This study addresses the question of whether enhanced low-level perceptual skills causally drive successful novel word learning. We used a longitudinal approach with psychoacoustic procedures to train two groups of nonmusicians on either pitch discrimination or intensity discrimination, using harmonic complex sounds. After a short period of psychoacoustic training (approximately 3 hr), discrimination thresholds were lower for the specific feature (pitch or intensity) that was trained. Moreover, compared to the intensity group, participants trained on pitch were faster to categorize words varying in pitch. Finally, although the N400 components in both the word-learning phase and the semantic task were larger in the pitch group than in the intensity group, no between-group differences were found at the behavioral level in the semantic task. Thus, these results provide mixed evidence that enhanced perception of relevant features, acquired through a few hours of acoustic training with harmonic sounds, causally impacts the categorization of speech sounds as well as novel word learning. These results are discussed within the framework of near and far transfer effects from music training to speech processing.
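Psychoacoustic discrimination training of this kind is commonly implemented as an adaptive staircase that converges on a listener's threshold, for example a two-down/one-up rule that tracks roughly 70.7% correct. The sketch below shows such a staircase for a pitch-difference threshold; the rule, step sizes, starting value, and simulated listener are generic assumptions, not the specific procedure used in the study.

```python
# Generic two-down/one-up adaptive staircase for a pitch-discrimination threshold
# (an illustrative sketch, not the procedure used in the study).
import random

def run_staircase(p_correct_at, start_delta=50.0, step=0.8, n_reversals=12):
    """Track the pitch difference (in cents) that yields ~70.7% correct.

    p_correct_at(delta) -> probability of a correct response at difference `delta`.
    The difference shrinks after two consecutive correct responses and grows
    after each error; the threshold is the mean delta over the last reversals.
    """
    delta, correct_streak, direction = start_delta, 0, None
    reversals = []
    while len(reversals) < n_reversals:
        correct = random.random() < p_correct_at(delta)
        if correct:
            correct_streak += 1
            if correct_streak == 2:              # two-down: make the task harder
                correct_streak = 0
                if direction == "up":
                    reversals.append(delta)
                direction = "down"
                delta = max(delta * step, 1.0)
        else:
            correct_streak = 0                   # one-up: make the task easier
            if direction == "down":
                reversals.append(delta)
            direction = "up"
            delta = delta / step
    return sum(reversals[-6:]) / 6               # average of the last six reversals

# Simulated listener whose accuracy improves with larger pitch differences.
random.seed(1)
listener = lambda delta: 0.5 + 0.5 * min(delta / 40.0, 1.0)
print(f"Estimated threshold: {run_staircase(listener):.1f} cents")
```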


2019 ◽  
Vol 40 (1) ◽  
pp. 3-20
Author(s):  
Katherine S. White ◽  
Elizabeth S. Nilsen ◽  
Taylor Deglint ◽  
Janel Silva

Disfluencies such as 'um' or 'uh' can cause adults to attribute uncertainty to speakers, but may also facilitate speech processing. To understand how these different functions affect children's learning, we asked whether (dis)fluency affects children's decision to select information from speakers (an explicit behavior) and their learning of specific words (an implicit behavior). In Experiment 1a, 31 3- to 4-year-olds heard two puppets provide fluent or disfluent descriptions of familiar objects. Each puppet then labeled a different novel object with the same novel word (again, fluently or disfluently). Children more frequently endorsed the object referred to by the fluent speaker. We replicated this finding with a separate group of 4-year-olds in Experiment 1b (N = 31) using a modified design. In Experiment 2, 62 3- to 4-year-olds were trained on new words, produced either with or without a preceding disfluency, and were subsequently tested on their recognition of those words. Children were equally accurate for the two types of words. These results suggest that while children may prefer information from fluent speakers, they learn words equally well regardless of fluency, at least in some contexts.


2021 ◽  
Vol 12 ◽  
Author(s):  
David W. Gow ◽  
Adriana Schoenhaut ◽  
Enes Avcu ◽  
Seppo P. Ahlfors

Processes governing the creation, perception, and production of spoken words are sensitive to the patterns of speech sounds in the language user's lexicon. Generative linguistic theory suggests that listeners infer constraints on possible sound patterning from the lexicon and apply these constraints to all aspects of word use. In contrast, emergentist accounts suggest that these phonotactic constraints are a product of interactive associative mapping with items in the lexicon. To determine the degree to which phonotactic constraints are lexically mediated, we observed the effects of learning new words that violate English phonotactic constraints (e.g., srigin) on phonotactic perceptual repair processes in nonword consonant-consonant-vowel (CCV) stimuli (e.g., /sre/). Subjects who learned such words were less likely to "repair" illegal onset clusters (/sr/) and report them as legal ones (/ʃr/). Effective connectivity analyses of MRI-constrained reconstructions of simultaneously collected magnetoencephalography (MEG) and EEG data showed that these behavioral shifts were accompanied by changes in the strength of influences of lexical areas on acoustic-phonetic areas. These results strengthen the interpretation of previous results suggesting that phonotactic constraints on perception are produced by top-down lexical influences on speech processing.
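The phonotactic constraint at issue can be stated simply: /sr/ is not an attested word-initial cluster in English, whereas /ʃr/ is (as in "shrink"). A minimal Python sketch of such a legality check against a list of attested onsets, together with the perceptual "repair" it motivates, is shown below; the onset inventory is partial and purely illustrative, not an exhaustive description of English phonotactics or of the study's stimuli.

```python
# Minimal check of whether a CCV stimulus has a legal English onset cluster.
# The onset inventory below is partial and purely illustrative.
LEGAL_ONSETS = {
    "ʃr", "st", "sp", "sk", "sl", "sm", "sn", "sw",   # s-clusters (note: no "sr")
    "pr", "br", "tr", "dr", "kr", "gr", "fr", "θr",   # obstruent + /r/
    "pl", "bl", "kl", "gl", "fl",                     # obstruent + /l/
}

def onset_is_legal(ccv: str) -> bool:
    """Return True if the two-consonant onset of a CCV form is attested in English."""
    return ccv[:2] in LEGAL_ONSETS

def repair_onset(ccv: str) -> str:
    """Illustrate the perceptual 'repair' reported for listeners who have not
    learned /sr/ words: an illegal /sr/ onset tends to be heard as legal /ʃr/."""
    if not onset_is_legal(ccv) and ccv.startswith("sr"):
        return "ʃ" + ccv[1:]
    return ccv

print(onset_is_legal("sre"))   # False: /sr/ is not an attested English onset
print(repair_onset("sre"))     # "ʃre": repaired toward the legal cluster
print(onset_is_legal("ʃre"))   # True
```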


2013 ◽  
Vol 17 (2) ◽  
pp. 384-395 ◽  
Author(s):  
Paola Escudero ◽  
Ellen Simon ◽  
Karen E. Mulak

Previous studies have shown that orthography is activated during speech processing and that it may have positive and negative effects for non-native listeners. The present study examines whether the effect of orthography on non-native word learning depends on the relationship between the grapheme–phoneme correspondences across the native and non-native orthographic systems. Specifically, congruence between grapheme–phoneme correspondences across the listeners’ languages is predicted to aid word recognition, while incongruence is predicted to hinder it. Native Spanish listeners who were Dutch learners or naïve listeners (with no exposure to Dutch) were taught Dutch pseudowords and their visual referents. They were trained with only auditory forms or with auditory and orthographic forms. During testing, non-native listeners were less accurate when the target and distractor pseudowords formed a minimal pair (differing in only one vowel) than when they formed a non-minimal pair, and performed better on perceptually easy than on perceptually difficult minimal pairs. For perceptually difficult minimal pairs, Dutch learners performed better than naïve listeners and Dutch proficiency predicted learners’ word recognition accuracy. Most importantly and as predicted, exposure to orthographic forms during training aided performance on minimal pairs with congruent orthography, while it hindered performance on minimal pairs with incongruent orthography.
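The congruence manipulation can be made concrete: a grapheme is congruent when it maps onto (roughly) the same phoneme category in the native and the non-native orthography, and incongruent when the two mappings diverge. The sketch below classifies graphemes in a word form on that basis; the toy grapheme–phoneme tables and example forms are hypothetical placeholders, not the Spanish and Dutch correspondences used in the study.

```python
# Toy classifier of grapheme-phoneme congruence between a native (L1) and a
# non-native (L2) orthography. The mapping tables are hypothetical placeholders.
L1_MAP = {"a": "a", "e": "e", "i": "i", "o": "o", "u": "u"}   # native system
L2_MAP = {"a": "a", "e": "e", "i": "i", "o": "o", "u": "y"}   # target system

def grapheme_congruence(word: str) -> dict[str, str]:
    """Label each vowel grapheme in `word` as congruent or incongruent,
    i.e., whether L1 and L2 map it onto the same phoneme category."""
    labels = {}
    for g in word:
        if g in L1_MAP and g in L2_MAP:
            labels[g] = "congruent" if L1_MAP[g] == L2_MAP[g] else "incongruent"
    return labels

# For a minimal pair differing in one vowel, congruence of the critical grapheme
# determines whether seeing the spelling should help or hinder learning.
print(grapheme_congruence("pu"))   # {'u': 'incongruent'} -> spelling may mislead
print(grapheme_congruence("po"))   # {'o': 'congruent'}   -> spelling may help
```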


2016 ◽  
Vol 28 (10) ◽  
pp. 1584-1602 ◽  
Author(s):  
Eva Dittinger ◽  
Mylène Barbaroux ◽  
Mariapaola D'Imperio ◽  
Lutz Jäncke ◽  
Stefan Elmer ◽  
...  

On the basis of previous results showing that music training positively influences different aspects of speech perception and cognition, the aim of this series of experiments was to test the hypothesis that adult professional musicians would learn the meaning of novel words through picture–word associations more efficiently than controls without music training (i.e., fewer errors and faster RTs). We also expected musicians to show faster changes in brain electrical activity than controls, in particular regarding the N400 component that develops with word learning. In line with these hypotheses, musicians outperformed controls in the most difficult semantic task. Moreover, although a frontally distributed N400 component developed in both groups of participants after only a few minutes of novel word learning, in musicians this frontal distribution rapidly shifted to parietal scalp sites, as typically found for the N400 elicited by known words. Finally, musicians showed evidence for better long-term memory for novel words 5 months after the main experimental session. Results are discussed in terms of cascading effects from enhanced perception to memory as well as in terms of multifaceted improvements of cognitive processing due to music training. To our knowledge, this is the first report showing that music training influences semantic aspects of language processing in adults. These results open new perspectives for education in showing that early music training can facilitate later foreign language learning. Moreover, the design used in the present experiment can help to specify the stages of word learning that are impaired in children and adults with word learning difficulties.
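The N400 measure referred to here is conventionally quantified as the mean amplitude of the event-related potential in a post-stimulus time window over a set of electrodes, compared across conditions or groups. The NumPy sketch below shows that kind of computation on an epochs array; the array layout, the 350-550 ms window, the electrode selection, and the synthetic data are illustrative assumptions rather than the parameters used in the study.

```python
# Illustrative N400 quantification: mean ERP amplitude in a post-stimulus window,
# computed from an epochs array of shape (n_trials, n_channels, n_samples).
# Window, channels, and sampling parameters are assumptions for the sketch.
import numpy as np

def n400_mean_amplitude(epochs, times, channel_names, roi, window=(0.350, 0.550)):
    """Average amplitude (e.g., in microvolts) over trials, ROI channels, and window.

    epochs:        array (n_trials, n_channels, n_samples) of baseline-corrected EEG
    times:         array (n_samples,) of epoch time points in seconds
    channel_names: list of channel labels matching the channel axis
    roi:           channel labels to average over (e.g., parietal sites)
    """
    chan_idx = [channel_names.index(ch) for ch in roi]
    time_idx = np.where((times >= window[0]) & (times <= window[1]))[0]
    erp = epochs.mean(axis=0)                  # average over trials -> (chan, samp)
    return erp[np.ix_(chan_idx, time_idx)].mean()

# Tiny synthetic example: 40 trials, 3 channels, 1 s epochs sampled at 250 Hz.
rng = np.random.default_rng(0)
times = np.linspace(0.0, 1.0, 250)
epochs = rng.normal(0.0, 2.0, size=(40, 3, 250))
channel_names = ["Fz", "Cz", "Pz"]
print(n400_mean_amplitude(epochs, times, channel_names, roi=["Cz", "Pz"]))
```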

