Learning and Retention of Novel Words in Musicians and Nonmusicians

Author(s): Elizabeth C. Stewart, Andrea L. Pittman

Purpose: The purpose of this study was to determine whether long-term musical training enhances the ability to perceive and learn new auditory information. Listeners with extensive musical experience were expected to detect, learn, and retain novel words more effectively than participants without musical training. Advantages of musical training were expected to be greater for words learned in multitalker babble compared to quiet.

Method: Participants consisted of 20 young adult musicians and 20 age-matched nonmusicians, all with normal hearing. In addition to completing word recognition and nonword detection tasks, each participant learned 10 novel words in a rapid word-learning paradigm. All tasks were completed in quiet and in multitalker babble. Next-day retention of the learned words was examined in isolation (recall) and in the context of continuous discourse (detection). Performance was compared across groups and listening conditions.

Results: Performance was significantly poorer in babble than in quiet on word recognition and nonword detection, but not on word learning, learned-word recall, or learned-word detection. No differences were observed between groups (musicians vs. nonmusicians) on any of the tasks.

Conclusions: For young normal-hearing adults, auditory experience resulting from long-term music training did not enhance their learning of new auditory information in either favorable (quiet) or unfavorable (babble) listening conditions. This suggests that the formation of semantic and musical representations in memory may be supported by the same underlying auditory processes, such that musical training is simply an extension of an auditory expertise that both musicians and nonmusicians possess.

2021
Author(s): Paola Escudero, Eline Adrianne Smit, Anthony Angwin

In recent years, cross-situational word learning (CSWL) paradigms have shown that novel words can be learned through implicit statistical learning. So far, CSWL studies using adult populations have focused on the presentation of spoken words (auditory information); however, words can also be learned through their written form (orthographic information). This study compares auditory and orthographic presentation of novel words with different degrees of phonological overlap using the CSWL paradigm. We also present both a lab-based and an online approach to running behavioural experiments. Because of the COVID-19 pandemic, lab testing was prematurely terminated, and testing continued online using a newly created online testing protocol. Analyses first compared accuracy and response times across modalities, showing better and faster recognition performance in CSWL when novel words are presented through their written forms (orthographic condition) than through their spoken forms (auditory condition). In addition, Bayesian modelling found that accuracy in the auditory condition was higher online than in the lab-based experiment, whereas performance in the orthographic condition was high in both experiments and generally outperformed the auditory condition. We discuss the implications of our findings for modality of presentation, as well as the benefits of our online testing protocol and its implementation in future research.
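The Bayesian comparison of accuracy across conditions reported above can be illustrated with a minimal sketch. This is not the authors' model; the trial counts below are hypothetical, and the sketch simply uses Beta–Binomial posteriors to estimate the probability that accuracy in one condition exceeds another.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trial counts (not the study's data): (correct, total) per condition.
conditions = {
    "auditory_lab":    (310, 480),
    "auditory_online": (350, 480),
    "orthographic":    (430, 480),
}

# Beta(1, 1) prior + Binomial likelihood -> Beta(1 + correct, 1 + errors) posterior.
def posterior_samples(correct, total, n_samples=100_000):
    return rng.beta(1 + correct, 1 + total - correct, n_samples)

samples = {name: posterior_samples(*counts) for name, counts in conditions.items()}

# Posterior probability that one condition's accuracy exceeds another's.
def prob_greater(a, b):
    return float(np.mean(samples[a] > samples[b]))

print("P(auditory online > auditory lab) =",
      round(prob_greater("auditory_online", "auditory_lab"), 3))
print("P(orthographic > auditory online) =",
      round(prob_greater("orthographic", "auditory_online"), 3))
```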


2015, Vol. 112 (40), pp. 12522–12527
Author(s): Evangelos Paraskevopoulos, Anja Kraneburg, Sibylle Cornelia Herholz, Panagiotis D. Bamidis, Christo Pantev

The present study investigated the cortical large-scale functional network underpinning audiovisual integration via magnetoencephalographic recordings. The reorganization of this network related to long-term musical training was investigated by comparing musicians to nonmusicians. Connectivity was calculated on the basis of the estimated mutual information of the sources’ activity, and the corresponding networks were statistically compared. Nonmusicians’ results indicated that the cortical network associated with audiovisual integration supports visuospatial processing and attentional shifting, whereas a sparser network, related to spatial awareness, supports the identification of audiovisual incongruences. In contrast, musicians’ results showed enhanced connectivity in regions related to the identification of auditory pattern violations. Hence, nonmusicians rely on the processing of visual cues for the integration of audiovisual information, whereas musicians rely mostly on the corresponding auditory information. The large-scale cortical network underpinning multisensory integration is thus reorganized by expertise in a cognitive domain that largely involves audiovisual integration, indicating long-term training-related neuroplasticity.
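As an illustration of the connectivity measure described above, the sketch below estimates pairwise mutual information between simulated source time series from a binned joint histogram and thresholds the result into an adjacency matrix. It is a minimal sketch, not the authors' MEG pipeline; the signals, bin count, and fixed threshold are all assumptions (the study used statistical comparison of networks rather than a hard cutoff).

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated source activity: n_sources x n_samples (stand-in for MEG source time series).
n_sources, n_samples = 6, 2000
sources = rng.standard_normal((n_sources, n_samples))
sources[1] += 0.8 * sources[0]  # inject correlated pairs for illustration
sources[4] += 0.8 * sources[3]

def mutual_information(x, y, bins=16):
    """Histogram-based mutual information estimate (in bits) between two signals."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log2(pxy[nonzero] / (px @ py)[nonzero])))

# Pairwise MI matrix and a thresholded "connectivity network".
mi = np.zeros((n_sources, n_sources))
for i in range(n_sources):
    for j in range(i + 1, n_sources):
        mi[i, j] = mi[j, i] = mutual_information(sources[i], sources[j])

threshold = 0.05  # assumed cutoff for illustration only
adjacency = (mi > threshold).astype(int)
print(np.round(mi, 3))
print(adjacency)
```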


2020
Author(s): Matthew HC Mak, Yaling Hsiao, Kate Nation

Lexical processing is influenced by a word’s semantic diversity, as estimated by corpus-derived metrics. Although this suggests that contextual variation shapes verbal learning and memory, it is not clear what semantic diversity represents and why it influences lexical processing. Word learning experiments and simulations offer an opportunity to manipulate contextual variation directly and measure the effects on processing. In Experiment 1, adults read novel words in six naturalistic passages spanning one familiar topic (low semantic diversity) or six familiar topics (high semantic diversity). Words experienced in the low-diversity condition showed better learning, an effect replicated by simulating spreading activation in lexical networks differing in semantic diversity. We attributed these findings to “anchoring”, a process of stabilizing novel word representations by securing them onto a familiar topic in long-term memory. Simulation 2 and Experiment 2 tested whether word learning might be better placed to take advantage of diversity if novel words were first anchored before diversity was introduced. Simulations and behavioural data both showed that, after an anchoring opportunity, novel word forms were better learned in the high-diversity condition. These findings show that anchoring and contextual variation both influence the early stages of word learning.
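The spreading-activation simulations mentioned above can be sketched in a few lines. The toy networks, retention fraction, and decay rate below are assumptions made purely for illustration and are not the networks or parameters used in the study; the sketch only shows the mechanism of injecting activation at a novel word node and letting it circulate through a low-diversity (one interconnected topic) versus a high-diversity (several separate topics) neighbourhood.

```python
from collections import defaultdict

def spread_activation(edges, seed, steps=10, retain=0.5, decay=0.9):
    """Simple spreading activation: each step, every node keeps `retain` of its
    activation, distributes the rest equally to its neighbours, then decays."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    activation = {node: 0.0 for node in graph}
    activation[seed] = 1.0
    for _ in range(steps):
        new = {node: retain * act for node, act in activation.items()}
        for node, act in activation.items():
            share = (1 - retain) * act / len(graph[node])
            for neighbour in graph[node]:
                new[neighbour] += share
        activation = {node: decay * act for node, act in new.items()}
    return activation

# Low diversity: the novel word's neighbours all belong to one interconnected topic.
low_diversity = (
    [("novel", f"t{i}") for i in range(4)]
    + [(f"t{i}", f"t{j}") for i in range(4) for j in range(i + 1, 4)]
)

# High diversity: the novel word's neighbours sit in four separate topic clusters.
high_diversity = (
    [("novel", f"a{i}") for i in range(4)]
    + [(f"a{i}", f"b{i}") for i in range(4)]
    + [(f"a{i}", f"c{i}") for i in range(4)]
)

for label, edges in [("low diversity", low_diversity), ("high diversity", high_diversity)]:
    final = spread_activation(edges, seed="novel")
    print(f"{label}: activation remaining at novel word = {final['novel']:.3f}")
```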


2017, Vol. 60 (10), pp. 2891–2905
Author(s): Karla K. McGregor, Katherine Gordon, Nichole Eden, Tim Arbisi-Kelm, Jacob Oleson

Purpose: The aim of this study was to determine whether the word-learning challenges associated with developmental language disorder (DLD) result from encoding or retention deficits.

Method: In Study 1, 59 postsecondary students with DLD and 60 with normal development (ND) took the California Verbal Learning Test–Second Edition, Adult Version (Delis, Kramer, Kaplan, & Ober, 2000). In Study 2, 23 postsecondary students with DLD and 24 with ND attempted to learn 9 novel words in each of 3 training conditions: uncued test, cued test, and no test (passive study). Retention was measured 1 day and 1 week later.

Results: By the end of training, students with DLD had encoded fewer familiar words (Study 1) and fewer novel words (Study 2) than their ND peers, as evinced by word recall. They also demonstrated poorer encoding as evinced by slower growth in recall from Trials 1 to 2 (Studies 1 and 2), less semantic clustering of recalled words, and poorer recognition (Study 1). The DLD and ND groups were similar in the relative amount of information they could recall after retention periods of 5 and 20 min (Study 1). After a 1-day retention period, the DLD group recalled less information that had been encoded via passive study, but they performed as well as their ND peers when recalling information that had been encoded via tests (Study 2). Compared to passive study, encoding via tests also resulted in more robust lexical engagement after a 1-week retention period for both the DLD and ND groups.

Conclusions: Encoding, not retention, is the problematic stage of word learning for adults with DLD. Self-testing with feedback lessens the deficit.

Supplemental Materials: https://doi.org/10.23641/asha.5435200


1996, Vol. 83 (3), pp. 779–787
Author(s): Sandra L. Terrell, Raymond Daniloff

This study compared the effectiveness of three modes of instruction for teaching children vocabulary: computer video display tube (VDT), videotape, and live adult reading. The same pictured story was implemented in all three modes: a computer VDT display of still story pictures in color with an accompanying sound track, a videotape presentation of the fully animated story, and a picture book whose pictures and narrative matched those of the computer VDT mode. Seventy-eight normal preschool children were presented the story in one of the three modes of instruction. The novel words to be learned were embedded in the story as nouns, verbs, and affective-state adjectives. Postexposure tests of word recognition showed a small but significant advantage for live-voice reading on two of the three recognition tests. The VDT and videotape modes did not differ from each other in effectiveness.


2017, Vol. 26 (3), pp. 318–327
Author(s): Andrea L. Pittman, Elizabeth C. Stewart, Ian S. Odgear, Amanda P. Willman

Purpose: Lexical acquisition was examined in children and adults to determine if the skills needed to detect and learn new words are retained in the adult years. In addition to advancing age, the effects of hearing loss were also examined.

Method: Measures of word recognition, detection of nonsense words within sentences, and novel word learning were obtained in quiet for 20 children with normal hearing and 21 with hearing loss (8–12 years) as well as for 15 adults with normal hearing and 17 with hearing loss (58–79 years). Listeners with hearing loss were tested with and without high-frequency acoustic energy to identify the type of amplification (narrowband, wideband, or frequency lowering) that yielded optimal performance.

Results: No differences were observed between the adults and children with normal hearing except for the adults' better nonsense word detection. The poorest performance was observed for the listeners with hearing loss in the unaided condition. Performance improved significantly with amplification, to levels at or near those of their counterparts with normal hearing. With amplification, the adults performed as well as the children on all tasks except word recognition.

Conclusions: Adults retain the skills necessary for lexical acquisition regardless of hearing status. However, uncorrected hearing loss nearly eliminates these skills.


2018, Vol. 61 (9), pp. 2325–2336
Author(s): Emily Lund

Purpose: This study investigates differences between preschool children with cochlear implants and age-matched children with normal hearing during an initial stage in word learning to evaluate whether they (a) match novel words to unfamiliar objects and (b) solicit information about unfamiliar objects during play.

Method: Twelve preschool children with cochlear implants and 12 children with normal hearing matched for age completed 2 experimental tasks. In the 1st task, children were asked to point to a picture that matched either a known word or a novel word. In the 2nd task, children were presented with unfamiliar objects during play and were given the opportunity to ask questions about those objects.

Results: In Task 1, children with cochlear implants paired novel words with unfamiliar pictures in fewer trials than children with normal hearing. In Task 2, children with cochlear implants were less likely to solicit information about new objects than children with normal hearing. Performance on the 1st task, but not the 2nd, significantly correlated with expressive vocabulary standard scores of children with cochlear implants.

Conclusion: This study provides preliminary evidence that children with cochlear implants approach mapping novel words to, and soliciting information about, unfamiliar objects differently than children with normal hearing.


2020, pp. 1–21
Author(s): Eva Dittinger, Betina Korka, Mireille Besson

Previous studies evidenced transfer effects from professional music training to novel word learning. However, it is unclear whether such an advantage is driven by cascading, bottom–up effects from better auditory perception and attention to semantic processing or by top–down influences from cognitive functions on perception. Moreover, the long-term effects of novel word learning remain an open issue. To address these questions, we used a word learning design with four different sets of novel words, and we neutralized the potential perceptive and associative learning advantages in musicians. Under such conditions, we did not observe any advantage in musicians on the day of learning (Day 1) at either a behavioral or an electrophysiological level; this suggests that the previously reported advantages in musicians are likely to be related to bottom–up processes. Nevertheless, 1 month later (Day 30 [D30]) and for all types of novel words, the error increase from Day 1 to D30 was lower in musicians than in nonmusicians. In addition, for the set of words that were perceptually difficult to discriminate, only musicians showed typical N400 effects over parietal sites on D30. These results demonstrate that music training improved long-term memory and that transfer effects from music training to word learning (i.e., semantic levels of speech processing) benefit from reinforced (long-term) memory functions. Finally, these findings highlight the positive impact of music training on the acquisition of foreign languages.


1981, Vol. 24 (2), pp. 169–178
Author(s): Gerald Zimmermann, Patricia Rettaliata

Kinematic analysis of selected articulatory gestures of an adventitiously deaf speaker is reported. High-speed cinefluorography and a semiautomated analysis system were used to describe the coordination of the lip, jaw, tongue tip, and tongue dorsum. The coordination of voicing and movement was also analyzed. Compared to a speaker with normal hearing, the deaf speaker showed systematic timing differences in the VC (closing) portion of each utterance. Coordination of the tongue dorsum with other structures showed obvious deviations. Voice termination was consistently later for the deaf speaker. Speculations about the role of auditory information in the long-term monitoring or calibration of speech gestures are offered.

