Same or Different? Perceptual Learning for Connected Speech Induced by Brief and Longer Experiences

Author(s): Karen Banai, Hanin Karawani, Limor Lavie, Yizhar Lavner

Abstract: Perceptual learning, defined as long-lasting changes in the ability to extract information from the environment, occurs following either brief exposure or prolonged practice. Whether these two types of experience yield qualitatively distinct patterns of learning is not clear. We used a time-compressed speech task to assess perceptual learning following either rapid exposure or additional training. We report that both experiences yielded robust and long-lasting learning. Individual differences in rapid learning explained unique variance in performance on independent speech tasks (natural-fast speech and speech-in-noise), with no additional contribution from training-induced learning (Experiment 1). Finally, similar factors appear to influence the specificity of the two types of learning (Experiments 1 and 2). We suggest that rapid learning is key to understanding the role of perceptual learning in speech recognition under adverse conditions, while longer training could serve to strengthen and stabilize learning.

2020, Vol 24, pp. 233121652093054
Author(s): Tali Rotman, Limor Lavie, Karen Banai

Challenging listening situations (e.g., when speech is rapid or noisy) result in substantial individual differences in speech perception. We propose that rapid auditory perceptual learning is one of the factors contributing to those individual differences. To explore this proposal, we assessed rapid perceptual learning of time-compressed speech in young adults with normal hearing and in older adults with age-related hearing loss. We also assessed the contribution of this learning as well as that of hearing and cognition (vocabulary, working memory, and selective attention) to the recognition of natural-fast speech (NFS; both groups) and speech in noise (younger adults). In young adults, rapid learning and vocabulary were significant predictors of NFS and speech in noise recognition. In older adults, hearing thresholds, vocabulary, and rapid learning were significant predictors of NFS recognition. In both groups, models that included learning fitted the speech data better than models that did not include learning. Therefore, under adverse conditions, rapid learning may be one of the skills listeners could employ to support speech recognition.


2000
Author(s): Tatjana A. Nazir, Avital Deutsch, Jonathan Grainger, Ram Frost

1995, Vol 16 (4), pp. 417-427
Author(s): E. William Yund, Krista M. Buckles

2014, Vol 77 (2), pp. 493-507
Author(s): Odette Scharenborg, Andrea Weber, Esther Janse

2015, Vol 43 (2), pp. 310-337
Author(s): Marcel R. Giezen, Paola Escudero, Anne E. Baker

Abstract: This study investigates the role of acoustic salience and hearing impairment in learning phonologically minimal pairs. Picture-matching and object-matching tasks were used to investigate the learning of consonant and vowel minimal pairs in five- to six-year-old deaf children with a cochlear implant (CI) and children of the same age with normal hearing (NH). In both tasks, the CI children showed clear difficulties with learning minimal pairs. The NH children also showed some difficulties, though mainly in the picture-matching task. Vowel minimal pairs were learned more successfully than consonant minimal pairs, particularly in the object-matching task. These results suggest that the ability to encode phonetic detail in novel words is not fully developed at age six and is affected by task demands and acoustic salience. CI children experience persistent difficulties with accurately mapping sound contrasts to novel meanings, but seem to benefit from the relative acoustic salience of vowel sounds.


1970, Vol 30 (3), pp. 916-918
Author(s): Thomas L. Bennett, Edward J. Rickert, Louis E. McAllister

Hooded rats were pre-exposed to circles and triangles in an otherwise visually sparse environment in which the opportunity to manipulate the forms was varied across the early experience groups. Although early experience with these stimuli enhanced their later discriminability over that shown by control animals that received no early experience, the opportunity to manipulate the forms produced no additional gain in perceptual learning relative to subjects not allowed to manipulate the pre-exposed shapes. The findings restrict the generality of the tactual-kinesthetic feedback hypothesis.


2018, Vol 40 (1), pp. 93-109
Author(s): Yi Zheng, Arthur G. Samuel

Abstract: It has been documented that lipreading facilitates the understanding of difficult speech, such as noisy speech and time-compressed speech. However, relatively little work has addressed the role of visual information in perceiving accented speech, another type of difficult speech. In this study, we specifically focus on accented word recognition. One hundred forty-two native English speakers made lexical decision judgments on English words or nonwords produced by speakers with Mandarin Chinese accents. The stimuli were presented either as videos of a relatively distant speaker or as videos in which we zoomed in on the speaker's head. Consistent with studies of degraded speech, listeners were more accurate at recognizing accented words when they saw lip movements from the closer apparent distance. The effect of apparent distance tended to be larger under nonoptimal conditions: when stimuli were nonwords rather than words, and when stimuli were produced by a speaker who had a relatively strong accent. However, we did not find any influence of listeners' prior experience with Chinese-accented speech, suggesting that cross-talker generalization is limited. The current study provides practical suggestions for effective communication between native and nonnative speakers: visual information is useful, and it is more useful in some circumstances than others.

