Tailored perception: listeners’ strategies for perceiving speech fit their individual perceptual abilities

2018 ◽  
Author(s):  
Kyle Jasmin ◽  
Fred Dick ◽  
Lori Holt ◽  
Adam Tierney

In speech, linguistic information is conveyed redundantly by many simultaneously present acoustic dimensions, such as fundamental frequency, duration and amplitude. Listeners show stable tendencies to prioritize these acoustic dimensions differently, relative to one another, which suggests individualized speech perception ‘strategies’. However, it is unclear what drives these strategies and, more importantly, what impact they have on diverse aspects of communication. Here we show that such individualized perceptual strategies can be related to individual differences in perceptual ability. In a cue weighting experiment, we first demonstrate that individuals with a severe pitch perception deficit (congenital amusics) categorize linguistic stimuli similarly to controls when their deficit is unrelated to the main distinguishing cue for that category (in this case, durational or temporal cues). In contrast, in a prosodic task where pitch-related cues are typically more informative, amusics place less importance on this pitch-related information when categorizing speech; instead, they rely more on duration information. Crucially, these differences in perceptual weights were observed even when pitch-related differences were large enough to be perceptually distinct to amusic listeners. In a second set of experiments involving musical and prosodic phrase interpretation, we found that this reliance on duration information allowed amusics to overcome their perceptual deficits and perceive both speech and music successfully. These results suggest that successful comprehension of speech, and potentially of music, is achieved through multiple perceptual strategies whose underlying weights may in part reflect individuals’ perceptual abilities.
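As an illustration of the kind of analysis a cue weighting experiment involves, the sketch below simulates a listener who relies mostly on duration rather than pitch (as the amusic listeners did), then recovers the relative perceptual weights by fitting a logistic model to the simulated category responses. All stimulus values, the listener's true weights, and the fitting procedure are assumptions for illustration, not the paper's actual stimuli or analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stimulus grid: pitch and duration cues, each standardized
# to [-1, 1], crossed in a 7 x 7 design (49 stimuli).
pitch, dur = np.meshgrid(np.linspace(-1, 1, 7), np.linspace(-1, 1, 7))
X = np.column_stack([pitch.ravel(), dur.ravel()])

# Simulate a duration-reliant listener: true weight 0.5 on pitch
# versus 3.0 on duration, with binomial response noise.
logit = 0.5 * X[:, 0] + 3.0 * X[:, 1]
y = (rng.random(len(logit)) < 1 / (1 + np.exp(-logit))).astype(float)

# Fit a logistic regression by gradient ascent on the log-likelihood
# (no intercept, for brevity).
w = np.zeros(2)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w += 0.1 * X.T @ (y - p) / len(y)

# Normalized perceptual weights: each cue's share of the total
# absolute coefficient magnitude.
weights = np.abs(w) / np.abs(w).sum()
print(f"pitch weight: {weights[0]:.2f}, duration weight: {weights[1]:.2f}")
```

The recovered duration weight dominates the pitch weight, mirroring the duration-reliant response pattern that was simulated; in a real experiment the coefficients would come from a listener's actual categorization responses.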

1992 ◽  
Vol 35 (1) ◽  
pp. 192-200 ◽  
Author(s):  
Michele L. Steffens ◽  
Rebecca E. Eilers ◽  
Karen Gross-Glenn ◽  
Bonnie Jallad

Speech perception was investigated in a carefully selected group of adult subjects with familial dyslexia. Perception of three synthetic speech continua was studied: /a/-//, in which steady-state spectral cues distinguished the vowel stimuli; /ba/-/da/, in which rapidly changing spectral cues were varied; and /sta/-/sa/, in which a temporal cue, silence duration, was systematically varied. These three continua, which differed with respect to the nature of the acoustic cues discriminating between pairs, were used to assess subjects’ abilities to use steady-state, dynamic, and temporal cues. Dyslexic and normal readers participated in one identification and two discrimination tasks for each continuum. Results suggest that dyslexic readers required greater silence duration than normal readers to shift their perception from /sa/ to /sta/. In addition, although the dyslexic subjects were able to label and discriminate the synthetic speech continua, they did not necessarily use the acoustic cues in the same manner as normal readers, and their overall performance was generally less accurate.


Perception ◽  
2017 ◽  
Vol 46 (12) ◽  
pp. 1412-1426 ◽  
Author(s):  
Elmeri Syrjänen ◽  
Marco Tullio Liuzza ◽  
Håkan Fischer ◽  
Jonas K. Olofsson

Disgust is a core emotion that evolved to detect and avoid the ingestion of poisonous food as well as contact with pathogens and other harmful agents. Previous research has shown that multisensory presentation of olfactory and visual information may strengthen the processing of disgust-relevant information. However, it is not known whether these findings extend to dynamic facial stimuli that change from neutral to emotionally expressive, or whether individual differences in trait body odor disgust may influence the processing of disgust-related information. In this preregistered study, we tested whether the classification of dynamic facial expressions as happy or disgusted, and the emotional evaluation of these facial expressions, would be affected by individual differences in body odor disgust sensitivity, and by exposure to a sweat-like, negatively valenced odor (valeric acid), as compared with a soap-like, positively valenced odor (lilac essence) or a no-odor control. Using Bayesian hypothesis testing, we found evidence that odors do not affect recognition of emotion in dynamic faces, even when body odor disgust sensitivity was used as a moderator. However, an exploratory analysis suggested that an unpleasant odor context may cause faster reaction times for faces, independent of their emotional expression. Our results further our understanding of the scope and limits of odor effects on the perception of facial affect, and suggest that further studies should focus on reproducibility, specifying the experimental circumstances under which odor effects on facial expressions may be present versus absent.


2011 ◽  
Vol 55 (6) ◽  
pp. 563-571 ◽  
Author(s):  
M. Elsabbagh ◽  
H. Cohen ◽  
M. Cohen ◽  
S. Rosen ◽  
A. Karmiloff-Smith

2020 ◽  
Author(s):  
Nora Andermane ◽  
Jenny Bosten ◽  
Anil Seth ◽  
Jamie Ward

Prior knowledge has been shown to facilitate the incorporation of visual stimuli into awareness. We adopted an individual differences approach to explore whether a tendency to ‘see the expected’ is general or method-specific. We administered a binocular rivalry task and manipulated selective attention, as well as induced expectations via predictive context, self-generated imagery, expectancy cues, and perceptual priming. Most prior manipulations led to a facilitated awareness of the biased percept in binocular rivalry, whereas strong signal primes led to a suppressed awareness, i.e., adaptation. Correlations and factor analysis revealed that the facilitatory effect of priors on visual awareness is closely related to attentional control. We also investigated whether expectation-based biases predict perceptual abilities. Adaptation to strong primes predicted improved naturalistic change detection and the facilitatory effect of weak primes predicted the experience of perceptual anomalies. Taken together, our results indicate that the facilitatory effect of priors may be underpinned by an attentional mechanism but the tendency to ‘see the expected’ is method-specific.


2021 ◽  
Author(s):  
Ashley E Symons ◽  
Adam Tierney

Speech perception requires the integration of evidence from acoustic cues across multiple dimensions. Individuals differ in their cue weighting strategies, i.e. the weight they assign to different acoustic dimensions during speech categorization. In two experiments, we investigate musical training as one potential predictor of individual differences in prosodic cue weighting strategies. Attentional theories of speech categorization suggest that prior experience with the task-relevance of a particular acoustic dimension leads that dimension to attract attention. Therefore, Experiment 1 tested whether musicians and non-musicians differed in their ability to selectively attend to pitch and loudness in speech. Compared to non-musicians, musicians showed enhanced dimension-selective attention to pitch but not loudness. In Experiment 2, we tested the hypothesis that musicians would show greater pitch weighting during prosodic categorization due to prior experience with the task-relevance of pitch cues in music. In this experiment, listeners categorized phrases that varied in the extent to which pitch and duration signaled the location of linguistic focus and phrase boundaries. During linguistic focus categorization only, musicians up-weighted pitch compared to non-musicians. These results suggest that musical training is linked with domain-general enhancements of the salience of pitch cues, and that this increase in pitch salience may lead to an up-weighting of pitch during some prosodic categorization tasks. These findings also support attentional theories of cue weighting, in which more salient acoustic dimensions are given more importance during speech categorization.


2020 ◽  
Vol 24 ◽  
pp. 233121652093054 ◽  
Author(s):  
Tali Rotman ◽  
Limor Lavie ◽  
Karen Banai

Challenging listening situations (e.g., when speech is rapid or noisy) result in substantial individual differences in speech perception. We propose that rapid auditory perceptual learning is one of the factors contributing to those individual differences. To explore this proposal, we assessed rapid perceptual learning of time-compressed speech in young adults with normal hearing and in older adults with age-related hearing loss. We also assessed the contribution of this learning as well as that of hearing and cognition (vocabulary, working memory, and selective attention) to the recognition of natural-fast speech (NFS; both groups) and speech in noise (younger adults). In young adults, rapid learning and vocabulary were significant predictors of NFS and speech in noise recognition. In older adults, hearing thresholds, vocabulary, and rapid learning were significant predictors of NFS recognition. In both groups, models that included learning fitted the speech data better than models that did not include learning. Therefore, under adverse conditions, rapid learning may be one of the skills listeners could employ to support speech recognition.

