Individual differences in speech perception: Evidence for gradiency in the face of category-driven perceptual warping

2021 · Vol 149 (4) · pp. A54-A54
Author(s): Efthymia C. Kapnoula, Bob McMurray

2011 · Vol 55 (6) · pp. 563-571
Author(s): M. Elsabbagh, H. Cohen, M. Cohen, S. Rosen, A. Karmiloff-Smith

2021 · Vol 11 (1)
Author(s): Alyssa R. Roeckner, Katelyn I. Oliver, Lauren A. M. Lebois, Sanne J. H. van Rooij, Jennifer S. Stevens

Abstract: Resilience in the face of major life stressors is changeable over time and with experience. Accordingly, differing sets of neurobiological factors may contribute to an adaptive stress response before, during, and after the stressor. Longitudinal studies are therefore particularly effective in answering questions about the determinants of resilience. Here we provide an overview of the rapidly growing body of longitudinal neuroimaging research on stress resilience. Despite lingering gaps and limitations, these studies are beginning to reveal individual differences in neural circuit structure and function that appear protective against the emergence of future psychopathology following a major life stressor. We then outline a neural circuit model of resilience to trauma. Specifically, pre-trauma biomarkers of resilience show that an ability to modulate activity within threat and salience networks predicts fewer stress-related symptoms. In contrast, early post-trauma biomarkers of subsequent resilience or recovery show a more complex pattern, spanning a number of major circuits including attention and cognitive control networks as well as primary sensory cortices. This synthesis suggests that stress resilience may be scaffolded by stable individual differences in the processing of threat cues, and further buttressed by post-trauma adaptations to the stressor that encompass multiple mechanisms and circuits. Continued attention and resources devoted to this work will inform the targets and timing of mechanistic resilience-boosting interventions.


2020 · Vol 24 · pp. 233121652093054
Author(s): Tali Rotman, Limor Lavie, Karen Banai

Challenging listening situations (e.g., when speech is rapid or noisy) result in substantial individual differences in speech perception. We propose that rapid auditory perceptual learning is one of the factors contributing to those individual differences. To explore this proposal, we assessed rapid perceptual learning of time-compressed speech in young adults with normal hearing and in older adults with age-related hearing loss. We also assessed the contribution of this learning, as well as that of hearing and cognition (vocabulary, working memory, and selective attention), to the recognition of natural-fast speech (NFS; both groups) and speech in noise (young adults only). In young adults, rapid learning and vocabulary were significant predictors of NFS and speech-in-noise recognition. In older adults, hearing thresholds, vocabulary, and rapid learning were significant predictors of NFS recognition. In both groups, models that included learning fitted the speech data better than models that did not. Therefore, under adverse conditions, rapid learning may be one of the skills listeners can employ to support speech recognition.


2012 · Vol 25 (0) · pp. 148
Author(s): Marcia Grabowecky, Emmanuel Guzman-Martinez, Laura Ortega, Satoru Suzuki

Watching moving lips facilitates auditory speech perception when the mouth is attended. However, recent evidence suggests that visual attention and awareness are mediated by separate mechanisms. We investigated whether lip movements suppressed from visual awareness can facilitate speech perception. We used a word categorization task in which participants listened to spoken words and determined as quickly and accurately as possible whether or not each word named a tool. While participants listened to the words, they watched a visual display that presented a video clip of the speaker synchronously speaking the auditorily presented words, or the same speaker articulating different words. Critically, the speaker's face was either visible (the aware trials) or suppressed from awareness using continuous flash suppression. Aware and suppressed trials were randomly intermixed. A secondary probe-detection task ensured that participants attended to the mouth region regardless of whether the face was visible or suppressed. On the aware trials, responses to the tool targets were no faster with the synchronous than asynchronous lip movements, perhaps because the visual information was inconsistent with the auditory information on 50% of the trials. However, on the suppressed trials, responses to the tool targets were significantly faster with the synchronous than asynchronous lip movements. These results demonstrate that even when a random dynamic mask renders a face invisible, lip movements are processed by the visual system with sufficiently high temporal resolution to facilitate speech perception.


Author(s):  
Carol A. Fowler

The theory of speech perception as direct derives from a general direct-realist account of perception. A realist stance on perception is that perceiving enables occupants of an ecological niche to know its component layouts, objects, animals, and events. “Direct” perception means that perceivers are in unmediated contact with their niche (mediated neither by internally generated representations of the environment nor by inferences made on the basis of fragmentary input to the perceptual systems). Direct perception is possible because energy arrays that have been causally structured by niche components and that are available to perceivers specify (i.e., stand in 1:1 relation to) components of the niche. Typically, perception is multi-modal; that is, perception of the environment depends on specifying information present in, or even spanning, multiple energy arrays. Applied to speech perception, the theory begins with the observation that speech perception involves the same perceptual systems that, in a direct-realist theory, enable direct perception of the environment. Most notably, the auditory system supports speech perception, but also the visual system, and sometimes other perceptual systems. Perception of language forms (consonants, vowels, word forms) can be direct if the forms lawfully cause specifying patterning in the energy arrays available to perceivers. In Articulatory Phonology, the primitive language forms (constituting consonants and vowels) are linguistically significant gestures of the vocal tract, which cause patterning in air and on the face. Descriptions are provided of informational patterning in acoustic and other energy arrays. Evidence is next reviewed that speech perceivers make use of acoustic and cross-modal information about the phonetic gestures constituting consonants and vowels to perceive the gestures. Significant problems arise for the viability of a theory of direct perception of speech.
One is the “inverse problem,” the difficulty of recovering vocal tract shapes or actions from acoustic input. Two other problems arise because speakers coarticulate when they speak. That is, they temporally overlap production of serially nearby consonants and vowels so that there are no discrete segments in the acoustic signal corresponding to the discrete consonants and vowels that talkers intend to convey (the “segmentation problem”), and there is massive context-sensitivity in acoustic (and optical and other modalities) patterning (the “invariance problem”). The present article suggests solutions to these problems. The article also reviews signatures of a direct mode of speech perception, including that perceivers use cross-modal speech information when it is available and exhibit various indications of perception-production linkages, such as rapid imitation and a disposition to converge in dialect with interlocutors. An underdeveloped domain within the theory concerns the very important role of longer- and shorter-term learning in speech perception. Infants develop language-specific modes of attention to acoustic speech signals (and optical information for speech), and adult listeners attune to novel dialects or foreign accents. Moreover, listeners make use of lexical knowledge and statistical properties of the language in speech perception. Some progress has been made in incorporating infant learning into a theory of direct perception of speech, but much less progress has been made in the other areas.


Perception · 2017 · Vol 47 (2) · pp. 197-215
Author(s): Noreen O'Sullivan, Christophe de Bezenac, Andrea Piovesan, Hannah Cutler, Rhiannon Corcoran, ...

Seeing one's own face in a mirror is a common experience in daily life, and visual feedback from a mirror is linked to a sense of identity. We developed a procedure that allowed individuals to watch their own face, as in a normal mirror, or with specific distortions (lag) applied to active movement or passive touch. By distorting visual feedback while the face is observed on a screen, we document an illusion of reduced embodiment. Participants made mouth movements while their forehead was touched with a pen. Visual feedback was either synchronous (simultaneous) with reality, as in a mirror, or asynchronous (delayed). In separate conditions, the asynchrony applied only to touch or only to movement; a third condition delayed both. Following stimulation, participants rated their perception of the face in the mirror, and of their own face, on questions that tapped into agency and ownership. Results showed that perceptions of both agency and ownership were affected by asynchrony. Effects on agency, in particular, were moderated by individual differences in depersonalisation and auditory hallucination-proneness, variables with theoretical links to embodiment. The illusion offers a new way of investigating the extent to which body representations are malleable.


2011 · Vol 23 (2) · pp. 382-390
Author(s): Simon van Gaal, H. Steven Scholte, Victor A. F. Lamme, Johannes J. Fahrenfort, K. Richard Ridderinkhof

The presupplementary motor area (pre-SMA) is considered key in contributing to voluntary action selection during response conflict. Here we test whether individual differences in the ability to select appropriate actions in the face of strong (conscious) and weak (virtually unconscious) distracting alternatives are related to individual variability in pre-SMA anatomy. To this end, we acquired structural magnetic resonance images from 58 participants who performed a masked priming task in which conflicting response tendencies were elicited either consciously (weakly masked primes) or virtually unconsciously (strongly masked primes). Voxel-based morphometry revealed that individual differences in pre-SMA gray-matter density are related to subjects' ability to voluntarily select the correct action in the face of conflict, irrespective of the awareness level of conflict-inducing stimuli. These results link structural anatomy to individual differences in cognitive-control ability, and provide support for the role of the pre-SMA in the selection of appropriate actions in situations of response conflict. Furthermore, these results suggest that flexible and voluntary behavior requires efficiently dealing with competing response tendencies, even those that are activated automatically and unconsciously.

