Categorical perception and influence of attention on neural consistency in response to speech sounds in adults with dyslexia

Author(s):  
T. M. Centanni ◽  
S. D. Beach ◽  
O. Ozernov-Palchik ◽  
S. May ◽  
D. Pantazis ◽  
...  
2009 ◽  
Vol 9 (3) ◽  
pp. 304-313 ◽  
Author(s):  
Nelli H. Salminen ◽  
Hannu Tiitinen ◽  
Patrick J. C. May

2021 ◽  
Author(s):  
Basil C Preisig ◽  
Lars Riecke ◽  
Alexis Hervais-Adelman

What processes lead to categorical perception of speech sounds? Investigation of this question is hampered by the fact that categorical speech perception is normally confounded by acoustic differences in the stimulus. By using ambiguous sounds, however, it is possible to dissociate acoustic from perceptual stimulus representations. We used a binaural integration task, in which the inputs to the two ears were complementary so that phonemic identity emerged from their integration into a single percept. Twenty-seven normally hearing individuals took part in an fMRI study in which they were presented with an ambiguous syllable (intermediate between /da/ and /ga/) in one ear and with a meaning-differentiating acoustic feature (third formant) in the other ear. Multi-voxel pattern searchlight analysis was used to identify brain areas that consistently differentiated between response patterns associated with different syllable reports. By comparing responses to different stimuli with identical syllable reports and to identical stimuli with different syllable reports, we disambiguated whether these regions primarily differentiated the acoustics of the stimuli or the syllable report. We found that BOLD activity patterns in the left anterior insula (AI), the left supplementary motor cortex (SMA), the left ventral motor cortex, and the right motor and somatosensory cortex (M1/S1) represent listeners' syllable report irrespective of stimulus acoustics. The same areas have previously been implicated in decision-making (AI), response selection (SMA), and response initiation and feedback (M1/S1). Our results indicate that the emergence of categorical speech sounds implicates decision-making mechanisms and auditory-motor transformations acting on sensory inputs.
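
As a rough illustration of the searchlight decoding described above (not the authors' actual pipeline), the sketch below scores a cross-validated linear SVM on the voxels within a sphere centred on each voxel. The array names (`betas`, `coords`, `syllable_report`), the 8 mm radius, and the classifier choice are all assumptions for illustration.

```python
# Minimal searchlight decoding sketch (illustrative, not the study's exact pipeline).
# Assumes `betas` is an (n_trials, n_voxels) array of single-trial response estimates,
# `coords` is an (n_voxels, 3) array of voxel coordinates in mm, and
# `syllable_report` is an (n_trials,) array coding the /da/ vs. /ga/ report.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import NearestNeighbors

def searchlight_accuracy(betas, coords, labels, radius_mm=8.0, cv=5):
    """Cross-validated decoding accuracy for a sphere centred on each voxel."""
    nn = NearestNeighbors(radius=radius_mm).fit(coords)
    scores = np.zeros(coords.shape[0])
    for v in range(coords.shape[0]):
        # Indices of all voxels within the searchlight sphere around voxel v.
        sphere = nn.radius_neighbors(coords[v:v + 1], return_distance=False)[0]
        X = betas[:, sphere]
        scores[v] = cross_val_score(SVC(kernel="linear"), X, labels, cv=cv).mean()
    return scores

# acc_map = searchlight_accuracy(betas, coords, syllable_report)
```

The report-versus-acoustics contrast in the abstract would correspond to running this decoding separately on trial subsets with identical stimuli (different reports) and identical reports (different stimuli).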


1996 ◽  
Vol 39 (7) ◽  
pp. 576
Author(s):  
A. Cienfuegos ◽  
L. March ◽  
A.M. Shelley ◽  
D.C. Javitt

1999 ◽  
Vol 45 (1) ◽  
pp. 82-88 ◽  
Author(s):  
Angel Cienfuegos ◽  
Lucy March ◽  
Anne-Marie Shelley ◽  
Daniel C Javitt

1995 ◽  
Vol 16 (1) ◽  
pp. 68-89 ◽  
Author(s):  
Anita C. Maiste ◽  
Andrew S. Wiens ◽  
Melvyn J. Hunt ◽  
Michael Scherg ◽  
Terence W. Picton

2015 ◽  
Vol 112 (6) ◽  
pp. 1892-1897 ◽  
Author(s):  
Robert F. Lachlan ◽  
Stephen Nowicki

Some of the psychological abilities that underlie human speech are shared with other species. One hallmark of speech is that linguistic context affects both how speech sounds are categorized into phonemes and how different versions of phonemes are produced. Here we confirm earlier findings that swamp sparrows categorically perceive the notes that constitute their learned songs, and then investigate how categorical boundaries differ according to context. We clustered notes according to their acoustic structure and found statistical evidence for clustering into 10 population-wide note types. Examining how three related types were perceived, we found, in both discrimination and labeling tests, that an “intermediate” note type is categorized with a “short” type when it occurs at the beginning of a song syllable, but with a “long” type at the end of a syllable. In sum, three produced note-type clusters appear to be underlain by two perceived categories. Thus, in birdsong, as in human speech, categorical perception is context-dependent, and, as in human phonology, there is a complex relationship between underlying categorical representations and surface forms. Our results therefore suggest that complex phonology can evolve even in the absence of rich linguistic components, such as syntax and semantics.
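
The note-clustering step (grouping notes by acoustic structure and settling on a number of population-wide note types) can be sketched as model-based clustering with the component count chosen by BIC. The Gaussian-mixture approach, the feature set, and the function names below are illustrative assumptions, not necessarily the study's actual procedure.

```python
# Illustrative sketch: cluster note acoustics into types, choosing the number of
# clusters by BIC. Feature choice and the use of a Gaussian mixture are assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

def cluster_note_types(features, max_k=15, random_state=0):
    """features: (n_notes, n_acoustic_measures), e.g. duration, peak frequency, bandwidth."""
    X = StandardScaler().fit_transform(features)
    models = [GaussianMixture(n_components=k, random_state=random_state).fit(X)
              for k in range(1, max_k + 1)]
    bics = [m.bic(X) for m in models]
    best = models[int(np.argmin(bics))]   # lowest BIC selects the number of note types
    return best.predict(X), best.n_components, bics

# labels, n_types, bics = cluster_note_types(note_features)
```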


1985 ◽  
Vol 28 (4) ◽  
pp. 594-598 ◽  
Author(s):  
M. Jane Collins ◽  
Richard R. Hurtig

The usefulness of tactile devices as aids to lipreading has been established. However, their maximal benefit, whether in reducing the ambiguity of lipreading cues or in substituting for audition, may depend on phonemic recognition via tactile signals alone. In the present study, a categorical perception paradigm was used to evaluate tactile perception of speech sounds in comparison to auditory perception. The results show that speech signals delivered by tactile stimulation can be categorically perceived along a voice-onset time (VOT) continuum. The boundary for the voiced-voiceless distinction falls at longer VOTs for tactile than for auditory perception. It is concluded that the procedure is useful for determining characteristics of tactile perception and for prosthesis evaluation.
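
The category boundary in identification tasks of this kind is conventionally estimated as the 50% crossover of a psychometric (logistic) function fitted over the VOT continuum. A minimal sketch follows, assuming per-step identification proportions are already available; the variable names and starting values are illustrative.

```python
# Minimal sketch: estimate a voiced-voiceless category boundary by fitting a logistic
# function to the proportion of "voiceless" responses along the VOT continuum and
# taking its 50% crossover point.
import numpy as np
from scipy.optimize import curve_fit

def logistic(vot, boundary, slope):
    return 1.0 / (1.0 + np.exp(-slope * (vot - boundary)))

def category_boundary(vot_ms, p_voiceless):
    (boundary, slope), _ = curve_fit(logistic, vot_ms, p_voiceless,
                                     p0=[np.median(vot_ms), 0.5])
    return boundary, slope

# Hypothetical usage: per the abstract, the tactile boundary would be expected at a
# longer VOT than the auditory one.
vot = np.arange(0, 65, 5)   # VOT steps in ms (illustrative continuum)
# p_aud, p_tac = identification proportions for auditory and tactile conditions
# print(category_boundary(vot, p_aud), category_boundary(vot, p_tac))
```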


Cognition ◽  
2005 ◽  
Vol 98 (2) ◽  
pp. B35-B44 ◽  
Author(s):  
Willy Serniclaes ◽  
Paulo Ventura ◽  
José Morais ◽  
Régine Kolinsky

2020 ◽  
Author(s):  
Md Sultan Mahmud ◽  
Mohammed Yeasin ◽  
Gavin M. Bidelman

Categorical perception (CP) of audio is critical to understanding how the human brain perceives speech sounds despite widespread variability in acoustic properties. Here, we investigated the spatiotemporal characteristics of auditory neural activity that reflects CP for speech (i.e., differentiates phonetic prototypes from ambiguous speech sounds). We recorded high-density EEG as listeners rapidly classified vowel sounds along an acoustic-phonetic continuum. We used support vector machine (SVM) classifiers and stability selection to determine when and where in the brain CP was best decoded across space and time via source-level analysis of the event-related potentials (ERPs). We found that early (120 ms) whole-brain data decoded speech categories (i.e., prototypical vs. ambiguous speech tokens) with 95.16% accuracy [area under the curve (AUC) 95.14%; F1-score 95.00%]. Separate analyses of left hemisphere (LH) and right hemisphere (RH) responses showed that LH decoding was more robust and earlier than RH (89.03% vs. 86.45% accuracy; 140 ms vs. 200 ms). Stability (feature) selection identified 13 regions of interest (ROIs) out of 68 brain regions (including auditory cortex, supramarginal gyrus, and Broca's area) that showed categorical representation during stimulus encoding (0-260 ms). In contrast, 15 ROIs (including fronto-parietal regions, Broca's area, and motor cortex) were necessary to describe later decision stages (>300 ms) of categorization, but these areas were highly associated with the strength of listeners' categorical hearing (i.e., the slope of behavioral identification functions). Our data-driven multivariate models demonstrate that abstract categories emerge surprisingly early (∼120 ms) in the time course of speech processing and are dominated by engagement of a relatively compact fronto-temporal-parietal brain network.
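
A simplified sketch of the time-resolved decoding idea (training an SVM at each time sample and scoring it with cross-validated AUC) is shown below. The array shapes, the linear kernel, and the per-sample scheme are assumptions; the sketch omits the source localization and stability-selection steps described in the abstract.

```python
# Sketch of time-resolved decoding of prototypical vs. ambiguous tokens, assuming
# `epochs` is an (n_trials, n_channels, n_times) array and `y` codes the category.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def decode_over_time(epochs, y, cv=5):
    """Cross-validated ROC-AUC of a linear SVM at each time sample."""
    n_times = epochs.shape[2]
    auc = np.zeros(n_times)
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    for t in range(n_times):
        auc[t] = cross_val_score(clf, epochs[:, :, t], y,
                                 cv=cv, scoring="roc_auc").mean()
    return auc  # per the abstract, decoding peaks early (~120 ms post-stimulus)

# auc_timecourse = decode_over_time(epochs, labels)
```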


2003 ◽  
Author(s):  
Willy Serniclaes ◽  
Liliane Sprenger-Charolles
