Auditory Word
Recently Published Documents


TOTAL DOCUMENTS: 211 (five years: 23)
H-INDEX: 37 (five years: 2)

2021 ◽  
Author(s):  
Phoebe Gaston ◽  
Christian Brodbeck ◽  
Colin Phillips ◽  
Ellen Lau

Abstract: Speech input is often understood to trigger rapid and automatic activation of successively higher-level representations for comprehension of words. Here we show evidence from magnetoencephalography that incremental processing of speech input is limited when words are heard in isolation as compared to continuous speech. This suggests a less unified and automatic process than is often assumed. We present evidence that neural effects of phoneme-by-phoneme lexical uncertainty, quantified by cohort entropy, occur in connected speech but not isolated words. In contrast, we find robust effects of phoneme probability, quantified by phoneme surprisal, during perception of both connected speech and isolated words. This dissociation rules out models of word recognition in which phoneme surprisal and cohort entropy are common indicators of a uniform process, even though these closely related information-theoretic measures both arise from the probability distribution of wordforms consistent with the input. We propose that phoneme surprisal effects reflect automatic access of a lower level of representation of the auditory input (e.g., wordforms) while cohort entropy effects are task-sensitive, driven by a competition process or a higher-level representation that is engaged late (or not at all) during the processing of single words.
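For readers less familiar with these measures, here is a minimal sketch of how phoneme surprisal and cohort entropy can be computed from a frequency-weighted lexicon. The toy lexicon and function names are illustrative assumptions, not the authors' materials:

```python
import math

# Toy frequency-weighted lexicon (illustrative; phonemes written as letters).
lexicon = {"cat": 120, "cap": 80, "can": 200, "cot": 50, "dog": 150}

def cohort(prefix):
    """Wordforms still consistent with the phonemes heard so far."""
    return {w: f for w, f in lexicon.items() if w.startswith(prefix)}

def phoneme_surprisal(prefix, next_phoneme):
    """-log2 P(next phoneme | input so far), with probabilities given by
    the frequency mass of the cohort before and after the phoneme."""
    before, after = cohort(prefix), cohort(prefix + next_phoneme)
    return -math.log2(sum(after.values()) / sum(before.values()))

def cohort_entropy(prefix):
    """Entropy of the distribution over wordforms consistent with the
    input: H = -sum_w p(w) * log2 p(w)."""
    c = cohort(prefix)
    total = sum(c.values())
    return -sum(f / total * math.log2(f / total) for f in c.values())

# After hearing "ca": how surprising is "t", and how uncertain is the cohort?
print(phoneme_surprisal("ca", "t"))  # ~1.74 bits
print(cohort_entropy("ca"))          # ~1.49 bits
```

Both quantities derive from the same cohort distribution, which is exactly why the dissociation reported above is informative: a single uniform mechanism would be expected to show both effects or neither.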


2021 ◽  
Vol 11 (8) ◽  
pp. 1063
Author(s):  
Kelly Cotosck ◽  
Jed Meltzer ◽  
Mariana Nucci ◽  
Katerina Lukasova ◽  
Letícia Mansur ◽  
...  

Functional neuroimaging studies have highlighted the roles of three networks in processing language, all of which are typically left-lateralized: a ventral stream involved in semantics, a dorsal stream involved in phonology and speech production, and a more dorsal “multiple demand” network involved in many effortful tasks. As lateralization in all networks may be affected by life factors such as age, literacy, education, and brain pathology, we sought to develop a task paradigm with which to investigate the engagement of these networks, including manipulations that selectively emphasize semantic and phonological processing within a single task performable by almost anyone regardless of literacy status. In young healthy participants, we administered an auditory word monitoring task in which participants had to note the occurrence of a target word within a continuous story presented either in their native language, Portuguese, or in an unknown language, Japanese. Native language task performance activated ventral stream language networks, left-lateralized but bilateral in the anterior temporal lobe. Unfamiliar language performance, being more difficult, activated left hemisphere dorsal stream structures and the multiple demand network bilaterally, but predominantly in the right hemisphere. These findings suggest that increased demands on phonological processing to accomplish word monitoring in the absence of semantic support may result in the bilateral recruitment of networks involved in speech perception under more challenging conditions.


2021 ◽  
Author(s):  
Chuanji Gao ◽  
Svetlana V. Shinkareva ◽  
Marius Peelen

Recognizing written or spoken words involves a sequence of processing stages, transforming sensory features into lexical-semantic representations. While the later processing stages are common across modalities, the initial stages are modality-specific. In the visual modality, previous studies have shown that words with positive valence are recognized faster than neutral words. Here, we examined whether the effects of valence on word recognition are specific to the visual modality or are common across visual and auditory modalities. To address this question, we analyzed multiple large databases of visual and auditory lexical decision tasks, relating the valence of words to lexical decision times while controlling for a large number of variables, including arousal and frequency. We found that valence differentially influenced visual and auditory word recognition. Valence had an asymmetric effect on visual lexical decision times, primarily speeding up recognition of positive words. By contrast, valence had a symmetric effect on auditory lexical decision times, with both negative and positive words speeding up word recognition relative to neutral words. The modality specificity of valence effects was consistent across databases and was observed when the same set of words was compared across modalities. We interpret these findings as indicating that valence influences word recognition partly at the sensory-perceptual stage. We relate these effects to the effects of positive (reward) and negative (punishment) reinforcers on perception.
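The kind of analysis described, regressing lexical decision times on valence while controlling for arousal and frequency, might be sketched as follows. The file and column names, and the quadratic valence term used to distinguish an asymmetric from a symmetric effect, are assumptions for illustration; the paper specifies the actual databases and models:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical merged database: one row per word, with mean lexical decision
# time (rt) and norms for valence, arousal, and log frequency.
df = pd.read_csv("lexical_decision_norms.csv")  # assumed file and columns

# Linear plus quadratic valence terms let an asymmetric effect (only positive
# words faster) be distinguished from a symmetric one (both extremes faster
# than neutral), while arousal and frequency are held constant.
model = smf.ols("rt ~ valence + I(valence**2) + arousal + log_frequency",
                data=df).fit()
print(model.summary())
```

Under this parameterization, a symmetric speedup for both valence extremes shows up as a negative quadratic coefficient, whereas a purely positive-word advantage loads mainly on the linear term.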


2021 ◽  
Author(s):  
Alberto Furgoni ◽  
Antje Stoehr ◽  
Clara D. Martin

Purpose: In languages with alphabetic writing systems, the relationship between phonology and orthography is strong. Phonology-to-orthography mappings can be consistent (i.e., one phonological unit corresponds to one orthographic unit) or inconsistent (i.e., one phonological unit corresponds to multiple orthographic units). This study investigates whether the Orthographic Consistency Effect (OCE) emerges at the phonemic level during auditory word recognition, regardless of the opacity of a language's writing system.
Methods: Thirty L1-French (opaque orthography) and 30 L1-Spanish (transparent orthography) listeners performed an L1 auditory lexical decision task whose stimuli contained either only consistently spelled phonemes or a mix of consistently and inconsistently spelled phonemes.
Results: Listeners were faster at recognizing consistently spelled words than inconsistently spelled words, indicating that consistent words are recognized more easily. For pseudowords, a numerical trend suggests that French listeners may be more sensitive to phoneme-to-grapheme inconsistencies.
Conclusions: These findings have theoretical implications: inconsistent phoneme-to-grapheme mappings, like inconsistencies at the level of the syllable or rhyme, impact auditory word recognition. Moreover, our results suggest that the OCE should occur in all languages with alphabetic writing systems, regardless of their degree of orthographic opacity.


2021 ◽  
Vol 9 (1) ◽  
pp. 1-7
Author(s):  
Mohammed Shafiullah ◽  
Shaira Berg ◽  
Paul van Schaik ◽  
Lorraine McDonald ◽  
John D. Allbutt

2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Kelly Michaelis ◽  
Makoto Miyakoshi ◽  
Gina Norato ◽  
Andrei V. Medvedev ◽  
Peter E. Turkeltaub

Abstract: A longstanding debate has surrounded the role of the motor system in speech perception, but progress in this area has been limited by tasks that only examine isolated syllables and conflate decision-making with perception. Using an adaptive task that temporally isolates perception from decision-making, we examined an EEG signature of motor activity (sensorimotor μ/beta suppression) during the perception of auditory phonemes, auditory words, audiovisual words, and environmental sounds while holding difficulty constant at two levels (Easy/Hard). Results revealed left-lateralized sensorimotor μ/beta suppression that was related to perception of speech but not environmental sounds. Audiovisual word and phoneme stimuli showed enhanced left sensorimotor μ/beta suppression for correct relative to incorrect trials, while auditory word stimuli showed enhanced suppression for incorrect trials. Our results demonstrate that motor involvement in perception is left-lateralized, is specific to speech stimuli, and is not simply the result of domain-general processes. These results provide evidence for an interactive network for speech perception in which dorsal stream motor areas are dynamically engaged during the perception of speech depending on the characteristics of the speech signal. Crucially, this motor engagement has different effects on the perceptual outcome depending on the lexicality and modality of the speech stimulus.
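As a rough illustration, sensorimotor μ/beta suppression is commonly quantified as band power during a perception window relative to a pre-stimulus baseline. The band limits, sampling rate, and synthetic data below are generic assumptions, not the study's pipeline:

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, fmin, fmax):
    """Mean Welch power spectral density within a frequency band."""
    freqs, psd = welch(x, fs=fs, nperseg=fs)
    mask = (freqs >= fmin) & (freqs <= fmax)
    return psd[mask].mean()

def mu_beta_suppression(baseline, task, fs=500):
    """Suppression in dB relative to baseline; negative values indicate
    desynchronization (less mu/beta power during perception)."""
    bands = {"mu": (8, 13), "beta": (15, 25)}  # assumed band limits
    return {name: 10 * np.log10(band_power(task, fs, lo, hi) /
                                band_power(baseline, fs, lo, hi))
            for name, (lo, hi) in bands.items()}

# Synthetic single-channel example: 1 s baseline and 1 s task at 500 Hz.
rng = np.random.default_rng(0)
print(mu_beta_suppression(rng.standard_normal(500), rng.standard_normal(500)))
```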


2020 ◽  
Vol 11 ◽  
Author(s):  
Joline M. Fan ◽  
Maria Luisa Gorno-Tempini ◽  
Nina F. Dronkers ◽  
Bruce L. Miller ◽  
Mitchel S. Berger ◽  
...  

Aphasia classifications and specialized language batteries differ across the fields of neurodegenerative disorders and lesional brain injuries, making comparisons of language deficits across etiologies difficult. In this study, we present a simplified framework in which a widely used aphasia battery captures clinical clusters across disease etiologies and provides a quantitative and visual method to characterize and track patients over time. The framework is used to evaluate populations representing three disease etiologies: stroke, primary progressive aphasia (PPA), and post-operative aphasia. A total of 330 patients across three populations with cerebral injury leading to aphasia were investigated, including 76 patients with stroke, 107 patients meeting criteria for PPA, and 147 patients following left hemispheric resective surgery. Western Aphasia Battery (WAB) measures (Information Content, Fluency, answering Yes/No questions, Auditory Word Recognition, Sequential Commands, and Repetition) were collected across the three populations and analyzed with dimensionality reduction techniques to develop a multi-dimensional aphasia model. Two orthogonal dimensions were found to explain 87% of the variance across aphasia phenotypes and three disease etiologies. The first dimension reflects shared weighting across aphasia subscores and correlates with aphasia severity. The second dimension incorporates fluency and comprehension, thereby separating Wernicke's from Broca's aphasia, and the non-fluent/agrammatic from semantic PPA variants. Clusters representing clinical classifications, including late PPA presentations, were preserved within the two-dimensional space. Early PPA presentations were not classifiable, as specialized batteries are needed for phenotyping. Longitudinal data were further used to visualize the trajectory of aphasias during recovery or disease progression, including the rapid recovery of post-operative aphasic patients. This method has implications for the conceptualization of aphasia as a spectrum disorder across different disease etiologies and may serve as a framework to track the trajectories of aphasia progression and recovery.
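A minimal sketch of the dimensionality reduction described, using PCA over the six WAB subscores. The file layout and column names are placeholders, and the paper's exact preprocessing may differ:

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Hypothetical table: one row per patient, six WAB subscores as columns.
cols = ["information_content", "fluency", "yes_no_questions",
        "auditory_word_recognition", "sequential_commands", "repetition"]
df = pd.read_csv("wab_scores.csv")  # assumed file and layout

X = StandardScaler().fit_transform(df[cols])
pca = PCA(n_components=2).fit(X)

# The paper reports ~87% of variance captured by two orthogonal dimensions:
# the first tracking overall severity, the second separating fluency from
# comprehension.
print(pca.explained_variance_ratio_)
coords = pca.transform(X)  # 2-D coordinates for visualizing patient clusters
```

Projecting repeated assessments of the same patient into this fixed 2-D space is what allows recovery or progression to be visualized as a trajectory.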


2020 ◽  
Vol 10 (12) ◽  
pp. 936
Author(s):  
Yujia Wu ◽  
Jingwen Ma ◽  
Lei Cai ◽  
Zengjian Wang ◽  
Miao Fan ◽  
...  

It is unclear whether the brain activity during phonological processing of second languages (L2) is similar to that of the first language (L1) in trilingual individuals, especially when the L1 is logographic and the L2s are logographic and alphabetic, respectively. To explore this issue, this study examined brain activity during visual and auditory word rhyming tasks in Cantonese–Mandarin–English trilinguals. Thirty Chinese college students whose L1 was Cantonese and whose L2s were Mandarin and English were recruited. Functional magnetic resonance imaging (fMRI) was conducted while subjects performed visual and auditory word rhyming tasks in three languages (Cantonese, Mandarin, and English). The results revealed that when the orthography of the L2 is the same as that of the L1 (Mandarin and Cantonese share the same set of Chinese characters), the brain regions involved in the phonological processing of the L2 differ from those of the L1; when the orthography of the L2 is quite different from that of the L1 (English and Cantonese belong to different writing systems), the brain regions involved in the phonological processing of the L2 are similar to those of the L1. A significant interaction effect was observed between language and modality in the bilateral lingual gyri. Region of interest (ROI) analysis at the lingual gyri revealed greater activation of this region for English than for Cantonese and Mandarin in visual tasks.

