Neural Processing of What and Who Information in Speech

2011 ◽  
Vol 23 (10) ◽  
pp. 2690-2700 ◽  
Author(s):  
Bharath Chandrasekaran ◽  
Alice H. D. Chan ◽  
Patrick C. M. Wong

Human speech is composed of two types of information, related to content (lexical information, i.e., “what” is being said [e.g., words]) and to the speaker (indexical information, i.e., “who” is talking [e.g., voices]). The extent to which lexical versus indexical information is represented separately or integrally in the brain is unresolved. In the current experiment, we use short-term fMRI adaptation to address this issue. Participants performed a loudness judgment task during which single or multiple sets of words/pseudowords were repeated under single-talker (repeat) or multiple-talker (speaker-change) conditions while BOLD responses were collected. As reflected by adaptation fMRI, the left posterior middle temporal gyrus, a crucial component of the ventral auditory stream performing sound-to-meaning computations (“what” pathway), showed sensitivity to lexical as well as indexical information. Previous studies have suggested that speaker information is abstracted away during this stage of auditory word processing. Here, we demonstrate that indexical information is strongly coupled with word information. These findings are consistent with a plethora of behavioral results demonstrating that changes to speaker-related information can influence lexical processing.

2009 ◽  
Vol 21 (11) ◽  
pp. 2085-2099 ◽  
Author(s):  
Cathelijne M. J. Y. Tesink ◽  
Karl Magnus Petersson ◽  
Jos J. A. van Berkum ◽  
Daniëlle van den Brink ◽  
Jan K. Buitelaar ◽  
...  

When interpreting a message, a listener takes into account several sources of linguistic and extralinguistic information. Here we focused on one particular form of extralinguistic information, certain speaker characteristics as conveyed by the voice. Using functional magnetic resonance imaging, we examined the neural structures involved in the unification of sentence meaning and voice-based inferences about the speaker's age, sex, or social background. We found enhanced activation in the inferior frontal gyrus bilaterally (BA 45/47) during listening to sentences whose meaning was incongruent with inferred speaker characteristics. Furthermore, our results showed an overlap in brain regions involved in unification of speaker-related information and those used for the unification of semantic and world knowledge information [inferior frontal gyrus bilaterally (BA 45/47) and left middle temporal gyrus (BA 21)]. These findings provide evidence for a shared neural unification system for linguistic and extralinguistic sources of information and extend the existing knowledge about the role of inferior frontal cortex as a crucial component for unification during language comprehension.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Laura Bechtold ◽  
Christian Bellebaum ◽  
Paul Hoffman ◽  
Marta Ghio

Abstract: This study aimed to replicate and validate concreteness and context effects on semantic word processing. In Experiment 1, we replicated the behavioral findings of Hoffman et al. (Cortex 63, 250–266, https://doi.org/10.1016/j.cortex.2014.09.001, 2015) by applying their cueing paradigm with their original stimuli translated into German. We found concreteness and contextual cues to facilitate word processing in a semantic judgment task with 55 healthy adults. The two factors interacted in their effect on reaction times: abstract word processing profited more strongly from a contextual cue, while the concrete words’ processing advantage was reduced but still present. For accuracy, the descriptive pattern of results suggested an interaction, which was, however, not significant. In Experiment 2, we reformulated the contextual cues to avoid repetition of the to-be-processed word. In 83 healthy adults, the same pattern of results emerged, further validating the findings. Our corroborating evidence supports theories integrating representational richness and semantic control mechanisms as complementary mechanisms in semantic word processing.


2020 ◽  
Vol 15 ◽  
pp. 185-190
Author(s):  
Filiz Mergen ◽  
Gulmira Kuruoglu

A great bulk of research in the psycholinguistic literature has been dedicated to the hemispheric organization of words. Overwhelming evidence suggests that the left hemisphere is primarily responsible for lexical processing. However, non-words, which look similar to real words but lack meaningful associations, are underrepresented in the laterality literature. This study investigated the lateralization of Turkish non-words. Fifty-three Turkish monolinguals performed a lexical decision task in a visual hemifield paradigm. An analysis of their response times revealed left-hemispheric dominance for non-words, adding further support to the literature. The accuracy of their answers, however, was comparable regardless of the field of presentation. The results are discussed in light of psycholinguistic views of word processing.


AAOHN Journal ◽  
1996 ◽  
Vol 44 (8) ◽  
pp. 391-401 ◽  
Author(s):  
Diane M. Dewar

This study identifies gender-specific farm health and safety issues. Based on a sample from the 1988 New York Farm Family Survey, descriptive statistics and exploratory factor analysis were used to establish unique gender-based profiles in terms of labor force participation and prioritization of farm health and safety issues, concerns, and information sources. Based on the factor analysis, women's main farm health and safety issues included physical problems and occupational hazard screening needs, provider integrity, and economic incentives. Men's main issues consisted of accident-related counseling needs, skin-related hazards, and the farm-related convenience of the services. Men and women differed significantly in the types of information sources they used and in their reasons for using farm health and safety services. These differences imply that farm health and safety providers must consider both gender-related information gathering and farm health and safety prioritizations to more efficiently allocate intervention resources, more effectively promote safety, and reduce the incidence of occupationally related morbidity and mortality in agriculture.


2018 ◽  
Vol 30 (8) ◽  
pp. 1130-1144 ◽  
Author(s):  
Simon Nougaret ◽  
Sabrina Ravel

Humans and animals must evaluate the costs and expected benefits of their actions to make adaptive choices. Prior studies have demonstrated the involvement of the basal ganglia in this evaluation. However, little is known about the role played in this process by the external part of the globus pallidus (GPe), which is well positioned to integrate motor and reward-related information. To investigate this role, the activity of 126 neurons was recorded in the associative and limbic parts of the GPe of two monkeys performing a behavioral task in which different levels of force were required to obtain different amounts of liquid reward. The results first revealed that the activity of associative and limbic GPe neurons could be modulated not only by cognitive and limbic but also by motor information at the same time, either during a single period or during different periods throughout the trial, mainly in an independent way. Moreover, as a population, GPe neurons encoded these types of information dynamically throughout the trial, at the moments when each piece of information was most relevant for the achievement of the action. Taken together, these results suggest that GPe neurons could be dedicated to the parallel monitoring of task parameters essential to adjusting and maintaining goal-directed behavior.


Author(s):  
Robert Fiorentino

Research in neurolinguistics examines how language is organized and processed in the human brain. The findings from neurolinguistic studies on language can inform our understanding of the basic ingredients of language and the operations they undergo. In the domain of the lexicon, a major debate concerns whether and to what extent the morpheme serves as a basic unit of linguistic representation, and in turn whether and under what circumstances the processing of morphologically complex words involves operations that identify, activate, and combine morpheme-level representations during lexical processing. Alternative models positing some role for morphemes argue that complex words are processed via morphological decomposition and composition in the general case (full-decomposition models), or only under certain circumstances (dual-route models), while other models do not posit a role for morphemes (non-morphological models), instead arguing that complex words are related to their constituents not via morphological identity, but either via associations among whole-word representations or via similarity in formal and/or semantic features. Two main approaches to investigating the role of morphemes from a neurolinguistic perspective are neuropsychology, in which complex word processing is typically investigated in cases of brain insult or neurodegenerative disease, and brain imaging, which makes it possible to examine the temporal dynamics and neuroanatomy of complex word processing as it occurs in the brain. Neurolinguistic studies on morphology have examined whether the processing of complex words involves brain mechanisms that rapidly segment the input into potential morpheme constituents, how and under what circumstances morpheme representations are accessed from the lexicon, and how morphemes are combined to form complex morphosyntactic and morpho-semantic representations. 
Findings from this literature broadly converge in suggesting a role for morphemes in complex word processing, although questions remain regarding the precise time course by which morphemes are activated, the extent to which morpheme access is constrained by semantic or form properties, as well as regarding the brain mechanisms by which morphemes are ultimately combined into complex representations.


2019 ◽  
Vol 40 (1) ◽  
pp. 231-248
Author(s):  
Andrew Wedel ◽  
Adam Ussishkin ◽  
Adam King

Abstract: Listeners incrementally process words as they hear them, progressively updating inferences about which word is intended as the phonetic signal unfolds in time. As a consequence, phonetic cues positioned early in the signal for a word are on average more informative about word identity, because they disambiguate the intended word from more lexical alternatives than do cues later in the word. In this contribution, we review two new findings about structure in lexicons and phonological grammars, and argue that both arise through the same biases on phonetic reduction and enhancement resulting from incremental processing.

(i) Languages optimize their lexicons over time with respect to the amount of signal allocated to words relative to their predictability: words that are on average less predictable in context tend to be longer, while those that are on average more predictable tend to be shorter. However, the fact that phonetic material earlier in the word plays a larger role in word identification suggests that languages should also optimize the distribution of that information across the word. We review recent work on a range of different languages that supports this hypothesis: less frequent words are not only on average longer, but also contain more highly informative segments early in the word.

(ii) All languages are characterized by phonological grammars of rules describing predictable modifications of pronunciation in context. Because speakers appear to pronounce informative phonetic cues more carefully than less informative ones, it has been predicted that languages should be less likely to evolve phonological rules that reduce lexical contrast at word beginnings. A recent statistical analysis of a cross-linguistic dataset of phonological rules strongly supports this hypothesis.
Taken together, we argue that these findings suggest that the incrementality of lexical processing has wide-ranging effects on the evolution of phonotactic patterns.
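The claim that early segments are more informative can be illustrated with a toy calculation (not from the paper; the lexicon and function name here are invented for illustration): for each position in a word, we compute the average surprisal of the segment at that position, given the cohort of lexical items still consistent with the prefix heard so far.

```python
import math

# Hypothetical toy lexicon in which words share rimes but differ in onsets,
# so lexical contrast is concentrated at word beginnings.
lexicon = ["bat", "cat", "hat", "mat", "rat", "sat"]

def surprisal_by_position(words):
    """Average surprisal (in bits) of the segment at each position,
    computed over the cohort of words consistent with the prefix so far."""
    max_len = max(len(w) for w in words)
    averages = []
    for i in range(max_len):
        total, count = 0.0, 0
        for w in words:
            if len(w) <= i:
                continue
            # Words still consistent with the first i segments of w.
            cohort = [v for v in words if len(v) > i and v[:i] == w[:i]]
            # Probability of w's next segment within that cohort.
            p = sum(1 for v in cohort if v[i] == w[i]) / len(cohort)
            total += -math.log2(p)
            count += 1
        averages.append(total / count)
    return averages

print([round(x, 3) for x in surprisal_by_position(lexicon)])
# → [2.585, 0.0, 0.0]
```

In this extreme lexicon all disambiguating information is front-loaded: the onset carries log2(6) ≈ 2.585 bits, and the rime is fully predictable. Real lexicons fall between this and the reverse pattern, but the averaging logic is the same cohort-style computation assumed by incremental models of spoken-word recognition.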


2015 ◽  
Vol 6 (1) ◽  
pp. 227-234 ◽  
Author(s):  
Mei Jiang ◽  
Li-Xia Yang ◽  
Lin Jia ◽  
Xin Shi ◽  
Hong Wang ◽  
...  

Abstract: Objective: The aim of this study is to evaluate variations in cortical activation in early and late Uygur-Chinese bilinguals from the Xinjiang Uygur Autonomous Region of China. Methodology: Functional magnetic resonance imaging (fMRI) was performed during a semantic judgment task with visual presentation of a single Chinese or Uygur word. The fMRI data on the cortical areas and volumes activated by each language were analyzed. Results: The first language (L1) and second language (L2) activated language-related hemispheric regions, including the left inferior frontal and parietal cortices, and L1 specifically activated the left middle temporal gyrus. For both L1 and L2, cortical activation was greater in the left hemisphere, and there was no significant difference in the lateralization index (LI) between the two languages (p > 0.05). Although the total activated cortical areas were larger in early than late bilinguals, the activation volumes were not significantly different. Conclusion: Activated brain areas in early and late fluent bilinguals largely overlapped. However, these areas were more scattered upon presentation of L2 than L1, and L1 had a more specific pattern of activation than L2. For both languages, the left hemisphere was dominant. We found that L2 proficiency level, rather than age of acquisition, had the greater influence on which brain areas were activated during semantic processing.


2005 ◽  
Vol 26 (4) ◽  
pp. 479-504 ◽  
Author(s):  
PAVEL TROFIMOVICH

The present study investigated whether and to what extent auditory word priming, which is one mechanism of spoken-word processing and learning, is involved in a second language (L2). The objectives of the study were to determine whether L2 learners use auditory word priming as monolinguals do when they are acquiring an L2, how attentional processing orientation influences the extent to which they do so, and what L2 learners actually “learn” as they use auditory word priming. Results revealed that L2 learners use auditory word priming, that the extent to which they do so depends little on attention to the form of spoken input, and that L2 learners overrely on detailed context-specific information available in spoken input as they use auditory word priming.

