Similarity of Computations Across Domains Does Not Imply Shared Implementation: The Case of Language Comprehension

2021 ◽  
Vol 30 (6) ◽  
pp. 526-534
Author(s):  
Evelina Fedorenko ◽  
Cory Shain

Understanding language requires applying cognitive operations (e.g., memory retrieval, prediction, structure building) that are relevant across many cognitive domains to specialized knowledge structures (e.g., a particular language’s lexicon and syntax). Are these computations carried out by domain-general circuits or by circuits that store domain-specific representations? Recent work has characterized the roles in language comprehension of the language network, which is selective for high-level language processing, and the multiple-demand (MD) network, which has been implicated in executive functions and linked to fluid intelligence and thus is a prime candidate for implementing computations that support information processing across domains. The language network responds robustly to diverse aspects of comprehension, but the MD network shows no sensitivity to linguistic variables. We therefore argue that the MD network does not play a core role in language comprehension and that past findings suggesting the contrary are likely due to methodological artifacts. Although future studies may reveal some aspects of language comprehension that require the MD network, evidence to date suggests that those will not be related to core linguistic processes such as lexical access or composition. The finding that the circuits that store linguistic knowledge carry out computations on those representations aligns with general arguments against the separation of memory and computation in the mind and brain.

2021 ◽  
Author(s):  
Tamar I Regev ◽  
Josef Affourtit ◽  
Xuanyi Chen ◽  
Abigail E Schipper ◽  
Leon Bergen ◽  
...  

A network of left frontal and temporal brain regions supports 'high-level' language processing (including the processing of word meanings as well as word-combinatorial processing) across presentation modalities. This 'core' language network has been argued to store our knowledge of words and constructions as well as constraints on how those combine to form sentences. However, our linguistic knowledge additionally includes information about sounds (phonemes) and how they combine to form clusters, syllables, and words. Is this knowledge of phoneme combinatorics also represented in these language regions? Across five fMRI experiments, we investigated the sensitivity of high-level language processing brain regions to sub-lexical linguistic sound patterns by examining responses to diverse nonwords, i.e., sequences of sounds/letters that do not constitute real words (e.g., punes, silory, flope). We establish robust responses in the language network to visually (Experiment 1a, n=605) and auditorily (Experiments 1b, n=12, and 1c, n=13) presented nonwords relative to baseline. In Experiment 2 (n=16), we find stronger responses to nonwords that obey the phoneme-combinatorial constraints of English. Finally, in Experiment 3 (n=14) and a post-hoc analysis of Experiment 2, we provide suggestive evidence that the responses in Experiments 1 and 2 are not due to the activation of real words that share some phonology with the nonwords. The results suggest that knowledge of phoneme combinatorics and representations of sub-lexical linguistic sound patterns are stored within the same fronto-temporal network that stores higher-level linguistic knowledge and supports word and sentence comprehension.


1996 ◽  
Vol 55 ◽  
pp. 175-186
Author(s):  
Richard J. Towell

In this article it is argued first that linguistic knowledge consists of two components, linguistic competence and learned linguistic knowledge, and that these components are created in the mind of the second language learner by different processes. It is further argued that these two kinds of knowledge must be stored in the mind as proceduralised knowledge, through a process of automatization or proceduralisation, in order to permit fluent language processing. Using evidence gathered from undergraduate learners of French, these two hypotheses are investigated. The acquisition of competence is investigated through grammaticality judgement tests; the acquisition of proceduralised knowledge is investigated through the measurement of temporal variables. In relation to the acquisition of linguistic competence, the results suggest that learners do not re-set parameters even after a lengthy period of exposure to the L2, but that they may mimic the L2 on the basis of the L1. In relation to the proceduralisation of linguistic knowledge, the results suggest that learners do not possess the L2 knowledge in the same way as the L1 knowledge but that specific aspects of the knowledge are proceduralised over time. It is expected that further investigation of the data set will enable more detailed statements about exactly what kind of knowledge has been acquired and proceduralised and what has not.


2019 ◽  
Author(s):  
Cory Shain ◽  
Idan Asher Blank ◽  
Marten van Schijndel ◽  
William Schuler ◽  
Evelina Fedorenko

Abstract
Much research in cognitive neuroscience supports prediction as a canonical computation of cognition across domains. Is such predictive coding implemented by feedback from higher-order domain-general circuits, or is it locally implemented in domain-specific circuits? What information sources are used to generate these predictions? This study addresses these two questions in the context of language processing. We present fMRI evidence from a naturalistic comprehension paradigm (1) that predictive coding in the brain’s response to language is domain-specific, and (2) that these predictions are sensitive both to local word co-occurrence patterns and to hierarchical structure. Using a recently developed continuous-time deconvolutional regression technique that supports data-driven hemodynamic response function discovery from continuous BOLD signal fluctuations in response to naturalistic stimuli, we found effects of prediction measures in the language network but not in the domain-general multiple-demand network, which supports executive control processes and has been previously implicated in language comprehension. Moreover, within the language network, surface-level and structural prediction effects were separable. The predictability effects in the language network were substantial, with the model capturing over 37% of explainable variance on held-out data. These findings indicate that human sentence processing mechanisms generate predictions about upcoming words using cognitive processes that are sensitive to hierarchical structure and specialized for language processing, rather than via feedback from high-level executive control mechanisms.


2012 ◽  
Vol 2 (4) ◽  
pp. 31-44
Author(s):  
Mohamed H. Haggag ◽  
Bassma M. Othman

Context processing plays an important role in many Natural Language Processing applications, and sentence ordering is one of the critical tasks in text generation. The order of sentences in the raw source texts need not carry over to the generated text, so chronological sentence ordering is of high importance in this regard. Some studies have followed linguistic syntactic analysis, and others have used statistical approaches. This paper proposes a new model for sentence ordering based on semantic analysis, in which word-level semantics seeds sentence-level semantic relations. The model introduces a clustering technique based on the relatedness of sentence senses. Sentences are then chronologically ordered through two main steps: overlap detection and chronological cause-effect rules. Overlap detection drills down into each cluster to step through its sentences in chronological sequence, while cause-effect rules form the linguistic knowledge controlling relations between sentences. Evaluation showed that the proposed model can process texts of any size, is not domain-specific, and is open to extending the cause-effect rules for specific ordering needs.
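The pipeline described above (relatedness-based clustering followed by cause-effect ordering) can be illustrated with a minimal sketch. This is not the authors' implementation: the Jaccard word-overlap measure, the greedy clustering, the cue-word list, and the threshold are all simplifying assumptions standing in for the paper's sense-relatedness and rule components.

```python
# Hypothetical sketch of relatedness-based clustering plus cause-effect ordering.
# All names, cue words, and thresholds here are illustrative assumptions.

def relatedness(s1, s2):
    """Jaccard overlap of word sets -- a crude stand-in for sense relatedness."""
    w1, w2 = set(s1.lower().split()), set(s2.lower().split())
    return len(w1 & w2) / len(w1 | w2)

def cluster(sentences, threshold=0.1):
    """Greedy clustering: each sentence joins the first cluster it relates to."""
    clusters = []
    for s in sentences:
        for c in clusters:
            if any(relatedness(s, other) >= threshold for other in c):
                c.append(s)
                break
        else:
            clusters.append([s])
    return clusters

CAUSE_CUES = ("because", "since", "as a result of")  # assumed, extensible rule set

def order_cluster(cluster_sents):
    """Toy cause-effect rule: sentences with causal cues precede the rest."""
    causes = [s for s in cluster_sents if any(c in s.lower() for c in CAUSE_CUES)]
    effects = [s for s in cluster_sents if s not in causes]
    return causes + effects
```

For example, "The dam failed because of heavy rain." and "The village flooded." share enough vocabulary to fall in one cluster, and the causal cue places the first sentence before the second; an unrelated sentence ends up in its own cluster.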


2018 ◽  
Vol 120 (5) ◽  
pp. 2555-2570 ◽  
Author(s):  
Brianna L. Pritchett ◽  
Caitlyn Hoeflin ◽  
Kami Koldewyn ◽  
Eyal Dechter ◽  
Evelina Fedorenko

A set of left frontal, temporal, and parietal brain regions respond robustly during language comprehension and production (e.g., Fedorenko E, Hsieh PJ, Nieto-Castañón A, Whitfield-Gabrieli S, Kanwisher N. J Neurophysiol 104: 1177–1194, 2010; Menenti L, Gierhan SM, Segaert K, Hagoort P. Psychol Sci 22: 1173–1182, 2011). These regions have been further shown to be selective for language relative to other cognitive processes, including arithmetic, aspects of executive function, and music perception (e.g., Fedorenko E, Behr MK, Kanwisher N. Proc Natl Acad Sci USA 108: 16428–16433, 2011; Monti MM, Osherson DN. Brain Res 1428: 33–42, 2012). However, one claim about overlap between language and nonlinguistic cognition remains prominent. In particular, some have argued that language processing shares computational demands with action observation and/or execution (e.g., Rizzolatti G, Arbib MA. Trends Neurosci 21: 188–194, 1998; Koechlin E, Jubault T. Neuron 50: 963–974, 2006; Tettamanti M, Weniger D. Cortex 42: 491–494, 2006). However, the evidence for these claims is indirect, based on observing activation for language and action tasks within the same broad anatomical areas (e.g., on the lateral surface of the left frontal lobe). To test whether language indeed shares machinery with action observation/execution, we examined the responses of language brain regions, defined functionally in each individual participant (Fedorenko E, Hsieh PJ, Nieto-Castañón A, Whitfield-Gabrieli S, Kanwisher N. J Neurophysiol 104: 1177–1194, 2010), to action observation (experiments 1, 2, and 3a) and action imitation (experiment 3b). With the exception of the language region in the angular gyrus, all language regions, including those in the inferior frontal gyrus (within “Broca’s area”), showed little or no response during action observation/imitation.
These results add to the growing body of literature suggesting that high-level language regions are highly selective for language processing (see Fedorenko E, Varley R. Ann NY Acad Sci 1369: 132–153, 2016 for a review).
NEW & NOTEWORTHY
Many have argued for overlap in the machinery used to interpret language and others’ actions, either because action observation was a precursor to linguistic communication or because both require interpreting hierarchically structured stimuli. However, existing evidence is indirect, relying on group analyses or reverse inference. We examined responses to action observation in language regions defined functionally in individual participants and found no response. Thus language comprehension and action observation recruit distinct circuits in the modern brain.


Author(s):  
Nicolás José Fernández-Martínez ◽  
Carlos Periñán-Pascual

Location-based systems require rich geospatial data in emergency and crisis-related situations (e.g. earthquakes, floods, terrorist attacks, car accidents or pandemics) for the geolocation of not only a given incident but also the affected places and people in need of immediate help, which could potentially save lives and prevent further damage to urban or environmental areas. Given the sparsity of geotagged tweets, geospatial data must be obtained from the locative references mentioned in textual data such as tweets. In this context, we introduce nLORE (neural LOcative Reference Extractor), a deep-learning system that serves to detect locative references in English tweets by making use of the linguistic knowledge provided by LORE. nLORE, which captures fine-grained complex locative references of any type, outperforms not only LORE, but also well-known general-purpose or domain-specific off-the-shelf entity-recognizer systems, both qualitatively and quantitatively. However, LORE shows much better runtime efficiency, which is especially important in emergency-based and crisis-related scenarios that demand quick intervention to send first responders to affected areas and people. This highlights the often undervalued yet very important role of rule-based models in natural language processing for real-life and real-time scenarios.


Author(s):  
Leila Wehbe ◽  
Idan Asher Blank ◽  
Cory Shain ◽  
Richard Futrell ◽  
Roger Levy ◽  
...  

Abstract
What role do domain-general executive functions play in human language comprehension? To address this question, we examine the relationship between behavioral measures of comprehension and neural activity in the domain-general “multiple demand” (MD) network, which has been linked to constructs like attention, working memory, inhibitory control, and selection, and implicated in diverse goal-directed behaviors. Specifically, fMRI data collected during naturalistic story listening are compared to theory-neutral measures of online comprehension difficulty and incremental processing load (reading times and eye-fixation durations). Critically, to ensure that variance in these measures is driven by features of the linguistic stimulus rather than reflecting participant- or trial-level variability, the neuroimaging and behavioral datasets were collected in non-overlapping samples. We find no behavioral-neural link in functionally localized MD regions; instead, this link is found in the domain-specific, fronto-temporal “core language network”, in both left hemispheric areas and their right hemispheric homologues. These results argue against strong involvement of domain-general executive circuits in language comprehension.


2021 ◽  
Author(s):  
Cory Shain ◽  
Idan A. Blank ◽  
Evelina Fedorenko ◽  
Edward Gibson ◽  
William Schuler

Abstract
A standard view of human language processing is that comprehenders build richly structured mental representations of natural language utterances, word by word, using computationally costly memory operations supported by domain-general working memory resources. However, three core claims of this view have been questioned, with some prior work arguing that (1) rich word-by-word structure building is not a core function of the language comprehension system, (2) apparent working memory costs are underlyingly driven by word predictability (surprisal), and/or (3) language comprehension relies primarily on domain-general rather than domain-specific working memory resources. In this work, we simultaneously evaluate all three of these claims using naturalistic comprehension in fMRI. In each participant, we functionally localize (a) a language-selective network and (b) a ‘multiple-demand’ network that supports working memory across domains, and we analyze the responses in these two networks of interest during naturalistic story listening with respect to a range of theory-driven predictors of working memory demand under rigorous surprisal controls. Results show robust surprisal-independent effects of word-by-word memory demand in the language network and no effect of working memory demand in the multiple demand network. Our findings thus support the view that language comprehension (1) entails word-by-word structure building using (2) computationally intensive memory operations that are not explained by surprisal. However, these results challenge (3) the domain-generality of the resources that support these operations, instead indicating that working memory operations for language comprehension are carried out by the same neural resources that store linguistic knowledge.
Significance Statement
This study uses fMRI to investigate signatures of working memory (WM) demand during naturalistic story listening, using a broad range of theoretically motivated estimates of WM demand. Results support a strong effect of WM demand in language-selective brain regions but no effect of WM demand in “multiple demand” regions that have previously been associated with WM in non-linguistic domains. We further show evidence that WM effects in language regions are distinct from effects of word predictability. Our findings support a core role for WM in incremental language processing, using WM resources that are specialized for language.


2016 ◽  
Author(s):  
Idan A. Blank ◽  
Melissa C. Duff ◽  
Sarah Brown-Schmidt ◽  
Evelina Fedorenko

Abstract
Language processing requires us to encode linear relations between acoustic forms and map them onto hierarchical relations between meaning units. Such relational binding of linguistic elements might recruit the hippocampus given its engagement by similar operations in other cognitive domains. Historically, hippocampal engagement in online language use has received little attention because patients with hippocampal damage are not aphasic. However, recent studies have found that these patients exhibit language impairments when the demands on flexible relational binding are high, suggesting that the hippocampus does, in fact, contribute to linguistic processing. A fundamental question is thus whether language processing engages domain-general hippocampal mechanisms that are also recruited across other cognitive processes or whether, instead, it relies on certain language-selective areas within the hippocampus. To address this question, we conducted the first systematic analysis of hippocampal engagement during comprehension in healthy adults (n=150 across three experiments) using fMRI. Specifically, we functionally localized putative “language regions” within the hippocampus using a language comprehension task, and found that these regions (i) were selectively engaged by language but not by six non-linguistic tasks; and (ii) were coupled in their activity with the cortical language network during both “rest” and especially story comprehension, but not with the domain-general “multiple-demand (MD)” network. This functional profile did not generalize to other hippocampal regions that were localized using a non-linguistic, working memory task. These findings suggest that some hippocampal mechanisms that maintain and integrate information during language comprehension are not domain-general but rather belong to the language-specific brain network.
Significance Statement
According to popular views, language processing is exclusively supported by neocortical mechanisms. However, recent patient studies suggest that language processing may also require the hippocampus, especially when relations among linguistic elements have to be flexibly integrated and maintained. Here, we address a core question about the place of the hippocampus in the cognitive architecture of language: are certain hippocampal operations language-specific rather than domain-general? By extensively characterizing hippocampal recruitment during language comprehension in healthy adults using fMRI, we show that certain hippocampal subregions exhibit signatures of language specificity in both their response profiles and their patterns of activity synchronization with known functional regions in the neocortex. We thus suggest that the hippocampus is a satellite constituent of the language network.


2016 ◽  
Vol 20 (3) ◽  
Author(s):  
Emily M. Bender

Abstract
This paper explores the ways in which the field of natural language processing (NLP) can and does benefit from work in linguistic typology. I describe the recent increase in interest in multilingual natural language processing and give a high-level overview of the field. I then turn to a discussion of how linguistic knowledge in general is incorporated in NLP technology before describing how typological results in particular are used. I consider both rule-based and machine learning approaches to NLP and review literature on predicting typological features as well as that which leverages such features.

