Voxelwise encoding models show that cerebellar language representations are highly conceptual

2021 ◽  
Author(s):  
Amanda LeBel ◽  
Shailee Jain ◽  
Alexander G. Huth

Abstract: There is a growing body of research demonstrating that the cerebellum is involved in language understanding. Early theories assumed that the cerebellum is involved in low-level language processing. However, those theories are at odds with recent work demonstrating cerebellar activation during cognitive tasks. Using natural language stimuli and an encoding model framework, we performed an fMRI experiment in which subjects passively listened to five hours of natural language stimuli, allowing us to analyze language processing in the cerebellum with higher precision than previous work. We used these data to fit voxelwise encoding models with five different feature spaces that span the hierarchy of language processing from acoustic input to high-level conceptual processing. Examining the prediction performance of these models on separate BOLD data shows that cerebellar responses to language are almost entirely explained by high-level conceptual language features rather than low-level acoustic or phonemic features. Additionally, we found that the cerebellum has a higher proportion of voxels that represent social semantic categories, which include “social” and “people” words, and lower representation of all other semantic categories, including “mental”, “concrete”, and “place” words, than cortex. This suggests that the cerebellum represents language at a conceptual level with a preference for social information.

Significance Statement: Recent work has demonstrated that, beyond its typical role in motor planning, the cerebellum is implicated in a wide variety of tasks, including language. However, little is known about language representations in the cerebellum, or how those representations compare to cortex. Using voxelwise encoding models and natural language fMRI data, we demonstrate here that language representations in the cerebellum differ significantly from those in cortex. Cerebellar language representations are almost entirely semantic, and the cerebellum contains an over-representation of social semantic information relative to cortex. These results suggest that the cerebellum is not involved in language processing per se, but in cognitive processing more generally.
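
As a rough sketch of the voxelwise encoding framework described above (not the authors' code), the snippet below fits a ridge regression from one stimulus feature space to BOLD responses and scores each voxel by the correlation between predicted and held-out time courses; the array names and the single regularization parameter are assumptions.

```python
# Minimal sketch, assuming X_train/X_test are stimulus feature matrices
# (time x features) for one feature space and Y_train/Y_test are BOLD
# responses (time x voxels); all names and the fixed alpha are illustrative.
import numpy as np
from sklearn.linear_model import Ridge

def fit_voxelwise_encoding(X_train, Y_train, X_test, Y_test, alpha=1.0):
    """Fit ridge weights from features to voxel responses and score each
    voxel by the Pearson correlation between predicted and held-out BOLD."""
    model = Ridge(alpha=alpha).fit(X_train, Y_train)
    Y_pred = model.predict(X_test)

    # Normalize each voxel's predicted and actual test time course, then
    # average their product over time to get one correlation per voxel.
    Yp = (Y_pred - Y_pred.mean(0)) / (Y_pred.std(0) + 1e-8)
    Yt = (Y_test - Y_test.mean(0)) / (Y_test.std(0) + 1e-8)
    return (Yp * Yt).mean(0)
```

Comparing these per-voxel scores across feature spaces (acoustic, phonemic, conceptual, and so on) is what allows the kind of model comparison the abstract describes.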

2017 ◽  
Vol 2017 ◽  
pp. 1-11 ◽  
Author(s):  
Isis Truck ◽  
Mohammed-Amine Abchir

In the geolocation field, where high-level programs and low-level devices coexist, it is often difficult to find a user-friendly interface for configuring all the parameters. The challenge addressed in this paper is to propose interfaces that are intuitive and simple, namely natural language interfaces, for interacting with low-level devices. Such interfaces combine natural language processing (NLP) with fuzzy representations of words, which facilitate the elicitation of business-level objectives in our context. A complete methodology is proposed, from lexicon construction to a dialogue software agent that includes a fuzzy linguistic representation based on synonymy.
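
The following toy sketch illustrates the general idea of a fuzzy linguistic representation for a geolocation dialogue; the distance terms, breakpoints, and units are invented for illustration and are not taken from the paper.

```python
# Illustrative only: map linguistic distance terms a user might say to
# trapezoidal fuzzy membership functions over distance in metres.
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 outside [a, d], 1 on [b, c], linear in between."""
    if x < a or x > d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Hypothetical partition of the distance axis for three linguistic terms.
TERMS = {
    "very close": (0, 0, 20, 50),
    "close":      (20, 50, 150, 300),
    "far":        (150, 300, 1000, 2000),
}

def fuzzify(distance_m):
    """Return the degree to which each linguistic term applies to a distance."""
    return {term: trapezoid(distance_m, *params) for term, params in TERMS.items()}

print(fuzzify(200))  # mostly "close", partly "far"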


Author(s):  
Abraham Sanders ◽  
Rachael White ◽  
Lauren Severson ◽  
Rufeng Ma ◽  
Richard McQueen ◽  
...  

In this exploratory study, we scrutinize a database of over 1 million tweets collected across the first five months of 2020 to draw conclusions about public attitudes towards the preventative measure of mask usage during the COVID-19 pandemic. In recent months, a body of literature has emerged to suggest the robustness of trends in online activity as proxies for the epidemiological and sociological impact of COVID-19. We employ natural language processing, clustering, and sentiment analysis techniques to organize tweets relating to mask-wearing into high-level themes, then relay narratives for individual clusters through automatic text summarization. We find that topic clustering and visualization based on mask-related Twitter data offer revealing insights into societal perceptions of COVID-19 and techniques for its prevention. We observe that the volume and polarity of mask-related tweets have greatly increased. Importantly, the analysis pipeline presented here can be leveraged by the health community to assess public response to health interventions in the ongoing global health crisis.
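
A minimal sketch of the kind of theme-clustering step the abstract describes (not the authors' pipeline): TF-IDF features, k-means clusters, and the highest-weighted terms per cluster. The parameter values are assumptions, and the sentiment scoring and summarization stages are omitted.

```python
# Minimal sketch: group mask-related tweets into high-level themes and
# surface each theme's most characteristic terms.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def cluster_tweets(tweets, n_clusters=5, top_terms=8):
    vec = TfidfVectorizer(stop_words="english", max_features=5000)
    X = vec.fit_transform(tweets)                      # tweets x vocabulary
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)

    terms = np.array(vec.get_feature_names_out())
    themes = {
        k: terms[km.cluster_centers_[k].argsort()[::-1][:top_terms]].tolist()
        for k in range(n_clusters)
    }
    return km.labels_, themes                          # theme id per tweet, top terms per theme
```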


2010 ◽  
Vol 36 (3) ◽  
pp. 341-387 ◽  
Author(s):  
Nitin Madnani ◽  
Bonnie J. Dorr

The task of paraphrasing is inherently familiar to speakers of all languages. Moreover, the task of automatically generating or extracting semantic equivalences for the various units of language—words, phrases, and sentences—is an important part of natural language processing (NLP) and is being increasingly employed to improve the performance of several NLP applications. In this article, we attempt to conduct a comprehensive and application-independent survey of data-driven phrasal and sentential paraphrase generation methods, while also conveying an appreciation for the importance and potential use of paraphrases in the field of NLP research. Recent work done in manual and automatic construction of paraphrase corpora is also examined. We also discuss the strategies used for evaluating paraphrase generation techniques and briefly explore some future trends in paraphrase generation.


Author(s):  
Davide Picca ◽  
Dominique Jaccard ◽  
Gérald Eberlé

In recent decades, Natural Language Processing (NLP) has achieved a high level of success. Interactions between NLP and Serious Games have begun, and some Serious Games already include NLP techniques. The objectives of this paper are twofold: on the one hand, to provide a simple framework for analyzing potential uses of NLP in Serious Games and, on the other hand, to apply this framework to existing Serious Games and give an overview of the use of NLP in pedagogical Serious Games. In this paper we present 11 Serious Games exploiting NLP techniques. We present them systematically, according to the following structure: first, we highlight possible uses of NLP techniques in Serious Games; second, we describe the type of NLP implemented in each specific Serious Game; and third, we relate each use to the possible purposes of the different actors interacting in the Serious Game.


2013 ◽  
Vol 846-847 ◽  
pp. 1239-1242
Author(s):  
Yang Yang ◽  
Hui Zhang ◽  
Yong Qi Wang

This paper presents our recent work towards the development of a voice calculator based on speech error correction and natural language processing. The calculator enhances speech recognition accuracy by classifying and summarizing recognition errors in the domain of spoken numerical calculation, constructing a Pinyin-text mapping library and replacement rules, and combining priority and memory correction mechanisms for the Pinyin-text mapping. For the correctly recognized expression, the calculator uses a recursive-descent parsing algorithm and a synthesized-attribute computation algorithm to calculate the final result, which it outputs through a TTS engine. The implementation of this voice calculator makes the calculator more user-friendly and intelligent.
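
The expression-evaluation stage can be illustrated with a small recursive-descent parser whose synthesized attribute is the numeric value of each grammar rule. The grammar below is a generic arithmetic grammar assumed for illustration, not the paper's exact one, and the error-correction and TTS stages are omitted.

```python
# Sketch of recursive-descent parsing with a synthesized numeric attribute,
# applied to an already-corrected recognition result.
import re

def evaluate(expression):
    tokens = re.findall(r"\d+\.?\d*|[+\-*/()]", expression)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def take():
        nonlocal pos
        tok = tokens[pos]; pos += 1
        return tok

    def expr():                      # expr := term (('+' | '-') term)*
        value = term()
        while peek() in ("+", "-"):
            op, rhs = take(), term()
            value = value + rhs if op == "+" else value - rhs
        return value

    def term():                      # term := factor (('*' | '/') factor)*
        value = factor()
        while peek() in ("*", "/"):
            op, rhs = take(), factor()
            value = value * rhs if op == "*" else value / rhs
        return value

    def factor():                    # factor := number | '(' expr ')'
        if peek() == "(":
            take()                   # consume '('
            value = expr()
            take()                   # consume ')'
            return value
        return float(take())

    return expr()

print(evaluate("3 + 4 * (2 - 1)"))   # 7.0
```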


2013 ◽  
Vol 13 (4-5) ◽  
pp. 487-501 ◽  
Author(s):  
ROLF SCHWITTER

Abstract: In this paper we take on Stuart C. Shapiro's challenge of solving the Jobs Puzzle automatically and do this via controlled natural language processing. Instead of encoding the puzzle in a formal language that might be difficult to use and understand, we employ a controlled natural language as a high-level specification language that adheres closely to the original notation of the puzzle and allows us to reconstruct the puzzle in a machine-processable way and add missing and implicit information to the problem description. We show how the resulting specification can be translated into an answer set program and be processed by a state-of-the-art answer set solver to find the solutions to the puzzle.
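
As a toy illustration of the target formalism (not the paper's controlled-language pipeline or its full encoding), the fragment below states a couple of Jobs Puzzle constraints as an answer set program and solves it with the clingo Python API, assuming the clingo package is installed.

```python
# Toy fragment only: a few Jobs Puzzle facts plus one controlled-language
# constraint ("Roberta is not the chef") rendered as answer set rules.
import clingo

PROGRAM = """
person(roberta; thelma; steve; pete).
job(chef; guard; nurse; telephone_operator; police_officer; teacher; actor; boxer).

% Each person holds exactly two of the eight jobs.
2 { holds(P, J) : job(J) } 2 :- person(P).
% Each job is held by exactly one person.
1 { holds(P, J) : person(P) } 1 :- job(J).

% "Roberta is not the chef."
:- holds(roberta, chef).
"""

ctl = clingo.Control()
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda model: print(model))   # prints one satisfying assignment
```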


2020 ◽  
Vol 9 (2) ◽  
pp. 55-62
Author(s):  
Michael Holsworth
A fundamental skill required for vocabulary development is word recognition ability. According to Perfetti (1985), low-level cognitive processing involved in word recognition must be automatic and efficient so that cognitive resources can be allocated to the high-level processes, such as inferencing and schema activation, needed for reading comprehension. The low-level processes include orthographic knowledge, semantic knowledge, and phonological awareness. These low-level processes must be efficient, fluent, and automatic in second language readers in order for them to achieve the ultimate goal of reading comprehension. This article briefly describes the concept of word recognition, its relation to vocabulary, and three tests designed to measure the three components of word recognition (orthographic, semantic, and phonological knowledge) in a longitudinal study that investigated the effects of word recognition training on reading comprehension.


Author(s):  
S. Jeffrey ◽  
J. Richards ◽  
F. Ciravegna ◽  
S. Waller ◽  
S. Chapman ◽  
...  

This paper describes ‘Archaeotools’, a major e-Science project in archaeology. The aim of the project is to use faceted classification and natural language processing to create an advanced infrastructure for archaeological research. The project aims to integrate over 1×10^6 structured database records referring to archaeological sites and monuments in the UK, with information extracted from semi-structured grey literature reports, and unstructured antiquarian journal accounts, in a single faceted browser interface. The project has illuminated the variable level of vocabulary control and standardization that currently exists within national and local monument inventories. Nonetheless, it has demonstrated that the relatively well-defined ontologies and thesauri that exist in archaeology mean that a high level of success can be achieved using information extraction techniques. This has great potential for unlocking and making accessible the information held in grey literature and antiquarian accounts, and has lessons for allied disciplines.
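
The core idea of a faceted browse over structured monument records can be sketched as follows; the facet names and records are made up for illustration and do not reflect the Archaeotools schema.

```python
# Illustrative only: counting facet values and narrowing a result set,
# the two basic operations behind a faceted browser interface.
RECORDS = [
    {"monument_type": "barrow",   "period": "Bronze Age", "county": "Wiltshire"},
    {"monument_type": "hillfort", "period": "Iron Age",   "county": "Dorset"},
    {"monument_type": "barrow",   "period": "Bronze Age", "county": "Dorset"},
]

def facet_counts(records, facet):
    """Count how many records fall under each value of one facet."""
    counts = {}
    for r in records:
        counts[r[facet]] = counts.get(r[facet], 0) + 1
    return counts

def refine(records, **selected):
    """Narrow the result set by the facet values a user has selected."""
    return [r for r in records if all(r[f] == v for f, v in selected.items())]

print(facet_counts(RECORDS, "period"))    # {'Bronze Age': 2, 'Iron Age': 1}
print(refine(RECORDS, county="Dorset"))   # the two Dorset records
```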


2008 ◽  
Vol 71 (9) ◽  
pp. 1768-1773 ◽  
Author(s):  
JOSEPH M. BOSILEVAC ◽  
MOHAMMAD KOOHMARAIE

Recent work from our laboratory revealed that tryptic soy broth (TSB) was a superior enrichment medium for use in test-and-hold Escherichia coli O157:H7 methods at levels down to a ratio of three volumes of medium to one volume of sample. Lower ratios were examined for their effect on results obtained from culture isolation, the BAX E. coli O157:H7 MP assay, and the Assurance GDS E. coli O157:H7 assay. Ground beef and boneless beef trim were inoculated with a high level (170 CFU/65 g of ground beef and 43 CFU/65 g of trim) and a low level (17 CFU/65 g of ground beef and 4 CFU/65 g of trim) of E. coli O157:H7 and enriched in 3, 1, 0.5, and 0 volumes of TSB. The volume of TSB used did not affect E. coli O157:H7 detection by culture isolation, Assurance GDS detection in ground beef or trim, or the BAX MP assay detection in ground beef. However, BAX MP assay detection of E. coli O157:H7 in beef trim was 50, 42, and 33% positive when enrichment volumes of 0.5×, 1×, and 3×, respectively, were used. Optimum results with all methods were obtained using 1 volume of TSB. We concluded that detection test results can be considered valid as long as enrichment medium is used, even when it is less than the specified 3 or 10 volumes.


2014 ◽  
Vol 112 (5) ◽  
pp. 1105-1118 ◽  
Author(s):  
Idan Blank ◽  
Nancy Kanwisher ◽  
Evelina Fedorenko

What is the relationship between language and other high-level cognitive functions? Neuroimaging studies have begun to illuminate this question, revealing that some brain regions are quite selectively engaged during language processing, whereas other “multiple-demand” (MD) regions are broadly engaged by diverse cognitive tasks. Nonetheless, the functional dissociation between the language and MD systems remains controversial. Here, we tackle this question with a synergistic combination of functional MRI methods: we first define candidate language-specific and MD regions in each subject individually (using functional localizers) and then measure blood oxygen level-dependent signal fluctuations in these regions during two naturalistic conditions (“rest” and story-comprehension). In both conditions, signal fluctuations strongly correlate among language regions as well as among MD regions, but correlations across systems are weak or negative. Moreover, data-driven clustering analyses based on these inter-region correlations consistently recover two clusters corresponding to the language and MD systems. Thus although each system forms an internally integrated whole, the two systems dissociate sharply from each other. This independent recruitment of the language and MD systems during cognitive processing is consistent with the hypothesis that these two systems support distinct cognitive functions.
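
A rough sketch of the correlation-and-clustering logic described above (not the authors' analysis code): given BOLD time courses for a set of functionally defined regions, compute inter-region correlations and check whether data-driven clustering recovers two systems. The region labels here are placeholders.

```python
# Sketch: inter-region correlation matrix from BOLD time courses, followed
# by hierarchical clustering into a fixed number of systems.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_regions(timecourses, labels, n_clusters=2):
    """timecourses: array of shape (timepoints, regions); labels: region names."""
    corr = np.corrcoef(timecourses.T)            # region x region correlations
    dist = 1.0 - corr                            # turn similarity into distance
    condensed = dist[np.triu_indices_from(dist, k=1)]
    Z = linkage(condensed, method="average")
    assignment = fcluster(Z, t=n_clusters, criterion="maxclust")
    return dict(zip(labels, assignment))         # e.g. {"lang_IFG": 1, "MD_parietal": 2, ...}
```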

