Parallel natural language processing on a semantic network array processor

1995 ◽ Vol 7 (3) ◽ pp. 391-405
Author(s): Minhwa Chung, D.I. Moldovan


2017 ◽ Vol 01 (01) ◽ pp. 1630015
Author(s): Claudio Delli Bovi, Roberto Navigli

Accurate semantic modeling lies at the very core of today’s Natural Language Processing (NLP). Getting a handle on the various phenomena that regulate the meaning of linguistic utterances can pave the way for solving many compelling and ambitious tasks in the field, from Machine Translation to Question Answering and Information Retrieval. A complete semantic model of language, however, first needs reliable building blocks. In the last two decades, research in lexical semantics (which focuses on the meaning of individual linguistic elements, i.e., words and expressions) has produced increasingly comprehensive and effective machine-readable dictionaries in multiple languages: like humans, NLP systems can now leverage these sources of lexical knowledge to discriminate among the various senses of a given lexeme, thereby improving their performance on downstream tasks and applications. In this paper, we focus on the case study of BabelNet, a large multilingual encyclopedic dictionary and semantic network, to describe in detail how such knowledge resources are built, improved and exploited for crucial NLP tasks such as Word Sense Disambiguation, Entity Linking and Semantic Similarity.
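To make concrete how such a sense inventory is queried, here is a minimal sketch of knowledge-based Word Sense Disambiguation using NLTK's WordNet and the Lesk algorithm as a stand-in for a richer multilingual resource like BabelNet; the example sentence and target word are illustrative, and BabelNet's own APIs are not used here.

```python
# A minimal sketch of knowledge-based Word Sense Disambiguation, using NLTK's
# WordNet and the Lesk algorithm as a stand-in for a richer multilingual
# resource such as BabelNet. Requires: pip install nltk, plus the 'wordnet'
# corpus (nltk.download('wordnet')).
from nltk.corpus import wordnet as wn
from nltk.wsd import lesk

context = "I deposited the cheque at the bank before lunch".split()

# Lesk selects the sense whose dictionary gloss overlaps most with the context.
sense = lesk(context, "bank", pos=wn.NOUN)
if sense is not None:
    print(sense.name(), "-", sense.definition())
    # A resource like BabelNet would additionally expose lexicalizations of the
    # same concept in many languages; WordNet alone covers English lemmas.
    print([lemma.name() for lemma in sense.lemmas()])
```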


Author(s): Roberto Navigli, Michele Bevilacqua, Simone Conia, Dario Montagnini, Francesco Cecconi

The intelligent manipulation of symbolic knowledge has been a long-sought goal of AI. However, when it comes to Natural Language Processing (NLP), symbols have to be mapped to words and phrases, which are not only ambiguous but also language-specific: multilinguality is indeed a desirable property for NLP systems, and one that enables generalization to tasks involving multiple languages without translating text. In this paper we survey BabelNet, a popular wide-coverage lexical-semantic knowledge resource obtained by merging heterogeneous sources into a unified semantic network that helps scale tasks and applications to hundreds of languages. Over its ten years of existence, thanks to its promise to interconnect languages and resources in structured form, BabelNet has been employed in countless ways and directions. We first introduce the BabelNet model, its components and statistics, and then overview its successful use in a wide range of tasks in NLP as well as in other fields of AI.
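As a rough sketch of what such a unified multilingual semantic network looks like as a data structure, the hypothetical fragment below models concepts as nodes carrying lexicalizations in several languages plus typed relations to other concepts; the identifiers and relation names are invented for illustration and do not reflect BabelNet's actual schema.

```python
# A hypothetical, minimal model of a BabelNet-style semantic network: concepts
# (synsets) as nodes, each with lexicalizations in several languages, linked by
# typed semantic relations. IDs, languages and relations are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Synset:
    synset_id: str
    lexicalizations: dict[str, list[str]]  # language code -> lemmas
    relations: list[tuple[str, str]] = field(default_factory=list)  # (relation, target id)

network = {
    "s:vehicle": Synset("s:vehicle", {"en": ["vehicle"], "it": ["veicolo"], "de": ["Fahrzeug"]}),
    "s:car": Synset("s:car",
                    {"en": ["car", "automobile"], "it": ["automobile"], "es": ["coche"]},
                    relations=[("is-a", "s:vehicle")]),
}

def lemmas(synset_id: str, lang: str) -> list[str]:
    """Return the lemmas of a concept in a given language, if any."""
    return network[synset_id].lexicalizations.get(lang, [])

print(lemmas("s:car", "it"))        # ['automobile']
print(network["s:car"].relations)   # [('is-a', 's:vehicle')]
```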


2020 ◽ pp. 3-17
Author(s): Peter Nabende

Natural Language Processing for under-resourced languages is now a mainstream research area. However, there are limited studies on Natural Language Processing applications for many indigenous East African languages. As a contribution toward closing this knowledge gap, this paper focuses on evaluating the application of well-established machine translation methods for one heavily under-resourced indigenous East African language called Lumasaaba. Specifically, we review the most common machine translation methods in the context of Lumasaaba, including both rule-based and data-driven methods. We then apply a state-of-the-art data-driven machine translation method to learn models for automating translation between Lumasaaba and English using a very limited data set of parallel sentences. Automatic evaluation results show that a transformer-based Neural Machine Translation model architecture leads to consistently better BLEU scores than the recurrent neural network-based models. Moreover, the automatically generated translations are comprehensible to a reasonable extent and usually correspond to the source-language input.
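To illustrate the automatic evaluation step, the sketch below scores candidate translations against references with BLEU via the sacrebleu library; the sentence pairs are made-up placeholders rather than the paper's Lumasaaba-English data.

```python
# A minimal sketch of BLEU-based automatic evaluation of machine translation
# output. The sentences are illustrative placeholders, not the paper's data.
# Requires: pip install sacrebleu
import sacrebleu

hypotheses = [
    "the farmers planted maize in the valley",
    "rain fell heavily last night",
]
references = [
    "the farmers planted maize in the valley",
    "it rained heavily last night",
]

# corpus_bleu takes the system outputs and a list of reference streams
# (one stream per reference set).
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU = {bleu.score:.2f}")
```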


Diabetes ◽ 2019 ◽ Vol 68 (Supplement 1) ◽ pp. 1243-P
Author(s): Jianmin Wu, Fritha J. Morrison, Zhenxiang Zhao, Xuanyao He, Maria Shubina, ...

Author(s): Pamela Rogalski, Eric Mikulin, Deborah Tihanyi

In 2018, we overheard many CEEA-ACEG members stating that they have "found their people"; this led us to wonder what makes this evolving community unique. Using cultural historical activity theory to view the proceedings of CEEA-ACEG 2004-2018 in comparison with the geographically and intellectually adjacent ASEE, we used both machine-driven (Natural Language Processing, NLP) and human-driven (literature review of the proceedings) methods. Here, we hoped to build on surveys, most recently by Nelson and Brennan (2018), to understand, beyond what members say about themselves, what makes the CEEA-ACEG community distinct, where it has come from, and where it is going. Engaging in the two methods of data collection quickly diverted our focus from an analysis of the data themselves to the characteristics of the data in terms of cultural historical activity theory. Our preliminary findings point to some unique characteristics of machine- and human-driven results, with the former, as might be expected, focusing on the micro-level (words and language patterns) and the latter on the macro-level (ideas and concepts). NLP generated data within the realms of "community" and "division of labour," while the review of proceedings centred on "subject" and "object"; both found "instruments," although NLP with greater granularity. With this new understanding of the relative strengths of each method, we have a revised framework for addressing our original question.
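As a rough sketch of the micro-level, machine-driven analysis described above, the snippet below surfaces the distinctive terms of one proceedings "corpus" relative to another with TF-IDF (scikit-learn); the two tiny text samples are placeholders, not the CEEA-ACEG or ASEE proceedings.

```python
# A minimal sketch of micro-level term analysis: weighting words by TF-IDF to
# see which terms distinguish one corpus of proceedings from another.
# The two "corpora" below are illustrative placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer

corpus_a = "design teams community outreach accreditation teaching labs"
corpus_b = "curriculum assessment outcomes accreditation research funding"

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform([corpus_a, corpus_b])
terms = vectorizer.get_feature_names_out()

# Terms weighted highest in corpus A are candidates for what distinguishes it.
weights_a = tfidf.toarray()[0]
top = sorted(zip(terms, weights_a), key=lambda t: t[1], reverse=True)[:5]
print(top)
```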

