Un Système Simple de Levée des Homographies (A Simple System for Resolving Homographs)

1995 ◽  
Vol 19 (1) ◽  
pp. 97-105
Author(s):  
Philippe Laval

This paper presents a software system whose goal is to ease the automatic analysis of natural language by solving two analysis problems: the resolution of syntactic ambiguities and the identification of fixed forms. To fulfil this goal, the use of negative rules makes the system powerful and easy to use, provided that sentences are represented by a graph structure. The paper has three main points: the graph structure of the sentence; the use of negative rules; and the identification of fixed forms based on a new categorization of those forms. The industrial dimension of this work deserves emphasis: two systems already use this software, namely a concept translation system and a grammatical corrector for newspapers.
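The interplay of negative rules and a graph of candidate analyses can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the tags, the forbidden bigram, and the French example "la porte" (where "porte" is a homograph, noun "door" vs. verb "carries") are all invented for the sketch.

```python
# Sketch: resolving a homograph by applying negative rules to a graph
# of candidate part-of-speech analyses. Each position in the sentence
# holds the set of candidate tags for one word.
candidates = [
    {"DET"},            # "la"
    {"NOUN", "VERB"},   # "porte" (homograph)
]

# Negative rules: tag bigrams that can never occur in a valid analysis.
negative_rules = {("DET", "VERB")}

def prune(candidates, negative_rules):
    """Remove candidate tags that appear only in forbidden bigrams."""
    result = [set(tags) for tags in candidates]
    for i in range(len(result) - 1):
        left, right = result[i], result[i + 1]
        # Keep a right-hand tag only if some left-hand tag licenses it.
        right_ok = {t for t in right
                    if any((l, t) not in negative_rules for l in left)}
        # Keep a left-hand tag only if it licenses a surviving right tag.
        left_ok = {l for l in left
                   if any((l, t) not in negative_rules for t in right_ok)}
        result[i], result[i + 1] = left_ok, right_ok
    return result

resolved = prune(candidates, negative_rules)
print(resolved)  # "porte" keeps only the NOUN reading
```

The appeal of negative rules in this setting is that one stated prohibition prunes many paths through the graph at once, without enumerating every valid analysis.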

2019 ◽  
Author(s):  
Babak Hemmatian ◽  
Steven A. Sloman

Evans and Over (1996) made a seminal contribution to the cognitive sciences by describing two different routes humans take to reason toward their goals, one associated with intuition, the other with deliberation. We show how knowledge provided by our communities influences both routes. Many methods of outsourcing cognitive effort (taking advantage of information that one does not know but assumes that someone else can supply) show the hallmarks of intuitive reasoning. Effective outsourcing requires fast and efficient ways of identifying what must be outsourced and which individual or group of people is most likely to have the relevant expertise. Research has identified several fallible heuristics, such as the degree of entrenchment of a term in its community of use, that help people figure out what needs to be outsourced (content heuristics), and a different group of heuristics that allow us to find experts, for instance through associations with certain environments and disciplines (expertise heuristics). In contrast, deliberation is primarily concerned with facilitating intentional collaboration with others toward joint goals, often using natural language. This division of labor between two interacting but distinct systems allows humans to leverage the representational and computational capacities of their communities to achieve ever more sophisticated goals.


Author(s):  
Patrick Saint-Dizier ◽  
Sharon J. Hamilton

2020 ◽  
Vol 34 (03) ◽  
pp. 3041-3048 ◽  
Author(s):  
Chuxu Zhang ◽  
Huaxiu Yao ◽  
Chao Huang ◽  
Meng Jiang ◽  
Zhenhui Li ◽  
...  

Knowledge graphs (KGs) serve as useful resources for various natural language processing applications. Previous KG completion approaches require a large number of training instances (i.e., head-tail entity pairs) for every relation, yet in practice very few entity pairs are available for most relations. Existing work on one-shot learning limits a method's generalizability to few-shot scenarios and does not fully use the supervisory information; few-shot KG completion has not yet been well studied. In this work, we propose a novel few-shot relation learning model (FSRL) that aims at discovering facts of new relations with few-shot references. FSRL can effectively capture knowledge from a heterogeneous graph structure, aggregate representations of few-shot references, and match similar entity pairs against the reference set for every relation. Extensive experiments on two public datasets demonstrate that FSRL outperforms the state of the art.
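The aggregate-then-match idea behind few-shot relation learning can be sketched in miniature. This is not FSRL itself: the toy 3-d embeddings, the TransE-style head-to-tail difference vectors, and the mean aggregator are stand-ins for the learned graph encoder and aggregation network described in the abstract.

```python
import math

# Toy entity embeddings; a real model learns these from the KG.
embeddings = {
    "Paris":  [0.9, 0.1, 0.0], "France": [1.0, 1.0, 0.0],
    "Rome":   [0.8, 0.2, 0.1], "Italy":  [0.9, 1.1, 0.1],
    "Tokyo":  [0.7, 0.0, 0.9], "Japan":  [0.8, 1.0, 1.0],
    "Berlin": [0.85, 0.15, 0.05],
}

def diff(head, tail):
    """Represent an entity pair as the tail-minus-head vector."""
    return [t - h for h, t in zip(embeddings[head], embeddings[tail])]

def mean(vectors):
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

# Aggregate the few-shot references for a relation into one vector...
references = [("Paris", "France"), ("Rome", "Italy")]  # "capital_of"
relation = mean([diff(h, t) for h, t in references])

# ...then match candidate tails for a query head against it.
scores = {t: cosine(relation, diff("Tokyo", t)) for t in ["Japan", "Berlin"]}
best = max(scores, key=scores.get)
print(best)
```

With only two reference pairs, the matcher still ranks "Japan" above "Berlin" for the query head "Tokyo", which is the essence of completing a relation from few-shot references.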


2021 ◽  
pp. 75-103
Author(s):  
Salvatore Florio ◽  
Øystein Linnebo

This chapter provides a systematic comparison of plural logic and an atomistic version of classical mereology. Since these two systems are mutually interpretable, it is formally possible to eliminate one in favor of the other. However, reasons are offered to retain both systems. In particular, mereology is a useful tool for the analysis of plurals in natural language.


2021 ◽  
Author(s):  
Trishali Banerjee ◽  
Upasana Bhattacharjee ◽  
K. R. Jansi

Data is the new gold, and everything is data driven. But not everyone possesses the technical skills to write queries or to know the various Python tools used for data visualization. Extracting information from a database is a mammoth task for non-technical users, as it requires extensive knowledge of a DBMS language, yet such data and visualizations are needed for everyday presentations and interactions in the professional world. This application enables users to overcome these obstacles. Our project aims to integrate two systems: an NLP interface that fetches data from simple English queries, and a second system that turns the fetched data into the visualizations the users ask for. This system would essentially help people who are not tech-savvy, or not in the tech field, to interact with data using simple English.
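The two-system pipeline can be sketched end to end. This is a hypothetical minimal version, not the project's code: the `sales` table, the keyword-based translation, and the text bar chart (standing in for a real plotting library) are all invented for illustration.

```python
import sqlite3

# A toy database for the sketch.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [("North", 30), ("South", 50), ("East", 20)])

def english_to_sql(question):
    """System 1: a very small pattern-based translation, not a parser."""
    q = question.lower()
    if "total" in q and "region" in q:
        return ("SELECT region, SUM(amount) FROM sales "
                "GROUP BY region ORDER BY region")
    raise ValueError("question not understood")

def bar_chart(rows):
    """System 2: render fetched rows as a text bar chart."""
    return [f"{name:<6}{'#' * (value // 10)}" for name, value in rows]

rows = con.execute(english_to_sql("Show total sales by region")).fetchall()
chart = bar_chart(rows)
print("\n".join(chart))
```

A production version would replace the `if`-pattern with a trained NLP model and the text chart with a charting library, but the division of labor, English question in, SQL in the middle, visualization out, is the same.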


Electronics ◽  
2021 ◽  
Vol 11 (1) ◽  
pp. 79
Author(s):  
Shengwen Li ◽  
Bing Li ◽  
Hong Yao ◽  
Shunping Zhou ◽  
Junjie Zhu ◽  
...  

WordNets organize words into synonymous word sets, and the connections between words present the semantic relationships between them, which have become an indispensable source for natural language processing (NLP) tasks. With the development and evolution of languages, WordNets need to be constantly updated manually. To address the problem of inadequate semantic knowledge about "new words", this study explores a novel method to automatically update the WordNet knowledge base by incorporating word-embedding techniques with sememe knowledge from HowNet. The model first characterizes the relationships among words and sememes with a graph structure, then jointly learns the embedding vectors of words and sememes; finally, it synthesizes word similarities to predict the concepts (synonym sets) of new words. To examine the performance of the proposed model, a new dataset connecting sememe knowledge and WordNet is constructed. Experimental results show that the proposed model outperforms the existing baseline models.
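The final prediction step, mapping a new word's embedding to an existing synonym set, can be sketched as a nearest-centroid search. This is an illustrative simplification, not the paper's model: the 2-d vectors stand in for the jointly learned word/sememe embeddings, and the synset names merely imitate WordNet's naming style.

```python
import math

# Toy embeddings standing in for jointly learned word/sememe vectors.
embeddings = {
    "car": [0.9, 0.1], "automobile": [0.85, 0.15],
    "happy": [0.1, 0.9], "glad": [0.15, 0.85],
}
synsets = {
    "vehicle.n.01": ["car", "automobile"],
    "happy.a.01":   ["happy", "glad"],
}

def centroid(words):
    vecs = [embeddings[w] for w in words]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def predict_synset(new_vec):
    """Assign a new word to the synset with the most similar centroid."""
    return max(synsets, key=lambda s: cosine(new_vec, centroid(synsets[s])))

# A hypothetical new word whose embedding lies near the vehicle words.
print(predict_synset([0.8, 0.2]))
```

The paper's contribution lies upstream of this step, in producing embeddings where such similarities are reliable even for words WordNet has never seen.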


Mathematics ◽  
2021 ◽  
Vol 9 (16) ◽  
pp. 1949
Author(s):  
Chonghao Chen ◽  
Jianming Zheng ◽  
Honghui Chen

Fact verification aims to evaluate the authenticity of a given claim based on evidence sentences retrieved from Wikipedia articles. Existing works mainly leverage natural language inference methods to model the semantic interaction of claim and evidence, or further employ a graph structure to capture the relational features among multiple pieces of evidence. However, previous methods have limited representation ability when encoding complicated units of claim and evidence, and thus cannot support sophisticated reasoning. In addition, the limited amount of supervisory signal means the graph encoder cannot distinguish different graph structures, which weakens its encoding ability. To address these issues, we propose a Knowledge-Enhanced Graph Attention network (KEGA) for fact verification, which introduces a knowledge integration module to enhance the representation of claims and evidence by incorporating external knowledge. Moreover, KEGA leverages an auxiliary loss based on contrastive learning to fine-tune the graph attention encoder and learn discriminative features for the evidence graph. Comprehensive experiments on FEVER, a large-scale benchmark dataset for fact verification, demonstrate the superiority of our proposal in both the multi-evidence and single-evidence scenarios. In addition, our findings show that background knowledge about words can effectively improve model performance.
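The contrastive auxiliary loss can be illustrated with an InfoNCE-style sketch. This is not KEGA's implementation: the 2-d vectors stand in for graph-encoder outputs, and "positive" here means an alternative view of the same evidence graph, with other graphs in the batch as negatives.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def info_nce(anchor, positive, negatives, tau=0.5):
    """Pull the anchor toward its positive view, push it from negatives."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    exps = [math.exp(s / tau) for s in sims]
    return -math.log(exps[0] / sum(exps))

anchor    = [1.0, 0.0]               # encoding of an evidence graph
positive  = [0.9, 0.1]               # encoding of an augmented view of it
negatives = [[0.0, 1.0], [-1.0, 0.0]]  # other graphs in the batch

loss = info_nce(anchor, positive, negatives)
print(round(loss, 4))
```

Minimizing this loss forces the encoder to give structurally different graphs distinguishable encodings, which is the discriminative pressure the abstract says the supervised signal alone does not provide.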

