Natural language understanding and the perspectives of question answering

1982 ◽  
Author(s):  
Petr Sgall


2020 ◽
Vol 34 (05) ◽  
pp. 7432-7439
Author(s):  
Yonatan Bisk ◽  
Rowan Zellers ◽  
Ronan Le Bras ◽  
Jianfeng Gao ◽  
Yejin Choi

To apply eyeshadow without a brush, should I use a cotton swab or a toothpick? Questions requiring this kind of physical commonsense pose a challenge to today's natural language understanding systems. While recent pretrained models (such as BERT) have made progress on question answering over more abstract domains such as news articles and encyclopedia entries, where text is plentiful, text in more physical domains is inherently limited due to reporting bias. Can AI systems learn to reliably answer physical commonsense questions without experiencing the physical world? In this paper, we introduce the task of physical commonsense reasoning and a corresponding benchmark dataset, Physical Interaction: Question Answering (PIQA). Though humans find the dataset easy (95% accuracy), large pretrained models struggle (∼75%). We provide an analysis of the dimensions of knowledge that existing models lack, which offers significant opportunities for future research.
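
A minimal sketch of how such a two-choice benchmark can be probed with an off-the-shelf pretrained language model: each candidate solution is scored by its average negative log-likelihood under GPT-2, and the lower-loss option is chosen. The dataset name and field names (goal, sol1, sol2, label) follow the copy of PIQA hosted on the Hugging Face hub; this zero-shot scorer is an illustration, not the evaluation protocol of the paper.

import torch
from datasets import load_dataset
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def avg_nll(text):
    # Average per-token negative log-likelihood of the text under the LM.
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(ids, labels=ids).loss.item()

ds = load_dataset("piqa", split="validation")
sample = ds.select(range(100))  # small sample to keep the run short
correct = 0
for ex in sample:
    losses = [avg_nll(ex["goal"] + " " + sol) for sol in (ex["sol1"], ex["sol2"])]
    correct += int(losses.index(min(losses)) == ex["label"])
print(f"accuracy on {len(sample)} examples: {correct / len(sample):.2f}")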


2021 ◽  
Vol 7 ◽  
pp. e759
Author(s):  
G. Thomas Hudson ◽  
Noura Al Moubayed

Multitask learning has led to significant advances in Natural Language Processing, including the decaNLP benchmark, where question answering is used to frame 10 natural language understanding tasks in a single model. In this work, we show how models trained to solve decaNLP fail under simple paraphrasing of the question. We contribute a crowd-sourced corpus of paraphrased questions (PQ-decaNLP), annotated with paraphrase phenomena. This enables analysis of how transformations such as swapping the class labels and changing the sentence modality lead to a large performance degradation. Training both MQAN and the newer T5 model on PQ-decaNLP improves their robustness and, for some tasks, improves performance on the original questions, demonstrating the benefits of a model that is more robust to paraphrasing. Additionally, we explore how paraphrasing knowledge is transferred between tasks, with the aim of exploiting the multitask property to improve the robustness of the models. We explore the addition of paraphrase detection and paraphrase generation tasks and find that, while both models are able to learn these new tasks, knowledge about paraphrasing does not transfer to other decaNLP tasks.
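
The robustness check at the heart of this study can be approximated with any question-answering model by running it on a question and on a paraphrase of that question and counting how often the answers agree. A minimal sketch, using a generic extractive QA pipeline rather than MQAN or T5, with an illustrative example pair:

from transformers import pipeline

qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

def consistency(pairs, context):
    # pairs: (original_question, paraphrased_question) tuples.
    agree = 0
    for original, paraphrase in pairs:
        a1 = qa(question=original, context=context)["answer"]
        a2 = qa(question=paraphrase, context=context)["answer"]
        agree += int(a1 == a2)  # exact-match agreement, a strict criterion
    return agree / len(pairs)

context = "The decaNLP benchmark frames ten natural language understanding tasks as question answering."
pairs = [("How many tasks does decaNLP frame as question answering?",
          "decaNLP casts how many tasks as question answering?")]
print(f"consistency: {consistency(pairs, context):.2f}")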


2009 ◽  
Vol 15 (4) ◽  
pp. i-xvii ◽  
Author(s):  
IDO DAGAN ◽  
BILL DOLAN ◽  
BERNARDO MAGNINI ◽  
DAN ROTH

Abstract: The goal of identifying textual entailment, whether one piece of text can be plausibly inferred from another, has emerged in recent years as a generic core problem in natural language understanding. Work in this area has been largely driven by the PASCAL Recognizing Textual Entailment (RTE) challenges, a series of annual competitive meetings. The current work exhibits strong ties to some earlier lines of research, particularly automatic acquisition of paraphrases and lexical semantic relationships, and unsupervised inference in applications such as question answering, information extraction and summarization. It has also opened the way to newer lines of research on more involved inference methods, on the knowledge representations needed to support this natural language understanding challenge, and on the use of learning methods in this context. RTE has fostered an active and growing community of researchers focused on the problem of applied entailment. This special issue of the JNLE provides an opportunity to showcase some of the most important work in this emerging area.
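
Recognizing textual entailment is now commonly approached with supervised natural language inference models. A minimal sketch of the core decision, using the publicly available roberta-large-mnli checkpoint (trained on MNLI, not on the RTE challenge data) to label a premise and hypothesis pair:

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli").eval()

premise = "The PASCAL RTE challenges are a series of annual competitive meetings."
hypothesis = "There has been more than one RTE challenge."

# Encode the pair; the model classifies it as contradiction, neutral, or entailment.
inputs = tok(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
label = model.config.id2label[int(probs.argmax())]
print(label, round(probs.max().item(), 3))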


1998 ◽  
Vol 37 (04/05) ◽  
pp. 327-333 ◽  
Author(s):  
F. Buekens ◽  
G. De Moor ◽  
A. Waagmeester ◽  
W. Ceusters

Abstract: Natural language understanding systems have to exploit various kinds of knowledge in order to represent the meaning behind texts. Getting this knowledge in place is often such a huge enterprise that it is tempting to look for systems that can discover such knowledge automatically. We describe how the distinction between conceptual and linguistic semantics may assist in reaching this objective, provided that distinguishing between them is not done too rigorously. We present several examples to support this view and argue that, in a multilingual environment, linguistic ontologies should be designed as interfaces between domain conceptualizations and linguistic knowledge bases.
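
The proposed architecture can be made concrete with a small sketch: a linguistic ontology object that sits between a domain conceptualization (language-independent concept identifiers) and per-language lexical knowledge. All class names and entries below are illustrative, not taken from the paper.

class LinguisticOntology:
    """Interface between domain concepts and per-language lexicalizations."""

    def __init__(self):
        self.lexicalizations = {}  # (concept, language) -> list of terms

    def add(self, concept, language, terms):
        self.lexicalizations[(concept, language)] = terms

    def terms_for(self, concept, language):
        return self.lexicalizations.get((concept, language), [])

onto = LinguisticOntology()
onto.add("MyocardialInfarction", "en", ["myocardial infarction", "heart attack"])
onto.add("MyocardialInfarction", "nl", ["myocardinfarct", "hartaanval"])
print(onto.terms_for("MyocardialInfarction", "nl"))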


1995 ◽  
Vol 34 (04) ◽  
pp. 345-351 ◽  
Author(s):  
A. Burgun ◽  
L. P. Seka ◽  
D. Delamarre ◽  
P. Le Beux

Abstract: In medicine, as in other domains, indexing and classification are natural human tasks used for information retrieval and representation. In the medical field, encoding of patient discharge summaries is still a manual, time-consuming task. This paper describes an automated system for coding patient discharge summaries from the field of coronary diseases into the ICD-9-CM classification. The system was developed in the context of the European AIM MENELAS project, a natural-language understanding system that uses the conceptual-graph formalism. Indexing is performed using a two-step processing scheme: a first recognition stage implemented by a matching procedure, and a second selection stage carried out according to the coding priorities. We describe the general features of the translation of the classification terms into the conceptual-graph model and of compliance with the coding rules. An advantage of the system is that it provides an objective evaluation and assessment procedure for natural-language understanding.
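
The two-step scheme can be illustrated with a toy sketch: a recognition stage that matches concepts extracted from a discharge summary against code descriptions, followed by a selection stage that orders the matched candidates by coding priority. The code table and priorities below are invented for illustration; they are not the actual ICD-9-CM entries or MENELAS rules.

# Toy code table: each entry lists the key concepts a code requires and
# an (invented) coding priority, where a lower number means coded first.
ICD9 = {
    "410": {"concepts": {"acute", "myocardial", "infarction"}, "priority": 1},
    "414": {"concepts": {"coronary", "atherosclerosis"}, "priority": 2},
    "413": {"concepts": {"angina", "pectoris"}, "priority": 3},
}

def recognize(summary_concepts):
    # Stage 1: keep every code whose required concepts all occur in the summary.
    return [code for code, entry in ICD9.items()
            if entry["concepts"] <= summary_concepts]

def select(candidates):
    # Stage 2: order the matched codes by coding priority.
    return sorted(candidates, key=lambda code: ICD9[code]["priority"])

concepts = {"acute", "myocardial", "infarction", "coronary", "atherosclerosis"}
print(select(recognize(concepts)))  # -> ['410', '414']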

