Advanced Question-Answering and Discourse Semantics

Author(s):  
Patrick Saint-Dizier

In this chapter, the authors develop the paradigm of advanced question-answering, which includes how-to, why, evaluative, comparative, and opinion questions. They show the different parameters at stake in answer production, involving several aspects of cooperativity. These question types require substantial discourse semantics analysis and domain knowledge. The second part of the chapter offers a short presentation of the text semantics aspects relevant to answering questions. The last part introduces <TextCoop>, a platform the authors have developed for discourse semantics analysis, which they use for answering complex questions, in particular how-to and opinion questions.

2014 ◽  
pp. 598-616


2021 ◽  
Author(s):  
Truong-Thinh Tieu ◽  
Chieu-Nguyen Chau ◽  
Nguyen-Minh-Hoang Bui ◽  
Truong-Son Nguyen ◽  
Le-Minh Nguyen

2020 ◽  
Vol 34 (05) ◽  
pp. 7578-7585
Author(s):  
Ting-Rui Chiang ◽  
Hao-Tong Ye ◽  
Yun-Nung Chen

Following extensive work on context-free question answering systems, conversational question answering models are an emerging trend in the natural language processing field. Thanks to recently collected datasets, including QuAC and CoQA, there has been more work on conversational question answering, and recent models have achieved competitive performance on both datasets. However, to the best of our knowledge, two important questions for conversational comprehension research have not been well studied: 1) How well can the benchmark datasets reflect models' content understanding? 2) Do the models make good use of the conversation content when answering questions? To investigate these questions, we design different training settings, testing settings, and an attack to verify the models' capability of content understanding on QuAC and CoQA. The experimental results indicate some potential hazards in the benchmark datasets, QuAC and CoQA, for conversational comprehension research. Our analysis also sheds light on both what models may learn and how datasets may bias the models. We believe this deep investigation of the task can benefit future progress in conversation comprehension. The source code is available at https://github.com/MiuLab/CQA-Study.
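The probing setup described in this abstract (training and testing a model with and without access to the conversation history) can be sketched as follows. The example fields and the history-perturbation "attack" below are illustrative assumptions, not the authors' actual code from the repository.

```python
import random

# A toy conversational-QA example in the spirit of QuAC/CoQA:
# a current question plus the preceding (question, answer) turns.
example = {
    "question": "Where did she move next?",
    "history": [
        ("Who is the article about?", "Marie Curie"),
        ("Where was she born?", "Warsaw"),
    ],
}

def strip_history(ex):
    """Ablation setting: does the model really need the history?"""
    return {**ex, "history": []}

def shuffle_history(ex, seed=0):
    """Attack setting: reorder the history turns; a model that truly
    tracks conversation flow should be sensitive to this change."""
    turns = list(ex["history"])
    random.Random(seed).shuffle(turns)
    return {**ex, "history": turns}

ablated = strip_history(example)
attacked = shuffle_history(example, seed=1)
```

Comparing a model's accuracy across the original, ablated, and attacked settings is one way to separate genuine use of conversational context from surface pattern matching.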


1981 ◽  
Vol 13 (2) ◽  
pp. 111-129 ◽  
Author(s):  
Tom Nicholson ◽  
Robert Imlach

Young readers often seem to overlook explicitly stated causal statements in narrative texts and instead give their own versions of why a text event occurred. Some researchers would agree with Smith (1979) that children do this because they read for meaning rather than word-by-word. This is an "inside-out" (or "schema-based") view of text comprehension. Other researchers, however, agree with Thorndike (1917) that "errors" occur because "the mind is assailed by every word in the paragraph." This is an "outside-in" (or "text-based") view of the comprehension process. The purpose of this study was to determine the relative influence of text data and prior knowledge on the kinds of inferences children make when answering questions about stories. In Experiment 1, text structure was altered by embedding either predictable or unpredictable reasons for events in the text, and by varying the position and distance of these reasons from the text event being asked about. Some of the stories were familiar; others less so. Text accessibility was also varied. In all, the design was a 2⁴ × 3 factorial, using repeated measures. In Experiment 2, a causal "preference" factor was added to take account of the fact that children seemed predisposed toward certain kinds of inferences, whether those inferences were predictable or not. The results support the notion that text data and background knowledge compete for priority in question-answering. They suggest that children may benefit from instruction that helps them arbitrate between plausible yet competing explanations for important text events.


2018 ◽  
Vol 18 (3-4) ◽  
pp. 535-552 ◽  
Author(s):  
DANIELA INCLEZAN ◽  
QINGLIN ZHANG ◽  
MARCELLO BALDUCCINI ◽  
ANKUSH ISRANEY

We describe an application of Answer Set Programming to the understanding of narratives about stereotypical activities, demonstrated via question answering. Substantial work in this direction was done by Erik Mueller, who modeled stereotypical activities as scripts. His systems were able to understand a good number of narratives, but could not process texts describing exceptional scenarios. We propose addressing this problem by using a theory of intentions developed by Blount, Gelfond, and Balduccini. We present a methodology in which we substitute scripts with activities (i.e., hierarchical plans associated with goals) and employ the concept of an intentional agent to reason about both normal and exceptional scenarios. We exemplify the application of this methodology by answering questions about a number of restaurant stories. This paper is under consideration for acceptance in TPLP.
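The shift from rigid scripts to goal-directed activities can be illustrated with a minimal sketch (in Python rather than ASP, with an invented restaurant plan): an activity pairs a goal with a plan of steps, and an exceptional scenario is one whose observed steps diverge from that plan.

```python
# Hypothetical activity: a plan of steps associated with a goal.
restaurant_activity = {
    "goal": "satisfied_hunger",
    "plan": ["enter", "order", "eat", "pay", "leave"],
}

def diverges_at(observed, plan):
    """Return the index of the first step where the observed narrative
    departs from the plan, or None if it is a (prefix of a) normal run."""
    for i, step in enumerate(observed):
        if i >= len(plan) or step != plan[i]:
            return i
    return None

def classify(observed, activity):
    """Label a narrative as a normal or an exceptional scenario."""
    if diverges_at(observed, activity["plan"]) is None:
        return "normal"
    return "exceptional"
```

This sketch only detects the divergence; in the authors' approach, an intentional agent would go further and reason about how the goal might still be achieved after the deviation.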


2019 ◽  
Vol 54 (1) ◽  
pp. 34-63 ◽  
Author(s):  
Xiaoming Zhang ◽  
Mingming Meng ◽  
Xiaoling Sun ◽  
Yu Bai

Purpose
With the advent of the era of Big Data, the scale of knowledge graphs (KGs) in various domains is growing rapidly, holding a huge amount of knowledge that surely benefits question answering (QA) research. However, a KG, which consists of entities and relations, is structurally inconsistent with a natural language query, so KG-based QA systems still face difficulties. The purpose of this paper is to propose a method to answer domain-specific questions based on a KG, making information queries over a domain KG more convenient.
Design/methodology/approach
The authors propose a method, FactQA, to answer factual questions about a specific domain. A series of logical rules is designed to transform factual questions into triples, in order to resolve the structural inconsistency between the user's question and the domain knowledge. Query expansion strategies and filtering strategies are then proposed at two levels (i.e., words and triples in the question). For matching the question against the domain knowledge, the method considers not only the similarity values between the words in the question and the resources in the domain knowledge but also the tag information of these words, which is obtained by parsing the question with Stanford CoreNLP. In this paper, a KG in the metallic materials domain is used to illustrate the FactQA method.
Findings
The designed logical rules are stable over time for transforming factual questions into triples. Additionally, filtering the synonym-expansion results of the words in the question improves the quality of the question's triple representation. The tag information of the words in the question is considered during data matching, which helps filter out wrong matches.
Originality/value
Although FactQA is proposed for domain-specific QA, it can be applied to any domain besides metallic materials. For a question that cannot be answered, FactQA generates a new related question to answer, providing the user as far as possible with the information they probably need. FactQA could thus facilitate users' information queries over emerging KGs.
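The rule-based transformation of a factual question into a triple can be sketched roughly as follows. The question pattern, the placeholder variable `?x`, and the example property are illustrative assumptions, not the paper's actual rules.

```python
import re

# One illustrative logical rule: "What is the P of E?" -> (E, P, ?x).
# FactQA uses a series of such rules (plus POS tags from Stanford
# CoreNLP and synonym expansion); this sketch shows only the shape.
PATTERN = re.compile(r"What is the (\w+) of ([\w\s]+)\?", re.IGNORECASE)

def question_to_triple(question):
    m = PATTERN.match(question)
    if m is None:
        return None  # no rule fired; FactQA would try other rules
    prop, entity = m.group(1), m.group(2).strip()
    return (entity, prop, "?x")  # ?x is the slot a KG lookup would fill
```

For instance, a metallic-materials question such as "What is the density of titanium?" would yield the triple ("titanium", "density", "?x"), which can then be expanded with synonyms and matched against the KG.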

