Why users keep answering questions in online question answering communities: A theoretical and empirical investigation

2013 ◽  
Vol 33 (1) ◽  
pp. 93-104 ◽  
Author(s):  
Xiao-Ling Jin ◽  
Zhongyun Zhou ◽  
Matthew K.O. Lee ◽  
Christy M.K. Cheung

2020 ◽ 
Vol 34 (05) ◽  
pp. 7578-7585
Author(s):  
Ting-Rui Chiang ◽  
Hao-Tong Ye ◽  
Yun-Nung Chen

With much work on context-free question answering systems, there is an emerging trend of conversational question answering models in the natural language processing field. Thanks to recently collected datasets, including QuAC and CoQA, there has been more work on conversational question answering, and recent models have achieved competitive performance on both datasets. However, to the best of our knowledge, two important questions for conversational comprehension research have not been well studied: 1) How well can the benchmark datasets reflect models' content understanding? 2) Do the models make good use of the conversation content when answering questions? To investigate these questions, we design different training settings, testing settings, and an attack to verify the models' capability of content understanding on QuAC and CoQA. The experimental results indicate some potential hazards in the benchmark datasets, QuAC and CoQA, for conversational comprehension research. Our analysis also sheds light on both what the models may learn and how the datasets may bias them. With this deep investigation of the task, we believe this work can benefit future progress in conversation comprehension. The source code is available at https://github.com/MiuLab/CQA-Study.
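
A minimal sketch of the kind of history ablation such a study might use (the model choice and record format below are assumptions for illustration, not the paper's actual setup): answer each question twice, once with the conversation history prepended and once without, and compare accuracy. A model that barely degrades without history is likely ignoring the conversation.

```python
# Sketch: probe whether a conversational QA model actually uses the
# conversation history. The model and the record format are our
# assumptions for illustration, not the paper's setup.
from transformers import pipeline

qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

def answer(record, use_history):
    question = record["question"]
    if use_history:
        # Prepend prior turns so the model can, in principle, exploit them.
        question = " ".join(record["history"] + [question])
    return qa(question=question, context=record["context"])["answer"]

def exact_match(records, use_history):
    hits = sum(answer(r, use_history) == r["answer"] for r in records)
    return hits / len(records)

records = [{
    "context": "Alice moved to Paris in 2019. She works as a chemist.",
    "history": ["Where does Alice live? Paris."],
    "question": "When did she move there?",
    "answer": "2019",
}]
print("with history   :", exact_match(records, use_history=True))
print("without history:", exact_match(records, use_history=False))
```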


1981 ◽  
Vol 13 (2) ◽  
pp. 111-129 ◽  
Author(s):  
Tom Nicholson ◽  
Robert Imlach

Young readers often seem to overlook explicitly stated causal statements in narrative texts and instead give their own versions of why a text event occurred. Some researchers would agree with Smith (1979) that children do this because they read for meaning rather than word-by-word. This is an “inside-out” (or “schema-based”) view of text comprehension. Other researchers, however, agree with Thorndike (1917) that “errors” occur because “the mind is assailed by every word in the paragraph.” This is an “outside-in” (or “text-based”) view of the comprehension process. The purpose of this study was to find out the relative influence of text data and prior knowledge on the kinds of inferences children make when answering questions about stories. In Experiment 1, text structure was altered by embedding either predictable or unpredictable reasons for events in the text, and by varying the position and distance of these reasons from the text event being asked about. Some of the stories were familiar, others less so. Text accessibility was also varied. In all, the design was a 2⁴ × 3 factorial, using repeated measures. In Experiment 2, a causal “preference” factor was added to take account of the fact that children seemed predisposed toward certain kinds of inferences, whether these were predictable or not. The results support the notion that text data and background knowledge compete for priority in question answering. They suggest that children may benefit from instruction which helps them arbitrate between plausible yet competing explanations for important text events.


2018 ◽  
Vol 18 (3-4) ◽  
pp. 535-552 ◽  
Author(s):  
DANIELA INCLEZAN ◽  
QINGLIN ZHANG ◽  
MARCELLO BALDUCCINI ◽  
ANKUSH ISRANEY

We describe an application of Answer Set Programming to the understanding of narratives about stereotypical activities, demonstrated via question answering. Substantial work in this direction was done by Erik Mueller, who modeled stereotypical activities as scripts. His systems were able to understand a good number of narratives, but could not process texts describing exceptional scenarios. We propose addressing this problem by using a theory of intentions developed by Blount, Gelfond, and Balduccini. We present a methodology in which we replace scripts with activities (i.e., hierarchical plans associated with goals) and employ the concept of an intentional agent to reason about both normal and exceptional scenarios. We exemplify the application of this methodology by answering questions about a number of restaurant stories. This paper is under consideration for acceptance in TPLP.
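
For flavor, here is a toy sketch of how a stereotypical restaurant script can be encoded and queried in ASP through the clingo Python API. The encoding is ours, invented for illustration; the paper's activity- and intention-based formalization is substantially richer and handles exceptional scenarios, which this sketch does not.

```python
# Toy sketch: a stereotypical restaurant "script" as ASP facts, queried
# via the clingo Python API. Illustration only; not the paper's encoding.
import clingo

program = """
step(enter, 1).  step(order, 2).  step(eat, 3).  step(pay, 4).  step(leave, 5).

% Q: What does the customer do right after ordering?
after(A, B) :- step(A, T), step(B, T + 1).
answer(B)  :- after(order, B).

#show answer/1.
"""

ctl = clingo.Control()
ctl.add("base", [], program)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print("Answer:", m.symbols(shown=True)))
# Prints: Answer: [answer(eat)]
```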


2012 ◽  
pp. 344-370
Author(s):  
Brigitte Grau

This chapter is dedicated to factual question answering, i.e., extracting precise and exact answers from texts in response to questions given in natural language. A question in natural language gives more information than a bag-of-words query (i.e., a query made of a list of words), and provides clues for finding precise answers. The author first focuses on the underlying problems, mainly due to the linguistic variations between questions and the pieces of text that answer them, involved in selecting relevant passages and extracting reliable answers. The author then presents how to answer factual questions in open domains. Answering questions in specialty domains is also presented, as it requires dealing with semi-structured knowledge and specialized terminologies, and can lead to different applications, such as information management in corporations. Searching for answers on the Web constitutes another application frame and introduces specificities linked to Web redundancy and collaborative usage. Moreover, the Web is multilingual, and a challenging problem consists in searching for answers in documents in a target language other than the source language of the question. For all these topics, this chapter presents the main approaches and the remaining problems.
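
To make the contrast with a bag-of-words query concrete, here is a hedged sketch of the kind of question analysis such systems perform: the interrogative word yields an expected answer type that a plain keyword list would lose. The wh-word-to-type table is our simplification, not an actual system's taxonomy.

```python
# Sketch: a natural-language question carries clues that a bag-of-words
# query loses. The type table below is a simplification for illustration.
import re

EXPECTED_TYPE = {
    "who": "PERSON",
    "where": "LOCATION",
    "when": "DATE",
    "how many": "NUMBER",
}

STOPWORDS = {"who", "where", "when", "how", "many", "did", "the", "a", "of"}

def analyze(question):
    q = question.lower().rstrip("?")
    answer_type = next(
        (t for wh, t in EXPECTED_TYPE.items() if q.startswith(wh)), "UNKNOWN"
    )
    # The bag-of-words view keeps only the content words.
    keywords = [w for w in re.findall(r"\w+", q) if w not in STOPWORDS]
    return {"expected_type": answer_type, "keywords": keywords}

print(analyze("Who wrote The Plague?"))
# {'expected_type': 'PERSON', 'keywords': ['wrote', 'plague']}
```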


2001 ◽  
Vol 7 (4) ◽  
pp. 343-360 ◽  
Author(s):  
DEKANG LIN ◽  
PATRICK PANTEL

One of the main challenges in question answering is the potential mismatch between the expressions in questions and the expressions in texts. While humans appear to use inference rules such as ‘X writes Y’ implies ‘X is the author of Y’ when answering questions, such rules are generally unavailable to question-answering systems due to the inherent difficulty of constructing them. In this paper, we present an unsupervised algorithm for discovering inference rules from text. Our algorithm is based on an extended version of Harris’ Distributional Hypothesis, which states that words that occur in the same contexts tend to be similar. Instead of applying this hypothesis to words, we apply it to paths in the dependency trees of a parsed corpus. Essentially, if two paths tend to link the same sets of words, we hypothesize that their meanings are similar. We use examples to show that our system discovers many inference rules easily missed by humans.
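
A toy sketch of the core similarity computation follows. The paths and filler pairs are invented for illustration, and plain Jaccard overlap stands in for the algorithm's actual mutual-information-based slot similarity, only to keep the example short.

```python
# Toy sketch of the extended Distributional Hypothesis over dependency
# paths: two paths are similar if they tend to connect the same (X, Y)
# word pairs. The real algorithm scores slots with mutual information;
# Jaccard overlap is used here only for brevity.

# Hypothetical (X, Y) filler pairs observed for each dependency path.
fillers = {
    "X writes Y": {("orwell", "1984"), ("camus", "the plague"),
                   ("austen", "emma")},
    "X is the author of Y": {("orwell", "1984"), ("austen", "emma"),
                             ("tolstoy", "war and peace")},
    "X reads Y": {("alice", "1984"), ("bob", "emma")},
}

def path_similarity(p1, p2):
    a, b = fillers[p1], fillers[p2]
    return len(a & b) / len(a | b)  # Jaccard overlap of filler pairs

print(path_similarity("X writes Y", "X is the author of Y"))  # 0.5
print(path_similarity("X writes Y", "X reads Y"))             # 0.0
```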


2014 ◽  
Vol 4 (2) ◽  
pp. 19-40
Author(s):  
Rosy Madaan ◽  
A.K. Sharma ◽  
Ashutosh Dixit

Question answering offers a more intuitive approach to information processing. A number of approaches have been used for answering questions. In this paper, we propose a question answering system that uses blogs as its source of information. The system crawls blog pages, summarizes them, and then indexes and ranks the summarized content. The user asks a question and gets answer(s) in response. Experimental results show that the proposed system is promising and that its answers are better than those provided by existing QA systems that use general web pages for answering.
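
A minimal sketch of the ranking stage of such a pipeline (the indexed summaries are invented, and plain term overlap stands in for whatever scoring a real system would use):

```python
# Sketch of the ranking stage: score already-summarized blog posts
# against the question by term overlap. The index below is invented;
# a real system would first crawl the blogs and summarize each post.
import re

def tokens(text):
    return set(re.findall(r"\w+", text.lower()))

index = {
    "post-1": "The new camera has excellent battery life and a sharp lens.",
    "post-2": "Recipe for sourdough bread with a long cold fermentation.",
    "post-3": "Battery life on this laptop drops fast when gaming.",
}

def rank(question, top_k=2):
    q = tokens(question)
    scored = sorted(index.items(),
                    key=lambda kv: len(q & tokens(kv[1])),  # shared terms
                    reverse=True)
    return scored[:top_k]

for post_id, summary in rank("How is the battery life of the camera?"):
    print(post_id, "->", summary)
# post-1 ranks first: it shares the most terms with the question.
```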


Author(s):  
Patrick Saint-Dizier

In this chapter, the authors develop the paradigm of advanced question answering, which includes how-to, why, evaluative, comparative, and opinion questions. They show the different parameters at stake in answer production, involving several aspects of cooperativity. These types of questions require substantial discourse semantics analysis and domain knowledge. The second part of this chapter is devoted to a short presentation of the text semantics aspects relevant to answering questions. The last part introduces <TextCoop>, a platform the authors have developed for discourse semantics analysis, which they use for answering complex questions, in particular how-to and opinion questions.
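
A hedged sketch of the first step any such system needs, routing a question to the right answering strategy. The surface patterns below are our deliberately crude stand-in; the chapter's <TextCoop> platform relies on discourse-level analysis, not regexes.

```python
# Sketch: route a question to an answering strategy by surface pattern.
# The patterns are crude placeholders for real discourse analysis.
import re

PATTERNS = [
    (r"^how (do|can|to|should)\b", "how-to"),
    (r"^why\b", "why"),
    (r"\b(better|worse|versus|vs\.?|compared to)\b", "comparative"),
    (r"\b(worth|good|bad|recommend|opinion)\b", "evaluative/opinion"),
]

def question_type(question):
    q = question.lower()
    for pattern, qtype in PATTERNS:
        if re.search(pattern, q):
            return qtype
    return "factual (default)"

print(question_type("How do I reset the router?"))    # how-to
print(question_type("Why did the deployment fail?"))  # why
print(question_type("Is this laptop worth buying?"))  # evaluative/opinion
```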


Author(s):  
Sanket Shah ◽  
Anand Mishra ◽  
Naganand Yadati ◽  
Partha Pratim Talukdar

Visual Question Answering (VQA) has emerged as an important problem spanning Computer Vision, Natural Language Processing and Artificial Intelligence (AI). In conventional VQA, one may ask questions about an image which can be answered purely based on its content. For example, given an image with people in it, a typical VQA question may inquire about the number of people in the image. More recently, there is growing interest in answering questions which require commonsense knowledge involving common nouns (e.g., cats, dogs, microphones) present in the image. In spite of this progress, the important problem of answering questions requiring world knowledge about named entities (e.g., Barack Obama, White House, United Nations) in the image has not been addressed in prior research. We address this gap in this paper, and introduce KVQA – the first dataset for the task of (world) knowledge-aware VQA. KVQA consists of 183K question-answer pairs involving more than 18K named entities and 24K images. Questions in this dataset require multi-entity, multi-relation, and multi-hop reasoning over large Knowledge Graphs (KGs) to arrive at an answer. To the best of our knowledge, KVQA is the largest dataset for exploring VQA over KGs. We also provide baseline performance of state-of-the-art methods on KVQA.
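
To illustrate what multi-hop reasoning over a KG looks like, here is a sketch. The triples are invented for the example; the dataset itself links image entities to large knowledge graphs, and entity linking is assumed to have already happened.

```python
# Sketch of multi-hop KG reasoning of the kind KVQA questions require.
# The tiny triple store is invented for illustration.

triples = [
    ("Barack Obama", "spouse", "Michelle Obama"),
    ("Barack Obama", "residence", "White House"),
    ("White House", "located_in", "Washington, D.C."),
]

def hop(entity, relation):
    """Follow one relation edge from an entity."""
    return [o for s, r, o in triples if s == entity and r == relation]

def multi_hop(entity, relations):
    """Follow a chain of relations, e.g. residence -> located_in."""
    frontier = [entity]
    for rel in relations:
        frontier = [o for e in frontier for o in hop(e, rel)]
    return frontier

# "In which city is the residence of the person in the image?"
# (entity linking is assumed to have identified Barack Obama)
print(multi_hop("Barack Obama", ["residence", "located_in"]))
# ['Washington, D.C.']
```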

