Looking Beyond Sentence-Level Natural Language Inference for Question Answering and Text Summarization

Author(s):  
Anshuman Mishra ◽  
Dhruvesh Patel ◽  
Aparna Vijayakumar ◽  
Xiang Lorraine Li ◽  
Pavan Kapanipathi ◽  
...  

2020 ◽  
Vol 34 (05) ◽  
pp. 9258-9265
Author(s):  
Rongxiang Weng ◽  
Haoran Wei ◽  
Shujian Huang ◽  
Heng Yu ◽  
Lidong Bing ◽  
...  

Transformer, based on the encoder-decoder framework, has achieved state-of-the-art performance on several natural language generation tasks. The encoder maps the words in the input sentence into a sequence of hidden states, which are then fed into the decoder to generate the output sentence. These hidden states usually correspond to the input words and focus on capturing local information. However, the global (sentence-level) information is seldom explored, leaving room for improvement in generation quality. In this paper, we propose a novel global representation enhanced Transformer (GRET) to explicitly model global representation in the Transformer network. Specifically, in the proposed model, an external state is generated for the global representation from the encoder. The global representation is then fused into the decoder during the decoding process to improve generation quality. We conduct experiments on two text generation tasks: machine translation and text summarization. Experimental results on four WMT machine translation tasks and the LCSTS text summarization task demonstrate the effectiveness of the proposed approach for natural language generation.
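To make the fusion step concrete, the following is a minimal PyTorch sketch of mixing a pooled global encoder state into the decoder's hidden states through a learned gate. The mean-pooling and gating choices here are illustrative assumptions for a sketch in the spirit of GRET, not the paper's exact formulation.

```python
# Hypothetical sketch: fuse a "global" sentence representation from the encoder
# into decoder hidden states via a gated residual. Pooling and gating are
# simplifying assumptions, not the exact GRET mechanism.
import torch
import torch.nn as nn


class GlobalFusion(nn.Module):
    """Fuses a pooled global encoder state into decoder hidden states."""

    def __init__(self, d_model: int):
        super().__init__()
        self.pool = nn.Linear(d_model, d_model)      # projects the pooled encoder state
        self.gate = nn.Linear(2 * d_model, d_model)  # decides how much global info to mix in

    def forward(self, enc_states: torch.Tensor, dec_states: torch.Tensor) -> torch.Tensor:
        # enc_states: (batch, src_len, d_model); dec_states: (batch, tgt_len, d_model)
        g = torch.tanh(self.pool(enc_states.mean(dim=1)))  # (batch, d_model) global vector
        g = g.unsqueeze(1).expand_as(dec_states)            # broadcast over target positions
        gate = torch.sigmoid(self.gate(torch.cat([dec_states, g], dim=-1)))
        return dec_states + gate * g                        # gated residual fusion


# Toy usage with random tensors
fusion = GlobalFusion(d_model=512)
enc = torch.randn(2, 10, 512)   # batch of 2 source sentences, 10 tokens each
dec = torch.randn(2, 7, 512)    # corresponding decoder states, 7 target positions
out = fusion(enc, dec)          # shape (2, 7, 512)
```

In this sketch the gate lets each target position decide how strongly to attend to the sentence-level vector, which is one simple way to realize "fusing the global representation into the decoder".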


Author(s):  
Horacio Saggion

Free text is a major repository of human knowledge; therefore, methods and techniques to access this unstructured source of knowledge are of paramount importance. In this chapter we describe natural language processing technology for the development of question answering and text summarization systems. We focus on applications that mine textual resources to extract knowledge for the automatic creation of definitions and person profiles.


Author(s):  
Xin Lu ◽  
Yao Deng ◽  
Ting Sun ◽  
Yi Gao ◽  
Jun Feng ◽  
...  

Sentence matching is widely used in various natural language tasks, such as natural language inference, paraphrase identification and question answering. For these tasks, we need to understand the logical and semantic relationship between two sentences. Most current methods use all information within a sentence to build a model and hence determine its relationship to another sentence. However, some of the information contained in a sentence may cause redundancy or introduce noise, impeding the performance of the model. Therefore, we propose a sentence matching method based on multi keyword-pair matching (MKPM), which uses keyword pairs in the two sentences to represent the semantic relationship between them, avoiding the interference of redundancy and noise. Specifically, we first propose a sentence-pair-based attention mechanism, sp-attention, to select the most important word pair from the two sentences as a keyword pair, and then propose a Bi-task architecture to model the semantic information of these keyword pairs. The Bi-task architecture is as follows: 1. To understand the word-level semantic relationship between the two sentences, we design a word-pair task (WP-Task), which uses these keyword pairs to complete sentence matching independently. 2. To understand the sentence-level semantic relationship between the two sentences, we design a sentence-pair task (SP-Task) based on sentence denoising. Through the integration of the two tasks, our model can understand sentences more accurately at both the word and sentence granularities. Experimental results show that our model achieves state-of-the-art performance on several tasks. Our source code is publicly available.
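As a rough illustration of the keyword-pair idea, the PyTorch sketch below scores all cross-sentence word pairs and keeps the top-scoring ones as candidate keyword pairs. The bilinear scorer and the fixed number of pairs k are assumptions made for illustration; they are not the published sp-attention mechanism.

```python
# Hypothetical sketch: score every (word_a, word_b) pair across two sentences
# and keep the top-k pairs as "keyword pairs". The bilinear scoring and fixed k
# are illustrative assumptions, not the exact MKPM / sp-attention formulation.
import torch
import torch.nn as nn


class KeywordPairSelector(nn.Module):
    def __init__(self, d_model: int, k: int = 3):
        super().__init__()
        self.bilinear = nn.Bilinear(d_model, d_model, 1)  # pairwise relevance score
        self.k = k

    def forward(self, sent_a: torch.Tensor, sent_b: torch.Tensor):
        # sent_a: (len_a, d_model), sent_b: (len_b, d_model) word representations
        len_a, len_b = sent_a.size(0), sent_b.size(0)
        a = sent_a.unsqueeze(1).expand(len_a, len_b, -1)
        b = sent_b.unsqueeze(0).expand(len_a, len_b, -1)
        scores = self.bilinear(
            a.reshape(-1, a.size(-1)), b.reshape(-1, b.size(-1))
        ).view(len_a, len_b)
        # indices of the k highest-scoring (word_a, word_b) pairs
        flat_idx = scores.flatten().topk(self.k).indices
        pairs = [(int(i) // len_b, int(i) % len_b) for i in flat_idx]
        return pairs, scores


# Toy usage with random word vectors
selector = KeywordPairSelector(d_model=300, k=3)
pairs, scores = selector(torch.randn(8, 300), torch.randn(6, 300))
print(pairs)  # e.g. [(2, 4), (0, 1), (5, 3)]
```

The selected pairs could then feed a word-level matching head (akin to the WP-Task) while the full sentences feed a sentence-level head (akin to the SP-Task), though the exact task heads are beyond this sketch.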


2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Chinh Trong Nguyen ◽  
Dang Tuan Nguyen

Recently, many deep learning models have achieved high results on the question answering task, with overall F1 scores above 0.88 on SQuAD datasets. However, many of these models have quite low F1 scores on why-questions, ranging from 0.57 to 0.7 on the SQuAD v1.1 development set. This means these models are better suited to extracting answers for factoid questions than for why-questions. Why-questions are asked when explanations are needed; these explanations may be arguments or simply subjective opinions. Therefore, we propose an approach to finding the answer to a why-question using discourse analysis and natural language inference. In our approach, natural language inference is applied to identify implicit arguments at the sentence level and is also used in sentence similarity calculation. Discourse analysis is applied to identify the explicit arguments and the opinions at the sentence level in documents. The results from these two methods form the answer candidates from which the final answer for each why-question is selected. We also implement a system based on our approach. Given a why-question and a document, the system provides an answer, as in a reading comprehension test. We test our system on a Vietnamese-translated test set containing all why-questions of the SQuAD v1.1 development set. The test results show that our system cannot beat a deep learning model in F1 score; however, it can answer more questions (answer rate of 77.0%) than the deep learning model (answer rate of 61.0%).
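As a rough illustration of the NLI component, the sketch below uses an off-the-shelf MNLI cross-encoder from the Hugging Face transformers library to rank candidate explanation sentences by how strongly they entail a claim derived from the question. The checkpoint name, label order, and premise/hypothesis framing are assumptions for illustration, not the authors' exact pipeline (which also handles Vietnamese and discourse analysis).

```python
# Hypothetical sketch: rank candidate explanation sentences with an
# off-the-shelf English NLI model. Model choice and framing are assumptions,
# not the paper's actual system.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "roberta-large-mnli"  # assumed publicly available NLI checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)


def entailment_score(premise: str, hypothesis: str) -> float:
    """Probability that `premise` entails `hypothesis` under the NLI model."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = logits.softmax(dim=-1)[0]
    # roberta-large-mnli label order: 0=contradiction, 1=neutral, 2=entailment
    return probs[2].item()


# Rank document sentences as implicit arguments for the claim behind a why-question
claim = "The company moved its headquarters."
candidates = [
    "Rising rents forced the company to relocate.",
    "The company was founded in 1998.",
]
ranked = sorted(candidates, key=lambda s: entailment_score(s, claim), reverse=True)
print(ranked[0])  # the candidate most strongly supporting the claim
```

In a full system of this kind, such entailment scores would only supply one signal; discourse cues (e.g., causal connectives) would supply the explicit-argument candidates described above.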


2007 ◽  
Vol 33 (1) ◽  
pp. 105-133 ◽  
Author(s):  
Catalina Hallett ◽  
Donia Scott ◽  
Richard Power

This article describes a method for composing fluent and complex natural language questions, while avoiding the standard pitfalls of free text queries. The method, based on Conceptual Authoring, is targeted at question-answering systems where reliability and transparency are critical, and where users cannot be expected to undergo extensive training in question composition. This scenario is found in most corporate domains, especially in applications that are risk-averse. We present a proof-of-concept system we have developed: a question-answering interface to a large repository of medical histories in the area of cancer. We show that the method allows users to successfully and reliably compose complex queries with minimal training.


Poetics ◽  
1990 ◽  
Vol 19 (1-2) ◽  
pp. 99-120
Author(s):  
Stefan Wermter ◽  
Wendy G. Lehnert
