Ask me in your own words: paraphrasing for multitask question answering

2021 ◽  
Vol 7 ◽  
pp. e759
Author(s):  
G. Thomas Hudson ◽  
Noura Al Moubayed

Multitask learning has led to significant advances in Natural Language Processing, including the decaNLP benchmark where question answering is used to frame 10 natural language understanding tasks in a single model. In this work we show how models trained to solve decaNLP fail with simple paraphrasing of the question. We contribute a crowd-sourced corpus of paraphrased questions (PQ-decaNLP), annotated with paraphrase phenomena. This enables analysis of how transformations such as swapping the class labels and changing the sentence modality lead to a large performance degradation. Training both MQAN and the newer T5 model using PQ-decaNLP improves their robustness and for some tasks improves the performance on the original questions, demonstrating the benefits of a model which is more robust to paraphrasing. Additionally, we explore how paraphrasing knowledge is transferred between tasks, with the aim of exploiting the multitask property to improve the robustness of the models. We explore the addition of paraphrase detection and paraphrase generation tasks, and find that while both models are able to learn these new tasks, knowledge about paraphrasing does not transfer to other decaNLP tasks.

Author(s):  
Andrew M. Olney ◽  
Natalie K. Person ◽  
Arthur C. Graesser

The authors discuss Guru, a conversational expert ITS. Guru is designed to mimic expert human tutors using advanced applied natural language processing techniques including natural language understanding, knowledge representation, and natural language generation.


Electronics ◽  
2021 ◽  
Vol 10 (18) ◽  
pp. 2300
Author(s):  
Rade Matic ◽  
Milos Kabiljo ◽  
Miodrag Zivkovic ◽  
Milan Cabarkapa

In recent years, gradual improvements in communication and connectivity technologies have enabled new technical possibilities for the adoption of chatbots across diverse sectors such as customer service, trade, and marketing. A chatbot is a platform that uses natural language processing, a subset of artificial intelligence, to find the right answers to users’ questions and solve their problems. This paper proposes an advanced chatbot architecture that is extensible and scalable, and that supports different natural language understanding (NLU) services and communication channels for user interaction. The paper describes the overall chatbot architecture and provides corresponding metamodels, as well as rules for mapping between the proposed metamodel and two commonly used NLU metamodels. The proposed architecture can be easily extended with new NLU services and communication channels. Finally, two implementations of the proposed chatbot architecture are briefly demonstrated in the case studies of “ADA” and “COVID-19 Info Serbia”.


Triangle ◽  
2018 ◽  
pp. 65
Author(s):  
Veronica Dahl

Natural Language Processing aims to give computers the power to automatically process human language sentences, mostly in written text form but also spoken, for various purposes. This sub-discipline of AI (Artificial Intelligence) is also known as Natural Language Understanding.


Author(s):  
Roberto Navigli

In this paper I look at Natural Language Understanding, an area of Natural Language Processing aimed at making sense of text, through the lens of a visionary future: what do we expect a machine should be able to understand, and what are the key dimensions that require the attention of researchers to make this dream come true?


Author(s):  
Hima Yeldo

Abstract: Natural Language Processing (NLP) is the study of the interplay between computers and human languages. NLP has applications in fields such as email spam detection, machine translation, summarization, information extraction, and question answering. NLP comprises two parts, Natural Language Generation and Natural Language Understanding, which cover the tasks of generating and understanding text, respectively.


2016 ◽  
Vol 4 ◽  
pp. 51-60
Author(s):  
Rocío Jiménez-Briones

This paper looks at how illocutionary meaning could be accommodated in FunGramKB, a Natural Language Processing environment designed as a multipurpose lexico-conceptual knowledge base for natural language understanding applications. To this purpose, this study concentrates on the Grammaticon, which is the module that stores constructional schemata or machine-tractable representations of linguistic constructions. In particular, the aim of this paper is to discuss how illocutionary constructions such as Can You Forgive Me (XPREP)? have been translated into the metalanguage employed in FunGramKB, namely Conceptual Representation Language (COREL). The formalization of illocutionary constructions presented here builds on previous constructionist approaches, especially on those developed within the usage-based constructionist model known as the Lexical Constructional Model (Ruiz de Mendoza 2013). To illustrate our analysis, we shall focus on the speech act of CONDOLING, which is computationally handled through two related constructional domains, each of which subsumes several illocutionary configurations under one COREL schema.


Author(s):  
Vasile Rus ◽  
Philip M. McCarthy ◽  
Danielle S. McNamara ◽  
Arthur C. Graesser

Natural language understanding and assessment is a subset of natural language processing (NLP). The primary purpose of natural language understanding algorithms is to convert written or spoken human language into representations that can be manipulated by computer programs. Complex learning environments such as intelligent tutoring systems (ITSs) often depend on natural language understanding for fast and accurate interpretation of human language so that the system can respond intelligently in natural language. These ITSs function by interpreting the meaning of student input, assessing the extent to which it manifests learning, and generating suitable feedback to the learner. To be effective, systems need to be fast enough to operate in the real-time environments of ITSs. Delays in feedback caused by computational processing run the risk of frustrating the user and leading to lower engagement with the system. At the same time, the accuracy of assessing student input is critical because inaccurate feedback can potentially compromise learning and lower the student’s motivation and metacognitive awareness of the learning goals of the system (Millis et al., 2007). As such, student input in ITSs requires an assessment approach that is fast enough to operate in real time but accurate enough to provide appropriate evaluation. One of the ways in which ITSs with natural language understanding verify student input is through matching. In some cases, the match is between the user input and a pre-selected stored answer to a question, solution to a problem, misconception, or other form of benchmark response. In other cases, the system evaluates the degree to which the student input varies from a complex representation or a dynamically computed structure. The computation of matches and similarity metrics is limited by the fidelity and flexibility of the computational linguistics modules.
The major challenge with assessing natural language input is that it is relatively unconstrained and rarely follows brittle rules in its spelling, syntax, and semantics (McCarthy et al., 2007). Researchers who have developed tutorial dialogue systems in natural language have explored the accuracy of matching students’ written input to targeted knowledge. Examples of these systems are AutoTutor and Why-Atlas, which tutor students on Newtonian physics (Graesser, Olney, Haynes, & Chipman, 2005; VanLehn, Graesser, et al., 2007), and the iSTART system, which helps students read text at deeper levels (McNamara, Levinstein, & Boonthum, 2004). Systems such as these have typically relied on statistical representations, such as latent semantic analysis (LSA; Landauer, McNamara, Dennis, & Kintsch, 2007) and content word overlap metrics (McNamara, Boonthum, et al., 2007). Indeed, such statistical and word overlap algorithms have achieved considerable success. However, over short dialogue exchanges (such as those in ITSs), the accuracy of interpretation can be seriously compromised without a deeper level of lexico-syntactic textual assessment (McCarthy et al., 2007). Such a lexico-syntactic approach, entailment evaluation, is presented in this chapter. The approach incorporates deeper natural language processing solutions for ITSs with natural language exchanges while remaining sufficiently fast to provide real-time assessment of user input.
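To make the benchmark-matching idea above concrete, the following is a minimal sketch of a content-word overlap metric of the kind the abstract describes. It is not the algorithm from any of the cited systems; the tokenizer, the stopword list, and the scoring function are illustrative assumptions only.

```python
import re

# Illustrative stopword list; real systems use much larger ones.
STOPWORDS = {"the", "a", "an", "of", "to", "is", "in", "and", "on", "for", "it"}

def content_words(text):
    """Lowercase, tokenize, and drop stopwords, keeping only content words."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return {t for t in tokens if t not in STOPWORDS}

def overlap_score(student_input, benchmark):
    """Proportion of the benchmark's content words found in the student input."""
    target = content_words(benchmark)
    if not target:
        return 0.0
    return len(content_words(student_input) & target) / len(target)

score = overlap_score(
    "The ball speeds up because gravity pulls it down",
    "Gravity accelerates the falling ball downward",
)
```

A tutoring system might accept the input when the score exceeds a tuned threshold; as the abstract notes, such surface metrics are fast but can misjudge short exchanges, which motivates the deeper lexico-syntactic entailment evaluation the chapter presents.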

