Natural Language Understanding and Assessment

Author(s):  
Vasile Rus ◽  
Philip M. McCarthy ◽  
Danielle S. McNamara ◽  
Arthur C. Graesser

Natural language understanding and assessment is a subset of natural language processing (NLP). The primary purpose of natural language understanding algorithms is to convert written or spoken human language into representations that can be manipulated by computer programs. Complex learning environments such as intelligent tutoring systems (ITSs) often depend on natural language understanding for fast and accurate interpretation of human language so that the system can respond intelligently in natural language. These ITSs function by interpreting the meaning of student input, assessing the extent to which it manifests learning, and generating suitable feedback to the learner. To operate effectively, such systems must be fast enough to work in the real-time environments of ITSs. Delays in feedback caused by computational processing risk frustrating the user and lowering engagement with the system. At the same time, the accuracy of assessing student input is critical because inaccurate feedback can compromise learning and lower the student’s motivation and metacognitive awareness of the learning goals of the system (Millis et al., 2007). As such, student input in ITSs requires an assessment approach that is fast enough to operate in real time but accurate enough to provide appropriate evaluation.

One of the ways in which ITSs with natural language understanding verify student input is through matching. In some cases, the match is between the user input and a pre-selected stored answer to a question, solution to a problem, misconception, or other form of benchmark response. In other cases, the system evaluates the degree to which the student input varies from a complex representation or a dynamically computed structure. The computation of matches and similarity metrics is limited by the fidelity and flexibility of the computational linguistics modules. The major challenge in assessing natural language input is that it is relatively unconstrained and rarely follows brittle rules of spelling, syntax, and semantics (McCarthy et al., 2007).

Researchers who have developed tutorial dialogue systems in natural language have explored the accuracy of matching students’ written input to targeted knowledge. Examples of these systems are AutoTutor and Why-Atlas, which tutor students on Newtonian physics (Graesser, Olney, Haynes, & Chipman, 2005; VanLehn, Graesser, et al., 2007), and the iSTART system, which helps students read text at deeper levels (McNamara, Levinstein, & Boonthum, 2004). Systems such as these have typically relied on statistical representations, such as latent semantic analysis (LSA; Landauer, McNamara, Dennis, & Kintsch, 2007), and content-word overlap metrics (McNamara, Boonthum, et al., 2007). Indeed, such statistical and word-overlap algorithms have enjoyed considerable success. However, over short dialogue exchanges (such as those in ITSs), the accuracy of interpretation can be seriously compromised without a deeper level of lexico-syntactic textual assessment (McCarthy et al., 2007). Such a lexico-syntactic approach, entailment evaluation, is presented in this chapter. The approach incorporates deeper natural language processing solutions for ITSs with natural language exchanges while remaining fast enough to provide real-time assessment of user input.
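As a concrete illustration of the kind of benchmark matching described above, the sketch below scores a student response against a stored expectation using content-word overlap and cosine similarity. It is a minimal stand-in for the statistical representations (e.g. LSA) the cited systems actually use; the stop-word list, function names, and example sentences are illustrative assumptions, not taken from AutoTutor, Why-Atlas, or iSTART.

```python
import re
from collections import Counter
from math import sqrt

# Small, illustrative list of function words to ignore (an assumption, not from the cited systems).
CONTENT_STOPWORDS = {"the", "a", "an", "of", "to", "is", "are", "and", "it",
                     "its", "that", "so", "on", "because", "no"}

def content_words(text):
    """Lowercase, tokenize, and keep only content words."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return [t for t in tokens if t not in CONTENT_STOPWORDS]

def cosine_overlap(student_input, benchmark):
    """Cosine similarity between content-word frequency vectors (0.0 to 1.0)."""
    a, b = Counter(content_words(student_input)), Counter(content_words(benchmark))
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical physics expectation and student answer.
expectation = "The net force on the object is zero, so its velocity stays constant."
student = "Because no net force acts on it, the object keeps a constant velocity."
print(round(cosine_overlap(student, expectation), 2))  # high score -> treat as a match
```

A real ITS would compare the score against a calibrated threshold and combine it with richer representations; the point here is only the speed and simplicity of the word-overlap baseline.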

Triangle ◽  
2018 ◽  
pp. 65
Author(s):  
Veronica Dahl

Natural Language Processing aims to give computers the power to automatically process human-language sentences, mostly in written form but also spoken, for various purposes. This sub-discipline of AI (Artificial Intelligence) is also known as Natural Language Understanding.


Author(s):  
Andrew M. Olney ◽  
Natalie K. Person ◽  
Arthur C. Graesser

The authors discuss Guru, a conversational expert ITS. Guru is designed to mimic expert human tutors using advanced applied natural language processing techniques including natural language understanding, knowledge representation, and natural language generation.


Electronics ◽  
2021 ◽  
Vol 10 (18) ◽  
pp. 2300
Author(s):  
Rade Matic ◽  
Milos Kabiljo ◽  
Miodrag Zivkovic ◽  
Milan Cabarkapa

In recent years, gradual improvements in communication and connectivity technologies have opened new technical possibilities for adopting chatbots across diverse sectors such as customer service, trade, and marketing. A chatbot is a platform that uses natural language processing, a subset of artificial intelligence, to find the right answers to users’ questions and solve their problems. An advanced chatbot architecture is proposed that is extensible and scalable and that supports different natural language understanding (NLU) services and communication channels for user interaction. The paper describes the overall chatbot architecture and provides corresponding metamodels, as well as rules for mapping between the proposed metamodel and two commonly used NLU metamodels. The proposed architecture can be easily extended with new NLU services and communication channels. Finally, two implementations of the proposed architecture are briefly demonstrated in the case studies of “ADA” and “COVID-19 Info Serbia”.
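To make the extensibility claim concrete, here is a minimal sketch of a plugin-style core in which new NLU services and communication channels are added by implementing small interfaces. The class and method names (NLUService, Channel, ChatbotCore, and so on) are hypothetical illustrations, not the architecture or metamodels actually proposed for ADA or COVID-19 Info Serbia.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class NLUResult:
    intent: str
    confidence: float
    entities: dict

class NLUService(ABC):
    """Any NLU backend (rule-based, self-hosted, or cloud) plugs in here."""
    @abstractmethod
    def parse(self, utterance: str) -> NLUResult: ...

class Channel(ABC):
    """Any communication channel (web widget, messenger, SMS) plugs in here."""
    @abstractmethod
    def send(self, user_id: str, message: str) -> None: ...

class KeywordNLU(NLUService):
    """Toy NLU service: maps keywords to intents."""
    def parse(self, utterance: str) -> NLUResult:
        text = utterance.lower()
        if "symptom" in text:
            return NLUResult("ask_symptoms", 0.9, {})
        return NLUResult("fallback", 0.3, {})

class ConsoleChannel(Channel):
    def send(self, user_id: str, message: str) -> None:
        print(f"[{user_id}] {message}")

class ChatbotCore:
    """Routes utterances through the configured NLU service and out via a channel."""
    def __init__(self, nlu: NLUService, channel: Channel, answers: dict):
        self.nlu, self.channel, self.answers = nlu, channel, answers

    def handle(self, user_id: str, utterance: str) -> None:
        result = self.nlu.parse(utterance)
        reply = self.answers.get(result.intent, "Sorry, I did not understand that.")
        self.channel.send(user_id, reply)

bot = ChatbotCore(KeywordNLU(), ConsoleChannel(),
                  {"ask_symptoms": "Common symptoms include fever and cough."})
bot.handle("u1", "What are the symptoms?")
```

In a real deployment the KeywordNLU stand-in would be replaced by a trained NLU backend, and additional Channel implementations would cover whichever messaging platforms the architecture targets.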


Author(s):  
Roberto Navigli

In this paper I look at Natural Language Understanding, an area of Natural Language Processing aimed at making sense of text, through the lens of a visionary future: what do we expect a machine to be able to understand, and what are the key dimensions that require the attention of researchers to make this dream come true?


2021 ◽  
Vol 7 ◽  
pp. e759
Author(s):  
G. Thomas Hudson ◽  
Noura Al Moubayed

Multitask learning has led to significant advances in Natural Language Processing, including the decaNLP benchmark, where question answering is used to frame 10 natural language understanding tasks in a single model. In this work we show how models trained to solve decaNLP fail under simple paraphrasing of the question. We contribute a crowd-sourced corpus of paraphrased questions (PQ-decaNLP), annotated with paraphrase phenomena. This enables analysis of how transformations such as swapping the class labels and changing the sentence modality lead to a large performance degradation. Training both MQAN and the newer T5 model on PQ-decaNLP improves their robustness and, for some tasks, improves performance on the original questions, demonstrating the benefits of a model that is more robust to paraphrasing. Additionally, we explore how paraphrasing knowledge is transferred between tasks, with the aim of exploiting the multitask property to improve the robustness of the models. We explore the addition of paraphrase detection and paraphrase generation tasks, and find that while both models are able to learn these new tasks, knowledge about paraphrasing does not transfer to other decaNLP tasks.
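A minimal sketch of the evaluation idea, assuming a toy stand-in model and invented data: accuracy is measured on the original question wording and again on a paraphrase of it, and the gap quantifies the degradation. The predict() function and the example record below are hypothetical placeholders, not MQAN, T5, or the actual PQ-decaNLP corpus.

```python
def predict(context: str, question: str) -> str:
    """Stand-in for a QA-framed multitask model; deliberately brittle to question wording."""
    q = question.lower()
    if "negative or positive" in q or "positive or negative" in q:
        return "positive" if "good" in context.lower() else "negative"
    return "negative"  # falls apart once the question is rephrased

# One invented sentiment example framed as question answering, with a paraphrased question.
examples = [
    {
        "context": "The film was surprisingly good.",
        "question_original": "Is this review negative or positive?",
        "question_paraphrase": "Would you say the reviewer liked the film or disliked it?",
        "answer": "positive",
    },
]

def accuracy(question_key: str) -> float:
    correct = sum(predict(ex["context"], ex[question_key]) == ex["answer"] for ex in examples)
    return correct / len(examples)

drop = accuracy("question_original") - accuracy("question_paraphrase")
print(f"performance degradation under paraphrasing: {drop:.2f}")
```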


Author(s):  
Asoke Nath ◽  
Rupamita Sarkar ◽  
Swastik Mitra ◽  
Rohitaswa Pradhan

In the early days of Artificial Intelligence, it was observed that tasks which humans consider ‘natural’ and ‘commonplace’, such as Natural Language Understanding, Natural Language Generation and Vision, were the most difficult tasks to carry over to computers. Nevertheless, attempts to crack the proverbial NLP nut were made, initially with methods that fall under ‘Symbolic NLP’. One of the products of this era was ELIZA. At present the most promising forays into the world of NLP are provided by ‘Neural NLP’, which uses Representation Learning and Deep Neural Networks to model, understand and generate natural language. In the present paper the authors set out to develop a Conversational Intelligent Chatbot, a program that can chat with a user about any conceivable topic without having domain-specific knowledge programmed into it. This is a challenging task, as it involves both ‘Natural Language Understanding’ (the task of converting natural language user input into representations that a machine can understand) and subsequently ‘Natural Language Generation’ (the task of generating an appropriate response to the user input in natural language). Several approaches exist for building conversational chatbots. Here, two models have been used and their performance compared and contrasted. The first model is purely generative and uses a Transformer-based architecture. The second model is retrieval-based and uses Deep Neural Networks.
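As a rough illustration of the retrieval-based approach, the sketch below stores (message, response) pairs and replies with the response attached to the stored message most similar to the user's input. Simple string similarity stands in for the deep-neural-network matcher used in the paper, and all data and names are invented for the example.

```python
from difflib import SequenceMatcher

# Invented (message, response) pairs standing in for a harvested dialogue corpus.
pairs = [
    ("hello how are you", "I'm doing well, thank you for asking!"),
    ("what is your favourite book", "I enjoy classic science fiction novels."),
    ("tell me a joke", "Why did the neural network cross the road? To minimise the loss."),
]

def retrieve(user_message: str) -> str:
    """Reply with the response whose paired message best matches the input."""
    def score(pair):
        return SequenceMatcher(None, user_message.lower(), pair[0]).ratio()
    _, response = max(pairs, key=score)
    return response

print(retrieve("Hi there, how are you?"))
```

A generative Transformer-based model, by contrast, would produce the reply token by token rather than selecting it from a fixed pool, which is why the two approaches trade fluency and novelty against controllability.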


2016 ◽  
Vol 4 ◽  
pp. 51-60
Author(s):  
Rocío Jiménez-Briones

This paper looks at how illocutionary meaning could be accommodated in FunGramKB, a Natural Language Processing environment designed as a multipurpose lexico-conceptual knowledge base for natural language understanding applications. To this purpose, this study concentrates on the Grammaticon, which is the module that stores constructional schemata or machine-tractable representations of linguistic constructions. In particular, the aim of this paper is to discuss how illocutionary constructions such as Can You Forgive Me (XPREP)? have been translated into the metalanguage employed in FunGramKB, namely Conceptual Representation Language (COREL). The formalization of illocutionary constructions presented here builds on previous constructionist approaches, especially on those developed within the usage-based constructionist model known as the Lexical Constructional Model (Ruiz de Mendoza 2013). To illustrate our analysis, we shall focus on the speech act of CONDOLING, which is computationally handled through two related constructional domains, each of which subsumes several illocutionary configurations under one COREL schema.
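Purely as an illustration of what a machine-tractable constructional schema might look like, the sketch below encodes a CONDOLING construction as a small data structure pairing a form pattern with open variables. It is not COREL (whose syntax is specific to FunGramKB); every field name and value is a hypothetical stand-in.

```python
import re
from dataclasses import dataclass, field

@dataclass
class ConstructionalSchema:
    name: str
    form_pattern: str              # surface template with an open slot {X}
    speech_act: str                # e.g. CONDOLING, REQUESTING
    variables: dict = field(default_factory=dict)

# Hypothetical schema for one condoling construction.
condoling = ConstructionalSchema(
    name="I_am_sorry_for_X",
    form_pattern=r"I am (?:so |very )?sorry for {X}",
    speech_act="CONDOLING",
    variables={"X": "the loss or misfortune affecting the hearer"},
)

def matches(schema: ConstructionalSchema, utterance: str) -> bool:
    """Crude check that an utterance instantiates the schema's form pattern."""
    pattern = schema.form_pattern.replace("{X}", "(?P<X>.+)")
    return re.match(pattern, utterance, flags=re.IGNORECASE) is not None

print(matches(condoling, "I am so sorry for your loss."))   # True
```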


Author(s):  
Shreyashi Chowdhury ◽  
Asoke Nath

Natural language processing (NLP) is a subfield of linguistics, computer science, and artificial intelligence concerned with the interactions between computers and human language, in particular how to program computers to process and analyse large amounts of natural language data. The goal is a computer capable of "understanding" the contents of documents, including the contextual nuances of the language within them. NLP combines computational linguistics (rule-based modelling of human language) with statistical, machine learning, and deep learning models. Together, these technologies enable computers to process human language in the form of text or voice data and to ‘understand’ its full meaning, complete with the speaker or writer’s intent and sentiment. Challenges in natural language processing frequently involve speech recognition, natural language understanding, and natural language generation. This paper discusses the scope, challenges, current trends, and future directions of Natural Language Processing.


2021 ◽  
Vol 11 (7) ◽  
pp. 3095
Author(s):  
Suhyune Son ◽  
Seonjeong Hwang ◽  
Sohyeun Bae ◽  
Soo Jun Park ◽  
Jang-Hwan Choi

Multi-task learning (MTL) approaches are actively used for various natural language processing (NLP) tasks. The Multi-Task Deep Neural Network (MT-DNN) has contributed significantly to improving the performance of natural language understanding (NLU) tasks. However, one drawback is that confusion about the language representation of the various tasks arises during training of the MT-DNN model. Inspired by the internal-transfer weighting of MTL in medical imaging, we introduce a Sequential and Intensive Weighted Language Modeling (SIWLM) scheme. The SIWLM consists of two stages: (1) Sequential weighted learning (SWL), which trains the model on all tasks sequentially and concentrically, and (2) Intensive weighted learning (IWL), which enables the model to focus on the central task. We apply this scheme to the MT-DNN model and call the resulting model MTDNN-SIWLM. Our model achieves higher performance than the existing reference algorithms on six of the eight GLUE benchmark tasks. Moreover, our model outperforms MT-DNN by 0.77 on average over all tasks. Finally, we conduct a thorough empirical investigation to determine the optimal weight for each GLUE task.
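A hedged reading of the two-stage idea can be sketched as a weighted multi-task training loop: a first pass visits every task with a uniform weight (sequential weighted learning), and a second pass upweights a designated central task (intensive weighted learning). The task list, weights, and train_step function below are illustrative placeholders, not the authors' MTDNN-SIWLM implementation.

```python
import random

TASKS = ["cola", "sst2", "mrpc", "qnli"]   # illustrative subset of GLUE-style tasks
CENTRAL_TASK = "mrpc"                      # task the intensive stage concentrates on

def train_step(task: str, weight: float) -> float:
    """Stand-in for one weighted optimisation step; returns the scaled loss."""
    raw_loss = random.random()             # placeholder for the task's batch loss
    return weight * raw_loss               # the weight scales the gradient signal

# Stage 1: Sequential weighted learning -- visit every task in turn with a
# uniform weight, so the shared encoder sees all tasks without one dominating.
for epoch in range(2):
    for task in TASKS:
        train_step(task, weight=1.0)

# Stage 2: Intensive weighted learning -- keep sampling all tasks, but upweight
# the central task so the model focuses on it while retaining the others.
for epoch in range(2):
    for task in TASKS:
        weight = 2.0 if task == CENTRAL_TASK else 0.5
        train_step(task, weight=weight)
```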

