An Introduction to Natural Language Processing: The Main Problems

Triangle ◽  
2018 ◽  
pp. 65
Author(s):  
Veronica Dahl

Natural Language Processing aims to give computers the power to automatically process human language sentences, mostly in written text form but also spoken, for various purposes. This sub-discipline of AI (Artificial Intelligence) is also known as Natural Language Understanding.

Electronics ◽  
2021 ◽  
Vol 10 (18) ◽  
pp. 2300
Author(s):  
Rade Matic ◽  
Milos Kabiljo ◽  
Miodrag Zivkovic ◽  
Milan Cabarkapa

In recent years, gradual improvements in communication and connectivity technologies have enabled new technical possibilities for the adoption of chatbots across diverse sectors such as customer service, trade, and marketing. A chatbot is a platform that uses natural language processing, a subset of artificial intelligence, to find the right answer to users’ questions and solve their problems. An advanced chatbot architecture is proposed that is extensible and scalable and that supports different services for natural language understanding (NLU) as well as communication channels for user interaction. The paper describes the overall chatbot architecture and provides corresponding metamodels, as well as rules for mapping between the proposed metamodel and two commonly used NLU metamodels. The proposed architecture can easily be extended with new NLU services and communication channels. Finally, two implementations of the proposed chatbot architecture are briefly demonstrated in the case studies of “ADA” and “COVID-19 Info Serbia”.
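The plug-in structure the abstract describes — NLU services and communication channels behind common interfaces — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation; the class names and the keyword-matching backend are assumptions for demonstration:

```python
from abc import ABC, abstractmethod

class NLUService(ABC):
    """Common interface that every pluggable NLU backend implements."""
    @abstractmethod
    def parse(self, text: str) -> dict:
        """Return the detected intent and a confidence score."""

class KeywordNLU(NLUService):
    """Trivial stand-in backend that matches intents by keyword."""
    def __init__(self, intents):
        self.intents = intents  # intent name -> list of trigger keywords
    def parse(self, text):
        lowered = text.lower()
        for intent, keywords in self.intents.items():
            if any(k in lowered for k in keywords):
                return {"intent": intent, "confidence": 1.0}
        return {"intent": "fallback", "confidence": 0.0}

class Chatbot:
    """Routes a message from any communication channel through the NLU service."""
    def __init__(self, nlu, answers):
        self.nlu = nlu
        self.answers = answers  # intent name -> canned reply
    def handle(self, text):
        intent = self.nlu.parse(text)["intent"]
        return self.answers.get(intent, "Sorry, I did not understand that.")

bot = Chatbot(
    KeywordNLU({"covid_info": ["covid", "symptoms"]}),
    {"covid_info": "See the official COVID-19 guidance."},
)
reply = bot.handle("What are the covid symptoms?")
```

Because the `Chatbot` only depends on the `NLUService` interface, a new NLU backend or channel can be swapped in without touching the rest of the system — the extensibility property the architecture claims.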


Author(s):  
Vasile Rus ◽  
Philip M. McCarthy ◽  
Danielle S. McNamara ◽  
Arthur C. Graesser

Natural language understanding and assessment is a subset of natural language processing (NLP). The primary purpose of natural language understanding algorithms is to convert written or spoken human language into representations that can be manipulated by computer programs. Complex learning environments such as intelligent tutoring systems (ITSs) often depend on natural language understanding for fast and accurate interpretation of human language so that the system can respond intelligently in natural language. These ITSs function by interpreting the meaning of student input, assessing the extent to which it manifests learning, and generating suitable feedback to the learner. To operate effectively, these systems need to be fast enough for the real-time environments of ITSs. Delays in feedback caused by computational processing run the risk of frustrating the user and lowering engagement with the system. At the same time, the accuracy of assessing student input is critical because inaccurate feedback can potentially compromise learning and lower the student’s motivation and metacognitive awareness of the learning goals of the system (Millis et al., 2007). As such, student input in ITSs requires an assessment approach that is fast enough to operate in real time but accurate enough to provide appropriate evaluation. One of the ways in which ITSs with natural language understanding verify student input is through matching. In some cases, the match is between the user input and a pre-selected stored answer to a question, solution to a problem, misconception, or other form of benchmark response. In other cases, the system evaluates the degree to which the student input varies from a complex representation or a dynamically computed structure. The computation of matches and similarity metrics is limited by the fidelity and flexibility of the computational linguistics modules.
The major challenge with assessing natural language input is that it is relatively unconstrained and rarely follows brittle rules in its spelling, syntax, and semantics (McCarthy et al., 2007). Researchers who have developed tutorial dialogue systems in natural language have explored the accuracy of matching students’ written input to targeted knowledge. Examples of these systems are AutoTutor and Why-Atlas, which tutor students on Newtonian physics (Graesser, Olney, Haynes, & Chipman, 2005; VanLehn, Graesser, et al., 2007), and the iSTART system, which helps students read text at deeper levels (McNamara, Levinstein, & Boonthum, 2004). Systems such as these have typically relied on statistical representations, such as latent semantic analysis (LSA; Landauer, McNamara, Dennis, & Kintsch, 2007) and content word overlap metrics (McNamara, Boonthum, et al., 2007). Indeed, such statistical and word overlap algorithms can boast much success. However, over short dialogue exchanges (such as those in ITSs), the accuracy of interpretation can be seriously compromised without a deeper level of lexico-syntactic textual assessment (McCarthy et al., 2007). Such a lexico-syntactic approach, entailment evaluation, is presented in this chapter. The approach incorporates deeper natural language processing solutions for ITSs with natural language exchanges while remaining sufficiently fast to provide real-time assessment of user input.
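A content-word-overlap metric of the kind cited above can be sketched minimally as follows. The stopword list and the scoring choice are illustrative assumptions, not the published algorithm:

```python
# Minimal content-word-overlap score between student input and a stored
# benchmark answer; the stopword list here is an illustrative assumption.
STOPWORDS = {"the", "a", "an", "is", "are", "of", "to", "and", "in", "it"}

def content_words(text):
    """Lowercase alphabetic tokens with stopwords removed."""
    return {w for w in text.lower().split() if w.isalpha() and w not in STOPWORDS}

def overlap_score(student, benchmark):
    """Fraction of the benchmark's content words that appear in the student input."""
    target = content_words(benchmark)
    if not target:
        return 0.0
    return len(content_words(student) & target) / len(target)

score = overlap_score(
    "the ball falls because gravity pulls it down",
    "gravity pulls the ball down",
)
```

A metric this shallow is fast enough for real-time use, which is exactly why the chapter argues it needs to be supplemented with deeper lexico-syntactic assessment over short dialogue exchanges.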


Author(s):  
Andrew M. Olney ◽  
Natalie K. Person ◽  
Arthur C. Graesser

The authors discuss Guru, a conversational expert ITS. Guru is designed to mimic expert human tutors using advanced applied natural language processing techniques including natural language understanding, knowledge representation, and natural language generation.


2021 ◽  
Author(s):  
Priya B ◽  
Nandhini J.M ◽  
Gnanasekaran T

Natural Language Processing (NLP), a subfield of computer science dealing with artificial intelligence, enables computers to understand and process human language. As a branch of artificial intelligence, NLP provides computers with an understanding of human language for the purpose of extracting information or insights and creating meaningful responses. It involves creating algorithms that transform text into labeled tokens. With the emerging advancements in machine learning and deep learning, NLP can contribute a great deal to the health sector, education, agriculture, and so on. This paper summarizes the various aspects of NLP, along with case studies from the health sector on a voice-automated system and the prediction of diabetes mellitus, and a crop-detection technique from the agriculture sector.
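The idea of algorithms that transform text into labeled tokens can be illustrated with a toy sketch; the tokenizer and the label set below are assumptions for illustration, not the paper's method:

```python
import re

# Toy illustration of transforming raw text into labeled tokens.
def tokenize(text):
    """Split text into words, numbers, and punctuation marks."""
    return re.findall(r"[A-Za-z']+|\d+|[^\w\s]", text)

def label(token):
    """Assign a coarse category to each token (illustrative label set)."""
    if token.isdigit():
        return "NUM"
    if re.fullmatch(r"[^\w\s]", token):
        return "PUNCT"
    return "WORD"

tokens = [(t, label(t)) for t in tokenize("NLP aids 3 sectors: health, education, agriculture.")]
```

Real NLP pipelines replace these hand-written rules with statistical or neural models, but the input/output shape — text in, labeled tokens out — is the same.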


Author(s):  
Roberto Navigli

In this paper I look at Natural Language Understanding, an area of Natural Language Processing aimed at making sense of text, through the lens of a visionary future: what do we expect a machine to be able to understand, and what are the key dimensions that require the attention of researchers to make this dream come true?


2021 ◽  
Vol 7 ◽  
pp. e759
Author(s):  
G. Thomas Hudson ◽  
Noura Al Moubayed

Multitask learning has led to significant advances in Natural Language Processing, including the decaNLP benchmark where question answering is used to frame 10 natural language understanding tasks in a single model. In this work we show how models trained to solve decaNLP fail with simple paraphrasing of the question. We contribute a crowd-sourced corpus of paraphrased questions (PQ-decaNLP), annotated with paraphrase phenomena. This enables analysis of how transformations such as swapping the class labels and changing the sentence modality lead to a large performance degradation. Training both MQAN and the newer T5 model using PQ-decaNLP improves their robustness and for some tasks improves the performance on the original questions, demonstrating the benefits of a model which is more robust to paraphrasing. Additionally, we explore how paraphrasing knowledge is transferred between tasks, with the aim of exploiting the multitask property to improve the robustness of the models. We explore the addition of paraphrase detection and paraphrase generation tasks, and find that while both models are able to learn these new tasks, knowledge about paraphrasing does not transfer to other decaNLP tasks.
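decaNLP's framing of every task as question answering, and the paraphrase probes this abstract describes, can be sketched as follows. The field names and the example item are illustrative, not drawn from PQ-decaNLP itself:

```python
# decaNLP casts every task as (question, context) -> answer; a paraphrased
# question should leave the context and the gold answer unchanged.
example = {
    "task": "sentiment",
    "question": "Is this sentence positive or negative?",
    "context": "The film was a delight from start to finish.",
    "answer": "positive",
}

def paraphrase_probe(ex, new_question):
    """Build a paraphrase probe: same context and gold answer, reworded question."""
    probe = dict(ex)
    probe["question"] = new_question
    return probe

# Swapping the order of the class labels in the question is one of the
# transformations the paper reports as degrading model performance.
probe = paraphrase_probe(example, "Would you say this sentence is negative or positive?")
```

Because the gold answer is held fixed, any drop in accuracy on such probes isolates the model's sensitivity to the wording of the question rather than to the task itself.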


2020 ◽  
Vol 4 (1) ◽  
pp. 208
Author(s):  
Albert Yakobus Chandra ◽  
Didik Kurniawan ◽  
Rahmat Musa

At institutions such as micro-enterprises, staff often provide information and transaction services to customers manually, and this cycle repeats from one customer to the next. When the customer queue is crowded, the staff workload rises and so does the risk of transaction errors. Information technology and artificial intelligence are advancing rapidly in the Industry 4.0 era; one such advance is machine learning with natural language processing (NLP), the field that focuses on how computers can understand human language and respond to it. In this research, a chatbot system is therefore built to provide information and conduct transactions with customers. The chatbot is developed using the Dialogflow tools provided by Google and is expected to be an alternative that various businesses can implement to provide better customer service.


2016 ◽  
Vol 4 ◽  
pp. 51-60
Author(s):  
Rocío Jiménez-Briones

This paper looks at how illocutionary meaning could be accommodated in FunGramKB, a Natural Language Processing environment designed as a multipurpose lexico-conceptual knowledge base for natural language understanding applications. To this purpose, this study concentrates on the Grammaticon, which is the module that stores constructional schemata or machine-tractable representations of linguistic constructions. In particular, the aim of this paper is to discuss how illocutionary constructions such as Can You Forgive Me (XPREP)? have been translated into the metalanguage employed in FunGramKB, namely Conceptual Representation Language (COREL). The formalization of illocutionary constructions presented here builds on previous constructionist approaches, especially on those developed within the usage-based constructionist model known as the Lexical Constructional Model (Ruiz de Mendoza 2013). To illustrate our analysis, we shall focus on the speech act of CONDOLING, which is computationally handled through two related constructional domains, each of which subsumes several illocutionary configurations under one COREL schema.


Author(s):  
Shreyashi Chowdhury ◽  
Asoke Nath

Natural language processing (NLP) is a subfield of linguistics, computer science, and artificial intelligence concerned with the interactions between computers and human language, in particular how to program computers to process and analyse large amounts of natural language data. The goal is a computer capable of "understanding" the contents of documents, including the contextual nuances of the language within them. NLP combines computational linguistics—rule-based modelling of human language—with statistical, machine learning, and deep learning models. Together, these technologies enable computers to process human language in the form of text or voice data and to ‘understand’ its full meaning, complete with the speaker or writer’s intent and sentiment. Challenges in natural language processing frequently involve speech recognition, natural language understanding, and natural language generation. This paper discusses the scope, challenges, current trends, and future directions of Natural Language Processing.

