Language Understanding
Recently Published Documents


TOTAL DOCUMENTS: 1424 (FIVE YEARS: 483)
H-INDEX: 41 (FIVE YEARS: 7)

2022 · Vol 2022 · pp. 1-8
Author(s): Xianben Yang, Wei Zhang

In recent years, with the wide application of deep learning and the growth of multimodal research, image retrieval systems have gradually extended from traditional text retrieval to visual retrieval that combines images with text, and this has become one of the important cross-disciplinary research hotspots spanning computer vision and natural language understanding. This paper studies graph convolutional networks for cross-modal information retrieval, building on a literature review of cross-modal retrieval and the related theory of convolutional networks. The proposed cross-modal retrieval model combines high-level semantics with low-level visual features to improve retrieval accuracy. Experiments verify the designed network model: it is more accurate than the traditional retrieval model, reaching an accuracy of up to 90%.
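The abstract does not give the network's details, but the graph-convolution building block it relies on can be sketched with the standard propagation rule H' = ReLU(D^(-1/2)(A+I)D^(-1/2) H W); the toy graph, features, and weights below are illustrative assumptions, not the paper's model.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])                    # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt          # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)            # ReLU activation

# Toy graph: 3 nodes in a chain, 2 input features, 2 output features.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
W = np.eye(2)
H_out = gcn_layer(A, H, W)
print(H_out.shape)  # (3, 2)
```

In a cross-modal setting, layers like this would propagate features over a graph linking image regions and text tokens before the two modalities are compared in a shared embedding space.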


2021 · Vol 4 (2) · pp. 85-90
Author(s): Isaac Kuria, Harrison Njoroge

University websites and online portals are the primary means through which potential students and other stakeholders find important information about an institution, and they are essential to these organizations' marketing and communication efforts. This paper focuses on the need to complement these websites with an AI chatbot (UniBot) in order to serve users more efficiently. The study performs an extensive literature survey on intelligent conversational agents and the feasibility of applying them to enhance online communication in universities. It follows an iterative-incremental methodology to design and develop UniBot, using the AIML (Artificial Intelligence Markup Language) pattern-matching algorithm on the Pandorabot (AIaaS) platform to generate high-quality training data, with which the agent's Natural Language Understanding (NLU) model is trained. The agent is trained and tested on data acquired from the Online Communication, University Website department at Kenyatta University.
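AIML's core mechanism is matching a normalized user input against patterns whose `*` wildcard captures words for reuse in the response template. A minimal sketch of that mechanism is below; the two categories and their responses are invented examples, not UniBot's actual knowledge base.

```python
import re

# Hypothetical mini knowledge base in AIML style: pattern -> template,
# where '*' matches one or more words, reusable in the template.
categories = {
    "WHAT COURSES DO YOU OFFER": "You can browse all programmes on the university website.",
    "HOW DO I APPLY FOR *": "To apply for {star}, visit the admissions portal.",
}

def aiml_pattern_to_regex(pattern):
    """Compile an AIML-style pattern into a case-insensitive regex."""
    parts = [r"(.+)" if tok == "*" else re.escape(tok) for tok in pattern.split()]
    return re.compile(r"^" + r"\s+".join(parts) + r"$", re.IGNORECASE)

def respond(user_input):
    normalized = re.sub(r"[^\w\s]", "", user_input).strip()  # drop punctuation
    for pattern, template in categories.items():
        m = aiml_pattern_to_regex(pattern).match(normalized)
        if m:
            star = m.group(1) if m.groups() else ""
            return template.format(star=star)
    return "Sorry, I don't understand yet."

print(respond("How do I apply for the MSc in Computer Science?"))
# -> To apply for the MSc in Computer Science, visit the admissions portal.
```

A production deployment on Pandorabots would express the same categories in AIML's XML syntax (`<category>`, `<pattern>`, `<template>`) rather than a Python dict.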


2021
Author(s): Brandon Bennett

The Winograd Schema Challenge is a general test for Artificial Intelligence based on problems of pronoun reference resolution. I investigate the semantics and interpretation of Winograd Schemas, concentrating on the original and most famous example. This study suggests that a rich ontology, detailed commonsense knowledge, and special-purpose inference mechanisms are all required to resolve just this one example. The analysis supports the view that a key factor in the interpretation and disambiguation of natural language is the preference for coherence. This preference guides the resolution of co-reference both for explicitly mentioned entities and for implicit entities that are required to form an interpretation of what is being described. I suggest that the assumed identity of implicit entities arises from the expectation of coherence and provides a key mechanism that underpins natural language understanding. I also argue that conceptual ontologies can play a decisive role not only in directly determining pronoun references but also in identifying implicit entities and implied relationships that bind together the components of a sentence.
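The original schema reads "The city councilmen refused the demonstrators a permit because they feared/advocated violence", with "they" flipping referent depending on the verb. The coherence-driven resolution the abstract describes can be caricatured as scoring candidates against commonsense plausibility; the roles, predicates, and scores below are illustrative assumptions, not the chapter's formalism.

```python
# Hypothetical commonsense scores: (role, verb) -> plausibility that an
# entity in that role performs the verb (here, with 'violence' as object).
commonsense = {
    ("authority", "fear"): 0.9,      # authorities plausibly fear violence
    ("protester", "fear"): 0.3,
    ("authority", "advocate"): 0.1,
    ("protester", "advocate"): 0.8,  # protesters plausibly advocate it
}

# Candidate antecedents for 'they', tagged with their ontological role.
candidates = {"the city councilmen": "authority", "the demonstrators": "protester"}

def resolve_pronoun(verb):
    """Pick the antecedent whose role makes the 'because' clause most coherent."""
    return max(candidates, key=lambda c: commonsense[(candidates[c], verb)])

print(resolve_pronoun("fear"))      # -> the city councilmen
print(resolve_pronoun("advocate"))  # -> the demonstrators
```

The point of the chapter is precisely that such scores cannot be hand-listed in general: they must emerge from a rich ontology plus inference, which is what makes the schema a hard test.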


2021 · Vol 1 (2) · pp. 18-22
Author(s): Strahil Sokolov, Stanislava Georgieva

This paper presents a new approach to the processing and categorization of text from patient documents in the Bulgarian language using Natural Language Processing and Edge AI. The proposed algorithm comprises several phases: personal-data anonymization, pre-processing and conversion of text to vectors, model training, and recognition. The experimental results in terms of achieved accuracy are comparable with modern approaches.
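The phases before training can be sketched end to end; the anonymization pattern, example sentence, and vocabulary below are invented for illustration (the paper works on Bulgarian clinical text, not this toy English line).

```python
import re
from collections import Counter

def anonymize(text):
    """Phase 1: mask personal identifiers (here, a 10-digit national-ID-like pattern)."""
    return re.sub(r"\b\d{10}\b", "<ID>", text)

def preprocess(text):
    """Phase 2a: lowercase and tokenize."""
    return re.findall(r"\w+", text.lower())

def to_vector(tokens, vocabulary):
    """Phase 2b: bag-of-words count vector over a fixed vocabulary."""
    counts = Counter(tokens)
    return [counts[w] for w in vocabulary]

doc = "Patient 9901011234 reports chest pain and chest tightness"
tokens = preprocess(anonymize(doc))
vocab = ["chest", "pain", "tightness", "id"]
print(to_vector(tokens, vocab))  # [2, 1, 1, 1]
```

On an edge device, the vectorizer's vocabulary would be fixed at training time so that the deployed model and the pre-processing stage stay in sync.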


2021
Author(s): Alessandro Oltramari, Jonathan Francis, Filip Ilievski, Kaixin Ma, Roshanak Mirzaee

This chapter illustrates how suitable neuro-symbolic models for language understanding can enable domain generalizability and robustness in downstream tasks. Different methods for integrating neural language models and knowledge graphs are discussed. The situations in which this combination is most appropriate are characterized, including quantitative evaluation and qualitative error analysis on a variety of commonsense question answering benchmark datasets.


AI · 2021 · Vol 2 (4) · pp. 738-755
Author(s): Jingxiu Huang, Qingtang Liu, Yunxiang Zheng, Linjing Wu

Natural language understanding technologies play an essential role in automatically solving math word problems. In machine understanding of Chinese math word problems, comma disambiguation, which gives rise to a class-imbalanced binary learning problem, is a valuable instrument for transforming the problem statement into a structured representation. To address this problem, we applied the synthetic minority oversampling technique (SMOTE) and random forests to comma classification after jointly optimizing their hyperparameters. We propose a strict measure to evaluate the performance of deployed comma classification models on comma disambiguation in math word problems. To verify the effectiveness of random forest classifiers with SMOTE on comma disambiguation, we conducted two-stage experiments on two datasets with a collection of evaluation measures. Experimental results showed that random forest classifiers were significantly superior to baseline methods in Chinese comma disambiguation. The SMOTE algorithm with hyperparameter settings optimized for the categorical distribution of each dataset is preferable to SMOTE with its default values. For practitioners, we suggest that the hyperparameters of a classification model be optimized again after the parameter settings of SMOTE have been changed.
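SMOTE's core idea is to synthesize minority-class examples by interpolating between a minority point and one of its k nearest minority neighbours. A minimal pure-NumPy sketch follows; the toy 2-feature dataset and the values of k and n_synthetic are illustrative, and a real pipeline would use a tuned implementation such as imbalanced-learn's.

```python
import numpy as np

def smote(X_minority, n_synthetic, k=2, rng=None):
    """Generate synthetic minority samples by interpolating toward
    one of each seed point's k nearest minority-class neighbours."""
    rng = rng if rng is not None else np.random.default_rng(0)
    synthetic = []
    for _ in range(n_synthetic):
        i = rng.integers(len(X_minority))
        x = X_minority[i]
        d = np.linalg.norm(X_minority - x, axis=1)
        neighbours = np.argsort(d)[1:k + 1]       # exclude x itself
        x_nn = X_minority[rng.choice(neighbours)]
        gap = rng.random()                        # position along the segment
        synthetic.append(x + gap * (x_nn - x))
    return np.array(synthetic)

X_min = np.array([[1.0, 1.0], [1.2, 0.9], [0.8, 1.1]])
X_new = smote(X_min, n_synthetic=4)
print(X_new.shape)  # (4, 2)
```

The paper's practical advice maps onto this sketch directly: k (and the oversampling amount) interact with the classifier's hyperparameters, so changing them calls for re-tuning the random forest.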


2021 · pp. 1-12
Author(s): Manaal Faruqui, Dilek Hakkani-Tür

Abstract. As more users across the world interact with dialog agents in their daily life, there is a need for better speech understanding that calls for renewed attention to the dynamics between research in automatic speech recognition (ASR) and natural language understanding (NLU). We briefly review these research areas and lay out the current relationship between them. In light of the observations we make in this paper, we argue that (1) NLU should be cognizant of the presence of ASR models being used upstream in a dialog system’s pipeline, (2) ASR should be able to learn from errors found in NLU, (3) there is a need for end-to-end datasets that provide semantic annotations on spoken input, and (4) there should be stronger collaboration between the ASR and NLU research communities.
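One common way to act on point (1), making NLU aware of upstream ASR, is to augment NLU training text with simulated recognition errors. The sketch below does this with a hand-written confusion table; the confusion pairs are invented examples, not drawn from any real ASR system or from this paper.

```python
import random

# Hypothetical word-level ASR confusions (homophones / near-homophones).
asr_confusions = {"flight": ["fight", "light"], "two": ["to", "too"]}

def simulate_asr(utterance, error_rate=0.5, rng=None):
    """Replace confusable words with a plausible misrecognition
    at the given rate, leaving other words untouched."""
    rng = rng if rng is not None else random.Random(42)
    words = []
    for w in utterance.split():
        if w in asr_confusions and rng.random() < error_rate:
            words.append(rng.choice(asr_confusions[w]))
        else:
            words.append(w)
    return " ".join(words)

clean = "book two seats on the flight"
noisy = simulate_asr(clean, error_rate=1.0)
print(noisy)  # e.g. "book to seats on the fight"
```

Training an intent classifier on a mix of clean and noisy utterances like these is one inexpensive proxy for the end-to-end spoken datasets the authors call for in point (3).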

