YAG: A TEMPLATE-BASED TEXT REALIZATION SYSTEM FOR DIALOG

Author(s):  
Songsak Channarukul ◽  
Susan W. McRoy ◽  
Syed S. Ali

We present a natural language realization component, called YAG, that is suitable for intelligent tutoring systems that use dialog. Dialog imposes unique requirements on a generation component, namely: dialog systems must interact in real-time; they must be capable of producing fragmentary output; and they may be re-deployed in a number of different domains. Our approach to real-time natural language realization combines a declarative, template-based approach for the representation of text structure with knowledge-based methods for representing semantic content. Possible text structures are defined in a declarative language that is easy to understand, maintain, and re-use. A dialog system can use YAG to realize text structures by specifying a template and content from its knowledge base. Content can be specified in one of two ways: (1) as a sequence of propositions along with some control features; or (2) as a set of feature-value pairs. YAG's template realization algorithm realizes text without any search (in contrast to systems that must find rules that unify with a feature structure).
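As a rough illustration of the second content-specification style described above, the sketch below fills a declarative template directly from feature-value pairs, with no search over grammar rules. The template syntax, slot names, and `realize` function are hypothetical placeholders for illustration, not YAG's actual template language or API.

```python
# A minimal sketch, not YAG's actual template language or API: a declarative
# template is paired with content given as feature-value pairs and is filled
# by direct slot substitution, with no search over grammar rules.

# Hypothetical templates; slot names in braces are filled from the content.
TEMPLATES = {
    "assert-display": "The {device} is displaying {value}.",
    "ask-display": "What is the {device} displaying?",
}

def realize(template_name: str, content: dict) -> str:
    """Realize a text string by filling the named template's slots."""
    return TEMPLATES[template_name].format(**content)

if __name__ == "__main__":
    # Content supplied by the dialog system as feature-value pairs.
    print(realize("assert-display", {"device": "LED", "value": "the same thing"}))
    print(realize("ask-display", {"device": "LED"}))
```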

Author(s):  
Ronnie W. Smith ◽  
D. Richard Hipp

Every natural language parser will sometimes misunderstand its input. Misunderstandings can arise from speech recognition errors or inadequacies in the language grammar, or they may result from an input that is ungrammatical or ambiguous. Whatever their cause, misunderstandings can jeopardize the success of the larger system of which the parser is a component. For this reason, it is important to reduce the number of misunderstandings to a minimum. In a dialog system, it is possible to reduce the number of misunderstandings by requiring the user to verify each utterance. Some speech dialog systems implement verification by requiring the user to speak every utterance twice, or to confirm a word-by-word readback of every utterance. Such verification is effective at reducing errors that result from word misrecognitions, but does nothing to abate misunderstandings that result from other causes. Furthermore, verification of all utterances can be needlessly wearisome to the user, especially if the system is working well.

A superior approach is to have the spoken language system verify the deduced meaning of an input only under circumstances where the accuracy of the deduced meaning is seriously in doubt, or correct understanding is essential to the success of the dialog. The verification is accomplished through the use of a verification subdialog—a short sequence of conversational exchanges intended to confirm or reject the hypothesized meaning. The following example of a verification subdialog will suffice to illustrate the idea.

. . .
computer: What is the LED displaying?
user: The same thing.
computer: Did you mean to say that the LED is displaying the same thing?
user: Yes.
. . .

As will be further seen below, selective verification via a subdialog results in an unintrusive, human-like exchange between user and machine. A recent enhancement to the Circuit Fix-it Shop dialog system is a subsystem that uses a verification subdialog to verify the meaning of the user's utterance only when the meaning is in doubt or when accuracy is critical for the success of the dialog. Notable features of this new verification subsystem include the following.
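The selective-verification policy described above can be sketched as a simple decision rule: open a verification subdialog only when the hypothesized meaning is in doubt or when accuracy is critical. The following is a minimal sketch under assumed names, not the Circuit Fix-it Shop implementation; the `Hypothesis` fields and the confidence threshold are illustrative.

```python
# A minimal sketch (not the Circuit Fix-it Shop code) of selective verification:
# a hypothesized meaning triggers a verification subdialog only when its
# confidence is low or when getting it right is critical to the dialog.

from dataclasses import dataclass

@dataclass
class Hypothesis:
    meaning: str        # deduced meaning of the user's utterance
    confidence: float   # e.g. from the recognizer/parser, in [0, 1]
    critical: bool      # does dialog success hinge on this meaning?

CONFIDENCE_THRESHOLD = 0.7  # assumed value, for illustration only

def needs_verification(h: Hypothesis) -> bool:
    """Verify only when the meaning is in doubt or accuracy is essential."""
    return h.confidence < CONFIDENCE_THRESHOLD or h.critical

def verification_prompt(h: Hypothesis) -> str:
    """Open the verification subdialog by echoing the hypothesized meaning."""
    return f"Did you mean to say that {h.meaning}?"

if __name__ == "__main__":
    h = Hypothesis(meaning="the LED is displaying the same thing",
                   confidence=0.55, critical=False)
    if needs_verification(h):
        print(verification_prompt(h))  # user then confirms or rejects
```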


Electronics ◽  
2021 ◽  
Vol 10 (15) ◽  
pp. 1813
Author(s):  
Krzysztof Wołk

We live in a time when dialogue systems are becoming a very popular tool. It is estimated that in 2021 more than 80% of first-line customer communication will be handled by chatbots. They are entering not only the retail market but also various other industries; for example, they are used for medical interviews, information gathering, and the preliminary assessment and classification of problems. Unfortunately, when these systems work incorrectly, the result is user dissatisfaction. Such systems usually allow the user to reach a human consultant with a special command, but that defeats their purpose: the dialog system should provide a good, uninterrupted and fluid experience and not reveal that it is an artificial creation. Analysing the sentiment of the entire dialogue in real time can provide a solution to this problem. In our study, we focus on machine learning methods for analysing the sentiment of dialogues in English and in Polish, a morphologically complex language with limited training resources. We analyse the methods both directly and with a machine translator as an intermediary, thereby comparing models based on limited Polish resources against models that exploit much larger English resources through machine-translated texts. We obtain over 89% accuracy using BERT-based models. We make recommendations in this regard, also taking into account the cost of implementing and maintaining such a system.
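The two set-ups compared above can be sketched as follows: scoring sentiment directly with a BERT-based classifier, versus first machine-translating Polish turns into English and scoring the translation. This is a sketch, not the authors' code, and the Hugging Face checkpoints named here are illustrative assumptions rather than the models used in the paper.

```python
# A minimal sketch of sentiment scoring for dialogue turns, directly or via
# machine translation as an intermediary for the low-resource language.
# Model names are illustrative Hugging Face checkpoints, not the paper's.

from transformers import pipeline

# English sentiment classifier (pipeline's default model).
sentiment_en = pipeline("sentiment-analysis")

# Machine translation used as an intermediary for Polish input.
translate_pl_en = pipeline("translation", model="Helsinki-NLP/opus-mt-pl-en")

def dialogue_sentiment(turns: list, polish: bool = False) -> list:
    """Score each dialogue turn; translate Polish turns to English first."""
    if polish:
        turns = [t["translation_text"] for t in translate_pl_en(turns)]
    return sentiment_en(turns)

if __name__ == "__main__":
    print(dialogue_sentiment(["The chatbot answered my question right away."]))
    print(dialogue_sentiment(["Nic nie rozumiem, to nie działa."], polish=True))
```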


Author(s):  
Ronnie W. Smith ◽  
D. Richard Hipp

This book has presented a computational model for integrated dialog processing. The primary contributions of this research follow.

• A mechanism (the Missing Axiom Theory) for integrating subtheories that each address an independently studied subproblem of dialog processing (i.e. interactive task processing, the role of language, user modeling, and exploiting dialog expectation for contextual interpretation and plan recognition).
• A computational theory for variable initiative behavior that enables a system to vary its responses at any given moment according to its level of initiative.
• Detailed experimental results from the usage of a spoken natural language dialog system that illustrate the viability of the theory and identify behavioral differences of users as a function of their experience and initiative level.

This chapter provides a concluding critique, which identifies areas of ongoing work and offers some advice for readers interested in developing their own spoken natural language dialog systems. This section describes important issues we did not successfully address in this research because either (1) we studied the problem but do not as yet have a satisfactory answer; or (2) it was not necessary to investigate the problem for the current system. Regardless of the reason, incorporating solutions to these problems is needed to strengthen the overall model.

In section 4.7.3 we have already discussed the difficulties in determining when and how to change the level of initiative during a dialog as well as the problems in maintaining coherence when such a change occurs. Ongoing work in this area is being conducted by Guinn [Gui93]. His model for setting the initiative is based on the idea of “evaluating which participant is better capable of directing the solution of a goal by an examination of the user models of the two participants.” He provides a formula for estimating the competence of a dialog participant based on a probabilistic model of the participant’s knowledge about the domain. Using this formula, Guinn has conducted extensive experimental simulations testing four different methods of selecting initiative.


Author(s):  
Ronnie W. Smith ◽  
D. Richard Hipp

As spoken natural language dialog systems technology continues to make great strides, numerous issues regarding dialog processing still need to be resolved. This book presents an exciting new dialog processing architecture that allows for a number of behaviors required for effective human-machine interaction, including: problem-solving to help the user carry out a task, coherent subdialog movement during the problem-solving process, user model usage, expectation usage for contextual interpretation and error correction, and variable initiative behavior for interacting with users of differing expertise. The book also details how different dialog processing problems can be handled simultaneously, and provides instructions and in-depth results from pertinent experiments. Researchers and professionals in natural language systems will find this important new book an invaluable addition to their libraries.


2021 ◽  
Vol 54 (2) ◽  
pp. 1-37
Author(s):  
Dhivya Chandrasekaran ◽  
Vijay Mago

Estimating the semantic similarity between text data is one of the challenging and open research problems in the field of Natural Language Processing (NLP). The versatility of natural language makes it difficult to define rule-based methods for determining semantic similarity measures. To address this issue, various semantic similarity methods have been proposed over the years. This survey article traces the evolution of such methods, from traditional NLP techniques such as kernel-based methods to the most recent work on transformer-based models, categorizing them by their underlying principles as knowledge-based, corpus-based, deep neural network–based, and hybrid methods. Discussing the strengths and weaknesses of each method, this survey provides a comprehensive view of existing systems so that new researchers can experiment and develop innovative ideas to address the issue of semantic similarity.
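As a small illustration of the deep neural network–based family surveyed above, the sketch below embeds two sentences with a transformer encoder and takes their semantic similarity to be the cosine of the embedding vectors. The checkpoint name is an assumed sentence-transformers model chosen for illustration, not one prescribed by the survey.

```python
# A minimal sketch of embedding-based semantic similarity: encode sentences
# with a transformer model and compare them by cosine similarity.

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative checkpoint

def semantic_similarity(a: str, b: str) -> float:
    """Cosine similarity between transformer sentence embeddings."""
    va, vb = model.encode([a, b])
    return float(np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb)))

if __name__ == "__main__":
    print(semantic_similarity("The LED is blinking.", "The display is flashing."))
    print(semantic_similarity("The LED is blinking.", "The weather is sunny."))
```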


Author(s):  
D. Kiritsis ◽  
Michel Porchet ◽  
L. Boutzev ◽  
I. Zic ◽  
P. Sourdin

Abstract In this paper we present our experience using two different expert system development environments for a Wire-EDM CAD/CAM knowledge-based application. The two systems follow different AI approaches: one is based on constraint propagation theory and provides a natural-language-oriented programming environment, while the other is a production rule system with backward and forward chaining mechanisms and a conventional programming style. Our experience showed that the natural language programming style offers an easier and more productive environment for the development of knowledge-based CAD/CAM systems.
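For readers unfamiliar with the second approach mentioned above, the following is a minimal sketch of a production rule system driven by backward chaining. It is not either of the environments used in the paper, and the rules and facts are illustrative Wire-EDM-flavoured placeholders.

```python
# A minimal backward-chaining sketch: prove a goal by recursively proving
# the conditions of the rule that concludes it, falling back on known facts.

RULES = {
    # conclusion: list of conditions that must all hold
    "use_fine_wire": ["small_corner_radius", "thin_workpiece"],
    "small_corner_radius": ["radius_below_0_2mm"],
}

FACTS = {"radius_below_0_2mm", "thin_workpiece"}

def backward_chain(goal: str, facts: set, rules: dict) -> bool:
    """Return True if the goal is a known fact or all its rule's conditions hold."""
    if goal in facts:
        return True
    conditions = rules.get(goal)
    if conditions is None:
        return False
    return all(backward_chain(c, facts, rules) for c in conditions)

if __name__ == "__main__":
    print(backward_chain("use_fine_wire", FACTS, RULES))  # True
```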

