A Modular Architecture for Multi-Purpose Conversational System Development

Author(s):
Adrián Artola, Zoraida Callejas, David Griol

As the complexity of intelligent environments grows, there is a need for more sophisticated and flexible interfaces. Conversational systems constitute a very interesting alternative to ease the users’ workload when interacting with such environments, since they allow users to operate them in natural language. A number of commercial toolkits for their implementation have appeared recently. However, these are usually tailored to specific implementations of the processes involved in understanding the user’s utterance and generating the system response. In this paper, we present a modular architecture for developing conversational systems by means of a plug-and-play paradigm that allows developers’ specific implementations and commercial utilities to be integrated under different configurations, adapted to the specific requirements of each system.
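To make the plug-and-play idea concrete, here is a minimal Python sketch (not the authors' actual architecture; all class and method names are hypothetical) in which each processing stage is an interchangeable module behind a common interface, and a system configuration is just an ordered list of modules:

```python
from abc import ABC, abstractmethod

class Module(ABC):
    """A pluggable processing stage; any implementation (custom or
    commercial) can be swapped in as long as it honors this interface."""
    @abstractmethod
    def process(self, data: dict) -> dict: ...

class KeywordNLU(Module):
    def process(self, data: dict) -> dict:
        # Toy natural-language understanding: map keywords to an intent.
        text = data["utterance"].lower()
        data["intent"] = "lights_on" if "light" in text else "unknown"
        return data

class RuleDM(Module):
    def process(self, data: dict) -> dict:
        # Toy dialog management: choose a system action for the intent.
        data["action"] = {"lights_on": "turn_on_lights"}.get(data["intent"], "clarify")
        return data

class TemplateNLG(Module):
    def process(self, data: dict) -> dict:
        # Toy response generation from the chosen action.
        templates = {"turn_on_lights": "Turning the lights on.",
                     "clarify": "Sorry, could you rephrase that?"}
        data["response"] = templates[data["action"]]
        return data

class Pipeline:
    """A configuration is just an ordered list of interchangeable modules."""
    def __init__(self, modules: list[Module]):
        self.modules = modules

    def run(self, utterance: str) -> str:
        data = {"utterance": utterance}
        for m in self.modules:
            data = m.process(data)
        return data["response"]

if __name__ == "__main__":
    bot = Pipeline([KeywordNLU(), RuleDM(), TemplateNLG()])
    print(bot.run("Please switch the light on"))  # Turning the lights on.
```

Swapping `KeywordNLU` for a commercial NLU wrapper, or reordering modules, changes the configuration without touching the rest of the pipeline.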

Author(s):
D. Kiritsis, Michel Porchet, L. Boutzev, I. Zic, P. Sourdin

Abstract In this paper we present our experience using two different expert system development environments for a Wire-EDM CAD/CAM knowledge-based application. The two systems follow two different AI approaches: one is based on constraint propagation theory and provides a natural-language-oriented programming environment, while the other is a production rule system with backward/forward chaining mechanisms and a conventional-like programming style. Our experience showed that the natural language programming style offers an easier and more productive environment for developing knowledge-based CAD/CAM systems.
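For readers unfamiliar with the second approach, the following minimal Python sketch illustrates forward chaining in a production-rule system; the EDM-flavored rules and facts are invented for illustration, not taken from the paper:

```python
# Minimal forward-chaining production-rule sketch (illustrative rules only):
# each rule fires when all of its condition facts hold, asserting a new fact.
rules = [
    ({"material_is_hard", "thick_workpiece"}, "use_low_wire_speed"),
    ({"use_low_wire_speed"}, "increase_flushing_pressure"),
]

def forward_chain(facts: set) -> set:
    """Repeatedly fire applicable rules until no new facts can be derived."""
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"material_is_hard", "thick_workpiece"}))
# {'material_is_hard', 'thick_workpiece', 'use_low_wire_speed',
#  'increase_flushing_pressure'}
```

Backward chaining would run the same rules in reverse, starting from a goal fact and searching for conditions that establish it.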


2013, Vol. 21(2), pp. 167-200
Author(s):
Sebastian Padó, Tae-Gil Noh, Asher Stern, Rui Wang, Roberto Zanoli

Abstract A key challenge at the core of many Natural Language Processing (NLP) tasks is the ability to determine which conclusions can be inferred from a given natural language text. This problem, called the Recognition of Textual Entailment (RTE), has initiated the development of a range of algorithms, methods, and technologies. Unfortunately, research on Textual Entailment (TE), like semantics research more generally, is fragmented into studies focussing on various aspects of semantics such as world knowledge, lexical and syntactic relations, or more specialized kinds of inference. This fragmentation has problematic practical consequences. Notably, interoperability among the existing RTE systems is poor, and reuse of resources and algorithms is mostly infeasible. This also makes systematic evaluations very difficult to carry out. Finally, textual entailment presents a wide array of approaches to potential end users with little guidance on which to pick. Our contribution to this situation is the novel EXCITEMENT architecture, which was developed to enable and encourage the consolidation of methods and resources in the textual entailment area. It decomposes RTE into components with strongly typed interfaces. We specify (a) a modular linguistic analysis pipeline and (b) a decomposition of the ‘core’ RTE methods into top-level algorithms and subcomponents. We identify four major subcomponent types, including knowledge bases and alignment methods. The architecture was developed with a focus on generality, supporting all major approaches to RTE and encouraging language independence. We illustrate the feasibility of the architecture by constructing mappings of major existing systems onto the architecture. The practical implementation of this architecture forms the EXCITEMENT open platform. It is a suite of textual entailment algorithms and components which contains the three systems named above, including linguistic-analysis pipelines for three languages (English, German, and Italian), and comprises a number of linguistic resources. By addressing the problems outlined above, the platform provides a comprehensive and flexible basis for research and experimentation in textual entailment and is available as open source software under the GNU General Public License.
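The EXCITEMENT platform itself is a Java code base; purely to illustrate the idea of decomposing RTE into components with strongly typed interfaces, here is a hypothetical Python sketch (all names invented, not the platform's actual API) with a pluggable analysis pipeline and a toy word-overlap entailment algorithm:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class EntailmentDecision:
    label: str        # "ENTAILMENT" or "NONENTAILMENT"
    confidence: float

class AnalysisPipeline(ABC):
    """Linguistic preprocessing (tokenization, parsing, ...)."""
    @abstractmethod
    def annotate(self, text: str) -> list: ...

class EntailmentAlgorithm(ABC):
    """Top-level RTE algorithm; subcomponents such as knowledge bases
    and aligners would hang off concrete implementations."""
    @abstractmethod
    def decide(self, text: list, hypothesis: list) -> EntailmentDecision: ...

class WhitespacePipeline(AnalysisPipeline):
    def annotate(self, text: str) -> list:
        return text.lower().split()

class OverlapEDA(EntailmentAlgorithm):
    """Toy baseline: entail if most hypothesis tokens occur in the text."""
    def decide(self, text, hypothesis):
        overlap = len(set(hypothesis) & set(text)) / max(len(set(hypothesis)), 1)
        label = "ENTAILMENT" if overlap >= 0.8 else "NONENTAILMENT"
        return EntailmentDecision(label, overlap)

pipe, eda = WhitespacePipeline(), OverlapEDA()
print(eda.decide(pipe.annotate("A man is playing a guitar on stage"),
                 pipe.annotate("A man is playing guitar")))
# EntailmentDecision(label='ENTAILMENT', confidence=1.0)
```

Because both sides of each interface are typed, a different pipeline (say, for German) or a stronger entailment algorithm can be substituted without changing the calling code, which is the interoperability property the abstract emphasizes.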


2021, Vol. 39(4), pp. 1-29
Author(s):
Pengjie Ren, Zhumin Chen, Zhaochun Ren, Evangelos Kanoulas, Christof Monz, ...

In this article, we address the problem of answering complex information needs by conducting conversations with search engines, in the sense that users can express their queries in natural language and directly receive the information they need from a short system response in a conversational manner. Recently, there have been some attempts towards a similar goal, e.g., studies on Conversational Agents (CAs) and Conversational Search (CS). However, they either do not address complex information needs in search scenarios or they are limited to the development of conceptual frameworks and/or laboratory-based user studies. We pursue two goals in this article: (1) the creation of a suitable dataset, the Search as a Conversation (SaaC) dataset, for the development of pipelines for conversations with search engines, and (2) the development of a state-of-the-art pipeline for conversations with search engines, Conversations with Search Engines (CaSE), using this dataset. SaaC is built on a multi-turn conversational search dataset, where we further employ workers from a crowdsourcing platform to summarize each relevant passage into a short, conversational response. CaSE enhances the state of the art by introducing a supporting token identification module and a prior-aware pointer generator, which enable us to generate more accurate responses. We carry out experiments showing that CaSE outperforms strong baselines. We also conduct extensive analyses of the SaaC dataset to show where there is room for further improvement beyond CaSE. Finally, we release the SaaC dataset and the code for CaSE and all models used for comparison to facilitate future research on this topic.
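The authors' released code should be consulted for the actual model; the schematic PyTorch sketch below only illustrates the general idea of a pointer-generator whose copy distribution is re-weighted by a prior (here, a stand-in probability that each source token is "supporting"), with all names and values invented:

```python
import torch

def pointer_generator_step(p_vocab, attn, src_ids, prior, p_gen, vocab_size):
    """One decoding step of a pointer-generator with a prior over source tokens.

    p_vocab : (vocab_size,) generation distribution from the decoder
    attn    : (src_len,)   attention weights over source tokens
    src_ids : (src_len,)   vocabulary ids of the source tokens
    prior   : (src_len,)   e.g. probability each source token is "supporting"
    p_gen   : scalar in [0, 1], mixing weight between generating and copying
    """
    # Re-weight the copy distribution by the prior and renormalize.
    copy_attn = attn * prior
    copy_attn = copy_attn / copy_attn.sum()
    # Scatter copy probabilities into vocabulary space (repeated ids add up).
    p_copy = torch.zeros(vocab_size)
    p_copy.scatter_add_(0, src_ids, copy_attn)
    # Mix the generation and copy distributions.
    return p_gen * p_vocab + (1 - p_gen) * p_copy

# Toy example: 3 source tokens, vocabulary of 10 words.
p_vocab = torch.full((10,), 0.1)
out = pointer_generator_step(p_vocab,
                             attn=torch.tensor([0.5, 0.3, 0.2]),
                             src_ids=torch.tensor([4, 7, 4]),
                             prior=torch.tensor([0.9, 0.1, 0.9]),
                             p_gen=0.6, vocab_size=10)
print(out.sum())  # tensor(1.0000): the result is still a valid distribution
```

The effect of the prior is to push copy probability mass toward source tokens judged likely to support the answer, which is one plausible reading of how supporting-token identification could steer generation.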


Author(s):
María Virginia Mauco, María Carmen Leonardi, Daniel Riesco

Formal methods have come into use for the construction of real systems, as they help to increase software quality and reliability, and even though their industrial use is still limited, it has been steadily growing (Bowen & Hinchey, 2006; van Lamsweerde, 2000). When used early in the software development process, they can reveal ambiguities, incompleteness, inconsistencies, errors, or misunderstandings that otherwise might only be discovered during costly testing and debugging phases. A well-known formal method is the RAISE Method (George et al., 1995), which has been used in real developments (Dang Van, George, Janowski, & Moore, 2002). One tangible product of applying a formal method is a formal specification, which serves as a contract, a valuable piece of documentation, and a means of communication among stakeholders and software engineers. Formal specifications may be used throughout the software lifecycle, and they may be manipulated by automated tools for a wide variety of purposes such as model checking, deductive verification, animation, test data generation, formal reuse of components, and refinement from specification to implementation (van Lamsweerde, 2000).

However, one problem with formal specifications is that they are hard to master and not easily comprehensible to stakeholders, or even to non-specialists in formal specification. This is particularly inconvenient during the first stages of system development, when interaction with stakeholders is very important. In practice, analysis often starts from interviews with the stakeholders, and this source of information is heavily based on natural language, as stakeholders must be able to read and understand the results of requirements capture. Specifications, then, are never formal at first, and a good formal approach should use both informal and formal techniques (Bjorner, 2000). The requirements baseline (Leite, Hadad, Doorn, & Kaplan, 2000), for example, is a technique proposed to formalize requirements elicitation and modeling. It includes two natural language models, the Language Extended Lexicon (LEL) and the scenario model, which ease and encourage stakeholders’ active participation. However, specifying requirements in natural language has some drawbacks related to natural language imprecision.

Based on these considerations, we previously proposed a technique to derive an initial formal specification in the RAISE Specification Language (RSL) from the LEL and the scenario model (Mauco, 2004; Mauco & Riesco, 2005a; Mauco, Riesco, & George, 2004). The technique provides a set of manual heuristics to derive types and functions and to structure them into modules, taking into account the structured description of requirements provided by the LEL and the scenario model. But for systems of considerable size, this manual derivation is tedious, time consuming, and potentially error-prone. Besides, maintaining consistency between the LEL and scenarios and the RSL specification is a critical problem, as is tracking traceability relationships.

In this article, we present an enhancement to this technique, which consists of the RSL-based formalization of some of the heuristics for deriving RSL types from the LEL. The aim of this formalization is to serve as the basis for a semiautomatic strategy that could be implemented by a tool. More concretely, we describe a set of RSL-based derivation rules that transform the information contained in the LEL into abstract and concrete RSL types. These derivation rules are a useful starting point for dealing with the large amount of requirements information modeled in the LEL, as they provide a systematic and consistent way of defining a tentative set of RSL types. We also present some examples of the application of the rules and discuss advantages and disadvantages of the proposed strategy.
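As an illustration only (a simplified stand-in, not the paper's actual heuristics), the Python sketch below shows the flavor of such a derivation rule: a plain LEL symbol maps to an abstract RSL sort declaration, while a symbol whose notion enumerates a fixed set of values maps to a concrete RSL variant type:

```python
# Illustrative sketch of one LEL-to-RSL derivation rule (a simplified
# stand-in for the paper's heuristics): a LEL symbol with no further
# structure becomes an abstract RSL type, while a symbol whose notion
# enumerates a fixed set of values becomes a concrete RSL variant type.
def derive_rsl_type(symbol: str, values=None) -> str:
    name = symbol.strip().capitalize().replace(" ", "_")
    if values:  # e.g. a hypothetical LEL symbol "account state"
        variants = " | ".join(v.strip().replace(" ", "_") for v in values)
        return f"type {name} == {variants}"
    return f"type {name}"  # abstract (sort) declaration

print(derive_rsl_type("client"))                          # type Client
print(derive_rsl_type("account state", ["open", "closed"]))
# type Account_state == open | closed
```

A tool implementing the full rule set would additionally record traceability links from each generated RSL declaration back to the originating LEL entry, addressing the consistency problem mentioned above.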


1987, Vol. 31(3), pp. 363-367
Author(s):
Gerald P. Chubb, Noreen Stodolsky, Warren D. Fleming, John A. Hassoun

The Saturation of Tactical Aviator Load Limits (STALL) model defines the saturation point as the intersection of the asymptotic high-load and low-load limits. In a closed queuing system consisting of M homogeneous demand generators, it has been shown that response time becomes asymptotically linear as M increases. This provides a quantitative basis for specifying the saturation point if one knows both the arrival rate and the service rate (the inverse of task duration). Early in system development, one can typically estimate arrival rates based on mission analyses, but task durations cannot be estimated until procedures have been defined, based on the system design. At this stage, it is useful to determine the design requirements: given the imposed load, how fast must servicing be to keep up with demand? Logically, service rates must exceed arrival rates, but the question is: by how much? Two related criteria can apply: the number of backlogged demands, or the system response time. STALL computes statistics for both. Preliminary model validation has been accomplished, using simulation runs to study the model's robustness to systematic violations of its assumptions. Predictive validity depends on being able to demonstrate that the assumptions hold in a particular application; the simulations demonstrate what can happen when that match is not achieved. These studies showed that the predictions are typically most robust for an over-saturated system, and that the model is least sensitive to violations of the servicing assumptions. Furthermore, the assumption of homogeneous demand generators can be relaxed through planned model extensions.
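As a worked illustration of the asymptote-intersection idea (with invented rates, using standard operational-analysis bounds for closed systems rather than STALL itself): the low-load asymptote of response time is the service time 1/μ, the high-load asymptote follows the response-time law M/μ − 1/λ, and their intersection gives a saturation point of M* = 1 + μ/λ generators:

```python
# Sketch of the asymptote-intersection idea for a closed queuing system
# with M homogeneous demand generators (illustrative rates, not the paper's):
#   low-load asymptote   R(M) ~ 1/mu           (response time = service time)
#   high-load asymptote  R(M) ~ M/mu - 1/lam   (response-time law at saturation)
# Setting the two equal gives the saturation point M* = 1 + mu/lam.
def saturation_point(arrival_rate: float, service_rate: float) -> float:
    return 1 + service_rate / arrival_rate

lam, mu = 0.5, 2.0   # demands/sec per idle generator, services/sec (invented)
m_star = saturation_point(lam, mu)
print(f"Saturation point M* = {m_star:.1f} generators")  # M* = 5.0

for m in (2, 5, 10):
    r = max(1 / mu, m / mu - 1 / lam)  # piecewise asymptotic lower bound
    print(f"M = {m:2d}: asymptotic response-time bound >= {r:.2f} s")
```

Below M* the bound is flat at the service time; above M* it grows linearly in M, which matches the asymptotically linear behavior the abstract describes.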

