Building Intelligent Multimodal Assistants Based on Logic Programming in the Meme Media Architecture

2007 ◽  
pp. 145-168
Author(s):  
Kimihito Ito

This chapter introduces a software architecture for building intelligent multimodal assistants. The architecture consists of three basic components: a meme media system, an inference system, and an embodied interface agent system that makes multimodal presentations available to users. In an experimental implementation of the architecture, the author uses three components as the basic framework: IntelligentPad as the meme media system, Prolog as the logic programming system, and the Multimodal Presentation Markup Language (MPML) for controlling the interface agent system. The experimental implementation shows how character agents are defined in a simple declarative manner using logic programming on meme media objects.
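A minimal, hypothetical Prolog sketch of this declarative style follows; the predicates slot_value/3 and present_act/2, the pad name, and the act terms are illustrative assumptions, not the chapter's actual interface.

    % Slot values exported by an IntelligentPad meme media object
    % (hypothetical example data).
    slot_value(weather_pad, city, sapporo).
    slot_value(weather_pad, forecast, snow).

    % Declarative rule: if a pad's forecast slot holds snow, the character
    % agent points at that pad and speaks a warning (to be rendered via MPML).
    present_act(Pad, [point_at(Pad), speak('Heavy snow is expected.')]) :-
        slot_value(Pad, forecast, snow).

Querying present_act(Pad, Acts) would then enumerate the presentation acts that the interface agent system is asked to render.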

2015 ◽  
Vol 16 (2) ◽  
pp. 189-235 ◽  
Author(s):  
DANIELA INCLEZAN ◽  
MICHAEL GELFOND

The paper introduces a new modular action language, $\mathcal{ALM}$, and illustrates the methodology of its use. It is based on the approach of Gelfond and Lifschitz (1993, Journal of Logic Programming 17, 2–4, 301–321; 1998, Electronic Transactions on AI 3, 16, 193–210) in which a high-level action language is used as a front end for a logic programming system description. The resulting logic programming representation is used to perform various computational tasks. The methodology based on existing action languages works well for small and even medium size systems, but is not meant to deal with larger systems that require structuring of knowledge. $\mathcal{ALM}$ is meant to remedy this problem. Structuring of knowledge in $\mathcal{ALM}$ is supported by the concepts of module (a formal description of a specific piece of knowledge packaged as a unit), module hierarchy, and library, and by the division of a system description of $\mathcal{ALM}$ into two parts: theory and structure. A theory consists of one or more modules with a common theme, possibly organized into a module hierarchy based on a dependency relation. It contains declarations of sorts, attributes, and properties of the domain together with axioms describing them. Structures are used to describe the domain's objects. These features, together with the means for defining classes of a domain as special cases of previously defined ones, facilitate the stepwise development, testing, and readability of a knowledge base, as well as the creation of knowledge representation libraries.
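As an informal illustration of the front-end methodology described above, the ASP-style sketch below (clingo syntax) shows the kind of logic program a high-level action description might be translated into; the fluent, action, and predicate names (holds/2, occurs/2, move) are illustrative assumptions, not actual $\mathcal{ALM}$ output.

    % Hedged sketch of a translated action description (not actual ALM output).
    #const horizon = 10.
    step(0..horizon).

    % Direct effect: if a move of agent A to destination D occurs at step T,
    % A is located at D at the next step.
    holds(loc_in(A, D), T+1) :- occurs(move(A, D), T), step(T).

    % Inertia: a location fluent persists unless it is known to change.
    holds(loc_in(A, L), T+1) :-
        holds(loc_in(A, L), T), not -holds(loc_in(A, L), T+1), step(T).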


2011 ◽  
Vol 12 (1-2) ◽  
pp. 127-156 ◽  
Author(s):  
JOACHIM SCHIMPF ◽  
KISH SHEN

ECLiPSe is a Prolog-based programming system, aimed at the development and deployment of constraint programming applications. It is also used for teaching most aspects of combinatorial problem solving, for example, problem modelling, constraint programming, mathematical programming and search techniques. It uses an extended Prolog as its high-level modelling and control language, complemented by several constraint solver libraries, interfaces to third-party solvers, an integrated development environment and interfaces for embedding into host environments. This paper discusses language extensions, implementation aspects, components, and tools that we consider relevant on the way from Logic Programming to Constraint Logic Programming.
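To give a flavour of the modelling style described above, here is a small constraint model in ECLiPSe's extended Prolog using the ic interval solver library (the classic SEND + MORE = MONEY puzzle); it reflects general ECLiPSe usage rather than an example taken from the paper.

    % Cryptarithmetic puzzle modelled with the ic constraint solver library.
    :- lib(ic).

    sendmore(Digits) :-
        Digits = [S, E, N, D, M, O, R, Y],
        Digits :: 0..9,                 % domain constraints
        alldifferent(Digits),           % all letters denote distinct digits
        S #\= 0, M #\= 0,               % no leading zeros
                  1000*S + 100*E + 10*N + D
                + 1000*M + 100*O + 10*R + E
        #= 10000*M + 1000*O + 100*N + 10*E + Y,
        labeling(Digits).               % search for a solution

Calling sendmore(Digits) propagates the constraints and then searches, binding Digits to the unique solution.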


2021 ◽  
pp. 218-238
Author(s):  
Richard Evans

This paper describes a neuro-symbolic system for distilling interpretable logical theories out of streams of raw, unprocessed sensory experience. We combine a binary neural network, that maps raw sensory input to concepts, with an inductive logic programming system, that combines concepts into declarative rules. Both the inductive logic programming system and the binary neural network are encoded as logic programs, so the weights of the neural network and the declarative rules of the theory can be solved jointly as a single SAT problem. This way, we are able to jointly learn how to perceive (mapping raw sensory information to concepts) and apperceive (combining concepts into declarative rules). We apply our system, the Apperception Engine, to the Sokoban domain. Given a sequence of noisy pixel images, the system has to construct objects that persist over time, extract attributes that change over time, and induce rules explaining how the attributes change over time. We compare our system with a neural network baseline, and show that the baseline is significantly outperformed by the Apperception Engine.
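As a rough illustration of what combining concepts into declarative rules can look like, the ASP-flavoured sketch below shows the kind of interpretable update and persistence rules one might induce for Sokoban-like dynamics; the Apperception Engine's actual rule language, predicates, and learning procedure differ.

    % Hedged illustration only; not the Apperception Engine's rule language.
    % Causal rule: a box pushed east moves one cell to the east.
    at(box(B), X+1, Y, T+1) :- at(box(B), X, Y, T), pushed(B, east, T).

    % Frame rule: a box that is not pushed keeps its position.
    pushed_somehow(B, T) :- pushed(B, D, T).
    at(box(B), X, Y, T+1) :- at(box(B), X, Y, T), not pushed_somehow(B, T), step(T).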

