Visual Languages for Interactive Computing
Latest Publications


TOTAL DOCUMENTS: 22 (FIVE YEARS: 0)
H-INDEX: 2 (FIVE YEARS: 0)
Published By: IGI Global
ISBN: 9781599045344, 9781599045368

Author(s):  
Marco Padula ◽  
Amanda Reggiori

This chapter is intended to question what usability is, or should be, in the field of computer science. We focus on the design of information systems understood as systems that enable large virtual communities to access information and communications: systems aimed at supporting the working activities of a restricted team, but also at offering services accessible and usable from the perspective of global digital inclusion. Rather than proposing a single viewpoint on usability, we examine the many concepts, aspects, and potentialities that must be considered today to refine the idea of usability and to design suitable systems. Such systems are used by a community of users in their complex working activity to process material or information, and to modify not only raw material but also the working environment and methods. This requires considering them as tools in a social context of use, one that has expectations of technological progress.


Author(s):  
Eduardo Costa ◽  
Alexandre Grings ◽  
Marcus Vinicius dos Santos

Many people argue that Visual Programming languages are self-documenting. This article points out that there is no such thing as a self-documenting language. Moreover, many popular methods used to document programs written in other languages do not suit Visual Languages perfectly and need some tailoring. Therefore, the authors propose a visual adaptation of the dataflow method of documentation. They also present versions of instantiated documentation and denotational semantics applied to visual languages. Finally, they present a complete Prolog-based example of documentation.


Author(s):  
Alessandro Campi

This chapter describes a visual framework, called XQBE, that covers the most important aspects of XML data management, spanning the visualization of XML documents, the formulation of queries, the representation and specification of document schemata, the definition of integrity constraints, the formulation of updates, and the expression of reactive behaviors in response to data modifications. All these features are strongly unified by a common visual abstraction and a few recurrent paradigms, so as to provide a homogeneous and comprehensive environment that allows even users without advanced programming skills to deal with nontrivial XML data management and transformation tasks. The ambiguity inherent in any visual representation of richly expressive languages required a considerable effort of formalization in the semantics of XQBE, which eventually led to a solution with major advantages in terms of intuitiveness: the unique (and unambiguous) effect of a statement is the one the user would expect.
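XQBE itself is a visual notation, but the kind of task it targets can be grounded with a small textual counterpart. The following sketch, using Python's standard xml.etree.ElementTree on an invented bibliography document, shows the sort of query-and-update operation that XQBE expresses through annotated tree patterns rather than code; the element names and the update are hypothetical and not taken from the chapter.

```python
# Hypothetical illustration of an XML query-and-update task of the kind XQBE
# expresses visually; element names ("book", "title", "year") are invented.
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<bibliography>
  <book year="2006"><title>Visual Languages</title><author>A. Campi</author></book>
  <book year="1999"><title>XML Basics</title><author>J. Doe</author></book>
</bibliography>
""")

# Query: titles of books published after 2000 (XQBE would draw this as an
# annotated tree pattern instead of code).
recent = [b.findtext("title") for b in doc.findall("book") if int(b.get("year")) > 2000]
print(recent)  # ['Visual Languages']

# Update: mark every matching book, the textual counterpart of an XQBE update.
for b in doc.findall("book"):
    if int(b.get("year")) > 2000:
        b.set("recent", "true")
print(ET.tostring(doc, encoding="unicode"))
```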


Author(s):  
Esther Guerra ◽  
Juan de Lara

In this chapter, we present our approach for the definition of Multi-View Visual Languages (MVVLs). These are languages made of a set of different diagram types, which are used to specify the different aspects of a system. A prominent example of this kind of language is UML, which defines a set of diagrams for the description of the static and dynamic elements of software systems. In the multi-view approach, consistency checking is essential to verify that the combination of the various system views yields a consistent description of the system. We use two techniques to define environments for MVVLs: meta-modelling and graph transformation. The former is used to describe the syntax of the whole language. In addition, we define a meta-model for each diagram type of the language (which we call a viewpoint) as a restriction of the complete MVVL meta-model. From this high-level description, we can generate a customized environment supporting the definition of multiple system views. Consistency between views is ensured by translating each of them into a unique repository model which conforms to the meta-model of the whole language. The translation is performed by automatically generated graph transformation rules. Whenever a change is performed in a view, some rules are triggered to update the repository. These updates may trigger other rules that propagate the changes from the repository to the rest of the views. In our approach, graph transformation techniques are also used for other purposes, such as model simulation, optimization, and transformation into other formalisms. In this chapter, we also discuss the integration of these concepts in the AToM3 tool, and show some illustrative examples by generating an environment for a small subset of UML.
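As a rough illustration of the repository-based consistency mechanism described above, the sketch below hand-codes what the approach generates automatically as graph transformation rules: a change in one view updates a shared repository model, which then propagates the change to the other views. All class, attribute, and view names are hypothetical, and plain Python functions stand in for the generated rules.

```python
# Hedged sketch of view-to-repository change propagation in a multi-view
# language.  The chapter generates graph transformation rules for this
# automatically; here the "rules" are ordinary methods and names are invented.
class Repository:
    """Single model conforming to the complete MVVL meta-model."""
    def __init__(self):
        self.classes = {}   # class name -> attributes
        self.views = []     # registered views to keep consistent

    def register(self, view):
        self.views.append(view)

    def update_class(self, source_view, name, attrs):
        self.classes[name] = attrs          # rule 1: view change updates repository
        for v in self.views:                # rule 2: repository change is propagated
            if v is not source_view:
                v.refresh(name, attrs)

class DiagramView:
    def __init__(self, repo):
        self.local, self.repo = {}, repo
        repo.register(self)

    def add_class(self, name, attrs):
        self.local[name] = attrs
        self.repo.update_class(self, name, attrs)

    def refresh(self, name, attrs):
        self.local[name] = attrs            # keep this view consistent

repo = Repository()
static_view, behaviour_view = DiagramView(repo), DiagramView(repo)
static_view.add_class("Order", ["id", "total"])
print(behaviour_view.local)  # {'Order': ['id', 'total']}
```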


Author(s):  
Daniela Fogli ◽  
Andrea Marcante ◽  
Piero Mussio

This chapter recognizes that the knowledge relevant to the design of an interactive system is distributed among several stakeholders: domain experts, software engineers, and Human-Computer Interaction experts. Hence, the design of an interactive system is a multi-faceted activity requiring the collaboration of experts from these communities. Each community describes an interactive system through visual sentences of a Visual Language (VL). A first VL allows domain experts to reason about the system's usage in their specific activities. A second VL, the State-Chart language, is used to specify the system's behaviour for software engineers' purposes. A communication gap exists between the two communities, in that domain experts do not understand software engineers' jargon and vice versa. To overcome this gap, a third VL permits Human-Computer Interaction experts to translate the user view of the system, embedded in the domain experts' Visual Language, into a specification in the software engineering Visual Language.
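The State-Chart view mentioned above can be grounded with a small textual counterpart: the sketch below encodes, as a plain transition table, the kind of behavioural specification that software engineers would draw as a statechart. The widget, its states, and its events are hypothetical and are not taken from the chapter.

```python
# Hedged sketch: a textual stand-in for a State-Chart specification of an
# interactive widget's behaviour.  States and events are invented; the
# chapter's second VL would express this diagrammatically.
TRANSITIONS = {
    ("idle", "click"): "selected",
    ("selected", "click"): "editing",
    ("editing", "confirm"): "idle",
    ("editing", "cancel"): "selected",
}

def run(events, state="idle"):
    """Replay a sequence of user events against the transition table."""
    for e in events:
        state = TRANSITIONS.get((state, e), state)  # unknown events leave the state unchanged
    return state

print(run(["click", "click", "confirm"]))  # idle
```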


Author(s):  
Arianna D’Ulizia ◽  
Patrizia Grifoni

This chapter introduces a classification of ambiguities in Visual Languages and discusses the ambiguities that occur in Spatial Visual Query Languages. The definition of Visual Language given in (Bottoni et al., 1995) is adopted: a set of Visual Sentences, each formed by an image, a description, an interpretation function, and a materialization function. A distinction is proposed between ambiguities produced by the 1-n relationship between an image and its description, and ambiguities due to imprecision produced by the user's behaviour during the interaction. Furthermore, the authors hope that this comprehensive classification of ambiguities may assist in the definition of Visual Languages, so as to allow users to communicate through visual notations while avoiding sentences that have multiple interpretations.
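A minimal sketch of the adopted definition may help fix ideas: a Visual Sentence pairs an image and a description with an interpretation function and a materialization function, and an interpretation function that returns more than one description captures the 1-n ambiguity discussed above. The concrete types and the example image below are invented for illustration.

```python
# Minimal sketch of the Visual Sentence definition cited in the abstract
# (image, description, interpretation and materialization functions).
# Types and the example ambiguity are invented for this illustration.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class VisualSentence:
    image: str                              # e.g. a rendered picture or pixel data
    description: str                        # its intended formal description
    interpret: Callable[[str], List[str]]   # image -> candidate descriptions
    materialize: Callable[[str], str]       # description -> image

# A 1-n interpretation function models ambiguity: one image, several readings.
def interpret(image: str) -> List[str]:
    if image == "circle-overlapping-rectangle":
        return ["circle INSIDE rectangle", "circle OVERLAPS rectangle"]
    return [image]

vs = VisualSentence(
    image="circle-overlapping-rectangle",
    description="circle OVERLAPS rectangle",
    interpret=interpret,
    materialize=lambda d: d.replace(" ", "-").lower(),
)

readings = vs.interpret(vs.image)
print(len(readings) > 1)   # True: this sentence is ambiguous in the sketch
```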


Author(s):  
Paolo Bottoni ◽  
Dino Frediani ◽  
Paolo Quattrocchi

The definition of visual languages, of their semantics, and of the interactions with them can all be expressed in terms of a notion of transformation of multisets of resources. Moreover, the concrete syntax for a particular language can be obtained in a semi-automatic way by declaring the conformity of the language to some family of languages specified by a metamodel. In a similar way, the generation of the associated semantics can take advantage of the identification of the variety of semantics being expressed. According to the associated metamodel, one can obtain an abstract view of the semantic roles that visual elements can play with respect to the process being described. We propose here an integrated framework and interactive environment, based on a collection of metamodels, in which to express both the syntactical characterizations of diagrammatic sentences and their semantic interpretations.


Author(s):  
Vincenzo Deufemia

Recognition of hand-drawn, diagrammatic sketches is a very active research field, since it finds a natural application in a wide range of domains, such as engineering, software design, and architecture. However, it is a particularly difficult task, since the symbols of a sketched diagram can be drawn with varying stroke order, number, and direction. The difficulties in the recognition process are often compounded by the lack of precision and by the presence of ambiguities in messy hand-drawn sketches. In this article we present a brief survey of sketch understanding techniques and tools. We first present the major problems that should be considered in the construction of on-line sketch recognizers. We then analyze representative works on the recognition of freehand shapes and describe several shape description languages for the automatic construction of sketch recognizers.
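As a concrete illustration of why stroke direction and point density complicate recognition, the sketch below shows a generic pre-processing step often used by on-line recognizers: resampling a stroke to a fixed number of evenly spaced points and normalising its drawing direction. This is a hedged, generic example, not a technique taken from the article.

```python
# Hedged sketch of a common pre-processing step in on-line sketch recognition:
# resample a stroke and normalise its direction so that recognition does not
# depend on drawing speed, point density, or drawing direction.
import math

def resample(stroke, n=32):
    """Return n points spaced evenly along the stroke's arc length."""
    dists = [math.dist(a, b) for a, b in zip(stroke, stroke[1:])]
    total = sum(dists) or 1.0
    step, out, acc = total / (n - 1), [stroke[0]], 0.0
    for (a, b), d in zip(zip(stroke, stroke[1:]), dists):
        acc += d
        while acc >= step and len(out) < n:
            t = 1 - (acc - step) / d if d else 0.0
            out.append((a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])))
            acc -= step
    while len(out) < n:            # guard against floating-point shortfall
        out.append(stroke[-1])
    return out

def normalise_direction(stroke):
    """Flip a stroke drawn right-to-left, removing direction variance."""
    return stroke if stroke[0][0] <= stroke[-1][0] else list(reversed(stroke))

raw = [(10.0, 0.0), (5.0, 0.0), (0.0, 0.0)]   # a line drawn right to left
print(resample(normalise_direction(raw), n=5))
```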


Author(s):  
Kristine Deray

Interactions are a core part of interactive computing. However, their mechanisms remain poorly understood. The tendency has been to understand interactions in terms of the results they produce rather than to provide the mechanisms that explain “how” interactions unfold in time. In this chapter, we present a framework for creating visual languages for representing interactions, which uses human movement as a source for the core concepts of the visual language. Our approach is motivated and supported by evidence from research on kinaesthetic thinking that constructs based on human movement support higher-level cognitive processes and can be intuitively recognised by humans. We present an overview of the framework, an instance of a visual language designed using the proposed framework, and its application to representing and analysing interactions between patients and practitioners in the healthcare domain. The developed approach and the corresponding techniques target interactive computer systems for facilitating interaction-rich domains such as health care (in particular occupational therapy), collaborative design, and learning.


Author(s):  
Bernd Meyer ◽  
Paolo Bottoni

In this paper we investigate a new approach to formalizing interpretation of and reasoning with visual languages based on linear logic. We argue that an approach based on logic makes it possible to deal with different computational tasks in the usage of visual notations, from parsing and animation to reasoning about diagrams. However, classical first order logic, being monotonic, is not a suitable basis for such an approach. The paper therefore explores linear logic as an alternative. We demonstrate how parsing corresponds to linear proofs and prove the soundness and correctness of this mapping. As our mapping of grammars is into a subset of a linear logic programming language, we also demonstrate how multi-dimensional parsing can be understood as automated linear deduction. We proceed to discuss how the same framework can be used as the foundation of more complex forms of reasoning with and about diagrams.
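The core intuition here, that a grammar rule behaves like a linear implication consuming each visual token exactly once, can be sketched with a toy multiset rewriter. The grammar, the token names, and the Python encoding below are invented for illustration; the paper itself works within a linear logic programming language, not Python.

```python
# Hedged sketch of parsing as linear resource consumption: each rule consumes
# its premises exactly once and produces a new resource, mimicking a linear
# implication A ⊗ B ⊸ C.  The toy diagram grammar is invented for this sketch.
from collections import Counter

# Each rule: (resources consumed, resource produced).
RULES = [
    (Counter({"circle": 1, "label": 1}), "state"),
    (Counter({"state": 2, "arrow": 1}), "transition"),
]

def parse(tokens: Counter, goal: str) -> bool:
    """Return True if the multiset of tokens rewrites to exactly {goal}."""
    if tokens == Counter({goal: 1}):
        return True
    for consumed, produced in RULES:
        if all(tokens[r] >= n for r, n in consumed.items()):
            rest = tokens - consumed        # resources are used up, not copied
            rest[produced] += 1
            if parse(rest, goal):
                return True
    return False

# Two labelled circles joined by an arrow parse as a single transition.
diagram = Counter({"circle": 2, "label": 2, "arrow": 1})
print(parse(diagram, "transition"))   # True
```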

