Ubiquitous User Interfaces

Author(s):  
Marco Blumendorf ◽  
Grzegorz Lehmann ◽  
Dirk Roscher ◽  
Sahin Albayrak

The widespread use of computing technology raises the need for interactive systems that adapt to the user, the device, and the environment. Multimodal user interfaces provide the means to support the user in various situations and to adapt the interaction to the user’s needs. In this chapter we present a system utilizing design-time user interface models at runtime to provide flexible multimodal user interfaces. The server-based system allows the combination and integration of multiple devices to support multimodal interaction, and the adaptation of the user interface to the devices in use, the user, and the environment. Keeping the user interface models available at runtime makes the design information exploitable for advanced adaptation. An implementation of the system has been successfully deployed in a smart home environment in the course of the Service Centric Home project (www.sercho.de).
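
As a rough illustration of the runtime-model idea, consider the following Python sketch. It is illustrative only: the class names, context attributes, and rendering rule are invented for the example and are not the SerCHo system's actual (server-based) architecture.

    # Illustrative sketch only: a design-time UI model kept alive at runtime,
    # re-rendered whenever the context of use changes. Class names, context
    # attributes and rendering rules are assumptions, not the SerCHo design.
    from dataclasses import dataclass

    @dataclass
    class Context:
        device: str          # e.g. "phone", "tv", "wall_panel"
        modality: str        # e.g. "graphical", "voice"
        ambient_noise: float # 0.0 (silent) .. 1.0 (loud)

    class RuntimeUIModel:
        """Holds the platform-independent model and adapts it on demand."""
        def __init__(self, abstract_elements):
            self.abstract_elements = abstract_elements

        def render(self, ctx: Context):
            # Voice output only makes sense in reasonably quiet surroundings.
            if ctx.modality == "voice" and ctx.ambient_noise < 0.3:
                return [f"<prompt name='{e}'/>" for e in self.abstract_elements]
            return [f"<widget name='{e}'/>" for e in self.abstract_elements]

    model = RuntimeUIModel(["greeting", "light_switch"])
    print(model.render(Context("phone", "voice", ambient_noise=0.1)))

Because the abstract elements stay platform-independent, the same model instance can be re-rendered when the user switches devices or modalities.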

2009 ◽  
pp. 1213-1222
Author(s):  
Christopher J. Pavlovski ◽  
Stella Mitchell

Traditional user interface design generally deals with the problem of enhancing the usability of a particular mode of user interaction, and a large body of literature exists concerning the design and implementation of graphical user interfaces. The additional constraints introduced by smaller mobile devices, such as mobile phones and PDAs, make an intuitive and heuristic user interface design more difficult to achieve. Multimodal user interfaces employ several modes of interaction; these may include text, speech, visual gesture recognition, and haptics. To date, systems that employ speech and text for application interaction appear to be the mainstream multimodal solutions. There is some work on the design of multimodal user interfaces for general mobility accommodating laptops or desktop computers (Sinha & Landay, 2002). However, advances in multimodal technology that accommodate the needs of smaller mobile devices, such as mobile phones and personal digital assistants, are still emerging. Mobile phones are now commonly equipped with the mechanics for visual browsing of Internet applications, although their small screens and cumbersome text input methods pose usability challenges. The use of a voice interface together with a graphical interface is a natural solution to several challenges that mobile devices present. Such interfaces enable the user to exploit the strengths of each mode in order to make it easier to enter and access data on small devices. Furthermore, the flexibility offered by multiple modes for one application allows users to adapt their interactions based on preference and on environmental setting. For instance, hands-free speech operation may be conducted while driving, whereas graphical interaction can be adopted in noisy surroundings or when private data entry, such as a password, is required in a public environment. In this article we discuss multimodal technologies that address the technical and usability constraints of the mobile phone or PDA. These environments pose several challenges beyond those of general mobility solutions, including the computational strength of the device, bandwidth constraints, and screen size restrictions. We outline the requirements of mobile multimodal solutions involving cellular phones. Drawing upon several trial deployments, we summarize the key design points from both a technology and usability standpoint, and identify the outstanding problems in these designs. We also outline several future trends in how this technology is being deployed in various application scenarios, ranging from simple voice-activated search engines through to comprehensive mobile office applications.
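
The kind of mode arbitration the article motivates can be hinted at with a toy rule set. The following Python sketch is hypothetical; the rules and the noise threshold are invented for illustration and are not taken from the trial deployments.

    # Hypothetical mode arbitration in the spirit of the article: pick an
    # interaction mode from the environment and the user's preference.
    def select_mode(driving: bool, noise_level: float,
                    private_entry: bool, preferred: str = "graphical") -> str:
        if driving:
            return "voice"        # hands-free operation behind the wheel
        if private_entry or noise_level > 0.6:
            return "graphical"    # passwords in public; speech fails in noise
        return preferred

    print(select_mode(driving=False, noise_level=0.8, private_entry=False))
    # -> graphical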


Author(s):  
Sergey Sakulin ◽  
Alexander Alfimtsev ◽  
Evgeny Tipsin ◽  
Vladimir Devyatkov ◽  
Dmitry Sokolov

The rapid growth of computing devices has led to the emergence of distributed user interfaces. A user interface is called distributed if a user can interact with it using several devices at the same time. Formal methods for designing such interfaces, in particular methods for distributing interface elements across multiple devices, are yet to be developed. This is why, every time a new application requires a distributed user interface, the latter has to be designed from scratch, rendering the entire venture economically inefficient. To minimize costs and to unify and automate the development of distributed interfaces, we need general formal methods for designing distributed interfaces that are independent of any particular application or device. This article proposes a formal distribution method based on the pi-calculus.
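
To give a flavour of the formalism (a toy example, not the article's actual method), channel mobility in the pi-calculus lets one device hand the channel of an interface element to another device:

    \[
    \begin{aligned}
    \mathit{Phone}  &= \overline{\mathit{migrate}}\langle \mathit{btn} \rangle.\,\mathit{Phone}' \\
    \mathit{TV}     &= \mathit{migrate}(x).\,x(\mathit{evt}).\,\mathit{TV}' \\
    \mathit{System} &= (\nu\,\mathit{btn})\,(\mathit{Phone} \mid \mathit{TV})
    \end{aligned}
    \]

After the synchronisation on migrate, the TV holds the button's channel btn and can consume its events. This ability to pass interaction channels between processes is what makes the pi-calculus a natural candidate for describing interface distribution.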


2020 ◽  
Author(s):  
Mariel García-Hernández

Over time, a variety of documents and standards have been developed that establish how interfaces for medical information visualizations should be designed, for example ISO 9241-11 “Ergonomic requirements for office work with visual display terminals”, IEC TR 61997 “Guidelines for the user interfaces in multimedia equipment for general purpose use”, the ISO/IEC “Information technology - user interface for mobile tools”, and ISO 13407 “Human-centred design processes for interactive systems”, which set out technical guidelines for the design of such artifacts. However, from the point of view of information design, and up to the publication date of this thesis, no guidelines had been proposed that address usability in the design itself. This doctoral thesis therefore aimed to produce usability guidelines for the development of graphical interfaces for medical information visualizations. The guidelines proposed here build on research by authors such as Nielsen, Frascara, Lonsdale & Lonsdale, and Cairo, who address aspects such as color, information structure, text, graphic elements, and the user from the perspectives of information design and cognitive ergonomics. When applied, the proposed guidelines are meant to help the designer in charge of interface development produce usable artifacts, that is, artifacts that are efficient (easy to read), effective (easy to understand), and satisfying (aesthetically pleasing) for the user who interacts with them. To that end, a usability test was carried out in two phases: the first sought to validate the editorial composition of the guidelines, and the second to validate their content (in terms of information).


Formalization approaches for user interface design (UID), in conjunction with model-driven techniques, aim to improve usability in terms of conformity to standards or style guides and to leverage code generation for interactive software systems, so that various UI platforms for web, desktop, or mobile applications are supported. Because large parts of the UI are described in a platform-independent way rather than as platform-dependent implementations, reusability of the UI concept is also improved. However, UI formalization requires the use of a formal UI description language and a higher level of abstraction than concrete UI code, and these languages need to be learned by the UI designer. In practice, most parts of a user interface are still manually designed and coded individually for every platform. This paper describes how formally described HCI (human-computer interaction) patterns can be used in conjunction with model-based user interface design to make it easier for the designer to use formalization techniques for the development of user interfaces. The approach uses two UML profiles: the MBUID (Model-Based User Interface Design) profile and the HCI pattern profile. With these profiles, formal models of interactive systems can be created at a platform-independent level. The user interface is then generated automatically by a model-driven development tool chain.
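
The generation step can be approximated with a toy sketch: a platform-independent model (roughly what a UML/MBUID profile would capture, here reduced to plain data) plus a per-platform generator. All names are invented for illustration; the paper's profiles and tool chain are not reproduced here.

    # Toy sketch of model-based UI generation, not the paper's tool chain.
    ABSTRACT_MODEL = {
        "LoginDialog": [                    # an HCI-pattern instance
            {"kind": "input",  "label": "User name"},
            {"kind": "secret", "label": "Password"},
            {"kind": "action", "label": "Log in"},
        ],
    }

    WIDGETS = {
        "web":     {"input": "<input>", "secret": "<input type='password'>",
                    "action": "<button>"},
        "desktop": {"input": "TextField", "secret": "PasswordField",
                    "action": "Button"},
    }

    def generate(model: dict, platform: str) -> str:
        """Map every abstract element to a concrete widget for one platform."""
        lines = []
        for dialog, elements in model.items():
            lines.append(f"{dialog} ({platform}):")
            for e in elements:
                lines.append(f"  {e['label']}: {WIDGETS[platform][e['kind']]}")
        return "\n".join(lines)

    print(generate(ABSTRACT_MODEL, "web"))

One abstract model, several concrete UIs: the pattern instance is written once and each platform mapping is supplied by the generator.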


2019 ◽  
Vol 12 (2) ◽  
pp. 91-114
Author(s):  
Dorra Zaibi ◽  
Meriem Riahi ◽  
Faouzi Moussa

The advent of mobile devices has raised new challenges. One main challenge concerns the quality of the dialogue between human beings and interactive systems. This dialogue pertains principally to the user interface, since it is the part of the system visible to humans. Developing a usable interface is crucial for the success or failure of mobile applications in actual use. One important research issue for the effective use of user interfaces is how to handle usability requirements in ubiquitous environments. The present article addresses this issue while taking context-awareness into account. The authors propose a methodology in this direction that considers human factors in the design of user interfaces. In particular, the approach focuses on how to select consistent usability requirements and how to incorporate them into the user interface development process, considering the context of use. An illustrative case study is presented. Finally, an experimental platform for interface evaluation is proposed.
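
A context-driven selection of usability requirements might look like the following Python sketch. The requirement catalogue and the matching rule are invented for this example, not taken from the article's methodology.

    # Illustrative sketch of context-driven requirement selection.
    REQUIREMENTS = [
        {"id": "large-touch-targets", "when": {"mobility": "walking"}},
        {"id": "voice-feedback",      "when": {"light": "dark"}},
        {"id": "high-contrast-theme", "when": {"light": "bright"}},
    ]

    def select_requirements(context: dict) -> list:
        """Keep every requirement whose conditions all hold in the context."""
        return [r["id"] for r in REQUIREMENTS
                if all(context.get(k) == v for k, v in r["when"].items())]

    print(select_requirements({"mobility": "walking", "light": "bright"}))
    # -> ['large-touch-targets', 'high-contrast-theme']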


2008 ◽  
Vol 17 (04) ◽  
pp. 467-494 ◽  
Author(s):  
Stephan Lukosch ◽  
Mohamed Bourimi

Web-based collaborative systems support a variety of complex scenarios. Not only the interaction between a single user and the computer has to be modeled, but also the interaction among the collaborating users. As a result, the user interfaces of many web-based collaborative systems are quite complex, yet hardly use proven user interface concepts for the design of interactive systems. In doing so, web-based collaborative systems complicate the users' interaction with the system and with each other. In this article, we describe how the adaptability and usability of such systems can be improved, in particular by supporting direct-manipulation techniques for navigation as well as for tailoring. The new tailoring and navigation functionality is complemented by new forms of visualizing synchronous awareness information and of supporting communication in web-based systems. We demonstrate this by retrofitting the web-based collaborative system CURE, while highlighting the concepts that can easily be transferred to other web-based collaborative systems.
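
Synchronous awareness in a shared workspace can be reduced to a publish/subscribe core, as in the minimal Python sketch below. The event shape and the in-memory hub are hypothetical; CURE's actual retrofit is web-based and considerably richer.

    # Minimal sketch: broadcast each user action so collaborators see who
    # is doing what. Hypothetical design, not CURE's implementation.
    class AwarenessHub:
        def __init__(self):
            self.subscribers = []

        def subscribe(self, callback):
            self.subscribers.append(callback)

        def publish(self, user, action, artifact):
            event = {"user": user, "action": action, "artifact": artifact}
            for notify in self.subscribers:
                notify(event)   # e.g. render "Alice is editing 'Agenda'"

    hub = AwarenessHub()
    hub.subscribe(lambda e: print(f"{e['user']} is {e['action']} '{e['artifact']}'"))
    hub.publish("Alice", "editing", "Agenda")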


Author(s):  
Karin Coninx ◽  
Joan De Boeck ◽  
Chris Raymaekers ◽  
Lode Vanacken

The creation of virtual environments is often a lengthy and expensive process. Defining the interaction dialog between the user and the environment is an especially difficult task, as the communication is often multimodal by nature. In this chapter, we elaborate on an approach that facilitates the development of this kind of user interface. In particular, we propose a model-based user interface design process (MBUID) in which the interface is defined by means of high-level notations rather than by writing low-level programming code. The approach lifts the design to a higher level of abstraction, resulting in a shortened development cycle that leaves room for intermediate prototypes and user evaluation, and ultimately in better and cheaper virtual environment interfaces.
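
What "high-level notation instead of low-level code" can mean is illustrated by the toy state-transition description below. The notation is invented for this sketch; the chapter's MBUID process defines its own notations.

    # Toy illustration: a multimodal interaction dialog as a plain
    # state-transition table, interpreted at runtime.
    DIALOG = [
        # (state, multimodal event, next state, effect)
        ("idle",     "point+speech:'select this'", "selected", "highlight"),
        ("selected", "gesture:rotate",             "selected", "rotate"),
        ("selected", "speech:'release'",           "idle",     "unhighlight"),
    ]

    def step(state, event):
        """Interpret the dialog description: find the matching transition."""
        for s, ev, nxt, effect in DIALOG:
            if s == state and ev == event:
                return nxt, effect
        return state, None

    print(step("idle", "point+speech:'select this'"))
    # -> ('selected', 'highlight')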


Author(s):  
Kai Tuuri ◽  
Antti Pirhonen ◽  
Pasi Välkkynen

The creative processes of interaction design operate in the terms we generally use for conceptualising human-computer interaction (HCI). The prevailing design paradigm therefore provides a framework that essentially affects and guides the design process. We argue that the current mainstream design paradigm for multimodal user interfaces treats human sensory-motor modalities and the related user-interface technologies as separate channels of communication between the user and an application. Within such a conceptualisation, multimodality implies the use of different technical devices in interaction design. This chapter outlines an alternative design paradigm based on an action-oriented perspective on human perception and the meaning-creation process. The proposed perspective stresses the integrated sensory-motor experience and the active embodied involvement of the subject, in which perception is coupled to action as a natural part of interaction. The outlined paradigm provides a new conceptual framework for the design of multimodal user interfaces. A key motivation for this new framework is acknowledging multimodality as an inevitable quality of interaction and interaction design, one whose existence does not depend on, for example, the number of presentation modes implemented in an HCI application. The need for such an interaction- and experience-derived perspective is amplified by the trend of computing moving into smaller devices of various forms that are embedded in our everyday life. As a brief illustration of the proposed framework in practice, a case study of sonic interaction design is presented.

