Multimodality in Mobile Computing and Mobile Devices
Latest Publications

Published By IGI Global
ISBN: 9781605669786, 9781605669793
TOTAL DOCUMENTS: 14 (five years: 0)
H-INDEX: 2 (five years: 0)

Author(s):  
Julie Doyle ◽  
Michela Bertolotto ◽  
David Wilson

Information technology can play an important role in helping the elderly to live full, healthy and independent lives. However, elders are often overlooked as a potential user group for many technologies. In particular, we are concerned with the lack of GIS applications that might be useful to the elderly population. The underlying reasons that make it difficult to design usable applications for elders are threefold: the first is a lack of digital literacy within this cohort; the second involves physical and cognitive age-related impairments; and the third is a lack of knowledge about improving usability in interactive geovisualisation and spatial systems. In this chapter, we therefore analyse the existing literature on mobile multimodal interfaces, with emphasis on GIS and the specific requirements of the elderly in relation to such technologies. We also examine the potential benefits the elderly could gain from such technology, as well as the shortcomings of current systems, with the aim of realising the full potential of these systems for this diverse user group. In particular, we identify specific requirements for the design of multimodal GIS through a usage example of a system we have developed. This system produced very good evaluation results in terms of usability and effectiveness when tested with a different user group; however, a number of changes are necessary to ensure usability and acceptability for an elderly cohort. A discussion of these changes concludes the chapter.


Author(s):  
Marcos Martinez-Diaz ◽  
Julian Fierrez ◽  
Javier Ortega-Garcia

Automatic signature verification on handheld devices can be seen as a means of improving usability in consumer applications and of reducing costs in corporate environments. It can be easily integrated into touchscreen devices, for example as part of a combined handwriting- and keypad-based multimodal interface. Over the last few decades, several approaches to the problem of signature verification have been proposed. However, most research has considered signatures captured with digitizing tablets, where the quality of the captured data is much higher than on handheld devices. Signature verification on handheld devices thus represents a new scenario for both researchers and vendors. In this chapter, we introduce automatic signature verification as a component of multimodal interfaces; we analyze the applications and challenges of signature verification and review available resources and research directions. A case study is also given, in which a state-of-the-art signature verification system adapted to handheld devices is presented.
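
The chapter's case study is not reproduced here, but a common baseline for online signature verification on touch hardware is dynamic time warping (DTW) over sampled pen trajectories. The sketch below is a minimal, generic illustration of that idea; the feature set (raw x/y samples), the length normalisation and the acceptance threshold are all assumptions, not the authors' system.

```python
# Minimal DTW-based online signature verifier (illustrative baseline,
# not the system described in the chapter).
import math

def dtw_distance(a, b):
    """Dynamic time warping distance between two sequences of (x, y) pen samples."""
    n, m = len(a), len(b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(a[i - 1], b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m] / (n + m)  # normalise by combined sequence length

def verify(templates, query, threshold=0.5):
    """Accept the query if its mean DTW distance to the enrolled
    templates falls below a (hypothetical) threshold."""
    score = sum(dtw_distance(t, query) for t in templates) / len(templates)
    return score < threshold

enrolled = [[(0, 0), (1, 1), (2, 1)], [(0, 0), (1, 1), (2, 2)]]
attempt = [(0, 0), (1, 1), (2, 1)]
print(verify(enrolled, attempt))  # -> True for this toy trace
```

On low-quality handheld capture, a real system would add resampling, rotation/scale normalisation and richer features (pressure, velocity) before the DTW step.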


Author(s):  
Antonio Gentile ◽  
Antonella Santangelo ◽  
Salvatore Sorce ◽  
Agnese Augello ◽  
Giovanni Pilato ◽  
...  

In this chapter, the role of multimodality in intelligent mobile guides for cultural heritage environments is discussed. Multimodal access to information contents enables the creation of systems with a higher degree of accessibility and usability. A multimodal interaction may involve several human interaction modes, such as sight, touch and voice to navigate contents, or gestures to activate controls. We first present a timeline of cultural heritage system evolution, spanning from 2001 to 2008, which highlights design issues such as intelligence and context-awareness in providing information. Then, multimodal access to contents is discussed, along with problems and corresponding solutions; an evaluation of several reviewed systems is also presented. Lastly, a multimodal framework termed MAGA is described as a case study; it combines intelligent conversational agents with speech recognition/synthesis technology in a framework employing RFID-based location sensing and Wi-Fi-based data exchange.
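
As a rough illustration of how RFID-based location can drive content delivery in such a guide, the sketch below maps detected tags to exhibit content and narrates it through a text-to-speech stub. The tag IDs, exhibit data and function names are illustrative assumptions, not the MAGA framework's actual API.

```python
# RFID-triggered, speech-delivered content in a mobile museum guide
# (toy example; data and names are invented for illustration).
EXHIBITS = {
    "rfid:0x1A2B": {"title": "Greek amphora", "intro": "This amphora dates to the 5th century BC."},
    "rfid:0x3C4D": {"title": "Roman mosaic", "intro": "The mosaic depicts a hunting scene."},
}

def speak(text):
    # Stand-in for a text-to-speech engine on the handheld device.
    print(f"[TTS] {text}")

def on_tag_detected(tag_id):
    """Map a detected RFID tag to exhibit content and narrate it;
    follow-up questions would be routed to the conversational agent."""
    exhibit = EXHIBITS.get(tag_id)
    if exhibit:
        speak(f"You are near the {exhibit['title']}. {exhibit['intro']}")

on_tag_detected("rfid:0x1A2B")
```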


Author(s):  
Aidan Kehoe ◽  
Flaithri Neff ◽  
Ian Pitt

There are numerous challenges to accessing user assistance information in mobile and ubiquitous computing scenarios. For example, there may be little or no display real estate on which to present information visually, the user’s eyes may be busy with another task (e.g., driving), and it can be difficult to read text while moving. Speech, together with non-speech sounds and haptic feedback, can be used to make assistance information available to users in these situations. Non-speech sounds and haptic feedback can cue information that is about to be presented via speech, ensuring that the listener is prepared and that leading words are not missed. In this chapter, we report on two studies that examine user perception of the duration of the pause between a cue (which may be a non-speech sound, a haptic effect, or a combined non-speech sound plus haptic effect) and the subsequent delivery of assistance information using speech. Based on these studies, we recommend cue-pause intervals in the range of 600 ms to 800 ms.
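
The cue-pause-speech pattern is straightforward to express in code. The sketch below, with stand-in stubs for the audio/haptic and speech layers, fixes the pause at 700 ms, inside the 600-800 ms window the studies recommend.

```python
# Cue -> pause -> speech delivery pattern; the output calls are stubs.
import time

CUE_PAUSE_S = 0.7  # within the recommended 600-800 ms window

def play_cue():
    print("[cue] earcon + haptic pulse")  # stand-in for audio/haptic output

def speak(text):
    print(f"[TTS] {text}")                # stand-in for speech synthesis

def deliver_assistance(message):
    """Cue the listener, pause so the leading words are not missed,
    then deliver the assistance message via speech."""
    play_cue()
    time.sleep(CUE_PAUSE_S)
    speak(message)

deliver_assistance("To save the route, press and hold the action key.")
```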


Author(s):  
Keith Waters

Multimodality presents challenges within a mobile cellular network. Variable connectivity, coupled with a wide variety of handset capabilities, presents significant constraints that are difficult to overcome. As a result, commercial mobile multimodal implementations have yet to reach the consumer mass market and are considered niche services. This chapter describes multimodality with handsets in cellular mobile networks, coupled to new opportunities in targeted Web services. Such Web services aim to simplify and speed up interactions through new user experiences. This chapter highlights some key components of a few existing approaches. While the most common forms of multimodality use voice and graphics, new modes of interaction are enabled via simple access to device properties, called the Delivery Context: Client Interfaces (DCCI).
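
DCCI itself is specified as a DOM interface for scripting environments; the toy Python model below illustrates only the underlying idea of a hierarchical, queryable tree of device properties that a multimodal Web service can adapt to. All class and method names here are illustrative assumptions, not the DCCI API.

```python
# Toy model of a hierarchical delivery-context property tree
# (conceptual illustration only; DCCI's real interface is DOM-based).
class Property:
    def __init__(self, name, value=None, children=()):
        self.name, self.value = name, value
        self.children = {c.name: c for c in children}

    def search(self, path):
        """Resolve a slash-separated path such as 'display/width'."""
        node = self
        for part in path.split("/"):
            node = node.children[part]
        return node.value

delivery_context = Property("root", children=[
    Property("display", children=[
        Property("width", 320), Property("height", 480),
    ]),
    Property("network", children=[Property("bearer", "3G")]),
])

# A multimodal service could adapt its presentation to these values.
print(delivery_context.search("display/width"))  # -> 320
```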


Author(s):  
Marco de Sá ◽  
Carlos Duarte ◽  
Luís Carriço ◽  
Tiago Reis

In this chapter we describe a set of techniques and tools that aim to support designers in creating mobile multimodal applications. We explain how the additional difficulties that designers face during this process, especially those related to multimodality, can be tackled. In particular, we present a scenario generation and context definition framework that can be used to drive design and support evaluation within realistic settings, promoting in-situ design and richer results. In conjunction with the scenario framework, we detail a prototyping tool developed to support the early-stage prototyping and evaluation of mobile multimodal applications, from the first sketch-based prototypes up to the final quantitative analysis of usage results. As a case study, we describe a mobile application for accessing and reading rich digital books. The application offers users, in particular blind users, the means to read and annotate digital books; it was designed for Pocket PCs and smartphones, and includes a set of features that enhance both the content and the usability of traditional books.
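
To give a concrete flavour of what a scenario/context definition might contain, the sketch below captures one usage scenario as a plain data structure. The fields and values are assumptions made for illustration; the chapter's actual framework format is not specified here.

```python
# Hypothetical scenario/context record for in-situ design and evaluation.
from dataclasses import dataclass, field

@dataclass
class Scenario:
    """One usage scenario driving the design and evaluation of a
    mobile multimodal prototype."""
    name: str
    setting: str                    # e.g. "commuting by bus"
    user_profile: str               # e.g. "blind user"
    available_modalities: list = field(default_factory=list)
    noise_level: str = "moderate"   # constrains audio input/output

reading_on_the_move = Scenario(
    name="annotate while commuting",
    setting="commuting by bus",
    user_profile="blind user",
    available_modalities=["speech input", "audio output", "hardware keys"],
    noise_level="high",
)
print(reading_on_the_move)
```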


Author(s):  
Kai Tuuri ◽  
Antti Pirhonen ◽  
Pasi Välkkynen

The creative processes of interaction design operate in the terms we generally use for conceptualising human-computer interaction (HCI). The prevailing design paradigm therefore provides a framework that essentially affects and guides the design process. We argue that the current mainstream design paradigm for multimodal user interfaces takes human sensory-motor modalities and the related user-interface technologies as separate channels of communication between the user and an application. Within such a conceptualisation, multimodality implies the use of different technical devices in interaction design. This chapter outlines an alternative design paradigm, based on an action-oriented perspective on human perception and meaning creation. The proposed perspective stresses the integrated sensory-motor experience and the active embodied involvement of the subject, in which perception is coupled with action as a natural part of interaction. The outlined paradigm provides a new conceptual framework for the design of multimodal user interfaces. A key motivation for this framework is to acknowledge multimodality as an inevitable quality of interaction and interaction design, whose existence does not depend on, for example, the number of presentation modes implemented in an HCI application. We see the need for such an interaction- and experience-derived perspective as amplified by the trend for computing to move into smaller devices of various forms that are embedded into our everyday lives. As a brief illustration of the proposed framework in practice, one case study of sonic interaction design is presented.


Author(s):  
Marco Blumendorf ◽  
Grzegorz Lehmann ◽  
Dirk Roscher ◽  
Sahin Albayrak

The widespread use of computing technology raises the need for interactive systems that adapt to the user, the device and the environment. Multimodal user interfaces provide the means to support the user in various situations and to adapt the interaction to the user’s needs. In this chapter we present a system that utilizes design-time user interface models at runtime to provide flexible multimodal user interfaces. The server-based system allows multiple devices to be combined and integrated to support multimodal interaction, and adapts the user interface to the devices in use, the user and the environment. Utilizing the user interface models at runtime makes it possible to exploit the design information for advanced adaptation. An implementation of the system has been successfully deployed in a smart home environment throughout the Service Centric Home project (www.sercho.de).
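
A heavily simplified sketch of the idea follows, assuming a hypothetical model format: a runtime representation of an interaction task is walked, and each connected device receives the best rendering it supports, a crude form of device-adapted output in a smart-home setting. None of the structures below are the chapter's actual model.

```python
# Toy runtime UI model driving per-device modality selection
# (structures invented for illustration).
UI_MODEL = {
    "notify_user": {
        "content": "Washing machine finished.",
        "preferred_modalities": ["speech", "text"],
    }
}

DEVICES = [
    {"id": "kitchen-display", "supports": {"text"}},
    {"id": "hallway-speaker", "supports": {"speech"}},
]

def render(task, devices):
    """Walk the runtime model and send each device the best-supported
    rendering of the task's content."""
    spec = UI_MODEL[task]
    for device in devices:
        for modality in spec["preferred_modalities"]:
            if modality in device["supports"]:
                print(f"{device['id']}: [{modality}] {spec['content']}")
                break

render("notify_user", DEVICES)
```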


Author(s):  
Jaeseung Chang ◽  
Marie-Luce Bourguet

Currently, the lack of reliable methodologies for the design and evaluation of usable multimodal interfaces makes developing multimodal interaction systems a significant challenge. In this chapter, we present a usability framework to support the design and evaluation of multimodal interaction systems. First, elementary multimodal commands are elicited using traditional usability techniques. Next, based on the CARE (Complementarity, Assignment, Redundancy, and Equivalence) properties and the FSM (Finite State Machine) formalism, the original set of elementary commands is expanded to form a comprehensive set of multimodal commands. Finally, this new set of multimodal commands is evaluated in two ways: user testing and error-robustness evaluation. The usability framework acts as a structured and general methodology for both the design and the evaluation of multimodal interaction. We have implemented software tools and applied this methodology to the design of a multimodal mobile phone to illustrate the use and potential of the proposed framework.
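
As an illustration of the FSM formalism combined with a CARE property, the sketch below models one small command set in which speech and pen events are equivalent (the E in CARE): either modality fires the same transition. The states and event names are invented for the example, not taken from the chapter's tools.

```python
# One multimodal command set as a finite state machine; speech and pen
# inputs are modelled as equivalent (CARE: Equivalence).
TRANSITIONS = {
    # (state, event) -> next state
    ("idle", "speech:open_map"): "map_open",   # speech and pen are
    ("idle", "pen:tap_map_icon"): "map_open",  # equivalent here
    ("map_open", "speech:zoom_in"): "zoomed",
    ("map_open", "pen:double_tap"): "zoomed",
}

def run(events, state="idle"):
    """Feed an interaction trace through the FSM; unrecognised events
    leave the state unchanged, which is where error-robustness
    evaluation would hook in."""
    for event in events:
        state = TRANSITIONS.get((state, event), state)
    return state

print(run(["speech:open_map", "pen:double_tap"]))  # -> "zoomed"
```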


Author(s):  
Kay Kadner ◽  
Martin Knechtel ◽  
Gerald Huebsch ◽  
Thomas Springer ◽  
Christoph Pohl

The diversity of today’s mobile technology also means that a single device may offer multiple interaction channels. This chapter surveys the basics of multimodal interaction in a mobility context and introduces a number of concepts for platform support. Synchronization approaches for input fusion and output fission, as well as a concept for device federation as a means of leveraging heterogeneous devices, are discussed with the help of an exemplary multimodal route-planning application. An outlook on future trends concludes the chapter.
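
Input fusion, one of the synchronization problems named above, can be illustrated with a minimal time-window scheme: a spoken command is merged with the most recent pointing gesture if both arrive within a fusion window. The 1.5-second window, the event shapes and the route-planning payload are assumptions made for the example.

```python
# Minimal time-window input fusion for a route-planning interaction
# (window size and event format are illustrative assumptions).
FUSION_WINDOW_S = 1.5

def fuse(events):
    """events: list of (timestamp, channel, payload), sorted by time.
    Pairs a 'speech' event with the nearest preceding 'gesture' event
    inside the window, e.g. 'route to this place' + a map tap."""
    fused, pending_gesture = [], None
    for t, channel, payload in events:
        if channel == "gesture":
            pending_gesture = (t, payload)
        elif channel == "speech":
            if pending_gesture and t - pending_gesture[0] <= FUSION_WINDOW_S:
                fused.append({"command": payload, "target": pending_gesture[1]})
                pending_gesture = None
            else:
                fused.append({"command": payload, "target": None})
    return fused

trace = [(0.0, "gesture", {"lat": 48.14, "lon": 11.58}),
         (0.9, "speech", "route to this place")]
print(fuse(trace))
```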

