Multimodal Human Computer Interaction and Pervasive Services
Latest Publications


TOTAL DOCUMENTS

24
(FIVE YEARS 0)

H-INDEX

5
(FIVE YEARS 0)

Published By IGI Global

9781605663869, 9781605663876

Author(s):
Floriana Esposito
Teresa M.A. Basile
Nicola Di Mauro
Stefano Ferilli

One of the most important features of a mobile device is its flexibility and its capability to adapt the functionality it provides to its users. However, the main problems of the systems presented in the literature are their inability to identify user needs and, more importantly, their insufficient mapping of those needs to available resources/services. In this paper, we present a two-phase construction of the user model: first, an initial static user model is built when the user connects to the system for the first time. Then, the model is revised/adjusted by considering the information collected in the logs of the user's interaction with the device/context, in order to make the model more adequate to the user's evolving interests/preferences/behaviour. The initial model is built by exploiting the stereotype concept; its adjustment is performed by exploiting machine learning techniques, particularly sequence mining and pattern discovery strategies.
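The chapter itself gives no code, but the two-phase idea it describes can be sketched minimally: start from a stereotype model of interest scores, then mine frequent action sequences from interaction logs and raise the scores of actions that occur in frequent patterns. All names and the weighting scheme below are illustrative assumptions, not the authors' method.

```python
from collections import Counter

def frequent_ngrams(log, n=2, min_support=2):
    """Count contiguous action n-grams in an interaction log and keep
    those occurring at least min_support times (a toy sequence miner)."""
    grams = Counter(tuple(log[i:i + n]) for i in range(len(log) - n + 1))
    return {g: c for g, c in grams.items() if c >= min_support}

def adjust_model(stereotype, log, weight=0.1):
    """Revise an initial stereotype model: each action appearing in a
    frequent pattern raises the corresponding interest score."""
    model = dict(stereotype)
    for gram, count in frequent_ngrams(log).items():
        for action in gram:
            model[action] = model.get(action, 0.0) + weight * count
    return model

stereotype = {"news": 0.5, "music": 0.5}   # hypothetical initial model
log = ["news", "music", "news", "music", "news", "music"]
revised = adjust_model(stereotype, log)
```

A real system would of course use a proper sequential-pattern-mining algorithm over timestamped logs; the point here is only the revision step layered on top of the static stereotype.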


Author(s):  
Regina Bernhaupt

In order to develop easy-to-use multimodal interfaces for mobile applications, effective usability evaluation methods (UEMs) are an essential component of the development process. Over the past decades, various usability evaluation methods have been developed and implemented to improve and assure easy-to-use user interfaces and systems. However, most of the so-called ‘classical’ methods exhibit shortcomings when used in the field of mobile applications, especially when addressing multimodal interaction (MMI). Hence, several ‘classical’ methods were broadened, varied, and changed to meet the demands of testing usability for multimodal interfaces and mobile applications. This chapter presents a selection of these ‘classical’ methods, and introduces some newly developed methods for testing usability in the area of multimodal interfaces. The chapter concludes with a summary of currently available methods for usability evaluation of multimodal interfaces for mobile devices.


Author(s):
Kristine Deray
Simeon Simoff

The purpose of this chapter is to set design guidelines for visual representations of interactions in mobile multimodal systems. The chapter looks at the features of interaction as a process and at how these features are exposed in the data. It presents a three-layer framework for designing visual representations for mobile multimodal systems, and a method that implements it. The method is based on an operationalisation of the source-target mapping from the contemporary theory of metaphor. The resultant design guidelines are grouped into (i) a set of high-level design requirements for visual representations of interactions on mobile multimodal systems; and (ii) a set of specific design requirements for the visual elements and displays used to represent interactions on mobile multimodal systems. The second set is then considered subject to an additional requirement: the preservation of the beauty of the representation across the relevant modalities. The chapter focuses on the modality of the output. Though the chapter considers interaction data from human-to-human interactions, the presented framework and design guidelines are applicable to interaction in general.


Author(s):  
Deborah A. Dahl

This chapter discusses a wide variety of current and emerging standards that support multimodal applications, including standards for architecture and communication, application definition, the user interface, and certifications. It focuses on standards for voice and GUI interaction. Some of the major standards discussed include the W3C multimodal architecture, VoiceXML, SCXML, EMMA, and speech grammar standards. The chapter concludes with a description of how the standards participate in a multimodal application and some future directions.
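Among the standards the chapter names, EMMA (the W3C Extensible MultiModal Annotation markup language) is the one that carries interpreted user input between components of the W3C multimodal architecture. As a hedged illustration, the snippet below parses a small, hypothetical EMMA result such as a speech recognizer might emit; the document content and element names inside the interpretation are invented for the example, while the `emma:emma`, `emma:interpretation`, and `emma:confidence` constructs and the namespace URI come from the EMMA 1.0 specification.

```python
import xml.etree.ElementTree as ET

EMMA_NS = "http://www.w3.org/2003/04/emma"  # W3C EMMA 1.0 namespace

# A minimal, hypothetical EMMA result for the spoken command "zoom the map".
doc = """\
<emma:emma version="1.0" xmlns:emma="http://www.w3.org/2003/04/emma">
  <emma:interpretation id="int1" emma:confidence="0.82"
                       emma:medium="acoustic" emma:mode="voice">
    <command action="zoom" target="map"/>
  </emma:interpretation>
</emma:emma>
"""

root = ET.fromstring(doc)
# Namespaced attributes expand to {namespace}name in ElementTree.
interp = root.find(f"{{{EMMA_NS}}}interpretation")
confidence = float(interp.get(f"{{{EMMA_NS}}}confidence"))
command = interp.find("command").attrib
```

A dialog manager in the multimodal architecture would receive such a document from a modality component and act on the application-specific payload (`command` here) after checking the EMMA annotations such as confidence.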


Author(s):
Alessia D’Andrea
Fernando Ferri

This chapter describes the changes that mobile devices, such as mobile phones, PDAs, iPods and smartphones, bring to the learning process. The diffusion of these devices has drastically changed learning tools and the environment in which learning takes place. Learning has moved outside the classroom, becoming “mobile.” Mobile learning provides both learners and teachers with the capability to collaborate and share data, knowledge, files, and messages anywhere and at any time. This allows learners and teachers to micro-coordinate activities without limitation of time and space.


Author(s):
Sladjana Tesanovic
Danco Davcev
Vladimir Trajkovik

The multimodal mobile virtual blackboard system is designed for consultations among students and professors. It is built to improve availability and communication using mobile handheld devices. Our system enables different forms of communication: chat, VoIP, drawing, and file exchange. Greater usability on the small screens of mobile devices can be achieved by adapting the features of an application to the specific user's preferences and to the current use of the application. In this chapter, we describe our mobile virtual blackboard consultation system, with special attention to the multimodal design of the user interface using XML agents and fuzzy logic. The general opinion among the participants of the consultations conducted on this mobile system is positive. Participants rate the system as user-friendly, which indicates that our efforts in developing an adaptable user interface can serve as good practice in designing interfaces for mobile devices.
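The chapter does not publish its fuzzy rules, but the kind of adaptation it describes, deciding how prominently to show a feature given screen size and usage, can be sketched with simple fuzzy memberships. The membership ramps, thresholds, and the single rule below are assumptions made up for illustration.

```python
def membership_small(screen_px):
    """Fuzzy degree to which a screen counts as 'small' (assumed linear
    ramp between 300 and 600 px of width)."""
    if screen_px <= 300:
        return 1.0
    if screen_px >= 600:
        return 0.0
    return (600 - screen_px) / 300

def membership_frequent(uses_per_session):
    """Fuzzy degree to which a feature is 'frequently used' (ramp 0..5)."""
    return min(uses_per_session / 5.0, 1.0)

def feature_visibility(screen_px, uses_per_session):
    """Hypothetical rule: a feature is shown prominently if the screen is
    NOT small OR the feature is frequently used (max as fuzzy OR)."""
    not_small = 1.0 - membership_small(screen_px)
    return max(not_small, membership_frequent(uses_per_session))
```

On a 320 px phone, a feature used four times per session scores high visibility from the frequency term alone, which is the essence of preference-driven adaptation on small screens.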


Author(s):
Benoît Encelle
Nadine Baptiste-Jessel
Florence Sèdes

Personalization of user interfaces for browsing content is a key concept in ensuring content accessibility. This personalization is especially needed for people with disabilities (e.g., the visually impaired), for highly mobile individuals (driving, off-screen environments), and/or for people with limited devices (PDAs, mobile phones, etc.). In this direction, we introduce mechanisms, based on a user requirements study, that result in the generation of personalized user interfaces for browsing particular XML content types. These on-the-fly generated user interfaces can use several modalities to increase communication possibilities: in this way, interactions between the user and the system can take place in a more natural manner.
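The core mechanism, mapping XML content types to per-user output modalities, can be sketched as follows. The content document, the profile format, and the `render` helper are all hypothetical stand-ins for the chapter's mechanisms, assuming a profile that names a preferred modality per element type.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML content and a per-user presentation profile mapping
# element types to preferred output modalities.
content = ET.fromstring(
    "<article><title>Forecast</title><body>Sunny, 21 C.</body></article>"
)
profile = {"title": "speech", "body": "braille"}  # e.g. a visually impaired user

def render(element, profile, default="screen"):
    """Walk the XML content and emit one (modality, text) instruction per
    child element, consulting the user's profile for each element type."""
    out = []
    for child in element:
        out.append((profile.get(child.tag, default), child.text))
    return out

instructions = render(content, profile)
```

A generated interface would then hand each instruction to the matching output component (speech synthesizer, braille display, screen), which is what lets the same XML content be browsed through different modalities per user.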


Author(s):
Stefania Pierno
Vladimiro Scotto di Carlo
Massimo Magaldi
Roberto Russo
Gian Luca Supino
...

In this chapter, we describe a grid approach to providing multimodal, context-sensitive social services to mobile users. Interaction design is a major issue for mobile information systems, not only in terms of input-output channels and information presentation, but also in terms of context-awareness. The proposed platform supports the development of multi-channel, multimodal, mobile context-aware applications, and it is described using an example in an emergency management scenario. The platform allows the deployment of services featuring a multimodal (synergic) UI, backed on the server side by a distributed architecture based on a grid approach to better handle the computing load generated by input-channel processing. Since a computational grid provides access to “resources” (typically computing-related ones), we began to apply the same paradigm to the modelling and sharing of other resources as well. This concept is described using a scenario about emergencies and crisis management.


Author(s):
Julie Doyle
Michela Bertolotto
David Wilson

The user interface is of critical importance in applications that provide mapping services. It defines the visualisation and interaction modes for carrying out a variety of mapping tasks, and ease of use is essential to successful user adoption. This is even more evident in a mobile context, where device limitations can hinder usability. In particular, interaction modes such as a pen/stylus are limited and can be quite difficult to use while mobile. Moreover, the majority of GIS interfaces are inherently complex and require significant user training, which can be a serious problem for novice users such as tourists. In this chapter, we review issues in the development of multimodal interfaces for mobile GIS, allowing for two or more modes of input, as an attempt to address interaction complexity in the context of mobile mapping applications. In particular, we review both the benefits and challenges of integrating multimodality into a GIS interface. We describe our multimodal mobile GIS CoMPASS, which helps to address the problem by permitting users to interact with spatial data using a combination of speech and gesture input, effectively providing more intuitive and efficient interaction for mobile mapping applications.
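Combining speech and gesture input of the kind the chapter describes typically requires a fusion step that pairs a spoken command containing a deictic reference ("here") with a pen gesture close to it in time. The sketch below is a generic late-fusion illustration under that assumption, not CoMPASS's actual algorithm; all field names and the skew threshold are invented.

```python
def fuse(speech, gesture, max_skew=1.0):
    """Pair a deictic speech command with a pen gesture if their
    timestamps fall within max_skew seconds of each other."""
    if "here" in speech["text"] and abs(speech["t"] - gesture["t"]) <= max_skew:
        # Resolve the deictic reference to the gestured map point.
        return {"action": speech["text"].split()[0], "point": gesture["point"]}
    return None

speech = {"text": "zoom here", "t": 10.2}
gesture = {"point": (53.3, -6.26), "t": 10.5}  # hypothetical map coordinate
command = fuse(speech, gesture)
```

When the two inputs cannot be paired (no deictic word, or too far apart in time), a real system would fall back to unimodal handling rather than returning nothing.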


Author(s):
Marcus Specht

The following chapter gives an overview of the experiences and design decisions made in the European project RAFT to enable live distributed collaboration between learners in the field and in the classroom. Besides a context analysis to define the requirements for the services needed as an underlying infrastructure, user interface design decisions were essential in the project. As a flexible and powerful approach, a widget-based design for the user interface enabled the project to build clients for a variety of hardware and devices in the learning environment, ranging from mobile phones, PDAs, tablet PCs and desktop computers to electronic whiteboard solutions. Enabling consistent and synchronized access to information streams in such a distributed learning environment can be seen as one essential insight of the described research.

