Advanced multimodal interaction techniques and user interfaces for serious games and virtual environments

Author(s):  
Fotis Liarokapis ◽  
Sebastian von Mammen ◽  
Athanasios Vourvopoulos
Author(s):  
Paula Alexandra Rego ◽  
Pedro Miguel Moreira ◽  
Luís Paulo Reis

Serious games is a field of research that has evolved substantially, with valuable contributions to many application domains. Patients often consider traditional rehabilitation approaches repetitive and boring, making it difficult for them to maintain interest and complete the treatment program. This chapter reviews serious games and the natural, multimodal user interfaces for the health rehabilitation domain. Specifically, it details a framework for the development of serious games that integrates a rich set of features that can be used to improve the designed games, with direct benefits to the rehabilitation process. Highlighted features include natural and multimodal interaction, social skills (collaboration and competitiveness), and progress monitoring. Because of this rich set of features, the games' rehabilitation efficacy can be enhanced, primarily through an increase in the patient's motivation when exercising the rehabilitation tasks. A preliminary test of the framework with elderly users is described.



Gamification ◽  
2015 ◽  
pp. 404-424 ◽  
Author(s):  
Paula Alexandra Rego ◽  
Pedro Miguel Moreira ◽  
Luís Paulo Reis

Serious Games is a field of research that has evolved substantially, with valuable contributions to many application domains. Patients often consider traditional rehabilitation approaches repetitive and boring, making it difficult for them to maintain interest and complete the treatment program. This paper reviews Serious Games and the natural, multimodal user interfaces for the health rehabilitation domain. Specifically, it details a framework for the development of Serious Games that integrates a rich set of features that can be used to improve the designed games, with direct benefits to the rehabilitation process. Highlighted features include natural and multimodal interaction, social skills (collaboration and competitiveness), and progress monitoring. Because of this rich set of features, the games' rehabilitation efficacy can be enhanced, primarily through an increase in the patient's motivation when exercising the rehabilitation tasks.


2021 ◽  
Author(s):  
Florian Roider

Driving a modern car is more than just maneuvering the vehicle on the road. At the same time, drivers want to listen to music, operate the navigation system, compose and read messages, and more. Future cars are turning from simple means of transportation into smart devices on wheels. This trend will continue in the coming years with the advent of automated vehicles. However, technical challenges, legal regulations, and high costs slow the adoption of automated vehicles. For this reason, a great majority of people will still be driving manually for at least the next decade. Consequently, it must be ensured that all the features of novel infotainment systems can be used easily and efficiently, without distracting the driver from the task of driving, while still providing a good user experience. A promising approach to this challenge is multimodal in-car interaction. Multimodal interaction describes the combination of different input and output modalities for driver-vehicle interaction. Research has pointed out its potential to create a more flexible, efficient, and robust interaction. In addition, integrating natural interaction modalities such as speech, gestures, and gaze could make communication with the car more natural. Based on these advantages, the research community in the field of automotive user interfaces has produced several interesting concepts for multimodal interaction in vehicles. The problem is that the resulting insights and recommendations are often not easily applicable in the design process of other concepts, because they are either too concrete or too abstract. At the same time, concepts focus on different aspects: some aim to reduce distraction, while others want to increase efficiency or provide a better user experience. This makes it difficult to give overarching recommendations on how to combine natural input modalities while driving.
As a consequence, interaction designers of in-vehicle systems lack adequate design support that enables them to transfer existing knowledge about the design of multimodal in-vehicle applications to their own concepts. This thesis addresses this gap by providing empirically validated design support for multimodal in-vehicle applications. It starts with a review of existing design support for automotive and multimodal applications. Based on that, we report a series of user experiments that investigate various aspects of multimodal in-vehicle interaction with more than 200 participants in lab setups and driving simulators. During these experiments, we assessed the potential of multimodality while driving, explored how user interfaces can support speech and gestures, and evaluated novel interaction techniques. The insights from these experiments extend existing knowledge from the literature to create the first pattern collection for multimodal natural in-vehicle interaction. The collection contains 15 patterns that describe, in a structured way, solutions for recurring problems when combining natural input with speech, gestures, or gaze in the car. Finally, we present a prototype of an in-vehicle information system that demonstrates the application of the proposed patterns and evaluate it in a driving-simulator experiment. This work contributes to the field of automotive user interfaces in three ways. First, it presents the first pattern collection for multimodal natural in-vehicle interaction. Second, it illustrates and evaluates interaction techniques that combine speech and gestures with gaze input. Third, it provides empirical results of a series of user experiments that show the effects of multimodal natural interaction on factors such as driving performance, glance behavior, interaction efficiency, and user experience.
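To make the idea of combining natural input modalities concrete, the following is a minimal sketch of one classic fusion pattern: resolving a deictic speech command ("open that") against recent gaze data. All names and the 1.5-second window are illustrative assumptions, not the thesis's actual patterns or implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GazeSample:
    timestamp: float           # seconds since session start (assumed clock)
    target: str                # id of the UI element under the gaze point

@dataclass
class SpeechCommand:
    timestamp: float
    verb: str                  # e.g. "open", "play"
    object_ref: Optional[str]  # explicit object, or None for a deictic "that"

def fuse(command: SpeechCommand, gaze_log: list[GazeSample],
         window: float = 1.5) -> Optional[str]:
    """Resolve a deictic speech command against recent gaze samples.

    If the utterance names its object explicitly, use it; otherwise pick
    the most recent gaze target within `window` seconds before the command.
    """
    if command.object_ref is not None:
        return command.object_ref
    candidates = [s for s in gaze_log
                  if 0 <= command.timestamp - s.timestamp <= window]
    if not candidates:
        return None
    return max(candidates, key=lambda s: s.timestamp).target
```

For example, a driver glancing at the map display and saying "open that" would resolve to the map element, while a stale glance outside the window yields no target and the system would fall back to a clarification prompt.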


Author(s):  
Xiaojun Bi ◽  
Andrew Howes ◽  
Per Ola Kristensson ◽  
Antti Oulasvirta ◽  
John Williamson

This chapter introduces the field of computational interaction and traces its long tradition of research on human interaction with technology, drawing on human factors engineering, cognitive modelling, artificial intelligence and machine learning, design optimization, formal methods, and control theory. It discusses how the book as a whole argues that, embedded in an iterative design process, computational interaction design has the potential to complement human strengths and provide a means to generate inspiring and elegant designs without dismissing the part played by the complicated and uncertain behaviour of humans. The chapters in this book manifest intellectual progress in the study of computational principles of interaction, demonstrated in diverse and challenging application areas such as input methods, interaction techniques, graphical user interfaces, information retrieval, information visualization, and graphic design.


2012 ◽  
Vol 21 (1) ◽  
pp. 58-68 ◽  
Author(s):  
Sergio Moya ◽  
Dani Tost ◽  
Sergi Grau

We describe a graphical narrative editor that we have developed for the design of serious games for cognitive neurorehabilitation. The system is addressed to neuropsychologists. It aims to provide them with an easy, user-friendly, and fast way of specifying the therapeutic contents of the rehabilitation tasks that constitute the serious games. The editor takes as input a description of the virtual task environment and the actions allowed inside it. Therapists use it to describe the actions that they expect patients to perform in order to fulfill the goals of the task, as well as the behavior of the game if patients do not reach their goals. The output of the system is a complete description of the task logic. We have designed a 3D game platform that provides the editor with a description of the 3D virtual environments and translates the task description created in the editor into the task logic. The main advantage of the system is that it is fully automatic: it allows therapists to interactively design tasks and immediately validate them by performing them virtually. We describe the design of the two applications and present the results of system testing.
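A task-logic description of the kind this editor emits can be pictured as a small data structure: an ordered list of expected actions on objects in the environment, plus the game's behavior when the patient deviates. The sketch below is hypothetical (the class and field names are not the actual editor's schema), just to illustrate the shape of such an output.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    target_object: str   # object in the virtual environment, e.g. "kettle"
    name: str            # action allowed on that object, e.g. "pick_up"

@dataclass
class TaskLogic:
    goal: list[Action]   # the sequence the therapist expects the patient to perform
    on_error: str = "hint"   # assumed game behaviour when the patient strays
    max_errors: int = 3

def next_expected(task: TaskLogic, done: list[Action]) -> Optional[Action]:
    """Return the next action the patient should perform, or None if the
    task goal has been completed."""
    if len(done) < len(task.goal):
        return task.goal[len(done)]
    return None
```

The game platform would then interpret such a structure at runtime: each patient action is compared against `next_expected`, and a mismatch triggers the configured error behavior (a hint, in this sketch).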


Author(s):  
Derek Brock ◽  
Deborah Hix ◽  
Lynn Dievendorf ◽  
J. Gregory Trafton

Software user interfaces that provide users with more than one device, such as a mouse and keyboard, for interactively performing tasks are now commonplace. Concerns about how to represent individual differences in patterns of use and acquisition of skill in such interfaces led the authors to develop modifications to the standard format of the User Action Notation (UAN) that substantially augment the notation's expressive power. These extensions allow the reader of an interface specification to make meaningful comparisons between functionally equivalent interaction techniques and task-performance strategies in interfaces supporting multiple input devices. Furthermore, they offer researchers a new methodology for analyzing the behavioral aspects of user interfaces. These modifications are documented and their benefits discussed.

