Natural Multimodal Interaction in the Car - Generating Design Support for Speech, Gesture, and Gaze Interaction while Driving

2021
Author(s): Florian Roider
Driving a modern car is more than just maneuvering the vehicle on the road. At the same time, drivers want to listen to music, operate the navigation system, compose and read messages, and more. Future cars are turning from simple means of transportation into smart devices on wheels. This trend will continue in the coming years together with the advent of automated vehicles. However, technical challenges, legal regulations, and high costs slow down the penetration of automated vehicles. For this reason, a great majority of people will still be driving manually at least for the next decade. Consequently, it must be ensured that all the features of novel infotainment systems can be used easily and efficiently, without distracting the driver from the task of driving, while still providing a high-quality user experience. A promising approach to cope with this challenge is multimodal in-car interaction. Multimodal interaction describes the combination of different input and output modalities for driver-vehicle interaction. Research has pointed out its potential to create more flexible, efficient, and robust interaction. In addition, integrating natural interaction modalities such as speech, gestures, and gaze into the communication with the car could increase the naturalness of the interaction. Based on these advantages, the research community in the field of automotive user interfaces has produced several interesting concepts for multimodal interaction in vehicles. The problem is that the resulting insights and recommendations are often not easily applicable in the design process of other concepts because they are either too concrete or too abstract. At the same time, concepts focus on different aspects: some aim to reduce distraction, while others want to increase efficiency or provide a better user experience. This makes it difficult to give overarching recommendations on how to combine natural input modalities while driving. As a consequence, interaction designers of in-vehicle systems lack adequate design support that enables them to transfer existing knowledge about the design of multimodal in-vehicle applications to their own concepts. This thesis addresses this gap by providing empirically validated design support for multimodal in-vehicle applications. It starts with a review of existing design support for automotive and multimodal applications. Based on that, we report a series of user experiments that investigate various aspects of multimodal in-vehicle interaction with more than 200 participants in lab setups and driving simulators. During these experiments, we assessed the potentials of multimodality while driving, explored how user interfaces can support speech and gestures, and evaluated novel interaction techniques. The insights from these experiments extend existing knowledge from the literature to create the first pattern collection for multimodal natural in-vehicle interaction. The collection contains 15 patterns that describe, in a structured way, solutions for recurring problems when combining natural input with speech, gestures, or gaze in the car. Finally, we present a prototype of an in-vehicle information system that demonstrates the application of the proposed patterns, and evaluate it in a driving-simulator experiment. This work contributes to the field of automotive user interfaces in three ways. First, it presents the first pattern collection for multimodal natural in-vehicle interaction.
Second, it illustrates and evaluates interaction techniques that combine speech and gestures with gaze input. Third, it provides empirical results of a series of user experiments that show the effects of multimodal natural interaction on different factors such as driving performance, glance behavior, interaction efficiency, and user experience.

Author(s): Michael Weber, Marc Hermann

This chapter gives an overview of the broad range of interaction techniques for use in ubiquitous computing. It gives a short introduction to the fundamentals of human-computer interaction and the traditional user interfaces, surveys multi-scale output devices, gives a general idea of hands and eyes input, specializes them by merging the virtual and real world, and introduces attention and affection for enhancing the interaction with computers and especially with disappearing computers. The human-computer interaction techniques surveyed here help support Weiser’s idea of ubiquitous computing (1991) and calm technology (Weiser & Brown, 1996), and result in more natural interaction techniques than the use of purely graphical user interfaces. This chapter will thus first introduce the basic principles of human-computer interaction from a cognitive perspective, but aimed at computer scientists. The human-computer interaction cycle brings us to a discussion of input and output devices and their characteristics being used within this cycle. The interrelation of the physical and virtual world as we see it in ubiquitous computing has its predecessors in the domain of virtual and augmented realities, where specific hands and eyes interaction techniques and technologies have been developed. The next step will be attentive and affective user interfaces and the use of tangible objects being manipulated directly without using dedicated I/O devices.


Author(s): Xiaojun Bi, Andrew Howes, Per Ola Kristensson, Antti Oulasvirta, John Williamson

This chapter introduces the field of computational interaction, and explains its long tradition of research on human interaction with technology that applies to human factors engineering, cognitive modelling, artificial intelligence and machine learning, design optimization, formal methods, and control theory. It discusses how the book as a whole is part of an argument that, embedded in an iterative design process, computational interaction design has the potential to complement human strengths and provide a means to generate inspiring and elegant designs without refuting the part played by the complicated and uncertain behaviour of humans. The chapters in this book manifest intellectual progress in the study of computational principles of interaction, demonstrated in diverse and challenging application areas such as input methods, interaction techniques, graphical user interfaces, information retrieval, information visualization, and graphic design.


2021
Author(s): Marius Fechter, Benjamin Schleich, Sandro Wartzack

Virtual and augmented reality allow the utilization of natural user interfaces, such as realistic finger interaction, even for purposes that were previously dominated by the WIMP paradigm. This new form of interaction is particularly suitable for applications involving manipulation tasks in 3D space, such as CAD assembly modeling. The objective of this paper is to evaluate the suitability of natural interaction for CAD assembly modeling in virtual reality. An advantage of natural interaction compared to conventional operation by computer mouse would indicate development potential for user interfaces of current CAD applications. Our approach is based on two main elements. Firstly, a novel natural user interface for realistic finger interaction enables the user to interact with virtual objects similarly to physical ones. Secondly, an algorithm automatically detects constraints between CAD components based solely on their geometry and spatial location. In order to demonstrate the usability of the natural CAD assembly modeling approach in comparison with the assembly procedure in current WIMP-operated CAD software, we present a comparative user study. Results show that the VR method including natural finger interaction significantly outperforms the desktop-based CAD application in terms of efficiency and ease of use.
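As an illustration of the kind of geometry-based constraint detection the abstract describes, the following Python sketch tests whether two cylindrical faces are approximately coaxial and, if so, proposes a concentric mating constraint. It is a minimal sketch under assumed data structures and tolerances; the `CylindricalFace` type, the tolerance values, and the function `suggest_concentric_constraint` are illustrative and are not taken from the paper's actual algorithm.

```python
# Illustrative sketch only: shows one plausible geometric test for proposing
# an assembly constraint from component geometry and spatial location.
from dataclasses import dataclass
import numpy as np

@dataclass
class CylindricalFace:
    axis_point: np.ndarray  # a point on the cylinder axis (world coordinates)
    axis_dir: np.ndarray    # unit vector along the axis
    radius: float

def suggest_concentric_constraint(a: CylindricalFace,
                                  b: CylindricalFace,
                                  angle_tol: float = 1e-2,
                                  dist_tol: float = 1e-3) -> bool:
    """Return True if the two cylindrical faces are close enough to coaxial
    that a concentric assembly constraint can be proposed to the user."""
    # Axes must be parallel or anti-parallel within the angular tolerance.
    cos_angle = abs(float(np.dot(a.axis_dir, b.axis_dir)))
    if cos_angle < np.cos(angle_tol):
        return False
    # The axis point of b must lie close to the axis line of a: measure the
    # perpendicular distance from b's axis point to a's axis.
    offset = b.axis_point - a.axis_point
    perp = offset - float(np.dot(offset, a.axis_dir)) * a.axis_dir
    return float(np.linalg.norm(perp)) <= dist_tol
```

In a VR assembly scenario, such a test could run whenever the tracked hand releases a component near another one, with matching face pairs then snapped together by the proposed constraint.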


Author(s): Andreas Löcken, Anna-Katharina Frison, Vanessa Fahn, Dominik Kreppold, Maximilian Götz, ...

Author(s): Derek Brock, Deborah Hix, Lynn Dievendorf, J. Gregory Trafton

Software user interfaces that provide users with more than one device, such as a mouse and keyboard, for interactively performing tasks are now commonplace. Concerns about how to represent individual differences in patterns of use and acquisition of skill in such interfaces led the authors to develop modifications to the standard format of the User Action Notation (UAN) that substantially augment the notation's expressive power. These extensions allow the reader of an interface specification to make meaningful comparisons between functionally equivalent interaction techniques and task performance strategies in interfaces supporting multiple input devices. Furthermore, they offer researchers a new methodology for analyzing the behavioral aspects of user interfaces. These modifications are documented and their benefits discussed.

