Automotive User Interfaces
Recently Published Documents


TOTAL DOCUMENTS: 52 (five years: 17)

H-INDEX: 7 (five years: 2)

2022 ◽  
Vol 54 (7) ◽  
pp. 1-26
Author(s):  
Michael Braun ◽  
Florian Weber ◽  
Florian Alt

Affective technology offers exciting opportunities to improve road safety by catering to human emotions. Modern car interiors enable the contactless detection of user states, paving the way for a systematic promotion of safe driver behavior through emotion regulation. We review the current literature regarding the impact of emotions on driver behavior and analyze the state of emotion regulation approaches in the car. We summarize challenges for affective interaction in the form of technological hurdles and methodological considerations, as well as opportunities to improve road safety by reinstating drivers into an emotionally balanced state. The purpose of this review is to outline the community’s combined knowledge for interested researchers, to provide a focused introduction for practitioners, to raise awareness of cultural aspects, and to identify future directions for affective interaction in the car.


2021 ◽  
Author(s):  
Justin Edwards ◽  
Philipp Wintersberger ◽  
Leigh Clark ◽  
Daniel Rough ◽  
Philip R Doyle ◽  
...  

2021 ◽  
Author(s):  
Florian Roider

Driving a modern car is more than just maneuvering the vehicle on the road. At the same time, drivers want to listen to music, operate the navigation system, compose and read messages, and more. Future cars are turning from simple means of transportation into smart devices on wheels. This trend will continue in the coming years with the advent of automated vehicles. However, technical challenges, legal regulations, and high costs are slowing the adoption of automated vehicles. For this reason, the great majority of people will still be driving manually for at least the next decade. Consequently, it must be ensured that all the features of novel infotainment systems can be used easily and efficiently, without distracting the driver from the task of driving, while still providing a good user experience. A promising approach to this challenge is multimodal in-car interaction. Multimodal interaction describes the combination of different input and output modalities for driver-vehicle interaction. Research has pointed out its potential to create more flexible, efficient, and robust interaction. In addition, integrating natural interaction modalities such as speech, gestures, and gaze could make communication with the car more natural. Based on these advantages, the research community in the field of automotive user interfaces has produced several interesting concepts for multimodal interaction in vehicles. The problem is that the resulting insights and recommendations are often not easily applicable in the design process of other concepts because they are either too concrete or too abstract. At the same time, concepts focus on different aspects: some aim to reduce distraction, while others want to increase efficiency or provide a better user experience. This makes it difficult to give overarching recommendations on how to combine natural input modalities while driving.
As a consequence, interaction designers of in-vehicle systems lack adequate design support that enables them to transfer existing knowledge about the design of multimodal in-vehicle applications to their own concepts. This thesis addresses this gap by providing empirically validated design support for multimodal in-vehicle applications. It starts with a review of existing design support for automotive and multimodal applications. Based on that, we report a series of user experiments that investigate various aspects of multimodal in-vehicle interaction with more than 200 participants in lab setups and driving simulators. During these experiments, we assessed the potential of multimodality while driving, explored how user interfaces can support speech and gestures, and evaluated novel interaction techniques. The insights from these experiments extend existing knowledge from the literature to create the first pattern collection for multimodal natural in-vehicle interaction. The collection contains 15 patterns that describe, in a structured way, solutions to recurring problems when combining natural input with speech, gestures, or gaze in the car. Finally, we present a prototype of an in-vehicle information system that demonstrates the application of the proposed patterns, and we evaluate it in a driving-simulator experiment. This work contributes to the field of automotive user interfaces in three ways. First, it presents the first pattern collection for multimodal natural in-vehicle interaction. Second, it illustrates and evaluates interaction techniques that combine speech and gestures with gaze input. Third, it provides empirical results from a series of user experiments that show the effects of multimodal natural interaction on factors such as driving performance, glance behavior, interaction efficiency, and user experience.

