Increasing User Experience and Trust in Automated Vehicles via an Ambient Light Display

Author(s):  
Andreas Löcken ◽  
Anna-Katharina Frison ◽  
Vanessa Fahn ◽  
Dominik Kreppold ◽  
Maximilian Götz ◽  
...  
2021 ◽  
Vol 3 ◽  
Author(s):  
Franziska Hartwich ◽  
Cornelia Hollander ◽  
Daniela Johannmeyer ◽  
Josef F. Krems

Automated vehicles promise transformational benefits for future mobility systems, but only if they are used regularly. However, due to the associated loss of control and the fundamental change of the in-vehicle user experience (shifting from an active driver to a passive passenger experience), many people have reservations about driving automation, which call their sufficient usage and market penetration into question. These reservations vary based on individual characteristics such as initial attitudes. User-adaptive in-vehicle Human-Machine Interfaces (HMIs) that meet varying user requirements may therefore be an important component of higher-level automated vehicles, providing a pleasant and trustworthy passenger experience despite these barriers. In a driving simulator study, we evaluated the effects of two HMI versions (with permanent vs. context-adaptive information availability) on the passenger experience (perceived safety, understanding of driving behavior, driving comfort, driving enjoyment) and trust in automated vehicles of 50 first-time users with varying initial trust (lower vs. higher trust group). Additionally, we compared the user experience of both HMIs. Presenting driving-related information via the HMI during driving improved all assessed aspects of passenger experience and trust. The higher trust group experienced automated driving as safest, most understandable, and most comfortable with the context-adaptive HMI, while the lower trust group tended to experience the highest safety, understanding, and comfort with the permanent HMI. Both HMIs received positive user experience ratings. The context-adaptive HMI generally received more positive ratings, with this preference being more pronounced in the higher trust group. The results demonstrate the potential of increasing the system transparency of higher-level automated vehicles through HMIs to enhance users’ passenger experience and trust. They also consolidate previous findings on varying user requirements based on individual characteristics. User group-specific HMI effects on passenger experience support the relevance of user-adaptive HMI concepts that address the varying needs of different users by customizing HMI features such as information availability. Consequently, permanently providing full information cannot be recommended as a universal standard for HMIs in automated vehicles. These insights represent next steps toward a pleasant and trustworthy passenger experience in higher-level automated vehicles for everyone, supporting their market acceptance and thus the realization of their expected benefits for future mobility and society.
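The abstract contrasts permanent and context-adaptive information availability without detailing either implementation. Purely as an illustration (not taken from the paper; names such as DrivingContext and select_information are hypothetical), the following sketch shows how an HMI could gate driving-related information elements on the current driving context in the adaptive mode while always showing everything in the permanent mode.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical information elements an automated-driving HMI might show.
ALL_ELEMENTS = ["detected_objects", "planned_maneuver", "speed_and_limit", "system_status"]

@dataclass
class DrivingContext:
    """Simplified snapshot of the driving situation (illustrative only)."""
    maneuver_upcoming: bool   # e.g. a lane change or turn planned within a few seconds
    dense_traffic: bool       # many surrounding road users detected

def select_information(mode: str, ctx: DrivingContext) -> List[str]:
    """Return the information elements to display for the given HMI mode."""
    if mode == "permanent":
        # Permanent HMI: full information is always available.
        return ALL_ELEMENTS
    # Context-adaptive HMI: show details only when the situation calls for them.
    elements = ["system_status"]  # always-on baseline
    if ctx.maneuver_upcoming:
        elements += ["planned_maneuver", "speed_and_limit"]
    if ctx.dense_traffic:
        elements.append("detected_objects")
    return elements

if __name__ == "__main__":
    calm = DrivingContext(maneuver_upcoming=False, dense_traffic=False)
    busy = DrivingContext(maneuver_upcoming=True, dense_traffic=True)
    print(select_information("permanent", calm))         # all four elements
    print(select_information("context-adaptive", calm))  # ['system_status']
    print(select_information("context-adaptive", busy))  # baseline plus maneuver and traffic info
```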


Information ◽  
2021 ◽  
Vol 12 (4) ◽  
pp. 176
Author(s):  
Rebecca Hainich ◽  
Uwe Drewitz ◽  
Klas Ihme ◽  
Jan Lauermann ◽  
Mathias Niedling ◽  
...  

Motion sickness (MS) is a syndrome associated with symptoms such as nausea, dizziness, and other forms of physical discomfort. Automated vehicles (AVs) have a strong potential to induce MS because users are not adapted to this novel form of transportation, receive less information about their vehicle’s trajectory, and are likely to engage in non-driving related tasks. Because individuals with an especially high MS susceptibility could be limited in their use of AVs, the demand for MS mitigation strategies is high. Passenger anticipation has been shown to have a modulating effect on symptoms, thus mitigating MS. To find an effective mitigation strategy, a prototype of a human–machine interface (HMI) that presents the passenger with anticipatory ambient light cues for the AV’s next turn was evaluated. In a realistic driving study with participants (N = 16) in an AV on a test track, the MS mitigation effect was assessed based on the increase in MS during the trial. A mitigation effect was found within a highly susceptible subsample through the presentation of anticipatory ambient light cues. The HMI prototype thus proved effective for highly susceptible users. Future iterations could alleviate MS in field settings and improve the acceptance of AVs.
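The abstract describes anticipatory ambient light cues for the AV’s next turn without specifying the cue logic. As a hedged illustration only (the lead time, strip names, and functions below are assumptions, not details from the paper), a simple controller might activate a left- or right-hand light strip a fixed number of seconds before the planned maneuver:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PlannedManeuver:
    """Next maneuver on the AV's planned trajectory (illustrative)."""
    direction: str          # "left" or "right"
    time_to_maneuver_s: float

# Assumed lead time for the anticipatory cue; the actual study value is not given here.
CUE_LEAD_TIME_S = 5.0

def ambient_cue(maneuver: Optional[PlannedManeuver]) -> Optional[str]:
    """Return which light strip to activate, or None if no cue should be shown."""
    if maneuver is None:
        return None
    if maneuver.time_to_maneuver_s <= CUE_LEAD_TIME_S:
        # Illuminate the strip on the side of the upcoming turn so the passenger
        # can anticipate the lateral motion before it happens.
        return f"{maneuver.direction}_light_strip_on"
    return None

if __name__ == "__main__":
    print(ambient_cue(PlannedManeuver("left", 12.0)))  # None: turn still far away
    print(ambient_cue(PlannedManeuver("left", 4.0)))   # left_light_strip_on
    print(ambient_cue(None))                           # None: no maneuver planned
```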


i-com ◽  
2019 ◽  
Vol 18 (2) ◽  
pp. 127-149 ◽  
Author(s):  
Andreas Riegler ◽  
Philipp Wintersberger ◽  
Andreas Riener ◽  
Clemens Holzmann

Increasing vehicle automation presents challenges as drivers of highly automated vehicles become more disengaged from the primary driving task. However, even with fully automated driving, there will still be activities that require interfaces for vehicle-passenger interactions. Windshield displays are a technology with promising potential for automated driving, as they provide large content areas that support drivers in non-driving related activities. However, it is still unknown how potential drivers or passengers would use these displays. This work addresses user preferences for windshield displays in automated driving. Participants of a user study (N = 63) were presented with two levels of automation (conditional and full) and could freely choose preferred positions, content types, as well as sizes, transparency levels, and importance levels of content windows on a simulated “ideal” windshield display. We visualized the results as heatmaps, which show that user preferences differ with respect to the level of automation, age, gender, and environmental aspects. These insights can help designers of interiors and in-vehicle applications to provide a rich user experience in highly automated vehicles.
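The abstract reports heatmaps of freely placed content windows on the windshield display but does not describe how they were computed. As a minimal sketch under assumed names (a normalized windshield grid and rectangular window placements; none of this is taken from the paper), per-cell counts over all participants can be accumulated like this:

```python
from dataclasses import dataclass
from typing import List

GRID_W, GRID_H = 32, 12  # assumed resolution of the windshield heatmap grid

@dataclass
class WindowPlacement:
    """A participant's content window in normalized windshield coordinates (0..1)."""
    x: float
    y: float
    width: float
    height: float

def build_heatmap(placements: List[WindowPlacement]) -> List[List[int]]:
    """Count, per grid cell, how many placed windows cover that cell."""
    heat = [[0] * GRID_W for _ in range(GRID_H)]
    for p in placements:
        x0 = int(p.x * GRID_W)
        y0 = int(p.y * GRID_H)
        x1 = min(GRID_W, int((p.x + p.width) * GRID_W) + 1)
        y1 = min(GRID_H, int((p.y + p.height) * GRID_H) + 1)
        for row in range(y0, y1):
            for col in range(x0, x1):
                heat[row][col] += 1
    return heat

if __name__ == "__main__":
    sample = [WindowPlacement(0.1, 0.6, 0.3, 0.2), WindowPlacement(0.2, 0.65, 0.3, 0.2)]
    heatmap = build_heatmap(sample)
    print(max(max(row) for row in heatmap))  # 2 where the two windows overlap
```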


2021 ◽  
Author(s):  
Florian Roider

Driving a modern car is more than just maneuvering the vehicle on the road. At the same time, drivers want to listen to music, operate the navigation system, compose and read messages, and more. Future cars are turning from simple means of transportation into smart devices on wheels. This trend will continue in the coming years with the advent of automated vehicles. However, technical challenges, legal regulations, and high costs are slowing down the penetration of automated vehicles. For this reason, the great majority of people will still be driving manually for at least the next decade. Consequently, it must be ensured that all features of novel infotainment systems can be used easily and efficiently, without distracting the driver from the driving task, while still providing a good user experience.

A promising approach to this challenge is multimodal in-car interaction. Multimodal interaction describes the combination of different input and output modalities for driver-vehicle interaction. Research has pointed out its potential to create more flexible, efficient, and robust interaction. In addition, integrating natural interaction modalities such as speech, gestures, and gaze into the communication with the car could increase the naturalness of the interaction. Based on these advantages, the research community in the field of automotive user interfaces has produced several interesting concepts for multimodal interaction in vehicles. The problem is that the resulting insights and recommendations are often not easily applicable in the design process of other concepts because they are either too concrete or too abstract. At the same time, the concepts focus on different aspects: some aim to reduce distraction, while others want to increase efficiency or provide a better user experience. This makes it difficult to give overarching recommendations on how to combine natural input modalities while driving. As a consequence, interaction designers of in-vehicle systems lack adequate design support that enables them to transfer existing knowledge about the design of multimodal in-vehicle applications to their own concepts.

This thesis addresses this gap by providing empirically validated design support for multimodal in-vehicle applications. It starts with a review of existing design support for automotive and multimodal applications. Building on this, we report a series of user experiments that investigate various aspects of multimodal in-vehicle interaction with more than 200 participants in lab setups and driving simulators. During these experiments, we assessed the potential of multimodality while driving, explored how user interfaces can support speech and gestures, and evaluated novel interaction techniques. The insights from these experiments extend existing knowledge from the literature to create the first pattern collection for multimodal natural in-vehicle interaction. The collection contains 15 patterns that describe, in a structured way, solutions for recurring problems when combining natural input with speech, gestures, or gaze in the car. Finally, we present a prototype of an in-vehicle information system that demonstrates the application of the proposed patterns and evaluate it in a driving-simulator experiment.

This work contributes to the field of automotive user interfaces in three ways. First, it presents the first pattern collection for multimodal natural in-vehicle interaction. Second, it illustrates and evaluates interaction techniques that combine speech and gestures with gaze input. Third, it provides empirical results of a series of user experiments that show the effects of multimodal natural interaction on factors such as driving performance, glance behavior, interaction efficiency, and user experience.
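The abstract mentions interaction techniques that combine speech and gestures with gaze input but, as an abstract, gives no concrete mechanism. The following sketch is only one plausible illustration of such a combination (a hypothetical "look to select, speak to act" fusion; the class and event names are invented here, not taken from the thesis): the most recent gaze fixation resolves the target of a spoken command.

```python
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class GazeFixation:
    target_id: str     # e.g. "nav_map", "media_player"
    timestamp: float

@dataclass
class MultimodalFusion:
    """Resolve spoken commands against the most recent gaze target (illustrative)."""
    gaze_timeout_s: float = 2.0
    last_fixation: Optional[GazeFixation] = None

    def on_gaze(self, target_id: str) -> None:
        """Record the object the driver is currently looking at."""
        self.last_fixation = GazeFixation(target_id, time.monotonic())

    def on_speech(self, command: str) -> str:
        """Combine the command with the gaze target if the fixation is recent enough."""
        now = time.monotonic()
        if self.last_fixation and now - self.last_fixation.timestamp <= self.gaze_timeout_s:
            return f"{command} -> {self.last_fixation.target_id}"
        return f"{command} -> no gaze target (ask for clarification)"

if __name__ == "__main__":
    fusion = MultimodalFusion()
    fusion.on_gaze("media_player")
    print(fusion.on_speech("turn this up"))  # resolved against media_player
    time.sleep(2.1)
    print(fusion.on_speech("zoom in"))       # fixation too old: clarification needed
```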


2011 ◽  
Author(s):  
Christina Harrington ◽  
Sharon Joines
