Using eye-tracking to investigate the effects of pre-takeover visual engagement on situation awareness during automated driving

2021 ◽ Vol 157 ◽ pp. 106143
Author(s): Nade Liang, Jing Yang, Denny Yu, Kwaku O. Prakah-Asante, Reates Curry, ...
Information ◽ 2021 ◽ Vol 12 (4) ◽ pp. 162
Author(s): Soyeon Kim, René van Egmond, Riender Happee

In automated driving, the user interface plays an essential role in guiding transitions between automated and manual driving. This literature review identified 25 studies that explicitly examined the effectiveness of user interfaces in automated driving. Our main selection criterion was how the user interface (UI) affected take-over performance at higher automation levels that allow drivers to take their eyes off the road (SAE Level 3 and SAE Level 4). We categorized UI factors from an automated-vehicle information perspective. Short take-over times are consistently associated with take-over requests (TORs) issued through the auditory modality at high urgency levels. In contrast, TORs displayed directly on non-driving-related task devices or through augmented reality do not affect take-over time. Additional explanation of the take-over situation, surrounding and vehicle information presented while driving, and take-over guidance information were found to improve situational awareness. Hence, we conclude that advanced user interfaces can enhance the safety and acceptance of automated driving. Most studies showed positive effects of advanced UIs, but a number showed no significant benefits, and a few showed negative effects, which may be associated with information overload. The occurrence of both positive and negative results for similar UI concepts across studies highlights the need for systematic UI testing across driving conditions and driver characteristics. We therefore propose that future UI studies of automated vehicles focus on trust calibration and on enhancing situation awareness in various scenarios.


Author(s): HyunJoo Park, HyunJae Park, Sang-Hwan Kim

In conditional automated driving, drivers may be required to resume manual driving from automated mode after a take-over request (TOR). The objective of this study was to investigate different TOR features that help drivers engage in manual driving effectively, in terms of reaction time, preference, and situation awareness (SA). Five TOR features, including four that used a countdown, were designed and evaluated, each consisting of a different combination of modalities and codes. Results revealed that a non-verbal sound cue (beep) yielded shorter reaction times, while participants preferred a verbal sound cue (speech). Drivers' SA did not differ across TOR features, but the level of SA varied across its different aspects. The results may provide insights into designing multimodal TORs informed by drivers' behavior during take-over tasks.


Sensors ◽ 2021 ◽ Vol 22 (1) ◽ pp. 42
Author(s): Lichao Yang, Mahdi Babayi Semiromi, Yang Xing, Chen Lv, James Brighton, ...

In conditionally automated driving, engagement in non-driving activities (NDAs) can be regarded as the main factor affecting the driver's take-over performance, and investigating it is of great importance to the design of an intelligent human–machine interface for a safe and smooth control transition. This paper introduces a 3D convolutional neural network-based system that recognizes six types of driver behaviour (four types of NDAs and two types of driving activities) from two video feeds capturing head and hand movement. Based on the driver–object interaction, the selected NDAs are divided into an active mode and a passive mode. The proposed recognition system achieves 85.87% accuracy in classifying the six activities. The impact of NDAs on the driver's situation awareness and take-over quality is further investigated in terms of both activity type and interaction mode. The results show that, at a similar level of maximum lateral error, engagement in NDAs demands more time for drivers to accomplish the control transition, especially for active-mode NDAs, which are more mentally demanding and reduce drivers' sensitivity to changes in the driving situation. Moreover, haptic feedback torque from the steering wheel could help reduce the duration of the transition process, suggesting a productive assistance system for the take-over process.
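The spatiotemporal filtering that underlies a 3D CNN like the one described can be illustrated with a minimal sketch. The `conv3d_valid` helper, clip size, and hand-crafted kernel below are hypothetical illustrations, not the authors' architecture, which stacks many learned multi-channel kernels:

```python
import numpy as np

def conv3d_valid(clip, kernel):
    """Single-channel 'valid' 3D convolution over a (T, H, W) video clip.
    Illustrates the spatiotemporal filtering one 3D-CNN layer performs;
    real networks learn the kernel weights from data."""
    t, h, w = clip.shape
    kt, kh, kw = kernel.shape
    out = np.zeros((t - kt + 1, h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(clip[i:i+kt, j:j+kh, k:k+kw] * kernel)
    return out

# Toy example: an 8-frame 16x16 clip and a 3x3x3 temporal-difference kernel
clip = np.random.rand(8, 16, 16)
kernel = np.zeros((3, 3, 3))
kernel[0, 1, 1], kernel[2, 1, 1] = -1.0, 1.0  # responds to change over time
features = conv3d_valid(clip, kernel)
print(features.shape)  # (6, 14, 14)
```

A temporal-difference kernel like this responds to motion between frames, which is the kind of cue that could separate active NDAs (e.g., manipulating an object) from passive ones, although the paper's learned filters are far richer.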


Author(s): Frederik Schewe, Hao Cheng, Alexander Hafner, Monika Sester, Mark Vollrath

We tested whether head movements under automated driving can be used to classify a vehicle occupant as either situation-aware or unaware. While manually cornering, an active driver's head tilt correlates with the road angle, which serves as a visual reference, whereas an inactive passenger's head follows the g-forces. Transferred to partial/conditional automation, the question arises whether aware occupants' head movements are comparable to drivers' and whether this can be used for classification. In a driving-simulator study (n = 43, within-subject design), four scenarios were used to generate or deteriorate situation awareness (manipulation checked). Recurrent neural networks were trained on the resulting head movements. Inferential statistics were used to extract the discriminating feature, ensuring explainability. A very accurate classification was achieved, and the mean side rotation rate was identified as the most differentiating factor: aware occupants behave more like drivers. Therefore, head movements can be used to classify situation awareness not only in experimental settings but also in real driving.
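Since the mean side (yaw) rotation rate emerged as the discriminating feature, the classification idea can be sketched without the recurrent networks. The threshold, sampling interval, and function names below are illustrative assumptions, not values reported in the study:

```python
import numpy as np

def mean_side_rotation_rate(yaw_angles, dt=0.02):
    """Mean absolute yaw (side) rotation rate of a head-angle trace, in deg/s,
    from samples taken every dt seconds."""
    return np.mean(np.abs(np.diff(yaw_angles))) / dt

def classify_awareness(yaw_angles, threshold=5.0, dt=0.02):
    """Label an occupant 'aware' when the head actively scans like a driver's
    (higher side rotation rate), 'unaware' otherwise. The 5 deg/s threshold is
    made up for illustration; the study learned the boundary with RNNs."""
    return "aware" if mean_side_rotation_rate(yaw_angles, dt) > threshold else "unaware"

# Toy traces sampled at 50 Hz: deliberate scanning vs. passive drift
t = np.linspace(0, 2, 100)
active = 10 * np.sin(2 * np.pi * 1.0 * t)    # large, deliberate head scans
passive = 0.5 * np.sin(2 * np.pi * 0.3 * t)  # small drift, head follows the car
print(classify_awareness(active), classify_awareness(passive))  # aware unaware
```

A fixed threshold keeps the decision explainable, mirroring the paper's point that inference statistics can expose which feature the trained networks actually rely on.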

