The influence of time-criticality on Situation Awareness when retrieving human control after automated driving

Author(s):  
Arie P. van den Beukel ◽  
Mascha C. van der Voort


Information ◽  
2021 ◽  
Vol 12 (4) ◽  
pp. 162
Author(s):  
Soyeon Kim ◽  
René van Egmond ◽  
Riender Happee

In automated driving, the user interface plays an essential role in guiding transitions between automated and manual driving. This literature review identified 25 studies that explicitly examined the effectiveness of user interfaces in automated driving. Our main selection criterion was how the user interface (UI) affected take-over performance at higher automation levels that allow drivers to take their eyes off the road (SAE Levels 3 and 4). We categorized UI factors from an automated vehicle-related information perspective. Short take-over times are consistently associated with take-over requests (TORs) initiated by the auditory modality with high urgency levels. On the other hand, take-over requests displayed directly on non-driving-related task devices and augmented reality do not affect take-over time. Additional explanations of the take-over situation, surrounding and vehicle information presented while driving, and take-over guidance information were found to improve situation awareness. Hence, we conclude that advanced user interfaces can enhance the safety and acceptance of automated driving. Most studies showed positive effects of advanced UI, but a number of studies showed no significant benefits, and a few studies showed negative effects, which may be associated with information overload. The occurrence of both positive and negative results for similar UI concepts in different studies highlights the need for systematic UI testing across driving conditions and driver characteristics. Based on these findings, we propose that future UI studies of automated vehicles focus on trust calibration and on enhancing situation awareness in various scenarios.


Author(s):  
HyunJoo Park ◽  
HyunJae Park ◽  
Sang-Hwan Kim

In conditional automated driving, drivers may be required to resume manual driving from automated driving mode after a take-over request (TOR). The objective of this study was to investigate different TOR features that help drivers engage in manual driving effectively, in terms of reaction time, preference, and situation awareness (SA). Five TOR features, including four that used a countdown, were designed and evaluated, each consisting of a combination of different modalities and codes. Results revealed that a non-verbal sound cue (beep) yielded shorter reaction times, while participants preferred a verbal sound cue (speech). Drivers' SA did not differ across TOR features, but the level of SA varied across the different aspects of SA measured. The results may provide insights into the design of multimodal TORs and into drivers' behavior during take-over tasks.


Sensors ◽  
2021 ◽  
Vol 22 (1) ◽  
pp. 42
Author(s):  
Lichao Yang ◽  
Mahdi Babayi Semiromi ◽  
Yang Xing ◽  
Chen Lv ◽  
James Brighton ◽  
...  

In conditionally automated driving, the engagement of non-driving activities (NDAs) can be regarded as the main factor affecting the driver’s take-over performance, and its investigation is of great importance to the design of an intelligent human–machine interface for a safe and smooth control transition. This paper introduces a 3D convolutional neural network-based system that recognizes six types of driver behaviour (four types of NDAs and two types of driving activities) from two video feeds capturing head and hand movement. Based on the interaction between driver and object, the selected NDAs are divided into an active mode and a passive mode. The proposed recognition system achieves 85.87% accuracy in classifying the six activities. The impact of NDAs on the driver’s situation awareness and take-over quality is further investigated in terms of both activity type and interaction mode. The results show that, at a similar level of maximum lateral error, engagement in NDAs demands more time for drivers to accomplish the control transition, especially for active-mode NDAs, which are more mentally demanding and reduce drivers’ sensitivity to changes in the driving situation. Moreover, haptic feedback torque from the steering wheel could help shorten the transition process, making it a useful assistance mechanism for the take-over process.
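
As a rough illustration of the architecture described above, the sketch below defines a small 3D convolutional classifier over short video clips, with simple late fusion of the head-view and hand-view cameras. PyTorch is assumed; the layer sizes, clip length, input resolution, and class labels are illustrative and not taken from the paper.

```python
# Minimal sketch of a 3D-CNN driver-activity classifier over short video clips.
# Assumes PyTorch; layer sizes, clip length, and class names are illustrative,
# not the paper's actual model.
import torch
import torch.nn as nn

ACTIVITIES = ["drive", "monitor", "phone", "read", "eat", "talk"]  # hypothetical labels

class DriverActivity3DCNN(nn.Module):
    def __init__(self, num_classes: int = len(ACTIVITIES)):
        super().__init__()
        self.features = nn.Sequential(
            # input: (batch, 3 RGB channels, 16 frames, 112, 112)
            nn.Conv3d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(kernel_size=2),
            nn.Conv3d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global spatio-temporal pooling
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        x = self.features(clip).flatten(1)
        return self.classifier(x)

# One clip per camera (head view, hand view); a simple late fusion averages the logits.
model = DriverActivity3DCNN()
head_clip = torch.randn(1, 3, 16, 112, 112)
hand_clip = torch.randn(1, 3, 16, 112, 112)
logits = (model(head_clip) + model(hand_clip)) / 2
print(ACTIVITIES[logits.argmax(dim=1).item()])
```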


Author(s):  
Frederik Schewe ◽  
Hao Cheng ◽  
Alexander Hafner ◽  
Monika Sester ◽  
Mark Vollrath

We tested whether head movements during automated driving can be used to classify a vehicle occupant as either situation-aware or unaware. While manually cornering, an active driver’s head tilt correlates with the road angle, which serves as a visual reference, whereas an inactive passenger’s head follows the g-forces. Transferred to partial/conditional automation, the question arises whether aware occupants’ head movements are comparable to those of drivers and whether this can be used for classification. In a driving simulator study (n = 43, within-subject design), four scenarios were used to build up or degrade situation awareness (verified with a manipulation check). Recurrent neural networks were trained on the resulting head movements. Inferential statistics were used to extract the discriminating feature, ensuring explainability. Classification was highly accurate, and the mean side rotation rate was identified as the most discriminating factor. Aware occupants behave more like drivers. Therefore, head movements can be used to classify situation awareness not only in experimental settings but also in real driving.
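
As an illustration of the classification approach described above, the sketch below defines a small recurrent (GRU) classifier over head-movement time series. PyTorch is assumed; the per-frame features (head rotation rates), sequence length, and network size are illustrative and not the authors' exact model.

```python
# Minimal sketch of a recurrent classifier for head-movement time series
# (aware vs. unaware occupant). Assumes PyTorch; input features and layer
# sizes are illustrative, not the paper's model.
import torch
import torch.nn as nn

class HeadMovementGRU(nn.Module):
    def __init__(self, n_features: int = 3, hidden: int = 64):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # classes: 0 = unaware, 1 = aware

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        # seq: (batch, time, n_features), e.g. roll/pitch/yaw rotation rates
        _, last_hidden = self.rnn(seq)
        return self.head(last_hidden[-1])

model = HeadMovementGRU()
sample = torch.randn(8, 200, 3)        # 8 sequences of 200 time steps
probs = model(sample).softmax(dim=1)   # P(unaware), P(aware) per sequence
print(probs.shape)                     # torch.Size([8, 2])
```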


2011 ◽  
Vol 5 (2) ◽  
pp. 186-208 ◽  
Author(s):  
Michael Lewis ◽  
Huadong Wang ◽  
Shih Yi Chien ◽  
Prasanna Velagapudi ◽  
Paul Scerri ◽  
...  

The authors are developing a theory of human control of robot teams based on how control difficulty grows with team size. Current work focuses on domains, such as foraging, in which robots perform largely independent tasks. Such tasks are particularly amenable to analysis because effects on performance and cognitive resources are predicted to be additive, and tasks can safely be allocated across operators because of their independence. The present study addresses the interaction between automation and the organization of human teams in controlling large robot teams performing an urban search-and-rescue (USAR) task. Two possible ways to organize operators were identified: assigning individual robots to specific operators (assigned robots) or having operators service robots from a shared pool as needed (shared pool). The experiment compared two-person operator teams controlling 12 robots each in the assigned-robots condition or sharing control of 24 robots in the shared-pool condition, using either waypoint control (manual condition) or autonomous path planning (autonomy condition). Automating path planning improved system performance, but process measures suggest it may weaken situation awareness.
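
To make the two operator organizations concrete, the sketch below contrasts the assigned-robots and shared-pool conditions in a minimal form. The names, class layout, and the "needs attention" heuristic are hypothetical illustrations, not taken from the study.

```python
# Minimal sketch contrasting the two operator organizations from the study:
# assigned robots (fixed partition) vs. shared pool (service whichever robot
# next needs attention). Names and the heuristic are hypothetical.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Robot:
    robot_id: int
    needs_attention: bool = False

@dataclass
class Operator:
    name: str
    assigned: list[Robot] = field(default_factory=list)

def assign_robots(operators: list[Operator], robots: list[Robot]) -> None:
    """Assigned-robots condition: partition robots evenly; each operator only services their subset."""
    for i, robot in enumerate(robots):
        operators[i % len(operators)].assigned.append(robot)

def next_from_shared_pool(robots: list[Robot]) -> Robot | None:
    """Shared-pool condition: any free operator services the next robot flagged as needing attention."""
    return next((r for r in robots if r.needs_attention), None)

robots = [Robot(i) for i in range(24)]
ops = [Operator("A"), Operator("B")]
assign_robots(ops, robots)            # 12 robots per operator, as in the assigned condition
robots[5].needs_attention = True
print(next_from_shared_pool(robots))  # shared-pool condition picks Robot(robot_id=5, ...)
```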

