Evaluation of Driver Drowsiness While Using Automated Driving Systems on Driving Simulator, Test Course and Public Roads

Author(s): Toshihisa Sato, Yuji Takeda, Motoyuki Akamatsu, Satoshi Kitazaki

2018, Vol 1 (3), pp. 99-106
Author(s): Ryuichi Umeno, Makoto Itoh, Satoshi Kitazaki

Purpose: Level 3 automated driving, as defined by the Society of Automotive Engineers, may cause driver drowsiness or a lack of situation awareness, which can make it difficult for the driver to recognize where he/she is. The purpose of this study was therefore to conduct an experiment in a driving simulator to investigate whether automated driving affects the driver's own localization compared to manual driving. Design/methodology/approach: Seventeen drivers were divided into an automated operation group and a manual operation group. Drivers in each group were instructed to travel along an expressway and proceed to specified destinations. The automated operation group had to select a course after receiving a Request to Intervene (RtI) from the automated driving system. Findings: Drivers who used the automated operation system tended not to take over the driving operation correctly when a lane change was required immediately after the RtI. Originality/value: This is fundamental research examining how automated driving operation affects the driver's own localization. The experimental results suggest that it is not enough to simply issue an RtI; it is also necessary to tell the driver, through the HMI, what circumstances he/she is in and what he/she should do next. This conclusion can be taken into consideration by engineers who design automated driving vehicles.


Author(s): Wyatt McManus, Jing Chen

Modern surface transportation vehicles often include different levels of automation. Higher automation levels have the potential to impact surface transportation in unforeseen ways. For example, connected vehicles with higher levels of automation are at a higher risk for hacking attempts, because automated driving assistance systems often rely on onboard sensors and internet connectivity (Amoozadeh et al., 2015). As the automation level of vehicle control rises, it is necessary to examine the effect different levels of automation have on driver-vehicle interactions. While research into the effect of automation level on driver-vehicle interactions is growing, research into how automation level affects drivers' responses to vehicle hacking attempts is very limited. In addition, auditory warnings have been shown to effectively attract a driver's attention while performing a driving task, which is often visually demanding (Baldwin, 2011; Petermeijer, Doubek, & de Winter, 2017). An auditory warning can be either speech-based, containing semantic information (e.g., "car in blind spot"), or non-semantic (e.g., a tone, or an earcon), and these can influence driver behaviors differently (Sabic, Mishler, Chen, & Hu, 2017). The purpose of the current study was to examine the effect of level of automation and warning type on driver responses to novel critical events, using vehicle hacking attempts as a concrete example, in a driving simulator. The current study compared how level of automation (manual vs. automated) and warning type (non-semantic vs. semantic) affected drivers' responses to a vehicle hacking attempt using time to collision (TTC) values, maximum steering wheel angle, number of successful responses, and other measures of response. A full factorial between-subjects design with the two factors yielded four conditions (Manual Semantic, Manual Non-Semantic, Automated Semantic, and Automated Non-Semantic). Seventy-two participants recruited using SONA (odupsychology.sona-systems.com) completed two simulated drives to school in a driving simulator. The first drive ended with the participant safely arriving at school. A two-second warning was presented to the participants three quarters of the way through the second drive and was immediately followed by a simulated vehicle hacking attempt. The warning either stated "Danger, hacking attempt incoming" in the semantic conditions or was a 500 Hz sine tone in the non-semantic conditions. The hacking attempt lasted five seconds before simulating a crash into a vehicle and ending the simulation if no intervention by the driver occurred. Our results revealed no significant effect of level of automation or warning type on TTC or successful response rate. However, there was a significant effect of level of automation on maximum steering wheel angle, a measure of response quality (Shen & Neyens, 2017), such that manual drivers had safer responses to the hacking attempt, with smaller maximum steering wheel angles. In addition, an effect of warning type that approached significance was also found for maximum steering wheel angle, such that participants who received a semantic warning had more severe and dangerous responses to the hacking attempt. The TTC and successful response results from the current experiment do not match those in the previous literature. The null results were potentially due to the warning implementation time and the complexity of the vehicle hacking attempt.
In contrast, the maximum steering wheel angle results indicated that level of automation and warning type affected the safety and severity of the participants' responses to the vehicle hacking attempt. This suggests that both factors may influence responses to hacking attempts in some capacity. Further research will be required to determine whether level of automation and warning type affect participants' ability to safely respond to vehicle hacking attempts. Acknowledgments: We are grateful to Scott Mishler for his assistance with STISIM programming and to Faye Wakefield, Hannah Smith, and Pettie Perkins for their assistance in data collection.
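As a rough illustration of the two vehicle-based response measures named above, the sketch below computes time to collision and the post-warning maximum steering wheel angle from logged simulator data. The function and variable names are illustrative assumptions, not the study's actual analysis code.

```python
import numpy as np

def time_to_collision(gap_m, ego_speed_mps, lead_speed_mps):
    """TTC (s): remaining gap divided by the closing speed.
    Returns infinity when the ego vehicle is not closing on the lead vehicle."""
    closing_speed = ego_speed_mps - lead_speed_mps
    return gap_m / closing_speed if closing_speed > 0 else np.inf

def max_steering_angle(steering_deg, t_s, t_warning_s, window_s=5.0):
    """Largest absolute steering-wheel deflection (deg) in the response
    window following the warning onset (here, the 5 s hacking attempt)."""
    mask = (t_s >= t_warning_s) & (t_s <= t_warning_s + window_s)
    return float(np.max(np.abs(steering_deg[mask])))
```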


2020
Author(s): Tyron Louw, Rafael Goncalves, Guilhermina Torrao, Vishnu Radhakrishnan, Wei Lyu, ...

There is evidence that drivers' behaviour adapts after using different advanced driving assistance systems. For instance, drivers' headway during car-following reduces after using adaptive cruise control. However, little is known about whether, and how, drivers' behaviour will change if they experience automated car-following, and how this is affected by engagement in non-driving related tasks (NDRTs). The aim of this driving simulator study, conducted as part of the H2020 L3Pilot project, was to address this topic. We also investigated the effect of the presence of a lead vehicle during the resumption of control on subsequent manual driving behaviour. Thirty-two participants were divided into two experimental groups. During automated car-following, one group was engaged in an NDRT (SAE Level 3), while the other group was free to look around the road environment (SAE Level 2). Both groups were exposed to Long (1.5 s) and Short (0.5 s) Time Headway (THW) conditions during automated car-following, and resumed control both with and without a lead vehicle. All post-automation manual drives were compared to a Baseline Manual Drive, which was recorded at the start of the experiment. Drivers in both groups significantly reduced their time headway in all post-automation drives, compared to the Baseline Manual Drive. There was a greater reduction in THW after drivers resumed control in the presence of a lead vehicle, and also after they had experienced a shorter THW during automated car-following. However, whether drivers were in L2 or L3 did not appear to influence the change in mean THW. Subjective feedback suggests that drivers were not aware of the changes to their driving behaviour, but preferred longer THWs in automation. Our results suggest that automated driving systems should adopt longer THWs in car-following situations, since drivers' behavioural adaptation may lead to the adoption of unsafe headways after resumption of control.
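Time headway, the key dependent measure here, is simply the bumper-to-bumper gap divided by the following vehicle's speed. The short sketch below, with illustrative names not taken from the study's analysis code, shows how mean THW in a post-automation drive could be compared against the baseline manual drive.

```python
import numpy as np

def time_headway(gap_m, follower_speed_mps):
    """THW (s): bumper-to-bumper gap divided by the following vehicle's speed.
    Returns infinity where the follower is stationary."""
    return np.where(follower_speed_mps > 0, gap_m / follower_speed_mps, np.inf)

def mean_thw_change(post_drive_thw_s, baseline_thw_s):
    """Change in mean THW relative to the baseline manual drive; a negative
    value means the driver adopted shorter (less safe) headways."""
    return float(np.mean(post_drive_thw_s) - np.mean(baseline_thw_s))
```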


2017, Vol 2017, pp. 1-12
Author(s): Frederik Naujoks, Yannick Forster, Katharina Wiedemann, Alexandra Neukum

During conditionally automated driving (CAD), driving time can be used for non-driving-related tasks (NDRTs). To increase the safety and comfort of an automated ride, upcoming automated manoeuvres such as lane changes or speed adaptations may be communicated to the driver. However, as drivers' primary task consists of performing NDRTs, they might prefer to be informed in a nondistracting way. In this paper, the potential of using speech output to improve human-automation interaction is explored. A sample of 17 participants completed different situations involving communication between the automation and the driver in a motion-based driving simulator. The Human-Machine Interface (HMI) of the automated driving system consisted of a visual-auditory HMI with either generic auditory feedback (i.e., standard information tones) or additional speech output. The drivers were asked to perform a common NDRT during the drive. Compared to generic auditory output, communicating upcoming automated manoeuvres additionally by speech led to a decrease in self-reported visual workload and decreased monitoring of the visual HMI. However, interruptions of the NDRT were not affected by the additional speech output. Participants clearly favoured the HMI with additional speech-based output, demonstrating the potential of speech to enhance the usefulness and acceptance of automated vehicles.
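The two HMI variants described above differ only in whether a spoken message accompanies the generic information tone before a manoeuvre. A minimal sketch of that mapping is shown below; the manoeuvre names and message wording are illustrative assumptions, not the study's actual HMI content.

```python
# Illustrative manoeuvre-to-speech mapping; the study's actual wording is not reported here.
SPEECH_MESSAGES = {
    "lane_change_left": "Changing lane to the left",
    "lane_change_right": "Changing lane to the right",
    "speed_adaptation": "Adjusting speed",
}

def auditory_events(manoeuvre: str, speech_output: bool) -> list[str]:
    """Auditory events issued ahead of an automated manoeuvre: a generic
    information tone in both HMI variants, plus a spoken message when
    speech output is enabled."""
    events = ["information_tone"]
    if speech_output:
        events.append(SPEECH_MESSAGES.get(manoeuvre, manoeuvre.replace("_", " ")))
    return events
```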


2021
Author(s): Vishnu Radhakrishnan, Natasha Merat, Tyron Louw, Rafael Goncalves, Wei Lyu, ...

This driving simulator study, conducted as part of the Horizon2020-funded L3Pilot project, investigated how different car-following situations affected driver workload within the context of vehicle automation. Electrocardiogram (ECG)- and electrodermal activity (EDA)-based physiological metrics were used as objective indicators of workload, along with self-reported workload ratings. A total of 32 drivers were divided into two equal groups, based on whether they engaged in a non-driving related task (NDRT) during automation or monitored the drive. Drivers in both groups were exposed to two counterbalanced experimental drives, lasting ~18 minutes each, with Short (0.5 s) and Long (1.5 s) Time Headway conditions during automated car-following (ACF), followed by a takeover that happened with or without a lead vehicle. We observed that the workload imposed on the driver by the NDRT was significantly higher than that of either monitoring the drive during ACF or manual car-following (MCF). Furthermore, the results indicated that shorter THWs and the presence of a lead vehicle can significantly increase driver workload during takeover scenarios, potentially affecting the safety of the vehicle. This warrants further research into understanding safe time headway thresholds to be maintained by automated vehicles without placing additional mental or attentional demands on the driver. To conclude, our results indicated that ECG and EDA signals are sensitive to variations in workload and hence warrant further investigation into the value of combining these two signals to assess driver workload in real time, to help the system respond appropriately to the limitations of the driver and predict their performance in the driving task if and when they have to resume manual control of the vehicle.
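Typical ECG- and EDA-derived workload indicators of the kind referred to above are heart-rate-variability statistics computed from R-R intervals and the tonic skin conductance level. The metrics below are common examples assumed for illustration, not necessarily the exact ones used in this study.

```python
import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of successive R-R interval differences (ms), an
    ECG-derived heart-rate-variability index that typically decreases
    as workload increases."""
    diffs = np.diff(np.asarray(rr_intervals_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

def mean_scl(eda_microsiemens):
    """Mean skin conductance level (microsiemens); tonic EDA tends to
    rise with increased arousal and workload."""
    return float(np.mean(eda_microsiemens))
```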


Sensors, 2020, Vol 20 (4), pp. 1029
Author(s): Thomas Kundinger, Nikoletta Sofra, Andreas Riener

Drowsy driving poses a high safety risk. Current systems often use driving behavior parameters for driver drowsiness detection. Increasing driving automation reduces the availability of these parameters and therefore the scope of such methods. Techniques that include physiological measurements, in particular, seem to be a promising alternative. However, in a dynamic environment such as driving, only non-intrusive or minimally intrusive methods are accepted, and vibrations from the roadbed could degrade sensor signal quality. This work contributes to driver drowsiness detection with a machine learning approach applied solely to physiological data collected from a non-intrusive, retrofittable system in the form of a wrist-worn wearable sensor. To check accuracy and feasibility, results were compared with reference data from a medical-grade ECG device. A user study with 30 participants in a high-fidelity driving simulator was conducted. Several machine learning algorithms for binary classification were applied in user-dependent and user-independent tests. The results provide evidence that the non-intrusive setting achieves accuracy similar to that of the medical-grade device, and high accuracies (>92%) could be achieved, especially in the user-dependent scenario. The proposed approach offers new possibilities for human-machine interaction in a car and especially for driver state monitoring in the field of automated driving.
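The distinction between user-dependent and user-independent evaluation mentioned above comes down to whether data from the same participant may appear in both training and test folds. The sketch below illustrates that split with scikit-learn; the classifier choice and the feature, label, and group names are illustrative assumptions, not the study's actual pipeline.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, StratifiedKFold, cross_val_score

# X: features derived from the wrist-worn sensor (e.g., heart-rate statistics),
# y: binary drowsy/alert labels, groups: participant IDs. All names are illustrative.

def user_dependent_accuracy(X, y):
    """Cross-validation that may mix data from the same participant across folds."""
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    return cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=cv).mean()

def user_independent_accuracy(X, y, groups):
    """Leave-participants-out evaluation: no participant appears in both the
    training and test folds, which usually lowers accuracy."""
    cv = GroupKFold(n_splits=5)
    return cross_val_score(RandomForestClassifier(random_state=0), X, y,
                           groups=groups, cv=cv).mean()
```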

