Takeover Safety Analysis with Driver Monitoring Systems and Driver-Vehicle Interfaces in Highly Automated Vehicles

2021 ◽  
Vol 11 (15) ◽  
pp. 6685
Author(s):  
Dongyeon Yu ◽  
Chanho Park ◽  
Hoseung Choi ◽  
Donggyu Kim ◽  
Sung-Ho Hwang

According to SAE J3016, driving automation can be divided into six levels, with conditional automation beginning at level three. A conditionally or highly automated vehicle can encounter situations involving total system failure. Here, we studied a strategy for safe takeover in such situations. A human-in-the-loop simulator, a driver-vehicle interface, and a driver monitoring system were developed, and takeover experiments were performed using various driving scenarios and realistic autonomous driving situations. The experiments allowed us to draw the following conclusions. The combined visual–auditory–haptic alarm delivered warnings effectively and correlated clearly with users' subjective preferences. In some scenario types, the system had to enter a minimum risk maneuver or an emergency maneuver immediately, without requesting takeover. Lastly, the risk of accidents can be reduced by a driver monitoring system that prevents the driver from becoming completely immersed in non-driving-related tasks. From these results we propose a safe takeover strategy that provides meaningful guidance for the development of autonomous vehicles. Given the users' subjective questionnaire evaluations, the strategy is also expected to improve the acceptance and adoption of autonomous vehicles.
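The takeover strategy outlined above can be illustrated as simple decision logic: request a takeover only when the situation leaves time for one, otherwise fall back to a minimum risk maneuver or an emergency maneuver. The sketch below is editorial, not the authors' implementation; the threshold values and function name are hypothetical.

```python
# Illustrative takeover decision logic: if the remaining time budget is
# too short for a human handover, the system must act on its own.
# Thresholds are hypothetical, not taken from the paper.

def takeover_action(time_budget_s, driver_attentive):
    """Choose a response to a system-failure event during automated driving."""
    if time_budget_s < 3.0:
        return "emergency_maneuver"        # no time for any handover
    if time_budget_s < 10.0 and not driver_attentive:
        return "minimum_risk_maneuver"     # driver cannot take over in time
    return "request_takeover"              # issue visual-auditory-haptic alarm
```

The driver monitoring system feeds the `driver_attentive` flag, which is how preventing full immersion in non-driving-related tasks widens the set of situations where a takeover request is still viable.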

Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1131
Author(s):  
Eduardo Sánchez Morales ◽  
Julian Dauth ◽  
Bertold Huber ◽  
Andrés García Higuera ◽  
Michael Botsch

A current trend in automotive research is autonomous driving. For the proper testing and validation of automated driving functions, a reference vehicle state is required. Global Navigation Satellite Systems (GNSS) are useful for vehicle automation because of their practicality and accuracy. However, there are situations where the satellite signal is absent or unusable. This research work presents a methodology that addresses those situations, thus largely reducing the dependency of Inertial Navigation Systems (INSs) on satellite navigation. The proposed methodology includes (1) a standstill recognition based on machine learning, (2) a detailed mathematical description of the horizontation of inertial measurements, (3) sensor fusion by means of statistical filtering, (4) an outlier detection for correction data, (5) a drift detector, and (6) a novel LiDAR-based Positioning Method (LbPM) for indoor navigation. The robustness and accuracy of the methodology are validated against a state-of-the-art INS with Real-Time Kinematic (RTK) correction data. The results show a great improvement in the accuracy of vehicle state estimation under adverse driving conditions, such as when the correction data is corrupted, when there are extended periods with no correction data, and when drift occurs. The proposed LbPM method achieves an accuracy closely resembling that of a system with RTK.
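Steps (3) and (4) above, statistical filtering with an outlier gate on the correction data, can be illustrated with a minimal one-dimensional Kalman filter. This is a hedged sketch of the general technique, not the paper's full 3-D INS/RTK implementation; all names and values are hypothetical.

```python
# Minimal 1-D Kalman-filter sketch: inertial propagation plus a GNSS
# correction step guarded by an innovation (outlier) gate. Illustrative
# only; the paper fuses full vehicle states with RTK corrections.

def predict(x, p, accel, dt, q):
    """Propagate the position estimate with an inertial measurement."""
    x = x + accel * dt * dt / 2.0   # simplistic position update
    p = p + q                        # uncertainty grows without corrections
    return x, p

def update(x, p, z, r, gate=9.0):
    """Fuse a GNSS position fix unless it fails the outlier gate."""
    innov = z - x
    s = p + r                        # innovation variance
    if innov * innov / s > gate:     # reject implausible correction data
        return x, p                  # keep the inertial-only estimate
    k = p / s                        # Kalman gain
    return x + k * innov, (1.0 - k) * p
```

A corrupted correction (e.g., a fix hundreds of metres off) fails the gate and is ignored, which is exactly the behavior that keeps the estimate usable when the correction data stream degrades.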


2021 ◽  
Vol 12 (3) ◽  
Author(s):  
Damien Schnebelen ◽  
Camilo Charron ◽  
Franck Mars

When manually steering a car, the driver's visual perception of the driving scene and his or her motor actions to control the vehicle are closely linked. Since motor behaviour is no longer required in an automated vehicle, the sampling of the visual scene is affected. Autonomous driving typically results in less gaze being directed towards the road centre and a broader exploration of the driving scene, compared to manual driving. To examine a corollary of this difference, this study estimated the state of automation (manual or automated) on the basis of gaze behaviour. To do so, models based on partial least squares regression were computed by considering the gaze behaviour in multiple ways, using static indicators (percentage of time spent gazing at 13 areas of interest), dynamic indicators (transition matrices between areas), or both together. Analysis of the predictive quality of the different models showed that the best result was obtained by considering both static and dynamic indicators. However, gaze dynamics played the most important role in distinguishing between manual and automated driving. This study may be relevant to the issue of driver monitoring in autonomous vehicles.
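The modelling idea, regressing the driving mode onto gaze features through a latent component, can be sketched with a single-component PLS fit. The feature values below are invented for illustration; the study used 13 areas of interest plus full transition matrices, and this sketch is not the authors' model.

```python
# One-component PLS sketch: predict driving mode (0 = manual,
# 1 = automated) from gaze features, e.g. fraction of time on the road
# centre (static) and number of AOI transitions (dynamic).

def pls1_fit(X, y):
    """Fit a single-component PLS regression (NIPALS-style)."""
    n, m = len(X), len(X[0])
    mx = [sum(row[j] for row in X) / n for j in range(m)]   # feature means
    my = sum(y) / n
    Xc = [[row[j] - mx[j] for j in range(m)] for row in X]  # centre data
    yc = [v - my for v in y]
    # weight vector: covariance of each feature with y, normalised
    w = [sum(Xc[i][j] * yc[i] for i in range(n)) for j in range(m)]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    # latent scores, then regress y on the score t
    t = [sum(Xc[i][j] * w[j] for j in range(m)) for i in range(n)]
    b = sum(t[i] * yc[i] for i in range(n)) / sum(v * v for v in t)
    return mx, my, w, b

def pls1_predict(model, x):
    mx, my, w, b = model
    t = sum((x[j] - mx[j]) * w[j] for j in range(len(x)))
    return my + b * t
```

With manual drivers showing high road-centre fixation and few transitions, and automated-mode passengers the opposite, the latent score separates the two modes; thresholding the prediction at 0.5 yields the classifier.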


Author(s):  
Anna-Lena Köhler ◽  
Julia Pelzer ◽  
Kristian Seidel ◽  
Stefan Ladwig

In the context of autonomous driving, new possibilities for passenger positioning and activities arise. Vehicle concepts provide more degrees of freedom for seating configurations and different activities as a passenger, leading to a need for advanced protection principles. The H2020 project OSCCAR analyses occupant safety requirements for highly automated vehicles (HAV) and defines the technological developments necessary for novel safety principles. In order to understand the potential of novel sitting postures and activities in the context of autonomous driving, an empirical user study was conducted to examine the impact of different scenarios on preferred sitting postures in a simulated automated driving situation. The results gave insights into the detailed sitting postures that occupants are most likely to adopt in future use cases. They serve as input to a test case matrix for designing future occupant restraint principles.


Author(s):  
Sandra Boric ◽  
Edgar Schiebel ◽  
Christian Schlögl ◽  
Michaela Hildebrandt ◽  
Christina Hofer ◽  
...  

Autonomous driving has become an increasingly relevant issue for policymakers, the industry, service providers, infrastructure companies, and science. This study shows how bibliometrics can be used to identify the major technological aspects of an emerging research field such as autonomous driving. We examine the most influential publications and identify research fronts of scientific activity up to 2017 based on a bibliometric literature analysis. Using the science mapping approach, publications in the research field of autonomous driving were retrieved from Web of Science and then structured using the bibliometric software BibTechMon by the AIT (Austrian Institute of Technology). At the time of our analysis, we identified four research fronts in the field of autonomous driving: (I) Autonomous Vehicles and Infrastructure, (II) Driver Assistance Systems, (III) Autonomous Mobile Robots, and (IV) IntraFace, i.e., automated facial image analysis. Researchers were working extensively on technologies that support navigation and data collection. Our analysis indicates that research was moving towards autonomous navigation and infrastructure in the urban environment. A noticeable number of publications focused on technologies for environment detection in automated vehicles. Still, research pointed to the technological challenges of making automated driving safe.


Electronics ◽  
2021 ◽  
Vol 10 (22) ◽  
pp. 2796
Author(s):  
Ji-Hyeok Han ◽  
Da-Young Ju

In autonomous driving vehicles, the driver can engage in non-driving-related tasks and does not have to pay attention to the driving conditions or engage in manual driving. If an unexpected situation arises that the autonomous vehicle cannot manage, the vehicle should notify the driver and help them prepare to retake manual control. Several effective notification methods based on multimodal warning systems have been reported. In this paper, we propose an advanced method that employs alarms for specific conditions by analyzing the differences in drivers' responses, based on their specific situation, to visual and auditory alarms triggered in autonomous vehicles. Using a driving simulation, we carried out human-in-the-loop experiments with a total of 38 drivers and two scenarios (drowsiness and distraction), each of which included a control-switching stage for implementing an alarm during autonomous driving. Reaction time, gaze indicator, and questionnaire data were collected, and electroencephalography measurements were performed to verify drowsiness. Based on the experimental results, the drivers exhibited high alertness to the auditory alarms in both the drowsy and distracted conditions, and the change in the gaze indicator was higher in the distraction condition. The results of this study show a distinct difference between the drivers' responses to the alarms signaled in the drowsy and distracted conditions. Accordingly, we propose an advanced notification method and future goals for further investigation of vehicle alarms.


2020 ◽  
Vol 11 (5) ◽  
pp. 1-20
Author(s):  
Susmitha Mohan ◽  
Manoj Phirke

Driver monitoring systems have gained a lot of popularity in the automotive sector as a means of ensuring safety while driving. Collisions due to driver inattentiveness, driver fatigue, or over-reliance on autonomous driving features are major causes of road accidents and fatalities. Driver monitoring systems aim to monitor various aspects of driving and provide appropriate warnings whenever required. Eye gaze estimation is a key element in almost all driver monitoring systems. Gaze estimation aims to find the point of gaze, that is, to answer the question "where is the driver looking?" This helps in understanding whether the driver is attentively looking at the road or is distracted. Estimating the gaze point also plays an important role in many other applications, such as retail shopping, online marketing, psychological testing, and healthcare. This paper covers various aspects of eye gaze estimation for a driver monitoring system, including sensor choice and sensor placement. There are multiple ways in which eye gaze estimation can be done. A detailed comparative study of two popular methods for gaze estimation using eye features is presented. An infrared camera was used to capture data for this study. Method 1 tracks the corneal reflection centre with respect to the pupil centre, and method 2 tracks the pupil centre with respect to the eye centre to estimate gaze. Both methods have advantages and disadvantages, which are examined here. This paper can act as a reference for researchers working in the same field to understand the possibilities and limitations of eye gaze estimation for driver monitoring systems.
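The two methods compared above both reduce to mapping an eye-feature offset in the image to a gaze direction through a calibration gain. The sketch below shows that shared structure in 2-D image coordinates; the feature points and the linear gain are hypothetical simplifications, not the paper's calibration procedure.

```python
# Sketch of the two gaze-estimation approaches compared in the paper,
# reduced to 2-D image coordinates and a single linear calibration gain.

def gaze_method1(pupil_centre, glint_centre, gain):
    """Method 1: corneal-reflection (glint) centre relative to the pupil centre."""
    dx = pupil_centre[0] - glint_centre[0]
    dy = pupil_centre[1] - glint_centre[1]
    return (gain * dx, gain * dy)   # approximate gaze angles

def gaze_method2(pupil_centre, eye_centre, gain):
    """Method 2: pupil centre relative to the (static) eye centre."""
    dx = pupil_centre[0] - eye_centre[0]
    dy = pupil_centre[1] - eye_centre[1]
    return (gain * dx, gain * dy)
```

The practical difference lies in the reference point: the glint moves with the IR illumination geometry and tolerates small head motion, while the eye centre is easier to detect but assumes a roughly fixed head pose.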


Sensors ◽  
2020 ◽  
Vol 20 (19) ◽  
pp. 5626
Author(s):  
Jie Chen ◽  
Tao Wu ◽  
Meiping Shi ◽  
Wei Jiang

Autonomous driving with artificial intelligence technology has been viewed as promising for autonomous vehicles hitting the road in the near future. In recent years, considerable progress has been made with Deep Reinforcement Learning (DRL) towards end-to-end autonomous driving. Still, driving safely and comfortably in real dynamic scenarios with DRL is nontrivial because reward functions are typically pre-defined with expert knowledge. This paper proposes a human-in-the-loop DRL algorithm for learning personalized autonomous driving behavior in a progressive manner. Specifically, a progressively optimized reward function (PORF) learning model is built and integrated into the Deep Deterministic Policy Gradient (DDPG) framework, called PORF-DDPG in this paper. PORF consists of two parts: the first is a pre-defined typical reward function on the system state; the second, the main contribution of this paper, is modeled as a Deep Neural Network (DNN) representing the driving-adjustment intention of the human observer. The DNN-based reward model is progressively learned using front-view images as input, via active human supervision and intervention. The proposed approach is potentially useful for driving in dynamic constrained scenarios where dangerous collision events might occur frequently with classic DRL. The experimental results show that the proposed autonomous driving behavior learning method exhibits online learning capability and environmental adaptability.
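The two-part reward composition at the heart of PORF can be sketched as follows. The learned term is shown as a plain callable standing in for the DNN on front-view images; all names and example rewards are hypothetical, not taken from the paper.

```python
# Sketch of the PORF idea: total reward = pre-defined reward on the
# system state + a learned term encoding the human observer's
# adjustment intention. Illustrative stand-ins only.

def porf_reward(state, image, base_reward, learned_adjustment):
    """Combine the hand-crafted reward with the learned human-intent term."""
    return base_reward(state) + learned_adjustment(image)

# Hand-crafted part: penalise lateral deviation from the lane centre.
def base_reward(state):
    return 1.0 - abs(state["lateral_offset"])

# Stand-in for the progressively trained DNN reward model.
def learned_adjustment(image):
    return -0.5 if image == "near_obstacle" else 0.0
```

In the DDPG loop, `porf_reward` would replace the fixed reward signal, and human interventions would supply training targets for the learned term, which is what makes the reward progressively optimized rather than fixed at design time.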


Author(s):  
Xuan Fang ◽  
Tamás Tettamanti

It is believed that autonomous vehicles will replace conventional human-driven vehicles in the coming decades thanks to emerging autonomous driving technology, which will bring a massive transformation to the road transport sector. Due to the high complexity of traffic systems, efficient traffic simulation models for assessing this disruptive change are critical. The objective of this paper is to show that the common practice of microscopic traffic simulation needs thorough revision and modification when applied in the presence of autonomous vehicles in order to obtain realistic results. Two high-fidelity traffic simulators (SUMO and VISSIM) were used to show the sensitivity of microscopic simulation to automated vehicles' behavior. Two traffic evaluation indicators (average travel time and average speed) were selected to quantitatively evaluate the macro-traffic effects of changes in driving behavior parameters (gap acceptance) caused by emerging autonomous driving technologies under different traffic demand conditions.
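The two macro-traffic indicators named above are straightforward aggregates over per-vehicle trip records. The sketch below shows how they would be computed; the trip data is invented for illustration, while in practice it would come from SUMO or VISSIM trip outputs.

```python
# Sketch of the two evaluation indicators used in the paper, computed
# from per-vehicle trip records (times in seconds, distance in metres).

def average_travel_time(trips):
    """Mean trip duration across all vehicles."""
    return sum(t["arrival"] - t["depart"] for t in trips) / len(trips)

def average_speed(trips):
    """Mean of per-vehicle average speeds (not distance-weighted)."""
    return sum(t["distance"] / (t["arrival"] - t["depart"]) for t in trips) / len(trips)

trips = [
    {"depart": 0.0,  "arrival": 100.0, "distance": 1500.0},  # 15.0 m/s
    {"depart": 10.0, "arrival": 130.0, "distance": 1500.0},  # 12.5 m/s
]
```

Comparing these indicators across runs with different gap-acceptance settings (e.g., smaller minimum gaps for automated vehicles) quantifies the macro-level effect of the driving-behavior change.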


Author(s):  
Christian Vitale ◽  
Nikos Piperigkos ◽  
Christos Laoudias ◽  
Georgios Ellinas ◽  
Jordi Casademont ◽  
...  

The main goal of the H2020-CARAMEL project is to address the cybersecurity gaps introduced by the new technological domains adopted by modern vehicles, applying, among others, advanced Artificial Intelligence and Machine Learning techniques. As a result, CARAMEL enhances protection against threats related to automated driving, smart charging of Electric Vehicles, and communication among vehicles or between vehicles and the roadside infrastructure. This work focuses on the latter and presents the CARAMEL architecture, which aims to assess the integrity of the information transmitted by vehicles and to improve the security and privacy of communication for connected and autonomous driving. The proposed architecture includes: (1) multi-radio access technology capabilities, with simultaneous 802.11p and LTE-Uu support, enabled by the connectivity infrastructure; (2) a MEC platform where, among others, attack-detection algorithms are implemented; (3) an intelligent On-Board Unit with anti-hacking features inside the vehicle; (4) a Public Key Infrastructure that validates the integrity of vehicles' data transmissions in real time. As an indicative application, the interaction between the entities of the CARAMEL architecture is showcased in a GPS spoofing attack scenario. The adopted attack detection techniques exploit robust in-vehicle and cooperative approaches that do not rely on encrypted GPS signals, but only on measurements available in the CARAMEL architecture.
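One common building block of in-vehicle spoofing detection is a plausibility check: a reported fix is flagged when it implies a displacement the vehicle could not physically have covered since the last trusted fix. The sketch below shows that generic idea; the threshold values and function name are hypothetical and are not taken from the CARAMEL architecture.

```python
# Sketch of a generic GPS-spoofing plausibility check: flag a fix whose
# implied displacement exceeds what the vehicle's maximum speed allows.
import math

def spoofing_suspected(last_fix, new_fix, dt, max_speed=60.0, margin=10.0):
    """Flag an implausible GPS fix (positions in metres, dt in seconds)."""
    dx = new_fix[0] - last_fix[0]
    dy = new_fix[1] - last_fix[1]
    displacement = math.hypot(dx, dy)
    return displacement > max_speed * dt + margin
```

Cooperative variants extend the same idea by cross-checking a vehicle's reported position against neighbouring vehicles' measurements, which is the kind of information the MEC platform in the architecture has access to.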


Sensors ◽  
2020 ◽  
Vol 20 (24) ◽  
pp. 7283
Author(s):  
Taohua Zhou ◽  
Mengmeng Yang ◽  
Kun Jiang ◽  
Henry Wong ◽  
Diange Yang

With the rapid development of automated vehicles (AVs), more and more demands are placed on environmental perception. Among the commonly used sensors, millimeter-wave (MMW) radar plays an important role due to its low cost, adaptability to different weather, and motion detection capability. Radar can provide different data types to satisfy the requirements of various levels of autonomous driving. The objective of this study is to present an overview of state-of-the-art radar-based technologies applied in AVs. Although several published research papers focus on MMW radars for intelligent vehicles, no general survey exists on deep learning applied to radar data for autonomous vehicles. Therefore, we provide such a survey in this paper. First, we introduce models and representations of MMW radar data. Secondly, we present radar-based applications used in AVs. For low-level automated driving, radar data have been widely used in advanced driver-assistance systems (ADAS). For high-level automated driving, radar data are used in object detection, object tracking, motion prediction, and self-localization. Finally, we discuss the remaining challenges and future development directions of related studies.

