Advanced Alarm Method Based on Driver’s State in Autonomous Vehicles

Electronics ◽  
2021 ◽  
Vol 10 (22) ◽  
pp. 2796
Author(s):  
Ji-Hyeok Han ◽  
Da-Young Ju

In autonomous driving vehicles, the driver can engage in non-driving-related tasks and does not have to pay attention to the driving conditions or engage in manual driving. If an unexpected situation arises that the autonomous vehicle cannot manage, then the vehicle should notify and help the driver to prepare themselves for retaking manual control of the vehicle. Several effective notification methods based on multimodal warning systems have been reported. In this paper, we propose an advanced method that employs alarms for specific conditions by analyzing the differences in the driver’s responses, based on their specific situation, to trigger visual and auditory alarms in autonomous vehicles. Using a driving simulation, we carried out human-in-the-loop experiments that included a total of 38 drivers and 2 scenarios (namely drowsiness and distraction scenarios), each of which included a control-switching stage for implementing an alarm during autonomous driving. Reaction time, gaze indicator, and questionnaire data were collected, and electroencephalography measurements were performed to verify the drowsiness. Based on the experimental results, the drivers exhibited a high alertness to the auditory alarms in both the drowsy and distracted conditions, and the change in the gaze indicator was higher in the distraction condition. The results of this study show that there was a distinct difference between the driver’s response to the alarms signaled in the drowsy and distracted conditions. Accordingly, we propose an advanced notification method and future goals for further investigation on vehicle alarms.

Author(s):  
Gaojian Huang ◽  
Christine Petersen ◽  
Brandon J. Pitts

Semi-autonomous vehicles still require drivers to occasionally resume manual control. However, drivers of these vehicles may have different mental states. For example, drivers may be engaged in non-driving related tasks or may exhibit mind wandering behavior. Also, monitoring monotonous driving environments can result in passive fatigue. Given the potential for different types of mental states to negatively affect takeover performance, it will be critical to highlight how mental states affect semi-autonomous takeover. A systematic review was conducted to synthesize the literature on mental states (such as distraction, fatigue, emotion) and takeover performance. This review focuses specifically on five fatigue studies. Overall, studies were too few to observe consistent findings, but some suggest that response times to takeover alerts and post-takeover performance may be affected by fatigue. Ultimately, this review may help researchers improve and develop real-time mental states monitoring systems for a wide range of application domains.


Sensors ◽  
2021 ◽  
Vol 21 (20) ◽  
pp. 6733
Author(s):  
Min-Joong Kim ◽  
Sung-Hun Yu ◽  
Tong-Hyun Kim ◽  
Joo-Uk Kim ◽  
Young-Min Kim

Today, a lot of research on autonomous driving technology is being conducted, and various vehicles with autonomous driving functions, such as ACC (adaptive cruise control), are being released. The autonomous vehicle recognizes obstacles ahead by fusing data from various sensors, such as lidar, radar, and camera sensors. As the number of vehicles equipped with such autonomous driving functions increases, securing their safety and reliability is a major issue. Recently, Mobileye proposed the RSS (responsibility-sensitive safety) model, a white-box mathematical model, to secure the safety of autonomous vehicles and clarify responsibility in the case of an accident. In this paper, a method of applying the RSS model to a variable focus function camera that can cover the recognition range of a lidar sensor and a radar sensor with a single camera sensor is considered. The variables of the RSS model suitable for the variable focus function camera were defined, the variable values were determined, and the safe distances for each velocity were derived by applying the determined variable values. In addition, considering the time required to obtain the data and the time required to change the focal length of the camera, it was confirmed that the response time obtained using the derived safe distance was valid.
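
For context, the longitudinal rule of the published RSS model can be computed in a few lines; the sketch below uses placeholder response-time and braking parameters, not the variable values determined in the paper for the variable focus camera.

```python
def rss_safe_longitudinal_distance(
    v_rear: float,              # ego (rear) vehicle speed [m/s]
    v_front: float,             # lead (front) vehicle speed [m/s]
    rho: float = 0.5,           # response time [s] (placeholder, not the paper's value)
    a_max_accel: float = 2.0,   # max acceleration of the rear vehicle during response [m/s^2]
    b_min_brake: float = 4.0,   # min braking the rear vehicle is assumed to apply [m/s^2]
    b_max_brake: float = 8.0,   # max braking the front vehicle may apply [m/s^2]
) -> float:
    """Minimum safe following distance per the published RSS longitudinal rule."""
    v_after_response = v_rear + rho * a_max_accel
    d = (v_rear * rho
         + 0.5 * a_max_accel * rho ** 2
         + v_after_response ** 2 / (2 * b_min_brake)
         - v_front ** 2 / (2 * b_max_brake))
    return max(d, 0.0)

# Example: safe distances over a range of ego speeds, as tabulated in such analyses.
for v in (10, 20, 30):   # m/s
    print(v, round(rss_safe_longitudinal_distance(v, v_front=v), 1))
```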


2019 ◽  
Vol 9 (23) ◽  
pp. 5126 ◽  
Author(s):  
Betz ◽  
Heilmeier ◽  
Wischnewski ◽  
Stahl ◽  
Lienkamp

Since 2017, a research team from the Technical University of Munich has developed a software stack for autonomous driving. The software was used to participate in the Roborace Season Alpha Championship. The championship aims to have autonomous race cars with different software stacks compete against each other. In May 2019, during a software test in Modena, Italy, the greatest danger in autonomous driving became reality: a minor change in environmental influences led an extensively tested software stack to crash into a barrier at speed. Crashes with autonomous vehicles have happened before, but a detailed explanation of why the software failed and which part of the software was not working correctly is missing in research articles. In this paper we present a general method that can be used to display an autonomous vehicle disengagement and explain in detail what happened. This method is then used to display and explain the crash from Modena. First, a brief introduction to the modular software stack that was used in the Modena event, consisting of three individual parts (perception, planning, and control), is given. Furthermore, the circumstances causing the crash are elaborated in detail. By presenting and explaining in detail which software part failed and contributed to the crash, we can discuss further software improvements. As a result, we present necessary functions that need to be integrated into an autonomous driving software stack to prevent such vehicle behavior from causing a fatal crash. In addition, we suggest an enhancement of the current disengagement reports for autonomous driving with a detailed explanation of the software part that caused the disengagement. In the outlook of this paper we present two additional software functions, for assessing the tire and control performance of the vehicle, to enhance the autonomous driving software stack.
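
The enhanced disengagement report suggested above could take the form of a structured record; the sketch below is purely illustrative, and all field names and placeholder values are assumptions rather than the authors' proposal.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DisengagementRecord:
    """Hypothetical structured disengagement report; fields are illustrative only."""
    timestamp_s: float                 # time of the disengagement in the log
    location: str                      # track / road segment identifier
    vehicle_speed_mps: float           # speed at the moment of failure
    failed_module: str                 # e.g. "perception", "planning", "control"
    failure_description: str           # what the module computed vs. what was expected
    environmental_conditions: str      # weather, lighting, surface changes
    contributing_factors: List[str] = field(default_factory=list)

# Placeholder instance showing the intended level of detail.
report = DisengagementRecord(
    timestamp_s=0.0,
    location="<track segment>",
    vehicle_speed_mps=0.0,
    failed_module="control",
    failure_description="<what the module computed vs. what was expected>",
    environmental_conditions="<weather / lighting / surface change>",
    contributing_factors=["<untested parameter combination>"],
)
```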


2019 ◽  
Vol 07 (03) ◽  
pp. 183-194
Author(s):  
Yoan Espada ◽  
Nicolas Cuperlier ◽  
Guillaume Bresson ◽  
Olivier Romain

The navigation of autonomous vehicles is confronted with the problem of building an efficient place recognition system that can handle outdoor environments in the long run. Current Simultaneous Localization and Mapping (SLAM) and place recognition solutions have limitations that prevent them from achieving the performance needed for autonomous driving. This paper suggests handling the problem from another perspective by taking inspiration from biological models. We propose a neural architecture for the localization of an autonomous vehicle based on a neurorobotic model of the place cells (PC) found in the hippocampus of mammals. This model is based on an attentional mechanism and only takes into account visual information from a mono-camera, together with orientation information, to self-localize. It has the advantage of working with a low-resolution camera without the need for calibration. It also does not need a long learning phase, as it uses a one-shot learning system. Such a localization model has already been integrated in a robot control architecture that allows for successful navigation in both indoor and small outdoor environments. The contribution of this paper is to study how the model handles the change of scale by evaluating its performance over much larger outdoor environments. Eight experiments using real data (images and orientation) recorded by a moving vehicle are studied (coming from the KITTI odometry datasets and from datasets taken with VEDECOM vehicles). Results show the strong adaptability to different kinds of environments of this bio-inspired model primarily developed for indoor navigation.
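
The one-shot learning idea mentioned above can be illustrated with a toy sketch in which each place is stored from a single view and recognized later by cosine similarity; this is a drastic simplification, not the authors' attentional place-cell architecture.

```python
import numpy as np

class OneShotPlaceMemory:
    """Toy one-shot place learning: each place is stored from a single view as a
    feature vector and recognized later by cosine similarity."""

    def __init__(self, threshold: float = 0.9):
        self.codes = []          # one stored code per learned place
        self.threshold = threshold

    def learn(self, view_code: np.ndarray) -> int:
        # Store the normalized view code; the index plays the role of a recruited place cell.
        self.codes.append(view_code / np.linalg.norm(view_code))
        return len(self.codes) - 1

    def recognize(self, view_code: np.ndarray):
        # Return (best matching place, similarity), or (None, similarity) below threshold.
        if not self.codes:
            return None, 0.0
        q = view_code / np.linalg.norm(view_code)
        sims = np.array([c @ q for c in self.codes])
        best = int(np.argmax(sims))
        return (best, float(sims[best])) if sims[best] >= self.threshold else (None, float(sims[best]))

mem = OneShotPlaceMemory()
idx = mem.learn(np.random.rand(64))        # learn a place from a single view
print(mem.recognize(np.random.rand(64)))   # later: query with a new view
```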


2021 ◽  
Vol 11 (15) ◽  
pp. 6685
Author(s):  
Dongyeon Yu ◽  
Chanho Park ◽  
Hoseung Choi ◽  
Donggyu Kim ◽  
Sung-Ho Hwang

According to SAE J3016, autonomous driving can be divided into six levels, and partially automated driving is possible from level three up. A partially or highly automated vehicle can encounter situations involving total system failure. Here, we studied a strategy for safe takeover in such situations. A human-in-the-loop simulator, driver-vehicle interface, and driver monitoring system were developed, and takeover experiments were performed using various driving scenarios and realistic autonomous driving situations. The experiments allowed us to draw the following conclusions. The visual–auditory–haptic complex alarm effectively delivered warnings and correlated clearly with the users’ subjective preferences. There were scenario types in which the system had to immediately enter minimum risk maneuvers or emergency maneuvers without requesting takeover. Lastly, the risk of accidents can be reduced by a driver monitoring system that prevents the driver from becoming completely immersed in non-driving-related tasks. From these results, we propose a safe takeover strategy that provides meaningful guidance for the development of autonomous vehicles. Considering the subjective questionnaire evaluations of users, the strategy is expected to improve the acceptance and adoption of autonomous vehicles.
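
A possible shape for the takeover decision logic implied by these findings is sketched below; the branching order and the 7 s handover budget are illustrative assumptions, not the strategy evaluated in the study.

```python
from enum import Enum

class Action(Enum):
    REQUEST_TAKEOVER = 1
    MINIMUM_RISK_MANEUVER = 2
    EMERGENCY_MANEUVER = 3

def takeover_decision(time_budget_s: float, driver_attentive: bool,
                      collision_imminent: bool,
                      min_takeover_time_s: float = 7.0) -> Action:
    """Toy decision logic; thresholds and branching are illustrative assumptions."""
    if collision_imminent:
        return Action.EMERGENCY_MANEUVER
    if time_budget_s < min_takeover_time_s:
        # Too little time for a safe handover: skip the takeover request entirely.
        return Action.MINIMUM_RISK_MANEUVER
    # Enough time: issue the visual-auditory-haptic alarm and hand over only if the
    # driver monitoring system confirms the driver is attentive.
    return Action.REQUEST_TAKEOVER if driver_attentive else Action.MINIMUM_RISK_MANEUVER

print(takeover_decision(time_budget_s=10.0, driver_attentive=True, collision_imminent=False))
```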


2021 ◽  
Vol 34 (1) ◽  
Author(s):  
Ze Liu ◽  
Yingfeng Cai ◽  
Hai Wang ◽  
Long Chen

Abstract Radar and LiDAR are two environmental sensors commonly used in autonomous vehicles. LiDAR is accurate in determining objects’ positions but significantly less accurate than radar at measuring their velocities; conversely, radar is more accurate than LiDAR at measuring object velocities but less accurate at determining their positions, as it has a lower spatial resolution. In order to compensate for the low detection accuracy, incomplete target attributes, and poor environmental adaptability of a single sensor such as radar or LiDAR alone, this paper presents an effective method for high-precision detection and tracking of targets surrounding an autonomous vehicle. By employing the Unscented Kalman Filter, radar and LiDAR information is effectively fused to achieve high-precision detection of the position and speed of targets around the autonomous vehicle. Finally, real vehicle tests under various driving environment scenarios are carried out. The experimental results show that the proposed sensor fusion method can effectively detect and track targets around the vehicle with high accuracy. Compared with a single sensor, it has obvious advantages and can improve the intelligence level of autonomous cars.
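
As a simplified stand-in for the UKF fusion described above, the sketch below fuses LiDAR position and radar velocity measurements with a linear Kalman filter on a constant-velocity state; the measurement models and noise values are assumptions, not the paper's setup.

```python
import numpy as np

# Constant-velocity state [x, y, vx, vy]; LiDAR observes position, radar observes velocity.
dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
Q = np.eye(4) * 0.01                       # process noise (placeholder tuning)
H_lidar = np.array([[1, 0, 0, 0],
                    [0, 1, 0, 0]], dtype=float)
H_radar = np.array([[0, 0, 1, 0],
                    [0, 0, 0, 1]], dtype=float)
R_lidar = np.eye(2) * 0.05                 # LiDAR: precise position
R_radar = np.eye(2) * 0.10                 # radar: precise (Doppler-derived) velocity

x = np.zeros(4)
P = np.eye(4)

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, H, R):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# One fusion cycle: predict, then fold in whichever measurements arrived.
x, P = predict(x, P)
x, P = update(x, P, np.array([12.3, 4.1]), H_lidar, R_lidar)   # LiDAR position [m]
x, P = update(x, P, np.array([8.0, 0.2]), H_radar, R_radar)    # radar velocity [m/s]
print(x)
```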


Author(s):  
Wenhui Huang ◽  
Francesco Braghin ◽  
Stefano Arrigoni

Abstract Autonomous driving has become one of the hottest trends in the artificial intelligence area in recent years thanks to machine learning algorithms. However, most autonomous driving studies are still limited to discrete action spaces. In this study, we propose to implement the Deep Deterministic Policy Gradient (DDPG) algorithm for learning driving behavior over continuous actions. For this purpose, a driving simulator is employed which interfaces with the IPG CarMaker software, where the virtual environment and the dynamic vehicle model can be built. A human-in-the-loop procedure is performed in order to gather the data, and a neural network implemented in the Behavior Layer is trained to recognize two different scenarios: forward driving and stopping. Based on the scenario the agent is dealing with, the actions are learned and suggested by the DDPG algorithm. The experimental results show that the DDPG algorithm is able to learn the optimal policy with continuous actions reliably for both scenarios.
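
The DDPG update at the core of such a study follows a standard actor-critic pattern; the PyTorch sketch below is a generic illustration with assumed network sizes, a one-dimensional action, and a synthetic batch, not the setup used with the CarMaker simulator.

```python
import copy
import torch
import torch.nn as nn

obs_dim, act_dim, gamma, tau = 8, 1, 0.99, 0.005   # assumed dimensions and hyperparameters

actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))
actor_tgt, critic_tgt = copy.deepcopy(actor), copy.deepcopy(critic)
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_update(batch):
    s, a, r, s2, done = batch   # tensors sampled from a replay buffer
    with torch.no_grad():
        a2 = actor_tgt(s2)
        q_target = r + gamma * (1 - done) * critic_tgt(torch.cat([s2, a2], dim=1))
    # Critic: regress Q(s, a) toward the bootstrapped target.
    q = critic(torch.cat([s, a], dim=1))
    critic_loss = nn.functional.mse_loss(q, q_target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    # Actor: ascend the critic's estimate of Q(s, actor(s)).
    actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
    # Soft-update the target networks.
    for net, tgt in ((actor, actor_tgt), (critic, critic_tgt)):
        for p, pt in zip(net.parameters(), tgt.parameters()):
            pt.data.mul_(1 - tau).add_(tau * p.data)

# Synthetic batch just to exercise the update once.
N = 32
batch = (torch.randn(N, obs_dim), torch.rand(N, act_dim) * 2 - 1,
         torch.randn(N, 1), torch.randn(N, obs_dim), torch.zeros(N, 1))
ddpg_update(batch)
```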


Author(s):  
Sai Rajeev Devaragudi ◽  
Bo Chen

Abstract This paper presents a Model Predictive Control (MPC) approach for the longitudinal and lateral control of autonomous vehicles with a real-time local path planning algorithm. A heuristic graph search method (the A* algorithm) combined with piecewise Bezier curve generation is implemented for obstacle avoidance in autonomous driving applications. Constant time headway control is implemented for longitudinal motion to track lead vehicles and maintain a constant time gap. MPC is used to control the steering angle and the tractive force of the autonomous vehicle. Furthermore, a new method of developing Advanced Driver Assistance Systems (ADAS) algorithms and vehicle controllers using Model-In-the-Loop (MIL) testing is explored with the use of PreScan®. In PreScan®, various traffic scenarios are modeled and the sensor data are simulated using physics-based sensor models, which are fed to the controller for data processing and motion planning. Obstacle detection and collision avoidance are demonstrated using the presented MPC controller.
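
The constant time headway policy mentioned above can be written as a simple spacing controller; the headway, standstill distance, and gains below are placeholder values, not those used in the paper.

```python
def constant_time_headway_accel(
    gap: float,           # measured distance to the lead vehicle [m]
    v_ego: float,         # ego speed [m/s]
    v_lead: float,        # lead vehicle speed [m/s]
    h: float = 1.5,       # desired time headway [s] (placeholder value)
    d0: float = 5.0,      # standstill spacing [m]
    k_gap: float = 0.2,   # spacing-error gain (placeholder)
    k_vel: float = 0.5,   # relative-speed gain (placeholder)
) -> float:
    """Longitudinal acceleration command from a basic constant-time-headway policy."""
    desired_gap = d0 + h * v_ego
    spacing_error = gap - desired_gap
    relative_speed = v_lead - v_ego
    return k_gap * spacing_error + k_vel * relative_speed

# e.g. 30 m gap at 20 m/s behind a lead car at 18 m/s -> braking command of about -2 m/s^2
print(constant_time_headway_accel(gap=30.0, v_ego=20.0, v_lead=18.0))
```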


Author(s):  
DoHyun Daniel Yoon ◽  
Beshah Ayalew

An autonomous driving control system that incorporates notions from human-like social driving could facilitate an efficient integration of hybrid traffic in which fully autonomous vehicles (AVs) and human-operated vehicles (HOVs) are expected to coexist. This paper aims to develop such an autonomous vehicle control model using the social force concept, which was originally formulated for modeling the motion of pedestrians in crowds. In this paper, the social force concept is adapted to vehicular traffic, where the constituent navigation forces are defined as a target force, object forces, and lane forces. Then, a nonlinear model predictive control (NMPC) scheme is formulated to mimic the predictive planning behavior of social human drivers, who are considered to optimize the total social force they perceive. The performance of the proposed social force-based autonomous driving control scheme is demonstrated via simulations of an ego vehicle in multi-lane road scenarios. From adaptive cruise control (ACC) to smooth lane-changing behaviors, the proposed model provides flexible yet efficient driving control, enabling safe navigation in various situations while maintaining reasonable vehicle dynamics.
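
A toy version of the navigation-force sum (target, object, and lane forces) described above is sketched below; the functional forms and gains are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def social_force(ego_pos, target_pos, obstacles, lane_center_y,
                 k_target=1.0, k_obs=50.0, obs_range=20.0, k_lane=0.5):
    """Sum of toy target, object, and lane forces acting on the ego vehicle (2-D)."""
    # Target force: attraction toward the goal point.
    to_target = target_pos - ego_pos
    f_target = k_target * to_target / (np.linalg.norm(to_target) + 1e-6)
    # Object forces: exponential repulsion from nearby obstacles.
    f_obs = np.zeros(2)
    for obs in obstacles:
        away = ego_pos - obs
        dist = np.linalg.norm(away) + 1e-6
        f_obs += k_obs * np.exp(-dist / obs_range) * away / dist
    # Lane force: pull back toward the current lane center line.
    f_lane = np.array([0.0, k_lane * (lane_center_y - ego_pos[1])])
    return f_target + f_obs + f_lane

f = social_force(ego_pos=np.array([0.0, 0.2]),
                 target_pos=np.array([100.0, 0.0]),
                 obstacles=[np.array([30.0, 0.5])],
                 lane_center_y=0.0)
print(f)
```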


Author(s):  
Heungseok Chae ◽  
Yonghwan Jeong ◽  
Hojun Lee ◽  
Jongcherl Park ◽  
Kyongsu Yi

This article describes the design, implementation, and evaluation of an active lane change control algorithm for autonomous vehicles with human factor considerations. Lane changes need to be performed considering both driver acceptance and safety with respect to surrounding vehicles. Therefore, autonomous driving systems need to be designed based on an analysis of human driving behavior. In this article, manual driving characteristics are investigated using real-world driving test data. In lane change situations, interactions with surrounding vehicles were mainly investigated, and safety indices were developed through kinematic analysis. A safety-index-based lane change decision and control algorithm has been developed. In order to improve safety, stochastic predictions of both the ego vehicle and surrounding vehicles are conducted with consideration of sensor noise and model uncertainties. The desired driving mode is decided to cope with all lane changes on the highway. To obtain the desired reference and constraints, motion planning for lane changes has been designed taking stochastic prediction-based safety indices into account. A stochastic model predictive control with constraints has been adopted to determine the vehicle control inputs: the steering angle and the longitudinal acceleration. The proposed active lane change algorithm has been successfully implemented on an autonomous vehicle and evaluated via real-world driving tests. Safe and comfortable lane changes in high-speed driving on highways have been demonstrated using our autonomous test vehicle.
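
A kinematic safety index of the kind described above can be illustrated for a single gap in the target lane; the constant-speed prediction and fixed uncertainty margin below are simplifications of the paper's stochastic prediction, and all parameter values are assumptions.

```python
def lane_change_safety_index(gap: float, v_ego: float, v_other: float,
                             maneuver_time: float = 4.0,
                             min_clearance: float = 5.0,
                             sigma_pos: float = 1.0) -> float:
    """Toy safety index for the gap to one vehicle approaching in the target lane:
    1.0 means the predicted worst-case clearance stays comfortably above the minimum,
    0.0 means a predicted violation."""
    closing_speed = max(v_other - v_ego, 0.0)    # other vehicle approaching from behind
    # Constant-speed prediction over the maneuver, shrunk by a 3-sigma position margin.
    worst_gap = gap - closing_speed * maneuver_time - 3.0 * sigma_pos
    margin = worst_gap - min_clearance
    return min(max(margin / min_clearance, 0.0), 1.0)

# Rear vehicle 22 m behind, approaching 3 m/s faster than ego -> index 0.4 (marginal gap).
print(lane_change_safety_index(gap=22.0, v_ego=25.0, v_other=28.0))
```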

