A Personalized Behavior Learning System for Human-Like Longitudinal Speed Control of Autonomous Vehicles

Sensors ◽  
2019 ◽  
Vol 19 (17) ◽  
pp. 3672 ◽  
Author(s):  
Chao Lu ◽  
Jianwei Gong ◽  
Chen Lv ◽  
Xin Chen ◽  
Dongpu Cao ◽  
...  

As the main component of an autonomous driving system, the motion planner plays an essential role in safe and efficient driving. However, traditional motion planners cannot make full use of on-board sensing information and lack the ability to adapt efficiently to different driving scenes and the behaviors of different drivers. To overcome this limitation, a personalized behavior learning system (PBLS) is proposed in this paper to improve the performance of the traditional motion planner. The system is based on the neural reinforcement learning (NRL) technique, which can learn from human drivers online using on-board sensing information and realize human-like longitudinal speed control (LSC) through the learning-from-demonstration (LFD) paradigm. Under the LFD framework, the desired speed of human drivers is learned by PBLS and converted into low-level control commands by a proportional-integral-derivative (PID) controller. Experiments using a driving simulator and real driving data show that PBLS can adapt to different drivers by reproducing their driving behaviors for LSC in different scenes. Moreover, in a comparative experiment with a traditional adaptive cruise control (ACC) system, the proposed PBLS demonstrates superior performance in maintaining driving comfort and smoothness.
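The abstract gives no implementation details for the final control stage; below is a minimal Python sketch, assuming the desired speed produced by the learned behavior model is tracked by a discrete PID controller. The gains, time step, and throttle/brake interface are purely illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the final control stage: a PID controller tracks the
# desired speed learned by the behavior model. Gains and the command
# interface are illustrative assumptions.


class SpeedPID:
    def __init__(self, kp=0.8, ki=0.05, kd=0.1, dt=0.05):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, desired_speed, current_speed):
        """Return a longitudinal command in [-1, 1]; >0 throttle, <0 brake."""
        error = desired_speed - current_speed
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(-1.0, min(1.0, u))


# Usage: at each control cycle, feed the speed proposed by the learned model.
pid = SpeedPID()
command = pid.step(desired_speed=15.0, current_speed=13.2)  # speeds in m/s
```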

Sensors ◽  
2021 ◽  
Vol 21 (20) ◽  
pp. 6733
Author(s):  
Min-Joong Kim ◽  
Sung-Hun Yu ◽  
Tong-Hyun Kim ◽  
Joo-Uk Kim ◽  
Young-Min Kim

Today, a great deal of research on autonomous driving technology is being conducted, and various vehicles with autonomous driving functions, such as ACC (adaptive cruise control), are being released. An autonomous vehicle recognizes obstacles ahead by fusing data from various sensors, such as lidar, radar, and camera sensors. As the number of vehicles equipped with such autonomous driving functions increases, securing safety and reliability becomes a major issue. Recently, Mobileye proposed the RSS (responsibility-sensitive safety) model, a white-box mathematical model, to secure the safety of autonomous vehicles and clarify responsibility in the case of an accident. This paper considers a method of applying the RSS model to a variable focus function camera that can cover the recognition ranges of a lidar sensor and a radar sensor with a single camera sensor. The RSS model variables suited to the variable focus function camera were defined, their values were determined, and the safe distance for each velocity was derived by applying the determined values. In addition, taking into account the time required to acquire the data and the time required to change the focal length of the camera, it was confirmed that the response time obtained using the derived safe distance was valid.
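For reference, the published RSS longitudinal safe-distance rule that the paper instantiates can be sketched as below. The response time and acceleration bounds used here are illustrative assumptions, not the parameter values determined for the variable focus function camera.

```python
# Sketch of the RSS longitudinal safe-distance formula. Parameter values
# (response time rho and acceleration bounds) are illustrative assumptions.


def rss_safe_distance(v_rear, v_front, rho=0.5,
                      a_accel_max=3.0, a_brake_min=4.0, a_brake_max=8.0):
    """Minimum safe following distance (m) for a rear vehicle at v_rear (m/s)
    behind a front vehicle at v_front (m/s), per the RSS longitudinal rule."""
    v_rear_after_rho = v_rear + rho * a_accel_max
    d = (v_rear * rho
         + 0.5 * a_accel_max * rho ** 2
         + v_rear_after_rho ** 2 / (2.0 * a_brake_min)
         - v_front ** 2 / (2.0 * a_brake_max))
    return max(0.0, d)


# Example: safe distances for several ego velocities behind a stopped obstacle.
for v in (10, 20, 30):  # m/s
    print(v, round(rss_safe_distance(v, 0.0), 1), "m")
```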


Safety ◽  
2020 ◽  
Vol 6 (3) ◽  
pp. 34
Author(s):  
Shi Cao ◽  
Pinyan Tang ◽  
Xu Sun

A new concept in the interior design of autonomous vehicles is rotatable or swivelling seats that allow people sitting in the front row to rotate their seats and face backwards. In the current study, we used a take-over request task conducted in a fixed-base driving simulator to compare two conditions, driver front-facing and rear-facing. Thirty-six adult drivers participated in the experiment using a within-subject design with varied take-over time budgets. Take-over reaction time, remaining action time, crash, situation awareness and trust in automation were measured. Repeated measures ANOVA and a Generalized Linear Mixed Model were conducted to analyze the results. The results showed that the rear-facing configuration led to longer take-over reaction times (on average 1.56 s longer than front-facing, p < 0.001), but it caused drivers to intervene faster after they turned back their seat in comparison to the traditional front-facing configuration. Situation awareness in both the front-facing and rear-facing autonomous driving conditions was significantly lower (p < 0.001) than in the manual driving condition, but there was no significant difference between the two autonomous driving conditions (p = 1.000). There was no significant difference in automation trust between the front-facing and rear-facing conditions (p = 0.166). The current study showed that in a fixed-base simulator representing a conditionally autonomous car, when using the rear-facing driver seat configuration (where participants rotated the seat by themselves), participants had longer take-over reaction times overall due to physical turning, but they intervened faster after they turned back their seat for the take-over response in comparison to the traditional front-facing seat configuration. This behavioral change might come at the cost of reduced take-over response quality. The crash rate was not significantly different in the current laboratory study (overall, the average crash rate was 11%). A limitation of the current study is that the driving simulator does not support other measures of take-over request (TOR) quality, such as minimal time to collision and maximum magnitude of acceleration. Based on the current study, future studies are needed to further examine the effect of rotatable seat configurations, with more detailed analysis of both TOR speed and quality measures and in real-world driving conditions, for a better understanding of their safety implications.
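As a purely hypothetical illustration of the kind of repeated-measures analysis reported (not the authors' code), a linear mixed model of take-over reaction time with participant as a random effect could be fit as below; the file name and column names are assumptions.

```python
# Hypothetical sketch of a mixed-model analysis of take-over reaction time:
# seating configuration and time budget as fixed effects, participant as a
# random grouping factor. Data layout and column names are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("takeover_trials.csv")  # assumed long-format trial data
model = smf.mixedlm("reaction_time ~ facing * time_budget",
                    data, groups=data["participant"])
result = model.fit()
print(result.summary())
```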


2015 ◽  
Vol 27 (6) ◽  
pp. 660-670 ◽  
Author(s):  
Udara Eshan Manawadu ◽  
Masaaki Ishikawa ◽  
Mitsuhiro Kamezaki ◽  
Shigeki Sugano ◽  
...  

<div class=""abs_img""><img src=""[disp_template_path]/JRM/abst-image/00270006/08.jpg"" width=""300"" /> Driving simulator</div>Intelligent passenger vehicles with autonomous capabilities will be commonplace on our roads in the near future. These vehicles will reshape the existing relationship between the driver and vehicle. Therefore, to create a new type of rewarding relationship, it is important to analyze when drivers prefer autonomous vehicles to manually-driven (conventional) vehicles. This paper documents a driving simulator-based study conducted to identify the preferences and individual driving experiences of novice and experienced drivers of autonomous and conventional vehicles under different traffic and road conditions. We first developed a simplified driving simulator that could connect to different driver-vehicle interfaces (DVI). We then created virtual environments consisting of scenarios and events that drivers encounter in real-world driving, and we implemented fully autonomous driving. We then conducted experiments to clarify how the autonomous driving experience differed for the two groups. The results showed that experienced drivers opt for conventional driving overall, mainly due to the flexibility and driving pleasure it offers, while novices tend to prefer autonomous driving due to its inherent ease and safety. A further analysis indicated that drivers preferred to use both autonomous and conventional driving methods interchangeably, depending on the road and traffic conditions.


Electronics ◽  
2021 ◽  
Vol 10 (19) ◽  
pp. 2405
Author(s):  
Heung-Gu Lee ◽  
Dong-Hyun Kang ◽  
Deok-Hwan Kim

Currently, existing vehicle-centric semi-autonomous driving modules do not consider the driver’s situation and emotions. In an autonomous driving environment, when changing to manual driving, a human–machine interface and advanced driver assistance systems (ADAS) are essential to assist vehicle driving. This study proposes a human–machine interface that considers the driver’s situation and emotions to enhance the ADAS. A 1D convolutional neural network model based on multimodal bio-signals is used and applied to control semi-autonomous vehicles. The feasibility of semi-autonomous driving is confirmed by classifying four driving scenarios and controlling the speed of the vehicle. In the experiment, using a driving simulator and hardware-in-the-loop simulation equipment, we confirm that the response time of the driving assistance system is 351.75 ms and that the system recognizes four scenarios and eight emotions from bio-signal data.
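The abstract does not specify the network architecture; the following is a minimal sketch of a 1D CNN over windows of multimodal bio-signals, with the channel count, window length, and layer sizes chosen purely for illustration rather than taken from the paper.

```python
# Minimal sketch (not the authors' architecture): a 1D CNN that maps windows
# of multimodal bio-signals to one of four driving scenarios.
import torch
import torch.nn as nn


class BioSignal1DCNN(nn.Module):
    def __init__(self, in_channels=4, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):           # x: (batch, channels, time)
        h = self.features(x).squeeze(-1)
        return self.classifier(h)   # scenario logits


# Example: a batch of 8 windows, 4 bio-signal channels, 256 samples each.
logits = BioSignal1DCNN()(torch.randn(8, 4, 256))
print(logits.shape)  # torch.Size([8, 4])
```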


2019 ◽  
Vol 07 (03) ◽  
pp. 183-194
Author(s):  
Yoan Espada ◽  
Nicolas Cuperlier ◽  
Guillaume Bresson ◽  
Olivier Romain

The navigation of autonomous vehicles is confronted with the problem of building an efficient place recognition system that can handle outdoor environments over the long run. Current Simultaneous Localization and Mapping (SLAM) and place recognition solutions have limitations that prevent them from achieving the performance needed for autonomous driving. This paper suggests handling the problem from another perspective by taking inspiration from biological models. We propose a neural architecture for the localization of an autonomous vehicle based on a neurorobotic model of the place cells (PC) found in the hippocampus of mammals. This model is based on an attentional mechanism and takes into account only visual information from a mono-camera and orientation information to self-localize. It has the advantage of working with a low-resolution camera without the need for calibration. It also does not need a long learning phase, as it uses a one-shot learning system. Such a localization model has already been integrated into a robot control architecture that allows successful navigation in both indoor and small outdoor environments. The contribution of this paper is to study how the model handles the change of scale by evaluating its performance over much larger outdoor environments. Eight experiments using real data (images and orientation) recorded by a moving vehicle are studied (from the KITTI odometry datasets and from datasets acquired with VEDECOM vehicles). Results show the strong adaptability to different kinds of environments of this bio-inspired model, which was primarily developed for indoor navigation.
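As a rough illustrative analogue (not the authors' neural architecture), one-shot place learning can be sketched as storing a single visual signature and heading per visited place and recognizing places by signature similarity gated by orientation agreement:

```python
# Illustrative analogue of one-shot place learning: each place stores one
# visual signature and the heading at which it was seen; recognition is
# nearest-signature matching modulated by orientation agreement.
import numpy as np


class OneShotPlaceMemory:
    def __init__(self, sim_threshold=0.8):
        self.signatures, self.headings = [], []
        self.sim_threshold = sim_threshold

    def _similarity(self, sig, heading):
        vis = np.array([np.dot(sig, s) / (np.linalg.norm(sig) * np.linalg.norm(s))
                        for s in self.signatures])
        ori = np.cos(np.radians(heading - np.array(self.headings)))
        return vis * np.clip(ori, 0.0, 1.0)

    def observe(self, sig, heading):
        """Return the index of the recognized place, learning a new one in a
        single shot if no stored place is similar enough."""
        sig = np.asarray(sig, dtype=float)
        if self.signatures:
            scores = self._similarity(sig, heading)
            if scores.max() >= self.sim_threshold:
                return int(scores.argmax())
        self.signatures.append(sig)
        self.headings.append(float(heading))
        return len(self.signatures) - 1
```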


Author(s):  
DoHyun Daniel Yoon ◽  
Beshah Ayalew

An autonomous driving control system that incorporates notions from human-like social driving could facilitate the efficient integration of hybrid traffic, where fully autonomous vehicles (AVs) and human-operated vehicles (HOVs) are expected to coexist. This paper aims to develop such an autonomous vehicle control model using the social force concept, which was originally formulated for modeling the motion of pedestrians in crowds. In this paper, the social force concept is adapted to vehicular traffic, where the constituent navigation forces are defined as a target force, object forces, and lane forces. Then, a nonlinear model predictive control (NMPC) scheme is formulated to mimic the predictive planning behavior of social human drivers, who are considered to optimize the total social force they perceive. The performance of the proposed social force-based autonomous driving control scheme is demonstrated via simulations of an ego vehicle in multi-lane road scenarios. From adaptive cruise control (ACC) to smooth lane-changing behaviors, the proposed model provides flexible yet efficient driving control, enabling safe navigation in various situations while maintaining reasonable vehicle dynamics.
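A minimal sketch of the three navigation force terms named in the abstract is given below. The gains and decay constants are illustrative assumptions, and the NMPC layer that optimizes the perceived total force over a prediction horizon is omitted.

```python
# Sketch of the constituent navigation forces: target force (goal attraction),
# object forces (repulsion from surrounding vehicles), and a lane force
# (pull toward the lane center). All coefficients are illustrative.
import numpy as np


def social_force(pos, target, obstacles, lane_center_y,
                 k_target=1.0, k_obj=5.0, obj_range=10.0, k_lane=0.5):
    pos = np.asarray(pos, dtype=float)
    target = np.asarray(target, dtype=float)
    # Target force: attraction toward the goal point.
    to_target = target - pos
    f_target = k_target * to_target / (np.linalg.norm(to_target) + 1e-6)
    # Object forces: exponential repulsion from each surrounding vehicle.
    f_obj = np.zeros(2)
    for obs in obstacles:
        away = pos - np.asarray(obs, dtype=float)
        dist = np.linalg.norm(away) + 1e-6
        f_obj += k_obj * np.exp(-dist / obj_range) * away / dist
    # Lane force: pull back toward the current lane center line.
    f_lane = np.array([0.0, k_lane * (lane_center_y - pos[1])])
    return f_target + f_obj + f_lane


print(social_force(pos=[0, 0], target=[100, 0],
                   obstacles=[[20, 0.5]], lane_center_y=0.0))
```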


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3124
Author(s):  
Hyunjin Bae ◽  
Gu Lee ◽  
Jaeseung Yang ◽  
Gwanjun Shin ◽  
Gyeungho Choi ◽  
...  

In autonomous driving, using a variety of sensors to recognize preceding vehicles at middle and long distances helps to improve driving performance and to develop various functions. However, if only LiDAR or cameras are used in the recognition stage, it is difficult to obtain the necessary data due to the limitations of each sensor. In this paper, we propose a method of converting vision-tracked data into bird’s-eye-view (BEV) coordinates using the equation that projects LiDAR points onto an image, and a method of fusing the LiDAR and vision-tracked data. The effectiveness of the proposed method is shown by the results of detecting the closest in-path vehicle (CIPV) in various situations. In addition, when the fusion result was used in experiments with the Euro NCAP autonomous emergency braking (AEB) test protocol, AEB performance was improved through better perception than when using LiDAR alone. The performance of the proposed method was verified through actual vehicle tests in various scenarios. Consequently, the proposed sensor fusion method significantly improved the adaptive cruise control (ACC) function in autonomous maneuvering. We expect that this improvement in perception performance will contribute to improving the overall stability of ACC.
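The projection step underlying the fusion can be sketched with the standard pinhole relation that maps a LiDAR point into the image through the extrinsic and intrinsic matrices; the same relation, combined with a ground-plane assumption, lets a vision-tracked object be placed in BEV coordinates. The calibration matrices below are placeholders, not the vehicle's actual LiDAR-camera calibration.

```python
# Sketch of LiDAR-to-image projection with placeholder calibration matrices.
import numpy as np

K = np.array([[700.0, 0.0, 640.0],     # camera intrinsics (placeholder)
              [0.0, 700.0, 360.0],
              [0.0, 0.0, 1.0]])
T_cam_lidar = np.eye(4)                 # LiDAR -> camera extrinsics (placeholder)


def lidar_to_image(p_lidar):
    """Project a 3D LiDAR point (x, y, z) onto image pixel coordinates (u, v)."""
    p_cam = T_cam_lidar @ np.append(p_lidar, 1.0)   # homogeneous transform
    uvw = K @ p_cam[:3]                              # perspective projection
    return uvw[:2] / uvw[2]


print(lidar_to_image(np.array([0.5, 0.0, 20.0])))
```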


Author(s):  
Timothy J. Wright ◽  
William J. Horrey ◽  
Mary F. Lesch ◽  
Md Mahmudur Rahman

Drivers’ trust in automation will likely determine the extent to which autonomous and semi-autonomous vehicles are adopted and, once adopted, used properly. Unfortunately, previous studies have typically utilized overt subjective measures to determine an individual’s level of trust in automation. The current study aims to evaluate a covert behavioral measure of trust based on drivers’ body (head, hand, and foot) movements as they experience a simulated autonomous system. Videos of drivers interacting with an autonomous driving system in a driving simulator were coded. Body movement counts and average durations were derived from this coding, and these data were compared across quartile rankings (based on self-reported trust) to examine the body movements’ sensitivity to drivers’ level of trust. The results suggest that body movements are not sensitive to individual differences in reported trust. Future work should examine the utility of this covert behavioral metric by considering situational differences.


2016 ◽  
Vol 2016 ◽  
pp. 1-15 ◽  
Author(s):  
Weilong Song ◽  
Guangming Xiong ◽  
Huiyan Chen

Autonomous vehicles need to perform socially accepted behaviors in complex urban scenarios that include human-driven vehicles with uncertain intentions. This leads to many difficult decision-making problems, such as deciding on a lane-change maneuver and generating policies for passing through intersections. In this paper, we propose an intention-aware decision-making algorithm to solve this challenging problem in an uncontrolled intersection scenario. To consider uncertain intentions, we first develop a continuous hidden Markov model to predict both the high-level motion intention (e.g., turn right, turn left, and go straight) and the low-level interaction intentions (e.g., the yield status of related vehicles). A partially observable Markov decision process (POMDP) is then built to model the general decision-making framework. Due to the difficulty of solving the POMDP, we use appropriate assumptions and approximations to simplify the problem. A human-like policy generation mechanism is used to generate the possible candidates. A future motion model of human-driven vehicles is proposed for the state transition process, and the intention is updated at each prediction time step. The reward function, which considers driving safety, traffic laws, time efficiency, and so forth, is designed to calculate the optimal policy. Finally, our method is evaluated in simulation with PreScan software and a driving simulator. The experiments show that our method can lead an autonomous vehicle through uncontrolled intersections safely and efficiently.
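As an illustrative sketch (not the paper's learned model), intention prediction with one continuous-emission HMM per maneuver class can be written as a forward-algorithm likelihood comparison; the transition matrices and Gaussian emission parameters below are placeholders, whereas the paper learns them from data.

```python
# Sketch of maneuver-intention classification: score an observed feature
# sequence (here, yaw rate) under one Gaussian-emission HMM per intention
# and pick the most likely class. All parameters are placeholders.
import numpy as np


def gaussian_pdf(x, mean, var):
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)


def hmm_log_likelihood(obs, pi, A, means, variances):
    """Scaled forward algorithm for a 1D Gaussian-emission HMM."""
    alpha = pi * gaussian_pdf(obs[0], means, variances)
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * gaussian_pdf(o, means, variances)
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik


# Placeholder per-intention models over yaw-rate observations (rad/s):
# (initial distribution, transition matrix, state means, state variances).
models = {
    "go straight": (np.array([1.0, 0.0]), np.array([[0.9, 0.1], [0.1, 0.9]]),
                    np.array([0.0, 0.0]), np.array([0.01, 0.01])),
    "turn left":   (np.array([1.0, 0.0]), np.array([[0.8, 0.2], [0.0, 1.0]]),
                    np.array([0.0, 0.3]), np.array([0.01, 0.02])),
    "turn right":  (np.array([1.0, 0.0]), np.array([[0.8, 0.2], [0.0, 1.0]]),
                    np.array([0.0, -0.3]), np.array([0.01, 0.02])),
}

yaw_rates = np.array([0.02, 0.05, 0.15, 0.25, 0.30])  # observed sequence
scores = {m: hmm_log_likelihood(yaw_rates, *p) for m, p in models.items()}
print(max(scores, key=scores.get))  # expected: "turn left"
```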


2017 ◽  
Vol 9 (2) ◽  
pp. 58-74 ◽  
Author(s):  
Marcel Walch ◽  
Kristin Mühl ◽  
Martin Baumann ◽  
Michael Weber

Autonomous vehicles will need de-escalation strategies to compensate when they reach system limitations. Car-driver handovers can be considered one possible method of dealing with system boundaries. The authors suggest a bimodal (auditory and visual) handover assistant based on user preferences and design principles for automated systems. They conducted a driving simulator study with 30 participants to investigate drivers' take-over performance. In particular, the authors examined the effect of different warning conditions (a take-over request only, with 4- and 6-second time budgets, vs. an additional pre-cue stating why the take-over request will follow) in different hazardous situations. Their results indicated that all warning conditions were feasible in all situations, although the short time budget (4 seconds) was rather challenging and led to less safe performance. An alert ahead of a take-over request had the positive effect that participants took over and intervened earlier relative to the appearance of the take-over request. Overall, the authors' evaluation showed that bimodal warnings composed of textual and iconographic visual displays, accompanied by alerting jingles and spoken messages, are a promising approach for alerting drivers and asking them to take over.

