From Neurorobotic Localization to Autonomous Vehicles

2019 ◽  
Vol 07 (03) ◽  
pp. 183-194
Author(s):  
Yoan Espada ◽  
Nicolas Cuperlier ◽  
Guillaume Bresson ◽  
Olivier Romain

The navigation of autonomous vehicles depends on an efficient place recognition system that can handle outdoor environments over the long run. Current Simultaneous Localization and Mapping (SLAM) and place recognition solutions have limitations that prevent them from reaching the performance needed for autonomous driving. This paper approaches the problem from another perspective by taking inspiration from biological models. We propose a neural architecture for the localization of an autonomous vehicle based on a neurorobotic model of the place cells (PC) found in the hippocampus of mammals. The model relies on an attentional mechanism and uses only visual information from a monocular camera and orientation information to self-localize. It has the advantage of working with a low-resolution camera without calibration, and it does not require a long learning phase, as it uses a one-shot learning system. This localization model has already been integrated into a robot control architecture that allows successful navigation in both indoor and small outdoor environments. The contribution of this paper is to study how the model handles the change of scale by evaluating its performance over much larger outdoor environments. Eight experiments using real data (images and orientation) recorded by a moving vehicle are studied, coming from the KITTI odometry datasets and from datasets acquired with VEDECOM vehicles. The results show the strong adaptability of this bio-inspired model, originally developed for indoor navigation, to different kinds of environments.
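As a rough, hedged illustration of the one-shot place-cell idea described above (the class, threshold, and similarity measure below are hypothetical and not taken from the paper), a place cell can be recruited the first time a visual/orientation signature is seen and later re-activated by similarity matching:

```python
import numpy as np

class PlaceCellMap:
    """Minimal sketch of one-shot place-cell learning: each cell stores one
    signature; recognition is a similarity match against all stored cells."""

    def __init__(self, recruit_threshold=0.8):
        self.signatures = []          # one stored signature per place cell
        self.recruit_threshold = recruit_threshold

    def activations(self, signature):
        # Cosine similarity between the current signature and each stored one.
        sig = signature / (np.linalg.norm(signature) + 1e-9)
        return np.array([sig @ (s / (np.linalg.norm(s) + 1e-9))
                         for s in self.signatures])

    def localize(self, signature):
        """Return the index of the winning place cell, recruiting a new cell
        (one-shot learning) when no stored cell matches well enough."""
        if self.signatures:
            act = self.activations(signature)
            best = int(np.argmax(act))
            if act[best] >= self.recruit_threshold:
                return best
        self.signatures.append(np.asarray(signature, dtype=float))
        return len(self.signatures) - 1
```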

Sensors ◽  
2021 ◽  
Vol 21 (20) ◽  
pp. 6733
Author(s):  
Min-Joong Kim ◽  
Sung-Hun Yu ◽  
Tong-Hyun Kim ◽  
Joo-Uk Kim ◽  
Young-Min Kim

Today, a lot of research on autonomous driving technology is being conducted, and various vehicles with autonomous driving functions, such as adaptive cruise control (ACC), are being released. The autonomous vehicle recognizes obstacles ahead by fusing data from various sensors, such as lidar, radar, and camera sensors. As the number of vehicles equipped with such autonomous driving functions increases, securing safety and reliability becomes a major issue. Recently, Mobileye proposed the responsibility-sensitive safety (RSS) model, a white-box mathematical model, to secure the safety of autonomous vehicles and clarify responsibility in the case of an accident. In this paper, a method of applying the RSS model to a variable focus function camera, which can cover the recognition range of a lidar sensor and a radar sensor with a single camera sensor, is considered. The variables of the RSS model suitable for the variable focus function camera were defined, their values were determined, and the safe distances for each velocity were derived by applying those values. In addition, after accounting for the time required to obtain the data and the time required to change the focal length of the camera, it was confirmed that the response time obtained using the derived safe distance is valid.
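The RSS model's minimum safe longitudinal distance has a published closed form; the sketch below implements that formula, but the example parameter values are placeholders and not the values derived in the paper for the variable focus function camera:

```python
def rss_safe_longitudinal_distance(v_rear, v_front, rho,
                                   a_max_accel, b_min_brake, b_max_brake):
    """Responsibility-Sensitive Safety (RSS) minimum safe following distance.

    v_rear, v_front : speeds of the rear (ego) and front vehicle [m/s]
    rho             : response time of the rear vehicle [s]
    a_max_accel     : max acceleration of the rear vehicle during rho [m/s^2]
    b_min_brake     : minimum braking the rear vehicle then applies [m/s^2]
    b_max_brake     : maximum braking the front vehicle may apply [m/s^2]
    """
    v_rear_after = v_rear + rho * a_max_accel
    d = (v_rear * rho
         + 0.5 * a_max_accel * rho ** 2
         + v_rear_after ** 2 / (2.0 * b_min_brake)
         - v_front ** 2 / (2.0 * b_max_brake))
    return max(d, 0.0)

# Example with placeholder values: both vehicles at 20 m/s, 0.5 s response time.
print(rss_safe_longitudinal_distance(20.0, 20.0, 0.5, 3.0, 4.0, 8.0))
```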


Sensors ◽  
2019 ◽  
Vol 19 (17) ◽  
pp. 3672 ◽  
Author(s):  
Chao Lu ◽  
Jianwei Gong ◽  
Chen Lv ◽  
Xin Chen ◽  
Dongpu Cao ◽  
...  

As the main component of an autonomous driving system, the motion planner plays an essential role in safe and efficient driving. However, traditional motion planners cannot make full use of the on-board sensing information and lack the ability to efficiently adapt to different driving scenes and the behaviors of different drivers. To overcome this limitation, a personalized behavior learning system (PBLS) is proposed in this paper to improve the performance of the traditional motion planner. The system is based on the neural reinforcement learning (NRL) technique, which can learn from human drivers online using the on-board sensing information and realize human-like longitudinal speed control (LSC) through the learning from demonstration (LFD) paradigm. Under the LFD framework, the desired speed of human drivers can be learned by PBLS and converted to low-level control commands by a proportional-integral-derivative (PID) controller. Experiments using a driving simulator and real driving data show that PBLS can adapt to different drivers by reproducing their driving behaviors for LSC in different scenes. Moreover, in a comparative experiment with a traditional adaptive cruise control (ACC) system, the proposed PBLS demonstrates superior performance in maintaining driving comfort and smoothness.
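As a hedged sketch of the last step of that pipeline (the gains and clipping range are placeholder assumptions, not PBLS values), a PID controller can turn a learned desired speed into a longitudinal command:

```python
class SpeedPID:
    """Minimal PID speed tracker: converts a desired speed (e.g. one learned
    from a human driver) into a longitudinal command. Gains are placeholders."""

    def __init__(self, kp=0.5, ki=0.05, kd=0.1):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, desired_speed, current_speed, dt):
        error = desired_speed - current_speed
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        # Positive output -> throttle, negative -> brake (clipped to [-1, 1]).
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(-1.0, min(1.0, u))
```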


2019 ◽  
Vol 2019 ◽  
pp. 1-16 ◽  
Author(s):  
Marcin Luckner

Machine learning techniques are a standard approach in spam detection. Their quality depends on the quality of the learning set, and when the set is out of date, the quality of classification falls rapidly. The most popular public web spam dataset that can be used to train a spam detector, WEBSPAM-UK2007, is over ten years old. Therefore, there is a place for a lifelong machine learning system that can replace detectors based on a static learning set. In this paper, we propose a novel web spam recognition system. The system automatically rebuilds the learning set to avoid classification based on outdated data. Thanks to a built-in automatic selection of the active classifier, the system attains productive accuracy very quickly despite a limited learning set. The learning set is rebuilt automatically using external data from spam traps and popular web services. A test on real data from Quora, Reddit, and Stack Overflow confirmed the high recognition quality: both the average accuracy and the F-measure reached 0.98 in semiautomatic mode and 0.96 in fully automatic mode.
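A minimal sketch of the automatic active-classifier selection idea, assuming a scikit-learn-style candidate pool and cross-validated accuracy as the selection criterion (a simplification, not the paper's exact procedure):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def select_active_classifier(X, y):
    """Pick the classifier that performs best on the current learning set;
    a simplified stand-in for automatic active-classifier selection."""
    candidates = {
        "logreg": LogisticRegression(max_iter=1000),
        "forest": RandomForestClassifier(n_estimators=100),
        "svm": LinearSVC(),
    }
    scores = {name: cross_val_score(clf, X, y, cv=5).mean()
              for name, clf in candidates.items()}
    best = max(scores, key=scores.get)
    return candidates[best].fit(X, y), best, scores
```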


2019 ◽  
Vol 9 (23) ◽  
pp. 5126 ◽  
Author(s):  
Betz ◽  
Heilmeier ◽  
Wischnewski ◽  
Stahl ◽  
Lienkamp

Since 2017, a research team from the Technical University of Munich has developed a software stack for autonomous driving. The software was used to participate in the Roborace Season Alpha Championship, which aims to have autonomous race cars with different software stacks compete against each other. In May 2019, during a software test in Modena, Italy, the greatest danger in autonomous driving became reality: a minor change in environmental influences led an extensively tested software stack to crash into a barrier at speed. Crashes with autonomous vehicles have happened before, but a detailed explanation of why the software failed and which part of the software was not working correctly is missing from research articles. In this paper, we present a general method that can be used to display an autonomous vehicle disengagement and explain in detail what happened. This method is then used to display and explain the crash in Modena. First, a brief introduction to the modular software stack used at the Modena event, consisting of three individual parts (perception, planning, and control), is given. Furthermore, the circumstances causing the crash are elaborated in detail. By presenting and explaining in detail which software part failed and contributed to the crash, we can discuss further software improvements. As a result, we present the necessary functions that need to be integrated into an autonomous driving software stack to prevent such vehicle behavior from causing a fatal crash. In addition, we suggest enhancing the current disengagement reports for autonomous driving with a detailed explanation of the software part that caused the disengagement. In the outlook of this paper, we present two additional software functions for assessing the tire and control performance of the vehicle to enhance the autonomous driving software.


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1131
Author(s):  
Eduardo Sánchez Morales ◽  
Julian Dauth ◽  
Bertold Huber ◽  
Andrés García Higuera ◽  
Michael Botsch

A current trend in automotive research is autonomous driving. For the proper testing and validation of automated driving functions, a reference vehicle state is required. Global Navigation Satellite Systems (GNSS) are useful for vehicle automation because of their practicality and accuracy. However, there are situations where the satellite signal is absent or unusable. This research work presents a methodology that addresses those situations, thus largely reducing the dependency of Inertial Navigation Systems (INSs) on satellite navigation. The proposed methodology includes (1) a standstill recognition based on machine learning, (2) a detailed mathematical description of the horizontation of inertial measurements, (3) sensor fusion by means of statistical filtering, (4) an outlier detection for correction data, (5) a drift detector, and (6) a novel LiDAR-based Positioning Method (LbPM) for indoor navigation. The robustness and accuracy of the methodology are validated with a state-of-the-art INS with Real-Time Kinematic (RTK) correction data. The results obtained show a great improvement in the accuracy of vehicle state estimation under adverse driving conditions, such as when the correction data are corrupted, when there are extended periods with no correction data, and in the case of drifting. The proposed LbPM method achieves an accuracy closely resembling that of a system with RTK.
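As an illustrative sketch of step (1), standstill recognition based on machine learning, one could classify short IMU windows using variance-style features; the feature set and classifier below are assumptions, not the paper's design:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def imu_window_features(accel, gyro):
    """Features from a short window of IMU samples (Nx3 arrays): standstill
    windows show very low variance and near-constant signal norms."""
    return np.hstack([
        accel.var(axis=0), gyro.var(axis=0),
        np.linalg.norm(accel, axis=1).std(),
        np.linalg.norm(gyro, axis=1).mean(),
    ])

# Hypothetical training data: windows labeled 1 (standstill) or 0 (moving).
# X = np.array([imu_window_features(a, g) for a, g in labeled_windows])
# clf = RandomForestClassifier(n_estimators=100).fit(X, y)
# is_standstill = clf.predict([imu_window_features(accel_now, gyro_now)])[0]
```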


2022 ◽  
Vol 2022 ◽  
pp. 1-11
Author(s):  
Ying Zhuo ◽  
Lan Yan ◽  
Wenbo Zheng ◽  
Yutian Zhang ◽  
Chao Gou

Autonomous driving has become a prevalent research topic in recent years, attracting the attention of many universities and commercial companies. As human drivers rely on visual information to discern road conditions and make driving decisions, autonomous driving calls for vision systems such as vehicle detection models. These vision models require a large amount of labeled data, while collecting and annotating real traffic data is time-consuming and costly. Therefore, we present a novel vehicle detection framework based on parallel vision to tackle this issue, using specially designed virtual data to help train the vehicle detection model. We also propose a method to construct large-scale artificial scenes and generate virtual data for vision-based autonomous driving schemes. Experimental results verify the effectiveness of the proposed framework, demonstrating that the combination of virtual and real data trains a better vehicle detection model than real data alone.
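A minimal sketch of the training-data side of this idea, assuming PyTorch-style datasets for the real and virtual images (the dataset classes named in the comments are hypothetical placeholders):

```python
from torch.utils.data import ConcatDataset, DataLoader

# Hypothetical dataset objects; any Dataset yielding (image, targets) works.
# real_ds    = RealTrafficDataset(...)     # annotated real traffic images
# virtual_ds = VirtualSceneDataset(...)    # auto-labeled images from artificial scenes

def make_mixed_loader(real_ds, virtual_ds, batch_size=16):
    """Train a detector on the union of real and virtual data, the core idea
    behind parallel-vision training (simplified sketch)."""
    mixed = ConcatDataset([real_ds, virtual_ds])
    return DataLoader(mixed, batch_size=batch_size, shuffle=True,
                      collate_fn=lambda batch: tuple(zip(*batch)))
```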


2021 ◽  
Vol 34 (1) ◽  
Author(s):  
Ze Liu ◽  
Yingfeng Cai ◽  
Hai Wang ◽  
Long Chen

Abstract Radar and LiDAR are two environmental sensors commonly used in autonomous vehicles. LiDARs are accurate in determining objects' positions but significantly less accurate than radars at measuring their velocities; radars, in turn, measure object velocities more accurately but determine positions less accurately, as they have a lower spatial resolution. To compensate for the low detection accuracy, incomplete target attributes, and poor environmental adaptability of single sensors such as radar and LiDAR, this paper proposes an effective method for high-precision detection and tracking of targets surrounding an autonomous vehicle. By employing the Unscented Kalman Filter, radar and LiDAR information is effectively fused to achieve high-precision detection of the position and speed of targets around the autonomous vehicle. Finally, real-vehicle tests under various driving scenarios are carried out. The experimental results show that the proposed sensor fusion method can effectively detect and track targets around the vehicle with high accuracy. Compared with a single sensor, it has obvious advantages and can improve the intelligence level of autonomous cars.
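The paper fuses the two sensors with an Unscented Kalman Filter; as a simplified, hedged stand-in, the sketch below uses a linear Kalman filter with a constant-velocity state that sequentially fuses a LiDAR position measurement and a radar velocity measurement (the state layout and noise values are assumptions, not the paper's):

```python
import numpy as np

class CVKalmanFusion:
    """Simplified linear stand-in for the paper's UKF: constant-velocity state
    [x, y, vx, vy], fusing LiDAR position and radar velocity measurements."""

    def __init__(self, dt=0.1):
        self.x = np.zeros(4)                      # state estimate
        self.P = np.eye(4) * 10.0                 # state covariance
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt          # constant-velocity motion model
        self.Q = np.eye(4) * 0.1                  # process noise (placeholder)
        self.H_lidar = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
        self.H_radar = np.array([[0, 0, 1, 0], [0, 0, 0, 1]], float)
        self.R_lidar = np.eye(2) * 0.05           # LiDAR: precise position
        self.R_radar = np.eye(2) * 0.05           # radar: precise velocity

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def _update(self, z, H, R):
        y = z - H @ self.x                        # innovation
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)       # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ H) @ self.P

    def update_lidar(self, pos_xy):
        self._update(np.asarray(pos_xy, float), self.H_lidar, self.R_lidar)

    def update_radar(self, vel_xy):
        self._update(np.asarray(vel_xy, float), self.H_radar, self.R_radar)
```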


Author(s):  
Sai Rajeev Devaragudi ◽  
Bo Chen

Abstract This paper presents a Model Predictive Control (MPC) approach for the longitudinal and lateral control of autonomous vehicles with a real-time local path planning algorithm. A heuristic graph search method (the A* algorithm) combined with piecewise Bezier curve generation is implemented for obstacle avoidance in autonomous driving applications. Constant time headway control is implemented for longitudinal motion to track lead vehicles and maintain a constant time gap. MPC is used to control the steering angle and the tractive force of the autonomous vehicle. Furthermore, a new method of developing Advanced Driver Assistance Systems (ADAS) algorithms and vehicle controllers using Model-in-the-Loop (MIL) testing is explored with the use of PreScan®. With PreScan®, various traffic scenarios are modeled and the sensor data are simulated using physics-based sensor models, which are fed to the controller for data processing and motion planning. Obstacle detection and collision avoidance are demonstrated using the presented MPC controller.
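Two of the building blocks mentioned above have compact closed forms; the sketch below shows a constant time headway spacing error and a cubic Bezier segment evaluation, with illustrative parameter values rather than those used in the paper:

```python
def time_headway_spacing_error(gap, ego_speed, standstill_gap=5.0, headway=1.5):
    """Constant time headway policy (illustrative parameter values): the
    desired gap grows linearly with ego speed, and the controller regulates
    the error between the measured and desired gap toward zero."""
    desired_gap = standstill_gap + headway * ego_speed
    return gap - desired_gap   # positive: farther than desired, may speed up

def cubic_bezier(p0, p1, p2, p3, t):
    """Point on a cubic Bezier segment (t in [0, 1]), the primitive used for
    piecewise path smoothing; points can be floats or numpy arrays."""
    s = 1.0 - t
    return s**3 * p0 + 3 * s**2 * t * p1 + 3 * s * t**2 * p2 + t**3 * p3
```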


Author(s):  
DoHyun Daniel Yoon ◽  
Beshah Ayalew

An autonomous driving control system that incorporates notions from human-like social driving could facilitate an efficient integration of hybrid traffic where fully autonomous vehicles (AVs) and human-operated vehicles (HOVs) are expected to coexist. This paper aims to develop such an autonomous vehicle control model using social force concepts, which were originally formulated for modeling the motion of pedestrians in crowds. In this paper, the social force concept is adapted to vehicular traffic, where the constituent navigation forces are defined as a target force, object forces, and lane forces. Then, a nonlinear model predictive control (NMPC) scheme is formulated to mimic the predictive planning behavior of social human drivers, who are considered to optimize the total social force they perceive. The performance of the proposed social force-based autonomous driving control scheme is demonstrated via simulations of an ego vehicle in multi-lane road scenarios. From adaptive cruise control (ACC) to smooth lane-changing behavior, the proposed model provided flexible yet efficient driving control, enabling safe navigation in various situations while maintaining reasonable vehicle dynamics.
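As a hedged illustration of the three constituent navigation forces (the force forms and gains below are assumptions, not the paper's calibration), the total social force on the ego vehicle can be composed as follows; an NMPC cost can then penalize its magnitude over the prediction horizon:

```python
import numpy as np

def social_force(pos, target, obstacles, lane_center_y,
                 k_target=1.0, k_obj=2.0, obj_range=10.0, k_lane=0.5):
    """Illustrative social-force sum for a vehicle: target attraction,
    exponential repulsion from nearby objects, and a spring-like lane force."""
    # Target force: steer toward the goal point.
    to_target = target - pos
    f_target = k_target * to_target / (np.linalg.norm(to_target) + 1e-9)

    # Object forces: repulsion decaying exponentially with distance.
    f_obj = np.zeros(2)
    for obs in obstacles:
        away = pos - obs
        dist = np.linalg.norm(away) + 1e-9
        f_obj += k_obj * np.exp(-dist / obj_range) * away / dist

    # Lane force: pull back toward the current lane's centerline.
    f_lane = np.array([0.0, k_lane * (lane_center_y - pos[1])])

    return f_target + f_obj + f_lane
```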


Author(s):  
Hongbo Gao ◽  
Guanya Shi ◽  
Kelong Wang ◽  
Guotao Xie ◽  
Yuchao Liu

Purpose Over the past decades, significant research effort has been dedicated to the development of autonomous vehicles. The decision-making system, which is responsible for driving safety, is one of the most important technologies for autonomous vehicles. The purpose of this study is to use a reinforcement learning method combined with car-following data collected with a driving simulator to obtain an explainable car-following algorithm and establish an anthropomorphic car-following model. Design/methodology/approach This paper proposes a car-following method based on reinforcement learning for autonomous vehicle decision-making. An approximator is used to approximate the value function once the state space, action space, and state transition relationship are determined, and a gradient descent method is used to solve for its parameters. Findings The car-following behavior for certain driving styles is initially achieved through the simulation of step conditions. This initially shows that the reinforcement learning system is more adaptive to car following and has a certain degree of explainability and stability, based on the explicit calculation of R. Originality/value The simulation results show that the proposed car-following method based on reinforcement learning for autonomous vehicle decision-making realizes reliable car-following decisions and has the advantages of simple sampling, a small amount of data, a simple algorithm, and good robustness.
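As a hedged sketch of the value-function approximation and gradient-descent update the abstract refers to (the features, reward, and step sizes are illustrative assumptions, not the paper's design), a linear approximator can be trained with semi-gradient TD(0) on car-following transitions:

```python
import numpy as np

def features(gap, rel_speed, ego_speed):
    """Hypothetical state features for car-following (not the paper's exact set)."""
    return np.array([1.0, gap, rel_speed, ego_speed, gap * rel_speed])

def td0_update(w, s, r, s_next, alpha=0.01, gamma=0.95):
    """One semi-gradient TD(0) step on a linear value approximation
    V(s) = w . phi(s): a simplified version of the gradient-descent
    parameter update described in the abstract."""
    phi, phi_next = features(*s), features(*s_next)
    td_error = r + gamma * (w @ phi_next) - (w @ phi)
    return w + alpha * td_error * phi

# Usage with a hypothetical transition (gap [m], relative speed [m/s], ego speed [m/s]):
w = np.zeros(5)
w = td0_update(w, (30.0, -1.0, 20.0), r=1.0, s_next=(29.0, -0.8, 20.2))
```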

