CAEV-Surrounding Objects Detection and Tracking for Autonomous Driving Using Lidar and Radar Fusion

2020
Author(s):
Ze Liu
Feng Ying Cai

Abstract Radar and Lidar are two environmental sensors commonly used in autonomous vehicles. Lidar is accurate in determining objects' positions but significantly less accurate in measuring their velocities. Radar, in contrast, is more accurate in measuring objects' velocities but less accurate in determining their positions because of its lower spatial resolution. To compensate for the low detection accuracy, incomplete target attributes, and poor environmental adaptability of single sensors such as Radar and Lidar, we propose an effective method for high-precision detection and tracking of the targets surrounding an autonomous vehicle. By employing the Unscented Kalman Filter, Radar and Lidar information is effectively fused to achieve high-precision estimation of the position and speed of targets around the autonomous vehicle. Finally, the algorithm was verified in real-vehicle tests under a variety of driving environments. The experimental results show that the proposed sensor fusion method can effectively detect and track the targets around the vehicle with high accuracy. Compared with a single sensor, it has obvious advantages and can improve the intelligence level of driverless cars.
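To make the fusion idea concrete, the following is a minimal Python sketch (not the authors' code) of the process and measurement models that such a UKF would combine; the five-state CTRV layout [px, py, v, yaw, yaw_rate] and all names are illustrative assumptions.

```python
import numpy as np

def ctrv_predict(x, dt):
    """Propagate one sigma point through the constant turn rate and velocity (CTRV) model."""
    px, py, v, yaw, yawd = x
    if abs(yawd) > 1e-4:
        px += v / yawd * (np.sin(yaw + yawd * dt) - np.sin(yaw))
        py += v / yawd * (-np.cos(yaw + yawd * dt) + np.cos(yaw))
    else:  # straight-line motion when the turn rate is (almost) zero
        px += v * np.cos(yaw) * dt
        py += v * np.sin(yaw) * dt
    return np.array([px, py, v, yaw + yawd * dt, yawd])

def h_lidar(x):
    """Lidar observes position directly (high spatial accuracy, no velocity)."""
    return x[:2]

def h_radar(x):
    """Radar observes range, bearing and range rate (direct velocity evidence)."""
    px, py, v, yaw, _ = x
    rho = np.hypot(px, py)
    phi = np.arctan2(py, px)
    rho_dot = (px * v * np.cos(yaw) + py * v * np.sin(yaw)) / max(rho, 1e-6)
    return np.array([rho, phi, rho_dot])
```

Each incoming measurement is routed to the matching model inside the standard unscented predict/update cycle, so lidar anchors the position estimate while radar contributes direct velocity evidence.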

2021
Vol 34 (1)
Author(s):
Ze Liu
Yingfeng Cai
Hai Wang
Long Chen

Abstract Radar and LiDAR are two environmental sensors commonly used in autonomous vehicles. LiDAR is accurate in determining objects' positions but significantly less accurate than Radar in measuring their velocities; Radar, conversely, is more accurate in measuring objects' velocities but less accurate in determining their positions because of its lower spatial resolution. To compensate for the low detection accuracy, incomplete target attributes, and poor environmental adaptability of single sensors such as Radar and LiDAR, this paper proposes an effective method for high-precision detection and tracking of the targets surrounding an autonomous vehicle. By employing the Unscented Kalman Filter, Radar and LiDAR information is effectively fused to achieve high-precision estimation of the position and speed of targets around the autonomous vehicle. Finally, real-vehicle tests were carried out under various driving scenarios. The experimental results show that the proposed sensor fusion method can effectively detect and track the targets around the vehicle with high accuracy. Compared with a single sensor, it has obvious advantages and can improve the intelligence level of autonomous cars.


Author(s):  
Wael Farag

In this paper, a real-time road-object detection and tracking (LR_ODT) method for autonomous driving is proposed. The method fuses lidar and radar measurement data from sensors installed on the ego car, and a customized Unscented Kalman Filter (UKF) is employed for the data fusion. The merits of both devices are combined in the proposed fusion approach to precisely provide both pose and velocity information for objects moving on the road around the ego car. Unlike other detection and tracking approaches, the main contribution of this work is the balanced treatment of pose-estimation accuracy and real-time performance. The proposed technique is implemented in the high-performance language C++ and utilizes highly optimized math and optimization libraries for best real-time performance. Simulation studies were carried out to evaluate the performance of the LR_ODT for tracking bicycles, cars, and pedestrians. Moreover, the performance of the UKF fusion is compared to that of Extended Kalman Filter (EKF) fusion, showing its superiority: the UKF outperformed the EKF on all test cases and all state variables (-24% average RMSE). The employed fusion technique also yields an outstanding improvement in tracking performance compared with the use of a single device (-29% RMSE relative to lidar alone and -38% RMSE relative to radar alone).
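As a hedged illustration of how such RMSE comparisons are typically computed (not Farag's code; array shapes and names are assumptions), the per-state RMSE over a whole track reduces to a few lines:

```python
import numpy as np

def rmse(estimates, ground_truth):
    """Per-state RMSE over a track; both arguments are (T, n) arrays of states."""
    err = np.asarray(estimates) - np.asarray(ground_truth)
    return np.sqrt(np.mean(err ** 2, axis=0))

# Hypothetical usage: compare fused tracking against a single-sensor tracker.
# rmse_fusion = rmse(x_hat_fusion, x_true)   # e.g. states [px, py, vx, vy]
# rmse_lidar  = rmse(x_hat_lidar,  x_true)
# improvement = 100.0 * (rmse_fusion - rmse_lidar) / rmse_lidar  # negative => fusion better
```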


2019
Vol 8 (11)
pp. 501
Author(s):
Sungil Ham
Junhyuck Im
Minjun Kim
Kuk Cho

For autonomous driving, a control system that supports precise road maps is required to monitor the operation status of autonomous vehicles in the research stage. Such a system is also required for research related to automobile engineering, sensors, and artificial intelligence. The design of Google Maps and other map services is limited to providing map support at zoom level 20 high-resolution precision. An ideal map should include information on roads, autonomous vehicles, and Internet of Things (IoT) facilities that support autonomous driving. The aim of this study was to design a map suitable for the control of autonomous vehicles in Gyeonggi Province, Korea. This work was part of the project "Building a Testbed for Pilot Operations of Autonomous Vehicles". The map design scheme was redesigned for an autonomous vehicle control system based on the "Easy Map" developed by the National Geography Center, which provides a free design schema. In addition, a vector-based precision map, including roads, sidewalks, and road markings, was produced to provide content suitable for zoom level 20. A hybrid map that combines the vector layer of the road and an unmanned aerial vehicle (UAV) orthographic map was designed to facilitate vehicle identification. A control system that can display vehicle and sensor information based on the designed map was developed, and an environment to monitor the operation of autonomous vehicles was established. Finally, the high-precision map was verified through an accuracy test and driving data from autonomous vehicles.
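For readers unfamiliar with what level-20 precision refers to, the sketch below shows the standard Web Mercator (slippy map) tile indexing that such zoom levels are based on; this is an assumption about the tiling scheme for illustration, not part of the paper's toolchain. At zoom level 20 a single tile spans roughly 38 m near the equator, i.e. sub-street precision.

```python
import math

def latlon_to_tile(lat_deg, lon_deg, zoom=20):
    """Convert WGS84 latitude/longitude to Web Mercator tile indices."""
    lat = math.radians(lat_deg)
    n = 2 ** zoom
    xtile = int((lon_deg + 180.0) / 360.0 * n)
    ytile = int((1.0 - math.asinh(math.tan(lat)) / math.pi) / 2.0 * n)
    return xtile, ytile
```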


2021
Vol 11 (13)
pp. 6016
Author(s):
Jinsoo Kim
Jeongho Cho

For autonomous vehicles, it is critical to be aware of the driving environment to avoid collisions and drive safely. The recent evolution of convolutional neural networks has contributed significantly to accelerating the development of object detection techniques that enable autonomous vehicles to handle rapid changes in various driving environments. However, collisions in an autonomous driving environment can still occur due to undetected obstacles and various perception problems, particularly occlusion. Thus, we propose a robust object detection algorithm for environments in which objects are truncated or occluded, employing an RGB image together with light detection and ranging (LiDAR) bird's eye view (BEV) representations. This structure combines independent detection results obtained in parallel through "you only look once" networks using the RGB image and a height map converted from the BEV representation of LiDAR's point cloud data (PCD). The region proposal for an object is determined via non-maximum suppression, which suppresses the bounding boxes of adjacent regions. A performance evaluation of the proposed scheme was performed using the KITTI vision benchmark suite dataset. The results demonstrate that the detection accuracy obtained by integrating the PCD BEV representations is superior to that obtained when only an RGB camera is used. In addition, robustness is improved because detection accuracy is significantly enhanced even when the target objects are partially occluded when viewed from the front, which demonstrates that the proposed algorithm outperforms the conventional RGB-based model.
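A minimal sketch of the two ingredients described above, with KITTI-style ranges, a 0.1 m grid, and axis-aligned boxes assumed for illustration (the paper's actual resolution and fusion details may differ): rasterizing the point cloud into a BEV height map and merging the parallel RGB and BEV detections with non-maximum suppression.

```python
import numpy as np

def pcd_to_height_map(points, x_range=(0.0, 70.0), y_range=(-40.0, 40.0), res=0.1):
    """Rasterize a LiDAR point cloud (N, 3) into a BEV height map:
    each cell keeps the maximum z of the points falling inside it."""
    xs, ys, zs = points[:, 0], points[:, 1], points[:, 2]
    keep = (xs >= x_range[0]) & (xs < x_range[1]) & (ys >= y_range[0]) & (ys < y_range[1])
    xs, ys, zs = xs[keep], ys[keep], zs[keep]
    grid = np.zeros((int((x_range[1] - x_range[0]) / res),
                     int((y_range[1] - y_range[0]) / res)), dtype=np.float32)
    xi = ((xs - x_range[0]) / res).astype(int)
    yi = ((ys - y_range[0]) / res).astype(int)
    np.maximum.at(grid, (xi, yi), zs)  # max height per cell
    return grid

def iou(a, b):
    """Axis-aligned IoU for boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / (union + 1e-9)

def nms(boxes, scores, iou_thr=0.5):
    """Keep the highest-scoring boxes from the pooled RGB + BEV detections."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        ious = np.array([iou(boxes[i], boxes[j]) for j in order[1:]])
        order = order[1:][ious < iou_thr]
    return keep
```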


Sensors
2021
Vol 21 (20)
pp. 6733
Author(s):
Min-Joong Kim
Sung-Hun Yu
Tong-Hyun Kim
Joo-Uk Kim
Young-Min Kim

Today, a great deal of research on autonomous driving technology is being conducted, and various vehicles with autonomous driving functions, such as ACC (adaptive cruise control), are being released. An autonomous vehicle recognizes obstacles ahead by fusing data from various sensors, such as lidar, radar, and camera sensors. As the number of vehicles equipped with such autonomous driving functions increases, securing safety and reliability becomes a major issue. Recently, Mobileye proposed the RSS (responsibility-sensitive safety) model, a white-box mathematical model, to secure the safety of autonomous vehicles and clarify responsibility in the case of an accident. In this paper, a method of applying the RSS model to a variable focus function camera that can cover the recognition range of a lidar sensor and a radar sensor with a single camera sensor is considered. The variables of the RSS model suitable for the variable focus function camera were defined, their values were determined, and the safe distances for each velocity were derived by applying the determined values. In addition, considering the time required to obtain the data and the time required to change the focal length of the camera, it was confirmed that the response time obtained using the derived safe distance is valid.
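For reference, the longitudinal safe distance of the published RSS model, which the paper instantiates for the variable focus function camera, has the closed form sketched below; the parameter values in the usage comment are illustrative only, not the calibration derived in the paper.

```python
def rss_safe_distance(v_rear, v_front, rho, a_max_accel, b_min_brake, b_max_brake):
    """Longitudinal RSS safe distance: during the response time rho the rear car
    may accelerate by up to a_max_accel and must then brake with at least
    b_min_brake, while the front car may brake with up to b_max_brake."""
    d = (v_rear * rho
         + 0.5 * a_max_accel * rho ** 2
         + (v_rear + rho * a_max_accel) ** 2 / (2.0 * b_min_brake)
         - v_front ** 2 / (2.0 * b_max_brake))
    return max(0.0, d)

# Illustrative values (m/s, s, m/s^2), not the paper's calibration:
# rss_safe_distance(v_rear=20.0, v_front=20.0, rho=0.5,
#                   a_max_accel=2.0, b_min_brake=4.0, b_max_brake=8.0)
```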


Author(s):  
Mingcong Cao
Junmin Wang

Abstract In contrast to a single light detection and ranging (LiDAR) sensor, multiple LiDAR sensors may improve the environmental perception of autonomous vehicles. However, an elaborated guideline for multi-LiDAR data processing is absent from the existing literature. This paper presents a systematic solution for multi-LiDAR data processing, which includes, in order, calibration, filtering, clustering, and classification. Since the accuracy of obstacle detection is fundamentally determined by noise filtering and object clustering, this paper proposes a novel filtering algorithm and an improved clustering method within the multi-LiDAR framework. To be specific, the applied filtering approach is based on occupancy rates (ORs) of sampling points, where the ORs are derived from sparse "feature seeds" in each searching space. For clustering, density-based spatial clustering of applications with noise (DBSCAN) is improved with an adaptive searching (AS) algorithm for higher detection accuracy. Furthermore, more robust and accurate obstacle detection can be achieved by combining AS-DBSCAN with the proposed OR-based filtering. An indoor perception test and an on-road test were conducted on a fully instrumented autonomous hybrid electric vehicle. Experimental results have verified the effectiveness of the proposed algorithms, which provide a reliable and applicable solution for obstacle detection.
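The abstract does not spell out the adaptive searching rule, so the sketch below only illustrates the general idea behind range-adaptive clustering of LiDAR returns; the linear growth of eps with range, the band width, and the use of scikit-learn's DBSCAN are assumptions, not the paper's AS-DBSCAN.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def range_adaptive_dbscan(points, base_eps=0.4, min_samples=5, band=10.0):
    """Cluster an (N, >=2) point array band by band, enlarging the neighbourhood
    radius with range because LiDAR returns get sparser with distance."""
    ranges = np.linalg.norm(points[:, :2], axis=1)
    labels = -np.ones(len(points), dtype=int)   # -1 marks noise
    for lo in np.arange(0.0, ranges.max() + band, band):
        mask = (ranges >= lo) & (ranges < lo + band)
        if mask.sum() < min_samples:
            continue
        eps = base_eps * (1.0 + lo / 20.0)      # assumed adaptation law
        sub = DBSCAN(eps=eps, min_samples=min_samples).fit(points[mask, :2])
        sub_labels = sub.labels_.copy()
        offset = labels.max() + 1               # keep cluster ids unique across bands
        sub_labels[sub_labels >= 0] += offset
        labels[mask] = sub_labels
    return labels
```

Clusters that straddle a band boundary are split in this simplified version, which is one of the issues a proper adaptive-searching scheme would avoid.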


2019
Vol 9 (23)
pp. 5126
Author(s):
Betz
Heilmeier
Wischnewski
Stahl
Lienkamp

Since 2017, a research team from the Technical University of Munich has been developing a software stack for autonomous driving. The software was used to participate in the Roborace Season Alpha Championship, which aims to have autonomous race cars with different software stacks compete against each other. In May 2019, during a software test in Modena, Italy, the greatest danger in autonomous driving became reality: a minor change in environmental influences led extensively tested software to crash into a barrier at speed. Crashes with autonomous vehicles have happened before, but a detailed explanation of why the software failed and which part of the software was not working correctly is missing in research articles. In this paper we present a general method that can be used to analyze and display an autonomous vehicle disengagement and explain in detail what happened. This method is then used to display and explain the crash in Modena. First, a brief introduction is given to the modular software stack used at the Modena event, consisting of three individual parts: perception, planning, and control. Furthermore, the circumstances causing the crash are elaborated in detail. By presenting and explaining in detail which software part failed and contributed to the crash, we can discuss further software improvements. As a result, we present necessary functions that need to be integrated into an autonomous driving software stack to prevent such vehicle behavior from causing a fatal crash. In addition, we suggest an enhancement of the current disengagement reports for autonomous driving with a detailed explanation of the software part that caused the disengagement. In the outlook of this paper we present two additional software functions, for assessing the tire and control performance of the vehicle, to enhance the autonomous driving software.


2019
Vol 07 (03)
pp. 183-194
Author(s):
Yoan Espada
Nicolas Cuperlier
Guillaume Bresson
Olivier Romain

The navigation of autonomous vehicles is confronted with the problem of building an efficient place recognition system that can handle outdoor environments in the long run. Current Simultaneous Localization and Mapping (SLAM) and place recognition solutions have limitations that prevent them from achieving the performance needed for autonomous driving. This paper suggests handling the problem from another perspective by taking inspiration from biological models. We propose a neural architecture for the localization of an autonomous vehicle based on a neurorobotic model of the place cells (PC) found in the hippocampus of mammals. This model relies on an attentional mechanism and only takes into account visual information from a mono-camera and orientation information to self-localize. It has the advantage of working with a low-resolution camera without the need for calibration. It also does not require a long learning phase, as it uses a one-shot learning system. Such a localization model has already been integrated into a robot control architecture that allows for successful navigation in both indoor and small outdoor environments. The contribution of this paper is to study how the model handles the change of scale by evaluating its performance over much larger outdoor environments. Eight experiments using real data (images and orientation) captured by a moving vehicle are studied (from the KITTI odometry datasets and datasets recorded with VEDECOM vehicles). Results show the strong adaptability to different kinds of environments of this bio-inspired model primarily developed for indoor navigation.
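As a toy illustration only of the one-shot learning principle described above (not the neurorobotic PC model itself; the cosine-similarity matching, heading gate, and threshold are assumptions), a place memory could be sketched as follows.

```python
import numpy as np

class PlaceMemory:
    """One-shot place learning: each visited place stores a single visual
    signature plus the heading; recognition is nearest-signature lookup."""
    def __init__(self, threshold=0.8):
        self.signatures, self.headings = [], []
        self.threshold = threshold

    def learn(self, feature, heading):
        self.signatures.append(np.asarray(feature) / np.linalg.norm(feature))
        self.headings.append(heading)

    def recognize(self, feature, heading, max_heading_err=np.pi / 4):
        f = np.asarray(feature) / np.linalg.norm(feature)
        best, best_sim = -1, self.threshold
        for i, (s, h) in enumerate(zip(self.signatures, self.headings)):
            d = np.arctan2(np.sin(heading - h), np.cos(heading - h))
            if abs(d) > max_heading_err:
                continue                        # gate candidates by the orientation cue
            sim = float(f @ s)                  # cosine similarity of visual signatures
            if sim > best_sim:
                best, best_sim = i, sim
        return best                             # -1 means "new place": learn it in one shot
```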


Author(s):  
Wei Hanbing
Wu Yanhong
Chen Xing
Xu Jin
Rahul Sharma

For a long time yet, fully autonomous vehicles will remain far from commercial application. The concept of 'human-vehicle shared control (HVSC)' provides a promising solution for enhancing autonomous driving safety. In order to characterize the evolution of the driver's characteristics in the process of HVSC, a dynamics model of HVSC incorporating the driver's neuromuscular characteristics is proposed in this paper. It takes into account neuromuscular properties such as the stretch reflex and feedback stiffness. By designing a model predictive control (MPC) controller, feedback of the vehicle's state and steering torque is constructed. To validate the model, driving simulations were conducted on our table-based driving simulator, with the vehicle state and the surface electromyography of the driver's arm working muscle group collected simultaneously. Subsequently, hierarchical least squares (HLS) parameter identification and an unscented Kalman filter (UKF) observer are used to identify and estimate the important characteristic parameters, respectively, based on the experimental results. The comparisons show that the HVSC model can characterize the vehicle's dynamic state and that the driver's personalized characteristics can be identified by HLS. This paper will serve as a theoretical basis for the allocation of control authority between the human and the vehicle during shared control for L3-class autonomous vehicles.
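The abstract does not give the structure of the identified driver model, so the sketch below is only a heavily simplified, batch least-squares stand-in for the hierarchical least squares step; the assumed linear-in-parameters torque model T = K_p*e + K_d*de + K_s*delta and all names are hypothetical.

```python
import numpy as np

def identify_driver_params(e, de, delta, torque):
    """Fit an assumed driver steering-torque model T = K_p*e + K_d*de + K_s*delta
    to logged simulator data by ordinary least squares (illustration only)."""
    Phi = np.column_stack([e, de, delta])              # regressor matrix of shape (T, 3)
    theta, *_ = np.linalg.lstsq(Phi, torque, rcond=None)
    K_p, K_d, K_s = theta
    return K_p, K_d, K_s
```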

