Sensor and Sensor Fusion Technology in Autonomous Vehicles: A Review

Author(s):  
De Jong Yeong ◽  
Gustavo Velasco-Hernandez ◽  
John Barry ◽  
Joseph Walsh

The market for autonomous vehicles (AV) is expected to experience significant growth over the coming decades and to revolutionize the future of transportation and mobility. The AV is a vehicle that is capable of perceiving its environment and performing driving tasks safely and efficiently with little or no human intervention, and is anticipated to eventually replace conventional vehicles. Self-driving vehicles employ various sensors to sense and perceive their surroundings and also rely on advances in 5G communication technology to achieve this objective. Sensors are fundamental to the perception of surroundings, and the development of sensor technologies associated with AVs has advanced at a significant pace in recent years. Despite remarkable advancements, sensors can still fail to operate as required due to, for example, hardware defects, noise, and environmental conditions. Hence, it is not desirable to rely on a single sensor for any autonomous driving task. The practical approach shown in recent research is to incorporate multiple, complementary sensors to overcome the shortcomings of individual sensors operating independently. This article reviews the technical performance and capabilities of sensors applicable to autonomous vehicles, focusing mainly on vision cameras, LiDAR, and radar sensors. The review also considers the compatibility of sensors with various software systems enabling the multi-sensor fusion approach for obstacle detection. The review concludes by highlighting some of the challenges and possible future research directions.

Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2140
Author(s):  
De Jong Yeong ◽  
Gustavo Velasco-Hernandez ◽  
John Barry ◽  
Joseph Walsh

With the significant advancement of sensor and communication technology and the reliable application of obstacle detection techniques and algorithms, automated driving is becoming a pivotal technology that can revolutionize the future of transportation and mobility. Sensors are fundamental to the perception of vehicle surroundings in an automated driving system, and the use and performance of multiple integrated sensors can directly determine the safety and feasibility of automated driving vehicles. Sensor calibration is the foundation block of any autonomous system and its constituent sensors and must be performed correctly before sensor fusion and obstacle detection processes may be implemented. This paper evaluates the capabilities and the technical performance of sensors which are commonly employed in autonomous vehicles, primarily focusing on a large selection of vision cameras, LiDAR sensors, and radar sensors and the various conditions in which such sensors may operate in practice. We present an overview of the three primary categories of sensor calibration and review existing open-source calibration packages for multi-sensor calibration and their compatibility with numerous commercial sensors. We also summarize the three main approaches to sensor fusion and review current state-of-the-art multi-sensor fusion techniques and algorithms for object detection in autonomous driving applications. The current paper, therefore, provides an end-to-end review of the hardware and software methods required for sensor fusion object detection. We conclude by highlighting some of the challenges in the sensor fusion field and propose possible future research directions for automated driving systems.
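
As a minimal sketch of the geometry behind one of those calibration categories, extrinsic calibration between a LiDAR and a camera, the Python snippet below transforms LiDAR points into the camera frame and projects them onto the image plane. The rotation R, translation t, and intrinsic matrix K are hypothetical placeholders that a real calibration package would estimate; the snippet illustrates the transform being calibrated, not any particular package's procedure.

```python
import numpy as np

# Placeholder extrinsics (LiDAR -> camera). A calibration package would
# estimate these; the rotation here is only the axis permutation from a
# LiDAR frame (x forward, y left, z up) to a camera frame (z forward).
R = np.array([[0., -1., 0.],
              [0., 0., -1.],
              [1., 0., 0.]])
t = np.array([0.05, -0.10, -0.20])   # hypothetical offset in metres

# Hypothetical pinhole intrinsics of the camera.
K = np.array([[700., 0., 640.],
              [0., 700., 360.],
              [0., 0., 1.]])

def project_lidar_to_image(points_lidar):
    """Map Nx3 LiDAR points into the camera frame, then to pixel coordinates."""
    pts_cam = points_lidar @ R.T + t          # extrinsic transform
    pts_cam = pts_cam[pts_cam[:, 2] > 0.0]    # keep points in front of camera
    pix = pts_cam @ K.T                       # pinhole projection
    return pix[:, :2] / pix[:, 2:3]           # normalise by depth

pixels = project_lidar_to_image(np.random.rand(100, 3) * 10.0)
```

A common qualitative check of an extrinsic calibration is to overlay the projected LiDAR returns on the camera image and verify that they land on the objects they belong to.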


Sensors ◽  
2021 ◽  
Vol 21 (20) ◽  
pp. 6733
Author(s):  
Min-Joong Kim ◽  
Sung-Hun Yu ◽  
Tong-Hyun Kim ◽  
Joo-Uk Kim ◽  
Young-Min Kim

Today, a great deal of research on autonomous driving technology is being conducted, and various vehicles with autonomous driving functions, such as ACC (adaptive cruise control), are being released. The autonomous vehicle recognizes obstacles ahead by fusing data from various sensors, such as lidar, radar, and camera sensors. As the number of vehicles equipped with such autonomous driving functions increases, securing safety and reliability becomes a major issue. Recently, Mobileye proposed the RSS (responsibility-sensitive safety) model, a white-box mathematical model, to secure the safety of autonomous vehicles and clarify responsibility in the case of an accident. This paper considers a method of applying the RSS model to a variable-focus camera that can cover the recognition range of a lidar sensor and a radar sensor with a single camera sensor. The variables of the RSS model suitable for the variable-focus camera were defined, their values were determined, and the safe distances for each velocity were derived by applying the determined values. In addition, after accounting for the time required to acquire the data and the time required to change the focal length of the camera, it was confirmed that the response time obtained using the derived safe distance was valid.
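
For context, the RSS model's minimum safe longitudinal distance has a closed form, sketched below in Python: the rear vehicle accelerates for the response time, then brakes at its guaranteed minimum, while the lead vehicle brakes at its maximum. The parameter values are hypothetical placeholders, not the variable values the authors derived for the variable-focus camera.

```python
def rss_safe_distance(v_rear, v_front, rho, a_max_accel, b_min_brake, b_max_brake):
    """RSS minimum safe longitudinal distance (speeds m/s, accelerations m/s^2).

    rho: response time of the rear (following) vehicle in seconds.
    """
    v_after_rho = v_rear + rho * a_max_accel       # rear speed after reacting
    d = (v_rear * rho
         + 0.5 * a_max_accel * rho ** 2            # distance while reacting
         + v_after_rho ** 2 / (2.0 * b_min_brake)  # rear braking distance
         - v_front ** 2 / (2.0 * b_max_brake))     # lead braking distance
    return max(d, 0.0)

# Hypothetical example: both vehicles at 25 m/s, 0.5 s response time.
print(rss_safe_distance(25.0, 25.0, rho=0.5, a_max_accel=3.5,
                        b_min_brake=4.0, b_max_brake=8.0))  # ~63 m
```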


2021 ◽  
Vol 9 (2) ◽  
pp. 731-739
Author(s):  
M. Hyndhavi et al.

The development of vehicle tracking using sensor fusion is presented in this paper. Advanced driver assistance systems (ADAS) have become more popular in recent years. These systems use sensor information for real-time control. To improve accuracy and robustness, especially in the presence of environmental noise such as varying lighting and weather conditions, the fusion of sensors has been the center of attention in recent studies. Faced with complex traffic conditions, a single sensor is unable to meet the safety requirements of ADAS and autonomous driving. The common environment-perception sensors are radar, camera, and lidar, each of which has both pros and cons. Sensor fusion is a necessary technology for autonomous driving that provides a better view and understanding of the vehicle's surroundings. We mainly focus on highway scenarios that enable an autonomous car to comfortably follow other cars at various speeds while keeping a safe distance, and we combine the advantages of both sensors with a sensor fusion approach. The radar and vision sensor information is fused to produce robust and accurate measurements, and experimental results comparing the use of radar sensors alone with the fusion of both camera and radar sensors are presented. The algorithm is described along with simulation results obtained using MATLAB.
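
A minimal sketch of the idea of mixing the two sensors' strengths: radar is typically more accurate longitudinally and the camera laterally, so an inverse-variance weighted fusion of the two position estimates leans on each sensor where it is strong. The measurements and variances below are hypothetical, and the paper's MATLAB implementation uses a full tracking pipeline rather than this single-shot fusion.

```python
import numpy as np

def fuse_measurements(z_radar, var_radar, z_cam, var_cam):
    """Per-axis inverse-variance fusion of two estimates of the same target."""
    w_r, w_c = 1.0 / var_radar, 1.0 / var_cam
    fused = (w_r * z_radar + w_c * z_cam) / (w_r + w_c)
    fused_var = 1.0 / (w_r + w_c)
    return fused, fused_var

# Hypothetical [longitudinal, lateral] positions (m) and variances (m^2):
# radar is tight in range, the camera is tight laterally.
z_radar, var_radar = np.array([42.1, 1.9]), np.array([0.25, 1.00])
z_cam, var_cam = np.array([40.8, 1.5]), np.array([4.00, 0.09])
fused, fused_var = fuse_measurements(z_radar, var_radar, z_cam, var_cam)
```

The fused estimate ends up close to the radar reading in range and close to the camera reading laterally, which is the behaviour a radar-vision fusion is after.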


Author(s):  
Mingcong Cao ◽  
Junmin Wang

In contrast to a single light detection and ranging (LiDAR) system, multi-LiDAR sensors may improve the environmental perception for autonomous vehicles. However, an elaborated guideline for multi-LiDAR data processing is absent from the existing literature. This paper presents a systematic solution for multi-LiDAR data processing, which comprises, in order, calibration, filtering, clustering, and classification. As the accuracy of obstacle detection is fundamentally determined by noise filtering and object clustering, this paper proposes a novel filtering algorithm and an improved clustering method within the multi-LiDAR framework. To be specific, the applied filtering approach is based on occupancy rates (ORs) of sampling points, where ORs are derived from the sparse "feature seeds" in each searching space. For clustering, the density-based spatial clustering of applications with noise (DBSCAN) is improved with an adaptive searching (AS) algorithm for higher detection accuracy. Furthermore, more robust and accurate obstacle detection can be achieved by combining AS-DBSCAN with the proposed OR-based filtering. An indoor perception test and an on-road test were conducted on a fully instrumented autonomous hybrid electric vehicle. Experimental results have verified the effectiveness of the proposed algorithms, which facilitate a reliable and applicable solution for obstacle detection.
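
As a baseline for the clustering step, the sketch below runs plain DBSCAN (via scikit-learn) on LiDAR returns projected to the ground plane, using a fixed search radius and hypothetical parameters. The paper's AS-DBSCAN improves on exactly this fixed radius by adapting the search, since LiDAR point density falls with range.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical stand-in for filtered LiDAR returns projected to 2-D (N x 2).
points = np.random.rand(500, 2) * 20.0

# Fixed-radius DBSCAN: eps is the search radius, min_samples the density
# threshold. A fixed eps tuned for near-field density over-segments
# distant, sparser objects, which motivates the adaptive search.
labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(points)

# Label -1 marks noise; every other label is one detected obstacle cluster.
clusters = [points[labels == k] for k in set(labels) if k != -1]
```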


2021 ◽  
Vol 11 (16) ◽  
pp. 7225
Author(s):  
Eugenio Tramacere ◽  
Sara Luciani ◽  
Stefano Feraco ◽  
Angelo Bonfitto ◽  
Nicola Amati

Self-driving vehicles have experienced an increase in research interest in recent decades. Nevertheless, fully autonomous vehicles are still far from being a common means of transport. This paper presents the design and experimental validation of a processor-in-the-loop (PIL) architecture for an autonomous sports car. The considered vehicle is an all-wheel-drive, full-electric, single-seater prototype. The proposed PIL architecture includes all the modules required for autonomous driving at the system level: environment perception, trajectory planning, and control. Specifically, the perception pipeline exploits obstacle detection algorithms based on Artificial Intelligence (AI), the trajectory planning is based on a modified Rapidly-exploring Random Tree (RRT) algorithm employing Dubins curves, and the vehicle is controlled via a Model Predictive Control (MPC) strategy. The considered PIL layout is implemented firstly using a low-cost card-sized computer for fast code verification purposes. Furthermore, the proposed PIL architecture is compared in terms of performance to an alternative PIL using a high-performance real-time target machine. Both PIL architectures exploit the User Datagram Protocol (UDP) to communicate with a personal computer. The latter PIL architecture is validated in real time using experimental data. Moreover, both are also validated with respect to the general autonomous pipeline that runs in parallel on the personal computer during numerical simulation.
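
A schematic of the UDP link between a PIL target and the personal computer, with a hypothetical address, port, and payload layout; in practice the two halves below run on different machines rather than in one script.

```python
import socket
import struct

HOST_ADDR = ("192.168.1.10", 5005)   # hypothetical host PC address and port

# --- PIL target side: publish one control command as an 8-byte datagram ---
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
steering_cmd, torque_cmd = 0.03, 0.42      # hypothetical command values
tx.sendto(struct.pack("<2f", steering_cmd, torque_cmd), HOST_ADDR)

# --- Host PC side: receive and unpack the command for the simulation ---
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("", 5005))
payload, sender = rx.recvfrom(8)           # blocking receive of one datagram
steering_cmd, torque_cmd = struct.unpack("<2f", payload)
```

UDP's lack of retransmission keeps latency low and predictable, which is why it is a common choice for soft real-time PIL loops where a stale command is worth less than a fresh one.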


2017 ◽  
Vol 11 (3) ◽  
pp. 225-238 ◽  
Author(s):  
Mica R. Endsley

Autonomous and semiautonomous vehicles are currently being developed by over 14 companies. These vehicles may improve driving safety and convenience, or they may create new challenges for drivers, particularly with regard to situation awareness (SA) and autonomy interaction. I conducted a naturalistic driving study on the autonomy features in the Tesla Model S, recording my experiences over a 6-month period, including assessments of SA and problems with the autonomy. This preliminary analysis provides insights into the challenges that drivers may face in dealing with new autonomous automobiles in realistic driving conditions, and it extends previous research on human-autonomy interaction to the driving domain. Issues were found with driver training, mental model development, mode confusion, unexpected mode interactions, SA, and susceptibility to distraction. New insights into challenges with semiautonomous driving systems include increased variability in SA, the replacement of continuous control with serial discrete control, and the need for more complex decisions. Issues that deserve consideration in future research and a set of guidelines for driver interfaces of autonomous systems are presented and used to create recommendations for improving driver SA when interacting with autonomous vehicles.


Author(s):  
Sai Rajeev Devaragudi ◽  
Bo Chen

This paper presents a Model Predictive Control (MPC) approach for longitudinal and lateral control of autonomous vehicles with a real-time local path planning algorithm. A heuristic graph search method (the A* algorithm) combined with piecewise Bezier curve generation is implemented for obstacle avoidance in autonomous driving applications. Constant time headway control is implemented for longitudinal motion to track lead vehicles and maintain a constant time gap. MPC is used to control the steering angle and the tractive force of the autonomous vehicle. Furthermore, a new method of developing Advanced Driver Assistance Systems (ADAS) algorithms and vehicle controllers using Model-In-the-Loop (MIL) testing is explored with the use of PreScan®. With PreScan®, various traffic scenarios are modeled and the sensor data are simulated by physics-based sensor models, which are fed to the controller for data processing and motion planning. Obstacle detection and collision avoidance are demonstrated using the presented MPC controller.
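
A minimal sketch of a constant time headway policy of the kind described above: the desired gap grows linearly with ego speed, and a commanded acceleration corrects the gap and relative-speed errors. The gains and standstill distance are hypothetical, and this linear feedback form is only one common realization; in the paper the longitudinal command is ultimately shaped by the MPC.

```python
def time_headway_accel(gap, ego_speed, lead_speed,
                       h=1.5, d0=5.0, k_gap=0.23, k_rel=0.74):
    """Constant-time-headway longitudinal command (all gains hypothetical).

    gap: current distance to the lead vehicle (m); speeds in m/s.
    """
    desired_gap = d0 + h * ego_speed        # spacing grows with speed
    gap_error = gap - desired_gap
    rel_speed = lead_speed - ego_speed      # negative when closing in
    return k_gap * gap_error + k_rel * rel_speed  # commanded accel (m/s^2)

# Hypothetical situation: 30 m gap, ego at 25 m/s, lead at 24 m/s.
a_cmd = time_headway_accel(30.0, 25.0, 24.0)   # about -3.6 m/s^2 (brake)
```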


2020 ◽  
Vol 2020 ◽  
pp. 1-13
Author(s):  
Jun Wang ◽  
Li Zhang ◽  
Yanjun Huang ◽  
Jian Zhao ◽  
Francesco Bella

The autonomous vehicle (AV) is regarded as the ultimate solution to future automotive engineering; however, safety still remains the key challenge for the development and commercialization of AVs. Therefore, a comprehensive understanding of the development status of AVs and of reported accidents is becoming urgent. In this article, the levels of automation are reviewed according to the role of the automated system in the autonomous driving process, which affects the frequency of disengagements and accidents when driving in autonomous modes. Additionally, the public on-road AV accident reports are statistically analyzed. The results show that over 3.7 million miles have been tested for AVs by various manufacturers from 2014 to 2018. The AVs are frequently taken over by drivers if they deem it necessary, and the disengagement frequency varies significantly, from 2 × 10−4 to 3 disengagements per mile, across manufacturers. In addition, 128 accidents in 2014–2018 are studied, and about 63% of the total accidents occurred in autonomous mode. A small fraction of the total accidents (∼6%) was directly related to the AVs, while 94% of the accidents were passively initiated by the other parties, including pedestrians, cyclists, motorcycles, and conventional vehicles. These safety risks identified during on-road testing, represented by disengagements and actual accidents, indicate that passive accidents caused by other road users are the majority. The capability of AVs to alert to and avoid safety risks caused by the other parties and to make safe decisions to prevent possible fatal accidents would significantly improve the safety of AVs. Practical applications: this literature review summarizes the safety-related issues for AVs through theoretical analysis of the AV systems and statistical investigation of the disengagement and accident reports from on-road testing, and the findings will help inform future research efforts for AV development.
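
To make the normalization behind those figures explicit: raw disengagement counts are only comparable across manufacturers once divided by miles driven, which is why the reported rates span such a wide range. A toy computation with hypothetical report figures:

```python
# Hypothetical single-manufacturer report; actual 2014-2018 rates spanned
# roughly 2e-4 to 3 disengagements per mile depending on the manufacturer.
miles_tested = 352_545
disengagements = 75
rate = disengagements / miles_tested            # disengagements per mile
miles_per_disengagement = miles_tested / disengagements

# Share of studied accidents that occurred in autonomous mode
# (81 of the 128 accidents is about 63%, as reported above).
share_autonomous = 81 / 128
```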


Author(s):  
Stefano Feraco ◽  
Angelo Bonfitto ◽  
Nicola Amati ◽  
Andrea Tonoli

This paper presents a redundant multi-object detection method for autonomous driving, exploiting a combination of Light Detection and Ranging (LiDAR) and stereocamera sensors to detect different obstacles. These sensors are used for distinct perception pipelines considering a custom hardware/software architecture deployed on a self-driving electric racing vehicle. Consequently, the creation of a local map with respect to the vehicle position enables the development of further local trajectory planning algorithms. The LiDAR-based algorithm exploits segmentation of point clouds for ground filtering and obstacle detection. The stereocamera-based perception pipeline is based on a Single Shot Detector using a deep learning neural network. The presented algorithm is experimentally validated on the instrumented vehicle during different driving maneuvers.
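
A much-simplified stand-in for the ground-filtering step of the LiDAR pipeline: it drops returns near an assumed flat ground plane at the sensor's mounting height. The mounting height and tolerance are hypothetical, and the paper's segmentation-based approach is more robust than this flat-ground assumption, which fails on slopes and banked track sections.

```python
import numpy as np

def remove_ground(points, sensor_height=1.8, tol=0.15):
    """Split an Nx3 cloud (sensor frame, z up) into obstacle and ground points.

    Assumes flat ground at z = -sensor_height; both values are hypothetical.
    """
    is_ground = np.abs(points[:, 2] + sensor_height) < tol
    return points[~is_ground], points[is_ground]

# Hypothetical cloud: x in [0, 40), y in [-10, 10), z in [-2, 1) metres.
cloud = np.random.rand(1000, 3) * [40.0, 20.0, 3.0] - [0.0, 10.0, 2.0]
obstacles, ground = remove_ground(cloud)
```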


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4357 ◽  
Author(s):  
Babak Shahian Jahromi ◽  
Theja Tulabandhula ◽  
Sabri Cetin

Many sensor fusion frameworks have been proposed in the literature using different combinations and configurations of sensors and fusion methods. Most of the focus has been on improving accuracy; however, the implementation feasibility of these frameworks in an autonomous vehicle is less explored. Some fusion architectures can perform very well in lab conditions using powerful computational resources; however, in real-world applications, they cannot be implemented in an embedded edge computer due to their high cost and computational needs. We propose a new hybrid multi-sensor fusion pipeline configuration that performs environment perception for autonomous vehicles, covering road segmentation, obstacle detection, and tracking. This fusion framework uses a proposed encoder-decoder-based Fully Convolutional Neural Network (FCNx) and a traditional Extended Kalman Filter (EKF) nonlinear state estimator. It also uses a configuration of camera, LiDAR, and radar sensors that is best suited to each fusion method. The goal of this hybrid framework is to provide a cost-effective, lightweight, modular, and robust (in case of a sensor failure) fusion system. It uses the FCNx algorithm, which improves road detection accuracy over benchmark models while maintaining the real-time efficiency required for an autonomous vehicle's embedded computer. Tested on over 3K road scenes, our fusion algorithm shows better performance in various environment scenarios compared to baseline benchmark networks. Moreover, the algorithm is implemented in a vehicle and tested using actual sensor data collected from the vehicle, performing real-time environment perception.
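
For the state-estimation side of such a pipeline, the sketch below runs one predict/update cycle of a linear Kalman filter on a one-dimensional constant-velocity track. The EKF used by the authors generalises this by linearising nonlinear motion and measurement models at the current estimate; all matrices and noise values here are hypothetical.

```python
import numpy as np

dt = 0.05                              # hypothetical 20 Hz update rate
F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity motion model
H = np.array([[1.0, 0.0]])             # sensor measures position only
Q = np.diag([0.01, 0.10])              # process noise (hypothetical)
R = np.array([[0.25]])                 # measurement noise (hypothetical)

def kf_step(x, P, z):
    """One predict/update cycle on state x = [position, velocity]^T."""
    x, P = F @ x, F @ P @ F.T + Q                   # predict
    S = H @ P @ H.T + R                             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
    x = x + K @ (z - H @ x)                         # update with measurement
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.array([[40.0], [-1.0]]), np.eye(2)        # initial track
x, P = kf_step(x, P, z=np.array([[39.7]]))
```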

