Real-Time Hybrid Multi-Sensor Fusion Framework for Perception in Autonomous Vehicles

Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4357 ◽  
Author(s):  
Babak Shahian Jahromi ◽  
Theja Tulabandhula ◽  
Sabri Cetin

Many sensor fusion frameworks have been proposed in the literature, using different combinations and configurations of sensors and fusion methods. Most of this work focuses on improving accuracy; the feasibility of implementing these frameworks in an autonomous vehicle is less explored. Some fusion architectures perform very well in lab conditions on powerful computational resources, yet in real-world applications they cannot be deployed on an embedded edge computer because of their cost and computational demands. We propose a new hybrid multi-sensor fusion pipeline that performs environment perception tasks for autonomous vehicles such as road segmentation, obstacle detection, and tracking. The framework combines a proposed encoder-decoder-based Fully Convolutional Neural Network (FCNx) with a traditional Extended Kalman Filter (EKF) nonlinear state estimator, and uses a configuration of camera, LiDAR, and radar sensors best suited to each fusion method. The goal of this hybrid framework is a cost-effective, lightweight, modular fusion system that remains robust in case of a sensor failure. The FCNx algorithm improves road detection accuracy over benchmark models while maintaining the real-time efficiency required for an autonomous vehicle's embedded computer. Tested on over 3K road scenes, our fusion algorithm outperforms baseline benchmark networks across a range of environment scenarios. Moreover, the algorithm was implemented in a vehicle and tested on actual sensor data collected from that vehicle, performing real-time environment perception.
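
The abstract names an encoder-decoder Fully Convolutional Network (FCNx) for road segmentation but gives no layer details. The sketch below shows the generic encoder-decoder pattern in PyTorch with illustrative channel widths; it is an assumption-laden stand-in, not the authors' FCNx.

```python
import torch
import torch.nn as nn

class EncoderDecoderFCN(nn.Module):
    """Generic encoder-decoder FCN for binary road segmentation.
    Depth and channel widths are illustrative placeholders, not FCNx's."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(              # downsample 4x
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(              # upsample back to input size
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        # per-pixel probability that the pixel belongs to the road surface
        return torch.sigmoid(self.decoder(self.encoder(x)))

# Example: one 256x512 RGB frame -> road mask of the same resolution.
mask = EncoderDecoderFCN()(torch.rand(1, 3, 256, 512))
```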

2021 ◽  
Vol 23 (06) ◽  
pp. 1288-1293
Author(s):  
Dr. S. Rajkumar ◽  
Aklilu Teklemariam ◽  
Addisalem Mekonnen ◽  
...  

Autonomous Vehicles (AVs) reduce human intervention by perceiving the vehicle's location with respect to the environment. Using multiple sensors, each suited to different aspects of environment perception, yields not only detection but also tracking and classification of objects, leading to high safety and reliability. We therefore propose to deploy hybrid multi-sensor suites combining radar, LiDAR, and camera sensors. Because the data acquired by these sensors overlap across their wide viewing angles, a data fusion framework based on a convolutional neural network and a Kalman Filter (KF) was implemented, with the goal of providing a robust object detection system that avoids collisions on roads. The complete system, tested over 1000 road scenarios for real-time environment perception, showed that our hardware and software configuration outperformed numerous conventional systems. Hence, this system could find application in real-time object detection, tracking, and classification.
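
The abstract pairs a CNN detector with a Kalman Filter but does not specify the filter design. Below is a minimal constant-velocity KF that fuses position measurements from two sensors sequentially; the state layout, timing, and noise values are assumptions for illustration.

```python
import numpy as np

dt = 0.1                        # update period in seconds (assumed)
F = np.array([[1, 0, dt, 0],    # constant-velocity motion model:
              [0, 1, 0, dt],    # state = [x, y, vx, vy]
              [0, 0, 1, 0],
              [0, 0, 0, 1]])
H = np.array([[1, 0, 0, 0],     # both sensors measure position only
              [0, 1, 0, 0]])
Q = 0.01 * np.eye(4)            # process noise (illustrative)

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, R):
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P

# One fusion cycle: predict, then sequentially update with each sensor.
x, P = np.zeros(4), np.eye(4)
x, P = predict(x, P)
x, P = update(x, P, z=np.array([5.1, 2.0]), R=0.25 * np.eye(2))  # radar (coarser)
x, P = update(x, P, z=np.array([5.0, 2.1]), R=0.04 * np.eye(2))  # camera-derived
```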


2021 ◽  
Vol 336 ◽  
pp. 07004
Author(s):  
Ruoyu Fang ◽  
Cheng Cai

Obstacle detection and target tracking are two major issues for intelligent autonomous vehicles. This paper proposes a new computer-vision scheme for target tracking and real-time obstacle detection. A ResNet-18 deep neural network is used for obstacle detection, and a YOLOv3 deep neural network is employed for real-time target tracking. The two trained models are deployed on an autonomous vehicle equipped with an NVIDIA Jetson Nano board. Guided by its camera, the vehicle avoids obstacles and follows tracked targets, while a PID algorithm adjusts steering and speed during motion so the vehicle achieves stable and precise tracking.
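
The PID loop that maps tracking error to steering is only named, not specified. A minimal discrete PID sketch follows; the gains, sample time, and pixel-error convention are placeholders, not the paper's tuning.

```python
class PID:
    """Discrete PID controller; gains here are illustrative, not tuned values."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Steer so the tracked target stays at the image centre:
# error = horizontal offset of the target's bounding-box centre (pixels).
steer = PID(kp=0.005, ki=0.0001, kd=0.002, dt=0.05)
steering_angle = steer.step(error=42.0)  # positive -> turn right (convention assumed)
```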


2021 ◽  
Vol 257 ◽  
pp. 02061
Author(s):  
Haoru Luo ◽  
Kechun Liu

Autonomous positioning is a core technology in the development of autonomous vehicles. A good positioning system not only helps them complete autonomous operations efficiently but also improves safety. The Global Positioning System (GPS) is currently the mainstream positioning method, but indoors, under heavy occlusion, and in similar environments, loss of the GPS signal leads to positioning failure. To solve this problem, this paper proposes a map-first, localize-second approach and designs a high-precision real-time positioning system based on multi-sensor fusion. The system was mounted on a Wuling sightseeing bus, and mapping and positioning tests were carried out on the Nanhu Campus of Wuhan University of Technology, the East Campus of the Mafangshan Campus, and an underground garage where GPS signals were lost. The test results show that the system achieves high-precision real-time positioning for the autonomous vehicle, so further study and deployment of this system is of real significance to the adoption of automated driving.
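
The switching logic between GPS and map-based localization is not given in the abstract. A hypothetical supervisor sketch, assuming a satellite-count fix-quality test and an always-available scan-to-map pose once mapping is done:

```python
def fused_pose(gps_fix, lidar_map_pose, min_satellites=6):
    """Prefer GPS when the fix is healthy; otherwise fall back to the pose
    from matching LiDAR scans against the prebuilt map.

    gps_fix: (pose, num_satellites), or None when no signal (e.g., underground).
    lidar_map_pose: pose from scan-to-map matching; assumed always available
    once the environment has been mapped ("mapping before positioning").
    The satellite threshold is an assumption, not the paper's criterion.
    """
    if gps_fix is not None:
        pose, num_satellites = gps_fix
        if num_satellites >= min_satellites:
            return pose
    return lidar_map_pose
```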


Author(s):  
Sai Rajeev Devaragudi ◽  
Bo Chen

This paper presents a Model Predictive Control (MPC) approach for longitudinal and lateral control of autonomous vehicles with a real-time local path planning algorithm. A heuristic graph search method (the A* algorithm) combined with piecewise Bezier curve generation is implemented for obstacle avoidance in autonomous driving applications. Constant time headway control is implemented for longitudinal motion to track lead vehicles and maintain a constant time gap. MPC is used to control the steering angle and the tractive force of the autonomous vehicle. Furthermore, a new method of developing Advanced Driver Assistance Systems (ADAS) algorithms and vehicle controllers using Model-In-the-Loop (MIL) testing is explored with the use of PreScan®. With PreScan®, various traffic scenarios are modeled and sensor data are simulated using physics-based sensor models, then fed to the controller for data processing and motion planning. Obstacle detection and collision avoidance are demonstrated using the presented MPC controller.
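
The abstract names A* for local path planning. As a generic illustration (4-connected grid, Manhattan heuristic; the paper's Bezier-curve smoothing stage is omitted), a compact A* sketch:

```python
import heapq
from itertools import count

def a_star(grid, start, goal):
    """A* on a 4-connected occupancy grid (0 = free, 1 = blocked)."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    tie = count()                     # tie-breaker so the heap never compares nodes
    open_set = [(h(start), next(tie), start)]
    came_from = {start: None}
    g_cost = {start: 0}
    while open_set:
        _, _, cur = heapq.heappop(open_set)
        if cur == goal:               # reconstruct the path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0
                    and g_cost[cur] + 1 < g_cost.get(nxt, float("inf"))):
                g_cost[nxt] = g_cost[cur] + 1
                came_from[nxt] = cur
                heapq.heappush(open_set, (g_cost[nxt] + h(nxt), next(tie), nxt))
    return None                       # goal unreachable

path = a_star([[0, 0, 0], [1, 1, 0], [0, 0, 0]], (0, 0), (2, 0))
```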


Automation ◽  
2020 ◽  
Vol 1 (1) ◽  
pp. 17-32
Author(s):  
Thomas Kent ◽  
Anthony Pipe ◽  
Arthur Richards ◽  
Jim Hutchinson ◽  
Wolfgang Schuster

VENTURER was one of the first three UK government-funded research and innovation projects on Connected Autonomous Vehicles (CAVs) and was conducted predominantly in the South West region of the country. A series of increasingly complex scenarios conducted in an urban setting were used to: (i) evaluate the technology created as part of the project; (ii) systematically assess participant responses to CAVs; and (iii) inform the development of potential insurance models and legal frameworks. Developing this understanding contributed key steps towards facilitating the deployment of CAVs on UK roads. This paper describes the VENTURER Project trials and their objectives, details some of the key technologies used and, importantly, introduces some of the challenges that were overcome and the project and technological lessons learned, in the hope of helping others plan and execute future CAV research. The project successfully integrated several technologies crucial to CAV development. These included: a decision-making system using behaviour trees to make high-level decisions; a pilot-control system to smoothly and comfortably turn plans into throttle and steering actuation; sensing and perception systems to make sense of raw sensor data; and inter-CAV wireless communication capable of demonstrating vehicle-to-vehicle communication of potential hazards. The closely coupled technology integration, testing, and participant-focused trial schedule led to a greatly improved understanding of the engineering and societal barriers that CAV development faces. From a behavioural standpoint, the importance of reliability and repeatability far outweighs the need for novel trajectories; and while sensor-to-perception capabilities are critical, the process of verification and validation is extremely time-consuming. Additionally, the added capabilities that can be leveraged from inter-CAV communications show the potential for improved road safety. Importantly, to conduct human factors experiments in the CAV sector under consistent and repeatable conditions, one needs a scripted, stable set of scenarios using reliable equipment in a controllable environmental setting. This requirement can be at odds with making significant technology developments, and if both are among a project's goals, they may need to be separated.
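
The decision-making system is described only as using behaviour trees. As a generic illustration of that control pattern (not VENTURER's actual tree), a minimal sketch with stubbed conditions and actions:

```python
class Sequence:
    """Succeeds only if every child succeeds, evaluated in order."""
    def __init__(self, *children): self.children = children
    def tick(self): return all(c.tick() for c in self.children)

class Selector:
    """Tries children in priority order; succeeds on the first success."""
    def __init__(self, *children): self.children = children
    def tick(self): return any(c.tick() for c in self.children)

class Leaf:
    """Wraps a condition or action callable returning True/False."""
    def __init__(self, fn): self.fn = fn
    def tick(self): return self.fn()

# Stubbed perception query and actions (hypothetical names, for illustration).
hazard_ahead = lambda: False
yield_to_hazard = lambda: print("yielding") or True
follow_route = lambda: print("following route") or True

# High-level decision: give way if a hazard is ahead, else continue the route.
tree = Selector(
    Sequence(Leaf(hazard_ahead), Leaf(yield_to_hazard)),
    Leaf(follow_route),
)
tree.tick()
```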


2019 ◽  
Vol 2019 ◽  
pp. 1-9 ◽  
Author(s):  
Hai Wang ◽  
Xinyu Lou ◽  
Yingfeng Cai ◽  
Yicheng Li ◽  
Long Chen

Vehicle detection is one of the most important environment perception tasks for autonomous vehicles. Traditional vision-based vehicle detection methods are not accurate enough, especially for small and occluded targets, while light detection and ranging (lidar)-based methods are good at detecting obstacles but are time-consuming and have a low classification rate for different target types. To address these shortcomings and make full use of the depth information of lidar and the obstacle classification ability of vision, this work proposes a real-time vehicle detection algorithm that fuses vision and lidar point cloud information. Firstly, obstacles are detected by grid projection of the lidar point cloud. Then, the obstacles are mapped to the image to get several separate regions of interest (ROIs). After that, the ROIs are expanded based on a dynamic threshold and merged to generate the final ROI. Finally, a deep learning method named You Only Look Once (YOLO) is applied to the ROI to detect vehicles. Experimental results on the KITTI dataset demonstrate that the proposed algorithm has high detection accuracy and good real-time performance. Compared with detection based only on the YOLO deep network, the mean average precision (mAP) is increased by 17%.
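
The ROI expansion-and-merge step can be sketched as follows; the paper derives the expansion margin from a dynamic threshold, whereas this illustration uses a fixed pixel margin and a single merging pass.

```python
def expand_and_merge(rois, margin):
    """Expand each ROI by a margin and merge any boxes that then overlap.

    rois: list of (x1, y1, x2, y2) boxes from lidar obstacles projected into
    the image. A fixed margin stands in for the paper's dynamic threshold.
    """
    boxes = [(x1 - margin, y1 - margin, x2 + margin, y2 + margin)
             for x1, y1, x2, y2 in rois]
    merged = []
    for box in boxes:
        for i, m in enumerate(merged):
            # overlap test: the intervals intersect on both axes
            if box[0] <= m[2] and m[0] <= box[2] and box[1] <= m[3] and m[1] <= box[3]:
                merged[i] = (min(box[0], m[0]), min(box[1], m[1]),
                             max(box[2], m[2]), max(box[3], m[3]))
                break
        else:
            merged.append(box)
    return merged  # final ROI(s) handed to the YOLO detector

print(expand_and_merge([(10, 10, 50, 50), (55, 12, 90, 48)], margin=5))
```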


Sensors ◽  
2019 ◽  
Vol 19 (14) ◽  
pp. 3096 ◽  
Author(s):  
Junfeng Xin ◽  
Shixin Li ◽  
Jinlu Sheng ◽  
Yongbo Zhang ◽  
Ying Cui

Multi-sensor fusion for unmanned surface vehicles (USVs) is an important issue for their autonomous navigation. In this paper, an improved particle swarm optimization (PSO) is proposed for real-time autonomous navigation of a USV in a real maritime environment. To overcome conventional PSO's inherent shortcomings, such as premature convergence and parameters set by human experience, and to enhance the precision and robustness of the solution, this work proposes three optimization strategies: a linearly descending inertia weight, adaptively controlled acceleration coefficients, and random grouping inversion. Their individual and combined effects on path planning are investigated through Monte Carlo simulations on five TSPLIB instances and through application tests navigating a self-developed unmanned surface vehicle on the basis of multi-sensor data. Comparative results show that the adaptively controlled acceleration coefficients play a substantial role in reducing path length, while the linearly descending inertia weight helps improve algorithm robustness. Meanwhile, random grouping inversion improves local search capacity and maintains population diversity by stochastically dividing the single swarm into several subgroups. The PSO combining all three strategies shows the best performance, with the shortest trajectory and superior robustness, although retaining solution precision while avoiding local optima costs additional computation time. The experimental results on our USV demonstrate the effectiveness and efficiency of the proposed method for real-time navigation based on multi-sensor fusion.
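
Two of the three strategies, the linearly descending inertia weight and the adaptively controlled acceleration coefficients, can be shown in a single PSO update step. The schedules and bounds below are common defaults, not the authors' exact settings, and the random grouping inversion step is omitted.

```python
import numpy as np

def pso_step(pos, vel, pbest, gbest, it, max_it,
             w_max=0.9, w_min=0.4, c_start=2.5, c_end=0.5):
    """One PSO velocity/position update with a linearly descending inertia
    weight and acceleration coefficients that shift from exploration
    (cognitive term) to exploitation (social term) over the run."""
    w = w_max - (w_max - w_min) * it / max_it        # linearly descending inertia
    c1 = c_start - (c_start - c_end) * it / max_it   # cognitive: decreases
    c2 = c_end + (c_start - c_end) * it / max_it     # social: increases
    r1, r2 = np.random.rand(*pos.shape), np.random.rand(*pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel

# 20 particles in a 2-D search space, at iteration 10 of 100.
pos, vel = np.random.rand(20, 2), np.zeros((20, 2))
pbest, gbest = pos.copy(), pos[0]
pos, vel = pso_step(pos, vel, pbest, gbest, it=10, max_it=100)
```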


2015 ◽  
Vol 7 (4) ◽  
Author(s):  
F. Heidari ◽  
R. Fotouhi

This paper describes a human-inspired method (HIM) and a fully integrated navigation strategy for a wheeled mobile robot in an outdoor farm setting. The proposed strategy is composed of four main actions: sensor data analysis, obstacle detection, obstacle avoidance, and goal seeking. Using these actions, the navigation approach is capable of autonomous row detection, row following, and path-planned motion in outdoor settings. To drive in off-road terrain, the robot must detect holes or ground depressions (negative obstacles), inherent parts of these environments, in real time and at a safe distance. Key originalities of the proposed approach are its ability to accurately detect both positive (above-ground) and negative obstacles, and to identify the end of a row of bushes (e.g., in a farm) and enter the next row. Experimental evaluations were carried out using a differential wheeled mobile robot in different settings. The robot carries a tilting unit with a laser range finder (LRF) to detect objects, and a real-time kinematic differential global positioning system (RTK-DGPS) unit for localization. Experiments demonstrate that the proposed technique successfully detects and follows rows (path following) and robustly navigates the robot in point-to-point motion control.
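
Negative obstacle detection with a downward-tilted LRF is commonly based on range discontinuities: a hole returns a longer range than flat ground would. A sketch under that assumption (the paper's actual detection criterion is not given in the abstract):

```python
def negative_obstacle_indices(ranges, expected, jump=0.5):
    """Flag scan indices where the measured range exceeds the flat-ground
    range by more than `jump` metres: with a downward-tilted LRF, a hole
    returns a noticeably longer range than level terrain.

    ranges:   measured ranges from the tilted LRF (one scan line), in metres.
    expected: precomputed flat-ground range for each beam at the current tilt.
    The 0.5 m jump threshold is illustrative, not the paper's value.
    """
    return [i for i, (r, e) in enumerate(zip(ranges, expected)) if r - e > jump]

flat = [2.0] * 5
print(negative_obstacle_indices([2.0, 2.1, 3.2, 2.0, 2.0], flat))  # -> [2]
```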


2016 ◽  
Author(s):  
Georg Tanzmeister

This dissertation is focused on the environment model for automated vehicles. A reliable model of the local environment available in real-time is a prerequisite to enable almost any useful activity performed by a robot, such as planning motions to fulfill tasks. It is particularly important in safety-critical applications, such as for autonomous vehicles in regular traffic. In this thesis, novel concepts for local mapping, tracking, the detection of principal moving directions, cost evaluations in motion planning, and road course estimation have been developed. An object- and sensor-independent grid representation forms the basis of all presented methods, enabling a generic and robust estimation of the environment. All approaches have been evaluated with sensor data from real road scenarios, and their performance has been experimentally demonstrated with a test vehicle. ...
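
The grid representation is not specified beyond being object- and sensor-independent; a standard log-odds occupancy grid update illustrates the general idea (a generic sketch, not the dissertation's model):

```python
import numpy as np

class OccupancyGrid:
    """Log-odds occupancy grid: each cell accumulates evidence independently,
    which keeps the representation object- and sensor-independent."""
    def __init__(self, rows, cols, l_occ=0.85, l_free=-0.4):
        self.logodds = np.zeros((rows, cols))
        self.l_occ, self.l_free = l_occ, l_free  # illustrative evidence weights

    def update(self, hit_cells, free_cells):
        for r, c in hit_cells:   # cells where a sensor return was measured
            self.logodds[r, c] += self.l_occ
        for r, c in free_cells:  # cells traversed by the beam on the way
            self.logodds[r, c] += self.l_free

    def probability(self):
        # convert accumulated log-odds back to occupancy probabilities
        return 1.0 - 1.0 / (1.0 + np.exp(self.logodds))

grid = OccupancyGrid(100, 100)
grid.update(hit_cells=[(50, 60)], free_cells=[(50, c) for c in range(60)])
```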


2021 ◽  
Vol 20 (5s) ◽  
pp. 1-22
Author(s):  
Arnav Malawade ◽  
Mohanad Odema ◽  
Sebastien Lajeunesse-degroot ◽  
Mohammad Abdullah Al Faruque

Autonomous vehicles (AVs) are expected to revolutionize transportation and significantly improve road safety. However, these benefits do not come without cost; AVs require large Deep-Learning (DL) models and powerful hardware to operate reliably in real time, drawing between several hundred watts and one kilowatt of power. This power consumption can dramatically reduce a vehicle's driving range and affect emissions. To address this problem, we propose SAGE: a methodology for selectively offloading the key energy-consuming modules of DL architectures to the cloud to optimize edge energy usage while meeting real-time latency constraints. Furthermore, we leverage Head Network Distillation (HND) to introduce efficient bottlenecks within the DL architecture that minimize the network overhead of offloading with almost no degradation in model performance. We evaluate SAGE using an Nvidia Jetson TX2 and an industry-standard Nvidia Drive PX2 as the AV edge devices and demonstrate that our offloading strategy is practical for a wide range of DL models and internet connection bandwidths on 3G, 4G LTE, and WiFi. Compared to edge-only computation, SAGE reduces energy consumption by an average of 36.13%, 47.07%, and 55.66% for an AV with one low-resolution camera, one high-resolution camera, and three high-resolution cameras, respectively. SAGE also reduces upload data size by up to 98.40% compared to direct camera offloading.
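
SAGE's offload-or-not decision trades transmission cost against edge compute under a latency constraint. The sketch below is a simplified model of that trade-off; the linear transmit-energy model and every parameter are assumptions, not SAGE's actual cost model.

```python
def should_offload(data_mb, bandwidth_mbps, cloud_latency_ms, deadline_ms,
                   edge_energy_j, tx_power_w):
    """Offload a DL module only if doing so saves energy versus running it
    on the edge device AND the result still arrives within the deadline.

    A simplified illustration of the trade-off: transmit energy is modelled
    as power x upload time, and cloud compute is folded into a fixed latency.
    """
    tx_time_ms = data_mb * 8 / bandwidth_mbps * 1000   # upload time
    total_latency = tx_time_ms + cloud_latency_ms
    tx_energy_j = tx_power_w * tx_time_ms / 1000       # energy spent transmitting
    return total_latency <= deadline_ms and tx_energy_j < edge_energy_j

# E.g., a 0.3 MB bottleneck tensor over a 50 Mbps link vs. on-board compute.
print(should_offload(data_mb=0.3, bandwidth_mbps=50, cloud_latency_ms=40,
                     deadline_ms=100, edge_energy_j=2.0, tx_power_w=1.5))
```

The HND bottlenecks mentioned in the abstract matter here because they shrink `data_mb`, which cuts both the upload time and the transmit energy in this model.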

