Lane following and obstacle detection techniques in autonomous driving vehicles

Author(s):  
Phanindra Amaradi ◽  
Nishanth Sriramoju ◽  
Li Dang ◽  
Girma S. Tewolde ◽  
Jaerock Kwon


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2140
Author(s):  
De Jong Yeong ◽  
Gustavo Velasco-Hernandez ◽  
John Barry ◽  
Joseph Walsh

With the significant advancement of sensor and communication technology and the reliable application of obstacle detection techniques and algorithms, automated driving is becoming a pivotal technology that can revolutionize the future of transportation and mobility. Sensors are fundamental to the perception of vehicle surroundings in an automated driving system, and the use and performance of multiple integrated sensors can directly determine the safety and feasibility of automated driving vehicles. Sensor calibration is the foundation block of any autonomous system and its constituent sensors and must be performed correctly before sensor fusion and obstacle detection processes may be implemented. This paper evaluates the capabilities and the technical performance of sensors which are commonly employed in autonomous vehicles, primarily focusing on a large selection of vision cameras, LiDAR sensors, and radar sensors and the various conditions in which such sensors may operate in practice. We present an overview of the three primary categories of sensor calibration and review existing open-source calibration packages for multi-sensor calibration and their compatibility with numerous commercial sensors. We also summarize the three main approaches to sensor fusion and review current state-of-the-art multi-sensor fusion techniques and algorithms for object detection in autonomous driving applications. The current paper, therefore, provides an end-to-end review of the hardware and software methods required for sensor fusion object detection. We conclude by highlighting some of the challenges in the sensor fusion field and propose possible future research directions for automated driving systems.
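To make the calibration stage concrete, the sketch below (not from the paper) shows the core operation that extrinsic camera–LiDAR calibration enables: projecting LiDAR points into the image plane with an intrinsic matrix K and an extrinsic rotation R and translation t. All matrix values are placeholder assumptions for illustration only.

```python
import numpy as np

# Hypothetical calibration results -- in practice these come from a
# calibration package (e.g., a checkerboard-based camera-LiDAR routine).
K = np.array([[721.5, 0.0, 609.6],      # camera intrinsics
              [0.0, 721.5, 172.9],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                            # rotation: LiDAR frame -> camera frame
t = np.array([[0.0], [-0.08], [-0.27]])  # translation (metres), assumed values

def project_lidar_to_image(points_xyz):
    """Project Nx3 LiDAR points into pixel coordinates using K, R, t."""
    cam = R @ points_xyz.T + t           # 3xN points in the camera frame
    in_front = cam[2, :] > 0.0           # keep points in front of the camera
    cam = cam[:, in_front]
    uvw = K @ cam                        # homogeneous pixel coordinates
    return (uvw[:2, :] / uvw[2, :]).T    # Mx2 (u, v) pixel positions

points = np.random.uniform(-10, 10, (100, 3)) + np.array([0.0, 0.0, 15.0])
pixels = project_lidar_to_image(points)
```

Once R and t are estimated correctly, projected LiDAR points should align with the corresponding image features; checking this alignment is a common sanity test before any sensor fusion or obstacle detection is attempted.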


2021 ◽  
Vol 11 (13) ◽  
pp. 6016
Author(s):  
Jinsoo Kim ◽  
Jeongho Cho

For autonomous vehicles, it is critical to be aware of the driving environment to avoid collisions and drive safely. The recent evolution of convolutional neural networks has contributed significantly to accelerating the development of object detection techniques that enable autonomous vehicles to handle rapid changes in various driving environments. However, collisions in an autonomous driving environment can still occur due to undetected obstacles and various perception problems, particularly occlusion. Thus, we propose a robust object detection algorithm for environments in which objects are truncated or occluded, employing RGB images and light detection and ranging (LiDAR) bird’s eye view (BEV) representations. This structure combines independent detection results obtained in parallel through “you only look once” networks using an RGB image and a height map converted from the BEV representation of LiDAR’s point cloud data (PCD). The region proposal of an object is determined via non-maximum suppression, which suppresses the bounding boxes of adjacent regions. The proposed scheme was evaluated using the KITTI vision benchmark suite dataset. The results demonstrate that detection accuracy with the integrated PCD BEV representations is superior to that obtained with an RGB camera alone. In addition, robustness is improved: detection accuracy is significantly enhanced even when the target objects are partially occluded in the frontal view, demonstrating that the proposed algorithm outperforms the conventional RGB-based model.
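As a minimal sketch of the merging step described above, the following greedy non-maximum suppression routine (an assumed standard implementation, not the authors' code) could merge bounding boxes pooled from the two parallel YOLO branches, keeping the highest-scoring box among overlapping detections.

```python
import numpy as np

def non_maximum_suppression(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: keep the highest-scoring box, drop overlapping neighbours.
    boxes: Nx4 array of (x1, y1, x2, y2); scores: length-N confidences."""
    order = np.argsort(scores)[::-1]             # indices by score, descending
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the top box with all remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_threshold]  # discard suppressed boxes
    return keep
```

In a fusion setting like the one described, the boxes and scores from the RGB branch and the BEV-height-map branch would be concatenated before being passed to this routine, so a single confident detection survives per object.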


2019 ◽  
Vol 9 (14) ◽  
pp. 2843 ◽  
Author(s):  
Pierre Duthon ◽  
Michèle Colomb ◽  
Frédéric Bernardin

Autonomous driving is based on innovative technologies that have to ensure that vehicles are driven safely. LiDARs are one of the reference sensors for obstacle detection. However, this technology is affected by adverse weather conditions, especially fog. Different wavelengths (905 nm vs. 1550 nm) are investigated to meet this challenge. The influence of wavelength on light transmission in fog is examined and the results are reported. A theoretical approach, calculating the extinction coefficient for different wavelengths, is presented and compared with spectroradiometer measurements in the range of 350 nm–2450 nm. The experiment took place in the French Cerema PAVIN BP platform for intelligent vehicles, which makes it possible to reproduce controlled fog of different densities for two types of droplet size distribution. Direct spectroradiometer extinction measurements vary in the same way as the models. Finally, the wavelengths for LiDARs should not be chosen on the basis of fog conditions: there is a small difference (<10%) between the extinction coefficients at 905 nm and 1550 nm for the same emitted power in fog.
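The underlying attenuation model here is the Beer–Lambert law, T = exp(−βd), where β is the extinction coefficient and d the path length. The sketch below compares transmission at the two LiDAR wavelengths using assumed, illustrative β values (the paper's finding is only that the two coefficients differ by less than 10%); the Koschmieder relation β ≈ 3.912/V, which links β to meteorological visibility V, is used to pick a plausible baseline.

```python
import numpy as np

# Illustrative extinction coefficients (per metre) -- assumed values, not
# measurements from the paper, which reports a <10% difference between bands.
beta_905 = 3.912 / 130.0      # Koschmieder: beta ~ 3.912 / visibility (metres)
beta_1550 = beta_905 * 1.08   # assume 8% higher extinction at 1550 nm

def transmission(beta, distance_m):
    """Beer-Lambert law: fraction of emitted power that survives the fog."""
    return np.exp(-beta * distance_m)

for d in (25.0, 50.0, 100.0):
    print(f"d={d:5.1f} m  T(905 nm)={transmission(beta_905, d):.3f}  "
          f"T(1550 nm)={transmission(beta_1550, d):.3f}")
```

Because both wavelengths sit in the same near-flat region of the fog extinction spectrum, the computed transmissions stay close to each other at any range, which is consistent with the paper's conclusion that fog alone should not drive the wavelength choice.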


Author(s):  
Huanbing Gao ◽  
Lei Liu ◽  
Ya Tian ◽  
Shouyin Lu

This paper presents a 3D reconstruction method for road scenes aided by obstacle detection. 3D reconstruction of road scenes can be used in autonomous driving, driver assistance systems, and car navigation systems. However, errors often arise during 3D reconstruction due to shadows cast by moving objects in the road scene. The presented 3D reconstruction method with obstacle detection feedback avoids this problem. Firstly, the paper offers a framework for the 3D reconstruction of road scenes by laser scanning and vision. A calibration method based on the location of the horizon is proposed, along with an attitude angle measurement method based on the vanishing point to refine the 3D reconstruction result. Secondly, the reconstruction framework is extended by integrating an object recognition module that automatically detects and discriminates obstacles in the input video streams using a RANSAC approach and a threshold filter, and localizes them in the 3D model. 3D reconstruction and obstacle detection are tightly integrated and benefit from each other. The experimental results verify the feasibility and practicality of the proposed method.
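A minimal sketch of such an obstacle detection stage, assuming a standard RANSAC plane fit on 3D points followed by a height threshold filter (the paper applies RANSAC and a threshold filter to the input video streams; the specifics below are illustrative, not the authors' implementation):

```python
import numpy as np

def ransac_ground_plane(points, n_iters=200, dist_thresh=0.15, seed=0):
    """Fit the dominant plane (the road surface) to Nx3 points with RANSAC."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = np.empty(0, dtype=int), None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # skip degenerate (collinear) samples
            continue
        normal /= norm
        if normal[2] < 0:                    # orient the normal to point upward
            normal = -normal
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)   # point-to-plane distances
        inliers = np.flatnonzero(dist < dist_thresh)
        if len(inliers) > len(best_inliers):
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers

def obstacle_candidates(points, model, height_thresh=0.3):
    """Threshold filter: points well above the fitted plane are obstacle candidates."""
    normal, d = model
    heights = points @ normal + d            # signed height above the plane
    return points[heights > height_thresh]
```

Points that survive the threshold filter can then be clustered into individual obstacles and placed back into the 3D model, which is the feedback loop the paper describes between reconstruction and detection.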


10.5772/56603 ◽  
2013 ◽  
Vol 10 (6) ◽  
pp. 261 ◽  
Author(s):  
Hao Sun ◽  
Huanxin Zou ◽  
Shilin Zhou ◽  
Cheng Wang ◽  
Naser El-Sheimy


2020 ◽  
pp. 123-145
Author(s):  
Sushma Jaiswal ◽  
Tarun Jaiswal

In computer vision, object detection is an important and rapidly evolving area of study. Object detection is used in numerous fields, such as security surveillance and autonomous driving. Deep-learning-based object detection techniques have developed at a very fast pace and have attracted the attention of many researchers. The development of comprehensive and rigorous object detection frameworks has become a main focus of 21st-century research. In this survey, we first examine and evaluate the various object detection approaches and describe the benchmark datasets. We also deliver a wide-ranging overview of object detection approaches in an organized way, covering both one-stage and two-stage detectors. Lastly, we consider the construction of these object detection approaches to suggest directions for further research.
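For readers wanting a concrete starting point, the snippet below runs a pretrained two-stage detector (Faster R-CNN, one of the second-stage detector families such surveys cover) via torchvision; the image path is a placeholder, and this is a generic usage sketch rather than anything from the chapter.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Load a pretrained two-stage detector: Faster R-CNN with a ResNet-50 FPN backbone.
model = fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

image = Image.open("street_scene.jpg").convert("RGB")  # placeholder input image
with torch.no_grad():
    pred = model([to_tensor(image)])[0]                # one dict per input image

# Keep detections above a confidence threshold.
for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
    if score > 0.7:
        print(int(label), float(score), box.tolist())
```

A one-stage detector (e.g., a YOLO or SSD variant) would replace the region-proposal-plus-refinement pipeline with a single dense prediction pass, trading some accuracy for speed; that trade-off is the central axis along which the surveyed methods differ.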

