Real Time Vehicle Detection, Tracking, and Inter-vehicle Distance Estimation based on Stereovision and Deep Learning using YOLOv3

Author(s):  
Omar BOURJA ◽  
Hatim DERROUZ ◽  
Hamd AIT ABDELALI ◽  
Abdelilah MAACH ◽  
Rachid OULAD HAJ THAMI ◽  
...  
2021 ◽  
Vol 13 (3) ◽  
pp. 809-820


Author(s):  
V. Sowmya ◽  
R. Radha

Vehicle detection and recognition in a real-time traffic surveillance system demand advanced computational intelligence and resources for effective traffic management under all possible contingencies. One focus area of deep intelligent systems is to facilitate vehicle detection and recognition techniques for robust traffic management of heavy vehicles. Sophisticated mechanisms for this purpose include the Support Vector Machine (SVM), Convolutional Neural Networks (CNN), Region-based Convolutional Neural Networks (R-CNN), and the You Only Look Once (YOLO) model. Accordingly, it is pivotal to choose a precise algorithm for vehicle detection and recognition that also suits the real-time environment. In this study, deep learning algorithms, namely Faster R-CNN, YOLOv2, YOLOv3, and YOLOv4, are compared across diverse aspects of their features. Two classes of heavy transport vehicles, buses and trucks, constitute the detection and recognition targets in the proposed work. Data augmentation and transfer learning are implemented to build, train, and test the model for detection and recognition, in order to avoid over-fitting and to improve speed and accuracy. Extensive empirical evaluation is conducted on two standard datasets, COCO and PASCAL VOC 2007. Finally, comparative results and analyses are presented based on real-time performance.
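As a rough illustration of the transfer-learning and data-augmentation setup described above, the sketch below fine-tunes a COCO-pretrained torchvision Faster R-CNN (one of the compared models) for the two heavy-vehicle classes. The class count, augmentation choices, and hyperparameters are illustrative assumptions, not the authors' exact configuration.

# Minimal transfer-learning sketch for a two-class (bus, truck) detector.
# torchvision's Faster R-CNN is used as a stand-in for the compared models;
# class count, augmentations, and optimizer settings are assumptions.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.transforms import v2 as T

NUM_CLASSES = 3  # background + bus + truck

def build_model():
    # Start from COCO-pretrained weights (transfer learning)...
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    # ...and replace the box-prediction head for the two target classes.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
    return model

# Simple augmentation pipeline to reduce over-fitting on a small dataset.
train_transforms = T.Compose([
    T.RandomHorizontalFlip(p=0.5),
    T.ColorJitter(brightness=0.3, contrast=0.3),
    T.ToDtype(torch.float32, scale=True),
])

model = build_model()
optimizer = torch.optim.SGD((p for p in model.parameters() if p.requires_grad),
                            lr=0.005, momentum=0.9, weight_decay=0.0005)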


2021 ◽  
Author(s):  
Ming Ji ◽  
Chuanxia Sun ◽  
Yinglei Hu

Abstract: In order to solve the increasingly serious traffic congestion problem, intelligent transportation systems are widely used in dynamic traffic management, effectively alleviating congestion and improving road traffic efficiency. With the continuous development of traffic data acquisition technology, real-time traffic data across the road network can now be obtained in a timely manner, and this wealth of traffic information provides a data guarantee for analysing and predicting the road network traffic state. Based on a deep learning framework, this paper studies a vehicle recognition algorithm and a road environment discrimination algorithm, which greatly improve the accuracy of highway vehicle recognition. Highway video surveillance images are collected in different environments to establish a complete original database; a deep learning model for environment discrimination is built and the classification model is trained to realize real-time environment recognition for highways, which serves as the basic condition for vehicle recognition and traffic event discrimination and provides basic information for vehicle detection model selection. To improve the accuracy of road vehicle detection, vehicle target labeling and sample preprocessing are carried out on samples from the different environments. On this basis, the vehicle recognition algorithm is studied, and a vehicle detection algorithm based on weather environment recognition and the Fast R-CNN model is proposed. Finally, the performance of the proposed vehicle detection algorithm is verified by comparing detection accuracy between environment-specific and overall dataset models, between different network structures and deep learning methods, and against other methods.
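As a hypothetical sketch of the two-stage pipeline described above, the code below first classifies the road environment of a frame and then runs an environment-specific detector. The environment labels, the ResNet-18 classifier, and the per-environment detector files are assumptions made for illustration.

# Environment recognition used to select an environment-specific detector.
# Labels, model choices, and file names are illustrative assumptions.
import torch
import torchvision

ENVIRONMENTS = ["sunny", "rainy", "foggy", "night"]  # placeholder label set

# Environment classifier: ResNet-18 fine-tuned on highway surveillance frames.
env_net = torchvision.models.resnet18(weights="DEFAULT")
env_net.fc = torch.nn.Linear(env_net.fc.in_features, len(ENVIRONMENTS))

# One detector per environment (hypothetical checkpoint files; in the paper
# these are Fast R-CNN models trained on environment-specific samples).
detectors = {env: torch.load(f"detector_{env}.pt", map_location="cpu")
             for env in ENVIRONMENTS}

def detect_vehicles(frame_tensor):
    """Classify the road environment, then run the matching detector."""
    env_net.eval()
    with torch.no_grad():
        env_idx = env_net(frame_tensor.unsqueeze(0)).argmax(dim=1).item()
    detector = detectors[ENVIRONMENTS[env_idx]]
    detector.eval()
    with torch.no_grad():
        return detector([frame_tensor])  # torchvision-style detection output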


2019 ◽  
Vol 2019 ◽  
pp. 1-9 ◽  
Author(s):  
Hai Wang ◽  
Xinyu Lou ◽  
Yingfeng Cai ◽  
Yicheng Li ◽  
Long Chen

Vehicle detection is one of the most important environment perception tasks for autonomous vehicles. Traditional vision-based vehicle detection methods are not accurate enough, especially for small and occluded targets, while light detection and ranging (lidar)-based methods detect obstacles well but are time-consuming and have a low classification rate for different target types. To address these shortcomings and make full use of the depth information of lidar and the obstacle classification ability of vision, this work proposes a real-time vehicle detection algorithm that fuses vision and lidar point cloud information. Firstly, obstacles are detected by the grid projection method using the lidar point cloud. Then, the obstacles are mapped to the image to obtain several separate regions of interest (ROIs). After that, the ROIs are expanded based on a dynamic threshold and merged to generate the final ROI. Finally, a deep learning method named You Only Look Once (YOLO) is applied on the ROI to detect vehicles. Experimental results on the KITTI dataset demonstrate that the proposed algorithm achieves high detection accuracy and good real-time performance. Compared with a detection method based only on YOLO deep learning, the mean average precision (mAP) is increased by 17%.
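The lidar-to-ROI stage of this pipeline can be sketched roughly in numpy as follows. The grid cell size, point-count threshold, assumed obstacle height, fixed expansion margin (the paper uses a dynamic threshold), and the 3x4 projection matrix P, taken to include the lidar-to-camera extrinsics, are all illustrative assumptions.

# Sketch of the grid-projection and ROI-generation steps described above.
import numpy as np

def occupied_cells(points_xy, cell=0.2, min_pts=5):
    """Bin lidar ground-plane coordinates into a grid and keep dense cells."""
    ij = np.floor(points_xy / cell).astype(int)
    cells, counts = np.unique(ij, axis=0, return_counts=True)
    return cells[counts >= min_pts] * cell  # lower corner of each cell, metres

def cell_to_image_roi(corner, P, cell=0.2, height=2.0):
    """Project the 8 corners of an assumed obstacle box standing on the cell
    through P (3x4, intrinsics times lidar-to-camera extrinsics) and take the
    bounding box of the projected pixels."""
    x, y = corner
    xs, ys, zs = np.meshgrid([x, x + cell], [y, y + cell], [0.0, height])
    pts = np.stack([xs.ravel(), ys.ravel(), zs.ravel(), np.ones(8)])
    uvw = P @ pts
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
    return [u.min(), v.min(), u.max(), v.max()]

def expand_and_merge(rois, margin=20):
    """Expand each ROI by a fixed margin (the paper uses a dynamic threshold)
    and merge all of them into the final ROI fed to the YOLO detector."""
    rois = np.asarray(rois) + np.array([-margin, -margin, margin, margin])
    return np.array([rois[:, 0].min(), rois[:, 1].min(),
                     rois[:, 2].max(), rois[:, 3].max()])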


Symmetry ◽  
2019 ◽  
Vol 11 (10) ◽  
pp. 1205
Author(s):  
Jong Bae Kim

In this paper, a method was proposed for detecting a vehicle driving ahead and estimating its distance using a single black-box camera installed in the vehicle. To apply the proposed method to autonomous vehicles, it was necessary to reduce the computational load and speed up the processing. To do this, the proposed method decomposed the input image into multiple-resolution images for real-time processing and then extracted the aggregated channel features (ACFs). The idea was to extract only the most important features from images at different resolutions symmetrically. A method of detecting an object and a method of estimating a vehicle’s distance from a bird’s-eye view through inverse perspective mapping (IPM) were applied. In the proposed method, ACFs were used to generate an AdaBoost-based vehicle detector. The ACFs were extracted from the LUV color, edge gradient, and orientation (histograms of oriented gradients) of the input image. Subsequently, by applying IPM to project the 2D input image into three dimensions, the distance between the detected vehicle and the autonomous vehicle was estimated. The proposed method was applied in a real-world road environment and showed accurate results for vehicle detection and distance estimation with real-time processing. Thus, it was shown that our method is applicable to autonomous vehicles.
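The IPM-based distance estimation step can be illustrated with OpenCV as below: a homography maps image points lying on the road plane to metric ground coordinates, and the bottom centre of a detected vehicle box gives its distance. The four image/ground calibration point pairs are placeholders for a real calibration, and the bounding-box format is an assumption.

# Sketch of distance estimation via inverse perspective mapping (IPM).
import cv2
import numpy as np

# Four image points on the road (pixels) and their ground positions
# (metres; x lateral, y forward from the camera): assumed calibration data.
img_pts = np.float32([[420, 720], [860, 720], [700, 450], [580, 450]])
ground_pts = np.float32([[-1.8, 5.0], [1.8, 5.0], [1.8, 40.0], [-1.8, 40.0]])
H = cv2.getPerspectiveTransform(img_pts, ground_pts)

def distance_to_vehicle(bbox):
    """bbox = (x1, y1, x2, y2); the bottom centre is the road contact point."""
    x1, y1, x2, y2 = bbox
    foot = np.float32([[[(x1 + x2) / 2.0, y2]]])      # shape (1, 1, 2)
    gx, gy = cv2.perspectiveTransform(foot, H)[0, 0]  # metres on the ground
    return float(np.hypot(gx, gy))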


Sensors ◽  
2020 ◽  
Vol 20 (2) ◽  
pp. 532 ◽  
Author(s):  
Antoine Mauri ◽  
Redouane Khemmar ◽  
Benoit Decoux ◽  
Nicolas Ragot ◽  
Romain Rossi ◽  
...  

In core computer vision tasks, we have witnessed significant advances in object detection, localisation and tracking. However, there are currently no methods that detect, localize and track objects in road environments while taking real-time constraints into account. In this paper, our objective is to develop a deep learning multi-object detection and tracking technique applied to road smart mobility. Firstly, we propose an effective detector based on YOLOv3, which we adapt to our context. Subsequently, to localize the detected objects successfully, we put forward an adaptive method aiming to extract 3D information, i.e., depth maps. To do so, a comparative study is carried out between two approaches: Monodepth2 for monocular vision and MADNet for stereoscopic vision. These approaches are then evaluated over datasets containing depth information in order to discern which solution performs better under real-time conditions. Object tracking is necessary in order to mitigate the risk of collisions. Unlike traditional tracking approaches, which require target initialization beforehand, our approach consists of using information from object detection and distance estimation to initialize targets and to track them later. Specifically, we propose to improve the SORT approach for 3D object tracking: we introduce an extended Kalman filter to better estimate the position of objects. Extensive experiments carried out on the KITTI dataset prove that our proposal outperforms state-of-the-art approaches.
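The sketch below illustrates the 3D tracking idea: a detection box combined with the estimated depth map yields a 3D measurement that updates a per-track constant-velocity Kalman filter. A plain linear filter is used here instead of the extended Kalman filter proposed in the paper, and the state layout, noise matrices, and median-depth measurement are simplifying assumptions.

# Simplified 3D track state update from a detection box and a depth map.
import numpy as np

def box_to_measurement(box, depth_map):
    """Box centre plus the median depth inside the box -> (u, v, z)."""
    x1, y1, x2, y2 = (int(v) for v in box)
    z = float(np.median(depth_map[y1:y2, x1:x2]))
    return np.array([(x1 + x2) / 2.0, (y1 + y2) / 2.0, z])

class Track3D:
    """State [u, v, z, du, dv, dz] with a constant-velocity motion model."""
    def __init__(self, z0, dt=0.1):
        self.x = np.concatenate([z0, np.zeros(3)])
        self.P = np.eye(6) * 10.0            # state covariance
        self.F = np.eye(6)                   # constant-velocity transition
        self.F[:3, 3:] = np.eye(3) * dt
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # we observe (u, v, z)
        self.Q = np.eye(6) * 0.01            # process noise (assumed)
        self.R = np.eye(3) * 1.0             # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:3]

    def update(self, z):
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(6) - K @ self.H) @ self.P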


Symmetry ◽  
2020 ◽  
Vol 12 (12) ◽  
pp. 2012
Author(s):  
JongBae Kim

This paper proposes a real-time method for detecting a car driving ahead on a tunnel road. Unlike the general road environment, the tunnel environment is irregular and has significantly lower illumination, consisting of tunnel lighting and light reflected from driving vehicles. Environmental restrictions are also severe owing to pollution by vehicle exhaust gas. The proposed method detects vehicles in real time in tunnel images using a model learned in advance with deep learning techniques. To detect the vehicle region in the tunnel environment, brightness smoothing and noise removal processes are carried out. The vehicle region is learned after generating training images using the ground-truth method. The YOLOv2 model, which showed optimal performance among the compared deep learning algorithms, is applied, and the training parameters are refined through experiments. For the proposed method applied to various tunnel road environments, the vehicle detection rate is approximately 87% and the detection accuracy is approximately 94%.
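The brightness-smoothing and noise-removal stage can be sketched as below, using CLAHE on the luminance channel and non-local-means denoising as reasonable stand-ins; the paper does not name these exact operators, and the parameters are guesses.

# Hypothetical pre-processing for low-light tunnel frames before detection.
import cv2

def preprocess_tunnel_frame(bgr_frame):
    # Equalise brightness on the luminance channel only (CLAHE).
    lab = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    smoothed = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)
    # Suppress sensor noise amplified by the dark tunnel environment.
    return cv2.fastNlMeansDenoisingColored(smoothed, None, 10, 10, 7, 21)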

