Boosting Multi-Vehicle Tracking with a Joint Object Detection and Viewpoint Estimation Sensor

Sensors ◽  
2019 ◽  
Vol 19 (19) ◽  
pp. 4062 ◽  
Author(s):  
Roberto López-Sastre ◽  
Carlos Herranz-Perdiguero ◽  
Ricardo Guerrero-Gómez-Olmedo ◽  
Daniel Oñoro-Rubio ◽  
Saturnino Maldonado-Bascón

In this work, we address the problem of multi-vehicle detection and tracking for traffic monitoring applications. We present a novel intelligent visual sensor for tracking-by-detection with simultaneous pose estimation. Essentially, we adapt an Extended Kalman Filter (EKF) to work not only with the detections of the vehicles but also with their estimated coarse viewpoints, directly obtained with the vision sensor. We show that enhancing the tracking with observations of the vehicle pose results in a better estimation of the vehicles' trajectories. For the simultaneous object detection and viewpoint estimation task, we present and evaluate two independent solutions. One is based on a fast GPU implementation of a Histogram of Oriented Gradients (HOG) detector with Support Vector Machines (SVMs). For the second, we adequately modify and train the Faster R-CNN deep learning model in order to recover from it not only the object localization but also an estimation of its pose. Finally, we publicly release a challenging dataset, the GRAM Road Traffic Monitoring (GRAM-RTM) dataset, which has been especially designed for evaluating multi-vehicle tracking approaches within the context of traffic monitoring applications. It comprises more than 700 unique vehicles annotated across more than 40,300 frames of three videos. We expect GRAM-RTM to become a benchmark in vehicle detection and tracking, providing the computer vision and intelligent transportation systems communities with a standard set of images, annotations and evaluation procedures for multi-vehicle tracking. We present a thorough experimental evaluation of our approaches with the GRAM-RTM, which will be useful for establishing further comparisons. The results obtained confirm that the simultaneous integration of vehicle localizations and pose estimations as observations in an EKF improves the tracking results.
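A minimal sketch of the core idea, not the authors' implementation: a Kalman filter whose measurement vector couples the detected vehicle position with its coarse viewpoint. With the linear constant-velocity model assumed here, the EKF of the paper reduces to a plain Kalman filter; the state layout, frame rate, and all noise values are illustrative assumptions.

```python
import numpy as np

dt = 1.0 / 25.0  # assumed frame interval

# Constant-velocity motion model over state [x, y, vx, vy, theta].
F = np.array([[1, 0, dt, 0, 0],
              [0, 1, 0, dt, 0],
              [0, 0, 1,  0, 0],
              [0, 0, 0,  1, 0],
              [0, 0, 0,  0, 1]], dtype=float)

# Measurement: bounding-box centre (x, y) plus coarse viewpoint theta.
H = np.array([[1, 0, 0, 0, 0],
              [0, 1, 0, 0, 0],
              [0, 0, 0, 0, 1]], dtype=float)

Q = np.eye(5) * 1e-2          # process noise (assumed)
R = np.diag([4.0, 4.0, 0.3])  # measurement noise; the viewpoint is coarse

def predict(x, P):
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z):
    """z = [cx, cy, theta] from the detector + viewpoint estimator."""
    y = z - H @ x                                # innovation
    y[2] = (y[2] + np.pi) % (2 * np.pi) - np.pi  # wrap angle innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ y
    P = (np.eye(5) - K @ H) @ P
    return x, P
```

The point of the pose observation is visible in `R`: even a coarse viewpoint constrains the heading component of the state, which in turn stabilizes the velocity estimate and the predicted trajectory.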

Author(s):  
Adhi Prahara ◽  
Ahmad Azhari ◽  
Murinto Murinto

Vehicles come in several types, each with a different color, size, and shape. The appearance of a vehicle also changes when viewed from different viewpoints of a traffic surveillance camera, which creates many possible vehicle poses. One thing they have in common, however, is that vehicle pose usually follows the road direction. Therefore, this research proposes a method to estimate the pose of a vehicle for vehicle detection and tracking based on road direction. Vehicle training data are generated from 3D vehicle models in four pair-orientation categories. Histogram of Oriented Gradients (HOG) and a Linear Support Vector Machine (Linear-SVM) are used to build vehicle detectors from the data. The road area is extracted from the traffic surveillance image to localize the detection area. The vehicle pose, estimated from the road direction, is used to select a suitable vehicle detector for the detection process. To obtain the final vehicle objects, a vehicle line checking method is applied to the detection result. Finally, vehicle tracking is performed to assign a label to each vehicle. Tests conducted on various viewpoints of a traffic surveillance camera show that the method effectively detects and tracks vehicles by estimating their pose. Performance evaluation of the proposed method shows an accuracy of 0.9170 and a balanced accuracy (BAC) of 0.9161.
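A minimal sketch of the detector-selection step described above, with assumptions throughout: the `.npy` coefficient files, the HOG window geometry, and the quantisation of the road direction into four pair-orientation bins are all hypothetical, standing in for the paper's trained models.

```python
import cv2
import numpy as np

# Four HOG + Linear-SVM detectors, one per pair orientation (opposite
# directions share a detector, e.g. front/back or left/right views).
DETECTORS = {}
for k in range(4):
    hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)
    coeffs = np.load(f"vehicle_svm_orient{k}.npy")  # hypothetical file
    hog.setSVMDetector(coeffs)
    DETECTORS[k] = hog

def orientation_bin(road_angle_deg):
    """Quantise road direction into 4 pair-orientation bins (0..3).
    Opposite directions share a detector, hence modulo 180 degrees."""
    return int(((road_angle_deg % 180) + 22.5) // 45) % 4

def detect_vehicles(frame, road_mask, road_angle_deg):
    # Restrict detection to the extracted road area.
    roi = cv2.bitwise_and(frame, frame, mask=road_mask)
    hog = DETECTORS[orientation_bin(road_angle_deg)]
    boxes, weights = hog.detectMultiScale(roi, winStride=(8, 8))
    return boxes, weights
```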


2020 ◽  
Vol 10 (11) ◽  
pp. 3986
Author(s):  
Tuan-Anh Pham ◽  
Myungsik Yoo

In recent years, vision-based vehicle detection has received considerable attention in the literature. Depending on the ambient illuminance, vehicle detection methods are classified as daytime and nighttime detection methods. In this paper, we propose a nighttime vehicle detection and tracking method with occlusion handling based on vehicle lights. First, bright blobs that may be vehicle lights are segmented in the captured image. Then, a machine learning-based method is proposed to classify whether the bright blobs are headlights, taillights, or other illuminant objects. Subsequently, the detected vehicle lights are tracked to further facilitate the determination of the vehicle position. As one vehicle is indicated by one or two light pairs, a light pairing process using spatiotemporal features is applied to pair vehicle lights. Finally, vehicle tracking with occlusion handling is applied to refine incorrect detections under various traffic situations. Experiments on two-lane and four-lane urban roads are conducted, and a quantitative evaluation of the results shows the effectiveness of the proposed method.
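A minimal sketch of the first stage only, not the authors' pipeline: segmenting candidate vehicle-light blobs by intensity thresholding and extracting simple per-blob features that a classifier (such as an SVM) could use to separate headlights, taillights, and other illuminants. The threshold, area limit, and feature set are illustrative assumptions.

```python
import cv2
import numpy as np

def candidate_light_blobs(frame_bgr, intensity_thresh=220, min_area=20):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    v = hsv[:, :, 2]  # brightness channel
    _, mask = cv2.threshold(v, intensity_thresh, 255, cv2.THRESH_BINARY)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    blobs = []
    for i in range(1, n):  # label 0 is the background
        x, y, w, h, area = stats[i]
        if area < min_area:
            continue
        patch = frame_bgr[y:y + h, x:x + w]
        mean_b, mean_g, mean_r = cv2.mean(patch)[:3]
        blobs.append({
            "bbox": (x, y, w, h),
            "centroid": tuple(centroids[i]),
            "area": int(area),
            "aspect": w / max(h, 1),
            "redness": mean_r / (mean_b + 1e-6),  # taillights tend to be red
        })
    return blobs  # feature dicts for the headlight/taillight classifier
```

The spatiotemporal pairing stage would then group blobs that stay horizontally aligned and move coherently across frames, which is what allows one or two light pairs to be resolved into a single vehicle.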


2019 ◽  
Vol 11 (18) ◽  
pp. 2155 ◽  
Author(s):  
Jie Wang ◽  
Sandra Simeonova ◽  
Mozhdeh Shahbazi

Along with the advancement of light-weight sensing and processing technologies, unmanned aerial vehicles (UAVs) have recently become popular platforms for intelligent traffic monitoring and control. UAV-mounted cameras can capture traffic-flow videos from various perspectives, providing a comprehensive insight into road conditions. To analyze the traffic flow from remotely captured videos, a reliable and accurate vehicle detection-and-tracking approach is required. In this paper, we propose a deep-learning framework for vehicle detection and tracking from UAV videos for monitoring traffic flow in complex road structures. This approach is designed to be invariant to significant orientation and scale variations in the videos. The detection procedure is performed by fine-tuning a state-of-the-art object detector, You Only Look Once (YOLOv3), using several custom-labeled traffic datasets. Vehicle tracking is conducted following a tracking-by-detection paradigm, where deep appearance features are used for vehicle re-identification, and Kalman filtering is used for motion estimation. The proposed methodology is tested on a variety of real videos collected by UAVs under various conditions, e.g., in the late afternoon with long vehicle shadows, at dawn with vehicle lights on, over roundabouts and interchange roads where vehicle directions change considerably, and from various viewpoints where the vehicles' appearance undergoes substantial perspective distortions. The proposed tracking-by-detection approach performs efficiently at 11 frames per second on color videos of 2720p resolution. Experiments demonstrated that high detection accuracy could be achieved, with an average F1-score of 92.1%. In addition, the tracking technique performs accurately, with an average multiple-object tracking accuracy (MOTA) of 81.3%. The proposed approach also addresses the shortcomings of the state-of-the-art in multi-object tracking regarding frequent identity switching, resulting in only one identity switch for every 305 tracked vehicles.
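A minimal sketch of the association step in such a tracking-by-detection scheme, under the assumptions the abstract states: deep appearance embeddings for re-identification and Kalman-predicted boxes for motion. The cost weighting and gating threshold below are illustrative, not the paper's values.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) form."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(track_boxes, track_feats, det_boxes, det_feats,
              w_app=0.7, max_cost=0.8):
    """Match Kalman-predicted track boxes to detections with a blend of
    appearance (cosine distance) and motion (1 - IoU) costs."""
    cost = np.zeros((len(track_boxes), len(det_boxes)))
    for i, (tb, tf) in enumerate(zip(track_boxes, track_feats)):
        for j, (db, df) in enumerate(zip(det_boxes, det_feats)):
            app = 1.0 - np.dot(tf, df) / (
                np.linalg.norm(tf) * np.linalg.norm(df) + 1e-9)
            cost[i, j] = w_app * app + (1 - w_app) * (1.0 - iou(tb, db))
    rows, cols = linear_sum_assignment(cost)
    # Reject weak matches: gating like this is what keeps the identity
    # switch rate low when vehicles cross or occlude each other.
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < max_cost]
```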


2020 ◽  
Vol 21 (2) ◽  
pp. 125-133
Author(s):  
De Rosal Ignatius Moses Setiadi ◽  
Rizki Ramadhan Fratama ◽  
Nurul Diyah Ayu Partiningsih

This research proposes a background subtraction method with a truncate threshold to improve the accuracy of vehicle detection and tracking in real-time video streams. In previous research, vehicle detection accuracy still needed to be improved. Several components of a vehicle detection method strongly affect its results; one of them is the thresholding technique, since different thresholding methods change how the background and foreground are separated. Test results show that the proposed method improves accuracy by more than 20% compared with the previous method, confirming that the thresholding method has a considerable influence on the final vehicle detection result. The average accuracy across three times of day, i.e. morning, daytime, and afternoon, reaches 96.01%. These results indicate that the vehicle counting accuracy is very satisfying; moreover, the method has also been implemented in a real setting and runs smoothly.
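A minimal sketch of background subtraction with a truncate threshold, as the abstract describes; the threshold levels and the morphological clean-up are illustrative assumptions, not the paper's tuned parameters.

```python
import cv2

def foreground_mask(frame_bgr, background_gray,
                    trunc_level=200, binary_level=30):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, background_gray)
    # Truncate: intensities above trunc_level are clipped to trunc_level
    # rather than discarded, which tames glare and strong shadows.
    _, truncated = cv2.threshold(diff, trunc_level, 255, cv2.THRESH_TRUNC)
    # Final binarisation for blob extraction and vehicle counting.
    _, mask = cv2.threshold(truncated, binary_level, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(
        mask, cv2.MORPH_OPEN,
        cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)))
    return mask
```

The design choice is in the two-step thresholding: truncating before binarising keeps overexposed regions from dominating the difference image, which is where a plain binary threshold tends to lose or merge vehicle blobs.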

