Real-time vehicle detection using histograms of oriented gradients and AdaBoost classification

Optik ◽  
2016 ◽  
Vol 127 (19) ◽  
pp. 7941-7951 ◽  
Author(s):  
Gang Yan ◽  
Ming Yu ◽  
Yang Yu ◽  
Longfei Fan

Symmetry ◽  
2019 ◽  
Vol 11 (10) ◽  
pp. 1205
Author(s):  
Jong Bae Kim

This paper proposes a method for detecting a vehicle driving ahead and estimating its distance using a single black-box camera installed in the host vehicle. To apply the method to autonomous vehicles, throughput had to be reduced and processing sped up. To this end, the proposed method decomposes the input image into multiple-resolution images for real-time processing and then extracts aggregated channel features (ACFs); the idea is to extract only the most important features, symmetrically, from images at different resolutions. An object-detection step and a distance-estimation step based on a bird's-eye view obtained through inverse perspective mapping (IPM) were applied. In the proposed method, the ACFs, extracted from the LUV color channels, edge gradient, and gradient orientation (histograms of oriented gradients) of the input image, were used to train an AdaBoost-based vehicle detector. Subsequently, by applying IPM to project the 2D input image into three dimensions, the distance between the detected vehicle and the host vehicle was estimated. The method was applied in a real-world road environment and produced accurate vehicle detections and distance estimates in real time, showing that it is applicable to autonomous vehicles.
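The distance-estimation step rests on projecting detected image points to the road plane. As an illustration only, here is a minimal flat-road pinhole sketch (the function name and camera parameters are ours, not the paper's); the paper's full IPM maps the whole image via a homography rather than this single-row formula, which assumes a level camera and flat ground.

```python
def ground_distance(y_pixel, cam_height, focal_px, cy):
    """Estimate ground-plane distance to a point whose image row is y_pixel.

    Flat-road pinhole model: a ground point at distance d in front of a
    camera mounted at height h (zero pitch) projects to image row
    y = cy + f*h/d, so inverting gives d = f*h / (y - cy).
    All lengths in meters, focal length and rows in pixels.
    """
    dy = y_pixel - cy
    if dy <= 0:
        raise ValueError("point lies at or above the horizon")
    return focal_px * cam_height / dy

# Hypothetical setup: camera 1.2 m above the road, 800 px focal length,
# principal row 360; a vehicle's bottom edge detected at row 460.
d = ground_distance(460, cam_height=1.2, focal_px=800.0, cy=360.0)
# d = 800 * 1.2 / 100 = 9.6 m
```

In practice the same geometry is expressed as a 3x3 homography between the image and the ground plane, which also handles camera pitch and yaw; the one-row formula above is the zero-pitch special case.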


Author(s):  
Andres Bell ◽  
Tomas Mantecon ◽  
Cesar Diaz ◽  
Carlos R. del-Blanco ◽  
Fernando Jaureguizar ◽  
...  

2013 ◽  
Vol 62 (6) ◽  
pp. 2453-2468 ◽  
Author(s):  
Vinh Dinh Nguyen ◽  
Thuy Tuong Nguyen ◽  
Dung Duc Nguyen ◽  
Sang Jun Lee ◽  
Jae Wook Jeon

2021 ◽  
Vol 13 (3) ◽  
pp. 809-820
Author(s):  
V. Sowmya ◽  
R. Radha

Vehicle detection and recognition demand advanced computational intelligence and resources in a real-time traffic-surveillance system for effective traffic management under all possible contingencies. One focus area of deep intelligent systems is vehicle detection and recognition techniques for robust management of heavy-vehicle traffic. Sophisticated mechanisms include the Support Vector Machine (SVM), Convolutional Neural Networks (CNN), Region-based Convolutional Neural Networks (R-CNN), and the You Only Look Once (YOLO) model. Accordingly, it is pivotal to choose a precise algorithm for vehicle detection and recognition that also meets real-time constraints. This study compares deep learning algorithms, namely Faster R-CNN, YOLOv2, YOLOv3, and YOLOv4, across diverse aspects of their features. Two classes of heavy transport vehicle, buses and trucks, constitute the detection and recognition targets in this work. Data augmentation and transfer learning are applied when building, training, and testing the models, to avoid over-fitting and to improve speed and accuracy. Extensive empirical evaluation is conducted on two standard datasets, COCO and PASCAL VOC 2007, and comparative results and analyses are presented under real-time conditions.
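Detector comparisons on COCO and PASCAL VOC score each prediction by its intersection-over-union (IoU) overlap with a ground-truth box (PASCAL VOC counts a detection as correct at IoU ≥ 0.5). A minimal sketch of that metric, with boxes given as hypothetical (x1, y1, x2, y2) tuples:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Overlap rectangle: max of the top-left corners, min of the bottom-right.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two 2x2 boxes overlapping in a 1x1 region: IoU = 1 / (4 + 4 - 1) = 1/7.
score = iou((0, 0, 2, 2), (1, 1, 3, 3))
matched = score >= 0.5  # False at the VOC threshold
```

Average precision (and COCO's mAP, averaged over IoU thresholds 0.5 to 0.95) is then computed from the precision-recall curve of detections matched this way.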

