vehicle recognition
Recently Published Documents


TOTAL DOCUMENTS

266
(FIVE YEARS 81)

H-INDEX

15
(FIVE YEARS 2)

2021 ◽  
Vol 6 (2) ◽  
pp. 105-111
Author(s):  
Yevhen Fastiuk ◽  
◽  
Ruslan Bachynskyy ◽  
Nataliia Huzynets

The number of vehicles on the road is increasing day by day. As pedestrians walking a dog or hurrying to work in the morning, we have all encountered unsafe, fast-moving vehicles operated by inattentive drivers that nearly mow us down. Many of us live in apartment complexes or residential neighborhoods where careless drivers disregard safety and speed by far too fast. Planning, monitoring, and controlling these vehicles is becoming a major challenge. In this article, we propose a solution to this problem based on video surveillance, using video data from traffic cameras. With computer vision and deep learning technology, violations of traffic rules can be recognized automatically. This article describes modern CV and DL methods for recognizing vehicles on the road and detecting their traffic violations; the methods can be implemented using OpenCV in Python. Our proposed solution can recognize vehicles, track their speed, and count objects precisely.
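The speed-tracking step described above can be sketched in plain Python. This is a minimal illustration, not the authors' implementation: it assumes a calibrated metres-per-pixel scale, a known frame rate, and centroid positions already produced by a tracker (all names and values are illustrative).

```python
# Hypothetical sketch: estimating vehicle speed from two tracked centroid
# observations, given a calibrated metres-per-pixel scale and the camera
# frame rate. In practice the centroids would come from an OpenCV tracker.

def estimate_speed_kmh(p1, p2, meters_per_pixel, fps, frames_elapsed):
    """Speed between two centroid observations (x, y) in pixels, in km/h."""
    dx = p2[0] - p1[0]
    dy = p2[1] - p1[1]
    pixel_dist = (dx * dx + dy * dy) ** 0.5
    meters = pixel_dist * meters_per_pixel
    seconds = frames_elapsed / fps
    return meters / seconds * 3.6  # m/s -> km/h

# A centroid that moves 150 px in 30 frames at 30 fps, with 0.1 m/px,
# covers 15 m in 1 s, i.e. 54 km/h.
speed = estimate_speed_kmh((100, 200), (250, 200), 0.1, 30, 30)
```

The calibration constant is the weak point in practice; real systems usually derive it from a homography between the road plane and the image rather than a single global scale.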


Author(s):  
Jingjing Zhang ◽  
Jingsheng Lei ◽  
Shengying Yang ◽  
Xinqi Yang

AI ◽  
2021 ◽  
Vol 2 (4) ◽  
pp. 684-704
Author(s):  
Karen Panetta ◽  
Landry Kezebou ◽  
Victor Oludare ◽  
James Intriligator ◽  
Sos Agaian

The concept of searching for and localizing vehicles in live traffic videos based on descriptive textual input has yet to be explored in the scholarly literature. Endowing Intelligent Transportation Systems (ITS) with such a capability could help solve crimes on roadways. One major impediment to the advancement of fine-grain vehicle recognition models is the lack of video testbench datasets with annotated ground-truth data. Additionally, to the best of our knowledge, no metrics currently exist for evaluating the robustness and performance efficiency of a vehicle recognition model on live videos, and even less so for vehicle search and localization models. In this paper, we address these challenges by proposing V-Localize, a novel artificial intelligence framework for vehicle search and continuous localization in live traffic videos based on input textual descriptions. An efficient hashgraph algorithm is introduced to compute valid target information from the textual input. This work further introduces two novel datasets to advance AI research in these challenging areas: (a) the most diverse and large-scale Vehicle Color Recognition (VCoR) dataset, with 15 color classes (twice as many as in the largest existing such dataset), to facilitate finer-grain recognition with color information; and (b) the Vehicle Recognition in Video (VRiV) dataset, a first-of-its-kind video testbench dataset for evaluating the performance of vehicle recognition models on live videos rather than still images. The VRiV dataset will open new avenues for AI researchers to investigate innovative approaches that were previously intractable due to the lack of annotated traffic vehicle recognition video testbench datasets. Finally, to address the gap in the field, five novel metrics are introduced for adequately assessing the performance of vehicle recognition models on live videos. Ultimately, the proposed metrics could also prove intuitively effective for quantitative model evaluation in other video recognition applications. One major advantage of the proposed vehicle search and continuous localization framework is that it could be integrated into ITS software solutions to aid law enforcement, especially in critical cases such as Amber Alerts or hit-and-run incidents.
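The abstract does not detail the hashgraph algorithm, but the idea of computing valid target information from a textual description can be illustrated with a much simpler stand-in: hashed vocabulary lookups that map each token of a query to a vehicle attribute. The vocabularies and attribute names below are invented for illustration and are not from the paper.

```python
# Simplified stand-in for the described text-to-target step (NOT the
# authors' hashgraph algorithm): each token of a free-text description is
# matched against hashed attribute vocabularies to build a search target.

COLORS = {"red", "blue", "white", "black", "silver", "beige"}
MAKES = {"toyota", "honda", "ford", "bmw"}
TYPES = {"sedan", "suv", "truck", "van", "coupe"}

def parse_description(text):
    """Extract color/make/type attributes from a textual vehicle query."""
    target = {"color": None, "make": None, "type": None}
    for token in text.lower().split():
        if token in COLORS:          # set membership is a hash lookup
            target["color"] = token
        elif token in MAKES:
            target["make"] = token
        elif token in TYPES:
            target["type"] = token
    return target

query = parse_description("white Toyota SUV heading north")
```

A real system would also need to handle multi-word values ("light blue"), synonyms, and misspellings, which is presumably where a graph structure over hashed tokens earns its keep.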


2021 ◽  
Author(s):  
Ming Ji ◽  
Chuanxia Sun ◽  
Yinglei Hu

Abstract In order to solve the increasingly serious problem of traffic congestion, intelligent transportation systems are widely used in dynamic traffic management, effectively alleviating congestion and improving road traffic efficiency. With the continuous development of traffic data acquisition technology, real-time traffic data across the road network can now be obtained promptly, and this large volume of traffic information provides a data guarantee for analyzing and predicting the traffic state of the road network. Based on a deep learning framework, this paper studies a vehicle recognition algorithm and a road environment discrimination algorithm, which greatly improve the accuracy of highway vehicle recognition. Highway video surveillance images are collected in different environments to establish a complete original database; a deep learning model for environment discrimination is built, and the classification model is trained to realize real-time environment recognition on highways. The recognized environment serves as a basic condition for vehicle recognition and traffic event discrimination and provides basic information for selecting the vehicle detection model. To improve the accuracy of road vehicle detection, vehicle targets are labeled and samples are preprocessed for each environment. On this basis, the vehicle recognition algorithm is studied, and a vehicle detection algorithm based on weather environment recognition and the Faster R-CNN model is proposed. Finally, the performance of the proposed vehicle detection algorithm is verified by comparing detection accuracy across per-environment and overall dataset models, across different network structures and deep learning methods, and against other methods.
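The two-stage pipeline described above (environment recognition first, then environment-specific detection) can be sketched as a simple dispatch step. The model file names below are hypothetical placeholders, not artifacts from the paper; the fallback mirrors the paper's "overall dataset model" baseline.

```python
# Illustrative sketch of the described pipeline: an environment classifier
# labels the scene, then a Faster R-CNN detector trained for that
# environment is selected. Model names here are invented placeholders.

DETECTORS = {
    "sunny": "frcnn_sunny.pth",
    "rain":  "frcnn_rain.pth",
    "fog":   "frcnn_fog.pth",
    "night": "frcnn_night.pth",
}

def select_detector(env_label, default="frcnn_all.pth"):
    """Pick the per-environment model; fall back to the model trained
    on the overall dataset when the environment is unrecognized."""
    return DETECTORS.get(env_label, default)
```

The comparison the paper reports (per-environment models vs. one overall model) is exactly the question of whether this dispatch is worth the extra training and maintenance cost.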


Author(s):  
Qing Chang ◽  
Jiaxiang Ren ◽  
Huaguo Zhou ◽  
Yang Zhou ◽  
Yukun Song

Currently, transportation agencies have implemented different wrong-way driving (WWD) detection systems based on loop detectors, radar detectors, or thermal cameras. Such systems are often deployed at fixed locations in urban areas or on toll roads. The majority of rural interchange terminals do not have real-time detection systems for WWD incidents. Portable traffic cameras are used to temporarily monitor WWD activities at rural interchange terminals; however, manually reviewing those videos to identify WWD incidents has always been a time-consuming task. The objective of this study was to develop an unsupervised trajectory-based method to automatically detect WWD incidents from regular traffic videos (not limited by mounting height and angle). The method comprises three primary steps: vehicle recognition and trajectory generation, trajectory clustering, and outlier detection. This study also developed a new subtrajectory-based metric that makes the algorithm more adaptable for vehicle trajectory classification in different road scenarios. Finally, the algorithm was tested by analyzing 357 h of traffic videos from 14 partial cloverleaf interchange terminals in seven U.S. states. The results suggested that the method could identify all the WWD incidents in the testing videos with an average precision of 80%. The method significantly reduced the person-hours needed to review the traffic videos. Furthermore, the new method could also be applied to detecting and extracting other kinds of abnormal traffic activity, such as illegal U-turns.
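The outlier-detection step can be illustrated with a toy version of the idea: a trajectory whose mean heading opposes the dominant direction of its cluster is flagged as a possible wrong-way incident. This is a simplified sketch in the spirit of the described method, not the study's subtrajectory-based metric; the 120-degree threshold is an assumed value.

```python
import math

# Toy wrong-way test: compare a trajectory's mean heading against the
# cluster's dominant direction and flag near-opposite headings.
# Threshold and geometry are illustrative, not from the study.

def mean_heading(traj):
    """Mean direction (radians) of successive displacement vectors."""
    sx = sum(b[0] - a[0] for a, b in zip(traj, traj[1:]))
    sy = sum(b[1] - a[1] for a, b in zip(traj, traj[1:]))
    return math.atan2(sy, sx)

def is_wrong_way(traj, cluster_heading, threshold_deg=120.0):
    """Flag a trajectory whose heading opposes the cluster direction."""
    diff = abs(mean_heading(traj) - cluster_heading)
    diff = min(diff, 2 * math.pi - diff)   # wrap angle to [0, pi]
    return math.degrees(diff) > threshold_deg

normal = [(0, 0), (1, 0), (2, 0)]   # heading ~0 rad, with the flow
wrong = [(2, 0), (1, 0), (0, 0)]    # heading ~pi rad, against the flow
```

The study's subtrajectory-based metric presumably refines this by scoring segments of a trajectory rather than one global heading, which matters on curved ramps where the "dominant direction" changes along the path.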


Author(s):  
Bixin Cai ◽  
Qidong Wang ◽  
Wuwei Chen ◽  
Linfeng Zhao ◽  
Huiran Wang

Vehicle detection plays a crucial role in the decision-making, planning, and control of intelligent vehicles. It is one of the main tasks of environmental perception and an essential part of ensuring driving safety. In order to capture unique vehicle features and improve vehicle recognition efficiency, this paper fuses texture features from the image with edge features from LIDAR to detect frontal vehicle targets. First, we use wavelet analysis and geometric analysis to segment the ground and determine the region of interest for vehicle detection. Then, the detected vehicle's point cloud is projected into the image to locate the ROI. Next, extraction of the vehicle's edge features is guided by the maximum gradient direction of the vehicle's rear contour. Furthermore, Haar texture features are integrated to identify the vehicle, and a filter is designed according to the spatial distribution of the point cloud to eliminate false targets. Finally, real-vehicle comparison tests verify that the proposed fusion method can effectively improve vehicle detection with little additional time cost.
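The point-cloud-to-image step mentioned above is, at its core, a pinhole projection. The sketch below assumes the LIDAR points have already been transformed into the camera frame and uses invented intrinsic values; the ROI is simply the bounding box of the projected points.

```python
# Hedged sketch of projecting LIDAR points (already in the camera frame:
# x right, y down, z forward) into the image with a pinhole model, then
# taking the bounding box of the projections as the ROI. The intrinsics
# (fx, fy, cx, cy) below are illustrative values, not calibrated ones.

def project_points(points, fx, fy, cx, cy):
    """Project 3D camera-frame points to pixel coordinates (u, v)."""
    return [(fx * x / z + cx, fy * y / z + cy)
            for x, y, z in points if z > 0]   # drop points behind camera

def roi_from_pixels(pixels):
    """Axis-aligned bounding box (u_min, v_min, u_max, v_max)."""
    us = [u for u, v in pixels]
    vs = [v for u, v in pixels]
    return (min(us), min(vs), max(us), max(vs))

pts = [(-1.0, 0.0, 10.0), (1.0, 0.0, 10.0), (0.0, -0.5, 10.0)]
roi = roi_from_pixels(project_points(pts, fx=800, fy=800, cx=640, cy=360))
```

A production pipeline would also apply the extrinsic LIDAR-to-camera transform first and clip the ROI to the image bounds; both are omitted here for brevity.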


2021 ◽  
Vol 2007 (1) ◽  
pp. 012049
Author(s):  
Amrita Rai ◽  
Reshu Agrawal ◽  
Shylaja V. Karatangi ◽  
Seema Nayak

Author(s):  
Chaoqing Wang ◽  
Junlong Cheng ◽  
Yuefei Wang ◽  
Yurong Qian

A vehicle make and model recognition (VMMR) system is a common requirement in the field of intelligent transportation systems (ITS). However, it is a challenging task because of the subtle differences between vehicle categories. In this paper, we propose a hierarchical scheme for VMMR. Specifically, the scheme consists of (1) a feature extraction framework called weighted mask hierarchical bilinear pooling (WMHBP), based on hierarchical bilinear pooling (HBP), which weakens the influence of invalid background regions by generating a weighted mask while extracting features from discriminative regions, forming a more robust feature descriptor; (2) a hierarchical loss function that can learn the appearance differences between vehicle brands and enhance vehicle recognition accuracy; and (3) a collection of vehicle images from the Internet, classified with hierarchical labels, to address insufficient data and low image resolution and to improve the model's generalization ability and robustness. We evaluate the proposed framework for accuracy and real-time performance; the experimental results show a recognition accuracy of 95.1% and 107 FPS (frames per second) on the Stanford Cars public dataset, which demonstrates the superiority of the method and its applicability to ITS.
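The core operation behind WMHBP, a mask-weighted bilinear pooling of two feature maps, can be shown on toy data. This is a deliberately tiny illustration of weighted bilinear pooling in general, not the authors' WMHBP architecture; the shapes and weights are invented.

```python
# Toy weighted bilinear pooling: two feature maps (channels x locations)
# are combined by a mask-weighted sum of per-location outer products.
# The mask down-weights background locations, as in the WMHBP idea.
# Shapes and values are illustrative only.

def weighted_bilinear_pool(feat_a, feat_b, mask):
    """feat_a: C x N, feat_b: D x N, mask: N weights -> C x D descriptor."""
    c, d, n = len(feat_a), len(feat_b), len(mask)
    desc = [[0.0] * d for _ in range(c)]
    for k in range(n):                # over spatial locations
        w = mask[k]                   # per-location mask weight
        for i in range(c):
            for j in range(d):
                desc[i][j] += w * feat_a[i][k] * feat_b[j][k]
    return desc

fa = [[1.0, 2.0], [0.0, 1.0]]   # 2 channels x 2 locations
fb = [[1.0, 1.0]]               # 1 channel  x 2 locations
mask = [1.0, 0.5]               # second location half-weighted
desc = weighted_bilinear_pool(fa, fb, mask)
```

In the actual framework the pooling is applied hierarchically across layer pairs of a CNN and the mask is learned, but the weighted outer-product accumulation above is the building block.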

