Identifying Wrong-Way Driving Incidents from Regular Traffic Videos Using Unsupervised Trajectory-based Method

Author(s):  
Qing Chang ◽  
Jiaxiang Ren ◽  
Huaguo Zhou ◽  
Yang Zhou ◽  
Yukun Song

Currently, transportation agencies have implemented different wrong-way driving (WWD) detection systems based on loop detectors, radar detectors, or thermal cameras. Such systems are often deployed at fixed locations in urban areas or on toll roads. The majority of rural interchange terminals do not have real-time detection systems for WWD incidents. Portable traffic cameras are used to temporarily monitor WWD activities at rural interchange terminals, but manually reviewing those videos to identify WWD incidents has always been a time-consuming task. The objective of this study was to develop an unsupervised trajectory-based method to automatically detect WWD incidents from regular traffic videos (not limited by mounting height and angle). The method consists of three primary steps: vehicle recognition and trajectory generation, trajectory clustering, and outlier detection. This study also developed a new subtrajectory-based metric that makes the algorithm more adaptable for vehicle trajectory classification in different road scenarios. Finally, the algorithm was tested on 357 h of traffic videos from 14 partial cloverleaf interchange terminals in seven U.S. states. The results suggested that the method could identify all the WWD incidents in the testing videos with an average precision of 80%. The method significantly reduced the person-hours needed to review the traffic videos. Furthermore, the new method could also be applied to detecting and extracting other kinds of abnormal traffic activities, such as illegal U-turns.
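
As a rough illustration of the clustering and outlier-detection steps, the minimal sketch below groups vehicle trajectories by their net heading and flags any trajectory whose heading does not fit the dominant flow. The heading feature, DBSCAN parameters, and synthetic example are assumptions for illustration only, not the authors' subtrajectory-based metric or implementation.

```python
# Hypothetical sketch: cluster vehicle trajectories by dominant heading and flag
# outliers whose heading opposes the main traffic flow (possible wrong-way driving).
# Feature choice and thresholds are illustrative, not the paper's implementation.
import numpy as np
from sklearn.cluster import DBSCAN

def trajectory_headings(trajectories):
    """Return the net heading (radians) of each trajectory, where each trajectory
    is an (N_i x 2) array of (x, y) image coordinates for one vehicle."""
    headings = []
    for traj in trajectories:
        dx, dy = traj[-1] - traj[0]          # net displacement, start -> end
        headings.append(np.arctan2(dy, dx))
    return np.array(headings)

def flag_wrong_way(trajectories, eps=0.35, min_samples=5):
    """Cluster headings on the unit circle; trajectories that DBSCAN labels as
    noise (label -1) are treated as candidate wrong-way / abnormal movements."""
    headings = trajectory_headings(trajectories)
    # Embed angles on the unit circle so headings near 0 and 2*pi cluster together.
    features = np.column_stack([np.cos(headings), np.sin(headings)])
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)
    return [i for i, lbl in enumerate(labels) if lbl == -1]

# Toy example: 30 vehicles travelling roughly left-to-right, one going the other way.
rng = np.random.default_rng(0)
normal = [np.cumsum(rng.normal([5, 0], 0.5, (20, 2)), axis=0) for _ in range(30)]
wrong_way = [np.cumsum(rng.normal([-5, 0], 0.5, (20, 2)), axis=0)]
print(flag_wrong_way(normal + wrong_way))   # -> [30]
```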

2017 ◽  
Vol 22 (5) ◽  
pp. 1433-1444 ◽  
Author(s):  
Huansheng Song ◽  
Xuan Wang ◽  
Cui Hua ◽  
Weixing Wang ◽  
Qi Guan ◽  
...  

2009 ◽  
Vol 42 (15) ◽  
pp. 383-390
Author(s):  
W.K. Mak ◽  
F. Viti ◽  
S.P. Hoogendoorn ◽  
A. Hegyi

Author(s):  
Michael L. Pack ◽  
Brian L. Smith ◽  
William T. Scherer

Transportation agencies have invested significantly in extensive closed-circuit television (CCTV) systems to monitor freeways in urban areas. While these systems have proven to be very effective in supporting incident management, they do not support the collection of quantitative measures of traffic conditions. Instead, they simply provide images that must be interpreted by trained operators. While there are several video image vehicle detection systems (VIVDS) on the market that have the capability to automatically derive traffic measures from video imagery, these systems require the installation of fixed-position cameras. Thus, they have not been integrated with the existing moveable CCTV cameras. VIVDS camera positioning and calibration challenges were addressed and a prototype machine-vision system was developed that successfully integrated existing moveable CCTV cameras with VIVDS. Results of testing the prototype are presented, indicating that when the camera’s initial zoom level was kept between ×1 and ×1.5, the camera consistently could be returned to its original position with a repositioning accuracy of less than 0.03 to 0.1, regardless of the camera’s displaced pan, tilt, or zoom settings at the time of repositioning. This level of positional accuracy, when combined with a VIVDS, resulted in vehicle count errors of less than 1%.
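
A minimal sketch of the repositioning check implied above: record a camera's home pan/tilt/zoom preset before operators move it, then verify that the reported position has returned within tolerance before resuming automated counting. The PTZState structure, field units, and 0.1-degree tolerance are assumptions for illustration; the paper does not publish its prototype's interface.

```python
# Hypothetical repositioning check: compare a camera's stored "home" preset with
# the position reported after it is commanded back. Interface and tolerance are
# assumed, not taken from the paper's prototype.
from dataclasses import dataclass

@dataclass
class PTZState:
    pan: float   # degrees
    tilt: float  # degrees
    zoom: float  # optical zoom factor (1.0 = widest)

def repositioning_error(home: PTZState, current: PTZState) -> float:
    """Largest absolute pan/tilt deviation (degrees) after returning to the preset."""
    return max(abs(home.pan - current.pan), abs(home.tilt - current.tilt))

def safe_for_counting(home: PTZState, current: PTZState,
                      max_error_deg: float = 0.1) -> bool:
    """Only resume automated vehicle counting if the camera has come back close
    enough to its calibrated detection-zone position."""
    return repositioning_error(home, current) <= max_error_deg

home = PTZState(pan=112.40, tilt=-8.25, zoom=1.5)
returned = PTZState(pan=112.46, tilt=-8.22, zoom=1.5)
print(safe_for_counting(home, returned))  # True: within the assumed 0.1-degree tolerance
```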


CICTP 2020 ◽  
2020 ◽  
Author(s):  
Changlei Wen ◽  
Jian Wang ◽  
Yakun Zhang ◽  
Ting Xu ◽  
Xiang Zhang ◽  
...  

Sensors ◽  
2020 ◽  
Vol 20 (8) ◽  
pp. 2202 ◽  
Author(s):  
MinJi Park ◽  
Byoung Chul Ko

While the number of casualties and amount of property damage caused by fires in urban areas are increasing each year, studies on their automatic detection have not kept pace with the scale of such fire damage. Camera-based fire detection systems have numerous advantages over conventional sensor-based methods, but most research in this area has been limited to daytime use. However, night-time fire detection in urban areas is more difficult to achieve than daytime detection owing to the presence of ambient lighting such as headlights, neon signs, and streetlights. Therefore, in this study, we propose an algorithm that can quickly detect a fire at night in urban areas by reflecting its night-time characteristics. In the pre-processing stage, ELASTIC-YOLOv3 (an improvement over the existing YOLOv3) detects fire candidate areas quickly and accurately, regardless of the size of the fire. To reflect the dynamic characteristics of a night-time flame, N frames are accumulated to create a temporal fire-tube, and a histogram of the optical flow of the flame is extracted from the fire-tube and converted into a bag-of-features (BoF) histogram. The BoF histogram is then applied to a random forest classifier, which provides fast and accurate classification of the tabular features to verify a fire candidate. A performance comparison against several state-of-the-art fire detection methods shows that the proposed method improves night-time fire detection over deep neural network (DNN)-based methods and reduces processing time without any loss in accuracy.
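
An illustrative sketch of the verification stage described above: accumulate N frames of a detected candidate region into a fire-tube, summarize its motion with a histogram of optical-flow orientations, and classify the resulting feature vector with a random forest. The parameter values, function names, and the direct-histogram simplification of the BoF step are assumptions for clarity, not the paper's exact configuration.

```python
# Simplified sketch of the fire-tube -> flow histogram -> random forest pipeline.
# The direct orientation histogram below stands in for the paper's BoF encoding.
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier

N_FRAMES = 10          # frames accumulated into one fire-tube (assumed value)
N_ORIENT_BINS = 16     # orientation bins for the flow histogram (assumed value)

def flow_histogram(fire_tube):
    """fire_tube: list of N same-sized grayscale crops of one candidate region.
    Returns a magnitude-weighted, normalized histogram of optical-flow orientations."""
    hist = np.zeros(N_ORIENT_BINS)
    for prev, curr in zip(fire_tube[:-1], fire_tube[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        h, _ = np.histogram(ang, bins=N_ORIENT_BINS,
                            range=(0, 2 * np.pi), weights=mag)
        hist += h
    return hist / (hist.sum() + 1e-8)

def train_verifier(X, y):
    """Train the candidate verifier on labeled descriptors
    (X: stacked flow histograms, y: 0 = non-fire, 1 = fire)."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, y)
    return clf
```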


2017 ◽  
Vol 2645 (1) ◽  
pp. 195-202 ◽  
Author(s):  
Yishi Zhang ◽  
Zhijun Chen ◽  
Chaozhong Wu ◽  
Junfeng Jiang ◽  
Bin Ran

In recent years, the task of automatic vehicle trajectory analysis in video surveillance systems has gained increasing attention in the research community. Vehicle trajectory analysis can identify normal and abnormal vehicle motion patterns and is useful for traffic management. Although some methods for vehicle trajectory analysis have been developed, their application is still limited in practice. In this study, a novel adaptive vehicle trajectory classification method based on sparse reconstruction and mutual information analysis for video surveillance systems was proposed. The l0-norm minimization of sparse reconstruction in the method was relaxed to lp-norm minimization (0 < p < 1). In addition, to account for the nonlinear correlation between the test trajectory and the dictionary, the mutual information between the test trajectory and its reconstruction was taken into account. A hybrid orthogonal matching pursuit–Newton method (HON) was developed to efficiently find the sparse solutions for trajectory classification. Two real-world data sets (a stop sign data set and a straight data set) were used in the experiments to validate the performance and effectiveness of the proposed method. Experimental results show that the proposed method significantly improves trajectory classification accuracy compared with well-known classifiers, namely naive Bayes (NB), k-nearest neighbor, and support vector machine, as well as typical extant sparse reconstruction methods.
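
A simplified sketch of sparse-reconstruction classification for trajectories: each test trajectory (resampled to a fixed length and flattened) is reconstructed from a dictionary of training trajectories, and the class whose atoms give the smallest reconstruction residual wins. This baseline uses plain orthogonal matching pursuit in place of the paper's lp-norm relaxation, mutual-information term, and HON solver, so it illustrates only the general approach, not the proposed method.

```python
# Baseline sparse-representation classifier (SRC-style residual rule) for
# fixed-length, flattened trajectories. Stands in for, but does not reproduce,
# the paper's lp-norm / mutual-information formulation.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def classify_trajectory(test_vec, dictionary, labels, n_nonzero=10):
    """dictionary: (d x n) matrix whose columns are flattened training trajectories;
    labels: length-n class labels for the columns; test_vec: length-d test trajectory.
    Requires n_nonzero <= n."""
    labels = np.asarray(labels)
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(dictionary, test_vec)
    coef = omp.coef_
    best_class, best_residual = None, np.inf
    for cls in np.unique(labels):
        # Keep only coefficients belonging to this class and measure how well
        # they alone reconstruct the test trajectory.
        masked = np.where(labels == cls, coef, 0.0)
        residual = np.linalg.norm(test_vec - dictionary @ masked)
        if residual < best_residual:
            best_class, best_residual = cls, residual
    return best_class
```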

