Mining Rare and Frequent Events in Multi-camera Surveillance Video

Author(s):  
Valery A. Petrushin
2013
Vol 321-324
pp. 1041-1045
Author(s):  
Jian Rong Cao
Yang Xu
Cai Yun Liu

After background modeling and moving-object segmentation of the surveillance video, this paper first presents a non-interactive matting algorithm for moving objects based on GrabCut. The matted moving objects are then placed into a background image under a non-overlapping arrangement, so that a single frame contains several moving objects on one background. Finally, a series of such frames is assembled along the timeline to form a single-camera surveillance video synopsis. Experimental results show that the synopsis is concise and readable in its condensed form, and that browsing and retrieval efficiency is improved.
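As a rough illustration of the pipeline this abstract describes (a minimal sketch using OpenCV, not the authors' code; function names and parameter values are assumptions), the following Python fragment refines a coarse foreground mask with GrabCut without user interaction and then pastes the matted objects into one background frame, skipping placements that would overlap:

import cv2
import numpy as np

def matte_moving_object(frame, fg_mask):
    # Non-interactive GrabCut: seed the mask from a background-subtraction
    # result instead of a user-drawn rectangle (assumed simplification).
    grabcut_mask = np.where(fg_mask > 0, cv2.GC_PR_FGD, cv2.GC_PR_BGD).astype(np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(frame, grabcut_mask, None, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_MASK)
    # Keep definite and probable foreground as the final matte.
    return np.isin(grabcut_mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)

def compose_synopsis_frame(background, objects):
    # objects: list of (patch, mask) pairs, both full-frame sized.
    canvas = background.copy()
    occupied = np.zeros(background.shape[:2], np.uint8)
    for patch, mask in objects:
        if np.any(occupied & mask):  # enforce the non-overlapping arrangement
            continue
        canvas[mask > 0] = patch[mask > 0]
        occupied |= mask
    return canvas

Repeating compose_synopsis_frame over successive groups of matted objects would yield the series of synopsis frames along the timeline.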


2020
Author(s):  
Jacob Selvage
Carlos Humphries
Floyd Mcclanahan
Anthony Rhodarmer
Arthur Gosman

Multi-person tracking plays a critical role in the analysis of surveillance video. However, most existing work focuses on shorter-term (e.g. minute-long or hour-long) video sequences. Therefore, we propose a multi-person tracking algorithm for very long-term (e.g. month-long) multi-camera surveillance scenarios. Long-term tracking is challenging because 1) the apparel/appearance of the same person will vary greatly over multiple days and 2) a person will leave and re-enter the scene numerous times. To tackle these challenges, we leverage face recognition information, which is robust to apparel change, to automatically reinitialize our tracker over multiple days of recordings. Unfortunately, recognized faces are often unavailable. Therefore, our tracker propagates identity information to frames without recognized faces by uncovering the appearance and spatial manifold formed by person detections. We tested our algorithm on a 23-day, 15-camera data set (4,935 hours total), and we were able to localize a person 53.2% of the time with 69.8% precision. We further performed video summarization experiments based on our tracking output. Results on 116.25 hours of video showed that we were able to generate a reasonable visual diary (i.e. a summary of what a person did) for different people, thus potentially opening the door to automatic summarization of the vast amount of surveillance video generated every day.
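The identity-propagation step described above can be approximated very roughly as follows (a hypothetical nearest-neighbor sketch in Python, not the paper's manifold-based method; dictionary keys, weights, and thresholds are assumptions). Each detection without a recognized face takes the identity of the closest face-labeled detection under a distance that mixes appearance-embedding distance with spatial distance:

import numpy as np

def propagate_identities(detections, alpha=0.7, max_dist=0.5):
    # detections: list of dicts with 'embedding' (np.ndarray), 'position' (x, y)
    # normalized to [0, 1], and optionally 'identity' set by face recognition.
    labeled = [d for d in detections if d.get("identity") is not None]
    for det in detections:
        if det.get("identity") is not None:
            continue
        best_id, best_dist = None, max_dist
        for ref in labeled:
            d_app = np.linalg.norm(det["embedding"] - ref["embedding"])
            d_spa = np.linalg.norm(np.subtract(det["position"], ref["position"]))
            dist = alpha * d_app + (1 - alpha) * d_spa
            if dist < best_dist:
                best_id, best_dist = ref["identity"], dist
        det["identity"] = best_id  # stays None if nothing is close enough
    return detections

In the paper's setting the propagation additionally exploits temporal continuity and the manifold structure of the detections, which this simplified sketch ignores.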


2012
Vol 68 (1)
pp. 135-158
Author(s):  
Mukesh Saini
Pradeep K. Atrey
Sharad Mehrotra
Mohan Kankanhalli

Author(s):  
V. V. S. Murthy
CH. Aravind
K. Jayasri
K. Mounika
T. V. V. R. Akhil
