Moving Objects Representation for Object Based Surveillance Video Retrieval System

2014 ◽  
Vol 8 (2) ◽  
pp. 315-322
Author(s):  
Jianping Han ◽  
Tian Tan ◽  
Longfei Chen ◽  
Daxing Zhang
Author(s):  
Jacky S-C. Yuk ◽  
Kwan-Yee K. Wong ◽  
Ronald H-Y. Chung ◽  
K. P. Chow ◽  
Francis Y-L. Chin ◽  
...  

2019 ◽  
Vol 9 (10) ◽  
pp. 2003 ◽  
Author(s):  
Tung-Ming Pan ◽  
Kuo-Chin Fan ◽  
Yuan-Kai Wang

Intelligent analysis of surveillance video over networks requires high recognition accuracy, which in turn demands good-quality video that imposes a significant bandwidth requirement. Video quality degraded by high object dynamics during wireless transmission poses an even more critical obstacle to smart video surveillance. In this paper, an object-based source coding method is proposed to preserve constant quality of video streaming over wireless networks. The inverse relationship between video quality and object dynamics (i.e., decreasing video quality caused by large, fast-moving objects) is characterized statistically as a linear model. A regression algorithm based on robust M-estimator statistics is proposed to construct the linear model at different bitrates. The linear model is then applied to predict the bitrate increment required to restore video quality. A simulated wireless environment is set up to verify the proposed method under different wireless conditions, and experiments with real surveillance videos exhibiting a variety of object dynamics are conducted to evaluate its performance. Experimental results demonstrate significant improvement of the streamed videos in both visual and quantitative terms.
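The robust regression step described above can be sketched as a Huber M-estimator fitted by iteratively reweighted least squares. This is a minimal illustration of the general technique, not the paper's exact algorithm; the function name, threshold, and data are assumptions for the example.

```python
import numpy as np

def huber_irls(x, y, delta=1.345, iters=50):
    """Fit y ~ a*x + b with a Huber M-estimator via
    iteratively reweighted least squares (IRLS)."""
    X = np.column_stack([x, np.ones_like(x)])
    w = np.ones_like(y, dtype=float)
    beta = np.zeros(2)
    for _ in range(iters):
        # Weighted least-squares step with the current weights
        W = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(X * W[:, None], y * W, rcond=None)
        r = y - X @ beta
        # Robust scale estimate from the median absolute deviation (MAD)
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12
        u = r / s
        # Huber weights: 1 inside the threshold, down-weighted outside
        w = np.where(np.abs(u) <= delta, 1.0, delta / np.abs(u))
    return beta  # (slope, intercept)
```

Unlike ordinary least squares, the down-weighting keeps a handful of frames with extreme object dynamics from distorting the fitted quality/bitrate relationship.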


Author(s):  
Shefali Gandhi ◽  
Tushar V. Ratanpara

Video synopsis provides a compact representation of a long surveillance video while preserving its essential activities. Activity from the original video is condensed into a shorter period by simultaneously displaying multiple activities that originally occurred in different time segments. Since activities are displayed at times different from those in the original video, the process begins with extracting moving objects. A temporal median algorithm is used to model the background, and foreground objects are detected by background subtraction. Each moving object is represented as a space-time activity tube in the video. A genetic algorithm is then used for optimized temporal shifting of the activity tubes; the temporal arrangement that yields minimum collision while maintaining the chronological order of events is taken as the best solution. A time-lapse background video is generated next and used as the background for the synopsis video. Finally, the activity tubes are stitched onto the time-lapse background using Poisson image editing.
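The first two steps of the pipeline, temporal median background modeling and background subtraction, can be sketched as follows. This is a minimal grayscale illustration under assumed parameter values (the threshold of 25 is arbitrary), not the authors' implementation.

```python
import numpy as np

def temporal_median_background(frames):
    """Model the static background as the per-pixel temporal
    median over a stack of frames; moving objects that occupy a
    pixel in only a minority of frames are filtered out."""
    return np.median(np.stack(frames), axis=0)

def foreground_mask(frame, background, thresh=25):
    """Background subtraction: mark as foreground any pixel whose
    intensity differs from the background model by more than thresh."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > thresh
```

Per frame, the connected foreground regions produced by such a mask are what get tracked over time into the space-time activity tubes.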


2013 ◽  
Vol 321-324 ◽  
pp. 1041-1045
Author(s):  
Jian Rong Cao ◽  
Yang Xu ◽  
Cai Yun Liu

After background modeling and segmentation of moving objects in a surveillance video, this paper first presents a noninteractive matting algorithm for video moving objects based on GrabCut. The matted moving objects are then placed in a background image under a nonoverlapping arrangement, so that a single frame contains several moving objects against one background. Finally, a series of such frames is assembled along the timeline to form a single-camera surveillance video synopsis. Experimental results show that the resulting synopsis is concise and readable in its condensed form, improving the efficiency of browsing and retrieval.
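The nonoverlapping arrangement step can be illustrated with a simple greedy bounding-box placement. This is a hypothetical sketch: the paper works with GrabCut-matted object masks rather than plain rectangles, and the candidate-position strategy here is an assumption.

```python
def overlaps(a, b):
    """Do two axis-aligned boxes (x, y, w, h) intersect?"""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def place_objects(sizes, candidates):
    """Greedily place each object (w, h) at the first candidate
    position (x, y) that overlaps no already-placed box."""
    placed = []
    for w, h in sizes:
        for x, y in candidates:
            box = (x, y, w, h)
            if all(not overlaps(box, p) for p in placed):
                placed.append(box)
                break
    return placed
```

Each output frame of the synopsis then composites the matted objects at their assigned, mutually disjoint positions over the background image.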



