Dynamic background subtraction using texton co-occurrence matrix

Author(s):  
Deepak Kumar Panda ◽  
Sukadev Meher

Background subtraction is a key step in detecting moving objects in video within the field of computer vision. It works by subtracting a reference frame from every new frame of the video scene. A wide variety of background subtraction techniques is available in the literature for real-life applications such as crowd analysis, human activity tracking, traffic analysis, and many more. However, there have not been enough benchmark datasets to cover all the challenges that subtraction techniques face in object detection; these challenges include dynamic backgrounds, illumination changes, shadow appearance, occlusion, and object speed. From this perspective, we provide an exhaustive literature survey of background subtraction techniques for video surveillance applications that address these challenges in real situations. Additionally, we survey eight benchmark video datasets, namely Wallflower, BMC, PET, IBM, CAVIAR, CD.Net, SABS and RGB-D, along with their available ground truth. This study evaluates the performance of five background subtraction methods using performance parameters such as specificity, sensitivity, false negative rate (FNR), percentage of wrong classifications (PWC) and F-score, in order to identify an accurate and efficient method for detecting moving objects in low computational time.
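The reference-frame subtraction and the evaluation metrics mentioned above can be sketched in a few lines. This is a minimal illustration, not any of the five surveyed methods: it maintains a running-average background model (the `alpha` learning rate and `threshold` values are hypothetical defaults), and the metric formulas follow the standard pixel-wise definitions of sensitivity, specificity, FNR, PWC and F-score.

```python
import numpy as np

def subtract_background(frames, alpha=0.05, threshold=30):
    """Running-average background subtraction (a minimal sketch).

    frames: iterable of 2-D grayscale arrays.
    alpha: background learning rate (hypothetical default).
    threshold: absolute-difference cutoff for foreground pixels.
    Yields a boolean foreground mask per frame.
    """
    background = None
    for frame in frames:
        f = frame.astype(np.float32)
        if background is None:
            background = f.copy()          # first frame initializes the model
        mask = np.abs(f - background) > threshold
        # Update the model only at background pixels, so moving
        # objects are not absorbed into the reference frame.
        background[~mask] = (1 - alpha) * background[~mask] + alpha * f[~mask]
        yield mask

def evaluate(mask, gt):
    """Standard pixel-wise detection metrics against a ground-truth mask."""
    tp = np.sum(mask & gt)
    fp = np.sum(mask & ~gt)
    fn = np.sum(~mask & gt)
    tn = np.sum(~mask & ~gt)
    sensitivity = tp / (tp + fn)
    precision = tp / (tp + fp)
    return {
        "sensitivity": sensitivity,
        "specificity": tn / (tn + fp),
        "fnr": fn / (tp + fn),
        "pwc": 100.0 * (fn + fp) / (tp + fp + tn + fn),
        "f_score": 2 * precision * sensitivity / (precision + sensitivity),
    }
```

In practice the surveyed methods replace the single running average with richer models (e.g. mixtures of Gaussians), but the mask-then-score pipeline is the same.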


Author(s):  
Sameh Zarif ◽  
Mina Ibrahim

Reconstructing and repairing corrupted or missing parts after object removal in digital video is an important trend in artwork restoration. Video inpainting, an active subject in video processing, deals with the recovery of corrupted or missing data. Most previous video inpainting approaches spend considerable time on an extensive search for the best patch to restore the damaged frames. In addition, most of them cannot handle gradual and sudden illumination changes, dynamic backgrounds, full object occlusion, or changes in object scale. In this paper, we present a complete video inpainting framework without the extensive search process. The proposed framework consists of a segmentation stage based on a low-resolution version of the frame and on background subtraction. A background inpainting stage restores the damaged background regions after static or moving object removal, based on the gray-level co-occurrence matrix (GLCM). A foreground inpainting stage is based on an object repository, and the GLCM is used to complete moving objects while they are occluded. The proposed method reduces the inpainting time from hours to a few seconds and maintains spatial and temporal consistency. It works well when the background contains clutter or fake motion, and it can handle changes in object size and posture. Moreover, it is able to handle full occlusion and illumination changes.
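The GLCM at the heart of the background and foreground inpainting stages is a standard texture descriptor: it counts how often a gray level i co-occurs with a gray level j at a fixed pixel offset. A minimal NumPy sketch of the matrix itself (not of the authors' inpainting pipeline; the offset convention and parameter names here are illustrative assumptions):

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=256, symmetric=True, normalize=True):
    """Gray-level co-occurrence matrix (a minimal sketch).

    image: 2-D integer array with values in [0, levels).
    (dy, dx): non-negative pixel offset between the paired gray levels.
    Returns a (levels, levels) matrix of co-occurrence counts
    (or frequencies, if normalize=True).
    """
    m = np.zeros((levels, levels), dtype=np.float64)
    h, w = image.shape
    # Pair each pixel with its neighbour at offset (dy, dx).
    src = image[:h - dy, :w - dx]
    dst = image[dy:, dx:]
    np.add.at(m, (src.ravel(), dst.ravel()), 1)
    if symmetric:
        m = m + m.T    # count pairs in both directions
    if normalize:
        m /= m.sum()
    return m
```

Texture features such as contrast or homogeneity are then computed from this matrix and compared between candidate patches and the region being filled.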
