Video Inpainting: A Complete Framework

Author(s):  
Sameh Zarif
Mina Ibrahim

Reconstructing and repairing corrupted or missing regions after object removal in digital video is an important topic in artwork restoration. Video inpainting, an active subject in video processing, deals with the recovery of such corrupted or missing data. Most previous video inpainting approaches spend considerable time on an extensive search for the best patch with which to restore the damaged frames. In addition, most of them cannot handle gradual and sudden illumination changes, dynamic backgrounds, full object occlusion, or changes in object scale. In this paper, we present a complete video inpainting framework that avoids the extensive search process. The proposed framework consists of a segmentation stage based on a low-resolution version of the frame and background subtraction. A background inpainting stage restores the damaged background regions after static or moving object removal based on the gray-level co-occurrence matrix (GLCM). A foreground inpainting stage is based on an objects repository, and the GLCM is used to complete moving objects while they are occluded. The proposed method reduces the inpainting time from hours to a few seconds and maintains spatial and temporal consistency. It works well when the background contains clutter or fake motion, and it can handle changes in object size and posture. Moreover, it is able to handle full occlusion and illumination changes.
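The GLCM that drives the background and occlusion inpainting stages above is a standard texture descriptor. The sketch below is not the authors' implementation, only a minimal plain-NumPy illustration of the idea: count how often two gray levels co-occur at a fixed pixel offset, normalize the counts into a joint probability matrix, and derive a texture statistic (contrast) from it. The offset, number of gray levels, and test images are illustrative assumptions.

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=4):
    """Gray-level co-occurrence matrix for one (dx, dy) offset.

    Counts how often a pixel of gray level i co-occurs with a pixel of
    gray level j at the given offset, then normalizes the counts into a
    joint probability matrix.  `image` must hold ints in [0, levels).
    """
    h, w = image.shape
    mat = np.zeros((levels, levels), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                mat[image[y, x], image[y2, x2]] += 1
    total = mat.sum()
    return mat / total if total > 0 else mat

def glcm_contrast(p):
    """Contrast texture descriptor: sum over i, j of p(i, j) * (i - j)^2."""
    i, j = np.indices(p.shape)
    return float((p * (i - j) ** 2).sum())

# A uniform patch has zero contrast; alternating stripes have high contrast.
flat = np.zeros((4, 4), dtype=int)
stripes = np.tile([0, 3], (4, 2))          # rows of [0, 3, 0, 3]
print(glcm_contrast(glcm(flat)))           # 0.0
print(glcm_contrast(glcm(stripes)))
```

In an inpainting setting, such statistics let a method pick source texture whose co-occurrence profile matches the region surrounding the hole, rather than searching exhaustively for a best patch.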

Author(s):  
K. Anuradha
N.R. Raajan

Video processing has gained considerable significance because of its applications in various areas of research, including the monitoring of movements in public places for surveillance. Video sequences from standard datasets such as I2R, CAVIAR and UCSD are often used for video processing applications and research. Both the actors and their movements in a video sequence must be identified against a static or dynamic background. The significance of research in video processing lies in identifying the foreground movement of actors and objects in video sequences. Foreground identification can be performed against a static or a dynamic background, but it becomes considerably more complex when the background is dynamic. For identifying foreground movement in video sequences with a dynamic background, two algorithms are proposed in this article: Frame Difference between Neighboring Frames using Hue, Saturation and Value (FDNF-HSV) and Frame Difference between Neighboring Frames using Greyscale (FDNF-G). The proposed algorithms are evaluated against state-of-the-art techniques with regard to F-measure, recall and precision, and the results show enhanced performance.
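The abstract does not give the exact formulations of FDNF-HSV and FDNF-G, but both are built on the same core operation: an absolute difference between neighboring frames, thresholded into a binary foreground mask. The sketch below shows that core for the grayscale case; the HSV variant would presumably apply the same differencing per H, S and V channel. The threshold value and toy frames are assumptions for illustration only.

```python
import numpy as np

def frame_diff_gray(prev_frame, curr_frame, threshold=25):
    """Frame difference between neighboring grayscale frames.

    Computes the per-pixel absolute difference and thresholds it into a
    binary foreground mask (1 = moving pixel, 0 = background).
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

# Toy sequence: a static 8x8 background with one 2x2 block that appears
# in the current frame, simulating a small moving object.
prev = np.zeros((8, 8), dtype=np.uint8)
curr = prev.copy()
curr[3:5, 3:5] = 200
mask = frame_diff_gray(prev, curr)
print(mask.sum())  # 4 foreground pixels
```

Differencing only neighboring frames (rather than against a fixed reference) is what makes this family of methods tolerant of a slowly changing, dynamic background: background pixels that drift gradually produce small inter-frame differences that fall below the threshold.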


2013
Vol 81 (13)
pp. 28-30
Author(s):  
Vaishali U. Gaikwad
P. V. Kulkarni

Background subtraction is a key step in detecting moving objects in video in the field of computer vision: a reference frame is subtracted from every new frame of the video scene. A wide variety of background subtraction techniques is available in the literature for real-life applications such as crowd analysis, human activity tracking, and traffic analysis. However, there have not been enough benchmark datasets that cover all the challenges these techniques face in object detection, namely dynamic background, illumination changes, shadow appearance, occlusion, and object speed. From this perspective, we provide an exhaustive literature survey of background subtraction techniques for video surveillance applications that addresses these challenges in real situations. Additionally, we survey eight benchmark video datasets, namely Wallflower, BMC, PET, IBM, CAVIAR, CD.Net, SABS and RGB-D, along with their available ground truth. This study evaluates the performance of five background subtraction methods using performance parameters such as specificity, sensitivity, FNR, PWC and F-score in order to identify an accurate and efficient method for detecting moving objects in less computational time.
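The reference-frame subtraction described above, together with mask-level evaluation metrics of the kind this survey uses, can be sketched as follows. This is a generic illustration, not any of the five surveyed methods; the threshold and the toy frames are assumptions, and only precision, recall and F-measure are computed here.

```python
import numpy as np

def subtract_background(frame, reference, threshold=30):
    """Classic reference-frame background subtraction.

    A pixel is foreground when its absolute difference from the
    reference frame exceeds the threshold.  Returns a boolean mask.
    """
    diff = np.abs(frame.astype(np.int16) - reference.astype(np.int16))
    return diff > threshold

def f_measure(pred, truth):
    """Precision, recall and F-measure of a binary foreground mask
    against a ground-truth mask."""
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

# Toy scene: empty reference, one bright 2x2 object in the new frame.
reference = np.zeros((6, 6), dtype=np.uint8)
frame = reference.copy()
frame[2:4, 2:4] = 120
truth = np.zeros((6, 6), dtype=bool)
truth[2:4, 2:4] = True

pred = subtract_background(frame, reference)
print(f_measure(pred, truth))  # perfect detection on this toy scene
```

Sensitivity is the same quantity as recall; specificity, FNR and PWC follow from the same TP/FP/FN counts plus true negatives, so extending this evaluator to the survey's full metric set is mechanical.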


2009 ◽  
Author(s):  
Sang-Heon Lee
Soon-Young Lee
Jun-Hee Heu
Chang-Su Kim
Sang-Uk Lee
