Video Inpainting
Recently Published Documents

Total documents: 117 (five years: 42)
H-index: 14 (five years: 4)

Author(s): Gajanan Tudavekar, Santosh S. Saraf, Sanjay R. Patil

Video inpainting aims to complete the missing regions in video frames in a visually pleasing way. It is a challenging task because of the variety of motions across different frames. Existing methods usually use attention models to inpaint videos by retrieving the damaged content from other frames. Nevertheless, these methods suffer from irregular attention weights across the spatio-temporal dimensions, which give rise to artifacts in the inpainted video. To overcome this problem, a Spatio-Temporal Inference Transformer Network (STITN) is proposed. STITN aligns the frames to be inpainted and inpaints all the frames concurrently, and a spatio-temporal adversarial loss function further improves the results. Our method performs considerably better than existing deep learning approaches in both quantitative and qualitative evaluation.
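The spatio-temporal adversarial loss can be pictured with a short sketch. The following is a minimal, hypothetical PyTorch version, assuming a hinge-style GAN objective and a small 3D-convolutional discriminator that scores realism jointly over space and time; the class and function names are illustrative and not the authors' STITN implementation.

```python
# Minimal sketch of a spatio-temporal adversarial loss, assuming a
# 3D-convolutional discriminator over stacked video frames.
# Names (VideoDiscriminator, hinge losses) are illustrative only.
import torch
import torch.nn as nn

class VideoDiscriminator(nn.Module):
    """Scores realism jointly over space and time with 3D convolutions."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(channels, 64, kernel_size=3, stride=(1, 2, 2), padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv3d(64, 128, kernel_size=3, stride=(1, 2, 2), padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv3d(128, 1, kernel_size=3, padding=1),  # patch-level scores
        )

    def forward(self, video):  # video: (B, C, T, H, W)
        return self.net(video)

def d_hinge_loss(real_scores, fake_scores):
    # Discriminator pushes real scores above +1 and fake scores below -1.
    return (torch.relu(1.0 - real_scores).mean()
            + torch.relu(1.0 + fake_scores).mean())

def g_hinge_loss(fake_scores):
    # Generator (the inpainter) tries to raise the discriminator's scores.
    return -fake_scores.mean()

# Usage: inpainted and ground-truth clips of shape (B, C, T, H, W).
disc = VideoDiscriminator()
real = torch.randn(2, 3, 5, 64, 64)
fake = torch.randn(2, 3, 5, 64, 64)
loss_d = d_hinge_loss(disc(real), disc(fake.detach()))
loss_g = g_hinge_loss(disc(fake))
```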


2021
Author(s): Cheng Chen, Jiayin Cai, Yao Hu, Xu Tang, Xinggang Wang, ...

2021, Vol. 18 (5), pp. 172988142110531
Author(s): Rui Zhao, Hengyu Li, Jingyi Liu, Huayan Pu, Shaorong Xie, ...

This article addresses video inpainting by combining multiview spatial information with inter-frame information from the video sequence. A vision system is an important way for autonomous vehicles to obtain information about the external environment, and the loss or distortion of visual images caused by camera damage or soiling seriously impairs the vision system's ability to correctly perceive and understand that environment. To address noise and damage in single video frames, we propose a complete two-stage video repair method that uses the spatial information of images from different viewpoints and the optical flow information of the video sequence to guide and constrain the restoration of damaged frames. The method treats video inpainting as a pixel propagation problem: inter-frame information from preceding and succeeding frames is propagated along the optical flow, and multiview information assists the repair through a conditional generative adversarial network. The method was trained and tested on a dataset recorded in Zurich by a pair of cameras mounted on a mobile platform.
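As a rough illustration of the pixel-propagation idea, the sketch below fills a damaged region of one frame by warping a neighboring frame along dense optical flow. It uses OpenCV's Farneback flow as a stand-in; the authors' method additionally uses multiview constraints and a conditional GAN, which are omitted here, and in practice flow estimated over a damaged region is itself unreliable.

```python
# A minimal sketch of flow-based pixel propagation for video inpainting,
# using OpenCV's Farneback optical flow; not the authors' exact method.
import cv2
import numpy as np

def propagate_from_next(frame_t, frame_next, hole_mask):
    """Fill masked pixels of frame_t by sampling frame_next along the flow.

    frame_t, frame_next: uint8 BGR images of equal size.
    hole_mask: boolean array, True where frame_t is missing or damaged.
    """
    gray_t = cv2.cvtColor(frame_t, cv2.COLOR_BGR2GRAY)
    gray_n = cv2.cvtColor(frame_next, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(gray_t, gray_n, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = gray_t.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    map_x = xs + flow[..., 0]  # where each pixel of frame_t lands in frame_next
    map_y = ys + flow[..., 1]
    warped = cv2.remap(frame_next, map_x, map_y, cv2.INTER_LINEAR)
    out = frame_t.copy()
    out[hole_mask] = warped[hole_mask]  # propagate only into the hole
    return out
```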


2021, Vol. E104.D (8), pp. 1349-1358
Author(s): Yusuke Hara, Xueting Wang, Toshihiko Yamasaki

2021
Author(s): Xueyan Zou, Linjie Yang, Ding Liu, Yong Jae Lee

Information, 2021, Vol. 12 (6), pp. 233
Author(s): Haoran Xu, Yanbai He, Xinya Li, Xiaoying Hu, Chuanyan Hao, ...

Subtitles are crucial for video content understanding. However, a large number of videos have only burned-in, hard-coded subtitles, which prevents video re-editing, translation, etc. In this paper, we construct a deep-learning-based system for the inverse conversion of a burned-in-subtitle video into a subtitle file and an inpainted video by coupling three deep neural networks (CTPN, CRNN, and EdgeConnect). We evaluated the performance of the proposed method and found that it achieved high-precision separation of the subtitles and video frames and significantly improved the video inpainting results compared with existing methods. This research fills a gap in the application of deep learning to burned-in-subtitle video reconstruction and is expected to be widely applied to the reconstruction and re-editing of videos with subtitles, advertisements, logos, and other occlusions.
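The coupling of the three networks can be summarized as a per-frame loop: detect subtitle boxes, recognize their text with a timestamp, mask the boxes, and inpaint the masked region. The sketch below is a hedged skeleton in which detect_boxes, recognize, and inpaint are caller-supplied wrappers standing in for CTPN, CRNN, and EdgeConnect; these wrapper signatures are assumptions, not a published API.

```python
# Hypothetical pipeline skeleton: text detection (CTPN), text recognition
# (CRNN), and inpainting (EdgeConnect) are passed in as callables, since
# those models expose no single standard Python API.
from typing import Callable, List, Tuple
import numpy as np

Box = Tuple[int, int, int, int]  # x, y, width, height

def extract_and_inpaint(
    frames: List[np.ndarray],
    fps: float,
    detect_boxes: Callable[[np.ndarray], List[Box]],          # e.g. a CTPN wrapper
    recognize: Callable[[np.ndarray], str],                   # e.g. a CRNN wrapper
    inpaint: Callable[[np.ndarray, np.ndarray], np.ndarray],  # e.g. an EdgeConnect wrapper
):
    """Return (timestamped subtitle lines, inpainted frames)."""
    subtitles, cleaned = [], []
    for idx, frame in enumerate(frames):
        mask = np.zeros(frame.shape[:2], dtype=np.uint8)
        for (x, y, w, h) in detect_boxes(frame):
            crop = frame[y:y + h, x:x + w]
            subtitles.append((idx / fps, recognize(crop)))  # timestamped line
            mask[y:y + h, x:x + w] = 255                    # region to remove
        cleaned.append(inpaint(frame, mask) if mask.any() else frame)
    return subtitles, cleaned
```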


Author(s): Xiangling Ding, Yifeng Pan, Kui Luo, Yanming Huang, Junlin Ouyang, ...

Author(s): Sameh Zarif, Mina Ibrahim

Reconstructing and repairing corrupted or missing parts after object removal in digital video is an important trend in artwork restoration. Video inpainting is an active topic in video processing that deals with the recovery of corrupted or missing data. Most previous video inpainting approaches spend considerable time in an extensive search for the best patches to restore damaged frames. In addition, most of them cannot handle gradual and sudden illumination changes, dynamic backgrounds, full object occlusion, or changes in object scale. In this paper, we present a complete video inpainting framework that avoids the extensive search process. The proposed framework consists of a segmentation stage based on a low-resolution version of the frame and background subtraction. A background inpainting stage restores the damaged background regions after static or moving object removal based on the gray-level co-occurrence matrix (GLCM). A foreground inpainting stage is based on an object repository, and GLCM is used to complete moving occluded objects during the occlusion. The proposed method reduces the inpainting time from hours to a few seconds while maintaining spatial and temporal consistency. It works well when the background is cluttered or contains spurious motion, and it can handle changes in object size and posture, full occlusion, and illumination changes.
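To illustrate how GLCM statistics can drive patch selection, the sketch below scores candidate patches by the distance between their GLCM texture features (computed with scikit-image) and those of the region surrounding the hole. The scoring scheme is a hypothetical simplification, not the paper's actual pipeline.

```python
# A minimal sketch of GLCM-based texture matching for background filling;
# illustrative only. Patches are expected as 8-bit grayscale arrays.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(patch_gray):
    """Contrast, homogeneity, and energy of a uint8 grayscale patch."""
    glcm = graycomatrix(patch_gray, distances=[1],
                        angles=[0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p).mean()
                     for p in ("contrast", "homogeneity", "energy")])

def best_candidate(surrounding_region, candidates):
    """Pick the candidate whose texture best matches the hole's surroundings."""
    ref = glcm_features(surrounding_region)
    dists = [np.linalg.norm(glcm_features(c) - ref) for c in candidates]
    return candidates[int(np.argmin(dists))]
```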

