Continuous Bidirectional Optical Flow for Video Frame Sequence Interpolation

Author(s):  
Donghao Gu ◽  
ZhaoJing Wen ◽  
Wenxue Cui ◽  
Rui Wang ◽  
Feng Jiang ◽  
...  
Author(s):  
Rajkumar Kannan ◽  
Sridhar Swaminathan ◽  
Gheorghita Ghinea ◽  
Frederic Andres ◽  
Kalaiarasi Sonai Muthu Anbananthen

Video summarization condenses a video by extracting its informative and interesting segments. In this article, a novel video summarization approach based on spatiotemporal salient region detection is proposed. The approach first segments a video into a set of shots, which are ranked by spatiotemporal saliency scores; the score for a shot is computed by aggregating the frame-level spatiotemporal saliency scores. Spatial and temporal salient regions are detected separately, using different saliency cues associated with the objects in a visual scene. The spatial saliency of a video frame is computed from color contrast and color distribution estimations combined with a center prior. The temporal saliency of a video frame is estimated by integrating local and global temporal saliencies computed from patch-level optical flow abstractions. Finally, the top-ranked shots with the highest saliency scores are selected to generate the video summary. Objective and subjective experimental results demonstrate the efficacy of the proposed approach.
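
The pipeline above reduces to three steps: score each frame spatially and temporally, aggregate per shot, and keep the top-ranked shots. A minimal sketch of that skeleton follows, assuming shot boundaries are already detected; the color-contrast, center-prior, and flow-magnitude estimators below are crude stand-ins for the paper's models, and all function names and parameters are illustrative.

```python
import cv2
import numpy as np

def spatial_saliency(frame):
    """Crude spatial saliency: color contrast against the mean frame
    color, weighted by a Gaussian center prior (illustrative only)."""
    lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB).astype(np.float32)
    contrast = np.linalg.norm(lab - lab.mean(axis=(0, 1)), axis=2)
    h, w = contrast.shape
    ys, xs = np.mgrid[0:h, 0:w]
    center = np.exp(-(((ys - h / 2) / (h / 2)) ** 2
                      + ((xs - w / 2) / (w / 2)) ** 2))
    return float((contrast * center).mean())

def temporal_saliency(prev_gray, gray):
    """Crude temporal saliency: mean optical-flow magnitude."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return float(np.linalg.norm(flow, axis=2).mean())

def rank_shots(frames, shot_bounds, top_k=3):
    """Aggregate frame-level saliency per shot and keep the top_k shots."""
    scores = []
    for start, end in shot_bounds:          # shot_bounds: list of (start, end)
        score = 0.0
        for i in range(start, end):
            score += spatial_saliency(frames[i])
            if i > start:
                g0 = cv2.cvtColor(frames[i - 1], cv2.COLOR_BGR2GRAY)
                g1 = cv2.cvtColor(frames[i], cv2.COLOR_BGR2GRAY)
                score += temporal_saliency(g0, g1)
        scores.append(score / max(end - start, 1))
    order = np.argsort(scores)[::-1][:top_k]
    return [shot_bounds[i] for i in order]
```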


Symmetry ◽  
2019 ◽  
Vol 11 (10) ◽  
pp. 1251 ◽  
Author(s):  
Ahn ◽  
Jeong ◽  
Kim ◽  
Kwon ◽  
Yoo

Recently, video frame interpolation research based on convolutional neural networks has shown remarkable results. However, these methods demand large amounts of memory and run time for high-resolution videos and cannot process a 4K frame in a single pass. In this paper, we propose a fast 4K video frame interpolation method based on a multi-scale optical flow reconstruction scheme. The proposed method predicts bidirectional optical flow at low resolution and reconstructs it at high resolution. We also propose consistency and multi-scale smoothness losses to enhance the quality of the predicted optical flow, and use an adversarial loss to make the interpolated frames more seamless and natural. We demonstrate that the proposed method outperforms existing state-of-the-art methods in quantitative evaluation, while running up to 4.39× faster than those methods on 4K videos.
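
The core idea, estimating bidirectional flow at low resolution and reconstructing it at full resolution before warping, can be sketched as follows. This is a rough approximation under a linear-motion assumption: the paper's learned reconstruction network and its consistency, smoothness, and adversarial losses are replaced here by plain bilinear upsampling and Farneback flow, and every name is illustrative.

```python
import cv2
import numpy as np

def upscale_flow(flow, size):
    """Resize a flow field to size=(w, h), rescaling its vectors."""
    h, w = flow.shape[:2]
    up = cv2.resize(flow, size, interpolation=cv2.INTER_LINEAR)
    up[..., 0] *= size[0] / w
    up[..., 1] *= size[1] / h
    return up

def warp(frame, flow, t):
    """Backward-warp: sample `frame` at x + t * flow(x)."""
    h, w = frame.shape[:2]
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (xs + t * flow[..., 0]).astype(np.float32)
    map_y = (ys + t * flow[..., 1]).astype(np.float32)
    return cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)

def interpolate_midpoint(f0, f1, scale=4):
    """Estimate bidirectional flow at 1/scale resolution, upscale it,
    then blend the two warped frames at t = 0.5 (ignores occlusion)."""
    shrink = lambda f: cv2.resize(f, (f.shape[1] // scale, f.shape[0] // scale))
    g0, g1 = (cv2.cvtColor(shrink(f), cv2.COLOR_BGR2GRAY) for f in (f0, f1))
    fwd = cv2.calcOpticalFlowFarneback(g0, g1, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    bwd = cv2.calcOpticalFlowFarneback(g1, g0, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    size = (f0.shape[1], f0.shape[0])
    # Linear-motion approximation: half of each full-interval flow
    # stands in for the flow from t = 0.5 toward each endpoint.
    w0 = warp(f0, upscale_flow(bwd, size), 0.5).astype(np.float32)
    w1 = warp(f1, upscale_flow(fwd, size), 0.5).astype(np.float32)
    return ((w0 + w1) / 2).astype(np.uint8)
```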


Author(s):  
Kazuhiko Kawamoto ◽  
Naoya Ohnishi ◽  
Atsushi Imiya ◽  
Reinhard Klette ◽  
...  

A matching algorithm that evaluates the difference between model and computed flows for obstacle detection in video sequences is presented, together with a stabilization method based on median filtering that overcomes instability in the computation of optical flow. Since optical flow is a scene-independent measurement, the proposed algorithm can be applied in a variety of situations, whereas most existing color- and texture-based algorithms depend on specific scenes, such as roadway or indoor scenes. An experiment is conducted on three real image sequences, in which a static box or a moving toy car appears, to evaluate accuracy under varying thresholds using receiver operating characteristic (ROC) curves. For the three image sequences, the ROC curves show, in the best case, false positive and true positive fractions of 19.0% and 79.6%, 11.4% and 84.5%, and 19.0% and 85.4%, respectively. The processing time is 19.38 ms per frame on a 2.0 GHz Pentium 4, which is shorter than the video frame interval.
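
A hedged sketch of the matching step follows: compute flow, median-filter it for stability, and flag pixels whose flow deviates from a supplied model flow. The model-flow source (e.g., predicted egomotion over a clear ground plane), the threshold, and the kernel size are assumptions, not values from the paper.

```python
import cv2
import numpy as np

def obstacle_mask(prev_gray, gray, model_flow, thresh=2.0, ksize=5):
    """Flag pixels whose measured flow deviates from the model flow."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # Median-filter each flow channel to stabilize noisy estimates.
    for c in range(2):
        flow[..., c] = cv2.medianBlur(np.ascontiguousarray(flow[..., c]), ksize)
    diff = np.linalg.norm(flow - model_flow, axis=2)
    return diff > thresh  # boolean per-pixel obstacle mask
```

Sweeping `thresh` against ground-truth masks and recording the true and false positive fractions at each setting traces out ROC curves like those used in the evaluation.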


Author(s):  
Srivatsa Prativadibhayankaram ◽  
Huynh Van Luong ◽  
Thanh-Ha Le ◽  
André Kaup

In the context of video background-foreground separation, we propose a compressive online robust principal component analysis (RPCA) with optical flow that recursively separates a sequence of video frames into foreground (sparse) and background (low-rank) components. This method can process each video frame from a small set of measurements, in contrast to conventional batch-based RPCA, which processes the full data. The proposed method also leverages multiple sources of prior information by incorporating previously separated background and foreground frames into an n-l1 minimization problem. Moreover, optical flow is used to estimate the motion between previous foreground frames and to compensate for that motion, yielding higher-quality foreground priors that improve the separation. Our method is tested on several video sequences in different scenarios for online background-foreground separation from compressive measurements. The visual and quantitative results show that the proposed method outperforms existing methods.
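
Purely as a conceptual illustration, the per-frame sparse-recovery step can be sketched as a prior-weighted iterative soft-thresholding update. This is not the paper's n-l1 algorithm; `Phi`, `lam`, and `prior_fg` are hypothetical stand-ins for the measurement matrix, the regularization weight, and the motion-compensated prior foreground.

```python
import numpy as np

def soft_threshold(v, t):
    """Element-wise shrinkage operator for l1-regularized problems."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def foreground_step(y, Phi, background, prior_fg, lam=0.1, n_iter=100):
    """Recover a sparse foreground x from y ~= Phi @ (background + x),
    shrinking less where the motion-compensated prior foreground is
    active (a crude reweighted-l1 use of prior information)."""
    r = y - Phi @ background                  # residual after the background
    x = np.zeros(Phi.shape[1])
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2  # 1 / Lipschitz constant
    weights = 1.0 / (np.abs(prior_fg) + 1e-2)
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - r)
        x = soft_threshold(x - step * grad, lam * step * weights)
    return x
```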


Author(s):  
Xiao-Yu Zhang ◽  
Haichao Shi ◽  
Changsheng Li ◽  
Kai Zheng ◽  
Xiaobin Zhu ◽  
...  

Action recognition in videos has attracted considerable attention over the past decade. To learn robust models, previous methods usually assume that videos are trimmed into short sequences and require ground-truth annotations for each video frame or sequence, which is costly and time-consuming. In this paper, given only video-level annotations, we propose a novel weakly supervised framework that simultaneously locates action frames and recognizes actions in untrimmed videos. The framework consists of two major components. First, for action frame localization, we exploit a self-attention mechanism to weight each frame, so that the influence of background frames is effectively suppressed. Second, since trimmed videos are publicly available and contain useful information, we present an additional module that transfers knowledge from trimmed videos to improve classification performance on untrimmed ones. Extensive experiments on two benchmark datasets (THUMOS14 and ActivityNet1.3) clearly corroborate the efficacy of our method.
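
The first component, self-attention frame weighting, amounts to attention-weighted temporal pooling of per-frame features before video-level classification. A minimal PyTorch-style sketch follows; the feature dimension and layer sizes are illustrative, and per-frame features are assumed to be precomputed.

```python
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    """Weight per-frame features with self-attention, pool them into a
    single video-level feature, and classify (sizes are illustrative)."""
    def __init__(self, feat_dim=1024, n_classes=20):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(feat_dim, 256), nn.Tanh(),
                                  nn.Linear(256, 1))
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, feats):                       # feats: (T, feat_dim)
        w = torch.softmax(self.attn(feats), dim=0)  # (T, 1) frame weights
        video_feat = (w * feats).sum(dim=0)         # background frames get low weight
        return self.classifier(video_feat), w.squeeze(-1)
```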


2013 ◽  
Vol 427-429 ◽  
pp. 1789-1793
Author(s):  
Shuang Jun Liu ◽  
Rong Yi Cui

Based on the frame-differential optical flow field, a method of crucial area detection for surveillance video of examination rooms is proposed in this paper. First, the optical flow field is calculated from the difference between two adjacent frames. Second, the scene is divided coarsely into blocks; blocks whose centroid speed exceeds a given threshold are further divided into fine sub-blocks, and the sub-block with the maximum centroid speed within each block is marked as the area of an abnormal target. Finally, sub-blocks with exceptional speed in the same observation time slice are judged to be correlated areas with abnormal speed (CAAS), and the intersection of adjacent CAAS is taken as the crucial area. Experimental results show that the proposed method effectively detects abnormal movement areas and accurately locates the crucial area affecting the movement of other targets.
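
A minimal sketch of the coarse-to-fine block analysis follows, assuming a dense flow field from adjacent frames; the block sizes and speed threshold are illustrative rather than the paper's settings, and mean flow magnitude stands in for the centroid speed.

```python
import cv2
import numpy as np

def block_speeds(flow, block):
    """Mean flow magnitude per block of size block x block."""
    mag = np.linalg.norm(flow, axis=2)
    h, w = mag.shape
    hb, wb = h // block, w // block
    cropped = mag[:hb * block, :wb * block]
    return cropped.reshape(hb, block, wb, block).mean(axis=(1, 3))

def abnormal_subblocks(prev_gray, gray, coarse=64, fine=16, thresh=1.5):
    """Coarse blocks above the speed threshold are subdivided; the
    fastest fine sub-block in each is flagged as an abnormal target."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    coarse_speed = block_speeds(flow, coarse)
    hits = []
    for by, bx in zip(*np.where(coarse_speed > thresh)):
        sub = flow[by * coarse:(by + 1) * coarse,
                   bx * coarse:(bx + 1) * coarse]
        fine_speed = block_speeds(sub, fine)
        fy, fx = np.unravel_index(fine_speed.argmax(), fine_speed.shape)
        hits.append((by * coarse + fy * fine, bx * coarse + fx * fine))
    return hits  # top-left corners of flagged fine sub-blocks
```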

