Enhanced Real-Time Intermediate Flow Estimation for Video Frame Interpolation

Author(s):
Minseop Kim
Haechul Choi

Recently, the demand for high-quality video content has been increasing rapidly, driven by advances in network technology and the growth of video streaming platforms. In particular, displays with a high refresh rate, such as 120 Hz, have become popular. However, the visual quality is enhanced only if the video stream is produced at the same high frame rate. To achieve this quality, conventional low-frame-rate videos should be converted to a high frame rate in real time. This paper introduces a bidirectional intermediate flow estimation method for real-time video frame interpolation. A bidirectional intermediate optical flow is estimated directly to predict an accurate intermediate frame. For real-time processing, multiple frames are interpolated with a single intermediate optical flow, and parts of the network are implemented in 16-bit floating-point precision. Perceptual loss is also applied to improve the perceptual quality of the interpolated frames. The experimental results showed a high prediction accuracy of 35.54 dB on the Vimeo90K triplet benchmark dataset, and an interpolation speed of 84 fps was achieved at 480p resolution.
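
The abstract leaves the implementation details out, but the core idea of reusing one intermediate flow pair for several timesteps can be sketched roughly as below. This is a minimal illustration, not the authors' network: the helper `backward_warp`, the linear rescaling of the t = 0.5 flow, and the blend weights are all assumptions made for demonstration.

```python
# Minimal sketch (not the paper's code) of sharing one bidirectional
# intermediate flow pair across several interpolation timesteps.
import torch
import torch.nn.functional as F

def backward_warp(frame, flow):
    """Sample `frame` (N, C, H, W) at positions displaced by `flow` (N, 2, H, W)."""
    _, _, h, w = frame.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=frame.device),
        torch.arange(w, device=frame.device),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=0).unsqueeze(0).to(frame.dtype) + flow
    # Normalize pixel coordinates to [-1, 1] as expected by grid_sample.
    gx = 2.0 * grid[:, 0] / (w - 1) - 1.0
    gy = 2.0 * grid[:, 1] / (h - 1) - 1.0
    return F.grid_sample(frame, torch.stack((gx, gy), dim=-1), align_corners=True)

def interpolate_multi(frame0, frame1, flow_half_to0, flow_half_to1,
                      timesteps=(0.25, 0.5, 0.75)):
    """Reuse one intermediate flow pair, estimated at t = 0.5, for several timesteps.

    The linear rescaling of the flow and the fixed blend weights are simplifying
    assumptions; in the paper the fusion is learned, and parts of such a network
    can run in float16 (e.g. under torch.autocast) as the abstract notes.
    """
    outputs = []
    for t in timesteps:
        f_t0 = flow_half_to0 * (t / 0.5)          # flow from time t back to frame 0
        f_t1 = flow_half_to1 * ((1.0 - t) / 0.5)  # flow from time t forward to frame 1
        warped0 = backward_warp(frame0, f_t0)
        warped1 = backward_warp(frame1, f_t1)
        outputs.append((1.0 - t) * warped0 + t * warped1)
    return outputs
```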

2012
Vol 24 (4)
pp. 686-698
Author(s):
Lei Chen
Hua Yang
Takeshi Takaki
Idaku Ishii

In this paper, we propose a novel method for accurate real-time optical flow estimation for both high-speed and low-speed moving objects based on High-Frame-Rate (HFR) videos. We introduce a multiframe-straddling function that selects several pairs of images with different frame intervals from an HFR image sequence, even when the estimated optical flow must be output at standard video rates (NTSC at 30 fps and PAL at 25 fps). The multiframe-straddling function can markedly extend the measurable range of velocities in optical flow estimation without heavy computation by adaptively selecting a small frame interval for high-speed objects and a large frame interval for low-speed objects. On the basis of the relationship between the frame intervals and the accuracy of the optical flows estimated by the Lucas–Kanade method, we devise a method to determine multiple frame intervals for optical flow estimation and to select an optimal frame interval from these intervals according to the amplitude of the estimated optical flow. Our method was implemented in software on a high-speed vision platform, IDP Express. The estimated optical flows were output accurately at 40 ms intervals in real time using three pairs of 512×512 images; these images were selected by frame-straddling a 2000-fps video with intervals of 0.5, 1.5, and 5 ms. Several experiments on high-speed movements verified that our method can markedly extend the measurable range of velocities in optical flow estimation compared with optical flows estimated from 25-fps videos with the Lucas–Kanade method.
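
As a rough illustration of the interval-selection rule described above, the following sketch picks a straddling interval from the amplitude of a previously estimated flow field. It is not the IDP Express implementation: the displacement threshold and the use of the median flow amplitude are assumptions, and any Lucas–Kanade routine can supply the input flow.

```python
# Illustrative sketch of adaptive frame-interval selection from flow amplitude.
import numpy as np

# Candidate straddling intervals in milliseconds, matching the paper's setup
# (0.5, 1.5 and 5 ms pairs drawn from a 2000-fps stream).
INTERVALS_MS = (0.5, 1.5, 5.0)
MAX_DISPLACEMENT_PX = 3.0   # assumed bound where a differential method stays accurate

def select_interval(flow_uv, current_interval_ms):
    """Pick the largest interval whose expected displacement stays measurable.

    flow_uv: array of shape (H, W, 2) estimated over `current_interval_ms`.
    """
    amplitude = np.median(np.hypot(flow_uv[..., 0], flow_uv[..., 1]))
    speed_px_per_ms = amplitude / current_interval_ms
    # Prefer the longest interval (best accuracy for slow motion) that still
    # keeps the displacement small enough for Lucas-Kanade to remain reliable.
    for interval in sorted(INTERVALS_MS, reverse=True):
        if speed_px_per_ms * interval <= MAX_DISPLACEMENT_PX:
            return interval
    return min(INTERVALS_MS)   # fall back to the shortest interval
```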


2021
Vol 5 (2)
pp. 50-61
Author(s):
Uroš Hudomalj
Christopher Mandla
Markus Plattner

This paper presents FPGA implementations of image filtering and image averaging, two widely applied image preprocessing algorithms. The implementations target real-time processing of high-frame-rate, high-resolution image streams and are evaluated in terms of resource usage, power consumption, and achievable frame rates. For the evaluation, Microsemi's SmartFusion2 Advanced Development Kit, which includes a SmartFusion2 M2S150 SoC FPGA, is used. The performance of the developed image filtering implementation is compared with a solution provided by MATLAB's Vision HDL Toolbox evaluated on the same platform. The developed implementations are also compared with FPGA implementations found in existing publications, although those were evaluated on different FPGA platforms. Difficulties in comparing performance between implementations on different platforms are addressed, and limitations of processing image streams with FPGA platforms are discussed.
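
For readers who want a behavioural reference of what the two FPGA modules compute, the sketch below reproduces the arithmetic in plain NumPy. It is only a software model under assumed parameters (a 3x3 box kernel and zero padding); the paper's contribution is the hardware implementation, which this sketch does not represent.

```python
# Software reference sketch (not the FPGA RTL) of 3x3 image filtering and
# frame averaging, the two preprocessing algorithms evaluated in the paper.
import numpy as np

KERNEL_3X3 = np.full((3, 3), 1.0 / 9.0)   # box blur as an example kernel

def filter_frame(frame, kernel=KERNEL_3X3):
    """3x3 neighbourhood filter with zero padding, like a line-buffer design."""
    padded = np.pad(frame.astype(np.float32), 1)
    out = np.zeros(frame.shape, dtype=np.float32)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + frame.shape[0],
                                           dx:dx + frame.shape[1]]
    return out

def average_frames(frames):
    """Pixel-wise average over a list of equally sized frames."""
    return np.mean(np.stack(frames).astype(np.float32), axis=0)
```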


2021
Vol 22 (1-2)
pp. 21-28
Author(s):
Malwina Kubas
Grzegorz Sarwas

The problem of video inter-frame interpolation is an essential task in the field of image processing. Correctly increasing the number of frames in a recording while maintaining smooth motion improves the quality of the played video sequence, enables more effective compression, and allows slow-motion recordings to be created. This paper proposes the FastRIFE algorithm, a speed-oriented improvement of the RIFE (Real-Time Intermediate Flow Estimation) model. The novel method was examined and compared with other recently published algorithms. All source code is available at: https://gitlab.com/malwinq/interpolation-of-images-for-slow-motion-videos.


2020
Vol 34 (07)
pp. 10663-10671
Author(s):
Myungsub Choi
Heewon Kim
Bohyung Han
Ning Xu
Kyoung Mu Lee

Prevailing video frame interpolation techniques rely heavily on optical flow estimation, which requires additional model complexity and computational cost and is susceptible to error propagation in challenging scenarios with large motion and heavy occlusion. To alleviate these limitations, we propose a simple but effective deep neural network for video frame interpolation that is end-to-end trainable and free from a motion estimation network component. Our algorithm employs a special feature reshaping operation, referred to as PixelShuffle, with channel attention, which replaces the optical flow computation module. The main idea behind the design is to distribute the information in a feature map into multiple channels and to extract motion information by attending to the channels for pixel-level frame synthesis. The model built on this principle turns out to be effective in the presence of challenging motion and occlusion. We construct a comprehensive evaluation benchmark and demonstrate that the proposed approach achieves outstanding performance compared with existing models that include an optical flow computation component.
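
The flow-free design described above can be illustrated with a small sketch: spatial detail is folded into channels with a pixel-unshuffle step, channel attention reweights those channels, and a pixel-shuffle step restores the resolution. The layer widths, the scale factor, and the use of a squeeze-and-excitation style attention block are assumptions for illustration and do not reproduce the authors' released architecture.

```python
# Rough sketch of PixelShuffle-based feature reshaping with channel attention.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze spatial dimensions
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))               # per-channel weights

class ShuffleAttentionBlock(nn.Module):
    """Synthesize a middle frame without an explicit motion estimation module."""
    def __init__(self, in_channels=6, scale=4, features=64):
        super().__init__()
        folded = in_channels * scale * scale           # channels after unshuffle
        self.down = nn.PixelUnshuffle(scale)           # space-to-depth reshape
        self.body = nn.Sequential(
            nn.Conv2d(folded, features, 3, padding=1),
            nn.ReLU(inplace=True),
            ChannelAttention(features),
            nn.Conv2d(features, 3 * scale * scale, 3, padding=1),
        )
        self.up = nn.PixelShuffle(scale)               # depth-to-space reshape

    def forward(self, frame0, frame1):
        x = torch.cat((frame0, frame1), dim=1)         # two RGB frames stacked
        return self.up(self.body(self.down(x)))        # synthesized middle frame
```

For two RGB frames of shape (N, 3, H, W) with H and W divisible by the scale factor, `ShuffleAttentionBlock()(frame0, frame1)` returns a tensor with the same shape as one input frame.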

