frame interpolation
Recently Published Documents

TOTAL DOCUMENTS: 290 (five years: 107)
H-INDEX: 20 (five years: 5)

Author(s): Jiawen Xu, Zhiyuan You, Xinyi Le, Cailian Chen, Xinping Guan
2021, Vol. 40 (6), pp. 1-13
Author(s): Karlis Martins Briedis, Abdelaziz Djelouah, Mark Meyer, Ian McGonigal, Markus Gross, ...

Author(s): Minseop Kim, Haechul Choi

Recently, the demand for high-quality video content has increased rapidly, driven by advances in network technology and the growth of video streaming platforms. In particular, displays with a high refresh rate, such as 120 Hz, have become popular. However, visual quality improves only if the video stream is produced at the same high frame rate, so conventional low-frame-rate videos must be converted to a high frame rate in real time. This paper introduces a bidirectional intermediate flow estimation method for real-time video frame interpolation. A bidirectional intermediate optical flow is estimated directly to predict an accurate intermediate frame. For real-time processing, multiple frames are interpolated from a single intermediate optical flow, and parts of the network are implemented in 16-bit floating-point precision. A perceptual loss is also applied to improve the perceptual quality of the interpolated frames. The experimental results showed a high prediction accuracy of 35.54 dB on the Vimeo90K triplet benchmark dataset, and an interpolation speed of 84 fps was achieved at 480p resolution.
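The core idea described in the abstract — backward-warping both input frames with a bidirectional intermediate flow and blending the results into the middle frame — can be sketched as follows. This is a toy illustration, not the authors' implementation: it uses integer-valued flows on 1-D pixel rows and nearest-neighbour sampling in place of the bilinear warping and learned flow network a real system would use.

```python
def backward_warp(frame, flow_x):
    """Backward-warp a 1-D row of pixels by a per-pixel integer flow.

    out[x] samples frame[x + flow_x[x]], clamped to the row borders
    (a toy nearest-neighbour stand-in for bilinear warping).
    """
    w = len(frame)
    return [frame[min(max(x + flow_x[x], 0), w - 1)] for x in range(w)]

def interpolate_midframe(frame0, frame1, flow_t0, flow_t1):
    """Blend the two warped inputs into the intermediate frame.

    flow_t0 / flow_t1 are the bidirectional intermediate flows,
    pointing from the (unknown) middle frame back to frame0 and frame1.
    """
    w0 = backward_warp(frame0, flow_t0)
    w1 = backward_warp(frame1, flow_t1)
    return [0.5 * a + 0.5 * b for a, b in zip(w0, w1)]

# A bright pixel moves from x=3 (frame0) to x=5 (frame1);
# the intermediate flows place it at x=4 in the middle frame.
frame0 = [0.0] * 8; frame0[3] = 1.0
frame1 = [0.0] * 8; frame1[5] = 1.0
flow_t0 = [-1] * 8   # middle frame -> frame0
flow_t1 = [1] * 8    # middle frame -> frame1
mid = interpolate_midframe(frame0, frame1, flow_t0, flow_t1)
```

Because one intermediate flow fixes where content sits in the middle frame, reusing it across several output frames (as the paper does for speed) trades some per-frame accuracy for throughput.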


2021, Vol. 30 (06)
Author(s): Haoran Zhang, Xiaohui Yang, Zhiquan Feng

2021
Author(s): Martin Priessner, David C.A. Gaboriau, Arlo Sheridan, Tchern Lenn, Jonathan R. Chubb, ...

The development of high-resolution microscopes has made it possible to investigate cellular processes in 4D (3D over time). However, observing fast cellular dynamics remains challenging because of photobleaching and phototoxicity. These issues become increasingly problematic with the depth of the volume acquired and the speed of the biological events of interest. Here, we report the implementation of two content-aware frame interpolation (CAFI) deep learning networks, Zooming SlowMo (ZS) and Depth-Aware Video Frame Interpolation (DAIN), based on combinations of recurrent neural networks, which are highly suited to accurately predicting images in between image pairs and therefore improve the temporal resolution of image series as a post-acquisition analysis step. We show that CAFI predictions capture the motion context of biological structures and perform better than standard interpolation methods. We benchmark CAFI's performance on six datasets obtained from three microscopy modalities (point-scanning confocal, spinning-disc confocal, and confocal brightfield microscopy), and demonstrate its capabilities for single-particle tracking applied to the study of lysosome trafficking. CAFI therefore allows reduced light exposure and phototoxicity on the sample and extends the possibility of long-term live-cell imaging. Both DAIN and ZS, as well as the training and testing data, are made available to the wider community via the ZeroCostDL4Mic platform.
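The post-acquisition workflow the abstract describes — inserting predicted frames between acquired time points to raise temporal resolution without extra light exposure — can be sketched with the naive baseline that CAFI is reported to outperform. This is an illustrative stand-in: `linear_midpoint` is the "standard interpolation" baseline, and a CAFI network (ZS or DAIN) would replace it as the `midpoint` function.

```python
def upsample_time_series(frames, midpoint):
    """Insert one synthetic frame between each consecutive pair of
    acquired frames, doubling the temporal resolution of the series."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out.append(midpoint(a, b))
    out.append(frames[-1])
    return out

def linear_midpoint(a, b):
    """Pixel-wise average of two 2-D frames: the motion-unaware
    baseline that content-aware (CAFI) predictions outperform."""
    return [[(pa + pb) / 2.0 for pa, pb in zip(ra, rb)]
            for ra, rb in zip(a, b)]

# Three 1x1 frames of an intensity ramping 0 -> 2 -> 4:
series = [[[0.0]], [[2.0]], [[4.0]]]
doubled = upsample_time_series(series, linear_midpoint)
```

Acquiring at half the frame rate and interpolating back up is exactly what reduces photobleaching and phototoxicity: the sample is illuminated half as often for the same output rate.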


2021
Author(s): Chengcheng Zhou, Zongqing Lu, Linge Li, Qiangyu Yan, Jing-Hao Xue

2021, Vol. 11 (20), pp. 9665
Author(s): Soo-Young Cho, Dae-Yeol Kim, Su-Yeong Oh, Chae-Bong Sohn

Recently, as non-face-to-face work has become more common, the development of streaming services has become a significant issue. As these services spread to increasingly diverse fields, systems become overloaded when users try to transmit high-quality video. In this paper, SRGAN (Super-Resolution Generative Adversarial Network) and DAIN (Depth-Aware Video Frame Interpolation) deep learning models were used to reduce the overload that occurs during real-time video transmission. Each image was divided into a FoV (field-of-view) region and a non-FoV region; SRGAN was applied to the former and DAIN to the latter. Through this process, image quality was improved and the system load was reduced.
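The region-based dispatch the abstract describes can be sketched as routing each region of the frame to a different enhancer. This is a toy per-pixel illustration, not the paper's pipeline: the `sharpen` and `passthrough` callables are hypothetical stand-ins for the SRGAN path and the cheaper non-FoV handling, and a real system would crop whole regions and run the networks on image tensors.

```python
def process_frame(frame, fov_mask, enhance_fov, enhance_rest):
    """Apply the expensive enhancer only inside the field of view,
    and a cheaper path everywhere else (a toy stand-in for routing
    FoV regions to SRGAN and non-FoV regions to DAIN-style handling)."""
    return [[enhance_fov(p) if inside else enhance_rest(p)
             for p, inside in zip(row, mask_row)]
            for row, mask_row in zip(frame, fov_mask)]

# Hypothetical per-pixel enhancers standing in for the two networks.
sharpen = lambda p: p * 10      # stand-in for the SRGAN (FoV) path
passthrough = lambda p: p       # stand-in for the cheaper non-FoV path

frame = [[1, 2], [3, 4]]
fov = [[True, False], [False, True]]
result = process_frame(frame, fov, sharpen, passthrough)
```

The design point is that the costly model only sees the pixels the viewer is actually looking at, which is where the reported load reduction comes from.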

