How Video Super-Resolution and Frame Interpolation Mutually Benefit

2021
Author(s):  
Chengcheng Zhou ◽  
Zongqing Lu ◽  
Linge Li ◽  
Qiangyu Yan ◽  
Jing-Hao Xue


Author(s):  
Abdelhak Belhi ◽  
Abdelaziz Bouras

Museums and cultural institutions in general face the constant challenge of adding more value to their collections. The attractiveness of an asset is closely tied to its value, following the law of supply and demand. Multiple studies have shown that new digital visualization technologies generate more excitement, especially among younger generations. Museums around the world are therefore trying to promote their collections through multimedia and digital technologies such as 3D modeling, Virtual Reality (VR), Augmented Reality (AR), and serious games. However, the difficulty and resources required to implement such technologies present a real challenge. In this poster, we propose a 3D acquisition and visualization framework aimed primarily at increasing the value of cultural collections. The framework respects cost and time constraints while introducing new ways of visualizing and interacting with high-quality 3D models of cultural objects. It leverages a new acquisition setup that simplifies the visual capture process using consumer-level hardware. The acquired images are enhanced using frame interpolation and super-resolution, and a photogrammetry tool is then used to generate the asset's 3D model. The model is displayed on a screen paired with a Leap Motion controller, enabling near-natural hand interaction without sophisticated controllers or headgear.
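The pipeline described above is linear: capture views, densify them with frame interpolation, upscale them with super-resolution, then run photogrammetry. A minimal Python sketch of that flow, assuming hypothetical vfi_model, sr_model, and run_photogrammetry callables standing in for the interpolation network, the SR network, and a photogrammetry tool (the poster does not publish this code):

```python
import imageio.v3 as iio
from pathlib import Path

def build_asset_model(image_dir, vfi_model, sr_model, run_photogrammetry):
    # Hypothetical stand-ins: vfi_model(a, b) returns a synthesized view
    # between two captures, sr_model(v) upscales one view, and
    # run_photogrammetry hands the enhanced set to e.g. a COLMAP/Meshroom wrapper.
    views = [iio.imread(p) for p in sorted(Path(image_dir).glob("*.png"))]
    dense = []
    for a, b in zip(views, views[1:]):
        dense += [a, vfi_model(a, b)]        # densify the capture sequence
    dense.append(views[-1])
    upscaled = [sr_model(v) for v in dense]  # enhance spatial resolution
    return run_photogrammetry(upscaled)      # reconstruct the 3D model
```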


2021 ◽ Vol 11 (20) ◽ pp. 9665
Author(s):  
Soo-Young Cho ◽  
Dae-Yeol Kim ◽  
Su-Yeong Oh ◽  
Chae-Bong Sohn

Recently, as non-face-to-face work has become more common, the development of streaming services has become a significant issue. As these services spread to increasingly diverse fields, systems become overloaded when users transmit high-quality video, causing various problems. In this paper, two deep learning models, SRGAN (Super-Resolution Generative Adversarial Network) and DAIN (Depth-Aware Video Frame Interpolation), were used to reduce the overload that occurs during real-time video transmission. Each image was divided into a field-of-view (FoV) region and a non-FoV region; SRGAN was applied to the former and DAIN to the latter. Through this process, image quality was improved and the system load was reduced.
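One way to picture the FoV/non-FoV split above is receiver-side compositing: the FoV crop is streamed at low resolution and upscaled, while the background is streamed at a reduced frame rate and interpolated. A minimal sketch under those assumptions (all names are hypothetical; the paper does not publish this API):

```python
def reconstruct_frame(fov_lowres, prev_bg, next_bg, fov_box, sr_model, vfi_model):
    # sr_model: SRGAN-style super-resolution; vfi_model: DAIN-style frame
    # interpolation. Both are placeholders for trained networks.
    background = vfi_model(prev_bg, next_bg)  # synthesize the missing non-FoV frame
    fov_highres = sr_model(fov_lowres)        # upscale the gaze region
    x0, y0, x1, y1 = fov_box                  # FoV placement in full-frame pixels
    frame = background.copy()                 # assumes NumPy-like image arrays
    frame[y0:y1, x0:x1] = fov_highres         # composite the SR crop over the background
    return frame
```

Under this reading, only the low-resolution FoV crop and a reduced-rate background stream cross the network, which is where the reduction in load would come from.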


2020 ◽ Vol 34 (07) ◽ pp. 11278-11286
Author(s):  
Soo Ye Kim ◽  
Jihyong Oh ◽  
Munchurl Kim

Super-resolution (SR) has been widely used to convert low-resolution legacy videos to high-resolution (HR) ones to suit the increasing resolution of displays (e.g., UHD TVs). However, viewers more readily notice motion artifacts (e.g., motion judder) when HR videos are rendered on larger displays. Broadcasting standards therefore support higher frame rates for UHD (Ultra High Definition) videos (4K@60 fps, 8K@120 fps), meaning that applying SR alone is insufficient to produce genuinely high-quality videos. Hence, to up-convert legacy videos for realistic applications, not only SR but also video frame interpolation (VFI) is required. In this paper, we first propose a joint VFI-SR framework for up-scaling the spatio-temporal resolution of videos from 2K 30 fps to 4K 60 fps. For this, we propose a novel training scheme with a multi-scale temporal loss that imposes temporal regularization on the input video sequence and can be applied to any general video-related task. The proposed structure is analyzed in depth with extensive experiments.
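The multi-scale temporal loss is the technically distinctive piece. An illustrative PyTorch reading of it (not the authors' exact formulation): penalize mismatched temporal differences between the predicted and ground-truth sequences at several temporal strides, so that motion is regularized at more than one time scale.

```python
import torch.nn.functional as F

def multiscale_temporal_loss(pred, target, strides=(1, 2)):
    # pred, target: (B, T, C, H, W) frame sequences.
    # Illustrative sketch only; the paper's exact loss may differ.
    loss = 0.0
    for s in strides:
        d_pred = pred[:, s:] - pred[:, :-s]        # stride-s frame differences
        d_target = target[:, s:] - target[:, :-s]
        loss = loss + F.l1_loss(d_pred, d_target)  # match motion at this scale
    return loss
```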


2021
Author(s):  
Martin Priessner ◽  
David C.A. Gaboriau ◽  
Arlo Sheridan ◽  
Tchern Lenn ◽  
Jonathan R. Chubb ◽  
...  

The development of high-resolution microscopes has made it possible to investigate cellular processes in 4D (3D over time). However, observing fast cellular dynamics remains challenging because of photobleaching and phototoxicity. These issues become increasingly problematic with the depth of the volume acquired and the speed of the biological events of interest. Here, we report the implementation of two content-aware frame interpolation (CAFI) deep learning networks, Zooming SlowMo (ZS) and Depth-Aware Video Frame Interpolation (DAIN), based on combinations of recurrent neural networks, which are highly suited to accurately predicting the images between image pairs, thereby improving the temporal resolution of image series as a post-acquisition analysis step. We show that CAFI predictions capture the motion context of biological structures and thus outperform standard interpolation methods. We benchmark CAFI's performance on six datasets obtained from three microscopy modalities (point-scanning confocal, spinning-disc confocal, and confocal brightfield microscopy) and demonstrate its capabilities for single-particle tracking applied to the study of lysosome trafficking. CAFI thus allows reduced light exposure and phototoxicity on the sample and extends the possibility of long-term live-cell imaging. Both DAIN and ZS, as well as the training and testing data, are made available to the wider community via the ZeroCostDL4Mic platform.
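The benchmark against standard interpolation can be read as a hold-out test: drop every second frame of a time series, regenerate it from its neighbours, and score against the held-out ground truth. A minimal NumPy sketch of that protocol (the exact evaluation details are assumed; predict_mid could be a trained ZS or DAIN network instead of the linear baseline below, which stands in for the "standard interpolation methods" mentioned above):

```python
import numpy as np

def linear_midpoint(a, b):
    # Simple averaging baseline standing in for standard interpolation.
    return ((a.astype(np.float32) + b.astype(np.float32)) / 2).astype(a.dtype)

def heldout_interpolation_mse(frames, predict_mid=linear_midpoint):
    # Hold out every second frame and predict it from its two neighbours.
    errors = []
    for a, gt, b in zip(frames[:-2:2], frames[1:-1:2], frames[2::2]):
        pred = predict_mid(a, b).astype(np.float32)
        errors.append(float(np.mean((pred - gt.astype(np.float32)) ** 2)))
    return float(np.mean(errors))  # mean squared error over held-out frames
```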


Acta Naturae ◽ 2017 ◽ Vol 9 (4) ◽ pp. 42-51
Author(s):  
S. S. Ryabichko ◽  
A. N. Ibragimov ◽  
L. A. Lebedeva ◽  
E. N. Kozlov ◽  
...  
