Mosaic of UAV Aerial Video by Integrating Optical Flow Computation and Fourier-Mellin Transformation

2014 ◽  
Vol 556-562 ◽  
pp. 4352-4356
Author(s):  
Jun Wu ◽  
Ming Cheng Luo ◽  
Jun Li

UAV video has rapidly emerged in recent years as a widely used source of imagery for many applications. This paper presents our research on mosaicking UAV video for the purpose of harbor surveillance. First, a new framework for estimating video frame transformations with optical flow is presented. In this framework, fewer Gaussian pyramid levels are created for the multiresolution approach, so more detail is preserved for the optical flow computation. Second, we discuss using the Fourier-Mellin transform in the image frequency domain to estimate the initial motion parameters between adjacent video frames; with these initial parameters, the small displacements required by optical flow computation can be achieved. The experimental results demonstrate that the mosaic image generated from aerial video shows satisfactory visual quality, and its surveillance application for fast response to time-critical events, e.g., floods, is described.
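The frequency-domain step the abstract describes can be illustrated with phase correlation, the core mechanism the Fourier-Mellin approach uses to recover translation between adjacent frames. The sketch below is a minimal assumed implementation (not the authors' code), using NumPy's FFT on a synthetic circularly shifted frame:

```python
import numpy as np

def phase_correlation(frame_a, frame_b):
    """Estimate the integer translation of frame_a relative to frame_b
    via phase correlation in the frequency domain."""
    # Cross-power spectrum of the two frames.
    fa = np.fft.fft2(frame_a)
    fb = np.fft.fft2(frame_b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12  # keep phase only
    # The peak of the inverse transform marks the displacement.
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap displacements beyond half the frame size to negative shifts.
    h, w = frame_a.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

# Synthetic check: shift a random frame by (3, 5) and recover the shift.
rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = np.roll(a, (3, 5), axis=(0, 1))
print(phase_correlation(b, a))  # → (3, 5)
```

In the full Fourier-Mellin scheme, rotation and scale are handled the same way after resampling the magnitude spectrum to log-polar coordinates, which converts them into translations; the recovered motion then seeds the optical flow computation with small residual displacements.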

2020 ◽  
Vol 34 (07) ◽  
pp. 10663-10671 ◽  
Author(s):  
Myungsub Choi ◽  
Heewon Kim ◽  
Bohyung Han ◽  
Ning Xu ◽  
Kyoung Mu Lee

Prevailing video frame interpolation techniques rely heavily on optical flow estimation, which adds model complexity and computational cost; they are also susceptible to error propagation in challenging scenarios with large motion and heavy occlusion. To alleviate these limitations, we propose a simple but effective deep neural network for video frame interpolation that is end-to-end trainable and free of a motion estimation component. Our algorithm employs a special feature reshaping operation, referred to as PixelShuffle, with channel attention, which replaces the optical flow computation module. The main idea behind the design is to distribute the information in a feature map across multiple channels and extract motion information by attending to the channels for pixel-level frame synthesis. The model built on this principle turns out to be effective in the presence of challenging motion and occlusion. We construct a comprehensive evaluation benchmark and demonstrate that the proposed approach achieves outstanding performance compared to existing models that include an optical flow computation component.
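The PixelShuffle reshaping the abstract refers to rearranges a feature map of shape (C·r², H, W) into (C, H·r, W·r), trading channel depth for spatial resolution. A minimal NumPy sketch of that rearrangement (an assumed illustration, not the paper's network):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) feature map into (C, H*r, W*r),
    spreading channel information into the spatial grid."""
    c2, h, w = x.shape
    c = c2 // (r * r)
    # Split the channel axis into (C, r, r) sub-blocks...
    x = x.reshape(c, r, r, h, w)
    # ...then interleave each r x r block into the upsampled grid.
    x = x.transpose(0, 3, 1, 4, 2)  # (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

# Example: 4 channels at 2x2 become 1 channel at 4x4.
x = np.arange(16).reshape(4, 2, 2)
y = pixel_shuffle(x, 2)
print(y.shape)  # → (1, 4, 4)
```

Because the operation is a pure, invertible reshaping, motion cues remain encoded in the channel layout, which is what allows a channel attention module to select them without an explicit flow estimator.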


2007 ◽  
Vol 2 (4) ◽  
pp. 259-270 ◽  
Author(s):  
Julio C. Sosa ◽  
Jose A. Boluda ◽  
Fernando Pardo ◽  
Rocío Gómez-Fabela
