Comparative Analysis of Video Stabilization using SIFT Flow and Optical Flow

Author(s):  
Muhammad Awais Rehman ◽  
Muhammad Ahmed Raza ◽  
Mughees Ahmed ◽  
Muhammad Adeel Ijaz ◽  
Mafaz Ahmad ◽  
...  
2017 ◽  
Vol 29 (3) ◽  
pp. 566-579 ◽  
Author(s):  
Sarthak Pathak ◽  
Alessandro Moro ◽  
Hiromitsu Fujii ◽  
Atsushi Yamashita ◽  
...  

[Fig.: Spherical video stabilization] We propose a method for stabilizing spherical videos by estimating and removing the effect of camera rotation using dense optical flow fields. By derotating each frame in the video to the orientation of its previous frame in two dense approaches, we estimate the complete 3-DoF rotation of the camera and remove it to stabilize the spherical video. Following this, any chosen area on the spherical video (the equivalent of a normal camera’s field of view) is unwarped to yield a ‘rotation-less virtual camera’ that can be oriented independently of the camera motion. This allows much better perception of the environment and of the camera motion. To achieve this, we use dense optical flow, which provides important information about camera motion in a static environment and has several advantages over sparse feature-point based approaches. The spatial regularization property of dense optical flow yields more stable motion information than tracking sparse points and negates the effect of feature-point outliers. We show superior results compared to using sparse feature points alone.
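The paper's pipeline works on dense flow fields over the sphere; as a minimal, hypothetical sketch of just the core idea (estimate the 3-DoF rotation between two frames and derotate), the following recovers a rotation from matched unit bearing vectors using the Kabsch/SVD least-squares solution, which is not necessarily the authors' estimator, and all names here are illustrative:

```python
import numpy as np

def estimate_rotation(prev_pts, curr_pts):
    """Least-squares rotation R with curr ≈ R @ prev (Kabsch/SVD solution).
    prev_pts, curr_pts: (N, 3) arrays of unit bearing vectors on the sphere."""
    H = prev_pts.T @ curr_pts                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])                # guard against reflections
    return Vt.T @ D @ U.T

# Synthetic check: rotate random bearings by a known 10-degree roll.
rng = np.random.default_rng(0)
prev_pts = rng.normal(size=(500, 3))
prev_pts /= np.linalg.norm(prev_pts, axis=1, keepdims=True)

a = np.deg2rad(10.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
curr_pts = prev_pts @ R_true.T                # curr_i = R_true @ prev_i

R_est = estimate_rotation(prev_pts, curr_pts)
derotated = curr_pts @ R_est                  # apply R_est^T to undo the rotation
```

In the dense setting of the paper, the bearing vectors would come from every pixel of an equirectangular frame displaced by the optical flow field, rather than from a sparse set of matches as in this toy example.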


2017 ◽  
Vol 16 (6) ◽  
pp. 1975-1985 ◽  
Author(s):  
Anli Lim ◽  
Bharath Ramesh ◽  
Yue Yang ◽  
Cheng Xiang ◽  
Zhi Gao ◽  
...  

2013 ◽  
pp. 43-58
Author(s):  
Marcelo Saval-Calvo ◽  
Jorge Azorín-López ◽  
Andrés Fuster-Guilló

In this chapter, a comparative analysis of basic segmentation methods for video sequences and their combinations is carried out. The algorithms are analyzed in terms of efficiency (true positive and false positive rates) and the temporal cost of providing regions in the scene. These are two of the most important design requirements for supplying the tracker with segmented regions in an efficient and timely manner, within the constraints of the application. Specifically, methods using temporal information, namely Background Subtraction, Temporal Differencing, Optical Flow, and the four combinations of them, have been analyzed. Experimentation has been done using image sequences from the CAVIAR project database. Efficiency results show that Background Subtraction achieves the best individual result, whereas the combination of the three basic methods gives the best result overall. However, combinations with Optical Flow should be considered depending on the application, because its temporal cost is too high with respect to the efficiency it contributes to the combination.
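The abstract names the basic methods without implementation details; as a toy sketch under assumed thresholds and a synthetic scene, the following shows how Background Subtraction and Temporal Differencing can be OR-combined and scored by the true/false positive rates the chapter uses:

```python
import numpy as np

def background_subtraction(frame, background, thresh=25):
    """Pixels that differ from the background model (threshold is assumed)."""
    return np.abs(frame.astype(int) - background.astype(int)) > thresh

def temporal_differencing(frame, prev_frame, thresh=25):
    """Pixels that changed since the previous frame."""
    return np.abs(frame.astype(int) - prev_frame.astype(int)) > thresh

def rates(mask, truth):
    """True-positive and false-positive rates against a ground-truth mask."""
    tp = np.logical_and(mask, truth).sum() / truth.sum()
    fp = np.logical_and(mask, ~truth).sum() / (~truth).sum()
    return tp, fp

# Synthetic scene: a textured static background and a bright 10x10 square
# that moves 5 px to the right each frame.
rng = np.random.default_rng(1)
background = rng.integers(0, 50, size=(64, 64)).astype(np.uint8)
frames, truths = [], []
for t in range(3):
    frame = background.copy()
    x = 15 + 5 * t
    frame[20:30, x:x + 10] = 200
    truth = np.zeros((64, 64), dtype=bool)
    truth[20:30, x:x + 10] = True
    frames.append(frame)
    truths.append(truth)

bs = background_subtraction(frames[2], background)
td = temporal_differencing(frames[2], frames[1])
combined = bs | td            # simplest OR-combination of the two cues

tp_bs, fp_bs = rates(bs, truths[2])
tp_td, fp_td = rates(td, truths[2])
tp_or, fp_or = rates(combined, truths[2])
```

Even this toy case reproduces the qualitative trade-off: temporal differencing misses the overlap between consecutive object positions (lower TP) and flags a "ghost" at the old position, while the OR-combination recovers full detection at the cost of a higher false positive rate.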


2019 ◽  
Vol 31 (1) ◽  
pp. 1-1
Author(s):  
Editorial Office

We are pleased to announce that the 11th Journal of Robotics and Mechatronics Best Paper Award (JRM Best Paper Award 2018) has been decided by the JRM editorial committee. The following paper won the JRM Best Paper Award 2018, rigorously selected from among all 97 papers published in Vol.29 (2017). The Best Paper Award ceremony was held in Gakushi-Kaikan, Tokyo, Japan, on December 17, 2018, attended by the authors and the JRM editorial committee members who took part in the selection process. The award winner is also announced on the JRM website and was given a certificate and an honorarium of nearly US$1,000. JRM Best Paper Award 2018 Title: Spherical Video Stabilization by Estimating Rotation from Dense Optical Flow Fields Authors: Sarthak Pathak, Alessandro Moro, Hiromitsu Fujii, Atsushi Yamashita, and Hajime Asama J. Robot. Mechatron., Vol.29 No.3, pp. 566-579, June 2017


2014 ◽  
Vol 519-520 ◽  
pp. 640-643 ◽  
Author(s):  
Jing Dong ◽  
Yang Xia

In this paper, a real-time video stabilization algorithm based on smoothing feature trajectories is proposed. For each input frame, our approach generates multiple feature trajectories by performing inter-frame template matching and optical flow. A Kalman filter is then applied to smooth these feature trajectories. Finally, at the image composition stage, the motion consistency of the feature trajectories is taken into account to achieve a visually plausible stabilized video. The proposed method offers real-time video stabilization and removes the delay of caching incoming frames. Experiments show that our approach can stabilize videos with various complicated scenes in real time.
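The abstract does not give the filter's model or parameters; a minimal sketch of Kalman-smoothing a single 1-D feature trajectory, assuming a constant-velocity model and made-up noise values, might look like:

```python
import numpy as np

def kalman_smooth(z, q=1e-3, r=1.0):
    """Causal constant-velocity Kalman filter over a 1-D trajectory z.
    q: assumed process-noise scale, r: assumed measurement-noise variance."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])               # we observe position only
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.array([z[0], 0.0])
    P = np.eye(2)
    out = []
    for zk in z:
        # Predict step
        x = F @ x
        P = F @ P @ F.T + Q
        # Update step
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([zk]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

# Noisy feature trajectory: a smooth linear path plus jitter (camera shake).
rng = np.random.default_rng(2)
t = np.arange(100)
clean = 0.5 * t
noisy = clean + rng.normal(0, 3.0, size=t.shape)
smoothed = kalman_smooth(noisy, q=1e-3, r=9.0)

# After a warm-up period, the filtered path should hug the clean trajectory.
err_noisy = np.mean((noisy[20:] - clean[20:]) ** 2)
err_smooth = np.mean((smoothed[20:] - clean[20:]) ** 2)
```

A causal filter like this is what makes the "no caching delay" property possible: each smoothed position depends only on past and current measurements, unlike batch trajectory-optimization stabilizers that must buffer future frames.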


2015 ◽  
Vol 15 (7) ◽  
pp. 23-34
Author(s):  
Atanas Nikolov ◽  
Dimo Dimov

Abstract The current research concerns the problem of video stabilization “in a point,” which aims to stabilize all video frames with respect to one chosen reference frame so as to produce a new video as if taken by a static camera. The importance of this task lies in providing a static background in the video sequence, which enables correct measurements in the frames when studying dynamic objects in the video. For this aim we propose an efficient combined approach, called “3×3OF9×9”. It fuses our previous development for fast and rigid 2D video stabilization [2] with the well-known Optical Flow approach, applied by parts via Otsu segmentation to eliminate the influence of moving objects in the video. The obtained results are compared with those produced by the commercial Warp Stabilizer of Adobe After Effects CS6.
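The abstract applies Optical Flow "by parts via Otsu segmentation" without detailing that step; as an illustration of the Otsu component alone (the synthetic frame and the use of the resulting mask are assumptions, not the authors' data), the following computes Otsu's threshold from the histogram and masks the brighter region:

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: choose the gray level maximizing between-class variance."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    mu_total = np.dot(np.arange(256), prob)
    best_t, best_var = 0, -1.0
    cum_p = cum_mu = 0.0
    for t in range(255):
        cum_p += prob[t]
        cum_mu += t * prob[t]
        if cum_p < 1e-12 or 1.0 - cum_p < 1e-12:
            continue                               # all mass on one side
        mu0 = cum_mu / cum_p                       # mean of the lower class
        mu1 = (mu_total - cum_mu) / (1.0 - cum_p)  # mean of the upper class
        var_between = cum_p * (1.0 - cum_p) * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic frame: dark static background plus a bright "moving object" patch.
rng = np.random.default_rng(3)
img = rng.normal(60, 10, size=(64, 64))
img[24:44, 24:44] = rng.normal(190, 10, size=(20, 20))
img = np.clip(img, 0, 255).astype(np.uint8)

thr = otsu_threshold(img)
moving_mask = img > thr   # pixels to exclude when estimating global motion
```

Segmenting out such regions before (or while) computing optical flow keeps the moving object's pixels from biasing the global stabilization transform, which is the role Otsu segmentation plays in the combined approach.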

