RELIABLE DETECTION OF CAMERA MOTION BASED ON WEIGHTED OPTICAL FLOW FITTING

2017
Vol 29 (3)
pp. 566-579
Author(s):  
Sarthak Pathak
Alessandro Moro
Hiromitsu Fujii
Atsushi Yamashita
...

[Figure: Spherical video stabilization]

We propose a method for stabilizing spherical videos by estimating and removing the effect of camera rotation using dense optical flow fields. Using two dense approaches to derotate each frame in the video to the orientation of its previous frame, we estimate the complete 3-DoF rotation of the camera and remove it to stabilize the spherical video. Following this, any chosen area of the spherical video (the equivalent of a normal camera’s field of view) is unwarped to yield a ‘rotation-less virtual camera’ that can be oriented independently of the camera motion. This greatly aids perception of both the environment and the camera motion. To achieve this, we use dense optical flow, which provides important information about camera motion in a static environment and has several advantages over sparse feature-point-based approaches. The spatial regularization property of dense optical flow yields more stable motion information than tracking sparse points and negates the effect of feature-point outliers. We show superior results compared to using sparse feature points alone.
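The rotation-from-dense-flow idea can be illustrated with a short sketch. The following is a minimal illustration, not the authors' implementation: it assumes equirectangular frames and a purely rotating camera, computes Farneback dense flow with OpenCV, lifts each pixel and its flow-displaced position to unit bearing vectors on the sphere, and recovers the 3-DoF rotation by solving Wahba's problem with an SVD. All names and parameters are illustrative.

```python
# Hedged sketch: 3-DoF rotation between two equirectangular frames from dense
# optical flow (not the paper's exact method; assumes a purely rotating camera).
import cv2
import numpy as np

def bearings(u, v, w, h):
    """Unit-sphere directions for equirectangular pixel coordinates (u, v)."""
    lon = (u / w) * 2.0 * np.pi - np.pi        # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (v / h) * np.pi        # latitude in [-pi/2, pi/2]
    return np.stack([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)], axis=-1)

def rotation_from_dense_flow(prev_gray, curr_gray):
    h, w = prev_gray.shape
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    u, v = np.meshgrid(np.arange(w, dtype=np.float64),
                       np.arange(h, dtype=np.float64))
    p = bearings(u, v, w, h).reshape(-1, 3)                # bearings in frame t
    q = bearings(u + flow[..., 0], v + flow[..., 1], w, h).reshape(-1, 3)  # t+1
    # Wahba's problem: R = argmin sum_i ||q_i - R p_i||^2, solved via SVD.
    U, _, Vt = np.linalg.svd(q.T @ p)
    d = np.sign(np.linalg.det(U @ Vt))
    return U @ np.diag([1.0, 1.0, d]) @ Vt    # derotate frame t+1 with R.T
```

Derotation itself would remap each pixel of the current frame through the inverse rotation; per-pixel weighting of the fit (as the title suggests) would turn this plain SVD solve into a weighted one.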


Author(s):  
A. Radgui ◽  
C. Demonceaux ◽  
E. Mouaddib ◽  
M. Rziza ◽  
D. Aboutajdine

Egomotion estimation is based principally on the estimation of optical flow in the image. Recent research has shown that omnidirectional systems with large fields of view can overcome the limitations of planar-projection imagery in motion analysis. For omnidirectional images, however, the 2D motion is often estimated with methods developed for perspective images. This paper instead uses a motion field computed by an adapted method that takes into account the distortions present in omnidirectional images. This 2D motion field is then used as input to the egomotion estimation process via a spherical representation of the motion equation. Experimental results and comparisons of error measures confirm that successful estimation of camera motion is obtained when an adapted method is used to estimate the optical flow.
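As a rough illustration of the spherical motion equation mentioned above (a hedged sketch, not the paper's algorithm): each flow vector f_i, tangent to the sphere at bearing p_i, satisfies f_i ≈ p_i × ω − (1/Z_i)(I − p_i p_iᵀ) t. Assuming a common depth Z for all points, a simplification the paper does not make, renders the system linear in ω and t/Z.

```python
# Hedged sketch: least-squares egomotion from a spherical motion field under a
# constant-depth assumption (an illustrative simplification).
import numpy as np

def skew(p):
    """Cross-product matrix: skew(p) @ x == p x x."""
    return np.array([[0.0, -p[2], p[1]],
                     [p[2], 0.0, -p[0]],
                     [-p[1], p[0], 0.0]])

def egomotion_from_spherical_flow(points, flows):
    """points: (N, 3) unit bearings; flows: (N, 3) tangent flow vectors."""
    A = np.zeros((3 * len(points), 6))
    b = flows.reshape(-1)
    for i, p in enumerate(points):
        A[3*i:3*i+3, :3] = skew(p)                        # rotational term p x w
        A[3*i:3*i+3, 3:] = -(np.eye(3) - np.outer(p, p))  # translational term
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    omega, t_over_Z = x[:3], x[3:]                        # rotation rate, t / Z
    return omega, t_over_Z
```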


2008
Vol 08 (04)
pp. 573-600
Author(s):  
Ninad Thakoor
Jean X. Gao

Automatic moving object extraction has been explored extensively in the image processing and computer vision communities. Generally, moving object extraction schemes rely on either optical flow or frame differences. Optical flow methods can deal with moving cameras, but they are inconsistent at object boundaries, so the object segmentation tends to be inaccurate. Frame difference approaches can detect object boundaries, but they cannot detect uniform-intensity interior regions, nor can they deal with moving cameras. We present a novel technique for the automatic extraction of a moving object captured by a moving camera that blends information from the optical flow, the frame differences, and a spatial segmentation. The optical flow is used to compensate for the camera motion and to generate a model of the background. Next, the differences between the compensated frames are compared with the background model to detect changes in the frame. Finally, the detected changes and the spatial segmentation are combined to identify the moving uniform-intensity regions. Experimental results of the proposed moving object extraction method are presented for a variety of videos.
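A compensated frame difference of the kind described above can be sketched as follows. This is an illustrative simplification, not the authors' pipeline: it models the camera motion with a single homography fitted to Lucas-Kanade flow correspondences via RANSAC (the paper's background model is richer), warps the previous frame, and thresholds the residual difference. Names and parameters are illustrative.

```python
# Hedged sketch: camera-motion-compensated frame differencing with OpenCV.
import cv2
import numpy as np

def compensated_difference(prev_gray, curr_gray, diff_thresh=25):
    # Track good features from the previous frame into the current one.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=8)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   pts_prev, None)
    ok = status.ravel() == 1
    # Fit a global homography (camera-motion model) with RANSAC; flow vectors
    # on the moving object are rejected as outliers.
    H, _ = cv2.findHomography(pts_prev[ok], pts_curr[ok], cv2.RANSAC, 3.0)
    h, w = curr_gray.shape
    warped_prev = cv2.warpPerspective(prev_gray, H, (w, h))
    # Compensated frame difference: residual changes indicate object motion.
    diff = cv2.absdiff(curr_gray, warped_prev)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    return mask
```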


Author(s):  
Liang Liu ◽  
Guangyao Zhai ◽  
Wenlong Ye ◽  
Yong Liu

Scene flow estimation in dynamic scenes remains a challenging task. Computing scene flow from a combination of 2D optical flow and depth has been shown to be considerably faster with acceptable performance. In this work, we present a unified framework for joint unsupervised learning of stereo depth and optical flow with explicit local rigidity to estimate scene flow. We estimate camera motion directly by a Perspective-n-Point method from the optical flow and depth predictions, with a RANSAC outlier rejection scheme. To disambiguate object motion from camera motion in the scene, we identify the rigid region using the reprojection error and the photometric similarity. Through joint learning with the local rigidity, both the depth and optical flow networks can be refined. This framework boosts all four tasks: depth, optical flow, camera motion estimation, and object motion segmentation. Through evaluation on the KITTI benchmark, we show that the proposed framework achieves state-of-the-art results among unsupervised methods. Our models and code are available at https://github.com/lliuz/unrigidflow.
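The PnP-with-RANSAC step the abstract describes can be sketched as follows; this is not the authors' code (see the linked repository for that), and the intrinsics K, the predicted depth, and the predicted flow are assumed inputs. Frame-t pixels are back-projected with the depth, their frame-t+1 correspondences come from the flow, and cv2.solvePnPRansac recovers the relative pose while its inlier set approximates the rigid (static) region.

```python
# Hedged sketch: camera motion via PnP + RANSAC from predicted depth and flow.
import cv2
import numpy as np

def camera_motion_pnp(depth, flow, K, stride=8):
    h, w = depth.shape
    vs, us = np.mgrid[0:h:stride, 0:w:stride]
    us, vs = us.ravel(), vs.ravel()
    z = depth[vs, us]
    valid = z > 0                                  # keep pixels with valid depth
    us, vs, z = us[valid], vs[valid], z[valid]
    u, v = us.astype(np.float64), vs.astype(np.float64)
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    # Back-project frame-t pixels to 3D camera coordinates.
    pts3d = np.stack([(u - cx) * z / fx, (v - cy) * z / fy, z], axis=-1)
    # 2D correspondences in frame t+1 come from the optical flow field.
    f = flow[vs, us]
    pts2d = np.stack([u + f[:, 0], v + f[:, 1]], axis=-1)
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d.astype(np.float32), pts2d.astype(np.float32),
        K.astype(np.float64), None, reprojectionError=2.0)
    R, _ = cv2.Rodrigues(rvec)     # rotation from frame t to frame t+1
    return R, tvec, inliers        # RANSAC inliers ~ rigid (static) pixels
```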
