A general approach for egomotion estimation with omnidirectional images

Author(s):  
R.F. Vassallo ◽  
J. Santos-Victor ◽  
H.J. Schneebeli
Author(s):  
A. Radgui ◽  
C. Demonceaux ◽  
E. Mouaddib ◽  
M. Rziza ◽  
D. Aboutajdine

Egomotion estimation is based principally on the estimation of the optical flow in the image. Recent research has shown that omnidirectional systems, with their large fields of view, overcome the limitations of planar-projection imagery for motion analysis. Nevertheless, the 2D motion in omnidirectional images is often estimated using methods developed for perspective images. This paper computes the motion field with an adapted method that takes into account the distortions present in the omnidirectional image. This 2D motion field is then used as input to the egomotion estimation process, which relies on a spherical representation of the motion equation. Experimental results and a comparison of error measures are given, confirming that accurate estimation of camera motion is obtained when the optical flow is estimated with the adapted method.
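As an illustration of the spherical formulation (a minimal sketch, not the authors' exact method): the motion field of a unit viewing ray p with scene depth r, under camera translation t and angular velocity ω, is u = -(I - pp^T)t/r - ω × p, which is linear in (t, ω). The sketch below solves this system by least squares, assuming known depths; in practice depth is unknown and must be estimated or eliminated, and all names here are illustrative.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix S such that S @ x == np.cross(v, x)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def estimate_egomotion(points, flows, depths):
    """Least-squares fit of the spherical motion-field model
        u = -(I - p p^T) t / r + skew(p) @ omega     (since -omega x p == skew(p) @ omega)
    points: (N, 3) unit viewing rays; flows: (N, 3) tangent flow vectors;
    depths: (N,) assumed-known scene depth along each ray (simplification)."""
    A = np.zeros((3 * len(points), 6))
    b = np.asarray(flows, dtype=float).ravel()
    for i, (p, r) in enumerate(zip(points, depths)):
        p = np.asarray(p, dtype=float)
        A[3 * i:3 * i + 3, :3] = -(np.eye(3) - np.outer(p, p)) / r  # translational part
        A[3 * i:3 * i + 3, 3:] = skew(p)                            # rotational part
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]  # translation t, angular velocity omega
```

With flow vectors from the adapted estimator as input, t and ω recover the camera egomotion; when depths are unknown, t is recoverable only up to the usual scale ambiguity.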



Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3327
Author(s):  
Vicente Román ◽  
Luis Payá ◽  
Adrián Peidró ◽  
Mónica Ballesta ◽  
Oscar Reinoso

Over the last few years, mobile robotics has developed considerably thanks to the wide variety of problems that can be solved with this technology. An autonomous mobile robot must be able to operate in a priori unknown environments, planning its trajectory and navigating to the required target points. To this end, it is crucial to solve the mapping and localization problems accurately and at an acceptable computational cost. Omnidirectional vision systems have emerged as a robust choice thanks to the large amount of information they can extract from the environment. The images must be processed to obtain relevant information that permits solving the mapping and localization problems robustly. The classical frameworks to address these problems are based on the extraction, description and tracking of local features or landmarks. More recently, however, a new family of methods has emerged as a robust alternative in mobile robotics: describing each image as a whole, which leads to conceptually simpler algorithms. While methods based on local features have been extensively studied and compared in the literature, those based on global appearance still merit a deeper study of their performance. In this work, a comparative evaluation of six global-appearance description techniques in localization tasks is carried out, both in terms of accuracy and computational cost. To this end, several sets of images captured in a real environment are used, including typical phenomena such as changes in lighting conditions, visual aliasing, partial occlusions and noise.
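As a sketch of the global-appearance pipeline such papers evaluate (using a simple downsampled-intensity descriptor as a hypothetical stand-in for the six techniques actually compared), localization reduces to nearest-neighbour search over whole-image descriptors:

```python
import numpy as np

def global_descriptor(image, grid=(8, 32)):
    """Describe a panoramic grayscale image as a whole: mean intensity
    over a coarse grid of cells, L2-normalized. image: 2D numpy array."""
    h, w = image.shape
    gh, gw = grid
    # Crop to a multiple of the grid, then average within each cell
    cells = image[:h - h % gh, :w - w % gw].reshape(gh, h // gh, gw, w // gw)
    d = cells.mean(axis=(1, 3)).ravel()
    return d / (np.linalg.norm(d) + 1e-12)

def localize(query_image, map_descriptors, map_poses):
    """Nearest-neighbour localization: return the pose of the map image
    whose global descriptor is closest to the query's, plus the distance."""
    q = global_descriptor(query_image)
    dists = np.linalg.norm(map_descriptors - q, axis=1)
    i = int(np.argmin(dists))
    return map_poses[i], float(dists[i])
```

The choice of descriptor trades discriminative power against computational cost, which is precisely the comparison the paper carries out.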


2021 ◽  
Vol 28 ◽  
pp. 334-338
Author(s):  
Hong-Xiang Chen ◽  
Kunhong Li ◽  
Zhiheng Fu ◽  
Mengyi Liu ◽  
Zonghao Chen ◽  
...  

2008 ◽  
Author(s):  
Vijayaraghavan Thirumalai ◽  
Ivana Tosic ◽  
Pascal Frossard

Author(s):  
N. S. Gopaul ◽  
J. G. Wang ◽  
B. Hu

Image-aided inertial navigation means that the errors of an inertial navigator are estimated via the Kalman filter using aiding measurements derived from images. The standard Kalman filter runs under the assumption that the process noise vector and the measurement noise vector are white, i.e. independent and normally distributed with zero mean. However, this does not hold in image-aided inertial navigation: the relative positions obtained from optical-flow egomotion estimation or visual odometry are pairwise correlated in time. It is well known that the solution of the standard Kalman filter becomes suboptimal if the measurements are colored, i.e. time-correlated. Usually, a shaping filter is used to model time-correlated errors, but the commonly used shaping filters assume that the measurement noise vector at epoch k is correlated not only with the one from epoch k-1 but also with the ones before epoch k-1. The shaping filter presented in this paper uses Cholesky factors under the assumption that the measurement noise vector is pairwise time-correlated, i.e. the measurement noise is correlated only with that of the previous epoch. Simulation results show that the new algorithm performs better than the existing algorithms and is optimal.
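For illustration only: the sketch below shows standard measurement differencing, a textbook alternative to the paper's Cholesky-factor shaping filter, for noise of the form v_k = Ψ v_{k-1} + w_k. The differenced measurement z_k - Ψ z_{k-1} has white noise, at the cost of a process-measurement cross-covariance term that this simplified sketch omits.

```python
import numpy as np

def kf_step_differenced(x, P, z_k, z_prev, Phi, Q, H, R_w, Psi):
    """One Kalman step with pairwise time-correlated measurement noise
    v_k = Psi @ v_prev + w_k (w_k white, covariance R_w), handled by
    measurement differencing: d = z_k - Psi @ z_prev = H_d @ x_{k-1} + noise,
    where H_d = H Phi - Psi H and the noise covariance is H Q H^T + R_w.
    The exact derivation also carries a cross-covariance Q H^T between
    process and measurement noise, omitted here for brevity."""
    d = z_k - Psi @ z_prev
    H_d = H @ Phi - Psi @ H
    R_d = H @ Q @ H.T + R_w
    # Standard update with the decorrelated pseudo-measurement
    S = H_d @ P @ H_d.T + R_d
    K = P @ H_d.T @ np.linalg.inv(S)
    x = x + K @ (d - H_d @ x)             # update of the epoch-(k-1) state
    P = (np.eye(len(x)) - K @ H_d) @ P
    # Time propagation to epoch k
    x = Phi @ x
    P = Phi @ P @ Phi.T + Q
    return x, P
```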

