Uncertainty Estimation of Dense Optical Flow for Robust Visual Navigation

Sensors, 2021, Vol. 21 (22), pp. 7603
Author(s): Yonhon Ng, Hongdong Li, Jonghyuk Kim

This paper presents a novel dense optical-flow algorithm to solve the monocular simultaneous localisation and mapping (SLAM) problem for ground or aerial robots. Dense optical flow can effectively provide the ego-motion of the vehicle while enabling collision avoidance with potential obstacles. Existing research has not fully utilised the uncertainty of the optical flow; at most, an isotropic Gaussian density model has been used. We estimate the full uncertainty of the optical flow and propose a new eight-point algorithm based on the statistical Mahalanobis distance. Combined with pose-graph optimisation, the proposed method demonstrates enhanced robustness and accuracy on the public autonomous-car dataset (KITTI) and an aerial monocular dataset.
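As a rough illustration of how per-pixel flow uncertainty could enter an eight-point solve, the sketch below weights each epipolar constraint by the first-order Mahalanobis variance of its algebraic residual. It is a minimal sketch under the assumption of known 2x2 flow covariances, not the authors' implementation, and it omits coordinate normalisation.

```python
# Minimal sketch (not the authors' code): a covariance-weighted eight-point
# estimate of the fundamental matrix, where each correspondence produced by
# dense optical flow carries a 2x2 covariance. Weights approximate the inverse
# Mahalanobis variance of the algebraic epipolar residual.
import numpy as np

def weighted_eight_point(x1, x2, cov2, iters=3):
    """x1, x2: (N, 2) matched points (x2 = x1 + flow); cov2: (N, 2, 2) flow covariances."""
    n = x1.shape[0]
    h1 = np.hstack([x1, np.ones((n, 1))])          # homogeneous coordinates
    h2 = np.hstack([x2, np.ones((n, 1))])
    w = np.ones(n)                                  # start with uniform weights
    F = None
    for _ in range(iters):
        # Each row encodes h2^T F h1 = 0, scaled by its weight.
        A = w[:, None] * np.einsum('ni,nj->nij', h2, h1).reshape(n, 9)
        _, _, Vt = np.linalg.svd(A)
        F = Vt[-1].reshape(3, 3)
        # Enforce the rank-2 constraint on F.
        U, S, Vt2 = np.linalg.svd(F)
        F = U @ np.diag([S[0], S[1], 0.0]) @ Vt2
        # Re-weight: variance of the residual w.r.t. the uncertain endpoint x2,
        # propagated through the flow covariance (first-order approximation).
        g = (F @ h1.T).T[:, :2]                     # d(residual)/d(x2)
        var = np.einsum('ni,nij,nj->n', g, cov2, g) + 1e-12
        w = 1.0 / np.sqrt(var)
    return F
```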

Author(s): Umut Ulutas, Mustafa Tosun, Vecdi Emre Levent, Duygu Büyükaydın, Toygar Akgün, ...

2005, Vol. 44 (S 01), pp. S46-S50
Author(s): M. Dawood, N. Lang, F. Büther, M. Schäfers, O. Schober, ...

Summary: Motion in PET/CT leads to artifacts in the reconstructed PET images because positron emission tomography and computed tomography are acquired at different times. This study evaluates the effect of motion on cardiac PET/CT images and outlines a novel approach for motion correction based on optical flow methods. The Lucas-Kanade optical flow algorithm is used to calculate the motion vector field on both simulated phantom data and measured human PET data. The motion of the myocardium is corrected by non-linear registration techniques, and the results are compared with uncorrected images.
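For context, a single-scale, window-based Lucas-Kanade estimator can be written in a few lines of NumPy/SciPy. The sketch below computes a dense motion vector field between two frames and resamples one toward the other; it is an illustrative simplification of the pipeline described above (the study additionally applies non-linear registration), and the window size and warping choices are assumptions.

```python
# Minimal sketch (assumption: single-scale, window-based Lucas-Kanade; the
# study's actual pipeline adds non-linear registration on top). Computes a
# dense motion vector field between two frames and warps one toward the
# other for motion correction.
import numpy as np
from scipy.ndimage import uniform_filter, map_coordinates

def lucas_kanade_dense(ref, mov, win=7, eps=1e-6):
    """Return per-pixel flow (u, v) such that mov sampled at (x+u, y+v) matches ref."""
    Iy, Ix = np.gradient(mov.astype(float))         # spatial gradients
    It = mov.astype(float) - ref.astype(float)      # temporal difference
    # Windowed sums of the structure tensor and image/time products.
    Sxx = uniform_filter(Ix * Ix, win)
    Sxy = uniform_filter(Ix * Iy, win)
    Syy = uniform_filter(Iy * Iy, win)
    Sxt = uniform_filter(Ix * It, win)
    Syt = uniform_filter(Iy * It, win)
    det = Sxx * Syy - Sxy**2 + eps
    u = (-Syy * Sxt + Sxy * Syt) / det              # closed-form 2x2 solve per pixel
    v = ( Sxy * Sxt - Sxx * Syt) / det
    return u, v

def warp(mov, u, v):
    """Resample 'mov' along the estimated flow to obtain a motion-corrected frame."""
    h, w = mov.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    return map_coordinates(mov, [yy + v, xx + u], order=1, mode='nearest')
```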


Author(s): A. V. Bratulin, M. B. Nikiforov, A. I. Efimov, ...

Sensors, 2021, Vol. 21 (7), pp. 2407
Author(s): Hojun You, Dongsu Kim

Fluvial remote sensing has been used to monitor diverse riverine properties, such as river bathymetry, suspended sediment, algal blooms, and bed materials, more efficiently than laborious and expensive in-situ measurements. Red–green–blue (RGB) optical sensors have been widely used in traditional fluvial remote sensing. However, because they offer only three broad bands, they rely on visual inspection for qualitative assessment and are limited in quantitative, accurate monitoring. Recent advances in hyperspectral imaging in the fluvial domain provide images with more than 150 spectral bands, so various riverine properties can be quantitatively characterized using sensors mounted on low-altitude unmanned aerial vehicles (UAVs) with high spatial resolution. Many efforts are ongoing to take full advantage of hyperspectral band information in fluvial research. Although geo-referenced hyperspectral images can be acquired from satellites and manned airplanes, few attempts have been made with UAVs, mainly because synthesizing line-scanned images on top of image registration is more difficult for UAV data, owing to the heavy, highly motion-sensitive imagery produced by the dense spatial resolution. Therefore, in this study, we propose a practical technique for achieving high spatial accuracy in UAV-based fluvial hyperspectral imaging through efficient image registration using an optical flow algorithm. Template matching is the most common image registration technique in RGB-based remote sensing; however, it requires many calculations and can be error-prone, because the user must decide on various parameters, and its spatial accuracy has not been widely verified for hyperspectral imagery. The proposed technique reduced spatial errors by an average of 91.9% compared with no image registration and by 78.7% compared with template matching.
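To illustrate the registration step in general terms, the sketch below warps one band (or line-scanned strip) onto a reference image using dense optical flow. OpenCV's Farneback flow is used only as a stand-in, since the abstract does not restate the specific flow algorithm, and all parameter values here are assumptions.

```python
# Minimal sketch (assumes OpenCV; Farneback dense flow stands in for the
# paper's optical flow step). Registers one hyperspectral band/strip to a
# reference image and warps it onto the reference grid.
import numpy as np
import cv2

def to_uint8(img):
    """Normalize a single-band float image to 8-bit for the flow estimator."""
    img = img.astype(np.float32)
    img = (img - img.min()) / (img.max() - img.min() + 1e-9)
    return (255 * img).astype(np.uint8)

def register_band(reference, band):
    """Warp 'band' onto 'reference' using dense optical flow."""
    ref8, band8 = to_uint8(reference), to_uint8(band)
    # Args: prev, next, flow, pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
    flow = cv2.calcOpticalFlowFarneback(ref8, band8, None, 0.5, 4, 31, 5, 7, 1.5, 0)
    h, w = reference.shape
    xx, yy = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (xx + flow[..., 0]).astype(np.float32)
    map_y = (yy + flow[..., 1]).astype(np.float32)
    # Sample the band at the flow-displaced coordinates -> registered band.
    return cv2.remap(band.astype(np.float32), map_x, map_y, cv2.INTER_LINEAR)
```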


2008, Vol. 05 (03), pp. 223-233
Author(s): Rong Liu, Max Q. H. Meng

Time-to-contact (TTC) provides vital information for obstacle avoidance and for the visual navigation of a robot. In this paper, we present a novel method to estimate the TTC information of a moving object for monocular mobile robots. In specific, the contour of the moving object is extracted first using an active contour model; then the height of the motion contour and its temporal derivative are evaluated to generate the desired TTC estimates. Compared with conventional techniques employing the first-order derivatives of optical flow, the proposed estimator is less prone to errors of optical flow. Experiments using real-world images are conducted and the results demonstrate that the developed method can successfully achieve TTC with an average relative error (ARVE) of 0.039 with a single calibrated camera.
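The key relation behind contour-based TTC estimation is that, for an object approaching at constant relative velocity, TTC is approximately h / (dh/dt), where h is the object's image height. The hypothetical helper below applies this relation to a sequence of contour heights; it is a minimal sketch, not the paper's implementation.

```python
# Minimal sketch (hypothetical helper, not the paper's code): TTC from the
# height of a tracked motion contour, using TTC ~= h / (dh/dt) under the
# assumption of constant relative velocity.
def time_to_contact(heights, dt):
    """heights: contour heights (pixels) in consecutive frames; dt: frame interval (s)."""
    ttc = []
    for h_prev, h_curr in zip(heights, heights[1:]):
        dh_dt = (h_curr - h_prev) / dt          # temporal derivative of contour height
        ttc.append(h_curr / dh_dt if dh_dt > 0 else float('inf'))
    return ttc

# Example: a contour growing from 40 to 50 px over frames 0.1 s apart.
print(time_to_contact([40, 42, 44, 47, 50], dt=0.1))
```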


2021
Author(s): Tian Shen, Cui Long, Liu Zhaoming, Wang Hongwei, Zhang Feng, ...
