Motion Estimation of Moving Objects

Author(s):  
Jian Chen ◽  
Bingxi Jia ◽  
Kaixiang Zhang

SMPTE Journal ◽  
1998 ◽  
Vol 107 (6) ◽  
pp. 340-347 ◽  
Author(s):  
H. Sonehara ◽  
Y. Nojiri ◽  
K. Iguchi ◽  
Y. Sugiura ◽  
H. Hirabayashi

Author(s):  
Zhiyong Shi ◽  
Fengchun Tian ◽  
Minjun Deng ◽  
Depeng Liu

To handle the motion uncertainty of objects displayed on a liquid crystal display (LCD), a motion estimation (ME) algorithm based on global entirety unidirectional motion and local fast bidirectional motion (GEU-LFB) is proposed for motion-compensated temporal frame interpolation (MCTFI). First, a set of global motion vectors (MVs) is obtained by observing the moving objects in the scene. Second, a fast local search based on bidirectional ME is performed. To compensate for the shortcomings of bidirectional ME, a method for generating exposure and occlusion masks is proposed, exploiting the absolute-difference matching criterion together with the set of global MVs. Next, the resulting MV field is smoothed with a vector median filter. Finally, the interpolated frame obtained through weighted filtering compensation is further refined using the masks. Experimental results show that the proposed algorithm outperforms existing methods on both objective and subjective criteria; it handles exposure and occlusion and yields better frame interpolation for video sequences with fast-moving targets.
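As an illustration of two of the steps above, the Python/NumPy sketch below implements bidirectional block matching with a sum-of-absolute-differences (SAD) criterion and a 3x3 vector median filter. It is a minimal sketch under assumed parameters (8x8 blocks, even-valued vectors, grayscale frames); the function names and the exhaustive search are illustrative stand-ins, not the paper's fast local search seeded by global MVs.

import numpy as np

def sad(a, b):
    # Sum of absolute differences between two equally sized blocks;
    # int32 avoids uint8 overflow.
    return np.abs(a.astype(np.int32) - b.astype(np.int32)).sum()

def bidirectional_me(prev, nxt, block=8, radius=4):
    # For each block of the (virtual) intermediate frame, find the motion
    # vector (dy, dx) minimizing the SAD between the block shifted by -v/2
    # in the previous frame and +v/2 in the next frame. Vectors are kept
    # even-valued so the half-shifts stay integral.
    h, w = prev.shape
    mvs = np.zeros((h // block, w // block, 2), dtype=np.int32)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            best, best_v = None, (0, 0)
            for dy in range(-radius, radius + 1, 2):
                for dx in range(-radius, radius + 1, 2):
                    y0, x0 = y - dy // 2, x - dx // 2  # position in prev
                    y1, x1 = y + dy // 2, x + dx // 2  # position in next
                    if not (0 <= y0 and y0 + block <= h and 0 <= x0 and x0 + block <= w
                            and 0 <= y1 and y1 + block <= h and 0 <= x1 and x1 + block <= w):
                        continue
                    cost = sad(prev[y0:y0 + block, x0:x0 + block],
                               nxt[y1:y1 + block, x1:x1 + block])
                    if best is None or cost < best:
                        best, best_v = cost, (dy, dx)
            mvs[by, bx] = best_v
    return mvs

def vector_median(mvs):
    # Smooth the MV field: each interior vector is replaced by the vector in
    # its 3x3 neighborhood minimizing the total L1 distance to all neighbors.
    out = mvs.copy()
    H, W, _ = mvs.shape
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            nbh = mvs[i - 1:i + 2, j - 1:j + 2].reshape(-1, 2)
            costs = np.abs(nbh[:, None, :] - nbh[None, :, :]).sum(axis=(1, 2))
            out[i, j] = nbh[np.argmin(costs)]
    return out

The median step discards outlier vectors without averaging them away, which is why it is preferred over a linear filter for MV fields.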


2018 ◽  
Vol 10 (2) ◽  
pp. 157-170 ◽  
Author(s):  
Michael Chojnacki ◽  
Vadim Indelman

This paper presents a vision-based, computationally efficient method for simultaneous robot motion estimation and dynamic target tracking in GPS-denied, unknown or uncertain environments. While numerous vision-based approaches achieve ego-motion estimation along with detection and tracking of moving objects, many require a bundle adjustment optimization, which entails estimating the 3D points observed in the process. A main concern in robotics applications is the computational effort required to sustain extended operation. For applications whose primary interest is highly accurate online navigation rather than mapping, the number of variables can be considerably reduced by avoiding explicit 3D structure reconstruction, thereby saving processing time. We take advantage of the light bundle adjustment method, which computes ego-motion without online reconstruction of 3D points and thus significantly reduces computation time compared to bundle adjustment. The proposed method integrates the target tracking problem into the light bundle adjustment framework, yielding a simultaneous ego-motion estimation and tracking process in which the target is the only 3D point explicitly reconstructed online. The approach is compared to bundle adjustment with target tracking in terms of accuracy and computational complexity, using simulated aerial scenarios and real-imagery experiments.
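For intuition, the Python sketch below recovers relative ego-motion from 2D feature correspondences alone, with no 3D landmarks kept in the state, using OpenCV's essential-matrix routines. This is only a stand-in for the underlying idea of structure-less motion estimation; it is not the authors' light bundle adjustment formulation, and the feature and matching choices (ORB, brute-force Hamming matching, RANSAC parameters) are assumptions.

import numpy as np
import cv2

def ego_motion(img0, img1, K):
    # Estimate relative rotation R and unit-scale translation t between two
    # grayscale frames from ORB feature matches; K is the 3x3 intrinsics.
    orb = cv2.ORB_create(2000)
    k0, d0 = orb.detectAndCompute(img0, None)
    k1, d1 = orb.detectAndCompute(img1, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d0, d1)
    p0 = np.float32([k0[m.queryIdx].pt for m in matches])
    p1 = np.float32([k1[m.trainIdx].pt for m in matches])
    # RANSAC on the epipolar constraint: inlier selection and pose recovery
    # work directly on image correspondences, with no map of 3D points.
    E, inliers = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=inliers)
    return R, t

In the paper's setting, the tracked target would then be the single 3D point reconstructed online, e.g., triangulated from its image observations once the camera poses are available.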

