INDEPENDENT COMPONENTS OF OPTICAL FLOW IN A MULTIRESOLUTION IMAGE SEQUENCE

Author(s):  
NAOYA OHNISHI ◽  
ATSUSHI IMIYA

In this paper, we present an algorithm for the hierarchical recognition of an environment using independent components of optical flow fields for the visual navigation of a mobile robot. The pyramid transform of an image sequence is used to compute optical flow and to analyze both global and local motion. Our algorithm detects the planar region and obstacles in the image from the optical flow fields at each layer of the pyramid, and therefore achieves both global and local perception for robot vision. We show experimental results for both test image sequences and real image sequences captured by a mobile robot. Furthermore, we discuss some aspects of this work from the viewpoint of information theory.
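As a sketch of the multiresolution idea in this abstract, the following minimal example shows how motion measured at a coarse pyramid layer relates to motion at the fine layer. The details here are assumptions for illustration (a block-averaging pyramid and a single global Lucas-Kanade estimate per layer), not the authors' actual algorithm:

```python
import numpy as np

def downsample(img):
    """One pyramid step: halve resolution by 2x2 block averaging."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[1::2, 0::2]
            + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def pyramid(img, levels):
    """Return layers [finest, ..., coarsest]."""
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        pyr.append(downsample(pyr[-1]))
    return pyr

def lucas_kanade_global(img0, img1):
    """One global (u, v) displacement via least squares on the
    brightness-constancy equation Ix*u + Iy*v + It = 0."""
    Ix = np.gradient(img0, axis=1)
    Iy = np.gradient(img0, axis=0)
    It = img1 - img0
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Demo: a smooth blob shifted right by one pixel.
yy, xx = np.mgrid[0:64, 0:64]
img0 = np.exp(-((xx - 32.0) ** 2 + (yy - 32.0) ** 2) / 72.0)
img1 = np.roll(img0, 1, axis=1)
pyr0, pyr1 = pyramid(img0, 2), pyramid(img1, 2)

u_fine, v_fine = lucas_kanade_global(pyr0[0], pyr1[0])      # ~1 pixel
u_coarse, v_coarse = lucas_kanade_global(pyr0[1], pyr1[1])  # ~0.5 pixel
```

The same physical motion shrinks by half at each coarser layer, which is why coarse layers capture large, global motion while fine layers resolve small, local motion.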

Author(s):  
HANS-HELLMUT NAGEL

Many investigations of image sequences can be understood on the basis of a few concepts for which computational approaches become increasingly available. The estimation of optical flow fields is discussed, exhibiting a common foundation for feature-based and differential approaches. The interpretation of optical flow fields is mostly concerned so far with approaches which infer the 3-D structure of a rigid point configuration in 3-D space and its relative motion with respect to the image sensor from an image sequence. The combination of stereo and motion provides additional incentives to evaluate image sequences, especially for the control of robots and autonomous vehicles. Advances in all these areas lead to the desire to describe the spatio-temporal development recorded by an image sequence not only at the level of geometry, but also at higher conceptual levels, for example by natural language descriptions.
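The interpretation step this survey describes — inferring rigid 3-D structure and relative motion from a flow field — rests on the classical instantaneous motion-field equations. A minimal numeric sketch follows (Longuet-Higgins/Prazdny form; sign conventions vary between texts, so treat the signs here as one common convention, not the survey's definitive notation):

```python
def motion_field(x, y, Z, T, Omega, f=1.0):
    """Image motion (u, v) of a scene point at depth Z, for a camera
    translating with T = (Tx, Ty, Tz) and rotating with
    Omega = (Ox, Oy, Oz); f is the focal length.
    Translational terms scale with 1/Z; rotational terms do not."""
    Tx, Ty, Tz = T
    Ox, Oy, Oz = Omega
    u = (x * Tz - f * Tx) / Z + (x * y * Ox / f - (f + x * x / f) * Oy + y * Oz)
    v = (y * Tz - f * Ty) / Z + ((f + y * y / f) * Ox - x * y * Oy - x * Oz)
    return u, v

# Pure forward translation: radial expansion, scaled by inverse depth.
u_t, v_t = motion_field(0.5, 0.2, 2.0, (0, 0, 1), (0, 0, 0))

# Pure rotation about the optical axis: flow is independent of depth,
# which is why rotation alone reveals no 3-D structure.
u_r1, v_r1 = motion_field(0.5, 0.2, 2.0, (0, 0, 0), (0, 0, 1))
u_r2, v_r2 = motion_field(0.5, 0.2, 100.0, (0, 0, 0), (0, 0, 1))
```

The depth dependence of the translational part (and its absence in the rotational part) is exactly what makes structure-from-motion possible, and ambiguous under pure rotation.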


2014 ◽  
Vol 538 ◽  
pp. 375-378 ◽  
Author(s):  
Xi Yuan Chen ◽  
Jing Peng Gao ◽  
Yuan Xu ◽  
Qing Hua Li

This paper proposes a new algorithm for optical flow-based monocular vision (MV)/inertial navigation system (INS) integrated navigation. In this method, a downward-looking camera captures the image sequence, which is used to estimate the velocity of the mobile robot with an optical flow algorithm, while the INS supplies the yaw variation. In order to evaluate the performance of the proposed method, a real indoor test was conducted. The results show that the proposed method performs well for velocity estimation. It can be applied to the autonomous navigation of mobile robots when the Global Positioning System (GPS) and code wheels are unavailable.
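The geometry behind a downward-looking flow sensor is simple: for a flat floor, metric velocity is image flow scaled by camera height over focal length, and INS yaw rotates that body-frame velocity into the navigation frame. A hedged sketch (flat floor, pure translation, and all parameter values hypothetical; not the paper's exact pipeline):

```python
import math

def velocity_from_flow(du_px, dv_px, height_m, focal_px, dt_s, yaw_rad):
    """Convert mean image flow (pixels per frame) from a downward camera
    into navigation-frame velocity. Assumes a flat floor and pure
    translation; yaw comes from the INS."""
    # Pinhole scaling: 1 pixel of flow corresponds to height/focal meters.
    vx_body = du_px * height_m / (focal_px * dt_s)
    vy_body = dv_px * height_m / (focal_px * dt_s)
    # Rotate body-frame velocity into the navigation frame by INS yaw.
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    vn = c * vx_body - s * vy_body
    ve = s * vx_body + c * vy_body
    return vn, ve

# Example: 10 px/frame of flow, 0.5 m camera height, 500 px focal
# length, 10 Hz frame rate, zero yaw.
vn, ve = velocity_from_flow(10.0, 0.0, 0.5, 500.0, 0.1, 0.0)
```

In practice the INS also helps compensate the rotational component of the flow before this scaling is applied, which is one reason the integration outperforms either sensor alone.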


Author(s):  
Thomas Sangild Sorensen ◽  
Karsten Ostergaard Noe ◽  
Christian P.V. Christoffersen ◽  
Martin Kristiansen ◽  
Kim Mouridsen ◽  
...  

2020 ◽  
Vol 128 (4) ◽  
pp. 873-890 ◽  
Author(s):  
Anurag Ranjan ◽  
David T. Hoffmann ◽  
Dimitrios Tzionas ◽  
Siyu Tang ◽  
Javier Romero ◽  
...  

Abstract: The optical flow of humans is well known to be useful for the analysis of human action. Recent optical flow methods focus on training deep networks to approach the problem. However, the training data used by them does not cover the domain of human motion. Therefore, we develop a dataset of multi-human optical flow and train optical flow networks on this dataset. We use a 3D model of the human body and motion capture data to synthesize realistic flow fields in both single- and multi-person images. We then train optical flow networks to estimate human flow fields from pairs of images. We demonstrate that our trained networks are more accurate than a wide range of top methods on held-out test data and that they can generalize well to real image sequences. The code, trained models and the dataset are available for research.
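Accuracy comparisons like the one in this abstract are conventionally reported as average end-point error (AEPE), the mean Euclidean distance between estimated and ground-truth flow vectors. A minimal implementation of that standard metric (the paper's exact evaluation protocol is not reproduced here):

```python
import numpy as np

def endpoint_error(flow_est, flow_gt):
    """Average end-point error (AEPE) between two flow fields of shape
    (H, W, 2): mean Euclidean distance between corresponding vectors."""
    diff = flow_est - flow_gt
    return float(np.sqrt((diff ** 2).sum(axis=-1)).mean())

# Example: every estimated vector is off by (3, 4) pixels -> AEPE of 5.
flow_gt = np.zeros((2, 2, 2))
flow_est = np.tile(np.array([3.0, 4.0]), (2, 2, 1))
epe = endpoint_error(flow_est, flow_gt)
```

Because AEPE averages over all pixels, datasets dominated by background motion can mask errors on human bodies, which motivates training and evaluating on human-specific flow data as the abstract describes.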


2011 ◽  
Vol 55-57 ◽  
pp. 1699-1704 ◽
Author(s):  
Jie Zhao ◽  
Zhen Feng Han ◽  
Gang Feng Liu ◽  
Yong Min Yang

To move efficiently in an unknown environment, a mobile robot must use observations from various sensors to detect obstacles. This paper describes a new approach to obstacle detection for a serpentine robot. It captures the image sequence and analyzes the optical flow magnitudes to estimate the depth of the scene, avoiding the first- or higher-order differentiation required in traditional optical flow calculation. Data from an ultrasonic sensor and an attitude transducer are fused into the algorithm to improve real-time capability and robustness. The detection results are presented as fuzzy diagrams, which are concise and convenient. Indoor and outdoor experimental results demonstrate that this method can provide useful and comprehensive environment perception for the robot.
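The idea of mapping flow magnitude to a fuzzy obstacle indication can be sketched as follows: under a translating camera, larger flow magnitude means a closer surface, so a membership function can ramp from "no obstacle" to "obstacle" between two magnitude thresholds. The thresholds and ramp shape below are hypothetical tuning choices, not the paper's actual fuzzy rules:

```python
def obstacle_membership(flow_mag, near_thresh, far_thresh):
    """Fuzzy 'obstacle nearby' membership in [0, 1] from optical flow
    magnitude. Under pure translation, larger magnitude implies a closer
    surface; membership ramps linearly from 0 at far_thresh up to 1 at
    near_thresh (both thresholds are hypothetical tuning parameters)."""
    if flow_mag >= near_thresh:
        return 1.0
    if flow_mag <= far_thresh:
        return 0.0
    return (flow_mag - far_thresh) / (near_thresh - far_thresh)

# Example with near_thresh = 4 px/frame, far_thresh = 1 px/frame:
m_near = obstacle_membership(5.0, 4.0, 1.0)  # saturated: obstacle
m_far = obstacle_membership(0.5, 4.0, 1.0)   # below ramp: clear
m_mid = obstacle_membership(2.5, 4.0, 1.0)   # partial membership
```

Graded memberships like these combine naturally with readings from the ultrasonic and attitude sensors in a fuzzy fusion stage, rather than forcing a hard obstacle/no-obstacle decision from any single sensor.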

