Obstacle detection by evaluation of optical flow fields from image sequences

Author(s):
Wilfried Enkelmann

Author(s):
Kazuhiko Kawamoto,
Naoya Ohnishi,
Atsushi Imiya,
Reinhard Klette, et al.

A matching algorithm that detects obstacles in video sequences by evaluating the difference between model and computed optical flows is presented. A stabilization method based on median filtering is also presented to overcome instability in the computation of optical flow. Since optical flow is a scene-independent measurement, the proposed algorithm can be applied to various situations, whereas most existing color- and texture-based algorithms depend on specific scenes, such as roadway and indoor scenes. Experiments are conducted with three real image sequences, in which a static box or a moving toy car appears, to evaluate detection accuracy under varying thresholds using receiver operating characteristic (ROC) curves. For the three image sequences, the best-case ROC operating points give false positive and true positive fractions of 19.0% and 79.6%, 11.4% and 84.5%, and 19.0% and 85.4%, respectively. The processing time per frame is 19.38 ms on a 2.0 GHz Pentium 4, which is faster than the video frame rate.
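The core of the method above — thresholding the discrepancy between a model flow and the computed flow, with a median filter for stabilization — can be sketched as follows. This is a minimal illustration assuming dense (H, W, 2) flow arrays; the function name, kernel size, and threshold are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.ndimage import median_filter

def detect_obstacles(model_flow, computed_flow, threshold, kernel=5):
    """Flag pixels where the computed optical flow deviates from the
    model flow (e.g. the expected ground-plane flow). A median filter
    suppresses spurious responses from unstable flow estimates.
    Flows are (H, W, 2) arrays of per-pixel (u, v) vectors."""
    # per-pixel magnitude of the flow difference
    diff = np.linalg.norm(computed_flow - model_flow, axis=-1)
    # median filtering stabilizes noisy flow estimates
    smoothed = median_filter(diff, size=kernel)
    return smoothed > threshold
```

Sweeping the threshold and comparing the resulting masks against ground-truth obstacle regions is what produces ROC curves like those reported in the abstract.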


2020, Vol 128 (4), pp. 873-890
Author(s):
Anurag Ranjan,
David T. Hoffmann,
Dimitrios Tzionas,
Siyu Tang,
Javier Romero, et al.

The optical flow of humans is well known to be useful for the analysis of human action. Recent optical flow methods focus on training deep networks to approach the problem, but their training data does not cover the domain of human motion. We therefore develop a dataset of multi-human optical flow and train optical flow networks on this dataset. We use a 3D model of the human body and motion capture data to synthesize realistic flow fields for both single- and multi-person images, and then train optical flow networks to estimate human flow fields from pairs of images. We demonstrate that our trained networks are more accurate than a wide range of top methods on held-out test data and that they generalize well to real image sequences. The code, trained models, and dataset are available for research.
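Flow accuracy of the kind compared above is conventionally reported as average endpoint error (EPE): the mean Euclidean distance between predicted and ground-truth flow vectors. A minimal sketch (the function name is illustrative; the metric itself is the standard one for optical flow benchmarks):

```python
import numpy as np

def average_epe(pred_flow, gt_flow):
    """Average endpoint error over a dense flow field.
    Both inputs are (H, W, 2) arrays of per-pixel (u, v) vectors."""
    # Euclidean distance between predicted and true vectors, then mean
    return float(np.mean(np.linalg.norm(pred_flow - gt_flow, axis=-1)))
```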


Author(s):
Naoya Ohnishi,
Atsushi Imiya

In this paper, we present an algorithm for the hierarchical recognition of an environment using independent components of optical flow fields for the visual navigation of a mobile robot. For the computation of optical flow, the pyramid transform of an image sequence is used to analyze global and local motion. Our algorithm detects the planar region and obstacles in the image from the optical flow fields at each layer of the pyramid, and therefore achieves both global and local perception for robot vision. We show experimental results for both test image sequences and real image sequences captured by a mobile robot. Furthermore, we discuss some aspects of this work from the viewpoint of information theory.
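The pyramid transform referred to above can be illustrated with a standard Gaussian image pyramid — blur, then downsample by two at each layer — so that coarse layers expose global motion and fine layers local motion. This is a generic sketch, not the authors' implementation; the 5-tap binomial kernel is a common choice:

```python
import numpy as np

def gaussian_pyramid(image, levels=3):
    """Build an image pyramid: each layer is a blurred, 2x-downsampled
    copy of the previous one. Input is a 2-D grayscale array."""
    kernel = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0  # binomial blur
    pyramid = [image.astype(float)]
    for _ in range(levels - 1):
        img = pyramid[-1]
        # separable 5-tap blur: filter rows, then columns
        blurred = np.apply_along_axis(
            lambda r: np.convolve(r, kernel, mode="same"), 1, img)
        blurred = np.apply_along_axis(
            lambda c: np.convolve(c, kernel, mode="same"), 0, blurred)
        pyramid.append(blurred[::2, ::2])  # drop every other row/column
    return pyramid
```

Optical flow computed at the coarsest layer captures dominant, scene-wide motion; refining the estimate layer by layer recovers local detail.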


Author(s):
Hans-Hellmut Nagel

Many investigations of image sequences can be understood on the basis of a few concepts for which computational approaches are becoming increasingly available. The estimation of optical flow fields is discussed, exhibiting a common foundation for feature-based and differential approaches. The interpretation of optical flow fields has so far mostly concerned approaches that infer, from an image sequence, the 3-D structure of a rigid point configuration in 3-D space and its motion relative to the image sensor. The combination of stereo and motion provides additional incentives to evaluate image sequences, especially for the control of robots and autonomous vehicles. Advances in all these areas lead to the desire to describe the spatio-temporal development recorded by an image sequence not only at the level of geometry, but also at higher conceptual levels, for example by natural language descriptions.

