Embedded Implementation of a Resource-Efficient Optical Flow Extraction Method

MACRo 2015 ◽  
2015 ◽  
Vol 1 (1) ◽  
pp. 163-175
Author(s):  
László Bakó ◽  
Sándor-Tihamér Brassai ◽  
Călin Enăchescu

Abstract The main goal of the proposed project is to enhance a mobile robot with evolutionary optimization capabilities for tasks such as egomotion estimation and obstacle avoidance. The robot will learn to navigate different environments and will adapt to changing conditions. This implies the implementation of vision-based robot navigation using artificial vision, computed with on-board FPGAs. The current paper aims to contribute to the implementation of real-time motion extraction from a video feed using embedded FPGA circuits.

2011 ◽  
Vol 464 ◽  
pp. 204-207
Author(s):  
Huan Xun Li ◽  
Jun Jie Shen ◽  
Shuai Guo

In order to improve accuracy and safety when an autonomous mobile robot moves in a narrow area, a real-time navigation and obstacle-avoidance algorithm is put forward. A feature extraction method is used to search for path points, and the angle potential field method is used to search for the target angle. Based on these two methods, more accurate environment modeling and navigation for a mobile robot in narrow areas is realized. The algorithm has been used successfully in a household robot, and the experimental results demonstrate its accuracy and real-time performance.
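The abstract does not give the details of the angle potential field method, but the general idea of angular potential fields is standard: score candidate headings by an attractive term pulling toward the goal direction and repulsive terms pushed away from sensed obstacle directions, then steer toward the minimum-cost heading. Below is a minimal illustrative sketch of that idea; the function name, gains (`k_att`, `k_rep`), candidate range, and cost shape are assumptions for illustration, not the paper's algorithm.

```python
import math

def target_angle(goal_angle, obstacles, k_att=1.0, k_rep=0.5):
    """Pick the steering angle minimizing a 1-D angular potential:
    attraction toward goal_angle plus repulsion from each obstacle,
    given as (angle, distance) pairs. All angles in radians.
    NOTE: illustrative sketch only, not the algorithm from the paper."""
    best_angle, best_cost = 0.0, float("inf")
    # Evaluate candidate headings over the front half-plane, 1-degree steps.
    for step in range(-90, 91):
        theta = math.radians(step)
        # Attractive term: grows with (wrapped) angular deviation from goal.
        cost = k_att * abs(math.atan2(math.sin(theta - goal_angle),
                                      math.cos(theta - goal_angle)))
        # Repulsive terms: stronger for nearer obstacles at similar headings.
        for obs_angle, obs_dist in obstacles:
            dtheta = abs(math.atan2(math.sin(theta - obs_angle),
                                    math.cos(theta - obs_angle)))
            cost += k_rep / (obs_dist * (dtheta + 0.1))
        if cost < best_cost:
            best_angle, best_cost = theta, cost
    return best_angle
```

With no obstacles the minimum sits at the goal heading; an obstacle directly ahead pushes the chosen heading sideways, which is the qualitative behavior the paper relies on for narrow-area navigation.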


2016 ◽  
Vol 22 ◽  
pp. 897-904 ◽  
Author(s):  
Laszlo Bako ◽  
Szabolcs Hajdu ◽  
Sandor-Tihamer Brassai ◽  
Fearghal Morgan ◽  
Calin Enachescu

2014 ◽  
Vol 62 (1) ◽  
pp. 139-150 ◽  
Author(s):  
S.A. Mahmoudi ◽  
M. Kierzynka ◽  
P. Manneback ◽  
K. Kurowski

Abstract Motion tracking algorithms are widely used in computer vision research. However, the new video standards, especially those at high resolutions, mean that current implementations, even running on modern hardware, no longer meet the needs of real-time processing. To overcome this challenge, several GPU (Graphics Processing Unit) computing approaches have recently been proposed. Although they demonstrate the great potential of the GPU platform, hardly any can process high-definition video sequences efficiently. Thus, a need arose for a tool able to address this problem. In this paper we present software that implements optical flow motion tracking using the Lucas-Kanade algorithm. It is also integrated with the Harris corner detector, so the algorithm can perform sparse tracking, i.e. tracking only the meaningful pixels. This substantially lowers the computational burden of the method. Moreover, both parts of the algorithm, i.e. corner selection and tracking, are implemented on the GPU and, as a result, the software is extremely fast, allowing real-time motion tracking on videos in Full HD or even 4K format. In order to deliver the highest performance, it also supports multi-GPU systems, on which it scales up very well.
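The core Lucas-Kanade step the abstract refers to solves a small least-squares system per tracked point: stack the spatial gradients over a window into a design matrix and solve for the flow vector against the temporal difference. The sketch below is a minimal single-point, single-iteration NumPy version (no pyramid, no GPU), with the window size and the synthetic test scene chosen here for illustration; the paper's GPU implementation is of course far more elaborate.

```python
import numpy as np

def lk_flow(prev, curr, x, y, win=7):
    """Estimate optical flow (u, v) at pixel (x, y) by solving the
    Lucas-Kanade least-squares system over a (2*win+1)^2 window.
    Minimal single-iteration sketch; no pyramid, no border handling."""
    # Central-difference spatial gradients and temporal difference.
    Ix = (np.roll(prev, -1, axis=1) - np.roll(prev, 1, axis=1)) / 2.0
    Iy = (np.roll(prev, -1, axis=0) - np.roll(prev, 1, axis=0)) / 2.0
    It = curr - prev
    sl = (slice(y - win, y + win + 1), slice(x - win, x + win + 1))
    ix, iy, it = Ix[sl].ravel(), Iy[sl].ravel(), It[sl].ravel()
    A = np.stack([ix, iy], axis=1)   # (N, 2) design matrix
    b = -it                          # brightness-constancy residual
    # The 2x2 structure tensor A^T A must be well-conditioned here --
    # which is exactly what a Harris-style corner check guarantees.
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow  # (u, v) in pixels per frame

# Synthetic check: a smooth Gaussian blob translated one pixel right.
yy, xx = np.mgrid[0:64, 0:64]
blob = lambda cx, cy: np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / 50.0)
u, v = lk_flow(blob(32, 32), blob(33, 32), 32, 32)
```

The comment about the structure tensor is the reason the paper pairs LK with Harris: Harris selects exactly those pixels where this 2x2 system is invertible, so sparse tracking is both cheaper and better conditioned.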


2014 ◽  
Vol 926-930 ◽  
pp. 3302-3305 ◽  
Author(s):  
Dong Ming Liu ◽  
Chao Liu ◽  
Hai Wei Mu

As FPGA technology progresses, the speed, the number of internal multipliers, and the internal RAM of FPGAs keep increasing. Internal resources can be allocated flexibly and there is no limit on the number of pipeline stages, so FPGAs are better suited to real-time video processing than earlier DSP- and PC-based solutions. For this reason, the DE2 development system, built around a Cyclone II series FPGA, is selected as the real-time video processing platform, on which the calculation of LK-algorithm-based real-time optical flow is implemented. Finally, through a careful overall arrangement of the pipeline and sub-pipelines, the system achieves real-time video motion tracking on 640×480-resolution, 30 frames/s images.


2011 ◽  
Vol 55-57 ◽  
pp. 1699-1704
Author(s):  
Jie Zhao ◽  
Zhen Feng Han ◽  
Gang Feng Liu ◽  
Yong Min Yang

To move efficiently in an unknown environment, a mobile robot must use observations taken by various sensors to detect obstacles. This paper describes a new approach to obstacle detection for a serpentine robot. It captures an image sequence and analyzes the optical flow magnitudes to estimate the depth of the scene, which avoids the first- or higher-order differentiation required by traditional optical flow calculation. Data from an ultrasonic sensor and an attitude transducer are fused into the algorithm to improve real-time capability and robustness. The detection results are presented as fuzzy diagrams, which are concise and convenient. Indoor and outdoor experimental results demonstrate that this method can provide useful and comprehensive environment perception for the robot.
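The link between flow magnitude and depth that this abstract exploits is geometric: under pure lateral camera translation, a point at depth Z induces image motion of roughly f·T/Z pixels, where f is the focal length in pixels and T the baseline travelled between frames. The sketch below shows that inversion plus a simple fuzzy "danger" membership; the function names, the linear membership shape, and the `near`/`far` thresholds are illustrative assumptions, not the paper's sensor-fusion scheme.

```python
def depth_from_flow(flow_mag, focal_px, speed, dt):
    """Rough depth under pure lateral translation: image motion is
    focal_px * (speed * dt) / Z pixels, so Z = f * T / |flow|.
    Illustrative assumption; ignores rotation and forward motion."""
    baseline = speed * dt  # distance travelled between frames (m)
    return float("inf") if flow_mag == 0 else focal_px * baseline / flow_mag

def fuzzy_obstacle(depth, near=0.5, far=2.0):
    """Map depth (m) to a fuzzy obstacle-danger membership in [0, 1]:
    1 when closer than `near`, 0 beyond `far`, linear in between."""
    if depth <= near:
        return 1.0
    if depth >= far:
        return 0.0
    return (far - depth) / (far - near)

# Example: 10 px of flow, f = 500 px, 0.2 m/s over a 0.1 s frame gap.
z = depth_from_flow(10.0, 500.0, 0.2, 0.1)   # -> 1.0 m
danger = fuzzy_obstacle(z)                    # partial membership
```

A fuzzy membership per viewing direction is one plausible way to produce the "fuzzy diagrams" the abstract mentions: a compact polar plot of danger values rather than a full metric map.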


2009 ◽  
Vol 2 (1) ◽  
pp. 63-78 ◽  
Author(s):  
Boyoon Jung ◽  
Gaurav S. Sukhatme
