A real-time monocular vision-based frontal obstacle detection and avoidance for low cost UAVs in GPS denied environment

Author(s):  
Suman Saha ◽  
Ashutosh Natraj ◽  
Sonia Waharte
2014 ◽  
Vol 31 (3) ◽  
pp. 281-293
Author(s):  
Baozhi Jia ◽  
Rui Liu ◽  
Ming Zhu

2018 ◽  
Vol 06 (04) ◽  
pp. 267-275
Author(s):  
Ajay Shankar ◽  
Mayank Vatsa ◽  
P. B. Sujit

Development of low-cost robots with the capability to detect and avoid obstacles along their path is essential for autonomous navigation. These robots have limited computational resources and payload capacity, and existing direct range-finding methods trade complexity against range. In this paper, we propose a vision-based obstacle detection system that is lightweight and suitable for low-cost robots. Monocular vision approaches in the literature currently suffer from environmental constraints such as texture and color. To mitigate these limitations, a novel algorithm is proposed, termed Pyramid Histogram of Oriented Optical Flow (P-HOOF), which distinctly captures motion vectors from local image patches and provides a robust descriptor capable of discriminating obstacles from non-obstacles. A support vector machine (SVM) classifier that uses P-HOOF for real-time obstacle classification is employed. To avoid obstacles, a behavior-based collision avoidance mechanism is designed that updates the probability of encountering an obstacle while navigating. The proposed approach depends only on the relative motion of the robot with respect to its surroundings; it is therefore suitable for both indoor and outdoor applications and has been validated through simulated and hardware experiments.
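The descriptor-plus-classifier pipeline described above can be illustrated with a short sketch. The Python code below is a minimal, hypothetical rendering of the idea, not the authors' implementation: it builds magnitude-weighted orientation histograms of dense optical flow over a spatial pyramid of image patches and feeds the concatenated feature vector to an SVM. The function names, the Farneback flow parameters, and the 1x1/2x2/4x4 pyramid layout are illustrative assumptions.

```python
# Hypothetical sketch of a pyramid histogram-of-oriented-optical-flow descriptor
# feeding an SVM, in the spirit of the abstract above. Names and parameters are
# illustrative, not taken from the paper.
import numpy as np
import cv2
from sklearn.svm import SVC

def hoof(flow, bins=9):
    """Magnitude-weighted histogram of flow orientations for one patch."""
    fx, fy = flow[..., 0], flow[..., 1]
    mag = np.hypot(fx, fy)
    ang = np.arctan2(fy, fx)                     # orientation in [-pi, pi]
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)            # normalise so motion scale cancels

def p_hoof(prev_gray, gray, levels=3, bins=9):
    """Concatenate HOOF histograms over a spatial pyramid (1x1, 2x2, 4x4, ...)."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = gray.shape
    feats = []
    for lvl in range(levels):
        n = 2 ** lvl                             # n x n grid at this pyramid level
        for i in range(n):
            for j in range(n):
                patch = flow[i*h//n:(i+1)*h//n, j*w//n:(j+1)*w//n]
                feats.append(hoof(patch, bins))
    return np.concatenate(feats)

# Training on pre-labelled frame pairs (obstacle vs. non-obstacle) is assumed:
# X = np.stack([p_hoof(a, b) for a, b in frame_pairs]); y = labels
# clf = SVC(kernel="rbf").fit(X, y)
# At run time: clf.predict(p_hoof(prev_frame, frame)[None])
```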


2021 ◽  
Vol 11 (16) ◽  
pp. 7225
Author(s):  
Eugenio Tramacere ◽  
Sara Luciani ◽  
Stefano Feraco ◽  
Angelo Bonfitto ◽  
Nicola Amati

Self-driving vehicles have attracted growing research interest over the last decades. Nevertheless, fully autonomous vehicles are still far from being a common means of transport. This paper presents the design and experimental validation of a processor-in-the-loop (PIL) architecture for an autonomous sports car. The considered vehicle is an all-wheel-drive, full-electric, single-seater prototype. The adopted PIL architecture includes all the modules required for autonomous driving at the system level: environment perception, trajectory planning, and control. Specifically, the perception pipeline exploits obstacle detection algorithms based on Artificial Intelligence (AI), the trajectory planning relies on a modified Rapidly-exploring Random Tree (RRT) algorithm built on Dubins curves, and the vehicle is controlled via a Model Predictive Control (MPC) strategy. The PIL layout is first implemented on a low-cost card-sized computer for fast code verification. The proposed PIL architecture is then compared in terms of performance to an alternative PIL based on a high-performance real-time target machine. Both PIL architectures use the User Datagram Protocol (UDP) to communicate with a personal computer. The latter PIL architecture is validated in real time using experimental data. Both are also validated against the full autonomous-driving pipeline that runs in parallel on the personal computer during numerical simulation.
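As a concrete illustration of the UDP link mentioned above, the sketch below shows how a PIL target and a host PC might exchange fixed-layout control and state packets. The packet layouts, addresses, and port numbers are assumptions chosen for illustration and do not reflect the authors' actual interface.

```python
# Minimal sketch of a UDP exchange between a PIL target and a host PC running
# the vehicle simulation. Packet layout (three doubles out, six doubles in),
# addresses, and ports are hypothetical.
import socket
import struct

HOST_PC = ("192.168.1.10", 5005)        # hypothetical host address and port

def send_control(sock, steering, throttle, brake):
    """Pack one control sample and send it to the host over UDP."""
    payload = struct.pack("<3d", steering, throttle, brake)
    sock.sendto(payload, HOST_PC)

def receive_state(sock, n_doubles=6):
    """Blocking read of one vehicle-state packet (e.g. x, y, yaw, vx, vy, yaw rate)."""
    data, _ = sock.recvfrom(1024)
    return struct.unpack(f"<{n_doubles}d", data[:8 * n_doubles])

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5006))            # listen for state packets from the simulator
```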


Author(s):  
Robert D. Leary ◽  
Sean Brennan

Currently, there is a lack of low-cost, real-time solutions for accurate autonomous vehicle localization. The fusion of a precise a priori map and a forward-facing camera can provide an alternative low-cost method for achieving centimeter-level localization. This paper analyzes the position and orientation bounds, or region of attraction, within which a real-time vehicle pose estimator can localize using monocular vision and a lane marker map. A pose estimation algorithm minimizes the residual pixel-level error between the estimated and detected lane marker features via Gauss-Newton nonlinear least squares. Simulations of typical road scenes were used as ground truth to verify that the pose estimator converges to the true vehicle pose. A successful convergence was defined as a pose estimate that fell within 5 cm and 0.25 degrees of the true vehicle pose. The results show that the longitudinal vehicle state is weakly observable and has the smallest region of attraction. Estimating the remaining five vehicle states gives repeatable convergence within the prescribed bounds over a relatively large region of attraction, even for the simple lane detection methods used herein. A main contribution of this paper is to demonstrate a repeatable and verifiable method to assess and compare lane-based vehicle localization strategies.
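The Gauss-Newton refinement described in the abstract can be sketched generically. The Python snippet below is a hypothetical illustration, not the paper's code: it refines a six-parameter vehicle pose by minimizing the stacked pixel residuals between projected and detected lane-marker features, using a numerically differentiated Jacobian. The `project_markers` callable stands in for the camera and lane-map model, which the paper defines but which is not reproduced here.

```python
# Hedged sketch of a Gauss-Newton pose refinement over pixel residuals.
# All names and the pose parameterisation are illustrative assumptions.
import numpy as np

def gauss_newton_pose(pose0, project_markers, detected_px, iters=10, eps=1e-6):
    """pose0: 6-vector (x, y, z, roll, pitch, yaw); detected_px: (N, 2) pixel coords."""
    pose = pose0.astype(float).copy()
    for _ in range(iters):
        r = (project_markers(pose) - detected_px).ravel()   # stacked pixel residuals
        # Numerical Jacobian of the residual w.r.t. the six pose parameters
        J = np.empty((r.size, 6))
        for k in range(6):
            d = np.zeros(6); d[k] = eps
            J[:, k] = ((project_markers(pose + d) - detected_px).ravel() - r) / eps
        step = np.linalg.lstsq(J, -r, rcond=None)[0]         # Gauss-Newton step: J·step ≈ -r
        pose += step
        if np.linalg.norm(step) < 1e-8:                      # converged
            break
    return pose
```

In this form, the convergence experiments described above amount to sampling initial poses around the true one and checking whether the returned estimate lands within the 5 cm / 0.25 degree tolerance.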


Author(s):  
Gabriel de Almeida Souza ◽  
Larissa Barbosa ◽  
Glênio Ramalho ◽  
Alexandre Zuquete Guarato

2007 ◽  
Author(s):  
R. E. Crosbie ◽  
J. J. Zenor ◽  
R. Bednar ◽  
D. Word ◽  
N. G. Hingorani
