Stereo vision based negative obstacle detection

Author(s):
Hasith Karunasekera
Handuo Zhang
Tao Xi
Han Wang

Author(s):
Taylor E. Baum
Kelilah L. Wolkowicz
Joseph P. Chobot
Sean N. Brennan

The objective of this work is to develop a negative obstacle detection algorithm for a robotic wheelchair. Negative obstacles, depressions in the surrounding terrain such as descending stairwells and curb drop-offs, present highly dangerous navigation scenarios because they vary widely in character, are perceptible only at close range, and are difficult to detect at normal operating speeds. Negative obstacle detection on robotic wheelchairs could therefore greatly increase the safety of these devices. The approach presented in this paper uses measurements from a single-scan laser range-finder and a microprocessor to detect negative obstacles. A real-time algorithm was developed that monitors the change in the measured distance over time and assumes that a sharp increase in that distance indicates a negative obstacle. It was found that LiDAR sensors with slight beam divergence and significant measurement error nonetheless achieved high detection accuracy, detecting controlled examples of negative obstacles with 88% accuracy for obstacles of 6 cm and larger on a robotic development platform and 90% accuracy for obstacles of 7.5 cm and larger on a robotic wheelchair. Implementing this algorithm could prevent life-changing injuries to robotic wheelchair users caused by negative obstacles.
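As a rough illustration of the thresholding idea described above, the following Python sketch flags a negative obstacle when the range reading from a downward-angled, single-scan range-finder jumps sharply between successive measurements. This is not the authors' implementation; the function name and the jump threshold are assumptions chosen for the example.

# Minimal sketch (not the paper's code): a drop-off lets the downward-angled
# beam travel past the expected ground point, so the measured range increases
# abruptly; a jump above a tuned threshold is treated as a negative obstacle.

RANGE_JUMP_THRESHOLD_M = 0.15  # hypothetical tuning value, in metres

def detect_negative_obstacles(ranges_m):
    """Return indices at which successive range readings jump sharply."""
    detections = []
    for i in range(1, len(ranges_m)):
        if ranges_m[i] - ranges_m[i - 1] > RANGE_JUMP_THRESHOLD_M:
            detections.append(i)
    return detections

# Example: flat ground (~1.2 m readings) followed by a curb drop-off.
print(detect_negative_obstacles([1.20, 1.21, 1.19, 1.22, 1.65, 1.70]))  # [4]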


Author(s):
Mathias Perrollaz
Raphael Labayrade
Dominique Gruyer
Alain Lambert
Didier Aubert

2005, Vol 48 (6), pp. 2389-2397
Author(s):
Wei
F. Rovira-Mas
J. F. Reid
S. Han

Author(s):
Muthukkumar S. Kadavasal
Abhishek Seth
James H. Oliver

A multi-modal teleoperation interface is introduced, featuring an integrated virtual-reality-based simulation augmented by sensors and image-processing capabilities on board the remotely operated vehicle. The proposed virtual reality interface fuses an existing VR model with a live video feed and prediction states, thereby creating a multi-modal control interface. Virtual reality addresses the typical limitations of video-based teleoperation caused by signal lag and limited field of view, allowing the operator to navigate in a continuous fashion. The vehicle incorporates an on-board computer and a stereo vision system to facilitate obstacle detection. A vehicle adaptation system with a priori risk maps and a real-state tracking system enables temporary autonomous operation of the vehicle for local navigation around obstacles and automatic re-establishment of the vehicle's teleoperated state. Because the vehicle and the operator each hold absolute autonomy at different stages, the operation is referred to as mixed autonomy. Finally, the system provides real-time updates of the virtual environment based on anomalies encountered by the vehicle, effectively balancing autonomy between the human operator and on-board vehicle intelligence. The stereo-vision-based obstacle avoidance system was initially implemented on a video-based teleoperation architecture, and experimental results are presented. The VR-based multi-modal teleoperation interface is expected to be more adaptable and intuitive than other interfaces.
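The mixed-autonomy hand-off described above can be pictured as a small state machine: the vehicle stays teleoperated until the stereo vision system reports an obstacle, takes over autonomously for local avoidance, and then returns control to the operator. The Python sketch below illustrates only that switching logic; the mode names and flags are assumptions for illustration, not the paper's interface.

# Minimal sketch of the mode-switching idea (illustrative only).

TELEOPERATED = "teleoperated"
AUTONOMOUS = "autonomous"

class MixedAutonomyController:
    def __init__(self):
        self.mode = TELEOPERATED

    def update(self, obstacle_detected, obstacle_cleared):
        """Hand control to the vehicle near obstacles, then give it back."""
        if self.mode == TELEOPERATED and obstacle_detected:
            # Vehicle temporarily navigates around the obstacle on its own.
            self.mode = AUTONOMOUS
        elif self.mode == AUTONOMOUS and obstacle_cleared:
            # Re-establish the operator's teleoperated control.
            self.mode = TELEOPERATED
        return self.mode

ctrl = MixedAutonomyController()
print(ctrl.update(obstacle_detected=True, obstacle_cleared=False))   # autonomous
print(ctrl.update(obstacle_detected=False, obstacle_cleared=True))   # teleoperated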

