Obstacle Detection Techniques for Vision Based Autonomous Navigation Systems

Author(s):  
R. Karthikeyan ◽  
B. Sheela Rani ◽  
K. Renganathan
2020 ◽  
Vol 166 ◽  
pp. 05004
Author(s):  
Martin Bogdanovskyi ◽  
Andrii Tkachuk ◽  
Oleksandr Dobrzhanskyi ◽  
Anna Humeniuk

Achieving greater flexibility and maneuverability for small transport and service units in modern factories by developing compact autonomous navigation systems plays a crucial role in the complex automation of transport logistics. To solve the navigation task, the following approach is proposed: a computer vision system based on a 5-megapixel CMOS image sensor is used to assess the environment, and an auxiliary ultrasonic sensor acts as a limit switch for front obstacle detection. The authors solve the yawing problem with an artificial-marking approach based on two-colored leading lines. To increase maneuverability during turns, the movement speed is controlled based on the perspective of the lines. The basic design and technical characteristics of the four-wheel-drive platform and the algorithm of the Raspberry Pi 3/Arduino Nano hybrid control system are presented. Experimental results prove the viability of the proposed approach.
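
As a rough illustration of the line-perspective idea, the sketch below computes a yaw correction and a speed factor from the image geometry of two already-segmented leading lines; the function name, point layout, and all constants are illustrative assumptions rather than the authors' algorithm.

```python
import numpy as np

def steering_and_speed(left_line, right_line, base_speed=0.5):
    """Estimate a yaw correction and a speed factor from two leading lines.

    left_line, right_line: pairs of (x, y) image points (bottom, top) for the
    two colored guide lines, in pixel coordinates with y growing downwards.
    Returns (yaw_error, speed); yaw_error > 0 means the platform should turn
    right. All constants are illustrative.
    """
    (lx0, ly0), (lx1, ly1) = left_line
    (rx0, ry0), (rx1, ry1) = right_line

    # Lane centre at the bottom (near the platform) and at the top (far ahead).
    near_centre = (lx0 + rx0) / 2.0
    far_centre = (lx1 + rx1) / 2.0

    # Yaw error: horizontal drift of the far centre relative to the near centre,
    # normalised by the look-ahead distance in pixels.
    look_ahead = abs(ly0 - ly1) + 1e-6
    yaw_error = np.arctan2(far_centre - near_centre, look_ahead)

    # Perspective cue: on a straight segment the far lane width stays a stable
    # fraction of the near width; a sharp drop in that ratio suggests an
    # upcoming turn, so the speed is reduced accordingly.
    near_width = abs(rx0 - lx0) + 1e-6
    far_width = abs(rx1 - lx1)
    convergence = far_width / near_width
    speed = base_speed * np.clip(convergence / 0.5, 0.2, 1.0)

    return float(yaw_error), float(speed)

# Example: far centre shifted left and lines converging fast -> turn left, slow down.
print(steering_and_speed(((100, 480), (260, 240)), ((540, 480), (330, 240))))
```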


Sensors ◽  
2020 ◽  
Vol 20 (19) ◽  
pp. 5511
Author(s):  
Eduardo Tondin Ferreira Dias ◽  
Hugo Vieira Neto ◽  
Fábio Kurt Schneider

Methods for autonomous navigation systems using sonars in air traditionally use the time-of-flight technique for obstacle detection and environment mapping. However, this technique suffers from constructive and destructive interference of ultrasonic reflections from multiple obstacles in the environment, requiring several acquisitions for proper mapping. This paper presents a novel approach for obstacle detection and localisation using inverse problems and compressed sensing concepts. Experiments were conducted with multiple obstacles present in a controlled environment using a hardware platform with four transducers, which was specially designed for sending, receiving and acquiring raw ultrasonic signals. A comparison between the performance of compressed sensing using Orthogonal Matching Pursuit and two traditional image reconstruction methods was conducted. The reconstructed 2D images representing the cross-section of the sensed environment were quantitatively assessed, showing promising results for robotic mapping tasks using compressed sensing.
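
The comparison baseline named in the abstract, Orthogonal Matching Pursuit, admits a very compact implementation. The NumPy sketch below recovers a sparse scene vector from linear measurements, with a random Gaussian matrix standing in for the acoustic model built from the four-transducer geometry (an assumption for illustration only).

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y ~= A @ x.

    A: (m, n) sensing/dictionary matrix, y: (m,) measurements, k: sparsity level.
    """
    m, n = A.shape
    residual = y.copy()
    support = []
    x = np.zeros(n)
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Re-fit the signal on the selected support by least squares.
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x[support] = coeffs
    return x

# Toy example: 2 "obstacles" (non-zero cells) recovered from 40 measurements
# of a 100-cell scene using a random Gaussian sensing matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[17, 63]] = [1.0, -0.7]
x_hat = omp(A, A @ x_true, k=2)
print(np.nonzero(x_hat)[0], x_hat[[17, 63]])
```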


Author(s):  
Vladimir T. Minligareev ◽  
Elena N. Khotenko ◽  
Vadim V. Tregubov ◽  
Tatyana V. Sazonova ◽  
Vaclav L. Kravchenok

2021 ◽  
Vol 7 (4) ◽  
pp. 61
Author(s):  
David Urban ◽  
Alice Caplier

As demanding vision-based tasks such as object detection and monocular depth estimation make their way into real-time applications, and as more lightweight solutions for autonomous vehicle navigation systems emerge, obstacle detection and collision prediction remain two very challenging tasks for small embedded devices like drones. We propose a novel lightweight and time-efficient vision-based solution for predicting Time-to-Collision from a monocular video camera embedded in a smartglasses device, as a module of a navigation system for visually impaired pedestrians. It consists of two modules: a static data extractor, a convolutional neural network that predicts the obstacle position and distance, and a dynamic data extractor that stacks the obstacle data from multiple frames and predicts the Time-to-Collision with a simple fully connected neural network. This paper focuses on the Time-to-Collision network's ability to adapt, through supervised learning, to new sceneries with different types of obstacles.
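
A minimal PyTorch sketch of the kind of fully connected regressor the dynamic data extractor is described as using: per-frame obstacle data (here assumed to be position and distance) from several consecutive frames are stacked into one vector and regressed to a single Time-to-Collision value. The layer sizes, frame count, and feature layout are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

N_FRAMES = 6         # number of stacked past frames (assumed)
FEATS_PER_FRAME = 3  # e.g. obstacle x, y and distance per frame (assumed)

class TTCRegressor(nn.Module):
    """Simple fully connected regressor: stacked obstacle data -> Time-to-Collision."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_FRAMES * FEATS_PER_FRAME, 64),
            nn.ReLU(),
            nn.Linear(64, 32),
            nn.ReLU(),
            nn.Linear(32, 1),   # predicted TTC in seconds
        )

    def forward(self, x):
        return self.net(x)

# Supervised training step on a dummy batch (random tensors stand in for
# labelled obstacle tracks; the loss is plain MSE against ground-truth TTC).
model = TTCRegressor()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
features = torch.randn(8, N_FRAMES * FEATS_PER_FRAME)
ttc_target = torch.rand(8, 1) * 5.0
loss = nn.functional.mse_loss(model(features), ttc_target)
optim.zero_grad()
loss.backward()
optim.step()
print(float(loss))
```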


Sensors ◽  
2018 ◽  
Vol 18 (9) ◽  
pp. 2947
Author(s):  
Ming Hua ◽  
Kui Li ◽  
Yanhong Lv ◽  
Qi Wu

Generally, to ensure the reliability of the navigation system, vehicles are equipped with two or more sets of inertial navigation systems (INSs). Fusing the navigation measurements from the different INSs can effectively improve the accuracy of autonomous navigation. However, because of misalignment angles, the coordinate axes of the different systems are usually not perfectly coincident, which leads to serious problems when integrating the attitude information. It is therefore necessary to precisely calibrate and compensate the misalignment angles between the systems. In this paper, a dynamic calibration method for the misalignment angles between two systems is proposed. The method uses the speed and attitude information of the two INSs during vehicle motion as measurements to calibrate the misalignment angles dynamically, without additional information sources or external measuring equipment such as a turntable. A mathematical model of the misalignment angles between the two INSs is established. A simulation experiment and INS vehicle experiments were conducted to verify the effectiveness of the method. The results show that the calibration accuracy of the misalignment angles between the two sets of systems can reach 1″ using the proposed method.
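
A small NumPy sketch of the compensation step that such a calibration enables: once the misalignment angles between the two INS body frames are known, the attitude output of one system can be rotated into the frame of the other before fusion. The first-order small-angle construction of the correction matrix is a standard approximation; the sign convention and the angle values below are illustrative, not taken from the paper.

```python
import numpy as np

def misalignment_dcm(phi, theta, psi):
    """Direction cosine matrix for small misalignment angles (rad), using the
    first-order approximation C ~= I - [a]x, where [a]x is the skew-symmetric
    cross-product matrix of the angle vector (phi, theta, psi)."""
    skew = np.array([[0.0,   -psi,  theta],
                     [psi,    0.0,  -phi],
                     [-theta, phi,   0.0]])
    return np.eye(3) - skew

def align_attitude(C_nb_slave, misalign_angles):
    """Rotate the slave INS attitude DCM (navigation-to-body) into the master
    body frame using the calibrated slave-to-master misalignment angles."""
    C_ms = misalignment_dcm(*misalign_angles)   # slave body -> master body
    return C_ms @ C_nb_slave

# Example: 20 arcsec misalignment about each axis (1 arcsec ~= 4.848e-6 rad).
arcsec = np.deg2rad(1.0 / 3600.0)
angles = (20 * arcsec, 20 * arcsec, 20 * arcsec)
C_nb_slave = np.eye(3)      # slave reports a level, zero-heading attitude
print(align_attitude(C_nb_slave, angles))
```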


2021 ◽  
Author(s):  
Hao Wu ◽  
Jiangming Jin ◽  
Jidong Zhai ◽  
Yifan Gong ◽  
Wei Liu

Data ◽  
2018 ◽  
Vol 4 (1) ◽  
pp. 4 ◽  
Author(s):  
Viacheslav Moskalenko ◽  
Alona Moskalenko ◽  
Artem Korobov ◽  
Viktor Semashko

Trainable visual navigation systems based on deep learning demonstrate potential robustness to onboard camera parameters and challenging environments. However, a deep model requires substantial computational resources and large labelled training sets for successful training. Implementing autonomous navigation and training-based fast adaptation to new environments on a compact drone is therefore a complicated task. The article describes an original model and training algorithms adapted to a limited volume of labelled training data and constrained computational resources. The model consists of a convolutional neural network for visual feature extraction, an extreme learning machine for estimating the position displacement, and a boosted information-extreme classifier for obstacle prediction. Unsupervised training of the convolutional filters with a growing sparse-coding neural gas algorithm is proposed, together with supervised learning algorithms that construct the decision rules and a simulated annealing search algorithm used for fine-tuning. The use of a complex criterion for parameter optimisation of the feature extractor model is considered. The resulting approach reconstructs trajectories better than the well-known ORB-SLAM. In particular, for sequence 7 of the KITTI dataset, the translation error is reduced by nearly 65.6% at a frame rate of 10 frames per second. In addition, testing on an independent TUM sequence shot outdoors produces a translation error not exceeding 6% and a rotation error not exceeding 3.68 degrees per 100 m. Testing was carried out on a Raspberry Pi 3+ single-board computer.
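
A compact NumPy sketch of the extreme-learning-machine idea used for the position-displacement estimator: the hidden-layer weights are drawn at random and frozen, and only the linear output weights are fitted in closed form (here by ridge-regularised least squares). The feature dimensionality, hidden size, and synthetic data are placeholders, not the authors' configuration.

```python
import numpy as np

class ELMRegressor:
    """Extreme learning machine: random frozen hidden layer + linear readout."""

    def __init__(self, n_features, n_hidden=256, reg=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_features, n_hidden))  # random, never trained
        self.b = rng.standard_normal(n_hidden)
        self.reg = reg
        self.beta = None                                       # output weights (fitted)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        H = self._hidden(X)
        # Closed-form ridge solution: beta = (H^T H + reg*I)^-1 H^T y
        A = H.T @ H + self.reg * np.eye(H.shape[1])
        self.beta = np.linalg.solve(A, H.T @ y)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Toy usage: map 64-D visual features to a 2-D position displacement (dx, dy).
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 64))
y = X[:, :2] * 0.1 + 0.01 * rng.standard_normal((500, 2))   # synthetic targets
elm = ELMRegressor(n_features=64).fit(X, y)
print(np.abs(elm.predict(X) - y).mean())
```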

