A Brain-Inspired Goal-Oriented Robot Navigation System

2019 ◽  
Vol 9 (22) ◽  
pp. 4869
Author(s):  
Qiuying Chen ◽  
Hongwei Mo

Autonomous navigation in unknown environments is still a challenge for robotics. Much effort has been devoted to developing truly autonomous goal-oriented robot navigation models based on the neural mechanisms of spatial cognition and mapping in animal brains. Inspired by the Semantic Pointer Architecture Unified Network (SPAUN) neural model and neural navigation mechanisms, we developed a brain-like, biologically plausible mathematical model and applied it to robotic spatial navigation tasks. The proposed cognitive navigation framework adopts a one-dimensional ring attractor to model head-direction cells, uses a sinusoidal interference model to obtain a grid-like activity pattern, and derives the optimal movement direction from the entire set of activities. Adaptive resonance theory (ART) effectively reduces resource consumption and addresses the stability-plasticity problem in the dynamically adjusted network. This brain-like system model broadens the perspective for developing more powerful autonomous robotic navigation systems. The proposed model was tested under different conditions and exhibited superior navigation performance, demonstrating its effectiveness and reliability.
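A one-dimensional ring attractor of the kind used for head-direction cells can be sketched in a few lines. The network size, cosine weight kernel, and rate dynamics below are illustrative assumptions for demonstration, not the paper's exact model:

```python
import numpy as np

# Illustrative 1D ring-attractor sketch for head-direction cells.
# Each neuron has a preferred direction on a ring; a cosine recurrent
# kernel (local excitation, global inhibition) sustains one activity bump.
N = 100
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)   # preferred directions
W = np.cos(theta[:, None] - theta[None, :]) - 0.5      # recurrent weights

def settle(cue, steps=200):
    """Relax the network from a cue-driven state to a stable activity bump."""
    r = np.exp(np.cos(theta - cue))                     # initial activity
    for _ in range(steps):
        r = np.maximum(0.0, r + 0.1 * (W @ r / N - r))  # rectified rate update
        r /= r.max()                                    # keep activity bounded
    return r

r = settle(cue=np.pi)
bump_center = theta[np.argmax(r)]                       # decoded head direction
```

The decoded bump center settles at the cue direction; in a full model, angular-velocity input would shift the bump around the ring as the robot turns.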

Author(s):  
JINXUE SUI ◽  
LI YANG ◽  
XIN ZHANG ◽  
XIA ZHANG

Laser ranging offers an extensive detection range and high measurement accuracy, so it is widely applied in navigation systems for autonomous robots. However, environmental conditions affect the transmission of the laser beam and introduce measurement errors of varying degrees. This research examines the LMS200 laser range finder, used in robot navigation, under dissimilar conditions. By analyzing the primary factors that affect LMS200 ranging indoors and outdoors and comparing the experimental results, it identifies reflectivity, incidence angle, mixed pixels, and visibility as the environmental factors that truly affect practical application, laying the foundation for applying laser ranging to robot navigation. Mobile robots need sufficient sensor information about their movements to navigate autonomously; in a completely unknown environment, the robot must rely on its own sensors to build an environment model and perform localization and navigation. This paper also systematically introduces the application of laser ranging to mobile robot autonomous navigation: the environment is scanned with a two-dimensional range finder, and a map-building and localization method is proposed that uses two-dimensional scans to obtain a three-dimensional data stream. Experiments confirm that the algorithm allows the robot to obtain a good three-dimensional visual structure drawing.
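The 2D-to-3D idea can be sketched as stacking planar scans taken at different scanner pitch angles. The tilt model, angles, and ranges below are hypothetical, not the paper's implementation:

```python
import numpy as np

# Hypothetical sketch: building a 3D data stream from 2D laser scans by
# sweeping the scanner's pitch and rotating each scan plane accordingly.
def scan_to_3d(ranges, beam_angles, pitch):
    """Convert one planar scan taken at a given scanner pitch into 3D points."""
    x = ranges * np.cos(beam_angles)   # forward component in the scan plane
    y = ranges * np.sin(beam_angles)   # lateral component
    # Rotate the scan plane about the lateral axis by the pitch angle
    return np.stack([x * np.cos(pitch), y, x * np.sin(pitch)], axis=1)

beam_angles = np.deg2rad(np.arange(-90, 91))            # 180-degree scan, 1-degree steps
pitches = np.deg2rad(np.arange(-30, 31, 5))             # tilt sweep
cloud = np.vstack([scan_to_3d(np.full_like(beam_angles, 5.0), beam_angles, p)
                   for p in pitches])                   # stacked point cloud
```

With constant 5 m returns, every stacked point lies on a 5 m sphere around the scanner, which is a quick sanity check for the frame transform.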


Author(s):  
Vladimir T. Minligareev ◽  
Elena N. Khotenko ◽  
Vadim V. Tregubov ◽  
Tatyana V. Sazonova ◽  
Vaclav L. Kravchenok

Sensors ◽  
2018 ◽  
Vol 18 (9) ◽  
pp. 2947
Author(s):  
Ming Hua ◽  
Kui Li ◽  
Yanhong Lv ◽  
Qi Wu

Generally, to ensure the reliability of the navigation system, vehicles are equipped with two or more sets of inertial navigation systems (INSs). Fusing navigation measurements from different sets of INSs can effectively improve the accuracy of autonomous navigation. However, due to misalignment angles, the coordinate axes of different systems usually do not coincide exactly, which leads to serious problems when integrating attitude information. It is therefore necessary to precisely calibrate and compensate the misalignment angles between systems. In this paper, a dynamic calibration method for the misalignment angles between two systems is proposed. The method uses the velocity and attitude information of the two INSs during vehicle motion as measurements to dynamically calibrate the misalignment angles, without additional information sources or external measuring equipment such as a turntable. A mathematical model of the misalignment angles between two INSs was established, and both simulation and vehicle experiments were conducted to verify the method's effectiveness. The results show that the calibration accuracy of the misalignment angles between the two sets of systems can reach 1″ with the proposed method.
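The core of such a misalignment model can be sketched with direction cosine matrices: the product of one INS's attitude matrix with the transpose of the other's is a near-identity rotation whose antisymmetric part yields the three misalignment angles. The angle values and Euler convention below are illustrative assumptions, and the paper's full Kalman-filter calibration is not reproduced here:

```python
import numpy as np

# Illustrative misalignment-angle extraction between two INS body frames.
# C1, C2 map the navigation frame to each body frame; C12 = C1 @ C2.T is
# the relative rotation, near identity for small misalignment angles.
def small_angles_from_dcm(C12):
    """Extract small misalignment angles (rad) from a near-identity DCM."""
    return np.array([C12[2, 1] - C12[1, 2],
                     C12[0, 2] - C12[2, 0],
                     C12[1, 0] - C12[0, 1]]) / 2.0

def rot(angles):
    """Z-Y-X Euler rotation matrix (convention assumed for this sketch)."""
    ax, ay, az = angles
    Rx = np.array([[1, 0, 0], [0, np.cos(ax), -np.sin(ax)], [0, np.sin(ax), np.cos(ax)]])
    Ry = np.array([[np.cos(ay), 0, np.sin(ay)], [0, 1, 0], [-np.sin(ay), 0, np.cos(ay)]])
    Rz = np.array([[np.cos(az), -np.sin(az), 0], [np.sin(az), np.cos(az), 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

true_mis = np.deg2rad([0.01, -0.02, 0.015])   # hypothetical small misalignment
C2 = rot(np.deg2rad([10, 5, 30]))             # INS #2 attitude (arbitrary)
C1 = rot(true_mis) @ C2                       # INS #1 differs by the misalignment
est = small_angles_from_dcm(C1 @ C2.T)        # recovered angles
```

In the paper's method these angles are estimated dynamically from velocity and attitude residuals during motion; the closed-form extraction above only illustrates the underlying geometric model.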


2021 ◽  
Author(s):  
Hao Wu ◽  
Jiangming Jin ◽  
Jidong Zhai ◽  
Yifan Gong ◽  
Wei Liu

Data ◽  
2018 ◽  
Vol 4 (1) ◽  
pp. 4 ◽  
Author(s):  
Viacheslav Moskalenko ◽  
Alona Moskalenko ◽  
Artem Korobov ◽  
Viktor Semashko

Trainable visual navigation systems based on deep learning show potential robustness to onboard camera parameters and challenging environments. However, a deep model requires substantial computational resources and large labelled training sets for successful training. Implementing autonomous navigation and training-based fast adaptation to a new environment on a compact drone is therefore a complicated task. The article describes an original model and training algorithms adapted to a limited labelled training set and constrained computational resources. The model consists of a convolutional neural network for visual feature extraction, an extreme learning machine for estimating position displacement, and a boosted information-extreme classifier for obstacle prediction. Unsupervised training of the convolution filters with a growing sparse-coding neural gas algorithm is proposed, together with supervised learning algorithms that construct the decision rules, using a simulated-annealing search algorithm for fine-tuning. The use of a complex criterion for parameter optimization of the feature extractor model is considered. The resulting approach reconstructs trajectories better than the well-known ORB-SLAM. In particular, for sequence 7 of the KITTI dataset, the translation error is reduced by nearly 65.6% at a frame rate of 10 frames per second. Besides, testing on an independent TUM sequence shot outdoors produces a translation error not exceeding 6% and a rotation error not exceeding 3.68 degrees per 100 m. Testing was carried out on a Raspberry Pi 3+ single-board computer.
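The extreme learning machine used for displacement estimation is attractive on constrained hardware because only the output weights are trained, in closed form. The sketch below uses raw synthetic inputs in place of the paper's CNN features; sizes and data are hypothetical:

```python
import numpy as np

# Minimal extreme-learning-machine (ELM) regressor sketch. The hidden layer
# is random and fixed; only the output weights are solved by least squares.
rng = np.random.default_rng(0)

class ELM:
    def __init__(self, n_in, n_hidden):
        self.W = rng.normal(size=(n_in, n_hidden))  # fixed random projection
        self.b = rng.normal(size=n_hidden)          # fixed random biases
        self.beta = None                            # learned output weights

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        H = self._hidden(X)
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form solve
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

X = rng.normal(size=(200, 8))                 # stand-in for CNN features
y = X @ rng.normal(size=(8, 2))               # synthetic 2D displacement targets
model = ELM(8, 64).fit(X, y)
err = np.abs(model.predict(X) - y).mean()     # training error
```

Because training is a single linear solve rather than gradient descent, retraining for a new environment is cheap, which matches the fast-adaptation goal on a Raspberry Pi-class computer.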


Agriculture ◽  
2021 ◽  
Vol 11 (10) ◽  
pp. 954
Author(s):  
Abhijeet Ravankar ◽  
Ankit A. Ravankar ◽  
Arpit Rawankar ◽  
Yohei Hoshino

In recent years, autonomous robots have been used extensively to automate several vineyard tasks. Autonomous navigation is an indispensable component of such field robots. Autonomous and safe navigation has been well studied in indoor environments and many algorithms have been proposed. However, unlike structured indoor environments, vineyards pose special challenges for robot navigation; in particular, safe navigation is crucial to avoid damaging the grapes. To this end, we propose an algorithm that enables autonomous and safe robot navigation in vineyards. The proposed algorithm relies on data from a Lidar sensor and does not require GPS. In addition, it can avoid dynamic obstacles in the vineyard while smoothing the robot's trajectories; the curvature of the trajectories can be controlled, keeping a safe distance from both the crop and the dynamic obstacles. We have tested the algorithm both in simulation and with robots in an actual vineyard. The results show that the robot can safely navigate the lanes of the vineyard and smoothly avoid dynamic obstacles such as moving people without stopping abruptly or executing sharp turns. The algorithm performs in real time and can easily be integrated into robots deployed in vineyards.
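A curvature-limited lane-centering step of this kind can be sketched from a single planar Lidar scan: estimate the perpendicular distance to the vine row on each side and command a clipped curvature toward the lane center. The scan model, gain, and curvature limit are assumptions, not the paper's algorithm:

```python
import numpy as np

# Hypothetical Lidar lane-centering sketch for a vineyard row.
def center_offset(ranges, angles):
    """Signed offset from lane center; positive means closer to the right row."""
    lateral = ranges * np.sin(angles)                       # y of each return
    d_left = np.min(lateral[lateral > 0], initial=np.inf)   # nearest left return
    d_right = np.min(-lateral[lateral < 0], initial=np.inf) # nearest right return
    return (d_left - d_right) / 2.0

def steering(ranges, angles, gain=0.8, max_curvature=0.5):
    """Curvature command (positive = turn left), clipped to keep turns smooth."""
    return float(np.clip(gain * center_offset(ranges, angles),
                         -max_curvature, max_curvature))
```

The clip on curvature is what prevents sharp turns; a dynamic-obstacle term would add a repulsive offset before clipping, so avoidance maneuvers stay equally smooth.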


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Guangbing Zhou ◽  
Jing Luo ◽  
Shugong Xu ◽  
Shunqing Zhang ◽  
Shige Meng ◽  
...  

Purpose: Indoor localization is a key tool for robot navigation in indoor environments. Traditionally, robot navigation depends on a single sensor for autonomous localization. To enhance the navigation performance of mobile robots, this paper proposes a multiple data fusion (MDF) method for indoor environments.

Design/methodology/approach: Data from multiple sensors, i.e. an inertial measurement unit, an odometer and a laser radar, are collected. An extended Kalman filter (EKF) then fuses these data so that the mobile robot can perform autonomous localization in complex indoor environments using the proposed EKF-based MDF method.

Findings: The proposed method was experimentally verified in different indoor environments, i.e. an office, a passageway and an exhibition hall. Experimental results show that the EKF-based MDF method achieves the best localization performance and robustness during navigation.

Originality/value: Indoor localization precision depends largely on the data collected from multiple sensors. The proposed method fuses these data reasonably and can guide a mobile robot performing autonomous navigation (AN) in indoor environments. The output of this paper can therefore be used for AN in complex and unknown indoor environments.
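An EKF fusing odometry-style motion input with a position fix can be sketched in one predict/update cycle. The unicycle motion model, noise values, and position-only measurement below are illustrative assumptions, not the paper's exact filter:

```python
import numpy as np

# Minimal EKF-style fusion sketch: a 2D robot state [x, y, heading] is
# predicted from a velocity input (odometer/IMU) and corrected with a
# position measurement, e.g. from laser-radar scan matching.
def ekf_step(x, P, u, z, Q, R, dt=0.1):
    """One predict/update cycle. u = (v, omega); z = measured (x, y)."""
    v, w = u
    # Predict with a unicycle motion model
    x_pred = x + dt * np.array([v * np.cos(x[2]), v * np.sin(x[2]), w])
    F = np.array([[1, 0, -dt * v * np.sin(x[2])],   # Jacobian of the motion model
                  [0, 1,  dt * v * np.cos(x[2])],
                  [0, 0, 1]])
    P_pred = F @ P @ F.T + Q
    # Update with the position measurement
    H = np.array([[1.0, 0, 0], [0, 1.0, 0]])        # measures (x, y) only
    S = H @ P_pred @ H.T + R                         # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)              # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(3) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.zeros(3), np.eye(3)
Q, R = 0.01 * np.eye(3), 0.05 * np.eye(2)
x, P = ekf_step(x, P, u=(1.0, 0.0), z=np.array([0.1, 0.0]), Q=Q, R=R)
```

Adding a sensor amounts to adding another measurement model H with its own R, which is why the EKF is a natural backbone for this kind of multi-sensor fusion.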

