A topological navigation system for indoor environments based on perception events

2016, Vol 14 (1), pp. 172988141667813
Author(s): Clara Gomez, Alejandra Carolina Hernandez, Jonathan Crespo, Ramon Barber

The aim of the work presented in this article is to develop a navigation system that allows a mobile robot to move autonomously in an indoor environment using perceptions of multiple events. A topological navigation system based on events, imitating human navigation through sensorimotor abilities and sensorial events, is presented. The increasing interest in building autonomous mobile systems makes the detection and recognition of perceptions a crucial task. The proposed system can be considered a perceptive navigation system, as the navigation process is based on the perception and recognition of natural and artificial landmarks, among others. The innovation of this work resides in the use of an integration interface to handle multiple events concurrently, leading to a more complete and advanced navigation system. The developed architecture eases the integration of new elements thanks to its modularity and the decoupling between modules. Finally, experiments have been carried out on several mobile robots, and their results show the feasibility of the proposed navigation system and the effectiveness of the sensorial data integration managed as events.
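The abstract does not give implementation details, so the following Python sketch only illustrates what an event-integration interface that decouples perception modules from a topological navigator could look like; all class, method, and event names here are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class PerceptionEvent:
    source: str        # e.g. "camera", "laser", "marker_detector"
    label: str         # e.g. "door_detected"
    confidence: float

class EventIntegrator:
    """Decouples perception modules from the navigator: each module
    publishes events; the navigator subscribes to the labels it expects."""
    def __init__(self) -> None:
        self._handlers: Dict[str, List[Callable[[PerceptionEvent], None]]] = {}

    def subscribe(self, label: str,
                  handler: Callable[[PerceptionEvent], None]) -> None:
        self._handlers.setdefault(label, []).append(handler)

    def publish(self, event: PerceptionEvent) -> None:
        # Events from any sensor are dispatched concurrently to all
        # interested handlers without the modules knowing each other.
        for handler in self._handlers.get(event.label, []):
            handler(event)

# Usage: the navigator advances along the topological route when the
# landmark event expected at the current node arrives.
integrator = EventIntegrator()
integrator.subscribe("door_detected",
                     lambda e: print(f"advance node ({e.source}, {e.confidence:.2f})"))
integrator.publish(PerceptionEvent("camera", "door_detected", 0.91))
```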

2020, Vol 69, pp. 471-500
Author(s): Shih-Yun Lo, Shiqi Zhang, Peter Stone

Intelligent mobile robots have recently become able to operate autonomously in large-scale indoor environments for extended periods of time. In this process, mobile robots need the capabilities of both task and motion planning. Task planning in such environments involves sequencing the robot’s high-level goals and subgoals, and typically requires reasoning about the locations of people, rooms, and objects in the environment, and their interactions, to achieve a goal. One of the prerequisites for optimal task planning that is often overlooked is having an accurate estimate of the actual distance (or time) a robot needs to navigate from one location to another. State-of-the-art motion planning algorithms, though often computationally complex, are designed exactly for this purpose of finding routes through constrained spaces. In this article, we focus on integrating task and motion planning (TMP) to achieve task-level-optimal planning for robot navigation while maintaining manageable computational efficiency. To this end, we introduce the TMP algorithm PETLON (Planning Efficiently for Task-Level-Optimal Navigation), including two configurations with different trade-offs in computational expense between task and motion planning, for everyday service tasks using a mobile robot. Experiments have been conducted both in simulation and on a mobile robot using object delivery tasks in an indoor office environment. The key observation from the results is that PETLON is more efficient than a baseline approach that pre-computes the motion costs of all possible navigation actions, while still producing plans that are optimal at the task level. We provide results with two different task planning paradigms in the implementation of PETLON, and offer TMP practitioners guidelines for the selection of task planners from an engineering perspective.
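As a rough illustration of the lazy-evaluation idea behind this kind of TMP integration (the exact PETLON algorithm is in the paper; the helpers below are hypothetical), the following Python sketch invokes the expensive motion planner only for actions that appear in the current best task plan. For the returned plan to be task-level optimal, `estimate_cost` must be an optimistic (admissible) lower bound on the true motion cost.

```python
def task_level_optimal_plan(candidate_plans, estimate_cost, motion_cost):
    """Repeatedly pick the cheapest-looking plan, then replace its
    estimated action costs with true motion-planner costs until the
    best plan is fully evaluated (hence provably optimal)."""
    evaluated = {}  # navigation action -> true motion cost
    while True:
        def plan_cost(plan):
            return sum(evaluated.get(a, estimate_cost(a)) for a in plan)
        best = min(candidate_plans, key=plan_cost)
        pending = [a for a in best if a not in evaluated]
        if not pending:
            return best, plan_cost(best)   # all costs exact for this plan
        for action in pending:             # evaluate only what 'best' needs
            evaluated[action] = motion_cost(action)

# Toy usage with two candidate plans and a fake motion planner:
plans = [("A->B", "B->C"), ("A->D", "D->C")]
best, cost = task_level_optimal_plan(
    plans,
    estimate_cost=lambda a: 1.0,   # optimistic straight-line estimate
    motion_cost=lambda a: {"A->B": 3.0, "B->C": 2.0,
                           "A->D": 2.5, "D->C": 2.0}[a])
```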


Sensors, 2020, Vol 20 (21), pp. 6238
Author(s): Payal Mahida, Seyed Shahrestani, Hon Cheung

Wayfinding and navigation can present substantial challenges to visually impaired (VI) people. Some of the significant aspects of these challenges arise from the difficulty of knowing the location of a moving person with enough accuracy. Positioning and localization in indoor environments require unique solutions. Furthermore, positioning is one of the critical aspects of any navigation system that can assist a VI person with their independent movement. The other essential features of a typical indoor navigation system include pathfinding, obstacle avoidance, and capabilities for user interaction. This work focuses on positioning a VI person with enough precision for use in indoor navigation. We aim to achieve this by utilizing only the capabilities of a typical smartphone; more specifically, our proposed approach is based on the use of the smartphone's accelerometer, gyroscope, and magnetometer. We consider the indoor environment to be divided into microcells, with the vertex of each microcell assigned two-dimensional local coordinates. A regression-based analysis is used to train a multilayer perceptron neural network to map the inertial sensor measurements to the coordinates of the microcell vertex corresponding to the position of the smartphone. To test our proposed solution, we used IPIN2016, a publicly available multivariate dataset that divides the indoor environment into cells tagged with the inertial sensor data of a smartphone, to generate the training and validation sets. Our experiments show that our proposed approach can achieve a remarkable prediction accuracy of more than 94%, with a 0.65 m positioning error.
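A minimal sketch of this regression step is shown below; random data stands in for the IPIN2016 features, and the feature layout and network hyperparameters are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in data: each row holds accelerometer, gyroscope, and magnetometer
# readings (3 axes each); each target is the 2-D local coordinate of the
# microcell vertex corresponding to the smartphone's position.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 9))              # inertial feature vectors
y = rng.uniform(0.0, 10.0, size=(1000, 2))  # vertex (x, y) in metres

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
)
model.fit(X, y)
xy_estimate = model.predict(X[:1])          # estimated smartphone position
```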


Author(s): Donato Di Paola, Annalisa Milella, Grazia Cicirelli, Arcangelo Distante

This paper presents a novel vision-based approach for indoor environment monitoring by a mobile robot. The proposed system is based on computer vision methods to match the current scene with a stored one, looking for new or removed objects. The matching process uses both keypoint features and colour information. A PCA-SIFT algorithm is employed for feature extraction and matching. Colour-based segmentation is performed separately, using HSV coding. A fuzzy logic inference system is applied to fuse information from both steps and decide whether a significant variation of the scene has occurred. Results from experimental tests demonstrate the feasibility of the proposed method in robot surveillance applications.
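The sketch below illustrates the two-cue comparison with OpenCV; note the substitutions: plain SIFT stands in for PCA-SIFT (which OpenCV does not ship), and a crisp threshold rule stands in for the fuzzy inference system, with all thresholds chosen arbitrarily.

```python
import cv2

def scene_changed(reference_bgr, current_bgr,
                  match_thresh=0.5, colour_thresh=0.7):
    """Flag a scene variation when both the keypoint cue and the
    colour cue disagree with the stored reference image."""
    # Keypoint cue: fraction of reference features re-found in the scene.
    sift = cv2.SIFT_create()
    _, des_ref = sift.detectAndCompute(reference_bgr, None)
    _, des_cur = sift.detectAndCompute(current_bgr, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des_ref, des_cur, k=2)
            if m.distance < 0.75 * n.distance]        # Lowe ratio test
    match_score = len(good) / max(len(des_ref), 1)

    # Colour cue: similarity of hue/saturation histograms in HSV space.
    def hs_hist(img):
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        h = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
        return cv2.normalize(h, h).flatten()
    colour_score = cv2.compareHist(hs_hist(reference_bgr),
                                   hs_hist(current_bgr), cv2.HISTCMP_CORREL)

    # Crisp stand-in for the fuzzy fusion of the two cues.
    return match_score < match_thresh and colour_score < colour_thresh
```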


2013, Vol 441, pp. 796-800
Author(s): Chun Shu Li, Zhi Hua Yang, Gen Qun Cui, Bo Jin

Aiming at odor source localization in an obstacle-filled, wind-varying indoor environment, a new odor source localization algorithm for a single mobile robot is proposed. Using information about the wind and the concentration gradient, wasps can find an odor source in a short time. However, it is very difficult for mobile robots to mimic the behaviors of wasps exactly. Therefore, in addition to the bionics, a BP (backpropagation) neural network is adopted for the mobile robot to find the odor source. Control strategies for the plume-tracing mobile robot are proposed, comprising an intelligent plume-tracing algorithm and a collision avoidance algorithm based on an improved potential grid method. These algorithms were integrated to enable the robot to trace plumes in obstructed indoor environments. Experimental results have demonstrated the capability of this kind of plume-tracing mobile robot.
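The following sketch only gestures at the learning component: a small backpropagation-trained network mapping wind and concentration-gradient measurements to a heading command. The input features, the wasp-inspired target behaviour, and all values are invented for illustration and are not the paper's training scheme.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor  # BP-trained network

# Hypothetical training pairs: wind direction (rad), wind speed (m/s),
# and local concentration gradient in, commanded heading (rad) out.
rng = np.random.default_rng(1)
wind_dir = rng.uniform(-np.pi, np.pi, 500)
wind_speed = rng.uniform(0.1, 2.0, 500)
grad = rng.uniform(-1.0, 1.0, 500)
X = np.column_stack([wind_dir, wind_speed, grad])
# Wasp-inspired target behaviour: head upwind, biased by the gradient.
y = wind_dir + np.pi + 0.3 * grad

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=1)
net.fit(X, y)
heading_cmd = net.predict([[0.2, 1.0, 0.5]])[0]  # commanded heading (rad)
```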


Sensors, 2020, Vol 20 (18), pp. 5409
Author(s): Gonzalo Farias, Ernesto Fabregas, Enrique Torres, Gaëtan Bricas, Sebastián Dormido-Canto, ...

This work presents the development and implementation of a distributed navigation system based on object recognition algorithms. The main goal is to introduce advanced image processing algorithms and artificial intelligence techniques into the teaching of mobile robot control. The autonomous system consists of a wheeled mobile robot with an integrated color camera. The robot navigates through a laboratory scenario where the track and several traffic signals must be detected and recognized using the images acquired with its on-board camera. The images are sent to a computer server that runs a computer vision algorithm to recognize the objects. The computer calculates the corresponding speeds of the robot according to the detected object, and the speeds are sent back to the robot, which acts to carry out the corresponding manoeuvre. Three different algorithms have been tested both in simulation and in a real mobile robot laboratory. The results show an average success rate of 84% for object recognition in experiments with the real mobile robot platform.
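A minimal sketch of the robot-side exchange is given below. The paper does not specify a wire protocol, so the server address, the length-prefixed JPEG framing, and the two-float speed reply are all assumptions made purely for illustration.

```python
import socket
import struct

import cv2
import numpy as np

SERVER = ("192.168.0.10", 5005)  # hypothetical recognition server

def request_speeds(sock, frame_bgr):
    """Send one camera frame, receive (left, right) wheel speeds."""
    ok, jpg = cv2.imencode(".jpg", frame_bgr)        # compress the frame
    assert ok
    sock.sendall(struct.pack("!I", len(jpg)) + jpg.tobytes())
    left, right = struct.unpack("!ff", sock.recv(8))  # server's reply
    return left, right

# Usage (assumes a recognition server is listening at SERVER):
# frame = np.zeros((240, 320, 3), dtype=np.uint8)  # stand-in camera frame
# with socket.create_connection(SERVER) as sock:
#     left, right = request_speeds(sock, frame)
```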


1999, Vol 11 (1), pp. 39-44
Author(s): Motoji Yamamoto, Nobuhiro Ushimi, Akira Mohri

Sensor-based navigation using a target-direction sensor for mobile robots among unknown obstacles in the workspace is discussed. The advantage of target-direction information over robot-location information is its robustness to measurement error in online navigation. The convergence of navigation using target-direction information is discussed. An actual sensor system using two CdS cells to measure the target direction is proposed. Using target-direction information, we present a new sensor-based navigation algorithm for environments with unknown obstacles. The algorithm is based on target-direction information, unlike sensor-based motion planning algorithms that rely on robot-location information. Using the sensor-based navigation system, we conducted a navigation experiment and simulations in environments with unknown obstacles.
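As a sketch of how two CdS cells can drive the robot toward the target (the gains, readings, and control law below are illustrative assumptions, not the paper's controller): two photocells aimed left and right of the heading give a differential signal proportional to the bearing of a light beacon at the target.

```python
def steering_from_cds(left_reading, right_reading,
                      k_turn=1.0, v_forward=0.2):
    """Turn toward the target until both CdS cells read the same value;
    the differential reading approximates the target bearing error."""
    bearing_error = right_reading - left_reading  # >0: target to the right
    omega = k_turn * bearing_error                # angular velocity command
    return v_forward, omega                       # (linear, angular) command

# Example: target slightly to the right, so the robot turns right.
v, w = steering_from_cds(left_reading=0.42, right_reading=0.58)
```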


1999, Vol 11 (1), pp. 45-53
Author(s): Shinji Kotani, Ken’ichi Kaneko, Tatsuya Shinoda, Hideo Mori, ...

This paper describes a navigation system for an autonomous mobile robot operating outdoors. The robot uses vision to detect landmarks and DGPS information to determine its initial position and orientation. The vision system detects landmarks in the environment by referring to an environmental model. As the robot moves, it calculates its position by conventional dead reckoning and matches landmarks to the environmental model to reduce the error in the position calculation. The robot's initial position and orientation are calculated from the coordinates of the first and second locations acquired by DGPS. Subsequent orientations and positions are derived by map matching. We implemented the system on a mobile robot, Harunobu 6. Experiments in real environments verified the effectiveness of the proposed navigation system.
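The initialization and dead-reckoning steps can be sketched as follows, assuming (as a simplification not stated in the abstract) that the DGPS fixes are already converted to local east/north coordinates in metres and that the robot is a differential-drive platform.

```python
import math

def initial_pose_from_dgps(p1, p2):
    """Initial position/orientation from the first two DGPS fixes:
    the heading is that of the segment traversed between them."""
    (e1, n1), (e2, n2) = p1, p2
    theta0 = math.atan2(n2 - n1, e2 - e1)
    return e2, n2, theta0

def dead_reckon(x, y, theta, d_left, d_right, wheel_base):
    """Conventional differential-drive dead reckoning between
    landmark-based (map matching) corrections."""
    d = 0.5 * (d_left + d_right)                  # distance travelled
    theta += (d_right - d_left) / wheel_base      # heading change
    return x + d * math.cos(theta), y + d * math.sin(theta), theta

x, y, th = initial_pose_from_dgps((0.0, 0.0), (1.0, 2.0))
x, y, th = dead_reckon(x, y, th, d_left=0.10, d_right=0.12, wheel_base=0.5)
```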


Sensors, 2020, Vol 20 (10), pp. 2922
Author(s): Dinh Van Nam, Kim Gon-Woo

Robotic mapping and odometry are the primary competencies of a navigation system for an autonomous mobile robot. However, the state estimate of the robot typically drifts over time, and its accuracy degrades critically when using only proprioceptive sensors in indoor environments. Moreover, the accuracy of an ego-motion estimate is severely diminished in dynamic environments because of the influence of both dynamic objects and light reflection. To this end, a multi-sensor fusion technique is employed to bound the navigation error by exploiting the complementary nature of an Inertial Measurement Unit (IMU) and the bearing information of a camera. In this paper, we propose a robust, tightly coupled Visual-Inertial Navigation System (VINS) based on multi-stage outlier removal within the Multi-State Constraint Kalman Filter (MSCKF) framework. First, an efficient and lightweight VINS algorithm is developed for the robust state estimation of a mobile robot using a stereo camera and an IMU in dynamic indoor environments. Furthermore, we propose strategies to deal with the impact of dynamic objects through multi-stage outlier removal based on feedback from the estimated states. The proposed VINS is implemented and validated on public datasets. In addition, we develop a sensor system and evaluate the VINS algorithm in dynamic indoor environments under different scenarios. The experimental results show better performance in terms of robustness and accuracy, with low computational complexity, compared to state-of-the-art approaches.
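One common ingredient of such outlier removal in filtering-based VINS is a Mahalanobis/chi-square gate on measurement residuals; the sketch below shows this generic test only, not the paper's multi-stage scheme, and the toy matrices are invented for illustration.

```python
import numpy as np
from scipy.stats import chi2

def gate_residual(r, H, P, R, confidence=0.95):
    """Accept a feature measurement only if its residual r is plausible
    under the innovation covariance S = H P H^T + R (chi-square test).
    Residuals from moving objects tend to fail this gate."""
    S = H @ P @ H.T + R
    d2 = float(r.T @ np.linalg.solve(S, r))   # squared Mahalanobis distance
    return d2 <= chi2.ppf(confidence, df=r.size)

# Example: a 2-D reprojection residual against a toy state covariance.
r = np.array([0.03, -0.01])
H = np.eye(2)
P = 0.01 * np.eye(2)
R = 1e-4 * np.eye(2)
accept = gate_residual(r, H, P, R)
```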


Author(s): Gonzalo Farias, Ernesto Fabregas, Enrique Torres, Gaetan Bricas, Sebastián Dormido-Canto, ...

This work presents the development and implementation of a distributed navigation system based on computer vision. The autonomous system consists of a wheeled mobile robot with an integrated colour camera. The robot navigates through a laboratory scenario where the track and several traffic signals must be detected and recognized using the images acquired with its on-board camera. The images are sent to a computer server that processes them and calculates the corresponding speeds of the robot using a cascade of trained classifiers. These speeds are sent back to the robot, which acts to carry out the corresponding manoeuvre. The classifier cascade must be trained before experimentation with two sets of positive and negative images, and the number of images in these sets must be chosen to limit the training time and avoid overtraining the system.
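On the detection side, this kind of trained cascade is applied as sketched below with OpenCV; the cascade itself would be trained offline from the positive/negative image sets (e.g. with OpenCV's opencv_traincascade tool), and the file names here are hypothetical.

```python
import cv2

# Load a cascade trained offline from positive/negative image sets.
cascade = cv2.CascadeClassifier("traffic_signal_cascade.xml")

frame = cv2.imread("frame.jpg")                 # one on-board camera image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # cascades run on grayscale

# Scan the image at multiple scales for the trained signal pattern.
detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in detections:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
# The server would then map the recognized signal to wheel speed commands.
```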

