Method for SLAM Based on Omnidirectional Vision: A Delayed-EKF Approach

2017 ◽  
Vol 2017 ◽  
pp. 1-14 ◽  
Author(s):  
Rodrigo Munguía ◽  
Carlos López-Franco ◽  
Emmanuel Nuño ◽  
Adriana López-Franco

This work presents a method for implementing a visual-based simultaneous localization and mapping (SLAM) system using omnidirectional vision data, with application to autonomous mobile robots. In SLAM, a mobile robot operates in an unknown environment, using only on-board sensors to simultaneously build a map of its surroundings and track its position within it. SLAM is perhaps one of the most fundamental problems in robotics: solving it is a prerequisite for building truly autonomous mobile robots. The visual sensor used in this work is an omnidirectional vision sensor, which provides a wide field of view, an advantage for a mobile robot performing autonomous navigation tasks. Since the sensor is monocular, a method to recover the depth of the features is required; to estimate the unknown depth we propose a novel stochastic triangulation technique. The proposed system can be applied in indoor or cluttered environments for performing visual-based navigation when a GPS signal is not available. Experiments with synthetic and real data are presented to validate the proposal.
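The paper's stochastic triangulation is not reproduced here, but the geometric idea behind delayed feature initialization can be illustrated: the depth of a feature observed from a monocular camera is left undetermined until the robot has moved enough that two bearing rays, taken from two poses, intersect with useful parallax. The sketch below is a minimal least-squares two-ray triangulation under that assumption; all names and values are illustrative and not the authors' implementation.

```python
# Minimal sketch: recover a feature's 3-D position from two bearing-only
# observations taken at different robot poses. Illustrative only; the
# paper's stochastic triangulation additionally propagates uncertainty.
import numpy as np

def triangulate(p1, b1, p2, b2):
    """Least-squares depths d1, d2 such that p1 + d1*b1 ~= p2 + d2*b2."""
    A = np.stack([b1, -b2], axis=1)            # 3x2 system in (d1, d2)
    d, *_ = np.linalg.lstsq(A, p2 - p1, rcond=None)
    x1 = p1 + d[0] * b1                        # point on first ray
    x2 = p2 + d[1] * b2                        # point on second ray
    return 0.5 * (x1 + x2)                     # midpoint of closest approach

p1 = np.array([0.0, 0.0, 0.0])                 # first camera position
p2 = np.array([1.0, 0.0, 0.0])                 # second position (baseline)
landmark = np.array([2.0, 3.0, 0.5])           # ground-truth feature
b1 = (landmark - p1) / np.linalg.norm(landmark - p1)  # unit bearing rays
b2 = (landmark - p2) / np.linalg.norm(landmark - p2)
print(triangulate(p1, b1, p2, b2))             # ~ [2. 3. 0.5]
```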

1999 ◽  
Vol 11 (1) ◽  
pp. 1-1
Author(s):  
Kiyoshi Komoriya

Mobility, or locomotion, is as important a function for robots as manipulation. A robot can enlarge its workspace by locomotion, and it can better recognize its environment by moving around and observing its surroundings from various directions with its sensors. Much research has been done on mobile robots, and the field appears to be mature; nevertheless, research activity on robot mobility is still very high. For example, 22% of the sessions at ICRA'98 (the International Conference on Robotics and Automation) and 24% of the sessions at IROS'98 (the International Conference on Intelligent Robots and Systems) dealt with issues directly related to mobile robots. One of the main reasons may be that intelligent mobile robots are considered the closest step toward practical autonomous robot applications. This special issue covers a variety of mobile robot research, from mobile mechanisms, localization, and navigation to remote control through networks. The first paper, entitled "Control of an Omnidirectional Vehicle with Multiple Modular Steerable Drive Wheels," by M. Hashimoto et al., deals with locomotion mechanisms. The authors propose an omnidirectional mobile mechanism consisting of modular steerable drive wheels. The omnidirectional function of mobile mechanisms will be an important part of the human-friendly robot in the near future, enabling flexible movement in indoor environments. The next three papers focus on sensing to localize and navigate a robot. The second paper, entitled "High-Speed Measurement of Normal Wall Direction by Ultrasonic Sensor," by A. Ohya et al., proposes a method to measure the normal direction of walls with an ultrasonic array sensor. The third paper, entitled "Self-Position Detection System Using a Visual Sensor for Mobile Robots," by T. Tanaka et al., determines the position of the robot by measuring marks such as name plates and fire alarm lamps with a visual sensor. In the fourth paper, entitled "Development of Ultra-Wide-Angle Laser Range Sensor and Navigation of a Mobile Robot in a Corridor Environment," by Y. Ando et al., a very wide view-angle sensor is realized using five laser fan-beam projectors and three CCD cameras. The next three papers discuss navigation problems. The fifth paper, entitled "Autonomous Navigation of an Intelligent Vehicle Using 1-Dimensional Optical Flow," by M. Yamada and K. Nakazawa, discusses navigation based on visual feedback; here, navigation is realized using general and qualitative knowledge of the environment. The sixth paper, entitled "Development of Sensor-Based Navigation for Mobile Robots Using Target Direction Sensor," by M. Yamamoto et al., proposes a new sensor-based navigation algorithm for environments with unknown obstacles. The seventh paper, entitled "Navigation Based on Vision and DGPS Information for Mobile Robots," by S. Kotani et al., describes a navigation system for an autonomous mobile robot in an outdoor environment; the unique point of the paper is the use of landmarks and a differential global positioning system to determine robot position and orientation. The last paper addresses the relationship between mobile robots and computer networks: entitled "Direct Mobile Robot Teleoperation via Internet," by K. Kawabata et al., it proposes direct teleoperation of a mobile robot via the Internet. Such network-based robotics will be an important field of robotics application.
We sincerely thank all of the contributors to this special issue for their cooperation from the planning stage to the review process. Many thanks also go to the reviewers for their excellent work. We will be most happy if this issue aids readers in understanding recent trends in mobile robot research and furthers interest in this research field.


2018 ◽  
Vol 7 (3.33) ◽  
pp. 28
Author(s):  
Asilbek Ganiev ◽  
Kang Hee Lee

In this paper, we use the Robot Operating System (ROS), which is designed for working with mobile robots. ROS provides simultaneous localization and mapping of the environment, and here it is used to autonomously navigate a simulated mobile robot between specified points. While navigating between the start point and the target point, the mobile robot bypasses obstacles and, if necessary, replans its route to reach the goal point.
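The abstract does not include code, but with the standard ROS 1 navigation stack, sending a robot between specified points typically looks like the minimal sketch below. It assumes a running move_base node with a map; the frame name and goal coordinates are placeholders.

```python
#!/usr/bin/env python
# Hedged sketch: sending one navigation goal to the ROS 1 navigation
# stack (move_base). Assumes ROS and move_base are installed and running;
# coordinates and frame are illustrative.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('send_goal')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()                      # block until move_base is up

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'      # plan in the map frame
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 2.0        # placeholder target point
goal.target_pose.pose.position.y = 1.0
goal.target_pose.pose.orientation.w = 1.0     # identity orientation

client.send_goal(goal)                        # planner avoids obstacles and
client.wait_for_result()                      # replans as needed en route
```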


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Zhen Tong

As a sensor with a wide field of view, the panoramic vision sensor is efficient and convenient for perceiving characteristic information about the surrounding environment, and it plays an important role in the artistic experience of designed images. The transformation between vision and other sensory experiences in art design integrates sound, image, texture, taste, and smell through reasonable rules, creating more distinctive cross-border art design works. To improve the sensory experience that art design works bring to the audience, combining vision with other sensory experiences can maximize the advantages of multiple information dissemination methods; this work therefore combines the omnidirectional vision sensor with the sensory experience of art design images. In the method part, this article introduces the omnidirectional vision sensor, art design images, and the modes and content of sensory experience, as well as hyperbolic concave mirror theory and the Micusik perspective projection imaging model. In the experimental part, the experimental environment, objects, and procedures are described. In the analysis part, the article examines several aspects: an image database dependency test, performance, comparison of different distortion types, false and missed detection rates, algorithm runtime comparison, sensory experience analysis, and feature point screening. Regarding the feelings evoked by the art design images: for the first image, 87.21% of the audience reported feeling happy, indicating that its main idea can bring joy to people; for the second image, the audience's feelings were mostly sad; for the third image, more than half of the audience felt melancholy; and for the fourth image, 69.34% of the audience felt calm. This indicates that differences in the content of art design images can bring people different sensory experiences.


2017 ◽  
Vol 36 (12) ◽  
pp. 1363-1386 ◽  
Author(s):  
Patrick McGarey ◽  
Kirk MacTavish ◽  
François Pomerleau ◽  
Timothy D Barfoot

Tethered mobile robots are useful for exploration in steep, rugged, and dangerous terrain. A tether can provide a robot with robust communications, power, and mechanical support, but it also constrains motion. In cluttered environments, the tether will wrap around a number of intermediate 'anchor points', complicating navigation. We show that by measuring the length of tether deployed and the bearing to the most recent anchor point, we can formulate a tethered simultaneous localization and mapping (TSLAM) problem that allows us to estimate the pose of the robot and the positions of the anchor points using only low-cost, nonvisual sensors. This information is used by the robot to safely return along an outgoing trajectory while avoiding tether entanglement. We are motivated by TSLAM as a building block to aid conventional camera- and laser-based approaches to simultaneous localization and mapping (SLAM), which tend to fail in dark and/or dusty environments. Unlike conventional range-bearing SLAM, the TSLAM problem must account for the fact that the tether-length measurements are a function of the robot's pose and all the intermediate anchor-point positions. While this fact has implications for the sparsity that can be exploited in our method, we show that a solution to the TSLAM problem can still be found, and we formulate two approaches: (i) an online particle filter based on FastSLAM and (ii) an efficient, offline batch solution. We demonstrate that either method outperforms odometry alone, both in simulation and in experiments using our TReX (Tethered Robotic eXplorer) mobile robot operating in flat indoor and steep outdoor environments. For the indoor experiment, we compare each method on the same dataset with ground truth, showing that batch TSLAM outperforms particle-filter TSLAM in localization and mapping accuracy, owing to superior anchor-point detection, data association, and outlier rejection.
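The measurement structure described above, where the deployed tether length depends on the robot pose and every intermediate anchor point, can be sketched directly. The toy model below sums the anchor-to-anchor segments plus the final segment to the robot; names and values are illustrative, not the authors' code.

```python
# Minimal sketch of the TSLAM tether-length measurement model: the
# measured length is a function of ALL anchor points, not just the robot.
import numpy as np

def tether_length(robot_xy, anchors):
    """Deployed tether = sum of anchor-to-anchor segments plus the
    segment from the most recent anchor to the robot."""
    pts = np.vstack([anchors, robot_xy])       # anchor chain ending at robot
    return np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))

anchors = np.array([[0.0, 0.0],                # tether spool / first anchor
                    [2.0, 1.0],                # intermediate anchor points
                    [3.0, 3.0]])
robot = np.array([5.0, 3.5])
print(tether_length(robot, anchors))           # scalar length measurement
```

Because each measurement couples the robot pose to the whole anchor chain, the estimation problem is less sparse than standard range-bearing SLAM, which is the point the abstract makes about exploitable sparsity.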


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Chittaranjan Paital ◽  
Saroj Kumar ◽  
Manoj Kumar Muni ◽  
Dayal R. Parhi ◽  
Prasant Ranjan Dhal

Purpose
Smooth, autonomous navigation of a mobile robot in a cluttered environment is the main purpose of the proposed technique, which covers both localization and path planning. Navigation requires the mobile robot to reach the target from the start point while avoiding obstacles in a static or dynamic environment. Several techniques concerning the navigational problems of mobile robots have already been proposed, yet none confirms that the navigated path is optimal.
Design/methodology/approach
Therefore, a modified grey wolf optimization (MGWO) controller is designed for autonomous navigation of a wheeled mobile robot (WMR). GWO is a nature-inspired algorithm that mimics the social hierarchy and hunting behavior of wolves; here it is modified to define optimal positions and provide better control over the robot as it moves from source to target in a highly cluttered environment while negotiating obstacles. The controller is validated in the V-REP simulation platform, coupled with real-time experiments in the laboratory using a Khepera-III robot.
Findings
During the experiments, the proposed technique proves efficient in motion control and path planning, as the robot reaches its target position without any collision. The V-REP simulation and real-time experimental results are recorded and compared against each other, showing good agreement: the deviation between them is approximately 5%, an acceptable range for motion planning. Path length and time taken to reach the target are both recorded and shown in the respective tables.
Originality/value
The literature survey suggests that most approaches are validated either through mathematical convergence or on a mobile robot, without real-time experimental authentication. Given the lack of clear evidence regarding use of the MGWO controller for navigation of mobile robots in both settings, simulation and real-time experiment, this work serves as a guiding link for use of similar approaches in other forms of robots.
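For readers unfamiliar with GWO, a compact sketch of the standard (unmodified) update step follows. It is not the authors' MGWO controller; the cost function, bounds, and hyperparameters are placeholders, with distance to a goal point standing in for a path cost.

```python
# Sketch of the standard grey wolf optimizer: candidate solutions are
# pulled toward the three best wolves (alpha, beta, delta), with the
# encircling coefficient decaying from 2 to 0 over the iterations.
import numpy as np

def gwo(f, dim, n_wolves=20, iters=200, lo=-10.0, hi=10.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (n_wolves, dim))
    for t in range(iters):
        fit = np.apply_along_axis(f, 1, X)
        alpha, beta, delta = X[np.argsort(fit)[:3]]   # social hierarchy
        a = 2.0 * (1.0 - t / iters)                   # decays 2 -> 0
        new_X = np.zeros_like(X)
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(X.shape), rng.random(X.shape)
            A = 2 * a * r1 - a                        # exploration/exploitation
            C = 2 * r2
            new_X += leader - A * np.abs(C * leader - X)
        X = np.clip(new_X / 3.0, lo, hi)              # average of three pulls
    fit = np.apply_along_axis(f, 1, X)
    return X[np.argmin(fit)]

# Toy usage: minimize distance to a goal point (stand-in for path cost).
goal = np.array([3.0, 4.0])
print(gwo(lambda p: np.linalg.norm(p - goal), dim=2))  # ~ [3, 4]
```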


Robotics ◽  
2020 ◽  
Vol 9 (2) ◽  
pp. 40
Author(s):  
Hirokazu Madokoro ◽  
Hanwool Woo ◽  
Stephanie Nix ◽  
Kazuhito Sato

This study was conducted to develop original benchmark datasets that simultaneously include indoor and outdoor visual features. Indoor images include outdoor visual features to a degree that varies greatly with time, weather, and season. We obtained time-series scene images using a wide field of view (FOV) camera mounted on a mobile robot moving along a 392-m route, in both directions and in three seasons, through an indoor environment surrounded by transparent glass walls and windows. We propose a unified method for extracting, characterizing, and recognizing visual landmarks that is robust to human occlusion in a real environment in which robots coexist with people. Using our method, we conducted an evaluation experiment to recognize scenes divided into up to 64 zones at fixed intervals. The results obtained with the datasets reveal the performance and characteristics of meta-parameter optimization, the mapping of characteristics to category maps, and recognition accuracy. Moreover, we visualized similarities between scene images using category maps and identified cluster boundaries obtained from the mapping weights.
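Category maps of this kind are commonly built with self-organizing-map-style networks that place similar descriptors in nearby cells, so that cluster boundaries appear where neighboring weight vectors differ sharply. The toy sketch below illustrates that general idea only; the data, map size, and training schedules are placeholders, not the paper's implementation.

```python
# Toy self-organizing map: scene descriptors are mapped onto an 8x8 grid
# so that similar descriptors land in nearby cells ("category map" idea).
import numpy as np

rng = np.random.default_rng(1)
descriptors = rng.random((200, 16))          # stand-in scene feature vectors
grid = rng.random((8, 8, 16))                # 8x8 map of weight vectors

for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)              # learning-rate decay
    sigma = 3.0 * (1 - epoch / 20) + 0.5     # neighborhood-radius decay
    for x in descriptors:
        d = np.linalg.norm(grid - x, axis=2)
        bi, bj = np.unravel_index(np.argmin(d), d.shape)  # best-matching unit
        ii, jj = np.mgrid[0:8, 0:8]
        h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
        grid += lr * h[..., None] * (x - grid)  # pull neighborhood toward x

# After training, nearby cells hold similar descriptors; sharp differences
# between adjacent weight vectors mark cluster boundaries.
```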


2019 ◽  
Vol 4 (2) ◽  
pp. 78 ◽  
Author(s):  
Dwiky Erlangga ◽  
Endang D ◽  
Rosalia H S ◽  
Sunarto Sunarto ◽  
Kuat Rahardjo T.S ◽  
...  

Autonomous navigation is essential in mobile robotics and consists of four main components: perception, localization, path planning, and motion control. Mobile robots create maps of a space so that they can carry out commands to move from one place to another using autonomous navigation. Maps are built with the Simultaneous Localization and Mapping (SLAM) algorithm gmapping, which processes data from an RGB-D camera sensor and a bumper, converted to laser-scan and point-cloud form, to obtain perception, while a wheel encoder and gyroscope provide the odometry data used to construct travel maps and perform autonomous navigation. The system consists of three sub-systems: sensors as inputs, a single-board computer for processing, and actuators as movers. Autonomous navigation is managed through the navigation stack of the Robot Operating System (ROS), using the Adaptive Monte Carlo Localization (AMCL) algorithm for localization and global planning, and the Dynamic Window Approach (DWA) algorithm for local planning. Test results show the system can provide depth data converted to laser scans, bumper data, and odometry data to the single-board-computer-based ROS, so that a mobile robot teleoperated from a workstation can build 2-dimensional grid maps with a total accuracy error rate of 0.987%. Using the maps, sensor data, and odometry, the mobile robot can perform autonomous navigation consistently, replan its path, avoid static obstacles, and continue localizing until it reaches the destination point.
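AMCL's core idea, Monte Carlo localization, can be illustrated with a toy particle filter: predict particles through a motion model, weight them by measurement likelihood, and resample. The 1-D example below is purely illustrative; it is not the navigation stack's implementation, and the landmark, noise levels, and readings are placeholders.

```python
# Toy Monte Carlo localization step (the idea behind AMCL) in 1-D:
# motion update, measurement weighting, then resampling.
import numpy as np

rng = np.random.default_rng(0)
particles = rng.uniform(0, 10, 500)          # candidate robot positions
weights = np.full(500, 1 / 500)

def measurement_likelihood(z, particles, landmark=10.0, sigma=0.3):
    expected = np.abs(landmark - particles)  # predicted range per particle
    return np.exp(-0.5 * ((z - expected) / sigma) ** 2)

# Motion update: move 1 m with noise, then weight by a 4-m range reading.
particles += 1.0 + rng.normal(0, 0.1, 500)
weights *= measurement_likelihood(4.0, particles)
weights /= weights.sum()

# Resample proportionally to weight (real AMCL uses adaptive sampling).
idx = rng.choice(500, 500, p=weights)
particles, weights = particles[idx], np.full(500, 1 / 500)
print(particles.mean())                      # position estimate, ~ 6.0
```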


Author(s):  
Lorenzo Fernández Rojo ◽  
Luis Paya ◽  
Francisco Amoros ◽  
Oscar Reinoso

Mobile robots have spread to many different environments, where they must move autonomously to fulfill their assigned tasks. To this end, the robot must build a model of the environment and estimate its position using this model. These two problems are often faced simultaneously, in a process known as SLAM (simultaneous localization and mapping); it is very common because a robot that begins moving in a previously unknown environment must generate a model from scratch while simultaneously estimating its position. This chapter focuses on the use of computer vision to solve this problem. The main objective is to develop and test an algorithm that solves the SLAM problem using two sources of information: (1) the global appearance of omnidirectional images captured by a camera mounted on the mobile robot and (2) the robot's internal odometry. A hybrid metric-topological approach is proposed to solve the SLAM problem.
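Global-appearance methods describe each omnidirectional image with a single holistic vector and localize by comparing that vector against the descriptors stored at map nodes. The sketch below uses low-frequency 2-D FFT magnitudes as the descriptor, which is one common choice for panoramic images but not necessarily the chapter's exact one; the data are random stand-ins.

```python
# Sketch of topological localization by global appearance: one holistic
# descriptor per panorama, matched to map nodes by Euclidean distance.
import numpy as np

def global_descriptor(img, k=8):
    """Magnitudes of the first k x k 2-D FFT coefficients. A rotation of
    a panoramic image is roughly a circular column shift, which changes
    phase but largely preserves these magnitudes."""
    F = np.fft.fft2(img)
    return np.abs(F[:k, :k]).ravel()

rng = np.random.default_rng(2)
map_images = [rng.random((64, 256)) for _ in range(5)]  # stand-in panoramas
map_nodes = np.array([global_descriptor(im) for im in map_images])

query = map_images[3] + rng.normal(0, 0.05, (64, 256))  # noisy revisit
d = np.linalg.norm(map_nodes - global_descriptor(query), axis=1)
print(np.argmin(d))                                     # -> 3 (correct node)
```

In a hybrid metric-topological scheme, a match like this selects the topological node, and odometry refines the metric pose within it.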


Author(s):  
Mahdi Haghshenas-Jaryani ◽  
Hakki Erhan Sevil ◽  
Liang Sun

Abstract. This paper presents the concept of teaming up snake robots, as unmanned ground vehicles (UGVs), with unmanned aerial vehicles (UAVs) for autonomous navigation and obstacle avoidance. The snake robots navigate in cluttered environments based on visual servoing by a co-robot UAV. It is assumed that the snake robots have no means to map the surrounding environment, detect obstacles, or self-localize; these tasks are allocated to the UAV, which uses visual sensors to track the UGVs. The obtained images are used for geo-localization and for mapping the environment. Computer vision methods are utilized to detect obstacles, find obstacle clusters, and then build a map based on Probabilistic Threat Exposure Map (PTEM) construction. A path-planner module determines the heading direction and velocity of the snake robot, and a combined heading-velocity controller makes the snake robot follow the desired trajectories using the lateral undulatory gait. A series of simulations was carried out to analyze the snake robot's maneuverability and to provide a proof of concept by navigating the snake robot through an environment with two obstacles using UAV visual servoing. The results show the feasibility of the concept and the effectiveness of the integrated system for navigation.
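A threat-exposure-style map can be sketched as a field where each detected obstacle contributes a Gaussian "threat", and the planner prefers headings that make progress toward the goal while descending that field. The example below is an illustrative stand-in for the PTEM-based planner, not the paper's formulation; the obstacle positions, spread, and weights are placeholders.

```python
# Illustrative threat-exposure field and greedy heading selection.
import numpy as np

obstacles = np.array([[2.0, 2.0], [4.0, 1.0]])   # obstacle cluster centers
sigma = 0.8                                       # threat spread (placeholder)

def threat(p):
    """Summed Gaussian threat contribution of all obstacles at point p."""
    d2 = np.sum((obstacles - p) ** 2, axis=1)
    return np.sum(np.exp(-d2 / (2 * sigma ** 2)))

def best_heading(p, goal, n=36, step=0.3, w=2.0):
    """Pick the heading trading off progress to goal against threat."""
    angles = np.linspace(0, 2 * np.pi, n, endpoint=False)
    candidates = p + step * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    cost = np.linalg.norm(candidates - goal, axis=1) + \
           w * np.array([threat(c) for c in candidates])
    return angles[np.argmin(cost)]

start, goal = np.array([0.0, 0.0]), np.array([5.0, 3.0])
print(np.degrees(best_heading(start, goal)))      # heading command in degrees
```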


2015 ◽  
Vol 27 (4) ◽  
pp. 318-326 ◽  
Author(s):  
Shin'ichi Yuta

[Image: Autonomous mobile robot in RWRC 2014]
The Tsukuba Challenge, an open experiment for autonomous mobile robotics researchers, lets mobile robots travel in a real, populated city environment. Building on the 2013 challenge, the task of Tsukuba Challenge 2014 required mobile robots to navigate autonomously to their destination while looking for and finding specific persons sitting in the environment. A total of 48 teams (54 robots) sought success in this complex challenge.

