NYK’s Approach for Autonomous Navigation – Structure of Action Planning System and Demonstration Experiments

2019 ◽  
Vol 1357 ◽  
pp. 012013 ◽  
Author(s):  
Koji Kutsuna ◽  
Hideyuki Ando ◽  
Takuya Nakashima ◽  
Satoru Kuwahara ◽  
Shinya Nakamura

2019 ◽  
Vol 54 (2) ◽  
pp. 210-214
Author(s):  
Koji Kutsuna ◽  
Hideyuki Ando ◽  
Takuya Nakashima ◽  
Satoru Kuwahara ◽  
Shinya Nakamura

2020 ◽  
Vol 10 (9) ◽  
pp. 3219 ◽  
Author(s):  
Sung-Hyeon Joo ◽  
Sumaira Manzoor ◽  
Yuri Goncalves Rocha ◽  
Sang-Hyeon Bae ◽  
Kwang-Hee Lee ◽  
...  

Humans have an innate ability to model, perceive, and plan in their environment while simultaneously performing tasks; replicating this ability remains a challenging problem in the study of robotic cognition. We address this issue by proposing a neuro-inspired cognitive navigation framework composed of three major components: a semantic modeling framework (SMF), a semantic information processing (SIP) module, and a semantic autonomous navigation (SAN) module, which together enable the robot to perform cognitive tasks. The SMF creates an environment database using the Triplet Ontological Semantic Model (TOSM) and builds semantic models of the environment. Environment maps generated from these semantic models are stored in an on-demand database and downloaded by the SIP and SAN modules when the robot requires them. The SIP module contains active environment perception components for recognition and localization; it also feeds relevant perception information to the behavior planner so that tasks can be performed safely. The SAN module uses a behavior planner connected to a knowledge base and a behavior database, which it queries during action planning and execution. The main contributions of our work are the development of the TOSM, the integration of the SMF, SIP, and SAN modules into a single framework, and the interaction between these components based on findings from cognitive science. We deploy our cognitive navigation framework on a mobile robot platform, considering implicit and explicit constraints for autonomous robot navigation in a real-world environment. The robotic experiments demonstrate the validity of the proposed framework.
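The abstract describes the TOSM as a triplet-based ontological model that planner modules query on demand. The paper does not give the model's concrete schema, so the following is only a minimal sketch of the triplet idea, with illustrative entity and predicate names that are not from the paper:

```python
# Minimal sketch of a triplet-style semantic store, assuming the TOSM
# represents the environment as (subject, predicate, object) facts.
# Entity and predicate names below are illustrative, not from the paper.

class TripletStore:
    def __init__(self):
        self.triples = set()

    def add(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))

    def query(self, subject=None, predicate=None, obj=None):
        """Return all triples matching the pattern (None acts as a wildcard)."""
        return [t for t in self.triples
                if (subject is None or t[0] == subject)
                and (predicate is None or t[1] == predicate)
                and (obj is None or t[2] == obj)]

# Example: a behavior planner asking which rooms connect to a corridor.
store = TripletStore()
store.add("room_101", "connectedTo", "corridor_1")
store.add("room_102", "connectedTo", "corridor_1")
store.add("corridor_1", "contains", "charging_station")

neighbors = sorted(t[0] for t in store.query(predicate="connectedTo",
                                             obj="corridor_1"))
print(neighbors)  # ['room_101', 'room_102']
```

A real ontology layer (e.g. OWL/RDF tooling) would add class hierarchies and inference; the point here is only the query-by-pattern interaction between the semantic model and a planner.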


1990 ◽  
Vol 22 (2) ◽  
pp. 32-35 ◽  
Author(s):  
Marsha Forest ◽  
Evelyn Lusthaus

2020 ◽  
Vol 07 (04) ◽  
pp. 373-389
Author(s):  
Asif Ahmed Neloy ◽  
Rafia Alif Bindu ◽  
Sazid Alam ◽  
Ridwanul Haque ◽  
Md. Saif Ahammod Khan ◽  
...  

An improved version of Alpha-N, a self-powered, wheel-driven Automated Delivery Robot (ADR), is presented in this study. Alpha-N-V2 is capable of navigating autonomously by detecting and avoiding objects or obstacles in its path. For autonomous navigation and path planning, Alpha-N uses a vector map and computes the shortest path with the Grid Count Method (GCM) of Dijkstra’s Algorithm. An RFID Reading System (RRS) is assembled in Alpha-N to read landmarks marked with Radio Frequency Identification (RFID) tags. With the help of the RFID tags, Alpha-N verifies the path between source and destination and calibrates its current position. Alongside the RRS and GCM, an Object Detection Module (ODM), built with Faster R-CNN on a VGGNet-16 architecture, detects and avoids obstacles and supports the Path Planning System (PPS). In the testing phase, the following results were acquired from Alpha-N: the ODM exhibits an accuracy of [Formula: see text], the RRS shows [Formula: see text] accuracy, and the PPS maintains an accuracy of [Formula: see text]. This version of Alpha-N shows significant improvement in performance and usability compared with the previous version.
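The abstract does not define the Grid Count Method precisely, so the sketch below assumes the common reading: Dijkstra's algorithm on a 4-connected occupancy grid where path cost is the count of grid cells traversed (unit edge cost). Function and variable names are illustrative:

```python
# Dijkstra shortest path on a 4-connected occupancy grid, with the path
# cost measured as a count of cell steps (unit edge cost). This is an
# assumed interpretation of the abstract's Grid Count Method, not the
# paper's exact formulation.
import heapq

def dijkstra_grid(grid, start, goal):
    """grid: 2D list, 0 = free, 1 = obstacle. Returns step count or None."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    pq = [(0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1  # unit cost per cell step
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(dijkstra_grid(grid, (0, 0), (2, 0)))  # 6
```

With unit edge costs, Dijkstra reduces to breadth-first search; the heap version is kept because a vector-map variant would weight edges by traversal cost.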


Sensors ◽  
2019 ◽  
Vol 20 (1) ◽  
pp. 220 ◽  
Author(s):  
Noé Pérez-Higueras ◽  
Alberto Jardón ◽  
Ángel Rodríguez ◽  
Carlos Balaguer

Navigation and exploration in 3D environments is still a challenging task for autonomous robots that move on the ground. Robots for Search and Rescue missions must deal with unstructured and very complex scenarios. This paper presents a path planning system for navigation and exploration of ground robots in such situations. We use (unordered) point clouds as the main sensory input without building any explicit representation of the environment from them. These 3D points are employed as space samples by an Optimal RRT planner (RRT*) to compute safe and efficient paths. The use of an objective function for path construction and the natural exploratory behaviour of the RRT* planner make it appropriate for the tasks. The approach is evaluated in different simulations showing the viability of autonomous navigation and exploration in complex 3D scenarios.
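The key idea in the abstract is drawing the planner's samples directly from the sensed point cloud instead of from free space in an explicit map. A minimal sketch of that sampling scheme follows, using a plain RRT (the RRT* rewiring step is omitted) with a toy spherical-obstacle collision check; all names and the obstacle model are illustrative, not from the paper:

```python
# Hedged sketch: an RRT-style planner whose random samples are drawn from
# a sensed 3D point cloud rather than a map of free space. Simplified RRT
# only (no RRT* rewiring or cost function); collision model is a toy.
import math
import random

def rrt_from_points(cloud, start, goal, obstacles,
                    step=0.5, iters=2000, seed=1):
    """cloud: list of (x, y, z) points; obstacles: list of (center, radius)."""
    rng = random.Random(seed)

    def collision_free(p):
        return all(math.dist(p, c) > r for c, r in obstacles)

    tree = {start: None}  # node -> parent
    for _ in range(iters):
        sample = rng.choice(cloud)                       # sample the cloud
        nearest = min(tree, key=lambda n: math.dist(n, sample))
        d = math.dist(nearest, sample)
        if d == 0:
            continue
        ratio = min(1.0, step / d)                       # steer toward sample
        new = tuple(n + ratio * (s - n) for n, s in zip(nearest, sample))
        if collision_free(new):
            tree[new] = nearest
            if math.dist(new, goal) < step:              # connect the goal
                tree[goal] = new
                path, node = [], goal
                while node is not None:
                    path.append(node)
                    node = tree[node]
                return path[::-1]
    return None

# Usage: a toy cloud of points along a straight corridor.
cloud = [(float(i), 0.0, 0.0) for i in range(6)]
path = rrt_from_points(cloud, (0.0, 0.0, 0.0), (5.0, 0.0, 0.0), obstacles=[])
print(path[0], path[-1])
```

The full method additionally uses an objective function during path construction (the RRT* cost term), which is what trades the RRT's raw exploration for path quality.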

