A Navigation Framework for Mobile Robots with 3D LiDAR and Monocular Camera

Author(s): Xiangrui Meng ◽ Jun Cai ◽ Yelan Wu ◽ Shuang Liang ◽ Zhiqiang Cao ◽ ...
2010 ◽ Vol 2010 (0) ◽ pp. _1P1-E01_1-_1P1-E01_4
Author(s): Kazuki Otake ◽ Naoki Tokunaga ◽ Keiji Nagatani ◽ Kazuya Yoshida

2020 ◽ Vol 32 (6) ◽ pp. 1137-1153
Author(s): Ryusuke Miyamoto ◽ Miho Adachi ◽ Hiroki Ishida ◽ Takuto Watanabe ◽ Kouchi Matsutani ◽ ...

3D LiDAR is the most popular external sensor for autonomous mobile robots. However, robots that operate in everyday human environments are usually also equipped with cameras so that they can capture the same information humans perceive, even though autonomous movement itself can be achieved with 3D LiDAR alone. Studies on autonomous movement using only visual sensors remain relatively rare, yet such an approach is effective in reducing the sensing cost per robot. To reduce the number of external sensors required for autonomous movement, this paper proposes a novel visual navigation scheme that uses only a monocular camera as an external sensor. The key idea is to select, based on the results of semantic segmentation, a target point in the input image toward which the robot can move, so that road following and obstacle avoidance are performed simultaneously. In addition, a scheme called virtual LiDAR is proposed that uses the semantic segmentation results to estimate the robot's orientation relative to the current path within the traversable area. Experiments conducted during the Tsukuba Challenge 2019 demonstrated that the robot can operate in a real environment containing obstacles such as humans and other robots, provided that correct semantic segmentation results are available.
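To make the two ideas in the abstract concrete, the sketch below shows one way a "virtual LiDAR" scan and a target point could be derived from a binary traversable mask produced by semantic segmentation. It is a minimal illustration under assumed choices, not the authors' implementation: the bottom-centre ray origin, the pixel-space ray march, the range-weighted mean angle as the heading estimate, the look-ahead target point, and all function names and parameters are assumptions introduced here for illustration.

```python
import numpy as np

def virtual_lidar(traversable, n_rays=61, fov_deg=120.0):
    """Cast rays over a binary traversable mask (H x W, True = drivable)
    from the bottom-centre pixel; return (angles, ranges) in pixels.
    Angle 0 points straight 'up' the image; positive angles sweep right."""
    h, w = traversable.shape
    origin = np.array([h - 1.0, w / 2.0])            # (row, col) ray origin
    angles = np.deg2rad(np.linspace(-fov_deg / 2, fov_deg / 2, n_rays))
    ranges = np.zeros(n_rays)
    for i, a in enumerate(angles):
        step = np.array([-np.cos(a), np.sin(a)])     # one-pixel step per iteration
        pos = origin.copy()
        r = 0.0
        while True:
            pos += step
            row, col = int(round(pos[0])), int(round(pos[1]))
            inside = 0 <= row < h and 0 <= col < w
            if not inside or not traversable[row, col]:
                break                                # left the drivable area
            r += 1.0
        ranges[i] = r
    return angles, ranges

def estimate_heading_and_target(traversable, lookahead=0.8):
    """Heading offset = range-weighted mean ray angle (a crude proxy for
    orientation relative to the path); target point = a pixel part-way
    along the ray closest to that heading."""
    angles, ranges = virtual_lidar(traversable)
    if ranges.sum() == 0:
        return 0.0, None                             # nothing traversable ahead
    heading = float(np.sum(ranges * angles) / np.sum(ranges))
    k = int(np.argmin(np.abs(angles - heading)))     # ray nearest the heading
    d = lookahead * ranges[k]
    h, w = traversable.shape
    target = (int(h - 1 - d * np.cos(angles[k])),    # row of target point
              int(w / 2 + d * np.sin(angles[k])))    # col of target point
    return heading, target

# Toy example: a straight vertical "road" offset to the right of the image centre.
mask = np.zeros((120, 160), dtype=bool)
mask[20:, 70:130] = True
heading, target = estimate_heading_and_target(mask)
print(np.rad2deg(heading), target)
```

In the paper's setting, the mask would come from a semantic segmentation network applied to the monocular camera image, and the resulting heading and target point would feed the robot's steering behaviour; the sketch above only captures the geometric idea of scanning the traversable region like a planar LiDAR.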

