An Improved Deep Residual Network-Based Semantic Simultaneous Localization and Mapping Method for Monocular Vision Robot

2020
Vol 2020
pp. 1-14
Author(s):  
Jianjun Ni ◽  
Tao Gong ◽  
Yafei Gu ◽  
Jinxiu Zhu ◽  
Xinnan Fan

Robot simultaneous localization and mapping (SLAM) is a very important and useful technology in the robotics field. However, the environmental map constructed by traditional visual SLAM methods contains little semantic information, which cannot satisfy the needs of complex applications. The semantic map can deal with this problem efficiently and has become a research hotspot. This paper proposes an improved deep residual network (ResNet)-based semantic SLAM method for monocular vision robots. In the proposed approach, an improved image matching algorithm based on feature points is presented to enhance the anti-interference ability of the algorithm. Then, a robust feature point extraction method is adopted in the front-end module of the SLAM system, which can effectively reduce the probability of camera tracking loss. In addition, an improved keyframe insertion method is introduced into the visual SLAM system to enhance the stability of the system while the robot is turning and moving. Furthermore, an improved ResNet model is proposed to extract the semantic information of the environment and complete the construction of a semantic map. Finally, various experiments are conducted, and the results show that the proposed method is effective.
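
As a concrete illustration of the kind of feature-point matching front end the abstract describes, the following is a minimal sketch in Python using OpenCV's ORB features with Lowe's ratio test; the detector choice, feature count, and ratio threshold are illustrative assumptions, not the authors' implementation.

```python
import cv2

def match_features(img1, img2, ratio=0.75):
    """Match ORB feature points between two frames with Lowe's ratio test.

    Rejecting ambiguous matches via the ratio test is one common way to
    improve the anti-interference ability of a feature-based front end.
    """
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Hamming norm suits ORB's binary descriptors; k=2 enables the ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    knn = matcher.knnMatch(des1, des2, k=2)

    good = [pair[0] for pair in knn
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance]
    return kp1, kp2, good
```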

Robotica
2019
Vol 38 (2)
pp. 256-270
Author(s):  
Jiyu Cheng ◽  
Yuxiang Sun ◽  
Max Q.-H. Meng

Visual simultaneous localization and mapping (visual SLAM) has been well developed in recent decades. To facilitate tasks such as path planning and exploration, traditional visual SLAM systems usually provide mobile robots with a geometric map, which overlooks semantic information. To address this problem, inspired by the recent success of deep neural networks, we combine them with the visual SLAM system to conduct semantic mapping. Both the geometric and semantic information are projected into 3D space to generate a 3D semantic map. We also use an optical-flow-based method to deal with moving objects, so that our method can work robustly in dynamic environments. We performed experiments on the public TUM dataset and on our recorded office dataset. Experimental results demonstrate the feasibility and impressive performance of the proposed method.
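
To make the optical-flow idea concrete, here is a minimal sketch of flagging dynamic pixels with dense optical flow; approximating the ego-motion-induced flow by the frame-wide median and using a fixed threshold are assumptions for illustration, and the paper's actual criterion may differ.

```python
import cv2
import numpy as np

def dynamic_mask(prev_gray, curr_gray, thresh=2.0):
    """Flag likely-moving pixels using dense optical flow.

    Pixels whose flow deviates strongly from a crude ego-motion estimate
    (the frame-wide median flow) are marked dynamic, so they can be
    excluded from tracking and from the semantic map.
    """
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    ego = np.median(flow.reshape(-1, 2), axis=0)   # crude ego-motion flow
    residual = np.linalg.norm(flow - ego, axis=2)  # per-pixel deviation
    return residual > thresh                       # True = dynamic pixel
```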


Sensors
2020
Vol 20 (19)
pp. 5570
Author(s):  
Yiming Ding ◽  
Zhi Xiong ◽  
Wanling Li ◽  
Zhiguo Cao ◽  
Zhengchun Wang

The combination of biomechanics and inertial pedestrian navigation research provides a very promising approach for pedestrian positioning in environments where the Global Positioning System (GPS) signal is unavailable. However, in practical applications such as fire rescue and indoor security, inertial sensor-based pedestrian navigation systems face various challenges, especially step length estimation errors and heading drift during running and sprinting. In this paper, a trinal-node simultaneous localization and occupancy grid mapping method is proposed, based on two thigh-worn inertial measurement units (IMUs) and one waist-worn IMU. Specifically, gait detection and segmentation are realized by zero-crossing detection of the difference between the thigh pitch angles. A piecewise function between the step length and the probability distribution of the waist horizontal acceleration is established to achieve accurate step length estimation in both regular walking and vigorous motion. In addition, a simultaneous localization and mapping method based on occupancy grids is introduced, which uses the historical trajectory to improve the pedestrian's pose estimate. The experiments show that the proposed trinal-node pedestrian inertial odometer can identify and segment each gait cycle during walking, running, and sprinting. The average step length estimation error is no more than 3.58% of the total travel distance at motion speeds from 1.23 m/s to 3.92 m/s. In combination with the proposed occupancy-grid-based simultaneous localization and mapping method, the localization error is less than 5 m in a single-story building of 2643.2 m².
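
A minimal sketch of the gait segmentation step is shown below: gait-cycle boundaries are taken at zero crossings of the difference between the two thigh pitch angles, and a placeholder piecewise step-length function stands in for the paper's fitted model (the breakpoint and coefficients are invented for illustration, not the paper's values).

```python
import numpy as np

def gait_cycle_boundaries(pitch_left, pitch_right):
    """Segment gait cycles via zero crossings of the thigh pitch difference."""
    diff = np.asarray(pitch_left) - np.asarray(pitch_right)
    signs = np.sign(diff)
    # sample indices where the difference flips from negative to positive
    return np.where((signs[:-1] < 0) & (signs[1:] > 0))[0]

def step_length(acc_feature):
    """Hypothetical piecewise step-length model (placeholder coefficients).

    The paper maps a statistic of the waist horizontal acceleration to
    step length with a piecewise function; the regimes and numbers here
    are illustrative assumptions only.
    """
    if acc_feature < 2.0:                     # walking regime
        return 0.3 + 0.15 * acc_feature
    return 0.5 + 0.25 * np.sqrt(acc_feature)  # running/sprinting regime
```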


2021
Vol 33 (8)
pp. 2591
Author(s):  
Chaoyang Chen ◽  
Qi He ◽  
Qiubo Ye ◽  
Guangsong Yang ◽  
Cheng-Fu Yang

Author(s):  
Lorenzo Fernández Rojo ◽  
Luis Paya ◽  
Francisco Amoros ◽  
Oscar Reinoso

Mobile robots have spread to many different environments, where they must move autonomously to fulfill an assigned task. To this end, the robot needs to build a model of the environment and estimate its position using this model. These two problems are often faced simultaneously, in a process known as SLAM (simultaneous localization and mapping): when a robot begins moving in a previously unknown environment, it must generate a model from scratch while estimating its position at the same time. This chapter focuses on the use of computer vision to solve this problem. The main objective is to develop and test an algorithm that solves the SLAM problem using two sources of information: (1) the global appearance of omnidirectional images captured by a camera mounted on the mobile robot and (2) the robot's internal odometry. A hybrid metric-topological approach is proposed to solve the SLAM problem.
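
To illustrate the global-appearance idea, the following sketch compresses each omnidirectional image into a holistic descriptor and matches it against the stored nodes of a topological map; the downsampled-grayscale descriptor and cosine matching are simple stand-ins, since the chapter's actual descriptor is not specified here.

```python
import cv2
import numpy as np

def global_descriptor(omni_img, size=(32, 32)):
    """Compress an omnidirectional image into a unit-norm appearance vector."""
    gray = cv2.cvtColor(omni_img, cv2.COLOR_BGR2GRAY)
    vec = cv2.resize(gray, size).astype(np.float32).ravel()
    return vec / (np.linalg.norm(vec) + 1e-9)

def closest_node(descriptor, node_descriptors):
    """Find the topological node whose stored appearance is most similar.

    node_descriptors is an (N, D) array of unit-norm vectors, so the dot
    product gives cosine similarity; odometry would then refine the
    metric position within the matched node.
    """
    return int(np.argmax(node_descriptors @ descriptor))
```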


2011
Vol 366
pp. 90-94
Author(s):  
Ying Min YI ◽  
Yu Hui

How to identify objects is a hot issue in robot simultaneous localization and mapping (SLAM) with monocular vision. In this paper, an algorithm for wheeled robot SLAM with landmark identification based on monocular vision is proposed. In the observation step, landmark identification and position estimation are performed by image processing and analysis, which converts the image projection seen by the wheeled robot and the geometric relations of spatial objects into the robot's relative distance and angle to the landmarks. The overall algorithm follows the recursive order of prediction, observation, data association, update, and mapping to achieve simultaneous localization and map building. Compared with active vision, three-dimensional vision, and stereo vision algorithms, the proposed algorithm can identify environmental objects while maintaining smooth movement.
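
Since the abstract names the classic prediction/observation/data-association/update recursion, here is a minimal EKF correction step for a single range-bearing landmark observation; the three-element pose state and the noise handling are generic textbook choices, not the paper's exact formulation.

```python
import numpy as np

def ekf_update(x, P, z, landmark, R):
    """One EKF correction step for a range-bearing landmark observation.

    x = [px, py, theta] is the robot pose, z = [range, bearing] is the
    measured distance and angle to a landmark at a known 2D position;
    prediction and data association are assumed done upstream.
    """
    dx, dy = landmark[0] - x[0], landmark[1] - x[1]
    q = dx**2 + dy**2
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - x[2]])
    # Jacobian of the measurement model w.r.t. the robot pose
    H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q),  0.0],
                  [ dy / q,          -dx / q,          -1.0]])
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    innov = z - z_hat
    innov[1] = (innov[1] + np.pi) % (2 * np.pi) - np.pi  # wrap bearing
    return x + K @ innov, (np.eye(3) - K @ H) @ P
```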


2007
Vol 7 (1)
pp. 190-194
Author(s):  
Wu Zu Yu ◽  
Huang Xin Han ◽  
Li Xin de ◽  
Wang Min ◽  
Yan Huai Cheng
