Visual SLAM Based Spatial Recognition and Visualization Method for Mobile AR Systems

2022 ◽ Vol 5 (1) ◽ pp. 11
Author(s): Jooeun Song, Joongjin Kook

The simultaneous localization and mapping (SLAM) market is growing rapidly with advances in machine learning, drone, and augmented reality (AR) technologies. However, because there is no open-source SLAM library aimed at AR content development, most SLAM researchers have to carry out their own research and development to customize SLAM. In this paper, we propose an open-source-based mobile markerless AR system built on our own Visual SLAM pipeline. To implement the mobile AR system, we use ORB-SLAM3 and the Unity Engine, run the system in a real environment, and verify the results in the Unity Engine's mobile viewer. Through these experiments, we confirm that the Unity Engine and the SLAM system are tightly integrated and communicate smoothly. We also expect this research to accelerate the growth of SLAM technology.
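The coupling described above, in which a native Visual SLAM process streams camera poses to a Unity-based mobile viewer, can be sketched in a few lines. The sketch below assumes ORB-SLAM3 (or any tracker) delivers each camera pose as a 4x4 matrix and that the Unity client listens on a local TCP port; the port number and JSON message layout are illustrative assumptions, not the paper's actual interface.

import json
import socket

import numpy as np


def pose_to_message(T_cw: np.ndarray) -> bytes:
    # Flatten a 4x4 camera-to-world pose into a newline-delimited JSON record.
    # A real Unity bridge would also convert to Unity's left-handed frame.
    msg = {
        "position": T_cw[:3, 3].tolist(),              # camera position (x, y, z)
        "rotation": T_cw[:3, :3].flatten().tolist(),   # row-major 3x3 rotation
    }
    return (json.dumps(msg) + "\n").encode("utf-8")


def stream_poses(poses, host="127.0.0.1", port=9050):
    # Send each tracked pose to a listening viewer over TCP.
    with socket.create_connection((host, port)) as sock:
        for T_cw in poses:
            sock.sendall(pose_to_message(T_cw))

On the Unity side, a small C# script would read the same socket and apply each received pose to the AR camera every frame; the exact handshake depends on the viewer implementation.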

2020 ◽ Vol 2020 ◽ pp. 1-14
Author(s): Jianjun Ni, Tao Gong, Yafei Gu, Jinxiu Zhu, Xinnan Fan

Robot simultaneous localization and mapping (SLAM) is a very important and useful technology in the robotic field. However, the environmental map constructed by traditional visual SLAM methods contains little semantic information, which cannot satisfy the needs of complex applications. A semantic map can address this problem efficiently and has therefore become a research hotspot. This paper proposes an improved deep residual network (ResNet)-based semantic SLAM method for monocular vision robots. In the proposed approach, an improved image-matching algorithm based on feature points is presented to enhance the anti-interference ability of the algorithm. A robust feature-point extraction method is then adopted in the front-end module of the SLAM system, which effectively reduces the probability of camera tracking loss. In addition, an improved keyframe insertion method is introduced into the visual SLAM system to enhance its stability while the robot is turning and moving. Furthermore, an improved ResNet model is proposed to extract semantic information from the environment and complete the construction of the semantic map. Finally, various experiments are conducted, and the results show that the proposed method is effective.
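The paper's specific matching and keyframe-insertion improvements are not reproduced here, but the kind of feature-point matching with outlier rejection that such a front end relies on can be sketched with OpenCV; the detector, ratio threshold, and feature count below are generic assumptions rather than the authors' settings.

import cv2


def match_features(img1, img2, ratio=0.75):
    # Detect ORB features in two grayscale images and keep only matches that
    # pass Lowe's ratio test, which discards ambiguous correspondences and
    # improves robustness against interference.
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    if des1 is None or des2 is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = []
    for pair in matcher.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return good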


Author(s): Lorenzo Fernández Rojo, Luis Paya, Francisco Amoros, Oscar Reinoso

Mobile robots have spread to many different environments, where they must move autonomously to fulfill their assigned tasks. To this end, the robot has to build a model of the environment and estimate its position using that model. These two problems are often faced simultaneously, a process known as SLAM (simultaneous localization and mapping); it is very common because a robot that begins moving in a previously unknown environment must generate a model from scratch while estimating its position at the same time. This chapter focuses on the use of computer vision to solve this problem. The main objective is to develop and test an algorithm that solves the SLAM problem using two sources of information: (1) the global appearance of omnidirectional images captured by a camera mounted on the mobile robot and (2) the robot's internal odometry. A hybrid metric-topological approach is proposed to solve the SLAM problem.
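A minimal sketch of the global-appearance idea, assuming each omnidirectional image is reduced to a low-resolution vector and compared by Euclidean distance; the chapter's actual descriptors may differ, so this only illustrates the holistic (non-feature-based) approach.

import cv2
import numpy as np


def global_descriptor(image, size=(32, 8)):
    # Build a holistic descriptor of a panoramic image: convert to grayscale,
    # downsample, flatten, and normalize.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) if image.ndim == 3 else image
    small = cv2.resize(gray, size, interpolation=cv2.INTER_AREA)
    vec = small.astype(np.float32).flatten()
    return vec / (np.linalg.norm(vec) + 1e-8)


def appearance_distance(d1, d2):
    # Smaller distance means the two views look globally similar.
    return float(np.linalg.norm(d1 - d2))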


Author(s): Gerhard Reitmayr, Tobias Langlotz, Daniel Wagner, Alessandro Mulloni, Gerhard Schall, ...

2018 ◽ Vol 49 (1) ◽ pp. 391-394
Author(s): Bing Yu, Yang Li, Chao Ping Chen, Nizamuddin Maitlo, Jiaqi Chen, ...

Author(s): Lorenzo Fernández Rojo, Luis Paya, Francisco Amoros, Oscar Reinoso

Nowadays, mobile robots have spread to many different environments, where they must move autonomously to fulfill their assigned tasks. To this end, the robot has to build a model of the environment and estimate its position using that model. These two problems are often faced simultaneously, a process known as SLAM (Simultaneous Localization and Mapping); it is very common because a robot that begins moving in a previously unknown environment must generate a model from scratch while estimating its position at the same time. This work focuses on the use of computer vision to solve this problem. The main objective is to develop and test an algorithm that solves the SLAM problem using two sources of information: (a) the global appearance of omnidirectional images captured by a camera mounted on the mobile robot and (b) the robot's internal odometry. A hybrid metric-topological approach is proposed to solve the SLAM problem.
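The second information source listed above, the robot's internal odometry, and its combination with appearance comparison in a hybrid metric-topological map can be sketched as follows: odometry predicts the metric pose, and a new topological node is created only when the current view does not resemble any stored node. The thresholds and node structure are illustrative assumptions, not the authors' formulation.

import math


def integrate_odometry(x, y, theta, d_trans, d_rot):
    # Dead-reckoning update of the metric pose from wheel odometry
    # (translation d_trans along the heading, followed by rotation d_rot).
    x += d_trans * math.cos(theta)
    y += d_trans * math.sin(theta)
    theta = (theta + d_rot) % (2.0 * math.pi)
    return x, y, theta


def maybe_add_node(nodes, pose, descriptor, distance_fn, new_node_threshold=0.3):
    # Add a topological node unless the current view matches a stored one,
    # in which case the matching node is a loop-closure candidate.
    for node in nodes:
        if distance_fn(node["descriptor"], descriptor) < new_node_threshold:
            return node
    node = {"pose": pose, "descriptor": descriptor}
    nodes.append(node)
    return node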


2017 ◽ Vol 34 (4) ◽ pp. 1217-1239
Author(s): Chen-Chien Hsu, Cheng-Kai Yang, Yi-Hsing Chien, Yin-Tien Wang, Wei-Yen Wang, ...

Purpose – FastSLAM is a popular method for solving the simultaneous localization and mapping (SLAM) problem. However, as the number of landmarks in real environments increases, each particle must compare every measurement with all existing landmarks, and the execution speed becomes too slow for real-time navigation. This paper therefore aims to improve the computational efficiency and estimation accuracy of conventional SLAM algorithms.

Design/methodology/approach – To address this problem, this paper presents a computationally efficient SLAM (CESLAM) algorithm in which odometer information is used to update the robot's pose in each particle. When a measurement has maximum likelihood with a known landmark in the particle, the particle state is updated before the landmark estimates are updated.

Findings – Simulation results show that the proposed CESLAM overcomes the heavy computational burden while improving the accuracy of localization and map building. To evaluate the method in practice, a Pioneer 3-DX robot with a Kinect sensor is used to develop an RGB-D-based computationally efficient visual SLAM (CEVSLAM) system based on Speeded-Up Robust Features (SURF). Experimental results confirm that the proposed CEVSLAM system successfully estimates the robot pose and builds the map with satisfactory accuracy.

Originality/value – The proposed CESLAM algorithm avoids the time-consuming, unnecessary comparisons of existing FastSLAM algorithms. Simulations show that CESLAM greatly improves the accuracy of robot pose and landmark estimation. Combining CESLAM and SURF, the authors establish CEVSLAM, which significantly improves estimation accuracy and computational efficiency. Practical experiments with a Kinect visual sensor show that the variance and average error of the proposed CEVSLAM are smaller than those of other visual SLAM algorithms.
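The data-association step described under Design/methodology/approach, where each measurement is compared with the landmarks stored in a particle and the one with maximum likelihood is selected, can be sketched as a gated nearest-neighbour search. The gate value and variable names below are illustrative assumptions; the actual CESLAM update equations are given in the paper.

import numpy as np


def associate_measurement(z, landmark_means, landmark_covs, gate=5.99):
    # Return the index of the landmark most likely to have produced
    # measurement z, or -1 if nothing falls inside the validation gate
    # (in which case a new landmark would be created).
    best_idx, best_d2 = -1, gate
    for i, (m, S) in enumerate(zip(landmark_means, landmark_covs)):
        innovation = z - m
        d2 = float(innovation @ np.linalg.inv(S) @ innovation)  # squared Mahalanobis distance
        if d2 < best_d2:
            best_idx, best_d2 = i, d2
    return best_idx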

