Semantic Mapping Based on YOLOv3 and Visual SLAM

2020 ◽  
Vol 57 (20) ◽  
pp. 201012
Author(s):  
邹斌 Zou Bin ◽  
林思阳 Lin Siyang ◽  
尹智帅 Yin Zhishuai

Robotica ◽  
2019 ◽  
Vol 38 (2) ◽  
pp. 256-270 ◽  
Author(s):  
Jiyu Cheng ◽  
Yuxiang Sun ◽  
Max Q.-H. Meng

Summary: Visual simultaneous localization and mapping (visual SLAM) has been well developed in recent decades. To facilitate tasks such as path planning and exploration, traditional visual SLAM systems usually provide mobile robots with a geometric map, which overlooks semantic information. To address this problem, inspired by the recent success of deep neural networks, we combine a deep neural network with the visual SLAM system to conduct semantic mapping. Both the geometric and semantic information are projected into 3D space to generate a 3D semantic map. We also use an optical-flow-based method to handle moving objects, so that our method works robustly in dynamic environments. We have performed experiments on the public TUM dataset and on our recorded office dataset. Experimental results demonstrate the feasibility and impressive performance of the proposed method.
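
A minimal sketch of the projection step described above (not the authors' implementation; the pinhole intrinsics and depth scale below are assumed TUM-style values): each pixel that has a semantic label and a valid depth is back-projected into 3D to build a labelled point cloud.

```python
import numpy as np

# Assumed TUM-style pinhole intrinsics and depth scale, for illustration only.
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5
DEPTH_SCALE = 5000.0  # TUM RGB-D depth images store metres * 5000

def semantic_point_cloud(depth, labels):
    """Return an (N, 4) array of x, y, z in metres plus a per-point class id.

    depth  -- (H, W) uint16 depth image
    labels -- (H, W) int array of per-pixel class ids (e.g. from a CNN)
    """
    v, u = np.nonzero(depth)            # keep only pixels with valid depth
    z = depth[v, u] / DEPTH_SCALE       # raw depth -> metres
    x = (u - CX) * z / FX               # pinhole back-projection
    y = (v - CY) * z / FY
    return np.column_stack([x, y, z, labels[v, u]])
```

Accumulating these per-frame clouds in the world frame via the SLAM poses and fusing the labels yields the 3D semantic map; the dynamic-object handling would additionally drop pixels flagged as moving by the optical-flow check.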


2019 ◽  
Vol 39 (2) ◽  
pp. 543-570 ◽  
Author(s):  
Mingyang Geng ◽  
Suning Shang ◽  
Bo Ding ◽  
Huaimin Wang ◽  
Pengfei Zhang

Author(s):  
Xingwu Ji ◽  
Zheng Gong ◽  
Ruihang Miao ◽  
Wuyang Xue ◽  
Rendong Ying

Information ◽  
2021 ◽  
Vol 12 (6) ◽  
pp. 236
Author(s):  
Ling Zhu ◽  
Guangshuai Jin ◽  
Dejun Gao

Freely available satellite imagery improves the research and production of land-cover products at the global scale or over large areas. The integration of land-cover products combines the advantages or characteristics of several products to generate new products and meet special demands. This study presents an ontology-based semantic mapping approach for integrating land-cover products, using a hybrid ontology with EAGLE (EIONET Action Group on Land monitoring in Europe) matrix elements as the shared vocabulary to link and compare concepts from multiple local ontologies. Ontology mappings based on terms, attributes and instances are combined to obtain the semantic similarity between heterogeneous land-cover products and realise integration at the schema level. Moreover, through the collection and interpretation of ground verification points, the local accuracy of each source product is evaluated using the indicator kriging method. Two integration models are developed that combine semantic similarity and local accuracy. Taking NLCD (National Land Cover Database) and FROM-GLC-Seg (Finer Resolution Observation and Monitoring-Global Land Cover-Segmentation) as source products, the second-level class refinement of the GlobeLand30 land-cover product serves as an example: the forest class is subdivided into broad-leaf, coniferous and mixed forest. Results show that the highest second-level accuracies are 82.6% for broad-leaf, 72.0% for coniferous and 60.0% for mixed forest.
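
The two integration models are only described at a high level above; the sketch below is a purely illustrative reading of the idea (the blending function, weights and example scores are assumptions, not values from the paper): blend the schema-level semantic similarity with the kriging-interpolated local accuracy and adopt, per location, the class of the source product with the highest combined score.

```python
# Illustrative only: combine ontology-level similarity with local accuracy.

def integration_score(semantic_similarity: float,
                      local_accuracy: float,
                      alpha: float = 0.5) -> float:
    """Blend schema-level similarity with locally interpolated accuracy.

    semantic_similarity -- ontology-mapping similarity in [0, 1]
    local_accuracy      -- kriging-interpolated accuracy in [0, 1]
    alpha               -- assumed trade-off weight (not from the paper)
    """
    return alpha * semantic_similarity + (1.0 - alpha) * local_accuracy

# Hypothetical scores at one location: pick the better source product.
candidates = {
    "NLCD": integration_score(0.90, 0.70),
    "FROM-GLC-Seg": integration_score(0.80, 0.85),
}
best_source = max(candidates, key=candidates.get)  # -> "FROM-GLC-Seg"
```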


2021 ◽  
Vol 11 (4) ◽  
pp. 1953
Author(s):  
Francisco Martín ◽  
Fernando González ◽  
José Miguel Guerrero ◽  
Manuel Fernández ◽  
Jonatan Ginés

The perception and identification of visual stimuli from the environment is a fundamental capability of autonomous mobile robots. Current deep learning techniques make it possible to identify and segment objects of interest in an image. This paper presents a novel algorithm to segment an object's space from a deep segmentation of an image taken by a 3D camera. The proposed approach solves the boundary-pixel problem that appears when segmented pixels are mapped directly to their correspondences in the point cloud. We validate our approach against baseline approaches on real images taken by a 3D camera, showing that our method outperforms them in accuracy and reliability. As an application of the proposed algorithm, we present a semantic mapping approach for a mobile robot's indoor environments.
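
The boundary-pixel problem arises because pixels on an object's contour mix foreground and background depth, so mapping them directly into the point cloud scatters points between the object and the surface behind it. The paper's own remedy is not reproduced here; the sketch below shows one common mitigation under assumed parameters (`erosion` and `max_jump` are illustrative): erode the segmentation mask and reject pixels whose depth departs sharply from the local consensus.

```python
import numpy as np
from scipy import ndimage

def filter_boundary_pixels(mask, depth, erosion=2, max_jump=0.1):
    """Return a boolean mask of pixels considered safe to project to 3D.

    mask     -- (H, W) bool segmentation mask for one object
    depth    -- (H, W) float depth image in metres
    erosion  -- assumed number of erosion iterations at the contour
    max_jump -- assumed depth-discontinuity threshold in metres
    """
    inner = ndimage.binary_erosion(mask, iterations=erosion)  # shrink contour
    median = ndimage.median_filter(depth, size=5)             # local consensus
    smooth = np.abs(depth - median) < max_jump                # reject jumps
    return inner & smooth
```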

