Semantic Mapping: A Semantics-based Approach to Virtual Content Placement for Immersive Environments

Author(s):  
Jingyang Liu


Author(s):  
S Leinster-Evans ◽  
J Newell ◽  
S Luck

This paper looks to expand on the INEC 2016 paper 'The future role of virtual reality within warship support solutions for the Queen Elizabeth Class aircraft carriers', presented by Ross Basketter, Craig Birchmore and Abbi Fisher of BAE Systems in May 2016, and the EAAW VII paper 'Testing the boundaries of virtual reality within ship support', presented by John Newell of BAE Systems and Simon Luck of BMT DSL in June 2017. BAE Systems and BMT have developed a 3D walkthrough training system that supports the teams working closely with the QEC aircraft carriers in Portsmouth; this work was presented at EAAW VII and has since been extended to demonstrate the art of the possible on Type 26. This latter piece of work is designed to explore the role of 3D immersive environments in the development and fielding of support and training solutions across the range of support disciplines. The combined team is examining how this digital thread leads from the design of platforms, both surface and subsurface, through build and into in-service support and training. The paper proposes ways in which this rich data could be used across the whole lifecycle of the ship, from design and development (spatial acceptance, HazID, etc.) through to operational support and maintenance (combining big data coming off the ship with digital technical documentation for maintenance procedures), using constantly developing technologies such as 3D visualisation, Virtual Reality, Augmented Reality and Mixed Reality. It also explores the drive towards gamification in the training environment to keep younger recruits engaged and to shorten course lengths. Finally, the paper develops the options and considers how this technology can be used and where the value proposition lies.


Author(s):  
Xingwu Ji ◽  
Zheng Gong ◽  
Ruihang Miao ◽  
Wuyang Xue ◽  
Rendong Ying

Information ◽  
2021 ◽  
Vol 12 (6) ◽  
pp. 236
Author(s):  
Ling Zhu ◽  
Guangshuai Jin ◽  
Dejun Gao

Freely available satellite imagery improves the research and production of land-cover products at the global scale or over large areas. The integration of land-cover products is the process of combining the advantages or characteristics of several products to generate new products that meet special needs. This study presents an ontology-based semantic mapping approach for integrating land-cover products, using a hybrid ontology with EAGLE (EIONET Action Group on Land monitoring in Europe) matrix elements as the shared vocabulary to link and compare concepts from multiple local ontologies. Ontology mapping based on terms, attributes and instances is combined to obtain the semantic similarity between heterogeneous land-cover products and to realise the integration at the schema level. In addition, through the collection and interpretation of ground verification points, the local accuracy of each source product is evaluated using the index Kriging method. Two integration models that combine semantic similarity and local accuracy are developed. Taking NLCD (National Land Cover Database) and FROM-GLC-Seg (Finer Resolution Observation and Monitoring-Global Land Cover-Segmentation) as source products and the second-level class refinement of the GlobeLand30 land-cover product as an example, the forest class is subdivided into broad-leaf, coniferous and mixed forest. Results show that the highest accuracies for the second-level classes are 82.6%, 72.0% and 60.0% for broad-leaf, coniferous and mixed forest, respectively.
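The abstract above combines ontology-derived semantic similarity with a Kriging-interpolated local accuracy surface to decide the target class per pixel. The paper's actual integration models are not given here, so the following is only a rough sketch of how such a per-pixel decision rule could look; all names (integrate_pixel, semantic_similarity, local_accuracy) are hypothetical.

```python
# Illustrative sketch only: one plausible way to combine per-class semantic
# similarity with locally interpolated accuracy; not the paper's models.

def integrate_pixel(candidates, semantic_similarity, local_accuracy, weight=0.5):
    """Pick the target class for one pixel from several source products.

    candidates          : list of (product_name, source_class) pairs at this pixel
    semantic_similarity : dict mapping (product_name, source_class, target_class)
                          to a similarity score in [0, 1] from ontology mapping
    local_accuracy      : dict mapping product_name to the interpolated local
                          accuracy (e.g., from Kriging of verification points)
    weight              : trade-off between similarity and similarity*accuracy
    """
    best_class, best_score = None, -1.0
    target_classes = {t for (_, _, t) in semantic_similarity}
    for target in target_classes:
        score = 0.0
        for product, source_class in candidates:
            sim = semantic_similarity.get((product, source_class, target), 0.0)
            acc = local_accuracy.get(product, 0.0)
            score += weight * sim + (1.0 - weight) * sim * acc
        if score > best_score:
            best_class, best_score = target, score
    return best_class


# Example: choose between broad-leaf and coniferous forest for one pixel.
sims = {
    ("NLCD", "deciduous_forest", "broad_leaf"): 0.9,
    ("NLCD", "deciduous_forest", "coniferous"): 0.1,
    ("FROM-GLC-Seg", "forest", "broad_leaf"): 0.5,
    ("FROM-GLC-Seg", "forest", "coniferous"): 0.5,
}
acc = {"NLCD": 0.8, "FROM-GLC-Seg": 0.6}
print(integrate_pixel([("NLCD", "deciduous_forest"), ("FROM-GLC-Seg", "forest")], sims, acc))
```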


2021 ◽  
Vol 11 (4) ◽  
pp. 1953
Author(s):  
Francisco Martín ◽  
Fernando González ◽  
José Miguel Guerrero ◽  
Manuel Fernández ◽  
Jonatan Ginés

The perception and identification of visual stimuli from the environment is a fundamental capability of autonomous mobile robots. Current deep learning techniques make it possible to identify and segment objects of interest in an image. This paper presents a novel algorithm to segment an object's space from a deep segmentation of an image taken by a 3D camera. The proposed approach solves the boundary pixel problem that appears when segmented pixels are mapped directly to their correspondences in the point cloud. We validate our approach against baseline approaches on real images taken by a 3D camera, showing that our method outperforms them in terms of accuracy and reliability. As an application of the proposed algorithm, we present a semantic mapping approach for a mobile robot's indoor environment.
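To make the boundary pixel problem concrete, here is a minimal sketch under assumed conditions (an organized point cloud aligned with the segmented image, and a simple median-depth filter as mitigation). This is not the algorithm proposed in the paper; the function and parameter names are invented for illustration.

```python
# Minimal sketch of the boundary-pixel issue, not the paper's algorithm.
# With an organized point cloud aligned to the image, a naive mask lookup
# assigns object labels to points that actually belong to the background
# along the object's silhouette. One simple mitigation is to reject masked
# points whose depth is far from the object's median depth.
import numpy as np

def masked_points(cloud, mask, depth_tolerance=0.15):
    """cloud: (H, W, 3) array of XYZ points aligned with the image,
    mask: (H, W) boolean segmentation mask for one object."""
    pts = cloud[mask]                        # naive pixel-to-point mapping
    pts = pts[np.isfinite(pts).all(axis=1)]  # drop invalid depth readings
    depth = pts[:, 2]
    median_depth = np.median(depth)
    keep = np.abs(depth - median_depth) < depth_tolerance
    return pts[keep]                         # boundary outliers removed

# Usage with synthetic data:
cloud = np.random.rand(480, 640, 3).astype(np.float32)
mask = np.zeros((480, 640), dtype=bool)
mask[100:200, 150:250] = True
object_points = masked_points(cloud, mask)
```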


2018 ◽  
Vol 51 (3) ◽  
pp. 161-162
Author(s):  
Maaike H.T. de Boer

Information ◽  
2021 ◽  
Vol 12 (2) ◽  
pp. 92
Author(s):  
Xiaoning Han ◽  
Shuailong Li ◽  
Xiaohui Wang ◽  
Weijia Zhou

Sensing and mapping its surroundings is an essential requirement for a mobile robot. Geometric maps endow robots with the capacity for basic tasks such as navigation. To co-exist with human beings in indoor scenes, the need to attach semantic information to a geometric map, yielding what is called a semantic map, has been recognised over the last two decades. A semantic map can help robots behave according to human conventions, plan and perform advanced tasks, and communicate with humans at the conceptual level. This survey reviews methods for semantic mapping in indoor scenes. We begin by answering the question of what a semantic map is for mobile robots through its definitions. We then review works on each of the three modules of semantic mapping, i.e., spatial mapping, acquisition of semantic information, and map representation. Finally, although great progress has been made, there is still a long way to go before semantic maps support advanced robot tasks, so challenges and potential future directions are discussed before the conclusion.
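As a deliberately simplified illustration of what "attaching semantic information to a geometric map" can mean in code, the following sketch pairs an occupancy grid (spatial mapping) with per-cell object labels (semantic information) in one representation. The class and method names are invented for this example and do not come from the survey.

```python
# Toy semantic map: a 2D occupancy grid whose cells also carry object labels
# with confidences. Illustrative only, not a specific published system.
from dataclasses import dataclass, field

@dataclass
class Cell:
    occupied: bool = False                                   # geometric layer
    labels: dict[str, float] = field(default_factory=dict)   # semantic layer

class SemanticGridMap:
    def __init__(self, width: int, height: int, resolution: float = 0.05):
        self.resolution = resolution              # metres per cell
        self.grid = [[Cell() for _ in range(width)] for _ in range(height)]

    def _index(self, x: float, y: float):
        return int(y / self.resolution), int(x / self.resolution)

    def mark_occupied(self, x: float, y: float):
        r, c = self._index(x, y)
        self.grid[r][c].occupied = True

    def add_label(self, x: float, y: float, label: str, confidence: float):
        r, c = self._index(x, y)
        cell = self.grid[r][c]
        # keep the highest confidence seen so far for each label
        cell.labels[label] = max(cell.labels.get(label, 0.0), confidence)

# Example: a detector reports a chair at (1.2 m, 0.8 m) with confidence 0.9.
m = SemanticGridMap(width=200, height=200)
m.mark_occupied(1.2, 0.8)
m.add_label(1.2, 0.8, "chair", 0.9)
```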

