TOWARDS THE AUTOMATIC ONTOLOGY GENERATION AND ALIGNMENT OF BIM AND GIS DATA FORMATS

Author(s):  
A. U. Usmani ◽  
M. Jadidi ◽  
G. Sohn

Abstract. Establishing semantic interoperability between BIM and GIS is vital for geospatial information exchange. The semantic web has a natural ability to provide seamless semantic representation and integration among heterogeneous domains such as BIM and GIS through the use of ontologies. Ontology models can be defined (or generated) from domain-data representations and further aligned with other ontologies through the semantic similarity of their entities, introducing cross-domain ontologies that achieve interoperability of heterogeneous information. However, owing to the extensive semantic features of BIM and GIS data formats and the complex alignment (mapping) relations between them, many approaches fall short of generating semantically rich ontologies and performing effective alignment to address geospatial interoperability. This study highlights the fundamental perspectives to be addressed for BIM and GIS interoperability and proposes a comprehensive conceptual framework for automatic ontology generation followed by ontology alignment of open standards for BIM and GIS data formats. It presents an approach based on transformation patterns to automatically generate ontology models, and semantic-based and structure-based alignment techniques to form a cross-domain ontology. The proposed two-phase framework generates ontology models for input XML schemas (i.e., of the IFC and CityGML formats) and illustrates an alignment technique to develop a cross-domain ontology. The study concludes that the anticipated cross-domain ontology can open future perspectives in knowledge-discovery applications and seamless information exchange between BIM and GIS.
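The semantic-based alignment step described above can be sketched as a label-similarity matcher. This is a minimal illustration, not the framework's actual technique: the entity names, the greedy matching strategy, and the 0.4 threshold are all assumptions chosen for the example.

```python
from difflib import SequenceMatcher

# Toy entity labels in the spirit of IFC and CityGML vocabularies
# (illustrative only; a real alignment would combine lexical and structural cues).
ifc_entities = ["IfcWall", "IfcWindow", "IfcBuildingStorey"]
citygml_entities = ["WallSurface", "Window", "BuildingStorey"]

def label_similarity(a: str, b: str) -> float:
    """Normalized string similarity between two entity labels."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def align(src, dst, threshold=0.4):
    """Greedy element-level alignment: keep each source entity's best match
    above a (hypothetical) similarity threshold."""
    mappings = []
    for s in src:
        best = max(dst, key=lambda d: label_similarity(s, d))
        score = label_similarity(s, best)
        if score >= threshold:
            mappings.append((s, best, round(score, 2)))
    return mappings

print(align(ifc_entities, citygml_entities))
```

A structure-based pass would then check whether matched entities also occupy compatible positions in their schema hierarchies, filtering out spurious lexical matches.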

Author(s):  
A. U. Usmani ◽  
M. Jadidi ◽  
G. Sohn

Abstract. Data represented as geospatial context and detailed building information prominently nurture infrastructure development and smart-city applications. Carrying open formats from the data-acquisition level through to information engineering accelerates geospatial technologies towards urban sustainability and knowledge-based systems. BIM and GIS technologies are known to excel in this domain. However, fundamental differences exist between their data formats, which has prompted the development of integration methods to bridge the gap between these distinct domains. Several studies have pursued data-, process-, and application-level integration, recognizing the significance of collaboration between these information systems. Although integration methods have narrowed the gaps of geometric dissimilarity, semantic inconsistency, and information loss, they still impose constraints on achieving interoperability. Integration using semantic web technology is more flexible and enables process-level integration without changing data format and structure. However, owing to its developing nature and the complexity of BIM and GIS data formats, most adapted approaches require human intervention. This paper presents a method named OGGD (Ontology Generation for Geospatial Data) that implements a formal procedure for automatic ontology generation from XSD documents using transformation patterns, following three processes: first, formalization of XSD elements and transformation patterns; second, explicit identification of corresponding patterns; and third, generation of an ontology for the XSD schema. XSD elements from the open-standard data models of BIM and GIS, ifcXML and CityGML, are manipulated and transformed into a semantically rich OWL model. The resulting ontology models are applicable to information-based integration systems that will nurture knowledge discovery and urban applications.
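The XSD-to-OWL transformation-pattern idea can be sketched as follows. The schema fragment, the two patterns (complexType → owl:Class, nested element → object or datatype property), and the Turtle-style output are simplified assumptions for illustration, not OGGD's actual pattern catalogue.

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical XSD fragment in the spirit of CityGML/ifcXML schemas.
XSD = """<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:complexType name="Building">
    <xs:sequence>
      <xs:element name="storeysAboveGround" type="xs:integer"/>
      <xs:element name="boundedBy" type="WallSurface"/>
    </xs:sequence>
  </xs:complexType>
  <xs:complexType name="WallSurface"/>
</xs:schema>"""

XS = "{http://www.w3.org/2001/XMLSchema}"

def xsd_to_owl(xsd_text):
    """Apply two toy transformation patterns:
       complexType -> owl:Class; nested element -> owl:*Property."""
    root = ET.fromstring(xsd_text)
    triples = []
    class_names = {ct.get("name") for ct in root.iter(f"{XS}complexType")}
    for ct in root.iter(f"{XS}complexType"):
        cname = ct.get("name")
        triples.append(f":{cname} rdf:type owl:Class .")
        for el in ct.iter(f"{XS}element"):
            pname, ptype = el.get("name"), el.get("type")
            if ptype in class_names:  # reference to another complex type
                triples.append(f":{pname} rdf:type owl:ObjectProperty ; "
                               f"rdfs:domain :{cname} ; rdfs:range :{ptype} .")
            else:                     # XSD simple type -> datatype property
                triples.append(f":{pname} rdf:type owl:DatatypeProperty ; "
                               f"rdfs:domain :{cname} .")
    return triples

for t in xsd_to_owl(XSD):
    print(t)
```

In this sketch the pattern lookup is hard-coded into two branches; a pattern-based generator would instead match each XSD construct against a formalized pattern library before emitting axioms.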


2018 ◽  
Author(s):  
Maria Montefinese ◽  
Erin Michelle Buchanan ◽  
David Vinson

Models of semantic representation predict that automatic priming is determined by associative and co-occurrence relations (i.e., spreading-activation accounts) or by similarity in words' semantic features (i.e., featural models). Although these three factors are correlated in characterizing semantic representation, they seem to tap different aspects of meaning. We designed two lexical decision experiments to dissociate these three types of meaning similarity. For unmasked primes, we observed priming due only to association strength, not to the other two measures, and no evidence for differences in priming between concrete and abstract concepts. For masked primes there was no priming regardless of the semantic relation. These results challenge theoretical accounts of automatic priming. Rather, they are in line with the idea that priming may be due to participants' controlled strategic processes. These results provide important insight into the nature of priming and how association strength, as determined from word-association norms, relates to the nature of semantic representation.


2020 ◽  
Vol 531 ◽  
pp. 47-67 ◽  
Author(s):  
Qian Geng ◽  
Siyu Deng ◽  
Danping Jia ◽  
Jian Jin

1987 ◽  
Vol 9 (1) ◽  
pp. 20-22
Author(s):  
Marietta Baba ◽  
Ann Sheldon ◽  
Thomas Miesse

It is widely acknowledged that university research and information exchange between academic and industrial science play vital roles in technological innovation, yet little is known about the process of university-industry linkage and its economic results. In 1982, the National Science Foundation's Division of Industrial Science and Technological Innovation (NSF/ISTI) began sponsorship of a major research project exploring the process of information exchange between academia and the business world, as well as the relationship of such linkage to the complex process of technological innovation. From 1982 to 1985, the Productivity Improvement Research Section of NSF/ISTI provided approximately $400,000 to support a feasibility study and a two-phase research project focusing on university-industry linkages in the State of Michigan (Grant #ISI-8313945).


2020 ◽  
Vol 10 (14) ◽  
pp. 4893 ◽  
Author(s):  
Wenfeng Hou ◽  
Qing Liu ◽  
Longbing Cao

Short text is widely seen in applications including the Internet of Things (IoT). The appropriate representation and classification of short text can be severely hampered by its sparsity and brevity. One important solution is to enrich short-text representation with cognitive aspects of the text, including semantic concepts, knowledge, and categories. In this paper, we propose an Entity-based Concept Knowledge-Aware (ECKA) representation model which incorporates semantic information into short-text representation. ECKA is a multi-level short-text semantic representation model that extracts semantic features at the word, entity, concept, and knowledge levels using CNNs. Since the words, entities, concepts, and knowledge entities in the same short text have different cognitive informativeness for short-text classification, attention networks are formed to capture category-related attentive representations from each level of textual features. The final multi-level semantic representation is formed by concatenating all of these individual-level representations and is used for text classification. Experiments on three tasks demonstrate that our method significantly outperforms the state-of-the-art methods.
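The attend-then-concatenate structure described above can be sketched with plain numpy. The level sizes, dimensions, and random query vector below are stand-ins for the model's learned CNN features and attention parameters, not ECKA's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-level token features for one short text (shapes illustrative):
# 5 word vectors, 2 entity vectors, 2 concept vectors, 1 knowledge vector, dim 8.
levels = {name: rng.normal(size=(n, 8))
          for name, n in [("word", 5), ("entity", 2), ("concept", 2), ("knowledge", 1)]}

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(tokens, query):
    """Dot-product attention stand-in: score each token against a query vector
    and return the attention-weighted sum (one vector per level)."""
    scores = tokens @ query           # (n,)
    weights = softmax(scores)         # (n,), sums to 1
    return weights @ tokens           # (8,)

query = rng.normal(size=8)            # learned per level in the real model; random here
per_level = [attend(v, query) for v in levels.values()]
final = np.concatenate(per_level)     # multi-level semantic representation
print(final.shape)                    # four 8-d level summaries concatenated
```

A classifier head (e.g., a softmax layer) would then consume `final`; here the point is only that each level contributes its own attention-weighted summary before concatenation.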


Author(s):  
J. Balaji ◽  
T.V. Geetha ◽  
Ranjani Parthasarathi

Customizing information from web documents is a demanding task that mainly involves shortening the original texts, carried out using summarization techniques. In general, automatically generated summaries are of two types: extractive and abstractive. Extractive methods use surface-level and statistical features to select important sentences, without considering the meaning those sentences convey. In contrast, abstractive methods need a formal semantic representation, where the selection of important components and the rephrasing of the selected components are carried out using the semantic features associated with the words as well as the context. Furthermore, deep linguistic analysis is needed to generate summaries. The bottleneck of abstractive summarization is that it requires semantic representation, inference rules, and natural language generation. In this paper, the authors propose a semi-supervised bootstrapping approach for identifying the important components for abstractive summarization. The input to the proposed approach is a fully connected semantic graph of a document: semantic graphs are constructed for individual sentences and then connected through synonym concepts and co-referring entities to form a complete graph. The direction of node traversal is determined by a modified spreading-activation algorithm, in which the importance of nodes and edges is decided based on each node and its connected edges. The summary obtained with the proposed approach is compared with extractive and template-based summaries and evaluated using ROUGE scores.


Author(s):  
Leen Hanayneh ◽  
Yiwen Wang ◽  
Yan Wang ◽  
Jack C. Wileden ◽  
Khurshid A. Qureshi

Computer-aided design (CAD) data interoperability is one of the most important issues to enable information integration and sharing in a collaborative engineering environment. A significant amount of work has been done on the extension and standardization of neutral data formats in both academy and industry. In this paper, we present a feature mapping mechanism to allow for automatic feature information exchange. A hybrid semantic feature model is used to represent implicit and explicit features. A graph-based feature isomorphism algorithm is developed to support feature mapping between different CAD data formats.


2018 ◽  
Author(s):  
Sasa L. Kivisaari ◽  
Marijn van Vliet ◽  
Annika Hultén ◽  
Tiina Lindh-Knuutila ◽  
Ali Faisal ◽  
...  

AbstractWe can easily identify a dog merely by the sound of barking or an orange by its citrus scent. In this work, we study the neural underpinnings of how the brain combines bits of information into meaningful object representations. Modern theories of semantics posit that the meaning of words can be decomposed into a unique combination of individual semantic features (e.g., “barks”, “has citrus scent”). Here, participants received clues of individual objects in form of three isolated semantic features, given as verbal descriptions. We used machine-learning-based neural decoding to learn a mapping between individual semantic features and BOLD activation patterns. We discovered that the recorded brain patterns were best decoded using a combination of not only the three semantic features that were presented as clues, but a far richer set of semantic features typically linked to the target object. We conclude that our experimental protocol allowed us to observe how fragmented information is combined into a complete semantic representation of an object and suggest neuroanatomical underpinnings for this process.


Author(s):  
Yudong Zhang ◽  
Wenhao Zheng ◽  
Ming Li

Semantic feature learning for natural language and programming language is a preliminary step in addressing many software mining tasks. Many existing methods leverage information in lexicon and syntax to learn features for textual data. However, such information is inadequate to represent the entire semantics in either text sentence or code snippet. This motivates us to propose a new approach to learn semantic features for both languages, through extracting three levels of information, namely global, local and sequential information, from textual data. For tasks involving both modalities, we project the data of both types into a uniform feature space so that the complementary knowledge in between can be utilized in their representation. In this paper, we build a novel and general-purpose feature learning framework called UniEmbed, to uniformly learn comprehensive semantic representation for both natural language and programming language. Experimental results on three real-world software mining tasks show that UniEmbed outperforms state-of-the-art models in feature learning and prove the capacity and effectiveness of our model.


Sign in / Sign up

Export Citation Format

Share Document