Interpreted open data exchange between architectural design and structural analysis models

2021 · Vol 26 · pp. 39-57
Author(s): Goran Sibenik, Iva Kovacic

The heterogeneity of the architecture, engineering and construction (AEC) industry is reflected in digital building models, which differ across domains and planning phases. Data exchange between architectural design and structural analysis models poses a particular challenge because the two domains represent building elements in dramatically different ways. Existing software tools and standards have not been able to bridge these differences. Research on inter-domain building information modelling (BIM) frameworks does not consider geometry interpretation for data exchange, and existing analyses of geometry interpretation are mostly project-specific and seldom feed back into general data exchange frameworks. We aim to close this gap by defining a data exchange framework that handles the differing geometric requirements and representations of architectural design and structural analysis and remains open to other domains. Classification systems in existing software tools and standards were reviewed in order to understand architectural design and structural analysis representations and to identify the relationships between them. Based on this analysis, a novel data management framework built on classification, interpretation and automation was proposed, implemented and tested. Classification is a model specification comprising domain-specific terms and the relationships between them. Interpretations consist of the inter-domain procedures necessary to generate domain-specific models from a provided model. Automation connects open domain-specific models with proprietary models in software tools. A practical implementation with a test case demonstrated one possible realization of the proposed framework. The innovative contribution of this research is a framework based on a system of open domain-specific classifications and procedures for inter-domain interpretation, which can prepare domain-specific models on central storage. The main benefit is a centrally prepared domain-specific model, which relieves software developers of the so far unsuccessful implementation of complex inter-domain interpretations in each software tool and gives end users control over the data exchange. Although the framework was developed for the exchange between architectural design and structural analysis, the proposed central data management framework can serve other exchange processes involving different model representations.
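To make the interpretation layer concrete, the following is a minimal Python sketch of one inter-domain interpretation rule, assuming simplified element records: a physical wall from the architectural model is reduced to an analytical plate at its mid-surface. The ArchWall and StructPlate types and the interpret_wall rule are illustrative assumptions, not the paper's actual classification or implementation.

```python
# A hedged sketch of one inter-domain interpretation rule; the types
# and fields are illustrative assumptions, not the paper's data model.
from dataclasses import dataclass

@dataclass
class ArchWall:
    """Architectural (physical) representation: a volumetric wall."""
    id: str
    thickness: float       # m
    base_elevation: float  # m
    height: float          # m
    material: str

@dataclass
class StructPlate:
    """Structural (analytical) representation: a plate at the mid-surface."""
    id: str
    thickness: float
    mid_elevation: float
    material: str

def interpret_wall(wall: ArchWall) -> StructPlate:
    """Interpretation: derive the analytical plate from the physical wall
    by collapsing the solid onto its mid-surface."""
    return StructPlate(
        id=wall.id,
        thickness=wall.thickness,
        mid_elevation=wall.base_elevation + wall.height / 2.0,
        material=wall.material,
    )

plate = interpret_wall(ArchWall("W1", 0.3, 0.0, 3.0, "C30/37"))
print(plate)  # StructPlate(id='W1', thickness=0.3, mid_elevation=1.5, ...)
```

Placing such rules on central storage, rather than reimplementing them inside each authoring tool, is the framework's key design choice.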

Buildings · 2021 · Vol 11 (12) · pp. 605
Author(s): Goran Sibenik, Iva Kovacic, Valentinas Petrinas, Wendelin Sprenger

Building information modelling promises model-based collaboration between stakeholders in the project design stage. However, data exchange between the physical and analytical building models used for architectural design and structural analysis, respectively, rarely takes place because of numerous differences in building element representation, especially the representation of geometry. This paper presents the realization of a novel data exchange framework between architectural design and structural analysis building models, based on open interpretations on central storage. The exchange is achieved with a new system architecture, in which the program redDim was developed to perform the interpretations, including the most challenging transformations of geometry. We deliver a proof of concept for the novel framework with a prototype building model and verify it on two further building models. Results show that structural analysis models can be created automatically and correctly by reducing the dimensionality of building elements and reconnecting them. The proposed data exchange provides a basis for the currently missing standardization of interpretations, facilitating non-proprietary, automated conversion between physical and analytical models. This research fills a gap in existing model-based communication and is a step towards seamless data exchange.
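As a rough illustration of the two operations named in the abstract, dimensionality reduction and reconnection, the sketch below collapses an extruded element to its mid-plane and snaps near-coincident analytical nodes together. The function names and the 5 cm tolerance are assumptions for illustration and do not reflect redDim's actual interface.

```python
# Hedged sketch: dimensional reduction (solid -> mid-plane) and
# reconnection (snapping nearby nodes); names and the tolerance are
# illustrative assumptions, not redDim's implementation.
from itertools import combinations

def mid_plane(z_min: float, z_max: float) -> float:
    """Reduce an element's extrusion to its mid-plane elevation."""
    return (z_min + z_max) / 2.0

def reconnect(nodes: list[list[float]], tol: float = 0.05) -> None:
    """Merge analytical nodes closer than `tol` metres, in place, so the
    reduced elements share coordinates and form a connected model."""
    for a, b in combinations(nodes, 2):
        if sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5 < tol:
            b[:] = a  # snap b onto a

nodes = [[0.0, 0.0, mid_plane(0.0, 3.0)],   # wall mid-plane node
         [0.02, 0.0, 1.5],                  # adjacent element, 2 cm off
         [5.0, 0.0, 1.5]]                   # far node, left untouched
reconnect(nodes)
assert nodes[0] == nodes[1]  # the near-coincident nodes now meet
```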


1999 · Vol 39 (4) · pp. 193-201
Author(s): P. J. A. Gijsbers

The need for integrated analysis calls for the integration of computer models, with particular attention to interfaces, data management and user interaction. Sector-wide standardization using data dictionaries and data exchange formats can greatly help in streamlining data exchange. However, this type of standardization has drawbacks for a generic model integration framework. An alternative concept, called the Model Data Dictionary (MDD), has been developed for proper data management. It is a variant of the federated database approach, in which local databases maintain their autonomy while an interconnection database provides a link for sharing data. The MDD is based on a highly generic data model for geographically referenced objects, which, if needed, facilitates mapping to the sector-wide data dictionary. External interfaces, combined with a data format mapping component, provide a link to SQL-based data sources and model-specific databases. A generic Object Data Editor (ODE), linked to the MDD, has been proposed to provide a common data-editing facility for mathematical models. A test version of the combined MDD/ODE concept has demonstrated its applicability for integrating all kinds of geographic, object-oriented mathematical models (both simulation and optimization).
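A sketch of the MDD idea under stated assumptions: a generic object/attribute schema serves as the interconnection database that model-specific interfaces write into and read from. SQLite stands in for the actual database, and the schema and field names are invented for illustration, not taken from the MDD.

```python
# Illustrative MDD sketch: a generic schema for geo-referenced objects
# acting as the interconnection database; schema names are assumptions.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE object (id INTEGER PRIMARY KEY, kind TEXT, x REAL, y REAL);
    CREATE TABLE attribute (object_id INTEGER REFERENCES object(id),
                            name TEXT, value REAL);
""")

# One model's external interface writes its data in generic terms ...
con.execute("INSERT INTO object VALUES (1, 'river_node', 52.10, 4.30)")
con.execute("INSERT INTO attribute VALUES (1, 'discharge_m3s', 250.0)")

# ... and another model reads the same objects back through the MDD.
row = con.execute("""SELECT o.kind, a.name, a.value
                     FROM object o JOIN attribute a ON a.object_id = o.id
                  """).fetchone()
print(row)  # ('river_node', 'discharge_m3s', 250.0)
```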


2021 · Vol 3 (2) · pp. 299-317
Author(s): Patrick Schrempf, Hannah Watson, Eunsoo Park, Maciej Pajak, Hamish MacKinnon, ...

Training medical image analysis models traditionally requires large amounts of expertly annotated imaging data, which are time-consuming and expensive to obtain. One solution is to automatically extract scan-level labels from radiology reports. Previously, we showed that, by extending BERT with a per-label attention mechanism, we can train a single model to extract many labels in parallel. However, if we rely on purely data-driven learning, the model sometimes fails to learn critical features or learns the correct answer via simplistic heuristics (e.g., that "likely" indicates positivity), and thus fails to generalise to rarer cases which have not been learned or where the heuristics break down (e.g., "likely represents prominent VR space or lacunar infarct", which indicates uncertainty over two differential diagnoses). In this work, we propose template creation for data synthesis, which enables us to inject expert knowledge about unseen entities from medical ontologies and to teach the model rules for labelling difficult cases by producing relevant training examples. Using this technique alongside domain-specific pre-training for our underlying BERT architecture (i.e., PubMedBERT), we improve F1 micro from 0.903 to 0.939 and F1 macro from 0.512 to 0.737 on an independent test set for 33 labels in head CT reports for stroke patients. Our methodology offers a practical way to combine domain knowledge with machine learning for text classification tasks.
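The template idea can be sketched in a few lines, assuming invented templates, entities and labels: slots in report-like sentences are filled with terms that could be drawn from a medical ontology, yielding labelled examples for augmenting the training set. None of the templates or entities below are the authors' own.

```python
# A minimal sketch of template-based data synthesis; templates, entity
# list and label scheme are invented for illustration.
import itertools

TEMPLATES = [
    ("There is evidence of {e}.",            "positive"),
    ("No evidence of {e}.",                  "negative"),
    ("Likely {e} or small vessel disease.",  "uncertain"),  # differential
]
ENTITIES = ["lacunar infarct", "subdural haematoma"]  # e.g. ontology terms

def synthesise():
    """Yield (sentence, label) pairs to augment the training data."""
    for (template, label), entity in itertools.product(TEMPLATES, ENTITIES):
        yield template.format(e=entity), label

for text, label in synthesise():
    print(f"{label:>9}: {text}")
```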


Author(s): Lichao Xu, Szu-Yun Lin, Andrew W. Hlynka, Hao Lu, Vineet R. Kamat, ...

There has been a strong need for simulation environments capable of modeling the deep interdependencies between complex systems encountered during natural hazards, such as the interactions and coupled effects between civil infrastructure system response, human behavior, and social policies, for improved community resilience. Coupling such complex components into an integrated simulation requires continuous data exchange between the different simulators running the separate models throughout the simulation process. This can be implemented by means of distributed simulation platforms or data passing tools. In order to provide a systematic reference for simulation tool choice and to facilitate the development of compatible distributed simulators for studying deep interdependencies in the context of natural hazards, this article focuses on generic tools suitable for integrating simulators from different fields rather than on platforms used mainly within specific fields. With this aim, the article provides a comprehensive review of the most commonly used generic distributed simulation platforms (Distributed Interactive Simulation (DIS), High Level Architecture (HLA), Test and Training Enabling Architecture (TENA), and Distributed Data Services (DDS)) and data passing tools (Robot Operating System (ROS) and Lightweight Communication and Marshalling (LCM)) and compares their advantages and disadvantages. Three specific limitations of existing platforms are identified from the perspective of natural hazard simulation. To mitigate these limitations, two platform design recommendations are provided, namely message exchange wrappers and hybrid communication, to improve the data passing capabilities of existing solutions and to guide the design of a new domain-specific distributed simulation framework.
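The first recommendation, message exchange wrappers, can be sketched as a thin adapter that converts each simulator's traffic to and from a transport-neutral envelope. The envelope fields and callback API below are assumptions for illustration and not part of DIS, HLA, TENA, DDS, ROS or LCM.

```python
# Hedged sketch of a message exchange wrapper: simulators keep their
# native transports, while the wrapper normalizes messages into a
# shared envelope. Field names and the API are illustrative assumptions.
import json
import time
from typing import Callable

def make_envelope(topic: str, payload: dict) -> bytes:
    """Wrap simulator output in a common, self-describing message."""
    return json.dumps({"topic": topic,
                       "timestamp": time.time(),
                       "payload": payload}).encode()

class Wrapper:
    """Bridges one simulator to the shared bus via a pluggable sender."""
    def __init__(self, send: Callable[[bytes], None]):
        self._send = send  # e.g. a DDS/LCM/ROS publish call in practice
        self._subs: dict[str, list] = {}

    def publish(self, topic: str, payload: dict) -> None:
        self._send(make_envelope(topic, payload))

    def subscribe(self, topic: str, callback) -> None:
        self._subs.setdefault(topic, []).append(callback)

    def on_raw(self, raw: bytes) -> None:
        msg = json.loads(raw)
        for callback in self._subs.get(msg["topic"], []):
            callback(msg["payload"])

# Loopback demo: a structural simulator consumes a hazard update.
bus: list[bytes] = []
w = Wrapper(send=bus.append)
w.subscribe("hazard/wind", lambda p: print("wind speed:", p["speed_ms"]))
w.publish("hazard/wind", {"speed_ms": 42.0})
w.on_raw(bus.pop())  # -> wind speed: 42.0
```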


2012 · Vol 12 (11) · pp. 1237-1242
Author(s): Walter Cedeno, Simson Alex, Edward P. Jaeger, Dimitris K. Agrafiotis, Victor S. Lobanov

Author(s): Uga Sproģis, Matīss Rikters

We present the Latvian Twitter Eater Corpus, a set of tweets in the narrow domain of food, drinks, eating and drinking. The corpus has been collected over a time span of more than 8 years and includes over 2 million tweets accompanied by additional useful data. We also separate out two sub-corpora: question-and-answer tweets and sentiment-annotated tweets. We analyse the contents of the corpus and demonstrate use cases for the sub-corpora by training domain-specific question-answering and sentiment-analysis models on data from the corpus.
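As a sketch of the sentiment use case, a small scikit-learn pipeline could be trained on the sentiment-annotated sub-corpus; the two toy tweets below (with English glosses) stand in for the real annotated data, which is not reproduced here.

```python
# Hedged sketch of the sentiment-analysis use case; the two toy tweets
# stand in for the corpus's sentiment-annotated sub-corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# (text, label) pairs would be loaded from the annotated sub-corpus.
texts = ["šī kafija ir lieliska",            # "this coffee is great"
         "zupa bija auksta un bezgaršīga"]   # "the soup was cold and tasteless"
labels = ["positive", "negative"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(texts, labels)
print(model.predict(["kafija lieliska"]))  # -> ['positive']
```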

