Semantic Integration of Information Models of Different Domains for the Railway Sector

Author(s):  
Caner Guney
Berna Çalışkan
Ali Osman Atahan

Although BIM and GIS come from different domains, the interoperability of IFC and CityGML is today seen as a necessary step in the planning, design, and construction of infrastructure projects. Such an approach exploits data from both domains by converting between the two open data standards. However, interoperability between the GIS/BIM convergence and other domains, such as LandInfra, LADM, and RailTopoModel, is becoming increasingly important, particularly in railway projects. Cooperation is therefore needed not only among stakeholders within the AECO/MEP industry but also with stakeholders from other domains. A decentralized, seamless data flow among these domains must be ensured by linking the different domain information models. This study presents a comprehensive approach for integrating information models, with a particular focus on railways. The approach first asserts project information held in BIM into GIS using geospatial ontologies, and then extends this by integrating information models from other fields.
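The linking idea described above can be sketched as identifier triples in the spirit of Linked Data: the same real-world railway asset carries an identifier in each domain model, and `owl:sameAs`-style links connect them. This is a minimal illustration, not the authors' implementation; all URIs below are hypothetical.

```python
# Sketch: linking one railway asset across domain models (IFC/BIM,
# CityGML/GIS, RailTopoModel) via owl:sameAs-style identifier triples.
# All identifiers are hypothetical examples.

SAME_AS = "owl:sameAs"

# Each triple: (subject, predicate, object)
triples = [
    ("ifc:Bridge_042",      SAME_AS, "citygml:bldg_1007"),
    ("citygml:bldg_1007",   SAME_AS, "rtm:NetElement_55"),
    ("ifc:Track_segment_9", SAME_AS, "rtm:NetElement_56"),
]

def linked_identifiers(start, triples):
    """Follow sameAs links transitively (in both directions) to collect
    every identifier denoting the same asset across domain models."""
    found = {start}
    changed = True
    while changed:
        changed = False
        for s, p, o in triples:
            if p != SAME_AS:
                continue
            if s in found and o not in found:
                found.add(o)
                changed = True
            if o in found and s not in found:
                found.add(s)
                changed = True
    return found

# The bridge modelled in BIM resolves to its GIS and RailTopoModel twins:
print(sorted(linked_identifiers("ifc:Bridge_042", triples)))
```

A decentralized data flow then only needs each domain to publish such links; no single system has to hold all models.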

Author(s):  
G. S. Floros
C. Ellul
E. Dimopoulou

Abstract. Applications of 3D City Models range from assessing the potential output of solar panels across a city to determining the best location for 5G mobile phone masts. While in the past these models were not readily available, the rapid increase of available data from sources such as Open Data (e.g. OpenStreetMap), National Mapping and Cadastral Agencies and increasingly Building Information Models facilitates the implementation of increasingly detailed 3D Models. However, these sources also generate integration challenges relating to heterogeneity, storage and efficient management and visualization. CityGML and IFC (Industry Foundation Classes) are two standards that serve different application domains (GIS and BIM) and are commonly used to store and share 3D information. The ability to convert data from IFC to CityGML in a consistent manner could generate 3D City Models able to represent an entire city, but that also include detailed geometric and semantic information regarding its elements. However, CityGML and IFC present major differences in their schemas, rendering interoperability a challenging task, particularly when details of a building's internal structure are considered (Level of Detail 4 in CityGML). The aim of this paper is to investigate interoperability options between the aforementioned standards, by converting IFC models to CityGML LoD 4 Models. The CityGML Models are then semantically enriched and the proposed methodology is assessed in terms of the models' geometric validity and capability to preserve semantics.
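The semantic side of such an IFC-to-CityGML LoD4 conversion can be sketched as a class mapping: each IFC entity type is assigned a CityGML target feature class, and unmapped types are flagged rather than silently dropped. The mapping below is illustrative only; real conversions also depend on geometry and property-set rules, not on the class name alone.

```python
# Sketch of a semantic mapping from IFC entity types to CityGML LoD4
# feature classes. Illustrative, not a complete or normative table.

IFC_TO_CITYGML_LOD4 = {
    "IfcWallStandardCase": "bldg:InteriorWallSurface",
    "IfcSlab":             "bldg:FloorSurface",
    "IfcDoor":             "bldg:Door",
    "IfcWindow":           "bldg:Window",
    "IfcSpace":            "bldg:Room",
    "IfcStair":            "bldg:IntBuildingInstallation",
}

def convert_entity(ifc_class):
    """Map an IFC class to a CityGML LoD4 target, flagging unmapped
    classes so the converter can report them instead of losing data."""
    target = IFC_TO_CITYGML_LOD4.get(ifc_class)
    return target if target else f"UNMAPPED:{ifc_class}"

print(convert_entity("IfcSpace"))  # bldg:Room
print(convert_entity("IfcBeam"))   # UNMAPPED:IfcBeam
```

Assessing semantic preservation then amounts to checking how many source entities land in the `UNMAPPED` bucket.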


2020
Vol 6
Author(s):  
Christoph Steinbeck
Oliver Koepler
Felix Bach
Sonja Herres-Pawlis
Nicole Jung
...

The vision of NFDI4Chem is the digitalisation of all key steps in chemical research, supporting scientists in their efforts to collect, store, process, analyse, disclose and re-use research data. Measures to promote Open Science and Research Data Management (RDM) in agreement with the FAIR data principles are fundamental aims of NFDI4Chem, serving the chemistry community with a holistic concept for access to research data. To this end, the overarching objective is to develop and maintain a national research data infrastructure for the research domain of chemistry in Germany, and to enable innovative, easy-to-use services and novel scientific approaches based on the re-use of research data. NFDI4Chem intends to represent all disciplines of chemistry in academia and aims to collaborate closely with thematically related consortia. In the initial phase, NFDI4Chem focuses on data related to molecules and reactions, including data for their experimental and theoretical characterisation. This overarching goal is pursued through six key objectives:

Key Objective 1: Establish a virtual environment of federated repositories for storing, disclosing, searching and re-using research data across distributed data sources. Connect existing data repositories, establish domain-specific research data repositories for the national research community based on a requirements analysis, and link them to international repositories.

Key Objective 2: Initiate international community processes to establish minimum information (MI) standards for data and machine-readable metadata, as well as open data standards, in key areas of chemistry. Identify and recommend open data standards that support the FAIR principles for research data, and develop new standards where gaps exist.

Key Objective 3: Foster cultural and digital change towards smart laboratory environments by promoting the use of digital tools at all stages of research, and promote subsequent RDM at all levels of academia, beginning with undergraduate curricula.

Key Objective 4: Engage with the chemistry community in Germany through a wide range of measures to create awareness of, and foster the adoption of, FAIR data management. Initiate processes to integrate RDM and data science into curricula, and offer a wide range of training opportunities for researchers.

Key Objective 5: Explore synergies with other consortia and promote cross-cutting development within the NFDI.

Key Objective 6: Provide a legally reliable framework of policies and guidelines for FAIR and open RDM.
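A minimum-information (MI) standard of the kind mentioned in the objectives can be pictured as a compliance check on a machine-readable metadata record. The field names below are hypothetical illustrations, not an NFDI4Chem specification.

```python
# Sketch: checking a chemistry dataset record against a hypothetical
# minimum-information (MI) field list, in the spirit of MI standards
# for FAIR, machine-readable metadata.

REQUIRED_FIELDS = {"identifier", "title", "creator", "license",
                   "molecule_inchi", "measurement_type"}

def mi_compliant(record):
    """Return the set of missing required fields.
    An empty set means the record meets the MI standard."""
    return REQUIRED_FIELDS - record.keys()

record = {
    "identifier": "doi:10.9999/example.1234",  # hypothetical DOI
    "title": "UV/Vis spectrum of compound X",
    "creator": "Example Lab",
    "license": "CC-BY-4.0",
    "molecule_inchi": "InChI=1S/H2O/h1H2",
    "measurement_type": "UV/Vis",
}

print(mi_compliant(record))  # set() -> record is MI-compliant
```

Federated repositories can run the same check at ingest time, which is what makes the metadata findable and re-usable across distributed sources.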


Database
2020
Vol 2020
Author(s):  
Xueqin Guo
Fengzhen Chen
Fei Gao
Ling Li
Ke Liu
...

Abstract With the application and development of high-throughput sequencing technology in the life and health sciences, massive multi-omics data raise the problem of efficient management and utilization. Database development and biocuration are prerequisites for the reuse of these big data. Here, relying on the China National GeneBank (CNGB), we present the CNGB Sequence Archive (CNSA) for archiving omics data, including raw sequencing data and its further analyzed results, currently organized into six objects: Project, Sample, Experiment, Run, Assembly and Variation. Moreover, for some projects CNSA has created a correlation model of living samples, sample information and analytical data. Both living samples and analytical data are directly correlated with the sample information, and from any one of the three, the other two can be obtained, so that all data can be traced throughout the life cycle from the living sample to the sample information to the analytical data. Complying with the data standards commonly used in the life sciences, CNSA is committed to building a comprehensive, curated data repository for storing, managing and sharing omics data. We will continue to improve the data standards and provide free access to open-data resources for the worldwide scientific community, supporting academic research and the bio-industry. Database URL: https://db.cngb.org/cnsa/.
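The six archive objects and the traceability idea can be sketched as a containment hierarchy: a Project holds Samples, a Sample holds Experiments and their Runs (raw data) alongside Assemblies and Variations (analyzed data), so everything is reachable from the sample information. Field names and accession formats below are illustrative, not the actual CNSA schema.

```python
# Sketch of the six CNSA archive objects and a trace from sample
# information to both raw and analyzed data. Illustrative only.

from dataclasses import dataclass, field

@dataclass
class Run:
    accession: str  # raw sequencing data unit

@dataclass
class Experiment:
    accession: str
    runs: list = field(default_factory=list)

@dataclass
class Sample:
    accession: str
    experiments: list = field(default_factory=list)
    assemblies: list = field(default_factory=list)  # analyzed results
    variations: list = field(default_factory=list)  # analyzed results

@dataclass
class Project:
    accession: str
    samples: list = field(default_factory=list)

def trace(sample):
    """From sample information, reach both the raw data (runs) and the
    analyzed data (assemblies, variations) of the correlation model."""
    runs = [r.accession for e in sample.experiments for r in e.runs]
    return {"runs": runs,
            "assemblies": sample.assemblies,
            "variations": sample.variations}

s = Sample("CNS0000001",
           experiments=[Experiment("CNX0000001", runs=[Run("CNR0000001")])],
           assemblies=["CNA0000001"])
print(trace(s))
```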


2020
pp. 47-54
Author(s):  
Kim Young
Mew Leng Yin

E-government involves the government's use of the latest technologies in delivering services and information management systems to its citizens. Open data refers to opening all government-related data to citizens, so that government becomes increasingly transparent and misconduct is reduced. Transparency refers to a government presenting a clear and honest image to its people, which increases citizens' trust in their government. The aim of this study is to analyze the impact of e-government adoption and open government data on transparency in ASEAN countries. Two control variables, literacy rate and corruption, are also used. Past work is discussed in the literature review section. For the analysis, data on the study's variables were collected from ASEAN countries over a 29-year period. After applying several tests and approaches, the two major hypotheses of the study are accepted, along with the effect of one control variable, corruption; the effect of the other control variable, literacy rate, is rejected. The findings offer various theoretical, practical and policy-making benefits for increasing transparency.


2015
Vol 11 (5)
pp. 4309-4327
Author(s):  
N. P. McKay
J. Emile-Geay

Abstract. Paleoclimatology is a highly collaborative scientific endeavor, increasingly reliant on online databases for data sharing. Yet there is currently no universal way to describe, store and share paleoclimate data: in other words, no standard. Data standards are often regarded by scientists as mere technicalities, yet they underlie much scientific and technological innovation and facilitate collaboration between research groups. In this article, we propose a preliminary data standard for paleoclimate data, general enough to accommodate all the proxy and measurement types encountered in a large international collaboration (PAGES2K). We also introduce a vehicle for such structured data (Linked Paleo Data, or LiPD), leveraging recent advances in knowledge representation (Linked Open Data). The LiPD framework enables quick querying and extraction, and we expect it will facilitate the writing of open-source community codes to access, analyze, model and visualize paleoclimate observations. We welcome community feedback on this standard, and encourage paleoclimatologists to experiment with the format for their own purposes.
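The "structured, queryable" property LiPD aims for can be illustrated with a simplified record: measurement columns travel together with their metadata in a JSON-serializable structure, so community code can query by variable name rather than parse ad-hoc files. The keys and dataset below follow the spirit of LiPD but are simplified placeholders, not the exact schema.

```python
# Sketch of a LiPD-style record: paleoclimate measurements carried
# with structured metadata in a JSON-serializable form. Simplified,
# hypothetical keys and values.

import json

dataset = {
    "dataSetName": "ExampleLake.Smith.2015",  # hypothetical dataset
    "archiveType": "lake sediment",
    "geo": {"latitude": 60.5, "longitude": -150.2},
    "paleoData": [{
        "variableName": "d18O",
        "units": "permil",
        "values": [1.2, 1.5, 1.1],
    }],
}

def find_variables(ds, name):
    """Query: extract measurement columns by variable name."""
    return [c["values"] for c in ds["paleoData"] if c["variableName"] == name]

serialized = json.dumps(dataset)           # structured data travels as JSON
assert json.loads(serialized) == dataset   # round-trips losslessly
print(find_variables(dataset, "d18O"))
```

Because the whole record is one structured object, the same query works across every dataset that follows the convention, which is what makes community analysis codes feasible.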


2021
Vol 258
pp. 09054
Author(s):  
Pavel Chelyshkov
Sergey Volkov
Evgeny Babushkin

This article discusses the fundamentals, concept and methodology of constructing tools that implement data exchange processes, including an exhaustive list of the data needed to form information models of capital construction objects at each stage of the life cycle, as well as a development plan for these tools. The technical and technological foundations for forming a digital information model are stated, together with a scheme for verifying the information transmitted from stage to stage of a capital construction object's life cycle.
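Stage-to-stage verification of the kind described can be sketched as checking a model's attributes against a per-stage required list. The stage names and attributes below are illustrative placeholders, not the authors' actual checklist.

```python
# Sketch: verifying the information handed from one life-cycle stage
# to the next against a per-stage required-attribute list.
# Stages and attributes are hypothetical examples.

STAGE_REQUIREMENTS = {
    "design":       {"geometry", "materials", "load_class"},
    "construction": {"geometry", "materials", "load_class",
                     "as_built_deviations"},
    "operation":    {"geometry", "maintenance_schedule"},
}

def verify_handover(stage, model_attributes):
    """Return the attributes missing for a stage-to-stage handover
    (empty set means the information model passes verification)."""
    return STAGE_REQUIREMENTS[stage] - set(model_attributes)

# A design-stage model missing its load classification:
print(verify_handover("design", {"geometry", "materials"}))
```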


2018
pp. 205-221
Author(s):  
Richard Beckwith
John Sherry
David Prendergast

2020
Author(s):  
JAYDIP DATTA

In this review, a comprehensive method for data management, including data structures in C++ (with roots in ALGOL), is correlated with database management systems combining a front-end tool (Visual Basic) with a back-end tool (Oracle or SQL Server) via the Open Database Connectivity (ODBC) system. Two workshops inform this work: (1) an ANSI C workshop conducted by the Calcutta University Computer Centre (1997), covering looping, arrays and pointers; and (2) a C++ workshop conducted by La Mare Infotech Pvt Ltd (2001), covering classes and objects, inheritance, encapsulation, looping and arrays. A data flow diagram (DFD) algorithm is used for object-oriented programming languages such as C++. The algorithm is linked with models such as the Random Coil, Zipper and Zimm-Bragg models, applied to biomolecules such as serum albumin and poly-gamma-benzoyl glutamate. Database management, in particular database programming with Oracle, PL/SQL, Visual Basic and Access, is also covered. The ODBC system provides the store and vendor management system, which is a basic AI management system. N.B.: please go through Comment (18) carefully. References: DATA FLOW DIAGRAM: A DATA STRUCTURE, May 2019, DOI: 10.13140/RG.2.2.29643.44329/8, CC BY-SA 4.0; DATA BASE MANAGEMENT: A PART OF AI MANAGEMENT, August 2019, DOI: 10.13140/RG.2.2.16099.09769/1, CC BY-SA 4.0.

