data standard
Recently Published Documents


TOTAL DOCUMENTS: 173 (five years: 56)
H-INDEX: 16 (five years: 3)

2021
Author(s):  
Laura Brenskelle ◽  
John Wieczorek ◽  
Edward Davis ◽  
Kitty Emery ◽  
Neill J. Wallis ◽  
...  

Darwin Core, the data standard used for sharing modern biodiversity and paleodiversity occurrence records, has previously lacked proper mechanisms for reporting what is known about the estimated age range of specimens from deep time. This has led to data providers putting these data in fields where they cannot easily be found by users, which impedes the reuse and improvement of these data by other researchers. Here we describe the development of the Chronometric Age Extension to Darwin Core, a ratified, community-developed extension that enables the reporting of ages of specimens from deeper time and the evidence supporting these estimates. The extension standardizes reporting about the methods or assays used to determine an age and other critical information like uncertainty. It gives data providers flexibility about the level of detail reported, focusing on the minimum information needed for reuse while still allowing for significant detail if providers have it. Providing a standardized format for reporting these data will make them easier to find and search and enable researchers to pinpoint specimens of interest for data improvement or accumulate more data for broad temporal studies. The Chronometric Age Extension was also the first community-managed vocabulary to undergo the new Biodiversity Informatics Standards (TDWG) review and ratification process, thus providing a blueprint for future Darwin Core extension development.
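To give a concrete sense of a record conforming to the extension, here is a minimal sketch in Python. The term names are drawn from the ratified Chronometric Age vocabulary, but the values, the laboratory name, and the dict representation are illustrative assumptions, not data from the paper.

```python
# Illustrative only: one Chronometric Age Extension record for a
# radiocarbon-dated specimen, expressed as a Python dict. Term names
# follow the ratified vocabulary; all values are invented.
chronometric_age = {
    "chronometricAgeID": "urn:uuid:...",          # stable identifier (placeholder)
    "verbatimChronometricAge": "2750 +/- 30 BP",  # age as originally reported
    "chronometricAgeProtocol": "AMS radiocarbon dating",
    "earliestChronometricAge": 2780,
    "earliestChronometricAgeReferenceSystem": "BP",
    "latestChronometricAge": 2720,
    "latestChronometricAgeReferenceSystem": "BP",
    "chronometricAgeUncertaintyInYears": 30,      # reported measurement uncertainty
    "materialDated": "charcoal",                  # evidence the age estimate is based on
    "chronometricAgeDeterminedBy": "Example Radiocarbon Lab",  # hypothetical
}
```

Standardising the age range and its uncertainty into dedicated, searchable terms like these is what lets aggregators filter specimens by time interval rather than parsing free-text remarks.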


2021
Author(s):  
Li Tang ◽  
LiQingLi ◽  
Yang Xie ◽  
Jilan Zhang ◽  
Gongliang Li ◽  
...  

Author(s):  
W. N. F. W. A. Basir ◽  
U. Ujang ◽  
Z. Majid ◽  
S. Azri

Abstract. Building Information Modeling (BIM) has been applied in the construction industry for many years because it offers clear advantages in controlling and managing construction projects throughout their life cycle. These advantages, however, are concentrated on indoor planning tasks. When a construction project also involves outdoor planning, that aspect needs equal attention. To cover outdoor planning, a Geographic Information System (GIS) can be applied: GIS is designed for outdoor planning and spatial analysis, offers rich geospatial information, and can supply detailed geometric and semantic information about buildings to support improved automation. To produce better planning for construction projects, BIM and GIS should therefore be integrated. Integrating the two domains requires investigating the data interoperability between them, because they use different data standards. This study focuses on achieving data interoperability through data integration between BIM and GIS, addressing the problems of data mismatch and data loss during the translation process. Industry Foundation Classes (IFC) was used as the data standard for the integration. The outcomes show that when data interoperability is established between BIM and GIS, these problems can be solved, and the data dimensions and coordinate systems can also be controlled.
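As a hedged illustration of one such translation step, the sketch below uses the open-source ifcopenshell library to read a hypothetical building.ifc file and convert the georeference stored on IfcSite into decimal degrees, the form a GIS coordinate system expects. The file name and the assumption that a georeference is present are illustrative.

```python
# A minimal sketch of one BIM-to-GIS translation step, assuming the
# open-source ifcopenshell library and a hypothetical "building.ifc" file.
import ifcopenshell

def dms_to_decimal(dms):
    """Convert an IFC compound plane angle (degrees, minutes, seconds,
    optional millionths of a second) to decimal degrees."""
    sign = -1 if dms[0] < 0 else 1
    parts = [abs(c) for c in dms] + [0] * (4 - len(dms))
    return sign * (parts[0] + parts[1] / 60 + parts[2] / 3600
                   + parts[3] / 3.6e9)

model = ifcopenshell.open("building.ifc")  # hypothetical input file

# IfcSite carries the georeference that lets a GIS position the model.
site = model.by_type("IfcSite")[0]
if site.RefLatitude and site.RefLongitude:
    lat = dms_to_decimal(site.RefLatitude)
    lon = dms_to_decimal(site.RefLongitude)
    print(f"Site georeference: {lat:.6f}, {lon:.6f}")

# Building elements and their semantic attributes are what a GIS layer
# would ingest alongside the geometry.
for wall in model.by_type("IfcWall"):
    print(wall.GlobalId, wall.Name)
```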


2021
Author(s):  
Joy Hsu ◽  
Ramya Ravichandran ◽  
Edwin Zhang ◽  
Christine Keung

Buildings, 2021, Vol 11 (10), pp. 456
Author(s):  
Soroush Sobhkhiz ◽  
Yu-Cheng Zhou ◽  
Jia-Rui Lin ◽  
Tamer E. El-Diraby

This research reviews recent advances in the domain of Automated Rule Checking (ARC) and argues that current systems are predominantly designed to validate models in post-design stages, which is useful for applications such as e-permitting. However, such a design-check-separated paradigm imposes a burden on designers, as they need to iteratively fix fail-to-pass issues. Accordingly, the study reviews best practices of IFC-based ARC systems and proposes a framework for ARC system development that aims to achieve proactive, bottom-up solutions built upon the requirements and resources of end users. To present and evaluate its capabilities, the framework is implemented in a real-life case study. The case study presents all the necessary steps for the development of an ARC solution, from rule selection and analysis to implementation and feedback. It explains how a rule-checking problem can be broken down into separate modules implemented in an iterative approach. Results show that the proposed framework enables successful implementation of ARC systems and highlight that a stable data standard and modeling guideline are needed to achieve proactive ARC solutions. The study also discusses critical limitations in using IFC which need to be addressed in future studies.
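To make the modular decomposition concrete, here is a minimal sketch, not taken from the paper, of one rule split into separate extraction, checking, and reporting modules over a toy in-memory model; the element names and the 900 mm threshold are invented, and a real system would extract from an IFC model instead.

```python
# A hedged sketch of decomposing one rule check into modules: extraction,
# checking, and reporting are separate steps that can be iterated on
# independently. All names and the 900 mm threshold are invented.
from dataclasses import dataclass

@dataclass
class RuleResult:
    element_id: str
    rule: str
    passed: bool
    detail: str

def extract_doors(model):
    """Extraction module: pull only the properties the rule needs."""
    return [e for e in model if e["type"] == "door"]

def check_door_width(doors, min_width_mm=900):
    """Checking module: apply the rule to the extracted data."""
    return [
        RuleResult(d["id"], "min-door-width", d["width_mm"] >= min_width_mm,
                   f"width={d['width_mm']}mm, required>={min_width_mm}mm")
        for d in doors
    ]

def report(results):
    """Reporting module: feed fail-to-pass issues back to the designer."""
    for r in results:
        print(("PASS" if r.passed else "FAIL"), r.element_id, r.detail)

toy_model = [
    {"type": "door", "id": "D1", "width_mm": 950},
    {"type": "door", "id": "D2", "width_mm": 850},
]
report(check_door_width(extract_doors(toy_model)))
```

Because each module has a narrow interface, a rule can be re-run against a revised design without touching the extraction or reporting code, which is the proactive, iterative loop the framework argues for.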


Author(s):  
Matt Woodburn ◽  
Gabriele Droege ◽  
Sharon Grant ◽  
Quentin Groom ◽  
Janeen Jones ◽  
...  

The utopian vision is of a future where a digital representation of each object in our collections is accessible through the internet and sustainably linked to other digital resources. This is a long-term goal, however, and in the meantime there is an urgent need to share data about our collections at a higher level with a range of stakeholders (Woodburn et al. 2020). To achieve this sustainably, and to aggregate this information across all natural science collections, the data need to be standardised (Johnston and Robinson 2002). To this end, the Biodiversity Information Standards (TDWG) Collection Descriptions (CD) Interest Group has developed a data standard for describing collections, which is approaching formal review for ratification as a new TDWG standard. It proposes 20 classes (Suppl. material 1) and over 100 properties that can be used to describe, categorise, quantify, link and track digital representations of natural science collections, from high-level approximations to detailed breakdowns depending on the purpose of a particular implementation.

The wide range of use cases identified for representing collection description data means that a flexible approach to the standard and the underlying modelling concepts is essential. These are centred around the 'ObjectGroup' (Fig. 1), a class that may represent any group (of any size) of physical collection objects which have one or more common characteristics. This generic definition of the 'collection' in 'collection descriptions' is an important factor in making the standard flexible enough to support the breadth of use cases. For any use case or implementation, only a subset of classes and properties within the standard is likely to be relevant; in some cases, this subset may have little overlap with those selected for other use cases. This additional need for flexibility means that very few classes and properties, representing the core concepts, are proposed to be mandatory. Metrics, facts and narratives are represented in a normalised structure using an extended MeasurementOrFact class, so that these can be user-defined rather than constrained to a set identified by the standard. Finally, rather than a rigid underlying data model as part of the normative standard, documentation will be developed to provide guidance on how the classes in the standard may be related and quantified according to relational, dimensional and graph-like models.

In summary, the standard has, by design, been made flexible enough to be used in a number of different ways. The corresponding risk is that it could be used in ways that do not deliver what is needed in terms of outputs, manageability and interoperability with other resources of collection-level or object-level data. To mitigate this, it is key for any new implementer of the standard to establish how it should be used in that particular instance, and to define any necessary constraints within the wider scope of the standard and model. This is the concept of the 'collection description scheme': a profile that defines elements such as which classes and properties should be included, which should be mandatory, and which should be repeatable; which controlled vocabularies and hierarchies should be used to make the data interoperable; how the collections should be broken down into individual ObjectGroups and interlinked; and how the various classes should be related to each other. Various factors might influence these decisions, including the types of information that are relevant to the use case, whether quantitative metrics need to be captured and aggregated across collection descriptions, and how many resources can be dedicated to amassing and maintaining the data.

This process has particular relevance to the Distributed System of Scientific Collections (DiSSCo) consortium, the design of which incorporates use cases for storing, interlinking and reporting on the collections of its member institutions. These include helping users of the European Loans and Visits System (ELViS) (Islam 2020) to discover specimens for physical and digital loans by providing descriptions and breakdowns of the collections of holding institutions, and monitoring digitisation progress across European collections through a dynamic Collections Digitisation Dashboard. In addition, DiSSCo will be part of a global collections data ecosystem requiring interoperation with other infrastructures such as the GBIF (Global Biodiversity Information Facility) Registry of Scientific Collections, the CETAF (Consortium of European Taxonomic Facilities) Registry of Collections and Index Herbariorum. In this presentation, we will introduce the draft standard, discuss the process of defining new collection description schemes using the standard and data model, and focus on DiSSCo requirements as examples of real-world collection description use cases.
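As a hedged illustration of these modelling concepts, the sketch below represents a single ObjectGroup whose metrics are held as normalised MeasurementOrFact entries rather than fixed fields. The class names follow the draft standard, but the property spellings, the JSON-like shape, and all values are assumptions for illustration.

```python
# Illustrative only: one ObjectGroup describing a subset of a herbarium
# collection, with metrics held as normalised MeasurementOrFact entries.
# Class names follow the draft standard; property spellings and the
# JSON-like shape are assumptions.
object_group = {
    "@type": "ObjectGroup",
    "name": "Pteridophyte herbarium sheets",      # invented example group
    "description": "Ferns and allies, dried sheets, pre-1950 accessions",
    "discipline": "Botany",                       # controlled-vocabulary term
    "measurementsOrFacts": [
        {"@type": "MeasurementOrFact",
         "measurementType": "object count",       # user-defined metric
         "measurementValue": 12400},
        {"@type": "MeasurementOrFact",
         "measurementType": "percent digitised",
         "measurementValue": 35,
         "measurementUnit": "%"},
    ],
}
```

A collection description scheme would then pin down which of these properties are mandatory, which vocabularies the terms must come from, and how such groups interlink, which is exactly the constraint-setting step described above.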


2021
Author(s):  
Vivien Voss ◽  
David Grawe ◽  
K. Heinke Schlünzen

Numerical modeling makes it possible to represent complex processes in small-scale and complex areas like cities. Resolving obstacles requires grid sizes on the order of meters. Because of these small grid sizes and numerical restrictions, such high-resolution investigations require a great deal of resources, so re-use of the results by others would enhance the value of any of these model results. However, the subsequent use of model results is still poorly developed: comparisons of model data, dissemination of results, and reproduction of simulations are hampered by inconsistent data structures, non-standardized variable names, and a lack of information on model setup. In general, to ensure the reusability and accessibility of model data, data standards should be used. The most common data standard for atmospheric model output is the CF conventions, a data standard for netCDF files, but this standard does not currently cover the output of obstacle-resolving models (ORMs).

The AtMoDat (Atmospheric Model Data) project has developed a model data standard (the ATMODAT standard) which ensures FAIR (Findable, Accessible, Interoperable, and Reusable) and well-documented data. We involved the micro-scale modeling community in this process with a web-based survey (http://uhh.de/orm-survey) to find out which micro-scale ORMs are currently in use, their model specifics (e.g. the grid and coordinate system used), and how the model result data are handled. The survey also provides the opportunity to contribute suggestions and ideas for what we should consider in the development of the standard. We have already identified typical variables used by ORMs (e.g. building structures, wall temperatures) and will propose that they be included in the CF conventions. The application of the standard is being tested on the model output of the ORM MITRAS. The standard and experiences with its application will be presented.
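As a hedged sketch of what such standardization looks like in practice, the example below writes toy obstacle-resolving model output to netCDF with CF-style variable metadata using xarray. This shows plain CF-conventions usage rather than the ATMODAT standard itself; the grid, values, and file name are invented.

```python
# A hedged sketch of CF-conventions-style netCDF output for a micro-scale,
# obstacle-resolving model. Grid, values, and file name are invented, and
# this illustrates plain CF usage, not the full ATMODAT standard.
import numpy as np
import xarray as xr

nz, ny, nx = 4, 10, 10  # toy obstacle-resolving grid, metre-scale spacing
ta = 285.0 + np.random.rand(nz, ny, nx)  # fake air temperature field [K]

ds = xr.Dataset(
    data_vars={
        "ta": (("z", "y", "x"), ta,
               {"standard_name": "air_temperature",  # CF standard name
                "units": "K"}),
    },
    coords={
        "z": ("z", np.arange(nz) * 2.0, {"units": "m", "positive": "up"}),
        "y": ("y", np.arange(ny) * 1.0, {"units": "m"}),
        "x": ("x", np.arange(nx) * 1.0, {"units": "m"}),
    },
    attrs={
        "Conventions": "CF-1.8",
        "title": "Toy obstacle-resolving model output",
        "source": "illustrative example, not MITRAS output",
    },
)
ds.to_netcdf("orm_output.nc")
```

The gap the abstract points to is that variables typical of ORMs, such as building structures or wall temperatures, have no agreed standard_name yet, which is what the proposed CF extension would supply.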


2021
Author(s):  
Danila Bredikhin ◽  
Ilia Kats ◽  
Oliver Stegle

Advances in multi-omics technologies have led to an explosion of multimodal datasets addressing questions that range from basic biology to translation. While these rich data provide major opportunities for discovery, they also come with data management and analysis challenges, motivating the development of tailored computational solutions for multi-omics data. Here, we present MUON, a data standard and an analysis framework for multi-omics designed to organise, analyse, visualise, and exchange multimodal data. MUON stores multimodal data in an efficient yet flexible data structure that supports an arbitrary number of omics layers. The MUON data structure is interoperable with existing community standards for single omics, and it provides easy access both to data from individual omics and to multimodal data views. Building on this data infrastructure, MUON enables a versatile range of analyses, from data preprocessing and the construction of multi-omics containers to flexible multi-omics alignment.
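For a flavour of the container this describes, here is a minimal sketch wrapping two single-omics AnnData objects in a MuData object via the muon package and saving it in the .h5mu exchange format; the modality names, shapes, and random values are invented for the example.

```python
# A hedged sketch of MUON's multimodal container: two single-omics
# AnnData objects wrapped in one MuData object and saved as .h5mu.
# Modality names, shapes, and values are invented.
import numpy as np
import anndata as ad
import muon as mu

n_cells = 100
rna = ad.AnnData(np.random.poisson(1.0, (n_cells, 500)).astype(np.float32))
atac = ad.AnnData(np.random.poisson(0.5, (n_cells, 2000)).astype(np.float32))

# MuData keeps each omics layer intact and usable by single-omics tools,
# while aligning observations (cells) across modalities.
mdata = mu.MuData({"rna": rna, "atac": atac})
print(mdata)                          # summary of modalities and shared cells
mdata.write("multiome_example.h5mu")  # exchangeable on-disk format
```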

