Machine readability
Recently Published Documents

TOTAL DOCUMENTS: 30 (five years: 14)
H-INDEX: 3 (five years: 0)

2021
Author(s): Martijn G. Kersloot, Philip van Damme, Ameen Abu-Hanna, Derk L. Arts, Ronald Cornet

The FAIR Principles are supported by various initiatives in the biomedical community. However, little is known about the knowledge and efforts of individual clinical researchers regarding data FAIRification. We distributed an online questionnaire to researchers from six Dutch University Medical Centers, as well as to researchers using an Electronic Data Capture platform, to gain insight into their understanding of and experience with data FAIRification. In total, 164 researchers completed the questionnaire; 64.0% of them had heard of the FAIR Principles, 62.8% had spent some or a lot of effort on achieving at least one aspect of FAIR, and 11.0% had addressed all aspects. Most researchers were unaware of the Principles' emphasis on both human- and machine-readability, as their FAIRification efforts focused primarily on achieving human-readability (93.9%) rather than machine-readability (31.2%). To make machine-readable, FAIR data a reality, researchers require proper training, support, and tools that help them understand the importance of data FAIRification and guide them through the FAIRification process.


2021
Author(s): Varsha Gouthamchand, Andre Dekker, Leonard Wee, Johan van Soest

A common concern in clinical research is improving the infrastructure that facilitates the reuse of clinical data and dealing with interoperability issues. The FAIR (Findable, Accessible, Interoperable and Reusable) Data Principles enable data reuse by providing descriptive metadata that explains what the data represent and where they can be found. Beyond aiding scholars, the FAIR guidelines also enhance the machine-readability of data, making it easier for algorithms to find and use them; this raises the likelihood of accurate interpretation and helps researchers get the most out of their data. FAIRification works by embedding knowledge in the data, which can be achieved by annotating the data with terminologies and concepts expressed in the Web Ontology Language (OWL). By attaching a terminological value, we add semantics to a specific data element, increasing interoperability and reuse. However, FAIRification can be a complicated and time-consuming process. Our main objective is to disentangle the process of making data FAIR by combining domain and technical expertise. We apply this process in a workflow that FAIRifies four independent public head and neck squamous cell carcinoma (HNSCC) datasets from The Cancer Imaging Archive (TCIA). The approach converts the data from the four datasets into Linked Data using RDF triples and then annotates these datasets with standardized terminologies. The annotations link all four datasets through their shared semantics, so a single query can retrieve the intended information from all of them.
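To make the annotation idea concrete, here is a minimal sketch in Python using the rdflib library. The namespaces, the ex:means predicate, and the NCIT concept used for matching are illustrative assumptions, not the authors' actual schema:

```python
# A minimal sketch of semantic annotation with rdflib. The namespaces,
# predicate, and terminology code are illustrative assumptions.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/hnscc/")          # hypothetical dataset namespace
NCIT = Namespace("http://purl.obolibrary.org/obo/")  # NCI Thesaurus (OBO) namespace

g = Graph()
patient = EX["patient/001"]

# A raw value from one TCIA table, lifted into RDF triples ...
g.add((patient, RDF.type, EX.Patient))
g.add((patient, EX.biologicalSex, Literal("F")))

# ... and the local data element annotated with a standardized concept,
# so the same query works on any dataset annotated the same way.
g.add((EX.biologicalSex, EX.means, NCIT.NCIT_C28421))  # NCIT "Sex" (assumed)

# A single SPARQL query finds the element via its semantics, regardless
# of the local column name in each source dataset.
results = g.query("""
    PREFIX ex: <http://example.org/hnscc/>
    SELECT ?p ?value WHERE {
        ?col ex:means <http://purl.obolibrary.org/obo/NCIT_C28421> .
        ?p ?col ?value .
    }
""")
for row in results:
    print(row.p, row.value)
```

Run against all four converted datasets loaded into one graph (or a federated endpoint), this one query would retrieve the same semantic element from each of them.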


2021
Author(s): Bhairavsingh Ghorpade, Shivakumar Raman

Part design is the principal means of communicating design intent to manufacturing and inspection, and the design data are typically communicated through CAD systems. The integration of modern analytics tools and artificial intelligence into manufacturing has significantly advanced machine recognition of design specifications and manufacturing constraints. This paper is aimed at collaboration among multiple vendors across supply chains to enable efficient order procurement. To this end, it presents a simple framework for extracting dimensional data from a part design and storing them in a form that enhances the machine readability of the part design at multiple levels of manufacturing.
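As a purely illustrative sketch of what such extracted, machine-readable dimensional data might look like (the record layout below is an assumption, not the framework proposed in the paper):

```python
# A minimal sketch of a machine-readable dimensional record extracted from
# a CAD part design; field names are illustrative assumptions.
import json
from dataclasses import dataclass, asdict

@dataclass
class Dimension:
    feature_id: str   # CAD feature the dimension belongs to
    kind: str         # e.g. "linear", "diameter", "angular"
    nominal: float    # nominal value
    upper_tol: float  # upper tolerance
    lower_tol: float  # lower tolerance
    unit: str = "mm"

# One extracted dimension, stored as JSON so any vendor's system can parse it.
hole = Dimension(feature_id="HOLE_3", kind="diameter",
                 nominal=10.0, upper_tol=0.05, lower_tol=-0.05)
print(json.dumps(asdict(hole), indent=2))
```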


Information, 2021, Vol. 12 (5), pp. 188
Author(s): Chuangtao Ma, Bálint Molnár, András Benczúr

To tackle the issues of semantic collisions and inconsistencies between learned ontologies and the original data model when learning ontologies from relational databases (RDBs), a semi-automatic semantic consistency checking method based on a graph intermediate representation and model checking is presented. First, the W-Graph, an intermediate model between databases and ontologies, is used to formalize the semantic correspondences between them; these are then transformed into a Kripke structure and encoded as an SMV program. Meanwhile, description logics (DLs) are employed to formalize the semantic specifications of the learned ontologies, since OWL DL shows good semantic compatibility and DLs offer excellent expressivity. The specifications are then converted into computation tree logic (CTL) formulas to improve machine readability. The task of checking semantic consistency can thus be converted into a global model checking problem that is solved automatically by a symbolic model checker. An example demonstrates the process of formalizing and checking the semantic consistency between learned ontologies and an RDB, and a verification experiment confirms the feasibility of the presented method. The results show that the method correctly checks and identifies the different kinds of inconsistencies between learned ontologies and their original data model.
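For intuition, a consistency requirement of this kind can be stated as a CTL formula to be checked over the Kripke structure; the atomic propositions below are illustrative assumptions rather than the paper's actual specification:

```latex
% Illustrative CTL specification (atomic propositions are assumed):
% on every path, whenever a relational table is mapped to an ontology class,
% all of its columns are eventually mapped to consistent properties.
\[
  \mathbf{AG}\left( \mathit{tableMappedToClass} \rightarrow
                    \mathbf{AF}\, \mathit{columnsMappedConsistently} \right)
\]
```

A symbolic model checker such as NuSMV can then verify such a SPEC against the encoded Kripke structure, returning a counterexample path whenever the correspondence is inconsistent.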


2021
Author(s): Neal Robert Haddaway, Adam G. Dunn, Shinichi Nakagawa

Evidence syntheses are considerable undertakings requiring substantial effort to complete. Most data generated during a review are never made publicly available; the small fraction provided alongside a review is typically not machine-readable. Full, publicly available review data in a standardised, machine-readable format could radically increase the impact, transparency, efficiency, rigour, reusability, and legacy of evidence syntheses. Using a Delphi-style, stakeholder-driven approach, we plan to develop minimum standards and standardised formats for data from systematic reviews (and related systematic maps). We will collate suggestions from a broad group of stakeholders, using several rounds of anonymous voting and review to improve group consensus on the necessary data and formatting. We will then host an online workshop with a smaller group of stakeholders to refine and finalise a shortlist of recommended minimum standards for the data, formatting, and file types needed to make the data associated with evidence syntheses machine-readable.


Author(s): S. Kornienko, I. Ismakaeva

The article discusses the need for, and the problems of, organizing data sources for studying the ideological-political and agitation-propaganda discourses of the "Reds" and the "Whites" during the Civil War, based on materials from the Perm province newspapers of 1918–1919. It notes that solving these problems is determined by the tasks of the study and the use of digital technologies, and mainly comes down to ensuring the machine readability of the data sources and their structuring and organization in forms that allow machine processing. The main ways to solve these problems are the creation of complexes of digital sources based on source-oriented information systems, and of arrays in the form of file collections of publications in text formats and of data in tabular form. It is shown that solving the problems of data organization creates the necessary conditions for the effective use of digital methods of analysis and for obtaining the expected results at the subsequent, analytical stages of the study.


Author(s): Elisa Herrmann

In 2016 the United Nations published the 17 Sustainable Development Goals (SDGs). It quickly became clear that information is a catalyst for almost every goal, and that enhancing access to information is necessary to achieve them and, ultimately, to improve life in the global community. The Biodiversity Heritage Library (BHL) is therefore an invaluable resource for redressing inequalities, as it provides information and literature as an open access library. But there are still hurdles to overcome to ensure information for all. In the following, we focus on technical developments outlined in the BHL's technical strategy.

One challenge is the disparity in digital infrastructures, which limits access to the web-based BHL. In 2019, only 53.6% of the global population accessed the internet (Clement 2020). Even if the reasons for this are diverse, we assume that network coverage is a problem we have to address. One focus of the BHL's technical strategy is therefore to support and provide solutions for remote areas with no or low-bandwidth connections.

The technical strategy also focuses on providing services and tools for various usage scenarios by implementing a responsive design. In 2019, mobile devices such as mobile phones and tablets accounted for 54% of all page views worldwide (Poleshova 2020). Even though a differentiated view must be taken of which devices are used for which scenario, it can be assumed that mobile devices will be used more frequently in everyday scientific life, for example in field research. A responsive design of the BHL website addresses this trend in technological development and media usage, so that BHL remains a user-friendly research infrastructure in the future.

Another challenge is the multilingual user experience. Multilingualism will become an essential part of BHL's technological development, in order to address the global biodiversity community and to reflect worldwide biodiversity research. We aim to achieve this through a multilingual user interface and multilingual search options.

The services and tools mentioned above require a high-quality database, especially machine-readable text. Improving optical character recognition (OCR) is therefore fundamentally important for further technological development. Good OCR results ensure a comprehensive search across the entire corpus, and with further technological possibilities, data could be added that go beyond the pure text. Currently, taxonomic names are parsed and linked to the Encyclopedia of Life (EOL), giving users the opportunity to search for taxonomic synonyms. In the future, this enrichment could be extended to more data, such as collection data, geographical names, etc. In the challenge of improving and enriching the data, the BHL will depend on its large community, for example in crowdsourcing transcription projects.

In order to reach those objectives and to continue to offer BHL's services to the global community in the best possible way, we need to monitor best practices in digital library and bioinformatics development and implement them wherever possible. The BHL consortium will have to rely on partnerships and collaborations to fulfil this plan. We are therefore looking into cooperation with other consortia and will also explore alternative technological development models in which third parties develop apps and services from open BHL data. Taking all the mentioned approaches into account, the BHL will develop from a mainly literature-focused library into a data library.

It will be our task to create open source software and tools, such as better APIs, to support the re-use of the data. This goes along with the aim of increasing awareness of the BHL within the biodiversity community, as set out in the BHL Strategic Plan 2020-2025 (Biodiversity Heritage Library 2020). To draw a conclusion, the BHL's technical strategy focuses on five main objectives to advance information access for the biodiversity community worldwide:
- Improve global awareness and accessibility
- Enhance machine-readability of BHL content for data re-use
- Identify resources needed to achieve the technical plan
- Ensure continued priorities and leadership for technical infrastructure
- Implement the BHL 2020 Technical Priority Plan (Biodiversity Heritage Library 2020)

The principle of our work is to adapt BHL to current technological, scientific and social developments in order to provide the global community with the best possible research tool for biodiversity research and to enhance the achievement of the SDGs.
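As a purely illustrative sketch of the kind of data re-use such APIs enable, the snippet below queries BHL's existing public API (v3) for publications. The operation name, parameters, and response fields are assumptions based on our reading of the published API documentation; verify them against the current docs before relying on them.

```python
# Hypothetical sketch of programmatic re-use of BHL data via its public API
# (v3). Operation name, parameters, and response fields are assumptions;
# a free API key from BHL is required.
import requests

API = "https://www.biodiversitylibrary.org/api3"
params = {
    "op": "PublicationSearch",    # assumed operation name
    "searchterm": "Darwin finches",
    "searchtype": "F",            # assumed: "F" = full-text search
    "apikey": "YOUR_API_KEY",     # obtain from BHL
    "format": "json",
}

resp = requests.get(API, params=params, timeout=30)
resp.raise_for_status()
for result in resp.json().get("Result", []):
    # Field names are assumptions; .get() tolerates their absence.
    print(result.get("Title"), "-", result.get("Url"))
```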


2020
Author(s): Neal Robert Haddaway, Adam G. Dunn, Shinichi Nakagawa

Evidence syntheses are considerable undertakings requiring substantial effort to complete. Most data generated during a review are never made publicly available; the small fraction provided alongside a review is typically not machine-readable. Full, publicly available review data in a standardised, machine-readable format could radically increase the impact, transparency, efficiency, rigour, reusability, and legacy of evidence syntheses. Using a Delphi-style, stakeholder-driven approach, we plan to develop minimum standards and standardised formats for data from evidence syntheses. We will collate suggestions from a broad group of stakeholders, using several rounds of anonymous voting and review to improve group consensus on the necessary data and formatting. We will then host an online workshop with a smaller group of stakeholders to refine and finalise a shortlist of recommended minimum standards for the data, formatting, and file types needed to make the data associated with evidence syntheses machine-readable.


2020, Vol. 12 (14), pp. 5644
Author(s): Sebastian Theißen, Jannick Höper, Jan Drzymalla, Reinhard Wimmer, Stanimira Markova, et al.

Holistic assessments of all environmental impacts of buildings, such as Life Cycle Assessments (LCAs), are rarely performed. Building services are mostly included in such assessments only in a simplified way, which means that their embodied impacts are usually underestimated. Open Building Information Modeling (BIM) and the Industry Foundation Classes (IFC) make significantly more efficient and comprehensive LCAs possible. This study investigated how building services can be included in an open-BIM-integrated whole-building LCA for the first time, identified the challenges involved, and presented six solution approaches. Based on the definition of 222 exchange requirements and their mapping to IFC, an example BIM model was created, and 7312 building-services BIM objects were linked with LCA data and analyzed in an LCA tool. The results show that 94.5% of the BIM objects could only be linked by applying one of the six solution approaches. The main problems were due to: (1) modeling issues caused by a lack of standardization of BIM object attributes; (2) poor machine readability of building-services LCA datasets, as well as a general scarcity of such datasets; and (3) non-standardized properties of building services and of LCA-specific dataset information in the IFC data format.
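A minimal sketch of the kind of BIM-object-to-LCA-data linking described here, using the open-source ifcopenshell library; the property set used for matching and the LCA lookup table are illustrative assumptions, not the study's actual 222 exchange requirements:

```python
# Sketch: walk building-services objects in an IFC model and look up LCA
# data by a product attribute. The matching property and the LCA table are
# illustrative assumptions.
import ifcopenshell
import ifcopenshell.util.element as element_util

# Hypothetical lookup: product code -> embodied GWP (kg CO2-eq per unit)
LCA_DATA = {"PIPE-CU-22": 3.1, "DUCT-ST-200": 12.4}

model = ifcopenshell.open("building_services.ifc")  # example file path

for obj in model.by_type("IfcFlowSegment"):  # pipes, ducts, and similar
    psets = element_util.get_psets(obj)
    # Assumed source of a manufacturer/product code on each object:
    code = psets.get("Pset_ManufacturerTypeInformation", {}).get("ArticleNumber")
    gwp = LCA_DATA.get(code)
    if gwp is None:
        print(f"{obj.GlobalId}: no LCA match for {code!r}")
    else:
        print(f"{obj.GlobalId}: GWP {gwp} kg CO2-eq")
```

The unmatched branch mirrors the study's central finding: most objects cannot be linked by a naive attribute lookup and need one of the six solution approaches instead.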

