Making Linked Data accessible for One Health Surveillance with the "One Health Linked Data Toolbox"

2021 ◽  
Vol 4 ◽  
Author(s):  
Taras Günther ◽  
Matthias Filter ◽  
Fernanda Dórea

In times of emerging diseases, data sharing and data integration are of particular relevance for One Health Surveillance (OHS) and decision support. Furthermore, there is an increasing demand to provide governmental data in compliance with the FAIR (Findable, Accessible, Interoperable, Reusable) data principles. Semantic web technologies are key facilitators for data interoperability, as they allow explicit annotation of data with their meaning, enabling reuse without loss of the data collection context. Among these, we highlight ontologies as a tool for modeling knowledge in a field, which simplify the interpretation and mapping of datasets in a computer-readable medium; and the Resource Description Framework (RDF), which allows data to be shared among human and computer agents following this knowledge model. Despite their potential for enabling cross-sectoral interoperability and data linkage, the use and application of these technologies is often hindered by their complexity and the lack of easy-to-use software applications.

To overcome these challenges, the OHEJP Project ORION developed the Health Surveillance Ontology (HSO). This knowledge model forms a foundation for semantic interoperability in the domain of One Health Surveillance. It provides a solution for adding data from the target sectors (public health, animal health and food safety) in compliance with the FAIR principles, supporting interdisciplinary data exchange and usage.

To provide use cases and facilitate access to HSO, we developed the One Health Linked Data Toolbox (OHLDT), which consists of three custom-developed web applications with specific functionalities. The first web application allows users to convert surveillance data available in Excel files into HSO-RDF and vice versa, demonstrating that data provided in well-established formats can be automatically translated into the linked data format HSO-RDF. The second application demonstrates the usage of HSO-RDF in an HSO triplestore database: in its user interface, the user can select HSO concepts by which to search and filter among the surveillance datasets stored in the triplestore, and the service then generates dashboards automatically based on the context of the data. The third web application demonstrates data interoperability in the OHS context by using HSO-RDF to annotate metadata and thereby link datasets across sectors; it provides a dashboard to compare public data on zoonosis surveillance published by EFSA and ECDC. The first solution enables linked data production, while the second and third provide examples of linked data consumption and of their value in enabling data interoperability across sectors.

All described solutions are based on the open-source software KNIME and are deployed as web services via a KNIME Server hosted at the German Federal Institute for Risk Assessment. The semantic web extension of KNIME, which is based on the Apache Jena Framework, allowed rapid and easy development within the project. The underlying open-source KNIME workflows are freely available and can be easily customized by interested end users. With our applications, we demonstrate that the use of linked data has great potential for strengthening the use of FAIR data in OHS and interdisciplinary data exchange.
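The Excel-to-RDF conversion at the core of the first OHLDT application can be illustrated with the Apache Jena Framework on which KNIME's semantic web extension is based. The sketch below is a minimal, hypothetical example: the namespaces and term names (e.g. hso:SurveillanceActivity, hso:targetsDisease) are illustrative placeholders, not the actual HSO vocabulary, and the spreadsheet values are invented.

```java
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.vocabulary.RDF;

// Minimal sketch: map one spreadsheet row onto RDF triples.
// Namespaces and term names are placeholders, not the real HSO vocabulary.
public class ExcelRowToHsoRdf {

    static final String HSO = "http://example.org/hso#";    // placeholder namespace
    static final String DATA = "http://example.org/data/";  // placeholder namespace

    public static void main(String[] args) {
        // Values as they might be read from one Excel row
        String rowId = "salmonella-poultry-2020";
        String disease = "Salmonellosis";
        String sector = "animal health";

        Model model = ModelFactory.createDefaultModel();
        model.setNsPrefix("hso", HSO);

        Property targetsDisease = model.createProperty(HSO, "targetsDisease");
        Property inSector = model.createProperty(HSO, "inSector");

        model.createResource(DATA + rowId)
             .addProperty(RDF.type, model.createResource(HSO + "SurveillanceActivity"))
             .addProperty(targetsDisease, disease)
             .addProperty(inSector, sector);

        model.write(System.out, "TURTLE");  // serialize the row as Turtle
    }
}
```

In the actual toolbox this mapping runs inside KNIME workflows rather than hand-written Java, and the reverse direction (HSO-RDF back to Excel) amounts to iterating over the model's statements and writing cell values.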

2016 ◽  
Vol 28 (2) ◽  
pp. 241-251 ◽  
Author(s):  
Luciane Lena Pessanha Monteiro ◽  
Mark Douglas de Azevedo Jacyntho

The study addresses the use of the Semantic Web and Linked Data principles proposed by the World Wide Web Consortium for the development of a Web application for the semantic management of scanned documents. The main goal is to record scanned documents, describing them in a way that machines can understand and process, filtering content and assisting in searches for such documents when a decision-making process is under way. To this end, machine-understandable metadata, created using reference Linked Data ontologies, are associated with the documents, creating a knowledge base. To further enrich the process, a (semi)automatic mashup of these metadata with data from the Web of Linked Data is carried out, considerably increasing the scope of the knowledge base and making it possible to extract new data related to the content of stored documents from the Web and combine them, without the user making any effort or perceiving the complexity of the whole process.
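The kind of annotation described here can be sketched with Apache Jena and the Dublin Core vocabulary. The document URI, title, and the DBpedia subject link below are invented for illustration; the paper itself does not specify which reference ontologies it uses.

```java
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.vocabulary.DC;

// Sketch: attach machine-readable metadata to a scanned document and
// link it to an external Linked Data resource (all values hypothetical).
public class AnnotateScannedDocument {
    public static void main(String[] args) {
        Model kb = ModelFactory.createDefaultModel();

        Resource doc = kb.createResource("http://example.org/docs/contract-042")
                .addProperty(DC.title, "Service contract, 2015")
                .addProperty(DC.creator, "Records Office")
                // Mashup step: link the document's subject to a resource
                // in the Web of Linked Data (here, DBpedia)
                .addProperty(DC.subject,
                        kb.createResource("http://dbpedia.org/resource/Contract"));

        kb.write(System.out, "TURTLE");
    }
}
```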


Author(s):  
Georg Neubauer

The main subject of the work is the visualization of typed links in Linked Data. The academic subjects relevant to the paper in general are the Semantic Web, the Web of Data, and information visualization. The Semantic Web, proposed by Tim Berners-Lee in 2001, was announced as an extension of the World Wide Web. The actual area of investigation concerns the connectivity of information on the World Wide Web. To explore such interconnections, visualizations are a critical requirement as well as a major part of data processing in their own right. In the context of the Semantic Web, the interrelations of information can be represented as graphs. The aim of the article is primarily to describe the arrangement of Linked Data visualization concepts by establishing their principles in a theoretical approach. Putting design restrictions into context leads to practical guidelines. By describing the creation of two alternative visualizations for a commonly used web application that represents Linked Data as a network visualization, their compatibility was tested. The application-oriented part covers the design phase, its results, and the future requirements of the project that can be derived from this test.
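A typed-link visualization ultimately reduces an RDF graph to nodes and labelled edges. The sketch below, again using Apache Jena, shows a minimal way to extract such an edge list; the input file name is a placeholder, and the actual visualization layer (layout, rendering) is out of scope here.

```java
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Statement;
import org.apache.jena.rdf.model.StmtIterator;

// Sketch: flatten an RDF model into "node --predicate--> node" edges,
// the raw material of a typed-link network visualization.
public class RdfToEdgeList {
    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        model.read("data.ttl");  // placeholder input file

        StmtIterator it = model.listStatements();
        while (it.hasNext()) {
            Statement st = it.next();
            // The predicate is the edge type; subject and object are nodes.
            System.out.printf("%s --%s--> %s%n",
                    st.getSubject(), st.getPredicate().getLocalName(), st.getObject());
        }
    }
}
```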


2021 ◽  
Vol 81 (3-4) ◽  
pp. 318-358
Author(s):  
Sander Stolk

This article provides an introduction to the web application Evoke. This application offers functionality to navigate, view, extend, and analyse thesaurus content. The thesauri that can be navigated in Evoke are expressed in Linguistic Linked Data, an interoperable data form that enables the extension of thesaurus content with custom labels and allows for the linking of thesaurus content to other digital resources. As such, Evoke is a powerful research tool that enables its users to perform novel cultural-linguistic analyses over multiple sources. This article further demonstrates the potential of Evoke by discussing how A Thesaurus of Old English was made available in the application and how this has already been adopted in the field of Old English studies. Lastly, the author situates Evoke within a number of recent developments in the field of Digital Humanities and its applications for onomasiological research.
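Linguistic Linked Data thesauri are commonly modelled with vocabularies such as SKOS. The following sketch queries a hypothetical SKOS-encoded thesaurus for the narrower terms of a concept; the file name and concept URI are placeholders, and the choice of SKOS here is an assumption rather than a statement about Evoke's actual data model.

```java
import org.apache.jena.query.*;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

// Sketch: list the narrower terms of a thesaurus concept, assuming a
// SKOS-encoded thesaurus (concept URI and file name are placeholders).
public class NarrowerTerms {
    public static void main(String[] args) {
        Model thesaurus = ModelFactory.createDefaultModel();
        thesaurus.read("thesaurus.ttl");

        String q = """
                PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
                SELECT ?narrower ?label WHERE {
                  ?narrower skos:broader <http://example.org/concept/warfare> ;
                            skos:prefLabel ?label .
                }
                """;

        try (QueryExecution qe = QueryExecutionFactory.create(q, thesaurus)) {
            ResultSet rs = qe.execSelect();
            while (rs.hasNext()) {
                System.out.println(rs.next().get("label"));
            }
        }
    }
}
```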


2017 ◽  
Vol 22 (1) ◽  
pp. 21-37 ◽  
Author(s):  
Matthew T. Mccarthy

The web of linked data, otherwise known as the semantic web, is a system in which information is structured and interlinked to provide meaningful content to artificial intelligence (AI) algorithms. As the complex interactions between digital personae and these algorithms mediate access to information, it becomes necessary to understand how these classification and knowledge systems are developed. What are the processes by which those systems come to represent the world, and how are the controversies that arise in their creation overcome? As a global form, the semantic web is an assemblage of many interlinked classification and knowledge systems, which are themselves assemblages. Through the perspectives of global assemblage theory, critical code studies and practice theory, I analyse netnographic data of one such assemblage. Schema.org is but one component of the larger global assemblage of the semantic web, and as such is an emergent articulation of different knowledges, interests and networks of actors. This articulation comes together to tame the profusion of things, seeking stability in representation, but in the process, it faces and produces more instability. Furthermore, this production of instability contributes to the emergence of new assemblages that have similar aims.


Author(s):  
Leila Zemmouchi-Ghomari

Data play a central role in the effectiveness and efficiency of web applications, such as the Semantic Web. However, data are distributed across a very large number of online sources, so significant effort is needed to integrate them for proper utilization. A promising solution to this issue is the Linked Data initiative, which is based on four principles for publishing web data, facilitating an interlinked and structured web of data rather than the existing web of documents. This paper surveys the basic ideas, techniques, and applications of the Linked Data initiative. The authors discuss some open issues in Linked Data and potential tracks to address these pending questions.
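The four principles referred to here are Berners-Lee's Linked Data rules: use URIs as names for things; use HTTP URIs so they can be looked up; provide useful information (in RDF) when a URI is dereferenced; and include links to other URIs. A minimal sketch of the dereferencing step, using only the Java standard library, is shown below; the DBpedia URI is simply a well-known example.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch: dereference a Linked Data URI, asking for RDF (Turtle) via
// HTTP content negotiation -- the mechanism behind the third principle.
public class DereferenceUri {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .followRedirects(HttpClient.Redirect.FOLLOW)  // DBpedia answers with a redirect
                .build();

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://dbpedia.org/resource/Berlin"))
                .header("Accept", "text/turtle")  // negotiate an RDF representation
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());  // Turtle triples about the resource
    }
}
```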


Author(s):  
Charles Greenidge ◽  
Hadrian Peter

Data warehouses have established themselves as necessary components of an effective Information Technology (IT) strategy for large businesses. In addition to utilizing operational databases, data warehouses must also integrate increasing amounts of external data to assist in decision support. An important source of such external data is the Web. In an effort to ensure the availability and quality of Web data for the data warehouse, we propose an intermediate data-staging layer called the Meta-Data Engine (M-DE). A major challenge, however, is the conversion of data originating on the Web, and brought in by robust search engines, into data for the data warehouse. The authors therefore also propose a framework, the Semantic Web Application (SEMWAP) framework, which facilitates semi-automatic matching of instance data from opaque web databases using ontology terms. Their framework combines Information Retrieval (IR), Information Extraction (IE), Natural Language Processing (NLP), and ontology techniques to produce a matching, and thus provides a viable building block for Semantic Web (SW) applications.
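The abstract does not reproduce SEMWAP's matching algorithm, but the core idea of scoring extracted field names against ontology terms can be sketched with a simple string-similarity measure. The Jaccard token overlap below is a deliberately simple stand-in for the IR/IE/NLP pipeline the authors describe, and all labels are invented.

```java
import java.util.*;

// Sketch: score web-data field names against ontology term labels with
// Jaccard token overlap -- a toy stand-in for SEMWAP's IR/IE/NLP matching.
public class OntologyTermMatcher {

    static double jaccard(String a, String b) {
        Set<String> ta = new HashSet<>(Arrays.asList(a.toLowerCase().split("[\\s_]+")));
        Set<String> tb = new HashSet<>(Arrays.asList(b.toLowerCase().split("[\\s_]+")));
        Set<String> inter = new HashSet<>(ta);
        inter.retainAll(tb);
        Set<String> union = new HashSet<>(ta);
        union.addAll(tb);
        return union.isEmpty() ? 0.0 : (double) inter.size() / union.size();
    }

    public static void main(String[] args) {
        List<String> ontologyTerms = List.of("product price", "delivery date", "customer name");
        String extractedField = "unit_price";  // field name scraped from a web database

        for (String term : ontologyTerms) {
            System.out.printf("%-12s ~ %-15s : %.2f%n",
                    extractedField, term, jaccard(extractedField, term));
        }
    }
}
```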


Author(s):  
Amrapali Zaveri ◽  
Andrea Maurino ◽  
Laure Berti-Équille

The standardization and adoption of Semantic Web technologies has resulted in an unprecedented volume of data being published as Linked Data (LD). However, the “publish first, refine later” philosophy leads to various quality problems arising in the underlying data such as incompleteness, inconsistency and semantic ambiguities. In this article, we describe the current state of Data Quality in the Web of Data along with details of the three papers accepted for the International Journal on Semantic Web and Information Systems' (IJSWIS) Special Issue on Web Data Quality. Additionally, we identify new challenges that are specific to the Web of Data and provide insights into the current progress and future directions for each of those challenges.
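One concrete flavour of the quality problems mentioned here, incompleteness, is easy to probe with SPARQL. The sketch below counts resources that are typed but have no human-readable label; the dataset file is a placeholder, and this is only one of many checks a fuller quality framework would run.

```java
import org.apache.jena.query.*;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

// Sketch: a single, simple quality probe -- find typed resources that
// lack an rdfs:label (one symptom of incompleteness in Linked Data).
public class MissingLabelCheck {
    public static void main(String[] args) {
        Model data = ModelFactory.createDefaultModel();
        data.read("dataset.ttl");  // placeholder dataset

        String q = """
                PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
                SELECT ?s WHERE {
                  ?s a ?type .
                  FILTER NOT EXISTS { ?s rdfs:label ?label }
                }
                """;

        try (QueryExecution qe = QueryExecutionFactory.create(q, data)) {
            ResultSet rs = qe.execSelect();
            while (rs.hasNext()) {
                System.out.println("No label: " + rs.next().get("s"));
            }
        }
    }
}
```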


Author(s):  
Khalid Saleh Aloufi

Open data are available from various private and public institutions in different resource formats. A great number of open datasets are already published via open data portals, where datasets and resources are mainly presented in tabular or sheet formats. However, such formats pose barriers to application development and web standards. One of the recommended web standards for semantic web applications is RDF. Various research efforts have focused on presenting open data in RDF format; however, no framework has transformed tabular open data into RDF while considering the HTML tags and properties of the resources and datasets. Therefore, a methodology is required to generate RDF resources from this type of open data resource. The methodology presented here transforms open data from a tabular format into RDF files for the Saudi Open Data Portal, and it successfully converts open data resources in sheet format into RDF resources. Recommendations and future work are given to enhance the development of open data.
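The paper's full transformation pipeline is not reproduced here, but the essential table-to-RDF step can be sketched as follows. The column handling, base URI, and property names are placeholders rather than the portal's actual schema, and the naive comma split stands in for a proper CSV or spreadsheet parser.

```java
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Sketch: turn each row of a tabular export into an RDF resource whose
// properties are derived from the header row (all names are placeholders).
public class CsvToRdf {
    public static void main(String[] args) throws IOException {
        String base = "http://example.org/opendata/";
        List<String> lines = Files.readAllLines(Path.of("dataset.csv"));
        String[] headers = lines.get(0).split(",");  // naive split; real data needs a CSV parser

        Model model = ModelFactory.createDefaultModel();
        for (int i = 1; i < lines.size(); i++) {
            String[] cells = lines.get(i).split(",");
            Resource row = model.createResource(base + "row/" + i);
            for (int c = 0; c < headers.length && c < cells.length; c++) {
                Property p = model.createProperty(base + "prop/", headers[c].trim());
                row.addProperty(p, cells[c].trim());
            }
        }
        model.write(System.out, "TURTLE");
    }
}
```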


Author(s):  
Jessica Oliveira De Souza ◽  
Jose Eduardo Santarem Segundo

Since the Semantic Web was created to improve the experience of current web users, Linked Data is the primary means by which the Semantic Web vision is realized in full, provided that appropriate criteria and requirements are respected. The quality of the data and information stored in Linked Data sets is therefore essential to meeting the basic objectives of the Semantic Web. Hence, this article aims to describe and present specific quality dimensions and their related quality issues.


2021 ◽  
Vol 9 ◽  
pp. 205031212110430
Author(s):  
Samer Abuzerr ◽  
Kate Zinszer ◽  
Abraham Assan

Objectives: Several factors have changed interactions between people, animals, plants, and the environment, renewing the relevance of the One Health surveillance system in the fight against zoonotic diseases such as COVID-19. Therefore, this study aimed to explore barriers to implementing an integrated One Health surveillance system in Palestine. Methods: This qualitative study was conducted from April 2020 until August 2020. Data were collected using semi-structured interview guides. Seven key stakeholders were interviewed during data collection. A thematic analysis was performed. Results: Four overarching themes emerged explaining barriers to integrated implementation of the One Health surveillance system: lack of policy coherence, limited financial resources, poor governance and leadership, and lack of One Health training programmes. Conclusion: An improved understanding of the transmission and effective control (including the One Health approach) of zoonotic diseases, together with better governance and leadership, is critical in the fight against diseases that threaten public health, such as COVID-19.

