Covid-on-the-Web: Exploring the COVID-19 Scientific Literature through Visualization of Linked Data from Entity and Argument Mining

2021 ◽  
pp. 1-42
Author(s):  
Aline Menin ◽  
Franck Michel ◽  
Fabien Gandon ◽  
Raphaël Gazzotti ◽  
Elena Cabrio ◽  
...  

Abstract The unprecedented mobilization of scientists in response to the COVID-19 pandemic has generated an enormous number of scholarly articles that is impossible for a human being to keep track of and explore without appropriate tool support. In this context, we created the Covid-on-the-Web project, which aims to assist the access, querying, and sense-making of COVID-19-related literature by combining efforts from the semantic web, natural language processing, and visualization fields. In particular, in this paper we present (i) an RDF dataset, a linked version of the “COVID-19 Open Research Dataset” (CORD-19), enriched via entity linking and argument mining, and (ii) the “Linked Data Visualizer” (LDViz), which assists the querying and visual exploration of this dataset. The LDViz tool supports the exploration of different views of the data by combining a query management interface, which enables the definition of meaningful subsets of the data through SPARQL queries, and a visualization interface based on a set of six visualization techniques integrated in a chained visualization concept, which also supports the tracking of provenance information. We demonstrate the potential of our approach to assist biomedical researchers in solving domain-related tasks, as well as to perform exploratory analyses, through use case scenarios.
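As a rough illustration of how a query management interface can define such subsets, the sketch below builds a SPARQL query selecting articles annotated with a given named entity. The prefixes and property names are invented for illustration and do not reflect the actual Covid-on-the-Web schema.

```python
def build_subset_query(entity_label, limit=10):
    """Build a SPARQL query selecting articles annotated with a given
    named entity, the kind of subset definition fed to a visualization.
    The oa:/schema: property usage here is a hypothetical sketch."""
    return f"""PREFIX schema: <http://schema.org/>
PREFIX oa:     <http://www.w3.org/ns/oa#>

SELECT DISTINCT ?article ?title WHERE {{
  ?annotation oa:hasTarget ?article ;
              oa:hasBody   ?entity .
  ?entity  schema:name "{entity_label}" .
  ?article schema:name ?title .
}}
LIMIT {limit}"""

query = build_subset_query("chloroquine")
print(query)
```

Such a query string would then be sent to a SPARQL endpoint, and the result set handed to the visualization layer.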


Semantic Web ◽  
2013 ◽  
pp. 169-200 ◽  
Author(s):  
Alfio Ferrara ◽  
Andriy Nikolov ◽  
François Scharffe

By specifying that published datasets must link to other existing datasets, the 4th linked data principle ensures a Web of data and not just a set of unconnected data islands. The authors propose in this paper the term data linking to name the problem of finding equivalent resources on the Web of linked data. In order to perform data linking, many techniques were developed, finding their roots in statistics, databases, natural language processing, and graph theory. The authors begin this paper by providing background information and terminological clarifications related to data linking. Then a comprehensive survey of the various techniques available for data linking is provided. These techniques are classified along the three criteria of granularity, type of evidence, and source of the evidence. Finally, the authors survey eleven recent tools performing data linking and classify them according to the surveyed techniques.
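To give a minimal flavour of one family of surveyed techniques, value matching, the sketch below pairs resources across two toy datasets by label similarity. The datasets and threshold are invented; real data-linking tools combine many such techniques with schema-level and graph-based evidence.

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Simple character-based string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def link(dataset_a, dataset_b, threshold=0.8):
    """Return (uri_a, uri_b) pairs whose labels are similar enough to be
    considered candidate equivalence (owl:sameAs) links."""
    links = []
    for uri_a, label_a in dataset_a.items():
        for uri_b, label_b in dataset_b.items():
            if similarity(label_a, label_b) >= threshold:
                links.append((uri_a, uri_b))
    return links

# Toy datasets: URI -> human-readable label
a = {"ex:p1": "Tim Berners-Lee", "ex:p2": "Fabien Gandon"}
b = {"db:Tim_Berners-Lee": "Tim Berners Lee", "db:Paris": "Paris"}
print(link(a, b))  # candidate equivalence links
```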


2021 ◽  
Vol 26 (2) ◽  
pp. 143-149
Author(s):  
Abdelghani Bouziane ◽  
Djelloul Bouchiha ◽  
Redha Rebhi ◽  
Giulio Lorenzini ◽  
Noureddine Doumi ◽  
...  

The evolution of the traditional Web into the semantic Web makes the machine a first-class citizen on the Web and increases the discoverability and accessibility of unstructured Web-based data. This development makes it possible to use Linked Data technology as the background knowledge base for unstructured data, especially texts, now available in massive quantities on the Web. Given any text, the main challenge is determining the most relevant DBpedia information with minimal effort and time. However, DBpedia annotation tools, such as DBpedia Spotlight, have mainly targeted the English and other Latin-script DBpedia versions. The situation of the Arabic language is less bright: Arabic Web content does not reflect the importance of this language. We have therefore developed an approach to annotate Arabic texts with Linked Open Data, particularly DBpedia. This approach uses natural language processing and machine learning techniques to interlink Arabic text with Linked Open Data. Despite the high complexity of a domain-independent knowledge base and the limited resources for Arabic natural language processing, the evaluation results of our approach were encouraging.
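For illustration only, here is a toy gazetteer-based entity spotter in the spirit of such annotation tools. Real systems such as DBpedia Spotlight add candidate selection and disambiguation; the gazetteer entries and resource URIs below are invented.

```python
# Toy gazetteer: surface form -> (hypothetical) Arabic DBpedia resource URI
GAZETTEER = {
    "الجزائر": "http://ar.dbpedia.org/resource/الجزائر",  # Algeria
    "باريس": "http://ar.dbpedia.org/resource/باريس",      # Paris
}

def annotate(text):
    """Return (surface_form, dbpedia_uri) pairs spotted in the text by
    plain substring lookup; no disambiguation is attempted."""
    return [(sf, uri) for sf, uri in GAZETTEER.items() if sf in text]

# "The delegation visited Algeria last week"
print(annotate("زار الوفد الجزائر الأسبوع الماضي"))
```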


Semantic Web ◽  
2020 ◽  
Vol 12 (1) ◽  
pp. 99-116
Author(s):  
Marie Destandau ◽  
Caroline Appert ◽  
Emmanuel Pietriga

Meaningful information about an RDF resource can be obtained not only by looking at its properties, but by putting it in the broader context of similar resources. Classic navigation paradigms on the Web of Data that employ a follow-your-nose strategy fail to provide such context, and place strong emphasis on first-level properties, forcing users to drill down in the graph one step at a time. We introduce the concept of semantic paths: starting from a set of resources, we follow and analyse chains of triples and characterize the sets of values at their end. We investigate a navigation strategy based on aggregation, relying on path characteristics to determine the most readable representation. We implement this approach in S-Paths, a browsing tool for linked datasets that systematically identifies the best-rated view on a given resource set, leaving users free to switch to another resource set, or to get a different perspective on the same set by selecting other semantic paths to visualize.
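The core idea of following chains of triples and characterizing the value set at their end can be sketched in a few lines. The triples and predicate names below are invented; S-Paths itself operates over SPARQL endpoints and rates many candidate views.

```python
# Toy RDF graph as (subject, predicate, object) triples
TRIPLES = [
    ("ex:a1", "ex:author", "ex:p1"),
    ("ex:a2", "ex:author", "ex:p2"),
    ("ex:p1", "ex:birthYear", 1955),
    ("ex:p2", "ex:birthYear", 1968),
]

def follow(resources, path):
    """Follow a chain of predicates (a 'semantic path') from a resource
    set and return the set of values reached at the end."""
    current = set(resources)
    for predicate in path:
        current = {o for s, p, o in TRIPLES if s in current and p == predicate}
    return current

values = follow({"ex:a1", "ex:a2"}, ["ex:author", "ex:birthYear"])
print(values)
# Characterizing the end set (e.g. all numeric) suggests a suitable view,
# such as a distribution plot rather than a list of labels.
all_numeric = all(isinstance(v, int) for v in values)
```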


Author(s):  
Tobias Käfer ◽  
Benjamin Jochum ◽  
Nico Aßfalg ◽  
Leonard Nürnberg

Abstract For Read-Write Linked Data, an environment of reasoning and RESTful interaction, we investigate the use of the Guard-Stage-Milestone approach for specifying and executing user agents. We present an ontology to specify user agents. Moreover, we give operational semantics to the ontology in a rule language that allows for executing user agents on Read-Write Linked Data. We evaluate our approach formally and regarding performance. Our work shows that despite the different assumptions of this environment in contrast to the traditional environment of workflow management systems, the Guard-Stage-Milestone approach can be transferred and successfully applied to the Web of Read-Write Linked Data.
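To convey the Guard-Stage-Milestone flavour in miniature: a stage opens when its guard holds over the data state and closes when its milestone is achieved. The payment example below is invented for illustration and is not the paper's ontology or rule language.

```python
# One toy stage with a guard and a milestone, both predicates over a state dict
stages = {
    "collect_payment": {
        "guard": lambda s: s.get("order_placed", False),
        "milestone": lambda s: s.get("paid", False),
        "open": False,
    },
}

def step(state):
    """One evaluation round: open stages whose guards hold, close stages
    whose milestones are achieved."""
    for name, stage in stages.items():
        if not stage["open"] and stage["guard"](state) and not stage["milestone"](state):
            stage["open"] = True
        elif stage["open"] and stage["milestone"](state):
            stage["open"] = False

state = {"order_placed": True, "paid": False}
step(state)
print(stages["collect_payment"]["open"])  # guard holds, stage opens
state["paid"] = True
step(state)
print(stages["collect_payment"]["open"])  # milestone achieved, stage closes
```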


2021 ◽  
Vol 1 ◽  
pp. 2691-2700
Author(s):  
Stefan Goetz ◽  
Dennis Horber ◽  
Benjamin Schleich ◽  
Sandro Wartzack

Abstract The success of complex product development projects strongly depends on the clear definition of target factors that allow a reliable statement about the fulfilment of the product requirements. In the context of tolerancing and robust design, Key Characteristics (KCs) have been established for this purpose and form the basis for all downstream activities. In order to integrate the activities related to KC definition into product development as early as possible, the often vaguely formulated requirements must be translated into quantifiable KCs. However, this is primarily a manual process, so the results strongly depend on the experience of the design engineer. To overcome this problem, a novel computer-aided approach is presented, which automatically derives associated functions and KCs during the definition of product requirements. The approach uses natural language processing and formalized design knowledge to extract and provide implicit information from the requirements. This leads to a clear definition of the requirements and KCs and thus creates a sound basis for robustness evaluation at the beginning of the concept design stage. The approach is demonstrated on the example of a window lifter.
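A toy sketch of the underlying idea: matching patterns in a requirement sentence to quantifiable characteristics. The pattern table and KC names are invented for illustration and stand in for the paper's formalized design knowledge.

```python
import re

# Invented mapping from requirement phrasing to a quantifiable KC
KC_PATTERNS = [
    (r"\bforce\b.*?(\d+(?:\.\d+)?)\s*N\b", "actuation force [N]"),
    (r"within\s+(\d+(?:\.\d+)?)\s*s\b", "closing time [s]"),
]

def derive_kcs(requirement):
    """Extract (kc_name, value) pairs from a requirement sentence by
    pattern matching; real approaches use full NLP pipelines."""
    kcs = []
    for pattern, kc_name in KC_PATTERNS:
        m = re.search(pattern, requirement, flags=re.IGNORECASE)
        if m:
            kcs.append((kc_name, float(m.group(1))))
    return kcs

req = "The window lifter shall close within 4 s with a force below 100 N."
print(derive_kcs(req))
```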


Designs ◽  
2021 ◽  
Vol 5 (3) ◽  
pp. 42
Author(s):  
Eric Lazarski ◽  
Mahmood Al-Khassaweneh ◽  
Cynthia Howard

In recent years, disinformation and “fake news” have been spreading throughout the internet at rates never seen before. This has led fact-checking organizations, groups that seek out claims and comment on their veracity, to spawn worldwide in an effort to stem the tide of misinformation. However, even with the many human-powered fact-checking organizations currently in operation, disinformation continues to run rampant throughout the Web, and the existing organizations are unable to keep up. This paper discusses in detail recent advances in computer science that use natural language processing to automate fact checking. It follows the entire process of automated fact checking with natural language processing, from detecting claims, to checking them against evidence, to outputting results. In summary, automated fact checking works well in some cases, though generalized fact checking still needs improvement prior to widespread use.
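The claim-detection, evidence-retrieval, verdict pipeline can be caricatured as below. The check-worthiness heuristic and the evidence store are toy stand-ins for the trained models and retrieval systems the paper surveys.

```python
# Toy evidence store: normalized claim -> whether it is supported
EVIDENCE = {
    "the eiffel tower is in paris": True,
    "the moon is made of cheese": False,
}

def is_claim(sentence):
    """Crude check-worthiness heuristic: a factual-sounding declarative."""
    return not sentence.strip().endswith("?") and " is " in sentence.lower()

def fact_check(sentence):
    """Run the three pipeline stages: detect, retrieve, output a verdict."""
    if not is_claim(sentence):
        return "not a claim"
    verdict = EVIDENCE.get(sentence.lower().rstrip(". "))
    if verdict is None:
        return "unverifiable"
    return "supported" if verdict else "refuted"

for s in ["The Eiffel Tower is in Paris.",
          "The moon is made of cheese.",
          "Is it raining?"]:
    print(s, "->", fact_check(s))
```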


2019 ◽  
Vol 19 (1) ◽  
pp. 3-23
Author(s):  
Aurea Soriano-Vargas ◽  
Bernd Hamann ◽  
Maria Cristina F de Oliveira

We present an integrated interactive framework for the visual analysis of time-varying multivariate data sets. As part of our research, we performed in-depth studies concerning the applicability of visualization techniques to obtain valuable insights. We consolidated the considered analysis and visualization methods in one framework, called TV-MV Analytics. TV-MV Analytics effectively combines visualization and data mining algorithms providing the following capabilities: (1) visual exploration of multivariate data at different temporal scales, and (2) a hierarchical small multiples visualization combined with interactive clustering and multidimensional projection to detect temporal relationships in the data. We demonstrate the value of our framework for specific scenarios, by studying three use cases that were validated and discussed with domain experts.
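One of the framework's capabilities, viewing multivariate data at different temporal scales, amounts to re-aggregating each variable's series into coarser windows before visualizing. The sketch below illustrates this with invented data; it is not TV-MV Analytics' implementation.

```python
from statistics import mean

# Invented multivariate time series: variable -> values at a fine scale
series = {
    "temperature": [14, 15, 17, 21, 23, 22],
    "humidity":    [80, 78, 70, 60, 55, 58],
}

def rescale(values, window):
    """Mean-aggregate a series into non-overlapping windows, producing
    the same data at a coarser temporal scale."""
    return [mean(values[i:i + window]) for i in range(0, len(values), window)]

coarse = {var: rescale(vals, 3) for var, vals in series.items()}
print(coarse)
```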


Author(s):  
Olaf Hartig ◽  
Juan Sequeda ◽  
Jamie Taylor ◽  
Patrick Sinclair

Author(s):  
Tim Berners-Lee ◽  
Kieron O’Hara

This paper discusses issues that will affect the future development of the Web, either increasing its power and utility, or alternatively suppressing its development. It argues for the importance of the continued development of the Linked Data Web, and describes the use of linked open data as an important component of that. Second, the paper defends the Web as a read–write medium, and goes on to consider how the read–write Linked Data Web could be achieved.

