Implementation of a System That Helps Novice Users Work with Linked Data

Electronics ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 1237
Author(s):  
Solgil Oh ◽  
Sujin Yoo ◽  
Yuri Kim ◽  
Jisoo Song ◽  
Seongbin Park

On the Semantic Web, resources are connected to each other by IRIs. Because linked data forms its basic unit, machines can consume semantic data and reason about the relations between resources without additional human intervention. However, users who encounter the Semantic Web for the first time must understand its underlying structure and a number of grammatical rules. This study proposes linking data sets on the Semantic Web through Euler diagrams, which require no prior knowledge. We performed a user study with our relationship-building system and verified that users could better understand linked data through its usage. By using our Euler-diagram-based data relationship-building system, users can be guided toward an understanding of the Semantic Web and its data linkage. We also expect that the data sets defined through our system can be used in various applications.
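The core idea the abstract describes, presenting the relationship between linked data sets as an Euler diagram rather than as triples, can be sketched in a few lines. This is a minimal illustration, not the authors' system; the IRI sets and the relation labels are invented:

```python
# Minimal sketch: model two hypothetical Linked Data sets as sets of IRIs
# and derive the Euler-diagram relation (disjoint / overlap / subset)
# that a novice-facing tool might render.

def euler_relation(a, b):
    """Classify how two IRI sets relate, as an Euler diagram would draw them."""
    if a <= b:
        return "subset"
    if b <= a:
        return "superset"
    if a & b:
        return "overlap"
    return "disjoint"

people = {"http://example.org/Alice", "http://example.org/Bob"}
employees = {"http://example.org/Alice"}

print(euler_relation(employees, people))  # subset
```

A diagram renderer would then draw the "employees" circle inside "people", conveying the set relation without requiring the user to read any RDF syntax.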


Author(s):  
Jessica Oliveira De Souza ◽  
Jose Eduardo Santarem Segundo

Since the Semantic Web was created to improve the experience of current web users, Linked Data is the primary means by which Semantic Web applications realize their theoretical potential, provided that appropriate criteria and requirements are respected. The quality of the data and information stored in linked data sets is therefore essential to meeting the basic objectives of the Semantic Web. Hence, this article aims to describe and present specific quality dimensions and their related quality issues.
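One of the quality dimensions commonly discussed for linked data sets is completeness. As a hedged illustration of how such a dimension can be measured (the sample entities and the property name are invented, not taken from the article):

```python
# Illustrative sketch of one linked-data quality dimension (completeness):
# the share of entities that state a required property.

DATASET = [
    {"iri": "http://example.org/1", "label": "Alice"},
    {"iri": "http://example.org/2"},                    # missing label
]

def completeness(entities, prop):
    """Fraction of entities in the data set that carry the given property."""
    return sum(prop in e for e in entities) / len(entities)

print(completeness(DATASET, "label"))  # 0.5
```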



2016 ◽  
Vol 35 (1) ◽  
pp. 51 ◽  
Author(s):  
Juliet L. Hardesty

Metadata, particularly within the academic library setting, is often expressed in eXtensible Markup Language (XML) and managed with XML tools, technologies, and workflows. Managing a library’s metadata currently takes on a greater level of complexity as libraries are increasingly adopting the Resource Description Framework (RDF). Semantic Web initiatives are surfacing in the library context with experiments in publishing metadata as Linked Data sets and also with development efforts such as BIBFRAME and the Fedora 4 Digital Repository incorporating RDF. Use cases show that transitions into RDF are occurring in both XML standards and in libraries with metadata encoded in XML. It is vital to understand that transitioning from XML to RDF requires a shift in perspective from replicating structures in XML to defining meaningful relationships in RDF. Establishing coordination and communication among these efforts will help as more libraries move to use RDF, produce Linked Data, and approach the Semantic Web.
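The shift the article emphasizes, from replicating XML structure to stating RDF relationships, can be sketched with standard-library tools. The record format, element names and IRIs below are illustrative, not any library's actual metadata schema:

```python
# Hedged sketch of the XML-to-RDF perspective shift: the XML nests fields
# inside one record; the RDF output states relationships between IRIs.

import xml.etree.ElementTree as ET

xml_record = """<record id="http://example.org/item/1">
  <title>Sample Map</title>
  <creator>http://example.org/person/7</creator>
</record>"""

def record_to_ntriples(xml_text):
    root = ET.fromstring(xml_text)
    s = root.get("id")
    title = root.findtext("title")
    creator = root.findtext("creator")
    triples = [f'<{s}> <http://purl.org/dc/terms/title> "{title}" .']
    # In RDF the creator is a link to another resource, not a nested string.
    triples.append(f"<{s}> <http://purl.org/dc/terms/creator> <{creator}> .")
    return triples

for t in record_to_ntriples(xml_record):
    print(t)
```

The point of the sketch is the second triple: where XML would embed the creator inside the record, RDF links two independently addressable resources.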



Author(s):  
J. A. Jabbar ◽  
R. Bulbul

Abstract. Traveling is a basic part of daily life. Whenever a person wants to travel, e.g. from home to the workplace, the essential question that arises is which route to follow. The choice of a route also varies with the traveler's interests, e.g. visiting a hospital on the way home or traveling along a greener route. Such varied route planning may be easy in one's local neighborhood; in a new neighborhood, however, and with a growing number of options, e.g. possible restaurants to visit, a guiding system is required that suggests an optimal route according to the traveler's interests, i.e. one that answers semantic queries. Most existing routing engines answer only geometric queries, e.g. for the shortest route, because their data lack semantics, and adding semantics to a routing graph requires a semantic data source. Geo-semantics can be added by combining GIS with the Semantic Web. The Semantic Web is an extension of the World Wide Web (WWW) in which content is maintained and structured in a standard, machine-understandable way; it thus provides linked data as a means of semantic enrichment, in this study the semantic enrichment of a routing dataset. To use such a semantically enriched routing network, a routing application is needed that can answer semantic queries. This research serves as a proof of concept for how linked data can be used for the semantic enrichment of routing networks. It proposes a prototype routing framework and application built with open-source technologies, along with use cases in which semantic routing queries are addressed, and highlights the challenges of this approach and perspectives for future research.
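The difference between a geometric and a semantic routing query can be sketched on a toy graph. This is not the authors' framework; the graph, the "green" tag and the discount factor are invented to illustrate how edge semantics can change the chosen route:

```python
# Sketch of a "semantic" routing query: edges carry tags (e.g. "green")
# and the cost function can discount them to express a preference, so the
# result answers more than a purely geometric shortest-path query.

import heapq

edges = {
    "home": [("park", 2.0, {"green"}), ("mall", 1.2, set())],
    "park": [("work", 2.0, {"green"})],
    "mall": [("work", 1.2, set())],
    "work": [],
}

def route(start, goal, green_discount=0.5):
    """Dijkstra where 'green'-tagged edges are discounted to encode preference."""
    heap = [(0.0, start, [start])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == goal:
            return path, cost
        if node in seen:
            continue
        seen.add(node)
        for nxt, w, tags in edges[node]:
            w2 = w * green_discount if "green" in tags else w
            heapq.heappush(heap, (cost + w2, nxt, path + [nxt]))
    return None, float("inf")

print(route("home", "work"))                      # prefers the green route via the park
print(route("home", "work", green_discount=1.0))  # geometric query: shorter mall route
```

With the discount applied the greener route wins; without it, the geometrically shortest route is returned, which is all most routing engines can offer today.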



Author(s):  
Andra Waagmeester ◽  
Lynn Schriml ◽  
Andrew Su

Wikidata (http://www.wikidata.org) is the linked database of the Wikimedia Foundation. Like its sister project Wikipedia, it is open to humans and machines. Initially intended primarily as a central repository of structured data for the approximately 200 language versions of Wikipedia, Wikidata now serves many other use cases as well. It is an open, Semantic Web-compatible database that anyone can edit. Here, we present the Gene Wiki initiative. This project started in 2008 by creating Wikipedia articles for all human genes (Huss et al. 2008). These articles were enriched with structured information on the genes in tables called infoboxes. With the onset of Wikidata in 2012, the project shifted its attention away from the infoboxes, and since then we have been enriching Wikidata with structured knowledge from public scientific resources on genes, proteins, diseases and compounds (Burgstaller-Muehlbacher et al. 2016). This structured information is added to Wikidata while active links to the primary source are maintained. Adding a new resource to Wikidata is a community-driven process that starts with modelling the subjects of the resource under scrutiny. This involves seeking commonalities with similar concepts in Wikidata and, if none are found, creating new ones. The process mostly happens in a collaboratively edited document (e.g. Google Docs), where graphical networks are drawn to reflect the data being modelled and its embedding in Wikidata. Once consensus has been reached, the model typically exists as a human-readable document. To allow future validation of the model against existing data, it is converted into a machine-readable Shape Expression (ShEx) (Anonymous 2019, Waagmeester et al. 2017). The Shape Expressions schema language can be consumed and produced by both humans and machines and is useful for model development, legacy review and formal documentation. Once a semantic data model (as a Shape Expression) has been agreed upon, i.e. community consensus is reached, a bot is developed to convert the knowledge from the primary source into the Wikidata model. While Wikidata is linked data (part of the Semantic Web), many life science resources are not. On the contrary, many distinct file formats and API output formats are used to present life-science knowledge. To convert between these formats, bots need to be developed that can parse the different resources and serialize the results into Wikidata. We have developed a software library in the Python programming language, which we use to build these bots. Once created, the bots run regularly to keep Wikidata up to date with knowledge on genes, proteins, diseases and drugs.
Having scientific knowledge represented in Wikidata comes with benefits. First, placing research data on Wikidata increases its sustainability: when research projects end, their findings remain on an independently funded infrastructure. Having someone else maintain the infrastructure for a data commons also relieves the research community of having to do it themselves, leaving more time to focus on research. As a generic public data commons, Wikidata allows public scrutiny and rapid integration with other domains. Inconsistencies or disagreements between resources become more visible thanks to the unified data models and interfaces, and we leverage this as a feature in our bots. One of our core resources is, for example, the Disease Ontology (Schriml et al. 2018). This ontology of human diseases is continuously updated by its curation team, and twice a month the updates are synchronised with Wikidata. If inconsistencies or disagreements with other resources surface, they are logged and shared with the curation team of the Disease Ontology. We have thereby created a bi-directional update cycle that improves both the Disease Ontology and Wikidata.
Although our bots focus on molecular biology, our approach is generic enough that we are confident a similar approach can work in biodiversity informatics.
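The bot pattern described above (parse a primary-source record, map it onto Wikidata-style statements, validate against a shape before writing) can be sketched as follows. This is a hedged illustration, not the project's actual code: the record format is invented, the shape is a toy stand-in for a ShEx schema, and the property IDs are used only as examples:

```python
# Hedged sketch of the bot pattern: source record -> (property, value)
# statements -> shape validation before any write to Wikidata.

RECORD = {"doid": "DOID:1612", "label": "breast cancer"}

# A tiny stand-in for a ShEx shape: required properties with value predicates.
DISEASE_SHAPE = {
    "P31": lambda v: v == "Q12136",           # instance of: disease
    "P699": lambda v: v.startswith("DOID:"),  # Disease Ontology ID
}

def to_statements(record):
    """Map a primary-source record onto Wikidata-style statements."""
    return {"P31": "Q12136", "P699": record["doid"], "label": record["label"]}

def conforms(statements, shape):
    """Check the statements against the agreed data model before writing."""
    return all(p in statements and check(statements[p]) for p, check in shape.items())

stmts = to_statements(RECORD)
print(conforms(stmts, DISEASE_SHAPE))  # True
```

Real ShEx validation is considerably richer (cardinalities, references between shapes), but the gatekeeping role is the same: a bot only serializes statements into Wikidata once they conform to the community-agreed model.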



2014 ◽  
Vol 66 (5) ◽  
pp. 519-536 ◽  
Author(s):  
Josep Maria Brunetti ◽  
Roberto García

Purpose – The growing volume of semantic data available on the web creates a need to handle the information overload phenomenon. The potential of this data is enormous, but in most cases it is very difficult for users to visualize, explore and use, especially for lay users without experience of Semantic Web technologies. The paper aims to discuss these issues.

Design/methodology/approach – The Visual Information-Seeking Mantra "Overview first, zoom and filter, then details-on-demand" proposed by Shneiderman describes how data should be presented in stages to achieve effective exploration. The overview is the user's first task when dealing with a data set; the objective is for the user to form an idea of the overall structure of the data set. Different information architecture (IA) components supporting overview tasks have been developed so that they are generated automatically from semantic data, and they have been evaluated with end users.

Findings – The chosen IA components are well known to web users, as they are present in most web pages: navigation bars, site maps and site indexes. The authors complement them with treemaps, a visualization technique for displaying hierarchical data. These components were developed following an iterative user-centered design methodology. Evaluations with end users have shown that users easily become accustomed to them, despite the components being generated automatically from structured data and requiring no knowledge of the underlying semantic technologies, and that the different overview components complement each other because they address different information search needs.

Originality/value – Obtaining overviews of semantic data sets cannot easily be done with current semantic web browsers. Overviews become difficult to achieve with large heterogeneous data sets, which are typical of the Semantic Web, because traditional IA techniques do not easily scale. There is little or no support for obtaining overview information quickly and easily at the beginning of the exploration of a new data set. This can be a serious limitation when exploring a data set for the first time, especially for lay users. The proposal is to reuse and adapt existing IA components to provide this overview to users, and to show that they can be generated automatically from the thesauri and ontologies that structure semantic data while providing a user experience comparable to traditional web sites.
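Generating a familiar IA component such as a site map directly from the ontology that structures the data can be sketched simply. The subclass triples below are invented for illustration; a real system would read them from the data set's ontology:

```python
# Sketch: auto-generate an overview component (a nested site map) from
# subclass-of triples, in the spirit of the IA components described above.

SUBCLASS_OF = [
    ("Painting", "Artwork"),
    ("Sculpture", "Artwork"),
    ("OilPainting", "Painting"),
]

def site_map(root, indent=0):
    """Render a nested text site map from subclass triples."""
    lines = [" " * indent + root]
    for child, parent in SUBCLASS_OF:
        if parent == root:
            lines += site_map(child, indent + 2)
    return lines

print("\n".join(site_map("Artwork")))
```

The user sees an ordinary site map and can form an overview of the data set's structure without ever touching the underlying semantic technologies.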





Semantic Web ◽  
2020 ◽  
pp. 1-29
Author(s):  
Bettina Klimek ◽  
Markus Ackermann ◽  
Martin Brümmer ◽  
Sebastian Hellmann

In recent years, lexical resources have emerged rapidly on the Semantic Web. Whereas most of this linguistic information is already machine-readable, we found that morphological information is mostly absent or contained only in semi-structured strings. An integration of morphemic data has not yet been undertaken, owing to the lack of domain-specific ontologies and explicit morphemic data. In this paper, we present the Multilingual Morpheme Ontology, MMoOn Core, which can be regarded as the first comprehensive ontology for the linguistic domain of morphological language data. We describe how crucial concepts such as morphs, morphemes, word forms and meanings are represented and interrelated, and how language-specific morpheme inventories can be created as a new kind of morphological dataset. The aim of the MMoOn Core ontology is to serve as a shared semantic model for linguists and NLP researchers alike, enabling the creation, conversion, exchange, reuse and enrichment of morphological language data across the data-dependent language sciences. Various use cases are therefore illustrated to draw attention to the cross-disciplinary potential that can be realized with the MMoOn Core ontology in the context of the existing Linguistic Linked Data research landscape.
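The kind of explicit morphemic data the ontology makes possible can be illustrated with a small sketch: a word form decomposed into morphs, each linked to a morpheme and a meaning. The property names here are invented placeholders, not the actual MMoOn Core IRIs:

```python
# Hedged sketch of explicit morpheme data: a word form decomposed into
# morphs linked to morphemes and meanings, flattened into triples.

WORD_FORM = {
    "form": "unhappiness",
    "morphs": [
        {"morph": "un",    "morpheme": "un-",   "meaning": "negation"},
        {"morph": "happi", "morpheme": "happy", "meaning": "lexical root"},
        {"morph": "ness",  "morpheme": "-ness", "meaning": "nominalizer"},
    ],
}

def to_triples(entry):
    """Flatten the decomposition into subject-predicate-object statements."""
    w = entry["form"]
    triples = []
    for m in entry["morphs"]:
        triples.append((w, "consistsOfMorph", m["morph"]))
        triples.append((m["morph"], "realizesMorpheme", m["morpheme"]))
        triples.append((m["morpheme"], "hasMeaning", m["meaning"]))
    return triples

for t in to_triples(WORD_FORM):
    print(t)
```

Contrast this with a semi-structured string like "un+happi+ness": the triples make each morph, its morpheme and its meaning individually addressable and queryable.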



Author(s):  
Yusuke Tagawa ◽  
Arata Tanaka ◽  
Yuya Minami ◽  
Daichi Namikawa ◽  
Michio Simomura ◽  
...  


Author(s):  
Georg Neubauer

The main subject of this work is the visualization of typed links in Linked Data. The academic subjects relevant to the paper are the Semantic Web, the Web of Data and information visualization. The Semantic Web, proposed by Tim Berners-Lee in 2001, was announced as an extension of the World Wide Web. The actual area of investigation concerns the connectivity of information on the World Wide Web. To explore such interconnections, visualizations are a critical requirement as well as a major part of data processing in themselves. In the context of the Semantic Web, the interrelations of information can be represented using graphs. The primary aim of the article is to describe the arrangement of Linked Data visualization concepts by establishing their principles in a theoretical approach; putting design restrictions into context then leads to practical guidelines. By describing the creation of two alternative visualizations for a commonly used web application that represents Linked Data as a network visualization, their compatibility was tested. The application-oriented part treats the design phase, its results, and the future requirements of the project that can be derived from this test.
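A graph of typed links, the representation the article builds its visualizations on, can be emitted in a few lines as a Graphviz DOT description. The resources and predicates below are invented examples:

```python
# Sketch: serialize typed links between Linked Data resources as a
# Graphviz DOT graph, the kind of network visualization discussed above.

LINKS = [
    ("Berlin", "capitalOf", "Germany"),
    ("Germany", "memberOf", "EU"),
]

def to_dot(links):
    """Render (subject, predicate, object) links as a labeled digraph."""
    lines = ["digraph linkeddata {"]
    for s, p, o in links:
        lines.append(f'  "{s}" -> "{o}" [label="{p}"];')
    lines.append("}")
    return "\n".join(lines)

print(to_dot(LINKS))
```

Rendering the output with any DOT tool draws each resource as a node and each typed link as a labeled edge, which is exactly the information a typed-link visualization must preserve.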



Author(s):  
Dhavalkumar Thakker ◽  
Vania Dimitrova ◽  
Lydia Lau ◽  
Fan Yang-Turner ◽  
Dimoklis Despotakis

