REDI: Towards knowledge graph-powered scholarly information management and research networking

2020 ◽  
pp. 016555152094435
Author(s):  
José Ortiz Vivar ◽  
José Segarra ◽  
Boris Villazón-Terrazas ◽  
Víctor Saquicela

Academic data management has become an increasingly challenging task as research evolves over time. Essential tasks such as information retrieval and research networking have turned into extremely difficult operations due to an ever-growing number of researchers and scientific articles. Numerous initiatives, especially ones focused on web technologies, have emerged in IT environments to address this issue. Although these approaches have individually solved diverse problems, they still offer neither integrated knowledge bases nor the flexibility to exploit this information adequately. In this article, we present REDI, a Linked Data-powered framework for academic knowledge management and research networking that introduces a new perspective on integration. REDI combines information from multiple sources into a consolidated knowledge base through state-of-the-art procedures and leverages semantic web standards to represent the information. Moreover, REDI takes advantage of such knowledge for data visualisation and analysis, which ultimately improves and simplifies many activities, including research networking.
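
As an illustration of the kind of consolidated representation described above, the following minimal Python sketch builds a tiny author-publication graph with rdflib using FOAF and Dublin Core terms. The namespace and vocabulary choices are assumptions for illustration, not REDI's actual schema.

```python
# A minimal sketch (not REDI's actual schema) of one consolidated
# author-publication record, expressed as RDF with rdflib.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DCTERMS, FOAF, RDF

EX = Namespace("http://example.org/redi/")  # hypothetical namespace

g = Graph()
g.bind("foaf", FOAF)
g.bind("dcterms", DCTERMS)

author = EX["author/jose-ortiz-vivar"]
paper = EX["publication/redi-2020"]

g.add((author, RDF.type, FOAF.Person))
g.add((author, FOAF.name, Literal("José Ortiz Vivar")))
g.add((paper, RDF.type, DCTERMS.BibliographicResource))
g.add((paper, DCTERMS.title, Literal("REDI: Towards knowledge graph-powered "
                                     "scholarly information management")))
g.add((paper, DCTERMS.creator, author))

print(g.serialize(format="turtle"))
```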

Author(s):  
Laura Pandolfo ◽  
Luca Pulina

Using semantic web technologies is becoming an efficient way to overcome metadata storage and data integration problems in digital archives, thus enhancing the accuracy of the search process and leading to the retrieval of more relevant results. In this paper, the results of the implementation of the semantic layer of the Józef Piłsudski Institute of America digital archive are presented. In order to represent and integrate data about the archival collections housed by the institute, the authors developed arkivo, an ontology that accommodates the archival description of records but also provides a reference schema for publishing linked data. The authors describe the application of arkivo to the digitized archival collections of the institute, with emphasis on how these resources have been linked to external datasets in the linked data cloud. They also show the results of an experiment focused on the query answering task involving a state-of-the-art triple store system. The dataset related to the Piłsudski Institute archival collections has been made available for ontology benchmarking purposes.
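
As a sketch of the query-answering setting mentioned above, the following Python snippet retrieves archival records and their external links from a SPARQL endpoint with SPARQLWrapper. The endpoint URL and the exact properties are assumptions, not arkivo's published vocabulary.

```python
# Hedged sketch: fetching archival records and any owl:sameAs links from a
# triple store over SPARQL. Endpoint and properties are hypothetical.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("http://localhost:3030/pilsudski/sparql")  # assumed
endpoint.setQuery("""
    PREFIX dcterms: <http://purl.org/dc/terms/>
    PREFIX owl: <http://www.w3.org/2002/07/owl#>
    SELECT ?record ?title ?external WHERE {
        ?record dcterms:title ?title .
        OPTIONAL { ?record owl:sameAs ?external }
    } LIMIT 10
""")
endpoint.setReturnFormat(JSON)

for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["record"]["value"], row["title"]["value"])
```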


2018 ◽  
Vol 5 (2) ◽  
pp. 207-211
Author(s):  
Nazila Zarghi ◽  
Soheil Dastmalchian Khorasani

Abstract Evidence-based social science is one of the state-of-the-art areas in this field. It means making decisions on the basis of conscientious, explicit, and judicious use of the best available evidence from multiple sources. It can also be conducive to evidence-based social work, i.e. a kind of evidence-based practice to some extent. In this newly emerging field, research findings help social workers at different levels of the social sciences, such as policy making, management, academia, education, and social settings. When using research in a real setting, it is necessary to perform critical appraisal, not only to trust the internal validity or methodological rigour of a paper, but also to know to what extent its findings can be applied in that setting. Undoubtedly, the latter is a kind of subjective judgement. As social science findings are highly context-bound, this area deserves particular attention. The present paper first introduces evidence-based social science and its importance, and then proposes criteria for the critical appraisal of research findings for application in society.


2021 ◽  
Vol 2 (2) ◽  
pp. 311-338
Author(s):  
Giulia Della Rosa ◽  
Clarissa Ruggeri ◽  
Alessandra Aloisi

Exosomes (EXOs) are nano-sized informative shuttles acting as endogenous mediators of cell-to-cell communication. Their innate ability to target specific cells and deliver functional cargo has recently been claimed as a promising theranostic strategy. The glycan profile, actively involved in EXO biogenesis, release, sorting, and function, is highly cell type-specific and frequently altered in pathological conditions. Therefore, the modulation of EXO glyco-composition has recently been considered an attractive tool in the design of novel therapeutics. In addition to the available approaches involving conventional glyco-engineering, soft technology is becoming more and more attractive for better exploiting EXO glycan tasks and optimizing EXO delivery platforms. This review first explores the main functions of EXO glycans and relates the reported new findings to potential nanomedicine applications. The state of the art of the last decade concerning the role of natural polysaccharides, as targeting molecules and as matrices for manufacturing 3D soft structures, is then analysed and highlighted as an advancing EXO biofunction toolkit. The promising results, integrating the biopolymer area into the EXO-based bio-nanofabrication and bio-nanotechnology field, lay the foundation for further investigation and offer a new perspective on progress in drug delivery and personalized medicine.


Semantic Web ◽  
2021 ◽  
pp. 1-16
Author(s):  
Esko Ikkala ◽  
Eero Hyvönen ◽  
Heikki Rantala ◽  
Mikko Koho

This paper presents a new software framework, Sampo-UI, for developing user interfaces for semantic portals. The goal is to provide the end-user with multiple application perspectives on Linked Data knowledge graphs, and a two-step usage cycle based on faceted search combined with ready-to-use tooling for data analysis. For the software developer, the Sampo-UI framework makes it possible to create highly customizable, user-friendly, and responsive user interfaces using current state-of-the-art JavaScript libraries and data from SPARQL endpoints, while saving substantial coding effort. Sampo-UI is published on GitHub under the open MIT License and has been utilized in several internal and external projects. The framework has so far been used to create six published and five forthcoming portals, mostly related to the Cultural Heritage domain, that have had tens of thousands of end-users on the Web.
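
Sampo-UI itself is a JavaScript framework; purely to illustrate the faceted-search pattern it builds on, the sketch below (in Python, for consistency with the other examples in this section) runs a SPARQL aggregation that counts hits per facet value. The endpoint URL and property are hypothetical.

```python
# Hedged sketch of the query pattern behind a facet: count matching items
# per facet value at a SPARQL endpoint. Endpoint and property are assumed.
import requests

FACET_QUERY = """
PREFIX dcterms: <http://purl.org/dc/terms/>
SELECT ?type (COUNT(?item) AS ?hits) WHERE {
  ?item dcterms:type ?type .
}
GROUP BY ?type
ORDER BY DESC(?hits)
"""

resp = requests.get(
    "https://example.org/sparql",  # hypothetical endpoint
    params={"query": FACET_QUERY},
    headers={"Accept": "application/sparql-results+json"},
)
for binding in resp.json()["results"]["bindings"]:
    print(binding["type"]["value"], binding["hits"]["value"])
```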


AI Magazine ◽  
2015 ◽  
Vol 36 (1) ◽  
pp. 75-86 ◽  
Author(s):  
Jennifer Sleeman ◽  
Tim Finin ◽  
Anupam Joshi

We describe an approach for identifying fine-grained entity types in heterogeneous data graphs that is effective for unstructured data or when the underlying ontologies or semantic schemas are unknown. Identifying fine-grained entity types, rather than a few high-level types, supports coreference resolution in heterogeneous graphs by reducing the number of possible coreference relations that must be considered. Big data problems that involve integrating data from multiple sources can benefit from our approach when the data's ontologies are unknown, inaccessible, or semantically trivial. For such cases, we use supervised machine learning to map entity attributes and relations to a known set of attributes and relations from appropriate background knowledge bases to predict instance entity types. We evaluated this approach in experiments on data from DBpedia, Freebase, and Arnetminer using DBpedia as the background knowledge base.
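
A minimal sketch of the supervised step described above might look as follows: an entity's type is predicted from the bag of properties it uses. The toy training data stands in for the paper's DBpedia/Freebase training sets and is purely illustrative.

```python
# Hedged sketch: predict an entity's type from its set of properties, with a
# bag-of-properties representation. Toy data, not the paper's training sets.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Each "document" is the whitespace-joined list of one entity's properties.
train_props = [
    "dbo:birthDate dbo:team dbo:position",
    "dbo:birthDate dbo:almaMater dbo:advisor",
    "dbo:foundingYear dbo:industry dbo:keyPerson",
]
train_types = ["Athlete", "Scientist", "Company"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_props, train_types)

print(model.predict(["dbo:team dbo:position dbo:height"]))  # -> ['Athlete']
```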


2020 ◽  
Author(s):  
Ali Fallah ◽  
Sungmin O ◽  
Rene Orth

Abstract. Precipitation is a crucial variable for hydro-meteorological applications. Unfortunately, rain gauge measurements are sparse and unevenly distributed, which substantially hampers the use of in-situ precipitation data in many regions of the world. The increasing availability of high-resolution gridded precipitation products presents a valuable alternative, especially over gauge-sparse regions. Nevertheless, uncertainties and corresponding differences across products can limit the applicability of these data. This study examines the usefulness of current state-of-the-art precipitation datasets in hydrological modelling. For this purpose, we force a conceptual hydrological model with multiple precipitation datasets in > 200 European catchments. We consider a wide range of precipitation products, which are generated via (1) interpolation of gauge measurements (E-OBS and GPCC V.2018), (2) combination of multiple sources (MSWEP V2) and (3) data assimilation into reanalysis models (ERA-Interim, ERA5, and CFSR). For each catchment, runoff and evapotranspiration simulations are obtained by forcing the model with the various precipitation products. Evaluation is done at the monthly time scale during the period 1984–2007. We find that simulated runoff values are highly dependent on the accuracy of precipitation inputs and consequently show significant differences between the simulations. By contrast, simulated evapotranspiration is generally much less influenced. The results are further analysed with respect to different hydro-climatic regimes. We find that the impact of precipitation uncertainty on simulated runoff increases towards wetter regions, while the opposite is observed in the case of evapotranspiration. Finally, we perform an indirect performance evaluation of the precipitation datasets by comparing the runoff simulations with streamflow observations. Here, E-OBS yields the best agreement, while ERA5, GPCC V.2018, and MSWEP V2 also show good performance. In summary, our findings highlight a climate-dependent propagation of precipitation uncertainty through the water cycle: while runoff is strongly impacted in comparatively wet regions such as Central Europe, the implications for evapotranspiration grow towards drier regions.
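
The abstract does not state which skill metric underlies the indirect evaluation, so the sketch below uses the Nash-Sutcliffe efficiency (NSE), a common choice for judging simulated runoff against observed streamflow; the series are toy stand-ins, not the study's data.

```python
# Illustrative only: NSE compares a simulated series against observations.
# Values near 1 are good; below 0 the simulation is worse than the obs mean.
import numpy as np

def nse(simulated: np.ndarray, observed: np.ndarray) -> float:
    """Nash-Sutcliffe efficiency of a simulated series vs. observations."""
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2
    )

# Toy monthly series standing in for one catchment (units: mm/month).
obs = np.array([52.0, 48.0, 61.0, 40.0, 35.0, 44.0])
sim_by_product = {
    "E-OBS": np.array([50.0, 47.0, 63.0, 41.0, 33.0, 45.0]),
    "ERA5": np.array([55.0, 44.0, 58.0, 45.0, 38.0, 41.0]),
}

for product, sim in sim_by_product.items():
    print(f"{product}: NSE = {nse(sim, obs):.2f}")
```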


2018 ◽  
Vol 2 ◽  
pp. e25614 ◽  
Author(s):  
Florian Pellen ◽  
Sylvain Bouquin ◽  
Isabelle Mougenot ◽  
Régine Vignes-Lebbe

Xper3 (Vignes Lebbe et al. 2016) is a collaborative knowledge base publishing platform that, since its launch in November 2013, has been adopted by over 2,000 users (Pinel et al. 2017). This is mainly due to its user-friendly interface and the simplicity of its data model. The data are stored in MySQL relational databases, but the exchange format uses the TDWG standard format SDD (Structured Descriptive Data; Hagedorn et al. 2005). However, each Xper3 knowledge base is a closed world that the author(s) may or may not share with the scientific community or the public by publishing its content and/or identification keys (Kopfstein 2016). The explicit taxonomic, geographic, and phenotypic limits of a knowledge base are not always well defined in the metadata fields. Conversely, terminology vocabularies, such as the Phenotype and Trait Ontology (PATO) and the Plant Ontology (PO), and the software to edit them, such as Protégé and Phenoscape, are essential in the semantic web but difficult to handle for biologists without computer skills. These ontologies constitute open worlds and are themselves expressed as RDF (Resource Description Framework) triples. Protégé offers visualisation and reasoning capabilities for these ontologies (Gennari et al. 2003, Musen 2015). Our challenge is to combine the user-friendliness of Xper3 with the expressive power of OWL (Web Ontology Language), the W3C standard for building ontologies. We therefore focused on analysing the representation of the same taxonomic content under Xper3 and under different models in OWL. After this critical analysis, we chose a description model that allows automatic export of SDD to OWL and can be easily enriched. We will present the results obtained and their validation on two knowledge bases, one on parasitic crustaceans (Sacculina) and the second on current and fossil ferns (Corvez and Grand 2014). The evolution of the Xper3 platform and the perspectives offered by this link with semantic web standards will be discussed.
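
One possible descriptor-to-OWL mapping, not necessarily the model the authors chose, is to turn a taxon into an OWL class constrained by an existential restriction on its descriptor states, as in the hedged rdflib sketch below.

```python
# Hedged sketch of one SDD-to-OWL mapping idea: a taxon as an OWL class with
# a someValuesFrom restriction. Namespace and hasState property are assumed.
from rdflib import BNode, Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/xper3/")  # hypothetical namespace
g = Graph()
g.bind("owl", OWL)

taxon = EX.Sacculina
restriction = BNode()

g.add((taxon, RDF.type, OWL.Class))
g.add((restriction, RDF.type, OWL.Restriction))
g.add((restriction, OWL.onProperty, EX.hasState))
g.add((restriction, OWL.someValuesFrom, EX.ParasiticLifestyle))
g.add((taxon, RDFS.subClassOf, restriction))

print(g.serialize(format="turtle"))
```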


Semantic Web ◽  
2021 ◽  
pp. 1-27
Author(s):  
Ahmet Soylu ◽  
Oscar Corcho ◽  
Brian Elvesæter ◽  
Carlos Badenes-Olmedo ◽  
Tom Blount ◽  
...  

Public procurement is a large market affecting almost every organisation and individual; therefore, governments need to ensure its efficiency, transparency, and accountability, while creating healthy, competitive, and vibrant economies. In this context, open data initiatives and the integration of data from multiple sources across national borders could transform the procurement market, for example by lowering barriers to entry for smaller suppliers and encouraging healthier competition, in particular by enabling cross-border bids. Increasingly more open data is published in the public sector; however, these datasets are created and maintained in silos and are not straightforward to reuse or maintain because of technical heterogeneity, lack of quality, insufficient metadata, or missing links to related domains. To this end, we developed an open linked data platform, called TheyBuyForYou, consisting of a set of modular APIs and ontologies to publish, curate, integrate, analyse, and visualise an EU-wide, cross-border, and cross-lingual procurement knowledge graph. We developed advanced tools and services on top of the knowledge graph for anomaly detection, cross-lingual document search, and data storytelling. This article describes the TheyBuyForYou platform and knowledge graph, reports their adoption by different stakeholders, recounts the challenges and experiences we went through while creating them, and demonstrates the usefulness of Semantic Web and Linked Data technologies for enhancing public procurement.
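
The abstract does not detail the anomaly-detection service, so the following sketch illustrates just one plausible idea at that layer: flagging contract values that deviate strongly from a buyer's history via a z-score. Method, threshold, and figures are assumptions.

```python
# Hedged sketch of a z-score anomaly flag over a buyer's contract values.
# Not TheyBuyForYou's actual method; threshold and data are illustrative.
import statistics

def flag_anomalies(values: list[float], threshold: float = 2.0) -> list[float]:
    """Return contract values more than `threshold` std devs from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) > threshold * stdev]

contract_values = [12_000, 11_500, 13_200, 12_800, 95_000, 12_400]
print(flag_anomalies(contract_values))  # -> [95000]
```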


Author(s):  
Heiko Paulheim ◽  
Christian Bizer

Linked Data on the Web is either created from structured data sources (such as relational databases), from semi-structured sources (such as Wikipedia), or from unstructured sources (such as text). In the latter two cases, the generated Linked Data will likely be noisy and incomplete. In this paper, we present two algorithms that exploit statistical distributions of properties and types to enhance the quality of incomplete and noisy Linked Data sets: SDType adds missing type statements, and SDValidate identifies faulty statements. Neither algorithm uses external knowledge, i.e., they operate only on the data itself. We evaluate the algorithms on the DBpedia and NELL knowledge bases, showing that they are both accurate and scalable. Both algorithms were used in building the DBpedia 3.9 release: with SDType, 3.4 million missing type statements were added, while with SDValidate, 13,000 erroneous RDF statements were removed from the knowledge base.
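
The core SDType idea can be sketched in a few lines: each property "votes" for the types of the entities it occurs with, according to the type distribution observed among that property's known subjects. The distributions below are toy numbers rather than DBpedia statistics, and the published algorithm additionally weights properties by how discriminative they are; this sketch averages them uniformly.

```python
# Hedged sketch of SDType-style type inference via per-property type
# distributions. Toy probabilities, uniform property weights.
from collections import defaultdict

# P(type | entity is the subject of property), estimated from typed entities.
type_dist = {
    "dbo:team":      {"dbo:Athlete": 0.9, "dbo:Person": 0.1},
    "dbo:birthDate": {"dbo:Person": 0.6, "dbo:Athlete": 0.3, "dbo:Artist": 0.1},
}

def sdtype_scores(properties: list[str]) -> dict[str, float]:
    """Average the per-property type distributions into one score per type."""
    scores: dict[str, float] = defaultdict(float)
    for prop in properties:
        for entity_type, p in type_dist.get(prop, {}).items():
            scores[entity_type] += p / len(properties)
    return dict(scores)

# An untyped entity that has dbo:team and dbo:birthDate statements:
print(sdtype_scores(["dbo:team", "dbo:birthDate"]))
# -> {'dbo:Athlete': 0.6, 'dbo:Person': 0.35, 'dbo:Artist': 0.05}
```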


2021 ◽  
Vol 81 (3-4) ◽  
pp. 318-358
Author(s):  
Sander Stolk

Abstract This article provides an introduction to the web application Evoke. This application offers functionality to navigate, view, extend, and analyse thesaurus content. The thesauri that can be navigated in Evoke are expressed in Linguistic Linked Data, an interoperable data form that enables the extension of thesaurus content with custom labels and allows thesaurus content to be linked to other digital resources. As such, Evoke is a powerful research tool that enables its users to perform novel cultural-linguistic analyses over multiple sources. This article further demonstrates the potential of Evoke by discussing how A Thesaurus of Old English was made available in the application and how the application has already been adopted in the field of Old English studies. Lastly, the author situates Evoke within a number of recent developments in the field of Digital Humanities and discusses its applications for onomasiological research.
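
The "custom label" extension mechanism can be pictured with a small SKOS sketch: an additional label is layered onto an existing concept. The concept URI and labels below are hypothetical and do not come from A Thesaurus of Old English.

```python
# Hedged sketch: attaching a researcher-supplied label to an existing SKOS
# concept with rdflib. Namespace, concept, and labels are assumptions.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import SKOS

TOE = Namespace("http://example.org/thesaurus/")  # assumed namespace

g = Graph()
g.bind("skos", SKOS)

concept = TOE["concept/warrior"]
g.add((concept, SKOS.prefLabel, Literal("warrior", lang="en")))
# A custom label in Old English, layered onto the same concept:
g.add((concept, SKOS.altLabel, Literal("beorn", lang="ang")))

print(g.serialize(format="turtle"))
```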

