Exploring Methods for Linked Data Model Evaluation in Practice

2020 ◽  
Vol 20 (1) ◽  
pp. 65-89
Author(s):  
Robin Elizabeth Desmeules ◽  
Clara Turp ◽  
Andrew Senior
2018 ◽  
Vol 7 (2.21) ◽  
pp. 339 ◽  
Author(s):  
K Ulaga Priya ◽  
S Pushpa ◽  
K Kalaivani ◽  
A Sartiha

In the banking industry, loan processing is a tedious task, particularly identifying which customers will default. Manual prediction of defaulting customers risks a loan turning bad in the future. Banks possess huge volumes of behavioural data, yet they are unable to use it to make a judgement about which borrowers will default. Modern techniques such as machine learning help perform this analytical processing using supervised and unsupervised learning. A data model for predicting defaulting customers using the Random Forest technique is proposed. The model is evaluated on a training set and, based on the resulting performance parameters, the final prediction is made on the test set. This is evidence that the Random Forest technique can help a bank predict loan defaulters with high accuracy.
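The abstract's workflow (train an ensemble on labelled behavioural data, evaluate on the training set, then predict on unseen customers) can be sketched with a toy, stdlib-only random forest. The features, their values, and the thresholds are entirely hypothetical; a real system would use a library such as scikit-learn on the bank's actual behavioural data.

```python
import random
from collections import Counter

# Hypothetical behavioural features: [late_payments, credit_utilisation_pct]
# Label: 1 = defaulted, 0 = repaid. All values are illustrative.
DATA = [
    ([0, 20], 0), ([1, 35], 0), ([0, 10], 0), ([2, 40], 0),
    ([5, 90], 1), ([6, 85], 1), ([4, 95], 1), ([7, 80], 1),
]

def train_stump(sample):
    """Pick the single-feature threshold split with the fewest errors."""
    best = None
    for f in range(2):
        for t in sorted({x[f] for x, _ in sample}):
            errs = sum(int(x[f] > t) != y for x, y in sample)
            if best is None or errs < best[0]:
                best = (errs, f, t)
    _, f, t = best
    return lambda x: int(x[f] > t)

def train_forest(data, n_trees=25, seed=0):
    """Random forest in miniature: bagged samples, one stump per tree."""
    rng = random.Random(seed)
    return [train_stump([rng.choice(data) for _ in data])
            for _ in range(n_trees)]

def predict(forest, x):
    """Majority vote across the trees."""
    return Counter(tree(x) for tree in forest).most_common(1)[0][0]

forest = train_forest(DATA)
# Evaluation on the training set, as described in the abstract:
train_acc = sum(predict(forest, x) == y for x, y in DATA) / len(DATA)
print(train_acc)
# Final prediction on an unseen "test" customer:
print(predict(forest, [6, 88]))  # many late payments, high utilisation
```

Real deployments would of course hold out a proper test set and use full decision trees with feature subsampling; the sketch only shows the bagging-plus-voting structure that makes random forests robust.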


1976 ◽  
Vol 1 (4) ◽  
pp. 370-387 ◽  
Author(s):  
William C. McGee
2016 ◽  
Vol 59 (2) ◽  
pp. 42-55 ◽  
Author(s):  
ADAM RABINOWITZ ◽  
RYAN SHAW ◽  
SARAH BUCHANAN ◽  
PATRICK GOLDEN ◽  
ERIC KANSA

Abstract The PeriodO project seeks to fill a gap in the landscape of digital antiquity through the creation of a Linked Data gazetteer of period definitions that transparently record the spatial and temporal boundaries assigned to a given period by an authoritative source. Our presentation of the PeriodO gazetteer is prefaced by a history of the role of periodization in the study of the past, and an analysis of the difficulties created by the use of periods for both digital data visualization and integration. This is followed by an overview of the PeriodO data model, a description of the platform's architecture, and a discussion of the future direction of the project.
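The core of the data model described above is that each period definition transparently records the spatial and temporal boundaries assigned by one authoritative source. A minimal sketch of such an entry, with illustrative field names that are not the actual PeriodO schema:

```python
# One hypothetical period definition: label, spatial coverage, fuzzy
# temporal bounds, and the source that asserted them. Field names and
# values are illustrative only.
period = {
    "label": "Late Bronze Age",
    "spatial_coverage": ["Crete"],
    "start": {"earliest": -1700, "latest": -1600},  # boundaries are ranges
    "stop": {"earliest": -1200, "latest": -1100},
    "source": "An authoritative publication (illustrative)",
}

def within_outer_bounds(p, year):
    """Crude containment test against the outermost temporal bounds."""
    return p["start"]["earliest"] <= year <= p["stop"]["latest"]

print(within_outer_bounds(period, -1400))
```

Keeping the source alongside the boundaries is what lets different, conflicting definitions of "the same" period coexist in the gazetteer.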


2013 ◽  
Vol 64 (2-3) ◽  
Author(s):  
Stefan Gradmann ◽  
Julia Iwanowa ◽  
Evelyn Dröge ◽  
Steffen Hennicke ◽  
Violeta Trkulja ◽  
...  

This article describes ongoing work and results of the Knowledge Management research group. These arose chiefly from the projects Europeana v2.0 and Digitised Manuscripts to Europeana (DM2E), both based at the Chair of Knowledge Management, as well as from subprojects of the recently launched DFG excellence cluster Bild Wissen Gestaltung. The projects deal with specialisations of the Europeana Data Model, the conversion of metadata into RDF, and the automated and user-driven semantic enrichment of these data using purpose-built or modified applications, as well as with the modelling of research activities, currently tailored to the digital humanities. Common to all projects is the conceptual or technical modelling of information entities or user activities, which are ultimately represented in the Linked Data Web.
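The conversion of flat metadata records into RDF mentioned above can be illustrated with a small stdlib sketch that emits N-Triples. The namespace URI and property names are placeholders, not the actual EDM/DM2E vocabulary.

```python
# Turn a flat metadata record into RDF N-Triples.
# The http://example.org/ namespace and the property names are
# illustrative stand-ins for a real vocabulary such as EDM.
EX = "http://example.org/"

record = {
    "id": "ms-001",
    "title": "Ein mittelalterliches Manuskript",
    "creator": "Unknown scribe",
}

def literal(value):
    """Escape and quote a plain RDF literal."""
    return '"%s"' % value.replace('"', '\\"')

def to_ntriples(rec):
    subject = "<%sitem/%s>" % (EX, rec["id"])
    triples = [
        (subject, "<%stitle>" % EX, literal(rec["title"])),
        (subject, "<%screator>" % EX, literal(rec["creator"])),
    ]
    return "\n".join("%s %s %s ." % t for t in triples)

print(to_ntriples(record))
```

In practice a library such as rdflib would handle serialisation, datatypes, and language tags; the sketch only shows the record-to-triples mapping step.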


2018 ◽  
Vol 42 (2) ◽  
pp. 194-205 ◽  
Author(s):  
Pasquale Lisena ◽  
Manel Achichi ◽  
Pierre Choffé ◽  
Cécile Cecconi ◽  
Konstantin Todorov ◽  
...  
Abstract DOREMUS works towards a better description of music by building new tools to link and explore the data of three French institutions. This paper gives an overview of the FRBRoo-based data model, explains the conversion and linking processes built on linked data technologies, and presents the prototypes created to consume the data according to web users' needs.


2015 ◽  
Author(s):  
J. Fernando Sánchez-Rada ◽  
Carlos A. Iglesias ◽  
Ronald Gil

2018 ◽  
Vol 37 (3) ◽  
pp. 29-49
Author(s):  
Kumar Sharma ◽  
Ujjal Marjit ◽  
Utpal Biswas

Resource Description Framework (RDF) is a commonly used data model in the Semantic Web environment. Libraries and various other communities have been using the RDF data model to store valuable data after extracting it from traditional storage systems. However, because of the large volume of the data, processing and storing it is becoming a nightmare for traditional data-management tools. This challenge demands a scalable, distributed system that can manage data in parallel. In this article, a distributed solution is proposed for efficiently processing and storing the large volume of library linked data held in traditional storage systems. Apache Spark is used for parallel processing of large data sets, and a column-oriented schema is proposed for storing RDF data. The storage system is built on top of the Hadoop Distributed File System (HDFS) and uses the Apache Parquet format to store data in compressed form. The experimental evaluation showed that storage requirements were reduced significantly compared to Jena TDB, Sesame, RDF/XML, and N-Triples file formats. SPARQL queries are processed using Spark SQL to query the compressed data. The experimental evaluation showed good query response times, which decrease significantly as the number of worker nodes increases.
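The column-oriented schema idea can be sketched without Spark or Parquet: instead of one wide subject-predicate-object table, keep a subject/object column pair per predicate, which is the layout that columnar formats compress well. This is a stdlib approximation of the partitioning and query shape only, not the paper's actual implementation; the predicate names are illustrative.

```python
from collections import defaultdict

# Example triples (illustrative library data).
triples = [
    ("book1", "title", "Linked Data"),
    ("book1", "creator", "Heath"),
    ("book2", "title", "Semantic Web"),
    ("book2", "creator", "Allemang"),
]

# Partition by predicate: predicate -> parallel subject/object columns,
# mirroring how one Parquet file per predicate would be laid out.
columns = defaultdict(lambda: {"s": [], "o": []})
for s, p, o in triples:
    columns[p]["s"].append(s)
    columns[p]["o"].append(o)

def query_object(predicate, subject):
    """Answer `SELECT ?o WHERE { <subject> <predicate> ?o }` by scanning
    only the one predicate partition, not the whole triple set."""
    col = columns[predicate]
    return [o for s, o in zip(col["s"], col["o"]) if s == subject]

print(query_object("creator", "book2"))
```

A triple pattern with a bound predicate touches a single partition, which is why this layout pairs naturally with Spark SQL pushing a SPARQL query down onto per-predicate Parquet files.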

