Design and implementation of a Web application for Cultural Heritage

2020, Vol 1 (2), pp. 72-85
Author(s): Angelica Lo Duca, Andrea Marchetti

Within the field of Digital Humanities, a great effort has been made to digitize documents and collections in order to build catalogs and exhibitions on the Web. In this paper, we present WeME, a Web application for building a knowledge base that can be used to describe digital documents. WeME can be used by different categories of users: archivists/librarians and scholars. WeME extracts information from some well-known Linked Data nodes, namely DBpedia and GeoNames, as well as from traditional Web sources, namely VIAF. As a use case of WeME, we describe the knowledge base related to Christopher Clavius's correspondence. Clavius was a mathematician and astronomer of the sixteenth century. He wrote more than 300 letters, most of which are owned by the Historical Archives of the Pontifical Gregorian University (APUG) in Rome. The resulting knowledge base contains 139 links to DBpedia, 83 links to GeoNames and 129 links to VIAF. To assess the usability of WeME, we invited 26 users to try the application.
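The abstract contains no code, but the kind of Linked Data lookup WeME performs against DBpedia can be sketched as follows. This is a minimal illustration using the public DBpedia SPARQL endpoint and the SPARQLWrapper library; it is not WeME's actual implementation, and the entity and properties queried are chosen only for the example.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Query the public DBpedia endpoint for basic facts about Christopher Clavius.
sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setReturnFormat(JSON)
sparql.setQuery("""
PREFIX dbr: <http://dbpedia.org/resource/>
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?abstract ?birth WHERE {
  dbr:Christopher_Clavius dbo:abstract ?abstract .
  OPTIONAL { dbr:Christopher_Clavius dbo:birthDate ?birth }
  FILTER (lang(?abstract) = "en")
}
""")

for row in sparql.query().convert()["results"]["bindings"]:
    # Print a snippet of the abstract and the birth date, if bound.
    print(row["abstract"]["value"][:120], row.get("birth", {}).get("value"))
```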

2016, Vol 28 (2), pp. 241-251
Author(s): Luciane Lena Pessanha Monteiro, Mark Douglas de Azevedo Jacyntho

The study addresses the use of the Semantic Web and Linked Data principles proposed by the World Wide Web Consortium for the development of a Web application for the semantic management of scanned documents. The main goal is to record scanned documents, describing them in a way that a machine is able to understand and process, filtering content and assisting users in searching for such documents when a decision-making process is underway. To this end, machine-understandable metadata, created through the use of reference Linked Data ontologies, are associated with the documents, creating a knowledge base. To further enrich the process, a (semi-)automatic mashup of these metadata with data from the Web of Linked Data is carried out, considerably increasing the scope of the knowledge base and making it possible to extract new data related to the content of stored documents from the Web and combine them, without the user making any effort or perceiving the complexity of the whole process.
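As a rough illustration of the kind of machine-understandable metadata the paper describes, the following sketch attaches Dublin Core terms to a scanned document with rdflib. The document URI, property choices and values are invented for the example; the paper's own ontology selection is not reproduced here.

```python
from rdflib import Graph, URIRef, Literal
from rdflib.namespace import DCTERMS, FOAF, RDF

# Hypothetical identifier for one scanned document.
doc = URIRef("http://example.org/documents/contract-2016-001")

g = Graph()
g.add((doc, RDF.type, FOAF.Document))
g.add((doc, DCTERMS.title, Literal("Scanned supply contract")))
g.add((doc, DCTERMS.created, Literal("2016-03-14")))
# Link the document's subject to a Linked Data resource for later mashup.
g.add((doc, DCTERMS.subject, URIRef("http://dbpedia.org/resource/Contract")))

print(g.serialize(format="turtle"))
```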


Author(s): Heiko Paulheim, Christian Bizer

Linked Data on the Web is either created from structured data sources (such as relational databases), from semi-structured sources (such as Wikipedia), or from unstructured sources (such as text). In the latter two cases, the generated Linked Data will likely be noisy and incomplete. In this paper, we present two algorithms that exploit statistical distributions of properties and types for enhancing the quality of incomplete and noisy Linked Data sets: SDType adds missing type statements, and SDValidate identifies faulty statements. Neither of the algorithms uses external knowledge, i.e., they operate only on the data itself. We evaluate the algorithms on the DBpedia and NELL knowledge bases, showing that they are both accurate and scalable. Both algorithms have been used for building the DBpedia 3.9 release: with SDType, 3.4 million missing type statements were added, while with SDValidate, 13,000 erroneous RDF statements were removed from the knowledge base.
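A much simplified sketch of the idea behind SDType follows: missing types are inferred as a vote over the type distributions observed for the properties a resource participates in. The published algorithm also uses object positions and weights each property by how discriminative its distribution is; the toy data and threshold below are invented for illustration only.

```python
from collections import Counter, defaultdict

# Toy triples (subject, property, object) and partially known types.
triples = [
    ("Berlin", "locatedIn", "Germany"),
    ("Paris", "locatedIn", "France"),
    ("Hamburg", "locatedIn", "Germany"),   # Hamburg has no type statement yet
    ("Rhine", "flowsThrough", "Germany"),
]
known_types = {
    "Berlin": {"City"}, "Paris": {"City"}, "Rhine": {"River"},
    "Germany": {"Country"}, "France": {"Country"},
}

# Learn, for each property, the type distribution of its subjects.
subject_type_counts = defaultdict(Counter)
property_counts = Counter()
for s, p, _ in triples:
    property_counts[p] += 1
    for t in known_types.get(s, ()):
        subject_type_counts[p][t] += 1

def predict_types(resource, threshold=0.4):
    """Average P(type | property) over the properties `resource` is subject of."""
    votes, used = Counter(), 0
    for s, p, _ in triples:
        if s != resource:
            continue
        used += 1
        for t, n in subject_type_counts[p].items():
            votes[t] += n / property_counts[p]
    return {t: v / used for t, v in votes.items() if used and v / used >= threshold}

print(predict_types("Hamburg"))  # {'City': 0.666...}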


2021, Vol 81 (3-4), pp. 318-358
Author(s): Sander Stolk

Abstract This article provides an introduction to the web application Evoke. This application offers functionality to navigate, view, extend, and analyse thesaurus content. The thesauri that can be navigated in Evoke are expressed in Linguistic Linked Data, an interoperable data form that enables the extension of thesaurus content with custom labels and allows thesaurus content to be linked to other digital resources. As such, Evoke is a powerful research tool that enables its users to perform novel cultural-linguistic analyses across multiple sources. This article further demonstrates the potential of Evoke by discussing how A Thesaurus of Old English was made available in the application and how this has already been adopted in the field of Old English studies. Lastly, the author situates Evoke within a number of recent developments in the field of Digital Humanities and discusses its applications for onomasiological research.
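To make concrete what navigating Linked Data thesaurus content can look like, here is a minimal sketch over an invented SKOS-style hierarchy using rdflib. Evoke's actual data model and the identifiers used by A Thesaurus of Old English are not shown; this only illustrates the kind of broader/narrower navigation such an application exposes.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

# Invented namespace and concepts; not the real thesaurus identifiers.
EX = Namespace("http://example.org/thesaurus/")

g = Graph()
g.add((EX.War, RDF.type, SKOS.Concept))
g.add((EX.War, SKOS.prefLabel, Literal("War", lang="en")))
g.add((EX.Weapon, RDF.type, SKOS.Concept))
g.add((EX.Weapon, SKOS.prefLabel, Literal("Weapon", lang="en")))
g.add((EX.Weapon, SKOS.broader, EX.War))

def narrower(concept):
    """Return the concepts directly below `concept` in the hierarchy."""
    return list(g.subjects(SKOS.broader, concept))

for child in narrower(EX.War):
    print(g.value(child, SKOS.prefLabel))
```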


Author(s): Roberto Paiano, Anna Lisa Guido, Andrea Pandurino

As will become clearer below, two different technologies are used to generate the code: the first focuses mainly on generating code for Web applications that have no underlying business process and therefore do not require the management of the related issues, while the second was selected to take business processes into account as well. To support the designer in designing the whole complex Web information system, it is essential to provide a suitable tool that hides the intrinsic complexity of the methodology, assists the designer in applying it (a task that is often complex), and translates the resulting design into a machine-readable format so that it can drive the subsequent automatic code generation of the Web application according to a model-driven approach. In this chapter, we introduce the design and implementation of the editor, built on the architecture presented earlier (based on the Eclipse™ Platform, as illustrated in the preceding chapter) and on the methodological steps for integrating the several editors used in the design, together with guidelines for their implementation.


Author(s): Ozzi Suria

The students consider learning Javanese script to be difficult, particularly distinguishing and memorizing Carakan and memorizing Sandangan and Pasangan with their writing rules. This work aims to develop a supporting medium for learning Javanese script. The development process starts by defining the game functionalities with use-case diagrams; an activity diagram is then created to describe the workflow of the game algorithm. The database supporting the game is also created and presented as a physical data model. Afterward, the game algorithm is implemented in JavaScript so that the game can be played in a web browser. Twenty-seven respondents were asked to test the game and to fill in questionnaires about the web application. The results suggest that 100% of respondents agree that the web application is necessary and useful for learning Javanese script. The application provides positive benefits to users, such as students who still need to learn Javanese script in school, with a 97% average success rate in running the game.


Elections in Nigeria have been plagued by vote buying, ballot-box snatching, a weak and only nominally independent (in practice dependent on the Executive) electoral management body, corruption and legitimacy crises, as well as other forms of electoral malpractice, violence and irregularities. To curb the incidence of fraud, malpractice and the flagrant absence of transparency, this study introduces the design and implementation of an online voting platform (OVP) for the Independent National Electoral Commission (INEC). The application, implemented in Python, a powerful web programming language, offers an impartial, electronic and easily managed way of conducting gubernatorial elections (in one of the states) in Nigeria. The database was created using MySQL. The analysis and design of the web application involved Unified Modeling Language diagrams (use-case, class). The web application promises to eliminate several weaknesses of the existing system, such as slow vote counting, the need for physical polling locations, inconsistencies and errors resulting from manual tasks, the costliness of the election and, most especially, delays and time wastage.
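The abstract names Python and MySQL but gives no schema or code; the following minimal sketch shows how the core vote-casting and tallying logic of such a platform might look. The table layout, voter identifiers and the use of sqlite3 (to keep the example self-contained) are assumptions for illustration, not the authors' implementation.

```python
import sqlite3

# Illustrative schema; the actual OVP schema is not published in the abstract.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE voters (voter_id TEXT PRIMARY KEY, has_voted INTEGER DEFAULT 0);
CREATE TABLE votes  (candidate TEXT NOT NULL);
""")
conn.execute("INSERT INTO voters VALUES ('NIN-0001', 0)")

def cast_vote(voter_id, candidate):
    """Reject double voting, then record the ballot and mark the voter."""
    row = conn.execute("SELECT has_voted FROM voters WHERE voter_id = ?",
                       (voter_id,)).fetchone()
    if row is None or row[0]:
        raise ValueError("unknown voter or vote already cast")
    conn.execute("INSERT INTO votes (candidate) VALUES (?)", (candidate,))
    conn.execute("UPDATE voters SET has_voted = 1 WHERE voter_id = ?", (voter_id,))
    conn.commit()

cast_vote("NIN-0001", "Candidate A")
print(conn.execute("SELECT candidate, COUNT(*) FROM votes GROUP BY candidate").fetchall())
```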


2021, Vol 81 (3-4), pp. 359-383
Author(s): Thijs Porck

Abstract This article discusses proof-of-concept research into the structure of the vocabularies of three Old English texts, Beowulf, Andreas and the Old English Martyrology. With the help of the Web application Evoke, which makes A Thesaurus of Old English (TOE) available in Linguistic Linked Data form, the words that occur in these three texts have been tagged within the existing onomasiological structure of TOE. This tagging process has resulted in prototypes of ‘textual thesauri’ for each of the three texts; such thesauri allow researchers to analyse the ‘onomasiological profile’ of a text, using the statistical tools that are built into Evoke. Since the same overarching structure has been used for all three texts, these texts can now be compared on an onomasiological level. As the article demonstrates, this comparative approach gives rise to novel research questions, as new and distinctive patterns of vocabulary use come to the surface. The semantic fields discussed include “War” and “Animals”.
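A toy sketch of what computing an "onomasiological profile" amounts to: counting how a text's tagged lemmas distribute over semantic categories. The lemma-to-category mapping below is invented for illustration and is not taken from A Thesaurus of Old English or from Evoke.

```python
from collections import Counter

# Toy lemma-to-category mapping, invented for illustration (not from TOE).
categories = {
    "sweord": "War", "gar": "War", "beadu": "War",
    "hwæl": "Animals", "wulf": "Animals", "hrefn": "Animals",
}

def onomasiological_profile(lemmas):
    """Relative share of each semantic category among the tagged lemmas."""
    hits = Counter(categories[l] for l in lemmas if l in categories)
    total = sum(hits.values()) or 1
    return {category: count / total for category, count in hits.items()}

sample = ["sweord", "gar", "wulf", "hrefn", "sweord"]
print(onomasiological_profile(sample))  # {'War': 0.6, 'Animals': 0.4}
```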


2021, Vol 4
Author(s): Taras Günther, Matthias Filter, Fernanda Dórea

In times of emerging diseases, data sharing and data integration are of particular relevance for One Health Surveillance (OHS) and decision support. Furthermore, there is an increasing demand to provide governmental data in compliance with the FAIR (Findable, Accessible, Interoperable, Reusable) data principles. Semantic web technologies are key facilitators of data interoperability, as they allow data to be explicitly annotated with their meaning, enabling reuse without loss of the data collection context. Among these, we highlight ontologies as a tool for modeling knowledge in a field, which simplify the interpretation and mapping of datasets in a computer-readable form, and the Resource Description Framework (RDF), which allows data to be shared among human and computer agents following this knowledge model. Despite their potential for enabling cross-sectoral interoperability and data linkage, the use and application of these technologies is often hindered by their complexity and the lack of easy-to-use software applications. To overcome these challenges, the OHEJP Project ORION developed the Health Surveillance Ontology (HSO). This knowledge model forms a foundation for semantic interoperability in the domain of One Health Surveillance. It provides a solution for adding data from the target sectors (public health, animal health and food safety) in compliance with the FAIR principles, supporting interdisciplinary data exchange and usage. To provide use cases and facilitate access to HSO, we developed the One Health Linked Data Toolbox (OHLDT), which consists of three new, custom-developed web applications with specific functionalities. The first web application allows users to convert surveillance data available in Excel files into HSO-RDF online, and vice versa; it demonstrates that data provided in well-established data formats can be automatically translated into the linked data format HSO-RDF. The second application demonstrates the use of HSO-RDF in an HSO triplestore database. In its user interface, the user can select HSO concepts by which to search and filter surveillance datasets stored in the triplestore, and the service then provides automatically generated dashboards based on the context of the data. The third web application demonstrates data interoperability in the OHS context by using HSO-RDF to annotate metadata and thereby link datasets across sectors; it provides a dashboard to compare public data on zoonosis surveillance provided by EFSA and ECDC. The first solution enables linked data production, while the second and third provide examples of linked data consumption and of its value in enabling data interoperability across sectors. All described solutions are based on the open-source software KNIME and are deployed as web services via a KNIME Server hosted at the German Federal Institute for Risk Assessment. The semantic web extension of KNIME, which is based on the Apache Jena Framework, allowed rapid and easy development within the project. The underlying open-source KNIME workflows are freely available and can easily be customized by interested end users. With our applications, we demonstrate that the use of linked data has great potential for strengthening the use of FAIR data in OHS and interdisciplinary data exchange.
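As a hedged sketch of the first application's Excel-to-RDF step, the snippet below converts one tabular surveillance record into RDF with pandas and rdflib. The namespace and property names stand in for the real Health Surveillance Ontology terms, which are not reproduced here, and the KNIME-based implementation itself is not shown.

```python
import pandas as pd
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

# Placeholder namespace; the real HSO terms differ.
HSO = Namespace("http://example.org/hso#")

# One toy surveillance record, standing in for a row read from an Excel file.
df = pd.DataFrame([
    {"region": "DE-BE", "pathogen": "Salmonella", "positives": 12, "date": "2021-05-01"},
])

g = Graph()
for i, row in df.iterrows():
    obs = URIRef(f"http://example.org/observation/{i}")
    g.add((obs, RDF.type, HSO.SurveillanceObservation))
    g.add((obs, HSO.region, Literal(row["region"])))
    g.add((obs, HSO.pathogen, Literal(row["pathogen"])))
    g.add((obs, HSO.positiveSamples, Literal(int(row["positives"]), datatype=XSD.integer)))
    g.add((obs, HSO.reportingDate, Literal(row["date"], datatype=XSD.date)))

print(g.serialize(format="turtle"))
```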


Author(s): Franck Michel, Catherine Faron-Zucker, Sandrine Tercerie, Antonia Ettorre, Gargominy Olivier

During the last decade, Web APIs (Application Programming Interfaces) have gained significant traction to the extent that they have become a de-facto standard for enabling HTTP-based, machine-processable data access. Despite this success, however, they still often fail in making data interoperable, insofar as they commonly rely on proprietary data models and vocabularies that lack the formal semantic descriptions essential to ensure reliable data integration. In the biodiversity domain, multiple data aggregators, such as the Global Biodiversity Information Facility (GBIF) and the Encyclopedia of Life (EoL), maintain specialized Web APIs giving access to billions of records about taxonomies, occurrences, or life traits (Triebel et al. 2012). They publish data sets spanning complementary and often overlapping regions, epochs or domains, but may also report or rely on potentially conflicting perspectives, e.g. with respect to the circumscription of taxonomic concepts. It is therefore of utmost importance for biologists and collection curators to be able to confront the knowledge they have about taxa with related data coming from third-party data sources. To tackle this issue, the French National Museum of Natural History (MNHN) has developed an application to edit TAXREF, the French taxonomic register for fauna, flora and fungi (Gargominy et al. 2018). TAXREF registers all species recorded in metropolitan France and overseas territories, accounting for 260,000+ biological taxa (200,000+ species) along with 570,000+ scientific names. The TAXREF-Web application compares data available in TAXREF with corresponding data from third-party data sources, points out disagreements and allows biologists to add, remove or amend TAXREF accordingly. This requires that TAXREF-Web developers write a specific piece of code for each considered Web API to align the TAXREF representation with the Web API counterpart. This task is time-consuming and makes maintenance of the web application cumbersome. In this presentation, we report on a new implementation of TAXREF-Web that harnesses the Linked Data standards: the Resource Description Framework (RDF), the Semantic Web format for representing knowledge graphs, and SPARQL, the W3C standard for querying RDF graphs. In addition, we leverage the SPARQL Micro-Service architecture (Michel et al. 2018), a lightweight approach to querying Web APIs using SPARQL. A SPARQL micro-service is a SPARQL endpoint that wraps a Web API service; it typically produces a small, resource-centric RDF graph by invoking the Web API and transforming the response into RDF triples. We developed SPARQL micro-services to wrap the Web APIs of GBIF, the World Register of Marine Species (WoRMS), FishBase, Index Fungorum, the Pan-European Species directories Infrastructure (PESI), ZooBank, the International Plant Names Index (IPNI), EoL, Tropicos and Sandre. These micro-services consistently translate Web API responses into RDF graphs using mainly two widely adopted vocabularies: Schema.org (Guha et al. 2015) and Darwin Core (Baskauf et al. 2015). This approach brings about two major advantages. First, the wide adoption of Schema.org and Darwin Core ensures that the services can be immediately understood and reused by a large audience within the biodiversity community. Second, wrapping all these Web APIs in SPARQL micro-services "suddenly" makes them technically and semantically interoperable, since they all represent resources (taxa, habitats, traits, etc.) in a common manner.
Consequently, the integration task is simplified: confronting data from multiple sources essentially consists of writing the appropriate SPARQL queries, which makes web application development and maintenance easier. We present several concrete cases in which we use this approach to detect disagreements between TAXREF and the aforementioned data sources, with respect to taxonomic information (author, synonymy, vernacular names, classification, taxonomic rank), habitats, bibliographic references, species interactions and life traits.
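The wrapping step at the heart of a SPARQL micro-service can be sketched as follows: call a Web API and translate its JSON response into an RDF graph using Darwin Core and Schema.org terms. This is a simplified illustration, not the authors' code; the GBIF species-match endpoint and response fields used here should be checked against the current GBIF API documentation, and a full micro-service would additionally expose the resulting graph through a SPARQL endpoint.

```python
import requests
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

DWC = Namespace("http://rs.tdwg.org/dwc/terms/")
SDO = Namespace("http://schema.org/")

def gbif_taxon_as_rdf(name):
    """Wrap the GBIF species-match API and return its answer as RDF triples."""
    resp = requests.get("https://api.gbif.org/v1/species/match",
                        params={"name": name}, timeout=10)
    resp.raise_for_status()
    data = resp.json()

    g = Graph()
    taxon = URIRef(f"https://www.gbif.org/species/{data['usageKey']}")
    g.add((taxon, RDF.type, DWC.Taxon))
    g.add((taxon, DWC.scientificName, Literal(data["scientificName"])))
    g.add((taxon, DWC.taxonRank, Literal(data.get("rank", ""))))
    g.add((taxon, SDO.name, Literal(name)))
    return g

print(gbif_taxon_as_rdf("Delphinus delphis").serialize(format="turtle"))
```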


2021, Vol 26 (2), pp. 143-149
Author(s): Abdelghani Bouziane, Djelloul Bouchiha, Redha Rebhi, Giulio Lorenzini, Noureddine Doumi, ...

The evolution of the traditional Web into the Semantic Web makes the machine a first-class citizen on the Web and increases the discovery and accessibility of unstructured Web-based data. This development makes it possible to use Linked Data technology as the background knowledge base for unstructured data, especially texts, which are now available in massive quantities on the Web. Given any text, the main challenge is determining the most relevant DBpedia information with minimal effort and time. However, DBpedia annotation tools, such as DBpedia Spotlight, have mainly targeted the English and other Latin-script DBpedia versions. The situation of the Arabic language is less bright: Arabic Web content does not reflect the importance of this language. We have therefore developed an approach to annotate Arabic texts with Linked Open Data, in particular DBpedia. The approach uses natural language processing and machine learning techniques for interlinking Arabic text with Linked Open Data. Despite the high complexity of the domain-independent knowledge base and the limited resources for Arabic natural language processing, the evaluation results of our approach were encouraging.
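The entity-annotation step such a pipeline builds on can be illustrated with the public DBpedia Spotlight REST service mentioned in the abstract. The sketch below uses the English endpoint because the availability of an Arabic Spotlight deployment is not stated in the abstract; the paper's own Arabic NLP and machine-learning components are not reproduced here.

```python
import requests

def annotate(text, lang="en", confidence=0.5):
    """Call DBpedia Spotlight and return (surface form, DBpedia URI) pairs."""
    resp = requests.get(
        f"https://api.dbpedia-spotlight.org/{lang}/annotate",
        params={"text": text, "confidence": confidence},
        headers={"Accept": "application/json"},
        timeout=10,
    )
    resp.raise_for_status()
    # "Resources" is absent when no entities are found.
    return [(r["@surfaceForm"], r["@URI"])
            for r in resp.json().get("Resources", [])]

print(annotate("Avicenna wrote the Canon of Medicine in Persia."))
```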

