Hypertext Links and Relationships in XML Databases

Author(s):  
Anne Brüggemann-Klein
Lorenz Singer

For semistructured data and narrative documents in XML databases, hypertext links are a fitting analogue to the foreign-key references used for structured data in relational databases. We encode hypertext links with XLink. To process the links, we use the XLink processor HyQuery, an XQuery module that turns a native, XQuery-enabled XML database into a hyperdata system. This system is used in a lab course "XML Technology" and in the case study XTunes, a Web application that manages metadata and recordings of classical music.
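
The abstract itself contains no code. As a rough illustration of the underlying idea, the following Python/lxml sketch finds XLink-encoded references in a document and reports their targets, much as a foreign key points into another table. HyQuery itself is an XQuery module; the element names here are invented for illustration.

```python
# Minimal sketch of locating XLink simple links in an XML document.
# HyQuery is an XQuery module; this Python/lxml version only illustrates
# the idea. Element names ("recording", "composer") are invented.
from lxml import etree

XLINK_NS = "http://www.w3.org/1999/xlink"

doc = etree.XML(f"""
<recording xmlns:xlink="{XLINK_NS}">
  <title>Goldberg Variations</title>
  <composer xlink:type="simple" xlink:href="composers.xml#bach"/>
</recording>
""")

# Report every element that carries an xlink:href, analogous to
# following a foreign-key reference in a relational database.
for link in doc.iter():
    href = link.get(f"{{{XLINK_NS}}}href")
    if href is not None:
        print(f"<{link.tag}> links to {href}")
```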

Author(s):  
David J. Birnbaum
Hugh Cayless
Emmanuelle Morlock
Leif-Jöran Olsson
Joseph Wicentowski

We have identified four models for integrating digital edition content into eXist-db [eXist-db], which are, in increasing order of dependence on eXist-db itself: 1) using Apache [Apache] and PHP [PHP] to mediate between the user and eXist-db, so that eXist-db provides only XML database services, 2) a pure XQuery framework for building an eXist-db web application [Web applications], 3) the eXist-db HTML templating framework [HTML templating], and 4) TEI Publisher [TEI Publisher]. Our examination and comparison of these ways of conceptualizing and implementing the infrastructure for a digital edition reveals that each of them has advantages and disadvantages, primarily from the perspective of sustainability. These considerations apply to edition frameworks generally, and are therefore not specific to eXist-db, which has been used here as an example because of the number of editions that employ it and the variety of models it currently supports.
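
As a hedged sketch of the first model, in which a mediating layer treats eXist-db purely as an XML database service, the following Python snippet (standing in for the PHP mediator named in the paper) sends an ad-hoc XQuery to eXist-db's standard REST interface. The URL, collection path, and credentials are placeholders.

```python
# Sketch of model 1: an external layer (here Python instead of the
# paper's Apache/PHP example) mediates between the user and eXist-db,
# which provides only XML database services via its REST interface.
# The URL, collection path, and credentials below are placeholders.
import requests

EXIST_REST = "http://localhost:8080/exist/rest/db/my-edition"

query = 'declare namespace tei="http://www.tei-c.org/ns/1.0"; count(//tei:TEI)'

response = requests.get(
    EXIST_REST,
    params={"_query": query, "_wrap": "no"},  # eXist REST query parameters
    auth=("admin", ""),  # default eXist admin account; configure properly
)
response.raise_for_status()
print(response.text)  # number of TEI documents in the collection
```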


2021
Author(s):
Jason Hunter
Mark Thyer
Dmitri Kavetski
David McInerney

Probabilistic predictions quantify the uncertainty of hydrological predictions and are a key input for risk-based decision-making. However, they are often excluded from hydrological modelling applications because suitable probabilistic error models can be challenging to construct and interpret, and the quality of the results often depends on the objective function used to calibrate the hydrological model.

We present an open-source R package and an online web application that achieve two aims. First, these resources are easy to use and accessible, so users do not need specialised knowledge of probabilistic modelling to apply them. Second, the probabilistic error model we describe provides high-quality probabilistic predictions for a wide range of commonly used hydrological objective functions; this breadth is made possible by a new innovation that resolves a long-standing issue with model assumptions, which had previously prevented such broad application.

We demonstrate our methods by comparing the new probabilistic error model with an existing reference error model in an empirical case study covering 54 perennial Australian catchments, the hydrological model GR4J, 8 common objective functions and 4 performance metrics (reliability, precision, volumetric bias and errors in the flow duration curve). With most of the study objective functions, the existing reference error model introduces additional flow dependencies into the residual error structure, which in turn leads to poor-quality probabilistic predictions. In contrast, the new probabilistic error model achieves high-quality probabilistic predictions for all objective functions used in this case study.

The new probabilistic error model, the open-source software and the web application aim to facilitate the adoption of probabilistic predictions in the hydrological modelling community, and to improve the quality of predictions and of the decisions made using those predictions. In particular, our methods can be used to achieve high-quality probabilistic predictions from hydrological models that are calibrated with a wide range of common objective functions.
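
The abstract does not spell out the error model itself. The Python sketch below only illustrates the general residual-error-model pattern on which such methods build: fit a Gaussian model to residuals of log-transformed flows and derive probabilistic prediction limits. It is not the paper's new model (that is implemented in the authors' R package), and the flow values are invented.

```python
# Hedged sketch of a generic residual error model for probabilistic
# streamflow predictions: a Gaussian fit to residuals of log-transformed
# flows (a common way to handle heteroscedasticity). NOT the paper's new
# error model; the data below are invented for illustration.
import numpy as np

observed  = np.array([12.0, 30.5, 8.2, 55.1, 20.3])  # observed streamflow
simulated = np.array([10.1, 33.0, 9.0, 48.7, 22.8])  # e.g. GR4J output

# Residuals in log space, so that error magnitude scales with flow.
resid = np.log(observed) - np.log(simulated)
mu, sigma = resid.mean(), resid.std(ddof=1)

# 90% probabilistic prediction limits for a new simulated flow value.
z = 1.645  # standard-normal 95th percentile
new_sim = 40.0
lower = new_sim * np.exp(mu - z * sigma)
upper = new_sim * np.exp(mu + z * sigma)
print(f"90% prediction interval: [{lower:.1f}, {upper:.1f}]")
```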


1962
Vol 21 (1)
pp. 56-60
Author(s):  
Robert Textor

The purpose of this article is to describe a methodological adventure in the use of the survey technique to investigate shamanism. At the outset I must state my belief that the anthropologist should use structured techniques, if at all, only after he has used unstructured ones. Structured data-gathering is a valuable supplement to, but never a substitute for, unstructured interviewing and observing. This article describes the use of a structured technique as a supplementary means of understanding shamanism, an area which, to my knowledge, has heretofore been studied only by unstructured techniques.


2018
Vol 14 (3)
pp. 44-68
Author(s):
Fatma Abdelhedi
Amal Ait Brahim
Gilles Zurfluh

Nowadays, most organizations need to improve their decision-making process using Big Data. To achieve this, they have to store Big Data, analyse it, and transform the results into useful and valuable information, which raises new challenges in designing and creating data warehouses. Traditionally, creating a data warehouse followed a well-governed process based on relational databases. Big Data has challenged this traditional approach, primarily because of the changing nature of the data, so using NoSQL databases has become a necessity for handling it. In this article, the authors show how to create a data warehouse on NoSQL systems. They propose Object2NoSQL, a process that generates column-oriented physical models starting from a UML conceptual model. To ensure an efficient automatic transformation, they propose a logical model that is sufficiently platform-independent to be mapped to one or more column-oriented platforms. The authors evaluate their approach experimentally with a case study in the health-care field.
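
The abstract does not give the transformation rules, so the following Python sketch only illustrates the kind of mapping described: a UML-like class is first turned into a platform-independent logical model and then rendered for one column-oriented target (here HBase shell syntax). The class and attribute names are invented.

```python
# Toy sketch of a conceptual-to-physical mapping of the kind the article
# describes. The real Object2NoSQL rules are not given in the abstract;
# class and attribute names below are invented for illustration.
uml_class = {
    "name": "Patient",
    "attributes": {"id": "Integer", "name": "String", "birthDate": "Date"},
}

# Logical model: a platform-independent column-family description.
logical = {
    "table": uml_class["name"],
    "families": {"info": list(uml_class["attributes"])},
}

# Physical model for one column-oriented platform (HBase shell syntax).
families = ", ".join(f"'{f}'" for f in logical["families"])
print(f"create '{logical['table']}', {families}")
# -> create 'Patient', 'info'
```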


2015
Vol 12 (2)
pp. 655-681
Author(s):
Tomas Cerny
Miroslav Macik
Michael Donahoo
Jan Janousek

Increasing demands on user interface (UI) usability, adaptability, and dynamic behavior drive ever-growing development and maintenance complexity. Traditional UI design techniques result in complex descriptions for data presentations with significant information restatement. In addition, the multiple concerns in UI development lead to descriptions that exhibit concern tangling, which results in high fragment replication. Concern-separating approaches address these issues; however, they fail to maintain the separation of concerns through execution tasks such as rendering or UI delivery to clients. During the rendering process on the server side, the separation collapses into entangled concerns that are then provided to clients. Such client-side entanglement may seem inconsequential, since the clients simply display what is sent to them; however, it compromises client performance, causing problems such as replication and fragment granularity ill-suited for effective caching. This paper considers the advantages of concern separation from both perspectives. It proposes an extension to aspect-oriented UI design with distributed concern delivery (DCD) for client-server applications. This extension lessens server-side involvement in UI assembly and reduces fragment replication in the delivered UI descriptions. The server provides clients with individual UI concerns, and the clients become partially responsible for UI assembly. This change increases client-side concern reuse and extends caching opportunities, reducing the volume of information transmitted between client and server and thus improving UI responsiveness and performance. The underlying aspect-oriented UI design automates the server-side derivation of concerns related to data presentations adapted to runtime context, security, conditions, etc. The approach is evaluated in a case study applying DCD to an existing production web application. Our results demonstrate decreased volumes of UI descriptions assembled on the server side and extended client-side caching abilities, which reduce the required data/fragment transmission and improve UI responsiveness. Furthermore, we evaluate the potential benefits and implications of DCD integration in selected UI frameworks.
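
As a hedged illustration of the distributed-concern-delivery idea, the Python sketch below has a client cache a reusable presentation concern and re-fetch only the data concern before assembling the UI itself. The concern names and the particular split are invented and are not the paper's actual design.

```python
# Sketch of distributed concern delivery: the server sends UI concerns
# separately instead of a pre-assembled page, so the client can cache
# the stable presentation concern and re-fetch only the data concern.
# Concern names and the split below are invented for illustration.
from string import Template

cache = {}  # client-side cache keyed by concern id

def fetch_concern(concern_id):
    """Stand-in for a network fetch of one UI concern from the server."""
    server = {
        "presentation:user-card": Template("<div>$name ($role)</div>"),
        "data:user:42": {"name": "Ada", "role": "admin"},
    }
    return server[concern_id]

def render_user_card(user_id):
    # The presentation concern is reusable across users: cache it once.
    if "presentation:user-card" not in cache:
        cache["presentation:user-card"] = fetch_concern("presentation:user-card")
    template = cache["presentation:user-card"]
    # The data concern changes per user and per request: always fetch it.
    data = fetch_concern(f"data:user:{user_id}")
    return template.substitute(data)  # client-side assembly

print(render_user_card(42))  # -> <div>Ada (admin)</div>
```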


Author(s):  
Heiko Paulheim
Christian Bizer

Linked Data on the Web is created either from structured data sources (such as relational databases), from semi-structured sources (such as Wikipedia), or from unstructured sources (such as text). In the latter two cases, the generated Linked Data will likely be noisy and incomplete. In this paper, we present two algorithms that exploit statistical distributions of properties and types to enhance the quality of incomplete and noisy Linked Data sets: SDType adds missing type statements, and SDValidate identifies faulty statements. Neither algorithm uses external knowledge, i.e., they operate only on the data itself. We evaluate the algorithms on the DBpedia and NELL knowledge bases, showing that they are both accurate and scalable. Both algorithms were used in building the DBpedia 3.9 release: with SDType, 3.4 million missing type statements were added, and with SDValidate, 13,000 erroneous RDF statements were removed from the knowledge base.
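
The abstract describes SDType's core idea precisely enough to sketch a simplified version: score candidate types for an untyped resource by averaging the type distributions associated with each of its properties. The Python sketch below omits SDType's weighting of properties by how discriminative they are, and the toy statistics are invented.

```python
# Simplified sketch of the SDType idea: infer likely types for an
# untyped resource from the statistical type distributions of each
# property's subjects. The real SDType also weights properties by their
# discriminative power; that refinement is omitted here, and the toy
# probabilities below are invented for illustration.
from collections import defaultdict

# P(type | resource uses property), estimated from the typed resources.
type_dist = {
    "dbo:birthPlace": {"dbo:Person": 0.90, "dbo:Place": 0.05},
    "dbo:author":     {"dbo:Book": 0.60, "dbo:Film": 0.25},
}

def sdtype_scores(properties):
    """Average the per-property type distributions for one resource."""
    scores = defaultdict(float)
    for prop in properties:
        for rdf_type, p in type_dist.get(prop, {}).items():
            scores[rdf_type] += p / len(properties)
    return dict(scores)

# Score an untyped resource that occurs with both properties:
print(sdtype_scores(["dbo:birthPlace", "dbo:author"]))
# -> {'dbo:Person': 0.45, 'dbo:Place': 0.025, 'dbo:Book': 0.3, 'dbo:Film': 0.125}
```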


2017
Vol 53 (1)
pp. 217-241
Author(s):  
Anita Prelovšek

In Ljubljana and its surroundings, the music at a traditional funeral is still usually provided by a vocal ensemble or a trumpet, but in 2016 these were increasingly replaced by a girls' vocal and instrumental ensemble. The choice of music depends largely on the wishes of the deceased's relatives. Folk music predominates, followed by popular music; classical music is requested least. The most frequently performed songs of 2016 were Gozdič je že zelen, Lipa zelenela je and Nearer, My God, to Thee.

