A Scheme for Generating Provenance-aware Applications through UML

2013 ◽  
Vol 6 (3) ◽  
pp. 867-872 ◽  
Author(s):  
Anu Mary Chacko ◽  
Dr. Madhukumar S D

The metadata that captures information about the origin of data is referred to as data provenance or data lineage. The provenance of a data item records the processes and source data items that led to its creation and its current representation. A provenance-aware application captures and stores enough documentation about process executions to answer provenance queries. Provenance information is very useful when we need to know the interdependencies of data, for example to trace error propagation or information flow. Collected provenance also helps to understand why two identical workflows with the same inputs produced different outputs. Currently, most provenance systems are domain specific. In this paper, we propose a general methodology for making an application provenance-aware starting from its basic UML design diagrams. As a starting point, we have analyzed UML class diagrams to generate the information needed to make an application provenance-aware.  
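The abstract does not give the authors' actual generation scheme, but the idea of attaching provenance capture to operations found in a class diagram can be sketched. The following is a minimal illustration (all names are ours, not the paper's): a decorator that a hypothetical UML-to-code generator might emit for each operation, recording which inputs produced which output.

```python
import datetime
import functools

PROVENANCE_LOG = []  # each entry documents one process execution

def provenance_aware(activity):
    """Hook a hypothetical UML-to-code generator could attach to an
    operation from a class diagram: records inputs, output, and time."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(self, *args, **kwargs):
            result = fn(self, *args, **kwargs)
            PROVENANCE_LOG.append({
                "activity": activity,
                "used": [repr(a) for a in args],
                "generated": repr(result),
                "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            })
            return result
        return wrapper
    return decorator

class ReportBuilder:
    """Stand-in for a class taken from a UML class diagram."""
    @provenance_aware("ReportBuilder.merge")
    def merge(self, a, b):
        return a + b
```

A query over `PROVENANCE_LOG` can then answer which source items led to a given result, in the spirit of the provenance queries described above.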

Author(s):  
Liwei Wang ◽  
Henning Koehler ◽  
Ke Deng ◽  
Xiaofang Zhou ◽  
Shazia Sadiq

The description of the origins of a piece of data and the transformations by which it arrived in a database is termed data provenance. The importance of data provenance has been widely recognized in the database community. The two major approaches to representing provenance information are annotation and inversion. While an annotation is metadata pre-computed to include the derivation history of a data product, the inversion method finds the source data by exploiting the fact that some derivation processes can be inverted. Annotations can flexibly represent diverse provenance metadata, but the complete provenance may outgrow the data itself. The inversion method is concise, using a single inverse query or function, but the provenance must be computed on the fly. This paper proposes a new provenance representation that is a hybrid of the annotation and inversion methods, combining their advantages. The representation adapts to the storage constraint and to the response-time requirement of computing provenance by inversion on the fly.
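The annotation/inversion trade-off described above can be illustrated with a small sketch (our own simplification, not the paper's representation): store the pre-computed annotation when it fits a storage budget, otherwise fall back to an inverse function evaluated on demand.

```python
ANNOTATION_BUDGET = 3  # hypothetical storage limit, in number of source ids

class HybridProvenance:
    """Keep a pre-computed annotation when it fits the storage budget;
    otherwise keep only an inverse function and run it on demand."""

    def __init__(self, source_ids, inverse_fn):
        if len(source_ids) <= ANNOTATION_BUDGET:
            self._annotation = sorted(source_ids)  # annotation method
            self._inverse = None
        else:
            self._annotation = None
            self._inverse = inverse_fn             # inversion method

    def lineage(self):
        if self._annotation is not None:
            return self._annotation                # already materialized
        return sorted(self._inverse())             # computed on the fly
```

Raising the budget trades storage for faster provenance lookups, which mirrors the adaptivity the abstract claims.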



Author(s):  
Anton Michlmayr ◽  
Florian Rosenberg ◽  
Philipp Leitner ◽  
Schahram Dustdar

In general, provenance describes the origin and well-documented history of a given object. This notion has been applied in information systems, mainly to provide data provenance for scientific workflows. Provenance in service-oriented computing has likewise focused on data provenance. However, the authors argue that in service-centric systems the origin and history of the services themselves are equally important. This paper presents an approach that addresses service provenance. The authors show how service provenance information can be collected and retrieved, and how security mechanisms guarantee integrity and control access to this information while also providing user-specific views on provenance. Finally, the paper gives a performance evaluation of the authors’ approach, which has been integrated into the VRESCo Web service runtime environment.
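The combination of service history and user-specific views can be sketched in a few lines. This is a toy registry of our own invention, not VRESCo's actual API: lifecycle events are recorded per service, and restricted events are only visible to the actor who produced them.

```python
class ServiceProvenanceRegistry:
    """Toy registry (illustrative, not VRESCo's API): records service
    lifecycle events and serves user-specific views on the history."""

    def __init__(self):
        self._events = []

    def record(self, service, action, actor, visibility="public"):
        self._events.append({"service": service, "action": action,
                             "actor": actor, "visibility": visibility})

    def history(self, service, viewer=None):
        """Public events are visible to everyone; restricted events
        only to the actor who produced them."""
        return [e for e in self._events
                if e["service"] == service
                and (e["visibility"] == "public" or e["actor"] == viewer)]
```

A real system would add integrity protection (e.g. signed events) on top of such a log, as the paper's security mechanisms do.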


Author(s):  
Camille Bourgaux ◽  
Ana Ozaki ◽  
Rafael Penaloza ◽  
Livia Predoiu

We address the problem of handling provenance information in ELHr ontologies. We consider a setting recently introduced for ontology-based data access, based on semirings and extending classical data provenance, in which ontology axioms are annotated with provenance tokens. A consequence inherits the provenance of the axioms involved in deriving it, yielding a provenance polynomial as an annotation. We analyse the semantics for the ELHr case and show that the presence of conjunctions poses various difficulties for handling provenance, some of which are mitigated by assuming multiplicative idempotency of the semiring. Under this assumption, we study three problems: ontology completion with provenance, computing the set of relevant axioms for a consequence, and query answering.
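Under the multiplicative idempotency assumed above, a provenance polynomial can be modelled concretely as a set of monomials, where each monomial is just the set of axiom tokens used jointly in one derivation (since x · x = x, multiplicities inside a monomial do not matter). A minimal sketch of the semiring operations, in our own notation rather than the paper's:

```python
# Polynomial = set of monomials; monomial = frozenset of axiom tokens.

def times(p, q):
    """Semiring product: axioms used together in a single derivation."""
    return {m | n for m in p for n in q}

def plus(p, q):
    """Semiring sum: alternative derivations of the same consequence."""
    return p | q

def token(t):
    """Polynomial annotating a single axiom with provenance token t."""
    return {frozenset({t})}
```

A consequence derived jointly from axioms annotated v1 and v2, or alternatively from v3 alone, gets the annotation v1·v2 + v3.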


Author(s):  
Kristóf Csorba ◽  
Ádám Budai ◽  
Judit Zöldföldi ◽  
Balázs Székely

GrainAutLine is an interdisciplinary microscopy image analysis tool with domain-specific smart functions that partially automate the processing of marble thin section images. It allows the user to create a clean grain boundary image, which is the starting point of several archaeometric and geologic analyses. The semi-automatic tools minimize the need to draw grain boundaries manually, even in cases where twin crystals rule out classic edge-detection-based boundary detection. Thanks to the semi-automatic approach, the user has full control over the process and can modify the automatic results before finalizing each step. This approach guarantees high-quality results both where the process is easy to automate and where it needs more help from the user. This paper presents the basic operation of the system and the tools it provides, as a case study of an interdisciplinary, semi-automatic image-processing application.
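The semi-automatic pattern described above, where each automatic step's output can be corrected by the user before the next step runs, can be sketched generically (this is our illustration of the workflow style, not GrainAutLine's code):

```python
def run_pipeline(image, steps, review=lambda name, result: result):
    """Run automatic steps in order; after each one, the review callback
    lets the user inspect and correct the result before the next step."""
    result = image
    for name, step in steps:
        result = review(name, step(result))
    return result
```

With the default `review`, the pipeline is fully automatic; plugging in an interactive callback gives the user the final say at every step.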


2016 ◽  
Vol 60 (3) ◽  
pp. 576-598 ◽  
Author(s):  
Gudrun Rawoens ◽  
Thomas Egan

This paper examines the way in which the semantic notion of ‘betweenness’ is coded in Swedish and Norwegian translations of the same English source texts. The study takes its starting point in the contention that the original English expressions of ‘betweenness’ containing the preposition between constitute a viable tertium comparationis for translations of that form into the other two languages. A classification of all occurrences of between in the English source texts in The English-Swedish Parallel Corpus (ESPC) and The English-Norwegian Parallel Corpus (ENPC) in terms of the semantic role of the landmarks in the predications is followed by an analysis of the translations, both congruent and divergent. The primary focus, however, is not on the correspondences between the English original and its translations into Swedish and Norwegian, but on the parallels between the two sets of translations. To this end comparisons are drawn between the Swedish and Norwegian renderings of the various meanings of between in the source data. The analysis shows that Swedish and Norwegian resemble one another closely in the means employed to code the various senses of between. The last part of the study offers a complementary perspective in comparing occurrences of the most common translation equivalents of between, mellan in Swedish and mellom in Norwegian, in contexts where they do not translate between in the English source texts. This approach reveals that, despite the lack of between in the original texts, the two sets of translators both employ the cognate prepositions in over 25% of cases.


2021 ◽  
Vol 11 (9) ◽  
pp. 504
Author(s):  
Marie-Jetta den Otter ◽  
Michiel Dam ◽  
Ludo Juurlink ◽  
Fred Janssen

Structure–property reasoning (SPR) is one of the most important aims of chemistry education but is seldom explicitly taught, and students find structure–property reasoning difficult. This study assessed two design principles for the development of structure–property reasoning in the context of demonstrations: (1) use of a POE task (predict–observe–explain) and (2) use of the domain-specific particle perspective, both to increase student engagement and to scaffold micro-level modeling. The aim of the demonstration series was to teach structure–property reasoning more explicitly to pre-university students (aged 15–16). Demonstrations pertained to the properties of metals, salts and molecular compounds. The SPR instrument was used as a pretest and posttest in order to gain insight into the effects on structure–property reasoning. In addition, one student (Sally) was followed closely to see how her structure–property reasoning evolved throughout the demonstrations. Results show that after the demonstrations students were more aware of the structure models at the micro-level. The students also knew and understood more chemical concepts needed for structure–property reasoning. Sally’s qualitative data additionally showed how she made interesting progress in modeling micro-level chemical structures. As we used conventional demonstrations as a starting point for design, this could well serve as a practical tool for teachers to redesign their existing demonstrations.


2020 ◽  
Author(s):  
Filipe Lautert ◽  
Daniel Fernandes Gonçalves Pigatto ◽  
Luiz Celso Gomes-JR

Data provenance tracks the origin of information with the goal of improving trust among interested parties. One of the key aspects provided by data provenance is transparency, which allows stakeholders to follow all the changes applied to a piece of information (e.g. a document). Blockchains, a recent technological development, allow transparency in a distributed application context without the need for a trusted centralized entity. The approach presented here uses a blockchain as secure, shared, and auditable storage providing transparent data provenance. Our proposal builds upon the well-established W3C PROV model, which simplifies adoption of the framework. We developed an application consisting of a client and a REST API service that stores provenance information in a blockchain using open standards. Here we report the results of several stress tests validating the practicality of our approach.
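The tamper-evidence a blockchain contributes to provenance storage comes from hash chaining. A minimal sketch of that mechanism, with simplified PROV-style records of our own making (real blockchains add consensus and distribution on top):

```python
import hashlib
import json

class ProvenanceChain:
    """Append-only hash chain: each block commits to one PROV-style
    record plus the previous block's hash, so tampering is detectable."""

    def __init__(self):
        self.blocks = []

    def _digest(self, record, prev):
        payload = json.dumps({"record": record, "prev": prev}, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    def append(self, record):
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        self.blocks.append({"record": record, "prev": prev,
                            "hash": self._digest(record, prev)})

    def verify(self):
        """Recompute every digest; any edit to a past record breaks the chain."""
        prev = "0" * 64
        for block in self.blocks:
            if block["prev"] != prev or block["hash"] != self._digest(block["record"], prev):
                return False
            prev = block["hash"]
        return True
```

Altering any stored record invalidates its block's digest and every block after it, which is what makes the provenance log auditable.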


Author(s):  
Donald Needham ◽  
Rodrigo Caballero ◽  
Steven Demurjian ◽  
Felix Eickhoff ◽  
Yi Zhang

This chapter examines a formal framework for reusability assessment of development-time components and classes via metrics, refactoring guidelines, and algorithms. It argues that software engineers seeking to improve design reusability stand to benefit from tools that precisely measure the potential and actual reuse of software artifacts to achieve domain-specific reuse for an organization’s current and future products. The authors consider the reuse definition, assessment, and analysis of a UML design prior to the existence of source code, and include dependency tracking for use case and class diagrams in support of reusability analysis and refactoring for UML. The integration of these extensions into the UML tool Together Control Center to support reusability measurement from design to development is also considered.
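The chapter's actual metrics are not given in this summary, but the flavor of measuring reuse potential on a class diagram can be illustrated. The following toy indicator (our own, not the authors' framework) treats a class's outgoing dependencies as a rough proxy for how hard it is to lift into another product:

```python
def outgoing_coupling(cls, dependencies):
    """Number of dependency edges leaving `cls` in the class diagram."""
    return sum(1 for source, _ in dependencies if source == cls)

def reuse_candidates(dependencies, threshold=1):
    """Classes coupled loosely enough to be promising reuse candidates."""
    classes = {c for edge in dependencies for c in edge}
    return sorted(c for c in classes
                  if outgoing_coupling(c, dependencies) <= threshold)
```

Dependency tracking over use case and class diagrams, as described above, supplies exactly the edge data such a metric needs before any source code exists.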


2020 ◽  
Vol 25 (6) ◽  
pp. 793-801
Author(s):  
Maturi Sreerama Murty ◽  
Nallamothu Nagamalleswara Rao

Tracking the accessibility of Resource Description Framework (RDF) resources is a key capability in building Linked Data frameworks, shifting the focus toward data integration rather than raw processing speed. Linked Data enables applications to improve by converting legacy data into RDF resources; this data includes bibliographic, geographic, and government records, classifications, and links. However, most such applications do not track the details and execution history of each published resource. In such cases, it is important for those applications to track, store, and disseminate provenance information that reflects their source data and the operations performed. We present an RDF data provenance tracking framework. Provenance information is tracked during the conversion process and managed throughout; this information is then made available through provenance URIs. The proposed design is based on the Harvard Library Database. Experiments were performed on datasets with changes made to values in the RDF and to the associated provenance details. The results are promising in that the approach lets data publishers produce meaningful provenance records with little time and effort.
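The core idea of linking each converted RDF triple to a dereferenceable provenance record can be sketched briefly. All names here are ours, chosen for illustration, not the framework's actual interfaces:

```python
class ProvenanceTrackingStore:
    """Illustrative store: each RDF triple is linked to a provenance
    record, dereferenceable through a provenance URI."""

    def __init__(self):
        self._triples = {}      # (s, p, o) -> provenance URI
        self._provenance = {}   # provenance URI -> record
        self._counter = 0

    def add(self, s, p, o, source, operation):
        """Store a triple produced by a conversion step, with its lineage."""
        self._counter += 1
        prov_uri = f"urn:prov:{self._counter}"
        self._triples[(s, p, o)] = prov_uri
        self._provenance[prov_uri] = {"source": source, "operation": operation}
        return prov_uri

    def provenance_of(self, s, p, o):
        return self._provenance[self._triples[(s, p, o)]]
```

A legacy-to-RDF converter would call `add` for every triple it emits, so that consumers can later ask where any statement came from.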

