A Use Case for Ontology Evolution and Interoperability

Author(s):  
Mathias Uslar ◽  
Fabian Grüning ◽  
Sebastian Rohjans

Within this chapter, the authors provide two use cases on semantic interoperability in the electric utility industry based on the IEC TR 62357 seamless integration architecture. The first use case, on semantic integration based on ontologies, deals with the integration of the two heterogeneous standards families IEC 61970 and IEC 61850. Based on a quantitative analysis, the authors outline the need for integration and provide a solution based on their framework, COLIN. The second use case points out the need for better metadata semantics in the utility sector and is based solely on the IEC 61970 standard. The authors provide a solution that uses the CIM as a domain ontology and taxonomy for improving data quality. Finally, the chapter outlines open questions and argues that proper semantics and domain models based on international standards can improve the systems within a utility.
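
To make the kind of integration described above concrete, here is a minimal sketch (not the authors' COLIN framework) of how a CIM class from IEC 61970 and an IEC 61850 logical node could be expressed as OWL classes and aligned with an equivalence axiom using rdflib; the namespace URIs are illustrative only.

```python
# Minimal sketch, not the COLIN implementation: express a CIM class and an
# IEC 61850 logical node as OWL classes and link them with an equivalence axiom.
# Namespace URIs are illustrative placeholders, not the official IEC namespaces.
from rdflib import Graph, Namespace, RDF, RDFS, OWL, Literal

CIM = Namespace("http://example.org/TC57/CIM#")     # hypothetical IEC 61970 namespace
SCL = Namespace("http://example.org/TC57/61850#")   # hypothetical IEC 61850 namespace

g = Graph()
g.bind("cim", CIM)
g.bind("scl", SCL)

# CIM Breaker class (IEC 61970) and the XCBR circuit-breaker logical node (IEC 61850)
g.add((CIM.Breaker, RDF.type, OWL.Class))
g.add((CIM.Breaker, RDFS.label, Literal("Breaker (CIM, IEC 61970)")))
g.add((SCL.XCBR, RDF.type, OWL.Class))
g.add((SCL.XCBR, RDFS.label, Literal("XCBR circuit breaker logical node (IEC 61850)")))

# The integration step: assert that both concepts denote the same device type.
g.add((CIM.Breaker, OWL.equivalentClass, SCL.XCBR))

print(g.serialize(format="turtle"))
```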


Author(s):  
Ignacio Blanquer ◽  
Vicente Hernandez

Epidemiology constitutes one relevant use case for the adoption of grids for health. It combines challenges that have traditionally been addressed by grid technologies, such as managing large amounts of distributed and heterogeneous data, large-scale computing, and the need for integration and collaboration tools, but it introduces new challenges traditionally addressed by the e-health area. The application of grid technologies to epidemiology has concentrated on the federation of distributed data repositories, the evaluation of computationally intensive statistical epidemiological models, and the management of authorisation mechanisms in virtual organisations. However, epidemiology presents important additional constraints that remain unsolved and hamper the take-off of grid technologies. The most important problems are the semantic integration of data, the effective management of security and privacy, the lack of exploitation models for the use of infrastructures, the instability of Quality of Service, and the seamless integration of the technology into the epidemiology environment. This chapter presents an analysis of how these issues are being considered in state-of-the-art research.


Electronics ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 592
Author(s):  
Radek Silhavy ◽  
Petr Silhavy ◽  
Zdenka Prokopova

Software size estimation is a nontrivial task, based either on data analysis or on an algorithmic estimation approach, and it is important for software project planning and management. In this paper, a new method called Actors and Use Cases Size Estimation is proposed. The new method relies only on the number of actors and use cases. It is based on stepwise regression and led to a very significant reduction in errors when estimating the size of software systems compared to Use Case Points-based methods. The proposed method is independent of Use Case Points, which eliminates the effect of inaccurately determined Use Case Points components, because such components are not used in the proposed method.
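
As an illustration of the kind of model the method builds on, the following is a minimal sketch (hypothetical data and coefficients, not the published model) that fits software size to only two predictors, the number of actors and the number of use cases, with ordinary least squares; a full stepwise procedure would additionally add or drop predictors based on their statistical significance.

```python
# Minimal sketch with invented data: estimate software size from only two
# predictors -- the number of actors and the number of use cases.
import numpy as np

# Hypothetical historical projects: (actors, use cases) -> delivered size
actors    = np.array([ 4,  6,  9, 12,  7, 15,  5, 10])
use_cases = np.array([12, 20, 31, 45, 25, 60, 16, 38])
size      = np.array([8.1, 13.0, 19.5, 28.2, 15.9, 37.4, 10.2, 24.0])  # KSLOC, invented

# Design matrix with an intercept column: size ~ b0 + b1*actors + b2*use_cases
X = np.column_stack([np.ones_like(actors, dtype=float), actors, use_cases])
coef, *_ = np.linalg.lstsq(X, size, rcond=None)
b0, b1, b2 = coef

def estimate_size(n_actors: int, n_use_cases: int) -> float:
    """Predict size for a new project from its actor and use-case counts."""
    return b0 + b1 * n_actors + b2 * n_use_cases

print(f"size ~ {b0:.2f} + {b1:.2f}*actors + {b2:.2f}*use_cases")
print("new project estimate:", round(estimate_size(8, 27), 1), "KSLOC")
```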


2014 ◽  
Vol 23 (01) ◽  
pp. 27-35 ◽  
Author(s):  
S. de Lusignan ◽  
S-T. Liaw ◽  
C. Kuziemsky ◽  
F. Mold ◽  
P. Krause ◽  
...  

Summary Background: Generally, the benefits and risks of vaccines can be determined from studies carried out as part of regulatory compliance, followed by surveillance of routine data; however, there are some rarer and longer-term events that require new methods. Big data, generated by increasingly affordable personalised computing and by pervasive computing devices, is growing rapidly, and low-cost, high-volume cloud computing makes processing these data inexpensive. Objective: To describe how big data and related analytical methods might be applied to assess the benefits and risks of vaccines. Method: We reviewed the literature on the use of big data to improve health, applied to generic vaccine use cases that illustrate the benefits and risks of vaccination. We defined a use case as the interaction between a user and an information system to achieve a goal. We used flu vaccination and pre-school childhood immunisation as exemplars. Results: We reviewed three big data use cases relevant to assessing vaccine benefits and risks: (i) big data processing using crowd-sourcing, distributed big data processing, and predictive analytics; (ii) data integration from heterogeneous big data sources, e.g. the increasing range of devices in the “internet of things”; and (iii) real-time monitoring for the direct monitoring of epidemics as well as vaccine effects via social media and other data sources. Conclusions: Big data raises new ethical dilemmas, though its analysis methods can bring complementary real-time capabilities for monitoring epidemics and assessing the vaccine benefit-risk balance.


2021 ◽  
Author(s):  
Bart Gajderowicz

The popularity of ontologies for representing the semantics behind many real-world domains has created a growing pool of ontologies on various topics. While different ontologists, experts, and organizations create the vast majority of ontologies, often for internal use or for use in a narrow context, their domains frequently overlap in a wider context, specifically for complementary domains. To assist in the reuse of ontologies, this thesis proposes a bottom-up technique for creating concept anchors that are used for ontology matching. Anchors are ontology concepts that have been matched to concepts in an external ontology. The matching process is based on decision tree rules inductively derived for one ontology, which are compared with rules derived for external ontologies. The matching algorithm is intended to match taxonomies, that is, ontologies that define subsumption relations between concepts, each with an associated database used to derive the decision trees. This thesis also introduces several algorithm evaluation measures and presents a set of use cases that demonstrate the strengths and weaknesses of the matching process.
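
The following is a minimal sketch (not the algorithm from the thesis) of the general idea: induce a decision tree per concept from that concept's instance data and propose an anchor where trees from the two taxonomies behave most alike; the concepts, features, and data are hypothetical, and a real matcher would compare the induced rules themselves rather than predictions.

```python
# Minimal sketch, hypothetical data: propose concept "anchors" between two taxonomies
# by inducing a decision tree per concept and comparing how similarly the trees
# classify a shared pool of records.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def make_concept_data(center, n=200):
    """Hypothetical instance data: positives near `center`, negatives elsewhere."""
    pos = rng.normal(center, 0.5, size=(n, 2))
    neg = rng.normal(-np.asarray(center), 0.5, size=(n, 2))
    X = np.vstack([pos, neg])
    y = np.array([1] * n + [0] * n)
    return X, y

def concept_tree(center):
    X, y = make_concept_data(center)
    return DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Concepts from ontology A and ontology B, each backed by its own database
trees_a = {"A:HybridCar": concept_tree([2.0, 2.0]), "A:Truck": concept_tree([-2.0, 3.0])}
trees_b = {"B:ElectricVehicle": concept_tree([2.1, 1.9]), "B:Lorry": concept_tree([-2.2, 3.1])}

pool = rng.normal(0.0, 2.0, size=(1000, 2))   # shared instance pool for comparison

def agreement(t1, t2):
    return float(np.mean(t1.predict(pool) == t2.predict(pool)))

for a_name, a_tree in trees_a.items():
    best_name, best_tree = max(trees_b.items(), key=lambda kv: agreement(a_tree, kv[1]))
    print(f"anchor candidate: {a_name} <-> {best_name} (agreement {agreement(a_tree, best_tree):.2f})")
```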


2019 ◽  
Vol 70 (10) ◽  
pp. 3555-3560
Author(s):  
Costinela Valerica Georgescu ◽  
Cristian Catalin Gavat ◽  
Doina Carina Voinescu

Ascorbic acid is a water-soluble vitamin with strong antioxidant action that fulfills an important immune-protective role in the body against infections and prevents the appearance of various cancers. The main goal of this study was to exactly quantify pure ascorbic acid in tablets of two pharmaceuticals. The proposed objective consisted in the improvement and application of an iodometric titration method for the quantitative analysis of ascorbic acid. The ascorbic acid content per tablet in both studied pharmaceuticals was 173.84 mg, very close to the officially stated amount of active substance (180 mg). The percentage deviation from the declared content of pure ascorbic acid was only 3.42 %, below the maximum allowed value of ± 5 % imposed by the Romanian Pharmacopoeia, 10th Edition, in accordance with European and international standards. Statistical analysis confirmed the experimental results and revealed a low standard error, SE = 0.214476, which fell within normal limits. The confidence level value (95.0 %) = 0.551328 and the standard deviation SD = 0.525357 were within the normal range of values. The relative standard deviation (coefficient of variation, or homogeneity) RSD = 26.268 % was below the maximum range of accepted values (30-35 %). The P value = 7.44 × 10⁻⁶ was within normal limits, P < 0.001, so the experimental results showed the highest statistical significance. Thus, the studied titration method can be successfully used in the quantitative analysis of ascorbic acid from different samples.
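
For reference, the kinds of descriptive statistics reported above (SE, SD, RSD, and the 95 % confidence level) can be computed from titration replicates as in the following sketch; the replicate values are hypothetical, not the published measurements.

```python
# Minimal sketch with hypothetical replicate values: mean content, standard deviation,
# standard error, relative standard deviation, and the 95% confidence half-width.
import statistics
from scipy import stats

# Hypothetical ascorbic acid content per tablet (mg) from six replicate titrations
replicates = [173.2, 174.1, 173.6, 174.4, 173.3, 174.0]

n = len(replicates)
mean = statistics.mean(replicates)
sd = statistics.stdev(replicates)              # sample standard deviation
se = sd / n ** 0.5                             # standard error of the mean
rsd = 100.0 * sd / mean                        # relative standard deviation, %
t_crit = stats.t.ppf(0.975, df=n - 1)          # two-sided 95% critical value
ci_half_width = t_crit * se                    # "confidence level (95.0%)" half-width

deviation_pct = 100.0 * abs(mean - 180.0) / 180.0   # deviation from the declared 180 mg

print(f"mean = {mean:.2f} mg, SD = {sd:.4f}, SE = {se:.4f}, RSD = {rsd:.3f} %")
print(f"95% CI half-width = {ci_half_width:.4f} mg, deviation from label = {deviation_pct:.2f} %")
```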


Author(s):  
Matt Woodburn ◽  
Gabriele Droege ◽  
Sharon Grant ◽  
Quentin Groom ◽  
Janeen Jones ◽  
...  

The utopian vision is of a future where a digital representation of each object in our collections is accessible through the internet and sustainably linked to other digital resources. This is a long-term goal, however, and in the meantime there is an urgent need to share data about our collections at a higher level with a range of stakeholders (Woodburn et al. 2020). To sustainably achieve this, and to aggregate this information across all natural science collections, the data need to be standardised (Johnston and Robinson 2002). To this end, the Biodiversity Information Standards (TDWG) Collection Descriptions (CD) Interest Group has developed a data standard for describing collections, which is approaching formal review for ratification as a new TDWG standard. It proposes 20 classes (Suppl. material 1) and over 100 properties that can be used to describe, categorise, quantify, link and track digital representations of natural science collections, from high-level approximations to detailed breakdowns depending on the purpose of a particular implementation. The wide range of use cases identified for representing collection description data means that a flexible approach to the standard and the underlying modelling concepts is essential. These are centered around the ‘ObjectGroup’ (Fig. 1), a class that may represent any group (of any size) of physical collection objects, which have one or more common characteristics. This generic definition of the ‘collection’ in ‘collection descriptions’ is an important factor in making the standard flexible enough to support the breadth of use cases. For any use case or implementation, only a subset of classes and properties within the standard are likely to be relevant. In some cases, this subset may have little overlap with those selected for other use cases. This additional need for flexibility means that very few classes and properties, representing the core concepts, are proposed to be mandatory. Metrics, facts and narratives are represented in a normalised structure using an extended MeasurementOrFact class, so that these can be user-defined rather than constrained to a set identified by the standard. Finally, rather than a rigid underlying data model as part of the normative standard, documentation will be developed to provide guidance on how the classes in the standard may be related and quantified according to relational, dimensional and graph-like models. So, in summary, the standard has, by design, been made flexible enough to be used in a number of different ways. The corresponding risk is that it could be used in ways that may not deliver what is needed in terms of outputs, manageability and interoperability with other resources of collection-level or object-level data. To mitigate this, it is key for any new implementer of the standard to establish how it should be used in that particular instance, and define any necessary constraints within the wider scope of the standard and model. This is the concept of the ‘collection description scheme,’ a profile that defines elements such as: which classes and properties should be included, which should be mandatory, and which should be repeatable; which controlled vocabularies and hierarchies should be used to make the data interoperable; how the collections should be broken down into individual ObjectGroups and interlinked, and how the various classes should be related to each other.
Various factors might influence these decisions, including the types of information that are relevant to the use case, whether quantitative metrics need to be captured and aggregated across collection descriptions, and how many resources can be dedicated to amassing and maintaining the data. This process has particular relevance to the Distributed System of Scientific Collections (DiSSCo) consortium, the design of which incorporates use cases for storing, interlinking and reporting on the collections of its member institutions. These include helping users of the European Loans and Visits System (ELViS) (Islam 2020) to discover specimens for physical and digital loans by providing descriptions and breakdowns of the collections of holding institutions, and monitoring digitisation progress across European collections through a dynamic Collections Digitisation Dashboard. In addition, DiSSCo will be part of a global collections data ecosystem requiring interoperation with other infrastructures such as the GBIF (Global Biodiversity Information Facility) Registry of Scientific Collections, the CETAF (Consortium of European Taxonomic Facilities) Registry of Collections and Index Herbariorum. In this presentation, we will introduce the draft standard and discuss the process of defining new collection description schemes using the standard and data model, and focus on DiSSCo requirements as examples of real-world collection descriptions use cases.
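
As an informal illustration of the modelling concepts above, the sketch below shows one possible in-memory representation of an ObjectGroup whose metrics are captured as user-defined MeasurementOrFact records; the field names and vocabulary values are illustrative, not the normative terms of the draft standard.

```python
# Minimal sketch with illustrative field names, not the normative TDWG CD terms:
# an ObjectGroup describing a set of collection objects, with metrics captured as
# user-defined MeasurementOrFact records rather than fixed attributes.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MeasurementOrFact:
    measurement_type: str            # e.g. "objectCount", "percentDigitised"
    value: str
    unit: Optional[str] = None

@dataclass
class ObjectGroup:
    identifier: str
    description: str
    discipline: Optional[str] = None                 # drawn from a controlled vocabulary in a real scheme
    preservation_method: Optional[str] = None
    measurements: List[MeasurementOrFact] = field(default_factory=list)
    related_groups: List[str] = field(default_factory=list)   # links to other ObjectGroups

# A collection description scheme would constrain which of these fields are mandatory,
# which are repeatable, and which vocabularies they draw from.
pinned_insects = ObjectGroup(
    identifier="https://example.org/objectgroup/entomology-pinned",   # hypothetical URI
    description="Pinned insect collection, dry-preserved",
    discipline="Entomology",
    preservation_method="pinned",
    measurements=[
        MeasurementOrFact("objectCount", "1200000"),
        MeasurementOrFact("percentDigitised", "18", unit="%"),
    ],
)
print(pinned_insects.identifier, len(pinned_insects.measurements), "metrics")
```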


Author(s):  
S R Mani Sekhar ◽  
Siddesh G M ◽  
Swapnil Kalra ◽  
Shaswat Anand

Blockchain technology is an emerging and rapidly growing technology in the current world scenario. A blockchain is a collection of records connected through cryptography, and it plays a vital role in smart contracts. Smart contracts reside on blockchains and are self-controlled and trustable. They can be integrated across various domains such as healthcare, finance, self-sovereign identity, governance, logistics management, home care, etc. The purpose of this article is to analyze various use cases of smart contracts in different domains and come up with a model that may be used in the future. Subsequently, a detailed description of smart contracts and blockchain is provided. Next, case studies related to five different domains are discussed with the help of use case diagrams. Finally, a solution for natural disaster management is proposed by integrating smart contracts, digital identity, policies, and blockchain technologies, which can be used effectively for providing relief to victims during natural disasters.
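
As a purely illustrative sketch (plain Python, not an on-chain contract and not the model proposed in the article), the following toy class captures the disaster-relief idea: funds are released automatically, but only to a registered digital identity and only once a policy condition, a declared disaster in the victim's region, is satisfied.

```python
# Toy sketch, not a real smart contract: relief funds are released only to registered
# digital identities and only once a disaster has been declared in their region.
class ReliefContract:
    def __init__(self, policy_amount: float):
        self.policy_amount = policy_amount
        self.verified_identities = {}     # identity id -> region
        self.declared_disasters = set()   # regions with an officially declared disaster
        self.paid = set()

    def register_identity(self, identity_id: str, region: str) -> None:
        """Stand-in for self-sovereign identity verification."""
        self.verified_identities[identity_id] = region

    def declare_disaster(self, region: str) -> None:
        """Stand-in for a trusted oracle or government declaration."""
        self.declared_disasters.add(region)

    def claim(self, identity_id: str) -> float:
        region = self.verified_identities.get(identity_id)
        if region is None or region not in self.declared_disasters or identity_id in self.paid:
            return 0.0                    # contract conditions not met; nothing is released
        self.paid.add(identity_id)
        return self.policy_amount         # automatic payout once the conditions hold

contract = ReliefContract(policy_amount=500.0)
contract.register_identity("victim-001", "coastal-district")
contract.declare_disaster("coastal-district")
print(contract.claim("victim-001"))      # 500.0
print(contract.claim("victim-001"))      # 0.0 (already paid)
```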


Energies ◽  
2020 ◽  
Vol 13 (16) ◽  
pp. 4223
Author(s):  
Katja Sirviö ◽  
Kimmo Kauhaniemi ◽  
Aushiq Ali Memon ◽  
Hannu Laaksonen ◽  
Lauri Kumpulainen

The operation of microgrids is a complex task because it involves several stakeholders and the control of a large number of different active and intelligent resources or devices. Management functions, such as frequency control or islanding, are defined in the microgrid concept, but depending on the application, some functions may not be needed. In order to analyze the functions required for network operation and to visualize the interactions between the actors operating a particular microgrid, a comprehensive use case analysis is needed. This paper presents a use case modelling method applied to microgrid management, from an abstract or concept level down to a more practical level. By utilizing case studies, the potential entities can be detected where the development or improvement of practical solutions is necessary. The use case analysis has been conducted from the top down, ending with test use cases implemented as real-time simulation models. The test use cases are applied to a real distribution network model, Sundom Smart Grid, with measurement data and newly developed controllers. The functional analysis provides valuable results when studying several microgrid functions operating in parallel and affecting each other. For example, as shown in this paper, ancillary services provided by an active customer may mean that both the active power and the reactive power from the customer premises are controlled at the same time by different stakeholders.
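
As a small illustration of that last point, the sketch below (hypothetical actors, limits, and priority rule) shows an aggregator setting active power for frequency support while the DSO sets reactive power for voltage support at the same customer connection point, with a check that the combined request respects the connection's apparent-power limit.

```python
# Minimal sketch with hypothetical values: two stakeholders controlling the same
# customer connection point in parallel, and a check against its apparent-power limit.
import math

S_MAX_KVA = 50.0   # hypothetical apparent-power limit of the customer connection

def combine_setpoints(p_kw_from_aggregator: float, q_kvar_from_dso: float):
    """Return the setpoints actually applied, curtailing Q if the limit is exceeded."""
    s = math.hypot(p_kw_from_aggregator, q_kvar_from_dso)
    if s <= S_MAX_KVA:
        return p_kw_from_aggregator, q_kvar_from_dso
    # One possible (illustrative) priority rule: keep P, curtail Q to fit the limit.
    q_allowed = math.sqrt(max(S_MAX_KVA**2 - p_kw_from_aggregator**2, 0.0))
    return p_kw_from_aggregator, math.copysign(q_allowed, q_kvar_from_dso)

print(combine_setpoints(30.0, 20.0))   # within limits -> both requests honoured
print(combine_setpoints(45.0, 30.0))   # conflict -> Q curtailed to respect S_MAX_KVA
```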

