Ontologies and use case based planning of content delivery

2020 ◽  
Vol 77 ◽  
pp. 03001
Author(s):  
Daniela Burkhardt ◽  
Stefanie Clesle

This paper's objective was the development of an ontology-based Content Delivery Portal (CDP) in combination with use cases. The aim was to find out how content in general can be enriched through an ontology and to investigate the added value ontologies provide. In particular, the authors researched how content can be delivered in a CDP via an ontological context. The research team focused on the knowledge builder of the ontology system manufacturer i-views for modelling the ontology and embedding content. Furthermore, the team planned a CDP and implemented the model in the CDP i-views content. To accomplish this, use cases for users in the context of smart homes were developed to derive which information users want, when they want it, and how access to the desired content should be granted. The implementation in the CDP was based entirely on the ontology developed over the course of the project. Although the research team presumes that part of the implementation could have been realised with metadata from a CMS, the ontology can provide context information that goes beyond the metadata traditionally assigned to individual topics in a CMS.
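
A minimal sketch of the general idea of delivering content via an ontological context: concepts are linked by typed relations, topics are attached to concepts, and a user's current context is resolved through the graph. All concept names, relations, and topics below are illustrative assumptions, not taken from the i-views model described in the paper.

```python
# Illustrative sketch: delivering content through an ontological context.
# Concept names, relations, and topics are invented for demonstration only.

# Ontology: concepts linked by typed relations.
relations = {
    ("Thermostat", "is_part_of"): "HeatingSystem",
    ("Thermostat", "has_task"): "SetTargetTemperature",
    ("HeatingSystem", "has_task"): "ScheduleHeating",
}

# Content topics attached to concepts (instead of flat CMS metadata).
topics = {
    "SetTargetTemperature": "How to set the target temperature",
    "ScheduleHeating": "How to create a heating schedule",
    "HeatingSystem": "Overview of the heating system",
}

def deliver(concept: str) -> list[str]:
    """Collect topics for a concept and everything reachable via its relations."""
    found = []
    if concept in topics:
        found.append(topics[concept])
    for (subject, _relation), target in relations.items():
        if subject == concept:
            found.extend(deliver(target))
    return found

if __name__ == "__main__":
    # A user viewing the 'Thermostat' device also receives content that is
    # only reachable through the ontology, not through per-topic metadata.
    for title in deliver("Thermostat"):
        print(title)
```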

Author(s):  
Julian Endres ◽  
Reinhard C. Bernsteiner ◽  
Christian Ploder

This article provides a use case-based comparison framework for selecting the most suitable database for specific requirements and application domains. It proposes a NoSQL Enterprise Readiness Index, a comparable numeric measure of how well a NoSQL database fits enterprise use cases, calculated from the degree of fulfillment of a set of weighted criteria distilled from extensive research. The goal of the developed NoSQL Enterprise Readiness Index (NERI) is to provide a numeric index value for comparing and benchmarking NoSQL databases in enterprise use cases. NERI is calculated from 43 criteria against which each database product is evaluated. To keep the evaluation reproducible across additional products and consistent among them, a fulfillment matrix is used. For each of the 43 criteria, the fulfillment matrix describes four levels of fulfillment in a mostly qualitative way and thereby guides the evaluator in assigning the right point value.
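
A minimal sketch of how such a weighted index might be computed from a fulfillment matrix. The criteria, weights, and four-level point scale below are illustrative assumptions; the paper's actual 43 criteria and weighting scheme are not reproduced here.

```python
# Illustrative sketch of a weighted readiness index computed from a
# fulfillment matrix. Criteria, weights, and scores are invented examples.

# Each criterion has a weight; each product is scored on a 0-3 scale
# (four levels of fulfillment, described qualitatively in a matrix).
weights = {
    "horizontal_scalability": 3,
    "backup_and_restore": 2,
    "access_control": 3,
    "query_language_maturity": 1,
}

scores = {  # evaluator's chosen fulfillment level per criterion
    "horizontal_scalability": 3,
    "backup_and_restore": 1,
    "access_control": 2,
    "query_language_maturity": 2,
}

MAX_LEVEL = 3  # highest fulfillment level

def readiness_index(weights: dict, scores: dict) -> float:
    """Weighted fulfillment, normalised to 0..100."""
    achieved = sum(weights[c] * scores[c] for c in weights)
    possible = sum(weights[c] * MAX_LEVEL for c in weights)
    return 100.0 * achieved / possible

if __name__ == "__main__":
    print(f"Index: {readiness_index(weights, scores):.1f} / 100")
```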


Electronics ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 592
Author(s):  
Radek Silhavy ◽  
Petr Silhavy ◽  
Zdenka Prokopova

Software size estimation is a nontrivial task, based on data analysis or on an algorithmic estimation approach, and it is important for software project planning and management. In this paper, a new method called Actors and Use Cases Size Estimation is proposed. The new method is based on the number of actors and use cases only. It uses stepwise regression and led to a very significant reduction in errors when estimating the size of software systems compared to Use Case Points-based methods. The proposed method is independent of Use Case Points, which eliminates the effect of inaccurately determined Use Case Points components, because such components are not used in the proposed method.
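
A minimal sketch of a regression model of this general shape, fitting size against actor and use case counts only. The project data and resulting coefficients below are invented for illustration; the paper's actual stepwise procedure and dataset are not reproduced.

```python
# Illustrative sketch: estimating software size from only the number of
# actors and use cases via ordinary least squares. Data are invented.
import numpy as np

# Historical projects: (actors, use_cases) -> delivered size (e.g., SLOC).
actors    = np.array([ 4,  6,  9, 12,  5,  8])
use_cases = np.array([10, 18, 25, 40, 14, 22])
size      = np.array([12_000, 21_000, 30_500, 48_000, 16_500, 26_000])

# Design matrix with an intercept term: size ~ b0 + b1*actors + b2*use_cases
X = np.column_stack([np.ones_like(actors), actors, use_cases])
coeffs, *_ = np.linalg.lstsq(X, size, rcond=None)

def estimate_size(n_actors: int, n_use_cases: int) -> float:
    """Predict size for a new project from its actor and use case counts."""
    return float(coeffs @ np.array([1, n_actors, n_use_cases]))

if __name__ == "__main__":
    print(f"Estimated size: {estimate_size(7, 20):.0f}")
```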


2014 ◽  
Vol 23 (01) ◽  
pp. 27-35 ◽  
Author(s):  
S. de Lusignan ◽  
S-T. Liaw ◽  
C. Kuziemsky ◽  
F. Mold ◽  
P. Krause ◽  
...  

Summary Background: Generally, the benefits and risks of vaccines can be determined from studies carried out as part of regulatory compliance, followed by surveillance of routine data; however, some rarer and longer-term events require new methods. Big data generated by increasingly affordable personalised computing and by pervasive computing devices is growing rapidly, and low-cost, high-volume cloud computing makes processing these data inexpensive. Objective: To describe how big data and related analytical methods might be applied to assess the benefits and risks of vaccines. Method: We reviewed the literature on the use of big data to improve health, applied to generic vaccine use cases that illustrate the benefits and risks of vaccination. We defined a use case as the interaction between a user and an information system to achieve a goal. We used flu vaccination and pre-school childhood immunisation as exemplars. Results: We reviewed three big data use cases relevant to assessing vaccine benefits and risks: (i) big data processing using crowd-sourcing, distributed big data processing, and predictive analytics; (ii) data integration from heterogeneous big data sources, e.g. the increasing range of devices in the "internet of things"; and (iii) real-time monitoring for the direct monitoring of epidemics as well as vaccine effects via social media and other data sources. Conclusions: Big data raises new ethical dilemmas, though its analysis methods can bring complementary real-time capabilities for monitoring epidemics and assessing the vaccine benefit-risk balance.


Author(s):  
Nina Rannharter ◽  
Sarah Teetor

Due to the complex nature of archival images, it is an ongoing challenge to establish a metadata architecture and metadata standards that are easy to navigate and take future requirements into consideration. This contribution presents a use case in the humanities based on the Digital Research Archive for Byzantium (DiFAB) at the University of Vienna. Tracing one monument and its photographic documentation, this paper highlights several issues concerning metadata for images of material culture: the various analog and digital forms of documentation; the available thesauri, including problems of historical geography, multilingualism, and culturally specific terminologies; and the importance of both precise and imprecise dating for cultural historians and their research archives.
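
A minimal sketch of one way the precise/imprecise dating requirement could be modelled, by recording every date as a bounded range. The field names and the example record are illustrative assumptions, not the DiFAB schema.

```python
# Illustrative sketch: recording both precise and imprecise dating for an
# image of material culture. Field names are invented, not DiFAB's.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Dating:
    earliest: int             # earliest possible year
    latest: int               # latest possible year
    note: Optional[str] = None

    @property
    def is_precise(self) -> bool:
        return self.earliest == self.latest

@dataclass
class ImageRecord:
    monument: str
    documentation_form: str   # e.g. "analog photograph", "digital scan"
    dating: Dating

if __name__ == "__main__":
    # An imprecisely dated monument keeps its full plausible range,
    # so searches by period still find it.
    record = ImageRecord(
        monument="Example church",
        documentation_form="digital scan",
        dating=Dating(earliest=1050, latest=1150, note="stylistic dating"),
    )
    print(record.dating.is_precise)  # False: a century-wide range
```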


Author(s):  
Matt Woodburn ◽  
Gabriele Droege ◽  
Sharon Grant ◽  
Quentin Groom ◽  
Janeen Jones ◽  
...  

The utopian vision is of a future where a digital representation of each object in our collections is accessible through the internet and sustainably linked to other digital resources. This is a long-term goal, however, and in the meantime there is an urgent need to share data about our collections at a higher level with a range of stakeholders (Woodburn et al. 2020). To sustainably achieve this, and to aggregate this information across all natural science collections, the data need to be standardised (Johnston and Robinson 2002). To this end, the Biodiversity Information Standards (TDWG) Collection Descriptions (CD) Interest Group has developed a data standard for describing collections, which is approaching formal review for ratification as a new TDWG standard. It proposes 20 classes (Suppl. material 1) and over 100 properties that can be used to describe, categorise, quantify, link and track digital representations of natural science collections, from high-level approximations to detailed breakdowns depending on the purpose of a particular implementation.

The wide range of use cases identified for representing collection description data means that a flexible approach to the standard and the underlying modelling concepts is essential. These are centred around the 'ObjectGroup' (Fig. 1), a class that may represent any group (of any size) of physical collection objects that share one or more common characteristics. This generic definition of the 'collection' in 'collection descriptions' is an important factor in making the standard flexible enough to support the breadth of use cases. For any use case or implementation, only a subset of classes and properties within the standard are likely to be relevant. In some cases, this subset may have little overlap with those selected for other use cases. This additional need for flexibility means that very few classes and properties, representing the core concepts, are proposed to be mandatory. Metrics, facts and narratives are represented in a normalised structure using an extended MeasurementOrFact class, so that these can be user-defined rather than constrained to a set identified by the standard. Finally, rather than a rigid underlying data model as part of the normative standard, documentation will be developed to provide guidance on how the classes in the standard may be related and quantified according to relational, dimensional and graph-like models.

So, in summary, the standard has, by design, been made flexible enough to be used in a number of different ways. The corresponding risk is that it could be used in ways that may not deliver what is needed in terms of outputs, manageability and interoperability with other resources of collection-level or object-level data. To mitigate this, it is key for any new implementer of the standard to establish how it should be used in that particular instance, and to define any necessary constraints within the wider scope of the standard and model. This is the concept of the 'collection description scheme', a profile that defines elements such as: which classes and properties should be included, which should be mandatory, and which should be repeatable; which controlled vocabularies and hierarchies should be used to make the data interoperable; how the collections should be broken down into individual ObjectGroups and interlinked; and how the various classes should be related to each other.
Various factors might influence these decisions, including the types of information that are relevant to the use case, whether quantitative metrics need to be captured and aggregated across collection descriptions, and how many resources can be dedicated to amassing and maintaining the data. This process has particular relevance to the Distributed System of Scientific Collections (DiSSCo) consortium, the design of which incorporates use cases for storing, interlinking and reporting on the collections of its member institutions. These include helping users of the European Loans and Visits System (ELViS) (Islam 2020) to discover specimens for physical and digital loans by providing descriptions and breakdowns of the collections of holding institutions, and monitoring digitisation progress across European collections through a dynamic Collections Digitisation Dashboard. In addition, DiSSCo will be part of a global collections data ecosystem requiring interoperation with other infrastructures such as the GBIF (Global Biodiversity Information Facility) Registry of Scientific Collections, the CETAF (Consortium of European Taxonomic Facilities) Registry of Collections and Index Herbariorum. In this presentation, we will introduce the draft standard and discuss the process of defining new collection description schemes using the standard and data model, focusing on DiSSCo requirements as examples of real-world collection description use cases.
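
A minimal sketch of how an ObjectGroup with user-defined metrics might look when the standard is applied. The property names and example values are illustrative assumptions, not the normative TDWG CD terms.

```python
# Illustrative sketch: a collection description built around an ObjectGroup
# with user-defined metrics. Property names are assumptions, not the
# normative TDWG CD vocabulary.
from dataclasses import dataclass, field

@dataclass
class MeasurementOrFact:
    measurement_type: str    # e.g. "objectCount", "percentDigitised"
    value: float
    unit: str = ""

@dataclass
class ObjectGroup:
    title: str
    institution: str
    common_characteristics: list[str] = field(default_factory=list)
    metrics: list[MeasurementOrFact] = field(default_factory=list)

if __name__ == "__main__":
    group = ObjectGroup(
        title="Pinned Lepidoptera, European drawers",
        institution="Example Museum",
        common_characteristics=["Lepidoptera", "pinned", "Europe"],
        metrics=[
            MeasurementOrFact("objectCount", 120_000, "specimens"),
            MeasurementOrFact("percentDigitised", 12.5, "%"),
        ],
    )
    # Records like this could feed a digitisation dashboard or a registry.
    print(group.title, [(m.measurement_type, m.value) for m in group.metrics])
```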


2022 ◽  
Vol ahead-of-print (ahead-of-print) ◽  

Purpose The purpose of this paper is to demonstrate the linkage of case-based instruction with the enhancement of self-regulated learning of employees. Design/methodology/approach The authors carried out a literature review of SRL and CBL, including a review of the theories of situated learning and constructivism. They then provided a detailed design presentation for using CBL with trainees. Findings The findings of the analysis enable a full, detailed approach to the application of CBL for practitioner use. Originality/value Case-based instruction has not previously been directly linked to the self-regulation of learning.

