Recycling and Upcycling: FENIX Validation on Three Use Cases

Author(s):  
Alvise Bianchin ◽  
George Smyrnakis

Within FENIX, a set of three use cases has been developed in order to test the selected business models in practice. After describing a common data repository that virtually connects all the use cases, and the common step of PCB disassembly, this chapter presents each use case in detail, highlighting the main findings.

Network slicing is widely studied as an essential technological enabler for supporting diverse, use-case-specific services through network virtualization. Industry verticals, consisting of diverse use cases requiring different network resources, are considered key customers for network slices. However, the different approaches to provisioning network slices for industry verticals, and the business models they require, are still largely unexplored and require further work. Focusing on the technical and business aspects of network slicing, this article develops three new business models, enabled by different distributions of business roles and management exposure between business actors. The feasibility of the business models is studied in terms of the costs and benefits to business actors, the mapping to use cases in various industry verticals, and the infrastructure costs of common and dedicated virtualization infrastructures. Finally, a strategic approach and relevant recommendations are proposed for major business actors, national regulatory authorities, and standards developing organizations.


Author(s):  
Narasimha Bolloju ◽  
Steven Alter

Research to date shows significant variability in the success of applying the common technique of use case diagramming for identifying information system scope in terms of use cases performed by actors interacting with an information system or performed automatically by the information system. The current research tests a) the benefits of using a work system snapshot, a basic analytical tool from the work system method, before producing use case diagrams, and b) the additional benefits of enhancing use case diagramming constructs to distinguish between automated activities, activities supported by the information system, and relevant manual activities. In an experiment, teams of student subjects produced substantially better use case diagrams (containing far more use cases, and qualitatively better ones, than those of the control-group teams) when provided with a work system snapshot that summarized a test scenario in terms of work system concepts.
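
As a loose illustration of the distinction between automated, IT-supported, and manual activities described above, the following is a minimal Python sketch; the class names, enum values, and example use cases are illustrative assumptions and are not constructs taken from the study.

```python
from dataclasses import dataclass
from enum import Enum


class ActivityType(Enum):
    AUTOMATED = "performed automatically by the information system"
    IT_SUPPORTED = "performed by an actor with support from the information system"
    MANUAL = "relevant manual activity outside the information system"


@dataclass
class UseCase:
    name: str
    actor: str  # "system" for fully automated use cases
    activity_type: ActivityType


# A few illustrative entries for a hypothetical ordering scenario
use_cases = [
    UseCase("Place order", "Customer", ActivityType.IT_SUPPORTED),
    UseCase("Send order confirmation", "system", ActivityType.AUTOMATED),
    UseCase("Pack shipment", "Warehouse staff", ActivityType.MANUAL),
]

# Group use cases by activity type, mirroring the enhanced diagramming constructs
for activity_type in ActivityType:
    names = [uc.name for uc in use_cases if uc.activity_type is activity_type]
    print(f"{activity_type.name}: {names}")
```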


2009 ◽  
Vol 38 (38) ◽  
pp. 119-130
Author(s):  
Erika Asnina

Use of Business Models within Model Driven Architecture

Model Driven Architecture (MDA) is a framework dedicated to the development of large and complex computer systems. It states and implements the principle of architectural separation of concerns, meaning that a system can be modeled from three different but interrelated viewpoints. The viewpoint discussed in this paper is the Computation Independent one. The MDA specification states that a model showing a system from this viewpoint is a business model. Taking into account the transformations foreseen by MDA, such a model should be useful for automating software development processes. This paper discusses the essence of the Computation Independent Model (CIM) and the place of business models in computation independent modeling. It considers four types of business models, namely SBVR, BPMN, use cases and the Topological Functioning Model (TFM). Business persons use SBVR to define business vocabularies and business rules of the existing and planned domains, BPMN to define business processes of both existing and planned domains, and use cases to define business requirements for the planned domain. The TFM is used to define the functionality of both existing and planned domains. The paper discusses their capability to be used as complete CIMs with formally defined conformity between the planned and existing domains.


Electronics ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 592
Author(s):  
Radek Silhavy ◽  
Petr Silhavy ◽  
Zdenka Prokopova

Software size estimation is a nontrivial task, based on data analysis or on an algorithmic estimation approach, and it is important for software project planning and management. In this paper, a new method called Actors and Use Cases Size Estimation is proposed. The new method relies only on the number of actors and use cases. It uses stepwise regression and leads to a very significant reduction in errors when estimating the size of software systems compared to Use Case Points-based methods. The proposed method is independent of Use Case Points, which eliminates the effect of inaccurately determined Use Case Points components, because such components are not used in the proposed method.
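
As a minimal sketch of the general idea of estimating size from actor and use case counts alone, the snippet below fits a plain linear regression (standing in for the paper's stepwise regression) with scikit-learn; the project data, fitted coefficients, and size units are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative historical projects: [number of actors, number of use cases]
X = np.array([
    [3, 12],
    [5, 20],
    [8, 35],
    [4, 18],
    [10, 48],
])
# Illustrative known sizes of those projects (e.g., in person-hours)
y = np.array([420, 690, 1180, 610, 1590])

# Fit a simple linear model: size ~ b0 + b1*actors + b2*use_cases
model = LinearRegression().fit(X, y)

# Estimate the size of a new project from its actor and use case counts only
new_project = np.array([[6, 25]])
print("Estimated size:", model.predict(new_project)[0])
```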


2011 ◽  
Vol 16 (2) ◽  
pp. 133-152 ◽  
Author(s):  
James A. Brunetti ◽  
Kanti Chakrabarti ◽  
Alina M. Ionescu-Graff ◽  
Ramesh Nagarajan ◽  
Dong Sun

2014 ◽  
Vol 23 (01) ◽  
pp. 27-35 ◽  
Author(s):  
S. de Lusignan ◽  
S-T. Liaw ◽  
C. Kuziemsky ◽  
F. Mold ◽  
P. Krause ◽  
...  

Background: Generally, the benefits and risks of vaccines can be determined from studies carried out as part of regulatory compliance, followed by surveillance of routine data; however, there are some rarer and longer-term events that require new methods. Big data generated by increasingly affordable personalised computing and by pervasive computing devices is growing rapidly, and low-cost, high-volume cloud computing makes processing these data inexpensive. Objective: To describe how big data and related analytical methods might be applied to assess the benefits and risks of vaccines. Method: We reviewed the literature on the use of big data to improve health, applied to generic vaccine use cases that illustrate the benefits and risks of vaccination. We defined a use case as the interaction between a user and an information system to achieve a goal. We used flu vaccination and pre-school childhood immunisation as exemplars. Results: We reviewed three big data use cases relevant to assessing vaccine benefits and risks: (i) big data processing using crowd-sourcing, distributed big data processing, and predictive analytics; (ii) data integration from heterogeneous big data sources, e.g. the increasing range of devices in the "internet of things"; and (iii) real-time monitoring of epidemics and vaccine effects via social media and other data sources. Conclusions: Big data raises new ethical dilemmas, though its analysis methods can bring complementary real-time capabilities for monitoring epidemics and assessing the vaccine benefit-risk balance.
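
As a loose illustration of the real-time monitoring use case (iii), the sketch below flags days on which vaccine-related post counts exceed a simple moving-baseline threshold; the counts, window size, and threshold rule are illustrative assumptions and do not reproduce any system reviewed in the paper.

```python
from statistics import mean, stdev

# Illustrative daily counts of vaccine-related social media posts
daily_counts = [120, 135, 128, 140, 132, 138, 410, 455, 150, 142]

WINDOW = 5        # days used for the moving baseline
THRESHOLD = 3.0   # flag a day exceeding baseline mean + 3 standard deviations

for day in range(WINDOW, len(daily_counts)):
    baseline = daily_counts[day - WINDOW:day]
    limit = mean(baseline) + THRESHOLD * stdev(baseline)
    if daily_counts[day] > limit:
        print(f"Day {day}: {daily_counts[day]} posts exceeds alert limit {limit:.0f}")
```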


2018 ◽  
Vol 10 (12) ◽  
pp. 4453 ◽  
Author(s):  
Maria J. Pouri ◽  
Lorenz M. Hilty

Human society is increasingly influencing the planet and its environmental systems. The existing environmental problems indicate that current production and consumption patterns are not sustainable. Despite the remarkable opportunities brought about by Information and Communication Technology (ICT) to improve the resource efficiency of production and consumption processes, the overall trend is still not heading towards sustainability. By promoting the utilization of available and underused resources, the ICT-enabled sharing economy has transformed, and in some cases disrupted, the prevailing patterns of production and consumption, raising questions about the opportunities and risks of shared consumption modes for sustainability. The present article attempts to conceptualize the sustainability implications of today’s sharing economy. We begin by presenting a definition of the digital sharing economy that embraces the common features of its various forms. Based on our proposed definition, we discuss the theoretical and practical implications of the digital sharing economy as a use case of ICT. The analysis is deepened by applying the life-cycle/enabling/structural impacts model of ICT effects to this use case. As a result, we show the various positive and negative potentials of digital sharing for sustainability at different system levels. While it is too early to project well-founded scenarios describing the sustainability status of digital sharing, the implications discussed in our work may help outline future research and policies in this area.


Author(s):  
Peter McCarthy-Ward ◽  
Andy Valdar ◽  
Stuart Newstead ◽  
Stuart Revell

Author(s):  
Matt Woodburn ◽  
Gabriele Droege ◽  
Sharon Grant ◽  
Quentin Groom ◽  
Janeen Jones ◽  
...  

The utopian vision is of a future where a digital representation of each object in our collections is accessible through the internet and sustainably linked to other digital resources. This is a long-term goal, however, and in the meantime there is an urgent need to share data about our collections at a higher level with a range of stakeholders (Woodburn et al. 2020). To sustainably achieve this, and to aggregate this information across all natural science collections, the data need to be standardised (Johnston and Robinson 2002). To this end, the Biodiversity Information Standards (TDWG) Collection Descriptions (CD) Interest Group has developed a data standard for describing collections, which is approaching formal review for ratification as a new TDWG standard. It proposes 20 classes (Suppl. material 1) and over 100 properties that can be used to describe, categorise, quantify, link and track digital representations of natural science collections, from high-level approximations to detailed breakdowns depending on the purpose of a particular implementation.

The wide range of use cases identified for representing collection description data means that a flexible approach to the standard and the underlying modelling concepts is essential. These are centered around the ‘ObjectGroup’ (Fig. 1), a class that may represent any group (of any size) of physical collection objects which have one or more common characteristics. This generic definition of the ‘collection’ in ‘collection descriptions’ is an important factor in making the standard flexible enough to support the breadth of use cases. For any use case or implementation, only a subset of classes and properties within the standard is likely to be relevant. In some cases, this subset may have little overlap with those selected for other use cases. This additional need for flexibility means that very few classes and properties, representing the core concepts, are proposed to be mandatory. Metrics, facts and narratives are represented in a normalised structure using an extended MeasurementOrFact class, so that these can be user-defined rather than constrained to a set identified by the standard. Finally, rather than a rigid underlying data model as part of the normative standard, documentation will be developed to provide guidance on how the classes in the standard may be related and quantified according to relational, dimensional and graph-like models.

In summary, the standard has, by design, been made flexible enough to be used in a number of different ways. The corresponding risk is that it could be used in ways that may not deliver what is needed in terms of outputs, manageability and interoperability with other resources of collection-level or object-level data. To mitigate this, it is key for any new implementer of the standard to establish how it should be used in that particular instance, and to define any necessary constraints within the wider scope of the standard and model. This is the concept of the ‘collection description scheme’, a profile that defines elements such as: which classes and properties should be included, which should be mandatory, and which should be repeatable; which controlled vocabularies and hierarchies should be used to make the data interoperable; how the collections should be broken down into individual ObjectGroups and interlinked; and how the various classes should be related to each other.

Various factors might influence these decisions, including the types of information that are relevant to the use case, whether quantitative metrics need to be captured and aggregated across collection descriptions, and how many resources can be dedicated to amassing and maintaining the data. This process has particular relevance to the Distributed System of Scientific Collections (DiSSCo) consortium, the design of which incorporates use cases for storing, interlinking and reporting on the collections of its member institutions. These include helping users of the European Loans and Visits System (ELViS) (Islam 2020) to discover specimens for physical and digital loans by providing descriptions and breakdowns of the collections of holding institutions, and monitoring digitisation progress across European collections through a dynamic Collections Digitisation Dashboard. In addition, DiSSCo will be part of a global collections data ecosystem requiring interoperation with other infrastructures such as the GBIF (Global Biodiversity Information Facility) Registry of Scientific Collections, the CETAF (Consortium of European Taxonomic Facilities) Registry of Collections and Index Herbariorum. In this presentation, we will introduce the draft standard, discuss the process of defining new collection description schemes using the standard and data model, and focus on DiSSCo requirements as examples of real-world collection description use cases.
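
To make the ObjectGroup and MeasurementOrFact concepts above more concrete, here is a minimal sketch of how one collection description record might be represented as plain Python dictionaries; the property names and values are illustrative assumptions drawn from the description above, not normative terms of the draft standard.

```python
# Illustrative sketch of a collection description record built around an
# ObjectGroup, with metrics expressed as user-defined MeasurementOrFact
# entries. Property names here are assumptions, not normative CD terms.

object_group = {
    "type": "ObjectGroup",
    "title": "Pinned Lepidoptera, European drawers",
    "description": "Dry pinned butterfly and moth specimens collected in Europe",
    "commonCharacteristics": {
        "preservationMethod": "pinned",
        "taxonomicScope": "Lepidoptera",
        "geographicScope": "Europe",
    },
    "measurementsOrFacts": [
        # Quantitative metric: approximate number of objects in the group
        {"type": "MeasurementOrFact",
         "measurementType": "objectCount",
         "measurementValue": 120000,
         "measurementUnit": "specimens"},
        # Digitisation progress, as a Collections Digitisation Dashboard might use
        {"type": "MeasurementOrFact",
         "measurementType": "digitisedProportion",
         "measurementValue": 0.15,
         "measurementUnit": "fraction"},
    ],
}

# A simple aggregation across ObjectGroups, as a dashboard might perform
groups = [object_group]
total = sum(m["measurementValue"]
            for g in groups
            for m in g["measurementsOrFacts"]
            if m["measurementType"] == "objectCount")
print("Total objects described:", total)
```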

