scone – A Requirements Management Tool for the Specification and Variability-Based Analysis of Product Lines

Author(s):  
Sara Allabar ◽  
Christian Bettinger ◽  
Michael Müllen ◽  
Georg Rock

Nowadays, industrial products as well as software applications are expected to be tailored to the user’s needs in an increasingly distinct manner. This often makes it necessary to design a vast number of customized variants, which leads to complex and error-prone analysis and development processes. Generally, requirements engineering is considered to be one of the most significant activities in software and system development. Variant management has proven to play an important role in handling the complexity arising from the mass-customization of products. However, there are only a few applications, often rather complex to use, that allow adding variance information directly to requirements. Especially in the case of small and medium-sized enterprises, approaches to meeting this challenge often result in isolated solutions that are not driven by state-of-the-art analysis methods and cannot cope with future requirements. This paper introduces a lightweight requirements management tool called scone, which will be embedded into an overall variability management methodology. scone enables the user to create and manage requirement specifications and augment them with variability information. Based on this specification, the requirements can be analyzed formally with respect to their variability using the variability management tool Glencoe. scone was created as a single-page web application to eliminate the need for installation and allow it to run on many devices, while offering the experience of working with a native application rather than a website. Both tools are designed to provide a proof of concept for the seamless integration of variability information within a system development process, as well as to show how variability can be handled in an easy-to-use way from the very beginning of this process.
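The kind of formal variability analysis delegated to a tool like Glencoe can be illustrated with a deliberately tiny sketch. The feature names and constraints below are invented for illustration, and the brute-force enumeration stands in for the symbolic (e.g. SAT- or BDD-based) analyses real variability tools use:

```python
from itertools import product

# Toy feature model (hypothetical features): requirements are tagged with
# features, and constraints restrict which feature combinations are valid.
features = ["base", "export_pdf", "cloud_sync"]

def valid(cfg):
    # Constraints: base is mandatory; cloud_sync requires export_pdf.
    return cfg["base"] and (not cfg["cloud_sync"] or cfg["export_pdf"])

# Brute-force enumeration of all configurations; real variability tools
# use symbolic analyses rather than enumeration.
configs = [dict(zip(features, bits))
           for bits in product([False, True], repeat=len(features))]
valid_configs = [c for c in configs if valid(c)]
print(len(valid_configs))  # number of valid product variants
```

Checks like "how many valid variants exist" or "is any requirement unreachable under the constraints" are the sort of question such an analysis answers over a requirement specification annotated with variability information.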


2021 ◽  
pp. 193229682098557
Author(s):  
Alysha M. De Livera ◽  
Jonathan E. Shaw ◽  
Neale Cohen ◽  
Anne Reutens ◽  
Agus Salim

Motivation: Continuous glucose monitoring (CGM) systems are an essential part of novel technology in diabetes management and care. CGM studies have become increasingly popular among researchers, healthcare professionals, and people with diabetes due to the large amount of useful information that can be collected using CGM systems. The analysis of the data from these studies for research purposes, however, remains a challenge due to the characteristics and large volume of the data. Results: Currently, there are no publicly available interactive software applications that can perform statistical analyses and visualization of data from CGM studies. With the rapidly increasing popularity of CGM studies, such an application is becoming necessary for anyone who works with these large CGM datasets, in particular for those with little background in programming or statistics. CGMStatsAnalyser is a publicly available, user-friendly, web-based application that can be used to interactively visualize, summarize, and statistically analyze voluminous and complex CGM datasets together with subject characteristics.
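As an illustration of the kind of summary statistic such an application computes, the following sketch derives time-in-range (TIR), a standard CGM summary metric, from a hypothetical glucose trace. The readings are invented; the 70–180 mg/dL interval is the commonly used target range:

```python
# Hypothetical CGM trace: glucose readings in mg/dL at 5-minute intervals.
readings = [95, 110, 150, 185, 200, 170, 140, 120, 65, 80]

# Time in range (TIR): the fraction of readings within the commonly used
# 70-180 mg/dL target range.
tir = sum(70 <= g <= 180 for g in readings) / len(readings)
print(f"TIR: {tir:.0%}")
```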



2021 ◽  
pp. 016555152199863
Author(s):  
Ismael Vázquez ◽  
María Novo-Lourés ◽  
Reyes Pavón ◽  
Rosalía Laza ◽  
José Ramón Méndez ◽  
...  

Current research has evolved in such a way that scientists must not only adequately describe the algorithms they introduce and the results of their application, but also ensure that the results can be reproduced and compared with those obtained through other approaches. In this context, public data sets (sometimes shared through repositories) are one of the most important elements for the development of experimental protocols and test benches. This study has analysed a significant number of CS/ML (Computer Science/Machine Learning) research data repositories and data sets and detected some limitations that hamper their utility. In particular, we identify and discuss the following desirable functionalities for repositories: (1) building customised data sets for specific research tasks, (2) facilitating the comparison of different techniques using dissimilar pre-processing methods, (3) ensuring the availability of software applications to reproduce the pre-processing steps without using the repository functionalities and (4) providing protection mechanisms for licensing issues and user rights. To demonstrate the introduced functionality, we created the STRep (Spam Text Repository) web application, which implements our recommendations adapted to the field of spam text repositories. In addition, we launched an instance of STRep at the URL https://rdata.4spam.group to facilitate understanding of this study.
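Functionality (3), reproducing the pre-processing steps outside the repository, can be sketched as a declarative pipeline description that a consumer replays with its own implementations. The step names, the JSON format, and the sample text below are invented for illustration, not STRep's actual format:

```python
import json
import re

# Hypothetical declarative description of a spam-text pre-processing
# pipeline, exported alongside a data set so the steps can be replayed
# without the repository itself.
spec = json.dumps([
    {"step": "lowercase"},
    {"step": "strip_html"},
    {"step": "tokenize", "params": {"pattern": r"\w+"}},
])

def replay(text, spec):
    # A consumer re-applies the recorded steps with its own implementations.
    tokens = None
    for op in json.loads(spec):
        if op["step"] == "lowercase":
            text = text.lower()
        elif op["step"] == "strip_html":
            text = re.sub(r"<[^>]+>", " ", text)
        elif op["step"] == "tokenize":
            tokens = re.findall(op["params"]["pattern"], text)
    return tokens

print(replay("<b>FREE</b> prize NOW", spec))
```

Because the pipeline is data rather than code, two research groups can apply identical pre-processing to the same corpus even if they use different software stacks.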



2021 ◽  
Vol 50 (3) ◽  
pp. 424-442
Author(s):  
Atif Ali ◽  
Yaser Hafeez ◽  
Sadia Ali ◽  
Shariq Hussain ◽  
Shunkun Yang ◽  
...  

In current application development strategies, families of products are developed with personalized configurations to increase stakeholders’ satisfaction. Product lines have the ability to address several requirements due to their reusability and configuration properties. The structuring and prioritizing of configuration requirements facilitates the development process, but it also increases conflicts and inadequacies. This increases human effort, reduces user satisfaction, and fails to accommodate the continuous evolution of configuration requirements. To address these challenges, we propose a framework for managing the prioritization process that considers heterogeneous stakeholders’ priorities semantically. Features are analyzed, and configuration priorities are mined using a data mining method based on frequently accessed and changed configurations. Firstly, priority is identified from the heterogeneous stakeholders’ perspectives using three factors: functional, experiential, and expressive values. Secondly, configurations are mined based on the frequency with which they are accessed or changed, to identify new priorities that reduce failures and errors in configuration interactions. We evaluated the performance of the proposed framework in an experimental study, comparing it with analytical hierarchical prioritization (AHP) and clustering. The results indicate a significant increase (more than 90 percent) in the precision and recall values of the proposed framework for all selected cases.
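The idea of combining stakeholder-derived value factors with access/change frequency can be sketched as follows. All feature names, weights, and the scoring formula are invented for illustration; this does not reproduce the paper's actual mining method:

```python
from collections import Counter

# Hypothetical access/change log: each entry names the configuration
# feature that was accessed or changed.
log = ["gps", "bluetooth", "gps", "camera", "gps", "bluetooth"]

# Hypothetical per-feature stakeholder values:
# (functional, experiential, expressive), each in [0, 1].
weights = {
    "gps":       (0.9, 0.6, 0.3),
    "bluetooth": (0.5, 0.8, 0.4),
    "camera":    (0.4, 0.3, 0.9),
}

freq = Counter(log)
total = sum(freq.values())

def priority(feature):
    # Illustrative score: mean stakeholder value scaled by the feature's
    # relative access/change frequency.
    f, e, x = weights[feature]
    return (f + e + x) / 3 * freq[feature] / total

ranked = sorted(weights, key=priority, reverse=True)
print(ranked)
```

Frequently touched configurations rise in the ranking even when their static stakeholder value is moderate, which is the intuition behind re-prioritizing from mined usage data.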



2021 ◽  
Vol 7 (1) ◽  
pp. 16-19
Author(s):  
Owes Khan ◽  
Geri Shahini ◽  
Wolfram Hardt

Automotive technologies are increasingly becoming digital. Highly autonomous driving together with digital E/E control mechanisms comprises thousands of software applications, known as software components. Together with industry requirements and rigorous software development processes, the mapping of components into a software pool becomes very difficult. This article analyses and discusses the possibilities of integrating machine learning approaches into our previously introduced concept of mapping software components through a common software pool.



2018 ◽  
Author(s):  
Janet C. Siebert ◽  
Charles Preston Neff ◽  
Jennifer M. Schneider ◽  
EmiLie H. Regner ◽  
Neha Ohri ◽  
...  

Abstract
Background: Relationships between specific microbes and proper immune system development, composition, and function have been reported in a number of studies. However, researchers have discovered only a fraction of the likely relationships. High-dimensional “omic” methodologies such as 16S ribosomal RNA (rRNA) sequencing and time-of-flight mass cytometry (CyTOF) immunophenotyping generate data that support the generation of hypotheses, with the potential to identify additional relationships at a level of granularity ripe for further experimentation. Pairwise linear regression between microbial and host immune features is one approach for quantifying relationships between “omes”, and the differences in these relationships across study cohorts or arms. This approach yields a top table of candidate results. However, the top table alone lacks the detail that domain experts need to vet candidate results for follow-up experiments.
Results: To support this vetting, we developed VOLARE (Visualization Of LineAr Regression Elements), a web application that integrates a searchable top table, small in-line graphs illustrating the fitted models, a network summarizing the top table, and on-demand detailed regression plots showing full sample-level detail. We applied VOLARE to three case studies: microbiome:cytokine data from fecal samples in HIV, microbiome:cytokine data in inflammatory bowel disease and spondyloarthritis, and microbiome:immune cell data from gut biopsies in HIV. We present both patient-specific phenomena and relationships that differ by disease state. We also analyzed interaction data from system logs to characterize usage scenarios. This log analysis revealed that, in using VOLARE, domain experts frequently generated detailed regression plots, suggesting that this detail aids the vetting of results.
Conclusions: Systematically integrating microbe:immune cell readouts through pairwise linear regressions and presenting the top table in an interactive environment supports the vetting of results for scientific relevance. VOLARE allows domain experts to control the analysis of their results, screening dozens of candidate relationships with ease. This interactive environment transcends the limitations of a static top table.



Author(s):  
Kerstin Schmidt ◽  
Grit Walther ◽  
Thomas S. Spengler ◽  
Rolf Ernst


Author(s):  
Brahim Hamid ◽  
Yulin (Huaxi) Zhang ◽  
Jacob Geisel ◽  
David Gonzalez

The conception and design of Resource Constrained Embedded Systems (RCES) is an inherently complex endeavor. Non-functional requirements for security and dependability exacerbate this complexity. Model-Driven Engineering (MDE) is a promising approach for the design of trusted systems, as it bridges the gap between design issues and implementation concerns. The purpose of process models is to document and communicate processes, as well as to reuse them; thus, processes can be better taught and executed. However, most useful metamodels are activity-oriented, and the required concepts of the safety lifecycle, such as validation, cannot be easily modeled. In this paper, the authors propose a safety-oriented process metamodel that extends an existing framework to support all safety control requirements. A new safety lifecycle development process technique has been built to ease its use when building system/software applications with safety support. As a proof of concept, the feasibility of the approach has been evaluated with an example: an engineering process for building industrial control systems with safety requirements for software and hardware resources. A prototype implementation of the approach is provided and applied to the example of industrial control systems in the railway domain.
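The gap the paper addresses, activity-oriented metamodels lacking first-class safety-lifecycle concepts such as validation, can be sketched in miniature. The class and attribute names below are illustrative, not the paper's actual metamodel:

```python
from dataclasses import dataclass, field

# Minimal sketch of an activity-oriented process metamodel, extended with
# an explicit validation concept so the safety lifecycle becomes visible
# in the process model itself.
@dataclass
class Activity:
    name: str
    sub_activities: list = field(default_factory=list)

@dataclass
class Validation(Activity):
    # A validation activity records the safety requirement it discharges.
    safety_requirement: str = ""

process = Activity("Design railway controller", [
    Activity("Model interlocking logic"),
    Validation("Verify fail-safe behaviour",
               safety_requirement="SIL-4 shutdown"),
])

# With validation as a first-class concept, a process lacking a Validation
# for some safety requirement can be detected by inspecting the model.
validations = [a for a in process.sub_activities if isinstance(a, Validation)]
print([v.safety_requirement for v in validations])
```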



Author(s):  
Alena Buchalcevova

The article presents the ISO/IEC 29110 Profile Implementation Methodology, which was developed to manage the consistent implementation of individual ISO/IEC 29110 Profiles in the open-source content management tool Eclipse Process Framework Composer. Such an implementation enables effective management of the standard and its publication in the form of a web application that can be used easily and efficiently. This methodology represents an example of the usable outputs of the ISO/IEC 29110 standard being utilized in education and research in the Czech Republic. Its main elements, described in this article, can also be used for implementation purposes in other countries. First, the methodology structure is presented, followed by its individual elements, i.e. General Principles, Profile Structure, Profile Element Mapping, Implementation Conventions, EPF Composer Usage Guidelines, and Implementation Process. The evaluation of this methodology was performed during the implementation of the Entry Profile.


