Mind Your Outcomes: The ∆Q approach to Quality-Centric Systems Development and Its Application to a Blockchain Case-Study

Author(s):  
Seyed Hossein HAERI ◽  
Peter Thompson ◽  
Neil Davies ◽  
Peter Van Roy ◽  
Kevin Hammond ◽  
...  

This paper directly addresses a critical issue that affects the development of many complex distributed software systems: how to establish quickly, cheaply and reliably whether they will deliver their intended performance before expending significant time, effort and money on detailed design and implementation. We describe ΔQSD, a novel metrics-based and quality-centric paradigm that uses formalised outcome diagrams as a performance blueprint of the system, exploring the performance consequences of design decisions. The ΔQSD paradigm is both effective and generic: it allows values from various sources to be combined in a rigorous way, so that approximate results can be obtained quickly and subsequently refined. ΔQSD has been successfully used by Predictable Network Solutions for consultancy on large-scale applications in a number of industries, including telecommunications, avionics, and space and defence, resulting in cumulative savings measured in billions of dollars. The paper outlines the ΔQSD paradigm, describes its formal underpinnings, and illustrates its use via a topical real-world example taken from the blockchain/cryptocurrency domain, where application of this approach enabled an advanced distributed proof-of-stake system to meet challenging throughput targets.
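The sequential composition that underlies outcome diagrams can be illustrated with a small sketch. Assuming, as a simplification of the paper's formalism, that a ΔQ is modelled as a discrete delay distribution paired with a success probability (the names and the representation here are illustrative, not taken from ΔQSD), composing two outcomes in sequence convolves the delays and multiplies the success probabilities:

```python
def compose_seq(dq_a, dq_b):
    """Sequentially compose two ΔQs, each given as (delay -> probability dict,
    success probability). Delays add (convolution); successes multiply."""
    pmf_a, ok_a = dq_a
    pmf_b, ok_b = dq_b
    out = {}
    for da, pa in pmf_a.items():
        for db, pb in pmf_b.items():
            out[da + db] = out.get(da + db, 0.0) + pa * pb
    return out, ok_a * ok_b

# Example: a network hop completes within 1 time unit with probability 0.9,
# within 2 with probability 0.1, and succeeds at all with probability 0.99.
hop = ({1: 0.9, 2: 0.1}, 0.99)

# Two hops in sequence:
pmf, ok = compose_seq(hop, hop)
print(pmf)  # delay 2 with prob ≈0.81, delay 3 with ≈0.18, delay 4 with ≈0.01
print(round(ok, 4))  # 0.9801
```

A refinement step in this style would replace a coarse single-hop estimate with a measured distribution while leaving the composition unchanged, which is the sense in which approximate results can be obtained quickly and subsequently refined.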



Water ◽  
2021 ◽  
Vol 13 (6) ◽  
pp. 818
Author(s):  
Markus Reisenbüchler ◽  
Minh Duc Bui ◽  
Peter Rutschmann

Reservoir sedimentation is a critical issue worldwide, resulting in reduced storage volumes and, thus, reservoir efficiency. Moreover, sedimentation can also increase the flood risk at related facilities. In some cases, drawdown flushing of the reservoir is an appropriate management tool. However, there are various options as to how and when to perform such flushing, which should be optimized in order to maximize its efficiency and effectiveness. This paper proposes an innovative concept, based on an artificial neural network (ANN), to predict the volume of sediment flushed from the reservoir given distinct input parameters. The results obtained from a real-world study area indicate that there is a close correlation between the inputs—including peak discharge and duration of flushing—and the output (i.e., the volume of sediment). The developed ANN can readily be applied at the real-world study site, as a decision-support system for hydropower operators.
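The abstract's mapping from flushing parameters to flushed sediment volume can be sketched as a small feed-forward network. This is not the authors' model: the architecture, the synthetic training data, and the assumed relationship (volume growing with peak discharge and flushing duration) are all illustrative stand-ins for the study-site records.

```python
import math
import random

random.seed(0)

# Synthetic, normalised records: (peak discharge, duration) -> sediment volume.
# The linear-plus-noise target is purely illustrative.
data = [((q, d), 0.4 * q + 0.2 * d + random.gauss(0, 0.01))
        for q in (0.1, 0.3, 0.5, 0.7, 0.9) for d in (0.2, 0.5, 0.8)]

H = 4  # hidden units
w1 = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]
b2 = 0.0

def forward(x):
    """One hidden tanh layer, linear output."""
    h = [math.tanh(sum(w * xi for w, xi in zip(ws, x)) + b)
         for ws, b in zip(w1, b1)]
    return h, sum(w * hi for w, hi in zip(w2, h)) + b2

# Stochastic gradient descent on squared error.
lr = 0.1
for _ in range(2000):
    for x, y in data:
        h, yhat = forward(x)
        err = yhat - y
        for j in range(H):
            g = err * w2[j] * (1 - h[j] ** 2)  # backprop through tanh
            for i in range(2):
                w1[j][i] -= lr * g * x[i]
            b1[j] -= lr * g
            w2[j] -= lr * err * h[j]
        b2 -= lr * err

_, pred = forward((0.5, 0.5))
print(round(pred, 2))  # should land near 0.3 for this synthetic target
```

Used as a decision-support tool, such a model would be queried with candidate flushing schedules to compare their predicted sediment yields before committing to a drawdown.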


Author(s):  
Yutaka Watanobe ◽  
Nikolay Mirenkov

Programming in pictures is an approach in which pictures and moving pictures are used as super-characters to represent the features of computational algorithms and data structures, as well as to explain the models and application methods involved. *AIDA is a computer language that supports programming in pictures. This language and its environment have been developed and promoted as a testbed for various innovations in information technology (IT) research and implementation, including exploring the compactness of programs and their adaptive software systems, and obtaining a better understanding of information resources. In this paper, new features of the environment and methods for their implementation are presented. They are considered within a case study of a large-scale module of a nuclear safety analysis system, to demonstrate that the *AIDA language is appropriate for developing efficient code for serious applications and for providing support, based on folding/unfolding techniques, that enhances the readability, maintainability and algorithmic transparency of programs. Features of this support and the code efficiency are presented through the results of a computational comparison with a FORTRAN equivalent.


2009 ◽  
pp. 468-483
Author(s):  
Efrem Mallach

The case study describes a small consulting company's experience in the design and implementation of a database and associated information retrieval system. The company's choices are explained in the context of the firm's needs and constraints. Issues associated with development methods are discussed, along with problems that arose from not following proper development disciplines.


2013 ◽  
Vol 86 (11) ◽  
pp. 2797-2821 ◽  
Author(s):  
J. Pernstål ◽  
R. Feldt ◽  
T. Gorschek

Author(s):  
P. K. KAPUR ◽  
ANU. G. AGGARWAL ◽  
KANICA KAPOOR ◽  
GURJEET KAUR

The demand for complex and large-scale software systems is increasing rapidly. The development of high-quality, reliable and low-cost software has therefore become a critical issue in the enormous worldwide computer technology market. Such large, complex software is built by integrating small, independent modules, which are tested independently during the module testing phase of development. This process consumes testing resources, such as time and testing personnel, that are not infinitely large. Consequently, it is an important matter for the project manager to allocate these limited resources optimally among the modules during the testing process. Another major concern in software development is cost: it is, in fact, a profit to the management if the cost of the software is reduced while the customer requirements are still met. In this paper, we investigate an optimal resource allocation problem: minimizing the cost of software testing under a limited amount of available resources, given a reliability constraint. To solve the optimization problem we present a genetic algorithm, which stands as a powerful tool for solving search and optimization problems. The key advantage of using a genetic algorithm in the field of software reliability is its capability to give near-optimal results by learning from historical data. A numerical example is discussed to illustrate the applicability of the approach.
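The flavour of such a genetic algorithm can be shown with a minimal sketch. This is not the paper's formulation: the module count, the exponential reliability-growth curve R_i(w) = 1 − exp(−a_i·w), the per-module reliability target, and the penalty-based fitness are all assumed for illustration.

```python
import math
import random

random.seed(1)

A = [1.0, 2.0, 4.0]   # illustrative fault-detection rates per module
BUDGET = 3.0          # total testing resource to distribute
R0 = 0.6              # assumed per-module reliability target

def reliab(a, w):
    """Exponential reliability-growth curve (an assumption, not the paper's)."""
    return 1 - math.exp(-a * w)

def fitness(alloc):
    """Total reliability, penalised when any module misses the target R0."""
    rs = [reliab(a, w) for a, w in zip(A, alloc)]
    penalty = sum(max(0.0, R0 - r) for r in rs)
    return sum(rs) - 10 * penalty

def normalise(alloc):
    """Rescale an allocation so it spends exactly the budget."""
    s = sum(alloc)
    return [w * BUDGET / s for w in alloc]

# Elitist GA: keep the best 10, breed 20 children by blend crossover + mutation.
pop = [normalise([random.random() for _ in A]) for _ in range(30)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]
    children = []
    while len(children) < 20:
        p, q = random.sample(survivors, 2)
        child = [(x + y) / 2 + random.gauss(0, 0.05) for x, y in zip(p, q)]
        children.append(normalise([max(1e-6, w) for w in child]))
    pop = survivors + children

best = max(pop, key=fitness)
# Modules with lower detection rates tend to receive more testing effort.
print([round(w, 2) for w in best])
```

Chromosomes here are resource vectors rather than bit strings; renormalising after crossover and mutation keeps every candidate feasible with respect to the budget constraint, so the GA searches only the constraint surface.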


2016 ◽  
Vol 2 ◽  
pp. e66 ◽  
Author(s):  
Johannes M. Schleicher ◽  
Michael Vögler ◽  
Christian Inzinger ◽  
Schahram Dustdar

Container-based application deployments have received significant attention in recent years. Operating-system virtualization based on containers has become a popular mechanism for deploying, operating, and managing complex, large-scale software systems. Packaging application components into self-contained artifacts has brought substantial flexibility to developers and operations teams alike. However, this flexibility comes at a price: practitioners need to respect numerous constraints, ranging from security and compliance requirements to specific regulatory conditions. Fulfilling these requirements is especially challenging in specialized domains with large numbers of stakeholders. Moreover, the rapidly growing number of container images to be managed, due to the introduction of new or updated applications and their components, leads to significant challenges for container management and adaptation. In this paper, we introduce Smart Brix, a framework for the continuous evolution of container application deployments that tackles these challenges. Smart Brix integrates and unifies concepts of continuous integration, runtime monitoring, and operational analytics. Furthermore, it allows practitioners to define generic analytics and compensation pipelines composed of self-assembling processing components to autonomously validate and verify containers to be deployed. We illustrate the feasibility of our approach by evaluating the framework in a case study from the smart city domain, showing that Smart Brix is horizontally scalable and that the runtime of the implemented analysis and compensation pipelines scales linearly with the number of container application packages.


Author(s):  
Matthew Woodruff ◽  
Timothy W. Simpson

Problem discovery is messy. It involves many mistakes, which may be regarded as failures to address a design problem correctly. Mistakes, however, are inevitable, and misunderstanding the problems we are working on is the natural, default state of affairs. Only by working through a series of mistakes can we learn important things about our design problems. This study provides a case study in Many-Objective Visual Analytics (MOVA), applied to the problem of problem discovery itself. It demonstrates the process of continually correcting and improving a problem formulation while visualizing its optimization results. This process produces a new, clearer understanding of the problem and puts the designer in a position to proceed with more detailed design decisions.

