Dependencies among Architectural Views Got from Software Requirements Based on a Formal Model

2014 ◽  
Vol 16 (1) ◽  
pp. 5-12
Author(s):  
Janis Osis ◽  
Erika Asnina ◽  
Uldis Donins ◽  
Vicente García-Díaz

Abstract A system architect has software requirements and some unspecified knowledge about a problem domain (e.g., an enterprise) as source information for assessing and evaluating possible solutions and reaching the target point: a preliminary software design. The deciding factor is the architect’s experience and expertise in the problem domain (“AS-IS”). The proposed approach is intended to assist a system architect in making an appropriate decision on the solution (“TO-BE”). It is based on a formal mathematical model, the Topological Functioning Model (TFM). A compliant TFM can be transformed into software architectural views. The paper demonstrates and discusses tracing dependency links from the requirements to and between the architectural views.
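
The paper itself gives no code; the sketch below is only a loose illustration of the idea of tracing dependency links, under the assumption that a TFM can be represented as a graph of functional features mapped to architectural elements. All names here (FunctionalFeature, ArchitecturalElement, trace) are hypothetical and are not the authors' tooling or notation.

```python
# Hypothetical sketch: a TFM as a graph of functional features, with
# traceability links from requirements to derived architectural elements.
from dataclasses import dataclass, field

@dataclass
class FunctionalFeature:
    name: str
    requirement_ids: set[str] = field(default_factory=set)   # requirements the feature realises
    successors: list["FunctionalFeature"] = field(default_factory=list)  # cause-effect relations

@dataclass
class ArchitecturalElement:
    name: str
    view: str                                                 # e.g. "logical", "process", "deployment"
    source_features: list[FunctionalFeature] = field(default_factory=list)

def trace(requirement_id: str, elements: list[ArchitecturalElement]) -> list[str]:
    """Return the architectural elements that depend on a given requirement."""
    return [e.name for e in elements
            if any(requirement_id in f.requirement_ids for f in e.source_features)]

# Usage: a feature realising requirement R1 is mapped to a logical-view component.
register = FunctionalFeature("register reader", {"R1"})
component = ArchitecturalElement("ReaderRegistry", "logical", [register])
print(trace("R1", [component]))   # ['ReaderRegistry']
```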


2009 ◽  
pp. 635-647 ◽  
Author(s):  
Yingxu Wang ◽  
Jian Huang

Software patterns are recognized as an ideal documentation of expert knowledge in software design and development. However, their formal models and semantics have not been generalized or matured. Traditional UML specifications and related formalization efforts cannot capture the essence of generic patterns precisely and understandably. A generic mathematical model of patterns is presented in this article using real-time process algebra (RTPA). The formal model of patterns is more readable and highly generic; it can be used as a meta-model to denote any design pattern deductively and can be translated into code in programming languages by supporting tools. This work reveals that a pattern is a highly complicated and dynamic structure for software design encapsulation, because of its complex and flexible internal associations between multiple abstract classes and instantiations. The generic model of patterns is not only applicable to the description and comprehension of existing patterns, but also useful for the identification and formalization of future patterns.
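
RTPA is a process algebra, not executable code; the sketch below merely illustrates, in ordinary Python, what a generic and instantiable pattern meta-model can look like. The names (PatternMetaModel, Association, instantiate) are hypothetical and do not reproduce Wang and Huang's RTPA notation.

```python
# Hypothetical sketch of a pattern meta-model: a pattern is a set of abstract
# roles plus associations between them, instantiated by binding concrete
# classes to the roles. This is an illustration, not RTPA notation.
from dataclasses import dataclass

@dataclass(frozen=True)
class Association:
    source: str      # role name
    target: str      # role name
    kind: str        # e.g. "inherits", "delegates", "notifies"

@dataclass
class PatternMetaModel:
    name: str
    roles: set[str]
    associations: list[Association]

    def instantiate(self, binding: dict[str, type]) -> dict[str, type]:
        """Bind concrete classes to the pattern's abstract roles."""
        missing = self.roles - set(binding)
        if missing:
            raise ValueError(f"unbound roles: {missing}")
        return binding

# Usage: a minimal Observer-like pattern instantiated with two concrete classes.
observer = PatternMetaModel(
    "Observer",
    {"Subject", "Observer"},
    [Association("Subject", "Observer", "notifies")],
)

class StockTicker: ...
class PriceDisplay: ...

print(observer.instantiate({"Subject": StockTicker, "Observer": PriceDisplay}))
```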


2018 ◽  
Vol 11 (1) ◽  
pp. 160-206
Author(s):  
RICCARDO PINOSIO ◽  
MICHIEL VAN LAMBALGEN

Abstract In this paper we provide a mathematical model of Kant’s temporal continuum that yields formal correlates for Kant’s informal treatment of this concept in the Critique of Pure Reason and in other works of his critical period. We show that the formal model satisfies Kant’s synthetic a priori principles for time (whose consistency is not obvious) and that it even illuminates what “faculties and functions” must be in place, as “conditions for the possibility of experience”, for time to satisfy such principles. We then present a mathematically precise account of Kant’s transcendental theory of time: the most precise account to date. Moreover, we show that the Kantian continuum which we obtain has some affinities with the Brouwerian continuum, but that it also has “infinitesimal intervals” consisting of nilpotent infinitesimals; these allow us to capture Kant’s theory of rest and motion in the Metaphysical Foundations of Natural Science. While our focus is on Kant’s theory of time, the material in this paper is more generally relevant for the problem of developing a rigorous theory of the phenomenological continuum, in the tradition of Whitehead, Russell, and Weyl, among others.
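
For readers unfamiliar with nilpotent infinitesimals, a standard way to make the notion precise is the ring of dual numbers sketched below; this is offered only as background and is not necessarily the construction of "infinitesimal intervals" used in the paper.

```latex
% Background sketch only: the dual numbers as a source of nilpotent
% infinitesimals; the paper's own construction may differ.
\[
  \mathbb{R}[\varepsilon] \;=\; \mathbb{R}[x]/(x^{2})
  \;=\; \{\, a + b\varepsilon \mid a, b \in \mathbb{R} \,\},
  \qquad \varepsilon \neq 0,\ \varepsilon^{2} = 0 .
\]
% For a polynomial f, evaluation on an infinitesimal interval is linear,
%   f(a + b\varepsilon) = f(a) + f'(a)\, b\varepsilon ,
% which is one formal way to speak of rest and motion "at an instant".
```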


2012 ◽  
Vol 241-244 ◽  
pp. 56-60
Author(s):  
Jian Yu Wang ◽  
Shao Zhong Li

In this paper, an intelligent test system for pump performance is introduced. The working principle of the test system is analyzed, and the hardware architecture and software design of the system are given. The principles of analog signal calibration and of frequency measurement by the period method are discussed in detail. Finally, the mathematical model of data processing is also given.
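
The abstract names two techniques without giving formulas; the sketch below shows, with made-up numbers, what two-point linear calibration of an analog channel and frequency measurement by the period method typically look like. It is an illustration, not the authors' implementation.

```python
# Illustrative sketch: two-point linear calibration of an analog channel and
# frequency measurement by the period method, as named in the abstract.

def linear_calibration(raw_lo, raw_hi, eng_lo, eng_hi):
    """Return a function mapping raw ADC counts to engineering units,
    fitted from two known calibration points."""
    gain = (eng_hi - eng_lo) / (raw_hi - raw_lo)
    offset = eng_lo - gain * raw_lo
    return lambda raw: gain * raw + offset

def frequency_from_period(timer_counts: int, timer_clock_hz: float, cycles: int = 1) -> float:
    """Period method: time `cycles` full signal periods with a high-speed timer
    and invert the averaged period to obtain frequency (better resolution at
    low frequencies than simple pulse counting)."""
    period_s = timer_counts / timer_clock_hz / cycles
    return 1.0 / period_s

# Usage with made-up numbers: a pressure channel and a flow-meter pulse signal.
to_kpa = linear_calibration(raw_lo=410, raw_hi=3686, eng_lo=0.0, eng_hi=600.0)
print(round(to_kpa(2048), 1))                     # raw count -> kPa
print(frequency_from_period(200_000, 1e6, 10))    # 10 periods at 1 MHz -> 50.0 Hz
```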


2020 ◽  
Author(s):  
João Vitor Demaria Venâncio ◽  
Fabiane Barreto Vavassori Benitti

Requirements Engineering is concerned with identifying, analyzing, documenting and managing software requirements, which is an important phase in the software development process. Research shows that most software design failures are due to requirements engineering issues. Thus, we propose a solution for the active learning of requirements specification techniques. Considering that the user story technique is currently well accepted by IT companies, this paper proposes a mobile game that supports learning and practice in writing user stories.
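
The game itself is not described at the code level; as a small, hypothetical illustration of the user story template the paper teaches, a checker for the common "As a <role>, I want <goal> so that <benefit>" form might look like this (the regex and function are ours, not the authors' tool):

```python
# Hypothetical sketch: a checker for the common user story template
# "As a <role>, I want <goal> so that <benefit>". Not the authors' game logic.
import re

STORY_PATTERN = re.compile(
    r"^As an? (?P<role>.+?), I want (?P<goal>.+?)(?: so that (?P<benefit>.+?))?\.?$",
    re.IGNORECASE,
)

def check_user_story(text: str):
    """Return the story's parts as a dict if it matches the template, else None."""
    match = STORY_PATTERN.match(text.strip())
    return match.groupdict() if match else None

print(check_user_story(
    "As a librarian, I want to register new readers so that they can borrow books."
))
# {'role': 'librarian', 'goal': 'to register new readers',
#  'benefit': 'they can borrow books'}
```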


2016 ◽  
Vol 15 (4) ◽  
pp. 301-311
Author(s):  
Kirsten M. Winters ◽  
Denise Lach ◽  
Judith B. Cushing

Defining the characteristics of a problem domain continues to challenge developers of visualization software, although it is essential for designing both tools and the resulting visualizations. Additionally, the effectiveness of a visualization software tool often depends on the context of systems and actors within the domain problem. The nested blocks and guidelines model is a useful template for informing design and evaluation criteria for visualization software development because it aligns design to need. Characterizing the outermost block of the nested model, the domain problem, is challenging, mainly due to the nature of contemporary domain problems, which are dynamic and by definition difficult to problematize. We offer here our emerging conceptual model, based on the central question in our research study (what visualization works for whom and in which situation), to characterize the outermost block, the domain problem, of the nested model. We apply examples from a 3-year case study of visualization software design and development to demonstrate how the conceptual model might be used to create evaluation criteria affecting the design and development of a visualization tool.
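
As a rough sketch of the nesting the authors build on, the structure below lists the four levels of the nested model, with the outermost block characterised by the paper's central question; the field names are ours and are not the authors' conceptual model.

```python
# Illustrative sketch (our field names, not the authors' conceptual model):
# the nested model's four levels, with the outermost "domain problem" block
# characterised by who/what/situation, as asked in the paper.
from dataclasses import dataclass

@dataclass
class DomainProblem:          # outermost block of the nested model
    actors: list[str]         # for whom the visualization must work
    situation: str            # the context of systems and actors
    tasks: list[str]          # what the actors need to accomplish

@dataclass
class VisualizationDesign:
    domain_problem: DomainProblem        # level 1: domain problem characterisation
    data_abstraction: str                # level 2: data / task abstraction
    encoding: str                        # level 3: visual encoding / interaction
    algorithm: str                       # level 4: algorithm

design = VisualizationDesign(
    DomainProblem(["ecologists"], "long-term field study", ["compare sites over time"]),
    data_abstraction="multivariate time series",
    encoding="small-multiple line charts",
    algorithm="level-of-detail aggregation",
)
```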


Author(s):  
DANNY C. C. POO

This paper discusses an object-oriented software requirements analysis method. The approach adopted here draws a clear distinction between a system's basic structure (i.e., the object model) and its functionalities. The analysis model generated is a description of a problem domain; it consists of a set of primary and secondary objects that characterize the problem domain, and a set of pseudo objects that define the functional requirements of a system. There are two stages of analysis in the proposed method: Object Modelling and Functional Requirements Modelling. These two stages build upon one another. The aim of the object modelling stage is to derive a model of the problem domain in terms of objects, their classification and their inter-relationships with one another. The functional requirements modelling stage builds upon this initial object model to complete the requirements analysis specification. This paper uses a real-life library environment to illustrate how the method can be applied in the specification of an object-oriented software system.
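
Using the paper's own library setting, the sketch below illustrates the distinction it draws: primary and secondary objects capture the structure of the problem domain, while a pseudo object packages a functional requirement against them. The class names are ours, not the paper's specification notation.

```python
# Illustrative sketch (our names, not the paper's notation): primary/secondary
# objects model the library domain's structure; a pseudo object expresses a
# functional requirement ("borrow a book") in terms of those objects.
from dataclasses import dataclass, field

@dataclass
class Book:                      # primary object: a central domain concept
    title: str
    on_loan: bool = False

@dataclass
class Member:                    # primary object
    name: str
    loans: list[Book] = field(default_factory=list)

@dataclass
class Loan:                      # secondary object: relates primary objects
    book: Book
    member: Member

class BorrowBook:                # pseudo object: a functional requirement
    """Encapsulates the 'borrow a book' requirement over the object model."""
    def execute(self, member: Member, book: Book) -> Loan:
        if book.on_loan:
            raise ValueError(f"'{book.title}' is already on loan")
        book.on_loan = True
        member.loans.append(book)
        return Loan(book, member)

# Usage
loan = BorrowBook().execute(Member("Ann"), Book("Object-Oriented Analysis"))
```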


Risks ◽  
2021 ◽  
Vol 9 (7) ◽  
pp. 133
Author(s):  
Andrey Koltays ◽  
Anton Konev ◽  
Alexander Shelupanov

The need to assess the risks related to the trustworthiness of counterparties is increasing every year. The growing number of identified cases of unfair behavior among counterparties only confirms the relevance of this topic. Existing work in the field of information and economic security does not provide a sound methodology that allows for a comprehensive study and an adequate assessment of a counterparty (for example, a developer company) in the field of software design and development. The purpose of this work is to assess the risks of a counterparty’s trustworthiness in the context of the digital transformation of the economy, which in turn will reduce the risk of offenses and crimes that constitute threats to the security of organizations. This article discusses the main methods used in the construction of a mathematical model for assessing the trustworthiness of a counterparty. The main difficulties in assessing the accuracy and completeness of the model are identified, and the use of cross-validation to overcome these difficulties in building the model is described. The developed model, using machine learning methods, gives an accurate result even with a small number of compared counterparties, which corresponds to how a counterparty is checked in a real system. The results of calculations with this model show the feasibility of using machine learning methods in assessing the risks of counterparty trustworthiness.
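
The article does not publish its feature set or code; the sketch below only illustrates the evaluation step it describes, cross-validating a classifier on synthetic counterparty features. The features, data, and model choice are placeholders, not the authors' model.

```python
# Generic sketch of cross-validating a trustworthiness classifier; the features,
# data, and model choice here are placeholders, not the authors' actual model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Placeholder features per counterparty, e.g. years registered, litigation count,
# share of late payments, debt ratio (entirely synthetic values).
X = rng.random((200, 4))
y = (X[:, 1] + X[:, 3] < 0.9).astype(int)    # synthetic "trustworthy" label

model = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation
print(scores.mean(), scores.std())
```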


2008 ◽  
Vol 9, 2007 Conference in... ◽  
Author(s):  
Claude Lobry

In the process of elaborating a model, much emphasis is placed on the necessity of confronting the model with the reality it is supposed to represent. There is another aspect of the modelling process, in my opinion just as essential, which is usually not spoken about. It consists in logico-linguistic work in which formal models are used to produce predictions that are not confronted with reality but serve to falsify assertions which nevertheless seemed to follow from the non-formalized model. More precisely, a first informal model is described in natural language and, considered in natural language, seems to say something, but in a more or less clear way. We then translate the informal model into a formal model (a mathematical or computational model) in which what was argumentation becomes demonstration. The formal model thus serves to resolve ambiguities of natural language. But conversely, an overly formalized text quickly loses all meaning for a human brain, which makes a return to a less formal language necessary. It is these successive "translations" between more or less formal languages that I try to analyze through two examples, the first in population dynamics, the second in mathematics.
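
As a textbook illustration of the kind of translation described here (it is not claimed to be one of Lobry's own two examples), the informal claim "a population grows until it reaches the capacity of its environment" becomes, once formalized:

```latex
% Textbook logistic model, given only as an illustration of informal-to-formal
% translation; it is not claimed to be one of the paper's examples.
\[
  \frac{dN}{dt} \;=\; r\,N\!\left(1 - \frac{N}{K}\right),
\]
% where N(t) is the population size, r the intrinsic growth rate and K the
% carrying capacity; the formal model forces the vague phrase "until it reaches"
% to mean that N(t) tends to K as t grows, for any N(0) > 0.
```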


Author(s):  
Nicola Ferro

This chapter deals with the problem of defining and assessing the quality of a digital library. The chapter will provide a brief excursus on the evolution of digital libraries and their current complexity to make it clear that there is a strong need for systematic and exhaustive models which precisely define what digital libraries are and which encompass a model of their quality. In this context, the authors will present an overview of the DELOS Reference Model for digital libraries, and they will go into detail about how quality has been modelled in it. The authors will also compare this model to another formal model for digital libraries, the Streams, Structures, Spaces, Scenarios, Societies (5S) model. The discussion addressed in the chapter will not be limited to quality issues but will show how quality impacts various dimensions of the digital library universe. In particular, the authors will discuss how quality relates to interoperability. To this end, they will describe the conceptual model for interoperability developed in support of the European Digital Library initiative and will highlight its relationships with the quality domain in the DELOS Reference Model. Finally, the authors will outline some future directions that may be pursued to improve and automate the assessment and evaluation of quality in digital libraries.
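
As a very loose illustration of what modelling quality can mean operationally, a quality parameter can be thought of as a measurement attached to a resource, as in the hypothetical structure below; this is not the DELOS Reference Model's actual schema.

```python
# Hypothetical illustration only: a quality parameter as a measurement attached
# to a digital-library resource. Not the DELOS Reference Model schema.
from dataclasses import dataclass

@dataclass
class QualityParameter:
    name: str            # e.g. "availability", "metadata completeness"
    value: float         # measured value
    unit: str            # e.g. "%", "score in [0,1]"
    measured_by: str     # the actor or tool performing the measurement

@dataclass
class Resource:
    identifier: str
    quality: list[QualityParameter]

item = Resource(
    "oai:example:123",
    [QualityParameter("metadata completeness", 0.82, "score in [0,1]", "harvester")],
)
```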

