Software Engineering Methods to Improve the Design of Software Reliability Systems: Roadmap

Author(s):  
Idrees S. Kocher

The reliability of software is founded on the development, testing, evaluation, and maintenance of software systems. In recent years, researchers have come to see software reliability as a major focus, because reliability is central to all software quality concepts. Software reliability engineering is the study of the processes and results of software systems in relation to the basic requirements of users. This paper provides an overview (roadmap) of current developments in software reliability metrics, modeling, and operational profiles. It outlines several software engineering methods for achieving reasonable system reliability. Finally, failure metrics are considered based on feedback collected from users after the software is released, together with case studies of detected failures; the numbers and types of failures are recorded from users' feedback.
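
The failure metrics described above can be derived directly from field data. A minimal sketch is given below, assuming failures are reported as timestamped records with a failure type (the data format and values are illustrative, not taken from the paper); it computes the failure intensity, the mean time between failures (MTBF), and a tally of failure types from user feedback:

```python
from datetime import datetime
from collections import Counter

# Hypothetical user-reported failures: (timestamp, failure type)
reports = [
    (datetime(2024, 1, 3, 10, 15), "crash"),
    (datetime(2024, 1, 9, 14, 2), "wrong output"),
    (datetime(2024, 1, 20, 8, 40), "crash"),
    (datetime(2024, 2, 1, 17, 5), "performance"),
]

release_date = datetime(2024, 1, 1)
observation_end = datetime(2024, 2, 15)

# Failure intensity: failures per day over the observation window
exposure_days = (observation_end - release_date).total_seconds() / 86400
failure_intensity = len(reports) / exposure_days

# Mean time between failures, measured from the release date
times = [release_date] + [t for t, _ in sorted(reports)]
gaps = [(b - a).total_seconds() / 86400 for a, b in zip(times, times[1:])]
mtbf_days = sum(gaps) / len(gaps)

# Tally of failure types, as the abstract suggests recording numbers and types
type_counts = Counter(kind for _, kind in reports)

print(f"Failure intensity: {failure_intensity:.3f} failures/day")
print(f"MTBF: {mtbf_days:.1f} days")
print(f"Failure types: {dict(type_counts)}")
```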

2014, Vol. 926-930, pp. 2642-2645
Author(s):  
Wen Hong Liu ◽  
Chun Yan Wang ◽  
Li Ge

With the rapid development of social informatization, high software reliability and security are required. Only high-quality software products can increase work efficiency; quality is the life of software. How to enhance the quality of software products, and which quality management methods are effective, are urgent questions. This paper discusses the key points of software engineering and software quality management, which form the basis of a software quality assurance model.


2016, Vol. 26 (44), pp. 155
Author(s):  
Diana María Torres-Ricaurte ◽  
Carlos Mario Zapata-Jaramillo

Interoperability among heterogeneous software systems is a software quality sub-characteristic. Methods for dealing with interoperability differ in aspects such as generality, development method, and work products, among others. However, some authors understand interoperability as a non-functional requirement with general-purpose practices for identifying and specifying such a requirement, while other authors assess and achieve interoperability by using work products that fall beyond defined practices. Consequently, in this paper we propose four best practices to accomplish interoperability among heterogeneous software systems. Our best practices are represented with the Semat (Software Engineering Method and Theory) kernel, since it includes a language with simple and precise elements. Defining interoperability best practices enables unification of the efforts focused on software system interoperability.


Author(s):  
Kehan Gao ◽  
Taghi M. Khoshgoftaar

Timely and accurate prediction of the quality of software modules in the early stages of the software development life cycle is very important in the field of software reliability engineering. With such predictions, a software quality assurance team can assign the limited quality improvement resources to the needed areas and prevent problems from occurring during system operation. Software metrics-based quality estimation models are tools that can achieve such predictions. They are generally of two types: a classification model that predicts the class membership of modules into two or more quality-based classes (Khoshgoftaar et al., 2005b), and a quantitative prediction model that estimates the number of faults (or some other quality factor) likely to occur in software modules (Ohlsson et al., 1998). In recent years, a variety of techniques have been developed for software quality estimation (Briand et al., 2002; Khoshgoftaar et al., 2002; Ohlsson et al., 1998; Ping et al., 2002), most of which are suited for either prediction or classification, but not both. For example, logistic regression (Khoshgoftaar & Allen, 1999) can only be used for classification, whereas multiple linear regression (Ohlsson et al., 1998) can only be used for prediction. Some software quality estimation techniques, such as case-based reasoning (Khoshgoftaar & Seliya, 2003), can be used to calibrate both prediction and classification models; however, they require distinct modeling approaches for the two types of models. In contrast, count models such as the Poisson regression model (PRM) and the zero-inflated Poisson (ZIP) regression model (Khoshgoftaar et al., 2001) can yield both with a single modeling approach. Moreover, count models are capable of providing the probability that a module has a given number of faults. Despite the attractiveness of calibrating software quality estimation models with count modeling techniques, their application in software reliability engineering has been very limited (Khoshgoftaar et al., 2001). This study can be used as a basis for assessing the usefulness of count models for predicting both the number of faults and the quality-based class of software modules.
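
As a minimal sketch of how a single count model can serve both purposes (the module metrics, synthetic fault counts, and classification threshold below are illustrative assumptions, not the study's data; statsmodels is used for the Poisson fit), a Poisson regression can predict the expected number of faults per module and, from the fitted rate, flag modules whose probability of containing at least one fault is high:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import poisson

rng = np.random.default_rng(0)

# Hypothetical module metrics: lines of code and cyclomatic complexity
n = 200
loc = rng.integers(50, 2000, n)
complexity = rng.integers(1, 40, n)
X = sm.add_constant(np.column_stack([loc, complexity]).astype(float))

# Synthetic fault counts drawn from a Poisson whose rate grows with the metrics
true_rate = np.exp(-2.0 + 0.0008 * loc + 0.03 * complexity)
faults = rng.poisson(true_rate)

# Fit a Poisson regression (GLM with a log link)
model = sm.GLM(faults, X, family=sm.families.Poisson()).fit()

# Prediction: expected number of faults per module
expected_faults = model.predict(X)

# Classification: flag a module as fault-prone if P(faults >= 1) exceeds a threshold
p_fault_prone = 1.0 - poisson.pmf(0, expected_faults)
fault_prone = p_fault_prone > 0.5

print(model.params)
print(f"Modules flagged fault-prone: {fault_prone.sum()} of {n}")
```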


2010, Vol. 118-120, pp. 891-895
Author(s):  
Chun Yang Jiang ◽  
Guo Qi Li ◽  
Xiao Hong Bao

Software reliability has been regarded as one of the most important quality attributes for software-intensive systems, especially in the embedded system domain. Software reliability engineering focuses on engineering techniques for developing and maintaining software systems whose reliability can be quantitatively evaluated. As most of an embedded system's complicated functionality and control is implemented by software embedded in hardware, assuring high reliability of the software itself becomes more critical. At this point, there is no visible boundary between software reliability and software safety. Although software reliability has remained an active research subject for several years, challenges and open questions still exist. In particular, vital future goals include the development of new software reliability engineering paradigms that take software architectures, testing techniques, and software failure manifestation mechanisms into consideration. In this paper, we give a paradigm of an embedded system and analyze it using a Generalized Stochastic Petri Net (GSPN).
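
The abstract does not give the model details, but as a minimal sketch of the kind of GSPN-style analysis it refers to (the failure/repair structure and rates below are illustrative assumptions, not the authors' model), a Monte Carlo simulation of exponentially timed failure and repair transitions can estimate the steady-state availability of an embedded software component:

```python
import random

# Illustrative two-state model: the embedded software is either up or under repair.
# Transitions are exponentially timed, as with the timed transitions of a GSPN.
FAILURE_RATE = 1.0 / 500.0   # assumed: one failure per 500 hours on average
REPAIR_RATE = 1.0 / 4.0      # assumed: repairs take 4 hours on average

def simulate(horizon_hours: float, seed: int = 0) -> float:
    """Return the fraction of time the system is up over the simulated horizon."""
    rng = random.Random(seed)
    t, up_time, state_up = 0.0, 0.0, True
    while t < horizon_hours:
        rate = FAILURE_RATE if state_up else REPAIR_RATE
        dwell = min(rng.expovariate(rate), horizon_hours - t)
        if state_up:
            up_time += dwell
        t += dwell
        state_up = not state_up
    return up_time / horizon_hours

estimated = simulate(1_000_000)
analytic = REPAIR_RATE / (FAILURE_RATE + REPAIR_RATE)  # steady-state availability
print(f"Simulated availability: {estimated:.4f}, analytic: {analytic:.4f}")
```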


Author(s):  
HENRIK MADSEN ◽  
POUL THYREGOD ◽  
BERNARD BURTSCHY ◽  
GRIGORE ALBEANU ◽  
FLORIN POPENTIU

Previous investigations have shown the importance of evaluating computer performance and predicting system reliability. This paper considers soft computing techniques for software fault diagnosis, reliability optimization, and time series prediction during software reliability analysis. It is shown that the study of data collected during software project development can be carried out within a soft computing framework.
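
As a minimal sketch of the time-series-prediction part (the sliding-window setup, network size, and synthetic failure data below are assumptions for illustration; the paper's actual soft computing models are not reproduced here), a small neural network can be trained to predict the next interval's failure count from recent history:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Synthetic weekly failure counts that decay as the software matures (illustrative)
weeks = np.arange(60)
failures = rng.poisson(20 * np.exp(-weeks / 25.0) + 1)

# Sliding windows: predict next week's count from the previous 4 weeks
window = 4
X = np.array([failures[i:i + window] for i in range(len(failures) - window)])
y = failures[window:]

# Train on the early part of the series, evaluate on the held-out tail
split = 45
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(X[:split], y[:split])

predictions = model.predict(X[split:])
mae = np.mean(np.abs(predictions - y[split:]))
print(f"Mean absolute error on held-out weeks: {mae:.2f} failures")
```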


Author(s):  
Shola Oyedeji ◽  
Birgit Penzenstadler ◽  
Ahmed Seffah

As in other ICT communities, sustainability is a major research and development concern in software engineering. Current research focuses on eliciting the meanings of sustainability and proposing approaches for its engineering and integration into the mainstream software development lifecycle. However, few concrete guidelines are available that software designers can apply effectively; such guidelines are needed for eliciting sustainability requirements and for testing software against them. This paper introduces a sustainability design catalogue to assist software developers and managers in eliciting sustainability requirements, and then in measuring and testing software sustainability. The paper reviews the current research on sustainability in software engineering, which forms the grounds for the development of the catalogue. Four different case studies were analyzed using the Karlskrona manifesto on sustainability design. The output of this research is a software sustainability design catalogue, through which a pilot framework is proposed that includes a set of sustainability goals, concepts, and methods. The integration of sustainability for and in software systems requires a concrete framework that exemplifies how to apply and quantify sustainability. The paper demonstrates how the proposed software sustainability design catalogue provides a step in this direction through a series of guidelines.


Author(s):  
TAGHI M. KHOSHGOFTAAR ◽  
BOJAN CUKIC ◽  
NAEEM SELIYA

Embedded systems have become ubiquitous and essential entities in our ever-growing high-tech world. Embedded systems such as telecommunication systems form the backbone of today's information-highway infrastructure. They demand high reliability in order to prevent the severe consequences of failures, including costly repairs at remote sites. Technology changes mandate that embedded systems evolve, resulting in a demand for techniques that improve the reliability of their future system releases. Reliability models based on software metrics can be effective tools for software engineering of embedded systems, because quality improvements are so resource-consuming that it is not feasible to apply them to all modules. Identifying the likely fault-prone modules before system testing can be effective in reducing the likelihood of faults discovered during operations. A software quality classification model is calibrated using software metrics from a past release and is then applied to modules currently under development to estimate which modules are likely to be fault-prone. This paper presents and demonstrates an effective case-based reasoning approach for calibrating such classification models. It is attractive for software engineering of embedded systems because it can be used to develop software reliability models in a faster, cheaper, and easier way. We illustrate our approach with two large-scale case studies obtained from embedded systems, involving data collected from telecommunication systems, including wireless systems. The results indicate that the level of classification accuracy observed in both case studies would be beneficial in achieving high software reliability of subsequent releases of the embedded systems.
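
As a minimal sketch of similarity-based classification in the spirit of case-based reasoning (the metrics, the Euclidean similarity measure, and the majority-vote rule below are simplifying assumptions; the paper's actual CBR calibration is not reproduced here), modules from a past release act as the case library, and a module from the current release is labeled by its most similar past cases:

```python
import numpy as np

# Case library: metrics of modules from a past release, with known labels.
# Columns (illustrative): lines of code, cyclomatic complexity, number of changes.
past_metrics = np.array([
    [1200, 35, 14],
    [300, 8, 2],
    [2500, 60, 30],
    [450, 12, 3],
    [1800, 40, 22],
    [150, 4, 1],
], dtype=float)
past_fault_prone = np.array([1, 0, 1, 0, 1, 0])  # 1 = fault-prone

# Normalize each metric so no single scale dominates the similarity measure
mean, std = past_metrics.mean(axis=0), past_metrics.std(axis=0)
library = (past_metrics - mean) / std

def classify(module_metrics, k=3):
    """Label a current module by majority vote of its k most similar past cases."""
    query = (np.asarray(module_metrics, dtype=float) - mean) / std
    distances = np.linalg.norm(library - query, axis=1)
    nearest = np.argsort(distances)[:k]
    return int(past_fault_prone[nearest].sum() * 2 > k)

# A module from the current release under development (hypothetical metrics)
print("fault-prone" if classify([1600, 45, 18]) else "not fault-prone")
```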

