MAXIMUM LIKELIHOOD ESTIMATES FOR THE HYPERGEOMETRIC SOFTWARE RELIABILITY MODEL

Author(s):  
FRANK PADBERG

We present a novel, fast, and exact algorithm for computing maximum likelihood estimates of the number of defects initially contained in a software product, using the hypergeometric software reliability model. The algorithm is based on a rigorous and comprehensive mathematical analysis of the growth behavior of the likelihood function for the hypergeometric model. We also study a numerical example taken from the literature and compare the estimate obtained with the hypergeometric model against the estimates obtained with other reliability models. The hypergeometric estimate is highly accurate.
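The abstract does not reproduce Padberg's fast algorithm, but the likelihood it maximizes can be sketched. In the hypergeometric model, each test "senses" w[i] faults; of these, the newly discovered ones follow a hypergeometric distribution over the faults not yet found. The sketch below computes that likelihood and finds the maximizing initial fault count N by plain brute-force search (an assumption-level baseline, not the paper's exact fast method):

```python
from math import comb, log, inf

def log_likelihood(N, w, x):
    """Log-likelihood of an initial fault count N in the hypergeometric model.
    w[i] is the number of faults 'sensed' by test i, x[i] the number of
    newly discovered faults.  With C faults already found, the newly
    discovered count follows a hypergeometric law:
    P(X = x_i) = C(N-C, x_i) * C(C, w_i - x_i) / C(N, w_i)."""
    ll, C = 0.0, 0
    for wi, xi in zip(w, x):
        if xi > wi or xi > N - C or wi - xi > C or wi > N:
            return -inf  # observed data impossible under this N
        ll += log(comb(N - C, xi)) + log(comb(C, wi - xi)) - log(comb(N, wi))
        C += xi
    return ll

def mle_initial_faults(w, x, N_max=500):
    """Brute-force search for the maximizing N (illustration only;
    Padberg's paper gives a fast exact algorithm for this maximization)."""
    total = sum(x)  # N cannot be below the number of faults already found
    return max(range(total, N_max + 1), key=lambda N: log_likelihood(N, w, x))
```

For example, with sensing counts `w = [3, 4]` and discovery counts `x = [3, 2]`, the likelihood is maximized at the smallest feasible fault count.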

Author(s):  
FAROKH B. BASTANI ◽  
ING-RAY CHEN ◽  
TA-WEI TSAO

In this paper we develop a software reliability model for Artificial Intelligence (AI) programs. We show that conventional software reliability models must be modified to incorporate certain special characteristics of AI programs, such as (1) failures due to intrinsic faults, e.g., limitations of heuristics and other basic AI techniques, (2) a fuzzy correctness criterion, i.e., the difficulty of accurately classifying the output of some AI programs as correct or incorrect, (3) planning-time versus execution-time tradeoffs, and (4) reliability growth due to an evolving knowledge base. We illustrate the approach by modifying the Musa-Okumoto software reliability growth model to incorporate failures due to intrinsic faults and to accept fuzzy failure data. The utility of the model is demonstrated on a robot path-planning problem.
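The baseline the paper modifies is the standard Musa-Okumoto logarithmic Poisson model, whose mean value function is mu(t) = ln(lambda0 * theta * t + 1) / theta. A minimal sketch, with a hypothetical constant-rate term for intrinsic failures added only to illustrate the idea (the paper's actual modification is not reproduced here):

```python
import math

def musa_okumoto_mean(t, lam0, theta):
    """Expected cumulative failures by time t in the Musa-Okumoto
    logarithmic Poisson model: mu(t) = ln(lam0 * theta * t + 1) / theta.
    lam0 is the initial failure intensity, theta the decay parameter."""
    return math.log(lam0 * theta * t + 1.0) / theta

def mean_with_intrinsic(t, lam0, theta, lam_intrinsic):
    """Hypothetical extension: intrinsic faults (e.g. heuristic
    limitations) do not get fixed, so their failure rate does not decay
    and their expected count grows linearly.  The constant-rate term is
    an illustrative assumption, not the modification derived in the paper."""
    return musa_okumoto_mean(t, lam0, theta) + lam_intrinsic * t
```

The intrinsic term dominates for large t, reflecting that debugging cannot drive the failure rate of an AI heuristic below its inherent limit.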


Author(s):  
Kiyoshi Honda ◽  
Hironori Washizaki ◽  
Yoshiaki Fukazawa

Today’s development environment has changed drastically: development periods are shorter than ever and teams are larger. Consequently, controlling development activities and predicting when a development will end are difficult tasks. To adapt to such changes, we propose a generalized software reliability model (GSRM) based on a stochastic process that simulates developments subject to uncertainties and dynamics, such as unpredictable changes in the requirements and in the number of team members. We assess two actual datasets using our formulated equations, which cover three types of development uncertainty, by employing simple approximations in GSRM. The results show that developments can be evaluated quantitatively. Additionally, a comparison of GSRM with existing software reliability models confirms that GSRM’s approximation is more precise than those of existing models.
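A stochastic-process growth model of this kind is commonly written as a stochastic differential equation in which the noise scales with the remaining faults, e.g. dN = alpha*(N_max - N)*dt + sigma*(N_max - N)*dW. The Euler-Maruyama simulation below is a sketch of that generic form only; the specific equations and uncertainty types of GSRM are assumptions not reproduced from the paper:

```python
import random

def simulate_growth(n_max, alpha, sigma, dt=0.01, steps=2000, seed=42):
    """Euler-Maruyama simulation of a stochastic reliability-growth
    process dN = alpha*(n_max - N)*dt + sigma*(n_max - N)*dW:
    exponential growth toward n_max, with noise proportional to the
    number of faults still remaining.  Illustrative form only."""
    rng = random.Random(seed)
    n = 0.0
    path = [n]
    for _ in range(steps):
        dw = rng.gauss(0.0, dt ** 0.5)  # Brownian increment, sd sqrt(dt)
        n += alpha * (n_max - n) * dt + sigma * (n_max - n) * dw
        path.append(n)
    return path
```

Repeating the simulation with different seeds yields a distribution of completion trajectories, which is how such a model can quantify when a development is likely to end.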


Author(s):  
С.И. Затенко ◽  
М.В. Тарабан

A comparative analysis is presented of interval generalized Bayesian software reliability models based on non-homogeneous Poisson processes against the well-known and well-proven classical Goel–Okumoto and Musa–Okumoto models. The new interval models combine the maximum likelihood principle with the Bayesian approach. To find the model parameters, the set of all parameters is divided into two subsets. Using the parameters of the first subset together with the statistical data, a generalized Bayesian model is constructed, which yields bounds on the set of probability distribution functions as a function of the parameters of the second subset; these parameters are then computed using the maximum likelihood principle. This approach makes it possible to obtain a good software reliability prediction even at the design stage, when statistical information is scarce. The model accounts for the growth of software reliability during debugging and can be tuned by changing a caution parameter. The prediction quality of the models is verified by comparing the predicted values with the actual times to failure observed while debugging the software.
To assess prediction quality, three indicators are computed: the maximum deviation, the average deviation, and the standard deviation of the predicted data from the real values. The average deviations are analyzed after predicting 17 and 6 values of the number of failures with caution parameters s = 1 and s = 0.5. The results show that the proposed interval modifications outperform the classical models; moreover, their advantage over conventional models grows substantially when the number of tests, i.e., the amount of statistical information, is small. Analyzing the prediction quality of the interval models for different values of the caution parameter s shows that smaller values of s yield better predictions when a large amount of statistical data is available, but worse predictions when relatively little data is available; the opposite conclusion holds as s increases. The results of the analysis are presented as graphs. This comparative analysis shows that the new reliability models based on interval reliability indicators deliver better predictions than the classical probabilistic models.
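The three prediction-quality indicators named above are straightforward to compute. The sketch below assumes absolute deviations for the first two indicators and population (divide-by-n) normalization for the third; the paper's exact conventions are not stated in the abstract:

```python
def deviation_metrics(predicted, actual):
    """Prediction-quality indicators used in the comparison:
    maximum deviation, average deviation, and standard deviation of the
    predicted data from the real values.  Absolute deviations and
    divide-by-n normalization are assumptions."""
    diffs = [p - a for p, a in zip(predicted, actual)]
    abs_diffs = [abs(d) for d in diffs]
    n = len(diffs)
    max_dev = max(abs_diffs)
    avg_dev = sum(abs_diffs) / n
    std_dev = (sum(d * d for d in diffs) / n) ** 0.5
    return max_dev, avg_dev, std_dev
```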


2018 ◽  
Vol 18 (3) ◽  
pp. 37-47
Author(s):  
Nikolay Pavlov ◽  
Anton Iliev ◽  
Asen Rahnev ◽  
Nikolay Kyurkchiev

Abstract In this paper we study the Hausdorff approximation of the shifted Heaviside step function h_{t0}(t) by sigmoidal functions based on Chen’s and Pham’s cumulative distribution functions, and we find an expression for the error of the best approximation. We give real examples with data from an IBM entry software package and the Apache HTTP Server, using Chen’s software reliability model and Pham’s deterministic software reliability model. Some analyses are made.
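In this line of work, the Hausdorff distance d between the shifted step h_{t0}(t) and an increasing sigmoid F is characterized as the root of F(t0 + d) = 1 - d, which is easy to solve numerically. The sketch below uses a generic logistic sigmoid as a stand-in, since Chen's and Pham's CDF formulas are not reproduced in the abstract:

```python
import math

def hausdorff_distance_to_step(F, t0):
    """Numerically estimate the Hausdorff distance d between the shifted
    Heaviside step h_{t0}(t) and an increasing sigmoid F (F -> 0 on the
    left, F -> 1 on the right), characterized as the root of
    F(t0 + d) = 1 - d.  Solved by bisection on [0, 1]."""
    lo, hi = 0.0, 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        # g(d) = F(t0 + d) - (1 - d) is strictly increasing in d
        if F(t0 + mid) - (1.0 - mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def logistic(t0, k):
    """Generic logistic sigmoid centered at t0 with slope parameter k;
    a stand-in assumption for Chen's / Pham's CDF-based sigmoids."""
    return lambda t: 1.0 / (1.0 + math.exp(-k * (t - t0)))
```

As expected, a steeper sigmoid (larger k) approximates the step better, i.e. yields a smaller Hausdorff distance.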


Symmetry ◽  
2019 ◽  
Vol 11 (4) ◽  
pp. 521 ◽  
Author(s):  
Song ◽  
Chang ◽  
Pham

Software plays a crucial role in computer systems and is used in a variety of environments: it is developed and tested in a controlled environment, while real-world operating environments may differ. Accordingly, the uncertainty of the operating environment must be considered. Moreover, predicting software failures is an important area of study, not only for software developers but also for companies and research institutes. A software reliability model can measure and predict the number of software failures, software failure intervals, software reliability, and failure rates. In this paper, we propose a new non-homogeneous Poisson process (NHPP) model with an inflection factor in the fault detection rate function that accounts for the uncertainty of operating environments, and we analyze how the predictions of the proposed model differ from those of other models. We compare the proposed model with several existing NHPP software reliability models on real software failure datasets using ten criteria. The results show that the proposed model has significantly better goodness of fit and predictability than the other models.
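A classical building block behind models with an inflection factor in the fault detection rate is Ohba's inflection S-shaped NHPP model, whose mean value function is m(t) = a(1 - e^(-bt)) / (1 + beta*e^(-bt)). The sketch below shows that baseline only; the paper's model additionally randomizes the environment, which is not reproduced here:

```python
import math

def inflection_s_shaped_mean(t, a, b, beta):
    """Mean value function of the classical inflection S-shaped NHPP
    model: m(t) = a * (1 - exp(-b*t)) / (1 + beta * exp(-b*t)).
    a: expected total number of faults, b: fault detection rate,
    beta: inflection factor controlling the S-shape."""
    e = math.exp(-b * t)
    return a * (1.0 - e) / (1.0 + beta * e)
```

The mean value function starts at 0, rises along an S-shaped curve whose inflection is controlled by beta, and saturates at a, the expected total fault content.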

