APPLICATION OF EMPIRICAL MODELS AND HALSTEAD METRICS FOR EVALUATING THE QUALITY OF SOFTWARE APPLICATIONS

Author(s):  
Лариса Федорівна Пудовкіна ◽  
Вадим Вікторович Сіняєв

The application of empirical models and Halstead metrics to the evaluation of software quality is considered. The subject of the study is methods of measuring software reliability and the models used to calculate it. The purpose of the work is to identify promising directions for further research into analytical and empirical models of software reliability. The object of the study is the process of evaluating software quality, which involves a large number of tasks and has therefore given rise to a variety of approaches, methods, and tools. Objectives: to carry out a comparative analysis of analytical and empirical models of software reliability and quality; to describe models and methods for benchmarking these software reliability models; and to test and evaluate the effectiveness of the models and methods used for this comparative analysis. Methods: software was developed that, using Halstead metrics and static code analysis, evaluates the complexity and quality of software products; this makes it possible to consider comprehensively all aspects related to analytical and empirical models. The resulting software performs the following functions: plotting graphs of various parameters and exporting the graph data as text (with the values obtained during the experiment). The lexical analyzer builds graphs that display the following information about the analyzed modules: degree of commenting; exact and approximate quality level; actual and theoretical length; information content; and expended intellectual effort. Conclusions. The relevance of a comparative analysis of analytical and empirical models of software reliability stems from the fact that most software is unreliable. The scientific novelty of the results is a comparative analysis of analytical and empirical models of software reliability that allows these models to be studied in detail and software reliability to be improved.
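
To make the measures named in the abstract concrete, here is a minimal sketch of how the Halstead metrics it lists (actual vs. theoretical length, approximate quality level, expended intellectual effort) are conventionally computed. The operator/operand classification heuristic and the sample snippet below are illustrative assumptions only; they do not reproduce the authors' analyzer.

import io
import keyword
import math
import tokenize

def halstead_metrics(source: str) -> dict:
    """Split Python tokens into operators/operands and compute Halstead's measures."""
    operators, operands = [], []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.OP or keyword.iskeyword(tok.string):
            operators.append(tok.string)       # +, =, (, if, return, ...
        elif tok.type in (tokenize.NAME, tokenize.NUMBER, tokenize.STRING):
            operands.append(tok.string)        # identifiers and literals

    n1, n2 = len(set(operators)), len(set(operands))  # distinct operators / operands
    N1, N2 = len(operators), len(operands)            # total occurrences

    length = N1 + N2                                      # actual length N
    est_length = n1 * math.log2(n1) + n2 * math.log2(n2) # theoretical length N-hat
    volume = length * math.log2(n1 + n2)                 # volume V = N * log2(n)
    difficulty = (n1 / 2) * (N2 / n2)                    # difficulty D
    level = 1 / difficulty                               # program level L (approximate quality)
    effort = difficulty * volume                         # intellectual effort E = D * V
    return {"N": length, "N_hat": est_length, "V": volume,
            "D": difficulty, "L": level, "E": effort}

print(halstead_metrics("def add(a, b):\n    return a + b\n"))

The degree of commenting, which the analyzer also plots, is simply the share of comment tokens in the source and is omitted here for brevity.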

1997 ◽  
Vol 29 (2) ◽  
pp. 337-352 ◽  
Author(s):  
Yiping Chen ◽  
Nozer D. Singpurwalla

Assessing the reliability of computer software has been an active area of research in computer science for the past twenty years. To date, well over a hundred probability models for software reliability have been proposed. These models have been motivated by seemingly unrelated arguments and have been the subject of active debate and discussion. In the meantime, the search for an ideal model continues to be pursued. The purpose of this paper is to point out that practically all the proposed models for software reliability are special cases of self-exciting point processes. This perspective unifies the very diverse approaches to modeling reliability growth and provides a common structure under which problems of software reliability can be discussed.
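
The paper's unifying claim can be illustrated with a short, hedged sketch: below, the classical Jelinski-Moranda model is written as a point process whose conditional intensity depends only on the failure history (the number of failures observed so far), which is the self-exciting structure the paper identifies. The parameter values are invented for the demonstration and are not taken from the paper.

import random

def simulate_jelinski_moranda(n_faults: int = 20, phi: float = 0.05,
                              seed: int = 1) -> list:
    """Failure times of a point process with conditional intensity
    lambda(t | history) = phi * (n_faults - failures_so_far)."""
    rng = random.Random(seed)
    times, t = [], 0.0
    for observed in range(n_faults):
        rate = phi * (n_faults - observed)  # the intensity is a function of the
        t += rng.expovariate(rate)          # process's own history: it drops by
        times.append(t)                     # phi after every observed failure
    return times

print(simulate_jelinski_moranda()[:5])

Other reliability models differ only in how the conditional intensity reacts to the accumulated failure history, which is exactly the common structure under which the paper places them.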


Author(s):  
С.И. Затенко ◽  
М.В. Тарабан

The article provides a comparative analysis of interval generalized Bayesian software reliability models based on non-homogeneous Poisson processes against the well-known and well-proven classical Goel–Okumoto and Musa–Okumoto models. The new interval models combine the maximum-likelihood principle with the Bayesian approach. To find the model parameters, the set of all parameters is divided into two subsets. Using the parameters of the first subset and the statistical data, a generalized Bayesian model is constructed, from which bounds on the set of probability distribution functions are derived as functions of the parameters of the second subset. These parameters are then computed using the maximum-likelihood principle. This approach makes it possible to obtain a high-quality prediction of software reliability even at the design stage, when statistical information is scarce. The model accounts for the growth of software reliability during debugging and can be tuned by changing a caution parameter s. The predictive quality of the models is verified by comparing the predicted values with the actual times to failure recorded while debugging the software. To assess predictive quality, three indicators are calculated: the maximum deviation, the average deviation, and the standard deviation of the predicted data from the actual values. The average deviations, computed after predicting 17 and 6 values of the number of failures with caution parameters s = 1 and s = 0.5, are analyzed. The calculations show that the proposed interval modifications outperform the classical models. Moreover, the advantage of the interval models over the conventional ones grows markedly when the number of tests, and hence the amount of statistical information, is small. Analyzing the predictive quality of the interval models for different values of the caution parameter s shows that smaller values of s yield better predictions when a large amount of statistical data is available but worse predictions when the data are relatively scarce; the opposite holds as the caution parameter increases. The results of the analysis are presented as graphs. This comparative analysis showed that the new reliability models, based on interval reliability indicators, yield higher-quality predictions than the classical probabilistic models.
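
As a point of reference for the classical side of the comparison, here is a minimal sketch, under stated assumptions, of fitting the Goel–Okumoto model by maximum likelihood and scoring it with the three deviation indicators mentioned above. The failure times are invented for the demonstration, and the interval generalized Bayesian construction itself is not reproduced.

import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, times, T):
    """NHPP log-likelihood for Goel-Okumoto, m(t) = a*(1 - exp(-b*t)):
    log L = sum_i log(a*b*exp(-b*t_i)) - m(T)."""
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    return -(np.sum(np.log(a * b) - b * times) - a * (1 - np.exp(-b * T)))

times = np.array([3.0, 7.0, 12.0, 20.0, 31.0, 45.0, 62.0, 84.0, 110.0, 150.0])
T = times[-1]
fit = minimize(neg_log_likelihood, x0=[15.0, 0.01], args=(times, T),
               method="Nelder-Mead")
a, b = fit.x

predicted = a * (1 - np.exp(-b * times))  # m(t_i): expected cumulative failures
observed = np.arange(1, len(times) + 1)   # actual cumulative failure count

# The three prediction-quality indicators used in the article:
max_dev = np.max(np.abs(predicted - observed))        # maximum deviation
mean_dev = np.mean(np.abs(predicted - observed))      # average deviation
rmse = np.sqrt(np.mean((predicted - observed) ** 2))  # standard deviation of errors
print(f"a={a:.2f} b={b:.4f} max={max_dev:.2f} mean={mean_dev:.2f} rmse={rmse:.2f}")

The interval models described in the article would, in effect, replace the point estimates of a and b with bounds obtained from the generalized Bayesian construction, tuned by the caution parameter s.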

