Comparison of modified interval models using inhomogeneous Poisson processes with their standard counterparts

Author(s):  
С.И. Затенко ◽  
М.В. Тарабан

The article provides a comparative analysis of interval generalized Bayesian software reliability models based on inhomogeneous Poisson processes against the well-known and well-proven classical Goel–Okumoto and Musa–Okumoto models. The new interval models combine the maximum likelihood principle and the Bayesian approach. To find the model parameters, the set of all parameters is divided into two subsets. Using the parameters of the first subset and the statistical data, a generalized Bayesian model is constructed, with the help of which bounds on the sets of probability distribution functions are formed that depend on the parameters of the second subset. These parameters are then calculated using the maximum likelihood principle.
This approach makes it possible to obtain a high-quality prediction of software reliability even at the design stage, when statistical information is insufficient. The model takes into account the growth of software reliability during debugging and can be tuned by changing the caution parameter. The prediction quality of the models is verified by comparing the predicted values with the real values of time to failure during software debugging. To assess prediction quality, the following indicators are calculated: the maximum deviation, the mean deviation, and the standard deviation of the predicted data from the real values. The mean deviations for the models, calculated after predicting 17 and 6 values of the number of failures with caution parameters s = 1 and s = 0.5, are analyzed. The calculation results show that the quality of the proposed interval modifications is higher than that of the classical models. In addition, it is clearly seen that the quality of the proposed interval models improves significantly compared with the conventional models when the number of tests is small, that is, when the amount of statistical information is small. Analyzing the prediction quality of the interval models for different values of the caution parameter s, one can see that smaller values of s lead to higher prediction quality when there is a large amount of statistical data, whereas prediction quality falls when the amount of statistical data is relatively small. The opposite conclusion holds as the caution parameter s increases. The results of the analysis are presented as graphs. This comparative analysis showed that the new reliability models, based on interval reliability indicators, yield a higher-quality prediction than the classical probabilistic models.
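
To make the comparison concrete, below is a minimal Python sketch of the classical counterparts only: maximum likelihood fitting of the Goel–Okumoto and Musa–Okumoto NHPP models to observed failure times, together with the deviation indicators (maximum, mean, root-mean-square) used to judge prediction quality. The failure times, starting values, and function names are illustrative assumptions, not data or code from the article, and the interval generalized Bayesian construction itself is not reproduced here.

```python
# Minimal sketch (not the authors' interval Bayesian models): maximum likelihood
# fitting of the classical Goel-Okumoto and Musa-Okumoto NHPP models to observed
# failure times, plus the deviation metrics used to judge prediction quality.
# The data in `failure_times` is illustrative, not taken from the article.
import numpy as np
from scipy.optimize import minimize

failure_times = np.array([3.0, 7.5, 12.1, 20.4, 31.0, 45.2, 60.8, 80.3])  # hours (example)
T = failure_times[-1]  # end of the observation window


def goel_okumoto_loglik(params, t, T):
    """NHPP log-likelihood sum(log lambda(t_i)) - mu(T) for mu(t) = a*(1 - exp(-b*t))."""
    a, b = params
    if a <= 0 or b <= 0:
        return -np.inf
    lam = a * b * np.exp(-b * t)        # intensity at each failure time
    mu_T = a * (1.0 - np.exp(-b * T))   # expected number of failures up to T
    return np.sum(np.log(lam)) - mu_T


def musa_okumoto_loglik(params, t, T):
    """NHPP log-likelihood for mu(t) = (1/theta)*log(1 + lam0*theta*t)."""
    lam0, theta = params
    if lam0 <= 0 or theta <= 0:
        return -np.inf
    lam = lam0 / (1.0 + lam0 * theta * t)
    mu_T = np.log(1.0 + lam0 * theta * T) / theta
    return np.sum(np.log(lam)) - mu_T


def fit(loglik, x0):
    """Maximize the log-likelihood by minimizing its negative (Nelder-Mead)."""
    res = minimize(lambda p: -loglik(p, failure_times, T), x0, method="Nelder-Mead")
    return res.x


def deviation_metrics(predicted, actual):
    """Maximum, mean and root-mean-square deviation of predictions from real values."""
    d = np.abs(np.asarray(predicted) - np.asarray(actual))
    return d.max(), d.mean(), np.sqrt(np.mean(d ** 2))


a_hat, b_hat = fit(goel_okumoto_loglik, x0=[10.0, 0.01])
lam0_hat, theta_hat = fit(musa_okumoto_loglik, x0=[0.5, 0.05])
print("Goel-Okumoto parameters:", a_hat, b_hat)
print("Musa-Okumoto parameters:", lam0_hat, theta_hat)
```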

Author(s):  
FRANK PADBERG

We present a novel, fast, and exact algorithm to compute maximum likelihood estimates for the number of defects initially contained in a software system, using the hypergeometric software reliability model. The algorithm is based on a rigorous and comprehensive mathematical analysis of the growth behavior of the likelihood function for the hypergeometric model. We also study a numerical example taken from the literature and compare the estimate obtained in the hypergeometric model with the estimates obtained in other reliability models. The hypergeometric estimate is highly accurate.
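
As a rough illustration of the estimation problem (not Padberg's fast exact algorithm, whose analysis of the likelihood's growth behavior is the paper's contribution), the sketch below evaluates the hypergeometric-model log-likelihood over candidate values of the initial defect count m and picks the maximizer by direct search. The per-test sensed-fault counts, detection counts, search bound, and variable names are assumptions made for the example.

```python
# Illustrative sketch of maximum likelihood estimation in the hypergeometric
# software reliability model by direct search over the initial defect count m.
# This is NOT Padberg's fast exact algorithm; it only shows the likelihood
# being maximized. w[i] = faults "sensed" by test i, x[i] = newly detected
# faults in test i (example numbers consistent with the model, not from the paper).
import numpy as np
from scipy.stats import hypergeom

w = [6, 10, 8, 12, 9]    # sensed faults per test (assumed)
x = [6, 4, 2, 3, 1]      # newly detected faults per test (assumed)
cum = np.concatenate(([0], np.cumsum(x)))  # cumulative detections before each test


def log_likelihood(m):
    """Log-likelihood of m initial defects under the hypergeometric model."""
    ll = 0.0
    for i in range(len(w)):
        undetected = m - cum[i]            # faults still unknown before test i
        if undetected < x[i] or w[i] > m:
            return -np.inf                 # impossible configuration
        # x[i] new detections among w[i] draws from m faults, `undetected` of which are new
        ll += hypergeom.logpmf(x[i], m, undetected, w[i])
    return ll


candidates = range(int(cum[-1]), 200)      # m is at least the total detected; 200 is an arbitrary cap
m_hat = max(candidates, key=log_likelihood)
print("ML estimate of initial defects:", m_hat)
```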


Author(s):  
Лариса Федорівна Пудовкіна ◽  
Вадим Вікторович Сіняєв

The application of empirical models and Halstead metrics to evaluate software quality is considered. The subject of the study is methods of measuring software reliability and models for calculating it. The purpose of the work is to identify a promising direction for further research into analytical and empirical models of software reliability. The object of the study is the process of evaluating software quality, which requires solving a large number of tasks; this leads to a variety of approaches, methods, and tools. Objectives: to carry out a comparative analysis of analytical and empirical models of software reliability and quality; to describe models and methods for benchmarking these software reliability models; and to test and evaluate the effectiveness of the models and methods used for the comparative analysis of analytical and empirical models of software reliability. The methods used: software was developed that, using Halstead metrics and static code analyzer methods, allows the complexity and quality of software products to be evaluated. This makes it possible to comprehensively consider all aspects related to analytical and empirical models. As a result, software has been developed that, using Halstead metrics and static code analyzer methods, makes it possible to evaluate the complexity and quality of software products. The software performs the following functions: plotting graphs of various parameters; and outputting the information from the graphs as text (with the values obtained during the experiment). The lexical analyzer builds graphs that display the following information about the analyzed modules: degree of commenting; exact and approximate quality level; actual and theoretical length; information content; and intellectual effort expended. Conclusions. The relevance of a comparative analysis of analytical and empirical models of software reliability is determined by the fact that most software is unreliable. The scientific novelty of the obtained results is as follows: a comparative analysis of analytical and empirical models of software operational reliability makes it possible to study the reliability models in detail and to increase software reliability.
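
For reference, the sketch below computes the standard Halstead measures corresponding to the quantities listed above (actual and theoretical length, volume as information content, and intellectual effort) from operator and operand counts. The token counting itself, i.e. the lexical analyzer, is not reproduced; the counts passed in are assumed to come from such an analyzer, and the example numbers are invented.

```python
# Minimal sketch of the Halstead measures referenced above, computed from operator
# and operand counts that a lexical analyzer would produce. The example counts are
# illustrative, not measured values from the article.
import math

def halstead_metrics(n1, n2, N1, N2):
    """n1/n2: distinct operators/operands; N1/N2: total operators/operands."""
    vocabulary = n1 + n2
    length = N1 + N2                                        # actual ("real") program length
    calc_length = n1 * math.log2(n1) + n2 * math.log2(n2)   # theoretical length estimate
    volume = length * math.log2(vocabulary)                 # information content (bits)
    difficulty = (n1 / 2.0) * (N2 / n2)
    effort = difficulty * volume                            # intellectual effort expended
    return {
        "length": length,
        "calculated_length": calc_length,
        "volume": volume,
        "difficulty": difficulty,
        "effort": effort,
    }

# Example counts for a small module (assumed, not measured):
print(halstead_metrics(n1=12, n2=7, N1=27, N2=15))
```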


2018 ◽  
Vol 15 (4) ◽  
pp. 77-86 ◽  
Author(s):  
Yuriy P. Lipuntsov

Statistical agencies are the main providers of data on the state of the economy at the macroeconomic level, and most economic decisions on a national scale are based on statistical data. Data processing is a key business process for statistical agencies. At the same time, the quality of the statistical data supplied by Rosstat is not always high enough: adjustments occur, and discrepancies between data sets describing the same economic phenomenon are revealed. The purpose of the work is to describe methods of collecting and processing statistical information that will help to improve the quality of the presented data. From an information point of view, a statistical agency organizes the information exchange between data providers and consumers and acts as a data aggregator. To organize information exchange within a community, a semantic space must be created to ensure the meaningful content of the data. The main role in the semantic space is played by object identifiers. The article considers unified identifiers of statistical accounting objects as a method of collecting and processing statistical information and improving its quality. International statistical practice uses methods of standardizing the turnover of statistical data: information standards are designed to unify identifiers and namespaces for participants in the turnover of statistical information and to provide a single semantic space. With unified identifiers, the procedures for processing statistical data become transparent, allowing grouping by different dimensions as well as decomposition of aggregated data into their components.

The results of the work are recommendations on the use of core components of the information infrastructure for the collection and analysis of statistical data. The existing information infrastructure of the Russian digital economy contains a number of data sources whose use will improve the quality of the collection and processing of statistical data. To create a semantic space for statistical data in the Russian Federation, the most important element is the registers of core components. The use of registers makes it possible to link statistical data from different domains and to connect aggregated data with microdata. Significant progress has been made in the labeling of goods, which makes it possible to track an object's movement through all stages of its life cycle, as well as its location. The Government of the Russian Federation has initiated a project on the labeling of goods, and this information makes it possible to obtain a clear picture of a significant part of the economy. An additional source of statistical data can be the corporate sector, where tracking systems that monitor goods, vehicles, containers, and warehousing are actively used.

Conclusion: there are several options for creating a semantic space for statistical data. World experience points to the use of Web architecture, which relies on technological identifiers. The semantics of statistical data can be ensured by using the potential of the information infrastructure, which will solve a number of problems of statistical accounting.
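
As a small illustration of the point about unified identifiers, the sketch below links two data sets describing the same objects through a shared identifier, aggregates them by different dimensions, and traces an aggregate back to its micro-level components. The column names, identifiers, and figures are invented for the example and do not come from any Rosstat register.

```python
# Toy illustration of unified object identifiers: two data sets describing the same
# objects are linked by a shared identifier, aggregated, and decomposed back into
# components. All identifiers and figures are invented for the example.
import pandas as pd

# Register of objects (a "core component"): identifier plus classification attributes.
register = pd.DataFrame({
    "object_id": ["A1", "A2", "B1", "B2"],
    "region": ["North", "North", "South", "South"],
    "industry": ["Trade", "Transport", "Trade", "Transport"],
})

# Micro-level observations reported against the same identifiers.
microdata = pd.DataFrame({
    "object_id": ["A1", "A2", "B1", "B2"],
    "turnover": [120.0, 80.0, 200.0, 50.0],
})

linked = microdata.merge(register, on="object_id")        # link via the unified identifier

by_region = linked.groupby("region")["turnover"].sum()    # grouping by one dimension
by_industry = linked.groupby("industry")["turnover"].sum()

# Decomposition: each aggregate can be traced back to its micro-level components.
components_of_north = linked[linked["region"] == "North"][["object_id", "turnover"]]

print(by_region)
print(by_industry)
print(components_of_north)
```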


Author(s):  
Anupam et al.

Software reliability is a significant quality characteristic, and reliability models are often used to measure and predict it as software is developed. The quality of mobile application environments differs from that of desktop and server environments because of many factors, such as the network, energy, battery, and compatibility. Assessing and predicting mobile application reliability are real challenges given the variety of mobile environments in which the applications are used and the lack of publicly available defect data. Moreover, bug reports are only optionally submitted by end users. In the current research work, based on a literature review and the assessments of experts working in the field of mobile application development, 10 leading reliability factors and 14 sub-factors have been identified that are essential for evaluating the reliability of mobile applications.

