program reliability
Recently Published Documents


TOTAL DOCUMENTS

33
(FIVE YEARS 7)

H-INDEX

7
(FIVE YEARS 0)

Author(s):  
Bahman Arasteh ◽  
Reza Solhi

Software plays a remarkable role in many critical applications. At the same time, owing to shrinking transistor sizes and reduced supply voltages, radiation-induced transient errors (soft errors) have become an important source of computer system failure. As the rate of transient hardware faults increases, researchers have investigated software techniques to control these faults. Performance overhead is the main drawback of software-implemented methods, such as recovery blocks, that rely on redundancy. The main goal of this study is to enhance software reliability against soft errors by utilizing inherently error-masking (invulnerable) programming structures. During the programming phase, at the source-code level, programmers can select different storage classes, such as automatic, global, static, and register, for the data in their programs without paying attention to the inherent reliability of these classes. In this study, the inherent effects of these storage classes on program reliability are investigated. An extensive series of profiling and fault-injection experiments was performed on a set of benchmark programs implemented with different storage classes. The results show that programs implemented with the automatic storage class have inherently higher reliability than programs using the static and register storage classes, without performance overhead. This finding enables programmers to develop highly reliable programs without redundancy and its performance overhead.
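As a companion to the storage classes discussed above, the following minimal C sketch (illustrative only, not the paper's benchmark code; the variable and function names are assumptions) shows how the same data can be declared with each class:

#include <stdio.h>

int global_counter = 0;            /* global: static storage duration, external linkage */

int sum_to(int n)
{
    static int call_count = 0;     /* static: persists across calls */
    register int i;                /* register: hint to keep the loop index in a register */
    int total = 0;                 /* automatic: fresh stack copy per call */

    call_count++;
    for (i = 1; i <= n; i++)
        total += i;
    global_counter += total;
    return total;
}

int main(void)
{
    printf("%d\n", sum_to(10));    /* prints 55 */
    return 0;
}

Under the paper's finding, an automatic variable such as total would mask more injected faults than its static or register counterparts, intuitively because its value lives only for the duration of a single call.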


2020 ◽  
Vol 30 (11n12) ◽  
pp. 1641-1665
Author(s):  
Jiang Wu ◽  
Jianjun Xu ◽  
Xiankai Meng ◽  
Haoyu Zhang ◽  
Zhuo Zhang

Modern compilers provide a huge number of optional compilation optimization options, and the appropriate options must be selected for each program or application. Machine learning is widely used as an efficient technique for this selection problem, and preserving the integrity and effectiveness of the extracted program information is the key to solving it. Moreover, when selecting the best compilation optimization options, the optimization goals are usually execution speed, code size, and CPU consumption; there has been little research on program reliability. This paper proposes a compilation optimization option selection model based on a Gate Graph Attention Neural Network (GGANN). Data-flow and function-call information are integrated into the abstract syntax tree as program graph-based features. We extend the deep neural network underlying GGANN and build a learning model that learns heuristics for program reliability. Experiments were performed under the Clang compiler framework. Compared with traditional machine learning methods, our model improves average accuracy by 5–11% in optimization option selection for program reliability. The experiments also show that our model scales well.
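A hypothetical C sketch of the kind of program graph described above: AST child edges augmented with data-flow and function-call edges. The type names and the tiny example graph are assumptions, not the authors' implementation.

#include <stdio.h>
#include <stdlib.h>

typedef enum { EDGE_AST_CHILD, EDGE_DATA_FLOW, EDGE_FUNC_CALL } EdgeKind;

typedef struct Edge {
    int target;              /* index of the destination AST node */
    EdgeKind kind;           /* which relation this edge encodes */
    struct Edge *next;
} Edge;

typedef struct {
    int node_count;
    Edge **adj;              /* adj[i]: list of edges leaving node i */
} ProgramGraph;

static void add_edge(ProgramGraph *g, int u, int v, EdgeKind kind)
{
    Edge *e = malloc(sizeof *e);
    e->target = v;
    e->kind = kind;
    e->next = g->adj[u];
    g->adj[u] = e;
}

int main(void)
{
    /* Three AST nodes: 0 = function, 1 = assignment, 2 = call site. */
    ProgramGraph g = { 3, calloc(3, sizeof(Edge *)) };
    add_edge(&g, 0, 1, EDGE_AST_CHILD);   /* syntactic structure */
    add_edge(&g, 0, 2, EDGE_AST_CHILD);
    add_edge(&g, 1, 2, EDGE_DATA_FLOW);   /* value defined at node 1 is used at node 2 */
    add_edge(&g, 2, 0, EDGE_FUNC_CALL);   /* call edge back to the enclosing function */
    printf("graph with %d nodes built\n", g.node_count);
    return 0;
}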


2020 ◽  
Vol 13 (2) ◽  
pp. 35-39
Author(s):  
Soka Hadiati ◽  
Anita Anita ◽  
Adi Pramuda

This study aims to develop affective assessment instruments for practicum assistants in physics laboratories. Development followed the Plomp model, which consists of five stages: (1) initial investigation; (2) design; (3) realization/construction; (4) testing, evaluation, and revision; and (5) implementation. The questionnaire was prepared on a theoretical basis, and its relevance was established through content validation by experts. Data were collected using non-test techniques in the form of affective/attitude questionnaires. The construct validity of the affective scores was analyzed with the Rasch model using the Winsteps program, and reliability was estimated with Cronbach's alpha. The attitude instrument was designed with reference to criteria and indicators based on Rao's theory. Content validation by 7 experts showed that all items had good validity, and the inter-rater reliability of the items was 0.8. Empirical validation showed that all items are valid. The instrument has an item reliability of 0.93 and a person reliability of 0.39, in the good and moderate categories respectively. This indicates that the consistency of the subjects' answers is still weak, but that the quality of the items, in terms of the instrument's reliability, is quite good. The developed instrument therefore meets the criteria of validity and reliability.
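For reference, Cronbach's alpha, the reliability statistic named above, is alpha = k/(k-1) * (1 - sum of item variances / variance of total scores) for k items. A minimal C sketch with made-up Likert responses (the 4x3 score table is illustrative, not the study's data):

#include <stdio.h>

#define RESPONDENTS 4
#define ITEMS 3

static double variance(const double *x, int n)
{
    double mean = 0.0, ss = 0.0;
    for (int i = 0; i < n; i++) mean += x[i];
    mean /= n;
    for (int i = 0; i < n; i++) ss += (x[i] - mean) * (x[i] - mean);
    return ss / (n - 1);                       /* sample variance */
}

int main(void)
{
    /* Illustrative 4-respondent x 3-item Likert scores. */
    double score[RESPONDENTS][ITEMS] = {
        {4, 5, 4}, {3, 4, 3}, {5, 5, 4}, {2, 3, 2}
    };
    double item[ITEMS][RESPONDENTS], total[RESPONDENTS];
    double item_var_sum = 0.0;

    for (int r = 0; r < RESPONDENTS; r++) {
        total[r] = 0.0;
        for (int j = 0; j < ITEMS; j++) {
            item[j][r] = score[r][j];          /* transpose into per-item columns */
            total[r] += score[r][j];
        }
    }
    for (int j = 0; j < ITEMS; j++)
        item_var_sum += variance(item[j], RESPONDENTS);

    double alpha = ((double)ITEMS / (ITEMS - 1))
                 * (1.0 - item_var_sum / variance(total, RESPONDENTS));
    printf("Cronbach's alpha = %.3f\n", alpha);
    return 0;
}

On this toy table the program prints alpha = 0.975; values above roughly 0.7 are conventionally taken as acceptable internal consistency.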


The goal of this paper is to examine code reliability metrics. Reliability is an important aspect of any program: it cannot be ignored, and it is difficult to measure. "Program reliability is defined as the probability of running programs without disruption in a specific environment for a specified period of time." Software reliability differs from hardware reliability, and measuring it is hard because program complexity is high. Different methods can be used to increase system performance, but it is difficult to balance development time, budget, and software quality; the best way to ensure consistency is to build high-quality programs throughout the program's life cycle. We discuss software reliability metrics in this paper. Metrics used early on can help detect and correct requirements defects that would otherwise cause errors later in the program life cycle. We also assess the reliability quality of an information-system database with the help of RStudio, and we illustrate reliability based on the value of cyclomatic complexity, classifying the data or software as more reliable, less reliable, or somewhat reliable.
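Cyclomatic complexity, mentioned above, is computed from the control-flow graph as M = E - N + 2P (edges, nodes, connected components). A minimal C sketch; the reliability bands keyed to McCabe's conventional threshold of 10 are an assumption, not thresholds from the text:

#include <stdio.h>

static int cyclomatic_complexity(int edges, int nodes, int components)
{
    return edges - nodes + 2 * components;     /* M = E - N + 2P */
}

int main(void)
{
    int m = cyclomatic_complexity(9, 8, 1);    /* M = 9 - 8 + 2 = 3 */
    const char *band = (m <= 10) ? "more reliable"
                     : (m <= 20) ? "somewhat reliable"
                                 : "less reliable";
    printf("M = %d (%s)\n", m, band);
    return 0;
}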


2018 ◽  
Vol 7 (2) ◽  
pp. 6-10
Author(s):  
Sunil Kumar Singh ◽  
Raj Shree

Faults in software systems continue to be a major problem. A software fault is a defect that causes software failure in an executable product. A variety of software fault prediction techniques have been proposed, but none has proven to be consistently accurate. This study therefore reviews the performance of the Adaptive Neuro-Fuzzy Inference System (ANFIS) in predicting software defects and software reliability. The datasets are taken from the NASA Metrics Data Program (MDP) repository. In the present work, an artificial intelligence technique, ANFIS, is used for software defect prediction.
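The first layer of an ANFIS fuzzifies each input with a membership function; a common choice is the Gaussian mu(x) = exp(-(x - c)^2 / (2*sigma^2)). The sketch below is an assumed illustration of that single step (the metric, centre c, and width sigma are invented), not the reviewed model:

#include <math.h>
#include <stdio.h>

/* Degree to which input x belongs to a fuzzy set centred at c. */
static double gaussian_membership(double x, double c, double sigma)
{
    double d = x - c;
    return exp(-(d * d) / (2.0 * sigma * sigma));
}

int main(void)
{
    /* Hypothetical premise parameters for a "high fault risk" set
       over a lines-of-code metric. */
    double mu = gaussian_membership(320.0, 300.0, 50.0);
    printf("membership degree = %.3f\n", mu);   /* about 0.923 */
    return 0;
}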


2018 ◽  
Vol 210 ◽  
pp. 04009
Author(s):  
Kazimierz Worwa ◽  
Tadeusz Nowicki ◽  
Robert Waszkowski ◽  
Maciej Kiedrowicz

The testing stage, while creating great opportunities to verify and shape software reliability, significantly increases the cost of software production. The effectiveness of testing, expressed by the interdependence between the level of program reliability and the cost of testing, strongly depends on the adopted testing strategy, which specifies the organization and scope of the work performed. In this situation there is a need to define the conditions for a compromise between the reliability and cost requirements set for the software. Finding this compromise in practice can be greatly facilitated if the level of software quality and the cost of testing can be formally assessed using appropriate indicators. The paper describes a method of determining a program testing strategy by solving a two-criteria optimization problem, with the program reliability coefficient and the cost of testing as the component criteria. The paper consists of a description of the program testing process and a mathematical model of this process, a formulation of the two-criteria optimization problem for the program testing strategy, and remarks on a method of solving the formulated problem. A numerical example illustrates the proposed method of finding an optimal testing strategy.
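One standard way to handle such a two-criteria problem is scalarization: combine the reliability coefficient R(s) and the testing cost C(s) into a single objective f(s) = w*(1 - R(s)) + (1 - w)*C(s)/Cmax and minimize it over candidate strategies. The C sketch below uses this assumed scalarization with an invented candidate table; the paper's actual formulation may differ:

#include <stdio.h>

typedef struct { const char *name; double reliability; double cost; } Strategy;

int main(void)
{
    Strategy s[] = {                       /* illustrative candidates only */
        {"light testing",  0.90,  40.0},
        {"medium testing", 0.97,  75.0},
        {"heavy testing",  0.99, 120.0},
    };
    const int n = sizeof s / sizeof s[0];
    const double w = 0.9, c_max = 120.0;   /* assumed weight and cost scale */
    int best = 0;
    double best_f = 1e9;

    for (int i = 0; i < n; i++) {
        double f = w * (1.0 - s[i].reliability)
                 + (1.0 - w) * (s[i].cost / c_max);
        if (f < best_f) { best_f = f; best = i; }
        printf("%-15s f = %.4f\n", s[i].name, f);
    }
    printf("chosen: %s\n", s[best].name);
    return 0;
}

With the weight w = 0.9 shown, the medium strategy wins; lowering w shifts the optimum toward cheaper, less thorough testing.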


2014 ◽  
Vol 6 (2) ◽  
pp. 359-375
Author(s):  
Kazimierz Worwa

An approach to formally modelling the program testing process is proposed in the paper. The considerations are based on a program reliability-growth model constructed for an assumed scheme of the program testing process. In this model, the program under test is characterized by means of a so-called characteristic matrix, and the testing process is determined by means of a so-called testing strategy. A formula for the mean value of the predicted number of errors encountered during program testing is obtained; it can be used when the characteristic matrix and the testing strategy are known. Formulae for evaluating this value when the program's characteristic matrix is not known are also proposed in the paper.
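The abstract does not reproduce the paper's formula, so the following C sketch is a heavily hedged reading: it assumes a characteristic matrix q[i][j] giving the probability that a test drawn from input class j reveals error i, and a testing strategy p[j] giving the probability of drawing from class j, so that the expected number of errors revealed per test is sum over i and j of p[j]*q[i][j]. All numbers are invented.

#include <stdio.h>

#define ERRORS 2
#define CLASSES 3

int main(void)
{
    double q[ERRORS][CLASSES] = {          /* assumed characteristic matrix */
        {0.10, 0.00, 0.30},
        {0.00, 0.20, 0.05},
    };
    double p[CLASSES] = {0.5, 0.3, 0.2};   /* assumed testing strategy */
    double expected = 0.0;

    for (int i = 0; i < ERRORS; i++)
        for (int j = 0; j < CLASSES; j++)
            expected += p[j] * q[i][j];

    printf("expected errors revealed per test = %.3f\n", expected);
    return 0;
}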

