software failure
Recently Published Documents

TOTAL DOCUMENTS: 225 (five years: 44)
H-INDEX: 18 (five years: 2)

2022 ◽  
Vol 10 (1) ◽  
pp. 0-0

Software failure prediction is an important activity in agile software development: it helps managers identify failure-prone modules, which reduces test time and cost and allows testing resources to be assigned efficiently. RapidMiner Studio 9.4 was used to perform all required steps in a unified environment, from preparing the primary data to visualizing the results, evaluating the outputs, and verifying and improving them. Two datasets are used in this work. For the first (181 rows, covering all recorded test times), the failure rate in predicting the test time was 3% for mean time between failures (MTBF); the SVM thus achieved 97% prediction success, compared with previous work in which Administrative Delay Time (ADT) achieved a statistically significant overall success rate of 93.5%. For the second dataset, the prediction failure rate was 1.5% for MTBF, i.e., the SVM achieved 98.5% prediction success.
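The abstract's SVM-based prediction step can be sketched as follows. This is a minimal illustration with synthetic data and assumed feature names, not the authors' RapidMiner workflow or dataset:

```python
# Hypothetical sketch of SVM-based failure prediction, analogous to the
# workflow described above. The synthetic features and labels below are
# assumptions for illustration, not the paper's data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Synthetic records: two standardized features per module (e.g. an MTBF
# statistic and a test-time statistic) -> binary failure label.
X = rng.normal(size=(181, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # cleanly separable rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)          # train the SVM classifier
acc = accuracy_score(y_te, clf.predict(X_te))    # held-out prediction success
```

On real failure data, `acc` would correspond to the prediction-success percentages reported above (97% and 98.5%).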


2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Sikandar Ali ◽  
Muhammad Adeel ◽  
Sumaira Johar ◽  
Muhammad Zeeshan ◽  
Samad Baseer ◽  
...  

In information technology, an incident is an event that is not part of a normal process and disrupts operational procedure. This research work focuses in particular on software failure incidents. In any operational environment, a software failure can put the quality and performance of services at risk, and many efforts are made to overcome such incidents and restore normal service as soon as possible. The main contribution of this study is the classification and prediction of software failure incidents using machine learning. An active learning approach is used to selectively label the data considered most informative for building models. First, the sample with the highest uncertainty (entropy) is selected for labeling. Second, a binary classifier assigns each labeled observation to either the failure or the no-failure class. A Support Vector Machine is used as the main classifier. We derived our prediction models from failure log files collected from the ECLIPSE software repository.
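The entropy-based selection step described above can be sketched as follows; the probability values are illustrative, not from the study:

```python
# Hypothetical sketch of entropy-based active learning: pick the unlabeled
# sample whose predicted class distribution has the highest entropy,
# i.e. the sample the current model is least certain about.
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a probability vector; 0*log(0) treated as 0."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]
    return float(-(nz * np.log2(nz)).sum())

def most_informative(proba):
    """Index of the pool sample with maximum predictive entropy."""
    return int(max(range(len(proba)), key=lambda i: entropy(proba[i])))

# Predicted [P(failure), P(no failure)] for three unlabeled log entries
pool = [[0.95, 0.05], [0.50, 0.50], [0.80, 0.20]]
idx = most_informative(pool)   # the 50/50 entry is labeled next
```

The selected sample would then be labeled and added to the SVM's training set.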


Author(s):  
R. Chennappan ◽  
Vidyaa Thulasiraman

Software quality management is highly significant for ensuring the quality and reviewing the reliability of software products. To improve software quality by predicting software failures and enhancing scalability, this paper presents a novel reinforced Cuckoo search optimized latent Dirichlet allocation based Ruzchika indexive regression (RCSOLDA-RIR) technique. First, multi-criteria reinforced Cuckoo search optimization performs test case selection, finding the most optimal solution under multiple criteria and selecting the optimal test cases for testing software quality. Next, a generative latent Dirichlet allocation model predicts the software failure density from the selected optimal test cases with minimum time. Finally, Ruzchika indexive regression measures the similarity between the preceding versions and the new version of the software product; based on this similarity estimate, the software failure density of the new version is also predicted. In this way, software error prediction is performed effectively, improving the reliability of software code, while the service provisioning time between software versions is minimized. Experimental assessment shows that the RCSOLDA-RIR technique achieves better reliability and scalability than existing methods.
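The similarity step appears to build on the Ruzicka (weighted Jaccard) index; a minimal sketch under that assumption, with illustrative vectors rather than the paper's data:

```python
# Sketch of the Ruzicka (weighted Jaccard) similarity that the version
# comparison described above appears to rely on. The failure-density
# vectors below are illustrative assumptions, not the paper's dataset.

def ruzicka(x, y):
    """sum(min(x_i, y_i)) / sum(max(x_i, y_i)) for non-negative vectors."""
    num = sum(min(a, b) for a, b in zip(x, y))
    den = sum(max(a, b) for a, b in zip(x, y))
    return num / den if den else 1.0

prev_version = [3.0, 1.0, 0.0, 2.0]   # e.g. failures per module, release N
new_version  = [2.0, 1.0, 1.0, 2.0]   # release N+1
sim = ruzicka(prev_version, new_version)   # 5/7: fairly similar releases
```

A high similarity would justify carrying the predicted failure density of the previous version over to the new one.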


Author(s):  
S. Rumana Firdose

Abstract: During the development of software code there is a pressing need to remove faults and improve software reliability. To obtain accurate results, assessment must take place in every phase of the software development cycle, so that bugs are detected early and accuracy is maintained at each level. Academic institutions and industry are enhancing software engineering development techniques, performing regular testing to find faults in programs during development. New programs are composed by altering the original code to include a bias toward statements that arise on pessimistic (failing) execution paths. The proposed method uses a fault localization technique to indicate the position of a fault. In both the experimental and the regression-based evaluations, the soft computing techniques produce better results than the other techniques, and among them the ANN model is the most accurate. Databases for the training and testing stages were collected, and the soft computing techniques showed lower computational error than the empirical equations, indicating that soft computing models outperform the regression models. Finding and correcting a serious software fault early is preferable to recalling thousands of products, especially in the automotive sector. The success of a software reliability growth model (SRGM) depends mainly on gathering accurate failure information, since the model's functions are predicted from that information alone. SRGM techniques in the literature give a reasonable fit to actual software failure data; this model can therefore be applied in future to a wide range of software and its applications. Keywords: SRGM, FDP, FCP
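SRGM fitting of the kind discussed above can be sketched with the classic Goel-Okumoto model (one common SRGM; the abstract does not name the exact model used, and the failure counts below are synthetic):

```python
# Sketch of fitting the Goel-Okumoto software reliability growth model
# m(t) = a * (1 - exp(-b*t)) to cumulative failure counts. The data are
# synthetic; the paper's failure dataset is not published here.
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """Expected cumulative failures by time t (a = total faults, b = detection rate)."""
    return a * (1.0 - np.exp(-b * t))

t = np.arange(1, 21, dtype=float)            # weeks of testing
observed = goel_okumoto(t, 100.0, 0.15)      # idealized cumulative counts
(a_hat, b_hat), _ = curve_fit(goel_okumoto, t, observed, p0=(50.0, 0.05))
remaining = a_hat - goel_okumoto(t[-1], a_hat, b_hat)  # faults still latent
```

With real data, `remaining` is the kind of quantity used to decide whether a release is reliable enough to ship rather than recall.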


2021 ◽  
Vol 2083 (3) ◽  
pp. 032095
Author(s):  
Zhimin Ni ◽  
Fan Zhao

Abstract: Existing service-oriented software is narrowly focused on business processing and cannot guarantee that this business processing carries over into the software's further development. When operators encounter operational problems, software failures, or other issues related to running the software, development technicians must provide technical support to keep the software's business-processing functions working. This study moves away from dependence on other software and provides technical support to business software operators accurately and promptly, effectively solving the problems operators may encounter.


Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6489
Author(s):  
Jiaxi Liu ◽  
Weizhong Gao ◽  
Jian Dong ◽  
Na Wu ◽  
Fei Ding

Many environmental monitoring applications based on the Internet of Things (IoT) require robust, highly available systems that can tolerate hardware or software failure of nodes and communication failure between nodes. However, node failure is inevitable due to environmental and human factors, and battery depletion in particular is a major contributor to node failure. Existing failure detection algorithms seldom consider node battery consumption. To rectify this, we propose a low-power failure detector (LP-FD) that provides an acceptable failure detection service while saving node battery. Simulation results show that the LP-FD outperforms other failure detection algorithms in detection speed, accuracy, overhead, and battery consumption.
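A minimal sketch of the heartbeat-style detection that detectors of this kind refine (the class, node names, and timeout value are illustrative assumptions, not the LP-FD algorithm itself):

```python
# Minimal heartbeat failure detector: a node is suspected once no
# heartbeat has arrived within the timeout window. Low-power variants
# such as the LP-FD tune how often heartbeats are sent and checked.

class FailureDetector:
    def __init__(self, timeout):
        self.timeout = timeout
        self.last_seen = {}              # node id -> time of last heartbeat

    def heartbeat(self, node, now):
        """Record a heartbeat from `node` at time `now` (seconds)."""
        self.last_seen[node] = now

    def suspected(self, node, now):
        """True if the node's last heartbeat is older than the timeout."""
        last = self.last_seen.get(node)
        return last is None or now - last > self.timeout

fd = FailureDetector(timeout=3.0)
fd.heartbeat("sensor-7", now=10.0)
alive_at_12 = not fd.suspected("sensor-7", now=12.0)   # 2 s of silence: fine
failed_at_15 = fd.suspected("sensor-7", now=15.0)      # 5 s of silence: suspect
```

Longer timeouts and sparser heartbeats save battery at the cost of slower detection, which is the trade-off the LP-FD targets.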


Author(s):  
Shinji Inoue ◽  
Takaji Fujiwara ◽  
Shigeru Yamada

Safety integrity level (SIL)-based functional safety assessment has been widely required when designing safety functions and checking their validity for electrical/electronic/programmable electronic (E/E/PE) safety-related systems since IEC 61508 was issued in 2010. For the hardware of E/E/PE safety-related systems, quantitative functional safety assessment based on target failure measures is needed to decide or allocate the SIL. On the other hand, IEC 61508 provides no quantitative safety assessment method for allocating a SIL to the software of E/E/PE safety-related systems, because software failure is treated as a systematic failure in IEC 61508. We discuss the necessity of quantitative safety assessment for the software of such systems and propose mathematical fundamentals for conducting quantitative SIL-based safety assessment for it, applying notions from software reliability modeling and assessment technologies. We show numerical examples explaining how to use our approaches.
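For context, the hardware-side target failure measures work as follows: IEC 61508 assigns a SIL band to the probability of dangerous failure per hour (PFH) for high-demand/continuous-mode functions. A sketch of that lookup (the example intensity value is illustrative):

```python
# Sketch of mapping a target failure measure (probability of dangerous
# failure per hour, PFH) to a SIL band per IEC 61508's high-demand-mode
# table. Applying such a quantitative step to software failure intensity
# is what the authors argue for; the example input is an assumption.

def sil_from_pfh(pfh):
    """Return the SIL whose PFH band contains `pfh`, or None if out of range."""
    bands = [
        (4, 1e-9, 1e-8),   # SIL 4: >= 1e-9 and < 1e-8 per hour
        (3, 1e-8, 1e-7),
        (2, 1e-7, 1e-6),
        (1, 1e-6, 1e-5),
    ]
    for sil, low, high in bands:
        if low <= pfh < high:
            return sil
    return None

level = sil_from_pfh(5e-8)   # e.g. an estimated residual failure intensity
```

An SRGM-style estimate of residual software failure intensity could, in principle, be fed through the same kind of band lookup, which is the gap in IEC 61508 that the paper addresses.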


2021 ◽  
Author(s):  
Minghui Wang ◽  
Jiangxuan Xie ◽  
Xinan Yang ◽  
Xiangqiao Ao

The network is very important to the normal operation of all aspects of society and the economy, and a memory leak in a network device is a software failure that seriously damages system stability. Common memory-checking tools are not suitable for network devices running online, so operations staff can only monitor memory usage continuously and infer leaks from experience, which has proved inefficient and unreliable. In this paper we propose a novel machine-learning-based memory leak detection method for network devices. It first eliminates the impact of large-scale resource table entries on memory utilization. Then, by analyzing the monotonicity of the memory series and computing its correlation coefficient with memory-leak sequence sets pre-constructed by simulation, a memory leak fault can be found in time. Simulation experiments show that the scheme is computationally efficient with a precision rate close to 100%, and it performs well in real network environments.
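The monotonicity-plus-correlation screening idea can be sketched as follows; the reference ramp, threshold, and sample series are illustrative assumptions, not the paper's pre-constructed leak sequence sets:

```python
# Sketch of the leak screening described above: a memory-usage series
# that grows and correlates strongly with a simulated leak ramp is
# flagged. The 0.95 threshold and sample series are assumptions.
import numpy as np

def leak_suspected(usage, threshold=0.95):
    usage = np.asarray(usage, dtype=float)
    if np.all(np.diff(usage) <= 0):             # never grows -> no leak
        return False
    ramp = np.arange(len(usage), dtype=float)   # idealized steady-leak sequence
    r = np.corrcoef(usage, ramp)[0, 1]          # Pearson correlation
    return bool(r >= threshold)

leaking = [100, 102, 104, 107, 109, 112, 114]   # MB, steadily climbing
healthy = [100, 103, 99, 104, 100, 102, 101]    # MB, oscillating around 101
```

In the paper's setting, usage would first be corrected for large-scale resource table entries before this comparison.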


2021 ◽  
pp. 248-258
Author(s):  
Ayat Mohammad ◽  
◽  
Hamed Fawareh

Researchers have long attempted to raise the success rate of software systems, improving software quality models and other software elements to increase customer satisfaction and retention. Several quality models and variables have been proposed to reduce software system failure and complexity, and several software quality models have been proposed to assess general and particular types of software products, determining their general or particular scopes. The proposed models are evaluated through comparisons with the well-known models in order to customize the closest model; these comparisons expose a leakage of criteria rooted in distinct views and knowledge of cultural and social requirements. The customized software quality model proposes new factors: the cultural model has eight criteria, namely language, religion, social habits, publishing, custom, ethics, and law, which we classify into three main groups. The outcome of the proposed cultural model demonstrates that these criteria must be considered to reduce software failure and improve the permanence variables. Finally, we propose a cultural language metric for measuring the software failure and permanence satisfaction variables.


Author(s):  
Richard A. Boateng ◽  
John S. Miller

Accessibility, the number of time-decayed jobs available to each zone within a region, can help prioritize candidate transportation investments. This paper demonstrates how to compute auto accessibility using commonly available resources and identifies strategies needed to render calculations feasible and transparent. (The scope excludes transit and pedestrian impacts.) For the first objective, computational solutions included developing a semi-automated method to import legacy transportation networks, automating turn prohibitions, and using an algorithm to check for inconsistently formed service areas that sometimes occur in a random fashion with geographic information system software. Failure to exercise quality control using these approaches gives erroneous results: not solving the problem of inconsistently formed service areas led to a region within 50 mi of a 1-mi corridor (where improvements are proposed) having an accessibility almost 40 times higher than the correct value. For the second objective, the influence area (i.e., catchment radius) mattered most: for one project, the forecast accessibility improvement dropped by 80% when an area within 45 mi of the project, rather than an area within 15 mi, was the basis of the analysis. Other decisions affected forecast accessibility improvement less: the choice of the number of centroid connectors affected forecasts by an average of 23% (with a 10-mi influence area). Choosing to eliminate negative net accessibility contributions, attributed to geometric approximations in the software, affected forecasts by less than 21% (35-mi influence area or smaller). Ranking five proposed investments in relation to their forecast accessibility benefit demonstrated the importance of documenting users’ computational choices.
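The time-decayed accessibility measure underlying the analysis can be sketched as a gravity-style sum; the decay rate, job counts, and travel times below are illustrative, not the study's calibrated values:

```python
# Sketch of time-decayed job accessibility for one origin zone:
# A_i = sum_j jobs_j * exp(-beta * t_ij). The decay parameter beta and
# the example zones are assumptions for illustration.
import math

def accessibility(jobs, travel_times, beta=0.1):
    """Jobs reachable from a zone, each discounted by travel time (minutes)."""
    return sum(j * math.exp(-beta * t) for j, t in zip(jobs, travel_times))

jobs = [5000, 12000, 3000]    # jobs in each destination zone
times = [10.0, 25.0, 45.0]    # auto travel time (minutes) to each zone
a = accessibility(jobs, times)
```

Because far-away jobs are discounted toward zero, the chosen influence area (catchment radius) strongly affects the result, which matches the 80% swing reported above when moving from a 15-mi to a 45-mi influence area.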

