Software Crucial Functions Ranking and Detection in Dynamic Execution Sequence Patterns

Author(s):  
Bing Zhang ◽  
Chun Shan ◽  
Munawar Hussain ◽  
Jiadong Ren ◽  
Guoyan Huang

Because of the sequence and number of function calls, a static software network cannot reflect the real execution of software, so detecting crucial functions (DCF) based on software networks is controversial. To address this issue, this paper proposes a novel approach to DCF from the viewpoint of software dynamic execution. First, it models the dynamic execution process as an execution sequence by taking functions as nodes and tracing the stack changes that occur. Second, an algorithm for deleting repetitive patterns is designed to simplify the execution sequence and construct software sequence pattern sets. Third, a crucial function detection algorithm is presented to identify the distribution law of the numbers of patterns at different levels and to rank functions by occurrence times, generating a decision-function-ranking-list (DFRL). Finally, the top-k discriminative functions in DFRL are chosen as crucial functions, and a similarity index of decision function sets is set up. Compared with the Degree Centrality Ranking and Betweenness Centrality Ranking approaches, the proposed approach increases node coverage to 80%; experiments on different test cases across four open-source software projects show that it combines the advantages of the two classic algorithms and is both effective and accurate. Monitoring and protecting crucial functions can help increase the efficiency of software testing, strengthen software reliability, and reduce software costs.
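The pipeline described above — simplify each execution trace by deleting repetitive patterns, then rank functions by how often they occur across the resulting patterns — can be sketched roughly as follows. This is a minimal stand-in, not the paper's algorithm: the real repetitive-pattern deletion operates on multi-function subsequences, whereas here only immediate repetitions of a single call are collapsed, and the function names are invented for illustration.

```python
from collections import Counter
from itertools import groupby

def simplify(seq):
    # Collapse immediate repetitions of the same function call,
    # a minimal stand-in for the repetitive-pattern deletion step.
    return [f for f, _ in groupby(seq)]

def rank_functions(traces, k):
    # Count how often each function occurs across the simplified
    # sequence patterns and return the top-k as "crucial" candidates.
    counts = Counter(f for trace in traces for f in simplify(trace))
    return [f for f, _ in counts.most_common(k)]

traces = [
    ["main", "parse", "parse", "eval", "emit"],
    ["main", "parse", "eval", "eval", "emit"],
    ["main", "init", "eval"],
]
print(rank_functions(traces, 2))  # ['main', 'eval']
```

Functions such as `main` and `eval` that survive simplification in every trace dominate the ranking, which mirrors the intuition that crucial functions recur across many distinct sequence patterns.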

Author(s):  
Rozita Dara ◽  
Shimin Li ◽  
Weining Liu ◽  
Angi Smith-Ghorbani ◽  
Ladan Tahvildari

2021 ◽  
Vol 43 (13) ◽  
pp. 2888-2898
Author(s):  
Tianze Gao ◽  
Yunfeng Gao ◽  
Yu Li ◽  
Peiyuan Qin

An essential element for intelligent perception in mechatronic and robotic systems (M&RS) is the visual object detection algorithm. With the ever-increasing advance of artificial neural networks (ANN), researchers have proposed numerous ANN-based visual object detection methods that have proven to be effective. However, networks with cumbersome structures do not befit the real-time scenarios in M&RS, necessitating model compression techniques. In this paper, a novel approach to training light-weight visual object detection networks is developed by revisiting knowledge distillation. Traditional knowledge distillation methods are oriented towards image classification and are not compatible with object detection. Therefore, a variant of knowledge distillation is developed and adapted to a state-of-the-art keypoint-based visual detection method. Two strategies, positive sample retaining and early distribution softening, are employed to yield a natural adaptation. The mutual consistency between the teacher model and the student model is further promoted through hint-based distillation. Extensive controlled experiments show that the proposed method is effective in enhancing the light-weight network's performance by a large margin.
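The core mechanism of knowledge distillation — matching a student's softened output distribution to the teacher's — can be illustrated with a small sketch. This shows the generic temperature-softened KL objective only; the paper's positive-sample retaining and early-distribution-softening strategies, and its keypoint-based detector, are not reproduced here, and the temperature value is an illustrative assumption.

```python
import math

def softmax(logits, temperature=1.0):
    # A temperature > 1 softens the distribution, exposing the
    # teacher's "dark knowledge" about non-maximal classes.
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    # KL divergence between the softened teacher and student
    # distributions: the standard response-based distillation term.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that already matches the teacher incurs zero loss.
print(round(distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]), 6))  # 0.0
```

In training, this term is typically added to the ordinary task loss with a weighting factor, so the student learns from both the ground truth and the teacher's softened predictions.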


2014 ◽  
Vol 2014 ◽  
pp. 1-9 ◽  
Author(s):  
Ali M. Alakeel

Program assertions have been recognized as a supporting tool during software development, testing, and maintenance. Therefore, software developers place assertions within their code in positions that are considered to be error prone or that have the potential to lead to a software crash or failure. Like any other software, programs with assertions must be maintained. Depending on the type of modification applied to the program, assertions might also have to be modified; new assertions may be introduced in the new version of the program, while some assertions can be kept the same. This paper presents a novel approach for test case prioritization during regression testing of programs with assertions using fuzzy logic. The main objective of this approach is to prioritize the test cases according to their estimated potential for violating a given program assertion. The proposed approach uses fuzzy logic techniques to estimate the effectiveness of a given test case in violating an assertion based on the history of the test cases in previous testing operations. A case study in which the proposed approach is applied to various programs shows promising results compared to untreated and randomly ordered test cases.
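The idea of scoring test cases by their historical assertion-violation rate through a fuzzy membership function can be sketched as below. The triangular membership thresholds, the test-case names, and the history format are all illustrative assumptions, not taken from the paper's fuzzy model.

```python
def membership(violation_rate):
    # Triangular fuzzy membership for "likely to violate the assertion":
    # 0 below 0.2, 1 above 0.8, linear in between.  The thresholds are
    # illustrative, not the paper's.
    if violation_rate <= 0.2:
        return 0.0
    if violation_rate >= 0.8:
        return 1.0
    return (violation_rate - 0.2) / 0.6

def prioritize(history):
    # `history` maps a test-case id to (violations, runs) observed in
    # previous testing operations; higher membership runs first.
    scores = {t: membership(v / r) for t, (v, r) in history.items()}
    return sorted(scores, key=scores.get, reverse=True)

history = {"t1": (1, 10), "t2": (9, 10), "t3": (5, 10)}
print(prioritize(history))  # ['t2', 't3', 't1']
```

Tests that have frequently violated assertions in the past are scheduled first, so regressions around those assertions surface as early as possible in the run.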


2017 ◽  
Vol 2 (3) ◽  
pp. 130-139
Author(s):  
A. Kouadri ◽  
A. Kheldoun ◽  
M. Hamadache ◽  
L. Refoufi

This paper presents the application of a new technique, based on the variance of the three-phase stator currents' instantaneous variance (VIV-TPSC), to detect faults in induction motors. The proposed fault detection algorithm is based on computing a confidence interval index (CI) at different load conditions. This index provides an estimate of the amount of error in the considered data and determines the accuracy of the computed statistical estimates. The algorithm offers the advantage of detecting faults, particularly broken rotor bars, independently of loading conditions. Moreover, implementing the algorithm requires only calculating the variance of the measured three-phase stator currents' instantaneous variance. The discrimination between faulty and healthy operation is based on whether the VIV-TPSC value adheres to the CI, which is calculated after verifying that the variance of the instantaneous variance is a random variable following a normal distribution. Rotor and stator resistance values are not used in any part of the CI and VIV-TPSC calculations, making the algorithm more robust. The effectiveness and accuracy of the proposed approach are shown under different faulty operations.
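The confidence-interval test at the heart of this scheme can be sketched as follows. Under the normality assumption the paper verifies, a fault is declared when the observed VIV value falls outside an interval built from healthy-machine samples. The z-value, confidence level, and sample data here are illustrative assumptions, not the paper's figures.

```python
import statistics

def confidence_interval(samples, z=2.58):
    # z = 2.58 gives roughly a 99% interval for the mean under the
    # normality assumption checked for the variance of the
    # instantaneous variance (VIV).
    mean = statistics.mean(samples)
    half = z * statistics.stdev(samples) / len(samples) ** 0.5
    return mean - half, mean + half

def is_faulty(healthy_viv_samples, observed_viv):
    # Declare a fault (e.g. a broken rotor bar) when the observed VIV
    # value does not adhere to the healthy-operation interval.
    lo, hi = confidence_interval(healthy_viv_samples)
    return not (lo <= observed_viv <= hi)

healthy = [1.00, 1.02, 0.98, 1.01, 0.99, 1.03, 0.97, 1.00]
print(is_faulty(healthy, 1.00))  # False
print(is_faulty(healthy, 1.50))  # True
```

Because only measured stator currents feed the statistic, no rotor or stator resistance values enter the calculation, which is what gives the method its robustness to parameter uncertainty.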


2020 ◽  
Vol 70 (4) ◽  
pp. 366-373
Author(s):  
Congliang Ye ◽  
Qi Zhang

To prevent initiation failure caused by an uncontrolled fuze and to improve weapon reliability in high-speed double-event fuel-air explosive (DEFAE) applications, it is necessary to study the twice-detonating fuze (TDF) motion trajectory and set up a TDF design system. Hence, a novel approach to realising fixed single-point center initiation by the TDF within the fuel-air cloud is proposed. Because full-scale experiments are expensive and difficult, a computational model of the TDF motion state is built using nonlinear mechanics analysis. Moreover, the TDF guidance design system is programmed in MATLAB using the equations of mechanical equilibrium. With this system, the influence of each input parameter on the TDF motion trajectory is studied in detail. The result of a representative TDF example indicates that this paper provides an economical approach to TDF design, together with a high-efficiency graphical user interface that helps weapon designers facilitate high-speed DEFAE missile development.
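The kind of trajectory computation such a design system performs can be illustrated with a toy point-mass integration. This is emphatically not the paper's nonlinear mechanics model (which is built in MATLAB and accounts for far more than gravity and drag); the drag coefficient, launch conditions, and forces here are illustrative assumptions only.

```python
import math

def simulate_trajectory(v0, angle_deg, drag_coeff=0.05, dt=0.001, g=9.81):
    # Euler integration of a point mass with quadratic drag: a toy
    # stand-in for a fuze-trajectory computation, run until the
    # projectile returns to launch height.
    angle = math.radians(angle_deg)
    vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)
    x = y = t = 0.0
    while y >= 0.0:
        v = math.hypot(vx, vy)
        vx -= drag_coeff * v * vx * dt          # drag opposes motion
        vy -= (g + drag_coeff * v * vy) * dt    # gravity plus drag
        x += vx * dt
        y += vy * dt
        t += dt
    return x, t  # downrange distance and flight time

x, t = simulate_trajectory(50.0, 45.0)
```

Sweeping the input parameters of such a model one at a time, as the abstract describes, shows how each one shapes the trajectory and hence the achievable initiation point within the fuel-air cloud.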


Author(s):  
Anjan Pakhira ◽  
Peter Andras

Testing is a critical phase in the software life-cycle. While small-scale, component-wise testing is done routinely as part of the development and maintenance of large-scale software, system-level testing of the whole software is much more problematic, due to the low coverage of potential usage scenarios by test cases and the high costs associated with wide-scale testing of large software. Here, the authors investigate the use of cloud computing to facilitate the testing of large-scale software. They discuss the aspects of cloud-based testing and provide an example application: testing the functional importance of methods of classes in the Google Chrome software. The methods tested are predicted to be functionally important with respect to a functionality of the software; the authors use network analysis applied to dynamic analysis data generated by the software to make these predictions. They check the validity of these predictions by mutation testing of a large number of mutated variants of Google Chrome. The chapter provides details of how to set up the testing process on the cloud and discusses relevant technical issues.
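The validity check via mutation testing reduces to a simple question: what fraction of mutated variants does the test suite detect? A minimal sketch, with an invented `add` function and hypothetical mutants standing in for mutated Chrome methods:

```python
def mutation_score(mutants, test_suite):
    # Fraction of mutants killed by the suite.  A mutant is "killed"
    # when at least one test detects a deviation from the original
    # behaviour.
    killed = sum(
        1 for mutant in mutants
        if any(not test(mutant) for test in test_suite)
    )
    return killed / len(mutants)

# Original behaviour and two hypothetical mutants of an `add` method.
original = lambda a, b: a + b
mutants = [lambda a, b: a - b, lambda a, b: a + b + 1]

# Each test compares a variant against the original on one input.
tests = [lambda f: f(2, 3) == original(2, 3),
         lambda f: f(0, 0) == original(0, 0)]
print(mutation_score(mutants, tests))  # 1.0
```

Methods predicted to be functionally important should yield many killed mutants, since mutating them visibly disturbs the software's behaviour; running the many mutated builds in parallel is exactly where the cloud pays off.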


2015 ◽  
pp. 1175-1203
Author(s):  
Anjan Pakhira ◽  
Peter Andras

Testing is a critical phase in the software life-cycle. While small-scale, component-wise testing is done routinely as part of the development and maintenance of large-scale software, system-level testing of the whole software is much more problematic, due to the low coverage of potential usage scenarios by test cases and the high costs associated with wide-scale testing of large software. Here, the authors investigate the use of cloud computing to facilitate the testing of large-scale software. They discuss the aspects of cloud-based testing and provide an example application: testing the functional importance of methods of classes in the Google Chrome software. The methods tested are predicted to be functionally important with respect to a functionality of the software; the authors use network analysis applied to dynamic analysis data generated by the software to make these predictions. They check the validity of these predictions by mutation testing of a large number of mutated variants of Google Chrome. The chapter provides details of how to set up the testing process on the cloud and discusses relevant technical issues.


2020 ◽  
Vol 22 (3) ◽  
pp. 510-527
Author(s):  
Maarten van Ormondt ◽  
Kees Nederhoff ◽  
Ap van Dongeren

The open-source program Delft Dashboard (DDB) is a graphical user interface designed to quickly create, edit, and visualize model inputs for a number of hydrodynamic models, using private or publicly available local and global datasets. It includes a number of toolboxes that facilitate the generation of spatially varying inputs, including new model schematizations (grids, bathymetry, boundary conditions, etc.), cyclonic wind fields, and initial tsunami waves. The use of DDB can have significant benefits: it can save modellers considerable time and effort, and the automated nature of both data collection and pre-processing within the program reduces the likelihood of errors that can occur when setting up models manually. Three case studies are presented: simulation of tides in the North Sea, storm surge and wave modelling under tropical cyclone conditions, and the simulation of a tsunami. The test cases show that models created with DDB can be set up efficiently while maintaining a predictive skill that is only slightly lower than that of extensively calibrated models.


2020 ◽  
Vol 2020 ◽  
pp. 1-15
Author(s):  
Benjamin Vedder ◽  
Bo Joel Svensson ◽  
Jonny Vinter ◽  
Magnus Jonsson

Autonomous vehicles need accurate and dependable positioning, and these systems need to be tested extensively. We have evaluated positioning based on ultrawideband (UWB) ranging with our self-driving model car using a highly automated approach. Random drivable trajectories were generated, while the UWB position was compared against the Real-Time Kinematic Satellite Navigation (RTK-SN) positioning system with which our model car is also equipped. Fault injection was used to study the fault tolerance of the UWB positioning system. The challenges addressed are automatically generating test cases for real-time hardware, restoring the state between tests, and maintaining safety by preventing collisions. We were able to automatically generate and carry out hundreds of experiments on the model car in real time and rerun them consistently with and without fault injection enabled. Thereby, we demonstrate a novel approach to performing automated testing on complex real-time hardware.
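The two building blocks of such an evaluation — injecting faults into the UWB measurements and scoring the resulting position estimate against the RTK-SN reference — can be sketched as below. The fault kinds, magnitudes, and data layout are illustrative assumptions, not the paper's fault model.

```python
import random

def inject_range_fault(ranges, kind="offset", magnitude=0.5, rng=None):
    # Return the UWB anchor ranges with one fault injected.  "offset"
    # adds a constant bias to one anchor; "dropout" makes it stop
    # reporting.  Both are illustrative fault types.
    rng = rng or random.Random(0)
    idx = rng.randrange(len(ranges))
    faulty = list(ranges)
    if kind == "offset":
        faulty[idx] += magnitude
    elif kind == "dropout":
        faulty[idx] = None
    return faulty

def position_error(uwb_pos, rtk_pos):
    # Euclidean deviation of the UWB estimate from the RTK-SN reference.
    return ((uwb_pos[0] - rtk_pos[0]) ** 2
            + (uwb_pos[1] - rtk_pos[1]) ** 2) ** 0.5

print(position_error((1.0, 2.0), (1.0, 2.5)))  # 0.5
```

Logging this error along each randomly generated trajectory, with and without injection enabled, is what lets hundreds of experiments be rerun and compared consistently.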


2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
Haitao He ◽  
Chun Shan ◽  
Xiangmin Tian ◽  
Yalei Wei ◽  
Guoyan Huang

Identifying influential nodes is important in software for understanding design patterns and controlling the development and maintenance process. However, no efficient methods to discover them have existed so far. Based on the invoking dependency relationships between nodes, this paper proposes a novel approach to defining node importance for mining influential software nodes. First, according to multiple execution traces, we construct a weighted software network (WSN) to denote the software execution dependency structure. Second, considering the invoking counts and outdegrees of software nodes, we improve PageRank and put forward the targeted algorithm FunctionRank to evaluate node importance (NI) in the weighted software network; a node with a larger NI value has higher influence. Finally, by comparing the NI of nodes, we can obtain the most influential nodes in the software network. Experimental results show that the proposed approach performs well in identifying influential nodes.
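A weighted PageRank in the spirit of FunctionRank — distributing each node's rank to its callees in proportion to invoking counts — can be sketched as follows. This is not the paper's exact formula; the call graph, weights, and damping factor are illustrative assumptions.

```python
def function_rank(graph, weights, damping=0.85, iterations=50):
    # Power iteration over a weighted call graph.  `graph[u]` lists the
    # functions u invokes; `weights[(u, v)]` is the invoking count.
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for u in nodes:
            total = sum(weights[(u, v)] for v in graph[u])
            for v in graph[u]:
                # Distribute u's rank proportionally to invoking counts,
                # so heavily invoked callees accumulate more importance.
                new[v] += damping * rank[u] * weights[(u, v)] / total
        rank = new
    return rank

graph = {"main": ["parse", "eval"], "parse": ["eval"], "eval": ["main"]}
weights = {("main", "parse"): 1, ("main", "eval"): 3,
           ("parse", "eval"): 1, ("eval", "main"): 1}
rank = function_rank(graph, weights)
print(max(rank, key=rank.get))  # 'eval'
```

Here `eval` ends up with the largest NI because it receives the heavily weighted edge from `main` plus everything from `parse`, matching the intuition that invoking counts, not just degree, should drive influence.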

