Identification of Co-Changed Classes in Software Applications Using Software Quality Attributes

2020 ◽  
Vol 13 (2) ◽  
pp. 110-128 ◽  
Author(s):  
Anushree Agrawal ◽  
R. K. Singh

When software applications are changed frequently, defects can be introduced that eventually lead to expensive operational faults. Comprehensive testing is not feasible with the limited time and resources available, so test cases must be selected and prioritized to complete testing with maximum confidence in minimum time. Advance knowledge of co-changed classes can be very useful during the software maintenance phase. In this article, the authors propose a co-change prediction model that combines structural code measures with dynamic revision history mined from the change repository. Univariate analysis is applied to identify the measures that are useful for co-change identification. The proposed model is validated on eight open source software applications. The results are promising and indicate that the model can be very beneficial in the maintenance of software applications.
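
As a rough, hypothetical illustration of the univariate screening step described in this abstract, the sketch below fits a one-predictor logistic regression for each candidate measure and ranks the measures by ROC AUC; the metrics.csv file, the column names, and the co_change label are placeholders, not artifacts of the original study.

# univariate_screen.py -- minimal sketch of univariate screening of measures
# for co-change prediction (hypothetical data layout, not the authors' code).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Assumed CSV: one row per class pair, structural and history measures plus
# a binary 'co_change' label (1 = the classes changed together).
df = pd.read_csv("metrics.csv")
candidate_measures = ["coupling", "loc", "shared_revisions", "past_co_changes"]

X_train, X_test, y_train, y_test = train_test_split(
    df[candidate_measures], df["co_change"], test_size=0.3, random_state=42)

scores = {}
for measure in candidate_measures:
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train[[measure]], y_train)          # one predictor at a time
    prob = model.predict_proba(X_test[[measure]])[:, 1]
    scores[measure] = roc_auc_score(y_test, prob)   # how well it separates co-changes

for measure, auc in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{measure:>18s}  AUC = {auc:.3f}")       # keep the top-ranked measures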

Maintenance of open source software is a demanding task because the number of reported bugs is huge. The number of projects, components, and versions in an open source project also contributes to the volume of bugs being reported. Classifying bugs by priority and identifying suitable engineers to assign them to still remains a major challenge for such large systems. Bugs that are misclassified, or assigned to engineers who lack the relevant component expertise, drastically increase the time taken to resolve them. In this paper we explore the use of data mining techniques for classifying bugs and assigning them to engineers. Our focus is on classifying bugs as either severe or non-severe and on identifying engineers who have the right expertise to fix them. Bug severity prediction and engineer identification were performed by mining bug reports from JIRA, an issue tracking tool widely used in open source projects. The mining process yielded positive results and can serve as a decision enhancer for severe bugs in the maintenance phase.
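
As a minimal sketch of the severe/non-severe classification step, assuming bug summaries exported from JIRA into a reports.csv file with 'summary' and 'severe' columns (both hypothetical), a TF-IDF plus Naive Bayes pipeline could look like this; it is not the paper's actual pipeline.

# severity_classifier.py -- sketch of severe vs. non-severe bug classification
# from exported JIRA report text (assumed CSV layout, not the paper's method).
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("reports.csv")          # assumed columns: 'summary', 'severe' (0/1)
X_train, X_test, y_train, y_test = train_test_split(
    df["summary"], df["severe"], test_size=0.2, random_state=0, stratify=df["severe"])

clf = make_pipeline(TfidfVectorizer(stop_words="english", ngram_range=(1, 2)),
                    MultinomialNB())
clf.fit(X_train, y_train)                # learn word patterns of severe reports
print(classification_report(y_test, clf.predict(X_test)))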


Author(s):  
HYEON SOO KIM ◽  
YONG RAE KWON ◽  
IN SANG CHUNG

Software restructuring is recognized as a promising method to improve the logical structure and understandability of a software system composed of modules with loosely coupled elements. In this paper, we present methods for restructuring an ill-structured module during the software maintenance phase. The methods identify modules that perform multiple functions and restructure them. To identify such multi-function modules, the notion of a tightly-coupled module that performs a single specific function is formalized. The methods utilize information on data and control dependence, and apply program slicing to extract the tightly-coupled modules from a multi-function module. The identified multi-function module is then restructured into a number of functional-strength modules or into an informational-strength module, with module strength used as the criterion to decide how to restructure. The proposed methods can be readily automated and incorporated into a software tool.
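
To make the slicing idea concrete, the toy sketch below computes a backward slice over a hand-built dependence graph in which nodes are statement IDs and edges are data or control dependences; the graph and the slicing criterion are invented for illustration and merely stand in for the dependence information the paper derives from real modules.

# backward_slice.py -- toy backward slicing over a dependence graph, used here
# only to illustrate how the statements of one function can be pulled apart.
from collections import deque

# Hypothetical dependence graph: edge u -> v means statement v depends on u.
# A multi-function module would show two weakly connected groups of statements,
# each a candidate tightly-coupled (single-function) module.
deps = {
    "s1": ["s3"], "s2": ["s3"], "s3": ["s5"],   # statements of function A
    "s4": ["s6"], "s6": ["s7"],                 # statements of function B
}

def backward_slice(criterion):
    """Return all statements the criterion (transitively) depends on."""
    reverse = {}
    for src, dsts in deps.items():
        for dst in dsts:
            reverse.setdefault(dst, []).append(src)
    seen, work = {criterion}, deque([criterion])
    while work:
        node = work.popleft()
        for pred in reverse.get(node, []):
            if pred not in seen:
                seen.add(pred)
                work.append(pred)
    return seen

print(sorted(backward_slice("s5")))   # statements forming one single-function module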


2011 ◽  
Vol 60 (2) ◽  
pp. 819-824 ◽  
Author(s):  
Oliviero Barana ◽  
Cédric Boulbe ◽  
Sylvain Brémond ◽  
Simone Mannori ◽  
Philippe Moreau ◽  
...  

Author(s):  
Rajvir Singh ◽  
Anita Singhrova ◽  
Rajesh Bhatia

Detection of fault-prone classes helps software testers generate effective class-level test cases. In this article, a novel technique is presented for optimized test case generation for the open source software ant-1.7. Class-level object-oriented (OO) metrics are considered an effective means of finding fault-prone classes. The open source software ant-1.7 is used as a case study to evaluate the proposed technique. The proposed mathematical model, the first of its kind, is generated using the Weka open source software to select effective OO metrics. Effective and ineffective OO metrics are identified using feature selection techniques so that test cases covering fault-prone classes can be generated. In this methodology, only the effective metrics are considered when assigning weights to test paths. The results indicate that the proposed methodology is effective and efficient: the average fault exposition potential of the generated test cases is 90.16%, and the saving in test case execution time is 45.11%.
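
A hedged sketch of the metric-screening idea follows: it ranks class-level OO metrics by mutual information with a fault label and then weights each test path by the selected metrics of the classes it covers. The oo_metrics.csv file, the metric columns, the threshold, and the path definitions are assumptions, and the paper itself performed the selection in Weka rather than in Python.

# metric_selection.py -- sketch: pick effective OO metrics, weight test paths.
import pandas as pd
from sklearn.feature_selection import mutual_info_classif

df = pd.read_csv("oo_metrics.csv")    # assumed columns: class, wmc, cbo, rfc, lcom, faulty
metrics = ["wmc", "cbo", "rfc", "lcom"]
mi = mutual_info_classif(df[metrics], df["faulty"], random_state=0)
effective = [m for m, score in zip(metrics, mi) if score > 0.01]   # keep informative metrics
print("effective metrics:", effective)

# Weight each test path by the effective-metric totals of the classes it exercises.
weights = df.set_index("class")[effective].sum(axis=1)
paths = {"path_1": ["Project", "Target"], "path_2": ["Path", "FileUtils"]}  # hypothetical paths
for name, classes in paths.items():
    print(name, "weight =", weights.reindex(classes).fillna(0).sum())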


2021 ◽  
Author(s):  
Thi Mai Anh Bui ◽  
Nhat Hai Nguyen

Precisely locating buggy files for a given bug report is a cumbersome and time-consuming task, particularly in a large-scale project with thousands of source files and bug reports. An efficient bug localization module is therefore desirable to improve the productivity of the software maintenance phase. Many previous approaches rank source files according to their relevance to a given bug report using simple lexical matching scores. However, lexical mismatches between the natural language used in bug reports and the technical terms of source code can reduce a bug localization system's accuracy. Incorporating domain knowledge through features such as semantic similarity, the fixing frequency of a source file, the code change history, and similar bug reports is crucial to locating buggy files efficiently. In this paper, we propose a bug localization model, BugLocGA, that leverages both lexical and semantic information and explores the relation between a bug report and a source file through several domain features. Given a bug report, we calculate a ranking score for every source file as a weighted sum of all features, where the weights are trained with a genetic algorithm so as to maximize the performance of the bug localization model on two evaluation metrics: mean reciprocal rank (MRR) and mean average precision (MAP). Empirical results on several widely used open source software projects show that our model outperforms state-of-the-art approaches by effectively recommending the relevant files where a bug should be fixed.
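
The weighted-sum ranking and the genetic search over weights can be sketched on toy data as below; the feature values, file names, and GA settings are illustrative assumptions rather than details of BugLocGA, and only MRR (not MAP) is used as the fitness function here.

# ga_weights.py -- sketch of learning feature weights for bug localization with
# a genetic algorithm that maximizes mean reciprocal rank (MRR) on toy data.
import random
random.seed(1)

# For each bug: per-file feature vectors (e.g. lexical, semantic, fix frequency)
# and the set of files actually fixed (all values invented for illustration).
bugs = [
    {"files": {"A.java": [0.9, 0.2, 0.1], "B.java": [0.3, 0.8, 0.4], "C.java": [0.1, 0.1, 0.0]},
     "fixed": {"B.java"}},
    {"files": {"A.java": [0.2, 0.1, 0.9], "B.java": [0.4, 0.3, 0.2], "C.java": [0.8, 0.2, 0.3]},
     "fixed": {"A.java"}},
]

def mrr(weights):
    """Mean reciprocal rank of the first fixed file under a weighted-sum ranking."""
    total = 0.0
    for bug in bugs:
        scored = sorted(bug["files"].items(),
                        key=lambda kv: -sum(w * f for w, f in zip(weights, kv[1])))
        rank = next(i for i, (name, _) in enumerate(scored, 1) if name in bug["fixed"])
        total += 1.0 / rank
    return total / len(bugs)

def evolve(pop_size=20, generations=50, dims=3):
    pop = [[random.random() for _ in range(dims)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=mrr, reverse=True)
        parents = pop[: pop_size // 2]                     # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, dims)                # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:                      # mutation
                child[random.randrange(dims)] = random.random()
            children.append(child)
        pop = parents + children
    return max(pop, key=mrr)

best = evolve()
print("best weights:", [round(w, 2) for w in best], "MRR =", round(mrr(best), 3))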


2020 ◽  
Vol 26 (10) ◽  
pp. 1619-1625
Author(s):  
Ahmad Albshesh ◽  
Bella Ungar ◽  
Shomron Ben-Horin ◽  
Rami Eliakim ◽  
Uri Kopylov ◽  
...  

Background: Mucosal healing has been associated with long-term response to therapy for Crohn disease (CD). However, little is known about the significance of terminal ileum (TI) transmural thickness in predicting clinical outcomes. Methods: In this retrospective observational cohort study, we examined the association between an index ultrasonographic assessment of TI thickness during the maintenance phase and the subsequent clinical outcome of CD in a cohort of patients treated with infliximab (IFX). Treatment failure was defined as treatment discontinuation because of lack of efficacy, a need for dose escalation, or surgery. Clinical response was defined as treatment continuation in the absence of any of the aforementioned failure criteria. Results: Sixty patients with CD receiving IFX therapy were included in the study. The patients were followed for a median of 16 months (5-24 months) after an index intestinal ultrasound. Thirty-eight patients (63.3%) maintained response to the therapy and 22 patients (36.6%) failed the treatment, with a mean follow-up of 10.5 months (6.5-17 months) vs 9.25 months (1-10.25 months), respectively. On univariate analysis, the only variables differing between treatment response and failure were a TI thickness of 2.8 vs 5 mm (P < 0.0001) and an IFX trough level of 6.6 vs 3.9 µg/mL (P = 0.008). On multivariable analysis, only a small bowel thickness of ≥4 mm was associated with the risk of treatment failure (odds ratio, 2.9; 95% CI, 1.49-5.55; P = 0.002). Conclusions: Our findings suggest that a transmural thickness of ≥4 mm can predict subsequent treatment failure in patients with CD treated with IFX, indicating a transmural thickness of <4 mm as a potential novel therapeutic target.


2019 ◽  
Vol 10 (1) ◽  
pp. 16-33
Author(s):  
Miloud Dahane ◽  
Mustapha Kamel Abdi ◽  
Mourad Bouneffa ◽  
Adeel Ahmad ◽  
Henri Basson

Software evolution control relies largely on well-structured software artifacts and on the evaluation of qualitative factors such as maintainability. Changeability attributes are commonly used to measure the capability of the software to change with minimal side effects. This article describes the use of the design of experiments method to evaluate the influence of variations in software metrics on change impact in developed software. Coupling metrics are considered in order to analyze the degree to which they contribute to change impact. The contributing software metrics are expressed in the form of mathematical models, which are then validated on different versions of software to estimate the correlation of coupling metrics with change impact. The proposed approach is evaluated through a set of experiments conducted using statistical analysis tools. It may serve as a measurement tool to identify significant indicators that can be included in a software maintenance dashboard.
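
As a rough sketch of the kind of mathematical model described, assuming a versions.csv file with per-class coupling metrics and an observed change-impact measure (all hypothetical), an ordinary least squares fit could flag the significant coupling indicators; the article itself builds its models with a design-of-experiments method.

# coupling_impact_model.py -- sketch: relate coupling metrics to change impact
# with an ordinary least squares model (column names and CSV files are assumed).
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("versions.csv")   # assumed: cbo, rfc, ca, ce, change_impact per class
X = sm.add_constant(df[["cbo", "rfc", "ca", "ce"]])
model = sm.OLS(df["change_impact"], X).fit()
print(model.summary())             # coefficients and p-values flag the significant metrics

# Hypothetical check on a later version of the same software.
new = pd.read_csv("versions_next.csv")
pred = pd.Series(model.predict(sm.add_constant(new[["cbo", "rfc", "ca", "ce"]])), index=new.index)
print("correlation with observed impact:", new["change_impact"].corr(pred))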


2009 ◽  
pp. 603-619
Author(s):  
Walt Scacchi

This study examines the development of open source software supporting e-commerce (EC) or e-business (EB) capabilities. This entails a case study within a virtual organization engaged in an organizational initiative to develop, deploy, and support free/open source software systems for EC or EB services, like those supporting enterprise resource planning. The objective of this study is to identify and characterize the resource-based software product development capabilities that lie at the center of the initiative, rather than the software itself, or the effectiveness of its operation in a business enterprise. By learning what these resources are, and how they are arrayed into product development capabilities, we can provide the knowledge needed to understand what resources are required to realize the potential of free EC and EB software applications. In addition, the resource-based view draws attention to those resources and capabilities that provide potential competitive advantages and disadvantages to the organization in focus.


2019 ◽  
Vol 31 (06) ◽  
pp. 1950044
Author(s):  
C. C. Manju ◽  
M. Victor Jose

Objective: Antinuclear antibodies (ANA) present in human serum are linked with various autoimmune diseases. Human epithelial type-2 (HEp-2) cells act as the substrate in the indirect immunofluorescence (IIF) test used to diagnose these autoimmune diseases. In recent times, computer-aided diagnosis of autoimmune diseases via HEp-2 cell classification has drawn increasing interest. However, the images often pose challenges such as large intra-class and small inter-class variations, and various efforts have been made to automate HEp-2 cell classification. To overcome these problems, this work proposes a new HEp-2 classification process. Materials and Methods: The process integrates two stages, segmentation and classification. First, the HEp-2 cells are segmented using two morphological operations, opening and closing. The classification stage then uses a modified convolutional neural network (CNN). The objective is to classify the HEp-2 cells effectively into six categories (Centromere, Golgi, Homogeneous, Nucleolar, NuMem, and Speckled), which is achieved through an optimization step: a new algorithm called the Distance Sorting Lion Algorithm (DSLA) selects the optimal convolutional layer in the CNN. Results: In the performance analysis, the proposed model for test case 1 at a learning percentage of 60 is 3.84%, 1.79%, 6.22%, 1.69%, and 5.53% better than PSO, FF, GWO, WOA, and LA, respectively. At a learning percentage of 80, the proposed model is 5.77%, 6.46%, 3.95%, 3.24%, and 5.55% better than PSO, FF, GWO, WOA, and LA, respectively. The performance of the proposed work is thus demonstrated against other models under different measures. Conclusion: The model is evaluated against conventional algorithms in terms of accuracy, sensitivity, specificity, precision, FPR, FNR, NPV, MCC, F1-score, and FDR, confirming the efficacy of the proposed model.
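
A minimal sketch of the opening/closing segmentation step is given below using OpenCV; the input file, threshold, and kernel size are assumptions, and the DSLA-tuned CNN classification stage is not reproduced.

# hep2_segment.py -- sketch of the opening/closing segmentation step with OpenCV
# (file name, threshold and kernel size are assumptions, not the authors' values).
import cv2

img = cv2.imread("hep2_cell.png", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)      # remove small specks
cleaned = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)  # fill small holes

# Each connected component is treated as one candidate cell region.
num, labels = cv2.connectedComponents(cleaned)
print(f"segmented {num - 1} candidate cell regions")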

