A GENETIC ALGORITHM FOR IMPROVING ACCURACY OF SOFTWARE QUALITY PREDICTIVE MODELS: A SEARCH-BASED SOFTWARE ENGINEERING APPROACH

Author(s):  
Danielle Azar

In this work, we present a genetic algorithm to optimize predictive models used to estimate software quality characteristics. Software quality assessment is crucial in software development since it helps reduce cost, time, and effort. Software quality characteristics cannot be measured directly, but they can be estimated from other measurable software attributes (such as coupling, size, and complexity). Software quality estimation models establish a relationship between the unmeasurable characteristics and the measurable attributes. However, these models are hard to generalize and reuse on new, unseen software, as their accuracy deteriorates significantly. In this paper, we present a genetic algorithm that adapts such models to new data. We give empirical evidence that our approach outperforms the machine learning algorithm C4.5 and random guessing.
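
As a rough illustration of the adaptation idea, the sketch below evolves the numeric thresholds of a simple rule-based quality classifier against newly labelled modules. The rule encoding, fitness function, and GA parameters are assumptions for demonstration, not the encoding used in the paper.

```python
# Hedged sketch: a genetic algorithm re-tunes the thresholds of a
# rule-based quality model on new data. All design choices below
# (encoding, operators, rates) are illustrative assumptions.
import random

def predict(thresholds, metrics):
    # Flag a module as fault-prone if any metric exceeds its threshold.
    return int(any(m > t for m, t in zip(metrics, thresholds)))

def fitness(thresholds, data):
    # Classification accuracy on the newly labelled data set.
    return sum(predict(thresholds, x) == y for x, y in data) / len(data)

def adapt_model(initial, data, pop_size=50, generations=100, mut_rate=0.1):
    # Seed the population with perturbed copies of the original model so
    # the search starts near the existing thresholds (assumes >= 2 metrics).
    pop = [[t * random.uniform(0.8, 1.2) for t in initial]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(ind, data), reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(a))      # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut_rate:         # small multiplicative mutation
                i = random.randrange(len(child))
                child[i] *= random.uniform(0.9, 1.1)
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda ind: fitness(ind, data))
```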

2021
Author(s):  
Xianwang Li
Zhongxiang Huang
Wenhui Ning

Machine learning is gradually being developed and applied in more and more fields, and the intelligent manufacturing system is an important system model that many companies and enterprises are designing and implementing. The purpose of this study is to evaluate and analyze the design of intelligent manufacturing system models based on machine learning algorithms. The method is to first obtain all attributes relevant to the intelligent manufacturing system model and then use a machine learning algorithm to delete irrelevant attributes, preventing redundancy and bias in the neural network fit while keeping the original probability distribution as close as possible to the distribution obtained with the selected attributes; measurable, explicit data indicators are expressed quantitatively as ratios to the industry average. As a result, the average running time of the intelligent manufacturing system is 17.35 seconds, of which the genetic algorithm accounts for 15.63 seconds and the machine learning network for 1.72 seconds. Under the machine learning algorithm, the training speed is clearly higher than that of the genetic algorithm, and the BP network is 2.1% faster than the Elman algorithm. Evaluation of the system model design runs quickly and with high accuracy. This study provides value for the design, evaluation, and algorithms of system models in the intelligent era.
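
The attribute-deletion step described above can be illustrated with a simple relevance filter. The sketch below ranks attributes by mutual information with the target and drops uninformative ones before any network fitting; the synthetic data and the 0.01 relevance cutoff are illustrative assumptions, not details from the study.

```python
# Illustrative sketch of attribute pruning: keep only attributes that
# carry information about the target, so the reduced representation
# stays close to the original distribution. Cutoff and data are assumed.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def select_attributes(X, y, min_relevance=0.01):
    scores = mutual_info_classif(X, y, random_state=0)
    keep = scores > min_relevance
    return X[:, keep], keep

# Synthetic example: 200 samples, 10 attributes, only the first three
# actually related to the target.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] - X[:, 2] > 0).astype(int)
X_sel, mask = select_attributes(X, y)
print("kept attributes:", np.flatnonzero(mask))
```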


Forests
2021
Vol 12 (2)
pp. 216
Author(s):  
Mi Luo
Yifu Wang
Yunhong Xie
Lai Zhou
Jingjing Qiao
...  

Increasing numbers of explanatory variables tend to result in information redundancy and “dimensional disaster” in the quantitative remote sensing of forest aboveground biomass (AGB). Feature selection of model factors is an effective method for improving the accuracy of AGB estimates. Machine learning algorithms are also widely used in AGB estimation, although little research has addressed the use of the categorical boosting algorithm (CatBoost) for AGB estimation. Both feature selection and regression for AGB estimation models are typically performed with the same machine learning algorithm, but there is no evidence to suggest that this is the best method. Therefore, the present study focuses on evaluating the performance of the CatBoost algorithm for AGB estimation and comparing the performance of different combinations of feature selection methods and machine learning algorithms. AGB estimation models of four forest types were developed based on Landsat OLI data using three feature selection methods (recursive feature elimination (RFE), variable selection using random forests (VSURF), and least absolute shrinkage and selection operator (LASSO)) and three machine learning algorithms (random forest regression (RFR), extreme gradient boosting (XGBoost), and categorical boosting (CatBoost)). Feature selection had a significant influence on AGB estimation. RFE preserved the most informative features for AGB estimation and was superior to VSURF and LASSO. In addition, CatBoost improved the accuracy of the AGB estimation models compared with RFR and XGBoost. AGB estimation models using RFE for feature selection and CatBoost as the regression algorithm achieved the highest accuracy, with root mean square errors (RMSEs) of 26.54 Mg/ha for coniferous forest, 24.67 Mg/ha for broad-leaved forest, 22.62 Mg/ha for mixed forests, and 25.77 Mg/ha for all forests. The combination of RFE and CatBoost had better performance than the VSURF–RFR combination in which random forests were used for both feature selection and regression, indicating that feature selection and regression performed by a single machine learning algorithm may not always ensure optimal AGB estimation. Extending the application of new machine learning algorithms and feature selection methods is a promising way to improve the accuracy of AGB estimates.
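
A minimal sketch of the best-performing pipeline reported above, RFE for feature selection followed by CatBoost for the regression, is given below. The random-forest ranker inside RFE, the number of retained features, and the synthetic data stand in for details not given in the abstract.

```python
# Hedged sketch of an RFE -> CatBoost pipeline for AGB regression.
# Ranker choice, feature count, and data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from catboost import CatBoostRegressor

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 30))    # e.g. Landsat-derived candidate variables
y = X[:, 0] * 40 + X[:, 1] * 25 + rng.normal(scale=10, size=500)  # AGB, Mg/ha

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: RFE keeps the most informative predictors.
selector = RFE(RandomForestRegressor(n_estimators=100, random_state=0),
               n_features_to_select=8).fit(X_tr, y_tr)

# Step 2: CatBoost regression on the reduced feature set.
model = CatBoostRegressor(iterations=300, verbose=False, random_seed=0)
model.fit(selector.transform(X_tr), y_tr)

pred = model.predict(selector.transform(X_te))
rmse = mean_squared_error(y_te, pred) ** 0.5
print(f"RMSE: {rmse:.2f} Mg/ha")
```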


Author(s):  
Kehan Gao
Taghi M. Khoshgoftaar

Timely and accurate prediction of the quality of software modules in the early stages of the software development life cycle is very important in the field of software reliability engineering. With such predictions, a software quality assurance team can assign the limited quality improvement resources to the needed areas and prevent problems from occurring during system operation. Software metrics-based quality estimation models are tools that can achieve such predictions. They are generally of two types: a classification model that predicts the class membership of modules into two or more quality-based classes (Khoshgoftaar et al., 2005b), and a quantitative prediction model that estimates the number of faults (or some other quality factor) that are likely to occur in software modules (Ohlsson et al., 1998). In recent years, a variety of techniques have been developed for software quality estimation (Briand et al., 2002; Khoshgoftaar et al., 2002; Ohlsson et al., 1998; Ping et al., 2002), most of which are suited for either prediction or classification, but not for both. For example, logistic regression (Khoshgoftaar & Allen, 1999) can only be used for classification, whereas multiple linear regression (Ohlsson et al., 1998) can only be used for prediction. Some software quality estimation techniques, such as case-based reasoning (Khoshgoftaar & Seliya, 2003), can be used to calibrate both prediction and classification models; however, they require distinct modeling approaches for the two model types. In contrast to such software quality estimation methods, count models such as the Poisson regression model (PRM) and the zero-inflated Poisson (ZIP) regression model (Khoshgoftaar et al., 2001) can yield both types of models with a single modeling approach. Moreover, count models are capable of providing the probability that a module has a given number of faults. Despite the attractiveness of calibrating software quality estimation models with count modeling techniques, we feel that their application in software reliability engineering has been very limited (Khoshgoftaar et al., 2001). This study can be used as a basis for assessing the usefulness of count models for predicting the number of faults and quality-based class of software modules.
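
The dual use of a count model described above, one fitted model serving both tasks, can be sketched as follows: a single Poisson regression yields an expected fault count per module and, from the probability of at least one fault, a fault-prone classification. The metrics, simulated data, and 0.5 cutoff below are illustrative assumptions.

```python
# Minimal sketch: one Poisson regression fit gives (a) an expected
# fault count per module and (b) a fault-prone classification from
# P(faults >= 1) = 1 - exp(-lambda). Data and cutoff are assumed.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))                 # e.g. size, coupling, complexity
lam = np.exp(0.3 * X[:, 0] + 0.5 * X[:, 1])   # true fault rate
faults = rng.poisson(lam)

model = sm.GLM(faults, sm.add_constant(X),
               family=sm.families.Poisson()).fit()

lam_hat = model.predict(sm.add_constant(X))   # expected number of faults
p_faulty = 1 - np.exp(-lam_hat)               # P(at least one fault)
fault_prone = p_faulty > 0.5                  # classification from same model
print(model.params, fault_prone.mean())
```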


2017
Author(s):  
Sujay Kakarmath
Sara Golas
Jennifer Felsted
Joseph Kvedar
Kamal Jethwani
...  

BACKGROUND: Big data solutions, particularly machine learning predictive algorithms, have demonstrated the ability to unlock value from data in real time in many settings outside of health care. Rapid growth in electronic medical record adoption and the shift from a volume-based to a value-based reimbursement structure in the US health care system have spurred investments in machine learning solutions. Machine learning methods can be used to build flexible, customized, and automated predictive models to optimize resource allocation and improve the efficiency and quality of health care. However, these models are prone to overfitting, confounding, and decay in predictive performance over time. It is, therefore, necessary to evaluate machine learning–based predictive models in an independent dataset before they can be adopted in clinical practice. In this paper, we describe the protocol for independent, prospective validation of a machine learning–based model trained to predict the risk of 30-day re-admission in patients with heart failure.

OBJECTIVE: This study aims to prospectively validate a machine learning–based predictive model for inpatient admissions in patients with heart failure by comparing its predictions of 30-day re-admission risk against outcomes observed prospectively in an independent patient cohort.

METHODS: All adult patients with heart failure who are discharged alive from an inpatient admission will be prospectively monitored for 30-day re-admissions through reports generated by the electronic medical record system. Patients who are part of the training dataset will be excluded to avoid information leakage to the algorithm. An expected sample size of 1228 index admissions will be required to observe a minimum of 100 re-admission events within 30 days. Deidentified structured and unstructured data will be fed to the algorithm, and its predictions will be recorded. Overall model performance will be assessed using the concordance statistic. Furthermore, multiple discrimination thresholds for screening high-risk patients will be evaluated according to sensitivity, specificity, predictive values, and estimated cost savings to our health care system.

RESULTS: The project received funding in April 2017, and data collection began in June 2017. Enrollment was completed in July 2017. Data analysis is currently underway, and the first results are expected to be submitted for publication in October 2018.

CONCLUSIONS: To the best of our knowledge, this is one of the first studies to prospectively evaluate a predictive machine learning algorithm in a real-world setting. Findings from this study will help measure the robustness of predictions made by machine learning algorithms and set a realistic benchmark for the gains that can be expected from their application to health care.

REGISTERED REPORT IDENTIFIER: RR1-10.2196/9466
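
As a sketch of the planned performance evaluation, the code below computes the concordance statistic (equivalent to the ROC AUC for a binary 30-day outcome) and the sensitivity, specificity, and predictive values at one screening threshold. The simulated predictions and the 0.3 threshold are assumptions for illustration only.

```python
# Hedged sketch of the evaluation plan: C-statistic plus threshold
# metrics. Risk scores and outcomes are simulated stand-ins.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
risk = rng.uniform(size=1228)                   # model-predicted risk scores
readmit = rng.uniform(size=1228) < risk * 0.16  # observed 30-day outcomes

# Concordance statistic == ROC AUC for a binary outcome.
print("C-statistic:", round(roc_auc_score(readmit, risk), 3))

flag = risk >= 0.3                              # screening threshold (assumed)
tp = (flag & readmit).sum(); fp = (flag & ~readmit).sum()
fn = (~flag & readmit).sum(); tn = (~flag & ~readmit).sum()
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("PPV:", tp / (tp + fp), "NPV:", tn / (tn + fn))
```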

