A Review on Machine-Learning Based Code Smell Detection Techniques in Object-Oriented Software System(s)

Author(s):
Amandeep Kaur,
Sushma Jain,
Shivani Goel,
Gaurav Dhiman

Context: Code smells are symptoms that something may be wrong in a software system and can cause complications in maintaining software quality. Many code smells exist in the literature, and their identification is far from trivial; accordingly, several techniques have been proposed to automate code smell detection in order to improve software quality. Objective: This paper presents an up-to-date review of simple and hybrid machine-learning-based code smell detection techniques and tools. Methods: We collected all the relevant research published in this field up to 2020, extracted the data from those articles, and classified them into two major categories. In addition, we compared the selected studies on several aspects: code smells, machine learning techniques, datasets, programming languages used by the datasets, dataset size, evaluation approach, and statistical testing. Results: The majority of empirical studies have proposed machine-learning-based code smell detection tools. Support vector machine and decision tree algorithms are the most frequently used by researchers. A major proportion of the research is conducted on open-source software (OSS) such as Xerces, GanttProject, and ArgoUML. Furthermore, researchers have paid the most attention to the Feature Envy and Long Method code smells. Conclusion: We identified several open research areas, such as the need for code smell detection techniques using hybrid approaches and the need for validation on industrial datasets.
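As a minimal, hypothetical illustration of the metrics-based detectors this review surveys, the sketch below learns a single lines-of-code cutoff (a one-feature decision stump, i.e., a degenerate decision tree) from toy labeled methods and uses it to flag Long Method candidates. The metric names, values, and labels are invented for illustration, not taken from any of the reviewed studies.

```python
# Hypothetical sketch: learn a lines-of-code threshold (a one-feature
# decision stump) that best separates methods labeled as Long Method.

def learn_stump(samples):
    """Pick the lines-of-code cutoff that classifies the most samples correctly."""
    best_thresh, best_correct = None, -1
    for loc, _, _ in samples:
        correct = sum((s_loc > loc) == s_label for s_loc, _, s_label in samples)
        if correct > best_correct:
            best_thresh, best_correct = loc, correct
    return best_thresh

# (lines_of_code, cyclomatic_complexity, is_long_method) -- toy training data
train = [(12, 2, False), (18, 3, False), (25, 4, False),
         (80, 15, True), (120, 22, True), (95, 18, True)]

threshold = learn_stump(train)          # candidate cutoffs are the observed LOC values
predict = lambda loc: loc > threshold   # flag a method as a Long Method candidate
```

Real detectors in the surveyed literature combine many such metrics in full decision trees or SVMs; the stump only shows the threshold-learning idea in its simplest form.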

Author(s):
Frederico Luiz Caram,
Bruno Rafael de Oliveira Rodrigues,
Amadeu Silveira Campanelli,
Fernando Silva Parreiras

Code smells, or bad smells, are an accepted approach to identifying design flaws in source code. Although they have been explored by researchers, their interpretation by programmers is rather subjective. One way to deal with this subjectivity is to use machine learning techniques. This paper provides the reader with an overview of machine learning techniques and code smells found in the literature, aiming to determine which methods and practices are used when applying machine learning to code smell identification and which machine learning techniques have been used for it. A mapping study was used to identify the techniques applied to each smell. We found that Bloaters were the main kind of smell studied, addressed by 35% of the papers. The most commonly used technique was Genetic Algorithms (GA), used by 22.22% of the papers. Regarding the smells addressed by each technique, there was a high level of redundancy, in that the smells are covered by a wide range of algorithms. Nevertheless, Feature Envy stood out, being targeted by 63% of the techniques. When it comes to performance, the best average was provided by Decision Tree, followed by Random Forest, semi-supervised, and Support Vector Machine classifier techniques. Five of the 25 analyzed smells were not handled by any machine learning technique. Most studies address several code smells, and in general there is no single outperforming technique, except for a few specific smells. We also found a lack of comparable results due to the heterogeneity of the data sources and of the reported results. We recommend further empirical studies that assess the performance of these techniques on a standardized dataset, to improve comparison reliability and replicability.


2011
Vol 20 (05)
pp. 969-980
Author(s):
Cássio M. M. Pereira,
Rodrigo F. de Mello

Recently, there has been increased interest in self-healing systems. Systems of this type are able to cope with failures in the environment in which they execute and to work continuously by taking proactive actions to correct problems. The detection of faults plays a prominent role in self-healing systems, as faults are the original causes of failures. Fault detection techniques proposed in the literature have been based on three mainstream approaches: process heartbeats, statistical analysis, and machine learning. However, these approaches present limitations. Heartbeat-based techniques only detect failures, not faults. Statistical approaches generally assume linear models. Most machine learning techniques assume the data is independent and identically distributed. To overcome these limitations, we propose a new approach to fault detection, which also gives insight into how process behavior changes over time in the presence of faults. Experiments show that the proposed approach achieves a twofold increase in F-measure when compared to Support Vector Machines (SVM) and Auto-Regressive Integrated Moving Average (ARIMA).
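The abstract does not specify the proposed detector itself, so the sketch below only illustrates the statistical-analysis baseline family it mentions: flag a fault when a monitored process metric deviates from its recent moving average by more than k standard deviations. The window size, threshold, and metric series are all illustrative assumptions.

```python
# Sketch of a simple statistical fault detector: alarm when a reading
# deviates sharply from the moving average of its recent history.
import statistics

def detect_faults(series, window=4, k=3.0):
    """Return indices where the reading deviates from recent history by > k sigma."""
    alarms = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mu = statistics.mean(history)
        sigma = statistics.pstdev(history) or 1e-9  # avoid zero-division on flat history
        if abs(series[i] - mu) > k * sigma:
            alarms.append(i)
    return alarms

series = [10.0, 10.2, 9.9, 10.1, 10.0, 10.1, 25.0, 10.0]  # synthetic metric, spike at index 6
alarms = detect_faults(series)
```

This kind of detector embodies exactly the linearity and stationarity assumptions the paper criticizes, which is why it serves here only as the baseline being improved upon.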


2020
Vol 11 (2)
pp. 20-40
Author(s):
Somya Goyal,
Pradeep Kumar Bhatia

Software quality prediction is one of the most challenging tasks in the development and maintenance of software. Machine learning (ML) is widely incorporated into the prediction of the quality of a final product in the early stages of the software development life cycle (SDLC). An ML prediction model uses software metrics and fault data from previous projects to detect high-risk modules in future projects, so that testing efforts can be targeted at those specific 'risky' modules. Hence, ML-based predictors contribute to detecting development anomalies early and inexpensively and to ensuring the timely delivery of a successful, failure-free, high-quality software product within budget. This article compares 30 software quality prediction models (5 techniques × 6 datasets) built on five ML techniques: artificial neural networks (ANN), support vector machines (SVM), decision trees (DT), k-nearest neighbors (KNN), and naïve Bayes classifiers (NBC), using six datasets: CM1, KC1, KC2, PC1, JM1, and a combined one. These models exploit the predictive power of static code metrics (McCabe complexity metrics) for quality prediction. All thirty predictors are compared using the receiver operating characteristic (ROC) curve, area under the curve (AUC), and accuracy as performance evaluation criteria. The results show that the ANN technique is promising for accurate quality prediction irrespective of the dataset used.
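The comparison protocol described above can be sketched as follows: train several classifiers on static code metrics and rank them by a common score. A hand-rolled 1-nearest-neighbour classifier and a majority-class baseline stand in for the five ML techniques, and the toy (cyclomatic complexity, lines of code) rows are invented rather than drawn from the real CM1/KC1-style datasets.

```python
# Sketch: compare classifiers on code-metric rows using a shared accuracy score.

def knn1(train, x):
    """Label of the closest training point (squared Euclidean distance)."""
    return min(train, key=lambda r: sum((a - b) ** 2 for a, b in zip(r[0], x)))[1]

def majority(train, x):
    """Always predict the most common training label."""
    labels = [y for _, y in train]
    return max(set(labels), key=labels.count)

def accuracy(clf, train, test):
    return sum(clf(train, x) == y for x, y in test) / len(test)

# ((v(g), loc), is_faulty) -- illustrative module metrics
train = [((2, 15), 0), ((3, 20), 0), ((5, 28), 0), ((18, 140), 1), ((25, 210), 1)]
test = [((4, 30), 0), ((20, 160), 1)]

scores = {"1-NN": accuracy(knn1, train, test),
          "baseline": accuracy(majority, train, test)}
```

The article's actual evaluation uses ROC/AUC rather than plain accuracy; the point here is only the shared-score, same-split comparison structure.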


Author(s):
Prashant Udawant,
Atul Patidar,
Abhijeet Singh,
Atyant Yadav

There are various applications of image processing in the fields of engineering, agriculture, graphic design, commerce, historical research, and architecture. This paper studies and compares most of the research done in the fields of image processing and machine learning for the purpose of image classification based on features extracted from the image through different feature extraction techniques. The machine learning techniques studied in this paper are Convolutional Neural Networks (CNN), Support Vector Machines (SVM), and fuzzy logic. The paper studies and compares these methods for their use in the classification of digital images. Color-based segmentation models are used to segment specific features from the image and categorize them into different classes. First, preprocessing is applied to reduce noise in the image. Then image segmentation and edge detection techniques are used to identify objects in the image and extract the features by which the image can be labeled with a specific class.
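The pipeline stages described above can be sketched compactly on a tiny grayscale grid: a 3×3 mean filter for noise reduction, fixed-threshold segmentation, and a boundary-pixel count as an extracted feature. The filter, threshold value, and feature choice are illustrative stand-ins for the full preprocessing, color-based segmentation, and edge-detection steps discussed in the paper.

```python
# Sketch of preprocess -> segment -> extract-feature on a toy grayscale image.

def mean_filter(img):
    """3x3 mean filter (border pixels use the available neighbourhood)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            nbrs = [img[y][x]
                    for y in range(max(0, i - 1), min(h, i + 2))
                    for x in range(max(0, j - 1), min(w, j + 2))]
            out[i][j] = sum(nbrs) / len(nbrs)
    return out

def segment(img, thresh=100):
    """Binary mask: foreground where intensity exceeds the threshold."""
    return [[1 if p > thresh else 0 for p in row] for row in img]

def boundary_pixels(mask):
    """Count foreground pixels with at least one background 4-neighbour."""
    h, w = len(mask), len(mask[0])
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and any(
                    0 <= i + di < h and 0 <= j + dj < w and mask[i + di][j + dj] == 0
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))):
                count += 1
    return count

img = [[0, 0, 0, 0],
       [0, 255, 255, 0],
       [0, 255, 255, 0],
       [0, 0, 0, 0]]
feature = boundary_pixels(segment(mean_filter(img)))
```

In practice such scalar features would be stacked into a vector and fed to the CNN, SVM, or fuzzy classifier being compared.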


Author(s):
Toshita Sharma,
Manan Shah

Diabetes mellitus has been an increasing concern owing to its high morbidity, and the average age of individuals affected by this disease has now decreased to the mid-twenties. Given the high prevalence, it is necessary to address this problem effectively. Many researchers and doctors have now developed detection techniques based on artificial intelligence to better approach problems that are missed due to human error. Various practitioners have deployed data mining techniques with algorithms such as density-based spatial clustering of applications with noise and ordering points to identify the clustering structure, machine vision systems that learn from facial image data to obtain better features for model training, and diagnosis via presentation of iridocyclitis for detection of the disease through iris patterns. Machine learning classifiers such as support vector machines, logistic regression, and decision trees have been comparatively discussed by various authors. Deep learning models such as artificial neural networks and recurrent neural networks have been considered, with a primary focus on long short-term memory and convolutional neural network architectures in comparison with other machine learning models. Various parameters such as the root-mean-square error, mean absolute error, area under the curve, and graphs with varying criteria are commonly used. In this study, challenges pertaining to data inadequacy and model deployment are discussed. The future scope of such methods is also discussed; new methods are expected to enhance the performance of existing models, allowing them to attain greater insight into the conditions on which the prevalence of the disease depends.


2020
Vol 12 (2)
pp. 84-99
Author(s):
Li-Pang Chen

In this paper, we investigate analysis and prediction of time-dependent data, focusing on four different stocks selected from the Yahoo Finance historical database. To build models and predict future stock prices, we consider three different machine learning techniques: Long Short-Term Memory (LSTM), Convolutional Neural Networks (CNN), and Support Vector Regression (SVR). By treating the close price, open price, daily low, daily high, adjusted close price, and volume of trades as predictors in the machine learning methods, we show that prediction accuracy is improved.
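The prediction setup described above, using past price features to predict the next close, can be sketched minimally: a one-variable least-squares fit stands in for the LSTM/CNN/SVR models, and the price series below is synthetic rather than Yahoo Finance data.

```python
# Sketch: predict tomorrow's close from today's close with ordinary least squares.

def fit_ols(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

closes = [100.0, 102.0, 104.0, 106.0, 108.0]  # synthetic daily close prices
xs, ys = closes[:-1], closes[1:]              # pair each day with the next day's close
slope, intercept = fit_ols(xs, ys)
next_close = slope * closes[-1] + intercept
```

The paper's models instead consume windows of all six predictors (open, high, low, close, adjusted close, volume); this sketch only shows the lag-one supervised framing.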


Author(s):  
Anantvir Singh Romana

Accurate diagnostic detection of disease in a patient is critical and may alter the subsequent treatment and increase the chances of survival. Machine learning techniques have been instrumental in disease detection and are currently used in various classification problems due to their accurate prediction performance. Different techniques may provide different accuracies, and it is therefore imperative to use the most suitable method, the one that provides the best results. This research provides a comparative analysis of Support Vector Machine, naïve Bayes, J48 decision tree, and neural network classifiers on breast cancer and diabetes datasets.
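One of the compared classifier families, naïve Bayes, can be sketched on toy two-feature rows (imagine, e.g., glucose and BMI). The Gaussian variant below and all of its numbers are invented for illustration, not taken from the breast cancer or diabetes datasets used in the research.

```python
# Sketch of a Gaussian naive Bayes classifier with uniform class priors.
import math

def fit_gnb(rows):
    """Per-class, per-feature mean and variance."""
    stats = {}
    for label in set(y for _, y in rows):
        cols = list(zip(*[x for x, y in rows if y == label]))
        stats[label] = []
        for col in cols:
            mu = sum(col) / len(col)
            var = sum((v - mu) ** 2 for v in col) / len(col) + 1e-9  # variance smoothing
            stats[label].append((mu, var))
    return stats

def predict_gnb(stats, x):
    """Class with the highest Gaussian log-likelihood."""
    def log_lik(label):
        return sum(-0.5 * math.log(2 * math.pi * var) - (v - mu) ** 2 / (2 * var)
                   for v, (mu, var) in zip(x, stats[label]))
    return max(stats, key=log_lik)

# ((glucose, bmi), has_diabetes) -- illustrative rows
rows = [((90, 22), 0), ((95, 24), 0), ((100, 23), 0),
        ((160, 33), 1), ((170, 35), 1), ((180, 34), 1)]
model = fit_gnb(rows)
```

The "naive" assumption is the per-feature independence visible in the summed log-likelihoods; the comparison in the research pits exactly this kind of model against SVM, J48, and neural networks.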


2020
Author(s):
Azhagiya Singam Ettayapuram Ramaprasad,
Phum Tachachartvanich,
Denis Fourches,
Anatoly Soshilov,
Jennifer C.Y. Hsieh,
...

Perfluoroalkyl and Polyfluoroalkyl Substances (PFASs) pose a substantial threat as endocrine disruptors, and thus early identification of those that may interact with steroid hormone receptors, such as the androgen receptor (AR), is critical. In this study we screened 5,206 PFASs from the CompTox database against the different binding sites on the AR using both molecular docking and machine learning techniques. We developed support vector machine models trained on Tox21 data to classify PFASs as active or inactive against the AR, using different chemical fingerprints as features. The maximum accuracy and Matthews correlation coefficient (MCC) were 95.01% and 0.76, respectively, based on MACCS fingerprints (MACCSFP). The combination of docking-based screening and machine learning models identified 29 PFASs that have strong potential for activity against the AR and should be considered priority chemicals for biological toxicity testing.
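The fingerprint-based classification step can be sketched as follows: binary fingerprint bit-vectors (toy stand-ins for MACCS keys) feed a perceptron, a simple linear classifier standing in for the SVM models trained on Tox21 data. The bit patterns and activity labels are invented for illustration.

```python
# Sketch: linear classification of binary molecular fingerprints.

def train_perceptron(rows, epochs=20, lr=1.0):
    """Classic perceptron updates on (bits, label) rows with labels in {-1, +1}."""
    w = [0.0] * len(rows[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in rows:
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:  # misclassified
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# (fingerprint bits, active=+1 / inactive=-1) -- toy, linearly separable data
rows = [([1, 1, 0, 0], 1), ([1, 1, 1, 0], 1),
        ([0, 0, 1, 1], -1), ([0, 1, 0, 1], -1)]
w, b = train_perceptron(rows)
classify = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
```

Like the study's SVMs, the perceptron learns a separating hyperplane over fingerprint bits, but without the margin maximization and kernels that make SVMs better suited to real fingerprint data.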


2020
Author(s):
Nalika Ulapane,
Karthick Thiyagarajan,
Sarath Kodagoda

Classification has become a vital task in modern machine learning and artificial intelligence applications, including smart sensing. Numerous machine learning techniques are available to perform classification. Similarly, numerous practices, such as feature selection (i.e., selection of a subset of descriptor variables that optimally describe the output), are available to improve classifier performance. In this paper, we consider the case of a given supervised classification task that has to be performed using continuous-valued features. It is assumed that an optimal subset of features has already been selected; therefore, no further feature reduction or feature addition is to be carried out. We then attempt to improve classification performance by passing the given feature set through a transformation that produces a new feature set, which we have named the "Binary Spectrum". Via a case study on Pulsed Eddy Current sensor data captured from an infrastructure monitoring task, we demonstrate how the classification accuracy of a Support Vector Machine (SVM) classifier increases through the use of this Binary Spectrum feature, indicating the feature transformation's potential for broader use.
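The abstract does not define the Binary Spectrum transformation itself. As a loosely analogous, purely hypothetical illustration of the general idea of turning a continuous-valued feature into a binary feature vector before a classifier, the sketch below thresholds each value against a ladder of levels; it is not the authors' transformation.

```python
# Hypothetical sketch: expand a continuous feature into threshold bits.

def binary_expand(value, levels):
    """Map one continuous value to bits: 1 where the value exceeds a level."""
    return [1 if value > level else 0 for level in levels]

levels = [0.2, 0.4, 0.6, 0.8]                               # illustrative level ladder
transformed = [binary_expand(v, levels) for v in (0.1, 0.5, 0.9)]
```

Each scalar becomes a monotone bit pattern, giving a downstream SVM more dimensions along which to place a separating boundary, which is the broad motivation for binary feature expansions of this kind.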


2020
Vol 21
Author(s):
Sukanya Panja,
Sarra Rahem,
Cassandra J. Chu,
Antonina Mitrofanova

Background: In recent years, the availability of high-throughput technologies, the establishment of large molecular patient data repositories, and advances in computing power and storage have allowed elucidation of complex mechanisms implicated in therapeutic response in cancer patients. The breadth and depth of such data, alongside experimental noise and missing values, require a sophisticated human-machine interaction that allows effective learning from complex data and accurate forecasting of future outcomes, ideally embedded in the core of the machine learning design. Objective: In this review, we discuss machine learning techniques utilized for modeling treatment response in cancer, including random forests, support vector machines, neural networks, and linear and logistic regression. We overview their mathematical foundations and discuss their limitations and alternative approaches, all in light of their application to therapeutic response modeling in cancer. Conclusion: We hypothesize that the increase in the number of patient profiles and the potential temporal monitoring of patient data will establish even more complex techniques, such as deep learning and causal analysis, as central players in therapeutic response modeling.

