Do we need different machine learning algorithms for QSAR modeling? A comprehensive assessment of 16 machine learning algorithms on 14 QSAR data sets

Author(s):  
Zhenxing Wu ◽  
Minfeng Zhu ◽  
Yu Kang ◽  
Elaine Lai-Han Leung ◽  
Tailong Lei ◽  
...  

Abstract Although a wide variety of machine learning (ML) algorithms have been utilized to learn quantitative structure–activity relationships (QSARs), there is no single agreed-upon best algorithm for QSAR learning. Therefore, a comprehensive understanding of the performance characteristics of popular ML algorithms used in QSAR learning is highly desirable. In this study, five linear algorithms [linear function Gaussian process regression (linear-GPR), linear function support vector machine (linear-SVM), partial least squares regression (PLSR), multiple linear regression (MLR) and principal component regression (PCR)], three analogizers [radial basis function support vector machine (rbf-SVM), K-nearest neighbor (KNN) and radial basis function Gaussian process regression (rbf-GPR)], six symbolists [extreme gradient boosting (XGBoost), Cubist, random forest (RF), multiple adaptive regression splines (MARS), gradient boosting machine (GBM), and classification and regression tree (CART)] and two connectionists [principal component analysis artificial neural network (pca-ANN) and deep neural network (DNN)] were employed to learn regression-based QSAR models for 14 public data sets comprising nine physicochemical properties and five toxicity endpoints. The results show that rbf-SVM, rbf-GPR, XGBoost and DNN generally perform better than the other algorithms. The overall performances of the different algorithms can be ranked from best to worst as follows: rbf-SVM > XGBoost > rbf-GPR > Cubist > GBM > DNN > RF > pca-ANN > MARS > linear-GPR ≈ KNN > linear-SVM ≈ PLSR > CART ≈ PCR ≈ MLR. In terms of prediction accuracy and computational efficiency, SVM and XGBoost are recommended for regression learning on small data sets, and XGBoost is an excellent choice for large data sets. We then investigated the performance of ensemble models built by integrating the predictions of multiple ML algorithms. The results show that ensembles of two or three algorithms from different categories can indeed improve on the predictions of the best individual ML algorithms.
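A minimal sketch of the consensus idea described above (averaging the predictions of two algorithms from different categories, here rbf-SVM and XGBoost) is given below. The descriptor matrix, activity values and hyperparameters are placeholders, not the authors' exact protocol.

```python
# Sketch of a two-algorithm consensus QSAR regressor, assuming precomputed
# molecular descriptors X and activities y; the unweighted averaging scheme
# is illustrative only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR
from sklearn.metrics import r2_score
from xgboost import XGBRegressor

X, y = np.random.rand(500, 200), np.random.rand(500)  # placeholder descriptors / activities
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

svm = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, gamma="scale"))
xgb = XGBRegressor(n_estimators=500, learning_rate=0.05, max_depth=6, random_state=0)
svm.fit(X_tr, y_tr)
xgb.fit(X_tr, y_tr)

# Consensus prediction: mean of the two base-model predictions.
y_pred = 0.5 * (svm.predict(X_te) + xgb.predict(X_te))
print("ensemble R2:", r2_score(y_te, y_pred))
```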

2020 ◽  
Vol 9 (9) ◽  
pp. 507
Author(s):  
Sanjiwana Arjasakusuma ◽  
Sandiaga Swahyu Kusuma ◽  
Stuart Phinn

Machine learning has been employed for various mapping and modeling tasks using input variables from different sources of remote sensing data. For feature selection involving data with high spatial and spectral dimensionality, various methods have been developed and incorporated into the machine learning framework to ensure an efficient and optimal computational process. This research aims to assess the accuracy of various feature selection and machine learning methods for estimating forest height using AISA (airborne imaging spectrometer for applications) hyperspectral bands (479 bands) and airborne light detection and ranging (lidar) height metrics (36 metrics), alone and combined. Feature selection and dimensionality reduction using Boruta (BO), principal component analysis (PCA), simulated annealing (SA), and genetic algorithm (GA), in combination with machine learning algorithms such as multivariate adaptive regression spline (MARS), extra trees (ET), support vector regression (SVR) with a radial basis function kernel, and extreme gradient boosting (XGB) with tree (XGBtree and XGBdart) and linear (XGBlin) boosters, were evaluated. The results demonstrated that the combinations BO-XGBdart and BO-SVR delivered the best model performance for estimating tropical forest height by combining lidar and hyperspectral data, with R2 = 0.53 and RMSE = 1.7 m (nRMSE of 18.4% and bias of 0.046 m) for BO-XGBdart and R2 = 0.51 and RMSE = 1.8 m (nRMSE of 15.8% and bias of −0.244 m) for BO-SVR. Our study also demonstrated the effectiveness of BO for variable selection: it reduced the data by 95%, selecting the 29 most important of the initial 516 variables from the lidar metrics and hyperspectral data.
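A hedged sketch of a BO-SVR style workflow (Boruta variable selection followed by an RBF support vector regressor) follows. It assumes the boruta package (BorutaPy) is installed; the feature matrix and height targets are synthetic placeholders, not the study's data.

```python
# Boruta-based variable selection followed by an RBF SVR, roughly mirroring
# the BO-SVR combination; arrays below are placeholders with an injected signal
# so that Boruta has something to confirm.
import numpy as np
from boruta import BorutaPy
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

X = np.random.rand(300, 516)                           # placeholder lidar metrics + hyperspectral bands
y = X[:, :5].sum(axis=1) + 0.1 * np.random.rand(300)   # placeholder canopy height with a real signal

selector = BorutaPy(
    RandomForestRegressor(n_estimators=200, n_jobs=-1),
    n_estimators="auto", max_iter=50, random_state=42,
)
selector.fit(X, y)
X_sel = selector.transform(X)                          # keep only variables Boruta confirms

svr = SVR(kernel="rbf", C=10.0, gamma="scale")
print("CV R2 on selected features:",
      cross_val_score(svr, X_sel, y, cv=5, scoring="r2").mean())
```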


2019 ◽  
Vol 8 (2) ◽  
pp. 3697-3705 ◽  

Forest fires have become one of the most frequently occurring disasters in recent years. Their effects have a lasting impact on the environment, as they lead to deforestation and global warming, which is also one of the major causes of their occurrence. Forest fires are commonly dealt with by collecting satellite images of forests and notifying the authorities when an emergency caused by fire is detected, so that its effects can be mitigated. By the time the authorities learn of a fire, however, it may already have caused a great deal of damage. Data mining and machine learning techniques can provide an efficient prevention approach, in which data associated with forests are used to predict the eventuality of forest fires. This paper uses a dataset from the UCI machine learning repository consisting of physical factors and climatic conditions of the Montesinho park situated in Portugal. Various algorithms, such as Logistic Regression, Support Vector Machine, Random Forest and K-Nearest Neighbors, in addition to Bagging and Boosting predictors, are used, both with and without Principal Component Analysis (PCA). Among the models in which PCA was applied, Logistic Regression gave the highest F1 score of 68.26, and among the models without PCA, Gradient Boosting gave the highest score of 68.36.
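The with-PCA variant mentioned above can be sketched as a standardize-reduce-classify pipeline scored with F1. This is a minimal illustration under the assumption that the Montesinho records have been numerically encoded and a binary fire label derived; the shapes and parameters are placeholders.

```python
# PCA + logistic regression pipeline evaluated with cross-validated F1,
# assuming a numerically encoded feature matrix and a derived binary label.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X = np.random.rand(517, 12)          # placeholder: 517 records, 12 encoded attributes
y = np.random.randint(0, 2, 517)     # placeholder binary fire label

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=0.95),          # keep components explaining 95% of the variance
    LogisticRegression(max_iter=1000),
)
print("mean F1:", cross_val_score(model, X, y, cv=5, scoring="f1").mean())
```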


2021 ◽  
Author(s):  
Leonie Lampe ◽  
Sebastian Niehaus ◽  
Hans-Jürgen Huppertz ◽  
Alberto Merola ◽  
Janis Reinelt ◽  
...  

Abstract Importance: The entry of artificial intelligence into medicine is imminent. Several methods have been used for prediction from structured neuroimaging data, yet they have not been compared in this context. Objective: Multi-class prediction is key for building computational aid systems for differential diagnosis. We compared support vector machine, random forest, gradient boosting, and deep feed-forward neural networks for the classification of different neurodegenerative syndromes based on structural magnetic resonance imaging. Design, Setting, and Participants: Atlas-based volumetry was performed on multi-centric T1-weighted MRI data from 940 subjects, i.e. 124 healthy controls and 816 patients with ten different neurodegenerative diseases, leading to a multi-diagnostic multi-class classification task with eleven different classes. Interventions: n.a. Main Outcomes and Measures: Cohen's kappa, accuracy, and F1-score were used to assess model performance. Results: Overall, the neural network produced both the best performance measures and the most robust results. The smaller classes, however, were better classified by either the ensemble learning methods or the support vector machine, while performance measures for small classes were comparatively low, as expected. Diseases with regionally specific and pronounced atrophy patterns were generally better classified than diseases with widespread and rather weak atrophy. Conclusions and Relevance: Our study underlines the necessity of larger data sets but also calls for careful consideration of which machine learning methods can best handle the type of data and the classification task. Trial Registration: n.a.
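The multi-class evaluation described above (Cohen's kappa, accuracy, F1) can be illustrated with a small sketch; the feature dimensions, class counts per subject, and the random forest baseline below are assumptions standing in for the study's actual volumetry pipeline.

```python
# Multi-class evaluation with Cohen's kappa, accuracy and macro F1 on
# placeholder atlas-based volumetry features for an eleven-class problem.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import cohen_kappa_score, accuracy_score, f1_score

X = np.random.rand(940, 120)          # placeholder regional volumes for 940 subjects
y = np.random.randint(0, 11, 940)     # 11 diagnostic classes (10 diseases + controls)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=500, class_weight="balanced", random_state=0)
clf.fit(X_tr, y_tr)
y_hat = clf.predict(X_te)

print("kappa   :", cohen_kappa_score(y_te, y_hat))
print("accuracy:", accuracy_score(y_te, y_hat))
print("macro F1:", f1_score(y_te, y_hat, average="macro"))
```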


Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4519
Author(s):  
Livia Petrescu ◽  
Cătălin Petrescu ◽  
Ana Oprea ◽  
Oana Mitruț ◽  
Gabriela Moise ◽  
...  

This paper focuses on the binary classification of the emotion of fear, based on the physiological data and subjective responses stored in the DEAP dataset. We performed a mapping between the discrete and dimensional emotional information using the participants' ratings and extracted a substantial set of 40 types of features from the physiological data, which served as input to various machine learning algorithms (Decision Trees, k-Nearest Neighbors, Support Vector Machine and artificial neural networks), accompanied by dimensionality reduction, feature selection and tuning of the most relevant hyperparameters to boost classification accuracy. Our methodology also addressed several practical issues: handling an imbalanced dataset through data augmentation, reducing overfitting, computing various metrics to obtain the most reliable classification scores, and applying the Local Interpretable Model-Agnostic Explanations (LIME) method to interpret and explain predictions in a human-understandable manner. The results show that fear can be predicted very well (with accuracies ranging from 91.7% using Gradient Boosting Trees to 93.5% using dimensionality reduction and Support Vector Machine), by extracting the most relevant features from the physiological data and by searching for the parameters that maximize the machine learning algorithms' classification scores.
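A hedged sketch of the dimensionality-reduction-plus-SVM classifier with a LIME explanation for a single sample is given below. The physiological feature matrix, labels, PCA dimensionality and SVM parameters are placeholders, not the paper's tuned configuration.

```python
# Fear / no-fear pipeline: standardization, PCA reduction, RBF SVM, and a LIME
# tabular explanation for one prediction; all data below are placeholders.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from lime.lime_tabular import LimeTabularExplainer

X = np.random.rand(1280, 40)          # placeholder physiological features (40 per sample)
y = np.random.randint(0, 2, 1280)     # 0 = no fear, 1 = fear (placeholder labels)

model = make_pipeline(StandardScaler(), PCA(n_components=20),
                      SVC(kernel="rbf", C=10.0, probability=True))
model.fit(X, y)

explainer = LimeTabularExplainer(X, mode="classification",
                                 class_names=["no fear", "fear"])
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
print(explanation.as_list())          # top feature contributions for this prediction
```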


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2119
Author(s):  
Victor Flores ◽  
Claudio Leiva

The copper mining industry is increasingly using artificial intelligence methods to improve copper production processes. Recent studies report the use of algorithms such as Artificial Neural Network, Support Vector Machine, and Random Forest, among others, to develop models for predicting product quality. Other studies compare the predictive models developed with these machine learning algorithms in the mining industry as a whole. However, few published copper mining studies compare the results of machine learning techniques for copper recovery prediction. This study makes a detailed comparison between three models for predicting copper recovery by leaching, using four datasets resulting from mining operations in Northern Chile. The algorithms used for developing the models were Random Forest, Support Vector Machine, and Artificial Neural Network. To validate these models, four indicators or figures of merit were used: accuracy (acc), precision (p), recall (r), and the Matthews correlation coefficient (mcc). This paper describes the dataset preparation and the refinement of the threshold values used for the predictive variable most influential on the class (the copper recovery). The results show a precision above 98.50% and identify the model with the best agreement between predicted and real values. Finally, the obtained models have the following mean values: acc = 0.943, p = 88.47, r = 0.995, and mcc = 0.232. These values are highly competitive when compared with those obtained in similar studies using other approaches in this context.
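The four figures of merit listed above can be computed as in the following sketch for a random forest classifier; the process-variable matrix and the binary recovery class are illustrative assumptions rather than the study's leaching datasets.

```python
# Accuracy, precision, recall and Matthews correlation coefficient for a
# random forest classifier on placeholder leaching-process data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, matthews_corrcoef)

X = np.random.rand(2000, 10)       # placeholder process variables
y = np.random.randint(0, 2, 2000)  # placeholder class from thresholded copper recovery

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
clf = RandomForestClassifier(n_estimators=300, random_state=1).fit(X_tr, y_tr)
y_hat = clf.predict(X_te)

print("acc :", accuracy_score(y_te, y_hat))
print("p   :", precision_score(y_te, y_hat))
print("r   :", recall_score(y_te, y_hat))
print("mcc :", matthews_corrcoef(y_te, y_hat))
```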


2021 ◽  
Vol 2 (8) ◽  
pp. 675-684
Author(s):  
Jin Wang ◽  
Youjun Jiang ◽  
Li Li ◽  
Chao Yang ◽  
Ke Li ◽  
...  

The purpose of grain storage management is to dynamically analyze quality changes in the reserved grains, adopt scientific and effective management methods to slow quality deterioration, and reduce the loss rate during storage. At present, the supervision of grain quality in the reserve depends mainly on periodic measurements of the quality of the grains and of the milled products. The data obtained in this way are accurate and reliable, but the workload is heavy and the measurement frequency is high. The conclusions obtained are also limited to the studied area and cannot be extended to other scenarios. Therefore, there is an urgent need for a general method that can quickly predict the quality of grains for different species, regions and storage periods based on historical data. In this study, we introduced the back-propagation (BP) neural network algorithm and the support vector machine algorithm into the quality prediction of reserved grains. We used quality index, temperature and humidity data to build both an intertemporal prediction model and a synchronous prediction model. The results show that the BP neural network, based on the storage characteristics from the first three periods, can accurately predict the key storage characteristics intertemporally, while the support vector machine can provide precise synchronous predictions of the key storage characteristics. The average predictive error for each of wheat, rice and corn is less than 15%, while that for soybean is about 20%, all of which can meet practical demands. In conclusion, machine learning algorithms can help improve the effectiveness of grain storage management.
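A rough sketch of the intertemporal idea, using readings from the first three storage periods to predict a quality index in a later period with a back-propagation style network (an MLPRegressor here), is shown below; the array shapes, column meanings and value ranges are placeholder assumptions.

```python
# Intertemporal prediction: 3 periods x (quality index, temperature, humidity)
# as inputs, later-period quality index as target; all data are placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

X = np.random.rand(400, 9)               # 9 inputs = 3 periods x 3 readings (placeholder)
y = 50 + 50 * np.random.rand(400)        # quality index in the target period (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0))
net.fit(X_tr, y_tr)

rel_err = np.abs(net.predict(X_te) - y_te) / y_te
print("mean relative error: %.1f%%" % (100 * rel_err.mean()))
```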


PLoS ONE ◽  
2021 ◽  
Vol 16 (11) ◽  
pp. e0258788
Author(s):  
Sarra Ayouni ◽  
Fahima Hajjej ◽  
Mohamed Maddeh ◽  
Shaha Al-Otaibi

Educational research increasingly emphasizes the potential of student engagement and its impact on performance, retention and persistence. This construct has been an important paradigm in the higher education field for many decades. However, evaluating and predicting a student's engagement level in an online environment remains a challenge. The purpose of this study is to propose an intelligent predictive system that predicts the student's engagement level and then provides the students with feedback to enhance their motivation and dedication. Three categories of students are defined depending on their engagement level (Not Engaged, Passively Engaged, and Actively Engaged). We applied three different machine-learning algorithms, namely Decision Tree, Support Vector Machine and Artificial Neural Network, to students' activities recorded in Learning Management System reports. The results demonstrate that machine learning algorithms can predict the student's engagement level. In addition, according to the performance metrics of the different algorithms, the Artificial Neural Network has a higher accuracy (85%) than the Support Vector Machine (80%) and Decision Tree (75%) classifiers. Based on these results, the intelligent predictive system sends feedback to the students and alerts the instructor once a student's engagement level decreases. The instructor can then identify the students' difficulties during the course and motivate them through e-mail reminders, course messages, or scheduling an online meeting.
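A small sketch of the three-class engagement prediction from Learning Management System activity counts with an artificial neural network follows; the feature columns and labels are placeholders standing in for the LMS report fields.

```python
# Three-class engagement classification (Not / Passively / Actively Engaged)
# with an MLP on placeholder LMS activity features.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

X = np.random.rand(600, 8)            # e.g. logins, forum posts, quiz attempts (placeholder)
y = np.random.randint(0, 3, 600)      # 0 = Not, 1 = Passively, 2 = Actively Engaged

ann = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
print("mean accuracy:", cross_val_score(ann, X, y, cv=5, scoring="accuracy").mean())
```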


2019 ◽  
Vol 18 (3) ◽  
pp. 742-766 ◽  
Author(s):  
Anna Kurtukova ◽  
Alexander Romanov

The paper is devoted to the analysis of the problem of determining the source code author, which is of interest to researchers in the fields of information security, computer forensics, assessment of the quality of the educational process, and protection of intellectual property. The paper presents a detailed analysis of modern solutions to the problem. The authors suggest two new identification techniques: one based on machine learning algorithms (support vector machine, a fast correlation filter and informative features), and one based on a hybrid convolutional recurrent neural network. The experimental database includes samples of source code written in Java, C++, Python, PHP, JavaScript, C, C# and Ruby. The data were obtained using the web service for hosting IT projects, GitHub. The total number of source code samples exceeds 150 thousand, the average length of each sample is 850 characters, and the corpus covers 542 authors. The experiments were conducted with source code written in the most popular programming languages. The accuracy of the developed techniques for different numbers of authors was assessed using 10-fold cross-validation. An additional series of experiments was conducted with the number of authors ranging from 2 to 50 for the most popular language, Java. Graphs of the relationship between identification accuracy and corpus size were plotted. The analysis of the results showed that the method based on the hybrid neural network gives 97% accuracy, which is, at present, the best-known result, while the technique based on the support vector machine achieved 96% accuracy. The difference between the results of the hybrid neural network and the support vector machine was approximately 5%.
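The SVM-based identification step can be sketched as character n-gram features extracted from raw source code and scored with 10-fold cross-validation, as below; the snippets and author labels are placeholders, and the character n-gram scheme is an assumption rather than the paper's exact informative-feature set.

```python
# Authorship attribution sketch: character n-gram TF-IDF features plus a linear SVM,
# evaluated (on a real corpus) with 10-fold cross-validation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

codes = ["int main() { return 0; }", "def f(x):\n    return x * 2"]  # placeholder snippets
authors = ["author_a", "author_b"]                                   # placeholder labels

model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),  # style-bearing character n-grams
    LinearSVC(C=1.0),
)
# With a real corpus (thousands of files per author), 10-fold CV as in the paper:
# scores = cross_val_score(model, codes, authors, cv=10, scoring="accuracy")
```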


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Kushalkumar Thakkar ◽  
Suhas Suresh Ambekar ◽  
Manoj Hudnurkar

Purpose Longitudinal facial cracks (LFC) are one of the major defects occurring in the continuous-casting stage of a thin slab caster using funnel molds. Longitudinal cracks occur mainly owing to non-uniform cooling, varying thermal conductivity along the mold length, the use of high superheat during casting, and improper casting powder characteristics. These defects are difficult to capture and become visible only in the final stages of the process, or even at the customer end. In addition, there is a seasonality associated with this defect, with defect intensity increasing during the winter season. To address the issue, a model based on data analytics is developed. Design/methodology/approach Around six months of steel manufacturing process data were taken and around 60 data collection points were analyzed. The model uses different classification machine learning algorithms, such as logistic regression, decision tree, ensemble methods of decision trees, support vector machine and Naïve Bayes (at different cut-off levels), to investigate the data. Findings The proposed research framework shows that most models give good results at cut-off levels between 0.6 and 0.8, and that random forest, gradient boosting on decision trees, and the support vector machine perform better than the other models. Practical implications Based on the model's predictions, steel manufacturing companies can identify the optimal operating range within which this defect can be reduced. Originality/value An analytical approach to identifying LFC defects provides objective models for the reduction of LFC defects. By reducing LFC defects, the quality of steel can be improved.
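The cut-off analysis described above amounts to sweeping the decision threshold on predicted defect probabilities and comparing a quality metric at each level, as in the sketch below; the process-parameter matrix and LFC labels are placeholders.

```python
# Threshold sweep on predicted crack probabilities for a gradient boosting
# classifier, reporting F1 at several cut-off levels.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

X = np.random.rand(5000, 60)          # placeholder: ~60 data collection points per heat
y = np.random.randint(0, 2, 5000)     # 1 = longitudinal facial crack observed (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]

for cutoff in (0.5, 0.6, 0.7, 0.8):
    y_hat = (proba >= cutoff).astype(int)
    print(f"cutoff {cutoff:.1f}  F1 = {f1_score(y_te, y_hat, zero_division=0):.3f}")
```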


2021 ◽  
Vol 9 (4) ◽  
pp. 376 ◽  
Author(s):  
Yunfei Yang ◽  
Haiwen Tu ◽  
Lei Song ◽  
Lin Chen ◽  
De Xie ◽  
...  

Resistance is one of the important performance indicators of ships. In this paper, a prediction method based on the Radial Basis Function neural network (RBFNN) is proposed to predict the resistance of a 13,500 twenty-foot equivalent unit (13500TEU) container ship at different drafts. Prediction at drafts within the known range is called interpolation prediction; otherwise, it is extrapolation prediction. First, ship features are extracted to predict the resistance Rt. The resistance prediction results show that the performance of the RBFNN is significantly better than that of the other four machine learning models: backpropagation neural network (BPNN), support vector machine (SVM), random forest (RF), and extreme gradient boosting (XGBoost). Then, the ship data are processed in a dimensionless manner, and the models mentioned above are used to predict the total resistance coefficient Ct of the container ship. The prediction results show that the RBFNN model still performs well. Good results can be obtained by the RBFNN in interpolation prediction, even when using only part of the dimensionless features. Finally, the accuracy of the prediction method based on the RBFNN is greatly improved compared with the modified admiralty coefficient method.
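A minimal from-scratch sketch of an RBF network regressor of the general kind described above is given below: k-means centres, Gaussian hidden units, and a linear least-squares output layer. The hull-feature matrix, resistance targets and width parameter are placeholder assumptions, not the paper's configuration.

```python
# Tiny RBF network regressor: k-means centres, Gaussian activations,
# linear output weights solved by least squares; all data are placeholders.
import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(200, 5)            # placeholder ship features (draft, speed, ...)
y = np.random.rand(200) * 1e3         # placeholder total resistance Rt

k, sigma = 20, 0.5
centres = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).cluster_centers_

def rbf_layer(Z):
    # Gaussian activation of each sample with respect to each centre
    d2 = ((Z[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))

H = np.c_[rbf_layer(X), np.ones(len(X))]              # hidden activations + bias column
w, *_ = np.linalg.lstsq(H, y, rcond=None)             # output-layer weights

def predict(Z):
    return np.c_[rbf_layer(Z), np.ones(len(Z))] @ w

print("training RMSE:", np.sqrt(np.mean((predict(X) - y) ** 2)))
```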

