bootstrap aggregating
Recently Published Documents


TOTAL DOCUMENTS: 48 (five years: 20)

H-INDEX: 7 (five years: 2)

2021 ◽  
Vol 15 (4) ◽  
pp. 735-744
Author(s):  
Putri Sri Astuti ◽  
Memi Nor Hayati ◽  
Rito Goejantoro

Classification is the process of grouping objects with the same characteristics into categories. This study applies a combined classification algorithm, Bootstrap Aggregating K-Nearest Neighbor, to credit scoring analysis. The aim is to classify the payment status of credit for electronic goods and furniture at PT KB Finansia Multi Finance in 2020 and to determine the resulting accuracy. Credit payment status is grouped into two categories, smooth and not smooth. Seven independent variables describe the characteristics of the debtor: age, number of dependents, length of stay, years of service, income, payment amount, and payment period. Applying the classification algorithm to credit scoring analysis is expected to help creditors decide whether to accept or reject credit applications from prospective debtors. The results show that the Bootstrap Aggregating K-Nearest Neighbor algorithm performed best with a 90:10 train-test proportion, m=80%, C=73, and K=5, achieving an accuracy of 92.308%.
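The abstract includes no code, but the reported settings map directly onto a bagged K-nearest-neighbor classifier. The sketch below is a minimal illustration using scikit-learn (version 1.2 or later, where the base learner is passed as `estimator`); the CSV file and column names are hypothetical, while the 90:10 split, m=80% bootstrap fraction, C=73 replicates, and K=5 follow the figures quoted above.

```python
# Minimal sketch of Bootstrap Aggregating K-Nearest Neighbor for credit scoring.
# The CSV file and column names are hypothetical; the 90:10 split, m=80%
# bootstrap fraction, C=73 replicates, and K=5 follow the settings above.
import pandas as pd
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

credit = pd.read_csv("credit_scoring_2020.csv")            # hypothetical file
X = credit[["age", "dependents", "length_of_stay", "years_of_service",
            "income", "payment_amount", "payment_period"]]
y = credit["status"]                                        # "smooth" / "not smooth"

# 90:10 train-test proportion, stratified on the payment status.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.10, stratify=y, random_state=0)

# C=73 bootstrap replicates, each drawing m=80% of the training data,
# with a K=5 nearest-neighbor base classifier; predictions are majority-voted.
bag_knn = BaggingClassifier(
    estimator=KNeighborsClassifier(n_neighbors=5),
    n_estimators=73,
    max_samples=0.80,
    bootstrap=True,
    random_state=0)
bag_knn.fit(X_train, y_train)

print("Test accuracy:", accuracy_score(y_test, bag_knn.predict(X_test)))
```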


SinkrOn ◽  
2021 ◽  
Vol 6 (1) ◽  
pp. 100-106
Author(s):  
Noor Ell Goldameir ◽  
Anne Mudya Yolanda ◽  
Arisman Adnan ◽  
Lusi Febrianti

Successful development of the quality of human life in a region is measured by the Human Development Index (HDI). Human development performance under the HDI is assessed across three dimensions: a long and healthy life, knowledge, and a decent standard of living. The HDI is usually grouped into several categories to facilitate classification of the HDI level of each region. This study aimed to determine the ability of the bootstrap aggregating (bagging) method to classify the HDI by district/city. Bagging is a stochastic machine learning approach that reduces the variance of a classifier by building a bootstrap ensemble, yielding better accuracy. The dependent variable in this study was the HDI by district/city in 2020, while life expectancy at birth, expected years of schooling, mean years of schooling, and adjusted real expenditure per capita were the independent variables. Bagging was applied to the high and low categories of the HDI data. The method showed good classification performance, with only eight classification errors: observations that should have been in the high category but were classified into the low category. Based on calculations with 25 replications, the bagging method performs very well, with an accuracy of 92.3%, sensitivity of 100%, and specificity of 83.33%. The bagging method is considered very good for classifying the HDI by district/city in Indonesia in 2020, with a balanced accuracy of 91.67%.
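A minimal sketch of the bagging workflow described above, assuming scikit-learn with decision trees as the base learner (the abstract does not name one) and hypothetical file and column names for the four HDI components; the 25 estimators echo the 25 replications reported.

```python
# Minimal sketch of bagging for binary HDI classification (high vs. low).
# The base learner (a decision tree), the file, and the column names are
# assumptions; the 25 estimators echo the 25 replications reported above.
import pandas as pd
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import (accuracy_score, balanced_accuracy_score,
                             confusion_matrix)
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

hdi = pd.read_csv("hdi_district_city_2020.csv")            # hypothetical file
X = hdi[["life_expectancy", "expected_schooling",
         "mean_schooling", "adjusted_expenditure"]]
y = hdi["hdi_category"]                                     # "high" / "low"

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

bag = BaggingClassifier(
    estimator=DecisionTreeClassifier(),
    n_estimators=25,
    bootstrap=True,
    random_state=0)
bag.fit(X_train, y_train)
y_pred = bag.predict(X_test)

# Treat "high" as the positive class when reading the confusion matrix.
tn, fp, fn, tp = confusion_matrix(y_test, y_pred, labels=["low", "high"]).ravel()
print("Accuracy:         ", accuracy_score(y_test, y_pred))
print("Sensitivity:      ", tp / (tp + fn))
print("Specificity:      ", tn / (tn + fp))
print("Balanced accuracy:", balanced_accuracy_score(y_test, y_pred))
```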


2021 ◽  
Vol 14 (6) ◽  
pp. 254
Author(s):  
Ryno du Plooy ◽  
Pierre J. Venter

In this paper, the pricing performance of two learning networks, namely an artificial neural network and a bootstrap aggregating ensemble network, was compared when pricing Johannesburg Stock Exchange (JSE) Top 40 European call options in a modern option pricing framework using a constructed implied volatility surface. In addition, the numerical accuracy of the better-performing network was compared to a Monte Carlo simulation in a separate numerical experiment. It was found that the bootstrap aggregating ensemble network outperformed the artificial neural network and produced price estimates within the error bounds of a Monte Carlo simulation when pricing derivatives in a multi-curve framework setting.
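As a rough illustration of the ensemble idea only, the sketch below bags small feed-forward networks with scikit-learn. Synthetic Black-Scholes call prices stand in for the JSE Top 40 market data, implied volatility surface, and multi-curve framework used in the paper, so the inputs and architecture are assumptions, not the authors' specification.

```python
# Rough sketch of a bootstrap-aggregating ensemble of small neural networks
# for European call prices. Synthetic Black-Scholes prices replace the JSE
# market data and multi-curve framework; architecture and inputs are assumed.
import numpy as np
from scipy.stats import norm
from sklearn.ensemble import BaggingRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 5000
S = rng.uniform(80, 120, n)        # spot
K = rng.uniform(80, 120, n)        # strike
T = rng.uniform(0.1, 2.0, n)       # time to maturity (years)
sigma = rng.uniform(0.1, 0.5, n)   # implied volatility
r = 0.05                           # flat risk-free rate (illustrative)

# Black-Scholes call prices used as the training target.
d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
price = S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)
X = np.column_stack([S, K, T, sigma])

# Each network is fit on a bootstrap resample; the ensemble price estimate
# is the average of the individual network outputs.
ensemble = BaggingRegressor(
    estimator=MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000),
    n_estimators=10,
    bootstrap=True,
    random_state=0)
ensemble.fit(X, price)

print(ensemble.predict([[100.0, 100.0, 1.0, 0.2]]))   # at-the-money example
```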


2020 ◽  
Vol 20 (2) ◽  
pp. 18-26
Author(s):  
F. Saadaari Saadaari ◽  
D. Mireku-Gyimah ◽  
B. M. Olaleye

The consequences of collapsed stopes can be dire in the mining industry. Collapse can lead to the revocation of a mining license in most jurisdictions, especially when the failure costs lives. It is therefore imperative for mine planning and technical services engineers to estimate the stability status of stopes. This study attempts to produce a stope stability prediction model, adapted from the stability graph, using ensemble learning techniques. The study was conducted using 472 case histories from 120 stopes of AngloGold Ashanti Ghana, Obuasi Mine. Random Forest, Gradient Boosting, Bootstrap Aggregating and Adaptive Boosting classification algorithms were used to produce the models. A comparative analysis was done using six classification performance metrics, namely Accuracy, Precision, Sensitivity, F1-score, Specificity and Matthews Correlation Coefficient (MCC), to determine which ensemble learning technique performed best in predicting the stability of a stope. The Bootstrap Aggregating model obtained the highest MCC score of 96.84%, while the Adaptive Boosting model obtained the lowest. The Specificity scores, in decreasing order of performance, were 98.95%, 97.89%, 96.32% and 95.26% for Bootstrap Aggregating, Gradient Boosting, Random Forest and Adaptive Boosting respectively. The Bootstrap Aggregating model had equal Accuracy, Precision, F1-score and Sensitivity scores of 97.89%; the same pattern held for Adaptive Boosting, Gradient Boosting and Random Forest with scores of 90.53%, 92.63% and 95.79% respectively. At a 95% confidence interval using the Wilson Score Interval, the Bootstrap Aggregating model produced the smallest error and was therefore selected as the alternative stope design tool for predicting the stability status of stopes.

Keywords: Stope Stability, Ensemble Learning Techniques, Stability Graph, Machine Learning
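A minimal sketch of the comparative workflow, assuming scikit-learn implementations of the four ensembles, a hypothetical stope case-history file with a binary `stable` label, and a hand-rolled 95% Wilson score interval on accuracy; default hyperparameters stand in for whatever tuning the study used.

```python
# Minimal sketch of the comparison: four ensemble classifiers, six metrics,
# and a 95% Wilson score interval on accuracy. The stope case-history file
# and its columns (binary label `stable`) are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              GradientBoostingClassifier, RandomForestClassifier)
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             matthews_corrcoef, precision_score, recall_score)
from sklearn.model_selection import train_test_split

def wilson_interval(successes, trials, z=1.96):
    """95% Wilson score interval for a proportion such as model accuracy."""
    p = successes / trials
    centre = (p + z**2 / (2 * trials)) / (1 + z**2 / trials)
    half = (z / (1 + z**2 / trials)) * np.sqrt(
        p * (1 - p) / trials + z**2 / (4 * trials**2))
    return centre - half, centre + half

stopes = pd.read_csv("stope_case_histories.csv")           # hypothetical file
X = stopes.drop(columns=["stable"])
y = stopes["stable"]                                        # 1 = stable, 0 = failed
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                           stratify=y, random_state=0)

models = {
    "Bootstrap Aggregating": BaggingClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(random_state=0),
    "Gradient Boosting": GradientBoostingClassifier(random_state=0),
    "Adaptive Boosting": AdaBoostClassifier(random_state=0),
}
for name, model in models.items():
    y_pred = model.fit(X_tr, y_tr).predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, y_pred).ravel()
    acc = accuracy_score(y_te, y_pred)
    lo, hi = wilson_interval(int(round(acc * len(y_te))), len(y_te))
    print(f"{name}: accuracy={acc:.3f} (95% CI {lo:.3f}-{hi:.3f}), "
          f"precision={precision_score(y_te, y_pred):.3f}, "
          f"sensitivity={recall_score(y_te, y_pred):.3f}, "
          f"F1={f1_score(y_te, y_pred):.3f}, "
          f"specificity={tn / (tn + fp):.3f}, "
          f"MCC={matthews_corrcoef(y_te, y_pred):.3f}")
```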


Heliyon ◽  
2020 ◽  
Vol 6 (9) ◽  
pp. e04933
Author(s):  
Bambang Widjanarko Otok ◽  
Marsuddin Musa ◽  
Purhadi ◽  
Septia Devi Prihastuti Yasmirullah

2020 ◽  
Author(s):  
David O’Connor ◽  
Evelyn M.R. Lake ◽  
Dustin Scheinost ◽  
R. Todd Constable

Abstract: It is a long-standing goal of neuroimaging to produce reliable, generalized models of brain-behavior relationships. More recently, data-driven predictive models have become popular. Overfitting is a common problem with statistical models and impedes model generalization. Cross-validation (CV) is often used to give more balanced estimates of performance. However, CV does not provide guidance on how best to apply the models generated out-of-sample. As a solution, this study proposes an ensemble learning method, in this case bootstrap aggregating, or bagging, encompassing both model parameter estimation and feature selection. Here we investigate the use of bagging when generating predictive models of fluid intelligence (fIQ) from functional connectivity (FC). We take advantage of two large, openly available datasets: the Human Connectome Project (HCP) and the Philadelphia Neurodevelopmental Cohort (PNC). We generate bagged and non-bagged models of fIQ in the HCP. Over various train-test splits, these models are evaluated in-sample, on left-out HCP data, and out-of-sample, on PNC data. We find that in-sample, a non-bagged model performs best; out-of-sample, however, the bagged models perform best. We also find that feature selection can vary substantially within-sample. A more considered approach to feature selection, alongside data-driven predictive modeling, is needed to improve the cross-sample performance of FC-based brain-behavior models.
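A minimal sketch of the bagging scheme described here, in which each bootstrap resample performs its own feature (edge) selection and model fit and the per-bag predictions are averaged. The random arrays, edge-selection rule (top correlated edges), and linear model are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch of bagging over both feature selection and model fitting for
# FC-based prediction of fluid intelligence. The random arrays, edge-selection
# rule (top correlated edges), and linear model are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_subjects, n_edges = 300, 5000
fc = rng.standard_normal((n_subjects, n_edges))   # placeholder FC edge matrix
fiq = rng.standard_normal(n_subjects)             # placeholder fIQ scores

train = rng.permutation(n_subjects)[:240]
test = np.setdiff1d(np.arange(n_subjects), train)

n_bags, models, selected = 100, [], []
for _ in range(n_bags):
    # Bootstrap resample of training subjects.
    boot = rng.choice(train, size=train.size, replace=True)
    Xb, yb = fc[boot], fiq[boot]
    # Feature selection inside the bag: keep the edges most correlated with fIQ.
    Xc, yc = Xb - Xb.mean(axis=0), yb - yb.mean()
    r = Xc.T @ yc / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))
    edges = np.argsort(-np.abs(r))[:200]
    # Parameter estimation inside the same bag: fit a linear model on those edges.
    models.append(LinearRegression().fit(Xb[:, edges], yb))
    selected.append(edges)

# Ensemble prediction: average the per-bag predictions on held-out subjects.
pred = np.mean([m.predict(fc[test][:, e]) for m, e in zip(models, selected)], axis=0)
print("Held-out correlation with fIQ:", np.corrcoef(pred, fiq[test])[0, 1])
```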

