Credit default prediction modeling: an application of support vector machine

2017 ◽  
Vol 19 (2) ◽  
pp. 158-187 ◽  
Author(s):  
Fahmida E. Moula ◽  
Chi Guotai ◽  
Mohammad Zoynul Abedin

Author(s):  
Dmytro Pokidin

Econometric credit scoring models date back to Altman's simple z-score model of 1968, but they have since grown increasingly sophisticated, with some employing Artificial Neural Network (ANN) and Support Vector Machine (SVM) techniques. This paper focuses on the use of SVM as a model for default prediction. I start with an introduction to SVM and to some of its widespread alternatives. These techniques are then applied to NBU data on banks' clients, which allows the accuracy of SVM to be compared with that of the other models. While SVM is generally more accurate, I discuss some of its features that make its practical implementation controversial, as well as ways of overcoming them. I also present the results of the Logistic Regression (Logit) model that will be used by the NBU.
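As an illustration of the kind of comparison described above, the sketch below fits an RBF-kernel SVM and a Logit benchmark on synthetic, imbalanced data and scores both by AUC. The NBU client data are not public, so the dataset, kernel choice, and hyperparameters here are assumptions, not the paper's actual setup.

```python
# Minimal sketch (not the paper's code): SVM vs. Logit on synthetic
# "default" data standing in for the confidential NBU sample.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic, imbalanced data: ~10% defaulters, 20 borrower features.
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

models = {
    "SVM (RBF)": make_pipeline(StandardScaler(),
                               SVC(kernel="rbf", C=1.0, probability=True)),
    "Logit": make_pipeline(StandardScaler(),
                           LogisticRegression(max_iter=1000)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```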


2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Xiang Zhou ◽  
Wenyu Zhang ◽  
Yefeng Jiang

Controlling credit default risk with information technology is of great significance for the healthy development of the credit industry. Much of the traditional research on credit default prediction models focuses on model accuracy, while the business characteristics of credit risk prevention are easily ignored. Moreover, to reduce model complexity, data features are usually extracted manually, which weakens the high-dimensional correlations among the data and thus lowers the model's predictive performance. In this paper, a convolutional neural network (CNN) is therefore used to build a personal credit default prediction model, with accuracy (ACC) and the area under the ROC curve (AUC) as the performance evaluation indices. Experimental results show that the model achieves an ACC above 95% and an AUC above 99%, clearly outperforming classical algorithms including the support vector machine (SVM), Bayes, and random forest (RF).
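A minimal sketch of this idea follows: a small 1D CNN over tabular credit features, evaluated with ACC and AUC. The synthetic data, network architecture, and training settings are illustrative assumptions, not the authors' actual model or dataset.

```python
# Illustrative only: a small 1D CNN for personal credit default prediction,
# scored with ACC and AUC as in the paper.
import numpy as np
import tensorflow as tf
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

X, y = make_classification(n_samples=10000, n_features=32,
                           weights=[0.85, 0.15], random_state=0)
X = X[..., np.newaxis]  # shape (samples, features, 1) for Conv1D
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(16, kernel_size=3, activation="relu",
                           input_shape=X.shape[1:]),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X_tr, y_tr, epochs=10, batch_size=128, verbose=0)

p = model.predict(X_te).ravel()
print("ACC:", accuracy_score(y_te, p > 0.5))
print("AUC:", roc_auc_score(y_te, p))
```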


2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Maisa Cardoso Aniceto ◽  
Flavio Barboza ◽  
Herbert Kimura

Credit risk evaluation plays a relevant role for financial institutions, since lending may result in real and immediate losses. In particular, default prediction is one of the most challenging activities in managing credit risk. This study analyzes the adequacy of borrower classification models using a Brazilian bank's loan database and exploring machine learning techniques. We develop Support Vector Machine, Decision Tree, Bagging, AdaBoost, and Random Forest models and compare their predictive accuracy with a benchmark based on a Logistic Regression model. Comparisons are based on the usual classification performance metrics. Our results show that Random Forest and AdaBoost perform better than the other models, whereas Support Vector Machine models perform poorly with both linear and nonlinear kernels. Our findings suggest that banks have value-creating opportunities to improve default prediction models by exploring machine learning techniques.
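The comparison outlined above can be sketched with scikit-learn as follows. The synthetic loan data, hyperparameters, and metric choices are assumptions for illustration, since the Brazilian bank's database is not public.

```python
# Hedged sketch: Logit benchmark vs. SVM, Decision Tree, Bagging,
# AdaBoost and Random Forest on synthetic loan data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import (BaggingClassifier, AdaBoostClassifier,
                              RandomForestClassifier)
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

X, y = make_classification(n_samples=8000, n_features=25,
                           weights=[0.8, 0.2], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=42)

models = {
    "Logit (benchmark)": make_pipeline(StandardScaler(),
                                       LogisticRegression(max_iter=1000)),
    "SVM (RBF)": make_pipeline(StandardScaler(), SVC(probability=True)),
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "Bagging": BaggingClassifier(random_state=42),
    "AdaBoost": AdaBoostClassifier(random_state=42),
    "Random Forest": RandomForestClassifier(random_state=42),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    proba = clf.predict_proba(X_te)[:, 1]
    pred = (proba >= 0.5).astype(int)
    print(f"{name:18s} acc={accuracy_score(y_te, pred):.3f} "
          f"f1={f1_score(y_te, pred):.3f} auc={roc_auc_score(y_te, proba):.3f}")
```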


2020 ◽  
Author(s):  
V Vasilevska ◽  
K Schlaaf ◽  
H Dobrowolny ◽  
G Meyer-Lotz ◽  
HG Bernstein ◽  
...  

2019 ◽  
Vol 15 (2) ◽  
pp. 275-280
Author(s):  
Agus Setiyono ◽  
Hilman F Pardede

It is now common for a cellphone to receive spam messages, and the large number of received messages makes it difficult for humans to classify them as spam or not spam. One way to overcome this problem is to use data mining for automatic classification. In this paper, we investigate several data mining techniques, namely Support Vector Machine, Multinomial Naïve Bayes, and Decision Tree, for automatic spam detection. Our experimental results show that Support Vector Machine is the best of the three evaluated algorithms, achieving 98.33% accuracy, while Multinomial Naïve Bayes achieves 98.13% and Decision Tree 97.10%.
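A minimal sketch of such a comparison is given below. It assumes a tab-separated SMS corpus in a file named sms.tsv (label and text columns; the file name is hypothetical) and TF-IDF features; the paper does not specify its feature extraction or classifier settings, so these choices are illustrative.

```python
# Sketch: compare SVM, Multinomial Naive Bayes and Decision Tree
# for SMS spam detection on a labelled corpus (e.g. the UCI SMS Spam Collection).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Hypothetical file: one "label<TAB>text" pair per line.
df = pd.read_csv("sms.tsv", sep="\t", names=["label", "text"])
X_tr, X_te, y_tr, y_te = train_test_split(df["text"], df["label"],
                                          test_size=0.2, random_state=0)

models = {
    "SVM": LinearSVC(),
    "Multinomial NB": MultinomialNB(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
}
for name, clf in models.items():
    pipe = make_pipeline(TfidfVectorizer(lowercase=True, stop_words="english"),
                         clf)
    pipe.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, pipe.predict(X_te))
    print(f"{name}: accuracy = {acc:.4f}")
```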


2011 ◽  
Vol 131 (8) ◽  
pp. 1495-1501
Author(s):  
Dongshik Kang ◽  
Masaki Higa ◽  
Hayao Miyagi ◽  
Ikugo Mitsui ◽  
Masanobu Fujita ◽  
...  
