Predicting Corporate Credit Ratings Using Content Analysis of Annual Reports – A Naïve Bayesian Network Approach

Author(s):  
Petr Hajek ◽  
Vladimir Olej ◽  
Ondrej Prochazka
Author(s):  
Weiqing Wan ◽  
Qingyan Zeng ◽  
Zhicheng Wen

2014 ◽  
Vol 260 ◽  
pp. 120-148 ◽  
Author(s):  
M. Julia Flores ◽  
José A. Gámez ◽  
Ana M. Martínez

Author(s):  
Kaizhu Huang ◽  
Zenglin Xu ◽  
Irwin King ◽  
Michael R. Lyu ◽  
Zhangbing Zhou

The naive Bayesian network (NB) is a simple yet powerful Bayesian network. Even under a strong independence assumption among the features, it achieves performance competitive with state-of-the-art classifiers such as support vector machines (SVMs). In this chapter, we propose a novel discriminative training approach, originating from SVMs, for deriving the parameters of NB. The resulting model, called the discriminative naive Bayesian network (DNB), combines the merits of both discriminative methods (e.g., SVMs) and Bayesian networks. We provide theoretical justification, outline the algorithm, and report a series of experiments on benchmark real-world datasets demonstrating the model's advantages: it outperforms NB on classification tasks and outperforms SVMs on tasks with missing information.
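The conditional-independence assumption the abstract refers to can be made concrete with a minimal from-scratch categorical naive Bayes classifier. This is an illustrative sketch, not the chapter's DNB model: the toy dataset, feature values, and Laplace smoothing constant are assumptions for demonstration only.

```python
from collections import Counter, defaultdict
import math

class NaiveBayes:
    """Categorical naive Bayes with Laplace smoothing (illustrative sketch)."""

    def fit(self, X, y, alpha=1.0):
        self.alpha = alpha
        self.class_counts = Counter(y)          # class -> count
        self.n = len(y)
        self.cond = defaultdict(Counter)        # (feature_idx, class) -> value counts
        self.vocab = defaultdict(set)           # feature_idx -> distinct values seen
        for xs, c in zip(X, y):
            for i, v in enumerate(xs):
                self.cond[(i, c)][v] += 1
                self.vocab[i].add(v)
        return self

    def predict(self, xs):
        best, best_lp = None, -math.inf
        for c, cc in self.class_counts.items():
            lp = math.log(cc / self.n)          # log prior P(c)
            for i, v in enumerate(xs):
                counts = self.cond[(i, c)]
                # Independence assumption: sum per-feature log-likelihoods
                # log P(x_i | c), each estimated with Laplace smoothing.
                lp += math.log((counts[v] + self.alpha) /
                               (cc + self.alpha * len(self.vocab[i])))
            if lp > best_lp:
                best, best_lp = c, lp
        return best

# Hypothetical toy data: (outlook, windy) -> decision
X = [("sunny", "no"), ("sunny", "yes"), ("rain", "no"), ("rain", "yes")]
y = ["play", "play", "play", "stay"]
nb = NaiveBayes().fit(X, y)
print(nb.predict(("rain", "yes")))  # predicts "play"
```

Each feature's likelihood is estimated separately given the class, which is exactly the strong independence assumption; discriminative approaches such as the DNB described above keep this factorized form but tune the parameters toward classification accuracy rather than pure likelihood.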


2012 ◽  
Vol 11 (1) ◽  
pp. 676-679
Author(s):  
Rei-Jie Du ◽  
Shuang-Cheng Wang ◽  
Han-Xing Wang ◽  
Cui-Ping Leng

Entropy ◽  
2019 ◽  
Vol 21 (8) ◽  
pp. 729 ◽  
Author(s):  
SiQi Gao ◽  
Hua Lou ◽  
LiMin Wang ◽  
Yang Liu ◽  
Tiehu Fan

To mitigate the negative effect of the classification bias caused by overfitting, semi-naive Bayesian techniques seek to mine the implicit dependency relationships in unlabeled testing instances. By redefining some criteria from information theory, Target Learning (TL) builds, for each unlabeled testing instance P, a Bayesian network classifier BNC_P that is independent of and complementary to the classifier BNC_T learned from the training data T. In this paper, we extend TL to Universal Target Learning (UTL), which identifies redundant correlations between attribute values and maximizes the bits encoded in the Bayesian network in terms of log likelihood. We take the k-dependence Bayesian classifier as an example to investigate the effect of UTL on BNC_P and BNC_T. Extensive experimental results on 40 UCI datasets show that UTL helps BNC improve its generalization performance.
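The log-likelihood criterion the abstract maximizes can be sketched in a few lines. This is not the UTL algorithm itself, only an illustration of scoring a candidate classifier by the log-likelihood it assigns to an unlabeled instance; the two-class, two-feature conditional probability tables below are hypothetical.

```python
import math

def log_likelihood(instance, priors, cond):
    """log P(x) = log sum_c P(c) * prod_i P(x_i | c) for one unlabeled instance."""
    total = 0.0
    for c, p in priors.items():
        like = p
        for i, v in enumerate(instance):
            like *= cond[c][i].get(v, 1e-6)  # small floor for unseen values
        total += like
    return math.log(total)

# Hypothetical CPTs for a two-feature, two-class toy model
priors = {"a": 0.5, "b": 0.5}
cond = {
    "a": [{"x": 0.8, "y": 0.2}, {"u": 0.7, "v": 0.3}],
    "b": [{"x": 0.3, "y": 0.7}, {"u": 0.4, "v": 0.6}],
}
ll = log_likelihood(("x", "u"), priors, cond)
print(ll)  # log(0.5*0.8*0.7 + 0.5*0.3*0.4) = log(0.34)
```

A per-instance classifier like BNC_P is chosen so that the test instance is well explained (high log-likelihood, i.e., few bits to encode) under its structure, complementing the structure BNC_T fit to the training data.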

