A Double Weighted Naive Bayes with Niching Cultural Algorithm for Multi-Label Classification

Author(s):  
Xuesong Yan ◽  
Qinghua Wu ◽  
Victor S. Sheng

Multi-label classification assigns an instance to multiple classes simultaneously. Naive Bayes (NB) is one of the most popular algorithms for pattern recognition and classification, and it performs well in single-label classification. It extends naturally to multi-label classification under the assumption of label independence. However, NB rests on a simple but unrealistic assumption that attributes are conditionally independent given the class. We therefore propose a double weighted NB (DWNB) to model the differing influences of attributes when predicting different labels. Our DWNB utilizes a niching cultural algorithm (NCA) to determine the weight configuration automatically. Our experimental results show that the proposed DWNB significantly outperforms NB and its extensions in multi-label classification.
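The double-weighting idea can be sketched as a weighted NB log-posterior: each attribute's log-likelihood is scaled by a per-attribute weight, and the whole score by a per-label weight, with one such scorer per label (binary relevance). This is a minimal illustration of the scheme the abstract describes, not the paper's exact formulation; all probabilities, weights, and the decision threshold below are illustrative assumptions.

```python
import math

def dwnb_score(prior, cond_probs, attr_weights, label_weight):
    """Weighted log-posterior for one label: each attribute's
    log-likelihood is multiplied by an attribute weight, and the whole
    score is scaled by a per-label weight (assumed form of the double
    weighting, not the paper's exact equation)."""
    score = math.log(prior)
    for p, w in zip(cond_probs, attr_weights):
        score += w * math.log(p)
    return label_weight * score

# Binary-relevance prediction: one weighted NB scorer per label.
# All numbers below are illustrative, not learned.
labels = {
    "sports":  dict(prior=0.6, cond=[0.8, 0.3], attr_w=[1.2, 0.7], label_w=1.0),
    "finance": dict(prior=0.4, cond=[0.2, 0.9], attr_w=[0.9, 1.4], label_w=1.1),
}
threshold = -2.0  # assumed decision threshold on the weighted score
predicted = [name for name, p in labels.items()
             if dwnb_score(p["prior"], p["cond"], p["attr_w"], p["label_w"]) > threshold]
print(predicted)
```

In the full method, the niching cultural algorithm would search over the `attr_w` and `label_w` values instead of fixing them by hand.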

Author(s):  
Son Doan ◽  
Susumu Horiguchi ◽  

Text categorization involves assigning a natural language document to one or more predefined classes. One of the most interesting issues is feature selection. We propose a new feature selection procedure based on multicriteria ranking of features and apply it to text categorization. Experimental results on the Reuters-21578 and 20 Newsgroups benchmark data with the naive Bayes algorithm show that our proposal outperforms conventional feature selection in text categorization performance.
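One simple way to combine several ranking criteria is Borda-style aggregation: rank the features under each criterion separately, then order them by summed rank. The aggregation rule and the toy scores below are assumptions for illustration; the paper's exact multicriteria procedure may differ.

```python
def multicriteria_select(scores_by_criterion, k):
    """Select the top-k features by average rank across criteria
    (a Borda-style aggregation; the paper's exact combination rule
    is assumed here, not quoted)."""
    features = list(next(iter(scores_by_criterion.values())))
    rank_sum = {f: 0 for f in features}
    for scores in scores_by_criterion.values():
        # Rank 0 is best; higher criterion score = better feature.
        ordered = sorted(features, key=lambda f: scores[f], reverse=True)
        for rank, f in enumerate(ordered):
            rank_sum[f] += rank
    return sorted(features, key=lambda f: rank_sum[f])[:k]

# Toy per-criterion scores for four candidate terms (illustrative).
criteria = {
    "info_gain": {"money": 0.9, "free": 0.7, "the": 0.1, "offer": 0.6},
    "chi2":      {"money": 8.0, "free": 9.5, "the": 0.3, "offer": 4.0},
}
print(multicriteria_select(criteria, 2))
```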


2013 ◽  
Vol 303-306 ◽  
pp. 1609-1612
Author(s):  
Huai Lin Dong ◽  
Xiao Dan Zhu ◽  
Qing Feng Wu ◽  
Juan Juan Huang

The Naïve Bayes classification algorithm based on validity (NBCABV) optimizes the training data by using validity to eliminate noise samples, which improves classification, but it ignores the associations among attributes. Taking these associations into account, we propose an improved method, a classification algorithm for Naïve Bayes based on validity and correlation (CANBBVC), which deletes more noise samples using both validity and correlation and thus achieves better classification performance. Experimental results show that this model has higher classification accuracy than the one based on validity alone.
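The validity-based filtering step can be sketched as follows: score each training sample by how strongly its nearest neighbors (here, Hamming similarity over nominal attributes) agree with its label, and drop samples below a threshold as likely noise. The specific validity measure, neighborhood size, and threshold are assumptions; the paper's definitions (and its correlation term) are not reproduced here.

```python
def hamming_sim(a, b):
    """Fraction of attribute positions on which two instances agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def filter_by_validity(data, k=2, threshold=0.5):
    """Drop likely-noise samples: a sample's validity is the share of
    its k most similar neighbors that carry the same label (an assumed
    stand-in for the validity measure in NBCABV/CANBBVC)."""
    kept = []
    for i, (x, y) in enumerate(data):
        others = [(hamming_sim(x, x2), y2)
                  for j, (x2, y2) in enumerate(data) if j != i]
        others.sort(key=lambda t: t[0], reverse=True)
        agree = sum(1 for _, y2 in others[:k] if y2 == y) / k
        if agree >= threshold:
            kept.append((x, y))
    return kept

# Toy nominal data; the last sample contradicts its attribute twins,
# so the filter treats it as noise.
data = [
    (("sunny", "hot"), "no"),
    (("sunny", "hot"), "no"),
    (("rain", "cool"), "yes"),
    (("rain", "cool"), "yes"),
    (("sunny", "hot"), "yes"),
]
clean = filter_by_validity(data)
print(len(clean))
```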


Author(s):  
Tobias Sombra ◽  
Rose Santini ◽  
Emerson Morais ◽  
Walmir Couto ◽  
Alex Zissou ◽  
...  

Quantitative evaluation of a dataset can play an important role in pattern recognition for technical-scientific research on behavior and dynamics in social networks; adaptive feature weighting approaches for the naive Bayes text algorithm are one example. This work presents an exploratory data analysis with a quantitative approach to pattern recognition on the Mendeley research network, aiming to identify patterns in the popularity of document access. To better analyze the results, the work was divided into four categories, each with three subcategories, that is, five, three, and two output classes. These categories arose during data collection, which also yielded open-access documents, splitting proceedings and journals into two further categories. As a result, on the test examples the naive Bayes algorithm showed the lowest error rate for the two-output-class subcategory under the popularity criterion in Mendeley.


Author(s):  
Prof. R. S. Shishupal ◽  
Varsha ◽  
Supriya Mane ◽  
Vinita Singh ◽  
Damini Wasekar

The growth of social media has increased the prevalence of fake job postings. To help users avoid fraudulent job posts, an Android application is designed for classification using machine learning. This paper describes the implementation and working of this machine-learning-based Android application. Several single classifiers are used, and their results are compared for the prediction of fake job profiles. Based on the experimental results, Multinomial Naive Bayes is the best classifier for detecting fake jobs among the classifiers compared.
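A multinomial naive Bayes text classifier of the kind the comparison favors can be hand-rolled in a few lines: count word occurrences per class, apply Laplace smoothing, and pick the class with the highest log-posterior. This is a generic sketch of the technique, not the paper's Android pipeline; the toy job postings below are invented for illustration.

```python
import math
from collections import Counter

def train_mnb(docs, alpha=1.0):
    """Train a multinomial naive Bayes text classifier with Laplace
    smoothing. docs is a list of (text, label) pairs."""
    vocab = set(w for text, _ in docs for w in text.split())
    counts, totals = {}, {}
    priors = Counter(y for _, y in docs)
    for text, y in docs:
        counts.setdefault(y, Counter()).update(text.split())
    for y in counts:
        totals[y] = sum(counts[y].values())
    n = len(docs)
    return dict(vocab=vocab, counts=counts, totals=totals,
                priors={y: c / n for y, c in priors.items()}, alpha=alpha)

def predict_mnb(model, text):
    """Return the label with the highest smoothed log-posterior."""
    v = len(model["vocab"])
    best, best_score = None, -math.inf
    for y, prior in model["priors"].items():
        score = math.log(prior)
        for w in text.split():
            cw = model["counts"][y].get(w, 0)
            score += math.log((cw + model["alpha"]) /
                              (model["totals"][y] + model["alpha"] * v))
        if score > best_score:
            best, best_score = y, score
    return best

# Toy training set of job-posting snippets (invented examples).
jobs = [
    ("earn money fast from home no experience", "fake"),
    ("quick cash guaranteed payment upfront", "fake"),
    ("software engineer java backend team", "real"),
    ("nurse position hospital full time benefits", "real"),
]
model = train_mnb(jobs)
print(predict_mnb(model, "guaranteed money from home"))
```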


2012 ◽  
Vol 21 (01) ◽  
pp. 1250007 ◽  
Author(s):  
LIANGXIAO JIANG ◽  
DIANHONG WANG ◽  
ZHIHUA CAI

Many approaches have been proposed to improve naive Bayes by weakening its conditional independence assumption. In this paper, we work on the approach of instance weighting and propose an improved naive Bayes algorithm based on discriminative instance weighting, which we call Discriminatively Weighted Naive Bayes. In each iteration, training instances are discriminatively assigned different weights according to their estimated conditional probability loss. Experimental results on a large number of UCI data sets validate its effectiveness in terms of classification accuracy and AUC. Moreover, running-time results show that Discriminatively Weighted Naive Bayes is almost as efficient as the state-of-the-art Discriminative Frequency Estimate learning method, and significantly more efficient than Boosted Naive Bayes. Finally, we apply the idea of discriminatively weighted learning to several state-of-the-art naive Bayes text classifiers, such as multinomial naive Bayes, complement naive Bayes, and the one-versus-all-but-one model, and achieve remarkable improvements.
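The iterative scheme can be sketched as: train a weighted NB from the current instance weights, then increase each instance's weight by its conditional probability loss 1 − P(true class | x), so hard instances count more in the next round. The exact loss definition, update rule, and iteration count in the paper are assumptions here; this is only an illustration of the weighting loop.

```python
from collections import defaultdict

def train_weighted_nb(data, weights):
    """Collect weighted counts for a categorical NB; each training
    instance contributes its current weight instead of 1."""
    classes = defaultdict(float)
    cond = defaultdict(float)   # (attr_index, value, label) -> weighted count
    for (x, y), w in zip(data, weights):
        classes[y] += w
        for j, v in enumerate(x):
            cond[(j, v, y)] += w
    return classes, cond

def class_posterior(model, x, y, values_per_attr, alpha=1.0):
    """P(y | x) under the weighted NB with Laplace smoothing."""
    classes, cond = model
    total = sum(classes.values())
    def joint(lbl):
        p = (classes[lbl] + alpha) / (total + alpha * len(classes))
        for j, v in enumerate(x):
            p *= (cond[(j, v, lbl)] + alpha) / (classes[lbl] + alpha * values_per_attr[j])
        return p
    return joint(y) / sum(joint(l) for l in classes)

# Toy nominal data; the last instance contradicts its attribute twins,
# so it is "hard" and should accumulate a larger weight.
data = [(("a", "x"), 0), (("a", "x"), 0),
        (("b", "y"), 1), (("b", "y"), 1),
        (("a", "x"), 1)]
values_per_attr = [2, 2]
weights = [1.0] * len(data)
for _ in range(5):
    model = train_weighted_nb(data, weights)
    # Weight update: add the conditional probability loss 1 - P(true | x).
    weights = [w + (1.0 - class_posterior(model, x, y, values_per_attr))
               for (x, y), w in zip(data, weights)]
print(weights)
```

After a few iterations the contradictory instance carries the largest weight, which is the behavior discriminative instance weighting is after.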

