A Bayesian Classifier Learning Algorithm Based on Optimization Model

2013 ◽  
Vol 2013 ◽  
pp. 1-9 ◽  
Author(s):  
Sanyang Liu ◽  
Mingmin Zhu ◽  
Youlong Yang

The naive Bayes classifier is a simple and effective classification method, but its attribute independence assumption prevents it from expressing dependences among attributes and degrades its classification performance. In this paper, we survey the existing improved algorithms and propose a Bayesian classifier learning algorithm based on an optimization model (BC-OM). BC-OM uses the chi-squared statistic to estimate the dependence coefficients among attributes, from which it constructs an objective function as an overall measure of the dependence of a classifier structure. The search for an optimal classifier is thereby turned into finding the maximum of the objective function over the feasible region. In addition, we prove the existence and uniqueness of the numerical solution. BC-OM offers a new perspective on research into extended Bayesian classifiers. Theoretical and experimental results show that the new algorithm is correct and effective.
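The pairwise dependence coefficient that BC-OM is described as estimating can be illustrated with a plain chi-squared statistic over two discrete attribute columns. This is a generic sketch; the function name and API are hypothetical, not the authors' code:

```python
from collections import Counter

def chi_squared(x, y):
    """Chi-squared statistic measuring dependence between two discrete
    attribute columns (larger = stronger dependence).
    Illustrative helper, not the authors' implementation."""
    n = len(x)
    joint = Counter(zip(x, y))          # observed joint counts
    px, py = Counter(x), Counter(y)     # marginal counts
    stat = 0.0
    for a, ca in px.items():
        for b, cb in py.items():
            expected = ca * cb / n      # count expected under independence
            observed = joint.get((a, b), 0)
            stat += (observed - expected) ** 2 / expected
    return stat

# Independent columns score 0; strongly dependent columns score high.
print(chi_squared([0, 0, 1, 1], [0, 1, 0, 1]))  # → 0.0
```

In BC-OM's formulation, such coefficients over all attribute pairs would feed the objective function whose maximum identifies the classifier structure.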

Author(s):  
Р.И. Кузьмич ◽  
А.А. Ступина ◽  
С.Н. Ежеманская ◽  
А.П. Шугалей

Comparison of two optimization models for constructing patterns in the method of logical analysis of data

Two optimization models for constructing informative patterns are proposed. Empirical confirmation is given that the boosting criterion is an expedient choice of objective function in the optimization model for obtaining informative patterns. Keywords: informativeness, pattern, boosting criterion, optimization model.


Entropy ◽  
2019 ◽  
Vol 21 (8) ◽  
pp. 721 ◽  
Author(s):  
YuGuang Long ◽  
LiMin Wang ◽  
MingHui Sun

Due to the simplicity and competitive classification performance of naive Bayes (NB), researchers have proposed many approaches that improve NB by weakening its attribute independence assumption. Theoretical analysis based on Kullback–Leibler divergence shows that the difference between NB and its variants lies in the different orders of conditional mutual information represented by the augmenting edges in the tree-shaped network structure. In this paper, we propose to relax the independence assumption by generalizing tree-augmented naive Bayes (TAN) from 1-dependence Bayesian network classifiers (BNC) to arbitrary k-dependence. Sub-models of TAN, each built to represent a specific conditional dependence relationship, may "best match" the conditional probability distribution over the training data. Extensive experimental results reveal that the proposed algorithm achieves a good bias–variance trade-off and substantially better generalization performance than state-of-the-art classifiers such as logistic regression.
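The conditional mutual information that such augmenting edges represent can be estimated directly from counts; a minimal sketch (names and API are illustrative, not from the paper):

```python
import math
from collections import Counter

def cond_mutual_info(xi, xj, c):
    """Empirical conditional mutual information I(Xi; Xj | C) from three
    aligned discrete columns. Illustrative sketch of the quantity that
    weights TAN-style augmenting edges."""
    n = len(c)
    pxyc = Counter(zip(xi, xj, c))
    pxc, pyc, pc = Counter(zip(xi, c)), Counter(zip(xj, c)), Counter(c)
    mi = 0.0
    for (a, b, k), cnt in pxyc.items():
        # weight p(a,b,k) times log of p(a,b|k) / (p(a|k) * p(b|k))
        mi += (cnt / n) * math.log((cnt * pc[k]) / (pxc[(a, k)] * pyc[(b, k)]))
    return mi
```

Attribute pairs with the largest I(Xi; Xj | C) are the ones a TAN-style learner would connect first; a k-dependence generalization allows up to k such parents per attribute.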


2013 ◽  
Vol 2013 ◽  
pp. 1-11 ◽  
Author(s):  
Zhicong Zhang ◽  
Kaishun Hu ◽  
Shuai Li ◽  
Huiyu Huang ◽  
Shaoyong Zhao

Chip attach is the bottleneck operation in semiconductor assembly. Chip attach scheduling is in nature an unrelated parallel machine scheduling problem with practical complications such as machine-job qualification, sequence-dependent setup times, initial machine status, and engineering time. The major scheduling objective is to minimize the total weighted unsatisfied Target Production Volume over the schedule horizon. To apply the Q-learning algorithm, the scheduling problem is converted into a reinforcement learning problem by constructing an elaborate system state representation, a set of actions, and a reward function. We select five heuristics as actions and prove the equivalence of the reward function and the scheduling objective function. We also conduct experiments with industrial datasets to compare the Q-learning algorithm, the five action heuristics, and the Largest Weight First (LWF) heuristic used in industry. Experimental results show that Q-learning is remarkably superior to all six heuristics. Compared with LWF, Q-learning reduces three performance measures, the objective function value, the unsatisfied Target Production Volume index, and the unsatisfied job type index, by the considerable amounts of 80.92%, 52.20%, and 31.81%, respectively.
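The learning rule behind such an approach, tabular Q-learning with heuristic indices as actions, can be sketched generically; the state labels, reward, and action set here are placeholders, not the paper's construction:

```python
from collections import defaultdict

def q_update(Q, state, action, reward, next_state, actions,
             alpha=0.1, gamma=0.9):
    """One tabular Q-learning step: move Q(state, action) toward the
    observed reward plus the discounted best value of the next state."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# Actions 0..4 would index the five dispatching heuristics.
Q = defaultdict(float)
q_update(Q, "s0", 0, 1.0, "s1", actions=range(5))
print(Q[("s0", 0)])  # → 0.1
```

At each decision point the scheduler would pick the heuristic with the highest Q-value for the current state (with some exploration) and apply it to dispatch the next job.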


2014 ◽  
Vol 2014 ◽  
pp. 1-16 ◽  
Author(s):  
Qingchao Liu ◽  
Jian Lu ◽  
Shuyan Chen ◽  
Kangjia Zhao

This study presents the applicability of a naive Bayes classifier ensemble to traffic incident detection. The standard naive Bayes (NB) classifier has been applied to traffic incident detection and has achieved good results. However, the detection result of a practically implemented NB depends on the choice of an optimal threshold, which is determined mathematically using Bayesian concepts in the incident-detection process. To avoid the burden of choosing the optimal threshold and tuning the parameters, and furthermore to improve the limited classification performance of NB and enhance detection performance, we propose an NB classifier ensemble for incident detection. In addition, we propose combining naive Bayes with a decision tree (NBTree) to detect incidents. In this paper, we discuss extensive experiments performed to evaluate three algorithms: standard NB, the NB ensemble, and NBTree. The experimental results indicate that the five combination rules of the NB classifier ensemble perform significantly better than standard NB and slightly better than NBTree on some indicators. More importantly, the performance of the NB classifier ensemble is very stable.
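One classical combination rule for such an ensemble is the mean rule, which averages the member classifiers' posterior probabilities and predicts the class with the highest average; the sketch below is illustrative and not the paper's implementation:

```python
def mean_rule(posteriors):
    """Combine per-classifier posterior vectors by averaging, then
    predict the class with the highest average posterior. One of the
    classical combination rules (mean, product, max, min, majority vote)."""
    n, k = len(posteriors), len(posteriors[0])
    avg = [sum(p[i] for p in posteriors) / n for i in range(k)]
    return max(range(k), key=avg.__getitem__)

# Three ensemble members voting over classes {0: no incident, 1: incident}:
print(mean_rule([[0.9, 0.1], [0.6, 0.4], [0.2, 0.8]]))  # → 0
```

Averaging over members trained on different bootstrap samples or feature subsets is what gives the ensemble its stability relative to a single NB classifier.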


2009 ◽  
Vol 16-19 ◽  
pp. 1164-1168 ◽  
Author(s):  
Ping Liu ◽  
San Yang Liu

An unconstrained optimization model based on radial deviation measurement is established for assessing coaxiality errors by the positioned minimum zone method. The properties of the objective function in the optimization model are thoroughly investigated. On the basis of the modern theory of convex functions, it is strictly proved that the objective function is a continuous, non-differentiable, convex function defined on the four-dimensional Euclidean space R4. Therefore, the global minimum value of the objective function is unique, and any of its local minimum points must be a global minimum point. Thus, any existing optimization algorithm, as long as it is convergent, can be used to minimize the objective function and obtain the coaxiality errors under the positioned minimum zone assessment. An example is given to verify the theoretical results.
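Because the objective is convex, even a simple derivative-free method suffices. The sketch below runs a generic compass search on a stand-in separable convex test function over R4; the paper's coaxiality-error objective would replace it, and nothing here is the authors' solver:

```python
def compass_search(f, x0, step=1.0, tol=1e-6):
    """Minimize f by probing +/- step along each coordinate and halving
    the step when no probe improves. Generic derivative-free sketch."""
    x, fx = list(x0), f(x0)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = list(x)
                y[i] += d
                fy = f(y)
                if fy < fx:               # accept any improving probe
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5                   # refine the mesh
    return x, fx

# Stand-in convex objective on R4 (sum of absolute deviations).
f = lambda v: sum(abs(t) for t in v)
x, fx = compass_search(f, [1.0, -2.0, 3.0, 0.5])
```

For the convex objectives in question, convexity guarantees that the minimum such a convergent method finds is the global one, which is exactly why the paper's conclusion licenses using any convergent algorithm.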


2013 ◽  
Vol 2013 ◽  
pp. 1-8 ◽  
Author(s):  
Teng Li ◽  
Huan Chang ◽  
Jun Wu

This paper presents a novel algorithm that numerically decomposes mixed signals in a collaborative way, given supervision in the form of the labels that each signal contains. The decomposition is formulated as an optimization problem with a nonnegativity constraint. A nonnegative data factorization solution is presented to yield the decomposed results. It is shown that the optimization is efficient and decreases the objective function monotonically. Such a decomposition algorithm can be applied to multilabel training samples for pattern classification. Experimental results on real data show that the proposed algorithm can significantly improve multilabel image classification performance under weak supervision.
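A classical nonnegative factorization whose objective decreases monotonically is the Lee–Seung multiplicative update; the sketch below is the generic unsupervised form, not the paper's label-supervised collaborative variant:

```python
import random

def matmul(A, B):
    """Plain list-of-lists matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, k, iters=200, eps=1e-9, seed=0):
    """Approximate nonnegative V as W @ H with W, H >= 0 using
    multiplicative updates, which keep the factors nonnegative and do
    not increase the squared reconstruction error."""
    rng = random.Random(seed)
    m, n = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(k)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(k)]
    for _ in range(iters):
        WtV = matmul(transpose(W), V)
        WtWH = matmul(matmul(transpose(W), W), H)
        for i in range(k):                # update H with W fixed
            for j in range(n):
                H[i][j] *= WtV[i][j] / (WtWH[i][j] + eps)
        VHt = matmul(V, transpose(H))
        WHHt = matmul(matmul(W, H), transpose(H))
        for i in range(m):                # update W with H fixed
            for j in range(k):
                W[i][j] *= VHt[i][j] / (WHHt[i][j] + eps)
    return W, H
```

In a label-supervised setting like the paper's, the supervision would additionally constrain the factorization so that each mixed signal is explained only by components matching its labels; that constraint is not shown here.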

