A decomposition structure learning algorithm in Bayesian network based on a two-stage combination method

Author(s):  
Huiping Guo ◽  
Hongru Li

Decomposition hybrid algorithms with a recursive framework, which recursively decompose the structure learning task into subtasks to reduce computational complexity, are employed to learn Bayesian network (BN) structure. Merging rules are commonly adopted as the combination method in the combination step. The direction determination rule of merging rules determines edge directions in the whole structure by keeping v-structures unchanged before and after combination; this rule breaks down when wrong v-structures appear and is hard to operate in practice. We therefore adopt a novel approach for direction determination and propose a two-stage combination method. In the first stage, nodes and undirected edges are determined by merging rules, and the directions of contradictory edges are determined using the idea of permutation and combination. In the second stage, edges between nodes that do not satisfy the decomposition property and their parent nodes are restricted by determining the target domain according to the decomposition property. Simulation experiments on four networks show that the proposed algorithm obtains BN structures with higher accuracy than competing algorithms. Finally, the proposed algorithm is applied to the thickening process of gold hydrometallurgy to solve a practical problem.
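The permutation-and-combination idea for contradictory edges can be illustrated with a minimal sketch (function names are illustrative, not from the paper): enumerate both orientations of every contradictory edge and keep only assignments that leave the merged graph acyclic. The paper's full rule additionally reasons about v-structures; this sketch checks acyclicity only.

```python
from itertools import product

def acyclic(edges):
    """Check whether a directed edge list is acyclic (Kahn's algorithm)."""
    nodes = {n for e in edges for n in e}
    indeg = {n: 0 for n in nodes}
    for _, v in edges:
        indeg[v] += 1
    queue = [n for n in nodes if indeg[n] == 0]
    seen = 0
    while queue:
        u = queue.pop()
        seen += 1
        for a, b in edges:
            if a == u:
                indeg[b] -= 1
                if indeg[b] == 0:
                    queue.append(b)
    return seen == len(nodes)

def resolve_contradictory(fixed_edges, contradictory):
    """Enumerate all direction assignments (the permutation-and-combination
    idea) for the contradictory edges and return those consistent with the
    already-fixed edges, i.e. those that introduce no directed cycle."""
    valid = []
    for choice in product([0, 1], repeat=len(contradictory)):
        oriented = [(u, v) if c == 0 else (v, u)
                    for (u, v), c in zip(contradictory, choice)]
        if acyclic(list(fixed_edges) + oriented):
            valid.append(oriented)
    return valid
```

For example, with fixed edges A→B and B→C, the contradictory edge between C and A can only be oriented A→C, since C→A would close a cycle.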

2020 ◽  
Vol 24 (5) ◽  
pp. 1087-1106
Author(s):  
Huiping Guo ◽  
Hongru Li

Improving accuracy is important for Bayesian network (BN) structure learning, an NP-hard problem, and hybrid algorithms are currently an effective class of structure learning algorithms. Most hybrid algorithms adopt a single heuristic search and fall into two groups: heuristic search based on an initial BN skeleton and heuristic search based on initial solutions. The former often fails to guarantee the globality of the optimal structure, and the latter fails to reach the optimal solution because of the large search space. In this paper, an efficient hybrid algorithm with a two-stage search strategy is proposed. The first-stage search determines the local search space based on the Maximal Information Coefficient by introducing penalty factors p1 and p2, then searches the local space by Binary Particle Swarm Optimization. The second-stage search applies an efficient ADR (Add, Delete, Reverse) algorithm based on these three basic operators to extend the local space to the whole space. Experimental results show that the proposed algorithm achieves better BN structure learning performance.
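The second-stage ADR idea can be sketched as first-improvement hill climbing whose neighborhood is generated by the three operators Add, Delete, and Reverse. This is a minimal sketch with an abstract scoring function; the paper's algorithm scores DAGs (so a real implementation would also reject cyclic candidates), and all names here are illustrative.

```python
def neighbors(edges, nodes):
    """Generate neighbor edge sets via the Add, Delete, Reverse operators."""
    edge_set = set(edges)
    for u in nodes:                              # Add: any absent edge
        for v in nodes:
            if u != v and (u, v) not in edge_set and (v, u) not in edge_set:
                yield edge_set | {(u, v)}
    for e in edge_set:
        yield edge_set - {e}                     # Delete
        yield (edge_set - {e}) | {(e[1], e[0])}  # Reverse

def adr_search(edges, nodes, score):
    """Hill-climb over edge sets, accepting the first scoring improvement,
    until no Add/Delete/Reverse move improves the score."""
    current, best = set(edges), score(set(edges))
    improved = True
    while improved:
        improved = False
        for cand in neighbors(current, nodes):
            s = score(cand)
            if s > best:
                current, best, improved = set(cand), s, True
    return current, best
```

With a toy score that rewards edges of a target structure and penalizes extras, the search recovers the target from an empty graph.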


2013 ◽  
Vol 427-429 ◽  
pp. 1614-1619
Author(s):  
Shao Jin Han ◽  
Jian Xun Li

Traditional structure learning algorithms are mainly designed for large sample datasets, but in practice sample datasets are often small. To address this, we introduce Probability Density Kernel Estimation (PDKE) to expand the original sample set; the K2 algorithm is then used to learn the Bayesian network structure. By optimizing the kernel function and window width, PDKE achieves an effective expansion of the original dataset. After the variable order is determined based on mutual information, a Bayesian structure learning algorithm for small sample sets is established. Finally, simulation results confirm that the new algorithm is effective and practical.
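The sample-expansion step can be sketched as sampling from a Gaussian kernel density estimate: pick an original point at random and perturb each coordinate with kernel-bandwidth noise. This is a minimal stdlib-only sketch under an assumed Gaussian kernel and fixed bandwidth; the paper optimizes the kernel function and window width, which is not shown here.

```python
import random

def pdke_expand(samples, n_new, bandwidth=0.5, seed=0):
    """Expand a small continuous dataset by drawing n_new synthetic points
    from a Gaussian kernel density estimate over the originals: choose a
    base point uniformly, then add N(0, bandwidth) noise per coordinate."""
    rng = random.Random(seed)
    expanded = list(samples)                # keep the originals
    for _ in range(n_new):
        base = rng.choice(samples)
        expanded.append(tuple(x + rng.gauss(0.0, bandwidth) for x in base))
    return expanded
```

The expanded dataset (originals plus synthetic points) would then be discretized as needed and passed, together with a mutual-information-based variable order, to K2.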


2021 ◽  
Vol 2021 ◽  
pp. 1-17
Author(s):  
Ruo-Hai Di ◽  
Ye Li ◽  
Ting-Peng Li ◽  
Lian-Dong Wang ◽  
Peng Wang

Dynamic programming is difficult to apply to large-scale Bayesian network structure learning. This article therefore proposes a BN structure learning algorithm based on dynamic programming that integrates an improved MMPC (max-min parents and children) algorithm and the MWST (maximum weight spanning tree). First, the maximum weight spanning tree is used to obtain the maximum number of parent nodes of each network node. Second, the MMPC algorithm is improved with a symmetry relationship to reduce false-positive nodes and obtain the set of candidate parent-child nodes. Finally, with the maximum number of parents and the candidate parent set as constraints, the parent graph of dynamic programming is pruned to reduce the number of scoring calculations and the complexity of the algorithm. Experiments show that, with an appropriate significance level α, the MMPCDP algorithm greatly reduces the number of scoring calculations and the running time while maintaining accuracy.
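The effect of the pruning constraints can be sketched by enumerating the parent sets that remain to be scored for one node: only subsets of its MMPC candidate set, up to the MWST-derived maximum parent count. This is an illustrative sketch, not the paper's implementation.

```python
from itertools import combinations

def candidate_parent_sets(node, candidates, max_parents):
    """Enumerate the parent sets scored for `node` after pruning the
    dynamic-programming parent graph: subsets of the MMPC candidate set
    of size at most max_parents (the MWST-derived bound)."""
    cands = [c for c in candidates if c != node]
    for k in range(max_parents + 1):
        for ps in combinations(sorted(cands), k):
            yield ps
```

Unconstrained dynamic programming scores all 2^(n-1) parent sets per node; with, say, 3 candidate parents and a bound of 2, only 1 + 3 + 3 = 7 sets remain, which is where the reduction in scoring calculations comes from.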

