A Breadth-First Search Algorithm for Mining Generalized Frequent Itemsets Based on Set Enumeration Tree

Author(s):  
Yu Xing Mao ◽  
Bai Le Shi
Author(s):  
Md. Sabir Hossain ◽  
Ahsan Sadee Tanim ◽  
Nabila Nawal ◽  
Sharmin Akter

Background: Tour recommendation and path planning are challenging jobs for tourists, who must decide which Points of Interest (POIs) to visit. Objective: The main objective of this paper is to reduce the physical effort of tourists and recommend a personalized tour. Most of the time, people have to find the places they want to visit in a difficult way, which wastes a lot of time. Methods: To cope with this situation, we use several methods. First, a greedy algorithm filters the POIs, and a BFS (Breadth-First Search) algorithm finds POIs matching the user's interests; the maximum number of POIs that can be visited within a limited time is considered. Then, the Dijkstra algorithm finds the shortest path from the point of departure to the end of the tour. Results: This work shows users a list of places according to their interests in a particular city. It also suggests places to visit within a range of the user's location, where the range can be changed dynamically, and it suggests nearby places they may want to visit. Conclusion: This tour recommendation system provides users with better trip planning and thus makes their holidays enjoyable.
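The abstract describes the Dijkstra step only at a high level. As a minimal sketch, assuming a hypothetical POI graph where edge weights are travel times in minutes (all names and numbers below are illustrative, not from the paper):

```python
import heapq

def dijkstra(graph, start):
    """Shortest travel time from `start` to every reachable POI.
    `graph` maps a POI to a list of (neighbor, travel_time) pairs."""
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already settled with a shorter path
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical POI graph: edge weights are travel minutes.
poi_graph = {
    "hotel": [("museum", 10), ("park", 25)],
    "museum": [("park", 12), ("cafe", 5)],
    "park": [("cafe", 8)],
    "cafe": [],
}
print(dijkstra(poi_graph, "hotel"))
# → {'hotel': 0, 'museum': 10, 'park': 22, 'cafe': 15}
```

The same distance map can be reused to route from the tour's departure point to each candidate POI selected by the greedy/BFS stage.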


2008 ◽  
Vol 17 (02) ◽  
pp. 303-320 ◽  
Author(s):  
WEI SONG ◽  
BINGRU YANG ◽  
ZHANGYAN XU

Because of its inherent computational complexity, mining the complete set of frequent itemsets in dense datasets remains a challenging task. Mining Maximal Frequent Itemsets (MFIs) is an alternative that addresses this problem. The Set-Enumeration Tree (SET) is a common data structure used in several MFI mining algorithms; for this kind of algorithm, the process of mining MFIs can be viewed as searching the set-enumeration tree. To reduce the search space, this paper proposes a new MFI mining algorithm, Index-MaxMiner, that employs a hybrid search strategy blending breadth-first and depth-first search. First, the index array is introduced, and a bitmap-based algorithm for computing it is presented. By attaching a subsume index to frequent items, Index-MaxMiner discovers the candidate MFIs in a single breadth-first pass, which avoids first-level nodes that cannot participate in the answer set and drastically reduces the number of candidate itemsets. Then, a depth-first search generates all MFIs from the candidates. Thus, jumping search in the SET is implemented and the search space is greatly reduced. Experimental results show that the proposed algorithm is efficient, especially for dense datasets.
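Index-MaxMiner itself is not reproduced here, but the breadth-first traversal of a set-enumeration tree that it builds on can be sketched. This toy version prunes by support and then filters for maximal itemsets; it omits the paper's subsume index, bitmap, and jumping-search optimizations (function names and sample data are illustrative):

```python
def bfs_frequent_itemsets(transactions, min_sup):
    """Breadth-first traversal of the set-enumeration tree.
    Level k of the frontier holds frequent k-itemsets; a node is
    extended only with items that come after its last item in a fixed
    order, so every itemset is enumerated exactly once."""
    transactions = [frozenset(t) for t in transactions]
    items = sorted({i for t in transactions for i in t})

    def sup(s):
        return sum(1 for t in transactions if s <= t)

    frontier = [(frozenset([it]), idx) for idx, it in enumerate(items)
                if sup(frozenset([it])) >= min_sup]
    frequent = []
    while frontier:
        frequent.extend(s for s, _ in frontier)
        nxt = []
        for s, idx in frontier:
            for j in range(idx + 1, len(items)):
                c = s | {items[j]}
                if sup(c) >= min_sup:  # prune infrequent extensions
                    nxt.append((c, j))
        frontier = nxt
    # Maximal frequent itemsets: no frequent proper superset exists.
    maximal = [s for s in frequent if not any(s < t for t in frequent)]
    return frequent, maximal

transactions = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
frequent, maximal = bfs_frequent_itemsets(transactions, min_sup=2)
print(sorted(map(sorted, maximal)))
# → [['a', 'b'], ['a', 'c'], ['b', 'c']]
```

The point of the paper's hybrid strategy is precisely to avoid the exhaustive level-by-level expansion this sketch performs: the subsume index lets it jump to candidate maximal itemsets in one breadth-first pass and verify them depth-first.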


Author(s):  
A. Yoo ◽  
E. Chow ◽  
K. Henderson ◽  
W. McLendon ◽  
B. Hendrickson ◽  
...  

1987 ◽  
Vol 25 (5) ◽  
pp. 329-333 ◽  
Author(s):  
Yunzhou Zhu ◽  
To-Yat Cheung

2012 ◽  
Vol 433-440 ◽  
pp. 475-479
Author(s):  
Ke Dang ◽  
Hui Ming Zhang ◽  
Xin He

Island operation is a new operating mode for the distribution network after distributed generation is introduced, and it is a beneficial supplement to distribution-network operation. When a fault occurs, the distributed power supply capacity is used to the fullest so that the maximum number of significant loads can keep operating safely. A combinational algorithm for planned island partition is presented, based on the principle of power balance during a distribution-network failure. A heuristic search strategy and the breadth-first search algorithm are adopted, taking into account the characteristics of different types of distributed power. The validity of the method is demonstrated by an example.
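The paper's combinational algorithm is not detailed in this abstract; the sketch below only illustrates the breadth-first island-growing idea under a simplifying assumption: the island expands outward from a distributed-generation (DG) node and admits a load node only while the DG unit's remaining capacity covers it. The grid topology, capacity, and load figures are hypothetical, and the real algorithm would also rank loads by significance:

```python
from collections import deque

def bfs_island(grid, dg_node, capacity, load):
    """Grow a planned island outward from a DG node by BFS.
    `grid` maps a node to its neighbors; `load` maps a node to its
    demand (kW). A node joins the island only if the DG unit's
    remaining capacity can balance its load (power-balance principle)."""
    island = {dg_node}
    remaining = capacity - load.get(dg_node, 0)
    queue = deque([dg_node])
    while queue:
        u = queue.popleft()
        for v in grid.get(u, []):
            if v not in island and load.get(v, 0) <= remaining:
                island.add(v)
                remaining -= load.get(v, 0)
                queue.append(v)
    return island, remaining

# Hypothetical radial feeder: node 1 hosts the DG unit (10 kW).
grid = {1: [2, 3], 2: [1, 4], 3: [1], 4: [2]}
load = {1: 0, 2: 4, 3: 5, 4: 3}
print(bfs_island(grid, dg_node=1, capacity=10, load=load))
# → ({1, 2, 3}, 1)
```

Node 4 is excluded because its 3 kW demand exceeds the 1 kW of capacity left after nodes 2 and 3 join, which is exactly the kind of trade-off the heuristic search in the paper is meant to resolve across load priorities.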


Algorithms ◽  
2020 ◽  
Vol 13 (9) ◽  
pp. 211 ◽  
Author(s):  
Pierluigi Crescenzi ◽  
Clémence Magnien ◽  
Andrea Marino

The harmonic closeness centrality measure associates with each node of a graph the average of the inverses of its distances to all other nodes (assuming that unreachable nodes are at infinite distance). This notion has been adapted to temporal graphs (that is, graphs in which edges can appear and disappear over time), and in this paper we address the question of finding the top-k nodes for this metric. Computing the temporal closeness of one node can be done in O(m) time, where m is the number of temporal edges; therefore, computing the closeness of all nodes exactly, in order to find those with top closeness, would require O(nm) time, where n is the number of nodes. This time complexity is intractable for large temporal graphs. Instead, we show how this measure can be efficiently approximated by using a “backward” temporal breadth-first search algorithm and a classical sampling technique. Our experimental results show that the approximation is excellent for nodes with high closeness, allowing us to detect them in practice in a fraction of the time needed to compute the exact closeness of all nodes. We validate our approach with an extensive set of experiments.
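For intuition, the static-graph version of the harmonic closeness definition can be computed with an ordinary BFS; the paper's method replaces this with a backward temporal BFS (which must respect edge appearance times) and samples nodes instead of iterating over all n of them. The graph below is illustrative:

```python
from collections import deque

def harmonic_closeness(graph, node):
    """Average of 1/d over all other nodes, where d is the BFS
    distance from `node`; unreachable nodes contribute 1/inf = 0.
    Static-graph analogue of the temporal measure discussed above."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        u = queue.popleft()
        for v in graph.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    n = len(graph)
    return sum(1 / d for d in dist.values() if d > 0) / (n - 1)

# Path graph a - b - c (undirected, so edges are listed both ways).
graph = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(harmonic_closeness(graph, "a"))
# → 0.75  (i.e. (1/1 + 1/2) / 2)
```

Each such BFS costs O(m) on a static graph, mirroring the O(m) per-node cost stated above for the temporal case; the sampling technique avoids paying that cost n times.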

