Development of a new hesitant fuzzy ranking model for NTMP ranking problem

2021
Author(s): Kumru Didem Atalay, Yusuf Tansel İç, Barış Keçeci, Mustafa Yurdakul, Melis Boran
2019, Vol 24 (13), pp. 10095-10110
Author(s): Mustafa Yurdakul, Yusuf Tansel İç, Kumru Didem Atalay

Author(s): Qi Zeng, Liangchen Luo, Wenhao Huang, Yang Tang

Extracting valuable facts or informative summaries from multi-dimensional tables, i.e. insight mining, is an important task in data analysis and business intelligence. However, ranking the importance of insights remains a challenging and largely unexplored task. The main challenge is that explicitly scoring or ranking an insight requires a thorough understanding of the underlying tables and considerable manual effort, which leads to a lack of available training data for the insight ranking problem. In this paper, we propose an insight ranking model that consists of two parts: a neural ranking model that exploits data characteristics, such as header semantics and statistical features of the data, and a memory network model that introduces table structure and context information into the ranking process. We also build a dataset with text assistance. Experimental results show that our approach substantially improves ranking precision across multiple evaluation metrics.
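The abstract describes the architecture only at a high level. The following is a minimal, hypothetical sketch (not the authors' code) of the first part only: a feature-based neural scorer trained with a pairwise ranking loss over pairs of insights. The feature dimension, layer sizes, and loss margin are illustrative assumptions, and the memory network component for table structure and context is omitted.

# Minimal sketch, assuming PyTorch; feature names/dimensions are illustrative.
import torch
import torch.nn as nn

class InsightScorer(nn.Module):
    """Scores one insight from a fixed-length feature vector
    (e.g. header-semantic and statistical features)."""
    def __init__(self, feature_dim: int = 16, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.net(features).squeeze(-1)

def pairwise_ranking_loss(score_hi: torch.Tensor, score_lo: torch.Tensor,
                          margin: float = 1.0) -> torch.Tensor:
    # Hinge loss: the more important insight should outscore the
    # less important one by at least `margin`.
    return torch.clamp(margin - (score_hi - score_lo), min=0).mean()

# Toy usage with random vectors standing in for insight features.
model = InsightScorer()
better = torch.randn(8, 16)   # batch of "more important" insights
worse = torch.randn(8, 16)    # batch of "less important" insights
loss = pairwise_ranking_loss(model(better), model(worse))
loss.backward()

A pairwise objective is one common way to train a ranker when only relative importance judgments are available; the paper's actual training setup may differ.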


1992, Vol 02 (01), pp. 31-41
Author(s): Pilar de la Torre, Raymond Greenlaw, Teresa M. Przytycka

This paper places the optimal tree ranking problem in NC. A ranking is a labeling of the nodes with natural numbers such that, whenever nodes u and v have the same label, some node with a greater label lies on the path between them. An optimal ranking is one in which the largest label assigned to any node is as small as possible over all rankings. An O(n) sequential algorithm is known, and researchers have speculated that this problem is P-complete. We show that for an n-node tree, one can compute an optimal ranking in O(log n) time using n²/log n CREW PRAM processors. In fact, our ranking is super critical in that the label assigned to each node is as small as it can possibly be. We achieve these results by showing that a more general problem, which we call the super critical numbering problem, is in NC. No NC algorithm for the super critical tree ranking problem, approximate or otherwise, was previously known; the only NC algorithm previously known for optimal tree ranking was an approximate one.
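To make the ranking definition concrete, here is a small brute-force Python check (purely illustrative, unrelated to the O(n) sequential or NC parallel algorithms cited above) that verifies whether a given labeling of a tree is a valid ranking: every pair of equally labeled nodes must have a node with a strictly larger label strictly between them on their tree path.

# Minimal sketch: brute-force validity check of the ranking property.
from itertools import combinations

def tree_path(adj, u, v):
    """Return the unique u-v path in a tree given as an adjacency dict."""
    stack, parent = [u], {u: None}
    while stack:
        x = stack.pop()
        if x == v:
            break
        for y in adj[x]:
            if y not in parent:
                parent[y] = x
                stack.append(y)
    path, x = [], v
    while x is not None:
        path.append(x)
        x = parent[x]
    return path[::-1]

def is_valid_ranking(adj, label):
    """True iff every pair of equal labels is separated by a larger label."""
    for u, v in combinations(adj, 2):
        if label[u] == label[v]:
            between = tree_path(adj, u, v)[1:-1]   # internal nodes only
            if not any(label[w] > label[u] for w in between):
                return False
    return True

# Example: a 4-node path a-b-c-d.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
print(is_valid_ranking(adj, {"a": 1, "b": 2, "c": 1, "d": 3}))  # True
print(is_valid_ranking(adj, {"a": 1, "b": 2, "c": 1, "d": 1}))  # False (c, d clash)

The check runs in roughly cubic time and is meant only to illustrate the definition; computing an optimal ranking efficiently is exactly what the cited algorithms address.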


2013, Vol 48 (1), pp. 51-62
Author(s): Amir M. Ben-Amram, Samir Genaim
