Research on Multi-document Summarization Model Based on Dynamic Manifold-Ranking

Author(s):  
Meiling Liu ◽  
Honge Ren ◽  
Dequan Zheng ◽  
Tiejun Zhao
2013 ◽  
Vol 380-384 ◽  
pp. 2811-2816
Author(s):  
Kai Lei ◽  
Yi Fan Zeng

Query-oriented multi-document summarization (QMDS) attempts to generate a concise piece of text by extracting sentences from a target document collection, with the aim of not only conveying the key content of that corpus but also satisfying the information needs expressed by the query. Due to its great practical value, QMDS has been intensively studied in recent decades. Three properties are considered crucial for a good summary: relevance, prestige, and low redundancy (or so-called diversity). Unfortunately, most existing work either disregards diversity or handles it with non-optimized heuristics, usually based on greedy sentence selection. Inspired by the manifold-ranking process, which deals with query-biased prestige, and the DivRank algorithm, which captures query-independent diversity ranking, we propose in this paper a novel biased diversity ranking model, named ManifoldDivRank, for query-sensitive summarization tasks. The top-ranked sentences discovered by our algorithm not only enjoy high query-oriented prestige but, more importantly, are dissimilar to each other. Experimental results on the DUC2005 and DUC2006 benchmark data sets demonstrate the effectiveness of our proposal.
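A minimal sketch of the general idea behind this family of methods: scores are propagated over a sentence-similarity graph while being pulled toward a query prior (as in manifold ranking), and transitions are reinforced DivRank-style so that near-duplicate sentences compete for rank. The function and parameter names below (`query_biased_diversity_rank`, `alpha`, `lam`) are illustrative assumptions, not the paper's exact ManifoldDivRank formulation.

```python
import numpy as np

def query_biased_diversity_rank(S, q, alpha=0.85, lam=0.9, iters=200, tol=1e-8):
    """Illustrative query-biased diversity ranking over a sentence graph.

    S : (n, n) non-negative pairwise sentence similarity matrix.
    q : (n,) query-to-sentence relevance scores.
    """
    S = np.array(S, dtype=float)
    n = S.shape[0]
    np.fill_diagonal(S, 0.0)
    P0 = S / np.maximum(S.sum(axis=1, keepdims=True), 1e-12)   # row-stochastic transitions
    prior = np.array(q, dtype=float)
    prior = prior / max(prior.sum(), 1e-12)                    # query bias (manifold-ranking style)
    r = np.full(n, 1.0 / n)                                    # current ranking scores
    visits = np.ones(n)                                        # cumulative mass (DivRank style)

    for _ in range(iters):
        # Reinforce transitions toward nodes that have already accumulated mass,
        # so redundant sentences end up competing rather than all ranking highly.
        W = lam * P0 * visits[np.newaxis, :] + (1.0 - lam) * np.eye(n)
        W = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)
        # Propagate scores over the graph, mixed with the query prior.
        r_new = alpha * W.T @ r + (1.0 - alpha) * prior
        visits += r_new
        if np.abs(r_new - r).sum() < tol:
            r = r_new
            break
        r = r_new
    return r   # higher score = query-relevant and non-redundant
```

In use, the top-ranked sentences (e.g. `np.argsort(scores)[::-1][:k]`) would be extracted as the summary, subject to a length budget.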


2013 ◽  
Vol 20 (4) ◽  
pp. 585-612
Author(s):  
Hitoshi Nishikawa ◽  
Tsutomu Hirao ◽  
Toshiro Makino ◽  
Yoshihiro Matsuo ◽  
Yuji Matsumoto

2020 ◽  
pp. 1498-1511
Author(s):  
Dheyaa Abdulameer Mohammed ◽  
Nasreen J. Kadhim

Currently, the prominence of the automatic multi-document summarization task stems from the rapid increase of information on the Internet. Automatic document summarization technology is progressing and may offer a solution to the problem of information overload. An automatic text summarization system faces the challenge of producing a high-quality summary. In this study, the design of a generic text summarization model based on sentence extraction has been redirected toward a more semantic measure reflecting two significant objectives individually, content coverage and diversity, when generating summaries from multiple documents, formulated as an explicit optimization model. The two proposed models have then been coupled and defined as a single-objective optimization problem. To improve the performance of the proposed model, different integrations of two similarity measures have been introduced and applied to the proposed model, alongside the single similarity measures based on the Cosine, Dice, and  similarity measures for measuring text similarity. A Genetic Algorithm (GA) has been used to solve the proposed model. Document sets supplied by the Document Understanding Conference 2002 (DUC 2002) have been used as the evaluation dataset, and the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) toolkit has been used as the evaluation metric for the proposed method. Experimental results illustrate the positive impact of measuring text similarity using a double integration of similarity measures rather than a single similarity measure when applied to the proposed model, with the best performance in terms of  and  recorded for the integration of Cosine similarity and  similarity.
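A minimal sketch of the kind of single-objective fitness that couples content coverage and diversity for a candidate set of extracted sentences, which a GA could then maximise over sentence-selection bitstrings. The function names, weights (`w_cov`, `w_div`), and term-vector representation are assumptions for illustration, not the paper's exact objective.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two term vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def summary_fitness(mask, sent_vecs, doc_vec, w_cov=0.7, w_div=0.3):
    """Illustrative single-objective fitness coupling coverage and diversity.

    mask      : boolean array selecting candidate summary sentences.
    sent_vecs : (n, d) term vectors for the n sentences.
    doc_vec   : (d,) term vector for the whole document collection.
    """
    chosen = sent_vecs[mask]
    if len(chosen) == 0:
        return 0.0
    # Content coverage: how well the selected sentences represent the collection.
    coverage = float(np.mean([cosine(v, doc_vec) for v in chosen]))
    # Redundancy: average pairwise similarity among selected sentences.
    if len(chosen) > 1:
        sims = [cosine(chosen[i], chosen[j])
                for i in range(len(chosen)) for j in range(i + 1, len(chosen))]
        redundancy = float(np.mean(sims))
    else:
        redundancy = 0.0
    # Higher fitness = good coverage and low redundancy (high diversity).
    return w_cov * coverage + w_div * (1.0 - redundancy)
```

A GA would evolve a population of such selection masks under a summary-length constraint, keeping the mask with the highest fitness as the generated summary.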


2010 ◽  
Vol 01 (02) ◽  
pp. 105-111 ◽  
Author(s):  
Rasim Alguliev ◽  
Ramiz Aliguliyev ◽  
Makrufa Hajirahimova

2020 ◽  
Vol 43 ◽  
Author(s):  
Peter Dayan

Bayesian decision theory provides a simple formal elucidation of some of the ways that representation and representational abstraction are involved with, and exploit, both prediction and its rather distant cousin, predictive coding. Both model-free and model-based methods are involved.
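As a tiny numerical illustration of the Bayesian decision rule the abstract appeals to (choose the action with the highest posterior expected utility), the posterior and utility values below are arbitrary placeholders, not figures from the article.

```python
import numpy as np

posterior = np.array([0.7, 0.3])              # P(state | observation), assumed values
utility = np.array([[1.0, -2.0],              # utility[action, state], assumed values
                    [0.0,  0.5]])
expected = utility @ posterior                # expected utility of each action
best_action = int(np.argmax(expected))        # Bayes-optimal choice
print(best_action, expected)
```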

