Combinatorial Fusion Analysis for Meta Search Information Retrieval

Author(s):
D. Frank Hsu
Isak Taksa

2008
pp. 1157-1181
Author(s):
D. Frank Hsu
Yun-Sheng Chung
Bruce S. Kristal

Combination methods have been investigated as a possible means to improve performance in multi-variable (multi-criterion or multi-objective) classification, prediction, learning, and optimization problems. In addition, information collected from multi-sensor or multi-source environments often needs to be combined to produce more accurate information, to derive better estimates, or to make more knowledgeable decisions. In this chapter, we present a method, called Combinatorial Fusion Analysis (CFA), for analyzing the combination and fusion of multiple scoring systems. CFA characterizes each scoring system as consisting of a score function, a rank function, and a rank/score function. Both rank combination and score combination are explored with respect to their combinatorial complexity and computational efficiency. Information derived from the scoring characteristics of each scoring system is used to perform system selection and to decide on method combination. In particular, the rank/score graph defined by Hsu, Shapiro and Taksa (Hsu et al., 2002; Hsu & Taksa, 2005) is used to measure the diversity between scoring systems. We illustrate various applications of the framework using examples in information retrieval and biomedical informatics.
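The three functions named in this abstract can be sketched in a few lines of Python. This is an illustrative toy, not the chapter's code: the function names, the example scores, and the use of an average rank/score gap as a diversity measure are my own simplifications of the rank/score-graph idea.

```python
# Illustrative sketch of a CFA "scoring system" (assumed forms, not the
# authors' implementation). Higher score = better.

def rank_function(scores):
    """Induced rank function: map each document to its rank (1 = top score)."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {doc: i + 1 for i, doc in enumerate(ordered)}

def rank_score_function(scores):
    """Rank/score function f: f(i) = score of the document ranked i-th."""
    ordered = sorted(scores.values(), reverse=True)
    return {i + 1: s for i, s in enumerate(ordered)}

# Two hypothetical scoring systems over the same three documents
A = {"d1": 0.9, "d2": 0.5, "d3": 0.1}
B = {"d1": 0.4, "d2": 0.8, "d3": 0.6}

rA, rB = rank_function(A), rank_function(B)
fA, fB = rank_score_function(A), rank_score_function(B)

# One simple way to compare the two rank/score graphs: the average gap
# between the systems' rank/score functions across ranks.
diversity = sum(abs(fA[i] - fB[i]) for i in fA) / len(fA)
```

Plotting f(i) against i gives the rank/score graph; systems whose graphs differ more are, in this sense, more diverse and are better candidates for fusion.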



2013
Vol 14 (01)
pp. 1350003
Author(s):
Chun-Yi Liu
Chuan-Yi Tang
D. Frank Hsu

Combining multiple information retrieval (IR) systems has been shown to improve performance over individual systems. However, it remains a challenging problem to determine when and how a set of individual systems should be combined. In this paper, we investigate these issues using combinatorial fusion analysis and five data sets provided by TREC 2, 3, 4, 5, and 6. In particular, we compare the performance of combining six IR systems selected at random vs. selected by performance measurement from these five TREC data sets. Two experiments are conducted: (1) combination of two systems and their performance outcome in terms of performance ratio and cognitive diversity, and (2) combinatorial fusion of t systems, t = 2 to 6, using both score and rank combinations, exploring the effect of diversity on the performance outcome. Both experiments demonstrate that combining two or more systems improves performance more significantly when the systems are selected by performance evaluation than when they are selected by random choice. Our work provides a distinctive method of system selection for the combination of multiple retrieval systems.
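The two fusion modes compared in this paper can be illustrated with a minimal sketch. This is not the paper's TREC procedure: the example scores are made up, the averaging forms are assumed, and score combination presumes the systems' scores are already comparably normalized.

```python
# Minimal sketch of score combination vs. rank combination of multiple
# retrieval systems (assumed averaging forms, hypothetical scores).

def score_combination(systems):
    """Average the (assumed comparably normalized) scores per document."""
    docs = systems[0].keys()
    return {d: sum(s[d] for s in systems) / len(systems) for d in docs}

def rank_combination(systems):
    """Average the ranks per document; a lower combined rank is better."""
    per_system_ranks = []
    for s in systems:
        ordered = sorted(s, key=s.get, reverse=True)
        per_system_ranks.append({d: i + 1 for i, d in enumerate(ordered)})
    docs = systems[0].keys()
    return {d: sum(r[d] for r in per_system_ranks) / len(per_system_ranks)
            for d in docs}

# Two hypothetical IR systems scoring the same three documents
A = {"d1": 0.9, "d2": 0.6, "d3": 0.2}
B = {"d1": 0.3, "d2": 0.8, "d3": 0.5}

fused_scores = score_combination([A, B])  # rank by descending value
fused_ranks = rank_combination([A, B])    # rank by ascending value
```

With t systems the same two functions apply unchanged to a list of t score dictionaries, which is the t = 2 to 6 setting explored in the paper.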


2019
Vol 53 (2)
pp. 45-53
Author(s):
Omar Alonso
Gianmaria Silvello

This is a report on the first edition of the International Conference on Design of Experimental Search & Information REtrieval Systems (DESIRES 2018) held in Bertinoro, Italy, from August 28 to August 31, 2018.


Author(s):  
J. Vivekavardhan

Search Engines (SEs) and Meta-Search Engines (MSEs) are tools that allow people to find information on the World Wide Web. SEs and MSEs on the internet have improved continually through the application of new methodologies to satisfy their users by providing them with relevant information. Understanding and utilizing SEs and MSEs is useful for information scientists, knowledge managers, librarians and, most importantly, authors and researchers, for effective information retrieval and scholarly communication. The paper explores how Search Engines and Meta-Search Engines discover web pages, index content, and provide search results. The paper discusses the technological evolution of SEs and MSEs, their working processes, and the different types of SEs and MSEs. Finally, the paper presents conclusions and suggestions for further research.

