Strategies for Improving the Efficacy of Fusion Question Answering Systems

2011 ◽
Vol 2 (1) ◽
pp. 46-63
Author(s):  
José Antonio Robles-Flores ◽  
Gregory Schymik ◽  
Julie Smith-David ◽  
Robert St. Louis

Web search engines typically retrieve a large number of web pages and overload business analysts with irrelevant information. One approach that has been proposed for overcoming some of these problems is automated Question Answering (QA). This paper describes a case study designed to determine the efficacy of QA systems for generating answers to original, fusion, list questions (questions that have not previously been asked and answered, whose answer cannot be found on a single web site, and whose answer is a list of items). Results indicate that QA algorithms are not very good at producing complete answer lists and that searchers are not very good at constructing answer lists from snippets. These findings indicate a need for QA research to focus on crowdsourcing answer lists and on improving output formats.
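
A rough illustration of the fusion step described above is sketched below in Python. The study does not publish code, so the extract_candidates heuristic, the snippet strings, and the agreement threshold are purely illustrative assumptions about how list items might be merged across snippets from different pages.

# Hypothetical sketch of assembling a fusion answer list from search snippets;
# not the study's implementation.
import re
from collections import Counter

def extract_candidates(snippet):
    # Very naive heuristic: treat capitalized phrases as candidate list items.
    return re.findall(r"\b[A-Z][a-zA-Z]+(?: [A-Z][a-zA-Z]+)*\b", snippet)

def fuse_answer_list(snippets, min_sources=2):
    # Keep candidates that appear in at least `min_sources` different snippets.
    counts = Counter()
    for snippet in snippets:
        for item in set(extract_candidates(snippet)):
            counts[item] += 1
    return [item for item, n in counts.most_common() if n >= min_sources]

snippets = [
    "Popular ERP vendors include SAP, Oracle and Microsoft.",
    "Oracle and SAP dominate the enterprise software market.",
]
print(fuse_answer_list(snippets))  # e.g. ['SAP', 'Oracle']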


The Dark Web ◽  
2018 ◽  
pp. 359-374
Author(s):  
Dilip Kumar Sharma ◽  
A. K. Sharma

ICT plays a vital role in human development through information extraction, and it encompasses both computer networks and telecommunication networks. Computer networks, one of the key components of ICT, form the backbone of the World Wide Web (WWW). Search engines are computer programs that browse and extract information from the WWW in a systematic and automatic manner. This paper examines the three main components of search engines: the Extractor, a web crawler that starts from a URL; the Analyzer, an indexer that processes the words on each web page and stores the resulting index in a database; and the Interface Generator, a query handler that captures the needs and preferences of the user. The paper concentrates on the information available on the surface web through general web pages and on the information hidden behind query interfaces, called the deep web. It emphasizes extracting relevant information so that the content the user prefers appears as the first result of his or her search query, and it discusses the deep web through an analysis of a few existing deep web search engines.
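
To make the three components concrete, here is a minimal, self-contained Python sketch of an Extractor (fetcher), an Analyzer (inverted-index builder), and an Interface Generator (query handler). It is illustrative only, assumes a placeholder seed URL, and is not the architecture analyzed in the paper.

# Minimal sketch of the three components named in the abstract; illustrative only.
import urllib.request
from collections import defaultdict
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    # Extractor: collects the visible text of a fetched page.
    def __init__(self):
        super().__init__()
        self.text = []
    def handle_data(self, data):
        self.text.append(data)

def extract(url):
    parser = TextExtractor()
    with urllib.request.urlopen(url) as resp:
        parser.feed(resp.read().decode("utf-8", errors="ignore"))
    return " ".join(parser.text)

def analyze(url, text, index):
    # Analyzer: tokenizes the page text and updates an inverted index.
    for word in text.lower().split():
        index[word].add(url)

def answer(query, index):
    # Interface Generator: returns pages containing every query term.
    postings = [index.get(w.lower(), set()) for w in query.split()]
    return set.intersection(*postings) if postings else set()

index = defaultdict(set)
for url in ["https://example.com"]:   # placeholder seed URL
    analyze(url, extract(url), index)
print(answer("example domain", index))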


Author(s):  
Daniel Crabtree

Web search engines help users find relevant web pages by returning a result set containing the pages that best match the user's query. When the identified pages have low relevance, the query must be refined to capture the search goal more effectively. However, finding appropriate refinement terms is difficult and time consuming for users, so researchers have developed query expansion approaches to identify refinement terms automatically. There are two broad approaches to query expansion: automatic query expansion (AQE) and interactive query expansion (IQE) (Ruthven et al., 2003). AQE involves no user input, which is simpler for the user but limits its performance. IQE involves the user, which is more complex for the user but means it can tackle more problems, such as ambiguous queries. Searches fail by finding too many irrelevant pages (low precision) or by finding too few relevant pages (low recall). AQE has a long history in the field of information retrieval, where the focus has been on improving recall (Velez et al., 1997). Unfortunately, AQE often decreased precision, because the terms used to expand a query frequently changed its meaning (Croft and Harper (1979) identified this effect and named it query drift). The problem is that users typically consider just the first few results (Jansen et al., 2005), which makes precision vital to web search performance. In contrast, IQE has historically balanced precision and recall, leading to an earlier uptake within web search. However, like AQE, the precision of IQE approaches needs improvement. Most recently, approaches have started to improve precision by incorporating semantic knowledge.
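
As a concrete example of the AQE family discussed above, the sketch below implements pseudo-relevance feedback, a common automatic expansion technique: the most frequent non-stopword terms from the top-ranked snippets are appended to the query. It is a generic illustration rather than an approach taken from the article, and the snippets and stopword list are assumptions. It also shows how expansion can cause query drift: the ambiguous query "jaguar" drifts toward the animal sense of the top results.

# Minimal sketch of automatic query expansion via pseudo-relevance feedback.
# Illustrative only; not a specific system from the article.
from collections import Counter

STOPWORDS = {"the", "a", "of", "and", "to", "in", "is", "for"}

def expand_query(query, top_snippets, num_terms=3):
    # Pick `num_terms` frequent non-stopword terms not already in the query.
    query_terms = set(query.lower().split())
    counts = Counter(
        word
        for snippet in top_snippets
        for word in snippet.lower().split()
        if word.isalpha() and word not in STOPWORDS and word not in query_terms
    )
    expansion = [w for w, _ in counts.most_common(num_terms)]
    return query + " " + " ".join(expansion)

top_snippets = [
    "jaguar is a large cat native to the americas",
    "the jaguar is the largest cat in south america",
]
print(expand_query("jaguar", top_snippets))
# e.g. "jaguar cat large native" (exact terms depend on the snippets)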


Author(s):  
Wen-Chen Hu ◽  
Hung-Jen Yang ◽  
Jyh-haw Yeh ◽  
Chung-wei Lee

The World Wide Web now holds more than six billion pages covering almost all daily issues. The Web's fast-growing size and lack of structural style present a new challenge for information retrieval (Lawrence & Giles, 1999a). Traditional search techniques are based on users typing in search keywords, which the search services then use to locate the desired Web pages. However, this approach normally retrieves too many documents, of which only a small fraction are relevant to the users' needs. Furthermore, the most relevant documents do not necessarily appear at the top of the query output list. Numerous search technologies have been applied to Web search engines; however, the dominant search methods have yet to be identified. This article provides an overview of the existing technologies for Web search engines and classifies them into six categories: i) hyperlink exploration, ii) information retrieval, iii) metasearches, iv) SQL approaches, v) content-based multimedia searches, and vi) others. At the end of this article, a comparative study of major commercial and experimental search engines is presented, and some future research directions for Web search engines are suggested. A related review of Web search technology can also be found in Arasu, Cho, Garcia-Molina, Paepcke, and Raghavan (2001) and in Lawrence and Giles (1999b).
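
As an example of the first category, hyperlink exploration, the following sketch computes PageRank-style scores by power iteration over a toy link graph. It is a generic illustration of link analysis, not a specific engine or algorithm taken from the article.

# Minimal PageRank-style power iteration over a toy link graph; illustrative only.
def pagerank(links, damping=0.85, iterations=50):
    # `links` maps each page to the list of pages it links to.
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                  # dangling page: spread rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
print(pagerank(links))   # C accumulates the most rank in this toy graph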


Author(s):  
Kamal Al-Sabahi ◽  
Zhang Zuping

In the era of information overload, text summarization has become a focus of attention in a number of diverse fields, such as question answering systems, intelligence analysis, news recommendation systems, and search results in web search engines. A good document representation is the key to any successful summarizer, and learning this representation has become a very active research area in natural language processing (NLP). Traditional approaches mostly fail to deliver a good representation, whereas word embeddings have shown excellent performance in learning it. In this paper, a modified BM25 weighting combined with word embeddings is used to build sentence vectors from word vectors, and the entire document is represented as a set of sentence vectors. The similarity between every pair of sentence vectors is then computed, and TextRank, a graph-based model, is used to rank the sentences. The summary is generated by picking the top-ranked sentences according to the compression rate. Two well-known datasets, DUC2002 and DUC2004, are used to evaluate the models. The experimental results show that the proposed models perform consistently better than state-of-the-art methods.
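
A rough sketch of the pipeline described above follows: word vectors are weighted and averaged into sentence vectors, pairwise cosine similarities feed a TextRank-style power iteration, and the top-ranked sentences form the summary. The embeddings dictionary (word to NumPy vector) and the simple IDF-style weight are stand-in assumptions for the pre-trained embeddings and the modified BM25 weighting used in the paper.

# Simplified summarization pipeline: weighted sentence vectors + TextRank.
# Illustrative only; not the paper's exact modified-BM25 formulation.
import math
import numpy as np

def sentence_vector(sentence, embeddings, doc_freq, num_sents):
    # Average word vectors weighted by a simple IDF-style factor
    # (a placeholder for the paper's modified BM25 weight).
    dim = len(next(iter(embeddings.values())))
    vec, total = np.zeros(dim), 0.0
    for word in sentence.lower().split():
        if word in embeddings:
            weight = math.log(1 + num_sents / (1 + doc_freq.get(word, 0)))
            vec += weight * np.asarray(embeddings[word])
            total += weight
    return vec / total if total else vec

def textrank(sim, damping=0.85, iterations=50):
    # Power iteration over the row-normalized similarity matrix.
    n = len(sim)
    scores = np.ones(n) / n
    row_sums = sim.sum(axis=1) + 1e-9
    for _ in range(iterations):
        scores = (1 - damping) / n + damping * (sim / row_sums[:, None]).T @ scores
    return scores

def summarize(sentences, embeddings, compression=0.3):
    # Sentence-level document frequencies for the IDF-style weights.
    doc_freq = {}
    for s in sentences:
        for w in set(s.lower().split()):
            doc_freq[w] = doc_freq.get(w, 0) + 1
    vecs = [sentence_vector(s, embeddings, doc_freq, len(sentences)) for s in sentences]
    # Cosine similarity between every pair of sentence vectors.
    sim = np.zeros((len(sentences), len(sentences)))
    for i, vi in enumerate(vecs):
        for j, vj in enumerate(vecs):
            if i != j:
                sim[i, j] = float(vi @ vj) / (np.linalg.norm(vi) * np.linalg.norm(vj) + 1e-9)
    # Rank sentences and keep the top fraction given by the compression rate.
    scores = textrank(sim)
    k = max(1, int(len(sentences) * compression))
    keep = sorted(np.argsort(scores)[::-1][:k])
    return [sentences[i] for i in keep]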


2015 ◽  
Vol 12 (1) ◽  
pp. 91-114 ◽  
Author(s):  
Víctor Prieto ◽  
Manuel Álvarez ◽  
Víctor Carneiro ◽  
Fidel Cacheda

Search engines use crawlers to traverse the Web in order to download web pages and build their indexes. Keeping these indexes up to date is essential to ensuring the quality of search results. However, changes to web pages are unpredictable, and identifying the moment a web page changes, as soon as possible and with minimal computational cost, is a major challenge. In this article we present the Web Change Detection system, which in the best case is capable of detecting, almost in real time, when a web page changes. In the worst case it requires, on average, 12 minutes to detect a change on a web site with low PageRank and about one minute on a web site with high PageRank. By contrast, current search engines require more than a day, on average, to detect a modification to a web page (in both cases).
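
For context on the problem being solved, a naive baseline for change detection is to poll a page and compare a hash of its content against the previous fetch, as in the Python sketch below. This polling approach and the placeholder URL are assumptions for illustration; it is not the near-real-time system described in the article.

# Naive change-detection baseline: poll a page and compare content hashes.
# Illustrative only.
import hashlib
import time
import urllib.request

def content_hash(url):
    with urllib.request.urlopen(url) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

def watch(url, interval_seconds=60):
    last = content_hash(url)
    while True:
        time.sleep(interval_seconds)
        current = content_hash(url)
        if current != last:
            print(f"{url} changed at {time.strftime('%Y-%m-%d %H:%M:%S')}")
            last = current

# watch("https://example.com")   # poll a placeholder URL once a minute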


First Monday ◽  
2008 ◽  
Author(s):  
Mark Meiss ◽  
Filippo Menczer

Understanding the qualitative differences between the sets of results from different search engines can be a difficult task. How many links must you follow from each list before you can reach a conclusion? We describe a user interface that allows users to quickly identify the most significant differences in content between two lists of Web pages. We have implemented this interface in CenSEARCHip, a system for comparing the effects of censorship policies on search engines.
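
One simple way to surface the most significant content differences between two result lists is to compare term frequencies across the snippets each engine returns, as in the hedged sketch below. This is a generic illustration, not the method implemented in CenSEARCHip.

# Sketch of highlighting content differences between two result lists.
# Illustrative only; not CenSEARCHip's implementation.
from collections import Counter

def distinctive_terms(results_a, results_b, top_n=10):
    # Terms that are much more frequent in one result list than the other.
    def counts(results):
        return Counter(w for snippet in results for w in snippet.lower().split())
    a, b = counts(results_a), counts(results_b)
    diff = {w: a[w] - b[w] for w in set(a) | set(b)}
    mostly_a = sorted(diff, key=diff.get, reverse=True)[:top_n]
    mostly_b = sorted(diff, key=diff.get)[:top_n]
    return mostly_a, mostly_b

Running this over the top snippets returned by two engines for the same query surfaces the terms each engine emphasizes, which is the kind of qualitative difference the interface is meant to expose.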


AI Magazine ◽  
2015 ◽  
Vol 36 (4) ◽  
pp. 61-70 ◽  
Author(s):  
Daniel M. Russell

For the vast majority of queries (for example, navigation, simple fact lookup, and others), search engines do extremely well. Their ability to quickly provide answers to queries is a remarkable testament to the power of many of the fundamental methods of AI. They also highlight many of the issues that are common to sophisticated AI question-answering systems. It has become clear that people think of search programs in ways that are very different from traditional information sources. Rapid and ready-at-hand access, depth of processing, and the way they enable people to offload some ordinary memory tasks suggest that search engines have become more of a cognitive amplifier than a simple repository or front-end to the Internet. Like all sophisticated tools, people still need to learn how to use them. Although search engines are superb at finding and presenting information—up to and including extracting complex relations and making simple inferences—knowing how to frame questions and evaluate their results for accuracy and credibility remains an ongoing challenge. Some questions are still deep and complex, and still require knowledge on the part of the search user to work through to a successful answer. And the fact that the underlying information content, user interfaces, and capabilities are all in a continual state of change means that searchers need to continually update their knowledge of what these programs can (and cannot) do.

