A systematic review of question answering systems for non-factoid questions

Author(s): Eduardo Gabriel Cortes, Vinicius Woloszyn, Dante Barone, Sebastian Möller, Renata Vieira

2014, Vol 46 (1), pp. 61-82
Author(s): Antonio Ferrández, Alejandro Maté, Jesús Peral, Juan Trujillo, Elisa De Gregorio, ...

2007, Vol 33 (1), pp. 105-133
Author(s): Catalina Hallett, Donia Scott, Richard Power

This article describes a method for composing fluent and complex natural language questions, while avoiding the standard pitfalls of free text queries. The method, based on Conceptual Authoring, is targeted at question-answering systems where reliability and transparency are critical, and where users cannot be expected to undergo extensive training in question composition. This scenario is found in most corporate domains, especially in applications that are risk-averse. We present a proof-of-concept system we have developed: a question-answering interface to a large repository of medical histories in the area of cancer. We show that the method allows users to successfully and reliably compose complex queries with minimal training.
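To give a flavour of the approach described above (a minimal sketch only; the medical schema, the concept vocabulary, and the compose_query helper are hypothetical and are not the authors' Conceptual Authoring system), a conceptual-authoring-style interface lets users assemble a question from predefined concepts rather than typing free text, so every composed query stays within what the system can interpret:

```python
# Illustrative sketch of menu-driven query composition over a fixed schema
# (hypothetical concepts, slots, and phrasing; not the authors' system).
SCHEMA = {
    "patients": {
        "diagnosis": ["breast cancer", "lung cancer"],
        "treatment": ["chemotherapy", "radiotherapy"],
    },
}

def compose_query(entity, constraints):
    """Build a question from schema-validated selections, rejecting any
    value outside the predefined concept vocabulary (no free-text input)."""
    allowed = SCHEMA[entity]
    for slot, value in constraints.items():
        if value not in allowed.get(slot, []):
            raise ValueError(f"'{value}' is not a known {slot}")
    where = " and ".join(f"{slot} is '{value}'" for slot, value in constraints.items())
    return f"How many {entity} where {where}?"

print(compose_query("patients",
                    {"diagnosis": "breast cancer", "treatment": "chemotherapy"}))
# -> How many patients where diagnosis is 'breast cancer' and treatment is 'chemotherapy'?
```

The point of such an interface is that it never accepts arbitrary strings: every slot value is drawn from the underlying schema, which is what makes the resulting queries reliable and transparent without requiring users to learn a query language.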


2001, Vol 7 (4), pp. 361-378
Author(s): Ellen M. Voorhees

The Text REtrieval Conference (TREC) question answering track is an effort to bring the benefits of large-scale evaluation to bear on a question answering (QA) task. The track has run twice so far, first in TREC-8 and again in TREC-9. In each case, the goal was to retrieve small snippets of text that contain the actual answer to a question, rather than the document lists traditionally returned by text retrieval systems. The best performing systems were able to answer about 70% of the questions in TREC-8 and about 65% of the questions in TREC-9. While the 65% score is slightly worse than the TREC-8 result in absolute terms, it represents a very significant improvement in question answering systems, because the TREC-9 task was considerably harder: TREC-9 used actual users’ questions, whereas TREC-8 used questions constructed for the track. Future tracks will continue to challenge the QA community with more difficult, and more realistic, question answering tasks.
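As a rough illustration of this style of evaluation (a minimal sketch under stated assumptions, not the official TREC scoring procedure; the question IDs, regex answer keys, and snippets below are invented), one can compute the fraction of questions for which a system returned an answer-bearing snippet:

```python
import re

# Hypothetical judgments: regex patterns an answer-bearing snippet must match
# (stand-ins for answer keys; question IDs and patterns are invented).
answer_patterns = {
    "Q1": [r"\bMount Everest\b"],
    "Q2": [r"\b1969\b"],
}

# Hypothetical system output: snippets returned for each question.
system_snippets = {
    "Q1": ["The highest peak on Earth is Mount Everest."],
    "Q2": ["The first Moon landing took place in 1968."],  # wrong year
}

def fraction_answered(snippets, patterns):
    """Fraction of questions for which at least one returned snippet
    contains the answer."""
    answered = sum(
        1
        for qid, pats in patterns.items()
        if any(re.search(p, s) for p in pats for s in snippets.get(qid, []))
    )
    return answered / len(patterns)

print(f"{fraction_answered(system_snippets, answer_patterns):.0%} of questions answered")
# -> 50% of questions answered
```

A score of 70% or 65% in the abstract corresponds to this kind of per-question success rate aggregated over the full track question set.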

