Analyzing the Text of Clinical Literature for Question Answering

Author(s):  
Yun Niu ◽  
Graeme Hirst

The task of question answering (QA) is to find an accurate and precise answer to a natural-language question in some predefined text. Most existing QA systems handle fact-based questions whose answers are usually named entities. In this chapter, the authors take clinical QA as an example of a task with more complex information needs. They propose an approach that uses semantic-class analysis as the organizing principle for answering clinical questions, investigating three semantic classes that correspond to roles in the widely accepted PICO format for describing clinical scenarios: the description of the patient (or the problem), the intervention used to treat the problem, and the clinical outcome. The authors focus on the automatic analysis of two important properties of these semantic classes.
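The abstract does not specify how the semantic-class analysis is implemented; as a purely illustrative sketch, PICO-style classes could be assigned to sentences with simple lexical cues. The cue lists and function below are invented assumptions, not the chapter's method.

```python
# Hypothetical sketch of PICO-style semantic-class tagging with lexical
# cues; the cue lists are illustrative assumptions, not the chapter's
# actual features or method.
CUES = {
    "patient/problem": ("patients with", "diagnosed with", "subjects"),
    "intervention": ("treated with", "received", "therapy", "administered"),
    "outcome": ("improved", "mortality", "survival", "reduction in"),
}

def classify_sentence(sentence: str) -> str:
    s = sentence.lower()
    # Score each semantic class by the number of cue phrases it matches.
    scores = {cls: sum(cue in s for cue in cues) for cls, cues in CUES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "other"

print(classify_sentence("Survival improved and mortality was reduced."))  # outcome
```

A real system would of course learn such indicators from annotated clinical text rather than hard-code them.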

Author(s):  
Xinmeng Li ◽  
Mamoun Alazab ◽  
Qian Li ◽  
Keping Yu ◽  
Quanjun Yin

Abstract Knowledge graph question answering is an important technology in intelligent human–robot interaction that aims to automatically answer a human's natural-language question over a given knowledge graph. For multi-relation questions, which have greater variety and complexity, the tokens of the question have different priorities for triple selection in the reasoning steps. Most existing models take the question as a whole and ignore this priority information. To solve this problem, we propose a question-aware memory network for multi-hop question answering, named QA2MN, which updates the attention over the question dynamically during the reasoning process. In addition, we incorporate graph context information into the knowledge-graph embedding model to increase its ability to represent entities and relations; we use it to initialize QA2MN and fine-tune it during training. We evaluate QA2MN on PathQuestion and WorldCup2014, two representative datasets for complex multi-hop question answering. The results demonstrate that QA2MN achieves state-of-the-art Hits@1 accuracy on both datasets, validating the effectiveness of our model.
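The core idea of recomputing attention over question tokens at each reasoning hop can be sketched at toy scale; all dimensions, random features, and helper names below are assumptions for illustration, not the QA2MN implementation.

```python
import math
import random

# Hedged, toy-scale sketch of one "question-aware" reasoning hop in the
# spirit of a memory network: the attention over question tokens is
# recomputed at every hop, so different tokens can drive different steps.
random.seed(0)
D, N_TOK, N_TRI = 4, 3, 5

def vec():
    return [random.gauss(0.0, 1.0) for _ in range(D)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def weighted_sum(weights, rows):
    return [sum(w * r[i] for w, r in zip(weights, rows)) for i in range(D)]

q_tokens = [vec() for _ in range(N_TOK)]  # question token embeddings
memory = [vec() for _ in range(N_TRI)]    # embedded KG triples

def hop(state):
    # Re-weight question tokens given the current reasoning state ...
    tok_att = softmax([dot(t, state) for t in q_tokens])
    q_vec = weighted_sum(tok_att, q_tokens)
    # ... then use the updated question vector to attend over the triples.
    mem_att = softmax([dot(m, q_vec) for m in memory])
    read = weighted_sum(mem_att, memory)
    return [s + r for s, r in zip(state, read)]

state = [sum(t[i] for t in q_tokens) / N_TOK for i in range(D)]
for _ in range(2):  # two hops for a two-relation (multi-hop) question
    state = hop(state)
print(len(state))  # 4
```

In the paper's setting the token and triple vectors would come from the graph-context-aware embedding model rather than random initialization.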


Author(s):  
Lianli Gao ◽  
Pengpeng Zeng ◽  
Jingkuan Song ◽  
Yuan-Fang Li ◽  
Wu Liu ◽  
...  

To date, visual question answering (VQA) (i.e., image QA and video QA) is still a holy grail in vision and language understanding, especially video QA. Compared with image QA, which focuses primarily on understanding the associations between image region-level details and corresponding questions, video QA requires a model to jointly reason across both the spatial and long-range temporal structures of a video as well as the text to provide an accurate answer. In this paper, we specifically tackle the problem of video QA by proposing a Structured Two-stream Attention network, namely STA, to answer a free-form or open-ended natural-language question about the content of a given video. First, we infer rich long-range temporal structures in videos using our structured segment component and encode text features. Then, our structured two-stream attention component simultaneously localizes important visual instances, reduces the influence of background video, and focuses on the relevant text. Finally, the structured two-stream fusion component incorporates the different segments of the query- and video-aware context representations and infers the answers. Experiments on the large-scale video QA dataset TGIF-QA show that our proposed method significantly surpasses the best counterpart (i.e., with one representation for the video input) by 13.0%, 13.5%, 11.0% and 0.3 on the Action, Trans., FrameQA and Count tasks. It also outperforms the best competitor (i.e., with two representations) on the Action, Trans., and FrameQA tasks by 4.1%, 4.7%, and 5.1%.
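The two-stream step described above can be sketched at toy scale: one attention stream weights video segments by their best-matching question word, the other weights question words by their best-matching segment, and the two context vectors are fused. Shapes, random features, and the max-pooled affinity are assumptions for illustration, not the STA implementation.

```python
import math
import random

# Hedged sketch of a two-stream attention step over toy features;
# not the STA code.
random.seed(1)
D, N_SEG, N_WORD = 4, 3, 5

def vec():
    return [random.gauss(0.0, 1.0) for _ in range(D)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def weighted_sum(weights, rows):
    return [sum(w * r[i] for w, r in zip(weights, rows)) for i in range(D)]

segments = [vec() for _ in range(N_SEG)]  # temporal segment features
words = [vec() for _ in range(N_WORD)]    # question word features

# Segment-word affinity matrix shared by both streams.
affinity = [[dot(s, w) for w in words] for s in segments]
vis_att = softmax([max(row) for row in affinity])  # which segments matter
txt_att = softmax([max(row[j] for row in affinity) for j in range(N_WORD)])
vis_ctx = weighted_sum(vis_att, segments)
txt_ctx = weighted_sum(txt_att, words)
fused = vis_ctx + txt_ctx  # simple fusion by concatenation
print(len(fused))  # 8
```

The fused vector would then feed an answer decoder appropriate to the task (classification for Action/Trans./FrameQA, regression for Count).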


2019 ◽  
Vol 4 (4) ◽  
pp. 323-335 ◽  
Author(s):  
Peihao Tong ◽  
Qifan Zhang ◽  
Junjie Yao

Abstract With the growing availability of knowledge graphs in a variety of domains, question answering over knowledge graphs (KG-QA) has become a prevalent information-retrieval approach. Current KG-QA methods usually resort to semantic parsing, search, or neural matching models. However, they cannot adequately handle increasingly long input questions and complex information needs. In this work, we propose a new KG-QA approach that leverages the rich domain context in the knowledge graph, incorporating domain-context descriptions of both questions and answers. Specifically, for questions, we enrich them with the user's subsequent input questions within a session, expanding the input question representation. For candidate answers, we equip them with surrounding context structures, i.e., meta-paths within the target knowledge graph. On top of these representations, we design a cross-attention mechanism to improve question-answer matching performance. An experimental study on real datasets verifies these improvements. The new approach is especially beneficial for specific knowledge graphs with complex questions.
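The cross-attention matching step can be illustrated at toy scale: the question attends over each candidate's context (e.g., meta-path) embeddings and vice versa, and the two attended vectors are scored against each other. All names, shapes, and random features below are assumptions, not the paper's model.

```python
import math
import random

# Illustrative sketch of cross-attention matching between a question and
# candidate-answer context embeddings; not the paper's implementation.
random.seed(2)
D = 4

def vec():
    return [random.gauss(0.0, 1.0) for _ in range(D)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def weighted_sum(weights, rows):
    return [sum(w * r[i] for w, r in zip(weights, rows)) for i in range(D)]

question = [vec() for _ in range(5)]  # question token embeddings

def match_score(context):
    # Cross-attention: the question attends over the candidate's context
    # embeddings, and the context attends over the question tokens.
    aff = [[dot(q, c) for c in context] for q in question]
    q_vec = weighted_sum(softmax([max(row) for row in aff]), question)
    c_vec = weighted_sum(
        softmax([max(row[j] for row in aff) for j in range(len(context))]),
        context)
    return dot(q_vec, c_vec)

# Four candidate answers, each with three context (meta-path) embeddings.
candidates = [[vec() for _ in range(3)] for _ in range(4)]
best = max(range(4), key=lambda i: match_score(candidates[i]))
print(best)
```

The candidate with the highest matching score is returned as the answer; in the paper, the embeddings would additionally encode session-expanded questions and meta-path context.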


2007 ◽  
Vol 13 (2) ◽  
pp. 185-189
Author(s):  
ROBERT DALE

“Powerset Hype to Boiling Point”, said a February headline on TechCrunch. In the last installment of this column, I asked whether 2007 would be the year of question-answering. My query was occasioned by a number of new attempts at natural language question-answering that were being promoted in the marketplace as the next advance upon search, and particularly by the buzz around the stealth-mode natural language search company Powerset. That buzz continued with a major news item in the first quarter of this year: in February, Xerox PARC and Powerset struck a much-anticipated deal whereby Powerset won exclusive rights to use PARC's natural language technology, as announced in a VentureBeat posting. Following the scoop, other news sources drew the battle lines with titles like “Can natural language search bring down Google?”, “Xerox vs. Google?”, and “Powerset and Xerox PARC team up to beat Google”. An April posting on Barron's Online noted that an analyst at Global Equities Research had cited Powerset in his downgrading of Google from Buy to Neutral. And all this on the basis of a product which, at the time of writing, very few people have actually seen. Indications are that the search engine is expected to go live by the end of the year, so we have a few more months to wait to see whether this really is a Google-killer. Meanwhile, another question remaining unanswered is what happened to the Powerset engineer who seemed less sure about the technology's capabilities: see the segment at the end of D7TV's PartyCrasher video from the Powerset launch party. For a more confident appraisal of natural language search, check out the podcast of Barney Pell, CEO of Powerset, giving a lecture at the University of California–Berkeley.


2010 ◽  
Vol 23 (2-3) ◽  
pp. 241-265 ◽  
Author(s):  
Ulrich Furbach ◽  
Ingo Glöckner ◽  
Björn Pelzer
