Health assistant: answering your questions anytime from biomedical literature

2019 ◽  
Vol 35 (20) ◽  
pp. 4129-4139 ◽  
Author(s):  
Zan-Xia Jin ◽  
Bo-Wen Zhang ◽  
Fan Fang ◽  
Le-Le Zhang ◽  
Xu-Cheng Yin

Abstract Motivation: With the abundance of medical resources, especially literature, available online, it is now possible for people to understand their own health status and related problems autonomously. However, how to obtain the most appropriate answer from an increasingly large-scale database remains a great challenge. Here, we present a biomedical question answering framework and implement a system, Health Assistant, to enable the search process. Methods: In Health Assistant, a search engine is first designed to rank biomedical documents by content. Various query processing and search techniques are then applied to find the relevant documents. Afterwards, the titles and abstracts of the top-N documents are extracted to generate candidate snippets. Finally, our own query processing and retrieval approaches, designed for short text, are applied to locate the snippets that answer the question. Results: Our system is evaluated on the BioASQ benchmark datasets, and experimental results demonstrate its effectiveness and robustness compared to BioASQ participant systems and several state-of-the-art methods on both the document retrieval and snippet retrieval tasks. Availability and implementation: A demo of our system is available at https://github.com/jinzanxia/biomedical-QA.
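The two-stage retrieval the abstract describes (rank whole documents first, then rank sentence snippets drawn from the top-N titles and abstracts) can be sketched with a simple TF-IDF scorer. This is a minimal illustration of the pipeline shape only; the scoring functions, tokenization and data below are assumptions, not the authors' actual ranking components.

```python
import math
from collections import Counter

def tfidf_scores(query_terms, docs):
    """Score each tokenized document against the query with a TF-IDF sum."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))  # document frequency per term
    scores = []
    for d in docs:
        tf = Counter(d)
        scores.append(sum(tf[t] * math.log((n + 1) / (df[t] + 1))
                          for t in query_terms if t in tf))
    return scores

def retrieve_snippet(query, documents, top_n=2):
    """Stage 1: rank whole documents. Stage 2: rank sentence snippets of the top-N."""
    q = query.lower().split()
    doc_toks = [d.lower().split() for d in documents]
    dscores = tfidf_scores(q, doc_toks)
    top = sorted(range(len(documents)), key=lambda i: -dscores[i])[:top_n]
    # Candidate snippets: sentences from the retained documents
    snippets = [s.strip() for i in top for s in documents[i].split(".") if s.strip()]
    sscores = tfidf_scores(q, [s.lower().split() for s in snippets])
    return snippets[max(range(len(snippets)), key=lambda i: sscores[i])]
```

Restricting the expensive snippet-level pass to the top-N documents is what keeps such a pipeline tractable over a large literature database.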

Author(s):  
Siva Reddy ◽  
Mirella Lapata ◽  
Mark Steedman

In this paper we introduce a novel semantic parsing approach to query Freebase in natural language without requiring manual annotations or question-answer pairs. Our key insight is to represent natural language via semantic graphs whose topology shares many commonalities with Freebase. Given this representation, we conceptualize semantic parsing as a graph matching problem. Our model converts sentences to semantic graphs using CCG and subsequently grounds them to Freebase guided by denotations as a form of weak supervision. Evaluation experiments on a subset of the Free917 and WebQuestions benchmark datasets show our semantic parser improves over the state of the art.


2021 ◽  
Author(s):  
Shreya Mishra ◽  
Raghav Awasthi ◽  
Frank Papay ◽  
Kamal Maheshawari ◽  
Jacek B Cywinski ◽  
...  

Question answering (QA) is one of the oldest research areas in AI and Computational Linguistics. QA has seen significant progress with the development of state-of-the-art models and benchmark datasets over the last few years. However, pre-trained QA models perform poorly on clinical QA tasks, presumably due to the complexity of electronic healthcare data. With the digitization of healthcare data and the increasing volume of unstructured data, it is extremely important for healthcare providers to have a mechanism to query the data and find appropriate answers. Since diagnosis is central to any decision-making for clinicians and patients, we have created a pipeline to develop diagnosis-specific QA datasets and curated a QA database for Cerebrovascular Accident (CVA). CVA, commonly known as stroke, is an important and frequently occurring diagnosis among critically ill patients. Compared to clinician validation, our method achieved an accuracy of 0.90 (90% CI [0.82, 0.99]). With this method, we hope to overcome the key challenges of building and validating a highly accurate QA dataset in a semiautomated manner, which can help improve the performance of QA models.
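An accuracy estimate with a confidence interval, like the 0.90 (90% CI) reported above, can be obtained with a percentile bootstrap over per-item correctness flags. The paper does not state which CI method was used, so this is a generic sketch on synthetic flags, not a reproduction of the study's analysis.

```python
import random

def bootstrap_accuracy_ci(correct_flags, n_boot=2000, alpha=0.10, seed=0):
    """Percentile-bootstrap a (1 - alpha) confidence interval for accuracy.
    correct_flags: list of 0/1, whether each answer matched the clinician label."""
    rng = random.Random(seed)
    n = len(correct_flags)
    accs = []
    for _ in range(n_boot):
        # Resample items with replacement and recompute accuracy
        sample = [correct_flags[rng.randrange(n)] for _ in range(n)]
        accs.append(sum(sample) / n)
    accs.sort()
    lo = accs[int(alpha / 2 * n_boot)]
    hi = accs[int((1 - alpha / 2) * n_boot) - 1]
    return sum(correct_flags) / n, (lo, hi)
```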


2023 ◽  
Vol 55 (1) ◽  
pp. 1-39
Author(s):  
Thanh Tuan Nguyen ◽  
Thanh Phuong Nguyen

Representing dynamic textures (DTs) plays an important role in many real-world applications in the computer vision community. Due to the turbulent and non-directional motions of DTs, along with the negative impacts of various factors (e.g., environmental changes, noise, and illumination), efficiently analyzing DTs poses considerable challenges for state-of-the-art approaches. Over the past 20 years, many techniques have been introduced to handle these well-known issues and enhance performance. These methods have made valuable contributions, but the problems remain incompletely solved, particularly for recognizing DTs on large-scale datasets. In this article, we present a comprehensive taxonomy of DT representation that gives a thorough overview of existing methods along with an overall evaluation of their performance. We arrange the methods into six canonical categories; for each, we briefly present its principal methodology and related variants. We then investigate and thoroughly discuss the effectiveness of state-of-the-art methods through quantitative and qualitative evaluations of DT classification on benchmark datasets. Finally, we point out several potential applications and the remaining challenges that should be addressed in future work. Compared with the two existing shallow DT surveys (the first, from 2005, is out of date, while the newer one, published in 2016, offers only an inadequate overview), we believe our comprehensive taxonomy not only provides the target readers a better view of DT representation but also stimulates future research activities.


Author(s):  
Chao Li ◽  
Cheng Deng ◽  
Lei Wang ◽  
De Xie ◽  
Xianglong Liu

In recent years, hashing has attracted growing attention owing to its low storage cost and high query efficiency in large-scale cross-modal retrieval. Benefiting from deep learning, increasingly compelling results have been achieved in the cross-modal retrieval community. However, existing deep cross-modal hashing methods either rely on large amounts of labeled information or are unable to learn an accurate correlation between different modalities. In this paper, we propose Unsupervised coupled Cycle generative adversarial Hashing networks (UCH) for cross-modal retrieval, where an outer-cycle network is used to learn a powerful common representation and an inner-cycle network is employed to generate reliable hash codes. Specifically, UCH seamlessly couples these two networks with a generative adversarial mechanism, so that representation and hash codes can be optimized simultaneously. Extensive experiments on three popular benchmark datasets show that the proposed UCH outperforms state-of-the-art unsupervised cross-modal hashing methods.
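The retrieval step shared by hashing methods of this kind, binarizing learned embeddings into compact codes and ranking database items by Hamming distance, can be sketched as follows. The codes here are toy values, not UCH's learned representations.

```python
import numpy as np

def to_hash_codes(embeddings):
    """Binarize real-valued embeddings into +/-1 hash codes (sign quantization),
    the usual final step of a deep hashing pipeline."""
    return np.where(embeddings >= 0, 1, -1)

def hamming_rank(query_code, db_codes):
    """Rank database items by Hamming distance to the query code.
    For +/-1 codes, hamming = (bits - dot product) / 2."""
    bits = query_code.shape[0]
    dists = (bits - db_codes @ query_code) / 2
    return np.argsort(dists)
```

Because Hamming distance reduces to a dot product over +/-1 codes, ranking millions of items is far cheaper than comparing dense float embeddings, which is the storage/efficiency advantage the abstract refers to.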


2020 ◽  
Vol 34 (07) ◽  
pp. 13041-13049 ◽  
Author(s):  
Luowei Zhou ◽  
Hamid Palangi ◽  
Lei Zhang ◽  
Houdong Hu ◽  
Jason Corso ◽  
...  

This paper presents a unified Vision-Language Pre-training (VLP) model. The model is unified in that (1) it can be fine-tuned for either vision-language generation (e.g., image captioning) or understanding (e.g., visual question answering) tasks, and (2) it uses a shared multi-layer transformer network for both encoding and decoding, which differs from many existing methods where the encoder and decoder are implemented using separate models. The unified VLP model is pre-trained on a large amount of image-text pairs using the unsupervised learning objectives of two tasks: bidirectional and sequence-to-sequence (seq2seq) masked vision-language prediction. The two tasks differ solely in what context the prediction conditions on. This is controlled by utilizing specific self-attention masks for the shared transformer network. To the best of our knowledge, VLP is the first reported model that achieves state-of-the-art results on both vision-language generation and understanding tasks, as disparate as image captioning and visual question answering, across three challenging benchmark datasets: COCO Captions, Flickr30k Captions, and VQA 2.0. The code and the pre-trained models are available at https://github.com/LuoweiZhou/VLP.
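The masking idea described above, one shared transformer whose self-attention mask switches between a bidirectional and a seq2seq objective, can be illustrated with explicit mask matrices over a concatenated [source; target] sequence. This is a sketch of the mechanism, not the released VLP code.

```python
import numpy as np

def build_attention_mask(n_src, n_tgt, mode):
    """Attention mask over [src; tgt] tokens; mask[i, j] = 1 means i may attend to j.
    'bidirectional': every token attends to every token.
    'seq2seq': src tokens attend within src; tgt tokens attend to all of src
    plus tgt positions up to and including themselves (causal)."""
    n = n_src + n_tgt
    if mode == "bidirectional":
        return np.ones((n, n), dtype=int)
    mask = np.zeros((n, n), dtype=int)
    mask[:n_src, :n_src] = 1                     # src <-> src, full attention
    mask[n_src:, :n_src] = 1                     # tgt -> src
    for i in range(n_tgt):                       # tgt -> tgt, causal
        mask[n_src + i, n_src:n_src + i + 1] = 1
    return mask
```

Swapping only this mask, while sharing all transformer weights, is what lets a single network serve both understanding-style (bidirectional) and generation-style (seq2seq) pre-training.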


2021 ◽  
Author(s):  
Soham Dasgupta ◽  
Aishwarya Jayagopal ◽  
Abel Lim Jun Hong ◽  
Ragunathan Mariappan ◽  
Vaibhav Rajan

BACKGROUND Adverse Drug Events (ADEs) are unintended side-effects of drugs that cause substantial clinical and economic burden globally. Not all ADEs are discovered during clinical trials, so post-marketing surveillance, called pharmacovigilance, is routinely conducted to find unknown ADEs. A wealth of information that facilitates ADE discovery lies in the enormous and continuously growing body of biomedical literature. Knowledge graphs (KGs) encode information from the literature, where vertices and edges represent clinical concepts and their relations, respectively. The scale and unstructured form of the literature necessitate natural language processing (NLP) to automatically create such KGs. Previous studies have demonstrated the utility of literature-derived KGs for ADE prediction: by learning unsupervised representations (features) of clinical concepts from the KG and using them in machine learning models, state-of-the-art results for ADE prediction were obtained on benchmark datasets. OBJECTIVE Literature-derived KGs contain 'noise' in the form of false positive (erroneous) and false negative (absent) nodes and edges, due to limitations of the NLP techniques used to infer them. Previous representation learning methods do not account for such inaccuracies in the graph. NLP algorithms can quantify the confidence in the concepts and relations they extract from the literature. The hypothesis motivating this work is that utilizing these confidence scores during representation learning would yield embeddings that are better features for ADE prediction models. METHODS We develop methods to utilize these confidence scores in two well-known representation learning methods, DeepWalk and TransE, to obtain their 'weighted' versions: Weighted DeepWalk and Weighted TransE.
These methods are used to learn representations from a large literature-derived KG, SemMedDB, containing more than 93 million clinical relations. They are compared with Embedding of Semantic Predications (ESP), which, to our knowledge, is the best reported representation learning method on SemMedDB, with state-of-the-art results for ADE prediction. Representations learnt by the different methods are used (separately) as features of drugs and diseases to build classification models for ADE prediction on benchmark datasets. The classification performance of all methods is compared rigorously over multiple cross-validation settings. RESULTS In our experiments, the 'weighted' versions we design learn representations that yield more accurate predictive models than both the corresponding unweighted versions of DeepWalk and TransE and ESP. Performance improvements reach 5.75% in F1 score and 8.4% in AUC, advancing the state-of-the-art in ADE prediction from literature-derived KGs. Implementations of our new methods and all experiments are available at https://bitbucket.org/cdal/kb_embeddings. CONCLUSIONS Our classification models can aid pharmacovigilance teams in detecting potentially new ADEs. Our experiments demonstrate the importance of modelling inaccuracies in inferred KGs during representation learning, which may also be useful in other predictive models that utilize literature-derived KGs.
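The core of a weighted DeepWalk variant is biasing each random-walk step by the confidence score attached to an edge, so low-confidence (likely erroneous) relations are walked less often and contribute less to the learned embeddings. The sketch below shows only that sampling step, on an illustrative graph; it is not the authors' implementation.

```python
import random

def weighted_random_walk(graph, start, length, rng):
    """One confidence-weighted walk: each next node is sampled with probability
    proportional to the NLP confidence score on the edge.
    graph: {node: [(neighbor, confidence), ...]}"""
    walk = [start]
    for _ in range(length - 1):
        nbrs = graph.get(walk[-1])
        if not nbrs:
            break  # dead end: stop the walk early
        nodes, weights = zip(*nbrs)
        walk.append(rng.choices(nodes, weights=weights, k=1)[0])
    return walk
```

In a full pipeline, many such walks per node would be fed to a skip-gram model (as in DeepWalk) to produce the node embeddings used as classifier features.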


Author(s):  
Ming Yan ◽  
Jiangnan Xia ◽  
Chen Wu ◽  
Bin Bi ◽  
Zhongzhou Zhao ◽  
...  

A fundamental trade-off between effectiveness and efficiency must be balanced when designing an online question answering system. Effectiveness comes from sophisticated functions such as extractive machine reading comprehension (MRC), while efficiency is obtained from improvements in preliminary retrieval components such as candidate document selection and paragraph ranking. Given the complexity of the real-world multi-document MRC scenario, it is difficult to jointly optimize both in an end-to-end system. To address this problem, we develop a novel deep cascade learning model, which progressively evolves from document-level and paragraph-level ranking of candidate texts to more precise answer extraction with machine reading comprehension. Specifically, irrelevant documents and paragraphs are first filtered out with simple functions for efficiency. We then jointly train three modules on the remaining texts for better answer tracking: document extraction, paragraph extraction and answer extraction. Experimental results show that the proposed method outperforms previous state-of-the-art methods on two large-scale multi-document benchmark datasets, i.e., TriviaQA and DuReader. In addition, our online system stably serves typical scenarios with millions of daily requests in less than 50 ms.
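The cascade structure, a cheap filter over documents followed by an expensive scorer run only on the survivors' paragraphs, can be sketched generically. The scoring functions are placeholders standing in for the paper's learned modules.

```python
def cascade_answer(query, documents, cheap_score, expensive_score, keep=2):
    """Two-stage cascade: a cheap function filters documents so the expensive
    scorer only runs on paragraphs of the top `keep` survivors."""
    kept = sorted(documents, key=lambda d: -cheap_score(query, d))[:keep]
    best, best_s = None, float("-inf")
    for doc in kept:
        for para in doc.split("\n"):          # paragraphs of surviving documents
            s = expensive_score(query, para)
            if s > best_s:
                best, best_s = para, s
    return best
```

The latency benefit comes from the fact that the expensive model's cost now scales with `keep`, not with the full candidate set, which is how such a system can stay under a strict per-request budget.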


2020 ◽  
Author(s):  
Xiongnan Jin ◽  
Yooyoung Lee ◽  
Jonathan Fiscus ◽  
Haiying Guan ◽  
Amy N. Yates ◽  
...  

2020 ◽  
Vol 34 (05) ◽  
pp. 7651-7658 ◽  
Author(s):  
Yang Deng ◽  
Wai Lam ◽  
Yuexiang Xie ◽  
Daoyuan Chen ◽  
Yaliang Li ◽  
...  

Community question answering (CQA) has recently gained increasing popularity in both academia and industry. However, the redundancy and lengthiness of crowdsourced answers limit the performance of answer selection and lead to reading difficulties and misunderstandings for community users. To solve these problems, we tackle the tasks of answer selection and answer summary generation in CQA with a novel joint learning model. Specifically, we design a question-driven pointer-generator network, which exploits the correlation between question-answer pairs to attend to the essential information when generating answer summaries. Meanwhile, we leverage the answer summaries to alleviate noise in the original lengthy answers when ranking the relevancy of question-answer pairs. In addition, we construct a new large-scale CQA corpus, WikiHowQA, which contains long answers for answer selection as well as reference summaries for answer summarization. Experimental results show that the joint learning method effectively addresses the answer redundancy issue in CQA and achieves state-of-the-art results on both answer selection and text summarization tasks. Furthermore, the proposed model shows strong transferability and applicability to resource-poor CQA tasks that lack reference answer summaries.


Author(s):  
Chenyu You ◽  
Nuo Chen ◽  
Yuexian Zou

Spoken question answering (SQA) has recently drawn considerable attention in the speech community. It requires systems to find the correct answers from given spoken passages. Common SQA systems consist of an automatic speech recognition (ASR) module and a text-based question answering module. However, previous methods suffer from severe performance degradation due to ASR errors. To alleviate this problem, this work proposes a novel multi-modal residual knowledge distillation method (MRD-Net), which further distills knowledge at the acoustic level from an audio-assistant (Audio-A). Specifically, we utilize a teacher (T) trained on manual transcriptions to guide the training of a student (S) on ASR transcriptions. We also show that introducing an Audio-A helps this procedure by learning the residual errors between T and S. Moreover, we propose a simple yet effective attention mechanism to adaptively leverage audio-text features as new deep attention knowledge to boost network performance. Extensive experiments demonstrate that the proposed MRD-Net achieves superior results compared with state-of-the-art methods on three spoken question answering benchmark datasets.
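The teacher-student setup underlying MRD-Net builds on the standard knowledge distillation objective: cross-entropy on the true label mixed with a temperature-softened KL term pulling the student toward the teacher. This is the generic KD loss only; MRD-Net's residual Audio-A and attention terms are not reproduced here.

```python
import numpy as np

def softmax(z, t=1.0):
    """Numerically stable softmax with temperature t."""
    z = np.asarray(z, dtype=float) / t
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, true_label, alpha=0.5, temp=2.0):
    """Standard KD objective: (1 - alpha) * CE(student, label)
    + alpha * T^2 * KL(teacher || student), both softened by temperature T."""
    p_s = softmax(student_logits)
    ce = -np.log(p_s[true_label])                     # hard-label cross-entropy
    p_t = softmax(teacher_logits, temp)               # softened teacher
    p_s_t = softmax(student_logits, temp)             # softened student
    kl = np.sum(p_t * (np.log(p_t) - np.log(p_s_t)))
    return (1 - alpha) * ce + alpha * temp**2 * kl
```

In the SQA setting, the teacher's logits come from a model trained on manual transcriptions, so the KL term transfers clean-text knowledge to a student that only ever sees noisy ASR output.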

