question answering systems
Recently Published Documents


TOTAL DOCUMENTS

315
(FIVE YEARS 110)

H-INDEX

19
(FIVE YEARS 5)

2022 ◽  
Vol 40 (1) ◽  
pp. 1-43
Author(s):  
Ruqing Zhang ◽  
Jiafeng Guo ◽  
Lu Chen ◽  
Yixing Fan ◽  
Xueqi Cheng

Question generation is an important yet challenging problem in Artificial Intelligence (AI), which aims to generate natural and relevant questions from various input formats, e.g., natural language text, structured databases, knowledge bases, and images. In this article, we focus on question generation from natural language text, which has received tremendous interest in recent years due to widespread applications such as data augmentation for question answering systems. During the past decades, many different question generation models have been proposed, from traditional rule-based methods to advanced neural network-based methods. Since a large variety of research works have been proposed, we believe it is the right time to summarize the current status, learn from existing methodologies, and gain some insights for future development. In contrast to existing reviews, in this survey we try to provide a more comprehensive taxonomy of question generation tasks from three different perspectives, i.e., the types of the input context text, the target answer, and the generated question. We take a deep look into existing models from different dimensions to analyze their underlying ideas, major design principles, and training strategies. We compare these models through benchmark tasks to obtain an empirical understanding of the existing techniques. Moreover, we discuss what is missing in the current literature and what the promising and desired future directions are.
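To make the task concrete, here is a minimal sketch of the rule-based end of the spectrum the survey describes: given a sentence and a target answer span, replace the span with a question word. The example sentence and the wh-word heuristic are illustrative assumptions, not a method from the survey.

```python
# Minimal, illustrative sketch of answer-aware, rule-based question generation.
# The example sentence and the wh-word heuristic are assumptions for illustration.

def guess_wh_word(answer: str) -> str:
    """Pick a question word from a crude guess about the answer type."""
    if answer[:1].isdigit():
        return "When" if len(answer) == 4 else "How many"
    if answer[:1].isupper():
        return "Who"  # naive: capitalised span -> person or organisation
    return "What"

def generate_question(sentence: str, answer: str) -> str:
    """Replace the answer span with a wh-word and re-punctuate as a question."""
    wh = guess_wh_word(answer)
    stem = sentence.replace(answer, wh, 1).rstrip(". ")
    return stem + "?"

if __name__ == "__main__":
    sentence = "Alan Turing proposed the imitation game in 1950."
    print(generate_question(sentence, "Alan Turing"))
    # Who proposed the imitation game in 1950?
    print(generate_question(sentence, "1950"))
    # Alan Turing proposed the imitation game in When?  (the rule breaks down here)
```

Rigid rules like this quickly produce ungrammatical questions on varied syntax, which is precisely what motivates the neural network-based models the survey analyses.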


2022 ◽  
Vol 70 (3) ◽  
pp. 4763-4780
Author(s):  
Abdulaziz Al-Besher ◽  
Kailash Kumar ◽  
M. Sangeetha ◽  
Tinashe Butsa

Author(s):  
Haonan Li ◽  
Ehsan Hamzei ◽  
Ivan Majic ◽  
Hua Hua ◽  
Jochen Renz ◽  
...  

Existing question answering systems struggle to answer factoid questions when geospatial information is involved. This is because most systems cannot accurately detect the geospatial semantic elements from the natural language questions, or capture the semantic relationships between those elements. In this paper, we propose a geospatial semantic encoding schema and a semantic graph representation which captures the semantic relations and dependencies in geospatial questions. We demonstrate that our proposed graph representation approach aids in the translation from natural language to a formal, executable expression in a query language. To decrease the need for people to provide explanatory information as part of their question and make the translation fully automatic, we treat the semantic encoding of the question as a sequential tagging task, and the graph generation of the query as a semantic dependency parsing task. We apply neural network approaches to automatically encode the geospatial questions into spatial semantic graph representations. Compared with current template-based approaches, our method generalises to a broader range of questions, including those with complex syntax and semantics. Our proposed approach achieves better results on GeoData201 than existing methods.
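A minimal sketch of the first half of this idea, treating geospatial semantic encoding as a sequence tagging problem and assembling the tagged spans into a structured query, is shown below. The tag set, the toy question, and the output query shape are illustrative assumptions, not the encoding schema or query language used in the paper.

```python
# Illustrative sketch: geospatial semantic encoding as sequence tagging,
# followed by assembling the tagged spans into a structured query.
# Tag names, example question, and query shape are assumptions for illustration.

# Tokens of "Which restaurants are within 500 m of Flinders Street Station?"
# paired with hypothetical BIO-style geospatial tags.
tagged = [
    ("Which", "O"), ("restaurants", "B-TYPE"), ("are", "O"),
    ("within", "B-RELATION"), ("500", "B-DISTANCE"), ("m", "I-DISTANCE"),
    ("of", "O"), ("Flinders", "B-PLACE"), ("Street", "I-PLACE"), ("Station", "I-PLACE"),
]

def spans(tags, label):
    """Collect the token spans carrying a given label."""
    out, current = [], []
    for token, tag in tags:
        if tag == f"B-{label}":
            if current:
                out.append(" ".join(current))
            current = [token]
        elif tag == f"I-{label}" and current:
            current.append(token)
        else:
            if current:
                out.append(" ".join(current))
                current = []
    if current:
        out.append(" ".join(current))
    return out

# Assemble a simple structured query from the recognised elements.
query = {
    "select": spans(tagged, "TYPE"),
    "relation": spans(tagged, "RELATION"),
    "distance": spans(tagged, "DISTANCE"),
    "anchor_place": spans(tagged, "PLACE"),
}
print(query)
# {'select': ['restaurants'], 'relation': ['within'],
#  'distance': ['500 m'], 'anchor_place': ['Flinders Street Station']}
```

In the paper itself, the second step is framed as semantic dependency parsing that produces a graph rather than a flat dictionary, but the division of labour is the same: tag the geospatial elements, then compose them into a formal, executable expression.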


2021 ◽  
Author(s):  
Satoshi Nishimura ◽  
Chiaki Oshiyama ◽  
Yuichi Oota

Long-term care costs, the burden on caregivers, and the resulting caregiver shortage are prevalent concerns in an increasingly aging society. Use of information and communication technologies is one possible solution to expedite human resource development; however, there are few knowledge resources that are interpretable by computers. In this study, we present an example of structured manuals for elderly care as a knowledge resource for question-answering systems. We conducted semi-structured interviews with caregivers in a care facility and retrieved information from an open data set to collect “Oops” incidents in daily caregiving. Based on the collected incidents, we created questions worded in natural language and corresponding computer-interpretable queries. Consequently, we obtained 150 Oops incidents and 33 computer-interpretable queries and confirmed that the queries can retrieve relevant knowledge from the knowledge resource.
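The mapping from a natural-language question to a computer-interpretable query over a structured knowledge resource might look like the sketch below. The RDF vocabulary, the incident triples, and the SPARQL query are assumptions for illustration; the abstract does not specify the paper's actual query formalism.

```python
# Illustrative sketch: answering a caregiving question with a computer-interpretable
# query over a small structured knowledge resource.
# The vocabulary, triples, and SPARQL query are assumptions, not the paper's schema.
from rdflib import Graph, Namespace, Literal

CARE = Namespace("http://example.org/care#")
g = Graph()
g.bind("care", CARE)

# One "Oops" incident from a hypothetical structured manual.
g.add((CARE.incident01, CARE.activity, Literal("wheelchair transfer")))
g.add((CARE.incident01, CARE.oops, Literal("brakes were not locked")))
g.add((CARE.incident01, CARE.prevention, Literal("lock both brakes before the transfer")))

# Natural-language question: "What should I check before a wheelchair transfer?"
# A computer-interpretable counterpart:
query = """
PREFIX care: <http://example.org/care#>
SELECT ?prevention WHERE {
    ?incident care:activity "wheelchair transfer" ;
              care:prevention ?prevention .
}
"""
for row in g.query(query):
    print(row.prevention)  # lock both brakes before the transfer
```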


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Xuan Gu ◽  
Zhengya Sun ◽  
Wensheng Zhang

Abstract Background: Symptom phrase recognition is essential to improve the use of unstructured medical consultation corpora for the development of automated question answering systems. A majority of previous works typically require sufficient manually annotated training data or as complete a symptom dictionary as possible. However, when applied to real scenarios, they face a dilemma due to the scarcity of annotated textual resources and the diversity of spoken language expressions. Methods: In this paper, we propose a composition-driven method to recognize symptom phrases in Chinese medical consultation corpora without any annotations. The basic idea is to directly learn models that capture the composition, i.e., the arrangement of the symptom components (semantic units of words). We introduce an automatic annotation strategy for standard symptom phrases collected from multiple data sources. In particular, we combine position information and the interaction scores between symptom components to characterize symptom phrases. Equipped with such models, we can robustly extract symptom phrases that have not been seen before. Results: Without any manual annotations, our method achieves strong positive results on symptom phrase recognition tasks. Experiments also show that our method has great potential when given access to larger corpora. Conclusions: Compositionality offers a feasible solution for extracting information from unstructured free text with scarce labels.
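A toy sketch of what combining position information with interaction scores between symptom components could look like is given below. The component inventory, the score values, and the scoring rule are invented for illustration and are not the paper's model.

```python
# Toy sketch of composition-driven scoring for candidate symptom phrases.
# Components, interaction scores, position weights, and the scoring rule are
# illustrative assumptions, not the model described in the paper.

# How strongly two symptom components tend to co-occur adjacently (assumed values).
interaction = {
    ("头", "疼"): 0.9,    # "head" followed by "ache"
    ("疼", "头"): 0.1,
    ("肚子", "疼"): 0.8,  # "stomach" followed by "ache"
}

# Preference for a component appearing first vs. last in a phrase (assumed values).
position = {
    "头": {"first": 0.9, "last": 0.2},
    "肚子": {"first": 0.9, "last": 0.1},
    "疼": {"first": 0.1, "last": 0.9},
}

def phrase_score(components):
    """Average position fit plus average pairwise interaction of adjacent components."""
    pos = sum(position[c]["first" if i == 0 else "last"]
              for i, c in enumerate(components)) / len(components)
    pairs = list(zip(components, components[1:]))
    inter = sum(interaction.get(p, 0.0) for p in pairs) / max(len(pairs), 1)
    return (pos + inter) / 2

print(phrase_score(["头", "疼"]))  # 0.9   plausible symptom phrase ("headache")
print(phrase_score(["疼", "头"]))  # 0.125 implausible ordering, scored low
```

High-scoring compositions of components can then be accepted as symptom phrases even when the exact phrase was never seen during training, which is the point of the composition-driven approach.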


2021 ◽  
Vol 26 (jai2021.26(2)) ◽  
pp. 88-95
Author(s):  
Hlybovets A ◽  
Tsaruk A

Within the framework of this paper, question-answering software systems and their basic architectures are analyzed. With the development of machine learning technologies, the creation of natural language processing (NLP) engines, and the rising popularity of virtual personal assistant programs that use speech synthesis (text-to-speech), there is a growing need for question-answering systems that can provide personalized answers to users' questions. All modern cloud providers offer frameworks for building question-answering systems, but personalized dialogue remains a problem. Personalization is important because it places additional demands on a question-answering system's ability to take user information into account while processing questions. Traditionally, a question-answering system (QAS) is developed as an application that contains a knowledge base and a user interface, which provides a user with answers to questions and a means of interaction with an expert. In this article we analyze modern approaches to architecture development and try to build a system from building blocks that already exist on the market. The main criteria for the NLP modules were: support for the Ukrainian language, natural language understanding, automatic detection of entities (attributes), the ability to construct a dialogue flow, quality and completeness of documentation, API capabilities and integration with external systems, and the possibility of integrating external knowledge bases. Based on this analysis, the article proposes a detailed architecture of a question-answering subsystem with elements of self-learning in the Ukrainian language, together with a detailed description of the system's main semantic components (architecture components).
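As a rough illustration of assembling a QAS from pluggable building blocks, the sketch below wires together three hypothetical stages (natural language understanding, knowledge base lookup, answer formatting). The component interfaces and the toy knowledge base are assumptions, not the architecture proposed in the article.

```python
# Illustrative sketch of a question-answering pipeline assembled from pluggable
# building blocks (NLU, knowledge lookup, answer formatting).
# The interfaces and the toy knowledge base are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class ParsedQuestion:
    intent: str
    entities: dict

def nlu(question: str) -> ParsedQuestion:
    """Stand-in for an NLU engine: detect intent and entities with naive rules."""
    if "працює" in question or "open" in question.lower():
        return ParsedQuestion("opening_hours", {"place": "library"})
    return ParsedQuestion("unknown", {})

KNOWLEDGE_BASE = {
    ("opening_hours", "library"): "The library is open from 9:00 to 18:00.",
}

def answer(question: str) -> str:
    """Route the parsed question to the knowledge base and format a reply."""
    parsed = nlu(question)
    fact = KNOWLEDGE_BASE.get((parsed.intent, parsed.entities.get("place")))
    return fact or "Sorry, I don't know the answer yet."

print(answer("Коли працює бібліотека?"))  # The library is open from 9:00 to 18:00.
```

In a production system each stand-in would be replaced by an off-the-shelf component chosen against the criteria listed above (Ukrainian language support, entity detection, dialogue flow, external knowledge base integration).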


2021 ◽  
Vol 2062 (1) ◽  
pp. 012027
Author(s):  
Poonam Gupta ◽  
Ruchi Garg ◽  
Amandeep Kaur

Abstract The COVID-19 pandemic has disrupted the entire world. This situation motivates researchers to resolve, in an efficient manner, the queries raised by people around the world. However, the limited number of resources available for gaining information and knowledge about COVID-19 creates a need to evaluate the existing Question Answering (QA) systems on COVID-19. In this paper, we compare the various QA systems available for answering coronavirus-related questions raised by people such as doctors and medical researchers. QA systems process queries submitted in natural language to find the most relevant answer among all the candidate answers for COVID-19 related questions. These systems apply text mining and information retrieval to the COVID-19 literature. This paper surveys the QA systems available for COVID-19: CovidQA, the CAiRE (Center for Artificial Intelligence Research)-COVID system, the CO-Search semantic search engine, COVIDASK, and RECORD (Research Engine for COVID Open Research Dataset). All these QA systems are also compared in terms of the significant parameters on which their efficiency relies, such as Precision at rank 1 (P@1), Recall at rank 3 (R@3), Mean Reciprocal Rank (MRR), F1-score, Exact Match (EM), Mean Average Precision, and score metrics.
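The ranking metrics named above are standard; a minimal sketch of how P@1, R@3, MRR, and EM can be computed from ranked system outputs is shown below. The example predictions and gold answers are made up for illustration.

```python
# Minimal implementations of common QA evaluation metrics.
# The example predictions and gold answers below are made up for illustration.

def precision_at_k(ranked, relevant, k=1):
    """Fraction of the top-k ranked answers that are relevant."""
    return sum(1 for a in ranked[:k] if a in relevant) / k

def recall_at_k(ranked, relevant, k=3):
    """Fraction of the relevant answers found in the top-k ranked answers."""
    top = ranked[:k]
    return sum(1 for a in relevant if a in top) / len(relevant)

def mean_reciprocal_rank(ranked_lists, relevant_sets):
    """Average of 1/rank of the first relevant answer over all questions."""
    total = 0.0
    for ranked, relevant in zip(ranked_lists, relevant_sets):
        for i, a in enumerate(ranked, start=1):
            if a in relevant:
                total += 1.0 / i
                break
    return total / len(ranked_lists)

def exact_match(prediction, gold):
    """1 if the prediction matches the gold answer exactly (case-insensitive)."""
    return int(prediction.strip().lower() == gold.strip().lower())

# Made-up outputs for two questions.
ranked_lists = [["fever", "cough", "fatigue"], ["7 days", "14 days", "21 days"]]
relevant_sets = [{"cough", "fever"}, {"14 days"}]

print(precision_at_k(ranked_lists[0], relevant_sets[0], k=1))  # 1.0
print(recall_at_k(ranked_lists[0], relevant_sets[0], k=3))     # 1.0
print(mean_reciprocal_rank(ranked_lists, relevant_sets))       # (1.0 + 0.5) / 2 = 0.75
print(exact_match("14 Days", "14 days"))                       # 1
```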


Author(s):  
Michael Caballero

Question Answering (QA) is a subfield of Natural Language Processing (NLP) and computer science focused on building systems that automatically answer questions from humans in natural language. This survey summarizes the history and current state of the field and is intended as an introductory overview of QA systems. After discussing QA history, this paper summarizes the different approaches to the architecture of QA systems -- whether they are closed or open-domain and whether they are text-based, knowledge-based, or hybrid systems. Lastly, some common datasets in this field are introduced and different evaluation metrics are discussed.
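For the text-based, open-domain family of architectures mentioned above, the simplest form is a retriever that scores passages against the question; a minimal sketch is below. The corpus and question are made-up examples, and the reader component found in full retriever-reader systems is omitted.

```python
# Minimal sketch of a text-based (retrieve-only) QA system: return the passage
# most similar to the question. Corpus and question are made-up examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "The Eiffel Tower is located in Paris and was completed in 1889.",
    "Mount Everest is the highest mountain above sea level.",
    "Python was created by Guido van Rossum and released in 1991.",
]
question = "Who created Python?"

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)       # index the passages
question_vector = vectorizer.transform([question])   # encode the question

scores = cosine_similarity(question_vector, doc_vectors)[0]
print(corpus[scores.argmax()])
# Python was created by Guido van Rossum and released in 1991.
```

Knowledge-based systems replace the passage index with a structured knowledge base and a query language, and hybrid systems combine both sources.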


Author(s):  
Eduardo Gabriel Cortes ◽  
Vinicius Woloszyn ◽  
Dante Barone ◽  
Sebastian Möller ◽  
Renata Vieira
