Searching Health Information in Question-Answering Systems

Author(s):  
María-Dolores Olvera-Lobo ◽  
Juncal Gutiérrez-Artacho

Question-Answering Systems (QA Systems) can be viewed as a new alternative to the more familiar information retrieval systems. These systems try to offer detailed, understandable answers to factual questions, rather than simply retrieving a collection of documents related to a particular search (Jackson & Schilder, 2005). The authors carry out a study to evaluate the quality and efficiency of open- and restricted-domain QA systems as sources for physicians and users in general, through one monolingual evaluation and one multilingual evaluation. They use definition-type questions to evaluate the QA systems and to determine whether these systems are useful for retrieving medical information. In addition, they analyze and evaluate the results obtained, and identify the source or sources used by the systems and their procedures (Olvera-Lobo & Gutiérrez-Artacho, 2010, 2011).

2013 ◽  
Vol 86 ◽  
pp. 276-294
Author(s):  
Katia Vila ◽  
Antonio Fernández ◽  
José M. Gómez ◽  
Antonio Ferrández ◽  
Josval Díaz

2018 ◽  
Vol 2 (4) ◽  
pp. 140 ◽  
Author(s):  
Ramadhana Rosyadi ◽  
Said Al-Faraby ◽  
Adiwijaya Adiwijaya

Islam recognizes 25 prophets as guides for human life, and there are documents containing the stories of the prophets' lives. This study aims to build a more specific question answering system that generates relevant answers directly, rather than returning whole documents. A question answering system can overcome a limitation of information retrieval systems: the answer it returns responds directly to the request submitted, instead of being a document that may merely contain the answer. This study uses a pattern-based method, extracting the sentence fragments that constitute the answer by matching them against predefined patterns. The choice of dataset limits the questions that can be submitted to information stored in the data itself. In addition, questions are limited to factoid question words, namely who, when, where, what, and how. The accuracy obtained using the pattern-based method in the question answering system is 39.36%.
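
A minimal sketch of how such pattern-based factoid answer extraction might look in Python; the question-word patterns, regular expressions, and sample passage below are illustrative assumptions, not the patterns actually used in the study.

import re

# Illustrative answer patterns keyed by factoid question word. These
# regular expressions are assumptions for demonstration only; the study's
# actual handcrafted patterns are not given in the abstract.
ANSWER_PATTERNS = {
    "who": r"Prophet\s+([A-Z][a-z]+)",                    # a named person
    "where": r"\bin\s+([A-Z][a-z]+(?:\s+[A-Z][a-z]+)?)",  # a place after "in"
    "when": r"\b(\d{1,4}\s*(?:BC|BCE|AD|CE))\b",          # a dated period
}

def extract_answer(question, passage):
    """Return the first fragment of the passage that matches the pattern
    for the question's interrogative word, or None if nothing matches."""
    qword = question.strip().lower().split()[0]
    pattern = ANSWER_PATTERNS.get(qword)
    if pattern is None:
        return None  # unsupported question type
    match = re.search(pattern, passage)
    return match.group(1) if match else None

# Toy usage with an invented passage:
passage = "Prophet Yusuf was imprisoned in Egypt before rising to power."
print(extract_answer("Where was Prophet Yusuf imprisoned?", passage))  # Egypt

The pattern table also makes the reported limitation concrete: any question whose interrogative word or phrasing falls outside the prepared patterns simply yields no answer.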


2014 ◽  
Vol 9 (4) ◽  
pp. 47
Author(s):  
Joanne L. Jordan

A Review of: Mu, X., Lu, K., & Ryu, H. (2014). Explicitly integrating MeSH thesaurus help into health information retrieval systems: An empirical user study. Information Processing and Management, 50(1), 24-40. http://dx.doi.org/10.1016/j.ipm.2013.03.005

Abstract

Objectives – To compare the effectiveness of a search interface with built-in thesaurus (MeSH) terms and tree browsers (MeSHMed) against a simple search interface (SimpleMed) in supporting health information retrieval. Researchers also examined the contribution of the MeSH term and tree browser components to effective information retrieval, and assessed whether and how these elements influence users' search methods and strategies.

Design – Empirical comparison study.

Setting – A four-year university in the United States of America.

Subjects – 45 undergraduate and postgraduate students from 12 different academic departments.

Methods – Researchers recruited 55 students from a wide range of disciplines using flyers posted across a university campus; 10 were excluded. Participants were paid a small stipend for taking part in the study. The authors developed two information retrieval systems, SimpleMed and MeSHMed, to search across a test collection, OHSUMED, a database containing 348,566 Medline citations used in information retrieval research. SimpleMed includes a search browser and a popup window displaying record details. The MeSHMed search interface includes two additional browsers, one for looking up details of MeSH terms and another showing where a term fits into the tree structure. The search tasks had two parts: to define a key biomedical term, and to explore the association between concepts. After a brief tutorial covering the key functions of both systems, which avoided suggesting that one interface was better than the other, each participant searched for six topics, three on each interface, allocated randomly using a 6x6 Latin square design. The study tracked participants' perceived topic familiarity on a 9-point Likert scale, measured before and after each search, with changes in score recorded. It measured the time spent in each search system, recorded objectively by system logs, as an indicator of engagement with the search task. Finally, the study examined whether participants found an answer to the set question, and whether that response was wrong, partially correct, or correct. Participants were asked about the portion of time they spent on each of the system components, and transaction log data was used to capture transitions between the search components. Participants also added comments to a questionnaire after the search phase of the experiment.

Main results – The baseline mean topic familiarity scores were similar for both interfaces: SimpleMed's mean was 2.01 (standard deviation 1.43), compared to MeSHMed's mean of 2.08 (standard deviation 1.60). The mean topic familiarity change score was taken over the three questions on each interface and compared using a paired-sample two-tailed t-test, which showed a statistically significant difference between the mean change in topic familiarity scores for SimpleMed and MeSHMed. Of the questions, 46 (17%) were not answered: 34 (74% of these) when participants were using SimpleMed and 12 (26%) when using MeSHMed. A chi-squared test found an association between the interface and whether the answer was correct, suggesting that MeSHMed users were less likely to answer questions incorrectly. Question-answer scores correlated positively with topic familiarity change scores, indicating that participants whose familiarity with the topic improved the most were more likely to answer the question correctly. The mean amount of time spent overall on the two interfaces was not significantly different, though the researchers do not provide data on mean times, only total time and test statistics. On the MeSHMed interface, participants on average found the Term Browser the most useful component and spent the most time in it. The Tree Browser was rated as contributing the least to the search task, and participants spent the least time in that part of the interface. Patterns of transitions between the components are reported, the most common of which were from the Search Browser to the popup records, and from the Term Browser to the Search Browser and vice versa. These observations suggest that participants were verifying terms and clicking back and forth between the components to carry out iterative and more accurate searches. The authors identify seven typical patterns and describe four different combinations of transitions between components. Based on questionnaire feedback, participants found the Term Browser helpful for defining the medical terms used and for suggesting additional terms to add to their searches. The Tree Browser allowed participants to see how terms relate to each other and helped identify related terms, despite many negative comments about this feature. Almost all participants (43 of 45) preferred MeSHMed for searching, finding the extra components helpful in producing better results.

Conclusion – MeSHMed was shown to be more effective than SimpleMed for improving topic familiarity and finding correct answers to the set questions. Most participants preferred the MeSHMed interface, with its Term Browser and Tree Browser, to the straightforward SimpleMed interface. Both MeSHMed components contributed to the search process; the Term Browser was particularly helpful for defining and developing new concepts, and the Tree Browser added a view of the relationships between terms. The authors suggest that health information retrieval systems include visible and accessible thesaurus searching to assist with developing search strategies.
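
A brief sketch of the counterbalancing idea behind the 6x6 Latin square allocation described above; the topic labels and the rule splitting topics between interfaces are illustrative assumptions, not the authors' exact procedure.

# Sketch of a 6x6 Latin square allocation: each of six topics appears once
# in every row (participant group) and once in every column (search order),
# so topic order is counterbalanced across participants. The interface
# split below (first three topics on SimpleMed, last three on MeSHMed)
# is an assumption for illustration.
TOPICS = ["T1", "T2", "T3", "T4", "T5", "T6"]

def latin_square(n):
    """Cyclic n x n Latin square: row i is the base sequence shifted by i."""
    return [[(i + j) % n for j in range(n)] for i in range(n)]

def allocate(participant):
    """Return the (topic, interface) sequence for one participant."""
    row = latin_square(6)[participant % 6]
    ordered = [TOPICS[k] for k in row]
    return [(topic, "SimpleMed" if i < 3 else "MeSHMed")
            for i, topic in enumerate(ordered)]

for p in range(3):
    print(p, allocate(p))

The point of the design is that any learning or fatigue effect tied to search order is spread evenly across topics and interfaces, so differences between SimpleMed and MeSHMed are not confounded with presentation order.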


1967 ◽  
Vol 06 (02) ◽  
pp. 45-51 ◽  
Author(s):  
A. Kent ◽  
J. Belzer ◽  
M. Kuhfeerst ◽  
E. D. Dym ◽  
D. L. Shirey ◽  
...  

An experiment is described which attempts to derive quantitative indicators of the potential relevance predictability of the intermediate stimuli used to represent documents in information retrieval systems. In effect, since the decision to peruse an entire document is often predicated upon the examination of one »level of processing« of the document (e.g., the citation and/or abstract), it became interesting to analyze the properties of what constitutes »relevance«. However, prior to such an analysis, an even more elementary step had to be taken, namely, to determine what portions of a document should be examined.

An evaluation was made, under controlled experimental conditions, of the ability of intermediate response products (IRPs), functioning as cues to the information content of full documents, to predict the relevance determinations that would subsequently be made on those documents by motivated users of information retrieval systems. The hypothesis that there might be other intermediate response products (selected extracts from the document, i.e., first paragraph, last paragraph, and the combination of first and last paragraph) that would be as representative of the full document as the traditional IRPs (citation and abstract) was tested systematically. The results showed that:

1. there is no significant difference among the several IRP treatment groups in the number of cue evaluations of relevancy which match the subsequent user relevancy decision on the document;
2. first and last paragraph combinations consistently predicted relevancy to a higher degree than the other IRPs;
3. abstracts were undistinguished as predictors; and
4. the apparent high predictability rating for citations was not substantive.

Some of these results are quite different from what would be expected from previous work with unmotivated subjects.
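
The core comparison in the experiment can be summarized as an agreement rate between cue-based and full-document relevance judgments per IRP type. Below is a minimal sketch of that computation in Python; the judgment records are invented for illustration, not data from the study.

from collections import defaultdict

# Hypothetical (irp_type, cue_judgment, full_document_judgment) records;
# real data would come from the controlled experiment described above.
judgments = [
    ("citation", True, False),
    ("abstract", True, True),
    ("first_paragraph", False, True),
    ("first+last", True, True),
    ("first+last", False, False),
]

def match_rates(records):
    """For each IRP type, return the fraction of cue-based relevance
    judgments that match the later full-document judgment."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for irp, cue, full in records:
        totals[irp] += 1
        hits[irp] += int(cue == full)
    return {irp: hits[irp] / totals[irp] for irp in totals}

print(match_rates(judgments))
# e.g., {'citation': 0.0, 'abstract': 1.0, 'first_paragraph': 0.0, 'first+last': 1.0}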

