Assessment of Search Interface of Information Retrieval Systems: A Case Study of Select Academic Databases

Author(s): Huma Shafiq, Zahid Ashraf Wani

2014, Vol 9 (4), pp. 47
Author(s): Joanne L. Jordan

A Review of: Mu, X., Lu, K., & Ryu, H. (2014). Explicitly integrating MeSH thesaurus help into health information retrieval systems: An empirical user study. Information Processing and Management, 50(1), 24-40. http://dx.doi.org/10.1016/j.ipm.2013.03.005

Abstract

Objectives – To compare the effectiveness of a search interface with a built-in thesaurus (MeSH) term browser and tree browser (MeSHMed) against a simple search interface (SimpleMed) in supporting health information retrieval. The researchers also examined the contribution of the MeSH term and tree browser components to effective information retrieval, and assessed whether and how these elements influence users' search methods and strategies.

Design – Empirical comparison study.

Setting – A four-year university in the United States of America.

Subjects – 45 undergraduate and postgraduate students from 12 different academic departments.

Methods – Researchers recruited 55 students from a wide range of disciplines using flyers posted across a university campus; 10 were later excluded. Participants were paid a small stipend for taking part in the study. The authors developed two information retrieval systems, SimpleMed and MeSHMed, to search across a test collection, OHSUMED, a database of 348,566 Medline citations used in information retrieval research. SimpleMed includes a search browser and a popup window displaying record details. The MeSHMed search interface includes two additional browsers, one for looking up details of MeSH terms and another showing where a term fits into the tree structure. The search tasks had two parts: to define a key biomedical term, and to explore the association between concepts. After a brief tutorial covering the key functions of both systems, which avoided suggesting that one interface was better than the other, each participant searched six topics, three on each interface, allocated randomly using a 6x6 Latin square design. The study tracked participants' perceived topic familiarity on a 9-point Likert scale, measured before and after each search, with the change in score recorded. It examined the time spent in each search system, recorded objectively by system logs, as a measure of engagement with the search task. Finally, the study examined whether participants found an answer to the set question, and whether that answer was wrong, partially correct, or correct. Participants were asked about the proportion of time they spent on each of the system components, and transaction log data were used to capture transitions between the search components. Participants also added their comments to a questionnaire after the search phase of the experiment.

Main results – The baseline mean topic familiarity scores were similar for both interfaces: SimpleMed's mean was 2.01 (standard deviation 1.43) and MeSHMed's was 2.08 (standard deviation 1.60). Topic familiarity change scores were averaged over the three questions on each interface and compared using a paired, two-tailed t-test, which showed a statistically significant difference between the mean change in topic familiarity for SimpleMed and for MeSHMed. Only 46 (17%) of the questions were not answered: 34 (74%) when participants were using SimpleMed and 12 (26%) when using MeSHMed. A chi-squared test showed an association between the interface used and whether the answer was correct, suggesting that MeSHMed users were less likely to answer questions incorrectly. The question-answer scores were positively correlated with the topic familiarity change scores, indicating that participants whose familiarity with the topic improved the most were more likely to answer the question correctly. The mean time spent overall on the two interfaces was not significantly different, though the researchers report only total times and test statistics, not mean times. On the MeSHMed interface, participants on average found the Term Browser the most useful component and spent the most time in it. The Tree Browser was rated as contributing the least to the search task, and participants spent the least time in this part of the interface. Patterns of transitions between the components are reported; the most common were from the Search Browser to the Popup records, and between the Term Browser and the Search Browser in both directions. These observations suggest that participants were verifying terms and moving back and forth between components to carry out iterative and more accurate searches. The authors identified seven typical patterns and described four different combinations of transitions between components. Based on questionnaire feedback, participants found the Term Browser helpful for defining the medical terms used and for suggesting additional terms to add to their searches. The Tree Browser allowed participants to see how terms relate to each other and helped identify related terms, despite several negative comments about this feature. Almost all participants (43 of 45) preferred MeSHMed for searching, finding the extra components helpful for producing better results.

Conclusion – MeSHMed was shown to be more effective than SimpleMed for improving topic familiarity and finding correct answers to the set questions. Most participants preferred the MeSHMed interface, with its Term Browser and Tree Browser, over the straightforward SimpleMed interface. Both MeSHMed components contributed to the search process: the Term Browser was particularly helpful for defining and developing new concepts, and the Tree Browser added a view of the relationships between terms. The authors suggest that health information retrieval systems should include visible and accessible thesaurus searching to assist with developing search strategies.
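The two key comparisons reported above, a paired two-tailed t-test on mean topic familiarity change and a chi-squared test of interface against answer correctness, can be sketched in a few lines of Python with SciPy. This is an illustrative sketch only: the variable names and all data values are placeholders of our own, not figures from the study.

```python
# Illustrative sketch of the two statistical comparisons described above,
# using SciPy. All data values are invented placeholders, not study data.
from scipy import stats

# Per-participant mean change in topic familiarity (post minus pre,
# averaged over the three topics searched on each interface).
simplemed_change = [0.7, 1.2, 0.3, 1.0, 0.8]   # placeholder values
meshmed_change = [1.5, 2.1, 1.2, 1.8, 1.6]     # placeholder values

# Paired, two-tailed t-test: each participant used both interfaces.
t_stat, p_value = stats.ttest_rel(meshmed_change, simplemed_change)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")

# Chi-squared test of association between interface and answer outcome.
# Rows: SimpleMed, MeSHMed; columns: wrong, partially correct, correct.
answer_counts = [
    [20, 40, 75],   # placeholder counts for SimpleMed
    [8, 35, 92],    # placeholder counts for MeSHMed
]
chi2, p, dof, expected = stats.chi2_contingency(answer_counts)
print(f"chi-squared: chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```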


2012, Vol 12 (2), pp. 137-150
Author(s): Raj Kumar Bhardwaj

Abstract

In this digital age, users require immediate access to information. To foster the process of research, the legal fraternity demands efficient online legal information systems. Raj Kumar Bhardwaj provides a view from India, reporting on a case study of the use of various legal information databases in the Faculty of Law, University of Delhi, India. In his paper, he also reviews and discusses various aspects of legal information retrieval systems, with particular reference to the essential legal databases that cover Indian law.


1967, Vol 06 (02), pp. 45-51
Author(s): A. Kent, J. Belzer, M. Kuhfeerst, E. D. Dym, D. L. Shirey, ...

An experiment is described which attempts to derive quantitative indicators of the potential relevance predictability of the intermediate stimuli used to represent documents in information retrieval systems. In effect, since the decision to peruse an entire document is often predicated upon the examination of one »level of processing« of the document (e.g., the citation and/or abstract), it became interesting to analyze the properties of what constitutes »relevance«. However, prior to such an analysis, an even more elementary step had to be made, namely, to determine what portions of a document should be examined.

An evaluation was made, under controlled experimental conditions, of the ability of intermediate response products (IRPs), functioning as cues to the information content of full documents, to predict the relevance determination that would subsequently be made on these documents by motivated users of information retrieval systems. The hypothesis that there might be other intermediate response products (selected extracts from the document, i.e., first paragraph, last paragraph, and the combination of first and last paragraph) that would be as representative of the full document as the traditional IRPs (citation and abstract) was tested systematically. The results showed that:

1. there is no significant difference among the several IRP treatment groups in the number of cue evaluations of relevancy which match the subsequent user relevancy decision on the document;
2. first and last paragraph combinations consistently predicted relevancy to a higher degree than the other IRPs;
3. abstracts were undistinguished as predictors; and
4. the apparent high predictability rating for citations was not substantive.

Some of these results are quite different from what would be expected from previous work with unmotivated subjects.
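The predictability measure at the heart of this experiment is simply the rate at which a relevance judgement made from an intermediate response product agrees with the user's subsequent judgement of the full document. A minimal Python sketch of that per-IRP match-rate calculation follows; the IRP labels and judgement tuples are invented placeholders, not data from the study.

```python
# Minimal sketch of the match-rate measure implied by the abstract above:
# the proportion of relevance judgements made from an intermediate response
# product (IRP) that agree with the user's later judgement of the full
# document. All records below are invented placeholders.
from collections import defaultdict

# (irp_type, judged_relevant_from_cue, judged_relevant_from_full_document)
judgements = [
    ("citation", True, False),
    ("abstract", True, True),
    ("first_paragraph", False, False),
    ("last_paragraph", True, False),
    ("first_and_last_paragraph", True, True),
    ("first_and_last_paragraph", False, False),
]

matches = defaultdict(int)
totals = defaultdict(int)
for irp, cue_relevant, doc_relevant in judgements:
    totals[irp] += 1
    matches[irp] += int(cue_relevant == doc_relevant)

# A higher match rate means the IRP better predicts full-document relevance.
for irp, total in totals.items():
    rate = matches[irp] / total
    print(f"{irp}: {matches[irp]}/{total} judgements matched ({rate:.0%})")
```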


2005, Vol 14 (5), pp. 335-346
Author(s): Carlos Benito Amat

Libri, 2020, Vol 70 (3), pp. 227-237
Author(s): Mahdi Zeynali-Tazehkandi, Mohsen Nowkarizi

Abstract

Evaluation of information retrieval systems is a fundamental topic in Library and Information Science. The aim of this paper is to connect the system-oriented and the user-oriented approaches to the relevant philosophical schools. By reviewing the related literature, it was found that the evaluation of information retrieval systems is successful if it benefits from both the system-oriented and the user-oriented approach (a composite approach). The system-oriented approach is rooted in Parmenides' philosophy of stability (the immovable), which Plato accepts and attributes to the world of forms; the user-oriented approach is rooted in Heraclitus' philosophy of flux (motion), which Plato defers and attributes to the tangible world. Thus, using Plato's theory provides a comprehensive approach for recognizing the concept of relevance. The theoretical and philosophical foundations determine the type of research methods and techniques; therefore, Plato's dialectical method is an appropriate composite method for evaluating information retrieval systems.

