Dagstuhl Seminar 19461 on Conversational Search

2020 · Vol 54 (1) · pp. 1-11
Author(s): Avishek Anand, Lawrence Cavedon, Matthias Hagen, Hideo Joho, Mark Sanderson, ...

In the week of November 10-15, 2019, 44 researchers from the fields of information retrieval and Web search, natural language processing, human-computer interaction, and dialogue systems met for the Dagstuhl Seminar 19461 "Conversational Search" to share the latest developments in the area of conversational search and to discuss its research agenda and future directions. The clear signal from the seminar is that research opportunities to advance conversational search exist in many areas and that collaboration within an interdisciplinary community is essential to achieve these goals. This report gives an overview of the program and selected findings of the working groups.

2019 · Vol 53 (2) · pp. 3-10
Author(s): Muthu Kumar Chandrasekaran, Philipp Mayr

The 4th joint BIRNDL workshop was held at the 42nd ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2019) in Paris, France. BIRNDL 2019 aimed to stimulate IR researchers and digital library professionals to elaborate on new approaches in natural language processing, information retrieval, scientometrics, and recommendation techniques that can advance the state of the art in scholarly document understanding, analysis, and retrieval at scale. The workshop comprised several paper sessions and the 5th edition of the CL-SciSumm Shared Task.


Author(s): Sijia Liu, Yanshan Wang, Andrew Wen, Liwei Wang, Na Hong, ...

BACKGROUND: Widespread adoption of electronic health records has enabled the secondary use of electronic health record data for clinical research and health care delivery. Natural language processing techniques have shown promise in their capability to extract the information embedded in unstructured clinical data, and information retrieval techniques provide flexible and scalable solutions that can augment natural language processing systems for retrieving and ranking relevant records.
OBJECTIVE: In this paper, we present the implementation of a cohort retrieval system that can execute textual cohort selection queries on both structured data and unstructured text: Cohort Retrieval Enhanced by Analysis of Text from Electronic Health Records (CREATE).
METHODS: CREATE is a proof-of-concept system that leverages a combination of structured queries and information retrieval techniques on natural language processing results to improve cohort retrieval performance, using the Observational Medical Outcomes Partnership Common Data Model to enhance model portability. The natural language processing component was used to extract common data model concepts from textual queries. We designed a hierarchical index to support common data model concept search using information retrieval techniques and frameworks.
RESULTS: Our case study on 5 cohort identification queries, evaluated with the precision-at-5 information retrieval metric at both the patient and document levels, demonstrates that CREATE achieves a mean precision at 5 of 0.90, outperforming systems that use only structured data or only unstructured text, with mean precision-at-5 values of 0.54 and 0.74, respectively.
CONCLUSIONS: The implementation and evaluation on Mayo Clinic Biobank data demonstrated that CREATE outperforms cohort retrieval systems that use only structured data or only unstructured text on complex textual cohort queries.
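The headline number in the CREATE case study is mean precision at 5 over the cohort queries. The snippet below is a minimal sketch of how that metric is commonly computed; it is not code from the CREATE system, and the query identifiers, ranked patient lists, and relevance judgments are hypothetical.

```python
# Minimal sketch of mean precision-at-5, the metric reported in the CREATE
# case study. NOT code from the CREATE system; query IDs, ranked patient
# lists, and relevance judgments below are hypothetical.

def precision_at_k(ranked_ids, relevant_ids, k=5):
    """Fraction of the top-k retrieved items that are relevant."""
    top_k = ranked_ids[:k]
    return sum(1 for pid in top_k if pid in relevant_ids) / k

# Hypothetical patient-level results for three cohort queries:
# (ranked list returned by the system, set of truly relevant patients).
runs = {
    "cohort_q1": (["p03", "p17", "p02", "p21", "p09"], {"p03", "p02", "p09", "p30"}),
    "cohort_q2": (["p11", "p04", "p44", "p07", "p13"], {"p11", "p04", "p07", "p13"}),
    "cohort_q3": (["p25", "p06", "p19", "p01", "p08"], {"p06", "p01"}),
}

scores = [precision_at_k(ranked, relevant) for ranked, relevant in runs.values()]
print("P@5 per query:", scores)               # [0.6, 0.8, 0.4]
print("mean P@5:", sum(scores) / len(scores))  # 0.6
```

At the document level the same computation applies, with ranked document identifiers and document-level relevance judgments in place of patients.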


2015 · Vol 103 (1) · pp. 131-138
Author(s): Yves Bestgen

Abstract: Average precision (AP) is one of the most widely used metrics in information retrieval and natural language processing research. It is usually thought that the expected AP of a system that ranks documents randomly is equal to the proportion of relevant documents in the collection. This paper shows that this value is only approximate and provides a procedure for efficiently computing the exact value. An analysis of the difference between the approximate and the exact value shows that the discrepancy is large when the collection contains few documents, but becomes very small when it contains at least 600 documents.
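As an illustration of the gap the paper analyzes, the sketch below compares the usual R/N approximation with the exact expected AP of a uniformly random ranking of N documents, R of them relevant. The closed form used here follows from linearity of expectation and is checked against brute-force enumeration over all orderings of small collections; it is an independent derivation, not necessarily the computational procedure proposed in the paper.

```python
# Sketch comparing the common approximation (R/N) with the exact expected AP
# of a uniformly random ranking. The closed form below follows from linearity
# of expectation; it is an independent derivation, not necessarily the
# procedure proposed in the paper, and is verified here by brute force.
from itertools import permutations

def average_precision(ranking):
    """AP of a binary relevance ranking (1 = relevant, 0 = non-relevant)."""
    hits, total = 0, 0.0
    for k, rel in enumerate(ranking, start=1):
        if rel:
            hits += 1
            total += hits / k
    return total / hits

def expected_ap_random(n, r):
    """Exact expected AP of a random ranking of n documents, r of them relevant."""
    h_n = sum(1.0 / k for k in range(1, n + 1))  # harmonic number H_n
    return h_n / n + (r - 1) * (n - h_n) / (n * (n - 1))

def expected_ap_bruteforce(n, r):
    """Average AP over all n! orderings (feasible only for small n)."""
    docs = [1] * r + [0] * (n - r)
    perms = list(permutations(docs))
    return sum(average_precision(p) for p in perms) / len(perms)

for n, r in [(5, 2), (8, 3)]:
    print(f"n={n}, r={r}: "
          f"approx R/N = {r / n:.4f}, "
          f"exact = {expected_ap_random(n, r):.4f}, "
          f"brute force = {expected_ap_bruteforce(n, r):.4f}")
```

For the small collections shown, the exact expectation is noticeably higher than R/N, consistent with the abstract's observation that the discrepancy is largest when the collection contains few documents.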

