Reciprocal Rank
Recently Published Documents

TOTAL DOCUMENTS: 37 (last five years: 13)
H-INDEX: 6 (last five years: 2)

2022 · Vol 24 (3) · pp. 1-16
Author(s): Manvi Breja, Sanjay Kumar Jain

Why-type non-factoid questions are ambiguous and their answers vary widely. Returning a single appropriate answer to the user requires answer extraction, re-ranking, and validation. In many cases the meaning and context of a document must be understood, rather than matching the exact words of the question. The paper addresses this problem by exploring lexico-syntactic, semantic, and contextual query-dependent features, some of which are based on deep learning frameworks, to estimate the probability that an answer candidate is relevant to the question. The features are weighted by the feature-importance scores returned by an ensemble ExtraTreesClassifier. An answer re-ranker model is implemented that ranks highest the answer with the largest feature similarity between question and answer candidate, achieving a Mean Reciprocal Rank (MRR) of 0.64. Finally, each answer candidate is validated by matching its answer type, and the highest-ranked candidate with a matching answer type is returned to the user.
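For reference, Mean Reciprocal Rank scores each question by the reciprocal of the rank at which its first relevant answer appears and averages over all questions. A minimal sketch follows; the function name and toy data are illustrative, not taken from the paper.

```python
# Minimal MRR sketch: only the rank of the first relevant answer per question counts;
# a question with no relevant answer contributes 0.
def mean_reciprocal_rank(ranked_relevance):
    """ranked_relevance: list of 0/1 relevance lists, one per question,
    ordered by the re-ranker's output."""
    total = 0.0
    for flags in ranked_relevance:
        for rank, rel in enumerate(flags, start=1):
            if rel:
                total += 1.0 / rank
                break
    return total / len(ranked_relevance)

# First relevant answers at ranks 1, 2, and 4 -> (1 + 0.5 + 0.25) / 3 ≈ 0.583
print(mean_reciprocal_rank([[1, 0, 0], [0, 1, 0], [0, 0, 0, 1]]))
```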


2021 · Vol 39 (4) · pp. 1-29
Author(s): Sheng-Chieh Lin, Jheng-Hong Yang, Rodrigo Nogueira, Ming-Feng Tsai, Chuan-Ju Wang, et al.

Conversational search plays a vital role in conversational information seeking. Queries in information-seeking dialogues are ambiguous for traditional ad hoc information retrieval (IR) systems because of the coreference and omission problems inherent in natural language dialogue, so resolving these ambiguities is crucial. In this article, we tackle conversational passage retrieval, an important component of conversational search, by addressing query ambiguities with query reformulation integrated into a multi-stage ad hoc IR system. Specifically, we propose two conversational query reformulation (CQR) methods: (1) term importance estimation and (2) neural query rewriting. For the former, we expand conversational queries using important terms extracted from the conversational context with frequency-based signals. For the latter, we reformulate conversational queries into natural, stand-alone, human-understandable queries with a pretrained sequence-to-sequence model. Detailed quantitative and qualitative analyses of the two CQR methods are provided, explaining their advantages, disadvantages, and distinct behaviors. Moreover, to leverage the strengths of both CQR methods, we propose combining their output with reciprocal rank fusion, yielding state-of-the-art retrieval effectiveness: a 30% improvement in NDCG@3 over the best submission to the Text REtrieval Conference (TREC) Conversational Assistant Track (CAsT) 2019.
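Reciprocal rank fusion combines several rankings by summing, for each document, the reciprocal of its (smoothed) rank in each list. A minimal sketch under common assumptions: the smoothing constant k=60 is the usual default, and the helper name and toy rankings are illustrative, not drawn from the article.

```python
# Minimal reciprocal rank fusion (RRF) sketch.
def reciprocal_rank_fusion(ranked_lists, k=60):
    """ranked_lists: iterable of ranked lists of document ids, best first.
    Returns document ids sorted by their fused RRF score."""
    scores = {}
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Fuse the rankings produced by the two query reformulation methods.
fused = reciprocal_rank_fusion([
    ["d3", "d1", "d7"],   # e.g. ranking from term-importance expansion
    ["d1", "d5", "d3"],   # e.g. ranking from the neural query rewriter
])
print(fused)  # documents ranked high by both lists rise to the top
```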


Author(s): Di Wu, Xiao-Yuan Jing, Haowen Chen, Xiaohui Kong, Jifeng Xuan

Application Programming Interface (API) tutorials are an important API learning resource. To help developers learn APIs, an API tutorial is often split into a number of consecutive units that describe the same topic (i.e., tutorial fragments). We regard a tutorial fragment explaining an API as a relevant fragment of that API. Automatically recommending relevant tutorial fragments can help developers learn how to use an API. However, existing approaches recommend relevant fragments in a supervised or unsupervised manner, which either requires substantial manual annotation effort or yields inaccurate recommendations. Furthermore, these approaches only allow developers to input exact API names. In practice, developers often do not know which APIs to use, so they are more likely to describe API-related questions in natural language. In this paper, we propose a novel approach, called Tutorial Fragment Recommendation (TuFraRec), to effectively recommend relevant tutorial fragments for API-related natural language questions without much manual annotation effort. We split each API tutorial into fragments and extract APIs from each fragment to build API-fragment pairs. Given a question, TuFraRec first generates several clarification APIs that are related to the question. We use the clarification APIs and the API-fragment pairs to construct candidate API-fragment pairs. Then, we design a semi-supervised metric learning (SML)-based model to find relevant API-fragment pairs in the candidate list; it works well with a few labeled API-fragment pairs and a large number of unlabeled ones, reducing the manual effort of labeling the relevance of API-fragment pairs. Finally, we sort and recommend relevant API-fragment pairs based on a recommendation strategy. We evaluate TuFraRec on 200 API-related natural language questions and two public tutorial datasets (Java and Android). The results show that, on average, TuFraRec improves NDCG@5 by 0.06 and 0.09 and Mean Reciprocal Rank (MRR) by 0.07 and 0.09 on the two tutorial datasets compared with the state-of-the-art approach.
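NDCG@k, the other metric reported above, discounts the gain of each relevant fragment by the logarithm of its rank and normalizes by the best achievable ordering. A minimal sketch; the helper names, graded relevances, and cut-off are illustrative assumptions, not values from the paper.

```python
# Minimal NDCG@k sketch.
import math

def dcg_at_k(relevances, k):
    # Discounted cumulative gain over the top-k ranked items.
    return sum(rel / math.log2(rank + 1)
               for rank, rel in enumerate(relevances[:k], start=1))

def ndcg_at_k(relevances, k):
    """relevances: graded relevance of the recommended fragments, in ranked order."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Example: relevant fragments at ranks 1 and 3 within the top 5.
print(ndcg_at_k([1, 0, 1, 0, 0], k=5))
```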


2021 · Vol 9 (1) · pp. 65-73
Author(s): Katarina N. Lakonawa, Sebastianus A. S. Mola, Adriana Fanggidae

The use of nonstandard language is increasingly common in social media communication. It is not limited to sentences, clauses, or phrases, but also extends to individual words. In this study, nonstandard words (NSW) are normalized to standard Indonesian words (SW). The Nazief-Adriani stemmer (NAS) is extended into a nonstandard stemmer (NSS) by improving its ability to detect nonstandard affixes. The aim of the study is to compare NAS and NSS for NSW normalization. The Needleman-Wunsch similarity algorithm is used to weight the matching results. Evaluation with Mean Reciprocal Rank (MRR) on 3,438 NSWs shows that NSS with a query count of 9 (Q=9) achieves the highest MRR of 79.26%, with an average of 50.48%, whereas NAS with Q=9 achieves a highest MRR of 72.87% with an average of 47.23%. Across the two MRR evaluations, words beginning with the letters r, f, and j yield the highest stemming results for both NAS and NSS. The most significant MRR improvements occur for words beginning with 'd', 'n', and 't', which are the initial letters of some nonstandard affixes.
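Needleman-Wunsch computes a global alignment score between two strings with a dynamic-programming table, which is how a nonstandard word can be weighted against standard-word candidates. A minimal sketch; the match/mismatch/gap values, helper name, and toy words are illustrative assumptions rather than the parameters used in the paper.

```python
# Minimal Needleman-Wunsch global alignment scoring sketch.
def needleman_wunsch_score(a, b, match=1, mismatch=-1, gap=-1):
    # dp[i][j] = best alignment score of a[:i] against b[:j]
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        dp[i][0] = dp[i - 1][0] + gap
    for j in range(1, len(b) + 1):
        dp[0][j] = dp[0][j - 1] + gap
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + sub,  # align the two characters
                           dp[i - 1][j] + gap,      # gap in b
                           dp[i][j - 1] + gap)      # gap in a
    return dp[len(a)][len(b)]

# Rank standard-word candidates for a nonstandard word by alignment score.
candidates = ["makan", "makanan", "main"]
print(sorted(candidates, key=lambda w: needleman_wunsch_score("mkn", w), reverse=True))
```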


2020 · Vol 34 (08) · pp. 13286-13293
Author(s): Akshay Gugnani, Hemant Misra

This paper presents a job recommender system that matches resumes to job descriptions (JDs), both of which are non-standard and unstructured or semi-structured in form. First, the paper proposes a combination of natural language processing (NLP) techniques for the task of skill extraction. On an industrial-scale dataset, the combined techniques yielded a precision of 0.78 and a recall of 0.88. The paper then introduces the concept of extracting implicit skills, i.e., skills that are not explicitly mentioned in a JD but may be implicit in the context of geography, industry, or role. To mine and infer implicit skills for a JD, we find other JDs similar to it. This similarity match is done in a semantic space: a Doc2Vec model is trained on 1.1 million JDs covering several domains crawled from the web, and all JDs are projected onto this space. Skills absent from the JD but present in similar JDs are collected and weighted using several techniques to obtain the set of final implicit skills. Finally, several similarity measures are explored to match the skills extracted from a candidate's resume to the explicit and implicit skills of JDs. Empirical results for matching resumes and JDs show that the proposed approach achieves a mean reciprocal rank of 0.88, a 29.4% improvement over a baseline method that uses only explicit skills.
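The retrieval of similar JDs in the semantic space can be illustrated with a small Doc2Vec sketch, assuming gensim's Doc2Vec API; the corpus, tokenization, and parameters below are toy-scale assumptions, far smaller than the 1.1 million JDs used in the paper.

```python
# Minimal sketch: train Doc2Vec on a few JDs and find the nearest JDs to a new one.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

jds = [
    "python developer with experience in django and rest apis",
    "data scientist skilled in machine learning and sql",
    "backend engineer familiar with python, flask and postgresql",
]
corpus = [TaggedDocument(words=jd.split(), tags=[i]) for i, jd in enumerate(jds)]
model = Doc2Vec(corpus, vector_size=50, min_count=1, epochs=40)

# Project a new JD onto the same space and retrieve its nearest neighbours;
# skills present in those neighbours but absent from this JD become implicit-skill candidates.
query_vec = model.infer_vector("python engineer building web services".split())
print(model.dv.most_similar([query_vec], topn=2))
```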


2019 · Vol 27 (2) · pp. 194-201
Author(s): Dina Demner-Fushman, Yassine Mrabet, Asma Ben Abacha

Objective: Consumers increasingly turn to the internet in search of health-related information, and they want their questions answered with short, precise passages rather than having to analyze lists of relevant documents returned by search engines and read each one to find an answer. We aim to answer consumer health questions with information from reliable sources.
Materials and Methods: We combine knowledge-based, traditional machine learning, and deep learning approaches to understand consumers' questions and select the best answers from consumer-oriented sources. We evaluate the end-to-end system and its components on simple questions generated in a pilot development of the MedlinePlus Alexa skill, as well as on the short and long real-life questions submitted to the National Library of Medicine by consumers.
Results: Our system achieves 78.7% mean average precision and 87.9% mean reciprocal rank on simple Alexa questions, and 44.5% mean average precision and 51.6% mean reciprocal rank on real-life questions submitted by National Library of Medicine consumers.
Discussion: The ensemble of deep learning, domain knowledge, and traditional approaches recognizes question type and focus well on the simple questions, but leaves room for improvement on the real-life consumer questions. Information retrieval approaches alone are sufficient for finding answers to simple Alexa questions; answering real-life questions, however, benefits from a combination of information retrieval and inference approaches.
Conclusion: A pilot practical implementation of the research needed to help consumers find reliable answers to their health-related questions demonstrates that for most questions reliable answers exist and can be found automatically with acceptable accuracy.
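Mean average precision, the second metric reported above, averages the precision at each rank where a relevant passage appears, then averages across questions. A minimal sketch with illustrative helper names and toy relevance judgments, not data from the study.

```python
# Minimal mean average precision (MAP) sketch.
def average_precision(flags):
    """flags: 0/1 relevance of the retrieved passages, in ranked order."""
    hits, score = 0, 0.0
    for rank, rel in enumerate(flags, start=1):
        if rel:
            hits += 1
            score += hits / rank   # precision at each relevant rank
    return score / hits if hits else 0.0

def mean_average_precision(all_flags):
    return sum(average_precision(f) for f in all_flags) / len(all_flags)

# Two questions: one answered at ranks 1 and 3, one answered at rank 2.
print(mean_average_precision([[1, 0, 1], [0, 1, 0]]))
```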

