query reformulation
Recently Published Documents


TOTAL DOCUMENTS

222
(FIVE YEARS 37)

H-INDEX

19
(FIVE YEARS 2)

2022 ◽  
Vol 12 (1) ◽  
pp. 0-0

Understanding the actual need of the user behind a question is crucial in non-factoid why-question answering, as why-questions are complex and their interpretation involves ambiguity and redundancy. The precise requirement is to determine the focus of a question and reformulate it accordingly to retrieve the expected answers. The paper analyzes different types of why-questions and proposes an algorithm for each class that determines the focus and reformulates the question into a query by appending focal terms and the cue phrase ‘because’. Further, a user interface is implemented that accepts a why-question as input, applies the different processing components, reformulates the question, and finally retrieves web pages by posing the query to the Google search engine. To measure the accuracy of the process, users are asked to score, from 1 to 10, how relevant the retrieved web pages are according to their understanding. The results show that a maximum precision of 89% is achieved for informational why-questions and a minimum of 48% for opinionated why-questions.
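The reformulation step this abstract describes can be sketched roughly as follows. The focus-detection logic here is a deliberately crude stand-in, since the per-class algorithms are not given in the abstract, and the function name and `focal_terms` parameter are illustrative:

```python
def reformulate_why_question(question, focal_terms=()):
    """Crude sketch: strip the interrogative frame from a why-question and
    append the cue phrase 'because' (plus any focal terms) to bias web
    retrieval toward explanatory answers."""
    words = question.strip().rstrip("?").split()
    # Remove the leading "Why" and, if present, the auxiliary verb.
    if words and words[0].lower() == "why":
        words = words[1:]
    if words and words[0].lower() in {"is", "are", "do", "does", "did",
                                      "was", "were", "can", "will"}:
        words = words[1:]
    return " ".join(list(focal_terms) + words + ["because"])

print(reformulate_why_question("Why is the sky blue?"))
# → "the sky blue because"
```

A real implementation would apply the class-specific focus-detection algorithm before assembling the query; this sketch only illustrates the append-'because' reformulation pattern.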


2021 ◽  
Author(s):  
Songchun Yang ◽  
Xiangwen Zheng ◽  
Yu Xiao ◽  
Yu Yang ◽  
Dongsheng Zhao

2021 ◽  
Vol 39 (4) ◽  
pp. 1-29
Author(s):  
Sheng-Chieh Lin ◽  
Jheng-Hong Yang ◽  
Rodrigo Nogueira ◽  
Ming-Feng Tsai ◽  
Chuan-Ju Wang ◽  
...  

Conversational search plays a vital role in conversational information seeking. Queries in information-seeking dialogues are ambiguous to traditional ad hoc information retrieval (IR) systems because of the coreference and omission problems inherent in natural language dialogue, so resolving these ambiguities is crucial. In this article, we tackle conversational passage retrieval, an important component of conversational search, by addressing query ambiguities with query reformulation integrated into a multi-stage ad hoc IR system. Specifically, we propose two conversational query reformulation (CQR) methods: (1) term importance estimation and (2) neural query rewriting. For the former, we expand conversational queries using important terms extracted from the conversational context with frequency-based signals. For the latter, we reformulate conversational queries into natural, stand-alone, human-understandable queries with a pretrained sequence-to-sequence model. Detailed quantitative and qualitative analyses of the two CQR methods are provided, explaining their advantages, disadvantages, and distinct behaviors. Moreover, to leverage the strengths of both CQR methods, we propose combining their output with reciprocal rank fusion, yielding state-of-the-art retrieval effectiveness: a 30% improvement in NDCG@3 over the best submission to the Text REtrieval Conference (TREC) Conversational Assistant Track (CAsT) 2019.
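The reciprocal rank fusion (RRF) step used here to combine the two CQR methods' rankings follows a standard formula and can be sketched directly; the constant `k=60` is the commonly used default, not a value stated in the abstract:

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of document IDs with reciprocal rank fusion.

    Each document's fused score is the sum of 1 / (k + rank) over every
    ranking in which it appears (ranks are 1-based). Documents ranked
    highly by multiple input rankings rise to the top.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Example: fuse the outputs of the two hypothetical CQR rankings.
fused = reciprocal_rank_fusion([["d1", "d2", "d3"], ["d2", "d3", "d1"]])
# → ["d2", "d1", "d3"]
```

Because RRF operates only on ranks, it needs no score normalisation across the two CQR methods, which is one reason it is a popular fusion choice.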


2021 ◽  
Author(s):  
Negar Arabzadeh ◽  
Amin Bigdeli ◽  
Shirin Seyedsalehi ◽  
Morteza Zihayat ◽  
Ebrahim Bagheri

2021 ◽  
pp. 016555152096869
Author(s):  
Xiaojuan Zhang

As a mechanism to guide users towards a better representation of their information needs, the query reformulation method generates new queries based on users’ historical queries. To preserve the original search intent, query reformulations should be context-aware and should attempt to meet users’ personal information needs. The mainstream method aims to generate candidate queries first, according to their past frequencies, and then score (re-rank) these candidates based on the semantic consistency of terms, dependency among latent semantic topics and user preferences. We exploit embeddings (i.e. term, user and topic embeddings) to use contextual information and individual preferences more effectively to improve personalised query reformulation. Our work involves two major tasks. In the first task, candidate queries are generated from an original query by substituting or adding one term, and the contextual similarities between the terms are calculated based on the term embeddings and augmented with user personalisation. In the second task, the candidate queries generated in the first task are evaluated and scored (re-ranked) according to the consistency of the semantic meaning of the candidate query and the user preferences based on a graphical model with the term, user and topic embeddings. Experiments show that our proposed model yields significant improvements compared with the current state-of-the-art methods.
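The first-task scoring this abstract describes, contextual similarity from term embeddings blended with user personalisation, might look roughly like the sketch below. The mean-pooled context, the blending weight `alpha`, and all names are assumptions for illustration, not the paper's actual model:

```python
import numpy as np

def score_candidate_term(query_terms, candidate_term, term_vecs, user_vec,
                         alpha=0.7):
    """Hedged sketch: score a substituted/added term by its embedding
    similarity to the query context, blended with a user-preference
    vector. `alpha` weights context similarity against personalisation."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    cand = term_vecs[candidate_term]
    # Represent the query context as the mean of its term embeddings.
    context = np.mean([term_vecs[t] for t in query_terms], axis=0)
    return alpha * cos(cand, context) + (1 - alpha) * cos(cand, user_vec)

# Toy 2-d embeddings: "auto" is close to the query term "car",
# "banana" is orthogonal to it.
term_vecs = {
    "car": np.array([1.0, 0.0]),
    "auto": np.array([0.9, 0.1]),
    "banana": np.array([0.0, 1.0]),
}
user_vec = np.array([1.0, 0.0])
```

Under this toy setup, `score_candidate_term(["car"], "auto", ...)` exceeds `score_candidate_term(["car"], "banana", ...)`, so "auto" would be preferred as the substitute term.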


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Kyoungsik Na

Purpose – This study explores the effects of cognitive load on the propensity to reformulate queries during information seeking on the web.

Design/methodology/approach – This study employs an experimental design to analyze the effect of cognitive load manipulations on the propensity for query reformulation between experimental and control groups. Three affective components that contribute to cognitive load were manipulated: mental demand, temporal demand and frustration.

Findings – A significant difference in query reformulation behavior was found between searchers exposed to the cognitive load manipulations and those who were not: exposed searchers made half as many query reformulations. Furthermore, the National Aeronautics and Space Administration Task Load Index (NASA-TLX) cognitive load scores of searchers exposed to the three manipulations were higher than those of unexposed searchers, indicating that the manipulation was effective. Query reformulation behavior did not differ across task types.

Originality/value – The findings suggest that a dual-task method and the NASA-TLX assessment serve as good indicators of cognitive load. Because the findings show that cognitive load hinders a searcher's interaction with information search tools, this study provides empirical support for reducing cognitive load when designing information systems and user interfaces.
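The NASA-TLX scores compared in the findings are, in the common unweighted ("raw TLX") variant, simply the mean of six subscale ratings; the study may use the weighted pairwise-comparison variant instead, so this is only a minimal sketch of the raw form:

```python
def nasa_tlx_raw(ratings):
    """Unweighted ('raw') NASA-TLX workload score: the mean of the six
    subscale ratings, each on a 0-100 scale."""
    subscales = ("mental", "physical", "temporal",
                 "performance", "effort", "frustration")
    return sum(ratings[s] for s in subscales) / len(subscales)

# Illustrative ratings for a searcher under heavy mental demand,
# time pressure and frustration (the three manipulated components).
score = nasa_tlx_raw({"mental": 80, "physical": 20, "temporal": 75,
                      "performance": 40, "effort": 70, "frustration": 85})
```

Higher scores indicate greater perceived workload, which is the quantity the study's manipulation check compares between groups.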


Author(s):  
Jia Chen ◽  
Jiaxin Mao ◽  
Yiqun Liu ◽  
Fan Zhang ◽  
Min Zhang ◽  
...  
