Improving Retrieval Performance for Verbose Queries via Axiomatic Analysis of Term Discrimination Heuristic

Author(s): Mozhdeh Ariannezhad, Ali Montazeralghaem, Hamed Zamani, Azadeh Shakery

Author(s): Alex Kohn, François Bry, Alexander Manta

Studies agree that searchers are often not satisfied with the performance of current enterprise search engines. As a consequence, researchers worldwide are actively investigating new avenues for search to improve retrieval performance. This paper presents YASA (Your Adaptive Search Agent), a fully implemented and thoroughly evaluated ontology-based information retrieval system for the enterprise. A salient feature of YASA is that large parts of the ontology are filled with facts automatically, by recycling and transforming existing data. YASA offers context-based personalization, faceted navigation, and semantic search capabilities. It has been deployed and evaluated in the pharmaceutical research department of Roche, Penzberg; the results show that even semantically simple ontologies suffice to considerably improve search performance.
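To make the ontology-driven idea concrete, the sketch below shows how existing structured records might be recycled into ontology facts and then queried semantically. It is an illustrative assumption built on rdflib, not YASA's actual implementation; the namespace, classes, and record fields are hypothetical.

```python
# Hypothetical sketch (not YASA's code): populate a small RDF ontology from
# existing structured records, then answer a faceted/semantic query over it.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/enterprise#")  # illustrative namespace
g = Graph()
g.bind("ex", EX)

# "Recycling" existing data: map fields of document records to ontology facts.
records = [
    {"id": "doc1", "title": "Assay protocol v2", "department": "Pharma Research", "type": "Protocol"},
    {"id": "doc2", "title": "HPLC maintenance log", "department": "Analytics", "type": "Report"},
]
for r in records:
    doc = EX[r["id"]]
    g.add((doc, RDF.type, EX[r["type"]]))
    g.add((doc, RDFS.label, Literal(r["title"])))
    g.add((doc, EX.department, Literal(r["department"])))

# Semantic/faceted search: all documents of class Protocol from a given department.
query = """
PREFIX ex: <http://example.org/enterprise#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?doc ?title WHERE {
    ?doc a ex:Protocol ;
         rdfs:label ?title ;
         ex:department "Pharma Research" .
}
"""
for row in g.query(query):
    print(row.doc, row.title)
```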


2003, Vol 45 (1), pp. 242-269
Author(s): Hervé Moulin, Richard Stong

Author(s): Antonio L. Alfeo, Mario G. C. A. Cimino, Gigliola Vaglini

In today's manufacturing, each technical assistance operation is digitally tracked. This results in a huge amount of textual data that can be exploited as a knowledge base to improve these operations. For instance, an ongoing problem can be addressed by retrieving potential solutions from among those used to cope with similar problems during past operations. To be effective, most approaches to semantic textual similarity need to be supported by a structured semantic context (e.g. an industry-specific ontology), resulting in high development and management costs. We overcome this limitation with a textual similarity approach featuring three functional modules. The data preparation module performs punctuation and stop-word removal and word lemmatization. The pre-processed sentences then pass through the sentence embedding module, based on Sentence-BERT (Bidirectional Encoder Representations from Transformers), which transforms them into fixed-length vectors. Their cosine similarity is processed by the scoring module to match the expected similarity between the two original sentences. Finally, this similarity measure is employed to retrieve the most suitable recorded solutions for the ongoing problem. The effectiveness of the proposed approach is tested (i) against a state-of-the-art competitor and two well-known textual similarity approaches, and (ii) on two case studies, i.e. private company technical assistance reports and a benchmark dataset for semantic textual similarity. With respect to the state of the art, the proposed approach yields comparable retrieval performance at significantly lower management cost: 30-minute questionnaires are sufficient to obtain the semantic-context knowledge to be injected into our textual search engine.
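A minimal sketch of the described three-module pipeline, assuming the sentence-transformers library; the pretrained model name, the simplified preprocessing, and the example report texts are illustrative assumptions rather than the authors' exact configuration.

```python
# Hypothetical sketch of the described pipeline (not the authors' exact code):
# pre-process report sentences, embed them with Sentence-BERT, and rank past
# solutions by cosine similarity to an ongoing problem description.
import re

from sentence_transformers import SentenceTransformer, util

def preprocess(text: str) -> str:
    """Data preparation module: lowercase, strip punctuation, drop a few stop-words.
    (Lemmatization, mentioned in the abstract, is omitted here for brevity.)"""
    text = re.sub(r"[^\w\s]", " ", text.lower())
    stop_words = {"the", "a", "an", "is", "are", "of", "to", "and"}
    return " ".join(w for w in text.split() if w not in stop_words)

# Past technical-assistance reports (illustrative examples).
past_solutions = [
    "Replaced the worn conveyor belt and recalibrated the tension sensor.",
    "Firmware update resolved the intermittent PLC communication error.",
    "Cleaned the clogged coolant filter and restored nominal pump pressure.",
]
ongoing_problem = "Pump pressure drops below threshold, coolant flow seems obstructed."

# Sentence embedding module: fixed-length vectors from a pretrained SBERT model.
model = SentenceTransformer("all-MiniLM-L6-v2")  # model choice is an assumption
solution_vecs = model.encode([preprocess(s) for s in past_solutions], convert_to_tensor=True)
query_vec = model.encode(preprocess(ongoing_problem), convert_to_tensor=True)

# Scoring module: cosine similarity, then retrieve the best-matching past solution.
scores = util.cos_sim(query_vec, solution_vecs)[0]
best = int(scores.argmax())
print(f"Most similar past solution ({scores[best].item():.2f}): {past_solutions[best]}")
```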


2021
Author(s): Matthieu Dogniaux, Cyril Crevoisier, Silvère Gousset, Étienne Le Coarer, Yann Ferrec, ...

2017, Vol 28 (11), pp. 4008-4022
Author(s): Adrian W Gilmore, Steven M Nelson, Farah Naaz, Ruth A Shaffer, Kathleen B McDermott
