search and retrieval
Recently Published Documents


TOTAL DOCUMENTS: 329 (five years: 36)
H-INDEX: 25 (five years: 2)

2022 · Vol 24 (3) · pp. 0-0

A content-based recommender system is a subclass of information filtering systems that recommends items to users based on item descriptions. It suggests items such as news, documents, articles, webpages, and journals according to each user's inclination by comparing the key features of the items with the key terms or features of the user's interest profile. This paper proposes a new methodology that uses Non-IIDness-based semantic term-term coupling over the content referred to by users to enhance recommendation results. In the proposed methodology, the semantic relationship between terms is analyzed by estimating both their explicit and implicit relationships. It associates terms that are semantically related in the real world or are used interchangeably, such as synonyms. After the term-term relation analysis, underestimated features of user profiles are enhanced, which results in improved similarity estimation between relevant items and user profiles. The experimental results show that the proposed methodology improves overall search and retrieval results compared with state-of-the-art algorithms.
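
As a rough illustration of the idea, the sketch below builds an explicit term-term coupling matrix from document co-occurrence and uses it to boost underestimated profile features. The paper's Non-IIDness measure and implicit coupling are not specified here, so cosine similarity over term-document distributions stands in for them as an assumption.

```python
# Minimal sketch: explicit term-term coupling for profile enhancement.
# Cosine over term-document distributions is a stand-in assumption for
# the paper's Non-IIDness-based coupling.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "neural search improves retrieval of relevant documents",
    "semantic retrieval links synonyms and related terms",
    "user profiles capture interest in news articles",
]

vec = CountVectorizer()
X = vec.fit_transform(docs)                 # docs x terms

# Explicit coupling: how similarly two terms are distributed over docs.
coupling = cosine_similarity(X.T)           # terms x terms

# A sparse user profile over the same vocabulary (assumed input).
profile = np.zeros(len(vec.get_feature_names_out()))
profile[vec.vocabulary_["retrieval"]] = 1.0

# Propagate weight to coupled terms so underestimated features are
# boosted, then renormalize before matching items against the profile.
enhanced = profile @ coupling
enhanced /= np.linalg.norm(enhanced)
```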


2021 · Vol 0 (0)
Author(s): John H. Gennari, Matthias König, Goksel Misirli, Maxwell L. Neal, David P. Nickerson, ...

Abstract A standardized approach to annotating computational biomedical models and their associated files can facilitate model reuse and reproducibility among research groups, enhance search and retrieval of models and data, and enable semantic comparisons between models. Motivated by these potential benefits and guided by consensus across the COmputational Modeling in BIology NEtwork (COMBINE) community, we have developed a specification for encoding annotations in Open Modeling and EXchange (OMEX)-formatted archives. This document details version 1.2 of the specification, which builds on version 1.0 published last year in this journal. In particular, this version includes a set of initial model-level annotations (whereas version 1.0 described annotations exclusively at a smaller scale). Additionally, this version adopts best practices for namespaces and introduces omex-library.org as a common root for all annotations. Distributing modeling projects within an OMEX archive is a best practice established by COMBINE, and the OMEX metadata specification presented here provides a harmonized, community-driven approach for annotating a variety of standardized model representations. This specification acts as a technical guideline for developing software tools that can support this standard, and thereby encourages broad advances in model reuse, discovery, and semantic analyses.
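
To illustrate the kind of annotation the specification harmonizes, the sketch below uses rdflib to attach a model-level annotation under the omex-library.org root. The archive path, description, and ontology term are invented placeholders for illustration, not examples taken from the specification itself.

```python
# Minimal sketch of an OMEX-style model-level annotation in RDF.
# Paths and terms below are illustrative placeholders, not normative.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS

BQBIOL = Namespace("http://biomodels.net/biology-qualifiers/")

g = Graph()
g.bind("dcterms", DCTERMS)
g.bind("bqbiol", BQBIOL)

# omex-library.org acts as the common root for resources in an archive.
model = URIRef("http://omex-library.org/example.omex/model.xml")

# Model-level annotations: a human-readable description, plus a link
# from the model to an ontology term via a BioModels qualifier.
g.add((model, DCTERMS.description, Literal("Example kinetic model")))
g.add((model, BQBIOL["is"], URIRef("http://identifiers.org/GO:0008152")))

print(g.serialize(format="turtle"))
```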


PLoS ONE · 2021 · Vol 16 (9) · pp. e0256874
Author(s): Iknoor Singh, Carolina Scarton, Kalina Bontcheva

The Coronavirus (COVID-19) pandemic has led to a rapidly growing ‘infodemic’ of health information online. This has motivated the need for accurate semantic search and retrieval of reliable COVID-19 information across millions of documents and in multiple languages. To address this challenge, this paper proposes a novel high-precision, high-recall neural Multistage BiCross encoder approach: a sequential three-stage ranking pipeline that uses the Okapi BM25 retrieval algorithm together with transformer-based bi-encoders and cross-encoders to rank documents effectively with respect to a given query. We present experimental results from our participation in the Multilingual Information Access (MLIA) shared task on COVID-19 multilingual semantic search. The independently evaluated MLIA results validate our approach and demonstrate that it outperforms other state-of-the-art approaches on nearly all evaluation metrics, in both monolingual and bilingual runs.
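
The sketch below shows the three-stage shape of such a pipeline using rank_bm25 and sentence-transformers; the model names, candidate cut-off, and toy corpus are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch: BM25 recall, bi-encoder re-scoring, cross-encoder
# re-ranking. Models and cut-offs below are assumptions.
from rank_bm25 import BM25Okapi
from sentence_transformers import CrossEncoder, SentenceTransformer, util

docs = [
    "COVID-19 vaccines reduce the risk of severe illness.",
    "Masks limit droplet transmission indoors.",
    "The 2008 financial crisis reshaped global banking.",
]
query = "How effective are COVID-19 vaccines?"

# Stage 1: Okapi BM25 keeps a broad pool of lexical matches (recall).
bm25 = BM25Okapi([d.lower().split() for d in docs])
pool = bm25.get_top_n(query.lower().split(), docs, n=2)

# Stage 2: a bi-encoder re-scores the pool with dense embeddings.
bi = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
q_emb, d_emb = bi.encode(query), bi.encode(pool)
order = util.cos_sim(q_emb, d_emb)[0].argsort(descending=True).tolist()
candidates = [pool[i] for i in order]

# Stage 3: a cross-encoder jointly attends to each (query, document)
# pair for the final, most precise ranking.
cross = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores = cross.predict([(query, d) for d in candidates])
ranked = [d for _, d in sorted(zip(scores, candidates), reverse=True)]
print(ranked[0])
```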


PLoS ONE · 2021 · Vol 16 (8) · pp. e0256463
Author(s): John M. Franchak, Brianna McGee, Gabrielle Blanch

How are eyes and head adapted to meet the demands of visual exploration in different tasks and environments? In two studies, we measured the horizontal movements of the eyes (using mobile eye tracking in Studies 1 and 2) and the head (using inertial sensors in Study 2) while participants completed a walking task and a search and retrieval task in a large, outdoor environment. We found that the spread of visual exploration was greater while searching than while walking, and this was driven primarily by increased movement of the head rather than the eyes. The contribution of the head to gaze shifts of different eccentricities was also greater when searching than when walking. Findings are discussed with respect to understanding visual exploration as a motor action with multiple degrees of freedom.


Author(s): N. Anastopoulou, M. Kavouras, M. Kokla, E. Tomai

Abstract. Research on knowledge discovery in the geospatial domain currently focuses on semi-structured, or even unstructured, rather than fully structured content. Attention has turned to the plethora of resources on the Web, such as HTML pages, news articles, blogs, and social media. In geospatial-oriented approaches, the extracted semantic information is further used for semantic analysis, search, and retrieval. The aim of this paper is to extract, analyse and visualize geospatial semantic information and emotions from texts on climate change. A collection of articles on climate change is used to demonstrate the developed approach. These articles describe environmental and socio-economic dimensions of climate change across the Earth and include a wealth of information related to environmental concepts and the geographic locations affected by it. The results are analysed to understand which specific human emotions are associated with environmental concepts and/or locations, and which environmental terms are linked to locations. To support understanding of this information, semantic networks are used as a powerful tool for visualizing the links among concepts, locations and emotions.
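
The sketch below shows how such a concept-location-emotion network could be assembled with networkx; the triples are hand-written placeholders, not results from the paper's article collection.

```python
# Minimal sketch of a concept-location-emotion semantic network.
# The edges are invented examples, not extracted results.
import networkx as nx

G = nx.Graph()

# Edges link environmental concepts, locations, and emotions that
# co-occur in text (here, hand-written placeholders).
G.add_edge("drought", "East Africa", relation="affects")
G.add_edge("drought", "fear", relation="evokes")
G.add_edge("sea level rise", "Venice", relation="affects")
G.add_edge("sea level rise", "sadness", relation="evokes")

# Inspect which locations and emotions attach to a given concept.
for _, node, data in G.edges("drought", data=True):
    print(f"drought --{data['relation']}--> {node}")
```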


2021 · pp. 016555152110221
Author(s): Usashi Chatterjee

Dietary practices are governed by a mix of ethnographic aspects, such as social, cultural and environmental factors. These aspects need to be taken into consideration when analysing food-related queries. Queries are usually ambiguous, so it is essential to understand, analyse and refine them for better search and retrieval. The work focuses on identifying the explicit, implicit and hidden facets of a query, taking into consideration the context: the culinary domain. This article proposes a technique for query understanding, analysis and refinement based on a domain-specific knowledge model. Queries are conceptualised by mapping query terms to concepts defined in the model. This enables understanding a query from a semantic point of view and determining the meaning of its terms and their interrelatedness. The knowledge model acts as a backbone providing the context for query understanding, analysis and refinement, and outperforms other models, such as Schema.org, the BBC Food Ontology and the Recipe Ontology.
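
The sketch below illustrates the term-to-concept mapping step with a toy dictionary standing in for the domain knowledge model, whose actual structure is not reproduced here.

```python
# Minimal sketch of query conceptualisation against a domain model.
# The dictionary is a toy stand-in for the paper's culinary model.
KNOWLEDGE_MODEL = {
    "biryani": {"concept": "RiceDish", "cuisine": "South Asian",
                "implicit": ["rice", "spices"]},
    "vegan":   {"concept": "DietaryPractice",
                "implicit": ["no animal products"]},
}

def conceptualise(query: str) -> list[dict]:
    """Map query terms to model concepts, exposing implicit facets."""
    return [KNOWLEDGE_MODEL[t] for t in query.lower().split()
            if t in KNOWLEDGE_MODEL]

# "vegan biryani" resolves to DietaryPractice + RiceDish, so retrieval
# can be refined with the hidden facets (rice, spices, no animal
# products) rather than the surface terms alone.
print(conceptualise("vegan biryani"))
```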


2021 · Author(s): Deidre Simmons

This thesis considers how artist-run centres create access to their film collections, using the Canadian Filmmakers Distribution Centre (CFMDC) as its case study. It reviews current literature on accessibility, including controlled vocabularies, keywords, folksonomies, and social tagging, and examines how two other institutions, the Institut National de l'Audiovisuel (INA) in Paris, France, and IsumaTV in Igloolik, Nunavut, Canada, currently create access to their film collections, to discover how different forms of accessibility are being used in practice. It then examines how the CFMDC currently creates access to its film collection and, finally, recommends ways accessibility at an artist-run centre could be improved: to help the centre reach a wider audience, to help researchers in the search and retrieval process, and to keep the film object itself accessible.

