Design and implementation of information retrieval mechanism for the virtual museum creation

Author(s):  
Anastasiia Sochenkova ◽  
Natalia Podzharaya ◽  
Pavel Trofimov ◽  
Galina Novikova
2009 ◽  
pp. 468-483
Author(s):  
Efrem Mallach

The case study describes a small consulting company's experience in designing and implementing a database and an associated information retrieval system. The firm's choices are explained within the context of its needs and constraints. Issues associated with development methods are discussed, along with problems that arose from not following proper development disciplines.


Author(s):  
JIAN PING LI ◽  
DAWEI SONG ◽  
YONG QIN YANG ◽  
YUAN YAN TANG

Traditional benchmarking methods for information retrieval (IR) are based on experimental performance evaluation [1–14]. Although the precision and recall metrics can measure the effectiveness of a system, they cannot assess the underlying model. Recently, a theory of "aboutness" has been used for functional benchmarking of IR. Recent research shows that the functionality of an IR model is largely determined by its retrieval mechanism, i.e. its matching function. In particular, containment and overlapping (with or without a threshold value) are the core IR matching functions. The objective of this paper is to model the containment and overlapping matching functions within an aboutness-based framework and to analyze them from an abstract, theoretical viewpoint. Separate aboutness relations are defined for the containment, pure-overlapping (i.e. without a threshold) and threshold-overlapping matching functions, and the sets of properties each supports are derived and analyzed. These three relations can be used to determine and explain the functionality of an IR system and how that functionality affects its performance. Moreover, they can provide design guidelines for new IR systems.
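
As a rough illustration of these matching functions, the following Python sketch treats a query and a document as sets of index terms and implements containment, pure-overlapping, and threshold-overlapping matching. The function names, the set-based representation, and the fractional threshold are illustrative assumptions, not the paper's formal aboutness relations.

# Minimal sketch of set-based IR matching functions (assumed representation:
# queries and documents as sets of index terms; names are illustrative).

def containment_match(query_terms: set[str], doc_terms: set[str]) -> bool:
    """Containment: the document matches only if it contains every query term."""
    return query_terms <= doc_terms

def pure_overlap_match(query_terms: set[str], doc_terms: set[str]) -> bool:
    """Pure overlapping: any shared term counts as a match (no threshold)."""
    return bool(query_terms & doc_terms)

def threshold_overlap_match(query_terms: set[str], doc_terms: set[str],
                            threshold: float = 0.5) -> bool:
    """Threshold overlapping: the shared fraction of query terms must reach a cutoff."""
    if not query_terms:
        return False
    return len(query_terms & doc_terms) / len(query_terms) >= threshold

if __name__ == "__main__":
    query = {"virtual", "museum", "retrieval"}
    doc = {"virtual", "museum", "collection", "database"}
    print(containment_match(query, doc))        # False: "retrieval" is missing
    print(pure_overlap_match(query, doc))       # True: two terms are shared
    print(threshold_overlap_match(query, doc))  # True: 2/3 of query terms overlap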


2019 ◽  
Vol 28 (08) ◽  
pp. 1960011
Author(s):  
Nikolaos Polatidis ◽  
Elias Pimenidis ◽  
Andrew Fish ◽  
Stelios Kapetanakis

The evaluation of recommender systems is usually based on predictive accuracy and information retrieval metrics, with higher scores taken to indicate higher-quality recommendations. However, new algorithms are constantly being developed, and comparing algorithms within an evaluation framework is difficult because experiments are designed and implemented with different settings. In this paper, we propose a guidelines-based approach that can be followed to reproduce experiments and results within an evaluation framework. We evaluate our approach using a real dataset and well-known recommendation algorithms and metrics, showing that results can be difficult to reproduce when certain settings are missing, which forces additional evaluation cycles to identify the optimal settings.
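
To illustrate the kind of guideline the paper argues for, the Python sketch below records every experimental setting (random seed, split ratio, neighbourhood size, ranking cutoff) in a single dictionary and stores it alongside the resulting metric, so a run can be repeated under identical conditions. The setting names and the toy precision@k evaluation are illustrative assumptions, not the paper's actual framework, algorithms, or dataset.

# Minimal sketch: keep all evaluation settings in one place and save them with
# the result, so the experiment can be reproduced exactly (names are illustrative).
import json
import random

def precision_at_k(recommended: list[int], relevant: set[int], k: int) -> float:
    """Fraction of the top-k recommended items that are actually relevant."""
    top_k = recommended[:k]
    if not top_k:
        return 0.0
    return sum(1 for item in top_k if item in relevant) / len(top_k)

def run_experiment(settings: dict) -> dict:
    """Toy evaluation run: generates a seeded random ranking and scores it.

    Because the seed and all hyperparameters live in `settings`, re-running
    with the same dictionary reproduces the same score.
    """
    rng = random.Random(settings["seed"])
    catalogue = list(range(100))          # placeholder item catalogue
    rng.shuffle(catalogue)                # stands in for a recommendation ranking
    relevant = set(range(0, 100, 7))      # placeholder ground-truth relevant items
    score = precision_at_k(catalogue, relevant, settings["k"])
    return {"settings": settings, "precision_at_k": score}

if __name__ == "__main__":
    settings = {"seed": 42, "train_split": 0.8, "neighbours": 30, "k": 10}
    result = run_experiment(settings)
    print(json.dumps(result, indent=2))   # settings are stored next to the result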

