Information Retrieval Evaluation
Recently Published Documents

TOTAL DOCUMENTS: 104 (five years: 21)
H-INDEX: 16 (five years: 2)

2021 ◽  
Vol 11 (13) ◽  
pp. 5913
Author(s):  
Zhuang He ◽  
Yin Feng

Automatic singing transcription and analysis from polyphonic music recordings are essential to a number of indexing techniques for computational auditory scenes. To obtain a note-level sequence, in this work we divide the singing transcription task into two subtasks: melody extraction and note transcription. We construct a salience function in terms of harmonic and rhythmic similarity together with a measurement of spectral balance. Central to our proposed method is the measurement of melody contours, which are calculated using edge searching based on their continuity properties. We calculate the mean contour salience by separating melody analysis from the adjacent-breakpoint connective strength matrix, and we select the final melody contour to determine MIDI notes. This method, which combines audio signal processing with image edge analysis, provides a more interpretable analysis platform for continuous singing signals. Experimental analysis using Music Information Retrieval Evaluation Exchange (MIREX) datasets shows that our technique achieves promising results for both audio melody extraction and polyphonic singing transcription.
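The harmonic component of a salience function such as the one described above is commonly built by harmonic summation: each candidate fundamental frequency is scored by the weighted spectral magnitude at its harmonic positions, and the best-scoring candidate per frame feeds into contour formation. A minimal sketch of that general idea follows; the function name, decay weights, and frequency grid are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def harmonic_salience(spectrum, freqs, f0_grid, n_harmonics=5, decay=0.8):
    """Score each candidate F0 by summing decayed magnitudes at its harmonics."""
    salience = np.zeros(len(f0_grid))
    for i, f0 in enumerate(f0_grid):
        for h in range(1, n_harmonics + 1):
            # spectral bin nearest the h-th harmonic of this candidate F0
            bin_idx = np.argmin(np.abs(freqs - h * f0))
            salience[i] += (decay ** (h - 1)) * spectrum[bin_idx]
    return salience

# Toy example: a synthetic 440 Hz tone with five decaying harmonics.
freqs = np.linspace(0, 4000, 2048)
spectrum = np.zeros_like(freqs)
for h in range(1, 6):
    spectrum[np.argmin(np.abs(freqs - 440 * h))] = 1.0 / h

f0_grid = np.arange(100, 1000, 5.0)
best_f0 = f0_grid[np.argmax(harmonic_salience(spectrum, freqs, f0_grid))]
```

The decay weighting penalizes octave errors: a subharmonic candidate collects energy only at every other harmonic, and an octave-up candidate misses the fundamental, so the true pitch scores highest.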


2021 ◽  
Vol 55 (1) ◽  
pp. 1-11
Author(s):  
Parth Mehta ◽  
Thomas Mandl ◽  
Prasenjit Majumder ◽  
Surupendu Gangopadhyay

This report gives an overview of the Forum for Information Retrieval Evaluation (FIRE) initiative for South-Asian languages. The FIRE conference was conducted online in December 2020. The event combined a conference, including keynotes and peer-reviewed paper sessions, with an Evaluation Forum. This report presents an overview of the conference and provides insights into the evaluation tracks. Current domains include legal information access, mixed-script information retrieval, semantic analysis, and social media post classification. The tasks are discussed and connections to other evaluation initiatives are shown.


2020 ◽  
Vol 54 (2) ◽  
pp. 1-2
Author(s):  
Dan Li

The availability of test collections in the Cranfield paradigm has significantly benefited the development of models, methods, and tools in information retrieval. Such test collections typically consist of a set of topics, a document collection, and a set of relevance assessments. Constructing these test collections requires effort from various perspectives, such as topic selection, document selection, relevance assessment, and relevance label aggregation. The work in this thesis provides a fundamental way of constructing and utilizing test collections in information retrieval in an effective, efficient, and reliable manner. To that end, we have focused on four aspects. We first study the document selection issue when building test collections. We devise an active sampling method for efficient large-scale evaluation [Li and Kanoulas, 2017]. Unlike past sampling-based approaches, we account for the fact that some systems are of higher quality than others, and we design the sampling distribution to over-sample documents from these systems. At the same time, the estimated evaluation measures are unbiased, and assessments can be used to evaluate new, novel systems without introducing any systematic error. A natural further step is determining when to stop the document selection and assessment procedure. This is an important but understudied problem in the construction of test collections. We consider both the gain of identifying relevant documents and the cost of assessing documents as the optimization goals. We handle the problem under the continuous active learning framework by jointly training a ranking model to rank documents and estimating the total number of relevant documents in the collection using a "greedy" sampling method [Li and Kanoulas, 2020]. The next stage of constructing a test collection is assessing relevance.
We study how to denoise relevance assessments by aggregating multiple crowd annotation sources to obtain high-quality relevance assessments. This helps to boost the quality of relevance assessments acquired in a crowdsourcing manner. We assume a Gaussian process prior on query-document pairs to model their correlation. The proposed model, CrowdGP, shows good performance in terms of inferring true relevance labels. In addition, it can predict relevance labels for new tasks that have no crowd annotations, which is a new functionality of CrowdGP. Ablation studies demonstrate that its effectiveness is attributed to the modelling of task correlation based on the auxiliary information of tasks and the prior relevance information of documents to queries. After a test collection is constructed, it can be used either to evaluate retrieval systems or to train a ranking model. We propose to use it to optimize the configuration of retrieval systems. We use a Bayesian optimization approach to model the effect of a δ-step in the configuration space on the effectiveness of the retrieval system, suggest different similarity functions (covariance functions) for continuous and categorical values, and examine their ability to effectively and efficiently guide the search in the configuration space [Li and Kanoulas, 2018]. Beyond the algorithmic and empirical contributions, work done as part of this thesis also contributed to the research community through the CLEF Technology Assisted Reviews in Empirical Medicine tracks in 2017, 2018, and 2019 [Kanoulas et al., 2017, 2018, 2019]. Awarded by: University of Amsterdam, Amsterdam, The Netherlands. Supervised by: Evangelos Kanoulas. Available at: https://dare.uva.nl/search?identifier=3438a2b6-9271-4f2c-add5-3c811cc48d42.
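The non-uniform sampling idea behind the document selection work can be illustrated with inverse-probability (Horvitz-Thompson) weighting: documents are drawn with unequal probabilities that over-sample documents favored by stronger systems, and each assessed relevant document contributes the inverse of its sampling probability, which keeps the estimate unbiased despite the biased sampling. A minimal sketch under assumed names and an assumed sampling distribution, not the thesis's exact method:

```python
import random

def estimate_num_relevant(docs, sample_probs, is_relevant, n_samples, rng):
    """Draw docs with replacement from sample_probs; each relevant draw
    contributes 1 / (n_samples * p(doc)) to the unbiased total estimate."""
    ids = list(docs)
    weights = [sample_probs[d] for d in ids]
    estimate = 0.0
    for _ in range(n_samples):
        d = rng.choices(ids, weights=weights, k=1)[0]
        if is_relevant(d):
            estimate += 1.0 / (n_samples * sample_probs[d])
    return estimate

rng = random.Random(0)
docs = range(1000)
relevant = set(range(50))  # ground truth, hidden from the estimator
# Over-sample low doc ids, as if strong systems ranked them highly.
raw = {d: 1.0 / (d + 1) for d in docs}
z = sum(raw.values())
probs = {d: w / z for d, w in raw.items()}
est = estimate_num_relevant(docs, probs, lambda d: d in relevant, 20000, rng)
```

Because most of the probability mass sits on the (here, mostly relevant) low ids, the sampler spends its assessment budget where relevance is likely, yet the inverse-probability weights keep the expected value of `est` equal to the true count of 50.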


2020 ◽  
Vol 54 (2) ◽  
pp. 1-9
Author(s):  
Iván Cantador ◽  
Max Chevalier ◽  
Massimo Melucci ◽  
Josiane Mothe

The Joint Conference of the Information Retrieval Communities in Europe (CIRCLE 2020) is the first joint conference of the French, Italian, Spanish, and Swiss information retrieval communities. Although these communities had conceived the CIRCLE conference as a meeting and networking venue, because of the COVID-19 pandemic they had to hold it as a fully virtual event. Nonetheless, the three-day conference gathered interesting studies and research work on a wide range of topics in information retrieval, such as topic and document modelling, query and ranking refinement, information retrieval in e-government, social media, recommender systems, information retrieval evaluation, indexing and annotation, user profiling and interaction, frameworks and systems, and semantic extraction.

